The Science and Information (SAI) Organization

IJACSA Volume 10 Issue 8

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

Paper 1: Applying CRISPR-Cas9 Off-Target Editing on DNA based Steganography

Abstract: Unlike cryptography, which encodes data into an incomprehensible format that is difficult to decrypt, steganography hides the trace of the data and therefore minimizes attention to the hidden data. To hide data, a carrier body must be utilized. In addition to traditional data carriers such as images, audio, and video, DNA has emerged as another promising data carrier due to its high capacity and complexity. Currently, DNA-based steganography can be practiced with either biological DNA substances or digital DNA sequences. In this article, we present a digital DNA steganography approach that utilizes CRISPR-Cas9 off-target editing such that the secret message is fragmented into multiple sgRNA homologous sequences in the genome. Retrieval of the hidden message mimics the Cas9 off-target detection process, which can be accelerated by computer software. The feasibility of this approach is analyzed, and practical concerns are discussed.
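
For illustration, the sketch below shows only the generic substitution step that any digital DNA steganography scheme needs: mapping message bits onto nucleotides and cutting the result into sgRNA-length fragments. It is a hedged toy, not the authors' CRISPR-Cas9 off-target scheme (which additionally disperses the fragments as homologous sequences in a genome); the 20-nt fragment length is merely a typical spacer size.

    # Toy bits-to-nucleotides substitution; NOT the authors' CRISPR method.
    BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
    BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

    def message_to_dna(message: str) -> str:
        bits = "".join(f"{b:08b}" for b in message.encode("utf-8"))
        return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

    def fragment(dna: str, length: int = 20):
        # 20 nt mirrors a typical sgRNA spacer length (illustrative choice only).
        return [dna[i:i + length] for i in range(0, len(dna), length)]

    def dna_to_message(dna: str) -> str:
        bits = "".join(BASE_TO_BITS[base] for base in dna)
        data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
        return data.decode("utf-8")

    fragments = fragment(message_to_dna("secret"))
    assert dna_to_message("".join(fragments)) == "secret"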

Author 1: Hong Zhou
Author 2: Xiaoli Huan

Keywords: DNA; steganography; CRISPR; Cas9; sgRNA; off-target; substitution

PDF

Paper 2: A Literature Review on Medicine Recommender Systems

Abstract: Medicine recommender systems can assist medical care providers with the selection of an appropriate medication for their patients. The advanced technologies available nowadays can help develop such recommendation systems, which can lead to more precise decisions. Many existing medicine recommendation systems are built on different algorithms. Thus, it is crucial to understand the state-of-the-art developments of these systems, their advantages and disadvantages, as well as the areas that require more research. In this paper, we conduct a literature review of the existing solutions for medicine recommender systems, describe and compare them based on various features, and present future research directions.

Author 1: Benjamin Stark
Author 2: Constanze Knahl
Author 3: Mert Aydin
Author 4: Karim Elish

Keywords: Medicine; recommendation systems; healthcare; systematic review

PDF

Paper 3: Study and Analysis of Delay Sensitive and Energy Efficient Routing Approach

Abstract: Wireless Sensor Networks (WSNs) comprise large numbers of miniature, inexpensive sensor nodes that sense data from their surroundings and forward it toward the base station (BS) in a multi-hop fashion through cluster head nodes (CHNs). The random selection of the CHN in WSNs is based entirely on the nodes' residual energy. Residual node energy and network sustainability are among the most active research issues in WSNs today. The low-energy adaptive clustering hierarchy (LEACH) routing protocol (RP) has many deficiencies caused by the rapid energy consumption of ordinary and cluster head nodes due to direct communication with the base station. The rapid draining of node energy produces large numbers of holes in the network, causing retransmission of data packets, route update costs, and end-to-end (E2E) delay. In this paper, the proposed Delay Sensitive and Energy Efficient (DSEE) routing protocol selects the CHN by considering the distance difference and the remaining energy of neighboring nodes. In this approach, data fusion technology (DFT) was employed to address the problem of data redundancy, although no specific data fusion algorithm is designed. Finally, simulation experiments demonstrated the superiority of the improved protocol, LEACH-DSEE, and we compare this improved routing protocol with existing protocols using metrics such as node death ratio, data packet delivery ratio, and node energy consumption.

Author 1: Babar Ali
Author 2: Tariq Mahmood
Author 3: Muhammad Ayzed Mirza
Author 4: Saleemullah Memon
Author 5: Muhammad Rashid
Author 6: Ekang Francis Ajebesone

Keywords: Multi-hop (MH); CHN; WSNs; BS; Data Fusion Technology (DFT); LEACH RP; DSEE

PDF

Paper 4: Analysis of Software Tools for Longitudinal Studies in Psychology

Abstract: Longitudinal studies allow causal hypotheses to be assessed directly: they make it possible to relate the order of impacts (e.g., life events, educational effects) to the consequences that then occur. Long-term data storage has specific requirements for software and for methods of data storage and conversion. The paper introduces criteria for evaluating software tools in the context of their application in longitudinal studies in psychology. The study analyzes popular psychological research tools based on the introduced criteria.

Author 1: Pavel Kolyasnikov
Author 2: Evgeny Nikulchev
Author 3: Vladimir Belov
Author 4: Anastasia Silaeva
Author 5: Alexander Kosenkov
Author 6: Artem Malykh
Author 7: Zalina Takhirova
Author 8: Sergey Malykh

Keywords: Software tools for psychological research; choice of the tools for psychological research; longitudinal studies; criteria for software evaluation

PDF

Paper 5: An Ontological Model for Generating Complete, Form-based, Business Web Applications

Abstract: This paper presents an ontological model for specifying and automatically generating complete business Web applications. First, a modular and expandable ontological model for specifying form-based, business Web applications is developed and presented. Next, the technology used for transforming the ontological specification to Java executable code is explained. Finally, the results of applying the proposed model for specifying and generating an order management application are presented. Results showed that the application of an ontological model in a generative programming approach increases the level of abstraction. This approach is especially suitable for development of software families, where similar features are reused in multiple products/applications.

Author 1: Daniel Strmecki
Author 2: Ivan Magdalenic

Keywords: Ontology; model; generative; automatic; programming; development

PDF

Paper 6: Sound user Interface with Touch Panel for Data and Information Expression and its Application to Meteorological Data Representation

Abstract: A Sound User Interface (SUI) with a touch panel for the representation of quantitative data and information, together with its application to meteorological data representation, is proposed. The proposed SUI is not merely an earcon. Through experiments, it is found that the proposed SUI, combined with visual perception, enables meteorologists to understand meteorological data intuitively and makes the data far more comprehensible than before. It is also useful for hearing “images”, in particular for blind persons.

Author 1: Kohei Arai

Keywords: Audible; meteorology; remote sensing satellite; musical scale; multi-layered data; SUI

PDF

Paper 7: Local Average of Nearest Neighbors: Univariate Time Series Imputation

Abstract: The imputation of time series is one of the most important tasks in the homogenization process; the quality and precision of this process directly influence the accuracy of time series predictions. This paper proposes two simple but quite powerful algorithms for the univariate time series imputation process, both based on the means of the nearest neighbors of the missing data. The first, Local Average of Nearest Neighbors (LANN), calculates the missing value as the average of the neighbor preceding and the neighbor following the missing value. The second, Local Average of Nearest Neighbors+ (LANN+), considers a threshold parameter that differentiates the calculation of the missing values according to the difference between the neighbors: for differences less than or equal to the threshold, the missing value is calculated through LANN, and for larger differences it is calculated as the average of the four closest neighbors, two before and two after the missing value. Imputation results on different time series are promising.
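
A minimal sketch of the two rules as described above, assuming isolated interior gaps marked with NaN (boundary gaps and consecutive gaps would need extra handling not covered by the abstract):

    import numpy as np

    def lann(series):
        # LANN: impute each NaN as the mean of its previous and next neighbors.
        s = np.asarray(series, dtype=float).copy()
        for i in np.flatnonzero(np.isnan(s)):
            s[i] = (s[i - 1] + s[i + 1]) / 2.0
        return s

    def lann_plus(series, threshold):
        # LANN+: if |next - previous| <= threshold use LANN, otherwise
        # average the four closest neighbors (two before, two after).
        s = np.asarray(series, dtype=float).copy()
        for i in np.flatnonzero(np.isnan(s)):
            if abs(s[i + 1] - s[i - 1]) <= threshold:
                s[i] = (s[i - 1] + s[i + 1]) / 2.0
            else:
                s[i] = (s[i - 2] + s[i - 1] + s[i + 1] + s[i + 2]) / 4.0
        return s

    print(lann([1.0, np.nan, 3.0]))             # [1. 2. 3.]
    print(lann_plus([1, 2, np.nan, 9, 10], 1))  # four-neighbor average: 5.5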

Author 1: Anibal Flores
Author 2: Hugo Tito
Author 3: Carlos Silva

Keywords: Univariate time series; imputation; LANN; LANN+

PDF

Paper 8: Mortality Prediction based on Imbalanced New Born and Perinatal Period Data

Abstract: This study uses data collected by the New York State Department of Health between 2012 and 2016. The experiment involves six supervised machine learning methods used to predict infant mortality: Support Vector Machine (SVM), Logistic Regression (LR), Gradient Boosting (GB), Random Forest (RF), Deep Learning (DL), and an Ensemble Model. The ensemble model concentrates on assigning different weights to different models per output class in order to obtain better predictive performance for infant mortality. Efforts were made to measure the performance and compare the classification accuracy of each model. Several criteria, including the area under the ROC curve, were considered when comparing the ensemble model (GB, RF, and DL) with the other five models (SVM, LR, DL, GB, and RF). In terms of these criteria, the ensemble model outperformed the others in predicting survival rates among infant patients given a balanced dataset (the areas under the ROC curve for minor, moderate, major, and extreme were 98%, 95%, 92%, and 97%, respectively, giving a total accuracy of 80.65%). For the imbalanced dataset, the areas under the ROC curve for minor, moderate, major, and extreme were 98%, 98%, 99%, and 99%, respectively, and the total accuracy increased to 97.44%. The experimental results showed that the ensemble model provided a better level of prediction for infant mortality than the other five models, based on the relative prediction accuracy of each model for each output class. Therefore, the ensemble model provides an extremely promising classifier for predicting infant mortality.

Author 1: Wafa M AlShwaish
Author 2: Maali Ibr. Alabdulhafith

Keywords: Machine learning; support vector machine; logistic regression; gradient boosting; random forest; deep learning; ensemble model

PDF

Paper 9: Framework for Digital Data Access Control from Internal Threat in the Public Sector

Abstract: Information management is one of the main challenges in the public sector because the information is often exposed to threat risks, particularly internal ones. Information theft or misuse, which is attributed to human factors, affects the reputation of public sector organizations due to the loss of public trust in the security and confidentiality of the information and personal data that are hacked by internal parties. Most studies focus on general problem solving related to internal threats instead of digital personal data protection. Therefore, this study identifies the main security control elements for personal data access in the public sector, including information security management, human resource security, operational security, access control, and compliance. A comprehensive framework is developed based on the identified security control elements and validated using a case study. Data are collected using interview, observation, and document analysis techniques. The findings contribute to the management of information system security through a systematic approach to controlling internal threats in the public sector. This framework can serve as a guideline for the public sector in managing internal threats to reduce security incidents involving unauthorized access to digital personal data.

Author 1: Haslidah Halim
Author 2: Maryati Mohd Yusof

Keywords: Information management; internal threats; control framework; risk; information security; personal data access

PDF

Paper 10: Improving Usable-Security of Web based Healthcare Management System through Fuzzy AHP

Abstract: Retracted: After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

Author 1: Nawaf Rasheed Alharbe

Keywords: Application security; application usability; usable-security; fuzzy analytic hierarchy process

PDF

Paper 11: Relationship Analysis on the Experience of Hospitalised Paediatric Cancer Patient in Malaysia using Text Analytics Approach

Abstract: The purpose of this study is to analyse the keyword relationships in paediatric cancer patients' experiences while hospitalised during treatment sessions. The study collects data through 40 days of observation of 21 paediatric cancer patients. A combination of text analytics visualisations, such as a network analysis map and a bubble graph, is applied to analyse the data. Through the analysis, keywords such as "cri" (crying), "lay" (laying), "sleep" (sleeping) and "watch" (watching) are found to be the most common activities experienced by paediatric cancer patients while hospitalised. Based on this observation, it can be argued that these activities represent the experience they have while in the hospital. The findings indicate that the hospitalised paediatric cancer patient's experience is limited by the treatment protocol, which requires them to be attached to an intravenous line. Therefore, most of their activities are bed-bound, such as sleeping, playing with their mobile phones, and watching videos. This study also offers a novel approach to transforming cancer patient data into useful knowledge about keyword relationships in paediatric cancer patients' experience during their hospital stay. The incorporation of these two text analytics techniques offers researchers insights into the interesting hidden knowledge in collections of unstructured data, and this information can be used by medical providers, psychologists, game designers and others to develop applications that can ease patients' difficulties and pain while warded in the hospital.

Author 1: Zuraini Zainol
Author 2: Puteri N.E. Nohuddin
Author 3: Nadhirah Rasid
Author 4: Hamidah Alias
Author 5: A. Imran Nordin

Keywords: Patient experience; paediatric; keyword relationship analysis; bubble graph; text network analysis

PDF

Paper 12: Prediction of Potential-Diabetic Obese-Patients using Machine Learning Techniques

Abstract: Diabetes is a chronic disease. Improper blood glucose control may cause serious complications in diabetic patients, such as heart and kidney disease, strokes, and blindness. Obesity is considered a major risk factor for type 2 diabetes. Machine learning has been applied to many aspects of medical health. In this paper, two machine learning techniques, Support Vector Machine (SVM) and Artificial Neural Network (ANN), were applied to predict diabetes mellitus. The proposed techniques were applied to a real dataset from Al-Kasr Al-Aini Hospital in Giza, Egypt. The models were examined using four-fold cross-validation. The results were obtained in two phases: in the first phase, forecasting patients with fatty liver disease using a Support Vector Machine reached the highest accuracy of 95% when applied to 8 attributes. Then, an Artificial Neural Network was applied to the output of phase 1 together with 8 different attributes to classify patients as non-diabetic, pre-diabetic, or diabetic, with an accuracy of 86.6%.

Author 1: Raghda Essam Ali
Author 2: Hatem El-Kadi
Author 3: Soha Safwat Labib
Author 4: Yasmine Ibrahim Saad

Keywords: Obesity; diabetes; nonalcoholic fatty liver disease; artificial neural network; support vector machine

PDF

Paper 13: Detection of Chronic Kidney Disease using Machine Learning Algorithms with Least Number of Predictors

Abstract: Chronic kidney disease (CKD) is one of the most critical health problems due to its increasing prevalence. In this paper, we aim to test the ability of machine learning algorithms to predict chronic kidney disease using the smallest subset of features. Several statistical tests, such as the ANOVA test, Pearson's correlation, and Cramer's V test, were performed to remove redundant features. Logistic regression, support vector machines, random forest, and gradient boosting algorithms were trained and tested using 10-fold cross-validation. We achieve a score of 99.1% according to the F1-measure with the gradient boosting classifier. We also found that hemoglobin has high importance for both random forest and gradient boosting in detecting CKD. Finally, our results are among the highest compared to previous studies, while using the fewest features reached so far. Hence, we can detect CKD for only $26.65 by performing three simple tests.
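
A hedged sketch of the evaluation protocol described above, using synthetic stand-in data (the real study uses CKD records and a reduced three-feature subset, neither of which is reproduced here):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic placeholder for three predictors (e.g., hemoglobin + two tests).
    X, y = make_classification(n_samples=400, n_features=3, n_informative=3,
                               n_redundant=0, random_state=0)
    clf = GradientBoostingClassifier(random_state=0)

    # 10-fold cross-validation scored with the F1-measure, as in the paper.
    scores = cross_val_score(clf, X, y, cv=10, scoring="f1")
    print(f"10-fold F1: {scores.mean():.3f} +/- {scores.std():.3f}")

    # Feature importances indicate which predictor dominates the reduced model
    # (hemoglobin, in the paper's findings).
    clf.fit(X, y)
    print(clf.feature_importances_)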

Author 1: Marwa Almasoud
Author 2: Tomas E Ward

Keywords: Chronic Kidney Disease (CKD); Random Forest (RF); Gradient Boosting (GB); Logistic Regression (LR); Support Vector Machines (SVM); Machine Learning (ML); prediction

PDF

Paper 14: Hadoop MapReduce for Parallel Genetic Algorithm to Solve Traveling Salesman Problem

Abstract: Achieving an optimal solution for NP-complete problems is a big challenge nowadays. This paper deals with the Traveling Salesman Problem (TSP), one of the most important combinatorial optimization problems in this class. We investigated the parallel genetic algorithm to solve TSP and propose a general platform based on the Hadoop MapReduce approach for implementing parallel genetic algorithms. Two versions of parallel genetic algorithms (PGAs) are implemented: a Parallel Genetic Algorithm with Islands Model (IPGA) and a new model named the Elite Parallel Genetic Algorithm using MapReduce (EPGA), which improves the population diversity of the IPGA. The two PGAs and the sequential version of the algorithm (SGA) were compared in terms of solution quality, execution time, speedup, and Hadoop overhead. The experimental study revealed that both PGA models outperform the SGA in execution time and solution quality as the problem size increases. The computational results show that the EPGA model outperforms the IPGA in solution quality with almost the same running time for all the considered datasets and clusters. Genetic algorithms on a MapReduce platform provide better performance for solving large-scale problems.
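
To make the island idea concrete, here is a minimal serial sketch under illustrative assumptions (random cities, toy GA operators, arbitrary parameters); the paper's contribution is running each island as a MapReduce task, which this toy only simulates in a single process:

    import random

    random.seed(0)
    CITIES = [(random.random(), random.random()) for _ in range(20)]

    def length(tour):
        # Total Euclidean length of a closed tour over CITIES.
        return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                    (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
                   for a, b in zip(tour, tour[1:] + tour[:1]))

    def evolve(pop, generations=50):
        for _ in range(generations):
            pop.sort(key=length)
            child = pop[random.randrange(5)][:]        # copy a good parent
            i, j = sorted(random.sample(range(len(child)), 2))
            child[i:j] = reversed(child[i:j])          # 2-opt style mutation
            pop[-1] = child                            # replace the worst
        return pop

    islands = [[random.sample(range(20), 20) for _ in range(30)] for _ in range(4)]
    for epoch in range(5):                             # migration epochs
        islands = [evolve(pop) for pop in islands]     # one "Map" per island
        best = min((min(pop, key=length) for pop in islands), key=length)
        for pop in islands:                            # "Reduce": migrate elite
            pop[-1] = best[:]
    print(f"best tour length: {length(best):.3f}")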

Author 1: Entesar Alanzi
Author 2: Hachemi Bennaceur

Keywords: Genetic algorithms; parallel genetic algorithms; Hadoop MapReduce; island model; traveling salesman problem

PDF

Paper 15: Twitter Sentiment Analysis in Under-Resourced Languages using Byte-Level Recurrent Neural Model

Abstract: Sentiment analysis in a non-English language can be more challenging than in English because of the scarcity of publicly available resources with which to build a highly accurate prediction model. To alleviate this under-resourced problem, this article leverages a byte-level recurrent neural model to generate text representations for Twitter sentiment analysis in the Indonesian language. As the main part of the proposed model's training is unsupervised and does not require much labeled data, the approach can scale by using the huge amount of unlabeled data easily gathered on the Internet, without heavy dependence on human-generated resources. This paper also introduces an Indonesian dataset for general sentiment analysis. It consists of 10,806 tweets selected from a total of 454,559 gathered tweets, taken directly from Twitter using the Twitter API. The 10,806 tweets are classified into three categories: positive, negative, and neutral. This Indonesian dataset could help the development of Indonesian sentiment analysis, especially general sentiment analysis, and encourage others to publish similar datasets in the future.

Author 1: Ridi Ferdiana
Author 2: Wiliam Fajar
Author 3: Desi Dwi Purwanti
Author 4: Artmita Sekar Tri Ayu
Author 5: Fahim Jatmiko

Keywords: Sentiment analysis; under-resourced problem; Indonesian dataset; twitter

PDF

Paper 16: Predicting Return Donor and Analyzing Blood Donation Time Series using Data Mining Techniques

Abstract: Since blood centers in most countries typically rely on volunteer donors to meet the hospitals' needs, donor retention is critical for blood banks. Identifying regular donors is essential for the advance planning of blood banks to guarantee a stable blood supply. In this research, donor data was collected from a Saudi blood bank from 2017 to 2018. Machine learning algorithms such as logistic regression (LR), random forest (RF), and support vector classifier (SVC) were applied to develop and evaluate models for classifying blood donors as return or non-return donors. The natural imbalance of the donor distribution required extra attention to produce classifiers with good performance; thus, oversampling with SMOTE was tested. Experiments with the different classifiers showed very similar performance results. In addition to the donor return classification, a time series analysis of the donor dataset was conducted to find any seasonal variations that could be captured and delivered to blood banks for better planning and decision making. After aggregating the donation count by month, results showed that the number of donations each year was stable except for two drops in June and September, which in the two observed years coincided with two religious periods: fasting and performing Hajj.
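
A brief sketch of the SMOTE balancing step on synthetic placeholder data (the Saudi blood-bank records are not public), assuming the third-party packages scikit-learn and imbalanced-learn:

    from collections import Counter
    from imblearn.over_sampling import SMOTE
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # 90/10 imbalance mimics a dominant non-return class (illustrative only).
    X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                        random_state=0)
    print("before:", Counter(y_train))
    X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
    print("after: ", Counter(y_res))          # classes now balanced

    clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
    print("test accuracy:", clf.score(X_test, y_test))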

Author 1: Anfal Saad Alkahtani
Author 2: Musfira Jilani

Keywords: Classification; machine learning; time series analysis; blood donation

PDF

Paper 17: Modified Farmland Fertility Optimization Algorithm for Optimal Design of a Grid-connected Hybrid Renewable Energy System with Fuel Cell Storage: Case Study of Ataka, Egypt

Abstract: In this paper, a Modified Farmland Fertility Optimization Algorithm (MFFA) is presented for the optimal sizing of a grid-connected hybrid system including photovoltaics (PV), wind turbines, and fuel cells (FC). The system is optimally designed to provide clean, reliable, and affordable energy through the adoption of hybrid power systems, which is very important for countries looking to achieve their sustainable development goals. MFFA is proposed in order to reduce the processing implementation time. The optimization method relies on the high reliability of the hybrid power supply, small fluctuations in the energy injected into the grid, and high utilization of the complementary characteristics of wind and solar. Moreover, MFFA is applied to minimize the cost of energy while satisfying the operational constraints. A real case study of a hybrid power system for the Ataka region in Egypt is introduced to evaluate the performance of the proposed optimization method. Furthermore, a comprehensive comparison between the proposed MFFA optimization technique and the conventional Farmland Fertility Algorithm (FFA) is presented to validate the proposed MFFA.

Author 1: Ahmed A. Zaki Diab
Author 2: Sultan I. EL-Ajmi
Author 3: Hamdy M. Sultan
Author 4: Yahia B. Hassan

Keywords: Photovoltaic; wind; fuel cell; renewable energy; hybrid energy system; modified farmland fertility optimization

PDF

Paper 18: Smart Rubric-based Systematic Model for Evaluating and Prioritizing Academic Practices to Enhance the Education Outcomes

Abstract: Recently, the impact of the free-market economy, globalization, and the knowledge economy has become a challenging focal point for higher-education institutions, resulting in radical change. It has therefore become mandatory for academic programs to prepare highly qualified graduates to meet the new challenges through the implementation of well-defined academic standards. For this reason, the National Center for Academic Accreditation & Evaluation (NCAAA) in the Kingdom of Saudi Arabia (KSA) defined a set of standards to ensure that the quality of education in KSA is equivalent to the highest international standards. The NCAAA standards contain sound criteria to guide universities in evaluating their quality performance for improvement and in obtaining NCAAA accreditation. However, implementing the NCAAA standards without supporting systems has been found to be a very complex task due to the large number of standard criteria, an evaluation process driven by personal opinion, a lack of quality-evaluation expertise, and manual calculation. This, in turn, leads to inaccurate evaluations, inaccurate improvement plans, and difficulty in obtaining NCAAA accreditation. Therefore, this paper introduces a systematic model containing smart rubrics designed around the NCAAA quality performance evaluation elements, supported by algorithms and mathematical models, to reduce reliance on personal opinion and to provide accurate auto-evaluation and auto-prioritization of action plans for the NCAAA standards. The proposed model supports academics and administrators by facilitating their NCAAA quality tasks with ease, authentic self-assessment, accurate action plans, and simplified accreditation tasks. Finally, the implementation of the model proved to yield very efficient and effective results in supporting KSA education institutions in accreditation tasks, which will enhance the quality of education and help obtain NCAAA accreditation.

Author 1: Mohammed Al-Shargabi

Keywords: Systematic Model; Smart Rubric; NCAAA Good Practices; quality of academic programs and universities; improvement action plan

PDF

Paper 19: Modeling of the Vegetable Oil Blends Composition

Abstract: The article presents computer modeling of vegetable oil blends for therapeutic, prophylactic, and healthy nutrition. To solve this problem, models of vegetable oil blends were developed on the basis of biomedical requirements, taking into account the required chemical composition, the mass fractions of the main components of the product, and structural correlations of the biological value of blends according to the criterion of fatty acid compliance (omega-6 to omega-3). The problem was solved by the method of complete enumeration (brute force), which finds a solution by exhaustively trying all possible options. An automated research system was developed and implemented to model the composition of vegetable oil mixtures in accordance with a given target ratio of omega-6 to omega-3 polyunsaturated fatty acids (PUFAs). The automated system allowed us to model prescription formulations of oil blends, taking into account the constraints set and a given goal function. In this study, three alternative blend compositions were obtained, with an omega-6:omega-3 ratio of 5:1 in the first and second variants and 10:1 in the third, which makes them usable for healthy and therapeutic nutrition.
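
The enumeration idea can be sketched as follows; the per-oil omega-6/omega-3 contents (g per 100 g), the oil set, and the 5% fraction step are illustrative assumptions, not the study's data:

    import itertools

    OILS = {"sunflower": (62.0, 0.2), "linseed": (14.0, 53.0), "olive": (9.0, 0.8)}
    TARGET, TOL = 5.0, 0.1                     # desired omega-6:omega-3 ratio

    names = list(OILS)
    steps = [s / 100 for s in range(0, 101, 5)]
    # Brute force: enumerate all fraction combinations in 5% increments.
    for mix in itertools.product(steps, repeat=len(names) - 1):
        if sum(mix) > 1.0 + 1e-9:
            continue
        fractions = (*mix, 1.0 - sum(mix))     # fractions sum to 1
        n6 = sum(f * OILS[n][0] for f, n in zip(fractions, names))
        n3 = sum(f * OILS[n][1] for f, n in zip(fractions, names))
        if n3 > 0 and abs(n6 / n3 - TARGET) <= TOL:
            print(dict(zip(names, fractions)), round(n6 / n3, 2))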

Author 1: Olga S Voskanyan
Author 2: Igor A. Nikitin
Author 3: Marina A. Nikitina
Author 4: Maria V. Klokonos
Author 5: Daria A. Guseva
Author 6: Igor V. Zavalishin

Keywords: Modeling; brute force; vegetable oil blends; omega-6; omega-3

PDF

Paper 20: A Machine Learning Approach towards Detecting Dementia based on its Modifiable Risk Factors

Abstract: Dementia is considered one of the greatest global health and social care challenges of the 21st century. Fortunately, dementia can be delayed or possibly prevented by changes in lifestyle, as dictated by known modifiable risk factors. These risk factors include low education, hypertension, obesity, hearing loss, depression, diabetes, physical inactivity, smoking, and social isolation. Other risk factors are non-modifiable and include aging and genetics. The main goal of this study is to demonstrate how machine learning methods can help predict dementia based on an individual's modifiable risk factor profile. We use publicly available datasets to train algorithms to predict a participant's cognitive state diagnosis as cognitively normal, mild cognitive impairment, or dementia. Several approaches were implemented using data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) longitudinal study. The best classification results were obtained using both the Lancet and the LIBRA risk factor lists via longitudinal datasets, which outperformed cross-sectional baseline datasets. Moreover, using only the data of the most recent visits provided even better results than using the complete longitudinal set. A binary classification (dementia vs. non-dementia) yielded approximately 92% accuracy, while the full multi-class prediction yielded 77% accuracy using logistic regression, followed by random forest with 92% and 70%, respectively. The results demonstrate the utility of machine learning in predicting cognitive impairment based on modifiable risk factors and may encourage interventions to reduce the prevalence or severity of the condition in large populations.

Author 1: Reem Bin-Hezam
Author 2: Tomas E. Ward

Keywords: Machine learning; classification; data mining; data preparation; dementia; modifiable risk factors

PDF

Paper 21: Most Valuable Player Algorithm for Solving Minimum Vertex Cover Problem

Abstract: The Minimum Vertex Cover Problem (MVCP) is a combinatorial optimization problem used to formulate multiple real-life applications. Owing to this fact, abundant research has been undertaken to discover valuable MVCP solutions. The Most Valuable Player Algorithm (MVPA) is a recently developed metaheuristic algorithm that draws its inspiration from team-based sports. In this paper, the MVPA_MVCP algorithm is introduced as an adaptation of the MVPA to the MVCP. The MVPA_MVCP algorithm is implemented in the Java programming language and tested on a Microsoft Azure virtual machine. Its performance is first evaluated analytically in terms of run-time complexity: the average-case run time is Θ(I(|V|+|E|)), where I is the size of the initial population, |V| is the number of vertices, and |E| is the number of edges of the tested graph. The algorithm is then evaluated experimentally in terms of the quality of the obtained solutions and the run time. The experimental results over 15 instances of the DIMACS benchmark revealed that the MVPA_MVCP algorithm could, in the best case, reach the best known optimal solution for seven data instances. The experimental findings also showed a direct relation between the number of edges of the graph under test and the run time.

Author 1: Hebatullah Khattab
Author 2: Ahmad Sharieh
Author 3: Basel A. Mahafzah

Keywords: Most valuable player algorithm; minimum vertex cover problem; metaheuristic algorithms; optimization problem

PDF

Paper 22: Non-linear Dimensionality Reduction-based Intrusion Detection using Deep Autoencoder

Abstract: Intrusion detection has become a core part of any computer network due to the increasing amount of digital content available. In parallel, data breaches and malware attacks have also grown in large numbers, which makes the role of intrusion detection even more essential. Even though many existing techniques are successfully used for detecting intruders, new variants of malware and attacks are released every day. To counter these new types of attacks, intrusion detection must be designed with state-of-the-art techniques such as deep learning. At present, deep learning techniques play a strong role in natural language processing, computer vision, and speech processing. This paper reviews the role of deep learning techniques in intrusion detection and proposes an efficient deep autoencoder (AE) based intrusion detection technique. The intrusion detection is implemented in two stages, with a binary classifier and a multiclass classification algorithm (a dense neural network). The performance of the proposed approach is presented and compared with parallel methods used for intrusion detection. The reconstruction error of the AE model is compared with PCA, and the performance of both the anomaly detection and the multiclass classification is analyzed using metrics such as accuracy and false alarm rate. The compressed representation of the AE model helps to lessen the false alarm rate of both anomaly detection and attack classification, using an SVM and a dense NN model, respectively.
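
A minimal Keras sketch of the reconstruction-error idea, assuming synthetic Gaussian "traffic" in place of a real flow-feature dataset and an arbitrary 99th-percentile threshold; the paper's two-stage classifier pipeline is not reproduced here:

    import numpy as np
    from tensorflow import keras

    rng = np.random.default_rng(0)
    normal = rng.normal(0.0, 1.0, size=(2000, 20)).astype("float32")

    # Autoencoder trained on normal records only; the bottleneck is the
    # compressed representation the abstract refers to.
    inputs = keras.Input(shape=(20,))
    code = keras.layers.Dense(6, activation="relu")(inputs)
    outputs = keras.layers.Dense(20)(code)
    ae = keras.Model(inputs, outputs)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(normal, normal, epochs=10, batch_size=64, verbose=0)

    def reconstruction_error(x):
        return np.mean((x - ae.predict(x, verbose=0)) ** 2, axis=1)

    # Records reconstructing worse than 99% of normal traffic are anomalies.
    threshold = np.percentile(reconstruction_error(normal), 99)
    attack = rng.normal(3.0, 1.0, size=(10, 20)).astype("float32")
    print((reconstruction_error(attack) > threshold).mean())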

Author 1: S Sreenivasa Chakravarthi
Author 2: R. Jagadeesh Kannan

Keywords: Autoencoder; deep learning; principal component analysis; dense neural network; false alarm rate

PDF

Paper 23: Towards a Classification View of Personalized e-Learning with Social Collaboration Support

Abstract: With the emergence of Web 2.0 technologies, interaction and collaboration support in the educational field has been augmented. These types of support encourage researchers to enrich the e-learning environment with personalized characteristics by utilizing the outputs of collaboration support. Achieving this requires understanding the existing environments and highlighting their salient features. As a result, there have been many attempts to state the current status of personalized e-learning environments from different perspectives. However, these attempts targeted a specific view and direction, and thus failed to provide a general view of the adoption of personalized e-learning environments supported by social collaboration tools. This paper provides a classified view of the current status of personalized e-learning environments that incorporate social collaboration tools to provide the personalization feature. The classification adopts four different views: subject, purpose, method, and tool. The findings show that the utilization of user-generated content and social interaction functionalities for personalization is limited and not fully exploited. In short, the potential of providing personalized learning with social interaction and collaboration features remains not fully explored.

Author 1: Amal Al-Abri
Author 2: Yassine Jamoussi
Author 3: Zuhoor AlKhanjari
Author 4: Naoufel Kraiem

Keywords: Classification review; collaboration; personalized e-learning; social media

PDF

Paper 24: An Efficient Design of RPL Objective Function for Routing in Internet of Things using Fuzzy Logic

Abstract: The nature of low-power and lossy networks (LLNs) requires efficient protocols capable of handling resource constraints. LLNs connect different types of devices with constrained resources such as energy, memory, and battery life. Using standard routing protocols such as Open Shortest Path First (OSPF) is inefficient for LLNs because of these constraints, so the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) was developed to accommodate them. RPL is a distance vector protocol that uses an objective function (OF) to define the best tree path. However, choosing a single metric for the OF has been found unable to accommodate application requirements. In this paper, an enhanced OF named OFRRT-FUZZY is proposed, relying on several metrics combined using fuzzy logic. In order to overcome the limitations of using a single metric, the proposed OFRRT-FUZZY considers node and link metrics, namely Received Signal Strength Indicator (RSSI), Remaining Energy (RE), and Throughput (TH). The proposed OFRRT-FUZZY is implemented under the Cooja simulator, and the results are compared with OF0 and MHROF in order to find which OF provides more satisfactory results. Simulation results show that OFRRT-FUZZY outperformed OF0 and MHROF.

Author 1: Adeeb Saaidah
Author 2: Omar Almomani
Author 3: Laila Al-Qaisi
Author 4: Mohammed Kamel MADI

Keywords: RPL; Objective Function; IOT; fuzzy logic; LLNs

PDF

Paper 25: Modelling the Enterprise Architecture Implementation in the Public Sector using HOT-Fit Framework

Abstract: Enterprise architecture (EA) is very important to how the public sector's IT systems are developed, organized, scaled up, maintained, and strategized. Despite an extensive literature, research on enterprise architecture in the public sector is still at an early stage, and the factors that explain the acceptance and the implementation level of EA services remain unclear. Therefore, this study examines the implementation of EA by measuring the factors that influence EA in the Malaysian public sector. Grounded in the Human-Organization-Technology (HOT-Fit) model, this study proposes a conceptual framework that decomposes human, organizational, and technological characteristics as the main categories for assessing the identified factors. A total of 92 respondents in the Malaysian public sector participated in this study. Structural equation modelling with partial least squares is the main statistical technique used. The study revealed that human characteristics, such as knowledge of and innovativeness toward EA, and technological characteristics, such as the relative advantage and complexity of EA, influence its implementation in the Malaysian public sector. Based on the findings, the theoretical and practical implications of the study, as well as its limitations and future work, are also discussed.

Author 1: Hasimi Sallehudin
Author 2: Nurhizam Safie Mohd Satar
Author 3: Nur Azaliah Abu Bakar
Author 4: Rogis Baker
Author 5: Farashazillah Yahya
Author 6: Ahmad Firdause Md Fadzil

Keywords: Enterprise architecture; public sector; HOT-Fit

PDF

Paper 26: Robot Arm Analysis based on Master Device Pneumatic Actuators

Abstract: Advances in technology have expanded the use of soft actuators in various fields, especially in robotics, rehabilitation, and medicine. Soft actuator development provides many advantages, primarily simple structures, a high power-to-weight ratio, good compliance, high water resistance, and low production cost. However, many actuators suffer from the problem of being oversized, which can potentially hurt users, as they are often made of hard materials such as steel and rigid plastics. A current drawback of soft actuator implementation in robotic arms is their excessive weight, which makes these robots difficult for patients to set up by themselves and, in turn, less applicable to home rehabilitation training programs. Hence, there is a need to design a soft actuator that is safe and more flexible, especially for applications involving patients in rehabilitation or in-home rehabilitation programs. In this paper, we propose the design of a robot arm using a master-device pneumatic actuator and analyse its implementation for the above purpose. The system comprises the master and slave arms, two accelerometers and two potentiometers providing references for attitude control, six quasi-servo valves, and an SH-7125 microcontroller. In our proposed design, the actuator's motion is generated by the elastic deformation of the extension and contraction of the cylinder structure when high pneumatic pressure is supplied to the chamber. The control performance of the device is investigated using simulation, whereby the model of the robot arm and the quasi-servo valve with the embedded controller is implemented and analysed. The analysed results of the model are found to agree well with the desired values.

Author 1: Mohd Aliff
Author 2: Nor Samsiah Sani

Keywords: Trajectory control; master-slave control; robot arm; pneumatic cylinder

PDF

Paper 27: Attractiveness Analysis of Quiz Games

Abstract: Quiz games are played on platforms such as television game shows, radio game shows and, recently, mobile apps. In this study, HQ Trivia and SongPop 2 were chosen as benchmarks. Data for each game were collected for the analysis, and the game refinement measure was employed for the assessment, focusing on the different elimination tournament system of each sample. The results show that a game such as HQ Trivia, which applies a single-round elimination tournament, has a lower game refinement value, meaning the game is highly skill-based. Meanwhile, games that apply a round-robin system, such as SongPop 2, have a higher game refinement value, meaning the game is highly stochastic. SongPop 2 and HQ Trivia both have more than 5 million downloads in the Google Play Store. It is concluded that quiz games that apply different kinds of tournament styles have different game refinement values.

Author 1: Tara Khairiyah Md Zali
Author 2: Nor Samsiah Sani
Author 3: Abdul Hadi Abd Rahman
Author 4: Mohd Aliff

Keywords: Quiz games; game refinement theory; attractiveness

PDF

Paper 28: A Survey: Agent-based Software Technology Under the Eyes of Cyber Security, Security Controls, Attacks and Challenges

Abstract: Recently, agent-based software technology has received wide attention by the research community due to its valuable benefits, such as reducing the load on networks and providing an efficient solution for the transmission challenge problem. However, the major concern in building agent-based systems is related to the security of agents. In this paper, we explore the techniques used to build controls that guarantee both the protection of agents against malicious destination machines and the protection of destination machines against malicious agents. In addition, statistical-based analyses are employed to evaluate the level of maturity of the protection techniques to preserve the protection goals (the code and data, state, and itinerary of the agent), with and without the threat of attacks. Challenges regarding the security of agents are presented and highlighted by seven research questions related to satisfying cyber security requirements, protecting the visiting agent and the visited host machine from each other, providing robustness against advanced attacks that target protection goals, quantifying the security in agent-based systems, and providing features of self-protection and self-communication to the agent itself.

Author 1: Bandar Alluhaybi
Author 2: Mohamad Shady Alrahhal
Author 3: Ahmed Alzhrani
Author 4: Vijey Thayananthan

Keywords: Agent; attack; cyber; security; requirement; maturity; protection goals

PDF

Paper 29: Indoor Positioning System using Regression-based Fingerprint Method

Abstract: Indoor positioning systems have the opportunity to be used on different business platforms. Based on past research, an optimized localization method for Bluetooth Low Energy (BLE) that predicts the position of a person or object with high accuracy has not yet been found. Most recent research that addresses the inconsistency of Received Signal Strength (RSS) values uses the fingerprint method. This paper proposes deep regression machine learning using a convolutional neural network (CNN) with a regression-based fingerprint model to estimate the real position. The model uses the 5 nearest fingerprints as reference RSS values, with their location (x or y) labels as inputs, to produce a single position value (x or y); the process is then repeated to produce the second coordinate, forming the complete estimated position. To evaluate the proposed model, training data and validation data are compared using the Root Mean Squared Error (RMSE). The comparisons are against a Multilayer Perceptron (MLP) model and the weighted sum method as benchmarks. The experiments give the mean distance and 90th-percentile distance errors for the proposed model and the benchmarks. The CNN model achieved errors lower than 330 cm at the 90th percentile, with a mean distance lower than 185 cm. The weighted sum model achieved errors lower than 360 cm at the 90th percentile, with a mean distance higher than 185 cm, and the MLP lies between them. The results demonstrate that the proposed method outperformed the benchmark methods.
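
As a rough illustration of the fingerprint idea, here is a sketch of the weighted-sum style benchmark (not the paper's CNN model), with a synthetic fingerprint database standing in for a surveyed site:

    import numpy as np

    rng = np.random.default_rng(0)
    db_rss = rng.uniform(-90, -40, size=(100, 4))  # 100 fingerprints, 4 beacons
    db_xy = rng.uniform(0, 20, size=(100, 2))      # their surveyed positions

    def estimate_position(live_rss, k=5):
        # Find the k stored fingerprints closest to the live RSS reading and
        # average their known coordinates, weighted by inverse RSS distance.
        d = np.linalg.norm(db_rss - live_rss, axis=1)
        idx = np.argsort(d)[:k]                    # k nearest fingerprints
        w = 1.0 / (d[idx] + 1e-6)                  # inverse-distance weights
        return (db_xy[idx] * w[:, None]).sum(axis=0) / w.sum()

    # A noisy reading near fingerprint 7 should land close to db_xy[7].
    print(estimate_position(db_rss[7] + rng.normal(0, 2, 4)))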

Author 1: Reginald Putra Ghozali
Author 2: Gede Putra Kusuma

Keywords: Indoor positioning system; fingerprinting; regression machine learning; convolutional neural network

PDF

Paper 30: A New Security Model for Web Browser Local Storage

Abstract: In recent years, the web browser has taken over many roles of the traditional operating system, such as acting as a host platform for web applications. Web browser storage, where web applications can save data locally, was one of the new functionalities added in HTML5. However, web functionality has increased significantly since HTML5 was introduced, and as it increased, so did the threats facing web users; one of the most prevalent is the violation of users' privacy. This study examines the existing security issues related to the use of web browser storage and proposes a new model to secure the data saved in the browser's storage. The model was designed and implemented as a web browser extension, experimentally demonstrated, and evaluated.

Author 1: Thamer Al-Rousan
Author 2: Bassam Al-Shargabi
Author 3: Hasan Abualese

Keywords: HTML5; security; local storage

PDF

Paper 31: A Novel Approach for Dimensionality Reduction and Classification of Hyperspectral Images based on Normalized Synergy

Abstract: During the last decade, hyperspectral images have attracted increasing interest from researchers worldwide. They provide more detailed information about an observed area and allow accurate target detection and precise discrimination of objects compared to classical RGB and multispectral images. Despite the great potential of hyperspectral technology, the analysis and exploitation of such large data volumes remain a challenging task. The existence of irrelevant, redundant, and noisy images decreases classification accuracy; as a result, dimensionality reduction is a mandatory step in order to select a minimal and effective subset of images. In this paper, a new filter approach, normalized mutual synergy (NMS), is proposed to detect relevant bands that are complementary in class prediction and perform better than the original hyperspectral cube data. The algorithm consists of two steps: image selection through normalized synergy information, and pixel classification. The proposed approach measures the discriminative power of the selected bands based on a combination of their maximal normalized synergistic information, minimum redundancy, and maximal mutual information with the ground truth. A comparative study using the support vector machine (SVM) and k-nearest neighbor (KNN) classifiers is conducted to evaluate the proposed approach against state-of-the-art band selection methods. Experimental results on three benchmark hyperspectral images provided by NASA, "AVIRIS Indian Pines", "Salinas", and "Pavia University", demonstrated the robustness, effectiveness, and discriminative power of the proposed approach over the approaches in the literature.

Author 1: Asma Elmaizi
Author 2: Hasna Nhaila
Author 3: Elkebir Sarhrouni
Author 4: Ahmed Hammouch
Author 5: Nacir Chafik

Keywords: Hyperspectral images; target detection; pixel classification; dimensionality reduction; band selection; information theory; mutual information; normalized synergy

PDF

Paper 32: Muscle Electro Stimulator for the Reduction of Stretch Marks

Abstract: Stretch marks appear when the skin stretches abruptly in a short time; this change causes the skin to deform and widen, forming a roughness, and this roughness is what is known as stretch marks. This work arose from the need to reduce the skin deformation that many people suffer, mainly due to being overweight, pregnancy, or rapid growth during adolescence. In this paper, a device is designed with the task of reducing skin roughness, using electrostimulation as the primary technique for applying electrical impulses. The device can limit and control the electrical signals produced in order to control the movement of muscle fibers and skin. The results obtained show a remarkable reduction of stretch marks in one person after the application of electrical stimuli with the device. The research shows promising results.

Author 1: Paima-Sahuma Jaime
Author 2: Duran-Berrio Linder
Author 3: Palacios-Cusiyunca Chrisstopher
Author 4: Roman-Gonzalez Avid

Keywords: Electro stimulator; stretch marks; muscle fibers

PDF

Paper 33: Robust Video Content Authentication using Video Binary Pattern and Extreme Learning Machine

Abstract: Recently, due to the easy accessibility of smartphones, digital cameras, and other video recording devices, a radical enhancement has been experienced in the field of digital video technology. Digital videos have become very important in courts of law and in the media (print, electronic, and social). On the other hand, the widespread availability of Video Editing Tools (VETs) has made video tampering very easy. Detection of this tampering is very important because it may affect the understanding and interpretation of video content. Existing techniques for detecting forgery in video content can be broadly categorized as active or passive. In this research, a passive technique for video tampering detection in the spatial domain is proposed. The technique comprises two phases: 1) extraction of features with the proposed Video Binary Pattern (VBP) descriptor, and 2) Extreme Learning Machine (ELM) based classification. Experimental results on different datasets reveal that the proposed technique achieved an accuracy of 98.47%.

Author 1: Mubbashar Sadddique
Author 2: Khurshid Asghar
Author 3: Tariq Mehmood
Author 4: Muhammad Hussain
Author 5: Zulfiqar Habib

Keywords: Video forgery; spatial video forgery; passive forgery detection; Video Binary Pattern (VBP); feature extraction

PDF

Paper 34: Automated Greenhouses for the Reduction of the Cost of the Family Basket in the District of Villa El Salvador-Perú

Abstract: Today, the cost of the family basket is gradually increasing, not only globally but also in our country. This increase includes the demand for fresh vegetables that allow people to improve their quality of life, and it reflects the search for a healthier and more natural diet with the help of existing technologies. In response, we propose the implementation of an automated greenhouse with sensors and actuators that control a microclimate for the correct and efficient development of vegetables. With this proposal, we obtain a saving of 50%, equivalent to 8.00 dollars, on planting lettuce compared with market prices, thus achieving a reduction in the cost of the family basket.

Author 1: Pedro Romero Huaroto
Author 2: Abraham Casanova Robles
Author 3: Nicolh Antony Ciriaco Susanibar
Author 4: Avid Roman-Gonzalez

Keywords: Germination; I2C protocol; humus; ATmega

PDF

Paper 35: Learning Analytics Framework for Adaptive E-learning System to Monitor the Learner’s Activities

Abstract: Research on adaptive e-learning systems (AE-LS) has long focused on the learner model and learning activities to personalize the learner's experience. However, many unresolved issues make it difficult for trainee teachers to obtain appropriate information about the learner's behavior. The evolution of Learning Analytics (LA) offers new possibilities for solving these AE-LS problems. In this paper, we propose a business intelligence framework for the AE-LS to monitor and manage the performance of the learner more effectively. The suggested architecture of the AE-LS proposes a data warehouse model that responds to these problems. It defines specific measures and dimensions that help teachers and educational administrators evaluate and analyze the learner's activities. By analyzing these interactions, the adaptive e-learning analytics system (AE-LAS) has the potential to provide a predictive view of upcoming challenges. These predictions are used to evaluate the adaptation of the content presentation and improve the performance of the learning process.

Author 1: Salma EL Janati
Author 2: Abdelilah Maach
Author 3: Driss El Ghanami

Keywords: e-Learning; adaptive e-learning system; learner model; learning analytics; business intelligence; data warehouse; content presentation

PDF

Paper 36: Machine Learning Approaches for Predicting the Severity Level of Software Bug Reports in Closed Source Projects

Abstract: In the software development life cycle, fixing bugs is one of the essential activities of the software maintenance phase. Bug severity indicates how major or minor the bug's impact on the execution of the system is and how rapidly the developer should fix it. Triaging the vast number of new bugs submitted to software bug repositories is a cumbersome and time-consuming process, and manual triage might lead to mistakes in assigning the appropriate severity level to each bug, delaying the fixing of severe software bugs. The whole process of assigning severity levels to bug reports should therefore be automated. In this paper, we aim to build prediction models that determine the severity class (severe or non-severe) of a reported bug. To validate our approach, we constructed a dataset from historical bug reports stored in the JIRA bug tracking system. These bug reports are related to different closed-source projects developed by INTIX, a company located in Amman, Jordan. We compare eight popular machine learning algorithms, namely Naive Bayes, Naive Bayes Multinomial, Support Vector Machine, Decision Tree (J48), Random Forest, Logistic Model Trees, Decision Rules (JRip), and K-Nearest Neighbor, in terms of accuracy, F-measure, and Area Under the Curve (AUC). According to the experimental results, the Logistic Model Trees algorithm achieved better performance than the other machine learning algorithms, with an accuracy, AUC, and F-measure of 86.31%, 0.90, and 0.91, respectively.
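
A tiny sketch of this kind of severity-classification pipeline: TF-IDF features from a bug report's text plus a linear classifier. The paper's best performer, Logistic Model Trees, is a Weka algorithm; plain logistic regression stands in here, and the example reports are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented mini-corpus; the real study uses JIRA bug reports.
    reports = ["crash on save, data loss, system unusable",
               "typo in settings dialog label",
               "null pointer exception blocks login for all users",
               "minor misalignment of toolbar icon"]
    labels = ["severe", "non-severe", "severe", "non-severe"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(reports, labels)
    print(model.predict(["application crashes and corrupts the database"]))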

Author 1: Aladdin Baarah
Author 2: Ahmad Aloqaily
Author 3: Zaher Salah
Author 4: Mannam Zamzeer
Author 5: Mohammad Sallam

Keywords: Software engineering; software maintenance; bug tracking system; bug severity; data mining; machine learning; severity prediction; closed-source projects

PDF

Paper 37: Steganography Performance over AWGN Channel

Abstract: Steganography can be performed in the frequency domain or the spatial domain. In the spatial domain, the least significant bit (LSB) method is the most widely used: the least significant bits of the binary representation of the image's pixels carry the confidential data bits. In the frequency domain, on the other hand, secret data bits are hidden using coefficients of the image's frequency representation, such as the discrete cosine transform (DCT). Robustness against image attacks or channel noise is a key requirement in steganography. In this paper, we study the performance of these steganography methods over a channel with Additive White Gaussian Noise (AWGN). We use the bit error rate to evaluate the performance of each method over a channel with different noise levels. Simulation results show that the frequency domain technique is more robust and achieves a better bit error rate in a noisy channel than the spatial domain method. Moreover, we enhance the robustness of the steganography system by using a convolutional encoder and a Viterbi decoder. The effect of the encoder's parameters, such as rate and constraint length, is evaluated.
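
A hedged sketch of the spatial-domain half of the experiment: hide bits in pixel LSBs, pass the stego signal through an AWGN channel, and measure the bit error rate. The noise sigma and signal size are arbitrary illustration values, and the frequency-domain and Viterbi parts are not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, size=10000, dtype=np.uint8)
    bits = rng.integers(0, 2, size=cover.size, dtype=np.uint8)

    stego = (cover & 0xFE) | bits                 # write each bit into the LSB
    noisy = stego.astype(float) + rng.normal(0.0, 1.0, stego.shape)  # AWGN
    received = np.clip(np.rint(noisy), 0, 255).astype(np.uint8)

    recovered = received & 1                      # read the LSBs back
    ber = np.mean(recovered != bits)
    print(f"BER at sigma=1.0: {ber:.3f}")         # LSBs are fragile under noise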

Author 1: Fahd Alharbi

Keywords: Steganography; robustness; noise; AWGN; Viterbi

PDF

Paper 38: Cloud Security based on the Homomorphic Encryption

Abstract: Cloud computing provides services rather than products, offering many benefits to clients who pay to use hardware and software resources. Its advantages include low cost, ease of maintenance, and resource availability. The main challenge in a cloud system is how to obtain a highly secured system against attackers, and various methods have been developed to increase the security level using different techniques. This paper reviews these techniques and their security challenges by presenting the most popular cloud techniques and applications. Homomorphic encryption in cloud computing is presented as a solution for increasing data security: with this method, a client can perform operations on encrypted data without decrypting it, obtaining the same result as the computation applied to the plaintext. Finally, the reviewed security techniques are discussed with some recommendations that might be used to achieve the required security level in such a system.
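
The homomorphic property described above can be demonstrated with textbook RSA, which is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. This is only a sketch; the tiny primes are purely illustrative, and practical cloud schemes use purpose-built systems (e.g. Paillier or lattice-based schemes) with far larger parameters.

    # Toy demonstration of a homomorphic property using textbook RSA.
    p, q, e = 61, 53, 17
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

    enc = lambda m: pow(m, e, n)
    dec = lambda c: pow(c, d, n)

    a, b = 7, 6
    c = (enc(a) * enc(b)) % n           # operate on ciphertexts only
    assert dec(c) == a * b              # equals the product of the plaintexts
    print(dec(c))                       # 42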

Author 1: Waleed T Al-Sit
Author 2: Qussay Al-Jubouri
Author 3: Hani Al-Zoubi

Keywords: Cloud computing; homomorphic encryption; security

PDF

Paper 39: Video Analysis with Faces using Harris Detector and Correlation

Abstract: A procedure is presented to detect changes in a video sequence based on the Viola-Jones method for obtaining images of people's faces; the videos are taken from the web. The software can extract separate images or frames of the nose, mouth, and eyes; here the eyes are taken as an example. Change detection is performed using correlation and the Harris detector. The correlation results allow us to recognize changes in the position of the person, or movement of the camera if the individual remains fixed, and vice versa. The Harris detector makes it possible to analyze a part of the face: with the detected reference points, very small changes in the video sequence of a particular organ, such as the human eye, can be recognized even when the image is of poor quality, as is the case with videos downloaded from the Internet or taken with a low-resolution camera.

Author 1: Rodolfo Romero Herrera
Author 2: Francisco Gallegos Funes
Author 3: José Elias Romero Martínez

Keywords: Viola and Jones; Harris detector; correlation; video

PDF

Paper 40: Mapping of Independent Tasks in the Cloud Computing Environment

Abstract: Cloud computing is a technology that provides many resources and facilities for sharing data. Because of the open environment of cloud computing, the volume of requests and data increases quickly, a problem that can be addressed by properly matching tasks with the available resources. Task scheduling algorithms play an immense role in the cloud computing environment by minimizing the time required to complete the tasks assigned to the available resources. Several algorithms have been introduced to solve the scheduling problem for tasks of several kinds, but the existing algorithms are task-dependent. The major criterion of a task scheduling algorithm is to optimize resource utilization in a diverse computing environment, minimizing makespan and execution time, so that the accountability of a healthcare industry that uses cloud computing can be enhanced. The proposed algorithm is designed to deal with variable-length tasks by taking advantage of different heuristic algorithms, and it ensures optimal task scheduling over the available resources to enhance the quality of the healthcare system.

Author 1: Biswajit Nayak
Author 2: Sanjay Kumar Padhi

Keywords: Scheduling; mixed model; cloud computing; makespan; healthcare

PDF

Paper 41: Convolutional Neural Network Architecture for Plant Seedling Classification

Abstract: Weed control is a challenging problem for crop productivity. Weeds are perceived as an important problem because they reduce crop yields through expanding competition for nutrients, water, and sunlight, and they serve as hosts for diseases and pests. Thus, it is crucial to identify weeds early in their growth in order to avoid their side effects on crop growth. Conventional machine learning technologies previously exploited for discriminating crop and weed species faced challenges in the effectiveness and reliability of weed detection at preliminary stages of growth. This work proposes the application of a deep learning technique to plant seedling classification. A new Convolutional Neural Network (CNN) architecture is designed to classify plant seedlings at their early growth stages. The presented technique is appraised using a plant seedlings dataset, with average accuracy, precision, recall, and F1-score as evaluation metrics. The results reveal the capability of the proposed technique to discriminate among 12 species (3 crops and 9 weeds), achieving 94.38% average classification accuracy. The proposed system is compared with existing plant seedling classification systems, and the results demonstrate that it outperforms the existing methods.

Author 1: Heba A Elnemr

Keywords: Deep learning; convolutional neural network; plant seedling classification; weed control

PDF

Paper 42: A Methodology for Engineering Domain Ontology using Entity Relationship Model

Abstract: Ontology engineering is an important aspect of the semantic web vision for attaining a meaningful representation of data. Although various techniques exist for the creation of ontologies, most methods involve a number of complex phases, scenario-dependent ontology development, and poor validation of the ontology. This research work presents a lightweight approach to building domain ontologies using the Entity Relationship (ER) model. First, a detailed analysis of the intended domain is performed to develop the ER model. In the next phase, ER-to-ontology (EROnt) conversion rules are outlined, and finally a system prototype is developed to construct the ontology. The proposed approach investigates the domain of information technology curricula for the successful interpretation of concepts, attributes, relationships between concepts and constraints among the concepts of the ontology. Expert evaluation of the accurate identification of ontology vocabulary shows that the method performs well on curriculum data, with 95.75% average precision and 90.75% average recall.

Author 1: Muhammad Ahsan Raza
Author 2: M. Rahmah
Author 3: Sehrish Raza
Author 4: A. Noraziah
Author 5: Roslina Abd. Hamid

Keywords: Ontology engineering; semantic web; ontology validation; knowledge management

PDF

Paper 43: An Evaluation Model for Auto-generated Cognitive Scripts

Abstract: Autonomous intelligent agents have become a very important research area in Artificial Intelligence (AI). Socio-cultural situations are one challenging area in which autonomous intelligent agents can acquire new knowledge or modify existing knowledge. Such situations can be best represented in the form of cognitive scripts, which allow different techniques to be used to facilitate knowledge transfer between scripts. Conceptual blending has proven successful in enhancing the social dynamics of cognitive scripts, where information is transferred from contextually similar scripts to a target script, resulting in a new blended script. To the extent of our knowledge, no computational model is available to evaluate these newly generated cognitive scripts. This work aims to develop a computational model to evaluate cognitive scripts resulting from blending two or more linear cognitive scripts. The evaluation process involves: 1) using GloVe similarity to check whether the transferred events conceptually fit the target script; 2) using the semantic view of text coherence to decide on the optimal position(s) at which to place the transferred event(s) in the target script. Results show that GloVe similarity can be applied successfully to preserve the contextual meaning of cognitive scripts. Additional results show that GloVe embedding gives higher accuracy than Universal Sentence Encoder (USE) and Smooth Inverse Frequency (SIF) embeddings, but this comes at a high computational cost. Future work will look into reducing the computational cost and enhancing the accuracy.
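
A minimal sketch of the GloVe-similarity check described in step 1: average pretrained word vectors for a candidate event and for a target-script context, then compare them by cosine similarity. The vector file name (a local copy of the public glove.6B vectors), the example phrases, and the 0.5 threshold are all assumptions for illustration; the paper's scripts and tuning are not reproduced here.

    # Cosine similarity between averaged GloVe word vectors.
    import numpy as np

    def load_glove(path):                 # GloVe text format: word v1 v2 ...
        vecs = {}
        with open(path, encoding="utf8") as f:
            for line in f:
                parts = line.rstrip().split(" ")
                vecs[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
        return vecs

    def phrase_vec(vecs, text):           # mean of the known word vectors
        words = [vecs[w] for w in text.lower().split() if w in vecs]
        return np.mean(words, axis=0)

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    glove = load_glove("glove.6B.100d.txt")   # hypothetical local copy
    event = phrase_vec(glove, "pay the waiter")
    context = phrase_vec(glove, "order food eat leave restaurant")
    print("fits target script:", cosine(event, context) > 0.5)  # assumed cutoff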

Author 1: Ahmed M ELMougi
Author 2: Rania Hodhod
Author 3: Yasser M. K. Omar

Keywords: Autonomous intelligent agents; socio-cultural situations; cognitive scripts; conceptual blending; contextual structural retrieval algorithms; text coherence; sentence embedding

PDF

Paper 44: Systematic Literature Review of Identifying Issues in Software Cost Estimation Techniques

Abstract: Software cost estimation plays a vital role in software project management. It is the process of predicting the effort and cost, in terms of money and staff, required to develop a software system. A software project is more likely to succeed if its estimated cost is close to the actual cost. At the acquisition stage, the fewest details about the software project are available, which causes problems in cost estimation; as the project progresses, more details become available, which benefits cost estimation. Nevertheless, estimating the software cost in the earliest phases is considered to produce better results. In this research, cost estimation techniques are discussed along with the issues in each particular technique, with a focus on understanding the points that cause hurdles in estimating the cost of a software project.

Author 1: Muhammad Asif Saleem
Author 2: Rehan Ahmad
Author 3: Tahir Alyas
Author 4: Muhammad Idrees
Author 5: Asfandayar
Author 6: Asif Farooq
Author 7: Adnan Shahid Khan
Author 8: Kahawaja Ali

Keywords: Cost estimation; COCOMO; qualitative; use case point; PCA

PDF

Paper 45: A Novel Student Clustering Model for the Learning Simplification in Educational Environments

Abstract: Student clustering is a method of granting students many different ways of learning by taking the student's degree into account. Clustering can be defined as a set of steps that divides a collection of information or objects into meaningful subclasses, called clusters; a cluster is thus a group of items that are similar to one another and dissimilar to items belonging to other clusters. In this study, we group students into different clusters in order to organize students and lecturers in more interactive ways, to extract education related to particular information, and to produce comments and feedback for the academic teacher. The concluding research effort forms the final phase: it is the result of satisfaction when deviations exist in the artifact behaviour, derived from the repeatedly revised hypothetical predictions, after which the results are declared 'good enough'.

Author 1: Khalaf Khatatneh
Author 2: Islam Khataleen
Author 3: Rami Alshwaiyat
Author 4: Mohammad Wedyan

Keywords: Clustering; student groups; interactive education; students' clustering; student's degree; valuable secondary classes

PDF

Paper 46: Classifying Cardiotocography Data based on Rough Neural Network

Abstract: Cardiotocography is a medical device that monitors fetal heart rate and uterine contractions during pregnancy. It is used by doctors to diagnose and classify the fetal state, a task challenged by uncertainty in the data. The Rough Neural Network is one of the most common data mining techniques for classifying medical data, as it handles this uncertainty challenge well. This paper provides a simulation of a Rough Neural Network classifying a cardiotocography dataset, measuring the accuracy rate and the time consumed during the classification process. The WEKA tool is used to analyse the cardiotocography data with different algorithms (neural network, decision table, bagging, nearest neighbour, decision stump and least squares support vector machine). The comparison shows that the accuracy rate and time consumption of the proposed model are feasible and efficient.

Author 1: Belal Amin
Author 2: Mona Gamal
Author 3: A. A. Salama
Author 4: I.M. El-Henawy
Author 5: Khaled Mahfouz

Keywords: Accuracy rate; cardiotocography; data mining; rough neural network; WEKA tool

PDF

Paper 47: Hospital Queue Control System using Quick Response Code (QR Code) as Verification of Patient’s Arrival

Abstract: A hospital is an organization that primarily provides services in the form of examinations, treatment, medical care and other diagnostic measures required by each patient, within the limits of the technology and means provided by the hospital. For patients to receive maximum service, hospitals have to provide the best services, and one service currently in the spotlight is the queue system. This study proposes a hospital queue control system that uses a quick response code (QR Code) to verify the patient's arrival, which serves to speed up the administrative process and provide effective services by facilitating the patient's place in the registration queue. The system uses a website for registration and as a database repository for patient data. The website has two views, one for patients and one for doctors, which speeds up the administrative process: registration can be done at any time, with patients entering their own data and complaints, and their medical records automatically available on the website, so patients do not need to come to the hospital with a medical record. Furthermore, the website filters the data entered by the patient, so that patients can choose a hospital that provides services for their complaints; hospital addresses are also shown to help patients find the hospital's location. After the administrative process on the website, the system provides a QR Code as verification for patients who have registered and whose data already exist in the database. When patients arrive at the hospital, they only need to scan the QR code, without repeating the administrative procedure, because the issued QR Code indicates that the patient already has a hospital destination along with a queue number. It is expected that this queue control system could make the administrative process faster, more effective and more efficient.

Author 1: Ridho Hendra Yoga Perdana
Author 2: Hudiono
Author 3: Mochammad Taufik
Author 4: Amalia Eka Rakhmania
Author 5: Rohman Muhamad Akbar
Author 6: Zainul Arifin

Keywords: Queue; hospital; patient; quick response code; registration

PDF

Paper 48: Artificial Immune-based Algorithm for Academic Leadership Assessment

Abstract: Artificial immune-based algorithms are inspired by the biological immune system and serve as a computational intelligence approach to data analysis. The negative selection algorithm, a member of the immune-based algorithm family, recognizes pattern changes using gene detectors in a complementary state. Due to this self-recognition ability, the algorithm is widely used to recognize abnormal or non-self data, especially in fault diagnosis, pattern recognition, network security, etc. In this study, the self-recognition capability of the negative selection algorithm is considered as a potential technique for classifying employee competency. Assessing employee performance is an important task that helps human resource management identify the right candidates in job promotion assessments. Thus, this study attempts to propose an immune-based model for assessing academic leadership performance. Three phases are involved in the experiments: data acquisition and preparation; model development; and analysis and evaluation. Data on academic leadership proficiency were prepared as the dataset for the learning and detection processes. Several experiments were conducted using cross-validation on different models to identify the most accurate one. The accuracy of the negative selection classifier is considered acceptable for this academic leadership assessment case study. As an enhancement, other immune-based or bio-inspired algorithms, such as genetic algorithms, particle swarm optimization and ant colony optimization, could also be considered for performance assessment.

Author 1: Hamidah Jantan
Author 2: Nur Hamizah Syafiqah Che Azemi
Author 3: Zulaiha Ali Othman

Keywords: Immune-based algorithm; negative selection algorithm; academic leadership; performance assessment

PDF

Paper 49: Antennas of Circular Waveguides

Abstract: A circular waveguide antenna design is proposed for offset (displaced) reflector antennas. The operating frequencies are chosen so that the waveguide generates the Transverse Electric (TE) mode, resulting in a high impedance bandwidth. The measured radiation pattern of the fabricated antenna agrees well with the numerical data. Used as a primary feed in an offset reflector, the antenna decreases cross-polarization.

Author 1: Cusacani Guerrero
Author 2: Julio Agapito
Author 3: Roman-Gonzalez Avid

Keywords: Circular waveguide antenna; mode; microwave oven

PDF

Paper 50: Pedestrian Crossing Safety System at Traffic Lights based on Decision Tree Algorithm

Abstract: Pedestrians are street users who have the right to priority in matters of safety. Road users such as vehicle drivers sometimes violate traffic lights, endangering pedestrians and making them feel insecure when crossing the street. To address this problem, a tool is designed that warns drivers or riders who violate the traffic light and helps prevent traffic accidents by spraying water. The system detects traffic violations based on changes in the vehicle's position relative to the stop line, obtained from an HC-SR04 ultrasonic sensor. When a violation is detected, a decision tree algorithm turns on a pump that sprays water at the traffic violator as a deterrent. The results show that for the vehicle located closest to the sensor the system achieves 94% precision, 88% recall and 85% accuracy; for the vehicle located in the middle, 73% precision, 100% recall and 75% accuracy; and for the vehicle located furthest from the sensor, 75% precision, 100% recall and 80% accuracy.

Author 1: Denny Hardiyanto
Author 2: Iswanto
Author 3: Dyah Anggun Sartika
Author 4: Muamar Rojali

Keywords: Pedestrian safety; decision tree algorithm; traffic light; spraying water

PDF

Paper 51: Autonomous Monitoring System using Wi-Fi Economic

Abstract: This project presents the implementation of an autonomous monitoring system powered by solar panels and connected to the network through Wi-Fi. The system collects meteorological data and transmits it in real time to the web for visualization and analysis of temperature, humidity, and atmospheric pressure. The system saves time and money and supports decision-making and efficiency. The device is built on the small Internet of Things platform "Wemos D1", which allows easy programming in the Arduino IDE.

Author 1: Michael Ames Ccoa Garay
Author 2: Avid Roman-Gonzalez

Keywords: Wemos D1 mini; Wi-Fi; sensor; internet

PDF

Paper 52: Impact of ICT on Students’ Academic Performance: Applying Association Rule Mining and Structured Equation Modeling

Abstract: Information and communication technology (ICT) plays a significant role in university students' academic performance. This research examined the effect of ICT on students' academic performance at different private universities in Chittagong, Bangladesh. Primary data were collected from the students of those universities using a survey questionnaire. Descriptive statistics, reliability analysis, confirmatory factor analysis, OLS regression, Structured Equation Modeling (SEM) and data mining algorithms such as association rule mining and Eclat were employed to evaluate the comparative importance of the factors in identifying the students' academic performance. From both statistical and mining perspectives, the overall results indicate a significant relationship between ICT use and students' academic performance. Students' addiction to ICT also has a significant influence on the comparative measurement of their academic performance. Finally, some recommendations are provided on the basis of the findings.

Author 1: Mohammad Aman Ullah
Author 2: Mohammad Manjur Alam
Author 3: Ahmed Shan-A-Alahi
Author 4: Mohammed Mahmudur Rahman
Author 5: Abdul Kadar Muhammad Masum
Author 6: Nasrin Akter

Keywords: Information and Communication Technology (ICT); student; academic; performance; association rule mining

PDF

Paper 53: WhatsApp as an Educational Support Tool in a Saudi University

Abstract: WhatsApp is a widely used social media app, growing in popularity across the Middle East, and the most popular in Saudi Arabia. In this paper, we investigate the usage of WhatsApp as an educational support tool in a Saudi university. An online survey was constructed to ascertain how students and staff feel about and utilize WhatsApp as part of their daily studies, and to gather their thoughts on other platforms offered by the university, such as Blackboard and email. The survey was tested and the results analyzed for frequency distributions, mean scores, and standard deviations. Our results from nearly 200 students and staff members reveal that WhatsApp is heavily utilized for a variety of educational support tasks and greatly preferred over the other platforms. We propose that WhatsApp has good potential to support not only student coordination, information dissemination and simple enquiries but also formal teaching and out-of-class learning.

Author 1: Ahmad J Reeves
Author 2: Salem Alkhalaf
Author 3: Mohamed A. Amasha

Keywords: Online learning; WhatsApp; e-learning; blackboard; communication; mobile learning

PDF

Paper 54: Real-Time Intelligent Parking Entrance Management

Abstract: To help improve the situation of urban transport in the city of Casablanca, we have studied and set up a smart parking system. In this paper, we evaluate the management of the parking entrance utilising artificial intelligence. In addition, we want to establish the limits of our solution and its ability to respond to different requests in real time.

Author 1: Sofia Belkhala
Author 2: Siham Benhadou
Author 3: Hicham Medromi

Keywords: Urban mobility; smart parking; IoT; artificial intelligence; agent; multi agent system; queuing theory

PDF

Paper 55: LUCIDAH Ligative and Unligative Characters in a Dataset for Arabic Handwriting

Abstract: Arabic script is inherently cursive, even when machine-printed. When connected to other characters, some Arabic characters may be optionally written in compact aesthetic forms known as ligatures. It is useful to distinguish ligatures from ordinary characters for several applications, especially automatic text recognition. Datasets that do not annotate these ligatures may confuse the recognition system training. Some popular datasets manually annotate ligatures, but no dataset (prior to this work) took ligatures into consideration from the design phase. In this paper, a detailed study of Arabic ligatures and a design for a dataset that considers the representation of ligative and unligative characters are presented. Then, pilot data collection and recognition experiments are conducted on the presented dataset and on another popular dataset of handwritten Arabic words. These experiments show the benefit of annotating ligatures in datasets by reducing error-rates in character recognition tasks.

Author 1: Yousef Elarian
Author 2: Irfan Ahmad
Author 3: Abdelmalek Zidouri
Author 4: Wasfi G. Al-Khatib

Keywords: Arabic ligatures; automatic text recognition; handwriting datasets; Hidden Markov Models

PDF

Paper 56: An Efficient Normalized Restricted Boltzmann Machine for Solving Multiclass Classification Problems

Abstract: Multiclass classification of unlabeled images using computer vision and image processing is currently an important issue. In this research, we focus on constructing high-level feature detectors for class-driven unlabeled data. We propose a normalized restricted Boltzmann machine (NRBM) to form a robust network model. The proposed NRBM is developed to achieve dimensionality reduction and provide better feature extraction, with enhanced learning of more appropriate features of the data. To increase the learning convergence rate and reduce the complexity of the NRBM, we add the Polyak averaging method to the training parameter updates. We train the proposed NRBM network model on five variants of the Modified National Institute of Standards and Technology (MNIST) benchmark dataset. The conducted experiments show that the proposed NRBM is more robust to noisy data than state-of-the-art approaches.

Author 1: Muhammad Aamir
Author 2: Nazri Mohd Nawi
Author 3: Fazli Wahid
Author 4: Hairulnizam Mahdin

Keywords: Multiclass classification; restricted Boltzmann machine; Polyak averaging; image classification; Modified National Institute of Standards and Technology datasets

PDF

Paper 57: Key Schedule Algorithm using 3-Dimensional Hybrid Cubes for Block Cipher

Abstract: A key schedule algorithm is the mechanism that generates and schedules all session keys for the encryption and decryption process. The key space of the conventional key schedule algorithm using 2D hybrid cubes is not large enough to resist attacks and could easily be exploited. In this regard, this research proposes a new key schedule algorithm based on the coordinate geometry of a hybrid cube (KSAHC), using the Triangular Coordinate Extraction (TCE) technique and 3-Dimensional (3D) rotation of the hybrid cube surface (HCs) for the block cipher, achieving a key space large enough to resist attacks on the secret key. The strength of the keys and ciphertext is tested using the Hybrid Cube Encryption Algorithm (HiSea) with brute force, entropy, correlation assessment, avalanche effect and the NIST randomness test suite, which proves the proposed algorithm is suitable for the block cipher. The results show that the proposed KSAHC algorithm performs better than existing algorithms, and we remark that our proposed model may find potential applications in information security systems.

Author 1: Muhammad Faheem Mushtaq
Author 2: Sapiee Jamel
Author 3: Siti Radhiah B. Megat
Author 4: Urooj Akram
Author 5: Mustafa Mat Deris

Keywords: Encryption; decryption; key schedule algorithm; hybrid cube; block cipher

PDF

Paper 58: Web Service Testing Techniques: A Systematic Literature Review

Abstract: The continual demand for loosely coupled systems makes web services a basic necessity for delivering solutions that are adaptable and able to work at runtime while maintaining high system quality. One of the basic techniques for evaluating the quality of such systems is testing, and with the rapidly and continually increasing popularity of web services, testing them has become essential to maintaining their quality. The performance testing of web-service-based applications is attracting extensive attention: to evaluate the performance of web services, it is essential to evaluate QoS (Quality of Service) attributes such as interoperability, reusability, auditability, maintainability, accuracy and performance. The purpose of this study is to present a systematic literature review of web service testing techniques that evaluate QoS attributes. With the intention of improving testing quality in web services, this review evaluates which QoS parameters are necessary to provide better quality assurance, now and in the future. Consequently, the main focus of the study is to provide an overview of recent research efforts on web service testing techniques from the research community. For each testing technique, apparent standards, benefits, and restrictions are identified. This systematic literature review thus offers industry a basis for deciding which testing technique is the most efficient and effective for a given testing assignment with the available resources. Overall, web service testing techniques are still broadly open for improvement.

Author 1: Israr Ghani
Author 2: Wan M.N. Wan-Kadir
Author 3: Ahmad Mustafa

Keywords: Quality assurance; web service testing; web service testing techniques; web service component testing

PDF

Paper 59: Deep Transfer Learning Application for Automated Ischemic Classification in Posterior Fossa CT Images

Abstract: Computed Tomography (CT) imaging is one of the conventional tools used to diagnose ischemia in the Posterior Fossa (PF). Radiologists commonly diagnose ischemia in the PF from CT images manually; however, such a procedure can be strenuous and time consuming for large-scale images, depending on expertise and ischemia visibility. With the rapid development of computer technology, automatic image classification based on Machine Learning (ML) has been widely developed as a second opinion for ischemia diagnosis, and the practical performance of ML is now challenged by the emergence of deep learning applications in healthcare. In this study, we evaluate the performance of deep transfer learning models based on Convolutional Neural Networks (CNN); VGG-16, GoogleNet and ResNet-50, in classifying normal and abnormal (ischemic) brain CT images of the PF. This is the first study to intensively investigate deep transfer learning for automated ischemia classification in posterior brain CT images. The experimental results show that ResNet-50 achieves the highest accuracy compared with the other proposed models. Overall, this automatic classification provides a convenient and time-saving tool for improving medical diagnosis.
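
For orientation, here is a minimal Keras sketch of transfer learning with ResNet-50 for a two-class (normal vs. ischemic) image task. The head layers, input size, optimizer, and training datasets (train_ds, val_ds) are generic assumptions, not the paper's exact configuration or preprocessing.

    # Transfer learning: frozen ImageNet ResNet-50 plus a small binary head.
    import tensorflow as tf

    base = tf.keras.applications.ResNet50(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False                  # reuse pretrained features as-is

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # normal vs. ischemic
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed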

Author 1: Anis Azwani Muhd Suberi
Author 2: Wan Nurshazwani Wan Zakaria
Author 3: Razali Tomari
Author 4: Ain Nazari
Author 5: Mohd Norzali Hj Mohd
Author 6: Nik Farhan Nik Fuad

Keywords: Deep learning; ischemic stroke; posterior fossa; classification; convolutional neural network; computed tomography; medical diagnosis

PDF

Paper 60: Effect of e-Commerce Platforms towards Increasing Merchant’s Income in Malaysia

Abstract: In 2018, Hootsuite and We Are Social reported in their Digital Report that the number of Malaysian Internet users had increased to 25.08 million, representing 79% of the Malaysian population. However, some reports indicate that even with the advancement of digital technology, many merchants still do not use any e-commerce platform and remain sceptical about it. With the pervasive increase in Internet users, many other reports have been published to understand the relationship between e-commerce and the increase in merchants' income. Therefore, the objective of this research is to study the involvement of Malaysian merchants in e-commerce platforms. A sample of 1060 respondents was selected randomly across Malaysia to participate in this research by answering a set of survey questions online. In general, the results show that many merchants are aware of e-commerce and familiar with it, but they have mainly been utilizing it for purchasing goods and services; only a small number of Malaysian merchants engage with e-commerce platforms to operate their businesses. In addition, the research identifies the impact of using e-commerce on Malaysian merchants' income.

Author 1: M Hafiz Yusoff
Author 2: Mohammad Ahmed Alomari
Author 3: Nurul Adilah Abdul Latiff
Author 4: Motea S. Alomari

Keywords: Income increase; e-commerce; merchant; Malaysia; e-retailing

PDF

Paper 61: Novel Adaptive Auto-Correction Technique for Enhanced Fingerprint Recognition

Abstract: Fingerprints are the most used biometric trait in applications where a high level of security is required. Fingerprint images may vary under environmental conditions such as temperature, humidity, and weather, so it is necessary to design a fingerprint recognition system that is robust against temperature variations. Existing techniques, both automated and non-automated, do not adapt in real time. In this paper, we propose an adaptive auto-correction technique called the Reference Auto-correction Algorithm. The proposed algorithm corrects the user's reference fingerprint template automatically, based on the captured fingerprint template and the matching score obtained on a daily basis, to improve the recognition rate. Analysis is carried out on 250 fingerprint templates from 10 users, stored in the database and captured at temperatures varying from 25°C to 0°C. The experimental results show a 40% improvement in the recognition rate after applying the auto-correction algorithm.

Author 1: Thejaswini P
Author 2: Srikantaswamy R S
Author 3: Manjunatha A S

Keywords: Minutiae; Euclidean distance; artificial neural network; CN (crossing number); reference auto-correction; adaptive method; ISO template; auto-correction algorithm

PDF

Paper 62: Empirical Study of Segment Particle Swarm Optimization and Particle Swarm Optimization Algorithms

Abstract: In this paper, the performance of the segment particle swarm optimization (Se-PSO) algorithm is compared with that of the original particle swarm optimization (PSO) algorithm. Four benchmark functions, Sphere, Rosenbrock, Rastrigin, and Griewank, with asymmetric initial range settings (upper and lower boundary values), were selected as the test functions. The experimental results show that the Se-PSO algorithm achieved faster convergence in all the test cases compared to the original PSO algorithm, suggesting Se-PSO as a promising optimization method for other fields as well.
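
For reference, the sketch below runs the original global-best PSO on the Sphere benchmark with an asymmetric initial range, in the spirit of the comparison above; the segmentation step that defines Se-PSO is not reproduced, and the inertia and acceleration coefficients are common textbook values, not the paper's settings.

    # Canonical global-best PSO minimizing the Sphere function.
    import numpy as np

    rng = np.random.default_rng(1)
    dim, n, iters = 10, 30, 200
    w, c1, c2 = 0.72, 1.49, 1.49            # inertia and acceleration terms

    sphere = lambda x: np.sum(x * x, axis=1)  # f(x) = sum of x_i squared

    x = rng.uniform(50, 100, (n, dim))      # asymmetric initial range
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), sphere(x)
    gbest = pbest[np.argmin(pval)]

    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = sphere(x)
        better = f < pval                   # update personal and global bests
        pbest[better], pval[better] = x[better], f[better]
        gbest = pbest[np.argmin(pval)]

    print("best Sphere value:", pval.min())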

Author 1: Mohammed Adam Kunna Azrag
Author 2: Tuty Asmawaty Abdul Kadir

Keywords: Se-PSO; PSO; sphere; Rosenbrock; Rastrigin; Griewank

PDF

Paper 63: An Efficient Deep Learning Model for Olive Diseases Detection

Abstract: Worldwide, plant diseases adversely influence both the quality and quantity of crop production, so early detection of such diseases is efficient in enhancing crop quality and reducing production loss. However, detecting plant diseases with the farmer's naked eye, traditional tools, or even laboratory tests is still an error-prone and time-consuming process. This paper presents a Deep Learning (DL) model for the efficient detection of olive diseases. The proposed model is distinguished by several novelties: it utilizes an efficient parameterized transfer learning model and smart data augmentation with a balanced number of images in every category, and it functions in more complex environments with an enlarged and enhanced dataset. Compared with recently developed state-of-the-art methods, the results show that the proposed method achieves higher accuracy, precision, recall, and F1-measure.

Author 1: Madallah Alruwaili
Author 2: Saad Alanazi
Author 3: Sameh Abd El-Ghany
Author 4: Abdulaziz Shehab

Keywords: Deep learning; AlexNet; convolutional neural networks; plant diseases; olive; feature extraction

PDF

Paper 64: Biotechnical System for Recording Phonocardiography

Abstract: Phonocardiography is a graphical method of recording the tones and murmurs generated by the heart using a phonocardiograph machine. Cardiovascular disease (CVD) and heart failure (HF) are life-threatening and major causes of death. The phonocardiogram (PCG) signal is considered an indicator of abnormalities in the cardiovascular system: it provides qualitative information and supports quantitative analysis of different heart tones and murmurs, and it plays a major role in treatment, diagnosis and decision-making in clinical examination and biomedical research. Using a simple stethoscope to diagnose heart problems requires an experienced physician, and many people with CVD and HF die every day for lack of facilities to analyse heart defects. Most low-income countries suffer from a severe shortage of electrocardiograph (ECG) and PCG devices and of trained physicians, and their poor healthcare systems need to be improved, especially for heart disease diagnostics. PCG is a technique for recording and monitoring cardiac acoustics using a transducer and microphone. This paper presents the design of a cheap and simple biotechnical system for recording and monitoring the PCG signal. The hardware was designed and implemented using a stethoscope, an electret microphone, amplifiers, a DC source, and a jack for transmitting the PCG signal to the computer, and software was coded for monitoring and processing the PCG signal.

Author 1: Marwan Ahmed Ahmed Hamid
Author 2: Maria Abdullah
Author 3: Najeed Ahmed Khan
Author 4: Yasmin Mohammed Ahmed AL-Zoom

Keywords: Phonocardiograph (PCG); cardiovascular diseases (CVD); heart failure (HF); electrocardiograph (ECG)

PDF

Paper 65: A Guideline for Decision-making on Business Intelligence and Customer Relationship Management among Clinics

Abstract: Business intelligence offers the capability to gain insights and improve decision-making by using a particular set of technologies and tools, and a company's success depends to a certain extent on its customers. Combining business intelligence with customer relationship management can improve the efficiency of organizations and hence increase productivity and revenue. Most research on implementing business intelligence and customer relationship management in organizations concentrates on architectures, frameworks, and maturity models; the process of implementing them, especially in smaller domains, has not yet been clarified, leaving some organizations unsure how to proceed. Thus, this study investigates the process involved in implementing business intelligence and customer relationship management in clinics. An infographic guideline was developed based on the six-phase data mining process known as the Cross Industry Standard Process for Data Mining, together with the four elements of the business intelligence decision-making process: gather, store, access, and analyze. Findings from an expert review show that the Content Validity Index increased by 0.7, from 0.3 in the first iteration to 1.0 in the second, which is an acceptable result. The guideline appears to be a useful instrument for practitioners implementing business intelligence and customer relationship management in their clinics, though the process involved in developing the guideline could be refined over time.

Author 1: Nur Izzati Yusof
Author 2: Norziha Megat Mohd. Zainuddin
Author 3: Noor Hafizah Hassan
Author 4: Nilam Nur Amir Sjarif
Author 5: Suraya Yaacob
Author 6: Wan Azlan Wan Hassan

Keywords: Business intelligence; customer relationship management; decision-making; guideline

PDF

Paper 66: Cognitive Neural Network Classifier for Fault Management in Cloud Data Center

Abstract: Proactively handling faults in a data center means allocating VMs to hosts before failures occur, so that SLAs are met for the tasks running in the data center. An existing solution [1] for fault prediction in data centers is based on the single parameter of temperature, with fault tolerance implemented reactively through VM replication. In contrast to these works, this paper proposes proactive fault tolerance with fault prediction based on deep learning over multiple parameters. A Cognitive Neural Network (CNN) is used to predict host failures and to initiate migration from, or avoid allocation to, hosts with a high probability of failure. Hosts in the data center are scored on failure probability (FP-Score) based on parameters collected at various levels using the CNN, and VM placement and migration policies are fine-tuned using the FP-Score to manage failures proactively.

Author 1: S Indirani
Author 2: C.Jothi Venkateswaran

Keywords: Deep learning; Cognitive Neural Network (CNN); FP-Score; fault tolerance; VM allocation; VM migration

PDF

Paper 67: Deep Learning Classification of Biomedical Text using Convolutional Neural Network

Abstract: In this digital era, document entries have been increasing day by day, to the point where their volume is overwhelming. This situation causes problems such as data congestion and difficulty in searching for the intended information or managing databases, for example the MEDLINE database, which stores documents related to the biomedical field. This research addresses the problem through text classification of biomedical abstracts. Text classification is the process of organizing documents into predefined classes; a standard text classification framework consists of feature extraction, feature selection and classification stages. The dataset used in this research is the Ohsumed dataset, a subset of the MEDLINE database, from which a total of 11,566 abstracts were selected. First, feature extraction is performed on the biomedical abstracts and a list of unique features is produced; all the features in this list are added to the multiword tokenizer lexicon for tokenizing phrases and compound words. After that, the biomedical texts are classified using a deep learning network, the Convolutional Neural Network, an approach widely used in domains such as pattern recognition and classification. The goal of classification is to accurately organize the data into the correct predefined classes. The Convolutional Neural Network achieved 54.79% average accuracy, 61.00% average precision, 60.00% average recall and 60.50% average F1-score. It is hoped that this research will benefit the text classification area.
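
A minimal sketch of a 1-D convolutional text classifier in the spirit of this approach is shown below. The vocabulary size, sequence length, layer widths, and the class count (23 is one common Ohsumed category setup) are assumptions, not the paper's exact architecture.

    # A small Conv1D text classifier over integer-encoded token sequences.
    import tensorflow as tf

    vocab, seq_len, classes = 20000, 300, 23
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(seq_len,)),
        tf.keras.layers.Embedding(vocab, 128),           # learned word vectors
        tf.keras.layers.Conv1D(128, 5, activation="relu"),  # n-gram-like filters
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()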

Author 1: Rozilawati Dollah
Author 2: Chew Yi Sheng
Author 3: Norhawaniah Zakaria
Author 4: Mohd Shahizan Othman
Author 5: Abd Wahid Rasib

Keywords: Convolutional neural network; biomedical text classification; compound term; Ohsumed dataset

PDF

Paper 68: Acquisition and Classification System of EMG Signals for Interpreting the Alphabet of the Sign Language

Abstract: In Peru, there is an increase in the number of people with difficulties in speaking or communicating; according to the National Institute of Statistics and Informatics of Peru (INEI, for its acronym in Spanish), around 80,000 people use sign language. For this reason, this research proposes using electromyography (EMG) signals to detect hand movements and identify the alphabet of the sign language, providing essential communication to people who need it. The idea is to classify the signals and recognize the letters of the Spanish alphabet as interpreted in Peruvian sign language. The results show classification of the 27 letters of the alphabet with an overall success rate of 93.9%.

Author 1: Alvarado-Diaz Witman
Author 2: Meneses-Claudio Brian
Author 3: Fiorella Flores-Medina
Author 4: Patricia Condori
Author 5: Natalia I. Vargas-Cuentas
Author 6: Avid Roman-Gonzalez

Keywords: Electromyography; EMG; sign language; essential communication; recognize the letters

PDF

Paper 69: Implementation of a Beowulf Cluster and Analysis of its Performance in Applications with Parallel Programming

Abstract: The Image Processing Research Laboratory (INTI-Lab) of the Universidad de Ciencias y Humanidades obtained permission to use the embedded systems laboratory, where INTI-Lab researchers will carry out research involving large-scale video processing, climate prediction, climate change research, physical simulations, and more. Projects of this type demand highly complex processing that, on ordinary computers, results in unfavorable run times for the researcher. For this reason, we opted to implement a high-performance cluster architecture: a set of computers interconnected over a local network that behaves as a single system to solve complex problems using parallel computing techniques. The intention is to reduce processing time roughly in proportion to the number of machines, approximating a low-cost supercomputer. Performance tests were performed scaling from 1 to 28 computers to measure the reduction in time. The results will show whether it is feasible to use this architecture in future projects that demand highly complex scientific processing.

Author 1: Enrique Lee Huamaní
Author 2: Patricia Condori
Author 3: Avid Roman-Gonzalez

Keywords: High-performance cluster; distributed programming; computational parallelism; Beowulf cluster; high-efficiency computing

PDF

Paper 70: An Internet of Things (IOT) based Smart Parking Routing System for Smart Cities

Abstract: The number of cars on the road has been growing due to increased car manufacturing, alongside customer services that help new drivers buy cars at affordable prices. At the same time, the infrastructure of big cities cannot support this number of cars, and with the disorganization of parking places, the growing number of driver requests to find the nearest parking place leads to serious traffic congestion. In parallel, the concept of smart cities raises the question of how the evolution of the Internet of Things (IoT) can improve the quality of a smart city. Several IoT efforts have been made to improve the reliability and productivity of public infrastructure, and many problems have been handled by the IoT, such as vehicle traffic congestion, road safety and the inefficient use of parking spaces. This work introduces a novel technique based on a distributed cloud IoT architecture for managing parking systems, combined with a distributed swarm intelligence technique using the Ant System algorithm, to improve the process of finding the nearest car park in the minimum time, based on the state of traffic on the road. This prototype will help drivers find the nearest car park and improve the exploitation of the available parking in the city.

Author 1: Elgarej Mouhcine
Author 2: Karouani Yassine
Author 3: El Fazazi Hanaa
Author 4: Khalifa Mansouri
Author 5: Youssfi Mohamed

Keywords: Intelligent traffic system; internet of things; swarm intelligent; ant colony optimization; vehicle routing system; multi-agent system; smart parking system; smart cities; cloud computing system

PDF

Paper 71: An Automated Approach for Identification of Non-Functional Requirements using Word2Vec Model

Abstract: Non-Functional Requirements (NFR) are embedded among the functional requirements in requirements specification documents, and identifying NFR in these documents is a challenging task. Ignoring NFR identification in the early stages of development increases cost and can ultimately cause the failure of the system. The aim of this approach is to help analysts and designers in architecting and designing the system by identifying NFR in the requirements document. Several supervised-learning-based solutions have been reported in the literature; however, accurate identification of NFR with supervised text classifiers requires a significant number of pre-categorized requirements for training, and system analysts perform that categorization manually. This study proposes an automated semantic-similarity-based approach that does not need pre-categorized requirements to identify NFR in requirements documents. The approach uses a Word2Vec model and popular keywords for the identification of NFR. The performance of the approach is measured in terms of precision, recall and F-measure on the PROMISE-NFR dataset. The empirical evidence shows that the automated semi-supervised approach reduces manual human effort in the identification of NFR.
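
The core idea, scoring each requirement by its Word2Vec similarity to NFR seed keywords, can be sketched as follows. The toy corpus, the seed words, and the gensim 4.x API usage are all assumptions for illustration; the paper's keyword lists and thresholds are not reproduced.

    # Score requirements against NFR seed keywords with Word2Vec similarity.
    from gensim.models import Word2Vec

    corpus = [["the", "system", "shall", "respond", "within", "two", "seconds"],
              ["users", "shall", "log", "in", "with", "a", "password"],
              ["the", "interface", "shall", "be", "easy", "to", "use"]]
    model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=50)

    seeds = ["respond", "password", "easy"]   # stand-ins for NFR keywords
    for req in corpus:
        # best similarity between any requirement word and any seed word
        score = max(model.wv.similarity(w, s) for w in req for s in seeds)
        print(" ".join(req), "->", round(float(score), 2))

On a corpus this small the similarities are noisy; the point is only the shape of the pipeline: train (or load) embeddings, then rank requirements by their proximity to NFR indicator terms.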

Author 1: Muhammad Younas
Author 2: Karzan Wakil
Author 3: Dayang N. A. Jawawi
Author 4: Muhammad Arif Shah
Author 5: Ahmad Mustafa

Keywords: Identification; non-functional requirements; semantic similarity; Word2Vec model

PDF

Paper 72: VoIP QoS Analysis over Asterisk and Axon Servers in LAN Environment

Abstract: Voice over IP (VoIP) is a developing technology and a key factor in emerging cyberspace engineering, and it has established its position in the telecom industry. VoIP is based on Internet technology: packet switching is used rather than circuit switching, and the analog signal is converted into digital signals for full-duplex transmission. VoIP technology is replacing the conventional public switched telephone network (PSTN) system due to its high flexibility and low cost. The purpose of this research work is to examine experimental and computational performance in view of the quality of service (QoS) parameters of VoIP over a local area network (LAN). The VoIP systems are implemented on two different operating system frameworks (Linux and Windows): the Linux-based open-source Asterisk and the Windows-based closed-source Axon private branch exchanges (PBXs) are configured, installed and verified. QoS factors (such as packet loss, delay and jitter) are observed over the Asterisk and Axon PBXs in a LAN domain with the assistance of the Paessler Router Traffic Grapher (PRTG) monitoring tool. Results for the QoS parameters are compared across both PBXs under data load (i.e., file transfer and HTTP traffic) during VoIP calls, and the efficiency and performance of Axon and Asterisk are compared and analyzed on the basis of the experimental outcomes.

Author 1: Naveed Ali Khan
Author 2: Abdul Sattar Chan
Author 3: Kashif Saleem
Author 4: Zuhaibuddin Bhutto
Author 5: Ayaz Hussain

Keywords: VoIP; asterisk; axon; computational based QoS; LAN; packet loss

PDF

Paper 73: Privacy Preserving Data Mining Approach for IoT based WSN in Smart City

Abstract: The Wireless Sensor Network (WSN) is one of the fundamental technologies of the Internet of Things (IoT). Various IoT devices are connected to the Internet through WSNs composed of sensor nodes and actuators, where the sensor nodes collaborate and accomplish their tasks dynamically. The main objective of deploying WSN-based applications is to make high-precision real-time observations, which is extremely challenging because of the limited computing power of sensors operating in constrained environments; resource constraints on energy, computation speed, bandwidth and memory; and the huge volume of high-speed, heterogeneous and fast-changing WSN data. These challenges encourage researchers to explore data mining techniques for extracting the required information from fast-changing sensor data and thereby efficiently handling the massive data generated by WSNs. The increasing need for data mining techniques in WSNs has inspired us to propose a distributed data mining technique that effectively handles the data generated by the nodes and prolongs the lifespan of the network. Our work provides a novel cluster-based scheme to mine sensor data without moving it to the cluster head (CH) or base station (BS), to achieve maximum performance in a WSN environment. The basic idea is that local computations are performed using the computing power at each sensor node, and only minimal higher-level statistical summaries are exchanged; this decreases the energy dissipated in communication, as the amount of sensor data transferred is considerably reduced, thereby maximizing the network lifetime while also preserving the privacy of the sensor data.

Author 1: Ahmed M Khedr
Author 2: Walid Osamy
Author 3: Ahmed Salim
Author 4: Abdel-Aziz Salem

Keywords: Distributed cluster-based algorithm; association rules; Internet of Things (IoT); privacy preserving; vertically and horizontally distributed databases; wireless sensor networks (WSN)

PDF

Paper 74: Efficient Distributed SPARQL Queries on Apache Spark

Abstract: RDF is a widely accepted framework for describing metadata on the web due to its simplicity and universal graph-like data model. Owing to the abundance of RDF data, classical query techniques are rendered unsuitable; to this end, we adopt the processing power of Apache Spark to load and query large datasets much more quickly than classical approaches. In this paper, we design experiments to evaluate the performance of several queries, ranging from single-attribute selection to selection, filtering and sorting over multiple attributes in the dataset. We further experiment with distributed SPARQL queries on Apache Spark GraphX and study the different stages involved in this pipeline, which comprises graph loading, Basic Graph Pattern (BGP) matching and result calculation. Executing distributed SPARQL queries on Apache Spark GraphX helped us study its performance and gave insights into which stages of the pipeline can be improved. Our goal is to minimize the time of the graph loading stage in order to improve overall performance and cut the cost of data loading.
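
To make the pipeline stages concrete, here is a minimal PySpark sketch that answers a two-pattern BGP with DataFrame joins. The triples file name, its simplified whitespace-separated "subject predicate object" layout, and the predicates are assumptions; the paper itself works on GraphX rather than plain DataFrames.

    # Graph loading and BGP matching for a toy SPARQL-style query in PySpark.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("bgp-sketch").getOrCreate()

    # Graph loading: one "subject predicate object" triple per line (assumed).
    raw = spark.read.text("triples.nt")
    t = raw.select(F.split("value", " ").alias("spo")).select(
        F.col("spo")[0].alias("s"),
        F.col("spo")[1].alias("p"),
        F.col("spo")[2].alias("o"))

    # BGP:  ?person :worksAt ?org .  ?person :name ?name
    works = t.filter(F.col("p") == ":worksAt").select("s", F.col("o").alias("org"))
    names = t.filter(F.col("p") == ":name").select("s", F.col("o").alias("name"))
    result = works.join(names, "s").select("name", "org")  # result calculation
    result.show()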

Author 1: Saleh Albahli

Keywords: Semantic web; RDF; SPARQL; SPARK; GraphX; triple patterns

PDF

Paper 75: A Modular Aspect-Oriented Programming Approach of Join Point Interfaces

Abstract: This paper describes and analyzes the main differences and advantages of Join Point Interfaces (JPI) as an Aspect-Oriented Programming (AOP) approach for modular software production, relative to the standard aspect-oriented programming methodology for Java (AspectJ), and proposes a structural modeling approach for modular software solutions. From a software engineering point of view, we highlight the relevance of structural and conceptual design for JPI software applications. We model and implement a classic AOP example using AspectJ and JPI as an application example to review their main differences and to highlight the consistency of JPI between products (models and code). Our proposal of UML JPI class diagrams allows the definition of oblivious classes that know about their JPI connections, an essential element in adapting and transforming traditional AspectJ-like AOP solutions to their JPI versions. Thus, for modular software production and education, JPI seems an ideal software development approach.

Author 1: Cristian Vidal
Author 2: Erika Madariaga
Author 3: Claudia Jiménez
Author 4: Luis Carter

Keywords: Aspect-Oriented Programming; AspectJ; JPI; class diagrams; UML

PDF

Paper 76: Artificial Potential Field Algorithm Implementation for Quadrotor Path Planning

Abstract: The potential field algorithm introduced by Khatib is well known in robot path planning. The algorithm is very simple, yet it provides real-time path planning and is effective at avoiding robot collisions with obstacles. The purpose of this paper is to implement and modify this algorithm for quadrotor path planning. The conventional potential field method is first applied to expose its challenging problems, such as goals that are unreachable due to local minima or due to nearby obstacles (GNRON), which the proposed modified algorithms then solve. The first proposed modification adds a virtual force to the repulsive potential force to escape local minima; the second addresses the GNRON issue by adding a virtual force and incorporating the quadrotor's distance to the goal point into the repulsive potential force. The simulation results show that the second modification is best applied to environments with the GNRON issue, whereas the first is suitable only for environments with local minima traps: the first modification reached the goal in six random tests in local-minima environments, while the second reached the goal in six random tests in local-minima environments, six in GNRON environments, and six in environments with both local minima and GNRON.
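
For reference, the conventional (Khatib-style) attractive and repulsive forces can be sketched in 2-D as below. The gains, the influence radius rho0, and the obstacle layout are illustrative assumptions, and the paper's two virtual-force modifications are not included.

    # Conventional attractive/repulsive potential-field navigation in 2-D.
    import numpy as np

    k_att, k_rep, rho0 = 1.0, 10.0, 2.0   # gains and influence radius (assumed)

    def force(q, goal, obstacles):
        f = -k_att * (q - goal)            # attractive force toward the goal
        for obs in obstacles:
            d = np.linalg.norm(q - obs)
            if d < rho0:                   # repulsion acts only near obstacles
                f += k_rep * (1.0/d - 1.0/rho0) * (q - obs) / d**3
        return f

    q, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
    obstacles = [np.array([5.0, 4.0])]     # off the direct path; an obstacle
                                           # placed exactly on it can trap the
                                           # conventional method (local minimum)
    for _ in range(1500):                  # simple gradient-descent integration
        q = q + 0.02 * force(q, goal, obstacles)
    print("final position:", q.round(2))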

Author 1: Iswanto Iswanto
Author 2: Alfian Ma’arif
Author 3: Oyas Wahyunggoro
Author 4: Adha Imam Cahyadi

Keywords: Quadrotor; path planning; GNRON (Goal Nonreachable with Obstacles Nearby); artificial potential field; local minima

PDF

Paper 77: New Transport Layer Security using Metaheuristics and New Key Exchange Protocol

Abstract: The ease with which transmitted data can be intercepted is an information security flaw that needs to be handled rigorously: it makes eavesdropping, tampering, and message forgery by malicious parties simpler. One of the protocols developed to secure communication between client and server is Transport Layer Security (TLS), a cryptographic protocol that provides encryption through its record protocol, as well as authentication and data integrity. In this paper, a new TLS version is proposed, named Transport Layer Security with Metaheuristics (TLSM), which is based on a recently designed metaheuristic symmetric ciphering technique for data encryption, combined with the hash function SHA-SBOX and a new method for private key exchange. Compared to the existing TLS versions, the suggested protocol outperforms all of them in terms of the security level of the encrypted data, key management, and execution time.
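
The abstract does not define TLSM's cipher, SHA-SBOX, or the new key exchange, so the sketch below shows only the generic shape of such a protocol: a Diffie-Hellman-style exchange, a hash-based session key (SHA-256 standing in for SHA-SBOX), and a toy keystream cipher standing in for the metaheuristic cipher. All parameters are deliberately simplified stand-ins, not the paper's method.

```python
# Generic shape of a TLS-like handshake and record encryption. SHA-256 and
# the XOR keystream below are stand-ins; the paper's SHA-SBOX and
# metaheuristic cipher are not public in the abstract.
import hashlib
import secrets

# Toy Diffie-Hellman parameters (far too small for real use; real
# deployments use standardized 2048-bit groups).
P = 2**127 - 1   # a Mersenne prime
G = 5

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def session_key(priv, peer_pub):
    shared = pow(peer_pub, priv, P)              # same value on both sides
    return hashlib.sha256(str(shared).encode()).digest()  # stand-in KDF

def xor_stream(key, data):                       # toy cipher, NOT secure
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

a_priv, a_pub = dh_keypair()                     # client side
b_priv, b_pub = dh_keypair()                     # server side
k = session_key(a_priv, b_pub)
assert k == session_key(b_priv, a_pub)           # both derive the same key
ct = xor_stream(k, b"hello over a TLSM-like channel")
print(xor_stream(k, ct))                         # round-trips to plaintext
```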

Author 1: Mohamed Kaddouri
Author 2: Mohammed Bouhdadi
Author 3: Zakaria Kaddouri
Author 4: Driss Guerchi

Keywords: Transport Layer Security (TLS); metaheuristic; symmetric ciphering algorithm; private key exchange; hash function

PDF

Paper 78: Arabic Lexicon Learning to Analyze Sentiment in Microblogs

Abstract: Sentiment analysis is the study and classification of opinions distilled from social media. The goal of this study is to build an adaptive sentiment lexicon for the Arabic language; based on such lexicons, sentiment polarity classification can be improved. The classification problem is stated as a mathematical programming problem in which we search for a lexicon that optimizes classification accuracy, and a genetic algorithm is presented to solve this optimization problem. A meta-level feature is generated based on the adaptive lexicons provided by the genetic algorithm, and the algorithm's performance is strengthened by using it alongside n-gram features and Bing Liu's lexicon. In this work, lexicon-based and corpus-based approaches are integrated, and the lexicons are produced from the corpus. Five data sets are tested through experiments, and the sentiments in all data sets are classified into five polarity levels. The lexicons generated by the proposed algorithm support a better understanding of word sentiment orientation, the culture of social media users, and the Arabic language. Since stop words can contribute to sentiment polarity, they are retained rather than deleted. The results show that the F-measure is greater than 80% on three data sets and the accuracy is greater than 80% on all data sets. The proposed method outperforms the current methods in the literature on two of the data sets and, in terms of F-measure, achieves better results on three.
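
In the spirit of the paper's approach, the sketch below evolves per-word sentiment weights (a lexicon) with a genetic algorithm so that summed weights classify labeled texts correctly. The vocabulary, data, and two-class fitness are toy stand-ins; the paper optimizes an Arabic lexicon over five polarity levels, and its chromosome encoding is not given in the abstract.

```python
# Toy genetic algorithm evolving a sentiment lexicon. Vocabulary, data, and
# the two-class fitness are illustrative stand-ins for the paper's Arabic,
# five-level setting.
import random

VOCAB = ["good", "great", "bad", "awful", "not"]
DATA = [("good great", 1), ("bad awful", -1), ("not bad", 1), ("not good", -1)]

def classify(lexicon, text):
    score = sum(lexicon[w] for w in text.split() if w in lexicon)
    return 1 if score >= 0 else -1

def fitness(weights):
    lex = dict(zip(VOCAB, weights))
    return sum(classify(lex, t) == y for t, y in DATA)   # accuracy count

def evolve(pop_size=30, gens=200):
    pop = [[random.uniform(-1, 1) for _ in VOCAB] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                    # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(VOCAB))           # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(len(VOCAB))             # point mutation
            child[i] += random.gauss(0, 0.3)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(dict(zip(VOCAB, (round(w, 2) for w in best))),
      fitness(best), "/", len(DATA))
```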

Author 1: Mahmoud B. Rokaya
Author 2: Ahmed S. Ghiduk

Keywords: Sentiment analysis; sentiment lexicon; social media; Twitter; optimization; mathematical programming; genetic algorithm; evolutionary computation; Arabic language

PDF

Paper 79: An Efficient Segmentation of Retinal Blood Vessels using Singular Value Decomposition and Morphological Operator

Abstract: The extensive study of retinal fundus images has become an essential part of the medical domain for detecting pathologies, including diabetic retinopathy, cataract, glaucoma, and macular degeneration, which are the major causes of blindness. Automatic extraction of the tree-shaped, unique retinal vascular structure from retinal fundus images is a most exigent task and, when achieved successfully, becomes a perfect tool to help ophthalmologists follow appropriate diagnostic measures. In this work, a novel scheme to segment the tree-like vascular structure from retinal images is proposed, using the left singular vector matrix of the Singular Value Decomposition of the weighted L*a*b* color model of the input image. The left singular vector matrix, which captures the relevant and useful features, enables an effective conversion of the input RGB image to a gray image. Next, the converted gray image is contrast-enhanced using the CLAHE method, which enhances the tree-shaped vasculature of the retinal blood vessel structure and yields a gray image with rich contrast. Further processing normalizes the contrast-enhanced gray image by removing the image's background with a mean filter, whereby the blood vessels become brighter. The difference between the gray image and the normalized filtered image is then fed to ISODATA thresholding, which globally segments the foreground vasculature from the background; the result is converted into a binary image, to which a morphological opening operation is applied to remove small, falsely segmented portions and produce an accurate segmentation. The technique was tested on images from the DRIVE and STARE databases, and a performance metric called "area covered" is calculated in addition to common metrics for sampled input images. The approach is empirically validated and attained a segmentation accuracy of 97.48%.
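
The sketch below condenses the pipeline stages named in the abstract using scikit-image equivalents. The authors' L*a*b* channel weighting, the exact use of the left singular vectors, and all parameters are not given in the abstract, so the choices below (including the file name 'retina.png' and the projection onto the first singular direction, which is one plausible reading of the SVD step) are illustrative only.

```python
# Condensed, illustrative version of the segmentation stages from the
# abstract, using scikit-image equivalents; parameters are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter
from skimage import io, color, exposure, filters, morphology

rgb = io.imread("retina.png")[:, :, :3]          # hypothetical input file
lab = color.rgb2lab(rgb)                         # L*a*b* color model
h, w, _ = lab.shape

# One plausible reading of the SVD step: project each pixel's (L,a,b)
# vector onto the first left singular direction to get one gray channel.
X = lab.reshape(-1, 3)
U, S, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
gray = U[:, 0].reshape(h, w)
gray = (gray - gray.min()) / (gray.max() - gray.min())  # rescale to [0, 1]

enhanced = exposure.equalize_adapthist(gray)     # CLAHE contrast boost
background = uniform_filter(enhanced, size=25)   # mean-filter background
diff = background - enhanced                     # dark vessels become bright

t = filters.threshold_isodata(diff)              # global ISODATA threshold
vessels = diff > t
cleaned = morphology.binary_opening(vessels, morphology.disk(1))
cleaned = morphology.remove_small_objects(cleaned, min_size=50)
io.imsave("vessels.png", cleaned.astype(np.uint8) * 255)
```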

Author 1: N. C Santosh Kumar
Author 2: Y. Radhika

Keywords: Singular value decomposition; left singular vector matrix; feature extraction; average filter; ISODATA thresholding; morphological operators; STARE and DRIVE databases

PDF

Paper 80: A Survey of Various Frameworks and Solutions in all Branches of Digital Forensics with a Focus on Cloud Forensics

Abstract: Digital forensics is a branch of forensic science concerned with the use of digital information produced, stored, and transmitted by various digital devices as a source of evidence in investigations and legal proceedings. Digital forensics can be split into several classes, such as computer forensics, network forensics, mobile forensics, cloud computing forensics, and IoT forensics. In recent years, cloud computing has emerged as a popular computing model in many areas of human life; however, cloud computing systems lack support for computer forensic investigations. The main goal of digital forensics is to prove the presence of a particular document on a given digital device. This paper presents a comprehensive survey of various frameworks and solutions in all classes of digital forensics, with a focus on cloud forensics. We start by discussing the different forensics classes, their frameworks, limitations, and solutions. We then focus on the methodological aspects and existing challenges of cloud forensics. Finally, a detailed comparison discusses the drawbacks, differences, and similarities of several suggested cloud forensics frameworks, providing future research directions.

Author 1: Mohammed Khanafseh
Author 2: Mohammad Qatawneh
Author 3: Wesam Almobaideen

Keywords: Digital forensics; cloud forensics; investigation process; IoT forensics; examination stage; evidence

PDF
