The Science and Information (SAI) Organization
IJACSA Volume 11 Issue 5

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially as long as the original work is properly cited.

View Full Issue

Paper 1: A Service Scheduling Security Model for a Cloud Environment

Abstract: Scheduling tasks on a standalone system can be complex, but scheduling in a cloud environment is even more complex because of the large number of resources available. An added complexity in a cloud environment is security. This paper addresses scheduling from a security point of view, presents a Scheduling Security Model, and evaluates its effectiveness in meeting users' requirements through a number of worked examples covering different scenarios.

Author 1: Abdullah Sheikh
Author 2: Malcolm Munro
Author 3: David Budgen

Keywords: Scheduling; security; model; cost; cloud computing

PDF

Paper 2: Robust Speed Control for Networked DC Motor System

Abstract: In this paper, we study observer-based H∞ output feedback control for a networked DC motor system in which the communication from the controller to the motor is subject to data packet dropout, characterized by a Bernoulli random binary distribution and treated as a disturbance. Uncertain parameters in the network-based DC motor system are also considered. First, we use a robust H∞ output feedback control strategy to optimize the controller gain and observer gain so as to guarantee mean-square stability. The observer-based H∞ output feedback controller is designed to achieve robust speed control in the mean-square sense and to optimize the parameters of the control system while guaranteeing robust H∞ output feedback performance. We then show that, when data is transmitted through the network, the system remains stable and robust speed control is achieved.
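The Bernoulli packet-dropout channel assumed in this abstract can be sketched in a few lines; the hold-last-value receiver below is a common convention, not necessarily the one used in the paper:

```python
import random

def simulate_dropout(measurements, arrival_prob, seed=0):
    """Pass each measurement through a lossy channel: with probability
    `arrival_prob` the packet arrives (Bernoulli variable = 1); otherwise
    the receiver holds the last successfully received value."""
    rng = random.Random(seed)
    received, last = [], 0.0
    for y in measurements:
        if rng.random() < arrival_prob:
            last = y           # packet arrived
        received.append(last)  # on dropout: hold previous value
    return received

out = simulate_dropout([1.0, 2.0, 3.0, 4.0], arrival_prob=1.0)
```

With `arrival_prob = 1.0` every packet arrives and the output equals the input; lowering the probability injects the dropouts the controller must tolerate.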

Author 1: Sokliep Pheng
Author 2: Luo Xiaonan
Author 3: Rachana Lav
Author 4: Zhongshuai Wang
Author 5: Zetao Jiang

Keywords: The Networked DC Motor System; Observer Design; Robust Speed Control; Data Dropout; LMI

PDF

Paper 3: Forest Fire Detection System using LoRa Technology

Abstract: Millions of hectares of forest worldwide are affected by fires annually, which can lead to the loss of human life and property, the destruction of natural flora and fauna, and the loss of raw materials. The problem is even greater in forests that are unguarded and lack communication systems. Thus, in recent years, various systems based on Internet of Things (IoT) devices have been proposed for real-time forest fire detection. This paper proposes a system capable of quickly detecting forest fires over long distances. The system is built on LoRa (Long Range) technology and the LoRaWAN (Long Range Wide Area Network) protocol, which can connect low-power devices distributed over large geographical areas and offers an innovative, efficient solution for transmissions requiring low data rates and low transmission power over long ranges.

Author 1: Nicoleta Cristina GAITAN
Author 2: Paula HOJBOTA

Keywords: LoRa; real-time; long range wide area network; internet of things

PDF

Paper 4: Detecting C&C Server in the APT Attack based on Network Traffic using Machine Learning

Abstract: An APT (Advanced Persistent Threat) attack is a dangerous form of attack with clear intentions and targets. APT attacks use a variety of sophisticated, complex methods and technologies to penetrate targets and obtain confidential, sensitive information. Detecting APT attacks remains challenging, because such attacks are designed specifically for each target and are therefore difficult to detect using experience or predefined rules. Many different methods have been researched and applied to detect early signs of APT attacks in an organization. One method of great current interest is analyzing network connections to detect the command-and-control server (C&C Server) of an APT attack campaign. This method has great practical significance: if the connection of malware to its control server can be detected early, the attack campaign can be prevented quickly. In this paper, we propose a method for detecting C&C Servers based on network traffic analysis using machine learning.
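The flavour of such a traffic classifier can be sketched with a toy nearest-centroid model; the two flow features (connection-interval variance and outbound/inbound byte ratio) and all sample values are invented for illustration and are not from the paper:

```python
# C&C beaconing traffic tends to show low interval variance (regular
# check-ins) and upload-heavy byte ratios; user traffic is irregular.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def nearest_centroid_classify(sample, benign, malicious):
    """Label a flow by whichever class centroid it is closer to."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    cb, cm = centroid(benign), centroid(malicious)
    return "c2" if dist2(sample, cm) < dist2(sample, cb) else "benign"

benign = [(9.0, 1.2), (8.5, 0.9), (7.8, 1.1)]     # irregular user traffic
malicious = [(0.2, 4.0), (0.3, 3.5), (0.1, 4.2)]  # regular, upload-heavy beacons
label = nearest_centroid_classify((0.25, 3.8), benign, malicious)
```

A real system would extract many more flow features and use a stronger learner, but the decision structure (learn class profiles from labelled traffic, score new flows) is the same.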

Author 1: Cho Do Xuan
Author 2: Lai Van Duong
Author 3: Tisenko Victor Nikolaevich

Keywords: Advanced Persistent Threat (APT); abnormal behavior; network traffic; machine learning; APT detection; Control Server (C&C Server)

PDF

Paper 5: An Abnormal Behavior Detection Method using Optical Flow Model and OpenPose

Abstract: Detecting and recognizing abnormal pedestrian behavior on escalators has always been a challenging task in intelligent video surveillance systems. To address this problem, a method combining the optical flow vectors of passengers with human skeleton extraction is proposed. First, an adaptive dual fractional-order optical flow model is used to estimate the optical flow field in scenes with illumination changes, low contrast, and uneven illumination. At the same time, the OpenPose deep convolutional neural network is used to extract body skeletons and locate persons in the image. The optical flow field and the human skeleton are then combined to obtain the optical flow vector of the passenger's head. The optical flow field of the passenger's head and the escalator step under the passenger's feet are used for abnormal behavior detection and recognition, with a random forest employed as the behavior classifier. Experimental results show that the proposed method and its improvement strategy can accurately estimate the optical flow field in real time for low-contrast outdoor videos with insufficient illumination, uneven brightness, and illumination changes, and that the accuracy of abnormal action detection and recognition reaches 97.98% and 92.28%, respectively.

Author 1: Zhu Bin
Author 2: Xie Ying
Author 3: Luo Guohu
Author 4: Chen Lei

Keywords: Image sequence analysis; abnormal behavior recognition; fractional order variational optical flow model; random forest

PDF

Paper 6: Clash between Segment-level MT Error Analysis and Selected Lexical Similarity Metrics

Abstract: The aim of this paper is to evaluate the quality of popular machine translation engines on three texts of different genres in a scenario in which both the source and target languages are morphologically rich. Translations are obtained from the Google Translate and Microsoft Bing engines, and German-Croatian is selected as the language pair. The analysis entails both human and automatic evaluation. The process of error analysis, which is time-consuming and often tiresome, is conducted in the user-friendly Windows 10 application TREAT. Prior to annotation, training is conducted in order to familiarize the annotator with MQM, which is used in the annotation task, and with the interface of TREAT. The annotation guidelines, elaborated with examples, are provided. The evaluation is also conducted with the automatic metrics BLEU and CHRF++ in order to assess their segment-level correlation with human annotations on three different levels: accuracy, mistranslation, and the total number of errors. Our findings indicate that neither the total number of errors, nor the most prominent error category and subcategory, shows a consistent and statistically significant segment-level correlation with the selected automatic metrics.
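Segment-level correlation of the kind examined here can be sketched with a plain Spearman rank correlation; the per-segment error counts and metric scores below are invented for illustration (the paper works with MQM annotations and BLEU/CHRF++ outputs):

```python
def ranks(xs):
    """Rank values from 1 (smallest) to n; assumes no ties for simplicity."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman's rho via the classic 1 - 6*sum(d^2)/(n(n^2-1)) formula."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical per-segment data: human error counts vs. an automatic metric.
errors = [0, 1, 2, 3, 5]
metric = [0.9, 0.8, 0.6, 0.5, 0.2]  # higher metric score, fewer errors
rho = spearman(errors, metric)
```

A perfectly consistent metric would yield rho close to -1 against error counts; the paper's finding is that this consistency does not hold at segment level.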

Author 1: Marija Brkic Bakaric
Author 2: Kristina Tonkovic
Author 3: Lucia Nacinovic Prskalo

Keywords: Machine translation; evaluation; error analysis; BLEU; CHRF++; MQM

PDF

Paper 7: Technologies for Making Reliable Decisions on a Variety of Effective Factors using Fuzzy Logic

Abstract: The problem of choosing methods that increase the reliability of solutions while reducing the time required to search the state space of a particular object is presented. Effective factors that can significantly affect the attainability of decision-making goals in a fuzzy process development environment are identified. The selected factors are presented in classical form in natural language, which significantly increases confidence in the decisions made by determining the maximum and minimum values of the membership functions and the set of criteria, and by exposing conflict situations during decision making. Information technologies and approaches for assessing the completeness of decision-making goals over a variety of effective factors are proposed. The effectiveness of the proposed solutions is estimated using the fuzzy inference procedure and the Zadeh-Mamdani approach. Software implementing the described methods for evaluating and deciding on a variety of factors was developed and tested using object-oriented programming. Experimental testing of the realized ideas confirmed an increase in the reliability of management decisions in various applied subject areas.

Author 1: Yousef Ibrahim Daradkeh
Author 2: Irina Tvoroshenko

Keywords: Attainability; completeness of decision-making; conflict; factor; membership function; state space

PDF

Paper 8: Adaptive Hybrid Synchronization Primitives: A Reinforcement Learning Approach

Abstract: The choice of synchronization primitive used to protect shared resources is a critical aspect of application performance and scalability, which has become extremely unpredictable with the rise of multicore machines. Neither of the most commonly used contention management strategies works well in all cases: spinning provides quick lock handoff and is attractive in an undersubscribed situation but wastes processor cycles in oversubscribed scenarios, whereas blocking saves processor resources and is preferred in oversubscribed cases but adds to the critical path by lengthening the lock handoff phase. Hybrids, such as spin-then-block and spin-then-park, tackle this problem by switching between spinning and blocking depending on the contention level on the lock or the system load. However, threads then follow a fixed strategy and cannot learn and adapt to changes in system behavior. To this end, it is proposed to use principles of machine learning to formulate hybrid methods as a reinforcement learning problem that overcomes these limitations. In this way, threads can intelligently learn when they should spin or sleep. The challenges of the suggested technique and future work are also briefly discussed.
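The learning loop described above can be sketched as a tiny epsilon-greedy bandit in which each waiting thread learns the cheaper action from observed wait-time costs; the cost numbers are invented, and a real implementation would sit inside the lock's slow path rather than in Python:

```python
import random

class LockPolicy:
    """Epsilon-greedy choice between spinning and blocking, driven by
    the observed cost (e.g. wasted cycles or handoff latency) of each."""
    def __init__(self, epsilon=0.1, seed=0):
        self.q = {"spin": 0.0, "block": 0.0}  # running mean cost per action
        self.n = {"spin": 0, "block": 0}
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(["spin", "block"])  # explore
        return min(self.q, key=self.q.get)             # exploit: lowest cost

    def update(self, action, cost):
        self.n[action] += 1
        self.q[action] += (cost - self.q[action]) / self.n[action]

policy = LockPolicy(epsilon=0.0)
# Oversubscribed system: spinning wastes cycles, so it observes high cost.
for _ in range(50):
    a = policy.choose()
    policy.update(a, cost=10.0 if a == "spin" else 1.0)
```

After a few observations the policy settles on blocking; if the load changed and blocking became expensive, the running means would shift and the choice would flip, which is exactly the adaptivity fixed spin-then-block hybrids lack.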

Author 1: Fadai Ganjaliyev

Keywords: Spinning; sleeping; blocking; spin-then-block; spin-then-park; reinforcement learning

PDF

Paper 9: Creating Knowledge Environment during Lean Product Development Process of Jet Engine

Abstract: Organizations invest intense resources in their product development processes. This paper aims to create a knowledge environment using trade-off curves during the early stages of the set-based concurrent engineering (SBCE) process of an aircraft jet engine for a reduced noise level at takeoff. Data is collected from a range of products in the same family as the jet engine. Knowledge-based trade-off curves are used as a methodology to create and visualize knowledge from the collected data. Findings showed that this method provides designers with enough confidence to identify a set of design solutions during the SBCE applications.

Author 1: Zehra C. Araci
Author 2: Ahmed Al-Ashaab
Author 3: Muhammad Usman Tariq
Author 4: Jan H. Braasch
Author 5: M. C. Emre Simsekler

Keywords: Knowledge creation and visualization; knowledge management; new product development; lean product development; set-based concurrent engineering; trade-off curves; aircraft engine noise reduction

PDF

Paper 10: The Prototype of Thai Blockchain-based Voting System

Abstract: In this paper, a prototype of a Thai voting system using blockchain technology (B-VoT) has been successfully designed and developed. Hyperledger Fabric was chosen as the main blockchain infrastructure. A web application was developed to allow voters to vote, and real-time voting results are shown on the same website after the election period has passed. We connected the web application and the blockchain database so that the votes are stored as blockchain transactions. With our blockchain internet election, voters can easily vote on the website, and each vote is automatically stored in the blockchain database as a single blockchain transaction. This voting prototype assures data integrity because no one can modify the stored information or the voting results. Therefore, our system could have a major impact on voting reliability and help rebuild public trust in Thai elections.
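The integrity property claimed here comes from hash chaining, which can be sketched without any blockchain framework; this toy ledger (not Hyperledger Fabric, and with invented voter IDs) shows why a stored vote cannot be silently altered:

```python
import hashlib
import json

def make_block(prev_hash, vote):
    """Seal a vote together with the previous block's hash."""
    body = json.dumps({"prev": prev_hash, "vote": vote}, sort_keys=True)
    return {"prev": prev_hash, "vote": vote,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def append_vote(chain, vote):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append(make_block(prev, vote))

def verify(chain):
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for block in chain:
        body = json.dumps({"prev": block["prev"], "vote": block["vote"]},
                          sort_keys=True)
        if block["prev"] != prev or \
           block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = block["hash"]
    return True

chain = []
append_vote(chain, {"voter": "v001", "choice": "A"})
append_vote(chain, {"voter": "v002", "choice": "B"})
ok_before = verify(chain)
chain[0]["vote"]["choice"] = "B"   # attempt to tamper with a stored vote
ok_after = verify(chain)
```

A permissioned platform such as Hyperledger Fabric adds distributed endorsement and consensus on top of this basic structure, so no single party can rewrite the chain.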

Author 1: Krittaphas Wisessing
Author 2: Phattaradon Ekthammabordee
Author 3: Thattapon Surasak
Author 4: Scott C.-H. Huang
Author 5: Chakkrit Preuksakarn

Keywords: Blockchain; internet election; hyperledger fabric; data integrity; voting reliability

PDF

Paper 11: Satellite Image Database Search Engine which Allows Fuzzy Expression of Geophysical Parameters of Queries

Abstract: A satellite image database search engine that allows fuzzy expression of the geophysical parameters in queries is proposed. The search engine is based on a knowledge-based system, and a prototype has been created and tested. Whereas conventional search systems require the user to know the functions of the search system and the search keys in advance, the proposed engine elicits the search conditions in conversational form and allows ambiguous expressions (six adverbial linguistic hedges), freeing the user from this burden. To make this possible, a membership function is defined for each item of attribute information, and search-condition refinement by fuzzy logic is introduced. The results show that the system accepts fuzzy expressions in queries and supports comprehensive dialogue between users and the system.
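Adverbial hedges over a membership function can be sketched with the standard fuzzy-logic concentration/dilation operators; the "tall" attribute, its breakpoints, and the three hedges below are illustrative stand-ins for the paper's six hedges over geophysical parameters:

```python
def tall(height_cm):
    """Piecewise-linear membership function (illustrative attribute)."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30

# Adverb hedges modify a base membership value (standard fuzzy operators).
HEDGES = {
    "very":      lambda mu: mu ** 2,    # concentration
    "extremely": lambda mu: mu ** 3,
    "somewhat":  lambda mu: mu ** 0.5,  # dilation
}

def query(hedge, term, value):
    """Evaluate a hedged fuzzy condition such as 'very tall'."""
    return HEDGES[hedge](term(value))

mu_plain = tall(175)
mu_very = query("very", tall, 175)
mu_somewhat = query("somewhat", tall, 175)
```

"Very" tightens the condition (membership drops from 0.5 to 0.25), while "somewhat" loosens it; a query refiner can rank database records by these hedged membership values.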

Author 1: Kohei Arai

Keywords: Search engine; fuzzy expression; knowledge base system; membership function

PDF

Paper 12: Difficulties in Teaching Online with Blackboard Learn Effects of the COVID-19 Pandemic in the Western Branch Colleges of Qassim University

Abstract: The global COVID-19 pandemic has compelled educational institutions to shift from face-to-face teaching to fully online courses. This was made possible by advances in information technology such as Blackboard Learn, a Learning Management System (LMS). By transitioning their systems to this LMS, the western branch colleges of Qassim University in the Kingdom of Saudi Arabia were able to support e-learning. To investigate the influence of online e-courses on educational institutions and learning outcomes, this paper surveys both faculty and students. The survey focuses on course objectives, practical skills, faculty members' responses to queries and discussion, explanations of applied courses, problem-solving, and teamwork skills. A comprehensive investigation of the faculty reveals that 59.08% of faculty members find it challenging to meet course objectives due to the lack of practical lab work and other detailed knowledge exchange in applied courses, leaving them unsatisfied with online courses compared with traditional teaching. Moreover, 77.17% of students find it difficult to hold discussions during online courses to resolve queries, which diminishes their problem-solving capability. In addition, with an online course system there is no way to physically collaborate in teams and work on team projects to improve teamwork abilities.

Author 1: Fahad Alturise

Keywords: COVID-19; blackboard learn; e-learning; learning management system; pandemic; difficulties; Qassim University

PDF

Paper 13: Using Geographical Information System for Mapping Public Schools Distribution in Jeddah City

Abstract: Geographical Information Systems (GIS) are a unique tool for school mapping, providing a clear understanding of the nature, planning, and distribution of educational facilities. This study carried out a GIS analysis of the distribution of male primary and secondary schools in Jeddah city, Saudi Arabia, to show how GIS tools can assist educational planning authorities in understanding, re-planning, and addressing the location, distribution, and availability challenges of the city's schools. A geodatabase for the study area was created, incorporating education and population data collected from the authorities. Spatial and network analyses were used to understand location distribution, student density, and school accessibility in the study region. The analyses identified student density, the directional growth of schools, drive-time service areas, and the served and unserved populace, enabling the authorities in Saudi Arabia to make better planning decisions, address present and future challenges in the provision of primary schools to residents, and, most importantly, improve educational services. The findings revealed that shorter travel distances are found in the denser (central) part of the city, and that some regions need more schools.

Author 1: Abdulkader A. Murad
Author 2: Abdulmuakhir I. Dalhat
Author 3: Ammar A Naji

Keywords: GIS; school mapping; educational facilities; geodatabase; spatial and network analysis

PDF

Paper 14: An Efficient Approach for Storage of Big Data Streams in Distributed Stream Processing Systems

Abstract: Alongside centralized management, processing, and querying, storage is one of the key components of big data management. There is always a huge requirement to store immense volumes of heterogeneous data in different formats. In big data stream processing applications, storage is given priority and plays a major role in historical data analysis. During stream processing, some of the incoming data and intermediate results are a good source of future samples, which can be used in future evaluations to avoid the many mistakes of storing and maintaining big data streams. Hence, a big data stream application requires efficient storage support for historical queries. Researchers, scientists, and academicians are working to develop a sophisticated storage mechanism that keeps the most useful data for future reference in a stream archive store. However, a stream processing system cannot store the whole incoming data stream for future reference; a technique is needed to discard expired data and free space for new incoming data in the archive store. Keeping in view the storage space limitations, integration issues, and associated costs, we optimize the stream archive store to free more space for future data. The proposed enhanced algorithm deletes obsolete (retention-expired) data and frees space for new incoming data on a distributed platform. This paper presents an Enhanced Time Expired Algorithm (ETEA) for stream archive storage in a distributed environment, which removes obsolete data based on time expiration and provides space for new incoming data for historical data analysis during the skew time (hot spots). We also evaluated the efficiency of our algorithm using the skew factor. The experimental results show that our approach is 98% efficient and faster than other conventional techniques.
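The core time-expiration idea can be sketched as an archive keyed by expiry time; the fixed retention window and the arrival pattern below are invented for illustration and do not reproduce ETEA's distributed or skew-aware details:

```python
import heapq
import itertools

class StreamArchive:
    """Keep archived stream samples until their retention window expires;
    expired entries are evicted to free space for new arrivals."""
    def __init__(self, retention):
        self.retention = retention    # how long a sample stays useful
        self.heap = []                # (expiry_time, seq, sample), oldest first
        self.seq = itertools.count()  # tie-breaker for equal expiry times

    def insert(self, now, sample):
        self.evict(now)
        heapq.heappush(self.heap,
                       (now + self.retention, next(self.seq), sample))

    def evict(self, now):
        """Pop every sample whose expiry time has passed; return the count."""
        removed = 0
        while self.heap and self.heap[0][0] <= now:
            heapq.heappop(self.heap)
            removed += 1
        return removed

    def size(self):
        return len(self.heap)

archive = StreamArchive(retention=10)
for t in range(0, 50, 5):            # samples arriving at t = 0, 5, ..., 45
    archive.insert(t, {"t": t})
live = archive.size()                # only samples newer than t = 35 survive
```

Because eviction piggybacks on insertion, the archive never holds more than one retention window of data, which is the space guarantee a stream archive needs.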

Author 1: Sultan Alshamrani
Author 2: Quadri Waseem
Author 3: Abdullah Alharbi
Author 4: Wael Alosaimi
Author 5: Hamza Turabieh
Author 6: Hashem Alyami

Keywords: Distributed stream databases; storage optimization; stream archive storage; time expiration

PDF

Paper 15: Comparative Analysis of Methodologies for Domain Ontology Development: A Systematic Review

Abstract: Interlocking Institutional Worlds (IWs) is a concept explaining the need for institutions (or players) to interoperate in order to solve problems of common interest in a given domain. Managing knowledge in the IWs domain is complex, but promoting knowledge sharing based on standards and common terms agreeable to all players is essential and must be established. In this sense, ontologies, as a conceptual tool and a key component of knowledge-based systems, have been used by organizations for effective knowledge management of the domain of discourse. Many methodologies have been proposed by researchers during the last decade. However, designing a domain ontology for IWs requires a well-defined ontology development methodology. Therefore, in this article, a survey was conducted comparing ontology development methodologies published between 2015 and 2020. The purpose of this survey is to identify the limitations and benefits of previously developed methodologies. The criteria for the comparison have been derived from evolving trends in the literature. Our findings give some guidelines that help in defining a suitable methodology for designing any domain ontology within the domain of interlocking institutional worlds.

Author 1: Abdul Sattar
Author 2: Ely Salwana Mat Surin
Author 3: Mohammad Nazir Ahmad
Author 4: Mazida Ahmad
Author 5: Ahmad Kamil Mahmood

Keywords: Knowledge management; interlocking institutional worlds; domain ontology; comparison of methodologies; ontology development

PDF

Paper 16: Performance Analysis of Transient Fault-Injection and Fault-Tolerant System for Digital Circuits on FPGA

Abstract: A Fault-Tolerant System is necessary to improve the reliability of digital circuits in the presence of fault injection, and it also improves system performance through better fault coverage. In this work, an efficient Transient Fault-Injection System (FIS) and Fault-Tolerant System (FTS) are designed for digital circuits. The FIS includes Berlekamp-Massey Algorithm (BMA) based LFSRs with fault logic followed by a one-hot encoder register, which generates the faults. The FTS is designed using Triple-Modular-Redundancy (TMR) and Dual-Modular-Redundancy (DMR). The TMR module is designed using Majority Voter Logic (MVL), and the DMR module using Self-Voter Logic (SVL), for digital circuits both synchronous and asynchronous. Four different MVL approaches are designed in the TMR module. The FIS-FTS module is designed in the Xilinx ISE 14.7 environment and implemented on an Artix-7 FPGA. The synthesis results, including chip area, gate count, delay, and power, are analyzed along with fault tolerance and coverage for the given digital circuits. The fault tolerance is analyzed using the ModelSim simulator. The FIS-FTS module achieves an average of 99.17% fault coverage for both synchronous and asynchronous circuits.
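The voting logic at the heart of TMR and DMR can be shown behaviourally; this is a Python sketch of the standard Boolean forms, not the paper's HDL, and the simplified self-voter tie-break is an assumption of this illustration:

```python
def majority_vote(a, b, c):
    """TMR majority voter: the output agrees with at least two of the
    three replicas, masking a single transient fault.
    Boolean form: (a AND b) OR (b AND c) OR (a AND c)."""
    return (a & b) | (b & c) | (a & c)

def dmr_self_vote(primary, shadow, primary_check_ok):
    """DMR with a self-voter sketch: when the two replicas disagree,
    trust the replica whose self-check passed."""
    if primary == shadow:
        return primary
    return primary if primary_check_ok else shadow

# Inject a transient fault into one TMR replica: the voter masks it.
golden = 1
faulty = golden ^ 1
masked = majority_vote(golden, golden, faulty)
```

TMR masks any single-replica fault at the cost of three copies; DMR halves the area but needs the extra self-check signal to break ties, which is the trade-off the paper evaluates in hardware.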

Author 1: Sharath Kumar Y N
Author 2: Dinesha P

Keywords: Digital circuits; transient fault; fault injection; fault tolerant; triple modular redundancy; dual modular redundancy; majority voter logic; self-voter logic

PDF

Paper 17: Evaluation of the Diffusion Phenomenon using Information from Twitter

Abstract: Social media services, including social networking services (SNSs) and microblogging services, are gaining prominence. SNSs have a variety of information on products and services, such as product introductions, utilization methods, and reviews. It is important for companies to utilize SNSs to understand the various ways of engaging with them. Against this backdrop, numerous studies have focused on marketing activities (e.g., consumer behavior and sales promotion) using information on the internet from sources such as SNSs, blogs, and news sites. In particular, to understand the dissemination of information on the Internet, various researchers have undertaken studies pertaining to the diffusion phenomenon occurring in the real world. Here, topic diffusion is a phenomenon whereby a certain topic is shared with several other users. In this study, we aimed to evaluate the diffusion phenomenon on Twitter. In particular, we focused on the state of a targeted topic and analyzed the estimation of the topic using natural language processing (NLP) and time series analysis. First, we collected tweets containing four titles of animation broadcasts using hashtags. Approximately 250,000 tweets were posted on Twitter in a month. Second, we used NLP methods such as morphological analysis and N-gram analysis to characterize the contents of each title. Third, using the time series data for the tweets, we created a mixture model that replicated the diffusion phenomenon. We clustered the diffusion phenomenon using this model. Finally, we combined the features related to the content of the tweets and the results of the clustering of the diffusion phenomenon and evaluated them.

Author 1: Kohei Otake
Author 2: Takashi Namatame

Keywords: Twitter; diffusion phenomenon; natural language processing; mixture model

PDF

Paper 18: Machine Learning and Statistical Modelling for Prediction of Novel COVID-19 Patients Case Study: Jordan

Abstract: Since December 2019, the world's view of life has changed due to the ongoing COVID-19 pandemic. This requires the use of all kinds of technology to help identify coronavirus patients and control the spread of the disease. In this paper, an online questionnaire was developed as a data collection tool. The collected data was used as input to various prediction models: a statistical model (Logistic Regression, LR) and machine learning models (Support Vector Machine, SVM, and Multi-Layer Perceptron, MLP). These models were used to predict potential COVID-19 patients based on their signs and symptoms. The MLP showed the best accuracy (91.62%) compared to the other models, while the SVM showed the best precision (91.67%).
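The symptom-based prediction setup can be sketched with a from-scratch logistic regression on toy data; the three binary symptoms and labels below are invented (patients here are "positive" when at least two symptoms are present) and are not the paper's questionnaire features:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=500):
    """Plain stochastic gradient descent on the logistic loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return int(sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5)

# Hypothetical binary symptoms: [fever, dry_cough, loss_of_smell]
X = [[1, 1, 1], [1, 1, 0], [0, 1, 1], [0, 0, 0], [1, 0, 0], [0, 1, 0]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logreg(X, y)
preds = [predict(w, b, xi) for xi in X]
```

The SVM and MLP models in the paper replace this linear decision rule with a maximum-margin or multi-layer one, but the pipeline (symptom vector in, risk label out) is the same.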

Author 1: Ebaa Fayyoumi
Author 2: Sahar Idwan
Author 3: Heba AboShindi

Keywords: Novel COVID-19; machine learning; logistic regression; support vector machine; multi-layer perceptron

PDF

Paper 19: Analytical Study between Human Urban Planning and Geographic Information Systems: “The Case of the City of Casablanca”

Abstract: Since the early 1910s, the city of Casablanca has experienced urban and demographic expansion that has made it a regional center, but this expansion has not occurred in an organized and uniform manner. The result is a significant overlap between systematic, structured development and unplanned development. While the geographic researcher can study this transformation through field observation, describing the phenomena seen with the naked eye, geographic information systems offer a means of studying it more successfully and precisely. Studying the stages of urbanization that Casablanca has gone through using geographic information systems will give stakeholders and researchers a clear vision of how this expansion occurred, its positive and negative implications, and its prospects, and will thus help in planning and managing the city's urban area. The study period extends from 1910 to 2020, and we relied on a set of documents, satellite images, aerial photographs, and old maps.

Author 1: Mohamed Rtal
Author 2: Mostafa Hanoun

Keywords: Casablanca; human urban; urbanization of information systems; urban expansion

PDF

Paper 20: Clustering-Based Trajectory Outlier Detection

Abstract: Improvements in mobile computing techniques have generated massive trajectory data representing the mobility of moving objects such as vehicles, animals, and people. Mining trajectory data, and especially outlier detection in trajectory data, is an attractive and challenging topic that has fascinated many researchers. In this paper, we propose a Clustering-Based Trajectory Outlier Detection algorithm (CB-TOD). The proposed algorithm partitions a trajectory into line segments and reduces those segments to a smaller set (the summary trajectory SS(t)) without affecting the spatial properties of the original trajectory. The CB-TOD algorithm then uses a clustering method, treating the cluster with the smallest number of segments for a trajectory, together with a small number of neighbors, as the sub-trajectory outliers for that trajectory. Our algorithm can also detect outlier trajectories in the dataset. The main advantage of CB-TOD is reducing the computational time of outlier detection, especially for big trajectory data, without affecting the quality of the detection results. Experimental results demonstrate that CB-TOD outperforms state-of-the-art algorithms in identifying both outlier sub-trajectories and outlier trajectories in a real trajectory dataset.
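The neighbor-count idea behind trajectory outlier detection can be sketched with a simple distance-based rule; the distance measure, radius, and coordinates below are invented for illustration and are much cruder than CB-TOD's segment-and-cluster machinery:

```python
def traj_distance(t1, t2):
    """Symmetric average point-to-trajectory distance (illustrative measure)."""
    def one_way(a, b):
        return sum(min(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
                       for qx, qy in b)
                   for px, py in a) / len(a)
    return (one_way(t1, t2) + one_way(t2, t1)) / 2

def distance_outliers(trajs, radius, min_neighbors):
    """A trajectory is an outlier if fewer than `min_neighbors` other
    trajectories lie within `radius` of it."""
    outliers = []
    for i, t in enumerate(trajs):
        neighbors = sum(1 for j, u in enumerate(trajs)
                        if j != i and traj_distance(t, u) <= radius)
        if neighbors < min_neighbors:
            outliers.append(i)
    return outliers

# Three similar commutes and one deviating route (hypothetical coordinates).
trajs = [
    [(0, 0.0), (1, 0.0), (2, 0.0)],
    [(0, 0.1), (1, 0.1), (2, 0.1)],
    [(0, 0.2), (1, 0.2), (2, 0.2)],
    [(0, 5.0), (1, 6.0), (2, 7.0)],
]
out = distance_outliers(trajs, radius=0.5, min_neighbors=2)
```

This brute-force check is quadratic in the number of trajectories; summarising each trajectory first, as CB-TOD does, is precisely what makes the neighbor search affordable on big data.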

Author 1: Eman O. Eldawy
Author 2: Hoda M.O. Mokhtar

Keywords: Data mining; outlier detection; trajectory data processing; clustering

PDF

Paper 21: Measuring the Similarity between the Sanskrit Documents using the Context of the Corpus

Abstract: Identifying the similarity between two documents is a challenging but important task that benefits various applications such as recommender systems and plagiarism detection. One popular approach for processing text documents is the document term matrix (DTM). The proposed approach processes Sanskrit, one of the oldest and most morphologically complex languages and one largely untouched by such work, and builds a document term matrix for Sanskrit (DTMS) and a document synset matrix for Sanskrit (DSMS). DTMS uses term frequencies, whereas DSMS uses synset frequencies instead, contributing to dimension reduction. The proposed approach considers the semantics and context of the corpus to solve the problem of polysemy. More than 760 documents, including Subhashitas and stories, are processed together. F1 score, precision, accuracy, and the Matthews correlation coefficient (MCC), the most balanced of these measures, are used to demonstrate the superiority of the proposed approach.

Author 1: Jatinderkumar R. Saini
Author 2: Prafulla B. Bafna

Keywords: Cosine; dimension reduction; sanskrit; synset; matthews correlation coefficient

PDF

Paper 22: An Efficient Model for Mining Outlier Opinions

Abstract: In the internet era, opinion mining has become a critical technique in many applications. The internet offers users a unique opportunity to express and share their views and experiences anywhere and at any time through various channels such as online reviews, personal blogs, Facebook, Twitter and companies’ websites. This treasure of user-generated online data plays an essential role in decision-making and can drive radical changes in several fields. Although opinionated text can provide invaluable information for the wider community, whether individuals, businesses or governments, outlier or anomalous opinions can have an equally strong impact in the opposite direction, harming these fields. Consequently, there is an urgent need for techniques that detect outlier opinions and avoid their negative effects on the many application domains that rely on opinion mining. In this paper, an efficient model for mining outlier opinions is proposed. The proposed MOoM (Mining Outlier Opinions Model) offers, for the first time, the ability to mine outlier opinions from products’ free-text reviews. It can thus help decision makers improve the overall sentiment analysis process and perform further analysis on outlier opinions to understand them better and avoid their negative impact. The model consists of three modules: a data preprocessing module, an opinion mining module, and an outlier opinion detection module. It uses a lexicon-based approach to extract the sentiment polarity of each review in the dataset, and a distance-based outlier detection algorithm to produce a graded list of review holders with outlier opinions. An experimental study evaluates the proposed model, and the results demonstrate its ability to detect outlier opinions in product reviews effectively. The model can be adapted to fields other than product reviews by customizing its modules’ layers.
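The pipeline the abstract describes, lexicon-based polarity extraction followed by distance-based outlier grading, can be sketched roughly as below. This is a hedged illustration only: the toy lexicon, the sample reviews and the k-nearest-neighbour distance score are hypothetical stand-ins, not the paper's actual MOoM modules.

```python
# Illustrative sketch: lexicon-based polarity + distance-based outlier grading.
# TOY_LEXICON and the sample reviews are hypothetical.
TOY_LEXICON = {"good": 1, "great": 2, "love": 2, "bad": -1, "awful": -2, "hate": -2}

def polarity(review):
    """Lexicon-based polarity: mean score of the opinion words found."""
    hits = [TOY_LEXICON[w] for w in review.lower().split() if w in TOY_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def outlier_scores(polarities, k=2):
    """Distance-based outlier score: mean distance to the k nearest neighbours."""
    scores = []
    for i, p in enumerate(polarities):
        dists = sorted(abs(p - q) for j, q in enumerate(polarities) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

reviews = ["good phone love it", "great battery good screen",
           "good value great camera", "awful hate this bad phone"]
scores = outlier_scores([polarity(r) for r in reviews])
# Graded list of review indices, most outlying opinion first.
graded = sorted(range(len(reviews)), key=lambda i: scores[i], reverse=True)
```

Here the lone negative review gets the largest score and tops the graded list, mirroring the "graded list of review holders with outlier opinions" the model produces.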

Author 1: Neama Hassan
Author 2: Laila A. Abd-Elmegid
Author 3: Yehia K. Helmy

Keywords: Opinion mining; sentiment analysis; anomaly detection; outliers; reviews; text analysis; natural language processing; RapidMiner

PDF

Paper 23: Coronavirus Social Engineering Attacks: Issues and Recommendations

Abstract: During the current coronavirus pandemic, cybercriminals are exploiting people’s anxieties to steal confidential information, distribute malicious software, and launch ransomware and other social engineering attacks. The number of social engineering attacks increases day by day because people fail to recognize them. Therefore, there is an urgent need for solutions that help people understand social engineering attacks and techniques. This paper helps individuals and industry by reviewing the most common coronavirus social engineering attacks and providing recommendations for responding to such attacks. It also discusses the psychology behind social engineering and introduces security awareness as a solution to reduce the risk of these attacks.

Author 1: Ahmed Alzahrani

Keywords: Social engineering; coronavirus; COVID-19; phishing; vishing; smishing; scams; working remotely; cybersecurity; security awareness; human security behavior

PDF

Paper 24: Hybrid based Energy Efficient Cluster Head Selection using Camel Series Elephant Herding Optimization Algorithm in WSN

Abstract: The rapid growth in wireless technology is enabling a variety of advances in wireless sensor networks (WSNs). By providing sensing capabilities and efficient wireless communication, WSNs are becoming an important factor in day-to-day life, with many commercial, industrial and telecommunication applications. Maximizing network lifespan is a primary objective in WSNs because the sensor nodes are powered by non-rechargeable batteries. The main challenges in WSNs are coverage area, network lifetime and data aggregation. Balanced node clustering is the foremost strategy for extending the lifetime of the entire network by aggregating the sensed information at the cluster head. Recent research suggests meta-heuristic algorithms for the intelligent selection of ideal Cluster Heads (CHs). Existing Cluster Head Selection (CHS) algorithms suffer from inconsistent exploration-exploitation trade-offs and global-best search constraints. In this research, a novel Camel series Elephant Herding Optimization (CSEHO) algorithm is proposed, which enhances the random behaviour of the Camel algorithm with the Elephant Herding Optimization algorithm for optimal CHS. The Camel algorithm imitates the itinerant scavenging behaviour of a camel in the desert. Its visibility-monitoring condition improves exploitation efficiency, whereas its exploration inefficiency is compensated optimally by the Elephant Herding Optimization operators (clan updating and separating). The superior performance of the proposed CSEHO algorithm is validated by comparison with various existing CHS algorithms: overall, CSEHO performs 21.01%, 31.21%, 44.08%, 67.51%, and 85.66% better than EHO, CA, PSO, LEACH, and DT, respectively.

Author 1: N. Lavanya
Author 2: T. Shankar

Keywords: Camel algorithm; Cluster Head Selection; Elephant Herding Optimization; meta-heuristic algorithm; network lifespan; wireless sensor network

PDF

Paper 25: Indexed Metrics for Link Prediction in Graph Analytics

Abstract: With the explosive growth of the Internet and the desire to harness the value of the information it contains, the prediction of possible links (relationships) between key players in social networks based on graph-theory principles has garnered great attention in recent years. Consequently, many fields of scientific research have converged in the development of graph analysis techniques to examine the structure of social networks with a very large number of users. However, the relationship between persons within the social network may not be evident when the data-capture process is incomplete or a relationship may have not yet developed between participants who will establish some form of actual interaction in the future. As such, the link-prediction metrics for certain social networks such as criminal networks, which tend to have highly inaccurate data records, may need to incorporate additional circumstantial factors (metadata) to improve their predictive accuracy. One of the key difficulties in link-prediction methods is extracting the structural attributes necessary for the classification of links. In this research, we analysed a few key structural attributes of a network-oriented dataset based on proposed social network analysis (SNA) metrics for the development of link-prediction models. By combining structural features and metadata, the objective of this research was to develop a prediction model that leverages the deep reinforcement learning (DRL) classification technique to predict links/edges even on relatively small-scale datasets, which can constrain the ability to train supervised machine-learning models that have adequate predictive accuracy.

Author 1: Marcus Lim
Author 2: Azween Abdullah
Author 3: NZ Jhanjhi
Author 4: Mahadevan Supramaniam

Keywords: Link prediction; social network analysis; criminal network; deep reinforcement learning

PDF

Paper 26: Radar GPR Application to Explore and Study Archaeological Sites: Case Study

Abstract: Exploring and searching for archaeological sites is very important for a greater knowledge of the history of ancient nations and peoples. Recently, Ground Penetrating Radar (GPR) technology has emerged as a way to detect buried objects and study them at depths of up to tens of meters. This work applies the technique, using a 500 MHz antenna, to study and explore several archaeological sites, and the study proved effective and successful. The data obtained were processed with the Reflexw software.

Author 1: Ahmed Faize
Author 2: Gamil Alsharahi
Author 3: Mohammed Hamdaoui

Keywords: Archaeological sites; exploring; ground penetrating radar; processing data

PDF

Paper 27: Generalized Approach to Analysis of Multifractal Properties from Short Time Series

Abstract: The paper considers a generalized approach to time series multifractal analysis. The focus of the research is the correct estimation of multifractal characteristics from short time series. Based on numerical modeling and estimation, the main advantages and disadvantages of the sample fractal characteristics obtained by three methods, multifractal detrended fluctuation analysis, the wavelet transform modulus maxima method, and multifractal analysis using the discrete wavelet transform, are studied. The generalized Hurst exponent was chosen as the basic characteristic for comparing the accuracy of the methods. A test statistic for determining the monofractal properties of a time series using multifractal detrended fluctuation analysis is proposed. A generalized approach to estimating the multifractal characteristics of short time series is developed, and practical recommendations for its implementation are proposed. A significant part of the study is devoted to practical applications of fractal analysis, and the proposed approach is illustrated by examples of multifractal analysis of various real fractal time series.

Author 1: Lyudmyla Kirichenko
Author 2: Abed Saif Ahmed Alghawli
Author 3: Tamara Radivilova

Keywords: Fractal time series; multifractal analysis; estimation of multifractal characteristics; generalized Hurst exponent; practical applications of fractal analysis

PDF

Paper 28: Play-Centric Designing of a Serious Game Prototype for Low Vision Children

Abstract: Currently, with the advancement of Information and Communications Technology (ICT), the gaming industry has become one of the fastest growing industries. This trend has led to the development of serious games as an alternative tool for creating effective learning experiences. However, most educational applications such as serious games rely mainly on visuals such as graphics and animations, which pose challenges for low vision children. Visually impaired users, especially children with low vision, face difficulty using these applications because they have problems seeing highly visual game elements. Accessibility refers to how accessible a piece of software or an application is to disabled users, and several accessibility aspects should be considered when designing user interfaces for children with low vision. Thus, games designed to fulfill their needs are required. The challenge of serious game design, however, is to consider not only users’ accessibility needs but also playability, so that visually impaired children can enjoy playing regardless of their disabilities. This paper presents a study on designing a low fidelity serious game prototype for low vision children using a play-centric design approach, focusing on playability, to obtain feedback from low vision children. Based on the users’ feedback, the game prototype will be refined to improve the game design.

Author 1: Nurul Izzah Othman
Author 2: Nor Azan Mat Zin
Author 3: Hazura Mohamed

Keywords: Serious game; play-centric design; accessibility; low vision; low fidelity prototype

PDF

Paper 29: Principal Component Analysis on Morphological Variability of Critical Success Factors for Enterprise Resource Planning

Abstract: The concept of critical success factors (CSFs) has been widely used as a measure to tackle the hurdles associated with numerous implementations of enterprise resource planning (ERP) systems. This study evaluates the morphological variability of CSFs using the analytical principal component analysis technique to identify principal components (PCs) that can be adopted for a successful ERP system implementation. A dataset of 205 CSFs from 127 different studies was evaluated for morphological variability. According to the results, 66 PCs were identified and ranked accordingly. The first 49 PCs, with eigenvalues greater than 1, accounted for 89.67% of the recorded variability, and the first 6 PCs accounted for cumulative variations of 13.67%, 19.37%, 24.67%, 29.41%, 33.52% and 36.94%, respectively. In general, the graphical illustration of the results shows a palpable division between the taxonomic groups for 3 PCs.

Author 1: Ayogeboh Epizitone
Author 2: Oludayo. O. Olugbara

Keywords: Enterprise resources; morphological variability; principal component; resource planning; success factor

PDF

Paper 30: A Comprehensive Science Mapping Analysis of Textual Emotion Mining in Online Social Networks

Abstract: Textual Emotion Mining (TEM) tackles the problem of analyzing text in terms of the emotions it expresses or evokes. It encompasses a series of approaches, methods, and tools that help in understanding human emotions, an understanding that plays a pivotal role in developing systems that meet human needs. This area has drawn significant interest from researchers worldwide. This article carries out a science mapping analysis of the TEM literature indexed in the Web of Science (WoS) to provide quantitative and qualitative insight into TEM research. To explain the evolution of mainstream contents, various bibliometric indicators and metrics are used to identify annual publication counts, authorship patterns, and the performance of countries/regions and institutes. To supplement this, several types of network analysis are also performed: co-citation analysis, co-occurrence analysis, bibliographic coupling, and co-authorship pattern analysis. Additionally, a fairly comprehensive manual analysis of the top-cited and most-used journal and proceedings papers is conducted to understand the growth and evolution of the domain. To the authors’ knowledge, this manuscript provides the first thorough investigation of TEM’s research status through a bibliometric examination of scientific publications. The recorded results will allow TEM researchers to uncover growth patterns, seek collaborations, improve the selection of research topics, and gain a holistic view of aggregate progress in the domain. The presented facts and analysis of TEM will help the research community carry out future studies.

Author 1: Shivangi Chawla
Author 2: Monica Mehrotra

Keywords: Emotion mining; emotion models; bibliometric analysis; science mapping analysis; co-citation analysis; network analysis

PDF

Paper 31: Prediction of Heart Diseases (PHDs) based on Multi-Classifiers

Abstract: At present, the number of articles on Heart Disease Detection (HDD) based on classification returned by the Google Scholar search engine exceeds 17,000. The medical sector is one of the most important fields to benefit from machine learning. Heart diseases (HDs) are considered the leading cause of death worldwide, and it is difficult for doctors to predict them early, so HDD is highly needed. Today, the health sector holds huge amounts of data containing hidden information that can be essential for diagnostic decisions. In this paper, a new diagnostic model for the detection of HDs is based on a multi-classifier applied to a heart disease dataset consisting of 270 instances and 13 attributes. Our multi-classifier is composed of Artificial Neural Network (ANN), Naïve Bayes (NB), J48, and REPTree classifiers, and selects the most accurate of them. In addition, the most predictive features are determined by applying feature selection with the “GainRatioAttributeEval” technique and the "Ranker" method on the full training set. Experimental results show that the NB classifier is the best, and our model yields over 85% accuracy using the WEKA tool.
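The core selection step of such a multi-classifier, scoring several candidate classifiers on held-out data and keeping the most accurate, can be sketched as below. This is a minimal illustration in which hypothetical toy threshold classifiers stand in for ANN, NB, J48 and REPTree; it is not the paper's WEKA pipeline.

```python
# Toy sketch: evaluate several candidate classifiers on held-out data
# and keep the most accurate one. The threshold classifiers and the
# validation data below are hypothetical stand-ins.
def accuracy(clf, X, y):
    """Fraction of validation samples the classifier labels correctly."""
    return sum(clf(x) == t for x, t in zip(X, y)) / len(y)

def select_best(classifiers, X_val, y_val):
    """Return (name, accuracy) of the most accurate candidate."""
    scored = {n: accuracy(clf, X_val, y_val) for n, clf in classifiers.items()}
    best = max(scored, key=scored.get)
    return best, scored[best]

# Hypothetical validation data: single feature, binary label.
X_val = [0.2, 0.4, 0.6, 0.8, 0.9]
y_val = [0, 0, 1, 1, 1]
candidates = {
    "low_threshold":  lambda x: int(x > 0.3),   # mislabels 0.4
    "mid_threshold":  lambda x: int(x > 0.5),   # correct on all five samples
    "high_threshold": lambda x: int(x > 0.85),  # misses 0.6 and 0.8
}
name, acc = select_best(candidates, X_val, y_val)
```

The same pattern extends directly to real learners trained and cross-validated in a toolkit such as WEKA or scikit-learn.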

Author 1: Amirah Al Shammari
Author 2: Haneen al Hadeaf
Author 3: Hedia Zardi

Keywords: Classification; diseases; heart-attack; multi-classifier; heart disease detection

PDF

Paper 32: A New Approach to Predicting Learner Performance with Reduced Forgetting

Abstract: Work on predicting learner performance allows researchers, through machine learning methods, to participate in the improvement of e-learning. This improvement gradually allows e-learning to be promoted and adopted by educational structures around the world. Neural networks, widely used in various performance prediction works, have achieved several successes. However, some factors that are highly influential in the field of learning have not been explored in machine learning models. For this reason, our study attempts to show the importance of the forgetting factor in the learning system and thus to contribute to improving the accuracy of performance predictions, the interest being to draw the attention of researchers in this field to very influential factors that remain unexploited. Our model takes the forgetting factor into account in neural networks, with the objective of showing the importance of attenuating forgetting for the quality of performance predictions in e-learning. Our model is compared to those based on Random Forest and linear regression algorithms. The results of our study first show that neural networks (95.20%) perform better than Random Forest (95.15%) and linear regression (93.80%). Then, with the attenuation of forgetting, these algorithms give 96.63%, 95.85% and 93.80%, respectively. This work demonstrates the great relevance of forgetting in neural networks; exploring other unexploited factors will thus yield better performance prediction models.

Author 1: Dagou Dangui Augustin Sylvain Legrand KOFFI
Author 2: Tchimou N’TAKPE
Author 3: Assohoun ADJE
Author 4: Souleymane OUMTANAGA

Keywords: Performance prediction; e-learning; artificial neural networks; forgetting factor

PDF

Paper 33: Genetic Algorithm with Comprehensive Sequential Constructive Crossover for the Travelling Salesman Problem

Abstract: The travelling salesman problem (TSP) is a famous NP-hard problem in operations research as well as in computer science. Several genetic algorithms (GAs), which depend primarily on the crossover operator, have been developed to solve the problem. Crossover operators are classified as distance-based or blind: distance-based crossover operators use the distances between nodes to generate the offspring, whereas blind crossover operators are independent of any problem information apart from the problem's constraints. Selecting a better crossover operator can lead to a more successful GA. Several crossover operators are available in the literature for the TSP, but most of them do not lead to good GAs. In this study, we propose the reverse greedy sequential constructive crossover (RGSCX) and then the comprehensive sequential constructive crossover (CSCX) for developing better GAs for the TSP. The usefulness of the proposed crossover operators is shown by comparison with some distance-based crossover operators on several TSPLIB instances. The comparative study shows that the proposed operator CSCX is the best crossover in this study for the TSP.
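For readers unfamiliar with this family of operators, the classic sequential constructive crossover idea on which RGSCX and CSCX build can be sketched as follows: starting from a city, repeatedly take from each parent the next not-yet-visited city after the current one, and keep the nearer of the two candidates. This simplified sketch with a hypothetical distance matrix illustrates only the base idea, not the paper's new variants.

```python
# Simplified sequential constructive crossover (SCX) sketch for the TSP.
def scx(p1, p2, dist):
    """Build one offspring tour from parent tours p1, p2 using distances dist."""
    n = len(p1)
    current = p1[0]
    child, visited = [current], {current}
    while len(child) < n:
        candidates = []
        for parent in (p1, p2):
            i = parent.index(current)
            # First legitimate (unvisited) city after `current`, wrapping around.
            for step in range(1, n):
                c = parent[(i + step) % n]
                if c not in visited:
                    candidates.append(c)
                    break
        # Keep the candidate nearer to the current city.
        current = min(candidates, key=lambda c: dist[current][c])
        child.append(current)
        visited.add(current)
    return child

# Hypothetical 5-city instance with cities on a line: dist[i][j] = |i - j|.
dist = [[abs(i - j) for j in range(5)] for i in range(5)]
offspring = scx([0, 2, 4, 1, 3], [0, 1, 2, 3, 4], dist)
```

Because the nearer candidate is chosen at every step, the offspring tends to inherit short edges from both parents while always remaining a valid tour.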

Author 1: Zakir Hussain Ahmed

Keywords: Genetic algorithm; reverse greedy sequential constructive crossover; comprehensive sequential constructive crossover; travelling salesman problem; NP-hard

PDF

Paper 34: A Workflow Scheduling Algorithm for Reducing Data Transfers in Cloud IaaS

Abstract: Cloud IaaS makes it easy to obtain homogeneous multi-core machines (whether "bare metal" machines or virtual machines), each of which can have high-performance SSD disks for input/output. This makes it possible to distribute the files produced during the execution of a workflow across different machines in order to minimize the overhead associated with transferring these files. In this paper, we propose a scheduling algorithm called WSRDT (Workflow Scheduling Reducing Data Transfers), whose purpose is to minimize the makespan (execution time) of data-intensive workflows by reducing data transfers between dependent tasks over the network. Intermediate files produced by tasks are stored locally on the disk of the machine where the tasks were executed. We experimentally verify that increasing the number of cores per machine reduces the overhead due to data transfers on the network. Experiments with a real workflow show the advantages of the presented algorithm: data-driven scheduling significantly reduces the execution time and the volume of data transferred over the network, and our approach outperforms one of the best state-of-the-art algorithms, which we adapted to our hypotheses.

Author 1: Jean Edgard GNIMASSOUN
Author 2: Tchimou N’TAKPE
Author 3: Gokou Hervé Fabrice DIEDIE
Author 4: Souleymane OUMTANAGA

Keywords: Workflow scheduling; makespan reduction; multi-cores virtual machine; data-intensive workflows; IaaS cloud

PDF

Paper 35: Minimization of Spectrum Fragmentation for Improvement of the Quality of Service in Multifiber Elastic Optical Networks

Abstract: Internet data traffic has grown considerably in recent decades. In view of this exponential and dynamic growth, elastic optical networks are emerging as a promising solution for today's flexibly allocated bandwidth transmission technologies. The setup and release of dynamic connections with different spectrum bandwidths and data rates leads over time to spectrum fragmentation in the network. Single-fiber elastic optical networks thus face the problem of optical spectrum fragmentation, which refers to small, isolated, non-aligned spectrum segments, a critical issue for elastic optical network researchers. With the advent of multifiber networks, this fragmentation ratio has become more pronounced, resulting in a high blocking ratio in multifiber elastic optical networks. In this paper, we propose a new routing and spectrum allocation algorithm to minimize fragmentation in multifiber elastic optical networks. In the first step, we define one virtual topology per fiber; for each virtual topology, the k shortest paths are determined to find candidate paths between the source and the destination, according to the minimization of a proposed parameter called allocation cost. In the second step, we apply the resource allocation algorithm, followed by the choice of the optimal path with minimum energy cost. Blocking probability and spectrum utilization are used to evaluate the performance of our algorithm. Simulation results show the effectiveness of our proposed approach and algorithm.

Author 1: Boris Stephane ZOUNEME
Author 2: Georges Nogbou ANOH
Author 3: Souleymane OUMTANAGA

Keywords: Routing and spectrum allocation; fragmentation; multifiber elastic network; quality of service

PDF

Paper 36: The Effect of Firm’s Size on Corporate Performance

Abstract: The purpose of this study is to determine the effect of firm size on the financial performance of 55 manufacturing-sector companies listed on the Indonesia Stock Exchange. The data analysis is conducted using R Studio software, applying panel data analysis with a random-effects model. The results of this study are that (1) firm size has no effect on financial performance as proxied by return on assets, and (2) firm size has no effect on financial performance as proxied by market-to-book value.

Author 1: Meiryani
Author 2: Olivia
Author 3: Jajat Sudrajat
Author 4: Zaidi Mat Daud

Keywords: Firm size; financial performance; return on assets; market to book value

PDF

Paper 37: Image Captioning using Deep Learning: A Systematic Literature Review

Abstract: Automatic image captioning is the process of generating captions or textual descriptions for images based on their contents. It is a machine learning task that involves both natural language processing (for text generation) and computer vision (for understanding image contents). Automatic image captioning is a recent and growing research problem, and various new methods are continually being introduced to achieve satisfactory results in the field. However, much attention is still required to achieve results as good as a human's. This study aims to find out, in a systematic way, which recent deep learning methods and models are used for image captioning, how those models are implemented, and which methods are more likely to give good results. To do so, we performed a systematic literature review of studies from 2017 to 2019 in well-known databases (Scopus, Web of Science, IEEE Xplore), finding a total of 61 primary studies relevant to the objective of this research. We found that CNNs are used to understand image contents and detect objects in an image, while an RNN or LSTM is used for language generation. The most commonly used datasets are MS COCO (used in all studies), Flickr8k and Flickr30k, and the most commonly used evaluation metric is BLEU (1 to 4), used in all studies. It was also found that LSTM with CNN outperformed RNN with CNN. The two most promising implementation methods are the encoder-decoder architecture and the attention mechanism, and combining them can improve results to a good scale. This research provides guidelines and recommendations to researchers who want to contribute to automatic image captioning.

Author 1: Murk Chohan
Author 2: Adil Khan
Author 3: Muhammad Saleem Mahar
Author 4: Saif Hassan
Author 5: Abdul Ghafoor
Author 6: Mehmood Khan

Keywords: Image Captioning; Deep Learning; Neural Network; Recurrent Neural Network (RNN); Convolution Neural Network (CNN); Long Short Term Memory (LSTM)

PDF

Paper 38: A Trait-based Deep Learning Automated Essay Scoring System with Adaptive Feedback

Abstract: Numerous Automated Essay Scoring (AES) systems have been developed over the past years. Recent advances in deep learning have shown that applying neural network approaches to AES systems achieves state-of-the-art results. Most neural-based AES systems assign an overall score to a given essay, even when they depend on analytical rubrics/traits. Trait evaluation/scoring helps to identify learners’ levels of performance, and providing feedback to learners about their writing performance is as important as assessing their level. Producing adaptive feedback for learners requires identifying strengths/weaknesses and the magnitude of influence of each trait. In this paper, we develop a framework that strengthens the validity and enhances the accuracy of a baseline neural-based AES model with respect to trait evaluation/scoring. We extend the model with a method based on essay trait prediction to give trait-specific adaptive feedback. We explored multiple deep learning models for the automatic essay scoring task and performed several analyses to obtain indicators from these models. The results show that the Long Short-Term Memory (LSTM) based system outperformed the baseline study by 4.6% in terms of quadratic weighted Kappa (QWK). Moreover, predicting the trait scores enhances the prediction of the overall score. Our extended model is used in iAssistant, an educational module that provides trait-specific adaptive feedback to learners.

Author 1: Mohamed A. Hussein
Author 2: Hesham A. Hassan
Author 3: Mohammad Nassef

Keywords: AES system; trait evaluation; adaptive feedback; deep learning; neural networks; ASAP

PDF

Paper 39: Mutual Coexistence in WBANs: Impact of Modulation Schemes of the IEEE 802.15.6 Standard

Abstract: Due to the mobility of subjects carrying wireless Body Area Networks (WBANs), a BAN may find itself in an environment containing other adjacent BANs, which may affect its proper functioning. The purpose of this paper is to study the effect of interference between adjacent BANs on the performance of a reference BAN in terms of packet loss rate (PLR), while considering four parameters: the distance separating adjacent BANs, the number of nodes and traffic payload of an interfering BAN, and the transmission data rate. The study is conducted for the two modulation schemes proposed by the IEEE 802.15.6 standard in the 2.4 GHz narrow band: Differential Binary Phase Shift Keying (DBPSK) and Differential Quadrature Phase Shift Keying (DQPSK). Simulation results show that adopting a lower-order modulation such as DBPSK can reduce the effect of interference among adjacent BANs.

Author 1: Marwa BOUMAIZ
Author 2: Mohammed EL GHAZI
Author 3: Mohammed FATTAH
Author 4: Anas BOUAYAD
Author 5: Moulhime EL BEKKALI

Keywords: Body Area Network (BAN); mutual coexistence; interference; Differential Binary Phase Shift Keying (DBPSK); Differential Quadrature Phase Shift Keying (DQPSK)

PDF

Paper 40: Ensemble Machine Learning Model for Higher Learning Scholarship Award Decisions

Abstract: The role of higher learning in Malaysia is to ensure high quality educational ecosystems that develop individual potential to fulfill the national aspiration. To implement this role successfully, scholarship offers are an important part of the strategic plan. Given the increasing number of undergraduate students every year, the government must apply a systematic strategy to manage scholarship offers and ensure that recipients are selected effectively. Predictive models have been shown to be effective for this purpose. In this paper, an ensemble knowledge model is proposed to support the scholarship award decisions made by the organization. It generates a list of eligible candidates, reducing the human error and time taken to select eligible candidates manually. Two ensemble approaches are presented: ensembles of models, and ensembles of rule-based knowledge. The ensemble learning techniques of boosting, bagging, voting and a rule-based ensemble technique, together with five base learner algorithms, J48, Support Vector Machine (SVM), Artificial Neural Network (ANN), Naïve Bayes (NB) and Random Tree (RT), are used to develop the model. A total of 87,000 scholarship application records are used in the modelling process. The results on accuracy, precision, recall and F-measure show that the ensemble voting technique gives the best accuracy of 86.9% compared to the other techniques. This study also explores the rules obtained from the rule-based models J48 and Apriori, and selects the best rules to develop an ensemble rule-based model, which improves the classification model for scholarship awards.

Author 1: Wirawati Dewi Ahmad
Author 2: Azuraliza Abu Bakar

Keywords: Scholarship classification; ensemble learning; rules-based classification; rules-based ensemble

PDF

Paper 41: Where is the Highest Rate of Children with Anemia in Peru? An Answer using Grey Systems

Abstract: Anemia, or chronic malnutrition, in children under 5 is a social problem that crucially affects a child's growth and cognitive development. However, the problem presents itself differently in each department of Peru. Thus, in this work, the grey clustering methodology, which is based on grey systems theory, was applied. The case study covered the 25 departments of Peru, analysing which departments are most affected by chronic malnutrition in children under 5 years of age. The study found that the departments of Cajamarca, Huancavelica, Loreto and Pasco have the highest rates of children with anemia, which may be due to greater poverty or negligence with respect to proper food handling in these departments. The results could help local authorities such as the Ministry of Health to combat malnutrition, and can also serve as a basis for future studies evaluating the social impact of other health conditions from a mathematical perspective.
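The grey clustering step can be sketched roughly as below using centre-point triangular whitenization weight functions (CTWF): each grey class has a centre, each observation gets a triangular membership weight per class, and it is assigned to the class with the largest weight. The class centres and rate values here are hypothetical illustrations, not the paper's actual data.

```python
# Hedged sketch of grey clustering with centre-point triangular
# whitenization weight functions (CTWF). Centres and inputs are hypothetical.
def ctwf(x, centers):
    """Triangular whitenization weights of x for each class centre."""
    # Extend the centre list with mirrored endpoints so every class
    # has left and right neighbours.
    ext = [2 * centers[0] - centers[1]] + list(centers) + [2 * centers[-1] - centers[-2]]
    weights = []
    for k in range(1, len(ext) - 1):
        lo, mid, hi = ext[k - 1], ext[k], ext[k + 1]
        if lo < x <= mid:
            weights.append((x - lo) / (mid - lo))   # rising edge of the triangle
        elif mid < x < hi:
            weights.append((hi - x) / (hi - mid))   # falling edge of the triangle
        else:
            weights.append(0.0)
    return weights

def classify(x, centers, labels):
    """Assign x to the grey class with the largest whitenization weight."""
    w = ctwf(x, centers)
    return labels[max(range(len(w)), key=w.__getitem__)]

# Hypothetical anemia-rate grey classes (percent): low/medium/high centres.
centers = [20.0, 35.0, 50.0]
labels = ["low", "medium", "high"]
```

For example, a hypothetical department rate of 46% falls nearest the "high" centre and would be assigned to the "high" class.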

Author 1: Alexi Delgado
Author 2: Nicolly Perez Ccancce

Keywords: Anemia; CTWF; grey systems; grey clustering

PDF

Paper 42: Mapping UML Sequence Diagram into the Web Ontology Language OWL

Abstract: In this paper, we propose a new mapping technique from the OMG's UML modeling language into the Web Ontology Language (OWL) to serve the Semantic Web. UML (Unified Modeling Language) is widely accepted and used as a standardized modeling language in the Object-Oriented Analysis (OOA) and Design (OOD) approach by domain experts to model real-world objects in software development. On the other hand, the conceptualization represented in OWL is designed to process the content of information rather than merely present it; the matter of migrating UML to OWL is therefore becoming an active research domain. OWL is a Semantic Web language designed for defining ontologies on the Web, where an ontology is a formal specification of the naming and definition of shared data. Our technique describes how to map UML models into OWL while keeping the semantics of UML sequence diagrams, such as messages, the sequence of messages and guard invariants, so that the data of UML sequence diagrams become machine-readable.

Author 1: Mo’men Elsayed
Author 2: Nermeen Elkashef
Author 3: Yasser F.Hassan

Keywords: Mapping; Unified Modeling Language; UML; sequence diagram; ontology; Web Ontology Language; OWL

PDF

Paper 43: Intelligent Fleet Management System for Open Pit Mine

Abstract: Fleet management systems are currently used to coordinate mobility and delivery services in a wide range of areas. However, their traditional control architecture becomes a critical bottleneck in open and dynamic environments, where scalability and autonomy are key success factors. In this article, we propose an intelligent, distributed fleet management system architecture for an open pit mine that allows mining vehicles to be controlled in real time according to users' requirements. Enriched by an intelligence layer made possible by high-performance artificial intelligence algorithms, a reliable and efficient perception mechanism based on Internet of Things technologies, and a smart, integrated decision system that improves the system's agility and its response to user requirements, our architecture presents numerous contributions to the domain. These contributions enable the fleet management system to meet the interoperability and autonomy requirements of the most widely used standards in the field, such as ISA 95.

Author 1: Hajar BNOUACHIR
Author 2: Meryiem CHERGUI
Author 3: Nadia MACHKOUR
Author 4: Mourad ZEGRARI
Author 5: Aziza CHAKIR
Author 6: Laurent DESHAYES
Author 7: Atae SEMMAR
Author 8: Hicham MEDROMI

Keywords: Fleet management system; open pit mine; monitoring; architectures; artificial intelligence; real time system

PDF

Paper 44: A Human Gait Recognition Against Information Theft in Smartphone using Residual Convolutional Neural Network

Abstract: Continuous authentication, one of the most rapidly emerging biometric applications, identifies the genuine user of a smartphone and prevents information theft. Biometrics is the recognition of a person by analysing physiological or behavioural attributes; physiological qualities used in biometric validation frameworks include iris recognition, fingerprints, and palm and face geometry. With existing entry-point authentication techniques, confidential information can be lost to internal attacks while the genuine user of the smartphone is being identified. Therefore, this research designs a biometric validation framework that distinguishes an authorized user by recognizing gait. To identify unauthorized smartphone access, human gait recognition is carried out with a Residual Convolutional Neural Network (RCNN). The proposed architecture secures the end user’s personal information on the smartphone and provides a better defence against unauthorized access. The performance of the RCNN method is compared with an existing Deep Neural Network (DNN) in terms of classification accuracy. The simulation results show that the RCNN method achieves 98.15% accuracy on the OU-ISIR dataset, whereas the DNN achieves 95.67%.

Author 1: Gogineni Krishna Chaitanya
Author 2: Krovi Raja Sekhar

Keywords: Authentication; biometric analysis; genuine user; information loss; residual convolutional neural network; smartphone

PDF
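The abstract above does not detail the RCNN architecture; the defining ingredient of residual learning is the identity shortcut, which lets gradients bypass a transformation. A plain NumPy sketch (layer sizes and weights here are illustrative, not taken from the paper):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """A minimal residual unit: y = relu(x @ w1) @ w2 + x.
    The input is added back through an identity shortcut, so the
    block only has to learn a residual correction to x."""
    return relu(x @ w1) @ w2 + x

# Toy gait feature vector passed through one residual unit.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
```

With zero weights the block reduces exactly to the identity, which is what makes very deep stacks of such units trainable.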

Paper 45: The Cuckoo Feature Filtration Method for Intrusion Detection (Cuckoo-ID)

Abstract: Intrusion Detection Systems (IDSs) play a crucial role in keeping online systems secure from attacks. However, these systems usually face the challenge of handling and analyzing a vast volume of data in order to achieve intrusion detection. Feature filtration overcomes this challenge by focusing on the characteristic network features that play a significant role in enabling these systems to achieve high detection rates. This paper presents an intelligent cuckoo feature filtration method intended to prune away insignificant network features. An IDS (Cuckoo-ID) is then designed in which an eXtended Classifier System (XCS) uses the filtered features to improve the rate of detection of network intrusions. Thus, the main objective of Cuckoo-ID is to maximize the detection rate (DR) and minimize the false alarm rate (FAR). Experiments were run on the KDDCup’99 dataset to test the intrusion detection (ID) efficiency of the proposed system. The results showed that cuckoo filtration substantially raises the ID rate of the entire system. Finally, the DR and FAR of Cuckoo-ID were compared with those of intrusion detection methods that depend on network feature filtration.

Author 1: Wafa Alsharafat

Keywords: Cuckoo algorithm; feature filtration; intrusion detection; XCS; detection rate

PDF

Paper 46: Segmentation of Fuzzy Enhanced Mammogram Mass Images by using K-Mean Clustering and Region Growing

Abstract: Over the past two decades, specialists have suggested various methods to support radiologists in the identification and classification of mammogram images. In this paper, we propose segmentation of digital mammogram images using k-means clustering and region growing, to help specialists and radiologists locate cancerous areas with computer-aided techniques. The proposed work has two stages. In the pre-processing stage, a median filter removes unwanted salt-and-pepper noise, and the fuzzy intensification operator (INT) is then applied to enhance the contrast of the input images. In the second stage, the fuzzy-enhanced image serves as input for k-means clustering, and region growing is then applied to the clustered image to partition the mammogram into homogeneous areas according to pixel intensity. For the experiment, we used the mini-MAIS dataset. The experimental results show that the proposed strategy achieves higher precision.

Author 1: Nidhi Singh
Author 2: S. Veenadhari

Keywords: INT operator; feature extraction; k-mean clustering; mammogram; median filter; segmentation

PDF
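The fuzzy intensification (INT) operator mentioned in the abstract has a standard closed form that pushes membership values away from 0.5, increasing contrast. A NumPy sketch (the normalization of 8-bit pixels to [0, 1] by 255 is an assumption, not stated in the abstract):

```python
import numpy as np

def fuzzy_int(mu):
    """Fuzzy intensification (INT) operator on membership values in [0, 1]:
    mu' = 2*mu^2            for mu <= 0.5
    mu' = 1 - 2*(1 - mu)^2  for mu >  0.5
    Values below 0.5 are darkened, values above 0.5 are brightened."""
    mu = np.asarray(mu, dtype=float)
    return np.where(mu <= 0.5, 2.0 * mu**2, 1.0 - 2.0 * (1.0 - mu)**2)

def enhance(img_u8):
    """Normalize an 8-bit image to [0, 1], intensify, and rescale."""
    mu = img_u8 / 255.0
    return (fuzzy_int(mu) * 255.0).astype(np.uint8)
```

Applying `fuzzy_int` repeatedly drives the image toward a binary appearance, which is why a single pass is typically used before clustering.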

Paper 47: IoT Technology for Facilities Management: Understanding End user Perception of the Smart Toilet

Abstract: The Internet of Things (IoT) plays an important role as an emerging technology. IoT platforms enable electronic devices to collect, process, and monitor various types of data. The Smart Toilet system featured in this paper is based on IoT technology; it is designed for resource optimization in facility management services and for bringing convenience to individual end users. This paper presents a study of individual end-user perception of the Smart Toilet. A total of 124 respondents participated in the study’s online survey, and statistical data analysis methods were used to analyse the data. The results indicate user perception of the proposed Smart Toilet and the Smart Toilet app.

Author 1: Venushini Raendran
Author 2: R. Kanesaraj Ramasamy
Author 3: Intan Soraya Rosdi
Author 4: Ruzanna Ab Razak
Author 5: Nurazlin Mohd Fauzi

Keywords: IoT system; facilities management; smart toilet; resource optimization; user acceptance; theory of planned behaviour

PDF

Paper 48: New Learning Approach for Unsupervised Neural Networks Model with Application to Agriculture Field

Abstract: An accurate, lower-cost hybrid machine learning algorithm combining the Kohonen Self-Organizing Map (SOM) with the Gram-Schmidt (GSHM) algorithm is proposed, to enhance crop yield prediction and increase agricultural production. The combination of GSHM and SOM extracts the most informative components of the data by overcoming correlation between input data prior to the training process. The improved hybrid algorithm was first trained on data with a correlation problem and compared with another hybrid model based on SOM and Principal Component Analysis (PCA); it was then trained using selected soil and atmospheric parameters (pH, nitrogen, phosphate, potassium, depth, temperature, and rainfall), and a comparative study with the standard SOM was conducted. Applied to correlated data, the improved Kohonen Self-Organizing Map demonstrated better classification accuracy (8/8) and speed (0.015 s) than SOM combined with PCA (7/8 accuracy, 97.828 s). Moreover, the proposed algorithm produced better crop-prediction results, with a maximum iteration number of 675, mean error ≤ 0.00022, and runtime of 18.422 s, versus 729 iterations, mean error ≤ 0.000916, and runtime of 23.707 s for the standard SOM. The proposed algorithm allowed us to overcome correlation issues and to improve classification, the learning process, and speed, with the potential to be applied to predicting crop yield in the agricultural field.

Author 1: Belattar Sara
Author 2: Abdoun Otman
Author 3: El khatir Haimoudi

Keywords: Kohonen-self organizing map; gram-schmidt algorithm; principal component analysis; agriculture field; crop yield prediction

PDF
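Gram-Schmidt orthogonalization, used in the paper above to decorrelate inputs before SOM training, can be sketched as follows (classical variant; how exactly the paper couples it to the SOM is not specified in the abstract):

```python
import numpy as np

def gram_schmidt(X, tol=1e-12):
    """Classical Gram-Schmidt: orthonormalize the rows of X so that
    correlated input components are decorrelated before training.
    Linearly dependent rows are dropped."""
    Q = []
    for v in X.astype(float):
        # Subtract projections onto the already-built orthonormal basis.
        w = v - sum((v @ q) * q for q in Q)
        n = np.linalg.norm(w)
        if n > tol:
            Q.append(w / n)
    return np.array(Q)

X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],   # correlated with the first row
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(X)
```

After this step the rows of `Q` are mutually orthogonal unit vectors, so no component of the training data repeats information already carried by another.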

Paper 49: A Secured Large Heterogeneous HPC Cluster System using Massive Parallel Programming Model with Accelerated GPUs

Abstract: High Performance Computing (HPC) architectures are expected to deliver the first ExaFlops computer. Such an Exascale system will be capable of performing ExaFlops of computation every second, a thousand-fold increase over current Petascale systems. Current technologies face several challenges in approaching such an extreme computing scale. It has been anticipated that billion-way parallelism will be exploited in a secure Exascale-level system that provides massive performance under predefined constraints such as the number of processing cores and power consumption. Key strategic elements are therefore required to develop a secure, energy-efficient ExaFlops-level system. This study proposes a non-blocking, overlapping, GPU-computation-based tri-hybrid model (OpenMP, CUDA, and MPI) that provides massive parallelism across different granularity levels. We implemented three different message-passing strategies and performed the experiments on the Aziz Fujitsu PRIMERGY CX400 supercomputer. A comprehensive experimental study was conducted to validate the performance and energy efficiency of our model. The experimental investigation shows that the EPC could be considered an initial and leading model for achieving massive performance through an efficient scheme for Exascale computing systems.

Author 1: Khalid Alsubhi

Keywords: High Performance Computing (HPC); MPI; OpenMP; CUDA; supercomputing systems

PDF

Paper 50: Metaheuristic for the Capacitated Multiple Traveling Repairman Problem

Abstract: The Capacitated Multiple Traveling Repairman Problem (CmTRP) is an extension of the Multiple Traveling Repairman Problem (mTRP). In the CmTRP, a number of vehicles are dispatched to serve a set of customers; each vehicle’s capacity is limited by a predefined value, and each customer is visited exactly once. The goal is to find a set of tours that minimizes the sum of waiting times. The problem is NP-hard because it is harder than the mTRP; even finding a feasible solution is itself NP-hard. To solve medium- and large-size instances, a metaheuristic algorithm is proposed. The first phase constructs a feasible solution by combining Nearest Neighborhood Search (NNS) with Variable Neighborhood Search (VNS), while the optimization phase improves the feasible solution with General Variable Neighborhood Search (GVNS). The combination maintains the balance between intensification and diversification needed to escape local optima. The proposed algorithm is evaluated on benchmark instances from the literature. The results indicate that it obtains good feasible solutions in a short time, even for cases with up to 200 vertices.

Author 1: Khanh-Phuong Nguyen
Author 2: Ha-Bang Ban

Keywords: CmTRP; NNS; VNS; GVNS

PDF
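The CmTRP objective, the sum of customer waiting times across all repairman tours, differs from the usual travel-distance objective: a customer's cost is the time elapsed before the vehicle reaches it. A sketch of the objective (the depot index 0 and the distance matrix are illustrative assumptions):

```python
def total_waiting_time(tours, dist):
    """Sum of customer waiting times over all repairman tours.
    Each tour starts at depot 0; a customer's waiting time is the
    travel time accumulated before the vehicle reaches it."""
    total = 0
    for tour in tours:
        t, prev = 0, 0          # elapsed time, previous vertex
        for v in tour:
            t += dist[prev][v]
            total += t          # waiting time of customer v
            prev = v
    return total

dist = [[0, 2, 9, 4],
        [2, 0, 6, 3],
        [9, 6, 0, 5],
        [4, 3, 5, 0]]
# Two vehicles: one serves customers 1 then 2, the other serves 3.
cost = total_waiting_time([[1, 2], [3]], dist)  # 2 + (2 + 6) + 4 = 14
```

Because early legs of a tour are counted once per downstream customer, this objective rewards serving near customers first, which is why the nearest-neighbour construction is a natural starting heuristic.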

Paper 51: An Efficient Methodology for Water Supply Pipeline Risk Index Prediction for Avoiding Accidental Losses

Abstract: Accidents affecting buildings and other human facilities due to poor water supply pipeline systems are random phenomena, but an efficient estimation system can help prevent them. Such a system can assist caretakers in taking preventive measures to avoid accidents, or at least reduce the associated risk. In this paper, we target this issue by proposing a water supply pipeline risk estimation methodology using a feed-forward backpropagation neural network (FFBPNN). For validation and performance evaluation, real data on water supply pipelines collected in Seoul, Republic of Korea, from 1987 to 2010 are used. A comprehensive analysis is performed with both original and pre-processed input data. Pre-processing consists of two steps: data normalization and computation of statistical moments (mean, variance, kurtosis, and skewness). Significant improvement in prediction accuracy is observed with data pre-processing in terms of the selected performance metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean squared error (RMSE).

Author 1: Muhammad Shuaib Qureshi
Author 2: Ayman Aljarbouh
Author 3: Muhammad Fayaz
Author 4: Muhammad Bilal Qureshi
Author 5: Wali Khan Mashwani
Author 6: Junaid Khan

Keywords: Neural networks; normalization; risk index; mean square error; statistical moments

PDF
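The two pre-processing steps named in the abstract, normalization and statistical-moment computation, can be sketched as below (the exact normalization used by the paper is not stated, so min-max scaling is an assumption):

```python
import numpy as np

def moment_features(x):
    """Mean, variance, skewness, and kurtosis of a sample --
    the four moment features used as pre-processed inputs."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s = x.std()                       # population standard deviation
    var = s**2
    skew = ((x - m)**3).mean() / s**3  # third standardized moment
    kurt = ((x - m)**4).mean() / s**4  # fourth standardized moment
    return m, var, skew, kurt

def min_max_normalize(x):
    """Scale a feature vector to the [0, 1] range."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())
```

Summarizing each record with its moments compresses raw pipeline measurements into a fixed-length, scale-aware description that the FFBPNN can learn from more easily.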

Paper 52: Modeling of Coronavirus Behavior to Predict its Spread

Abstract: With the increasing presence and spread of infectious diseases, and their fatalities in densely populated areas, many academics and societies have become interested in discovering new ways to predict how these diseases spread. Such tools help them to better plan for and contain outbreaks in small provinces, and thus reduce the loss of human lives. Several cases of pneumonia of unknown cause occurred in Wuhan, Hubei, China, in December 2019, with clinical presentations closely resembling viral pneumonia. In-depth analyses of sequencing from lower respiratory tract samples discovered a novel coronavirus, called the 2019 novel coronavirus (2019-nCoV). Recent events have shown how easily a coronavirus can take root and spread, as such viruses are transmitted easily between persons. To cope with these infections, we apply a time series forecasting model in this paper to predict possible coronavirus events. The forecasting model applied is SIR, and the results of the implemented models are compared with the actual data.

Author 1: Shakir Khan
Author 2: Amani Alfaifi

Keywords: COVID-19; coronavirus; SIR model; data mining; R Software; forecasting model

PDF
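The SIR compartmental model named in the abstract splits a population into Susceptible, Infected, and Recovered groups and can be integrated with a simple forward-Euler scheme. A minimal sketch (the parameter values below are illustrative, not the paper's fitted values):

```python
def sir_forecast(S0, I0, R0, beta, gamma, days, dt=0.1):
    """Forward-Euler integration of the SIR model:
       dS/dt = -beta*S*I/N
       dI/dt =  beta*S*I/N - gamma*I
       dR/dt =  gamma*I
    beta is the transmission rate, gamma the recovery rate."""
    S, I, R = float(S0), float(I0), float(R0)
    N = S + I + R
    out = [(S, I, R)]
    for _ in range(int(days / dt)):
        new_inf = beta * S * I / N * dt
        new_rec = gamma * I * dt
        S -= new_inf
        I += new_inf - new_rec
        R += new_rec
        out.append((S, I, R))
    return out

# 10,000 people, 10 initially infected, basic reproduction
# number beta/gamma = 2.5 (illustrative values only).
traj = sir_forecast(S0=9990, I0=10, R0=0, beta=0.5, gamma=0.2, days=120)
```

The trajectory shows the characteristic epidemic curve: infections rise to a peak and then decline as the susceptible pool is depleted, while S + I + R stays constant.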

Paper 53: Automatic Segmentation of Hindi Speech into Syllable-Like Units

Abstract: To develop a high-quality Text-to-Speech (TTS) system, appropriate segmentation of continuous speech into syllabic units plays an important role. This work implements an automatic syllable-based speech segmentation technique for continuous Hindi speech. The experiments were conducted using the energy convex hull approach on clean, continuous Hindi speech. In this method, a Savitzky-Golay filter is applied to the short-term energy (STE) signal to increase the signal-to-noise ratio (SNR), followed by a median filter to preserve the boundaries while smoothing the energy curve. A Hamming sliding window is also applied twice to the speech signal to obtain a more accurate depth for the convex hull valleys. The algorithm was tested on 50 unique utterances chosen from the travel domain. The accuracy of the proposed algorithm was calculated, and 76.07% of syllables have a time error of less than 30 ms with respect to a manual segmentation reference. The proposed algorithm was also analyzed against the existing group delay segmentation technique and gives better segmentation accuracy for fricative and nasal sounds. The syllable-based segmented database is suitable for Hindi speech technology systems in the travel domain.

Author 1: Ruchika Kumari
Author 2: Amita Dev
Author 3: Ashwani Kumar

Keywords: Database; short term energy; convex hull; speech segmentation; syllable

PDF
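The core signal chain, Hamming-windowed short-term energy followed by smoothing that preserves the valleys marking syllable boundaries, can be sketched in NumPy as below (frame and filter lengths are illustrative; the paper's Savitzky-Golay stage is omitted here and only the median smoothing is shown):

```python
import numpy as np

def short_term_energy(signal, frame_len=256, hop=128):
    """Hamming-windowed short-term energy (STE) of a speech signal."""
    win = np.hamming(frame_len)
    n = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.array([np.sum((signal[i*hop:i*hop+frame_len] * win)**2)
                     for i in range(n)])

def median_smooth(x, k=5):
    """Median filter (odd k): smooths the energy curve while
    preserving the valleys used as syllable boundaries."""
    pad = k // 2
    xp = np.pad(x, pad, mode='edge')
    return np.array([np.median(xp[i:i+k]) for i in range(len(x))])

# Toy signal at 8 kHz: two 'syllables' (tone bursts) separated by
# 200 ms of silence; the STE valley marks the boundary region.
t = np.arange(0, 1.0, 1 / 8000)
sig = np.sin(2 * np.pi * 200 * t) * ((t < 0.4) | (t > 0.6))
ste = median_smooth(short_term_energy(sig))
```

Boundary candidates are then the local minima of `ste`; the convex-hull step in the paper decides which minima are deep enough to count.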

Paper 54: Identification and Assessment of the Specific Absorption Rate (SAR) Generated by the most used Telephone in Peru in 2017

Abstract: According to the World Health Organization (WHO), an estimated 5 to 10% of the population is electrosensitive, and excessive or prolonged exposure to electromagnetic waves can damage health. The electromagnetic radiation generated by wireless mobile telephony now pervades our daily lives, as there are reportedly more than 5 billion cell phone users. Each country establishes its own national standards on exposure to electromagnetic fields; Peru lacks such standards, and the standards it relies on have not been revised since 1996. This contribution seeks to identify the Specific Absorption Rate (SAR) generated by the most used mobile phone in Peru in 2017, using the ComoSAR measuring system in the GSM (Global System for Mobile communications) band at a frequency of 900 MHz. The results evaluate the behavior of electromagnetic waves as they affect tissues emulating body density and dielectric constant. The maximum SAR values recorded in the measurement were 0.05 W/kg for a 1 g cube and 0.02 W/kg for a 10 g cube, while the average values obtained were 0.046 W/kg for a 1 g cube and 0.019 W/kg for a 10 g cube. The SAR values measured under the conditions of the experiment are below the limits indicated by the US and European SAR standards.

Author 1: Natalia Indira Vargas-Cuentas
Author 2: Mark Clemente-Arenas
Author 3: Avid Roman-Gonzalez
Author 4: Roxana Moran-Morales

Keywords: Electromagnetic radiation; SAR; ComoSAR; GSM

PDF

Paper 55: Multi Focus Image Fusion using Image Enhancement Techniques with Wavelet Transformation

Abstract: Multi-focus image fusion produces a unification of multiple images with different areas in focus, each containing necessary and detailed information. This paper proposes a novel pre-processing step in the image fusion pipeline, in which sharpening techniques are applied before fusion. The article proposes multi-focus hybrid techniques for fusion based on image enhancement, which helps to identify key features and minor details; fusion is then performed on the enhanced images. For image enhancement, we introduce a new hybrid sharpening method that combines a Laplacian Filter (LF) with the Discrete Fourier Transform (DFT), and we also perform sharpening using the unsharp-masking approach. Fusion is then performed with the Stationary Wavelet Transform (SWT) to fuse the enhanced images and obtain more detail in the resultant image. The proposed approach is applied to two image sets, i.e., the “planes” and “clocks” image sets. The quality of the output image is evaluated using both qualitative and quantitative approaches, with four well-known quantitative metrics used to assess the performance of the novel technique. The experimental results show that the novel methods produce efficient, improved outcomes for multi-focus image fusion: SWT (LF+DFT) and SWT (Unsharp Mask) are, respectively, 2.6%, 1.8% and 0.62%, 0.61% better than the best baseline measure, i.e., plain SWT, in terms of RMSE (Root Mean Square Error) for the two image sets.

Author 1: Sarwar Shah Khan
Author 2: Muzammil Khan
Author 3: Yasser Alharbi

Keywords: Multi focus image fusion; image enhancement; unsharp masking; Laplacian Filter (LF); Stationary Wavelet Transforms (SWT); frequency domain technique

PDF
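Unsharp masking, one of the two sharpening routes described in the abstract, adds the high-frequency detail (the image minus its blur) back onto the image. A NumPy sketch, using a simple box blur as a stand-in for the smoothing kernel (the paper's exact kernel and amount are not stated):

```python
import numpy as np

def box_blur(img, k=3):
    """Mean blur with edge padding (stand-in for a Gaussian blur)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, amount=1.0):
    """Unsharp masking: sharpened = img + amount * (img - blur(img)),
    clipped back to the valid 8-bit range."""
    detail = img.astype(float) - box_blur(img)
    return np.clip(img + amount * detail, 0, 255)
```

On a step edge the detail term overshoots on both sides of the edge, which is exactly the contrast boost that makes focused regions easier to separate before SWT fusion.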

Paper 56: Multiclass Pattern Recognition of Facial Images using Correlation Filters

Abstract: Pattern recognition comes naturally to humans, and there are many pattern recognition tasks which humans perform admirably well. However, human pattern recognition cannot compete with machine speed when the number of classes to be recognized becomes tremendously large. In this paper, we analyze the effectiveness of correlation filters for pattern classification problems, using the Distance Classifier Correlation Filter (DCCF) for pattern classification of facial images. Two essential qualities of a correlation filter are distortion tolerance and discrimination ability. DCCF transposes the feature space in such a way that images belonging to the same class get closer while images from different classes move farther apart, thereby increasing both the distortion tolerance and the discrimination ability. The results obtained demonstrate the effectiveness of the approach for face recognition applications.

Author 1: Nisha Chandran S
Author 2: Charu Negi
Author 3: Poonam Verma

Keywords: Pattern recognition; correlation filter; multiclass recognition

PDF

Paper 57: Analytical Comparison Between the Information Gain and Gini Index using Historical Geographical Data

Abstract: The historical geographical data of Kashmir province are spread across two disparate files with the attributes Maximum Temperature, Minimum Temperature, Humidity measured at 12 A.M., Humidity measured at 3 P.M., and rainfall, besides auxiliary parameters like date and year. The parameters Maximum Temperature, Minimum Temperature, and Humidity measured at 12 A.M. and 3 P.M. are continuous in nature. In this study, we apply Information Gain and the Gini Index to these attributes to convert continuous data into discrete values, and thereafter compare and evaluate the generated results. Of the four attributes, two have the same results for Information Gain and the Gini Index, one attribute has overlapping results, and only one attribute has conflicting results. Subsequently, the continuous-valued attributes are converted into discrete values using the Gini Index, irrelevant attributes are discarded, and auxiliary attributes are labeled accordingly. Consequently, the dataset is ready for the application of machine learning (decision tree) algorithms.

Author 1: Majid Zaman
Author 2: Sameer Kaul
Author 3: Muheet Ahmed

Keywords: Geographical data mining; information gain; Gini index; machine learning; decision tree

PDF
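Both criteria compared in this paper score a candidate discretization threshold by the impurity of the resulting split: Information Gain uses Shannon entropy, the Gini Index uses Gini impurity. A minimal sketch (binary split only; the paper's full discretization procedure is not specified in the abstract):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini impurity of a label list."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_score(values, labels, threshold, impurity):
    """Weighted impurity after a binary split at `threshold`.
    The best threshold minimizes this score (equivalently, maximizes
    information gain when `impurity` is entropy)."""
    left = [l for v, l in zip(values, labels) if v <= threshold]
    right = [l for v, l in zip(values, labels) if v > threshold]
    n = len(labels)
    return ((len(left) / n) * impurity(left) +
            (len(right) / n) * impurity(right))
```

The two criteria usually rank thresholds similarly, which is consistent with the paper's finding that they agree on most attributes and conflict on only one.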

Paper 58: Disparity of Stereo Images by Self-Adaptive Algorithm

Abstract: This paper introduces a new searching method, named the “Self-Adaptive Algorithm (SAA)”, for computing the stereo correspondence, or disparity, of a stereo image pair. The key idea relies on the previous search result, which increases searching speed by reducing the search zone and avoiding false matching. In the proposed method, the stereo matching search range is selected dynamically until the best match is found. The searching range -dmax to +dmax is divided into two regions: -dmax to 0 and 0 to +dmax. To determine the correspondence of a pixel of the reference (left) image, the window costs of the right image are computed either for the -dmax to 0 region or for the 0 to +dmax region, depending only on the matching pixel position. The region in which the window costs are computed is selected automatically by the proposed algorithm based on the previous matching record; thus the searching range is reduced by 50% in every iteration. The algorithm is able to infer the upcoming candidate pixel position from the intensity value of the reference pixel. The proposed approach therefore improves window-cost calculation by avoiding false matching in the right image and reduces the search range as well. The proposed method was compared with state-of-the-art methods evaluated on the Middlebury standard stereo dataset, and SAA outperforms the latest methods in terms of both speed and gain, with no degradation of accuracy.

Author 1: Md. Abdul Mannan Mondal
Author 2: Mohammad Haider Ali

Keywords: Stereo correspondence; stereo matching; window cost; adaptive search; disparity; sum of absolute differences

PDF
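The window cost referred to in the abstract is the sum of absolute differences (SAD, listed in the keywords) between a small window in the left image and a disparity-shifted window in the right image. A sketch of SAD matching over a one-sided disparity range (the toy images and the exhaustive baseline search are illustrative; SAA itself halves this range using the previous match):

```python
import numpy as np

def sad_cost(left, right, y, x, d, w=1):
    """Sum of absolute differences between a (2w+1)x(2w+1) window
    around (y, x) in the left image and the window shifted left by
    disparity d in the right image."""
    a = left[y-w:y+w+1, x-w:x+w+1].astype(float)
    b = right[y-w:y+w+1, x-d-w:x-d+w+1].astype(float)
    return np.abs(a - b).sum()

def best_disparity(left, right, y, x, d_max, w=1):
    """Exhaustive winner-takes-all search over 0..d_max."""
    costs = [sad_cost(left, right, y, x, d, w) for d in range(d_max + 1)]
    return int(np.argmin(costs))

# Toy pair: the right image is the left image shifted 2 pixels to the
# left, so the true disparity everywhere (away from the border) is 2.
left = np.tile(np.arange(12), (7, 1)) * 10
right = np.roll(left, -2, axis=1)
```

SAA's saving comes from replacing the full `range(d_max + 1)` loop with only the half-range (negative or positive) predicted by the previous pixel's match.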

Paper 59: Natural Language Processing based Anomalous System Call Sequences Detection with Virtual Memory Introspection

Abstract: Malware has become a significant problem for computer security in this scientific era. Machine learning techniques are now applied to find anomalous activities in computers, especially in virtualization environments; identifying anomalous activities in virtual machines with a virtual memory introspector and analyzing the data with machine learning techniques is a pressing current need. In this paper, an anomaly detection method based on Natural Language Processing (NLP) over Bags of System Calls (BoSC) is implemented to learn the behavior of applications on Windows virtual machines running on the Xen hypervisor. System call traces are extracted from normal applications (benign processes) and malware-affected applications (malicious processes) with the help of virtual memory introspection. The extracted system call sequences are pre-processed through filtering and ordering of redundant system calls to obtain valid sequences. The behavior of the system call sequences is then analyzed with NLP-based anomaly detection techniques: a Cosine Similarity Algorithm (Co-Sim) is applied to identify malicious processes running on a VM, and a Point Detection Algorithm is applied to precisely locate the point of compromise in a system call sequence. The results shown in this paper indicate that both algorithms detect anomalies in running processes with 99% accuracy.

Author 1: Suresh K. Peddoju
Author 2: Himanshu Upadhyay
Author 3: Jayesh Soni
Author 4: Nagarajan Prabakar

Keywords: System call sequence; anomaly detection; natural language processing; memory forensics; cosine similarity

PDF
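A bag-of-system-calls representation compared with cosine similarity can be sketched as below (the vocabulary, traces, and the 0.5 decision threshold are illustrative, not taken from the paper):

```python
import math
from collections import Counter

def bag_of_syscalls(trace, vocab):
    """Frequency vector of system calls over a fixed vocabulary."""
    counts = Counter(trace)
    return [counts.get(s, 0) for s in vocab]

def cosine_similarity(u, v):
    """Cosine of the angle between two count vectors; 1.0 means the
    same call-frequency profile, near 0 means unrelated behavior."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

VOCAB = ['open', 'read', 'write', 'close', 'mmap', 'connect']
normal = bag_of_syscalls(['open', 'read', 'read', 'write', 'close'], VOCAB)
suspect = bag_of_syscalls(['connect', 'mmap', 'mmap', 'connect', 'write'], VOCAB)
sim = cosine_similarity(normal, suspect)
flagged = sim < 0.5   # below the (assumed) threshold -> anomalous
```

The point-detection step described in the abstract would then slide a window over the suspect trace and report the first position where the windowed similarity drops below the threshold.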

Paper 60: Parkinson’s Disease Diagnosis using Spiral Test on Digital Tablets

Abstract: For a proper diagnosis, Parkinson's disease (PD) requires frequent visits to the doctor for physical tests, placing a large burden on the patient. Since PD impairs handwriting ability, handwriting patterns can be used as an indicator for PD diagnosis; more specifically, the Static Spiral Test (SST) and the Dynamic Spiral Test (DST), which consist of retracing spirals using a digital pen. Such an exam can be self-conducted by the patient, making it convenient and time-saving for both the patient and the medical staff. In this project, we designed and implemented a system that automatically self-aid-diagnoses PD using the SST and DST on digital tablets. The system includes two main components: image processing techniques to pre-process the drawings and extract appropriate visual features, and machine learning techniques to recognize PD automatically. The conducted experiment showed that the semi-local Edge Histogram Descriptor extracted from the DST drawing and conveyed to a Gaussian-kernel Support Vector Machine outperforms the other considered systems, with accuracy, specificity, and sensitivity around 90%.

Author 1: Najd Al-Yousef
Author 2: Raghad Al-Saikhan
Author 3: Reema Al-Gowaifly
Author 4: Reem Al-Abdullatif
Author 5: Felwa Al-Mutairi
Author 6: Ouiem Bchir

Keywords: Component; Parkinson's disease (PD); computer-aided diagnosis; pattern recognition

PDF

Paper 61: Still Image-based Human Activity Recognition with Deep Representations and Residual Learning

Abstract: Recognizing human activity in a scene is still a challenging and important research area in the field of computer vision, due to its many possible applications, including autonomous driving, biomedicine, and machine vision. Recently, deep learning techniques have emerged and successfully deployed models for image recognition and classification, object detection, and speech recognition; owing to their promising results, state-of-the-art deep learning techniques have replaced traditional ones. In this paper, a novel method is presented for human activity recognition based on a pre-trained Convolutional Neural Network (CNN) used as a feature extractor, with the deep representations followed by a Support Vector Machine (SVM) classifier for action recognition. It has been observed that CNN knowledge previously learnt from a large-scale dataset can be transferred to the activity recognition task with limited training data. The proposed method is evaluated on the publicly available Stanford40 human action dataset, which includes 40 classes of actions and 9532 images. The comparative experimental results show that the proposed method achieves better performance than conventional methods in terms of accuracy and computational cost.

Author 1: Ahsan Raza Siyal
Author 2: Zuhaibuddin Bhutto
Author 3: Syed Muhammad Shehram Shah
Author 4: Azhar Iqbal
Author 5: Faraz Mehmood
Author 6: Ayaz Hussain
Author 7: Saleem Ahmed

Keywords: Human activity recognition; action recognition; deep learning; transfer learning; residual learning

PDF

Paper 62: Design and Experimental Analysis of Touchless Interactive Mirror using Raspberry Pi

Abstract: A prototype of a smart gesture-controlled mirror with enhanced interactivity is proposed and designed in this paper. Through hand gestures, the mirror provides basic amenities such as the time, news, and weather. The designed system uses a Pi camera for image acquisition to perform gesture recognition, and an ultrasonic sensor for presence detection. The paper also presents an experimental analysis of human gesture interaction using parameters such as the forearm angle, which ranges from 0 to 37 degrees when tilting the forearm up and down and from 0 to 15 degrees when twisting the forearm from right to left and back, against yellow, pink, and white background colours. Additionally, the detection range of the ultrasonic sensor is restricted to the active region of 69.6 to 112.5 degrees. The system takes about half a second to retrieve the time and 6 to 10 seconds to fetch headlines and weather information from the internet. These analyses are taken into account to subsequently improve the design algorithm of the gesture-controlled smart mirror. The developed framework comprises three different defined gestures, under which the mirror displays the corresponding information on its screen.

Author 1: Abdullah Memon
Author 2: Syed Mudassir Kazmi
Author 3: Attiya Baqai
Author 4: Fahim Aziz Umrani

Keywords: Smart Mirror; Raspberry Pi; Pi-camera; Application Programing Interface; Hue Saturation Value; Region of Interest

PDF

Paper 63: An Enhanced Distance Vector-Hop Algorithm using New Weighted Location Method for Wireless Sensor Networks

Abstract: Localization is an indispensable component of a Wireless Sensor Network (WSN), since when events happen, we need to know their location. The Distance Vector-Hop (DV-Hop) technique is a popular range-free localization algorithm due to its cost efficiency and simple process. Nevertheless, it suffers from poor accuracy and is highly influenced by network topology; in particular, more hop counts lead to larger errors. Moreover, in its final phase, least squares is employed to solve a nonlinear equation, which introduces further location error. To address the problems mentioned above, an enhanced DV-Hop algorithm based on a weighted factor, together with a new weighted least squares location technique, is proposed in this paper and called WND-DV-Hop. First, the one-hop count of an unknown node is corrected by employing received signal strength indication (RSSI) technology. Next, to reduce the average hop-distance error, a weighted coefficient based on beacon node hop counts is constructed, and a new weighted least squares method is embedded to solve the nonlinear equation problem. Finally, extensive experiments were carried out to evaluate the performance of WND-DV-Hop, comparing the outcomes with the state-of-the-art DV-Hop, IDV-Hop, Checkout-DV-Hop, and New-DV-Hop algorithms from the literature. The empirical findings demonstrate that WND-DV-Hop significantly outperforms the other localization algorithms.

Author 1: Fengrong Han
Author 2: Izzeldin Ibrahim Mohamed Abdelaziz
Author 3: Xinni Liu
Author 4: Kamarul Hawari Ghazali

Keywords: Wireless Sensor Network (WSN); localization algorithm; range-free; distance vector-hop (DV-Hop) localization algorithm

PDF
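The weighted least squares step in DV-Hop-style localization solves range equations that are first linearized against a reference beacon. A generic sketch (beacon positions, distances, and the unit weights are illustrative; the paper derives its weights from hop counts):

```python
import numpy as np

def wls_locate(beacons, dists, weights):
    """Weighted least-squares position estimate.  Linearize the range
    equations ||p - b_i||^2 = d_i^2 against the last beacon b_n:
        2*(b_n - b_i) . p = d_i^2 - d_n^2 - ||b_i||^2 + ||b_n||^2
    for each i < n, then solve the weighted system A p = b."""
    bx = np.asarray(beacons, dtype=float)
    d = np.asarray(dists, dtype=float)
    A = 2.0 * (bx[-1] - bx[:-1])
    r2 = (bx**2).sum(axis=1)
    b = d[:-1]**2 - d[-1]**2 - r2[:-1] + r2[-1]
    W = np.diag(weights[:-1])
    # Weighted least squares via sqrt(W)-scaled ordinary least squares.
    p, *_ = np.linalg.lstsq(np.sqrt(W) @ A, np.sqrt(W) @ b, rcond=None)
    return p

beacons = [(0, 0), (10, 0), (0, 10), (10, 10)]
true = np.array([3.0, 4.0])
dists = [np.hypot(*(true - np.array(bc))) for bc in beacons]
est = wls_locate(beacons, dists, weights=[1, 1, 1, 1])
```

With exact distances the estimate recovers the true position; with hop-estimated distances, larger weights on low-hop-count beacons (as in WND-DV-Hop) pull the solution toward the more trustworthy equations.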

Paper 64: Early Forest Fire Detection System using Wireless Sensor Network and Deep Learning

Abstract: Global warming mechanically increases the risk of fires starting, so the number of forest fires is increasing and will continue to increase. To better support firefighters on the ground, we present in this work a system for early detection of forest fires. This system is more precise than traditional surveillance approaches such as lookout towers and satellite surveillance. The proposed system is based on collecting environmental data from a wireless sensor network in the forest and predicting the occurrence of a forest fire using artificial intelligence, more particularly Deep Learning (DL) models. The system, built on the concept of the Internet of Things (IoT), combines a Low Power Wide Area Network (LPWAN), fixed or mobile sensors, and a suitable deep learning model. Several deep learning models were evaluated and compared, which enabled us to show the feasibility of an autonomous, real-time environmental monitoring platform for the dynamic risk factors of forest fires.

Author 1: Wiame Benzekri
Author 2: Ali El Moussati
Author 3: Omar Moussaoui
Author 4: Mohammed Berrajaa

Keywords: Forest fire detection; wireless sensor network; deep learning; internet of things; low power wide area network

PDF

Paper 65: A Process Model Collection with Structural Variants and Evaluations

Abstract: Business Process Management Systems (BPMS) allow organizations to build process model repositories that maintain the flow of operations in the form of various process models. Business process models are virtual models that can imitate the actual activities of an organization. Searching for semantically similar activities between pairs of process models in a repository is known as Process Model Matching (PMM). Over the past few years, PMM has been gaining momentum due to its wide range of applications, such as process model integration, process model clone detection, and process model knowledge discovery. Different PMM techniques have been applied to the available process model repositories, but these repositories contain a limited number of process models. Another notable aspect of PMM is that existing techniques have not achieved the desired results, which calls the effectiveness of process model repositories into question. To address this problem, the authors of this study developed a substantial, diverse, and carefully constructed process model collection. This collection is compared with the existing SAP collection to highlight its significance and superiority. Furthermore, the proposed collection represents structural variations of example process models that are governed by a defined set of rules. To reflect the structural variations between the process models of our collection, existing structural similarity approaches such as structural metrics and graph edit distance were applied using a custom-developed tool. Our process model collection is freely available to the research community and can be used to build new PMM techniques and to assess existing ones.

Author 1: Rimsha Anam
Author 2: Tahir Muhammad

Keywords: Business process modeling; process model collection; Process Model Matching (PMM); structural variants

PDF

Paper 66: A Gamification Experience and Virtual Reality in Teaching Astronomy in Basic Education

Abstract: Regardless of the country, one trend holds: the world of school and the modern world are two different poles. Young people see school as boring compared to the entertainment offered by today's technology; most students prefer to play or surf the internet rather than study. Gamification is projected as a methodological practice that aims to turn classrooms into scenarios of playful immersion, using participatory strategies supported by electronic devices. This article presents the results obtained by applying gamification techniques in a research project aimed at supporting astronomy learning for basic education students. When using the app, the student must overcome challenges to earn different achievements and rewards. The results highlight the students' motivation during the learning process and their perceived satisfaction with the personal achievements attained.

Author 1: Norka Bedregal-Alpaca
Author 2: Olha Sharhorodska
Author 3: Luis Jiménez-Gonzáles
Author 4: Robert Arce-Apaza

Keywords: Gamification; game-based learning; reward system; student motivation

PDF

Paper 67: Customer Churn Prediction Model and Identifying Features to Increase Customer Retention based on User Generated Content

Abstract: Customer churn is a problem for most companies because it affects a company's revenue when a customer switches from one service provider to another in the telecom sector. To solve this problem we take two main approaches: the first is identifying the main factors that affect customer churn; the second is detecting the customers that have a high probability of churning by analyzing social media. For the first approach, we build a dataset through practical questionnaires and analyze it using machine learning algorithms such as Deep Learning, Logistic Regression, and Naïve Bayes. The second approach is a customer churn prediction model that analyzes customers' opinions through their user-generated content (UGC), such as comments, posts, messages, and product or service reviews. To analyze the UGC we use sentiment analysis to find the text polarity (negative/positive). The results show that the algorithms achieved the same accuracy but differ in the ranking of attributes according to their weights in the decision.
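To illustrate the polarity step, a minimal lexicon-based sentiment scorer might look as follows. The word lists are invented placeholders; real systems, including presumably the one in this paper, use much larger lexicons or trained models.

```python
# hypothetical minimal lexicon; real systems use trained models
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "angry"}

def polarity(text: str) -> float:
    """Score in [-1, 1]: (positive hits - negative hits) / total hits."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)
```

A churn pipeline would aggregate such scores over a customer's posts and feed them, with other features, into the classifier.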

Author 1: Essam Abou el Kassem
Author 2: Shereen Ali Hussein
Author 3: Alaa Mostafa Abdelrahman
Author 4: Fahad Kamal Alsheref

Keywords: Customer churn; telecom sector; churn prediction; sentiment analysis; machine learning; customer retention

PDF

Paper 68: Oscillation Preventing Closed-Loop Controllers via Genetic Algorithm for Biped Walking on Flat and Inclined Surfaces

Abstract: In this study, a closed-loop controller is designed to overcome the dynamical insufficiency of the 3D Linear Inverted Pendulum Model (LIPM) via the Genetic Algorithm (GA). The main idea is to keep using the 3D LIPM, with a closed-loop controller, because of its ease of modeling. While suppressing the dynamical flaws only the legs are used; in other words, a robot without any upper-body elements is used, to obtain a more modular robot. For this purpose, the biped is modeled with the 3D LIPM, one of the most popular modeling methods for humanoid robots thanks to its ease of modeling and fast calculations during trajectory planning. After obtaining the simple model, Model Predictive Control (MPC) is applied to the 3D LIPM to find reference trajectories for the biped while satisfying the Zero Moment Point (ZMP) criterion. The reference trajectories found were applied to the full dynamical model in Matlab Simulink and to the real biped in the laboratory at Istanbul Technical University. From the simulation results on flat and inclined surfaces and real-time experiments on a flat surface, some dynamical flaws were observed due to the simple modeling. To overcome these flaws a Proportional-Integral (PI) controller is designed, and the optimal values of the controller gains are found by the GA. The results assert that the designed controller overcomes the observed flaws and makes the biped's motion more stable and smoother, without steady-state error.
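The corrective loop can be illustrated with a discrete PI controller driving a generic first-order plant; the gains and plant below are arbitrary stand-ins for illustration, not the paper's GA-tuned values or biped dynamics.

```python
class PIController:
    """Discrete PI control law: u[k] = Kp*e[k] + Ki * Ts * sum(e)."""
    def __init__(self, kp, ki, ts):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.ts
        return self.kp * error + self.ki * self.integral

# drive a generic first-order plant x' = -x + u to a setpoint;
# the integral term is what removes the steady-state error
ctrl = PIController(kp=2.0, ki=5.0, ts=0.01)
x, setpoint = 0.0, 1.0
for _ in range(5000):
    u = ctrl.step(setpoint - x)
    x += ctrl.ts * (-x + u)   # Euler step of the plant
```

In the paper's setting, the GA searches over (Kp, Ki) pairs to minimize the observed tracking flaws of the 3D LIPM trajectories.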

Author 1: Sabri Yilmaz
Author 2: Metin Gokasan
Author 3: Seta Bogosyan

Keywords: Humanoid robot; biped walking; Model Predictive Control (MPC); Genetic Algorithm (GA); trajectory planning; Zero Moment Point (ZMP); linear inverted pendulum

PDF

Paper 69: Using Fuzzy c-Means for Weighting Different Fuzzy Cognitive Maps

Abstract: Complex socio-ecological problems increasingly prevail, and uncertainty often dominates these domains. To better represent such problems, there is an urgent need to engage a wide range of stakeholder perspectives, regardless of the stakeholders' levels of expertise and knowledge, and to combine these perspectives appropriately into a comprehensive and reasonable problem representation. The fuzzy cognitive map (FCM) has proven to be a powerful and useful soft computing approach for addressing and representing such problem domains: each relevant stakeholder can represent a perspective in the form of an FCM. Stakeholders normally have different levels of knowledge and hence produce different representations (FCMs), so these FCMs should be weighted appropriately before the combination process. This paper uses the fuzzy c-means clustering technique to assign different weights to different FCMs according to their importance in representing the problem. First, fuzzy c-means computes the membership values of the FCMs in the selected clusters, based on FCM similarities that show how convergent and consistent they are. From these membership values, the clusters' importance values are calculated: a cluster with high membership values from all FCMs is a cluster with a high importance value, and vice versa. Next, the importance values of the FCMs are derived from the importance values of the clusters by considering how much each FCM's memberships contribute to the clusters. Finally, the FCMs' importance values are used to assign them weight values, which are used when they are combined. The suitability of the proposed method is investigated using a real dataset that includes an appropriate number of FCMs collected from different stakeholders.
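The membership computation at the heart of this pipeline is the standard fuzzy c-means update, sketched below for fixed cluster centers. This is the generic formula, not the paper's FCM-similarity features or importance derivation.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Standard fuzzy c-means membership matrix for points X given
    cluster centers: u[i, k] = 1 / sum_j (d_ik / d_jk)^(2/(m-1)),
    where d_ik is the distance from point i to center k."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.fmax(d, 1e-12)            # guard against division by zero
    ratio = d[:, :, None] / d[:, None, :]
    return 1.0 / (ratio ** (2.0 / (m - 1))).sum(axis=2)
```

Each row sums to 1, so a point (here, an FCM embedded as a feature vector) distributes its membership across clusters; those memberships are what the paper aggregates into cluster and FCM importance values.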

Author 1: Mamoon Obiedat
Author 2: Ali Al-yousef
Author 3: Ahmad Khasawneh
Author 4: Nabhan Hamadneh
Author 5: Ashraf Aljammal

Keywords: Complex problems; uncertainty; fuzzy cognitive map (FCM); fuzzy c-Means; FCM weight values

PDF

Paper 70: Wind Power Integration with Smart Grid and Storage System: Prospects and Limitations

Abstract: Wind power generation is playing a pivotal role in the adoption of renewable energy sources in many countries, and over the past decades we have seen steady growth in wind power generation throughout the world. This article aims to summarize the operation, conversion, and integration of wind power with the conventional grid and local microgrids, so that it can serve as a one-stop reference for early-career researchers. The study is carried out primarily for the horizontal axis wind turbine and the vertical axis wind turbine. Afterward, the types and methods of storing the generated electric power are discussed in detail. On top of that, this paper summarizes the ways of connecting wind farms to the conventional grid and microgrids to portray a clear picture of existing technologies. Section by section, the prospects and limitations are discussed and opportunities for future technologies are highlighted. It is envisaged that this paper will help researchers and engineering professionals grasp the fundamental concepts related to wind power generation concisely and effectively.

Author 1: Saad Bin Abul Kashem
Author 2: Muhammad E. H. Chowdhury
Author 3: Amith Khandakar
Author 4: Jubaer Ahmed
Author 5: Azad Ashraf
Author 6: Nushrat Shabrin

Keywords: Wind power system; wind turbines; energy storage system; microgrids; national grids

PDF

Paper 71: Voice Scrambling Algorithm based on 3D Chaotic Map System (VSA3DCS) to Encrypt Audio Files

Abstract: This paper presents a voice scrambling algorithm based on one of two 3D chaotic map systems (VSA3DCS), discusses it, and applies it to audio signal files. The two 3D chaotic map systems, either of which can be used to build VSA3DCS, are Chen's chaotic system and the Lorenz chaotic system. An Arnold cat map-based scrambling algorithm is also applied to the same sample of audio signals. These scrambling algorithms encrypt audio files by shuffling the positions of the signals under different conditions, with the audio file treated as one block or as two blocks. The amplitude values of the audio signals over time are recorded and plotted for the original file versus the encrypted files produced by VSA3DCS using Chen's system, VSA3DCS using the Lorenz system, and the Arnold-based algorithm. The spectrogram frequencies of the audio signals over time are plotted for the original file versus the encrypted files for all algorithms, and the histograms of the original and encrypted audio signals are recorded and plotted. A comparative analysis is presented using several measures of the encryption and decryption processes, such as encryption and decryption time, the correlation coefficient between samples of the original and encrypted signals, the Spectral Distortion (SD) measure, the Log-Likelihood Ratio (LLR) measure, and key sensitivity. The results of several experimental and comparative analyses show that VSA3DCS using either Chen's or the Lorenz system provides an effective and safe solution for voice signal encryption, and that VSA3DCS outperforms the Arnold-based algorithm in all results and all cases.
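The general idea of chaos-based position scrambling can be sketched as follows: a key (the initial state) drives a Lorenz trajectory, and the sort order of one coordinate yields a key-dependent permutation of sample positions. This is a generic illustration with our own parameter choices, not the authors' VSA3DCS algorithm.

```python
import numpy as np

def lorenz_permutation(n, key=(1.0, 1.0, 1.0), steps_per_sample=5):
    """Euler-integrated Lorenz system; the sort order of the x
    trajectory yields a key-dependent permutation of n positions."""
    sigma, rho, beta, dt = 10.0, 28.0, 8.0 / 3.0, 0.01
    x, y, z = key
    xs = np.empty(n)
    for i in range(n):
        for _ in range(steps_per_sample):
            dx = sigma * (y - x)
            dy = x * (rho - z) - y
            dz = x * y - beta * z
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return np.argsort(xs)

def scramble(signal, perm):
    return signal[perm]

def descramble(scrambled, perm):
    out = np.empty_like(scrambled)
    out[perm] = scrambled
    return out
```

Because the Lorenz system is sensitive to initial conditions, a slightly different key produces an entirely different permutation, which is the basis of the key-sensitivity property measured in the paper.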

Author 1: Osama M. Abu Zaid

Keywords: Lorenz chaotic map; Chen's chaotic map; Arnold cat map; scrambling algorithms; audio encryption

PDF

Paper 72: Air Quality Monitoring Device for Vehicular Ad Hoc Networks: EnvioDev

Abstract: Urban air pollution has become a major concern for numerous densely populated cities globally, since poor air quality may cause various health problems. The first crucial step towards solving this important problem is to identify the most critical areas whose air pollution exceeds the allowed limits. Nowadays, air pollution is monitored by various stationary measurement systems that are expensive and large, consume a large amount of energy, and gather data with low spatial resolution. This paper presents EnvioDev, a mobile air quality and traffic conditions measurement device for Vehicular Ad hoc Networks (VANETs) that can be mounted on any type of vehicle. EnvioDev was tested in a real-world urban environment, measuring CO, CH4, and LPG concentrations as well as air temperature and humidity in order to create a city pollution map, and the results are presented in the paper. Moreover, to determine how many EnvioDev devices are required to obtain a close to real-time air quality map of an urban area, three experiments with the Simulation of Urban Mobility (SUMO) simulator were conducted. In the experiments an urban city map was divided into five zones, and the data aggregation frequencies were varied during different traffic load periods to study the number of vehicles required to carry the EnvioDev measurement device. The obtained results show that increasing the data aggregation frequency increases the number of required vehicles, and that this number depends on the size and topology of the testing area.

Author 1: Josip Balen
Author 2: Srdan Ljepic
Author 3: Kristijan Lenac
Author 4: Sadko Mandzuka

Keywords: Air pollution; air quality; Arduino; sensors; SUMO; VANET

PDF

Paper 73: Urbanization Change Analysis based on SVM and RF Machine Learning Algorithms

Abstract: To maintain sustainable development, the yearly rate of land change is measured through land cover classification maps, whose data are regarded as an influential factor for environmental management and urbanization. This paper measures the change rate, which helps city management define new policies and implement the best one to maintain natural resources. Machine learning algorithms are utilized to produce widely accepted land cover maps on the reliable cloud-based GEE platform using LANDSAT8 satellite imagery. The Random Forest (RF) and Support Vector Machine (SVM) algorithms were used for classification. The investigation also found that the SVM classifier achieved better overall accuracy and Kappa coefficient than the RF classifier when both were given the same training sample.
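The two evaluation metrics the abstract compares can be computed directly from a classification confusion matrix; the small matrix in the test below is invented, not the paper's data.

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    po = np.trace(cm) / total                       # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / total**2   # chance agreement
    return po, (po - pe) / (1 - pe)
```

Kappa discounts the agreement expected by chance, which is why land cover studies report it alongside overall accuracy.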

Author 1: Farhad Hassan
Author 2: Tauqeer Safdar
Author 3: Ghulam Irtaza
Author 4: Aman Ullah Khan
Author 5: Syed Muhammad Husnain Kazmi
Author 6: Farah Murtaza

Keywords: Random Forest (RF); Support Vector Machine (SVM); GEE; classification; machine learning classifier; multi-temporal change analysis; urban change analysis; LANDSAT8; Kappa coefficient

PDF

Paper 74: Development of a Recurrent Neural Network Model for English to Yorùbá Machine Translation

Abstract: This research developed a recurrent neural network model for English-to-Yorùbá machine translation. A parallel corpus was obtained from the English and Yorùbá Bible corpora. The developed model was tested and evaluated using both manual and automatic evaluation techniques. Results from manual evaluation by ten human evaluators show that the system is adequate and fluent. Results from automatic evaluation also show that the developed model produces good translations with high accuracy, correlating well with human judgment.

Author 1: Adebimpe Esan
Author 2: John Oladosu
Author 3: Christopher Oyeleye
Author 4: Ibrahim Adeyanju
Author 5: Olatayo Olaniyan
Author 6: Nnamdi Okomba
Author 7: Bolaji Omodunbi
Author 8: Opeyemi Adanigbo

Keywords: Recurrent; tokenizer; corpus; translation; evaluation; correlation

PDF

Paper 75: Thinging-Oriented Modeling of Unmanned Aerial Vehicles

Abstract: In recent years, there has been a dramatic increase in both practical and research applications of unmanned aerial vehicles (UAVs). According to the literature, there is a need in this area to develop a more refined model of UAV system architecture, in other words, a conceptual model that defines the system's structure and behavior. The existing models are mostly partial and do not account for all of the important dynamic attributes. Progress in this area could reduce ambiguity and increase reliability in the design of such systems. This paper aims to advance the modeling of UAV system architecture by adopting a conceptual model called a thinging (abstract) machine, in which all of the UAV's software and hardware components are viewed in terms of the flow of things and five generic operations. We apply this model to a real case study of a drone. The results, an integrated conceptual representation of the drone, support the viability of this approach.

Author 1: Sabah Al-Fedaghi
Author 2: Jassim Al-Fadhli

Keywords: Unmanned aerial vehicle (UAV); drone; conceptual modeling; diagrammatic representation; system architecture

PDF

Paper 76: Deep Learning Model for Identifying the Arabic Language Learners based on Gated Recurrent Unit Network

Abstract: This paper focuses on identifying the native language of Arabic language learners. The main contribution of the proposed method is the use of a deep learning model based on the Gated Recurrent Unit Network (GRUN). The proposed model explores a multitude of stylistic features, including syntactic, lexical, and n-gram features. To the best of our knowledge, the obtained results outperform those of the best existing systems: our accuracy is the best compared with the pioneering systems (45% vs. 41%), given the limited data and the unavailability of accurate tools dedicated to the Arabic language.

Author 1: Seifeddine Mechti
Author 2: Roobaea Alroobaea
Author 3: Moez Krichen
Author 4: Saeed Rubaiee
Author 5: Anas Ahmed

Keywords: Arabic; Native Language Identification (NLI); deep learning; Gated Recurrent Unit Network (GRUN)

PDF

Paper 77: Quantifying Feature Importance for Detecting Depression using Random Forest

Abstract: Feature selection based on importance is a fundamental step in building machine learning models, because it orients the use of variables toward what is most efficient and effective for a given model. In this study, an explainable machine learning model based on random forest is built to identify the depression level of Twitter users. The model's transparency is reflected in the calculation of its feature importance. Several techniques exist to quantify the importance of features; in this study, random forest is used both as a classifier, which outperforms many classifiers such as decision trees, and as a method for weighting the input features according to their importance. The importance of features is measured using different techniques, including random forest, and the results of these techniques are compared. Furthermore, feature importance is used to weight the input variables inside a complete system for recommending a solution for depressed persons. The experimental results confirm the superiority of random forest over other classifiers using three different methods for measuring feature importance. The accuracy of random forest classification reached 84.7%, and incorporating feature importance increased the classifier accuracy to 84.9%.
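Random forest's built-in impurity-based importances, one of the techniques the abstract mentions, can be read directly off a fitted scikit-learn model. The synthetic data below (one informative feature, three noise features) is ours, not the paper's Twitter dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 400
informative = rng.normal(size=n)          # the only feature driving y
noise = rng.normal(size=(n, 3))           # irrelevant features
X = np.column_stack([informative, noise])
y = (informative > 0).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
importances = clf.feature_importances_    # impurity-based, sums to 1
```

A feature-selection step would then keep only the features whose importance exceeds a chosen threshold before retraining the classifier.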

Author 1: Hatoon AlSagri
Author 2: Mourad Ykhlef

Keywords: Machine learning; random forest; feature selection; feature importance; depression; emotions; twitter

PDF

Paper 78: Parallel QR Factorization using Givens Rotations in MPI-CUDA for Multi-GPU

Abstract: Modern supercomputers incorporate multi-core processors and graphics processing units. Applications running on these computers take advantage of these technologies with scalable programs that work with multicores and accelerators such as graphics processing units. QR factorization is essential for several numerical tasks, such as solving linear equations, computing a matrix inverse, or computing a diagonal matrix, to name a few. There are several factorization algorithms, such as LU, Cholesky, Givens, and Householder, among others. The efficient parallel implementation of each algorithm depends on the structure of the data and the type of parallel architecture used. A common strategy in parallel programming is to break a problem into subproblems and solve them on different processing units. This is very useful when dealing with complex problems or when the data are too large for the available memory. However, it is not clear how data partitioning affects subtask performance when mapping to processing units, specifically graphics processing units. This work explores the partitioning of large symmetric matrix data for QR factorization using Givens rotations, and a parallel implementation using MPI and CUDA is presented.
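For reference, the serial form of QR factorization by Givens rotations, the kernel that the paper parallelizes, can be sketched in a few lines. This is the textbook algorithm, not the authors' MPI-CUDA partitioning.

```python
import numpy as np

def givens_qr(A):
    """QR factorization via Givens rotations: each subdiagonal entry
    R[i, j] is zeroed by a plane rotation of rows (j, i)."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):
            a, b = R[j, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])    # 2x2 rotation block
            R[[j, i], :] = G @ R[[j, i], :]    # zero R[i, j]
            Q[:, [j, i]] = Q[:, [j, i]] @ G.T  # accumulate Q
    return Q, R
```

Each rotation touches only two rows, which is exactly what makes the method attractive for partitioning across processing units: disjoint row pairs can be rotated concurrently.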

Author 1: Miguel Tapia-Romero
Author 2: Amilcar Meneses-Viveros
Author 3: Erika Hernández-Rubio

Keywords: Givens factorization; CUDA; heterogeneous programming; scalable parallelism

PDF

Paper 79: VIPEye: Architecture and Prototype Implementation of Autonomous Mobility for Visually Impaired People

Abstract: Comfortable movement of a visually impaired person in an unknown environment is a non-trivial task due to complete or partial loss of sight, the absence of relevant information, and the unavailability of assistance from a sighted person. To fulfill the visual needs of an impaired person for autonomous navigation, we utilize the concepts of graph mining and computer vision to produce a viable path guidance solution. We present an architectural perspective and a prototype implementation that determines a safe and interesting path (SIP) from an arbitrary source to a desired destination with intermediate waypoints, and guides the visually impaired person along that path through voice commands. We also identify and highlight various challenging issues that came up while developing the prototype solution, VIPEye (an Eye for Visually Impaired People), in terms of task difficulty and the availability of required resources or information. Moreover, this study provides candidate research directions for researchers, developers, and practitioners in the development of autonomous mobility services for visually impaired people.

Author 1: Waqas Nawaz
Author 2: Khalid Bashir
Author 3: Kifayat Ullah Khan
Author 4: Muhammad Anas
Author 5: Hafiz Obaid
Author 6: Mughees Nadeem

Keywords: Safe and Interesting Path (SIP); Visually Impaired People (VIP); autonomous; mobility; computer vision; path guidance; VIPEye prototype; navigation; graph mining

PDF

Paper 80: A High Performance System for the Diagnosis of Headache via Hybrid Machine Learning Model

Abstract: Headache has been a major concern for patients, medical doctors, clinics, and hospitals over the years due to several factors. Headache is categorized into two major types: (1) primary headache, which can be tension, cluster, or migraine, and (2) secondary headache, where further medical evaluation must be considered. This work presents a high performance Headache Prediction Support System (HPSS). HPSS provides preliminary guidance for patients, medical students, and even clinicians for initial headache diagnosis. The mechanism of HPSS is based on a hybrid machine learning model. First, 19 selected attributes (questions) were chosen carefully by medical specialists according to the most recent International Classification of Headache Disorders (ICHD-3) criteria. Then, a questionnaire was prepared to confidentially collect data from real patients under the supervision of specialized clinicians at different hospitals in Jordan. Later, a hybrid solution consisting of clustering and classification was employed to corroborate the diagnoses obtained by clinicians and to predict the headache type for new patients. Twenty-six (26) different classification algorithms were applied to 614 patients' records. The highest accuracy was obtained by integrating K-Means and Random Forest, with a migraine accuracy of 99.1% and an overall accuracy of 93%. A web-based interface was developed over the hybrid model to enable patients and clinicians to use our system in the most convenient way. This work provides a comparative study of different headache diagnosis systems via 9 different performance metrics. Our hybrid model shows great potential for highly accurate headache prediction, and HPSS has been used by patients, medical students, and clinicians with very positive feedback. This work also evaluates and ranks the impact of headache symptoms on headache diagnosis from a machine learning perspective, which can help medical experts improve headache criteria further.
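The clustering-then-classification combination described above can be sketched generically: append the K-Means cluster label as an extra feature before fitting the Random Forest. The toy questionnaire data and diagnosis rule below are invented; this is an illustration of the hybrid pattern, not the HPSS pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# hypothetical binary answers to 19 screening questions
X = rng.integers(0, 2, size=(300, 19)).astype(float)
y = (X[:, 0] + X[:, 1] + X[:, 2] >= 2).astype(int)  # toy diagnosis rule

# hybrid step: append the K-Means cluster id as an extra feature
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
X_aug = np.column_stack([X, km.labels_])
clf = RandomForestClassifier(random_state=0).fit(X_aug, y)
acc = clf.score(X_aug, y)
```

The cluster label gives the classifier a coarse patient-group signal on top of the raw answers, which is one common way such hybrids raise accuracy.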

Author 1: Ahmad Qawasmeh
Author 2: Noor Alhusan
Author 3: Feras Hanandeh
Author 4: Maram Al-Atiyat

Keywords: High performance computing; Clinical Decision Support System (CDSS); machine learning; primary and secondary headache; performance analysis and improvement; headache diagnosis; open medical application

PDF

Paper 81: Efficient Cache Architecture for Table Lookups in an Internet Router

Abstract: Table lookup is the most important operation in routers from the perspectives of both packet processing throughput and power consumption. To realize table lookup at high throughput with low energy, the Packet Processing Cache (PPC) has been proposed. PPC stores table lookup results per flow in a small SRAM (static random access memory) and reuses the cached results to process subsequent packets of the same flow. Because SRAM is accessed faster and with significantly lower energy than TCAM (Ternary Content Addressable Memory), the memory conventionally used to store tables in routers, PPC can process packets at higher throughput with lower power consumption when the table lookup results are in the cache. Although PPC performance depends on the hit/miss rates, recent PPCs still show high miss rates and cannot achieve sufficient performance. In this paper, an efficient cache architecture built from two different techniques is proposed to further reduce the PPC miss rate. The simulation results indicate that combining the two techniques achieved 1.72x higher throughput with 41.4% lower energy consumption compared to the conventional PPC architecture.
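The basic PPC idea, cache the slow table lookup per flow and reuse it for later packets of the same flow, can be sketched as a small LRU flow cache. This models only the caching behavior, not the paper's two proposed hardware techniques; all names are ours.

```python
from collections import OrderedDict

class PacketProcessingCache:
    """Minimal flow-cache sketch: table-lookup results are stored per
    flow key and reused for subsequent packets (LRU eviction)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()
        self.hits = self.misses = 0

    def lookup(self, flow_key, slow_path):
        if flow_key in self.entries:
            self.hits += 1
            self.entries.move_to_end(flow_key)  # mark most recently used
            return self.entries[flow_key]
        self.misses += 1
        result = slow_path(flow_key)            # e.g. full TCAM lookup
        self.entries[flow_key] = result
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)    # evict least recently used
        return result
```

In hardware the "slow path" is the energy-hungry TCAM access, so every cache hit saves both latency and power; the miss rate is therefore the metric the paper's techniques attack.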

Author 1: Hayato Yamaki

Keywords: Router architecture; table lookup; Packet Processing Cache (PPC); Ternary Content Addressable Memory (TCAM)

PDF

Paper 82: A Robust Scheme to Improving Security of Data using Graph Theory

Abstract: With the incredible growth in the use of the internet and other new telecommunication technologies, cryptography has become an absolute necessity for securing communications between two or more entities, particularly when transferring confidential data. In the literature, many encryption systems have been proposed against attack threats. These schemes should overcome such threats by ensuring the confidentiality, integrity, and authenticity of transmitted data; however, several of them have shown weaknesses in terms of security and complexity. Hence the need for a robust, powerful, non-standard encryption algorithm that denies any traditional opportunity to sniff data. In this work, we propose a new encryption system that meets these security requirements. The scheme is based essentially on the principles of graph theory, which are very promising for plaintext representation. Our approach proposes a novel use of the concepts of the Hamiltonian circuit and the adjacency matrix, using a shared key and a pseudo-random generator. Analysis of the experimental results, which were very promising, found the technique to be both efficient and robust.
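To make the graph-theoretic ingredients concrete, here is a toy sketch (not the authors' scheme, and not cryptographically secure): a key-seeded pseudo-random generator fixes a Hamiltonian circuit, each plaintext byte becomes the weight of one circuit edge in an adjacency matrix, and the remaining cells are filled with pseudo-random noise.

```python
import random

def encrypt(plaintext: bytes, key: int):
    """Toy sketch: the key seeds a PRNG that fixes a Hamiltonian
    circuit (a vertex permutation); plaintext byte i becomes the
    weight of circuit edge i in an n x n adjacency matrix, and every
    other cell is PRNG noise so the real edges are not obvious."""
    n = len(plaintext)
    rng = random.Random(key)
    circuit = list(range(n))
    rng.shuffle(circuit)
    matrix = [[rng.randrange(256) for _ in range(n)] for _ in range(n)]
    for i, byte in enumerate(plaintext):
        u, v = circuit[i], circuit[(i + 1) % n]
        matrix[u][v] = byte
    return matrix

def decrypt(matrix, key: int) -> bytes:
    """Rebuild the circuit from the shared key and read its edges."""
    n = len(matrix)
    rng = random.Random(key)
    circuit = list(range(n))
    rng.shuffle(circuit)
    return bytes(matrix[circuit[i]][circuit[(i + 1) % n]]
                 for i in range(n))
```

Only a holder of the key can reconstruct the circuit and thus know which adjacency-matrix cells carry plaintext; the paper's actual construction elaborates on this idea with its own key schedule and matrix encoding.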

Author 1: Khalid Bekkaoui
Author 2: Soumia Ziti
Author 3: Fouzia Omary

Keywords: Cryptography; encryption; security; graph theory; Hamiltonian circuit; adjacency matrix

PDF

Paper 83: 2D/3D Registration with Rigid Alignment of the Pelvic Bone for Assisting in Total Hip Arthroplasty Preoperative Planning

Abstract: In Total Hip Arthroplasty, preoperative planning requires the definition of medical parameters that help during the intraoperative process; these parameters must be determined accurately to fit an implant to the patient. Currently, preoperative planning is carried out with different methods: either a prosthesis template (2D) is projected on x-ray images, or a computed tomography (CT) scan is used to place a 3D prosthesis. We propose an alternative that develops preoperative planning through 3D models reconstructed from 2D x-ray images, which yields the same precise information as a CT scan. In this paper we test the framework of Bertelsen and Borro, an ITK-based framework for 2D-3D registration between x-ray images and a computed tomography volume. Following their approach, we use two fixed images (reference images) and one moving image (the image to transform) to perform an intensity-based registration. The method uses a ray casting interpolator to generate a Digitally Reconstructed Radiograph (DRR), or virtual x-ray. We also apply normalized gradient correlation to compare the patient x-ray image with the virtual x-ray image, optimized by a nonlinear conjugate gradient; the metric and the optimizer together update the rigid transformation parameters, extended with an additional scale parameter. This produced better results: a Hausdorff distance between the two models (reference volume and transformed volumetric template) of 0.01855 mm for the alignment of the relocated reference volume and 15.5915 mm for the alignment of the deformed and relocated reference volume.

Author 1: Christian A. Suca Velando
Author 2: Eveling G. Castro Gutierrez

Keywords: Digitally Reconstructed Radiograph (DRR); intensity registration; rigid transformation; 3D models; Total Hip Arthroplasty (THA); preoperative planning; Computed Tomography (CT)

PDF

Paper 84: Evaluating Contact Detection, Size Recognization and Grasping State of an Object using Soft Elastomer Gripper

Abstract: Careful handling of sensitive or fragile objects is critical to preserving their quality, and soft robotics has gained a lot of attention in this domain. However, detecting contact and grasping behavior remains a challenging task due to the non-linear behavior of soft grippers. Moreover, regulating the grasping behavior requires exact real-time contact feedback. To improve contact detection accuracy, a gradient-based algorithm is proposed with feedback from a simple resistive flex sensor and a pressure sensor. First, the resistive flex sensor is embedded into the gripper to obtain the gripper's finger position in real time. Second, solenoid valves and pressure sensors are used to control the pneumatic pressure from the pump. Finally, a closed-loop control system is developed to control the grasping process. The proposed contact detection algorithm provides contact feedback with an accuracy of ±3 mm, which is applied to recognize the size of sphere-shaped objects. A real-time experimental setup has been developed that can successfully pick and place fruits and vegetables. The key benefits of the proposed algorithm are lower complexity and better accuracy.
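One plausible reading of the gradient-based idea, offered purely as an illustration and not as the paper's algorithm, is to flag contact when the flex-sensor reading stops changing while pneumatic pressure keeps rising: the finger has met an object and can no longer curl freely. The threshold and signals below are invented.

```python
def detect_contact(flex, pressure, grad_eps=0.05):
    """Return the first sample index where the flex-sensor gradient
    is ~0 while the pressure gradient is still positive (hypothetical
    contact condition), or -1 if no contact is detected."""
    for k in range(1, len(flex)):
        flex_grad = flex[k] - flex[k - 1]
        pressure_grad = pressure[k] - pressure[k - 1]
        if abs(flex_grad) < grad_eps and pressure_grad > 0:
            return k
    return -1
```

In a closed-loop grasp controller, this index would trigger the switch from "close the finger" to "hold at the current pressure".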

Author 1: Kazi Abaul Jamil

Keywords: Grasping process; contact feedback; size recognition

PDF
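As a toy illustration of the gradient-based contact idea the abstract describes — flag contact when the flex-sensor reading stops changing while actuation continues — the following sketch uses hypothetical threshold parameters (`move_thresh`, `stop_thresh`) and is not the authors' controller.

```python
def detect_contact(flex_readings, move_thresh=1.0, stop_thresh=0.2):
    """Return the sample index where contact is detected, or -1.

    Contact is inferred when the discrete gradient of the flex-sensor
    signal falls below stop_thresh after the finger was clearly bending
    (gradient above move_thresh): the finger has stalled on the object.
    """
    moving = False
    for i in range(1, len(flex_readings)):
        grad = flex_readings[i] - flex_readings[i - 1]
        if grad > move_thresh:
            moving = True          # finger is still closing freely
        elif moving and grad < stop_thresh:
            return i               # bending stalled: contact made
    return -1                      # finger never stalled: no contact
```

In the paper's closed loop, such a contact index would be combined with the pressure-sensor reading to decide when to stop inflating the pneumatic actuator.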

Paper 85: Ensemble Methods to Detect XSS Attacks

Abstract: Machine learning techniques are gaining popularity and giving better results in detecting web application attacks. Cross-site scripting (XSS) is an injection attack widespread in web applications. Existing solutions such as filter-based, dynamic-analysis, and static-analysis approaches are not effective in detecting unknown XSS attacks, whereas machine learning methods can detect them. Existing research on detecting XSS attacks with machine learning suffers from issues such as single base classifiers, small datasets, and unbalanced datasets. In this paper, supervised ensemble learning techniques are trained on a large, labeled, and balanced dataset to detect XSS attacks. The ensemble methods used in this research are random forest classification, AdaBoost, bagging with SVM, Extra-Trees, gradient boosting, and histogram-based gradient boosting. The performance of these ensemble learning algorithms is analyzed and compared using the confusion matrix.

Author 1: PMD Nagarjun
Author 2: Shaik Shakeel Ahamad

Keywords: Cross-site scripting; machine learning; ensemble learning; random forest; bagging; boosting

PDF
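A minimal sketch of the two ensemble ingredients the abstract relies on — combining several classifiers' predictions and scoring the result with a confusion matrix. Hard majority voting stands in here for the scikit-learn ensembles actually used in the paper; labels are assumed binary (1 = XSS, 0 = benign).

```python
from collections import Counter

def majority_vote(per_model_preds):
    """Hard-voting ensemble: each row is one model's predictions
    over the same samples; return the per-sample majority label."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*per_model_preds)]

def confusion_matrix(y_true, y_pred):
    """2x2 confusion matrix [[TN, FP], [FN, TP]] for binary labels."""
    m = [[0, 0], [0, 0]]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m
```

From the matrix, the usual comparison metrics (accuracy, precision, recall) follow directly, which is how the paper compares its six ensemble methods.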

Paper 86: Evaluation of a Model Maximizing the Quality Value of Selected Software Components in a Library

Abstract: Reusable software components are selected from libraries by developers and integrated into existing software systems to improve their quality. In this article, we evaluate a mathematical model based on an optimization approach to selecting software components according to their quality. It is a linear programming model with constraints that takes into account the quality characteristics of the components, based on the ISO/IEC 9126 standard, as well as the financial cost and the adaptation time. Experiments with the ILOG CPLEX Studio optimization tool gave satisfactory results.

Author 1: Koffi Kouakou Ive Arsene
Author 2: Samassi Adama
Author 3: Kouamé Appoh

Keywords: Software component quality; reuse; reusable components; reusability; mathematical model; simulation; validation; maintenance effort

PDF
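The selection model described above is small enough to illustrate with an exhaustive 0/1 solver in place of CPLEX: maximize total quality subject to budget and adaptation-time constraints. The component fields and the brute-force search are illustrative assumptions, not the paper's CPLEX formulation.

```python
def select_components(components, budget, max_time):
    """Exhaustive solver for a tiny 0/1 selection model:
    maximize summed quality s.t. total cost <= budget and
    total adaptation time <= max_time."""
    best, best_q = [], 0
    n = len(components)
    for mask in range(1 << n):        # every subset of components
        chosen = [components[i] for i in range(n) if mask >> i & 1]
        cost = sum(c['cost'] for c in chosen)
        time = sum(c['time'] for c in chosen)
        quality = sum(c['quality'] for c in chosen)
        if cost <= budget and time <= max_time and quality > best_q:
            best, best_q = chosen, quality
    return best_q, [c['name'] for c in best]
```

A real library would make this enumeration infeasible, which is why the paper formulates the problem as a linear program and hands it to a solver.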

Paper 87: Feature Selection and Performance Improvement of Malware Detection System using Cuckoo Search Optimization and Rough Sets

Abstract: The proliferation of malware is a severe threat to host- and network-based systems. Designing and evaluating efficient malware detection methods is the need of the hour. Windows Portable Executable (PE) files are a primary source of Windows-based malware. Static malware detection involves an analysis of several PE header file features and can be done with the help of machine learning tools. In the design of efficient machine learning models for malware detection, feature reduction plays a crucial role. The rough-set dependency degree is a proven tool for feature reduction; however, finding a reduct using rough sets is an NP-hard problem. This paper proposes a hybrid Rough Set Feature Selection using Cuckoo Search Optimization, RSFSCSO, to find the best collection of reduced features for malware detection. A random forest classifier is used to evaluate the proposed algorithm; the analysis of results shows that the proposed method is highly efficient.

Author 1: Ravi Kiran Varma P
Author 2: PLN Raju
Author 3: K V Subba Raju
Author 4: Akhila Kalidindi

Keywords: Cuckoo search; rough sets; feature optimization; malware analysis; malware detection; feature reduction; ClaMP dataset

PDF
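The rough-set dependency degree that a feature-selection search like RSFSCSO would score candidate subsets with can be sketched directly: gamma(B, d) is the fraction of objects whose B-indiscernibility class is consistent on the decision attribute. The dict-based data representation below is an illustrative assumption, and the cuckoo-search wrapper around it is omitted.

```python
from collections import defaultdict

def dependency_degree(rows, features, decision):
    """Rough-set dependency degree gamma(B, d): the share of objects
    whose equivalence class under the feature subset B maps to a
    single decision value (the positive region)."""
    classes = defaultdict(list)
    for row in rows:
        key = tuple(row[f] for f in features)   # B-indiscernibility class
        classes[key].append(row[decision])
    consistent = sum(len(vals) for vals in classes.values()
                     if len(set(vals)) == 1)
    return consistent / len(rows)
```

A subset with gamma = 1.0 preserves all the decision information of the full feature set; the optimizer searches for the smallest such subset of PE header features.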

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org