The Science and Information (SAI) Organization
IJACSA Volume 11 Issue 1

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

View Full Issue

Paper 1: High Performance Computing in Resource Poor Settings: An Approach based on Volunteer Computing

Abstract: High Performance Computing (HPC) systems aim to solve, in a short amount of time, complex computing problems that are either too large for standard computers or would take too long on them. They are used to solve computational problems in many fields such as medical science (drug discovery, breast cancer detection in images, etc.), climate science, physics, and mathematical science. Existing solutions such as HPC supercomputers, HPC clusters, HPC clouds, or HPC grids are not suited to resource-poor settings (mainly in developing countries) because their fees generally exceed available funding (particularly for academics), and the administrative complexity of accessing an HPC grid raises the barrier further. This paper presents an approach for building a Volunteer Computing system for HPC in resource-poor settings. The solution requires no additional investment in hardware; it relies instead on machines already owned by private users who volunteer them. The experiment was carried out on the mathematical problem of matrix multiplication using the Volunteer Computing system. Given the success of this experiment, the enrollment of further volunteers has already started, the goal being to create a powerful Volunteer Computing system with the maximum number of computers.

Author 1: Adamou Hamza
Author 2: Azanzi Jiomekong

Keywords: Volunteer computing; resource poor settings; high performance computing; matrix multiplication

PDF
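The work-splitting idea behind the matrix-multiplication experiment in Paper 1 can be sketched as a row-block decomposition: a coordinator splits the first matrix into row blocks, each "volunteer" multiplies its block by the full second matrix, and the partial results are reassembled. This is a minimal local sketch, not the authors' actual distributed implementation; the function names and the thread-pool stand-in for volunteer machines are assumptions of the sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def matmul_block(rows, B):
    """One volunteer's task: multiply an assigned block of rows by B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in rows]

def volunteer_matmul(A, B, n_volunteers=4):
    """Coordinator: split A into row blocks, farm them out, reassemble C = A @ B."""
    chunk = max(1, (len(A) + n_volunteers - 1) // n_volunteers)
    blocks = [A[i:i + chunk] for i in range(0, len(A), chunk)]
    with ThreadPoolExecutor(max_workers=n_volunteers) as pool:
        partials = pool.map(matmul_block, blocks, [B] * len(blocks))
    return [row for part in partials for row in part]
```

In a real volunteer system the `pool.map` call would be replaced by network dispatch to enrolled machines, with retries for volunteers that go offline.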

Paper 2: Search Space of Adversarial Perturbations against Image Filters

Abstract: The superior performance of deep learning is threatened by safety issues. Recent findings have shown that deep learning systems are very vulnerable to adversarial examples, inputs altered with the attacker's intent to deceive the system. Many defensive methods have been proposed to protect deep learning systems against adversarial examples; however, there is still a lack of principled strategies for deceiving those defensive methods. Whenever a particular countermeasure is proposed, a new, more powerful adversarial attack is invented to defeat it. In this study, we investigate the ability to create adversarial patterns in the search space against defensive methods that use image filters. Experimental results on the ImageNet dataset with image classification tasks show the correlation between the search space of adversarial perturbations and the filters. These findings open a new direction for building stronger offensive methods against deep learning systems.

Author 1: Dang Duy Thang
Author 2: Toshihiro Matsui

Keywords: Deep neural networks; image filters; adversarial examples; image classification

PDF

Paper 3: Towards Security Effectiveness Evaluation for Cloud Services Selection following a Risk-Driven Approach

Abstract: Cloud computing is gaining popularity, with an increasing number of services available in the market. This has made services selection and evaluation a difficult and challenging task, particularly for security-based evaluation. A key problem with much of the literature on cloud services security evaluation is that it fails to consider the overall evaluation context given the cloud characteristics and the underlying influence factors, including threats, vulnerabilities, and security controls. In this paper, we propose a holistic risk-driven security evaluation approach for cloud services selection. We first use the fuzzy DEMATEL method to jointly assess the likelihood and impact of threats with respect to the cloud service types, the exploitability of vulnerabilities by the identified threats, and the effectiveness of security controls in mitigating those vulnerabilities. The overall diffusion of risk is then captured via the relations across these concepts and leveraged to filter and prioritize the most critical security controls. The selected controls are weighted using a combination of the fuzzy DEMATEL and fuzzy ANP methods based on several factors, including their effectiveness in preventing the identified risks, the user's preferences, and the level of control (i.e., responsibilities), which denotes how much control a cloud user transfers to the cloud provider. To enhance the reliability of the results, the subjective weights are integrated with objective weights using the entropy method. Finally, the TOPSIS method is employed for services ranking, and the Improvement Gap Analysis (IGA) method is leveraged to provide more insight into the strengths and weaknesses of the selected services. An illustrative example demonstrates the application of the proposed framework.

Author 1: Sarah Maroc
Author 2: Jian Biao Zhang

Keywords: Cloud computing; cloud services selection; decision-making; risk-driven assessment; security evaluation

PDF
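The TOPSIS ranking stage named in Paper 3 can be illustrated with a minimal numeric sketch. This assumes a plain crisp decision matrix with all benefit criteria and precomputed weights; the paper's fuzzy DEMATEL/ANP weighting and entropy integration are not reproduced here.

```python
import math

def topsis(matrix, weights):
    """Rank alternatives with TOPSIS (all criteria treated as benefit criteria).

    matrix[i][j]: score of alternative i on criterion j; weights sum to 1.
    Returns closeness coefficients in [0, 1]; higher is better.
    """
    n_crit = len(weights)
    # Vector-normalize each column, then apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[w * row[j] / norms[j] for j, w in enumerate(weights)] for row in matrix]
    # Ideal (best) and anti-ideal (worst) points per criterion.
    best = [max(col) for col in zip(*v)]
    worst = [min(col) for col in zip(*v)]
    scores = []
    for row in v:
        d_best = math.dist(row, best)    # distance to the ideal point
        d_worst = math.dist(row, worst)  # distance to the anti-ideal point
        scores.append(d_worst / (d_best + d_worst))
    return scores
```

An alternative that dominates on every criterion gets a closeness coefficient of 1.0, and one dominated on every criterion gets 0.0; cost criteria would flip `max`/`min` per column.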

Paper 4: Hybrid Machine Learning Algorithms for Predicting Academic Performance

Abstract: The large volume and complexity of data in educational institutions call for support from information technologies. To facilitate this task, many researchers have focused on using machine learning to extract knowledge from education databases to help students and instructors achieve better performance. In prediction models, the challenging task is to choose effective techniques that produce satisfying predictive accuracy. Hence, in this work, we introduce a hybrid approach combining principal component analysis (PCA) with four machine learning (ML) algorithms: random forest (RF), the C5.0 decision tree (DT), naïve Bayes (NB), and the support vector machine (SVM), to improve classification performance by reducing misclassification. Three datasets were used to confirm the robustness of the proposed models. On these datasets, we evaluated classification accuracy and root mean square error (RMSE) as evaluation metrics. Ten-fold cross-validation was used to evaluate predictive performance. The proposed hybrid models produced very good prediction results, establishing themselves as the optimal prediction and classification algorithms.

Author 1: Phauk Sokkhey
Author 2: Takeo Okazaki

Keywords: Student performance; machine learning algorithms; k-fold cross-validation; principal component analysis

PDF
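The PCA-then-classify pipeline of Paper 4 can be sketched as two stages: project the features onto the top principal components, then fit a classifier in the reduced space. To keep the sketch dependency-light, a nearest-centroid classifier stands in for the paper's RF/C5.0/NB/SVM models; the function names are illustrative, not the authors' code.

```python
import numpy as np

def pca_fit_transform(X, n_components):
    """Project X onto its top principal components via centered SVD."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    comps = vt[:n_components]          # rows are principal directions
    return (X - mu) @ comps.T, mu, comps

def nearest_centroid_fit(Z, y):
    """Compute one centroid per class in the PCA space."""
    classes = np.unique(y)
    return classes, np.array([Z[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(Z, classes, centroids):
    """Assign each sample to the class with the closest centroid."""
    d = np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]
```

In a full replication, the PCA fit and the classifier fit would both happen inside each fold of the 10-fold cross-validation to avoid leaking test information into the projection.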

Paper 5: Proposal of a Sustainable Agile Model for Software Development

Abstract: Sustainability is more important now than ever in the context of organizational growth. Technological products, such as software, should be certifiable as green, environmentally friendly technology; this would be a competitive advantage for an organization that implements an agile software development methodology that takes sustainability into account, giving the organization new ways to market its software products as environmentally friendly. This study proposes a model for agile software development built on reusing old hardware, free and open-source software and code, and virtualization of servers and machines, in order to create software that can remain useful for over a decade. As a result, we expect a reduction of planned obsolescence in hardware, a step toward solving the worldwide problem of electronic waste (e-waste).

Author 1: Oscar Antonio Alvarez Galán
Author 2: José Luis Cendejas Valdéz
Author 3: Heberto Ferreira Medina
Author 4: Gustavo A. Vanegas Contreras
Author 5: Jesús Leonardo Soto Sumuano

Keywords: Sustainability; agile methodologies; software development; MDSIC; sustainability variables

PDF

Paper 6: Pathological Worrying and Artificial Neural Networks

Abstract: Worrying is a cognitive process that focuses on potential future negative events, where the outcome is often uncertain. Worries can arise in chains with one worry leading to another, often without solution. This may give rise to an uncontrollable worrying that may be associated with psychiatric disorders such as anxiety and depression. The generation of progressively more negative chains of worries can lead to a catastrophic phenomenon of pathological worrying. In this article we show that catastrophic worrying can be simulated by using a cascade-correlation algorithm for artificial neural networks.

Author 1: Carlos Pelta

Keywords: Pathological worrying; artificial neural networks; cascade-correlation algorithm

PDF

Paper 7: Performance Comparison of CRUD Methods using NET Object Relational Mappers: A Case Study

Abstract: Most applications available nowadays use an Object Relational Mapper (ORM) to access and save data. The additional layer wrapped over the database induces a performance penalty relative to raw SQL queries; on the other hand, the advantage of ORMs, letting developers focus on the domain level during application development, is a premise for easier development and simpler code maintenance. In this context, this paper compares the performance of three of the most used ORM technologies from the .NET family: Entity Framework Core 2.2, nHibernate 5.2.3, and Dapper 1.50.5. The main objective of the paper is a comparative analysis of the impact a specific ORM has on application performance when issuing database requests. A dedicated testing architecture was designed to ensure the consistency of the tests. Response times and memory usage were evaluated for each technology using the same CRUD (Create Read Update Delete) operations on the database. The results show that the decision to use one or another depends on the most frequently used type of operation. A comprehensive discussion based on the results analysis is provided to support software engineers in choosing a specific ORM during software design and development.

Author 1: Doina Zmaranda
Author 2: Lucian-Laurentiu Pop-Fele
Author 3: Cornelia Gyorödi
Author 4: Robert Gyorödi
Author 5: George Pecherle

Keywords: ORM (Object Relational Mapper); domain-level development; performance evaluation; CRUD (Create Read Update Delete) operations

PDF

Paper 8: Using the Convolution Neural Network Attempts to Match Japanese Women’s Kimono and Obi

Abstract: The decline in kimono usage in Japan is currently serious, and it has become an important problem for the kimono industry and kimono culture. Behind this lack of usage lies the fact that Japanese clothing carries many strict rules. One of the difficult rules is that kimonos have status: one must consider the proper kimono to wear depending on the place and type of event. At the same time, the obi (sash) also has status, and the status of the kimono and obi must match. The matching of kimono and obi is called "obiawase" in Japanese, and it is not just a matter of the wearer selecting a pair that she likes. Instead, the place where a kimono is worn determines its status, and the obi must match that status and kimono. In other words, the color, the material, and the meaning behind the pattern must be matched with the obi. Kimono patterns may evoke the seasons or a celebratory event; all this must be considered. The kimono was originally everyday wear, and people were taught these things in their households, but with today's increasingly nuclear families, the person who could teach them is often not nearby, adding to the lack of use of kimonos. Because of this, the digital fashion industry has shown interest in using CNNs (Convolution Neural Networks). We attempt to use machine learning to tackle the difficult task of matching an obi to a kimono, using the CNNs drawing the most attention today.

Author 1: Kumiko Komizo
Author 2: Noriaki Kuwahara

Keywords: Digital fashion; kimono; obi; convolution neural network

PDF

Paper 9: An Enhanced K-Nearest Neighbor Predictive Model through Metaheuristic Optimization

Abstract: Retracted: After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

Author 1: Allemar Jhone P. Delima

Keywords: CIGAL-KNN; GA-KNN; IBAX operator; KNN algorithm; prediction models

PDF

Paper 10: Hybrid Algorithm Naive Bayesian Classifier and Random Key Cuckoo Search for Virtual Machine Consolidation Problem

Abstract: The trade-off between energy consumption and SLA violation presents a serious challenge in cloud computing environments. A non-aggressive virtual machine consolidation algorithm is a good approach to reducing consumed energy as well as SLA violations. A well-known strategy for the virtual machine consolidation problem consists of four steps: host overloading detection, host under-loading detection, virtual machine selection, and virtual machine placement. In this paper, this strategy is modified by merging the last two steps, virtual machine selection and virtual machine placement, to avoid the poor solutions caused by solving them separately. In the host overloading/under-loading detection steps, we classify host status into five classes: Over-Utilized, Nearly Over-Utilized, Normal Utilized, Under-Utilized, and Switched Off; an algorithm based on the Naive Bayesian Classifier is then introduced to predict the future host state, minimizing the number of virtual machine migrations and, as a result, the energy consumption and performance degradation due to migrations. For the merged virtual machine selection and placement step, we introduce an algorithm based on Random Key Cuckoo Search to reduce energy consumption and SLA violations. Ten days of real data traces were used to verify the proposed algorithms. The experimental results show that the proposed algorithms can significantly reduce consumed energy as well as SLA violations in data centers.

Author 1: Yasser Moaly
Author 2: Basheer A.Youssef

Keywords: Cloud computing; Naive Bayesian classifier; Random Key Cuckoo Search; Energy-efficiency; SLA-aware; Virtual Machine consolidation

PDF

Paper 11: Adjacency Effects of Layered Clouds by Means of Monte Carlo Ray Tracing

Abstract: Adjacency effects from layered box-shaped clouds are clarified by means of Monte Carlo Simulation (MCS), taking into account the phase function of cloud particles and a multi-layered plane-parallel atmosphere. MCS allows estimation of top-of-the-atmosphere radiance. The influence on adjacency effects of the phase function of the clouds in question and of the number of layers of the plane-parallel atmosphere is also clarified, together with the effects of the cloud top and bottom. There are 10 cloud types in the meteorological definition; one-layer clouds, cumulus and cumulonimbus, are investigated in this study.

Author 1: Kohei Arai

Keywords: Monte Carlo simulation; top of the atmosphere radiance; cloud type; adjacency effects; layered clouds

PDF
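The Monte Carlo principle behind Paper 11's radiance estimates can be illustrated, far more simply than the authors' layered-cloud model, by estimating direct transmittance through a single homogeneous layer: each photon's free path is drawn from an exponential distribution in optical depth, and the surviving fraction converges to the Beer-Lambert value exp(-tau). The single-layer geometry and absence of a scattering phase function are assumptions of this sketch, not the paper's setup.

```python
import math
import random

def mc_transmittance(tau, n_photons=100_000, seed=42):
    """Monte Carlo estimate of direct transmittance through a layer of
    optical depth tau: a photon escapes if its sampled free path
    (exponentially distributed in optical depth) exceeds tau."""
    rng = random.Random(seed)
    escaped = sum(1 for _ in range(n_photons)
                  if -math.log(1.0 - rng.random()) > tau)
    return escaped / n_photons
```

A full cloud simulation would, at each collision, scatter the photon into a new direction sampled from the phase function and continue tracing through the layered geometry; here photons that collide are simply counted as not directly transmitted.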

Paper 12: An Intelligent and Adaptive Model for Change Management

Abstract: The continuous and rapid changes taking place in the world today make change management crucial to any organization. Existing change management models bridge the gap between motivation, planning, and implementation. These models are highly significant, as organizational change is no longer a rare event but an ongoing process, and the 'business as usual' model has become inadequate for most organizations. To automate the theoretical model of change management, an intelligent and adaptive model for change management is developed in this paper, which takes into consideration the positive and negative factors that may arise at any time and any place, internally (internal factors) or externally (external factors). Based on these factors, the proposed model can efficiently find a reasonable solution that adapts to the existing situation, avoiding failures of organizational management. The proposed system is built on a decision support system (DSS) whose inputs represent the influencing factors and whose output represents feedback on the method of management. In this paper, the proposed change management model has been verified, and the results have been reported accordingly.

Author 1: Ali M Alshahrani

Keywords: Change Management; External Environment; Internal Environment; Decision Support System (DSS)

PDF

Paper 13: A Systematic Review on Students’ Engagement in Classroom: Indicators, Challenges and Computational Techniques

Abstract: Students’ engagement in a classroom is a key factor that influences several educational outcomes. Studies by the University of California, Los Angeles (UCLA) and British universities found that 40% of students are frequently experiencing boredom and less than 20% of students ask questions in class due to poor engagement. A survey by Malaysia’s Program for International Student Assessment (PISA) found that 80% of the participating schools fell into the poor performance bracket. However, studies in this line of research are limited and scattered. To provide a clear insight into this problem and support researchers, it is crucial to understand the current state of research in this area. Consequently, in this paper, a comprehensive review is conducted to map the literature studies to a consistent taxonomy. Search terms revealed 87 papers from several databases that have been classified into seven categories. A systematic review method is applied, analysis is performed, and finally, findings, discussion, and recommendations are presented.

Author 1: Latha Subramainan
Author 2: Moamin A. Mahmoud

Keywords: Classroom interaction; student engagement; engagement indicators; engagement challenges; computational techniques

PDF

Paper 14: The Effectiveness of Stemming in the Stylometric Authorship Attribution in Arabic

Abstract: Recent years have witnessed the development of numerous approaches to authorship attribution, including statistical and linguistic methods. Stylometric authorship attribution, however, remains among the most widely used due to its accuracy and effectiveness. Nevertheless, many authorship problems remain unresolved for Arabic. This can be attributed to different factors, including linguistic peculiarities that are not usually considered in standard authorship systems. In the case of Arabic, morphological features carry unique stylistic information that can be profitably used in testing authorship of controversial texts and writings. The hypothesis is that much of this morphological information is lost through stemming. As such, this study investigates the effect of stemming on stylometric authorship attribution in Arabic, using three Arabic stemmers: the GOLD stemmer, the Khoga stemmer, and the Light 10 stemmer. By way of illustration, a corpus of 2400 news articles written by 97 different authors was designed. To evaluate the effect of stemming, the selected articles (both stemmed and unstemmed texts) were clustered using cluster analysis methods, and comparisons were made between clustering structures based on stemmed and unstemmed datasets. The results indicate that stemming has a negative impact on the accuracy of the clustering performance and thus on the reliability of stylometric authorship testing in Arabic. The peculiar stylistic features of the affixation processes in Arabic can thus be exploited to improve the performance of authorship attribution applications in Arabic. It is finally concluded that stemming is not effective in stylometric authorship applications in Arabic.

Author 1: Abdulfattah Omar
Author 2: Wafya Ibrahim Hamouda

Keywords: Authorship attribution; cluster analysis; GOLD stemmer; Khoga stemmer; Light 10 stemmer; stemming; stylometry

PDF

Paper 15: Edge IoT Networking

Abstract: Data transmission has witnessed a new wave of emerging technologies such as the IoT, in which communication takes place through smart devices such as sensors and actuators. In the conventional model, data traffic traverses the network to the main servers in order to accomplish tasks on behalf of the sensors. This routing back and forth between end users and main servers raises network issues: it incurs high delay and packet loss, which in turn degrade the overall Quality of Service (QoS). The newer model, called the "edge IoT network", not only reduces the load on the network but also makes nodes more self-managing at the edge. However, this approach has limitations in power consumption and efficiency, which can lead to node failure and data loss. Therefore, this paper presents a new model combining network science and computer networking to enhance edge IoT efficiency. Simulation results show clear evidence of improvement in efficiency, communicability, degree, and overall closeness.

Author 1: Majed Mohaia Alhaisoni

Keywords: Edge IoT; network centrality; communicability; degree; closeness

PDF
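The centrality notions Paper 15 measures (degree, closeness) can be computed on a small graph with plain breadth-first search. The toy adjacency-list graph here is illustrative, not the paper's simulated edge IoT topology, and the graph is assumed connected and undirected.

```python
from collections import deque

def degree_centrality(adj):
    """Degree of each node, normalized by the maximum possible degree n - 1."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    """Closeness: (n - 1) divided by the sum of shortest-path distances
    from the node to every other node (graph assumed connected)."""
    n = len(adj)
    result = {}
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:                      # BFS gives hop-count shortest paths
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        result[src] = (n - 1) / sum(dist[v] for v in adj if v != src)
    return result
```

On a star topology, for example, the hub scores 1.0 on both measures while every leaf scores lower, which is the kind of asymmetry an edge-placement scheme can exploit.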

Paper 16: Awareness of Ethical Issues when using an e-Learning System

Abstract: Transformation to digital systems has made life easier, and the acceptance of e-learning systems in the academic life of students is a fact. Therefore, many educational organizations use the e-learning environment for teaching-learning activities. The present study evaluates undergraduate students' awareness of ethics, and whether it differs by gender and academic level, when using an e-learning system at Al-Balqa Applied University in Jordan. A self-administered questionnaire was designed to measure the participants' awareness of ethics. It consists of 20 items classified into three ethical categories: intellectual property rights, vandalism, and privacy. The results show that students' awareness of their commitment to ethical issues when using an e-learning system is low in all three categories. The results also show no significant differences by undergraduate students' gender or academic level. Therefore, undergraduate students should be made fully knowledgeable about ethical issues to avoid unethical behavior while using the e-learning system.

Author 1: Talib Ahmad Almseidein
Author 2: Omar Musa Klaif.Mahasneh

Keywords: Code of ethics; ethical issues; e-learning

PDF

Paper 17: A Blockchain based Mobile Money Interoperability Scheme

Abstract: Developing countries in Africa in general, and Zambia in particular, have seen a rapid rise in the use of mobile payment platforms. This has not only revolutionized access to finance for the poor but also given them access to other financial products such as savings or insurance. With a growing number of mobile money providers in Zambia, there is a need for a solution that enables integration of the mobile money providers' systems through a central clearinghouse for clearing and settlement, achieving mobile money interoperability. In this study, we first reviewed the technical landscape and features of mobile payment systems in Zambia and then assessed the feasibility of using blockchain technology in a proposed settlement and clearing system that would facilitate mobile money interoperability. A prototype system was then designed in which the amounts interchanged between providers are managed as assets on a permissioned blockchain. The system runs a distributed shared ledger, which provides non-repudiation, data privacy, and data origin authentication by leveraging the consistency features of blockchain technology.

Author 1: Fickson Mvula
Author 2: Jackson Phiri
Author 3: Simon Tembo

Keywords: Blockchain; mobile money interoperability; clearing and settlement; blockchain security

PDF

Paper 18: Sign Language Semantic Translation System using Ontology and Deep Learning

Abstract: Translating and understanding sign language may be difficult for some. Therefore, this paper proposes an Arabic sign language translation system using ontology and deep learning techniques to interpret users' signs into their different meanings. The paper applies an ontology to the sign language domain to address some of its challenges. This first version begins by translating simple static signs composed of Arabic alphabet letters and some Arabic words. A deep Convolutional Neural Network (CNN) architecture was trained and tested on a pre-made Arabic sign language dataset and on a dataset collected for this paper to obtain better recognition accuracy. Experimental results show that, on the pre-made dataset, classification accuracy on the training set (80% of the dataset) was 98.06% and recognition accuracy on the testing set (20% of the dataset) was 88.87%. On the collected dataset, training-set classification accuracy was 98.6% and semantic recognition accuracy on the testing set was 94.31%.

Author 1: Eman K Elsayed
Author 2: Doaa R. Fathy

Keywords: Deep Learning (DL); ontology; sign language translation

PDF

Paper 19: Malicious URL Detection based on Machine Learning

Abstract: Currently, the risks of network information insecurity are increasing rapidly in number and level of danger. The methods most used by hackers today attack end-to-end technology and exploit human vulnerabilities; these techniques include social engineering, phishing, pharming, etc. One step in conducting such attacks is deceiving users with malicious Uniform Resource Locators (URLs). As a result, malicious URL detection is of great interest nowadays. Several scientific studies have presented methods to detect malicious URLs based on machine learning and deep learning techniques. In this paper, we propose a malicious URL detection method using machine learning techniques based on our proposed URL behaviors and attributes. Moreover, big data technology is exploited to improve the capability of detecting malicious URLs based on abnormal behaviors. In short, the proposed detection system consists of a new set of URL features and behaviors, a machine learning algorithm, and big data technology. The experimental results show that the proposed URL attributes and behaviors significantly improve the ability to detect malicious URLs, suggesting that the proposed system may be considered an optimized and user-friendly solution for malicious URL detection.

Author 1: Cho Do Xuan
Author 2: Hoa Dinh Nguyen
Author 3: Tisenko Victor Nikolaevich

Keywords: URL; malicious URL detection; feature extraction; feature selection; machine learning

PDF
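Paper 19's exact feature set is its own; the kind of lexical URL attributes such detectors commonly compute can be extracted with the standard library as below. The particular features chosen here (length, digit and special-character counts, IP-literal host, character entropy) are illustrative assumptions, not the authors' list.

```python
import math
from collections import Counter
from urllib.parse import urlparse

def url_lexical_features(url):
    """Extract simple lexical features often used by malicious-URL classifiers."""
    host = urlparse(url).netloc
    counts = Counter(url)
    total = len(url)
    # Shannon entropy of the URL's characters; obfuscated URLs tend to score high.
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return {
        "url_length": total,
        "host_length": len(host),
        "num_digits": sum(ch.isdigit() for ch in url),
        "num_special": sum(ch in "-_@?%&=" for ch in url),
        "num_subdomains": max(host.count(".") - 1, 0),
        "has_ip_host": host.replace(".", "").isdigit(),
        "entropy": round(entropy, 3),
    }
```

The resulting dictionary is the sort of feature vector that would then be fed, across a large labeled URL corpus, to a classifier of the kind the paper trains.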

Paper 20: Noise Reduction in Spatial Data using Machine Learning Methods for Road Condition Data

Abstract: With the growth of the road transportation system, safety concerns for road travel are also increasing. To ensure road safety, various government and non-government efforts are visible to maintain road quality and the transport network. Road condition maintenance is on the verge of being automated for quick identification and repair of potholes, cracks, and patch works. The automation process is taking place in the majority of countries with the help of ICT-enabled frameworks and devices, the primary device being geolocation-enabled image capture equipment. Needless to say, the image capture process is always prone to noise, which must be removed for better downstream analysis. The spatial data collected from road networks is likewise prone to errors such as missing values or outliers caused by noise induced in the capture devices. Hence, the aim of the current research is to propose a complete solution for noise identification and removal from spatial road network data, making the automation process highly successful and highly accurate. In recent times, many parallel research attempts have addressed noise reduction in various aspects of spatial data; nevertheless, none has provided a single solution for all noise issues. Therefore, this work proposes three novel algorithms: spatial image noise reduction using adaptive moment filtration, missing-value noise removal from spatial data using adaptive logistic analysis, and outlier noise removal from the same data using a corrective logistic machine learning method. The outcome of this work is nearly 70% accuracy in image noise reduction and 90% accuracy for missing-value and outlier removal, while reducing information loss by nearly 50%. The final outcome of the work is to ensure higher accuracy for road maintenance automation.

Author 1: Dara Anitha Kumari
Author 2: A. Govardhan

Keywords: Spatial image moments; adaptive logistic de-noising; machine learning; noise removal; correlative corrections

PDF

Paper 21: Blockchain-based Electronic Voting System with Special Ballot and Block Structures that Complies with Indonesian Principle of Voting

Abstract: Blockchain technology could be implemented not only in digital currency, but also in other fields. One such implementation is in democratic life, namely voting. This research focuses on designing a blockchain-based electronic voting system for medium to large-scale usage that complies with law, specifically voting principles in Indonesia. In this research, we proposed the following: a ballot design as block transaction employing UUID version 4, a modified block structure using SHA3-256 hash algorithm, and a voting protocol. The minimum length of a ballot is 43 bytes (excluding ECDSA signature) if one character is used as candidate’s identifier and timestamp is stored as integer. We built a simulation program using Python-based Django web framework to cast 10,000 votes and mine them into blocks. Tampered transactions in each block could be detected and restored by synchronizing data with another node. We also evaluated the proposed system. By using this system, voters can exercise voting principles in Indonesia: direct, public, free, confidential, honest, and fair.

Author 1: Gottfried Christophorus Prasetyadi
Author 2: Achmad Benny Mutiara
Author 3: Rina Refianti

Keywords: Blockchain; voting; design; simulation; Python

PDF

Paper 22: Mobile Cloud Learning based on User Acceptance using DeLone and McLean Model for Higher Education

Abstract: Mobile learning has been used in the learning process in several tertiary institutions in Indonesia. However, some universities have not been able to implement mobile learning due to limitations of computer and network infrastructure. Cloud computing is a solution for institutions facing such limitations, in the form of internet-based services for their customers. This paper discusses the implementation of mobile cloud learning, a combination of mobile learning technology and cloud computing, evaluated with the updated DeLone and McLean model, an established success model for measuring how well a mobile cloud learning system performs. The F-test yielded Fcount = 13.222; since Fcount > Ftable (13.222 > 3.01), H0 was rejected and H1 was accepted. It can therefore be concluded that Information Quality, System Quality, and Service Quality together affect the Intensity of Use.

Author 1: Kristiawan Nugroho
Author 2: Sugeng Murdowo
Author 3: Farid Ahmadi
Author 4: Tri Suminar

Keywords: Mobile learning; cloud computing; DeLone; McLean; model

PDF

Paper 23: Open Challenges for Crowd Density Estimation

Abstract: Nowadays, many emergency and surveillance systems are related to the management of crowds. Supervising a crowded area presents a great challenge, especially when the size of the crowd is unknown. This issue is the starting point for the field of crowd estimation based on density or counts. The density of a crowded area is an important topic in many kinds of applications, such as surveillance, security, biology, and traffic. In this paper, we not only present a deep review of the different approaches and techniques used in previous work to estimate the size of a crowd, but also describe the different datasets used. A comparison of related works based on the weaknesses and strengths of each approach is provided to highlight the important research keys in the field of crowd estimation.

Author 1: Shaya A Alshaya

Keywords: Crowd density; count density; deep learning; CNN; datasets; metrics

PDF

Paper 24: Model for Measuring benefit of Government IT Investment using Fuzzy AHP

Abstract: Information Technology (IT) has become mandatory for every organization, including government. Investment in IT can help government deliver services to citizens, and every IT investment should give the maximum result. Measuring the benefit of an IT investment is needed to make sure that it has delivered on its missions and goals. There are plenty of models for measuring the feasibility of an IT investment before implementation, but still few models for measuring the investment after implementation. This paper proposes a model to measure the benefit of an IT investment after implementation, especially in government organizations. The model uses the generic IS/IT business value taxonomy, which consists of 13 categories and 73 sub-categories. Each category is weighted according to organizational preference using the Fuzzy Analytic Hierarchy Process (FAHP). The model is applied to measure IT investments in the Ministry of Finance of the Republic of Indonesia, namely the SPAN and SAKTI applications. The weighted benefit score of SPAN is 76.39%, while the original score is 75.89%; the weighted benefit score of SAKTI is 68.08%, while the original score is 67.33%. The differences between the original and weighted scores indicate that the model accommodates the organization's preferences in the evaluation.
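The difference between the "original" and "weighted" benefit scores reported above can be illustrated with a small sketch. The category names, scores, and FAHP-derived weights below are invented for illustration; the paper's actual taxonomy has 13 categories and 73 sub-categories.

```python
# Hypothetical per-category benefit scores (fraction of maximum benefit)
# and normalized weights as they might come out of an FAHP prioritization.
scores  = {"efficiency": 0.80, "effectiveness": 0.70, "transparency": 0.75}
weights = {"efficiency": 0.50, "effectiveness": 0.30, "transparency": 0.20}

original = sum(scores.values()) / len(scores)           # unweighted mean
weighted = sum(weights[c] * scores[c] for c in scores)  # weights sum to 1

print(round(original, 4), round(weighted, 4))
```

The weighted score moves toward the categories the organization ranks highest, which is exactly why the SPAN and SAKTI weighted scores differ slightly from their unweighted originals.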

Author 1: Prih Haryanta
Author 2: Azhari Azhari
Author 3: Khabib Mustofa

Keywords: IT investment; government investment; ex-post evaluation; benefit creation; fuzzy AHP; analytic hierarchy process

PDF

Paper 25: Link Breakage Time Prediction Algorithm for Efficient Power and Routing in Unmanned Aerial Vehicle Communication Networks

Abstract: UAV Communication Networks (UAVCN) come under the umbrella of ad hoc network technology. They differ critically from existing wireless networks in their high mobility, high speed, and dynamic topology changes caused by rapid movement, which create the problem of link breakages and degrade routing performance. This problem lowers UAVCN throughput and reduces the packet delivery ratio. In this paper, we address the problem by considering the received signal power strength (RSPS). We propose an algorithm that uses the received signal power strength and time to predict the link breakage time using the interpolation method. We implemented the proposed technique by modifying the OLSR protocol; the extended protocol, termed EPOLSR, uses signal power strength and time efficiently and increases UAVCN performance. The extended protocol was implemented in the research tool network simulator (v3). The metrics received rate, number of received packets, throughput, and packet delivery ratio (PDR) are considered for evaluation. We compared the proposed EPOLSR with existing routing protocols and observed that the modified protocol performs better than all evaluated routing approaches.
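A linear interpolation of RSPS samples over time, as the abstract describes, can be sketched as below. This is an assumption-level illustration: the paper's exact formula, sample count, and breakage threshold are not given in the abstract, and the numbers here are made up.

```python
def predict_break_time(t1, p1, t2, p2, p_threshold):
    """Linearly extrapolate received signal power strength (RSPS) over time
    and return the time at which it falls to the breakage threshold.
    Assumes power is decreasing between the two samples (nodes moving apart)."""
    slope = (p2 - p1) / (t2 - t1)            # dBm per second, negative when fading
    return t2 + (p_threshold - p2) / slope

# Power fell from -60 dBm at t=0 s to -70 dBm at t=2 s; link breaks at -90 dBm.
print(predict_break_time(0.0, -60.0, 2.0, -70.0, -90.0))  # 6.0
```

A routing protocol can use such a predicted time to switch to an alternative route before the link actually breaks, which is the motivation behind EPOLSR.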

Author 1: Haque Nawaz
Author 2: Husnain Mansoor Ali

Keywords: UAV; link breakage; algorithm; power; RSPS; routing

PDF

Paper 26: IoT System for Sleep Quality Monitoring using Ballistocardiography Sensor

Abstract: Sleep is very important for people to preserve their physical and mental health. The development of the ballistocardiography (BCG) sensor enables day-to-day, portable monitoring at home. The goal of this study is to develop an IoT sleep quality monitoring system using a BCG sensor, a microcontroller, and a cloud server. The BCG sensor produces ECG data from the physical activity of the patient; this data is read from the sensor, collected, and pre-processed by the microcontroller. The microcontroller then transmits the data obtained from the BCG sensor to the cloud server for further analysis, i.e. to assess sleep quality. The evaluation of data transmission efficiency and resource consumption is carried out in this paper. The findings show that the proposed method achieves higher efficiency and lower response time, and decreases memory usage by up to 77% compared to the conventional method.

Author 1: Nico Surantha
Author 2: C’zuko Adiwiputra
Author 3: Oei Kurniawan Utomo
Author 4: Sani Muhamad Isa
Author 5: Benfano Soewito

Keywords: Internet-of-Things; sleep quality; ballisto-cardiography; HRV; ECG

PDF

Paper 27: A Critical Review on Adverse Effects of Concept Drift over Machine Learning Classification Models

Abstract: Big Data (BD) is playing a big part in the current computing revolution. Industries and organizations are utilizing its insights for business intelligence using Machine Learning Models (ML-Models). Deep Learning Models (DL-Models) have proven to be a better choice than Shallow Learning Models (SL-Models). However, the dynamic characteristics of BD introduce many critical issues for DL-Models; Concept Drift (CD) is one of them. The CD issue frequently appears in online supervised learning environments in which data trends change over time, and it may worsen in the BD environment due to veracity and variability factors. Due to CD, the accuracy of classification results degrades in ML-Models, which may render them inapplicable. Therefore, ML-Models need to adapt quickly to changes to maintain the accuracy of their results. In current solutions, a substantial improvement in accuracy and adaptability is needed to make ML-Models robust in a non-stationary environment, and consolidated information on this issue is not available in the existing literature. Therefore, in this study, we carry out a systematic critical literature review to discuss the Concept Drift taxonomy and to identify the adverse effects of CD and the existing approaches to mitigate it.

Author 1: Syed Muslim Jameel
Author 2: Manzoor Ahmed Hashmani
Author 3: Hitham Alhussain
Author 4: Mobashar Rehman
Author 5: Arif Budiman

Keywords: Big data classification; machine learning; online supervised learning; concept drift; Adaptive Convolutional Neural Network Extreme Learning Machine (ACNNELM); Meta-Cognitive Online Sequential Extreme Learning Machine (MOSELM); Online Sequential Extreme Learning Machine (OSELM); Real Drift (RD); Virtual Drift (VD); Hybrid Drift (HD); Deep Learning (DL); Shallow Learning (SL); Concept Drift (CD)

PDF

Paper 28: Scalability Performance for Low Power Wide Area Network Technology using Multiple Gateways

Abstract: Low Power Wide Area Network is one of the leading technologies for the Internet of Things, and the capability to scale is one of the criteria on which such technologies are compared. The technology uses a star network topology for communication between end-nodes and the gateway. The star topology enables the network to support a large number of end-nodes, and deploying multiple gateways in the network can increase that number even more. This paper investigates the performance of Low Power Wide Area Network technology, focusing on the capability of the network to scale using multiple gateways as receivers. We model the network system based on the communication behaviour between end-nodes and gateways, including the communication range limit within which a data signal from an end-node can successfully be received by the gateways. The scalability of the technology is measured by the packet data successfully received at the gateways. The simulation was carried out over several parameters, such as the number of end-nodes, gateways, and channels, as well as the application time. The results show that the amount of data successfully received at the gateways increases as the number of gateways, the application time, and the number of channels used increase.

Author 1: N.A. Abdul Latiff
Author 2: I.S. Ismail
Author 3: M. H. Yusoff

Keywords: Low power wide area network; scalability; simulation; multiple gateways

PDF

Paper 29: Tuberculosis Prevention Model in Developing Countries based on Geospatial, Cloud and Web Technologies

Abstract: Information is important when making decisions. Decisions based on gut feeling and made in the absence of evidence tend to be less effective in most situations. This is also the case for Tuberculosis (TB) disease control and prevention intervention planning and implementation: the lack of evidence-based information upon which to act has proved less effective in preventing the disease, as TB keeps spreading. The aim of this paper is to design and develop a prototype system that provides TB program managers with information and tools for making decisions that can effectively influence the fight against the spread of TB, through the application of cloud computing, geospatial data analysis, and web technologies. The system improves disease monitoring and tracking by displaying the geographical distribution of TB cases in the communities on a mapping application, as well as by providing reports that TB program managers can use when planning and implementing disease control and prevention activities.

Author 1: Innocent Mwila
Author 2: Jackson Phiri

Keywords: Evidence-based; monitoring; cloud computing; geospatial data analysis; mapping; web technologies; information; decision-making; tuberculosis prevention

PDF

Paper 30: Novel Language Resources for Hindi: An Aesthetics Text Corpus and a Comprehensive Stop Lemma List

Abstract: This paper is an effort to complement the contributions made by researchers working toward the inclusion of non-English languages in natural language processing studies. Two novel Hindi language resources have been created and released for public consumption. The first resource is a corpus consisting of nearly a thousand pre-processed fictional and non-fictional texts spanning over a hundred years. The second resource is an exhaustive list of stop lemmas created from 12 corpora across multiple domains, consisting of over 13 million words, from which more than 200,000 lemmas were generated, and from 11 publicly available stop word lists comprising over 1000 words, from which nearly 400 unique lemmas were generated. This research emphasizes the use of stop lemmas instead of stop words: stop word lists contain various, but not all, morphological forms of a word, whereas a stop lemma list contains only the root form, from which variations can be derived if required. It was also observed that stop lemmas were more consistent across multiple sources than stop words. In order to generate the stop lemma list, the parts of speech of the lemmas were investigated but rejected, as no significant correlation was found between the rank of a word in the frequency list and its part of speech. The stop lemma list was assessed using a comparative method. A formal evaluation method is suggested as future work arising from this study.
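The advantage of stop lemmas over stop words described above can be sketched in a few lines: mapping each surface form to its lemma lets one root entry cover every inflected variant. The lemma map and stop-lemma set below are illustrative transliterated stand-ins, not entries from the paper's actual Hindi resources.

```python
# Toy surface-form -> lemma map; a real one would come from a lemmatizer.
lemma_of = {"karta": "kar", "karti": "kar", "kiya": "kar",
            "ghar": "ghar", "jata": "ja"}
stop_lemmas = {"kar", "ja"}   # two root forms cover five inflected tokens

tokens = ["karta", "ghar", "jata", "kiya"]
# Keep a token only if its lemma (or the token itself, if unknown) is not a stop lemma.
content = [t for t in tokens if lemma_of.get(t, t) not in stop_lemmas]
print(content)  # ['ghar']
```

A plain stop word list would need to enumerate "karta", "karti", "kiya", and "jata" individually, and would silently miss any form it did not list.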

Author 1: Gayatri Venugopal-Wairagade
Author 2: Jatinderkumar R. Saini
Author 3: Dhanya Pramod

Keywords: Hindi; corpus; aesthetics; stopwords; stoplemmas

PDF

Paper 31: An IoT based Home Automation Integrated Approach: Impact on Society in Sustainable Development Perspective

Abstract: In recent years, due to the substantial evolution of consumer electronics, society is striving to optimize efficiency, energy savings, green technology and environmental sustainability in daily life at home. Most people control and monitor home appliances manually and therefore face many problems in managing natural resources, cost, effort and security, which leads to an uncomfortable and unreliable life. Numerous 'intelligent' devices such as smartphones, tablets and air-conditioners have promoted the key concept of Internet of Things (IoT)-based home automation. Embedded with technology, these devices can be remotely monitored and controlled over the Internet from home or anywhere in the world. Over the past few decades, global warming has become a severe worldwide challenge, and sustainable development and green technology play an important role in addressing climate change. The primary purpose of this study is to save natural resources, reduce energy consumption, and understand the impact of home automation on society in order to achieve the goals of green technology and environmental sustainability. This paper proposes an IoT-based home automation approach integrated with smart meters, solar, wind and geothermal renewable energy resources, and a government green awareness program, to extensively optimize energy consumption, security, cost, convenience and a cleaner environment for society. In addition, a survey was conducted among the target audience to identify and evaluate its impact on the environment and society from a sustainable development perspective. The survey results were statistically analyzed using IBM SPSS Statistics version 23 and revealed that home automation has a significant impact on society.

Author 1: Yasir Mahmood
Author 2: Nazri Kama
Author 3: Azri Azmi
Author 4: Suraya Ya’acob

Keywords: Internet of Things; smart home; sustainable development; home automation; environment sustainability

PDF

Paper 32: Crime Mapping Model based on Cloud and Spatial Data: A Case Study of Zambia Police Service

Abstract: Crime mapping is a strategy used to detect and prevent crime in the police service. The technique involves the use of geographical maps to help crime analysts identify and profile crimes committed in different residential areas, as well as to determine the best methods of responding. The development of geographic information system (GIS) technologies and spatial analysis applications, coupled with cloud computing, has significantly improved the ability of crime analysts to perform this crime mapping function. The aim of this research is to automate the processes involved in crime mapping using spatial data. A baseline study was conducted to identify the challenges in the current crime mapping system used by the Zambia Police Service. The results show that 85.2% of the stations conduct crime mapping using physical geographical maps with pins placed on the map, while 14.8% indicated that they do not use any form of crime mapping technique. In addition, the study revealed that all participating stations collect and process crime reports and statistics manually and keep the results in books and on paper. The results of the baseline study were used to develop the business processes and a crime mapping model, which was implemented successfully. The proposed model includes spatial visualization of crime data based on Google Maps and is built on a cloud architecture, an Android mobile application, a web application, the Google Maps API and the Java programming language. A prototype was successfully developed, and test results show improved visualization and reporting of crime data with reduced dependency on manual transactions; the prototype also proved more effective than the current system.

Author 1: Jonathan Phiri
Author 2: Jackson Phiri
Author 3: Charles S. Lubobya

Keywords: Zambia police; web application; mobile application; cloud model; crime mapping; spatial data

PDF

Paper 33: Classification Models for Determining Types of Academic Risk and Predicting Dropout in University Students

Abstract: Academic performance is studied not only to identify students who could drop out of their studies, but also to classify them according to the type of academic risk in which they find themselves. An application has been implemented that uses academic information provided by the university and generates classification models from three different algorithms: artificial neural networks, ID3 and C4.5. The models use a set of variables and criteria for their construction and can be used to classify student desertion and, more specifically, to predict the type of academic risk. The performance of these models was compared to select the one that provided the best results for classifying students. The decision tree algorithms, C4.5 and ID3, presented better measurements than the artificial neural network; the tree generated using the C4.5 algorithm showed the best performance metrics, with correctness, accuracy and sensitivity equal to 0.83, 0.87 and 0.90, respectively. From the classification for student desertion it was concluded, according to the model generated using the C4.5 algorithm, that the ratio of credits approved by a student to the credits that the student should have taken is the most significant variable. The classification by type of academic risk generated a tree model indicating that the number of abandoned subjects is the most significant variable. The admission modality through which the student entered the university did not turn out to be significant, as it does not appear in the generated decision tree.

Author 1: Norka Bedregal-Alpaca
Author 2: Víctor Cornejo-Aparicio
Author 3: Joshua Zárate-Valderrama
Author 4: Pedro Yanque-Churo

Keywords: Educational data mining; ID3 algorithm; C4.5 algorithm; artificial neural network; classification algorithms; student desertion; academic risk

PDF

Paper 34: Predicting the Future Transaction from Large and Imbalanced Banking Dataset

Abstract: Machine learning (ML) algorithms are being adopted rapidly for a range of applications in the finance industry. In this paper, we used a structured dataset of Santander bank, published on the data science and machine learning competition site kaggle.com, to predict whether a customer will make a transaction. The dataset consists of two classes and is imbalanced. To handle the imbalance as well as to achieve prediction with the least log loss, we used a variety of methods and algorithms. The provided dataset is partitioned into two sets of 200,000 entries each for training and testing; 50% of the data is kept hidden on the competition server for evaluation of the submission. A detailed exploratory data analysis (EDA) of the dataset was performed to check the distributions of values, and the correlation between features and the importance of features were calculated, using random forests and decision trees for the feature importance. Furthermore, principal component analysis and linear discriminant analysis were used for dimensionality reduction. We applied nine different algorithms to the dataset: logistic regression (LR), random forests (RF), decision trees (DT), multilayer perceptron (MLP), the gradient boosting method (GBM), category boosting (CatBoost), extreme gradient boosting (XGBoost), adaptive boosting (AdaBoost) and light gradient boosting (LightGBM). We framed LightGBM as a regression problem on the dataset, and it outperforms the state-of-the-art algorithms with 85% accuracy. We then fine-tuned the hyperparameters for our dataset in combination with LightGBM; this tuning improves performance, and we achieved 89% accuracy.
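The log loss metric that the abstract optimizes is straightforward to compute by hand. The sketch below is a plain stdlib implementation of binary cross-entropy with probability clipping; the toy labels and probabilities are invented to mimic a heavily imbalanced sample.

```python
import math

def log_loss(y_true, y_prob, eps=1e-15):
    """Binary cross-entropy: the metric the competition submission minimizes."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)      # clip to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)

# A heavily imbalanced toy sample: 1 positive among 4 negatives.
y_true = [0, 0, 0, 0, 1]
y_prob = [0.10, 0.20, 0.05, 0.10, 0.80]
print(round(log_loss(y_true, y_prob), 4))
```

Because log loss punishes confident wrong probabilities severely, a classifier on imbalanced data cannot simply predict the majority class with high confidence, which is part of why resampling and careful probability calibration matter here.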

Author 1: Sadaf Ilyas
Author 2: Sultan Zia
Author 3: Umair Muneer Butt
Author 4: Sukumar Letchmunan
Author 5: Zaib un Nisa

Keywords: Machine Learning (ML); banking; Santander; transactions; prediction; imbalanced; unbalanced; skewed; hyperparameter; oversampling; undersampling; EDA; dimensionality reduction; PCA; LDA; LR; RF; DT; MLP; GBM; CatBoost; XGBoost; AdaBoost; LightGBM

PDF

Paper 35: SBAG: A Hybrid Deep Learning Model for Large Scale Traffic Speed Prediction

Abstract: Accurate traffic speed prediction is a fundamental requirement of an Intelligent Transportation System (ITS). The proposed hybrid model, Stacked Bidirectional LSTM and Attention-based GRU (SBAG), is used for predicting large-scale traffic speed. To capture bidirectional temporal dependencies and spatial features, a BDLSTM and an attention-based GRU are exploited. This is the first time in traffic speed prediction that a bidirectional LSTM and an attention-based GRU are used as building blocks of the network architecture to measure the backward dependencies of a network. We also examined the behaviour of the attention layer in our proposed model. We compared the proposed model with state-of-the-art models, e.g. Fully Convolutional Network, Gated Recurrent Unit, Long Short-Term Memory and Bidirectional Long Short-Term Memory, and achieved superior performance in large-scale traffic speed prediction.

Author 1: Adnan Riaz
Author 2: Muhammad Nabeel
Author 3: Mehak Khan
Author 4: Huma Jamil

Keywords: Attention mechanism; large scale traffic prediction; Gated Recurrent Unit (GRU); Bidirectional Long-short term Memory (BiLSTM); Intelligent Transportation System (ITS)

PDF

Paper 36: Categorizing Attributes in Identifying Learning Style using Rough Set Theory

Abstract: In the learning process, learning style is one crucial factor that should be considered. However, it is still challenging to determine a student's learning style, especially in online learning activities. Data-driven methods such as artificial intelligence and machine learning are the latest and most popular approaches for predicting learning style, but they involve complex data and attributes, which makes them computationally heavy. On the other hand, the literature-based approach is limited by inconsistency between its results and observed learning behavior. Combining both approaches gives a better accuracy level, but still leaves issues such as ambiguity and a wide range of attribute values. These issues can be reduced by finding the right approach to categorizing attributes. Rough set theory offers a simple way to cope with ambiguity, vagueness and uncertainty: it generates rules that can be used to predict or classify decision attributes. Yet, because the method operates on categorical data, the categories of the attributes must be chosen carefully. Hence, this research investigated several ways of categorizing attributes in learning style identification. The results showed that the approach gives a better prediction of learning style; different categorizations give different results in terms of accuracy level, the number of eliminated data points, the number of eliminated attributes, and the number of generated rules. For decision making, these criteria can be balanced against each other.
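The rough-set rule generation the abstract relies on can be sketched with a tiny decision table. The attributes, categories, and decisions below are invented for illustration; the idea shown is the standard one of grouping objects that are indiscernible on the conditional attributes and keeping only consistent groups as certain rules.

```python
from collections import defaultdict

# Toy decision table: categorized conditional attributes -> learning-style decision.
table = [
    ({"logins": "high", "video": "high"}, "visual"),
    ({"logins": "high", "video": "low"},  "verbal"),
    ({"logins": "low",  "video": "high"}, "visual"),
    ({"logins": "high", "video": "high"}, "visual"),
    ({"logins": "low",  "video": "high"}, "verbal"),   # conflicts with row 3
]

# Group objects that are indiscernible on the conditional attributes.
classes = defaultdict(set)
for cond, decision in table:
    classes[tuple(sorted(cond.items()))].add(decision)

# A consistent class (exactly one decision value) yields a certain rule;
# inconsistent classes (the ambiguity the abstract mentions) are dropped.
rules = {k: v.pop() for k, v in classes.items() if len(v) == 1}
for cond, decision in sorted(rules.items()):
    print(dict(cond), "=>", decision)
```

Note how the choice of categories matters: merging "high" and "low" logins into one category would collapse the first two rows into an inconsistent class and eliminate both rules, which is the trade-off between accuracy, eliminated data, and rule count that the abstract evaluates.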

Author 1: Dadang Syarif Sihabudin Sahid
Author 2: Riswan Efendi
Author 3: Emansa Hasri Putra
Author 4: Muhammad Wahyudi

Keywords: Learning style; rough set; categorizing attributes; conditional attributes; decision attributes

PDF

Paper 37: Facial Emotion Recognition using Neighborhood Features

Abstract: We present a new method for recognizing human facial emotions. Initially, we detect faces in the images using the well-known cascade classifiers. We then extract a localized regional descriptor (LRD), which represents the features of a face based on regional appearance encoding. The LRD formulates and models various spatial regional patterns based on the relationships between local areas themselves, instead of considering only the raw, unprocessed intensity features of an image. To classify facial emotions into various classes, we train a multiclass support vector machine (M-SVM) classifier, which recognizes these emotions during the testing stage. Our proposed method relies on robust features and is independent of gender and facial skin color; moreover, it is illumination and orientation invariant. We assessed our method on two benchmark datasets and compared it with four reference methods, outperforming them on both datasets.

Author 1: Abdulaziz Salamah Aljaloud
Author 2: Habib Ullah
Author 3: Adwan Alownie Alanazi

Keywords: Haar features; feature integration; emotion recognition; face detection; localized features; multiclass SVM classifier

PDF

Paper 38: A New Solution to Protect Encryption Keys when Encrypting Database at the Application Level

Abstract: Encrypting databases at the application level (client level) is one of the most effective ways to secure data. This data security strategy has the advantage of resisting attacks performed by database administrators. However, the data and encryption keys are necessarily stored in the clear at the client level, which raises a problem of trust vis-à-vis the client, since the client is not always a trusted site and can attack the encryption keys at any time. In this work, we propose an original solution that protects encryption keys against internal attacks when database encryption is implemented at the application level. The principle of our solution is to transform the encryption keys defined in the application files into other keys, considered the real keys for encryption and decryption of the database, using protection functions stored within the database server. Our proposed solution is an effective way to secure keys, especially if the server is a trusted site. The implementation results showed better protection of encryption keys and an efficient data encryption/decryption process. In fact, any malicious attempt by the client to obtain the encryption keys from the application level cannot succeed, since the real values of the keys are not defined there.

Author 1: Karim El bouchti
Author 2: Soumia Ziti
Author 3: Fouzia Omary
Author 4: Nassim Kharmoum

Keywords: Database encryption; encryption key protection model; database encryption keys protection; data security

PDF

Paper 39: Neural Network Supported Chemiresistor Array System for Detection of NO2 Gas Pollution in Smart Cities (NN-CAS)

Abstract: A neural-network-supported chemiresistor array system is designed and laboratory tested for the detection of gases emitted by vehicles and other sources of pollution. The system is based on an integrated PbPc array of chemiresistors that sends signals corresponding to the emitted NO2 gas to a signal processing unit. The process uses the relative conductivity of the edge sensors with respect to the central sensor as an indicator of response characteristics and for profiling the NO2 pollution level. The process continues up to the limit where the edge sensors' relative conductivity values equalize; the relative conductivity of the edge sensors is then used as a control value to shut down the sampling system and send a warning message of excessive pollution. Pollution could be due to a number of factors besides vehicles, such as gas leaks. Optimization of the array elements' response is carried out using neural networks (the back-propagation algorithm). The proposed system is promising and could be further developed to become a vital, integrated part of Intelligent Transportation Systems (ITS) for monitoring the emission of hazardous gases, and could be integrated with the Road Side Units (RSUs) of urban areas in smart cities.

Author 1: Mahmoud Zaki Iskandarani

Keywords: Gases; chemiresistors; neural networks; sensor array; correlation; road side unit; intelligent transportation systems; smart cities

PDF

Paper 40: A Framework for Detecting Botnet Command and Control Communication over an Encrypted Channel

Abstract: Botnets employ advanced evasion techniques to avoid detection. One such technique is hiding their command and control communication in an encrypted channel such as SSL or TLS. This paper provides a Botnet Analysis and Detection System (BADS) framework for detecting botnets. The BADS framework has been used as a guideline to devise the methodology, which we divide into six phases: (i) data collection, customization, and conversion; (ii) feature extraction and feature selection; (iii) botnet prediction and classification; (iv) botnet detection; (v) attack notification; and (vi) testing and evaluation. We use machine learning algorithms for botnet prediction and classification, and we also discuss several challenges found in implementing this work. This research aims to detect botnets over an encrypted channel with high accuracy and fast detection time, while providing autonomous management to the network manager.

Author 1: Zahian Ismail
Author 2: Aman Jantan
Author 3: Mohd. Najwadi Yusoff

Keywords: Botnet; Botnet Analysis and Detection System (BADS); encrypted channel; machine learning; accuracy; autonomous

PDF

Paper 41: The Multi-Class Classification for the First Six Surats of the Holy Quran

Abstract: The Holy Quran is one of the holy books, revealed to the prophet Muhammad in the form of separate verses. These verses were written on tree leaves, stones, and bones during his life; as such, they were not arranged or grouped into one book until later. There is no intelligent system able to distinguish the verses of Quran chapters automatically. Accordingly, in this study we propose a model that can recognize and categorize Quran verses automatically and identify the essential features through chapter classification for the first six Surahs of the Holy Quran, based on machine learning techniques. Classifying Quran verses into chapters using machine learning classifiers is considered an intelligent task. Classification algorithms like Naïve Bayes, SVM, KNN, and the decision tree J48 help to classify texts into categories or classes. The target of this research is to use machine learning algorithms for text classification of the Holy Quran verses. As the Quran consists of 114 chapters, we work only with the first six. In this paper, we build a multi-class classification model for the chapter names of the Quranic verses using a Support Vector Classifier (SVC) and GaussianNB. The results show a best overall accuracy of 80% for the SVC and 60% for Gaussian Naïve Bayes.

Author 1: Nouh Sabri Elmitwally
Author 2: Ahmed Alsayat

Keywords: Text classification; machine learning; natural language processing; text pre-processing; feature selection; data mining; Holy Quran

PDF
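The abstract names Gaussian Naive Bayes as one of the two classifiers compared. As an illustration only, here is a minimal NumPy implementation of that classifier on synthetic feature vectors (the features and labels are stand-ins, not actual Quran-verse features):

```python
import numpy as np

# Minimal Gaussian Naive Bayes: per-class feature means/variances plus a
# class prior, prediction by maximum log-likelihood.
class GaussianNB:
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self

    def predict(self, X):
        # log p(c) + sum_j log N(x_j | mu_cj, var_cj), argmax over classes
        ll = (-0.5 * ((X[:, None, :] - self.mu) ** 2 / self.var)
              - 0.5 * np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(5, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
acc = (GaussianNB().fit(X, y).predict(X) == y).mean()
```

In the paper's setting the feature vectors would come from the text pre-processing and feature selection stages listed in the keywords.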

Paper 42: Cancelable Face Template Protection using Transform Features for Cyberworld Security

Abstract: The cyber world has become a fundamental and vital component of the physical world with the increasing dependence on internet-connected devices in industry and government organizations. Providing privacy and security to users during online communication poses unique cybersecurity challenges for industry and government. Intrusion is one of the crucial issues of cybersecurity, which can be countered by providing vigorous authentication solutions. Biometric authentication is used in different cybersecurity systems for user authentication. Cancelable biometrics is a solution to the privacy problems of traditional biometric systems. This paper proposes a new cancelable face authentication method for cyberworld security, which uses a Hybrid Gabor PCA (HGPCA) descriptor. The proposed method uses the wavelet transform for the extraction of features from face images by means of a Gabor filter and Principal Component Analysis (PCA). Both types of features are then fused using a simple concatenation scheme. Next, scrambling is applied to the fused features using a random key generated by the user. Finally, the scrambled fused features are stored in the database and used for cancelable biometric authentication as well as recovery. HGPCA achieves "cancelability" and increases authentication accuracy. The proposed method has been tested on three standard face datasets. Experimental results have been compared with existing methods using standard quantitative measures and show superiority over existing methods.

Author 1: Firdous Kausar

Keywords: Cancelable biometrics; face authentication; feature extraction; Gabor filter; Principal Component Analysis (PCA); wavelet transformation

PDF
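The "cancelability" step described above (concatenate two feature sets, then scramble with a user key) can be sketched conceptually as a key-seeded permutation; the function names and feature vectors below are illustrative, not the paper's implementation:

```python
import numpy as np

# Fuse two feature vectors by concatenation, then scramble with a
# permutation derived from the user's key. A new key yields a different,
# revocable template from the same underlying features.
def scramble(gabor_feat, pca_feat, user_key):
    fused = np.concatenate([gabor_feat, pca_feat])
    perm = np.random.default_rng(user_key).permutation(fused.size)
    return fused[perm]

def unscramble(template, user_key):
    perm = np.random.default_rng(user_key).permutation(template.size)
    out = np.empty_like(template)
    out[perm] = template          # invert the permutation
    return out

g, p = np.arange(4.0), np.arange(10.0, 14.0)
t = scramble(g, p, user_key=1234)
recovered = unscramble(t, user_key=1234)
```

Only the holder of the key can restore the fused features, which is what allows a compromised template to be revoked and re-issued.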

Paper 43: Development of Flipbook using Web Learning to Improve Logical Thinking Ability in Logic Gate

Abstract: The multimedia-based learning process has great potential to change the way of learning. One example is the Flipbook, multimedia developed from textbooks that offers ease of reading and learning without carrying a thick book. The purpose of this study is to produce a Flipbook-assisted web learning product and to improve logical thinking ability by using it. The research uses the 4D model, consisting of define, design, develop, and disseminate. Expert data validation analysis of the developed product obtained an overall percentage of 83.92%, in the excellent category. Validation by media experts obtained an overall percentage of 80%, in the excellent category, and validation by peers obtained an overall percentage of 84.78%, also in the excellent category. All analyses show that the Flipbook-assisted web learning teaching materials are appropriate for use. The t-test result of 10.25, higher than the critical value of 2.045, shows that using the web-learning-assisted logic gate Flipbook increases logical thinking ability, and the N-Gain of 0.39, in the medium criteria, confirms that the web-learning-assisted logic gate Flipbook improves logical thinking ability.

Author 1: Rizki Noor Prasetyono
Author 2: Rito Cipta Sigitta Hariyono

Keywords: Multimedia based learning; flipbook; web learning; logical thinking; logic gates

PDF

Paper 44: Automatic Detection and Correction of Blink Artifacts in Single Channel EEG Signals

Abstract: Ocular Artifacts (OAs) are inevitable during EEG acquisition and make signal analysis critical. Detection and correction of these artifacts is a major problem nowadays. In this paper an energy detection method is used to detect the artifacts, and wavelet thresholding is performed within the detected zones to protect neural data in non-blink regions. Various combinations of Wavelet Transform (WT) techniques and threshold functions are collated, and the optimum combination for OA separation is identified. The output of these methods at blink regions is compared in terms of various standard metrics. Results of this study demonstrate that SWT+HT is better at rejecting the artifacts than the other methods in this paradigm.

Author 1: G Bhaskar N Rao
Author 2: Anumala Vijay Sankar
Author 3: Peri Pinak Pani
Author 4: Aneesh Sidhireddy

Keywords: Electroencephalogram (EEG); ocular artifacts; wavelet transform; hybrid threshold

PDF
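The energy-detection stage the abstract describes — flag high-energy windows so thresholding can be confined to blink zones — can be sketched as follows; the window size and threshold rule are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

# Flag windows whose short-time energy exceeds a multiple of the median
# window energy; wavelet thresholding would then be applied only there,
# leaving neural data in non-blink regions untouched.
def blink_zones(eeg, win=64, k=3.0):
    n = len(eeg) // win
    energy = np.array([np.sum(eeg[i * win:(i + 1) * win] ** 2) for i in range(n)])
    thresh = k * np.median(energy)
    return np.flatnonzero(energy > thresh)  # indices of suspect windows

rng = np.random.default_rng(1)
sig = rng.normal(0, 1.0, 1024)
sig[300:360] += 25.0          # simulated blink deflection
zones = blink_zones(sig)      # windows 4 and 5 cover samples 256-384
```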

Paper 45: Usability of Mobile Assisted Language Learning App

Abstract: The aim of this study is to evaluate the usability of a Mobile Assisted Language Learning app, the Literacy and Numeracy Drive (LND), a smartphone application for learning language and mathematics in public sector primary schools of Punjab, the biggest province of Pakistan. In this study, usability tests were conducted, including questionnaire surveys of teachers and students. The user experience, reliability, and performance of the mobile application were assessed, along with user satisfaction. The LND mobile application was not found to be successful: it has a poor user interface and requires improvement. The "Using Experience," "Ease of Use," and "Usefulness" variables were the lowest scorers in terms of user experience. Mobile device specifications were complicated and confusing, and the services provided by the LND were neither appealing nor effective for students or teachers. Based on the assessed user experience, this research suggests several improvements to the usability and functionality of the LND application. Many schools have chosen to use mobile apps for the teaching and evaluation of language at school. The use of mobile-assisted learning in public sector schools in Punjab invites us to gauge the usability and effectiveness of this approach at such a huge scale, which will make it more effective.

Author 1: Kashif Ishaq
Author 2: Nor Azan Mat Zin
Author 3: Fadhilah Rosdi
Author 4: Adnan Abid
Author 5: Qasim Ali

Keywords: Literacy and numeracy drive; usability; user experience; mobile app; assessment; public school

PDF

Paper 46: Classification of Non-Discriminant ERD/ERS Comprising Motor Imagery Electroencephalography Signals

Abstract: Classification of Motor Imagery (MI) Electroencephalography (EEG) signals has always been an important aspect of Brain Computer Interface (BCI) systems. Event Related Desynchronization (ERD) / Event Related Synchronization (ERS) plays a significant role in finding discriminant features of MI EEG signals. ERD/ERS is one type of brain response and Evoked Potential (EP) is another. This study focuses on the classification of MI EEG signals by Removing Evoked Potentials (REP) from non-discriminant MI EEG data during filter band selection. This optimization is done to enhance classification performance. A comprehensive comparison of several pipelines is presented using well-known feature extraction methods, namely Common Spatial Pattern (CSP) and XDawn. The effectiveness of REP is demonstrated on the PhysioNet dataset, an online data resource. The performance of the pipelines, including the proposed one (Common Spatial Pattern (CSP) and Gaussian Process Classifier (GPC)), is compared before and after applying REP. It is observed that the REP approach improves the classification accuracy of all subjects and all pipelines, including state-of-the-art algorithms, by up to 20%.

Author 1: Zaib unnisa Asi
Author 2: M. Sultan Zia
Author 3: Umair Muneer Butt
Author 4: Aneela Abbas
Author 5: Sadaf Ilyas

Keywords: MI EEG Signals; non-discriminant ERD/ERS; evoked potentials; common spatial pattern; Gaussian process classifier

PDF
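As background for readers unfamiliar with CSP, the feature extractor named above can be computed from the two class covariance matrices by joint diagonalization; this is a generic textbook sketch with synthetic covariances, not the paper's pipeline:

```python
import numpy as np

# Common Spatial Patterns: whiten with the composite covariance, then
# eigendecompose the whitened class-1 covariance. The resulting filters
# diagonalize both class covariances simultaneously.
def csp_filters(C1, C2):
    d, U = np.linalg.eigh(C1 + C2)
    P = (U / np.sqrt(d)).T                 # whitening: P (C1+C2) P.T = I
    lam, V = np.linalg.eigh(P @ C1 @ P.T)  # eigenvalues sorted ascending
    return V.T @ P                         # rows = spatial filters

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6)); C1 = A @ A.T + 6 * np.eye(6)
B = rng.normal(size=(6, 6)); C2 = B @ B.T + 6 * np.eye(6)
W = csp_filters(C1, C2)
```

Components with extreme eigenvalues have maximally different variance between the two classes, which is what makes the log-variance of the filtered signal a discriminant feature for MI EEG.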

Paper 47: Barchan Sand Dunes Collisions Detection in High Resolution Satellite Images based on Image Clustering and Transfer Learning

Abstract: Desertification is a core concern for populations living in arid and semi-arid areas. Specifically, barchan dunes, the fastest moving sand dunes, put constant pressure on human settlements and infrastructure. Remote sensing was used to analyze sand dunes around Tarfaya city, located in the south of Morocco in the Sahara Desert. In this area, dunes form long corridors made of thousands of crescent-shaped dunes moving simultaneously, making data gathering in the field very difficult. A computer vision approach based on machine learning was proposed to automate the detection of barchan sand dunes and monitor their complex interactions. An IKONOS high resolution satellite image was used in combination with a clustering algorithm for image segmentation of the dunes corridor and a Transfer Learning model trained to detect three classes of objects: barchan dunes, bare fields, and a newly introduced class consisting of dune collisions. Indeed, collisions were very difficult to model using classical digital image processing methods due to the large variability of their shapes. The model was trained on 1000 image patches which were annotated and then augmented to generate a larger dataset. The obtained detection results showed an accuracy of 84.01%. The interest of this research is to provide a relatively affordable approach for tracking sand dune locations in order to better understand their dynamics.

Author 1: M. A. Azzaoui
Author 2: L. Masmoudi
Author 3: H. El Belrhiti
Author 4: I. E. Chaouki

Keywords: High resolution satellite images; remote sensing; transfer learning; image segmentation; sand dunes; desertification

PDF

Paper 48: Usefulness of Mobile Assisted Language Learning in Primary Education

Abstract: The Literacy & Numeracy Drive (LND) is a mobile application used in public sector primary schools in Punjab province, Pakistan, to teach Grade 3 students languages and mathematics on a tablet. A person designated as a Monitoring & Evaluation Assistant (MEA) visits every school allocated by the authorities once a month and randomly selects 7-10 students to evaluate on his own tablet by asking multiple questions related to English, Urdu, and Mathematics. After the evaluation, the MEA uploads the result to the official portal for the respective school. This study aims to evaluate the effectiveness of LND in terms of usefulness, usability, accessibility, content, and assessment by involving students and teachers using this application in different schools. A mixed-method study has been adopted in which 57 teachers and nearly 300 students from different schools at different locations in the district were selected, and the effectiveness of LND was measured with the help of interviews and questionnaires. The results reveal that, in its current form, the LND application is not effective and needs improvement in usability, design, content, accessibility, infrastructure, and assessment. Furthermore, teachers recommend game-based learning incorporating an interactive interface, phonics, and animations, as a more interactive and attractive presentation of the content and variation in the assessments may increase students' involvement, make this application more effective, and produce good results.

Author 1: Kashif Ishaq
Author 2: Nor Azan Mat Zin
Author 3: Fadhilah Rosdi
Author 4: Adnan Abid
Author 5: Qasim Ali

Keywords: Literacy and numeracy drive; monitoring and evaluation assistant; assessment; usability; content; design; infrastructure

PDF

Paper 49: Detecting Flooding Attacks in Communication Protocol of Industrial Control Systems

Abstract: Industrial Control Systems (ICS) are normally used for monitoring and controlling various process plants such as oil and gas refineries, nuclear reactors, power generation and transmission, and various chemical plants around the world. MODBUS is the most widely used communication protocol in these ICS systems; it is used for bi-directional transfer of sensor data between data acquisition servers and Intelligent Electronic Devices (IED) such as Programmable Logic Controllers (PLC) or Remote Telemetry Units (RTU). The security of ICS systems is a major concern for the safe and secure operation of these plants. The MODBUS protocol is particularly vulnerable to cyber security attacks because security measures were not taken into account at the time of protocol design. The Denial-of-Service (DoS) or flooding attack is one of the prominent attacks against MODBUS, affecting the availability of the control system. In this paper, a new method is proposed to detect application-level flooding or DoS attacks; it triggers the alarm annunciator and displays suitable alarms in the Supervisory Control and Data Acquisition (SCADA) system to draw the attention of administrators or engineers so they can take corrective action. This method detected the highest percentage of attacks in less time compared to other methods. It also considers all types of conditions that trigger a flooding attack on the MODBUS protocol.

Author 1: Rajesh L
Author 2: Penke Satyanarayana

Keywords: Supervisory Control and Data Acquisition (SCADA); Remote Telemetry Unit (RTU); Programmable Logic Controllers (PLC); Communication Protocol; MODBUS; Industrial Control Systems (ICS)

PDF
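A generic rate-based detector for application-level flooding of the kind the abstract describes can be sketched with a sliding time window per source; the class name, limits, and alarm hook are illustrative assumptions, not the paper's method:

```python
from collections import deque

# Raise an alarm when more than `limit` requests from one source arrive
# within a sliding window of `window_s` seconds. In a SCADA deployment the
# True return value would trigger the alarm annunciator.
class FloodDetector:
    def __init__(self, limit=100, window_s=1.0):
        self.limit, self.window_s = limit, window_s
        self.times = {}

    def on_request(self, src_ip, t):
        q = self.times.setdefault(src_ip, deque())
        q.append(t)
        while q and t - q[0] > self.window_s:
            q.popleft()           # drop requests outside the window
        return len(q) > self.limit

det = FloodDetector(limit=5, window_s=1.0)
alarms = [det.on_request("10.0.0.9", i * 0.1) for i in range(10)]
```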

Paper 50: An Artificial Deep Neural Network for the Binary Classification of Network Traffic

Abstract: Classifying network packets is crucial in intrusion detection. As intrusion detection systems are the primary defense of the infrastructure of networks, they need to adapt to the exponential increase in threats. Despite the fact that many machine learning techniques have been devised by researchers, this research area is still far from finding perfect systems with high malicious packet detection accuracy. Deep learning is a subset of machine learning and aims to mimic the workings of the human brain in processing data for use in decision-making. It has already shown excellent capabilities in dealing with many real-world problems such as facial recognition and intelligent transportation systems. This paper develops an artificial deep neural network to detect malicious packets in network traffic. The artificial deep neural network is built carefully and gradually to confirm the optimum number of input and output neurons and the learning mechanism inside hidden layers. The performance is analyzed by carrying out several experiments on real-world open source traffic datasets using well-known classification metrics. The experiments have shown promising results for real-world application in the binary classification of network traffic.

Author 1: Shubair A. Abdullah
Author 2: Ahmed Al-Ashoor

Keywords: Deep learning; ANN; packet classification; binary classification; malicious traffic classification

PDF
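In the spirit of (not reproducing) the paper's architecture, a one-hidden-layer network for binary classification can be built and trained in a few lines of NumPy; the toy XOR data stands in for traffic features:

```python
import numpy as np

# Tiny feedforward network trained by gradient descent on squared error.
rng = np.random.default_rng(3)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])          # XOR labels
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5

losses = []
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                    # hidden layer
    p = sig(h @ W2 + b2)                        # output probability
    losses.append(float(np.mean((p - y) ** 2)))
    # backpropagation of the squared-error gradient
    dp = 2 * (p - y) * p * (1 - p) / len(X)
    dh = (dp @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ dp; b2 -= lr * dp.sum(0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)
```

The paper's system would replace the toy inputs with packet features and grow the hidden layers, but the forward/backward mechanics are the same.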

Paper 51: An Improved Framework for Content-based Spamdexing Detection

Abstract: Spamdexing is one of the biggest threats to modern Search Engines (SEs). Nowadays spammers use a wide range of techniques for content generation, employing content spam to fill the Search Engine Result Pages (SERPs) with low-quality web pages. Generally, spam web pages are insufficient, irrelevant, and improper results for users. Many researchers from academia and industry are working on spamdexing to identify spam web pages. However, so far not a single universally efficient method has been developed to identify all spam web pages, and we believe improved methods are needed to tackle content spam. This article is an attempt in that direction, proposing a framework for spam web page identification. The framework uses Stop Words, Keyword Density, a Spam Keywords Database, Part of Speech (POS) ratio, and Copied Content algorithms. The WEBSPAM-UK2006 and WEBSPAM-UK2007 datasets were used to conduct the experiments and obtain threshold values. A promising F-measure of 77.38% illustrates the effectiveness and applicability of the proposed method.

Author 1: Asim Shahzad
Author 2: Hairulnizam Mahdin
Author 3: Nazri Mohd Nawi

Keywords: Information retrieval; web spam detection; content spam; POS ratio; search spam; keyword stuffing; machine generated content detection

PDF
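Two of the framework's signals, stop-word ratio and keyword density, are straightforward to compute from a page's text; the word list and sample page below are illustrative, not the paper's thresholds:

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is"}

def content_signals(text):
    """Return (stop-word ratio, density of the most repeated word)."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0, 0.0
    stop_ratio = sum(w in STOP_WORDS for w in words) / len(words)
    top_word, top_count = Counter(words).most_common(1)[0]
    keyword_density = top_count / len(words)
    return stop_ratio, keyword_density

spammy = "cheap pills cheap pills cheap pills buy cheap pills now"
ratio, density = content_signals(spammy)
# Machine-generated spam tends toward a low stop-word ratio and a high
# keyword density; thresholds would come from the WEBSPAM datasets.
```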

Paper 52: Combining 3D Interpolation, Regression, and Body Features to build 3D Human Data for Garment: An Application to Building 3D Vietnamese Female Data Model

Abstract: Modeling the 3D human body is an advanced technique used in human motion analysis and the garment industry. In this paper, we propose a method for forming deformation functions so that we can rebuild the 3D human body from given anthropometric measurements. The key idea in our approach is to split the 3D body into small parts; in that way, we can specialize the set of parameters needed for interpolation in each section. With this interpolation approach, we build 3D human bodies for 593 female subjects with the corresponding body shapes while requiring fewer input measurements than 3D laser scans.

Author 1: Tran Thi Minh Kieu
Author 2: Nguyen Tung Mau
Author 3: Le Van
Author 4: Pham The Bao

Keywords: Anthropometry; 3D scanning; human body modeling; interpolation; parametric modeling

PDF
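The regression step implied by the title (mapping a few anthropometric measurements to the parameters of one body part) can be sketched as a linear least-squares fit; the data here is synthetic with a known linear map, purely for illustration:

```python
import numpy as np

# Learn a linear map from measurements to per-part shape parameters, so a
# 3D part can be rebuilt without a full scan.
rng = np.random.default_rng(4)
M = rng.normal(size=(100, 3))          # measurements, e.g. bust, waist, hip
T = np.array([[1.0, 0.5, 0.0],
              [0.0, 2.0, 1.0]])        # hidden ground-truth map (toy)
S = M @ T.T                            # per-part shape parameters

coef, *_ = np.linalg.lstsq(M, S, rcond=None)   # fit the map
predicted = M @ coef                           # rebuild parameters
```

With one such map per body part, new measurement sets can be turned into part-wise shape parameters and the parts assembled into a full body.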

Paper 53: Breast Cancer Computer-Aided Detection System based on Simple Statistical Features and SVM Classification

Abstract: Computer-Aided Detection (CADe) systems are becoming very helpful and useful in supporting physicians in the early detection of breast cancer. In this paper, a CADe system able to detect abnormal clusters in mammographic images is implemented using different classifiers and features. The CADe system utilizes a Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) as classifiers. Adopting the mammographic database of the Mammographic Image Analysis Society (MIAS) for training and testing, the performance of the two classifiers is compared in terms of sensitivity, specificity, and accuracy. The obtained values show the efficiency of the CADe system as a secondary screening method for detecting abnormal clusters in a given Region of Interest (ROI). The best classifier was found to be the SVM, which showed 96% accuracy, 92% sensitivity, and 100% specificity.

Author 1: Yahia Osman
Author 2: Umar Alqasemi

Keywords: Breast cancer; MIAS; features extraction; SVM; mammogram; clusters; computer-aided detection systems; KNN; ROI

PDF
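The three metrics reported above are computed from a confusion matrix as follows; the counts are illustrative (chosen so the values land on the reported 92% sensitivity, 100% specificity, and 96% accuracy), not the paper's raw MIAS counts:

```python
# Screening metrics from confusion-matrix counts.
def screening_metrics(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)       # recall on abnormal ROIs
    specificity = tn / (tn + fp)       # recall on normal ROIs
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

sens, spec, acc = screening_metrics(tp=23, tn=25, fp=0, fn=2)
```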

Paper 54: EEG Emotion Signal of Artificial Neural Network by using Capsule Network

Abstract: Human emotion recognition through electroencephalographic (EEG) signals is becoming attractive. Our mechanism combines the time-domain, frequency-domain, and spatial attributes of the EEG signals, and the architecture represents them as a two-dimensional image. Emotion recognition is a demanding task in the brain-computer interface field and is widely applied in education, medicine, the military, and many other areas, where the classification problem arises. In this paper, a classification structure based on the CapsNet neural network is described. Algorithms such as Lasso are used to select the most discriminative features from the sparse groups of the original EEG signals, and these essential features, a small subset of the input, feed the network for the final emotional classification. The results show that, with the best model parameters, the CapsNet-based network achieves average classification accuracies of 80.22% and 85.41% on the arousal and valence dimensions of emotion respectively, compared to the Support Vector Machine (SVM) and convolutional neural network (CNN). This significant classification margin achieves the best results and automatically enhances the performance of EEG emotion classification. Deep learning approaches such as CNNs have been widely used to improve the classification performance of motor imagery-based brain-computer interfaces (BCI), but CNN classification performance degrades when the input data are distorted, and in the EEG case even signals from the same user are not consistent across measurements. We therefore implement Capsule Networks (CapsNet), which can extract richer features and thereby attain a more powerful and robust performance than older CNN approaches.

Author 1: Usman Ali
Author 2: Haifang Li
Author 3: Rong Yao
Author 4: Qianshan Wang
Author 5: Waqar Hussain
Author 6: Syed Badar ud Duja
Author 7: Muhammad Amjad
Author 8: Bilal Ahmed

Keywords: Emotion recognition; caps net; EEG signal; multidimensional feature; hybrid neural networks; CNN; Granger; motor imagery classification; deep learning

PDF

Paper 55: Behavior of Learning Rules in Hopfield Neural Network for Odia Script

Abstract: Automatic character recognition is one of the challenging fields in pattern recognition, especially for handwritten Odia characters, as many of these characters are similar and rounded in shape. In this paper, a comparative performance analysis of the Hopfield neural network for storing and recalling handwritten and printed Odia characters with three different learning rules, namely the Hebbian, pseudo-inverse, and Storkey learning rules, is presented. An experimental exploration of these three learning rules in the Hopfield network has been performed in two different ways to measure the performance of the network on corrupted patterns. In the first experiment, the performance of storing and recalling Odia characters (vowels and consonants) as 30 x 30 images on the Hopfield network is demonstrated with different noise percentages. In the second experiment, recognition accuracy is observed by partitioning the dataset into separate training and testing sets with the k-fold cross-validation method. The simulation results obtained in this study express the comparative performance of the network for recalling stored patterns and recognizing a new set of testing patterns with various noise percentages for the different learning rules.

Author 1: Ramesh Chandra Sahoo
Author 2: Sateesh Kumar Pradhan

Keywords: Hopfield network; Odia script; Hebbian; pseudo-inverse; Storkey; NIT dataset

PDF
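Hebbian storage and recall, the first of the three learning rules compared, can be sketched on a toy ±1 pattern (a stand-in for a 30 x 30 Odia glyph):

```python
import numpy as np

# Hebbian rule: W = P^T P / N with zero diagonal; recall by repeated
# synchronous sign updates, which pulls a corrupted pattern back to the
# stored attractor.
def hebbian_weights(patterns):
    N = patterns.shape[1]
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=5):
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

rng = np.random.default_rng(5)
p = rng.choice([-1, 1], size=16)
W = hebbian_weights(p[None, :])
noisy = p.copy(); noisy[:3] *= -1      # corrupt 3 of 16 bits
restored = recall(W, noisy)
```

The pseudo-inverse and Storkey rules change only how W is built; recall works the same way, which is what makes the three rules directly comparable.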

Paper 56: Critical Analysis of Brain Magnetic Resonance Images Tumor Detection and Classification Techniques

Abstract: Image segmentation, tumor detection, and extraction of the tumor area from brain MR images are the main concerns, but they are time-consuming and tedious tasks performed by clinical experts or radiologists, and the accuracy relies on their experience alone. To overcome these limitations, the use of computer-aided design (CAD) technology has become very important. Magnetic resonance imaging (MRI) and Computed Tomography (CT) are the two major imaging modalities used for brain tumor detection. In this paper, we carry out a critical review of different image processing techniques for brain MR images and critically evaluate these techniques for tumor detection, in order to identify their gaps and limitations. The gaps can then be filled and the limitations of the various techniques improved to obtain precise and better results. We observe that most researchers employ the stages of pre-processing, feature extraction, feature reduction, and classification of MR images to distinguish benign and malignant images. We have made an effort in this area to open new dimensions for readers to explore this field of research.

Author 1: Zahid Ullah
Author 2: Su-Hyun Lee
Author 3: Donghyeok An

Keywords: Magnetic Resonance Imaging (MRI); Computed Tomography (CT); MRI classification; tumor detection; digital image processing

PDF

Paper 57: Secure V2V Communication in IOV using IBE and PKI based Hybrid Approach

Abstract: We live in the world of the "Internet of Everything," which has led to the advent of various applications; the Internet of Vehicles (IOV) is one among them and a major step forward for the future of transportation systems. Vehicle to vehicle (V2V) communication plays a major role: a vehicle may send sensitive and non-sensitive messages, and these messages are encrypted with public keys. The distribution of public keys is a major problem because vehicles need to be anonymous, using pseudonyms that change frequently, which makes distribution more complicated. Here we propose a hybrid approach which uses an existing public key certificate for authorization of the vehicle and Identity Based Encryption to generate public keys from the pseudonyms, and uses these keys in secure V2V communication without compromising the anonymity of the vehicle.

Author 1: Satya Sandeep Kanumalli
Author 2: Anuradha Ch
Author 3: Patanala Sri Rama Chandra Murty

Keywords: Privacy; internet of vehicles; hashing; IBE; public key certificate

PDF

Paper 58: A Multi-Objectives Optimization to Develop the Mobile Dimension in a Small Private Online Course (SPOC)

Abstract: The impact of the mobile technology trend is being felt in several sectors today, including education. In this paper, we present an analysis of the development of the mobile dimension in a Massive Open Online Course (MOOC) or a Small Private Online Course (SPOC) as a decision-making problem among various approaches which cannot be ordered incontestably from best to worst. This is because the various approaches to integrating the mobile dimension are different, and each solution presents both advantages and shortcomings from a technological point of view. The decision must be made on the basis of the end-users' requirements and usage. We propose to view this situation as a multi-objective optimization problem, as the decision is a compromise between several conflicting objectives/criteria. The various approaches to the development of mobile access to a MOOC/SPOC are presented first and then compared using various criteria. We then provide an analysis of the alternatives to find the non-dominated Pareto solutions.

Author 1: Naima BELARBI
Author 2: Abdelwahed NAMIR
Author 3: Mohamed TALBI
Author 4: Nadia Chafiq

Keywords: Mobile dimension; MOOC/SPOC; multi-objective optimization; criteria; decision

PDF
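The final analysis step, filtering candidate approaches down to the non-dominated (Pareto) set, reduces to a simple dominance check; the alternatives and criterion scores below are illustrative, not the paper's comparison data:

```python
# An alternative dominates another when it is at least as good on every
# criterion and strictly better on at least one (higher is better here).
def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(alternatives):
    return [a for a in alternatives
            if not any(dominates(b, a) for b in alternatives if b is not a)]

# e.g. scores for (offline support, inverted development cost, UX richness)
apps = [(3, 1, 2), (2, 3, 1), (1, 1, 1), (3, 3, 2)]
front = pareto_front(apps)
```

The decision-maker then chooses among the front according to end-user requirements, since no member of the front is objectively worse than another.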

Paper 59: A Fuzzy Multi-Objective Covering-based Security Quantification Model for Mitigating Risk of Web based Medical Image Processing System

Abstract: Medical image processing is one of the most active research areas and has a big impact on the health sector. With the arrival of intelligent processes, web based medical image processing has become simple and errorless, and web based applications are now used extensively for it. A large amount of medical data is generated daily, with more and more data being shared over public and private networks for the diagnosis of diseases through web based image processing systems. Medical images such as CT (Computed Tomography) scan, MRI (Magnetic Resonance Imaging), X-Ray, and ultrasound images contain highly personal data of the patients. This data needs to be secured from intruders. Medical images are particularly sensitive to external interruption, and manipulation of the data may change the result. Data breaches in medical cases can lead to wrong diagnoses or even more fatal, life-threatening outcomes. So, security in web based medical image processing is a major issue. However, ensuring security for medical images while preserving their confidentiality, integrity, availability, and other characteristics poses a major challenge. Working towards a feasible solution, in this study the authors use a list of criteria for checking the security level of a web based image processing system. We apply the Fuzzy Analytic Hierarchy Process (FAHP) combined with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to the list of criteria that affect security assessment in medical image processing. The results show that FAHP-TOPSIS produces good results for security checking in a web based medical image processing system. All the steps involved in our model are shown in the data analysis section.

Author 1: Abdullah Algarni
Author 2: Masood Ahmad
Author 3: Abdulaziz Attaallah
Author 4: Alka Agrawal
Author 5: Rajeev Kumar
Author 6: Raees Ahmad Khan

Keywords: Web based medical image processing; fuzzy analytical hierarchy process; TOPSIS method; security management

PDF
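The TOPSIS half of the combined method follows a standard recipe: normalize the decision matrix, weight it, and rank alternatives by closeness to the ideal solution. The decision matrix and weights below are illustrative (in the paper the weights would come from FAHP), and all criteria are treated as benefit criteria:

```python
import numpy as np

# Classical TOPSIS: closeness coefficient of each alternative.
def topsis(matrix, weights):
    norm = matrix / np.linalg.norm(matrix, axis=0)     # vector normalization
    v = norm * weights                                 # weighted matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)         # ideal / anti-ideal
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

scores = topsis(np.array([[7., 9., 8.],
                          [5., 6., 4.],
                          [3., 2., 2.]]),
                np.array([0.5, 0.3, 0.2]))
best = int(np.argmax(scores))   # alternative closest to the ideal
```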

Paper 60: Knowledge Sharing Factors for Modern Code Review to Minimize Software Engineering Waste

Abstract: Software engineering activities such as Modern Code Review (MCR) produce quality software by identifying defects in the code. MCR involves social coding and provides ample opportunities to share knowledge among team members. However, the MCR team is confronted with the issue of waiting waste due to poor knowledge sharing among its members; as a result, projects are delayed and mental distress increases. To minimize waiting waste, this study aims to identify the factors that impact knowledge sharing in MCR. The methodology employed is a systematic literature review to identify knowledge sharing factors, followed by data coding with the continual comparison and memoing techniques of grounded theory to produce a unique, categorized list of factors influencing knowledge sharing. The identified factors were then assessed by an expert panel for their naming, expression, and categorization. The study reports 22 factors grouped into 5 broad categories: individual, team, social, facility conditions, and artifact. The study is useful for researchers to extend the research, and for MCR teams to consider these factors to enhance knowledge sharing and minimize waiting waste.

Author 1: Nargis Fatima
Author 2: Sumaira Nazir
Author 3: Suriayati Chuprat

Keywords: Knowledge sharing; modern code review; software engineering waiting waste

PDF

Paper 61: Situational Factors for Modern Code Review to Support Software Engineers’ Sustainability

Abstract: Software engineers working in Modern Code Review (MCR) are confronted with a lack of competency in the identification of situational factors. MCR is a software engineering activity for identifying and fixing defects before delivery of the software product. This issue can be a threat to the individual sustainability of software engineers, and it can be addressed through situational awareness. Therefore, the objective of this study is to identify situational factors concerning the MCR process. A Systematic Literature Review (SLR) has been used to identify situational factors. Data coding, along with the continuous comparison and memoing procedures of grounded theory and expert review, has been used to produce an exclusive, validated list of situational factors grouped under categories. The study reports 23 situational factors grouped into 5 broad categories: people, organization, technology, source code, and project. The study is valuable for researchers to extend the research, and for software engineers to identify situations and sustain their work for longer.

Author 1: Sumaira Nazir
Author 2: Nargis Fatima
Author 3: Suriayati Chuprat

Keywords: Situational; modern code review; sustainable software engineer

PDF

Paper 62: Plant Disease Detection using Internet of Things (IoT)

Abstract: This paper presents the idea of using Internet of Things (IoT) technology to perceive field data, and discusses the role of IoT technology in agricultural disease and insect pest control, which includes an agricultural disease and insect pest monitoring system, collection of disease and insect pest information using sensor nodes, data processing and mining, and so on. A disease and insect pest control system based on IoT is proposed, consisting of three levels and three subsystems. The system can provide farms with a new way to access agricultural information. In this paper, an automated system has been developed to determine whether a plant is normal or diseased. The normal growth of plants, and the yield and quality of agricultural products, are seriously affected by plant disease. This paper attempts to build an automated system that detects the presence of disease in plants. The automated disease detection system is developed using temperature, humidity and colour sensors, based on variation in plant leaf health condition. The values of the temperature, humidity and colour parameters are used to detect the presence of plant disease.

Author 1: Muhammad Amir Nawaz
Author 2: Tehmina khan
Author 3: Rana Mudassar Rasool
Author 4: Maryam Kausar
Author 5: Amir Usman
Author 6: Tanvir Fatima Naik Bukht
Author 7: Rizwan Ahmad
Author 8: Jaleel Ahmad

Keywords: Plant diseases; internet of things; temperature sensor; plants; farming

PDF

Paper 63: Performance Analysis of Machine Learning Classifiers for Detecting PE Malware

Abstract: In this modern era of technology, securing and protecting one’s data is a major concern and needs attention. Malware is a program designed to cause harm, and malware analysis is one of the paramount focus points for cyber forensic professionals and network administrators. The degree of harm caused by malicious software varies greatly: for a random home user it may mean the loss of irrelevant or unimportant information, but for a corporate network it can mean the loss of valuable business data. Existing research focuses on only a few machine learning algorithms to detect malware, and very few studies have worked with Portable Executable (PE) files. In this paper, we focus on top classification algorithms and compare their accuracy on our dataset to find out which one gives the best result. Top machine learning classification algorithms were used alongside neural networks, such as Artificial Neural Network, XGBoost, Support Vector Machine and Extra Tree Classifier. The experimental results show that XGBoost achieved the highest accuracy, 98.62 percent, compared with the other approaches. Thus, to provide a better solution for these kinds of anomalies, we are interested in researching malware detection and want to contribute to building strong, protective cybersecurity.

Author 1: ABM.Adnan Azmee
Author 2: Pranto Protim Choudhury
Author 3: Md. Aosaful Alam
Author 4: Orko Dutta
Author 5: Muhammad Iqbal Hossai

Keywords: Malware detection; machine learning; data protection; XGBoost; support vector machine; extra tree classifiers; artificial neural network

PDF

Paper 64: Determinants of Interface Criteria Learning Technology for Disabled Learner using Analytical Hierarchy Process

Abstract: Technology is advancing rapidly nowadays due to the growing availability of learning technology. Given this rapid change, it is crucial for disabled learners to select a good technology design that can help them achieve better academic results. Selecting a good technology design involves a decision-making process over several candidate designs of learning technology. In general, the abilities, capacities and achievements of disabled learners are lower than those of other children, so a good approach combined with the right selection of learning technology can help disabled learners gain better understanding and achievement in academic matters. In this study, the Analytical Hierarchy Process (AHP) approach was used to determine the most appropriate design of learning technology for disabled learners. Three hierarchy levels, made up of criteria, sub-criteria and alternatives, were considered. This study finds the best selection of design elements that can be used in the development of learning technology for a classroom of disabled learners.
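The AHP weighting step described above can be sketched numerically. The sketch below is a minimal illustration, assuming a hypothetical 3x3 pairwise comparison matrix of interface criteria (the criteria and judgment values are invented, not taken from the paper); it derives priority weights with the geometric-mean approximation and checks Saaty's consistency ratio.

```python
import math

def ahp_weights(M):
    """Approximate the AHP priority vector via the geometric-mean method."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(M, w):
    """CR = CI / RI, where CI = (lambda_max - n) / (n - 1)."""
    ri = {3: 0.58, 4: 0.90, 5: 1.12}  # random consistency indices
    n = len(M)
    # lambda_max estimated by averaging (M w)_i / w_i over the rows
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    return ((lam - n) / (n - 1)) / ri[n]

# Hypothetical pairwise comparisons of three interface criteria,
# e.g. visual layout vs. audio support vs. interaction simplicity.
M = [[1,     3,     5],
     [1 / 3, 1,     3],
     [1 / 5, 1 / 3, 1]]
w = ahp_weights(M)
```

A consistency ratio below 0.1 is conventionally taken to mean the judgments are acceptably consistent; here the first criterion dominates with a weight near 0.64.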

Author 1: Syazwani Ramli
Author 2: Hazura Mohamed
Author 3: Zurina Muda

Keywords: Selection design; learning technology; disabled learner; decision making; Analytical Hierarchy Process (AHP)

PDF

Paper 65: Predicting IoT Service Adoption towards Smart Mobility in Malaysia: SEM-Neural Hybrid Pilot Study

Abstract: A smart city is synchronized with its digital environment, and its transportation system is vitalized by RFID sensors, the Internet of Things (IoT) and Artificial Intelligence. However, without a behavioral assessment of the technology by users, the ultimate usefulness of smart mobility cannot be achieved. This paper aims to formulate a research framework for predicting the antecedents of smart mobility by using a hybrid SEM-Neural approach for preliminary data analysis. This research took smart mobility service adoption in Malaysia as its study setting and applied the Technology Acceptance Model (TAM) as its theoretical basis. An extended TAM model was hypothesized with five external factors (digital dexterity, IoT service quality, intrusiveness concerns, social electronic word of mouth and subjective norm). Data was collected through a pilot survey in Klang Valley, Malaysia. Responses were then analyzed for reliability, validity and model accuracy. Finally, the causal relationships were explained by Structural Equation Modeling (SEM) and Artificial Neural Networks (ANN). The paper will give all stakeholders a better understanding of road technology acceptance so that they can refine, revise and update their policies. The proposed framework suggests a broader approach to investigating individual-level technology acceptance.

Author 1: Waqas Ahmed
Author 2: Sheikh Muhamad Hizam
Author 3: Ilham Sentosa
Author 4: Habiba Akter
Author 5: Eiad Yafi
Author 6: Jawad Ali

Keywords: Smart Mobility; Internet of Things (IoT); Radio-Frequency Identification (RFID); Neural Networks; Technology Acceptance Model (TAM)

PDF

Paper 66: Robotic Technology for Figural Creativity Enhancement: Case Study on Elementary School

Abstract: Robotic technology is a field in great demand today, and it is very useful to human life, especially in education. It can help students be more active in the learning process. Creativity can be stimulated through the use of robotic technology. One kind of creativity is Figural Creativity (FC). This study investigated the effect of robotic technology as a learning tool to improve students' FC skills. Forty (40) elementary school students aged 10-11 years participated in this study. Students' creativity skills were measured with the Figural Creativity Test (TKF), carried out before the intervention (pre-test) and after the intervention program (post-test). In the intervention program, students were given education about robotic technology. To analyze the test results, we used the Statistical Product and Service Solutions (SPSS) package. The findings showed that the creativity of students following the K-13 curriculum improved more: their FC scores improved by up to 23% (sig. 2-tailed = .000, p<.05), while the FC scores of students following the KTSP curriculum improved by only 1.7% (sig. 2-tailed = .572, p>.05). Thus, robotic technology learning is more effective at improving the FC of students following the K-13 curriculum. Based on these results, we recommend to the Ministry of Education that robotic technology be applied as an educational tool in the education sector.

Author 1: Billy Hendrik
Author 2: Nazlena Mohamad Ali
Author 3: Norshita Mat Nayan

Keywords: Robotic technology; figural creativity; curriculum; TKF; education; KTSP; K-13

PDF

Paper 67: The Impact of Deep Learning Techniques on SMS Spam Filtering

Abstract: Over the past decade, phone calls and bulk SMS have been fashionable. Although many advertisers assume that SMS has died, it is still alive: it is one of the simplest and most cost-effective marketing tools for companies to communicate with their customers on a personal level. The spread of SMS has brought with it the risk of spam. Most previous studies that attempted to detect spam were based on manually extracted features fed to classical machine learning classifiers. This paper explores the impact of applying various deep learning techniques to SMS spam filtering by comparing the results of seven different deep neural network architectures and six classical machine learning classifiers. The proposed methodologies are based on automatic extraction of the required features. On a benchmark dataset consisting of 5574 records, a remarkable accuracy of 99.26% was achieved using the Random Multimodel Deep Learning (RMDL) architecture.

Author 1: Wael Hassan Gomaa

Keywords: SMS Spam Filtering; Deep Learning; RNN; GRU; LSTM; CNN; RCNN; RMDL

PDF

Paper 68: Translator System for Peruvian Sign Language Texts through 3D Virtual Assistant

Abstract: The hearing-impaired population in Peru is a community that doesn’t receive the necessary support from the government, nor does it have the necessary resources for the inclusion of its members as active persons in society. Few initiatives have been launched to achieve this goal, and for this reason we will create a resource that gives the deaf community greater access to the textual information of the hearing community. Our goal is to build a tool that translates general texts, as well as academic content such as encyclopedias, into Peruvian sign language, represented by a 3D avatar. The translation features different lexical-syntactic processing modules as well as disambiguation of terms using a small lexicon, similar to WordNet synsets. The project is developed in collaboration with the deaf community.

Author 1: Gleny Paola Gamarra Ramos
Author 2: María Elisabeth Farfán Choquehuanca

Keywords: Peruvian sign language; Lexical-Syntactic Analysis; Avatar 3D

PDF

Paper 69: A Service-Oriented Architecture for Optimal Service Selection and Positioning in Extremely Large Crowds

Abstract: The problem of managing large crowds has many aspects and has been reported in the literature. One of these aspects is the distribution of supplies such as food and water, especially when the targeted region is overcrowded. One of the challenges is to plan the locations of food and water supply centres so as to achieve multiple objective functions, such as the type of food and the shortest distance to the customer. A practical example of this problem is food distribution and food cart location in the region of Mena (also known as Tent City, in Saudi Arabia) during the yearly pilgrimage season. In this work, we propose a Service-Oriented Architecture (SOA) for positioning services in the region of Mena (Mecca, Saudi Arabia), which covers an area of approximately 20 square kilometres, during the pilgrimage season. The architecture proposes an optimal service selection as well as a mobile food cart positioning algorithm based on pre-set client profiles to achieve multiple objective functions for clients as well as service providers. Some of these objective functions are the least waiting time to be served, the shortest distance to service, the lowest cost, and the maximum profit for the service provider.

Author 1: Mohammad A.R. Abdeen

Keywords: Large crowd management; Service-Oriented Architecture; multi-objective optimization; Hajj; Mena; WSDL

PDF

Paper 70: EDUGXQ: User Experience Instrument for Educational Games’ Evaluation

Abstract: A significant increase in research on educational computer games in recent years has proven that demand for educational games has increased as well. However, the production of unsuitable educational games wastes not only money but also the energy and time of game designers and developers. To produce a suitable educational game, it is important to understand the user’s needs as well as the educational needs. Therefore, this study aims to develop a User Experience (UX) framework for educational games (EDUGX) based on UX elements and to psychometrically validate a new instrument, the EDUGX questionnaire (EDUGXQ), appropriate for evaluating educational games. Based on the literature review, six main UX elements were identified to construct the framework: Flow, Immersion, Player Context, Game Usability, Game System and Learnability. In this paper, we first discuss the development process of the EDUGX framework, followed by the EDUGXQ. The study also reviews and discusses several UX questionnaires for educational games in UX design evaluation, which at the same time support the framework’s elements used to develop the EDUGXQ.

Author 1: Vanisri Nagalingam
Author 2: Roslina Ibrahim
Author 3: Rasimah Che Mohd Yusoff

Keywords: User Experience (UX); framework; psychometrically; educational games; educational games’ evaluation

PDF

Paper 71: Stemming Text-based Web Page Classification using Machine Learning Algorithms: A Comparison

Abstract: The research aim is to determine the effect of word stemming on web page classification using different machine learning classifiers, namely Naïve Bayes (NB), k-Nearest Neighbour (k-NN), Support Vector Machine (SVM) and Multilayer Perceptron (MP). Each classifier's performance is evaluated in terms of accuracy and processing time. This research uses the BBC dataset, which has five predefined categories. The results demonstrate that classifier performance is better without word stemming: all classifiers show higher classification accuracy, with the highest accuracy produced by NB and SVM at a 97% F1 score, while NB takes a shorter training time than SVM. With word stemming, the effect on training and classification time is negligible, except for the Multilayer Perceptron, where word stemming effectively reduces the training time.
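To make the stemming-plus-classification pipeline concrete, here is a minimal sketch (not the paper's implementation): a crude suffix-stripping stemmer standing in for a real Porter stemmer, feeding a tiny multinomial Naïve Bayes classifier with Laplace smoothing. The toy documents and category labels are invented for illustration.

```python
import math
from collections import Counter, defaultdict

def naive_stem(word):
    # Crude suffix stripping; a stand-in for a real Porter stemmer.
    for suf in ("ing", "ers", "er", "es", "s"):
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[: -len(suf)]
    return word

def train_nb(docs, stem=False):
    """Count word frequencies per class; returns (counts, totals, vocab)."""
    counts, totals, vocab = defaultdict(Counter), Counter(), set()
    for text, label in docs:
        words = text.lower().split()
        if stem:
            words = [naive_stem(w) for w in words]
        counts[label].update(words)
        totals[label] += len(words)
        vocab.update(words)
    return counts, totals, vocab

def classify(text, model, stem=False):
    """Pick the class with the highest Laplace-smoothed log-likelihood."""
    counts, totals, vocab = model
    words = text.lower().split()
    if stem:
        words = [naive_stem(w) for w in words]
    best, best_lp = None, -math.inf
    for label in counts:
        lp = sum(math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
                 for w in words)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Toy training documents (invented), one per category.
docs = [("stocks markets trading shares", "business"),
        ("football players scoring goals", "sport")]
model = train_nb(docs, stem=True)
```

With stemming, "traders" and "trading" collapse to the same stem, so unseen inflections still match training vocabulary; the paper's finding is that on the BBC dataset this collapsing did not improve accuracy.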

Author 1: Ansari Razali
Author 2: Salwani Mohd Daud
Author 3: Nor Azan Mat Zin
Author 4: Faezehsadat Shahidi

Keywords: Web page classification; stemming; machine learning; Naïve Bayes; k-NN; SVM; multilayer perceptron

PDF

Paper 72: Performance Realization of CORDIC based GMSK System with FPGA Prototyping

Abstract: Gaussian Minimum Shift Keying (GMSK) is a digital modulation scheme using frequency shift keying with no phase discontinuities, and it provides higher spectral efficiency in radio communication systems. In this article, a cost-effective hardware architecture for the GMSK system is designed using pipelined CORDIC and optimized CORDIC models. The transmitter section of the GMSK system mainly consists of an NRZ encoder, an integrator, and a Gaussian filter followed by an FM modulator using the CORDIC models and a Digital Frequency Synthesizer (DFS) for IQ modulation; after the channel, the receiver section has an FM demodulator followed by a differentiator and an NRZ decoder. The CORDIC algorithms play a crucial role in GMSK systems for IQ generation and improve system performance on a single chip. Both the pipelined CORDIC and optimized CORDIC models are designed with 6 stages. The optimized CORDIC model is designed using the quadrature mapping method along with a pipeline structure. The GMSK systems are implemented on an Artix-7 FPGA with FPGA prototyping. The performance analysis is presented in terms of hardware constraints such as area, time and power. The results show that the optimized CORDIC based GMSK system is a better option than the pipelined CORDIC based GMSK system for real-time scenarios.
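As background for how CORDIC generates the IQ sine/cosine pair using only shifts and adds, here is a floating-point reference model of rotation-mode CORDIC. This is a sketch, not the paper's fixed-point 6-stage FPGA design; 24 iterations are used here purely for numerical accuracy.

```python
import math

def cordic_sincos(theta, iterations=24):
    """Rotation-mode CORDIC: rotate (K, 0) by theta via shift-add micro-rotations."""
    # Precomputed arctan table and the CORDIC gain K = prod(1/sqrt(1 + 2^-2i)).
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # rotate towards residual angle z
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y, x                               # (sin(theta), cos(theta))
```

In hardware, the multiplications by 2^-i become wire shifts, which is why CORDIC is attractive for FPGA-based IQ generation; convergence holds for |theta| up to roughly 1.74 rad, with quadrature mapping (as in the paper's optimized model) extending coverage to the full circle.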

Author 1: Renuka Kajur
Author 2: K V Prasad

Keywords: GMSK; CORDIC algorithm; FPGA; DFS; Gaussian Filter; pipelined; integrator; differentiator; channel

PDF

Paper 73: HybridFatigue: A Real-time Driver Drowsiness Detection using Hybrid Features and Transfer Learning

Abstract: Road accidents are mainly caused by driver drowsiness. Detection of driver drowsiness (DDD) or fatigue is an important and challenging task for preventing roadside accidents. To help reduce the mortality rate, the “HybridFatigue” DDD system is proposed. The HybridFatigue system is based on integrating visual features, through the PERCLOS measure, and non-visual features from heartbeat (ECG) sensors. A hybrid system was implemented to combine both visual and non-visual features. These hybrid features are extracted and classified as driver fatigue by advanced deep-learning-based architectures in real time. A multi-layer transfer learning approach using a convolutional neural network (CNN) and a deep belief network (DBN) was used to detect driver fatigue from the hybrid features. To handle night-time driving and obtain accurate results, ECG sensors on the steering wheel analyze heartbeat signals when the camera is not sufficient to capture facial features. Also, to accurately detect the driver's central head position, two cameras were mounted instead of a single camera. As a result, the new HybridFatigue system achieves high accuracy in detecting driver fatigue. Three online datasets were used to train and test the HybridFatigue system, which outperforms state-of-the-art DDD systems. On average, the HybridFatigue system achieved 94.5% detection accuracy on 4250 images when tested on different subjects in variable environments. The experimental results indicate that the HybridFatigue system can be utilized to decrease accidents.

Author 1: Qaisar Abbas

Keywords: Driver fatigue; image processing; deep learning; transfer learning; convolutional neural network; deep belief network

PDF

Paper 74: An Improved Deep Learning Approach based on Variant Two-State Gated Recurrent Unit and Word Embeddings for Sentiment Classification

Abstract: Sentiment classification is an important but challenging task in natural language processing (NLP) and has been widely used for determining sentiment polarity from user opinions. Word embedding techniques learn from various contexts to produce similar vector representations for words that appear in similar contexts, and have been extensively used for NLP tasks. Recurrent neural networks (RNNs) are a common deep learning architecture, extensively used to address the classification of variable-length sentences. In this paper, we investigate a variant Gated Recurrent Unit (GRU) that includes an encoder method to preprocess data and improve the impact of word embeddings for sentiment classification. The real contributions of this paper are the proposal of a novel Two-State GRU and an encoder method, forming an efficient architecture, named E-TGRU, for sentiment classification. The empirical results demonstrate that the GRU model can efficiently acquire word usage in the context of users' opinions, provided large training data. We evaluated the performance against traditional recurrent models (GRU, LSTM and Bi-LSTM) on two benchmark datasets, IMDB and Amazon Product Reviews respectively. The results show that: 1) the proposed approach (E-TGRU) obtained higher accuracy than the three state-of-the-art recurrent approaches; 2) Word2Vec is more effective as the word vector in sentiment classification; 3) when implementing the network, an imitation strategy shows that our proposed approach is robust for text classification.
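For readers unfamiliar with the gated recurrence the paper builds on, one step of a standard GRU cell can be sketched as follows. This is the textbook single-unit (scalar) form with hypothetical weights, not the paper's Two-State E-TGRU architecture; real layers use weight matrices and bias vectors.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def gru_cell(x, h_prev, Wz, Wr, Wh, Uz, Ur, Uh):
    """One GRU step for a scalar input and state (illustrative only)."""
    z = sigmoid(Wz * x + Uz * h_prev)                 # update gate
    r = sigmoid(Wr * x + Ur * h_prev)                 # reset gate
    h_tilde = math.tanh(Wh * x + Uh * (r * h_prev))   # candidate state
    return (1.0 - z) * h_prev + z * h_tilde           # new hidden state
```

The update gate z interpolates between keeping the old state and adopting the candidate, which is how the GRU carries long-term dependencies across a variable-length sentence with fewer parameters than an LSTM.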

Author 1: Muhammad Zulqarnain
Author 2: Suhaimi Abd Ishak
Author 3: Rozaida Ghazali
Author 4: Nazri Mohd Nawi
Author 5: Muhammad Aamir
Author 6: Yana Mazwin Mohmad Hassim

Keywords: RNN; GRU; LSTM; encoder; Two-state GRU; Long-term dependencies; Sentence Classification

PDF

Paper 75: Angle Adjustment for Vertical and Diagonal Communication in Underwater Sensor

Abstract: Underwater wireless sensor networks (UWSNs) have been an area of interest for the past few decades. UWSNs consist of tiny sensors responsible for monitoring different underwater events and transmitting the collected data to a sink node. In the harsh and continuously changing underwater environment, achieving good communication and performance is more difficult than in terrestrial networks because of characteristics such as end-to-end delays, node movement and energy constraints. In this paper, a novel routing technique named angle adjustment for vertical and diagonal communication is proposed, which doesn’t use any node location information. It is also efficient in terms of energy and end-to-end delay. In this approach, the source node evaluates the flooding zone based on the angle, using a basic formula, for forwarding the packet to the sink. After evaluating the flooding zone, the angles of the nodes are compared and the packet is sent to the node closest to the vertical line. The proposed approach is evaluated with the help of NS-2 with Aqua-Sim. The results show better performance in data delivery, end-to-end delay and energy consumption than DBR.
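The "closest to vertical" forwarder selection described above can be sketched geometrically. The 2D coordinates and neighbor positions below are hypothetical and used only to illustrate the angle comparison (the actual protocol avoids full location information):

```python
import math

def angle_from_vertical(node, source):
    """Angle in degrees between the source->node direction and the vertical axis."""
    dx = node[0] - source[0]          # horizontal offset
    dz = node[1] - source[1]          # depth progress towards the sink
    return math.degrees(math.atan2(abs(dx), dz))

def pick_forwarder(source, neighbors):
    """Choose the neighbor whose direction is closest to the vertical line."""
    return min(neighbors, key=lambda n: angle_from_vertical(n, source))
```

A neighbor directly above the source has angle 0 and is preferred, which is what biases packets along the vertical path towards the surface sink.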

Author 1: Aimen Anum
Author 2: Tariq Ali
Author 3: Shuja Akbar
Author 4: Iqra Obaid
Author 5: Muhammad Junaid Anjum
Author 6: Umar Draz
Author 7: Momina Shaheen

Keywords: Wireless Sensor Networks; Underwater wireless sensor network; DVRP; Terrestrial Wireless Sensor Networks; Depth Based Routing (DBR)

PDF

Paper 76: Towards an Intelligent Approach for the Restitution of Physical Soil Parameters

Abstract: The analysis of the radar response on natural surfaces has been the subject of intense research in remote sensing during recent decades. Unless accurate values of the surface roughness parameter are available, the restitution of soil moisture from the radar backscattering signal can consistently produce inaccurate estimates. Characterization of soil roughness is not fully understood, and a wide range of roughness values can be obtained for the same surface when different measurement methodologies are used. Various studies have shown weak agreement between experimental measurements of soil physical parameters and theoretical values under natural conditions. Due to this nonlinearity and ill-posedness, the inversion of the backscattered radar signal on soils for the restitution of physical soil parameters is particularly complex. The aim of the present work is the restitution of soil physical parameters from the backscattered radar signal using a backscattering model adapted to the proposed soil description. As our study focuses on slightly rough soils, we have adopted a multi-layered modified multiscale bi-dimensional Small Perturbation Model (2D MLS SPM). Subsequently, we propose a new way of describing the dielectric constant, with the aim of including air fractions in the multiscale multilayer description of the soil. The calculation of the dielectric constant is based on considering a soil comprising two phases: a soil fraction and an air fraction. For the inversion method, a methodology coupling neural networks (NN) and genetic algorithms (GA) was employed to retrieve the physical properties of the soil. Samples were generated by the original 2D MLS SPM, followed by a neural network to obtain the statistical soil moisture and MLS roughness parameters. Thereafter, these retrieved values were refined by the genetic algorithm to resolve, in part or in whole, the disagreement between the retrieved and original values.

Author 1: Ibtissem HOSNI
Author 2: Lilia BENNACEUR FARAH
Author 3: Imed Riadh FARAH
Author 4: Raouf BENNACEUR
Author 5: Mohamed Saber NACEUR

Keywords: Inversion; air fractions; multi-layered; multiscale; SPM; genetic algorithms

PDF

Paper 77: An Innovative Smartphone-based Solution for Traffic Rule Violation Detection

Abstract: This paper introduces a novel smartphone-based solution to detect different traffic rule violations using a variety of computer vision and networking technologies. We propose the use of smartphones as participatory sensors, via their cameras, to detect moving and stationary objects (e.g., cars and lane markers) and understand the resulting driving and traffic violation of each object. We propose a novel framework which uses a fast in-mobile traffic violation detector for rapid detection of traffic rule violations. The smartphone then transmits the data to the cloud, where more powerful computer vision and machine learning operations detect the traffic violation with higher accuracy. We show that the proposed framework's detection is very accurate by combining a) a Haar-like feature cascade detector at the in-mobile level, and b) a deep-learning-based classifier and support-vector-machine-based classifiers in the cloud. The accuracy of the deep convolutional network is about 92% for true positives and 95% for true negatives. The proposed framework demonstrates the potential of mobile-based traffic violation detection, especially by combining accurate relative position and relative speed information. Finally, we propose a real-time scheduling scheme to optimize the use of battery and real-time bandwidth of the users, given partially known navigation information among the different users in the network, which is the realistic case. We show that navigation information is very important for better utilizing the battery and bandwidth of each user when the number of users is small compared to the navigation trajectory length. That is, the utilization of the resources is directly related to the number of available participants and the accuracy of navigation information.

Author 1: Waleed Alasmary

Keywords: Participatory sensing; traffic violation detection; automatic detection; applied computer vision; resources optimization

PDF

Paper 78: HarmonyMoves: A Unified Prediction Approach for Moving Object Future Path

Abstract: Trajectory prediction plays a critical role in many location-based services such as proximity-based marketing, routing services, and traffic management. The vast majority of existing trajectory prediction techniques utilize an object's motion history to predict its future path(s). In addition, they assume that objects move along recognized patterns or know their routes. However, these techniques fail when the history is unavailable. They also fail to predict the path when the query objects have lost their way or move with abnormal patterns. This paper introduces a system named HarmonyMoves to predict the future paths of moving objects on road networks without relying on their past trajectories. The system checks the harmony between the query object and other moving objects; if such harmony exists, it means that other objects in space are moving like the query object. A Markov model is then adopted to analyze this set of similar motion patterns and generate the next potential road segments of the object together with their probabilities. If no harmony exists, HarmonyMoves considers the query object abnormal (an object that has lost its way and needs support to return to known routes), and employs a new module to handle this case. A fundamental aspect of HarmonyMoves lies in achieving highly accurate prediction while efficiently returning query answers.
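The Markov step described above, turning observed trajectories into next-segment probabilities, can be sketched with a first-order transition model. The segment IDs and trajectories below are hypothetical; this illustrates the general technique rather than the paper's exact module:

```python
from collections import Counter, defaultdict

def build_transition_model(trajectories):
    """Count segment-to-segment transitions over a set of trajectories."""
    counts = defaultdict(Counter)
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, segment):
    """Rank candidate next segments by transition probability, highest first."""
    total = sum(counts[segment].values())
    return sorted(((s, c / total) for s, c in counts[segment].items()),
                  key=lambda p: -p[1])

# Hypothetical road-segment trajectories observed from harmonious objects.
trajs = [["s1", "s2", "s4"],
         ["s1", "s2", "s5"],
         ["s3", "s2", "s4"]]
model = build_transition_model(trajs)
```

Querying the model at segment "s2" ranks "s4" (seen twice) above "s5" (seen once), mirroring how the system would emit next potential road segments with probabilities.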

Author 1: Mohammed Abdalla
Author 2: Hoda M. O. Mokhtar
Author 3: Neveen ElGamal

Keywords: Trajectory prediction; machine learning; moving objects

PDF

Paper 79: A Survey on Cloud Data Security using Image Steganography

Abstract: Nowadays, cloud computing has proved its importance and is used by small and big organizations alike. The importance of cloud computing is due to the various services the cloud provides. One of these services is storage as a service (SaaS), which allows users to store their data in cloud databases. The drawback of this service is the security challenge, since a third party manages the data. Users need to feel safe storing their data in the cloud. Consequently, we need models that enhance data security. Image steganography is a way to protect data from unauthorized access: it allows users to conceal secret data in a cover image. In this paper, we review and compare some of the recent works proposed to protect cloud data using image steganography. The first comparison covers the algorithms the models use, along with their advantages and drawbacks. The second compares the models based on the aims of steganography: quality, where the model produces a stego-image of high quality; security, where the secret data is difficult to detect; and capacity, where the model can hide large amounts of data.
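As a concrete reference point for the surveyed techniques, the simplest spatial-domain scheme, least-significant-bit (LSB) embedding, can be sketched as below. The cover "image" is modeled as a flat byte sequence for brevity (one secret bit hidden per pixel byte); this is a generic illustration, not any specific surveyed model:

```python
def embed(cover, message):
    """Hide message bytes in the LSBs of the cover bytes (1 bit per pixel)."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(cover), "cover too small for this message"
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # overwrite only the LSB
    return bytes(stego)

def extract(stego, n_bytes):
    """Recover n_bytes of hidden data from the stego LSBs."""
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (stego[b * 8 + i] & 1)
        out.append(byte)
    return bytes(out)
```

Each pixel byte changes by at most 1, which is why plain LSB scores well on the quality axis but poorly on security, as such uniform LSB changes are detectable by statistical steganalysis; the surveyed models trade these aims off differently.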

Author 1: Afrah Albalawi
Author 2: Nermin Hamza

Keywords: Security; cloud computing; image steganography; data hiding; data storage

PDF

Paper 80: Developing Decision Tree based Models in Combination with Filter Feature Selection Methods for Direct Marketing

Abstract: Direct marketing is a form of advertising strategy which aims to communicate directly with the most promising customers for a certain product, using the most appropriate communication channel. Banks spend a huge amount of money on their marketing campaigns, so they are increasingly interested in this topic in order to maximize the efficiency of their campaigns, especially given the high competition in the market. All marketing campaigns are highly dependent on the huge amount of available customer data. Thus, special data mining techniques are needed to analyze these data, predict campaign efficiency and give decision makers indications of the main marketing features affecting marketing success. This paper focuses on four popular and common Decision Tree (DT) algorithms: SimpleCart, C4.5, RepTree and Random Tree. DT was chosen because the generated models take the form of IF-THEN rules, which are easy to understand for decision makers with little technical background in banks and other financial institutions. Data was taken from a Portuguese bank's direct marketing campaign. Filter-based feature selection is applied in the study to improve classification performance. Results show that SimpleCart gives the best results in predicting campaign success. Another interesting finding is that the five most significant features influencing direct marketing campaign success, which decision makers should focus on, are: call duration, offered interest rate, number of employees making the contacts, customer confidence and changes in price levels.

Author 1: Ruba Obiedat

Keywords: Direct marketing; data mining; decision tree; simpleCart; C4.5; reptree; random tree; weka; confusion matrix; class-imbalance

PDF

Paper 81: Modelling an Indoor Crowd Monitoring System based on RSSI-based Distance

Abstract: This paper reports a real-time localization system whose main function is to determine the location of devices accurately. The model can locate a smartphone's position passively (nothing needs to be set up on the smartphone) as long as its Wi-Fi is turned on. The algorithm uses intersection density and the Nonlinear Least Squares (NLS) method, which utilizes the Levenberg-Marquardt method. To minimize the localization error, a Kalman Filter (KF) is used. The algorithm is computed in Matlab. The best-performing model will be implemented in this Wi-Fi tracker system using RSSI-based distance for indoor crowd monitoring. According to the experimental results, the KF can improve the hit ratio, reaching 81.15%. The hit ratio is the share of predicted locations that fall less than 5 m from the actual location. It is obtained from several RSSI scans; the calculation is as follows: the number of non-error results divided by the number of RSSI scans, multiplied by 100%.
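Two of the quantities above can be sketched directly. The hit ratio follows the formula given in the abstract; the RSSI-to-distance conversion shown here is the standard log-distance path-loss model with assumed calibration values (a 1-m reference RSSI of -40 dBm and a path-loss exponent of 2.5), which are hypothetical and not necessarily the paper's calibration:

```python
def rssi_to_distance(rssi, tx_power=-40.0, n=2.5):
    """Log-distance path-loss model: d = 10**((tx_power - rssi) / (10 * n)).
    tx_power is the RSSI measured at 1 m; n is the environment-dependent
    path-loss exponent. Both values here are assumptions for illustration."""
    return 10.0 ** ((tx_power - rssi) / (10.0 * n))

def hit_ratio(errors_m, threshold=5.0):
    """Percentage of scans whose localization error is below the threshold."""
    hits = sum(1 for e in errors_m if e < threshold)
    return 100.0 * hits / len(errors_m)
```

With these parameters an RSSI of -40 dBm maps to 1 m and -65 dBm to 10 m; in practice both calibration values must be fitted per environment, which is one reason the Kalman filtering stage matters.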

Author 1: Syifaul Fuada
Author 2: Trio Adiono
Author 3: Prasetiyo
Author 4: Hartian Widhanto Shorful Islam

Keywords: Wi-Fi tracker system; RSSI-based distance; intersection density method; Nonlinear Least Square (NLS) method; Kalman Filter (KF)

PDF

Paper 82: Priority-based Routing Framework for Image Transmission in Visual Sensor Networks: Experimental Analysis

Abstract: A Visual Sensor Network (VSN) is a specialized Wireless Sensor Network (WSN) equipped with cameras. Its primary function is to capture images and videos and send them to power-rich sink nodes for processing. As image data is much larger than the scalar data sensed by a typical WSN, VSN applications require a much larger amount of data to be transferred to the sink. Due to WSN constraints such as low energy, limited CPU power and scarce memory, transmitting large amounts of data becomes challenging. On the other hand, some VSN applications need critical image features sooner than the entire image in order to take action. In this paper, we provide the details of experiments conducted with our proposed Priority-based Routing Framework for Image Transmission (PRoFIT). PRoFIT is designed to deliver critical image features to the sink node at high priority for early processing. Peak signal-to-noise ratio (PSNR) analyses show that PRoFIT improves VSN application response time compared to priority-less routing. This paper also contains the design of our VSN testbed. Multiple indoor and outdoor experiments were performed to validate the framework. The framework also improves the energy efficiency of the network: the results show that PRoFIT is 40% efficient in terms of energy consumption.
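The core idea of priority-based delivery can be sketched with a simple priority queue in which packets carrying critical image features are forwarded before bulk image blocks. The packet names and priority levels below are hypothetical, not part of PRoFIT:

```python
import heapq

class PriorityPacketQueue:
    # Packets with a lower priority number are forwarded first, so
    # critical image features overtake bulk image data in the queue.
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps FIFO order within one priority

    def enqueue(self, priority, packet):
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = PriorityPacketQueue()
q.enqueue(2, "image-block-1")   # bulk image data, low priority
q.enqueue(0, "edge-features")   # critical features, highest priority
q.enqueue(1, "roi-block")       # region of interest, medium priority
order = [q.dequeue() for _ in range(3)]
```

Even though the bulk block arrived first, the feature packet is dequeued first, which is the effect the framework exploits to cut application response time.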

Author 1: Emad Felemban
Author 2: Atif Naseer
Author 3: Adil Amjad

Keywords: Priority-based routing; visual sensor networks; test-bed; framework

PDF

Paper 83: A Deep Learning Approach for Handwritten Arabic Names Recognition

Abstract: Optical character recognition (OCR) has enabled many applications, as it has attained high accuracy for printed documents and for the handwriting of many languages. However, the state-of-the-art accuracy of Arabic handwritten word recognition lags far behind. Arabic script is cursive (both printed and handwritten); therefore, Arabic recognition systems traditionally segment a word into characters before recognizing them. Arabic word segmentation is very difficult because Arabic letters contain many dots; moreover, Arabic letters are context-sensitive and some letters overlap vertically. A holistic recognizer that recognizes common words directly (without segmentation) therefore seems a plausible model for recognizing common Arabic words. This paper presents the result of training a Convolutional Neural Network (CNN), holistically, to recognize Arabic names. Experimental results show that the proposed CNN is significantly superior to other recognizers used with the same dataset.

Author 1: Mohamed Elhafiz Mustafa
Author 2: Murtada Khalafallah Elbashir

Keywords: Deep learning; Arabic names recognition; holistic paradigm

PDF

Paper 84: Optimal Topology Generation for Linear Wireless Sensor Networks based on Genetic Algorithm

Abstract: A linear network is a type of wireless sensor network in which sparse nodes are deployed along a virtual line, for example on streetlights or on the columns of a bridge, tunnel or pipeline. The typical deployment of a Linear Wireless Sensor Network (LWSN) creates an energy hole around the sink node, since nodes near the sink deplete their energy faster than others. Optimal network topology is one of the key factors that can help improve LWSN performance and lifetime. Finding the optimal topology becomes hard in large networks, where the number of possible combinations is very high. We propose an Optimal Topology Generation (OpToGen) framework for LWSNs based on a genetic algorithm. Network deployment tools can use OpToGen to configure and deploy LWSNs. Through a discrete-event simulator, we demonstrate that the genetic algorithm achieves fast convergence to optimal topologies with less computational overhead than a brute-force search. We have evaluated OpToGen on the number of generations it takes to reach the best topology for LWSNs of various sizes. The trade-off between energy consumption and network size is also reported.
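A genetic algorithm for topology search follows the usual select-crossover-mutate loop. The sketch below illustrates that loop on a toy bitstring encoding of enabled relay positions; the count-the-ones fitness is a hypothetical stand-in for OpToGen's real energy/lifetime objective:

```python
import random

random.seed(1)  # deterministic toy run

N, POP, GENS = 12, 20, 80  # relay positions, population size, generations

def fitness(topo):
    # Hypothetical stand-in for the real energy/lifetime objective:
    # simply count enabled relays (a OneMax toy problem).
    return sum(topo)

def crossover(a, b):
    cut = random.randrange(1, N)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(topo, rate=0.05):
    # Flip each bit with small probability
    return [bit ^ (random.random() < rate) for bit in topo]

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 2]  # truncation selection keeps the best half
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]
best = max(pop, key=fitness)
```

The brute-force alternative would enumerate all 2^N topologies, which is exactly the combinatorial blow-up the abstract notes for large networks.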

Author 1: Adil A. Sheikh
Author 2: Emad Felemban

Keywords: Ad hoc networks; network topology; genetic algorithms; computer simulation; computer networks management; network lifetime estimation; optimization

PDF

Paper 85: DDoS Flooding Attack Mitigation in Software Defined Networks

Abstract: Distributed denial of service (DDoS) attacks, which have been extensively studied by the security community, today pose a new menace in the software defined networking (SDN) architecture. For example, disruption of the SDN controller could interrupt data communication in the whole SDN network. DDoS attacks can produce a great number of new, short traffic flows (e.g., a series of TCP SYN requests), which may launch malicious flooding requests that overload the controller and cause flow-table overloading attacks at SDN switches. In this work, we propose a lightweight and practical mitigation mechanism to protect the SDN architecture against DDoS flooding threats and ensure a secure and efficient SDN-based networking environment. Our proposal extends the data plane (DP) with a classification and mitigation module that analyzes new incoming packets, separates benign requests from SYN flood attacks, and performs adaptive countermeasures. The simulation results indicate that the proposed defense mechanism can efficiently tackle DDoS flood attacks in the SDN architecture as well as in the downstream servers.
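One simple classification rule for SYN floods, in the spirit of the module described above (though not the paper's P4 implementation), is to flag sources whose SYNs greatly outnumber their completed handshakes within a window. The threshold and addresses below are hypothetical:

```python
from collections import Counter

SYN_THRESHOLD = 3  # hypothetical per-source limit within one window

def classify(packets, threshold=SYN_THRESHOLD):
    # Count SYNs that are never followed by a matching ACK; sources
    # whose half-open count exceeds the threshold are flood suspects.
    syns, acks = Counter(), Counter()
    for src, flag in packets:
        if flag == "SYN":
            syns[src] += 1
        elif flag == "ACK":
            acks[src] += 1
    return {src for src in syns if syns[src] - acks[src] > threshold}

window = ([("10.0.0.9", "SYN")] * 6                       # attacker: SYNs, no ACKs
          + [("10.0.0.2", "SYN"), ("10.0.0.2", "ACK")])   # benign handshake
suspects = classify(window)
```

Flagged sources can then be rate-limited or dropped at the switch before their flows ever reach the controller, which is the data-plane placement the proposal argues for.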

Author 1: Safaa MAHRACH
Author 2: Abdelkrim HAQIQ

Keywords: Software Defined Networks (SDN); Distributed Denial of Service (DDoS); network security; P4 language; DDoS mitigation

PDF

Paper 86: Investigation of Deep Learning-based Techniques for Load Disaggregation, Low-Frequency Approach

Abstract: Unlike sub-metering, which requires individual appliances to be equipped with their own meters, non-intrusive load monitoring (NILM) uses algorithms to recover individual appliance consumption from the aggregated overall energy reading. Approaches that use low-frequency sampled data are more applicable to real-world smart meters, which typically sample at <= 1 Hz. In this paper, a systematic literature review of deep-learning-based approaches to the NILM problem is conducted, analyzing four key aspects of deep learning adoption: the deep learning model adopted, the features used to train the model, the data set used, and the model accuracy. Our study then analyzes the performance of four different deep learning approaches, namely the denoising autoencoder (DAE), recurrent long short-term memory (LSTM), recurrent gated recurrent unit (GRU), and sequence-to-point models. Our experiments are conducted on two data sets, REDD and UK-DALE. According to our analysis, the sequence-to-point model achieves the best results, with an average mean absolute error (MAE) of 14.98 W, compared to the other algorithms.
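A sequence-to-point model maps a sliding window of the aggregate signal to the appliance power at the window's midpoint, and is scored by MAE in watts. A minimal sketch of that input shaping and of the metric (toy data, not REDD/UK-DALE, and a toy window width):

```python
import numpy as np

def seq2point_windows(aggregate, width=5):
    # Sliding windows over the aggregate signal; a sequence-to-point
    # model predicts the appliance power at each window's midpoint.
    return np.lib.stride_tricks.sliding_window_view(aggregate, width)

def mae(pred, truth):
    # Mean absolute error in watts, the metric reported above.
    return float(np.mean(np.abs(pred - truth)))

agg = np.arange(10, dtype=float)   # toy aggregate meter reading
windows = seq2point_windows(agg)   # shape (6, 5): one row per window
midpoints = agg[2:-2]              # targets aligned to window centers
err = mae(np.array([10.0, 20.0]), np.array([12.0, 26.0]))
```

The window/target alignment (trimming half a window from each end) is the part that most often goes wrong when reimplementing sequence-to-point pipelines.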

Author 1: Abdolmaged Alkhulaifi
Author 2: Abdulah J. Aljohani

Keywords: NILM; deep learning; load disaggregation; recurrent long short-term memory; gated recurrent unit

PDF

Paper 87: Pedestrian Crowd Detection and Segmentation using Multi-Source Feature Descriptors

Abstract: Crowd analysis is receiving much attention from the research community due to its widespread importance in public safety and security. In order to automatically understand crowd dynamics, it is imperative to detect and segment the crowd from the background. Crowd detection and segmentation serve as a pre-processing step in most crowd analysis applications, for example crowd tracking, behavior understanding and anomaly detection. Intuitively, crowd regions can be extracted using background modeling or motion cues; however, these models accumulate many false positives when the crowd is static. In this paper, we propose a novel framework that automatically detects and segments crowds by integrating appearance features from multiple sources. We evaluate the proposed framework on challenging images with varying crowd densities, camera viewpoints and pedestrian appearances. Qualitative analysis shows that the proposed framework performs well, precisely segmenting crowds in complex scenes.

Author 1: Saleh Basalamah
Author 2: Sultan Daud Khan

Keywords: Crowd detection; Fourier analysis; crowd analysis; crowd segmentation

PDF

Paper 88: Towards an Improvement of Fourier Transform

Abstract: With the development of information technology and the coming era of big data, image signals play an increasingly significant role in our lives owing to the rapid growth of network communication technology, and correspondingly efficient image processing methods are urgently needed. The Fourier transform is an important image processing tool used in a wide range of applications, such as image filtering, image analysis, image compression and image reconstruction. It is one of the simplest transform methods used in mathematics and has a low runtime cost, with wide use in image processing, particularly for 2D and 3D object representation. This paper proposes a new Fourier transform called the Non-Uniform Fourier Transform (NUFT). The proposed descriptor takes into account changes of point index. An application is made to a 2D set of points and to a real image. The main advantages of the proposed transform are invariance under a change of index point and robustness to noise. Also, the extraction of invariants under rotation and affinity is immediate because linearity is assured. The proposed descriptor is tested on the MPEG-7 database and compared with the standard Fourier transform to show its efficiency. The experimental results prove the effectiveness of the proposed descriptor.
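For contrast with the proposed NUFT, the classical uniform Fourier descriptor of a 2D point set can be sketched as follows. Dropping the DC term makes the descriptor magnitudes invariant to translation, and rotation of the point set leaves them unchanged as well:

```python
import numpy as np

def fourier_descriptor(points):
    # Treat 2D points as complex samples z = x + iy and take the DFT.
    # The magnitudes |F_k| for k >= 1 are invariant to translation
    # (which only shifts the DC term) and to rotation (which multiplies
    # every z by a unit complex number).
    z = points[:, 0] + 1j * points[:, 1]
    spectrum = np.fft.fft(z)
    return np.abs(spectrum[1:])  # drop the DC term (centroid)

square = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
shifted = square + np.array([5.0, -3.0])  # translated copy
d1 = fourier_descriptor(square)
d2 = fourier_descriptor(shifted)
```

The uniform DFT above assumes equally spaced point indices; relaxing that assumption is precisely the gap the paper's non-uniform transform addresses.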

Author 1: Khalid Aznag
Author 2: Toufik Datsi
Author 3: Ahmed El Oirrak
Author 4: Essaid El Bachari

Keywords: Fourier transform; NUFT; noise; invariant

PDF

Paper 89: LEA-SIoT: Hardware Architecture of Lightweight Encryption Algorithm for Secure IoT on FPGA Platform

Abstract: The Internet of Things (IoT) is one of the emerging technologies of today's world, connecting billions of electronic devices, and providing data security to these devices during transmission, against attacks, is a big challenge. These devices are small and consume little power. Conventional security algorithms are computationally complex and not suitable for IoT environments. In this article, the hardware architecture of a new Lightweight Encryption Algorithm (LEA) for the Secure Internet of Things (SIoT) is designed, including encryption, decryption and the key generation process. The new LEA-SIoT is a hybrid combination of a Feistel network and a Substitution-Permutation Network (SPN). The encryption/decryption architecture is a composition of logical operations, substitution transformations and swapping, designed for 64-bit data inputs and 64-bit key inputs. The key generation process is designed with the help of the KHAZAD block cipher algorithm. The encryption and key generation processes execute in parallel in a pipelined architecture with five rounds, to reduce the hardware and computational complexity in IoT systems. The LEA-SIoT is designed on the Xilinx platform and implemented on an Artix-7 FPGA. Hardware metrics such as area, power and timing utilization are summarized, and a comparison of the LEA-SIoT with similar security algorithms is tabulated, showing its improvements.
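The Feistel half of a Feistel/SPN hybrid can be sketched generically over a 64-bit block. Only the structure below is standard; the round function, keys and round count are hypothetical placeholders, not the LEA-SIoT design:

```python
def feistel_encrypt(block64, round_keys, round_fn):
    # Balanced Feistel network: split the 64-bit block into two 32-bit
    # halves and mix one half with a keyed round function each round.
    L = (block64 >> 32) & 0xFFFFFFFF
    R = block64 & 0xFFFFFFFF
    for k in round_keys:
        L, R = R, L ^ round_fn(R, k)
    return (L << 32) | R

def feistel_decrypt(block64, round_keys, round_fn):
    # Decryption runs the same structure with the keys reversed; the
    # round function never needs to be invertible.
    L = (block64 >> 32) & 0xFFFFFFFF
    R = block64 & 0xFFFFFFFF
    for k in reversed(round_keys):
        L, R = R ^ round_fn(L, k), L
    return (L << 32) | R

def toy_round(half, key):
    # Hypothetical round function (NOT the paper's): rotate, add key.
    return (((half << 7) | (half >> 25)) + key) & 0xFFFFFFFF

keys = [0xA5A5A5A5, 0x3C3C3C3C, 0x0F0F0F0F, 0xF0F0F0F0, 0x12345678]
ct = feistel_encrypt(0xDEADBEEFCAFEBABE, keys, toy_round)
pt = feistel_decrypt(ct, keys, toy_round)
```

The attraction for constrained hardware is that encryption and decryption share one datapath, with only the key schedule order differing.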

Author 1: Bharathi R
Author 2: N. Parvatham

Keywords: IoT devices; security algorithm; encryption; decryption; key generation; FPGA

PDF

Paper 90: Delay-Aware and User-Adaptive Offloading of Computation-Intensive Applications with Per-Task Delay in Mobile Edge Computing Networks

Abstract: Mobile edge computing (MEC) is a new paradigm with great potential to extend mobile users' capabilities because of its proximity to users. It can contribute efficiently to optimizing energy consumption, preserving privacy, and reducing network traffic bottlenecks. In addition, computation-intensive offloading is an active research area that can reduce latency and energy consumption. Nevertheless, in multi-user networks with a multi-task scenario, selecting the tasks to offload is complex and critical. These selections and the resource allocation have to be carefully considered, as they affect the resulting energies and delays. In this work, we study a scenario of user-adaptive offloading in which each user runs a list of heavy computation tasks, and every task has to be processed in its associated MEC server within a fixed deadline. The proposed optimization problem targets the minimization of a weighted-sum normalized function of three metrics: energy consumption, total processing delay, and unsatisfied processing workload. The solution of the general problem is obtained from the solutions of two sub-problems, and all solutions are evaluated through a set of simulation experiments. The execution times are very encouraging for moderate problem sizes, and the proposed heuristic solutions give satisfactory results in terms of the users' cost function in pseudo-polynomial time.
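The weighted-sum normalized objective described above can be sketched as follows; the weights, normalization constants and metric values are all hypothetical, used only to show how two offloading decisions would be compared:

```python
def offload_cost(energy, delay, unsatisfied, weights, norms):
    # Weighted sum of the three normalized metrics from the abstract:
    # energy consumption, total processing delay, unsatisfied workload.
    terms = (energy, delay, unsatisfied)
    return sum(w * t / n for w, t, n in zip(weights, terms, norms))

# Toy comparison of two candidate offloading decisions
w = (0.5, 0.3, 0.2)          # hypothetical metric weights
norms = (10.0, 2.0, 5.0)     # e.g. max energy (J), delay (s), workload (MB)
local = offload_cost(8.0, 1.5, 0.0, w, norms)    # run everything locally
remote = offload_cost(3.0, 0.8, 1.0, w, norms)   # offload to the MEC server
```

Normalizing each metric before weighting keeps the three incommensurable quantities (joules, seconds, megabytes) on a common scale, which is what makes the scalar comparison meaningful.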

Author 1: Tarik Chanyour
Author 2: Youssef HMIMZ
Author 3: Mohamed EL GHMARY
Author 4: Mohammed Oucamah CHERKAOUI MALKI

Keywords: Mobile edge computing; user-adaptive offloading; computation-intensive offloading; per-task delay; tasks satisfaction optimization

PDF

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org