The Science and Information (SAI) Organization
IJACSA Volume 12 Issue 7

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.


Paper 1: Edge-based Video Analytic for Smart Cities

Abstract: Video analytics is an important tool for smart city development. Video analytics applications require large amounts of memory and high-performance processing devices. The problems with a cloud-based approach to video analytics are high latency and the large network bandwidth needed to transfer data to the cloud. To overcome these problems, we propose a model based on dividing jobs into smaller sub-tasks with lower processing requirements in a typical video analytics application for smart city development. An object detection, tracking, and pattern recognition method that reduces the size of videos on the edge network is proposed. We design a video analytics model and perform simulations using the iFogSim simulator. We also propose a Convolutional Neural Network (CNN) based object tracking model. The experimental verification shows that our tracking model is more than 96% accurate, and that the proposed edge-and-cloud-based model is more than 80% more effective than a cloud-only approach for video analytics applications.

Author 1: Dipak Pudasaini
Author 2: Abdolreza Abhari

Keywords: Video analytic; cloud computing; smart city; object detection; object tracking; edge network

PDF

Paper 2: SmartTS: A Component-Based and Model-Driven Approach to Software Testing in Robotic Software Ecosystem

Abstract: Validating the behaviour of commercial off-the-shelf components, and of the interactions between them, is a complex and often manual task. Treated like any other software product, a software component for a robot system is often tested only by the component developer. Test sets and results are often not available to the system builder, who may need to verify the functional and non-functional claims made by the component. The availability of test records is key in establishing compliance and thus in selecting the most suitable components for system composition. Providing empirically verifiable test records consistent with a component’s claims would greatly improve the overall safety and dependability of robotic software systems in open-ended environments. Additionally, a test and validation suite for a system built from the model package of that system empirically codifies its behavioural claims. In this paper, we present the “SmartTS methodology”: a component-based and model-driven approach to generate model-bound test-suites for software components and systems. The SmartTS methodology and tooling are not restricted to the robotics domain. The core contribution of SmartTS is support for test and validation suites derived from the model packages of components and systems. The test-suites in SmartTS are tightly bound to an application domain’s data and service models as defined in the RobMoSys (EU H2020 project) compliant SmartMDSD toolchain. SmartTS does not break component encapsulation for system builders while providing them complete access to the way a component is tested and simulated.

Author 1: Vineet Nagrath
Author 2: Christian Schlegel

Keywords: Model-Driven Engineering (MDE); Component-Based Software Engineering (CBSE); Model-Driven Testing (MDT); Component-Based Software Testing (CBST); Service Robotics; Software Quality; Automated Software Testing

PDF

Paper 3: Vietnamese Short Text Classification via Distributed Computation

Abstract: Social networking has been growing rapidly in Vietnam. The shared information is diverse and circulates in many forms. This calls for user-friendly solutions such as topic sorting and perspective analysis for analyzing community trends and advertisements, or for anticipating and monitoring the spread of bad news. Unfortunately, Vietnamese differs greatly from other languages, and little research has been conducted in the literature on message classification. The implementation of machine learning models for Vietnamese has not been thoroughly investigated, and these models’ performance is unknown when applied to a different language. Vietnamese text is a serialization of syllables; hence, word boundary identification is not trivial. This research portrays our endeavor to construct an effective distributed framework for addressing the task of classifying short Vietnamese texts on social networks using the idea of probability-based categorization. The authors argue that addressing this task shapes the successful combination of machine learning, natural language processing, and ambient intelligence. The proposed framework is effective and enables fast calculation, is suitable for implementation in Apache Spark, and meets the demand for dealing with large amounts of textual data on current social networks. Our data were collected from several online text sources and comprise 12,412 short messages classified into 5 different topics. The evaluation shows that our approach achieves an average classification accuracy of 82.73%. Having thoroughly studied the literature, we can state that this is the first attempt to classify short Vietnamese messages under a distributed computation framework.

Author 1: Hiep Xuan Huynh
Author 2: Linh Xuan Dang
Author 3: Nghia Duong-Trung
Author 4: Cang Thuong Phan

Keywords: Short text classification; naïve Bayes; Apache Spark; Vietnamese; distributed computation

PDF
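
The probability-based categorization that Paper 3 describes can be pictured with a small, non-distributed sketch. The paper itself runs on Apache Spark with Vietnamese word segmentation; the placeholder messages, topic labels, and the scikit-learn pipeline below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of TF-IDF + naive Bayes short-text classification. This is a
# non-distributed illustration of the probability-based categorization idea; the
# paper itself runs on Apache Spark and handles Vietnamese word boundaries.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical placeholder data: short messages and their topic labels.
messages = ["gia vang hom nay tang manh", "doi tuyen bong da thang dam",
            "thoi tiet mua to keo dai", "chung khoan giam diem sau",
            "tran dau toi nay rat hay", "bao so 5 sap do bo"]
topics = ["economy", "sports", "weather", "economy", "sports", "weather"]

X_train, X_test, y_train, y_test = train_test_split(
    messages, topics, test_size=0.33, random_state=42)

# TF-IDF features feed a multinomial naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```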

Paper 4: Development of Intelligent Tools for Detecting Resource-intensive Database Queries

Abstract: Resource-intensive queries, which consume excessive amounts of time, processor, disk, and memory resources, are one of the most common vulnerabilities of Database Management Systems (DBMS). The tools for monitoring and optimizing queries typically used in modern DBMS were analyzed, and their shortcomings were identified. Subsequently, the relevance of developing new intelligent tools for the timely and reliable detection of resource-intensive database queries was clearly justified. The study assembled an extended set of statistical parameters that are of interest for identifying resource-intensive queries. The initial set of query parameters was reduced by two consecutive methods: first, the set of indicators was normalized using a sigmoid function; second, a finite number of principal components was selected based on the Cattell test. The clustering of the query set was then performed using self-organizing Kohonen maps. Suggestions for further studies in the context of classification algorithms were given in light of the study’s conclusions.

Author 1: Salah M.M. Alghazali
Author 2: Konstantin Polshchykov
Author 3: Ahmad M. Hailan
Author 4: Lyudmila Svoykina

Keywords: Resource-intensive queries; database; detecting; self-organizing Kohonen maps; statistical parameters

PDF
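
The two-step parameter reduction and Kohonen-map clustering summarized in Paper 4's abstract can be sketched as follows. The synthetic query statistics, the choice of three components, and the use of scikit-learn and the minisom package are assumptions made for illustration, not the authors' tooling.

```python
# Minimal sketch of the reduction described above: sigmoid normalization of query
# indicators, selection of a few principal components, then clustering of the reduced
# vectors on a self-organizing Kohonen map. Data and library choices are illustrative.
import numpy as np
from scipy.special import expit            # logistic sigmoid
from sklearn.decomposition import PCA
from minisom import MiniSom

rng = np.random.default_rng(0)
queries = rng.normal(size=(200, 11))       # hypothetical query statistics (200 x 11)

normalized = expit(queries)                # step 1: squash every indicator into (0, 1)

pca = PCA(n_components=3)                  # step 2: keep a few principal components
reduced = pca.fit_transform(normalized)    # (the Cattell scree test would pick this number)

som = MiniSom(4, 4, reduced.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(reduced, 1000)            # step 3: cluster queries on a 4x4 Kohonen map
clusters = [som.winner(v) for v in reduced]
print("first five cluster cells:", clusters[:5])
```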

Paper 5: A New Approach for Network Steganography Detection based on Deep Learning Techniques

Abstract: One of the techniques that current cyber-attack methods often use to steal and exfiltrate data is to hide secret data in packets. This is the network steganography technique. Because millions of packets are sent and received every hour of internet activity, it is very difficult to detect the theft and transmission of system data in this form. Recent approaches often seek ways to compute and extract abnormal behaviors of packets to detect a steganography protocol or technique. However, such methods have the difficulty of not being able to detect abnormal packets when an attacker uses other steganography techniques. To solve this problem, this paper proposes a network steganography detection method using deep learning techniques. The highlight of this study is a set of newly proposed features based on different components of the packet. By combining these many components, this proposal not only provides the ability to detect many steganography techniques in the network, but also improves the ability to accurately detect abnormal packets. In addition, this study proposes to use deep learning for the task of detecting normal and abnormal packets. The authors want to take advantage of the big data analysis and processing capabilities of deep learning models in order to improve the ability to analyze and detect network steganography techniques. The experimental results in Section IV.D prove the effectiveness of the proposed method compared with other approaches.

Author 1: Cho Do Xuan
Author 2: Lai Van Duong

Keywords: Network steganography; network steganography detection method; abnormal packets; deep learning techniques

PDF

Paper 6: Is Face Recognition with Masks Possible?

Abstract: With the recent outbreak of the COVID-19 pandemic, wearing face masks has become extremely important to protect people and to reduce the spread of the virus. This measure has made many existing face recognition systems ineffective, as they were trained to work with unmasked faces. In this paper, several methods are proposed for masked face recognition. Two pre-trained deep learning architectures (VGG16 and MobileNetV2) and the Histogram of Oriented Gradients (HOG) technique were used to extract the relevant features from face images of celebrities. A SoftMax layer and Support Vector Machines (SVM) were used for classification. Five scenarios were devised to assess the different models and approaches. With an accuracy of 96.8%, the best model was obtained with MobileNetV2 with a SoftMax layer on the dataset consisting of a mixture of masked and unmasked images. Three different types of masks were also used in this study. The mean accuracy was 91.35% when the same type of mask was used for training and testing. However, the accuracy dropped by an average of 5.6% when a different type of mask was used for training and testing. A contactless attendance system using the best masked face recognition model has also been implemented.

Author 1: Yaaseen Muhammad Saib
Author 2: Sameerchand Pudaruth

Keywords: Face detection; face recognition; face mask; deep learning; VGG16; MobileNetV2; HOG

PDF
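
Paper 6's pipeline of pre-trained feature extraction followed by an SVM can be sketched roughly as below; the image paths, identities, and input size are hypothetical, and the authors' exact training setup is not reproduced.

```python
# Minimal sketch of the feature-extraction-plus-classifier pipeline the abstract
# describes: a pre-trained MobileNetV2 backbone embeds (masked) face crops, and an
# SVM classifies the identity. Paths and labels are hypothetical placeholders.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                          pooling="avg", input_shape=(224, 224, 3))

def embed(image_paths):
    """Load face crops, preprocess them for MobileNetV2 and return embeddings."""
    imgs = [tf.keras.preprocessing.image.img_to_array(
                tf.keras.preprocessing.image.load_img(p, target_size=(224, 224)))
            for p in image_paths]
    batch = tf.keras.applications.mobilenet_v2.preprocess_input(np.stack(imgs))
    return base.predict(batch, verbose=0)

# Hypothetical training/testing lists of face images and identities.
train_paths, train_ids = ["faces/a1.jpg", "faces/b1.jpg"], ["person_a", "person_b"]
test_paths, test_ids = ["faces/a2_masked.jpg", "faces/b2_masked.jpg"], ["person_a", "person_b"]

clf = SVC(kernel="linear").fit(embed(train_paths), train_ids)
print("predicted:", clf.predict(embed(test_paths)))
```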

Paper 7: Requirements Engineering: A State of Practice in Gulf Cooperation Countries

Abstract: Requirements Engineering (RE) is one of the crucial elements of successful software development. Nevertheless, in terms of research discussing the failure or success of various products, little has been undertaken to examine this area as it pertains to the Gulf Cooperation Council (GCC) nations, i.e. Saudi Arabia (KSA), Kuwait, the United Arab Emirates (UAE), Bahrain, Qatar, and Oman. The aim of this research is to present an analysis of the current ways in which software is developed in these nations. The researchers undertook a survey of practitioners in software development, asking questions regarding their recent work. The survey was based on an extensive earlier survey that was adapted in view of contemporary software development practice. The research reports on requirements practices and how they relate to project sponsors/customers/users and project management. The respondents came from GCC-nation companies, most of whom worked on developing software in-house. The outcomes demonstrate that the majority of IT companies in these nations do not employ optimal methodologies for requirements engineering processes, using their own instead. In addition, project managers often lack complete authority. Comparing our findings with past research, requirements engineering practice is still inadequate in these nations. Thus, the research results are particularly useful, as the data are derived from countries where published research about software development practices is scant.

Author 1: Asaad Alzayed
Author 2: Abdulwahed Khalfan

Keywords: Requirements engineering; project success; software development; requirements engineering practices; GCC countries

PDF

Paper 8: Structural Limitations with K Means Algorithms in Research in Perú

Abstract: In the world of science there are high-level, moderate-level, and low-level emerging countries. The indicators are investment in research and development (I&D), number of universities, investment, researchers, intellectual production, expenditure on education, gross domestic product (PBI), and quality of life (IDH). Methodologically, the study is basic, explanatory, and cluster-based. Thirty-seven countries were analyzed. The data come from the FMI, datosmacro.com, UNESCO, and URWU, and cover 11 indicators. The data were taken at two points in time, 2006 and 2019. The results show R² = 0.9887 for the model explaining the behavior of PBI by investment in I&D. The positive and significant relationship between IDH and PBI per capita, at 0.824, is transcendent. In conclusion, there are three clusters with clearly differentiated indicators. Peru’s problem is structural in that it does not have a per capita PBI of $30,000 per person or more. Investment in I&D in Peru is low and PBI is also low. Therefore, countries with higher investments in science have high PBIs and better IDH.

Author 1: Javier Pedro Flores Arocutipa
Author 2: Jorge Jinchuña Huallpa
Author 3: Julio César Lujan Minaya
Author 4: Ruth Daysi Cohaila Quispe
Author 5: Juan Luna Carpio
Author 6: Gamaniel Carbajal Navarro

Keywords: Researchers; PBIpc; investment in I&D; exports; universities

PDF
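
The cluster analysis behind Paper 8's three country groups is, in essence, k-means on standardized indicators; the sketch below uses made-up indicator values purely to show the mechanics.

```python
# Minimal sketch of k-means grouping of countries by science/economy indicators.
# The indicator values are hypothetical placeholders; the study uses 11 indicators
# for 37 countries taken from IMF, UNESCO and related sources.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

countries = ["Country_A", "Country_B", "Country_C", "Country_D", "Country_E", "Country_F"]
# columns: R&D investment (% GDP), GDP per capita (kUSD), HDI  (illustrative values)
indicators = np.array([[3.2, 45.0, 0.93],
                       [0.1,  6.5, 0.75],
                       [2.8, 40.0, 0.91],
                       [0.2,  7.0, 0.76],
                       [1.2, 15.0, 0.82],
                       [1.0, 14.0, 0.81]])

X = StandardScaler().fit_transform(indicators)        # put indicators on one scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for name, cluster in zip(countries, labels):
    print(f"{name}: cluster {cluster}")
```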

Paper 9: The Effects of Adaptive Feedback on Student’s Learning Gains

Abstract: There is an increase in the implementation of adaptive feedback models, which focus on the relationship between adaptive feedback and learning gains. This literature suggests that the complex relationship between feedback, task complexity, pedagogical principles, and student characteristics affects the significance of feedback effects. However, current studies have provided insufficient research on the effect of adaptive feedback characteristics on students’ learning gains. Thus, there is a need to investigate the effect of multiple adaptive feedback characteristics on students’ learning gains. The proposed adaptive feedback model supports the retrieval of appropriate feedback for students based on established weights between related concepts. In comparing three experimental groups, students who were provided with adaptive feedback showed learning gains and normalized learning gains of 0.87 and 0.05 over the normal feedback group, and of 0.97 and 0.07 over the non-feedback group. This research yielded better outcomes than previous similar studies.

Author 1: Andrew Thomas Bimba
Author 2: Norisma Idris
Author 3: Ahmed Al-Hunaiyyan
Author 4: Salwa Ungku Ibrahim
Author 5: Naharudin Mustafa
Author 6: Izlina Supa’at
Author 7: Norazlin Zainal
Author 8: Mohd Yahya Ahmad

Keywords: Authoring tools and methods; evaluation of CAL systems; intelligent tutoring systems; teaching/learning strategies; pedagogical issues

PDF
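
Paper 9 reports learning gains and normalized learning gains; the abstract does not spell out its formula, but a common convention (Hake's normalized gain) is sketched below as an assumption for readers unfamiliar with the measure.

```python
# Minimal sketch of how learning gain and normalized learning gain are commonly
# computed from pre-test and post-test scores (Hake's formula). This is an assumption
# for illustration; the paper does not give its exact formula in the abstract.
def learning_gain(pre: float, post: float) -> float:
    """Raw gain: simple difference between post-test and pre-test scores."""
    return post - pre

def normalized_learning_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Normalized gain: fraction of the possible improvement actually achieved."""
    return (post - pre) / (max_score - pre)

# Hypothetical group averages on a 100-point test.
pre, post = 42.0, 61.0
print(f"gain = {learning_gain(pre, post):.2f}, "
      f"normalized gain = {normalized_learning_gain(pre, post):.2f}")
```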

Paper 10: A Fuzzy MCDM Approach for Structured Comparison of the Health Literacy Level of Hospitals

Abstract: The primary objective of this study is to develop a hybrid multi-criteria decision-making (MCDM) model to evaluate and compare the organizational health literacy responsiveness (OHLR) level of hospitals. To achieve this goal, health literacy performance indicators are selected, some potential uses of single and hybrid MCDM and qualitative approaches for structured comparison purposes are illustrated, and a common hybrid approach based on the Fuzzy Analytic Hierarchy Process and the fuzzy Delphi method is chosen, developed, and applied. To compare the proposed model with its classical non-fuzzy version (Qualitative-AHP), a case study of the effect of their implementation on structured comparison decisions is conducted, and the Bland-Altman agreement method is applied to compare the results obtained by them. The results show the suitability of both hybrid approaches for solving the problem. They also show that their application leads to distinctive outcomes. Robust fuzzy-based outcomes, a small agreement interval (< 0.0113), and a small average change in the hospitals’ ratings (< 2.08%) are observed between the results acquired by the fuzzy-based approach and those defined by the other model. Based on these results, a fuzzy-based model is recommended for the structured comparison of the OHLR level of hospitals under uncertainty conditions. It supports sustainable planning practices and helps to improve and effectively distribute the necessary resources.

Author 1: Abed Saif Ahmed Alghawli
Author 2: Adel A. Nasser
Author 3: Mijahed N. Aljober

Keywords: Health literacy; the organizational health literacy standard; fuzzy analytic hierarchy process; fuzzy Delphi method; structured comparison

PDF

Paper 11: WorkStealing Algorithm for Load Balancing in Grid Computing

Abstract: Grid computing is a computer network in which many resources and services are shared for performing a specific task. The term grid appeared in the mid-1990s, and due to the computational capability, efficiency, and scalability provided by the shared resources, it is used nowadays in many areas, including business, e-libraries, e-learning, military applications, medicine, physics, and genetics. In this paper, we propose the WorkStealing-Grid Cost Dependency Matrix (WS-GCDM) algorithm, which schedules DAG tasks according to their data transfer cost, the dependencies between tasks, and the load of the available resources. The WS-GCDM algorithm is an enhanced version of the GCDM algorithm. WS-GCDM balances the load among all available resources in the grid system, unlike GCDM, which uses a specific number of resources regardless of how many are available. WS-GCDM achieves a better makespan than the GCDM algorithm and enhances system performance by 13% up to 17% in experiments using DAGs with dependent tasks.

Author 1: Hadeer S. Hossam
Author 2: Hala Abdel-Galil
Author 3: Mohamed Belal

Keywords: Grid computing; static scheduling; dynamic scheduling; load balancing; directed acyclic graph (DAG)

PDF

Paper 12: Multi-parameter Coordinated Public School Admission Model by using Stable Marriage

Abstract: School admission is a very important process in improving education quality. Meanwhile, one problem in school admission systems is mismatch: there are unassigned applicants and unallocated seats. In Indonesia, a zone-based model is adopted in the public-school admission system, and students are assigned to their nearest school. Besides location, students’ academic performance and economic level are also considered. Based on this, this work proposes a coordinated public school admission model that accommodates a flexible number of parameters. It is built on the stable marriage algorithm, or the deferred-acceptance algorithm as its derivative. The proposed model is a combination of the mandatory approach and the school choice approach. The parameters considered are school-home distance, student national exam score, school rank, applicant poor status, and applicant preference. A simulation is conducted to investigate the performance of the proposed model compared with the previous models: the zone-based model and the two-step model. The prioritization of the parameters is shown to be easily adjustable. The simulation results show that in the over-demand condition, the proposed model yields a higher average student national exam score and a higher average school-home distance than the previous models. When the number of applicants is twice the number of seats, the proposed model yields a 6.6 percent higher average student national exam score and a 71.4 percent higher average school-home distance. The simulation results also show that the mismatch is solved.

Author 1: Purba Daru Kusuma

Keywords: School admission; school choice; stable marriage; deferred-acceptance; education

PDF
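
The deferred-acceptance (stable marriage) matching that Paper 12 builds on can be sketched in a few lines. The preference lists, school scores, and capacities below are hypothetical; the paper itself combines several parameters (distance, exam score, poor status, applicant preference) into the schools' rankings.

```python
# Minimal sketch of deferred-acceptance (Gale-Shapley) matching: students apply in
# order of preference and each school keeps the best applicants seen so far, subject
# to capacity. Names, scores and capacities are hypothetical.
def deferred_acceptance(student_prefs, school_rank, capacity):
    """student_prefs: {student: [schools in preference order]}
    school_rank:  {school: {student: score, higher is better}}
    capacity:     {school: seats}"""
    free = list(student_prefs)                     # students not yet placed
    next_choice = {s: 0 for s in student_prefs}    # index of next school to try
    held = {sc: [] for sc in capacity}             # tentatively accepted students

    while free:
        student = free.pop(0)
        if next_choice[student] >= len(student_prefs[student]):
            continue                               # student has exhausted preferences
        school = student_prefs[student][next_choice[student]]
        next_choice[student] += 1
        held[school].append(student)
        held[school].sort(key=lambda s: school_rank[school][s], reverse=True)
        if len(held[school]) > capacity[school]:   # over capacity: reject the weakest
            free.append(held[school].pop())
    return held

prefs = {"ani": ["SMP1", "SMP2"], "budi": ["SMP1", "SMP2"], "cici": ["SMP1", "SMP2"]}
rank = {"SMP1": {"ani": 88, "budi": 75, "cici": 91}, "SMP2": {"ani": 88, "budi": 75, "cici": 91}}
seats = {"SMP1": 2, "SMP2": 2}
print(deferred_acceptance(prefs, rank, seats))     # {'SMP1': ['cici', 'ani'], 'SMP2': ['budi']}
```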

Paper 13: IoT-based Smart Greenhouse with Disease Prediction using Deep Learning

Abstract: Rapid industrialization and urbanization have led to a decrease in agricultural land and productivity worldwide. Combined with the increasing demand for chemical-free organic vegetables from educated urban households, greenhouses are quickly catching on for their specialized advantages, especially in countries with extreme weather. They provide an ideal environment for longer and more efficient growing seasons and ensure profitable harvests. The present paper designs and demonstrates a comprehensive IoT-based Smart Greenhouse system that implements a novel combination of monitoring, alerting, cloud storage, automation, and disease prediction, viz. a readily deployable complete package. It continuously keeps track of ambient conditions such as temperature, humidity, and soil moisture to ensure a higher crop yield and immediate redressal in case of abnormal conditions. It also has a built-in automatic irrigation management system. Finally, it employs the most efficient deep learning model for disease identification from leaf images. Furthermore, with memory and storage optimization through cloud storage, an individual living in the city can also build a greenhouse, monitor it from home, and take redressal measures as and when desired.

Author 1: Neda Fatima
Author 2: Salman Ahmad Siddiqui
Author 3: Anwar Ahmad

Keywords: Cloud; deep learning; greenhouse; humidity; IoT; soil moisture; temperature

PDF

Paper 14: Snapshot of Energy Optimization Techniques to Leverage Life of Wireless Sensor Network

Abstract: Energy optimization in Wireless Sensor Networks (WSN) deals with techniques that target a higher degree of energy efficiency using resource-constrained sensor nodes, with minimal inclusion of additional resources. At present, there are various approaches and techniques for addressing the energy problem, but not all of them can be considered optimized approaches. Therefore, this paper reviews the existing energy optimization schemes, categorizes them, and briefly discusses their strengths and weaknesses to offer a compact snapshot of existing energy optimization techniques in WSN. The paper also explores updated research trends and highlights open research problems in WSN. It is anticipated that the findings of this manuscript will offer a true picture of how effectively existing studies deal with energy challenges, so that favorable directions of investigation toward optimized solutions with promising outcomes can emerge.

Author 1: Kavya A P
Author 2: D J Ravi

Keywords: Battery; energy efficiency; energy optimization; network lifetime; sensor node; wireless sensor network

PDF

Paper 15: Truck Scheduling Model in the Cross-docking Terminal by using Multi-agent System and Shortest Remaining Time Algorithm

Abstract: One of the most important and critical problems in a cross-docking system is truck scheduling. Many studies have assumed that temporary storage is unlimited, whereas in the real world temporary storage is limited. Many studies focus on minimizing total completion time, while studies that focus on minimizing temporary storage are hard to find, although this aspect is very important. Due to its complexity, especially in cross-docking systems with multi-product characteristics, manual scheduling can hardly achieve these goals. Many studies have used techniques such as genetic algorithms (GA) and mixed integer programming, but these methods are computationally expensive. Based on this problem, in this work we propose a new truck scheduling model for a cross-docking terminal with a limited temporary storage constraint. The model is developed using a multi-agent system. The main contribution of this work is a multi-agent-based truck scheduling model with a limited temporary storage capacity constraint and a temporary truck changeover permit. The model comprises three agents: an inbound-trucks scheduler agent, an outbound-trucks scheduler agent, and a material handler agent. The shortest remaining time (SRT) algorithm is adopted in every agent. Based on the simulation results, the proposed model is proven competitive compared with the existing FIFO-based models and an integer-programming-based model. Compared with the integer-programming model, it achieves a 41.8 percent lower maximum inventory level. Compared with the FIFO-based models, it achieves a 52.1 to 55.1 percent lower maximum inventory level. In terms of total time, it is 0.2 to 2.2 percent lower than the FIFO-based models and 7.2 percent higher than the integer-programming-based model.

Author 1: Purba Daru Kusuma

Keywords: Truck scheduling; cross-docking system; multi agent system; shortest remaining time; intelligent supply chain

PDF

Paper 16: A Note on Time and Space Complexity of RSA and ElGamal Cryptographic Algorithms

Abstract: The study of the computational complexity of algorithms is highly germane to the design and development of high-speed computing devices. The whole essence of computation is principally influenced by the efficiency of algorithms; this is all the more the case with algorithms whose solution space explodes exponentially. Cryptographic algorithms are good examples of such algorithms. The goal of this study is to compare the computational speeds of the RSA and ElGamal cryptographic algorithms by surveying the work done so far by researchers. This study has therefore examined some of the results of studies already done and highlighted which of the RSA and ElGamal algorithms performed better under given parameters. It is expected that this study will spur further investigation of the behaviour of cryptographic structures in order to ascertain their complexity and impact on the field of theoretical computer science. The experimental results of many of the papers reviewed showed that the RSA cryptographic algorithm performs better with regard to energy usage, time complexity, and space complexity for text, image, and audio data during the encryption process, while some studies showed that ElGamal performs better in terms of time complexity during the decryption process.

Author 1: Adeniyi Abidemi Emmanuel
Author 2: Okeyinka Aderemi E
Author 3: Adebiyi Marion O
Author 4: Asani Emmanuel O

Keywords: RSA algorithm; ElGamal algorithm; time complexity; space complexity; data security

PDF
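
For readers unfamiliar with the algorithms Paper 16 surveys, a toy textbook-RSA round trip with a simple timing comparison is sketched below; the tiny key is for illustration only and the ElGamal side is not shown.

```python
# Minimal sketch of textbook RSA with a timing comparison of encryption vs. decryption,
# in the spirit of the complexity comparison the abstract surveys. Real RSA uses
# 2048-bit or larger moduli; this toy key only shows the arithmetic.
import timeit

p, q = 61, 53                      # toy primes
n, phi = p * q, (p - 1) * (q - 1)  # modulus and Euler's totient
e = 17                             # public exponent, coprime with phi
d = pow(e, -1, phi)                # private exponent (modular inverse of e)

m = 65                             # plaintext as an integer < n
c = pow(m, e, n)                   # encryption: c = m^e mod n
assert pow(c, d, n) == m           # decryption recovers the message

enc_t = timeit.timeit(lambda: pow(m, e, n), number=100_000)
dec_t = timeit.timeit(lambda: pow(c, d, n), number=100_000)
print(f"ciphertext={c}, encrypt time={enc_t:.4f}s, decrypt time={dec_t:.4f}s")
```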

Paper 17: Anomaly Detection on Medical Images using Autoencoder and Convolutional Neural Network

Abstract: The detection of anomalies in medical image datasets improves prognosis by discovering new facts hidden in the data. The present study discusses anomaly detection using autoencoders and convolutional neural networks. The autoencoder identifies the imbalance between normal and abnormal samples and creates learning models that are flexible and accurate on training data. The problem is addressed in four stages: 1) training: an autoencoder is initialized with the hyper-parameters and trained on lung cancer CT scan images; 2) testing: the autoencoder reconstructs the input from the latent space representation with a slight variation from the original data, quantified by a reconstruction error measured as the Mean Squared Error (MSE); 3) evaluation: the MSE values of the training and test datasets are compared, and anomalous data whose MSE values exceed a base threshold are detected as anomalies; 4) validation: efficiency metrics such as accuracy and MSE scores are used in both the training and validation phases. The dataset was further classified as benign and malignant. The accuracies reported for outlier detection and the classification task are 98% and 97.2%, respectively. Thus, the proposed autoencoder-based anomaly detection can positively isolate anomalies from CT scan images of lung cancer.

Author 1: Rashmi Siddalingappa
Author 2: Sekar Kanagaraj

Keywords: Anomalies; autoencoder; convolutional neural networks (CNN) (ConvNets); deep neural network architecture; regularization

PDF
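
Paper 17's reconstruction-error workflow (train an autoencoder, compare MSE against a base threshold) can be sketched as follows; the image size, layer widths, and threshold rule are illustrative assumptions rather than the authors' exact architecture.

```python
# Minimal sketch of autoencoder-based anomaly detection: train on (flattened) normal
# scans, then flag samples whose reconstruction MSE exceeds a threshold.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
x_train = rng.random((512, 64 * 64)).astype("float32")   # placeholder "normal" scans
x_test = rng.random((64, 64 * 64)).astype("float32")     # placeholder unseen scans

inputs = tf.keras.Input(shape=(64 * 64,))
encoded = tf.keras.layers.Dense(128, activation="relu")(inputs)
latent = tf.keras.layers.Dense(32, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(128, activation="relu")(latent)
outputs = tf.keras.layers.Dense(64 * 64, activation="sigmoid")(decoded)

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=32, verbose=0)

def mse_per_sample(x):
    recon = autoencoder.predict(x, verbose=0)
    return np.mean((x - recon) ** 2, axis=1)

# Base threshold taken from the training reconstruction errors (mean + 3 std is one
# common choice, assumed here); test samples above it are reported as anomalies.
train_err = mse_per_sample(x_train)
threshold = train_err.mean() + 3 * train_err.std()
anomalies = mse_per_sample(x_test) > threshold
print(f"flagged {anomalies.sum()} of {len(x_test)} test samples as anomalous")
```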

Paper 18: A Comparative Study of Unimodal and Multimodal Interactions for Digital TV Remote Control Mobile Application among Elderly

Abstract: A study was conducted on the user interaction designs for TV remote control applications that are preferable among the elderly. Nowadays, the smart home concept is widely accepted around the globe. Many applications have been developed based on smart home concepts, such as smart remote-control applications for TVs and air conditioners. These applications are helpful in daily life. However, the elderly tend not to use them because of the complexity of the processes and unfriendly interaction design. Therefore, this study was conducted to determine which interaction design is preferable for the elderly, enhancing their experience in using TV remote control applications, besides encouraging them to use one in daily life and keep up with new technologies. In this paper, two new interaction design prototypes, a touch-based-only (unimodal) interaction and a multimodal interaction, and an existing TV remote control application were compared by conducting usability testing of these three applications with the elderly. Three parameters were considered to compare the three interaction designs: task completion time, error rate, and satisfaction. Also, using the usability testing data, statistical analysis was conducted to find out which type of interaction is preferred by the elderly. Ten elderly participants took part in the usability testing. The results show a significant difference among the three interaction designs regarding task completion time and satisfaction, but not error rate. After considering the usability testing and analyses conducted, the elderly prefer a unimodal interaction design in the TV remote control application. Nevertheless, the preferred unimodal interaction was not the typical “tapping buttons” user interface of existing applications. Instead, the favourable interaction design was the one that involved swiping gestures to replace several features that were implemented using buttons in existing TV remote control applications.

Author 1: Nor Azman Ismail
Author 2: Nurul Aiman Ab Majid
Author 3: Nur Haliza Abdul Wahab
Author 4: Farhan Mohamed

Keywords: HCI; usability testing; unimodal; multimodal; elderly

PDF

Paper 19: Harnessing Emotive Features for Emotion Recognition from Text

Abstract: With the prevalence of affective computing, emotion recognition becomes vital in any work related to natural language understanding. The inspiration for this work is to supply machines with complete emotional intelligence and integrate them into routine life to satisfy complex human desires and needs. Since text is still a common communication medium on social media, it is important to analyze the emotions expressed in text, which is challenging due to the absence of audio-visual cues. Additionally, conversational text conveys many emotions through communication contexts. Emoticons serve as a writer’s self-annotation of emotion in text. Therefore, a machine learning-based text emotion recognition model using emotive features is proposed and evaluated on the SemEval-2019 dataset. The proposed work involves the exploitation of different emotion-based features with classical machine learning classifiers such as SVM, multilayer perceptron, REPTree, and decision tree classifiers. The proposed system performs competitively, with an F-score of 65.31% and an accuracy of 87.55%.

Author 1: Rutal Mahajan
Author 2: Mukesh Zaveri

Keywords: Emotion recognition; emotive features; natural language processing; affective computing

PDF

Paper 20: Monte Carlo Ray Tracing based Method for Investigation of Multiple Reflection among Trees

Abstract: A Monte Carlo Ray Tracing (MCRT) method for investigating multiple reflections among trees is proposed. In forest research (Leaf Area Index (LAI), Normalized Difference Vegetation Index (NDVI), forest type, tree age, etc.) with spaceborne optical sensor data, errors due to the influence of multiple reflections among trees on the estimation of at-sensor radiance have to be considered. This influence is difficult to formulate in a radiative transfer equation; the proposed method allows it to be estimated. Through experiments with a miniature-sized forest, the proposed method is validated. It is also found that an influence of a few percent to more than 10% due to multiple reflections among trees is to be anticipated. Furthermore, the influence on the estimation of at-sensor radiance is clarified. The potential of the code is then illustrated over different types of forests, including coniferous and broadleaf canopies.

Author 1: Kohei Arai

Keywords: Radiative transfer equation; Monte Carlo ray tracing (MCRT); multiple reflection among trees; forest research; canopy reflectance; ellipse and cone shaped trees model

PDF

Paper 21: Recent Progress on Bio-mechanical Energy Harvesting System from the Human Body: Comprehensive Review

Abstract: Energy harvesting is a powerful technique for producing clean and renewable energy with better infrastructure improvement. An exhaustive review of recent progress and developments in bio-mechanical energy harvesting (BMEH) techniques from the human body is presented in this manuscript. BMEH from the human body is categorized into three parts, namely, piezoelectric energy harvesting (PEEH), triboelectric energy harvesting (TEEH), and electromagnetic energy harvesting (EMEH). Each energy harvesting system is discussed with its working principles and mathematical equations, and progress in each is illustrated with a few demonstrated works. Applications of each energy harvesting technique from recent research work are addressed in detail. A summary of each energy harvester from the human body or its motion, with advantages, limitations, performance metrics, current methods, and the human body parts involved, is highlighted in tabular form. The critical challenges and issues, with possible solutions, are also discussed.

Author 1: Mohankumar V
Author 2: G.V. Jayaramaiah

Keywords: Bio-mechanical; energy harvesting; electromagnetic; human-body; piezoelectric; triboelectric

PDF

Paper 22: CRS-iEclat: Implementation of Critical Relative Support in iEclat Model for Rare Pattern Mining

Abstract: The purpose of this research is to develop a performance enhancement of the Incremental Eclat (iEclat) model by embedding Critical Relative Support (CRS) in the mining of infrequent itemsets. The CRS measure acts as an interestingness measure (filter) in the iEclat model, which comprises the i-Eclat-diffset, i-Eclat-sortdiffset, and i-Eclat-postdiffset algorithms for infrequent (rare) itemset mining. Association rule mining is performed to reveal the relationships among itemsets in a transactional database. The task of association rule mining is to discover whether frequent or infrequent patterns exist in the database; if so, interesting relationships between these frequent or infrequent itemsets can reveal new patterns for future decision making. Regardless of whether itemsets are frequent or infrequent, the persistent issues are the execution time needed to display the rules and the high memory consumption during the mining process. The CRS-iEclat engine is proposed to overcome these issues. Experimental results indicate that CRS-iEclat outperforms iEclat by 54% to 100% in execution time (ET) on the selected databases, showing the improvement in ET efficiency.

Author 1: Wan Aezwani Wan Abu Bakar
Author 2: Mustafa Man
Author 3: Zailani Abdullah
Author 4: Mahadi B Man

Keywords: Critical relative support; equivalence class transformation (Eclat); iEclat model; interestingness measure

PDF

Paper 23: Pre-trained CNNs Models for Content based Image Retrieval

Abstract: Content-based image retrieval (CBIR) is a common recent method for image retrieval and is based mainly on two pillars: extracted features and similarity measures. Low-level image representations, based on colour, texture, and shape properties, are the most common feature extraction methods used by traditional CBIR systems. Since these traditional handcrafted features require good prior domain knowledge, inaccurate features used in this type of CBIR system may widen the semantic gap and lead to very poor retrieval results. Hence, feature extraction methods that are independent of domain knowledge and can learn automatically from the input image are highly useful. Recently, pre-trained deep convolutional neural networks (CNN) with transfer learning facilities have shown the ability to generate and extract accurate and expressive features from image data. Unlike other types of deep CNN models, which require huge amounts of data and massive processing time for training, pre-trained CNN models have already been trained on thousands of classes of large-scale data, including huge numbers of images, and their information can easily be used and transferred. ResNet18 and SqueezeNet are successful and effective examples of pre-trained CNN models used recently in many machine learning applications, such as classification, clustering, and object recognition. In this study, we have developed CBIR systems based on features extracted using the ResNet18 and SqueezeNet pre-trained CNN models. We utilize these pre-trained CNN models to extract two groups of features that are stored separately and later used for online image searching and retrieval. Experimental results on two popular image datasets, Core-1K and GHIM-10K, show that the ResNet18 feature-based CBIR method has an overall accuracy of 95.5% and 93.9% on the two datasets, respectively, greatly outperforming the CBIR method based on traditional handcrafted features.

Author 1: Ali Ahmed

Keywords: Pre-trained deep neural networks; transfer learning; content based image retrieval

PDF
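
A pre-trained-CNN CBIR pipeline in the spirit of Paper 23 can be sketched as below: ResNet18 embeddings are computed offline for a gallery, and a query is answered by cosine similarity. The file paths and retrieval details are assumptions, not the authors' system.

```python
# Minimal sketch of CBIR with a pre-trained ResNet18: the final classification layer
# is removed, every gallery image is embedded offline, and queries are ranked by
# cosine similarity over the stored features. Image paths are hypothetical.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()          # keep the 512-d global feature, drop classifier
resnet.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

@torch.no_grad()
def embed(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return F.normalize(resnet(img), dim=1)        # L2-normalize for cosine similarity

gallery_paths = ["gallery/img_001.jpg", "gallery/img_002.jpg", "gallery/img_003.jpg"]
gallery = torch.cat([embed(p) for p in gallery_paths])     # offline feature store

query = embed("query.jpg")
scores = (gallery @ query.T).squeeze(1)                    # cosine similarities
best = scores.argsort(descending=True)
print("ranked results:", [gallery_paths[i] for i in best])
```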

Paper 24: Independent Task Scheduling in Cloud Computing using Meta-Heuristic HC-CSO Algorithm

Abstract: Cloud computing is a vital paradigm among emerging technologies. It provides hardware, software, and development platforms to end-users on demand. Task scheduling is an exciting job in the cloud computing environment. Tasks can be divided into two categories: dependent and independent. Independent tasks are not connected by any type of parent-child relationship. Various meta-heuristic algorithms have been introduced to schedule independent tasks. In this paper, a hybrid HC-CSO algorithm has been simulated using independent tasks. This hybrid algorithm has been designed using the HEFT algorithm, a Self-Motivated Inertia Weight factor, and the standard Cat Swarm Optimization algorithm. The Crow Search algorithm has been applied to overcome the problem of premature convergence and to prevent the H-CSO algorithm from getting stuck in local optima. The simulation was carried out using 500-1300 independent tasks of random length, and it was found that the H-CSO algorithm beats the PSO, ACO, and CSO algorithms, whereas the hybrid HC-CSO algorithm performs well against Cat Swarm Optimization, Particle Swarm Optimization, and the H-CSO algorithm in terms of processing cost and makespan. Across all scenarios, the HC-CSO algorithm is found to be 4.15% and 7.18% more efficient overall than H-CSO and standard CSO, respectively, with respect to makespan, and, in the case of computation cost minimization, 9.60% and 14.59% more efficient than H-CSO and CSO, respectively.

Author 1: Jai Bhagwan
Author 2: Sanjeev Kumar

Keywords: Crow search algorithm (CSA); cat swarm optimization (CSO); H-CSO algorithm; HC-CSO algorithm; heft algorithm; SMIW (self-motivated inertia weight); independent tasks; particle swarm optimization (PSO); QoS (Quality of Service); virtual machines (VMs)

PDF

Paper 25: Detecting Website Defacement Attacks using Web-page Text and Image Features

Abstract: Recently, web attacks in general, and defacement attacks on websites and web applications in particular, have been considered one of the major security threats to the many enterprises and organizations that provide web-based services. A defacement attack can have a critical effect on the owner’s website, such as instant discontinuity of website operations and damage to the owner’s reputation, which in turn may lead to huge financial losses. A number of techniques, measures, and tools for monitoring and detecting website defacements have been researched, developed, and deployed in practice. However, some measures and techniques only work with static web-pages, while others can work with dynamic web-pages but require extensive computing resources. Other issues of existing proposals are a relatively low detection rate and a high false alarm rate, because many important elements of web-pages, such as embedded code and images, are not processed. In order to address these issues, this paper proposes a combination model based on BiLSTM and EfficientNet for website defacement detection. The proposed model processes two important components of web-pages: the text content and page screenshot images. The combination model can work effectively with dynamic web-pages and produces high detection accuracy as well as a low false alarm rate. Experimental results on a dataset of over 96,000 web-pages confirm that the proposed model outperforms existing models on most measurements. The model’s overall accuracy, F1-score, and false positive rate are 97.49%, 96.87%, and 1.49%, respectively.

Author 1: Trong Hung Nguyen
Author 2: Xuan Dau Hoang
Author 3: Duc Dung Nguyen

Keywords: Website defacement attacks; website defacement detection; machine learning-based website defacement detection; deep learning-based website defacement detection

PDF

Paper 26: Preprocessing Handling to Enhance Detection of Type 2 Diabetes Mellitus based on Random Forest

Abstract: Diabetes is a non-communicable disease with a death rate of 70% worldwide. The majority of diabetes cases, 90-95%, are type 2 diabetes, which is caused by an unhealthy lifestyle. Type 2 diabetes can be detected early using examinations that contain diabetes-related parameters. However, the dataset does not always contain complete information, the distribution between positive and negative classes is mostly imbalanced, and some parameters have low importance for the decision class. To overcome these problems, this study carries out preprocessing to improve detection precision and recall. In this paper, we propose a dataset preprocessing approach, which is applied to diabetes prediction. The preprocessing approach consists of the following steps: missing value handling, imbalanced data handling, feature importance selection, and data augmentation. The preprocessing uses the median for missing values, random oversampling for imbalanced data, the Gini score in the random forest for feature importance, and a posterior distribution for data augmentation. This research used random forest and logistic regression as classification algorithms. The experimental results show that classification with the proposed preprocessing and the random forest method increases precision by 20% and recall by 24% compared with the random forest method without the proposed preprocessing.

Author 1: Nur Ghaniaviyanto Ramadhan
Author 2: Adiwijaya
Author 3: Ade Romadhony

Keywords: Diabetes mellitus; data preprocessing; data augmentation; random forest; classification

PDF
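
Paper 26's preprocessing chain (median imputation, random oversampling, Gini-based feature importance, then random forest) can be sketched as follows on synthetic data; the posterior-distribution augmentation step is omitted and the feature counts are illustrative.

```python
# Minimal sketch of the preprocess-then-classify flow in the abstract: median
# imputation of missing values, random oversampling of the minority class, and
# Gini-based feature importance from a random forest.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score
from imblearn.over_sampling import RandomOverSampler

X, y = make_classification(n_samples=600, n_features=8, weights=[0.85], random_state=0)
X[np.random.default_rng(0).random(X.shape) < 0.05] = np.nan     # inject missing values

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

imputer = SimpleImputer(strategy="median").fit(X_train)          # 1) missing values
X_train, X_test = imputer.transform(X_train), imputer.transform(X_test)

X_train, y_train = RandomOverSampler(random_state=0).fit_resample(X_train, y_train)  # 2) balance

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
keep = np.argsort(rf.feature_importances_)[-5:]                  # 3) Gini-based top features

rf_final = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train[:, keep], y_train)
pred = rf_final.predict(X_test[:, keep])
print("precision:", precision_score(y_test, pred), "recall:", recall_score(y_test, pred))
```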

Paper 27: Grey Clustering Approach to Assess Sediment Quality in a Watershed in Peru

Abstract: The evaluation of sediment quality is a relevant topic that involves the analysis of various parameters that are altered by natural or anthropogenic causes. The Grey Clustering method therefore provides an alternative for evaluating sediment quality. In the present study, the sediment quality of the Chontayacu river watershed was evaluated considering the results of the monitoring of twenty-three points carried out in early evaluations by the Environmental Impact Evaluation Agency (OEFA by its Spanish acronym). These twenty-three points were separated into three blocks considering the monitoring points upstream of the Uchiza town center and of the Chontayacu Alto and Chontayacu Bajo hydroelectric plants. Seven parameters were analyzed: As, Cd, Cr, Cu, Pb, Hg, and Zn, which were compared with the Canadian sediment quality standards for the protection of aquatic life. The results of the assessment showed that all points in the Chontayacu River were classified as having unlikely adverse biological effects from heavy metals. However, a quality ranking was established between the points of each block, where it was found that points P3, P4, and P17 correspond to the lowest values for the high CH, low CH, and CP Uchiza blocks, respectively. Finally, the results obtained will provide integrated information for decision making by the competent authorities in Peru, as well as indicate the level of sediment contamination that should be taken into account in proposals for hydroelectric projects that influence sediment transport and entrainment.

Author 1: Alexi Delgado
Author 2: Jossel Altaminarano
Author 3: Luis Pariona
Author 4: Patricia Oscanoa
Author 5: Stephany Esquivel
Author 6: Wendy Mejía
Author 7: Chiara Carbajal

Keywords: Grey clustering; sediment quality; watershed

PDF

Paper 28: Dynamic Phrase Generation for Detection of Idioms of Gujarati Language using Diacritics and Suffix-based Rules

Abstract: Gujarati is the language used for everyday communication in the state of Gujarat, India. The Gujarati language is also officially recognized by the constitution and the government of India. The Gujarati script is based on the Devanagari script. An idiom is an expression, phrase, or word that has a meaning different from the literal meaning of the words in it. Idioms represent the cultural heritage of the Gujarati language and are used for effective communication and to convey an accurate message. No machine translation system accurately translates Gujarati idioms into English or any other language. Different idiom phrases can be generated by adding diacritic(s) as well as a suffix to the root or base form of the idiom. The many forms of a single idiom make automatic idiom identification as well as machine translation more challenging. This paper focuses on the design and implementation of diacritics- and suffix-based rules for the dynamic phrase generation and detection of idioms of the Gujarati language. This implementation helps in identifying a Gujarati idiom present in any possible form in Gujarati text. The results obtained with the execution of 7050 different Gujarati idiom phrases yield an accuracy of 99.73%. The results are encouraging enough to make the proposed implementation useful for natural language processing tasks related to Gujarati idioms.

Author 1: Jatin C. Modh
Author 2: Jatinderkumar R. Saini

Keywords: Diacritic; Gujarati; idiom; machine translation system (MTS); natural language processing (NLP); suffix; unicode transformation format (UTF)

PDF

Paper 29: Copy Move Forgery Detection Techniques: A Comprehensive Survey of Challenges and Future Directions

Abstract: Digital image forensics is a growing field of image processing that attempts to gain objective proof of the origin and veracity of a visual image. Copy-move forgery detection (CMFD) has currently become an active research topic in the passive/blind image forensics field. There is no doubt that conventional techniques, and especially keypoint-based techniques, have pushed CMFD forward in the previous two decades. However, CMFD techniques in general, and conventional techniques in particular, suffer from several challenges, and thus an increasing number of approaches are exploiting deep learning for CMFD. In this survey, we cover the conventional and the deep learning based CMFD techniques from a new perspective. We classify the CMFD techniques into several categories according to the detection methodology, the detection paradigm, and the detection capability. We discuss the challenges facing CMFD techniques as well as ways of solving them. In addition, this survey covers the evaluation metrics and datasets commonly utilized for CMFD. We also debate and propose certain plans for future research. This survey will be helpful for researchers, as it covers the recent trends in CMFD and outlines some future research directions.

Author 1: Ibrahim A. Zedan
Author 2: Mona M. Soliman
Author 3: Khaled M. Elsayed
Author 4: Hoda M. Onsi

Keywords: Image forensics; copy-move forgery detection (CMFD); conventional techniques; deep learning techniques

PDF

Paper 30: LSTM, VADER and TF-IDF based Hybrid Sentiment Analysis Model

Abstract: Most sentiment analysis models that use supervised learning algorithms consume a lot of labeled data in the training phase in order to give satisfactory results. This is usually expensive and leads to high labor costs in real-world applications. This work proposes a hybrid sentiment analysis model based on a Long Short-Term Memory network, a rule-based sentiment analysis lexicon, and the Term Frequency-Inverse Document Frequency weighting method. These three (input) models are combined in a binary classification model, in which each of the following algorithms has been implemented: Logistic Regression, k-Nearest Neighbors, Random Forest, Support Vector Machine, and Naive Bayes. The model was then trained on a limited amount of data from the IMDB dataset. The results of the evaluation on the IMDB data show a significant improvement in accuracy and F1 score compared to the best scores recorded by the three input models separately. Moreover, the proposed model was able to transfer the knowledge gained on the IMDB dataset to better handle new data from the Twitter US Airlines Sentiments dataset.

Author 1: Mohamed Chiny
Author 2: Marouane Chihab
Author 3: Omar Bencharef
Author 4: Younes Chihab

Keywords: Sentiment analysis; hybrid model; long short-term memory (LSTM); Valence Aware Dictionary and sEntiment Reasoner (VADER); term frequency-inverse document frequency (TF-IDF); classification algorithm

PDF
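
The stacking idea in Paper 30 can be sketched with two of its three input models: a VADER compound score and a TF-IDF classifier probability feeding a final binary classifier. The LSTM branch is omitted for brevity, and the tiny review set stands in for IMDB data.

```python
# Minimal sketch of combining lexicon-based (VADER) and TF-IDF-based model outputs
# as features for a final binary sentiment classifier. The LSTM input model from the
# paper is omitted; the data and feature choices are illustrative assumptions.
import numpy as np
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

reviews = ["a wonderful, moving film", "dull plot and terrible acting",
           "I loved every minute", "a waste of two hours",
           "great performances all around", "boring and predictable"]
labels = np.array([1, 0, 1, 0, 1, 0])                 # 1 = positive, 0 = negative

vader = SentimentIntensityAnalyzer()
vader_scores = np.array([[vader.polarity_scores(r)["compound"]] for r in reviews])

tfidf = TfidfVectorizer()
tfidf_clf = LogisticRegression().fit(tfidf.fit_transform(reviews), labels)
tfidf_probs = tfidf_clf.predict_proba(tfidf.transform(reviews))[:, [1]]

# Final binary classifier consumes the two model outputs as its feature vector.
meta_features = np.hstack([vader_scores, tfidf_probs])
meta_clf = LogisticRegression().fit(meta_features, labels)
print("meta-classifier predictions:", meta_clf.predict(meta_features))
```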

Paper 31: Multiple Relay Nodes Selection Scheme using Exit Time Variation for Efficient Data Dissemination in VANET

Abstract: Efficient data dissemination in VANETs is still a challenge because of the variable speed of vehicles, road conditions, frequent fragmentation, etc. In this article, a selective forwarding data dissemination scheme using exit time differences between vehicles in a highway-lanes scenario is proposed, which focuses on solving broadcast storms, low coverage, and transmission delay, and on reliable data delivery. Our approach selects multiple forwarding nodes to increase coverage with less delay. The road lanes concept is used to identify the direction of moving nodes. The redundant regions and zones technique in the proposed approach reduces the processing of parameters to a significant extent. Simulation of the proposed approach is done using NS2 and SUMO. The output of the implementation is compared with the unidirectional flooding, KB_Selective, and LT_Selective techniques. The result analysis shows that the proposed technique is much more efficient: it increases the coverage rate by up to 23% and reduces the delay in data delivery by up to 18%. This methodology also improves system performance by increasing throughput and reducing the collision rate in comparison with other methods.

Author 1: Deepak Gupta
Author 2: Rakesh Rathi
Author 3: Shikha Gupta
Author 4: Neetu Sharma

Keywords: Broadcasting; disseminations; exit time; highway lanes; relay nodes; vehicle speed; vehicular ad hoc networks

PDF

Paper 32: IoT-based Closed Algal Cultivation System with Vision System for Cell Count through ImageJ via Raspberry Pi

Abstract: Spirulina platensis and other microalgae are now being considered in different fields of research. This is due to the former’s endless potential, not least its high protein content. That is why stable microalga production is now necessary. To achieve high-protein spirulina, its cultivation in a closed algal cultivation system requires monitoring and maintenance of the bio-environmental factors and parameters affecting its growth to provide stable and efficient production of microalgae. Meanwhile, laboratories that culture spirulina determine its cell count by manually counting the cells under a microscope, which is tedious work. This establishes the need to construct a device that cultivates spirulina with maintenance and cell-counting capabilities. Thus, the proponents developed a culturing device that has three main systems. The first system is tasked with maintaining the bio-environmental parameters, such as pH level, temperature, and light. The second system performs cell counting through ImageJ’s image processing; it verifies the cell count and growth by counting the filaments of the spirulina. Lastly, a corresponding Android application, developed using Firebase and Android Studio, displays real-time values of the culture’s parameters. Results show that the device was able to stabilize its parameters. Also, red LEDs exhibited a 28.43% higher approximate cell count than red-blue LEDs. With this, the quality of the spirulina produced throughout the study was improved. Lastly, the use of ImageJ’s image processing feature showed no significant difference from manual counting while producing results multiple times faster, making it a better alternative to manual cell counting.

Author 1: Lean Karlo S. Tolentino
Author 2: Sheila O. Belarmino
Author 3: Justin Gio N. Chan
Author 4: Oliver D. Cleofas Jr
Author 5: Jethro Gringo M. Creencia
Author 6: Meryll Eve L. Cruz
Author 7: JC Glenn B. Geronimo
Author 8: John Peter M. Ramos
Author 9: Lejan Alfred C. Enriquez
Author 10: Jay Fel C. Quijano
Author 11: Edmon O. Fernandez
Author 12: Maria Victoria C. Padilla

Keywords: Spirulina platensis; ImageJ; image processing; closed algal cultivation; parameter monitoring; firebase

PDF

Paper 33: Feature Engineering Framework to detect Phishing Websites using URL Analysis

Abstract: Phishing is one of the most popular and dangerous cyber-attacks in the world of the internet. One of the most common attacks in cyber security is accessing the personal information of internet users through a phishing website. The major element through which a hacker can do this is the URL: the hacker creates an almost exact replica of the original URL with a very small difference, generally not revealed without keen observation. By pipelining various machine learning algorithms, the proposed model aims to recognize the important features for classifying a URL using a recursive feature elimination process. In this work, a dataset of URL records has been collected with 112 features, including one target value. A machine learning based model is proposed to identify the significant features used to classify a URL; the wrapper method of recursive feature elimination is used to compare different bagging and boosting machine learning approaches. Ensemble algorithms, bootstrap aggregation algorithms, boosting, and stacking algorithms are used for feature selection. The proposed work has five sections: the pre-processing phase, finding the relationships between the features of the dataset, automatic selection of the number of features using the Extra Trees Classifier, comparison of the various ensemble algorithms, and finally generation of the best features for URL analysis. This paper designs a meta-learner with the XGBoost classifier as the base classifier and achieves an accuracy of 93%. Out of 112 features, this model, through an extensive comparative study on feature selection, identified 29 core features by performing URL analysis.

Author 1: N. Swapna Goud
Author 2: Anjali Mathur

Keywords: Recursive feature elimination; principal component analysis; standard scalar transformation; eXtreme gradient boosting classifier; correlation matrix

PDF

Paper 34: The Impact of CALL Software on the Performance of EFL Students in the Saudi University Context

Abstract: This paper investigates the extent to which Computer-Assisted Language Learning (CALL) contributes academically and pedagogically to the performance of students majoring in English as a Foreign Language (EFL). The paper's main objective is to explore the extent to which CALL is effective in developing the linguistic and communicative competence of EFL students in the skill of reading. The paper uses both quantitative and qualitative approaches in the process of data collection. As an empirical study, the sample consisted of 47 students studying English at Prince Sattam bin Abdulaziz University. The participants were classified into two groups, experimental and control, each of which was assigned specific reading activities. The experimental group was allocated technological learning by means of the computer programs Snagit and Screencast, whereas the control group was assigned traditional learning, i.e. without using computers. Results revealed that the use of CALL had more positive effects on the learning outcomes of the experimental group than on those of the control group. This, in turn, accentuates the fact that the use and application of CALL in EFL contexts improves students' learning outcomes concerning the skill of reading. The study recommends further integration of computer software into the design of the different EFL courses.

Author 1: Ayman Khafaga
Author 2: Abed Saif Ahmed Alghawli

Keywords: CALL; EFL students; Saudi university context; language skills; reading; SnagitTM; screencast; performance; effectiveness

PDF

Paper 35: A Novel Method for Handling Partial Occlusion on Person Re-identification using Partial Siamese Network

Abstract: Person re-identification (Re-ID) is one of the tasks in a CCTV-based surveillance system for verifying whether two detected objects are the same person. Re-ID visually matches a person or group across various situations captured by different cameras, or by the same camera at different times. It replaces surveillance through cameras that was previously carried out conventionally by humans, which is prone to errors. The challenges of Re-ID are the varied poses of objects, occlusions, and the similar appearance of people. Occlusion receives special attention since the performance of Re-ID can decrease under partial occlusion. This happens because re-identification relies on features of the person, such as the color and pattern of clothing; occlusion prevents these features from being captured by the camera, resulting in re-identification errors. This paper proposes to overcome the problem by dividing the image into several parts (partials), each processed by a separate neural network (NN) with the same architecture. The research applies the CNN algorithm with a Siamese network architecture and a contrastive loss function to calculate the similarity distance between a pair of images. The test results show that the partial approach obtained accuracies of 86%, 77%, 68%, and 56% for occlusion levels of 20%, 40%, 60%, and 80%, respectively. These accuracies are three to five percent higher than those for images processed without partitioning.
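
As an illustration of the Siamese-plus-contrastive-loss idea, the PyTorch sketch below passes two image parts through a shared branch and penalizes their embedding distance; the small backbone, margin and input size are assumptions rather than the authors' partial Re-ID architecture.

```python
# Minimal sketch of a Siamese pair with a contrastive loss; the tiny
# convolutional backbone, margin and input size are assumptions, not the
# authors' exact partial Re-ID architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseBranch(nn.Module):
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embedding_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def contrastive_loss(emb1, emb2, label, margin=1.0):
    """label = 1 for the same person, 0 for different persons."""
    dist = F.pairwise_distance(emb1, emb2)
    return torch.mean(label * dist.pow(2) +
                      (1 - label) * torch.clamp(margin - dist, min=0).pow(2))

# Both image parts go through the *same* branch (shared weights).
branch = SiameseBranch()
a, b = torch.randn(8, 3, 128, 64), torch.randn(8, 3, 128, 64)
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(branch(a), branch(b), labels)
loss.backward()
```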

Author 1: Muhammad Pajar Kharisma Putra
Author 2: Wahyono

Keywords: CCTV; CNN; video-surveillance; NN; contrastive-loss

PDF

Paper 36: Validation of Requirements for Transformation of an Urban District to a Smart City

Abstract: The concept of a smart city is still debatable, yet it attracts attention from every country around the globe as a way to provide communities with a better quality of life. New ideas for the development of a smart city continue to evolve to enhance the quality, performance, and interactivity of services. This paper presents a model of a smart city based on a comparison of selected smart cities in the world and uses the model to validate the requirements for the transformation of an urban district into a smart city. The proposed model focuses on two major components: utilizing the IoT (Internet of Things) in forming the model and incorporating cultural diversity. The relationships among the components and the influence of culture are the foundation of the model's design. In this research, the model of a smart city has been validated through requirements analysis based on a survey instrument, and the results show that the average mean of each element used is more than 4 out of 5. The model of a smart city can be used as a guideline for the transformation of an urban district into a smart city.

Author 1: Rosziati Ibrahim
Author 2: N.A.M. Asri
Author 3: Sapiee Jamel
Author 4: Jahari Abdul Wahab

Keywords: Smart city; Internet of Things (IoTs); requirements analysis; survey instrument

PDF

Paper 37: Cyberattacks and Vociferous Implications on SECS/GEM Communications in Industry 4.0 Ecosystem

Abstract: Information and communications technology (ICT) is prevalent in almost every field of industrial production and manufacturing processes at present. A typical industrial network consists of sensors, actuators, devices, and services to connect, track, and manage production processes in order to increase performance and boost productivity. The SEMI Equipment Communications Standard/Generic Equipment Model (SECS/GEM) is SEMI's Machine-to-Machine (M2M) protocol for equipment-to-host data communications and is the most popular and most widely used M2M communication protocol in the manufacturing industry. With Industry 4.0 as a guiding factor, connectivity to business networks is required for accessing real-time data whenever and wherever needed. This openness of connectivity raises security concerns, as the SECS/GEM protocol offers no security, which risks exposing the manufacturing industries' business secrets and production processes. This paper discusses the key processes involved in SECS/GEM communications and how potential attackers can manipulate these processes to obtain illegal or unauthorized access. The experimental results indicate that the SECS/GEM processes are entirely vulnerable to numerous attacks, including denial-of-service (DoS), replay, and false-data-injection attacks. Thus, future work involves developing a prevention mechanism aimed at securing the SECS/GEM processes in the industrial network. This study's findings are useful as preliminary guidance for infrastructure owners to plan appropriate security measures to protect the industrial network.

Author 1: Shams A. Laghari
Author 2: Selvakumar Manickam
Author 3: Shankar Karuppayah
Author 4: Ayman Al-Ani
Author 5: Shafiq Ul Rehman

Keywords: SECS/GEM; cybersecurity; industry-4.0; machine-to-machine communication; industrial internet of things (IIoT)

PDF

Paper 38: A Similarity Score Model for Aspect Category Detection

Abstract: Aspect-based Sentiment Analysis (ABSA) aims to extract significant aspects of an item or product from reviews and predict the sentiment of each aspect. Previous similarity methods tend to extract aspect categories at the word level by combining Language Models (LMs) in their models. A drawback of the LM approach is its dependence on a large amount of labelled data for a specific domain to function well. This work proposes a mechanism to address labelled-data dependency through a one-step approach that experiments to determine the best combination of recurrent-based LM architectures and semantic similarity measures for building a new aspect category detection model. The proposed model addresses the drawbacks of previous aspect category detection models in an implicit manner. The datasets of this study, S1 and S2, are from the standard SemEval online competition. The proposed model outperforms the previous baseline models in terms of the F1-score of aspect category detection. This study finds more relevant aspect categories by creating a more stable and robust model. The F1-score of our best model for aspect category detection is 79.03% in the restaurant domain for the S1 dataset. For the S2 dataset, the F1-score is 72.65% in the laptop domain and 75.11% in the restaurant domain.

Author 1: Zohreh Madhoushi
Author 2: Abdul Razak Hamdan
Author 3: Suhaila Zainudin

Keywords: Aspect category detection; language model; semantic similarity

PDF

Paper 39: SRAVIP: Smart Robot Assistant for Visually Impaired Persons

Abstract: Vision is one of the most important human senses, and visually impaired people encounter various difficulties due to their inability to move safely in different environments. This research aims to facilitate the integration of such persons into society by proposing a robotic solution (robot assistance) to help them navigate indoor environments, such as schools, universities, hospitals and airports, according to a prescheduled task. The proposed system is called the smart robot assistant for visually impaired persons (SRAVIP). It includes two subsystems: 1) an initialization system that initializes the robot, creates an environment map, and registers a visually impaired person as a target object; 2) a real-time operation system that navigates the mobile robot and communicates with the target object using a speech-processing engine and an optical character recognition (OCR) module. An important contribution of the proposed SRAVIP is that it is user-independent: it does not depend on a particular user, and one robot can serve an unlimited number of users. SRAVIP was realized on a TurtleBot3 robot and tested in the College of Computer and Information Sciences, King Saud University, AlMuzahmiyah Campus. The experimental results confirmed that the proposed system functions successfully.

Author 1: Fahad Albogamy
Author 2: Turk Alotaibi
Author 3: Ghalib Alhawdan
Author 4: Mohammed Faisal

Keywords: Mobile robot; robotics; robot assistance; visually impaired persons

PDF

Paper 40: Modelling the Player and Avatar Attachment based on Student’s Engagement and Attention in Educational Games

Abstract: Player and avatar attachment helps motivate students to strengthen their engagement in gameplay. The different types of avatar designs deployed in a game have an impact on students' engagement, as the avatars are designed with different roles and each role offers a different motivational effect. Several studies in human-computer interaction have assessed user engagement and user attention in computer or system applications as well as in gameplay, with questionnaires and eye-tracking among the usual approaches. Investigations into the possible use of these approaches for determining player and avatar attachment, particularly the attachment associated with various avatar designs and their effect on students' engagement, remain inconclusive and largely untapped. Essentially, studying students' engagement and attention perception while learning enriches our comprehension of engagement in the education sector. As such, this study proposes a new model of player and avatar attachment based on students' engagement and focused attention during gameplay of digital educational games (DEGs). The model was developed following a stepwise approach consisting of component identification, establishing relationships among the components, model development, and model validation. Several components were scrutinized, summarized, and developed into the model proposed in this study. A significant attachment can determine the avatar design that may influence a student's engagement in gameplay. Hence, this study offers several constructive recommendations for future avatars in game design for educational purposes, which may validate the user's engagement based on his or her focused attention.

Author 1: Nooralisa Mohd Tuah
Author 2: Dinna @ Nina Mohd Nizam
Author 3: Zaidatol Haslinda A. Sani

Keywords: Avatar; engagement; attention; digital educational games

PDF

Paper 41: Assessment of Emotion in Online News based on Kansei Approach for National Security

Abstract: Securing a nation is more complicated in modern days than it was decades ago. In the era of big data, massive amounts of information are constantly being shared in cyberspace. Online rumours and fake news can evoke negative emotions and disruptive behaviours that may jeopardize national security. Real-time detection and monitoring of unsettling emotions and potential national security threats should be further developed to help authorities manage situations early. Text in online news can be weighted with emotions that may lead to misunderstandings affecting national security and triggering chaos. Thus, understanding the emotion contained in online news and its relationship with national security is crucial. The Kansei approach was identified as a methodology capable of interpreting human emotions towards an artefact. This research explores emotion assessment using Kansei for text in online news and summarizes the emotion variables that are likely to relate to an individual's state of mind towards one element of national security, namely political security. The results determine that the identified variables were "Frustrated," "Consent," "Resentful" and "Attentive." This gives an understanding of the significant effect of people's emotions, as represented in text, on the political security element.

Author 1: Noor Afiza Mat Razali
Author 2: Nur Atiqah Malizan
Author 3: Nor Asiakin Hasbullah
Author 4: Norul Zahrah Mohd Zainuddin
Author 5: Normaizeerah Mohd Noor
Author 6: Khairul Khalil Ishak
Author 7: Sazali Sukardi

Keywords: Online news; kansei; national security; political security

PDF

Paper 42: Arbitrary Verification of Ontology Increments using Natural Language

Abstract: In parallel with the advancement of practical computing use cases, the trend toward collaborative ontology engineering is accelerating. Both domain experts and ontologists must collaborate in collaborative ontology engineering processes. However, the bulk of domain experts are not computer experts (e.g., lawyers, medical doctors, bankers). Question and Answer on Linked Data (QALD) is a suggested method for non-computer domain experts to engage with ontology increments as they evolve. Existing QALD methods and systems, on the other hand, have a number of drawbacks, including significant setup requirements, domain dependence, and user discomfort. As a result, a new QALD algorithm and QALD system designed using First Order Logic (FOL) are presented in order to address the shortcomings of current QALD mechanisms. The suggested FOL-based QALD mechanism was tested quantitatively and qualitatively over three distinct ontology increments. This experiment had an overall acceptance rate of 79 percent from all stakeholders.

Author 1: Kaneeka Vidanage
Author 2: Noor Maizura Mohamad Noor
Author 3: Rosmayati Mohemad
Author 4: Zuriana Abu Bakar

Keywords: First order logic; linked data; ontologist; iterative framework

PDF

Paper 43: An Improvised Facial Emotion Recognition System using the Optimized Convolutional Neural Network Model with Dropout

Abstract: Facial expression detection has long been regarded as part of both verbal and nonverbal communication. The muscular expression on a person's face reflects their physical and mental state. Using computer programming to integrate all facial curves into a classification class is significantly more practical than doing so manually. Convolutional Neural Networks, an Artificial Intelligence approach, have been adopted to improve this task with wide acceptance. Due to overfitting during the learning step, model performance may be lowered and regarded as underperforming; dropout is a method used to reduce this test error. Dropout is applied at the convolutional and dense layers to classify facial emotions into the distinct categories of Happy, Angry, Sad, Surprise, Neutral, Disgust, and Fear, and the result is presented as an improved convolutional neural network model. The experimental setup used the JAFFE, CK48, FER2013, RVDSR and CREMA-D datasets together with a self-prepared dataset of 36,153 facial images to observe training and test accuracy in the presence and absence of dropout. Test accuracies of 92.33, 96.50, 97.78, 99.44, and 98.68 were obtained on FER2013, RVDSR, CREMA-D, CK48, and JAFFE, respectively, in the presence of dropout. Since the number of features involved in the computation is large, higher computational support from NVIDIA hardware (GPU 16 GB, CPU 13 GB, memory 73.1 GB) was used for the experiments.
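
For illustration, a minimal Keras sketch of a CNN that applies dropout after both convolutional and dense blocks is given below; the layer sizes, dropout rates and the 48x48 grayscale input are assumptions for demonstration, not the authors' optimized architecture.

```python
# Sketch of a small CNN with dropout at both convolutional and dense stages;
# layer sizes, dropout rates and the 48x48 input are assumptions, not the
# authors' exact optimized model.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fer_cnn(num_classes=7, dropout_rate=0.25):
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Dropout(dropout_rate),          # dropout after a conv block
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Dropout(dropout_rate),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),                   # heavier dropout on the dense layer
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fer_cnn()
model.summary()
```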

Author 1: P V V S Srinivas
Author 2: Pragnyaban Mishra

Keywords: Convolutional neural network (CNN); facial emotion recognition (FER); dropout; FER 2013; CREMAD; RVDSR; CK48; JAFFE

PDF

Paper 44: An Exploration on Online Learning Challenges in Malaysian Higher Education: The Post COVID-19 Pandemic Outbreak

Abstract: Flexible online programmes and learning are gaining popularity as a means of educating students. They can also facilitate the delivery of knowledge to students as well as the learning process itself. The purpose of this study was to investigate online learning challenges following the COVID-19 pandemic outbreak in Malaysia. The study employs qualitative methods and the Fuzzy Delphi Method to collect data. In the qualitative research phase, open-ended questions were distributed to 118 participants, while in the Fuzzy Delphi phase, expert questionnaires were distributed to 7 experts in the field of study. Qualitative data were analysed using Atlas-ti software, whereas Fuzzy Delphi data were analysed using Fudelo 1.0 software. The qualitative study discovered that students confront seven significant challenges: internet coverage, mental fatigue, learning devices, environmental disturbance, pedagogical challenges, lack of motivation, and social interaction. Meanwhile, the fuzzy Delphi analysis shows expert consensus on these themes at a reasonable level: the overall expert consensus agreement exceeds 75%, the overall threshold value (d) is 0.2, and the α-cut exceeds 0.5. The study provides important insights into online learning issues and the areas for further improvement. It also discusses avenues for future research for more significant benefits and contributions to knowledge in general.

Author 1: Ramlan Mustapha
Author 2: Maziah Mahmud
Author 3: Norhapizah Mohd Burhan
Author 4: Hapini Awang
Author 5: Ponmalar Buddatti Sannagy
Author 6: Mohd Fairuz Jafar

Keywords: Online learning; COVID-19; outbreak; fuzzy delphi method; expert consensus

PDF

Paper 45: An Advanced Stress Detection Approach based on Processing Data from Wearable Wrist Devices

Abstract: Today's busy lifestyle often leads to frequent stress, the accumulation of which may have severe consequences for humans. The goal of this research is to create a stress detection technology that can correctly, continuously, and unobtrusively monitor psychological stress in real time. Due to the importance of stress detection and prevention, many traditional and advanced techniques have been proposed; here we provide a unique, context-based stress-detection technique. In this research, a novel approach to designing and using a deep neural network for stress detection is presented. To provide a suitable training environment for network development, an open-source dataset based on motion and physiological information collected from wrist- and chest-worn devices was acquired and exploited. Raw data were analyzed, filtered, and preprocessed to create the best possible training data. For the proposed solution to have wide practical value, further focus was placed on the data recorded using only smartwatches. Smartwatches are widely distributed and accessible, and as such deserve intelligent solutions that process the collected data and improve the quality of life of end-users. Finally, two network types with proven capabilities for processing time-series data are examined in detail: a fully convolutional network (FCN) and a ResNet deep learning model. The FCN model showed better empirical performance, and further efforts were made to select an optimal network structure. In the end, the proposed solution demonstrated performance similar to state-of-the-art solutions and significantly better than some traditional machine learning techniques, providing a good foundation for reliable stress detection and further development efforts.
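
As a rough illustration of the fully convolutional network (FCN) idea mentioned above, the Keras sketch below stacks 1D convolutional blocks followed by global average pooling over windowed sensor data; the window length, channel count and filter sizes are assumptions, not the configuration tuned in the paper.

```python
# Sketch of a fully convolutional network (FCN) for windowed wrist-sensor
# time series; window length, channel count and class count are assumptions,
# not the exact configuration used in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fcn(window_len=256, n_channels=4, n_classes=2):
    inputs = layers.Input(shape=(window_len, n_channels))
    x = inputs
    for filters, kernel in [(128, 8), (256, 5), (128, 3)]:
        x = layers.Conv1D(filters, kernel, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    x = layers.GlobalAveragePooling1D()(x)      # no dense stack: "fully convolutional"
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fcn()
model.summary()
```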

Author 1: Mazin Alshamrani

Keywords: Fully convolutional neural network; stress detection; smartwatch; data pre-processing; semi-supervised learning

PDF

Paper 46: An Evaluation of the Accuracy of the Machine Translation Systems of Social Media Language

Abstract: In this age of information technology, it has become possible for people all over the world to communicate in different languages through social media platforms with the help of machine translation (MT) systems. As far as the Arabic-English language pair is concerned, most studies have evaluated MT output for the standard varieties of Arabic, with fewer studies focusing on the vernacular or colloquial varieties. This study attempts to address this gap by presenting an evaluation of the performance of MT output for vernacular or colloquial Arabic in the social media domain. As it is currently the most widely used MT system, Google Translate (GT) was chosen for evaluating the reliability of its output when translating the colloquial Arabic (i.e., the Egyptian/Cairene variety) used on social media into English. With this goal in mind, a corpus of Egyptian dialectal Arabic sentences was collected from social media networks, i.e., Facebook and Twitter, and then fed into the GT system. The GT output was then evaluated by three human translators to assess its accuracy in terms of adequacy and fluency. The results of the study show that several translation problems were spotted in the GT output. These problems mainly concern wrong equivalents, inappropriate additions and deletions, and transliteration of out-of-vocabulary (OOV) words, mostly due to the literal translation of the Arabic vernacular sentences into English. This can be attributed to the fact that Arabic vernacular varieties differ from the standard language for which MT systems have basically been developed, which consequently necessitates upgrading such MT systems to deal with the vernacular varieties.

Author 1: Yasser Muhammad Naguib Sabtan
Author 2: Mohamed Saad Mahmoud Hussein
Author 3: Hamza Ethelb
Author 4: Abdulfattah Omar

Keywords: Colloquial Arabic; Google translate; machine translation evaluation; reliability; social media

PDF

Paper 47: Impact of Data Compression on the Performance of Column-oriented Data Stores

Abstract: Compression of data in traditional relational database management systems significantly improves system performance by decreasing the size of the data, which results in less data transfer time within the communication environment and higher efficiency in I/O operations. Column-oriented database management systems should perform even better, since each attribute is stored in a separate column so that its sequential values are stored and accessed sequentially on disk. That further increases compression efficiency, as the entire column is compressed/decompressed at once. The aim of this research is to determine whether data compression can improve the performance of HBase running on a small Hadoop cluster consisting of one name node and nine data nodes. The test scenario includes performing Insert and Select queries on multiple records with and without data compression. Four data compression algorithms natively supported by HBase are tested: SNAPPY, LZO, LZ4 and GZ. Results show that data compression in HBase greatly improves system performance in terms of storage saving. It shrinks data 5 to 10 times (depending on the algorithm) without any noticeable additional CPU load. That allows smaller but significantly faster SSD disks to be used as the cluster's primary data storage. Furthermore, the substantial decrease in network traffic is an additional benefit with a major impact on big data processing.

Author 1: Tsvetelina Mladenova
Author 2: Yordan Kalmukov
Author 3: Milko Marinov
Author 4: Irena Valova

Keywords: Column-oriented data stores; data compression; distributed non-relational databases; benchmarking column-oriented databases

PDF

Paper 48: IoT-based Cyber-security of Drones using the Naïve Bayes Algorithm

Abstract: Recent advancements in drone technology are opening new opportunities and applications in various fields of life, especially in the form of small drones. However, these advancements also raise new challenges in terms of security, adaptability, and consistency. Small drones are proving to be a new opportunity for the civil and military industries, but they suffer from architectural issues and poorly defined security and safety requirements. The rapid growth of the Internet of Things opens new dimensions for drone technology but poses new threats as well. These tiny flying intelligent devices are challenging for the security and privacy of data, and their design is not yet mature enough to fulfill the domain requirements. The basic design issues also call for security mechanisms, privacy mechanisms, and data transformations. Aspects such as intrusion and interception in the domain of the Internet of Drones (IoD) need to be investigated to make these drones more secure and adaptable. In this paper, we use an intelligent machine learning approach to design an IoT-aided drone. This approach provides an intelligent cyber-security system that helps detect network security threats using blockchain.
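
As a rough illustration of how a Naive Bayes classifier could flag malicious drone network traffic, a scikit-learn sketch is shown below; the CSV file, feature names and labels are hypothetical placeholders, not the dataset or features used by the authors.

```python
# Illustrative sketch of a Naive Bayes classifier for flagging malicious
# drone network traffic; the CSV file and feature names are hypothetical
# placeholders, not the authors' dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report

traffic = pd.read_csv("drone_traffic.csv")          # hypothetical traffic capture
X = traffic[["packet_size", "inter_arrival_time", "dest_port", "protocol_id"]]
y = traffic["is_attack"]                            # 1 = attack, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

nb = GaussianNB()
nb.fit(X_train, y_train)
print(classification_report(y_test, nb.predict(X_test)))
```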

Author 1: Rizwan Majeed
Author 2: Nurul Azma Abdullah
Author 3: Muhammad Faheem Mushtaq

Keywords: Drone technology; security; internet of things; internet of drones; machine learning; blockchain

PDF

Paper 49: Data Mining to Determine Behavioral Patterns in Respiratory Disease in Pediatric Patients

Abstract: There are several varieties of respiratory diseases that mainly affect children between 0 and 5 years of age, and there is no complete report of the behavior of each of them. This research conducts a study of behavioral patterns in respiratory diseases of children in Peru through data mining, using data generated by the health sector, organizations and research between 2015 and 2019. The process used the K-Means clustering algorithm, which allowed an analysis of these data that identified patterns in a total of 10,000 Peruvian clinical records from those years, revealing different behaviors. The grouping obtained in the clusters showed that most cases across all the ages studied presented diseases with codes approximately in the range of 000 to 060. This research was carried out to help health centers in Peru with further study, documentation and decision-making, with a view to optimal prevention strategies for respiratory diseases.
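
For illustration, a scikit-learn sketch of K-Means grouping of clinical records is shown below; the file name, column names and the choice of four clusters are assumptions for demonstration, not the exact settings of the study.

```python
# Sketch of K-Means grouping of pediatric respiratory records; the column
# names, file name and k=4 are assumptions for illustration only.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

records = pd.read_csv("clinical_records.csv")       # hypothetical 10,000-row file
features = records[["age", "disease_code", "year"]]

X = StandardScaler().fit_transform(features)        # put variables on a common scale
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

records["cluster"] = kmeans.labels_
print(records.groupby("cluster")[["age", "disease_code"]].describe())
```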

Author 1: Michael Cabanillas-Carbonell
Author 2: Randy Verdecia-Peña
Author 3: José Luis Herrera Salazar
Author 4: Esteban Medina-Rafaile
Author 5: Oswaldo Casazola-Cruz

Keywords: Respiratory diseases; data mining; cluster algorithms; K-Means algorithm

PDF

Paper 50: Image Encryption Enabling Chaotic Ergodicity with Logistic and Sine Map

Abstract: Chaotic systems with complex characteristics of ergodicity, unpredictability and sensitivity to initial conditions are commonly utilized in cryptography. A 2D logistic-adjusted-sine (LS) map is implemented in this article. Performance assessments reveal superior ergodicity, greater unpredictability and a broader chaotic range than many previous chaotic maps. This research also develops a 2D-LS-based image encryption system, called LS-IES. The notions of diffusion and confusion are properly complied with by the enabled encryption functions. Experimental outcomes and security analyses demonstrate that LS-IES can swiftly encrypt various images with strong resistance to security threats.
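
As a simplified illustration of chaotic keystream generation, the sketch below iterates the classical 1D logistic and sine maps and XOR-encrypts an image; the paper's 2D logistic-adjusted-sine map is more elaborate, so this is not the authors' construction, and the seed values and mixing rule are arbitrary.

```python
# Simplified illustration of chaotic keystream generation from the classic
# 1D logistic and sine maps; the paper's 2D logistic-adjusted-sine map is
# more elaborate, so this is not the authors' exact construction.
import numpy as np

def logistic_map(x, r=3.99):
    return r * x * (1.0 - x)

def sine_map(x, r=0.99):
    return r * np.sin(np.pi * x)

def keystream(length, x0=0.3141, y0=0.5926):
    x, y = x0, y0
    out = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x, y = logistic_map(x), sine_map(y)
        out[i] = int((x + y) * 1e6) % 256        # mix both orbits into a byte
    return out

# XOR-encrypt a flattened grayscale image (decryption is the same operation).
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in image
ks = keystream(image.size).reshape(image.shape)
cipher = image ^ ks
assert np.array_equal(cipher ^ ks, image)
```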

Author 1: Mohammad Ahmar Khan
Author 2: Jalaluddin Khan
Author 3: Abdulrahman Abdullah Alghamdi
Author 4: Sarah Mohammed Awadh Bait Saidan

Keywords: Image encryption; ergodicity; logistic sine map; security; privacy

PDF

Paper 51: An Optimized Neural Network Model for Facial Expression Recognition over Traditional Deep Neural Networks

Abstract: Emotions play a key role in feedback analysis for providing good customer service; the main seven emotions are Anger, Disgust, Fear, Happy, Neutral, Sad and Surprise. An efficient facial emotion recognition model has several advantages, for example helping to monitor and discipline drivers while they are driving a vehicle. Low-resolution and low-reliability images are the main problems in this field. We propose a new model that performs efficiently on such images. We created a low-resolution facial expression dataset (LRFE) by collecting low-resolution images from different resources. We also propose a new hybrid filtering method that combines Gaussian, bilateral and non-local means filtering techniques. DenseNet-121 achieves 0.60 and 0.68 accuracy on FER2013 and LRFE, respectively; when the hybrid filtering method is combined with DenseNet-121, it achieves 0.95 accuracy. Similarly, ResNet-50, MobileNet and Xception perform better when combined with the hybrid filtering method. The proposed convolutional neural network (CNN) model achieves 0.65 accuracy on the FER2013 dataset, while the existing ResNet-50, MobileNet, DenseNet-121 and Xception models obtain 0.60, 0.57, 0.60 and 0.52, respectively. The proposed model combined with the hybrid filtering method achieves 0.85 accuracy, clearly outperforming the traditional methods. When the hybrid filtering method is combined with the CNN models, there is a significant increase in accuracy.
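
As an illustration of the hybrid filtering idea (Gaussian, bilateral and non-local means applied in sequence), an OpenCV sketch is shown below; the kernel sizes, filter strengths and file names are assumptions, not the authors' tuned parameters.

```python
# Sketch of a hybrid denoising filter that chains Gaussian, bilateral and
# non-local-means filtering; kernel sizes and strengths are illustrative
# assumptions, not the authors' tuned parameters.
import cv2

def hybrid_filter(gray_image):
    smoothed = cv2.GaussianBlur(gray_image, (5, 5), 0)
    edge_preserved = cv2.bilateralFilter(smoothed, 9, 75, 75)
    # Non-local means: filter strength 10, template window 7, search window 21.
    denoised = cv2.fastNlMeansDenoising(edge_preserved, None, 10, 7, 21)
    return denoised

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical low-resolution face
if img is not None:
    cleaned = hybrid_filter(img)
    cv2.imwrite("face_filtered.jpg", cleaned)
```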

Author 1: Pavan Nageswar Reddy Bodavarapu
Author 2: P.V.V.S Srinivas

Keywords: Facial expression recognition; deep learning; filtering techniques; convolutional neural network; emotion

PDF

Paper 52: Development of a Low-Cost Bio-Inspired Swimming Robot (SRob) with IoT

Abstract: Nowadays, underwater exploration is a difficult activity that requires specialized equipment. Many studies have shown that bio-inspired robotic fish, such as stingray robots, have many advantages for underwater exploration. One example is the manta ray, which shows excellent swimming ability by flapping its pectoral fins with large amplitude. By studying the movement behavior of the genus Mobula, the development of biomimetic robots has grown exponentially in recent years. However, this technology involves high development costs, and the prototypes produced are heavy. Therefore, the development of a low-cost bio-inspired Swimming Robot (SRob) using an embedded controller with the Internet of Things (IoT) is proposed and presented in this paper. SRob is designed to be small and lightweight compared to other conventional swimming robots and is equipped with six servo motors, a 3-axis ADXL335 accelerometer, two 7.4 V LiPo batteries, an ESP01 Wi-Fi module and an Arduino Mega. The RemoteXY app, which works like a remote control, connects to the Arduino Mega through the ESP01 Wi-Fi module to control the servo motors and obtain sensor readings. Based on the experimental results, the servo motors used to produce the flapping motion can be controlled precisely while producing a large amplitude of motion. In addition, position control for the compact SRob can be realized and determined correctly while swimming in the water.

Author 1: Mohd Aliff
Author 2: Ahmad Raziq Mirza
Author 3: Mohd Ismail
Author 4: Nor Samsiah

Keywords: Stingray robot; angle of flapping motion; remote control; position control; compact SRob

PDF

Paper 53: Multicriteria Handover Management by the SDN Controller-based Fuzzy AHP and VIKOR Methods

Abstract: A wireless environment is characterized by its dynamic nature, inherent uncertainty, and imprecise parameters and constraints. Network settings such as speed, RSS, and network delay are inherently imprecise, and because of this vagueness, accurately measuring these network parameters in a wireless environment is a difficult task. As a result, a fuzzy logic approach appears to work best when used to design systems in such environments. Although conventional techniques based on precise values can be used to reduce transmission delay, they cannot produce intelligent and efficient handover decisions that take into account all the constraints of the network; using one criterion only can lead to service disruption, unbalanced network load, and inefficient handoff. Therefore, to guide the horizontal handover process in wireless networks towards a better choice for VoIP in congested environments, we propose the integration of the Fuzzy-AHP and VIKOR methods in the SDN (Software Defined Networking) controller, based on several criteria: the signal-to-interference-plus-noise ratio (SNIR), packet loss, jitter, delay and throughput. The results of this work show that our contribution maintains a good quality of service for real-time applications.
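
For illustration, a generic VIKOR ranking sketch over candidate networks is given below; the decision matrix, criteria weights and v = 0.5 are made-up values, and the fuzzy-AHP weighting and SDN integration described in the paper are not reproduced here.

```python
# Generic VIKOR ranking sketch for candidate access networks; the decision
# matrix, weights and v = 0.5 are made-up illustration values, and the paper's
# fuzzy-AHP weighting step is not reproduced.
import numpy as np

# Rows = candidate networks; columns = SNIR, packet loss, jitter, delay, throughput.
scores = np.array([[22.0, 0.02, 12.0, 40.0, 18.0],
                   [18.0, 0.01, 20.0, 55.0, 24.0],
                   [25.0, 0.05,  8.0, 35.0, 15.0]])
benefit = np.array([True, False, False, False, True])   # higher-is-better flags
weights = np.array([0.3, 0.2, 0.15, 0.15, 0.2])          # e.g. taken from fuzzy AHP

best = np.where(benefit, scores.max(axis=0), scores.min(axis=0))
worst = np.where(benefit, scores.min(axis=0), scores.max(axis=0))

norm = weights * (best - scores) / (best - worst)        # weighted distance from the best value
S, R = norm.sum(axis=1), norm.max(axis=1)                # group utility / individual regret

v = 0.5
Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())
print("VIKOR ranking (lower Q is better):", np.argsort(Q))
```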

Author 1: Najib Mouhassine
Author 2: Mostapha Badri
Author 3: Mohamed Moughit

Keywords: SDN; QoS; WLAN; Handover; F-AHP; VIKOR

PDF

Paper 54: ROI Image Encryption using YOLO and Chaotic Systems

Abstract: In this paper, we design a cellular automata (CA)-based ROI (region of interest) image encryption system that can effectively reduce computational cost while maintaining an appropriate level of security. The proposed image encryption system obtains a cryptographic image through three steps. First, a region of interest with high importance is extracted from the entire image using deep learning; we use the YOLO (You Only Look Once) algorithm to extract the ROI from a given original image. Next, the detected ROI is encrypted using the Chen system, a chaos-based function with high security. Finally, the execution time is effectively reduced by encrypting the entire image using a hardware-friendly CA. The security of the proposed encryption system is verified through various statistical experiments and analyses.

Author 1: Sung Won Kang
Author 2: Un Sook Choi

Keywords: Image encryption; cellular automata; YOLO algorithm; deep learning; Chen system; region of interest

PDF

Paper 55: Multi-point Fundraising and Distribution via Blockchain

Abstract: Trust and transparency are significant facets that are highly valued by charitable organizations in achieving their mission and encouraging donations from the public. However, after many high-profile scandals, the faith in charities is questionable, heralding the need for an increased level of transparency among such organizations. Fortunately, leveraging Blockchain technology in charities' systems could help rebuild the integrity of these organizations. This study aims to raise the level of integrity showcased by charities by creating a multi-point fundraising approach using smart contracts. The proposed system offers a transparent fundraising platform through its integration of charity organization evaluators. Several steps were taken to achieve this goal. Firstly, the study investigated the potential of Blockchain in improving the level of transparency. Secondly, a probing process was undertaken to choose a suitable platform for the server side of the system; this process involved identifying salient features of Blockchain platforms based on the proposed system requirements. After the probing process, a Decision Support System (DSS) was utilized to determine the most suitable Blockchain platform. The results showed that the Ethereum platform is the best fit for the proposed system.

Author 1: Abdullah Omar Abdul Kareem Alassaf
Author 2: Fakhrul Hazman Yusoff

Keywords: Blockchain; smart contract; transparency; charity

PDF

Paper 56: Power System Controlled Islanding using Modified Discrete Optimization Techniques

Abstract: Controlled islanding is implemented to save the power system from experiencing blackouts during severe sequential line tripping. The power system is partitioned into several stand-alone islands by removing the optimal transmission lines during controlled islanding execution. Since selecting the optimal transmission lines to be removed (the cutset) is central to this action, a good technique is required to determine the optimal islanding solution (the lines to be removed). Thus, this paper develops two techniques, namely Modified Discrete Evolutionary Programming (MDEP) and Modified Discrete Particle Swarm Optimization (MDPSO), to determine the optimal islanding solution for controlled islanding implementation. The better of the two techniques, based on its capability to produce the optimal islanding solution with the minimal objective function (minimal power flow disruption), is selected to implement the controlled islanding. The performance of these techniques is evaluated through case studies using the IEEE 118-bus test system. The results show that the MDEP technique produces the best optimal islanding solution compared with MDPSO and other previously published techniques.

Author 1: N. Z. Saharuddin
Author 2: I. Z. Abidin
Author 3: H. Mokhlis
Author 4: M.Y. Hassan

Keywords: Controlled islanding; modified discrete evolutionary programming (MDEP) technique; modified discrete particle swarm optimization (MDPSO) technique; minimal power flow disruption; power imbalance

PDF

Paper 57: Development of Technology to Support Large Information Storage and Organization of Reduced User Access to this Information

Abstract: This article addresses the problem of developing a technology for supporting large information storages and organizing delimited user access to this information, providing a service both for managing these objects and for organizing access to them. Solving this problem makes it possible to create a conceptual model that identifies the basic entities among information objects and establishes the relationships between them. It also allows the development of technical documentation reflecting the results of the first stage of creating an information system: solving problems of syntactic and technical interoperability, developing a single interface, interacting with users, etc. In existing digital library (DL) developments, as a rule, search and access to information are provided only through visual graphical interfaces. The task of the subsystem for integrating various digital resources is to provide other subsystems with a single interface for access to information stored in the data sources of the system. That is, any resource must be cataloged in a standard way and provided with metadata, access rules, and a unique identifier. To implement search functions outside graphical interfaces, support for special network services and query languages is required. Ideally, all information systems should support a single search profile and a single query language.

Author 1: Serikbayeva Sandugash Kurmanbekovna
Author 2: Batyrkhanov Ardak Gabitovich
Author 3: Sambetbayeva Madina Aralbaevna
Author 4: Sadirmekova Zhana Bakirbaevna
Author 5: Yerimbetova Aigerim Sembekovna

Keywords: Information systems; digital library; metadata; collection; privilege; rights; administrator

PDF

Paper 58: Open Text Ontology Mining to Improve Retrievals of Information

Abstract: Information retrieval is the main task of extracting relevant information from documents. Most information retrieval systems are based on the keyword approach to extract knowledge from relevant documents. The experiment shows that ontology can improve the results and overcome the weaknesses of the keyword approach. The ontology implementation method is based on phrase formation and semantic relationships between words. This study tested 10 Malay documents using ontology to retrieve information. The results obtained were compared with the results of manual information retrieval performed by experts, using precision and recall measures. In this study, there are three semantic relationships between words that are capable of expressing knowledge in documents: the taxonomy relationship, the attribute relationship and the non-taxonomy relationship. These ontology relationships can be formed using the taxonomy, attribute and non-taxonomy relationship algorithms based on the linguistic rules of the Malay language. The precision and recall results of this experiment show that the ontology approach can enhance the performance of information retrieval from relevant documents.

Author 1: Mohd Pouzi Hamzah
Author 2: Syarifah Fatem Na’imah Syed Kamaruddin

Keywords: Information retrieval; ontology; Malay text; taxonomy relationship; non-taxonomy relationship

PDF

Paper 59: Adaptive Control Technique Effects on Single Link Bilateral Articulated Robot Arm

Abstract: This paper describes a technique for addressing the issue of instability within a force controller by developing a model of a bilateral master-slave haptic system that incorporates a Disturbance Observer (DOB) in a robotic simulation. The suggested model is used in conjunction with conventional controllers to correct undesired noise that occurs inside the working system of a particular joint of the youBot arm. To acquire the target position, the controller additionally compensates for interference by adjusting its position response. Two tests were carried out to examine and compare the feedback of the system employing the proposed approach against a system with the conventional, standard settings. The experimental findings demonstrate the resilience of the suggested system, as the system integrated with the observer is more precise and faster. All system feedback from the conducted experiments was measured in the simulation platform.

Author 1: Nuratiqa Natrah Mansor
Author 2: Muhammad Herman Jamaluddin
Author 3: Ahmad Zaki Shukor

Keywords: Force and position controller; disturbance observer; simulated bilateral system; adaptive control; manipulator arm

PDF

Paper 60: A Novel Method for Rainfall Prediction and Classification using Neural Networks

Abstract: In the field of food production, forecasting rainfall reliably and accurately is an important and difficult job, needed to maintain water sources for major population centres and reduce the risk of flooding. Accurate and genuine forecasts of rainfall on monthly and seasonal time scales help provide beneficiaries with knowledge for controlling water supplies, farm forecasting and integrated crop insurance applications. Rainfall prediction remains a challenging task for researchers, and most existing rainfall prediction techniques fail to achieve sufficient accuracy. We therefore propose a new, effective hybrid approach for forecasting and classifying rainfall using a neural network and the ACO method. The collected rainfall data were preprocessed by filling in missing data and normalized by min-max normalization; the processed data were then given to various classifiers to evaluate their performance. The performance of the existing and proposed models is compared: existing feed-forward, cascade-forward and pattern recognition NN classifiers against the proposed ACO + feed-forward backpropagation, ACO + cascade-forward backpropagation and ACO + pattern recognition NN classifiers. The entire HNN forecasting protocol consists of pre-processing, choosing the input vector, and selecting the number of hidden nodes using ACO together with ANN modelling.
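
As an illustration of the preprocessing and baseline neural-network step (missing-value filling, min-max normalization, feed-forward training), a scikit-learn sketch is shown below; the file name, column names and network size are assumptions, and the ACO-based optimization of hidden nodes is not reproduced here.

```python
# Sketch of the preprocessing and baseline feed-forward step: missing values
# are filled, features are min-max normalized and a small neural network is
# trained; the ACO-based tuning of hidden nodes is not reproduced.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rain = pd.read_csv("rainfall.csv")                  # hypothetical monthly records
rain = rain.fillna(rain.mean(numeric_only=True))    # fill missing data

X = rain.drop(columns=["rainfall_mm"])
y = rain["rainfall_mm"]
X = MinMaxScaler().fit_transform(X)                 # min-max normalization to [0, 1]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=1)
mlp.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, mlp.predict(X_test)))
```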

Author 1: K. Varada Rajkumar
Author 2: K. Subrahmanyam

Keywords: Pattern recognition; ant colony optimization; artificial neural network; rainfall prediction; feed-forward; cascade-forward; data processing

PDF

Paper 61: A Hybrid Model to Profile and Evaluate Soft Skills of Computing Graduates for Employment

Abstract: Emerging tools such as Game Based Assessments have been valuable in talent screening and in matching soft skills for job selection. However, these techniques/models are rather standalone and are unable to provide an objective measure of the effectiveness of their approach, leading to a mismatch of skills. In this research study, we propose a Theoretical Hybrid Model combining aspects of Artificial Intelligence and Game Based Assessment in profiling, assessing and ranking graduates based on their soft skills. Firstly, an Intelligent Controller is used to extract and classify the graduate skill profile based on data extracted using the traditional assessment methods of self-evaluation and interview. With motivation and engagement as a competitive difference, an existing Game Based Assessment (OWIWI) is then used to assess the soft skills of these graduates, generating a Graduate Profile based on the results of the game. A ranking technique is then applied to match the profile to selected job requirements based on the soft skills required for the job and the graduate's strengths. Finally, a comparative analysis is conducted based on the soft skills profile obtained before employment (pre-employment) and objective feedback on soft skills obtained after employment (post-employment) to provide a validity check on the effectiveness of the overall Hybrid Model. Specifically, data obtained from this study can be useful in addressing unemployment due to the mismatch of soft skills at the Higher Learning Institution level.

Author 1: Hemalatha Ramalingam
Author 2: Raja Sher Afgun Usmani
Author 3: Ibrahim Abakar Targio Hashem
Author 4: Thulasyammal Ramiah Pillai

Keywords: Soft skills; artificial intelligence; intelligent controller; game based assessment; graduate profile; hybrid model

PDF

Paper 62: System Dynamics Modeling for Solid Waste Management in Lima Peru

Abstract: This research work focuses on environmental care based on the treatment of solid waste, both organic and inorganic. Improperly handled waste causes deterioration of the environment and the ozone layer, which is why we are currently seeing abrupt changes in climate and diseases caused by environmental pollution. The objective of the research work is to perform system dynamics modeling for effective and efficient solid waste management in Lima, Peru, and thus contribute to the scientific community in achieving a future vision for solid waste management. The methodology used was system dynamics, which made it possible to analyze and understand the behavior of a complex solid waste system over a given time. In addition, Vensim software was used for the system dynamics modeling, creating the causal diagram and the Forrester diagram for solid waste management. The result is the proposed system dynamics model for solid waste management, simulated from 2020 to 2030, in which solid waste reaches a favorable equilibrium of 23,066 tons by 2030. Thanks to this system dynamics modeling, society will be made aware of the need to sort and reuse solid waste in order to reduce environmental pollution. Likewise, a healthy environment will benefit health, agriculture, education and society as a whole.

Author 1: Margarita Giraldo Retuerto
Author 2: Dayana Ysla Espinoza
Author 3: Laberiano Andrade-Arenas

Keywords: Causal diagram; environmental pollution; forrester diagram; systems dynamics; vensim

PDF

Paper 63: Analysis of Distance Learning in the Professional School of Systems Engineering and Informatics

Abstract: The distance modality exponentially accelerated the use of technological tools in times of pandemic. In this context, educational institutions at all levels implemented actions to strengthen teaching work through training. The present study was carried out at the University of Sciences and Humanities, considering a distance teaching process based on three dimensions: teaching strategy, pedagogical resources and materials, and evaluation. The study's objective is to analyze the distance learning process in these three dimensions in order to propose solutions for virtual teaching. The methodology applied was a mixed approach: qualitative, through a focus group, and quantitative, through a student survey. The student population was 159, with a sample of 113, a confidence level of 95% and a margin of error of 5%. The focus group results show that teachers have difficulties applying teaching strategies in the virtual modality, evidenced in the management of digital tools, the elaboration of rubrics to evaluate learning, and the use of pedagogical resources and materials. This is complemented by surveys that show partial acceptance of teaching work in the distance modality: the teaching strategy dimension has an average of 3.76 with a standard deviation (SD) of 0.63, and 58.41% agree with the teacher's teaching strategy; likewise, the pedagogical resources and materials dimension obtained an average of 3.72 with an SD of 0.74 and 51.33% agreement. In the evaluation dimension, an average of 3.76 with an SD of 0.72 was obtained, with 55.75% agreeing with the way the teacher evaluates. This research work serves as input for future curricular designs in the distance modality.

Author 1: Eleazar Flores Medina
Author 2: Yrma Principe Somoza
Author 3: Laberiano Andrade-Arenas
Author 4: Janet Corzo Zavaleta
Author 5: Roberto Yon Alva
Author 6: Samuel Vargas Vargas

Keywords: Distance modality; evaluation; focus group; resources and pedagogical materials; teaching strategy

PDF

Paper 64: An ICU Admission Predictive Model for COVID-19 Patients in Saudi Arabia

Abstract: Globally, COVID-19 has already resulted in around 170 million confirmed cases and, as of May 31, 2021, more than 3.54 million deaths. This pandemic has given rise to numerous public health and socioeconomic issues, emphasizing the significance of unraveling the epidemic's history and forecasting the disease's potential dynamics. A variety of mathematical models have been proposed to obtain a deeper understanding of disease transmission mechanisms. Machine Learning (ML) models have been used in the last decade to identify patterns and enhance prediction efficiency in healthcare applications. This paper proposes a model to predict COVID-19 patients' admission to the intensive care unit (ICU). The model is built upon robust, well-known classification algorithms, including classic Machine Learning Classifiers (MLCs), an Artificial Neural Network (ANN) and ensemble learning. The model's strength in predicting COVID-19 infected patients is shown by performance analysis of various MLCs and error metrics. Among the ML models used, the ANN model resulted in the highest accuracy, 97.9%, over other models. The Mean Squared Error showed that the ANN method had the lowest error (0.0809). In conclusion, this paper could be beneficial to ICU staff for predicting ICU admission based on COVID-19 patients' clinical characteristics.
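
For illustration, a scikit-learn sketch of training an ANN on clinical features and reporting accuracy and mean squared error is given below; the file name and feature list are hypothetical, not the Saudi COVID-19 dataset used in the paper.

```python
# Sketch of the evaluation idea: train an ANN on clinical features and report
# accuracy and mean squared error; the feature list and file are hypothetical
# placeholders, not the dataset used in the paper.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, mean_squared_error

patients = pd.read_csv("covid_patients.csv")        # hypothetical clinical records
X = patients[["age", "oxygen_saturation", "respiratory_rate", "comorbidity_count"]]
y = patients["icu_admitted"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    stratify=y, random_state=7)
scaler = StandardScaler().fit(X_train)

ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=7)
ann.fit(scaler.transform(X_train), y_train)

pred = ann.predict(scaler.transform(X_test))
print("Accuracy:", accuracy_score(y_test, pred))
print("MSE:", mean_squared_error(y_test, pred))
```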

Author 1: Hamza Ghandorh
Author 2: Muhammad Zubair Khan
Author 3: Raed Alsufyani
Author 4: Mehshan Khan
Author 5: Yousef M. Alsofayan
Author 6: Anas A. Khan
Author 7: Ahmed A. Alahmari

Keywords: COVID-19; ANN; ensemble learning method; prediction; ICU admission; Saudi Arabia

PDF

Paper 65: Applying Custom Algorithms in Windows Active Directory Certificate Services

Abstract: The article presents a solution to the problem that the O'zDSt 1092:2009 algorithm is not recognized by the operating system, and to the problem of using digital certificates generated with the O'zDSt 1092:2009 and O'zDSt 1106:2009 algorithms. These algorithms were adopted in 2009 but are still not recognized by the operating system. For other cryptographic algorithms used in Windows, cryptographic service providers have been developed that provide cryptographic operations to other software; these cryptographic service providers do not support the above algorithms. It therefore becomes necessary to develop a cryptographic provider supporting the O'zDSt 1106:2009 hashing algorithm and the O'zDSt 1092:2009 signature algorithm. However, to work with digital certificates, a cryptographic provider alone is not enough: special extensions are also required to encode and decode digital certificate data. Therefore, the development of an extension for cryptographic providers is presented. Also, for managing digital certificates and the key lifecycle, a method of integrating cryptographic providers with Windows Active Directory Certificate Services is presented. The developed cryptographic providers comprise three types of providers: a hash provider, a signature provider, and a key storage provider. The architecture of the key storage provider, a method for secure storage of cryptographic keys, and key access control are proposed. A description of the O'zDSt 1092:2009 algorithm and the implementation of the functions of the key storage provider interface are shown.

Author 1: Alaev Ruhillo

Keywords: O’zDSt 1106:2009 hashing algorithm; the O’zDSt 1092:2009 signature algorithm; active directory certificate services; digital certificate; key access control; key storage provider

PDF

Paper 66: Robust Real-time Head Pose Estimation for 10 Watt SBC

Abstract: Head pose estimation has always been an essential part of many applications, such as autonomous driving and driving-assist systems; performance optimization therefore yields better performance as well as lower computing and power needs, allowing such applications to run on the embedded devices inside these systems. In this article we present an implementation, on a single board computer (SBC), of a new 3D head pose estimation system that estimates a person's head pose in real time for applications such as driver monitoring systems, drones, gesture recognition and tracking devices. The system is developed on an SBC suitable for very low-powered applications; it utilizes only the data provided by an IR camera sensor to estimate both the head and camera pose, without any need for external sensors. The system combines traditional image processing techniques for image projection, feature detection, key-point description and 3D pose estimation with machine learning techniques for face detection and facial landmark detection.
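
As an illustration of the classical 3D pose-from-landmarks step, the OpenCV sketch below recovers head rotation with cv2.solvePnP from six assumed 2D landmarks and a rough generic 3D face model; the model points, landmark values and camera intrinsics are illustrative assumptions, and the landmark detection itself is taken as given.

```python
# Sketch of the classic pose-from-landmarks step with cv2.solvePnP; the generic
# 3D face model points and camera intrinsics are rough illustrative values, and
# landmark detection (e.g. by a CNN) is assumed to have been done already.
import cv2
import numpy as np

# Approximate generic 3D model points (nose tip, chin, eye corners, mouth corners).
model_points = np.array([(0.0, 0.0, 0.0),
                         (0.0, -330.0, -65.0),
                         (-225.0, 170.0, -135.0),
                         (225.0, 170.0, -135.0),
                         (-150.0, -150.0, -125.0),
                         (150.0, -150.0, -125.0)], dtype=np.float64)

# 2D landmark positions, assumed to come from a face-landmark detector.
image_points = np.array([(320.0, 240.0), (325.0, 360.0), (250.0, 200.0),
                         (390.0, 200.0), (270.0, 300.0), (370.0, 300.0)], dtype=np.float64)

w, h = 640, 480
camera_matrix = np.array([[w, 0, w / 2],
                          [0, w, h / 2],
                          [0, 0, 1]], dtype=np.float64)   # rough pinhole intrinsics
dist_coeffs = np.zeros((4, 1))                            # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(model_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_ITERATIVE)
rotation_matrix, _ = cv2.Rodrigues(rvec)
print("Head rotation matrix:\n", rotation_matrix)
```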

Author 1: Emad Wassef
Author 2: Hossam E. Abd El Munim
Author 3: Sherif Hammad
Author 4: Maged Ghoneima

Keywords: Head Pose Estimation; real-time; face detection; face landmarks localization; single board computing; SBC; GPU optimization

PDF

Paper 67: SIP-MBA: A Secure IoT Platform with Brokerless and Micro-service Architecture

Abstract: The Internet of Things is one of the most interesting technology trends today. Devices in an IoT network are often geared towards mobility and compact size, and thus have rather weak hardware configurations. There are many lightweight protocols tailored to limited processing power and low energy consumption, of which MQTT is the most typical. The current MQTT protocol supports three quality-of-service (QoS) levels, and the user has to trade off the security of packet transmission against transmission rate, bandwidth and energy consumption. The MQTT protocol, however, does not support packet storage mechanisms, which means that when the receiver is interrupted, the packet cannot be retrieved. In this paper, we present the brokerless SIP-MBA Platform, designed with a micro-service architecture and using the gRPC protocol to transmit and receive messages. This design optimizes the transmission rate, power consumption and transmission bandwidth while still providing reliable communication. In addition, we implement user and thing management mechanisms with the aim of improving security. Finally, we present test results obtained by implementing a data collection service over gRPC and comparing it with streaming data over the MQTT protocol.

Author 1: Lam Nguyen Tran Thanh
Author 2: Nguyen Ngoc Phien
Author 3: The Anh Nguyen
Author 4: Hong Khanh Vo
Author 5: Hoang Huong Luong
Author 6: Tuan Dao Anh
Author 7: Khoi Nguyen Huynh Tuan
Author 8: Ha Xuan Son

Keywords: Internet of Things (IoT); gRPC; Single Sign-On; brokerless; micro-service; MQTT; message queue; security

PDF

Paper 68: IoHT-MBA: An Internet of Healthcare Things (IoHT) Platform based on Microservice and Brokerless Architecture

Abstract: The Internet of Things (IoT) is currently one of the most interesting technology trends. IoT can be divided into five main areas: healthcare, environmental, smart city, commercial, and industrial. The IoT platform is considered the backbone of every IoT architecture, so its optimal design is an essential issue that should be carefully considered from different aspects. Although IoT is applied in multiple domains, there are still three main features that are challenging to improve: i) data collection, ii) user and device management, and iii) remote device control. Today's medical IoT systems are often too focused on the big data or access control aspects of participants, and not on collecting data accurately, quickly, and efficiently, on power redundancy, or on system expansion. This is very important for the medical sector, which always prioritizes the availability of data for therapeutic purposes over other aspects. In this paper, we introduce the IoHT Platform for the healthcare environment, designed with a microservice and brokerless architecture and focusing strongly on the three aforementioned characteristics. In addition, our IoHT Platform considers five other issues: (1) the limited processing capacity of the devices, (2) energy saving for the device, (3) speed and accuracy of data collection, (4) security mechanisms, and (5) scalability of the system. Also, to make the IoHT Platform suitable for the field of health monitoring, we add real-time alerts for the medical team. In the evaluation section, moreover, we describe the evaluation (i.e., proof-of-concept) that demonstrates the effectiveness of the proposed IoHT Platform in terms of performance, absence of errors, and insensitivity to geographical distance. Finally, a complete code solution is published on the authors' GitHub repository to encourage further reproducibility and improvement.

Author 1: Lam Nguyen Tran Thanh
Author 2: Nguyen Ngoc Phien
Author 3: The Anh Nguyen
Author 4: Hong Khanh Vo
Author 5: Hoang Huong Luong
Author 6: Tuan Dao Anh
Author 7: Khoi Nguyen Huynh Tuan
Author 8: Ha Xuan Son

Keywords: Internet of Health Things (IoHT); microservice; brokerless; gRPC; kafka; single sign-on; RBAC

PDF

Paper 69: A Survey on the Effectiveness of Virtual Reality-based Therapy and Pain Management

Abstract: Virtual reality refers to the technology used to create multi-sensory three-dimensional environments that can be navigated, manipulated, and interacted with by a user. This paper's objective is to categorize the most common areas that use virtual reality (VR) for managing pain (psychological and physical). To our knowledge, this is the first survey that summarizes all of these areas in one place. This paper reviews studies that used VR for psychological treatment, especially for phobias. It also summarizes the current literature on using virtual reality interventions for managing acute, chronic, and cancer pain. Based on the review, virtual reality shows great potential for controlling acute pain, such as pain associated with burn wound care. However, only limited studies have investigated the impact of using virtual reality on patients with chronic pain. The findings indicate that VR distraction has a great impact on pain and distress related to cancer and its treatments. This paper also discusses the challenges and limitations of the current research. Notably, the identified studies recommend VR distraction as a promising adjunct for pain reduction and psychological treatment. However, further research needs to be conducted to determine under what conditions VR distraction will provide more analgesic effects.

Author 1: Fatma E. Ibrahim
Author 2: Neven A. M. Elsayed
Author 3: Hala H. Zayed

Keywords: Virtual reality; mental health; cancer pain; distraction; pain management

PDF

Paper 70: Improved Medical Image Classification Accuracy on Heterogeneous and Imbalanced Data using Multiple Streams Network

Abstract: Small and massively imbalanced datasets are long-standing problems in medical image classification. Traditionally, researchers use pre-trained models to address these problems; however, pre-trained models typically have a huge number of trainable parameters. Small datasets make it challenging to train such models adequately, and imbalanced datasets easily lead to overfitting on the classes with more samples. Multiple-stream networks that learn a variety of features have recently gained popularity. Therefore, in this work, a quad-stream hybrid model called QuadSNet, using conventional as well as separable convolutional neural networks, is proposed to achieve better performance on small and imbalanced datasets without using any pre-trained model. The designed model extracts hybrid features, and the fusion of such features makes the model more robust on heterogeneous data. Besides, a weighted margin loss is used to handle the problem of class imbalance. QuadSNet is trained and tested on seven different classification datasets. To evaluate the advantages of QuadSNet on small and massively imbalanced data, it is compared with six state-of-the-art pre-trained models on three benchmark datasets based on pneumonia, COVID-19, and cancer classification. To assess the performance of QuadSNet on general classification datasets, it is compared with the best model on each of the remaining four datasets, which contain larger, balanced, grayscale, color, or non-medical image data. The results show that QuadSNet handles class imbalance and overfitting better than existing pre-trained models, with far fewer parameters, on small datasets. Meanwhile, QuadSNet has competitive performance on general datasets.
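
The class-imbalance idea above (a margin loss with per-class weights) can be sketched in PyTorch as follows. The inverse-frequency weighting and the use of the built-in multi-class margin loss are assumptions for illustration; the paper's exact weighted margin loss may differ.

```python
# Hedged sketch: a class-weighted multi-class margin loss for imbalanced data (PyTorch).
# Inverse-frequency weights and nn.MultiMarginLoss are illustrative choices, not
# necessarily the exact loss used by QuadSNet.
import torch
import torch.nn as nn

def make_weighted_margin_loss(class_counts):
    counts = torch.tensor(class_counts, dtype=torch.float32)
    weights = counts.sum() / (len(counts) * counts)   # rarer classes get larger weights
    return nn.MultiMarginLoss(margin=1.0, weight=weights)

# Example: three classes with heavily imbalanced sample counts.
criterion = make_weighted_margin_loss([5000, 300, 80])
logits = torch.randn(16, 3, requires_grad=True)       # scores from any classification head
targets = torch.randint(0, 3, (16,))
loss = criterion(logits, targets)
loss.backward()
```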

Author 1: Mumtaz Ali
Author 2: Riaz Ali
Author 3: Nazim Hussain

Keywords: Medical image classification; convolutional neural networks; class imbalance; small dataset; margin loss

PDF

Paper 71: A Randomized Hyperparameter Tuning of Adaptive Moment Estimation Optimizer of Binary Tree-Structured LSTM

Abstract: Adam (Adaptive Moment Estimation) is one of the promising techniques for parameter optimization in deep learning, because Adam is an adaptive learning-rate method and is easier to use than gradient descent. In this paper, we propose a novel randomized search method for Adam that randomizes the parameters beta1 and beta2: random noise generated from a normal distribution is added to beta1 and beta2 every time the update function is called. In the experiment, we implemented a binary tree-structured LSTM and the Adam optimizer function. It turned out that, in the best case, randomized hyperparameter tuning with beta1 ranging from 0.88 to 0.92 and beta2 ranging from 0.9980 to 0.9999 is 3.81 times faster than the fixed parameters beta1 = 0.999 and beta2 = 0.9. Our method is independent of the optimization algorithm and therefore should also perform well with other algorithms such as NAG, AdaGrad, and RMSProp.
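
The core mechanism described above, perturbing beta1 and beta2 with Gaussian noise on every update, can be sketched as follows in PyTorch. The noise scale, nominal centres, and clipping ranges are illustrative assumptions; the paper's implementation (on a binary tree-structured LSTM) is not reproduced here.

```python
# Hedged sketch: randomized (beta1, beta2) for Adam, where Gaussian noise is added to
# the betas before every update step. Noise scale and clipping ranges are assumptions.
import random
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

def randomized_adam_step(optimizer, loss,
                         b1_range=(0.88, 0.92), b2_range=(0.998, 0.9999), sigma=0.01):
    # Perturb the betas, clamp them to the allowed ranges, then take the usual Adam step.
    for group in optimizer.param_groups:
        b1 = min(max(0.90 + random.gauss(0.0, sigma), b1_range[0]), b1_range[1])
        b2 = min(max(0.999 + random.gauss(0.0, sigma / 10), b2_range[0]), b2_range[1])
        group["betas"] = (b1, b2)      # Adam reads the betas from the param group each step
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
randomized_adam_step(optimizer, loss)
```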

Author 1: Ruo Ando
Author 2: Yoshiyasu Takefuji

Keywords: Adaptive moment estimation; gradient descent; tree-structured LSTM; hyperparameter tuning

PDF

Paper 72: Encryption on Multimodal Biometric using Hyper Chaotic Method and Inherent Binding Technique

Abstract: Chaotic maps are non-convergent and highly sensitive to initial values; their applications include secure digital identity in distributed systems. Face and fingerprint biometric templates are subjected to a hyper-chaotic map, leading to an encrypted image. The encrypted image is fed as input to a deoxyribonucleic acid (DNA) sequence, and the dimensionality of the generated DNA sequence is reduced by hashing. The intra-variation for a subject is measured with the inter-quartile range, and the image set with the minimal variation value is identified to select the consistent image of a subject. A 256-bit key is generated from the consistent image and reduced to 128 bits by eliminating subject-specific outliers and redundant values. User-specific features are extracted for both traits using a ResNet-50 convolutional neural network and fused by addition. The final key is bound to the feature vector by a permutation function, and the time taken for key binding is estimated with the benchmark database SDUMLA-HMT. The outcome reveals that the time taken for key binding varies between 45 ms and 58 ms for an image of size 80 MB.
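
As background for the chaotic-encryption step, the toy sketch below uses a simple 1-D logistic map keystream XORed with the template bytes. It only illustrates the sensitivity-to-initial-value idea; the paper's hyper-chaotic map and DNA encoding are considerably more involved.

```python
# Hedged sketch: chaotic-map encryption with a 1-D logistic map keystream (illustration
# only; the paper uses a hyper-chaotic map followed by DNA encoding and hashing).
import numpy as np

def logistic_keystream(length, x0=0.543, r=3.99):
    # Logistic map x_{n+1} = r * x_n * (1 - x_n); highly sensitive to x0 (the "key").
    x = x0
    stream = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        stream[i] = int(x * 256) % 256
    return stream

def xor_encrypt(image: np.ndarray, x0: float) -> np.ndarray:
    flat = image.astype(np.uint8).ravel()
    ks = logistic_keystream(flat.size, x0=x0)
    return np.bitwise_xor(flat, ks).reshape(image.shape)   # the same call decrypts

template = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in biometric template
cipher = xor_encrypt(template, x0=0.731)
assert np.array_equal(xor_encrypt(cipher, x0=0.731), template)
```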

Author 1: Nalini M K
Author 2: Radhika K R

Keywords: Chaotic systems; DNA sequences; cryptographic techniques; Convolutional Neural Networks (CNN); key binding

PDF

Paper 73: Evaluation of Routing Protocols and Mobility in Flying Ad-hoc Network

Abstract: The ability of dynamic reconfiguration, quick response, and ease of deployment has made Unmanned Aerial Vehicles (UAVs) a paramount solution in several areas such as military applications. A flying ad-hoc network (FANET) is a network of UAVs connected wirelessly and configured continuously without infrastructure. Routing on its own is not the only significant factor; the mobility pattern of a UAV in a FANET is an even more significant factor and an interesting research topic. Routing protocols give us a clearer and better perception of the routing structure for FANETs. In this paper, routing protocols such as Ad-hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR), Temporally Ordered Routing Algorithm (TORA), Geographic Routing Protocol (GRP), and Optimized Link State Routing (OLSR) are compared using performance parameters such as number of hops, packet loss ratio, throughput, and end-to-end delay. The mobility models considered are the Pursue Mobility Model (PRS), Semi-Circular Random Movement (SCRM), Manhattan Grid Mobility Model (MGM), and Random Waypoint Mobility (RWPM). The evaluation is carried out with three scenarios, one sender node and one receiver node, all senders and one receiver, and all senders and all receivers, for the above protocols and mobility models. For all evaluation scenarios, the performance of OLSR is the most efficient among the five routing protocols under the four performance parameters, due to its proactive nature, which keeps the routing information up to date with the help of MPR (Multi-Point Relay) in the network, resulting in a reduction of routing overhead in the network.

Author 1: Emad Felemban

Keywords: Flying ad-hoc network (FANET); mobility models; adhoc routing protocols; OPNET; Unmanned Aerial Vehicles (UAVs)

PDF

Paper 74: Combining Word Embeddings and Deep Neural Networks for Job Offers and Resumes Classification in IT Recruitment Domain

Abstract: Nowadays, the use of web portals known as job boards for publishing job offers by recruiters has grown considerably. Candidates, in turn, apply to job positions via the job boards. Since opportunities are available on a wide scale and the job application process is fast and straightforward, the data flow turns into large-volume data sets which are hard to handle. Most companies tend to automate the candidate selection process, which aims to match job offers with suitable resumes. In this paper, we propose a supervised learning approach to classify the job offers and CVs shared on recruitment sites in order to enhance the automatic recruitment process. We used natural language processing techniques for job offer and CV preprocessing. Next, we used word embeddings and deep neural networks to train two models: the first categorizes recruitment documents based on job skills, and the second predicts the expertise degree class. The experimental results show that our proposal is very effective.
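
A minimal sketch of the "word embeddings + deep neural network" classifier described above is given below in Keras. The vocabulary size, sequence length, number of skill classes, and the simple averaging architecture are placeholders; the paper's exact models are not reproduced.

```python
# Hedged sketch: an embedding + dense network for classifying recruitment documents
# into skill categories. Sizes and architecture are placeholder assumptions.
import numpy as np
import tensorflow as tf

VOCAB_SIZE, SEQ_LEN, NUM_CLASSES = 20000, 200, 8

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),       # learned word embeddings
    tf.keras.layers.GlobalAveragePooling1D(),          # average word vectors per document
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Toy stand-in for tokenized job offers / CVs (integer word ids) and skill labels.
x = np.random.randint(0, VOCAB_SIZE, size=(256, SEQ_LEN))
y = np.random.randint(0, NUM_CLASSES, size=(256,))
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```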

Author 1: Amine Habous
Author 2: El Habib Nfaoui

Keywords: IT recruitment; word embeddings; deep neural networks; text classification; natural language processing

PDF

Paper 75: Mean Value Estimation of Shape Operator on Triangular Meshes

Abstract: The principal curvatures, the eigenvalues of the shape operator, are important differential geometric features that characterize an object's shape; indeed, they play a central role in geometry processing and physical simulation. The shape operator is a local operator resulting from the matrix quotient of the normal derivative by the metric tensor, and hence its matrix representation is not symmetric in general. In this paper, the local differential property of the shape operator is exploited to propose a local mean-value estimation of the shape operator on triangular meshes. In contrast to state-of-the-art approximation methods that produce a symmetric operator, the resulting estimation matrix is accurate and generally not symmetric. Various comparative examples are presented to demonstrate the accuracy of the proposed estimation. The results show that the principal curvatures arising from the estimated shape operator are accurate in comparison with the standard estimations in the literature.
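
For reference, the standard smooth-surface quantities that such an estimator targets can be written as follows (this is the textbook definition, not the paper's discrete mean-value construction):

```latex
% Shape operator as the matrix quotient of the second fundamental form by the metric
% (first fundamental form); it is not symmetric as a matrix in general.
\[
  S \;=\; \mathrm{I}^{-1}\,\mathrm{II},
  \qquad
  \mathrm{I} = \begin{pmatrix} E & F \\ F & G \end{pmatrix},
  \qquad
  \mathrm{II} = \begin{pmatrix} L & M \\ M & N \end{pmatrix},
\]
\[
  \{\kappa_1,\kappa_2\} = \operatorname{eig}(S),
  \qquad
  H = \tfrac{1}{2}\operatorname{tr}(S) = \tfrac{\kappa_1+\kappa_2}{2},
  \qquad
  K = \det(S) = \kappa_1\kappa_2 .
\]
```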

Author 1: Ahmed Fouad El Ouafdi
Author 2: Hassan El Houari

Keywords: Curvature estimation; shape operator; triangular meshes; discrete differential operator

PDF

Paper 76: Content-based Image Retrieval using Tesseract OCR Engine and Levenshtein Algorithm

Abstract: Image Retrieval Systems (IRSs) are applications that allow one to retrieve images saved at any location on a network. Most IRSs make use of reverse lookup to find images stored on the network based on image properties such as size, filename, title, color, texture, shape, and description. This paper provides a technique for obtaining a full image document given that the user has some portion of the document under search. To demonstrate the reliability of the proposed technique, we designed a system to implement the algorithm. A combination of an Optical Character Recognition (OCR) engine and an improved text-matching algorithm was used in the system implementation: the Tesseract OCR engine and the Levenshtein algorithm were integrated to perform the image search. The extracted text is compared to the text stored in the database, and a query result is returned when a similarity ratio of 0.15 or above is obtained. The results showed 100% successful retrieval of the appropriate file based on the match, even when partial query images were submitted.
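
The pipeline described above, OCR the query image and then rank stored document texts by a Levenshtein-based similarity, can be sketched as follows. The in-memory "database" dictionary and the ranking details are assumptions; only the 0.15 threshold comes from the abstract.

```python
# Hedged sketch: Tesseract OCR on a query image plus a Levenshtein similarity ratio,
# with the 0.15 acceptance threshold mentioned in the abstract. The in-memory document
# store is a stand-in for the real database layer.
from PIL import Image
import pytesseract

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def similarity(a: str, b: str) -> float:
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def retrieve(query_image_path: str, documents: dict, threshold: float = 0.15):
    query_text = pytesseract.image_to_string(Image.open(query_image_path))
    scored = [(doc_id, similarity(query_text, text)) for doc_id, text in documents.items()]
    return [(d, s) for d, s in sorted(scored, key=lambda t: t[1], reverse=True) if s >= threshold]
```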

Author 1: Charles Adjetey
Author 2: Kofi Sarpong Adu-Manu

Keywords: Image Retrieval Systems; image processing; Optical Character Recognition (OCR); text matching algorithm; Tesseract OCR engine; Levenshtein Algorithm

PDF

Paper 77: New Data Placement Strategy in the HADOOP Framework

Abstract: Today, the quantities of data generated and exchanged between information systems continue to increase. Storing and exploiting such quantities cannot be done without big data systems with mechanisms capable of meeting the technological challenges commonly grouped under the four Vs (Volume, Velocity, Variety, and Veracity). These technologies mainly include the Distributed File System (DFS). Like Hadoop, which is based on HDFS, the main big data systems use distributed data storage in which a subsystem is responsible for subdividing data (data striping) and replicating it on a network of nodes called a grid. In the typical case of Hadoop, a grid generally consists of many nodes grouped in multiple racks. The logic of distributing the stored data across the grid follows a simple strategy that guarantees the durability of the data and a certain write speed. This strategy takes into consideration neither the technical characteristics of the nodes nor the number of requests on the data, which results in a considerable loss of processing capacity of the grid. In this work we propose a new placement strategy based on the exploitation analysis of new information integrated into the HDFS metadata model. A significant 20% improvement in overall processing time was reached through the simulations we conducted on Hadoop.

Author 1: Akram Elomari
Author 2: Larbi Hassouni
Author 3: Abderrahim MAIZATE

Keywords: Big data; data storage; Hadoop; DFS; HDFS; data striping; chunks; placement strategy; performance optimization

PDF

Paper 78: A Proposed Framework for Big Data Analytics in Higher Education

Abstract: Students, faculty, and other members of the higher education (HEd) system are increasingly reliant on various information technologies. Such reliance results in a plethora of data that can be explored to obtain relevant statistics or insights. Another reason to explore the data is to acquire valuable insight into the novel unstructured forms of data that are discovered and often found to be connected with elements of social media such as pictures, videos, web pages, audio files, etc. Moreover, the data can bring additional valuable benefits when processed in the context of HEd. When used strategically, Big Data (BD) provides educational institutions with the chance to improve the quality of education from all perspectives and steer HEd students toward higher rates of completion. Further, this will improve student persistence and results, all of which are facilitated by technology. With this aim, the current research proposes a framework that collects data from heterogeneous sources and analyzes it using BD analytics tools to perform various types of analysis that will benefit learners, faculty, and other members of the HEd system. The current research also focuses on the challenges of acquiring BD from various sources.

Author 1: Beenu Mago
Author 2: Nasreen Khan

Keywords: Big data analysis; higher education; learning analytics; academic analytics

PDF

Paper 79: Fine-tuned Predictive Model for Verifying POI Data

Abstract: Mapping websites and geo portals play a vital role in daily life due to the availability of geo-tagged data. From booking a cab to searching for a place, getting traffic information, reviewing a place, or searching for a doctor or the best school in the locality, we are heavily dependent on the map services and geo portals available for finding such information. There is voluminous data available from these sources, and it keeps increasing every moment. These data are mostly collected through crowdsourcing methods in which people contribute. Following the basic principle of garbage in, garbage out, the quality of this data impacts the quality of the services built on it. Therefore, a model that can predict the quality and accuracy of geotagged point-of-interest (POI) data is highly desirable. We propose a novel fine-tuned predictive model to check the accuracy of this data using the best-suited supervised machine learning approach. This work covers the complete life cycle of model building, from data collection to fine-tuning of the hyperparameters. We cover the challenges particular to geotagged POI data and remedies to resolve these issues, making the data suitable for predictive modeling that classifies records by their accuracy. This is a unique work that considers multiple sources, including ground truth data, to verify the geotagged data using a machine learning approach. After exhaustive experiments, we obtained the best values of the hyperparameters for the selected predictive model, built on a real data set prepared specifically for the proposed solution. This work provides a way to develop a robust pipeline for predicting the accuracy of crowdsourced geotagged data.
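
The hyperparameter fine-tuning step described above can be sketched with scikit-learn's grid search. The model family (random forest), the synthetic features, and the parameter grid below are assumptions; the paper selects its own "best suitable" supervised learner and real POI features.

```python
# Hedged sketch of hyperparameter fine-tuning for a POI-accuracy classifier.
# Model family, synthetic features and grid are assumptions, not the paper's choices.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Stand-in for engineered POI features (e.g. distance to ground truth, name similarity, ...).
X, y = make_classification(n_samples=1000, n_features=12, weights=[0.7, 0.3], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10, 20], "min_samples_leaf": [1, 5]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5, scoring="f1")
search.fit(X_train, y_train)

print("best hyperparameters:", search.best_params_)
print("held-out accuracy:", search.best_estimator_.score(X_test, y_test))
```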

Author 1: Monika Sharma
Author 2: Mahesh Bundele
Author 3: Vinod Bothale
Author 4: Meenakshi Nawal

Keywords: Crowdsourced; fine-tuning; geotagged data; hyperparameters; predictive model

PDF

Paper 80: A New Secure Algorithm for Upcoming Sensitive Connection between Heterogeneous Mobile Networks

Abstract: One of the most important concepts in heterogeneous mobile networks is Vertical Handover (VHO). VHO is a vital process carried out by Mobile Users (MUs) in order to satisfy their preferences for security and cost, in addition to other network and terminal parameters such as latency and velocity. However, proactive security for an upcoming sensitive connection while performing VHO between heterogeneous mobile networks has not been considered. This paper therefore proposes a new secure algorithm to address this issue: Proactive Security for Upcoming Sensitive Connection (PSUSC). Analysis of the PSUSC algorithm shows that it greatly reduces potential attacks compared with previous works, which rely on using a less secure RAT.

Author 1: Omar Khattab

Keywords: Vertical handover security; mobile networks; wireless networks; heterogeneous wireless

PDF

Paper 81: A Planar 2×2 MIMO Antenna Array for 5G Smartphones

Abstract: Here, a planar 2×2 MIMO configuration for 5G smartphones is presented. A single-element modified planar tree profile shape (MPTPS) antenna is implemented to investigate its suitability for future 5G communication in different sub-6 GHz spectrum bands. The size of the single MPTPS antenna is 40 × 25 mm2. Electronic band gap (EBG) and partial ground plane (PGP) techniques have been utilized to tune this antenna. The antenna works from 2.81 to 7.23 GHz, with a (VSWR < 2) bandwidth of 4.42 GHz that covers all the mid-range sub-6 GHz 5G frequencies. It also has a comparatively good gain of 3.14 dBi, a high efficiency of 96%, and a bi-directional radiation pattern. The antenna has been implemented on a 145 × 75 mm2 smartphone mainboard in a MIMO configuration using polarization diversity. More than -21.1 dB isolation has been found between the different ports. A good gain as high as 6.59 dBi is observed for the MIMO array in the band. Also, in terms of MIMO performance, an excellent envelope correlation coefficient of less than 0.0029 and a minimum diversity gain of 9.9853 have been observed. The investigation has been further extended by adding a liquid crystal display (LCD) to assess the radiation performance and a hand phantom to assess performance in terms of specific absorption rate (SAR). It is observed that the SAR value is as low as 0.887641 at 3.5 GHz. This design will motivate researchers to develop high-performance MIMO arrays for 5G smartphones.

Author 1: A. K. M. Zakir Hossain
Author 2: Nurulhalim Bin Hassim
Author 3: W. H. W. Hassan
Author 4: Win Adiyansyah Indra
Author 5: Safarudin Gazali Herawan
Author 6: Mohamad Zoinol Abidin Bin Abd. Aziz

Keywords: MIMO; smartphone antenna; isolation; envelope correlation coefficient; diversity gain; specific absorption rate (SAR)

PDF

Paper 82: Machine Learning Approach of Hybrid KSVN Algorithm to Detect DDoS Attack in VANET

Abstract: Most self-driving vehicles are susceptible to different types of attacks due to their communication patterns and changing network topology, since these vehicles depend on external communication through a VANET (vehicular ad-hoc network). VANETs have attracted great interest from industry and academia, but they face a number of issues, such as security, traffic congestion, and road safety, that have not been addressed properly in recent years. Addressing these issues requires building a secure framework for the communication system in VANETs, and detecting different types of attacks is one of the most important needs of network security, which has been studied extensively by many researchers. To improve performance and adapt to the VANET scenario, in this paper we propose a novel hybrid KSVM scheme, based on the KNN and SVM algorithms as part of a machine learning approach, to build a secure framework for detecting Distributed Denial of Service (DDoS) attacks. The experimental results show that this approach gives better results than other machine learning based algorithms for detecting DDoS attacks.
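
One plausible way to combine KNN and SVM, as the abstract suggests, is a soft-voting ensemble; the sketch below illustrates that idea with scikit-learn on synthetic "traffic" features. The hybridization scheme and features are assumptions, since the abstract does not detail the authors' exact combination.

```python
# Hedged sketch: a KNN + SVM hybrid via soft voting (scikit-learn). The ensemble scheme
# and the synthetic per-flow features are illustrative, not the paper's exact method.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for per-flow VANET traffic features labelled normal (0) / DDoS (1).
X, y = make_classification(n_samples=2000, n_features=15, weights=[0.8, 0.2], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

hybrid = VotingClassifier(
    estimators=[
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=7))),
        ("svm", make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))),
    ],
    voting="soft",
)
hybrid.fit(X_train, y_train)
print("detection accuracy:", hybrid.score(X_test, y_test))
```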

Author 1: Nivedita Kadam
Author 2: Krovi Raja Sekhar

Keywords: K-Nearest neighbor (KNN); support vector machine (SVM); DDoS (distributed denial of service attack)

PDF

Paper 83: An Intelligent Approach for Data Analysis and Decision Making in Big Data: A Case Study on E-commerce Industry

Abstract: A recent informational phenomenon has emerged as one of the considerable innovations in information systems, commonly referred to as "Big Data". The latter is currently trendy, both in academia and industry, and is used to describe a wide range of concepts, from data extraction, storage, and management to data processing and analysis using well-known schemas, in order to extract patterns from hidden relationships, make better decisions, and derive new knowledge using analytical techniques and solutions. The technology that enables the potential of big data to be exploited is called "Big Data Analytics". Big data analytics is a major challenge that enables researchers, analysts, and business users to make better decisions faster. Big data has become an important part of marketing research and marketing strategies. The e-commerce industry is one of the industries that currently benefits most from the potential of big data collection and analysis. This paper therefore aims to demonstrate the use of big data to understand customers and to improve and facilitate the decision-making process. In this research, we apply multiple machine learning (ML) models to a large dataset from the e-commerce area by studying several practical cases on online markets.

Author 1: EL FALAH Zineb
Author 2: RAFALIA Najat
Author 3: ABOUCHABAKA Jaafar

Keywords: Big data; data analytics; decision making; big data analytics; big data analysis; machine learning; marketing; e-commerce

PDF

Paper 84: Improved Incentive Pricing Wireless Multi-service Single Link with Bandwidth Attribute

Abstract: Among the objectives that service providers have to achieve are determining the increase or decrease in price due to a change in service quality, and determining the value of that service quality. Multi-service wireless Internet pricing schemes that exploit bandwidth as the quality attribute are designed to take into account the need of ISPs to provide high-quality services to users and to increase their revenue, considering the limited bandwidth of the resources. The modified model improves the original model by adding variables and parameters to the multi-service network model, specifying the base price for QoS (α) and the premium quality (β) as variables or parameters, together with the service class load factor, basis factor, and differentiation factor. The models are solved with Lingo 18.0 to obtain the best solution. The results show that the modified model is the best and yields the highest profit for the service provider when the cost of all changes in quality of service is increased and α and β are set as constants or variables.

Author 1: Nael Hussein
Author 2: Kamaruzzaman Seman
Author 3: Fitri Maya Puspita
Author 4: Khairi Abdulrahim
Author 5: Mus’ab Sahrim

Keywords: Optimal solution; multi service network; wireless pricing scheme; bandwidth QoS attribute

PDF

Paper 85: Drip Irrigation Detection for Power Outage-Prone Areas with Internet-of-Things Smart Fertigation Management System

Abstract: In drip irrigation agriculture, or the fertigation technique, sufficient amounts of water and nutrients are crucial for a plant's growth and development. An electronic timer is usually used to control plant watering automatically, and the schedule is set according to the different stages of plant growth. The timer has to be adjusted frequently since the required amount of water differs between growth stages. In power outage-prone regions, the problem with timer-based scheduled irrigation worsens, since the watering schedule is disrupted by occasional blackouts, leading to an insufficient supply of water and nutrients and, in turn, poor crop yields. The typical solution to such problems is to hire field workers to monitor the functionality of the automated system and plant health and to re-adjust the timer once a power outage occurs. However, this solution is ineffective, time-consuming, and incurs high overhead costs. This paper proposes a systematic irrigation method using the Internet of Things (IoT) framework in order to improve the monitoring of plant growth and consequently improve the efficiency of the workflow. The systematic fertigation monitoring system consists of power outage alerts and online notifications of plant irrigation, pesticide delivery, and polybag cleaning schedules. As a result, using the proposed system, higher efficiency in farming management is achieved, with a 40% reduction in manpower compared to a typical fertigation-based farming system. This system demonstrates greater control over irrigation scheduling, plant growth, automatic recording of pesticide scheduling, and polybag cleaning, all of which will improve crop yields significantly.

Author 1: Dahlila Putri Dahnill
Author 2: Zaihosnita Hood
Author 3: Afzan Adam
Author 4: Mohd Zulhakimi Ab Razak
Author 5: Ahmad Ghadafi Ismail

Keywords: Irrigation technique; water and nutrient; automatic drip irrigation; crop; power-outage

PDF

Paper 86: View-independent Vehicle Category Classification System

Abstract: Vehicle category classification is important, but it is a challenging task, especially when the vehicles are captured by a surveillance camera with different view angles. This paper aims to develop a view-independent vehicle category classification system. It proposes a two-phase system: one phase recognizes the view angle, helping the second phase recognize the vehicle category, including bus, car, motorcycle, and truck. In each phase, several descriptors and machine learning techniques, including traditional algorithms and deep neural networks, are employed. In particular, we used three descriptors, HOG (Histogram of Oriented Gradients), LBP (Local Binary Patterns), and Gabor filters, with two classifiers, SVM (Support Vector Machine) and k-NN (k-Nearest Neighbor). We also used a Convolutional Neural Network (CNN, or ConvNet). Three experiments were conducted on several datasets. The first experiment is dedicated to choosing the best approach for recognizing the view: rear or front. The second experiment aims to classify the vehicle categories for each view. In the third experiment, we developed the overall system, in which the categories were classified independently of the view. The experimental results reveal that the CNN gives the highest recognition accuracy of 94.29% in the first experiment, and HOG with SVM or k-NN gives the best results (99.58%, 99.17%) in the second experiment. The system can robustly recognize vehicle categories with an accuracy of 95.77%.
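
The HOG + SVM branch mentioned above can be sketched as follows with scikit-image and scikit-learn. The crop size and HOG parameters are assumptions, and the LBP, Gabor, and CNN branches are omitted.

```python
# Hedged sketch of the HOG + SVM branch only. Image size and HOG parameters are
# assumptions; the paper's LBP, Gabor and CNN pipelines are not reproduced here.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(gray_image: np.ndarray) -> np.ndarray:
    # gray_image: 2-D array, e.g. a 128x128 grayscale vehicle crop.
    return hog(gray_image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

# Toy stand-ins for vehicle crops and their category labels
# (0 = bus, 1 = car, 2 = motorcycle, 3 = truck).
images = [np.random.rand(128, 128) for _ in range(40)]
labels = np.random.randint(0, 4, size=40)

X = np.stack([hog_features(img) for img in images])
clf = SVC(kernel="linear").fit(X, labels)
print("predicted category:", clf.predict(X[:1]))
```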

Author 1: Sara Baghdadi
Author 2: Noureddine Aboutabit

Keywords: Vehicle category classification; view recognition; machine learning; deep learning; convolutional neural network

PDF

Paper 87: Machine Learning Predictors for Sustainable Urban Planning

Abstract: While essential for economic reasons, rapid urbanization has had many negative impacts on the environment and the social wellbeing of humanity. Heavy traffic and unexpected geohazards are some of the effects of uncontrolled development. This situation points its finger at urban planning and design; there are numerous automation tools to help urban planners assess and forecast, yet unplanned development still occurs, impeding sustainability. Automation tools use machine learning classification models to analyze spatial data and various trend views before planning a new urban development. Although there are many sophisticated tools and massive datasets, big cities with colossal migration still witness traffic jams, pollution, and environmental degradation affecting urban dwellers' quality of life. This study analyzes the current predictors used in urban planning machine learning models and identifies suitable predictors to support sustainable urban planning. A correct set of predictors could improve the efficiency of urban development classification models and help urban planners enhance the quality of life in big cities.

Author 1: Sarojini Devi Nagappan
Author 2: Salwani Mohd Daud

Keywords: Urban planning; sustainable development; urban development classification model; machine learning; urban development predictors

PDF

Paper 88: Designing Strategies for Autonomous Stock Trading Agents using a Random Forest Approach

Abstract: Machine learning-based autonomous agents are valuable for back-testing stock trading strategies, including algorithmic trading. Several studies in the financial literature have proposed artificial intelligence-based algorithms that support decision making for financial investment, but few studies have provided systematic processes for designing intelligent trading agents. This paper overviews the steps involved in designing agents that forecast stock prices in a trading strategy. These steps include data preprocessing, time-series segmentation, dimensionality reduction, clustering, and others. Our main contributions are: (i) a systematic process that guides the design and development of trading agents, and (ii) a random forest forecasting model.
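
To make the forecasting step concrete, the sketch below predicts the next-day return direction from lagged returns with a random forest. The lag window and the synthetic price path are assumptions, and the preprocessing, segmentation, and clustering steps listed in the abstract are omitted.

```python
# Hedged sketch of the random forest forecasting step only: next-day return direction
# from lagged returns. The lag window and synthetic price series are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))   # synthetic price path
returns = np.diff(prices) / prices[:-1]

LAGS = 5
X = np.column_stack([returns[i:len(returns) - LAGS + i] for i in range(LAGS)])
y = (returns[LAGS:] > 0).astype(int)                          # 1 = up move, 0 = down move

split = int(0.8 * len(X))                                     # chronological train/test split
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X[:split], y[:split])
print("directional accuracy:", model.score(X[split:], y[split:]))
```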

Author 1: Monira Aloud

Keywords: Decision trees; financial forecasting; machine learning; random forest; trading agents; trading strategy

PDF

Paper 89: Risk Assessment of Attack in Autonomous Vehicle based on a Decision Tree

Abstract: Risk management has become increasingly essential in all areas, and it represents a cornerstone of the Safety Management System. In principle, it brings together all the procedures needed to identify and evaluate risks in order to improve system performance. With the development of the transportation system and the appearance of intelligent transportation systems (ITS) that are changing citizens' mobility nowadays, the risks associated with them have also increased exponentially. In ITS, vehicles can reach 100% autonomy since they are equipped with sensors to move safely. The vehicle's architecture and embedded sensors harbor inherent vulnerabilities that attackers may exploit to craft malicious acts. In addition, vehicles communicate with each other and with the road infrastructure via vehicular ad-hoc networks (VANETs) and may use Internet connections, raising the risk that an attacker performs malicious actions and may take control of a vehicle to commit terrorist acts. This paper aims to draw attention to the risks associated with autonomous vehicles (AVs) and the value of evaluating flaws inherent in AVs. For this purpose, our paper extensively details a new approach to assess the risk of attacks targeting autonomous vehicles. Our proposed approach uses a decision tree model to predict risk criticality based on the probability of attack success and its impact on the targeted system.
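
As an illustration of the final step, a decision tree mapping (probability of attack success, impact) to a criticality class can be sketched as below. The training matrix is a synthetic stand-in for an expert-labelled risk table, not data from the paper.

```python
# Hedged sketch: a decision tree that maps (probability of attack success, impact score)
# to a risk-criticality class. The tiny training set below is synthetic and illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Columns: probability of success in [0, 1], impact score in {1..5}.
X = np.array([[0.1, 1], [0.2, 2], [0.3, 4], [0.5, 2], [0.6, 4],
              [0.7, 5], [0.8, 3], [0.9, 5], [0.4, 1], [0.95, 4]])
# Classes: 0 = low, 1 = medium, 2 = high criticality.
y = np.array([0, 0, 1, 1, 2, 2, 1, 2, 0, 2])

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print("criticality for p=0.85, impact=5:", tree.predict([[0.85, 5]])[0])
```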

Author 1: Sara FTAIMI
Author 2: Tomader MAZRI

Keywords: Vehicular adhoc network; intelligent transportation system; decision tree; risk assessment; impact; autonomous vehicle; attacks

PDF

Paper 90: An Automated Framework for Enterprise Financial Data Pre-processing and Secure Storage

Abstract: Analysis of financial data is highly crucial and critical, as the results or conclusions communicated on the basis of that analysis can have a great impact on personal and enterprise-scale business processes. The primary source of financial data is the business process, and the data is often collected by automation tools deployed at various points of the business process data flow. The data entered into the business process is primarily provided by the stakeholders of the process, and at various levels of the process the data is modified, translated, and sometimes completely transformed, through which impurities or anomalies are introduced into the data. These impurities, such as outliers and missing values, strongly influence the final decisions made after processing such datasets. Hence, appropriate pre-processing of financial data is a clear research demand. A good number of parallel research outcomes that try to solve these problems can be observed; nonetheless, the majority of the solutions are either highly time-complex or not sufficiently accurate. Thus, this work proposes an automated framework for identification and imputation of outliers using an iterative clustering method, identification and imputation of missing values using a differential count based binary iterations method, and finally secure data storage using regression-based key generation. The proposed framework showcases nearly 100% accuracy in detection of outliers and missing values with highly improved time complexity.
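
As a simplified illustration of clustering-based outlier flagging (the general idea behind the framework's first stage), the sketch below marks records that lie far from their nearest KMeans centroid. The paper's iterative clustering, its differential-count imputation, and the regression-based key generation are more elaborate and are not reproduced here.

```python
# Hedged sketch: clustering-based outlier flagging via distance to the nearest KMeans
# centroid. A simplified illustration only, not the paper's iterative clustering method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
clean = rng.normal(loc=[100.0, 20.0], scale=[5.0, 2.0], size=(500, 2))   # stand-in financial records
outliers = rng.normal(loc=[400.0, 90.0], scale=[10.0, 5.0], size=(10, 2))
X = np.vstack([clean, outliers])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
threshold = dist.mean() + 3 * dist.std()          # simple 3-sigma cut on centroid distance
flagged = np.where(dist > threshold)[0]
print("flagged rows:", flagged)
```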

Author 1: Sirisha Alamanda
Author 2: Suresh Pabboju
Author 3: G. Narasimha

Keywords: Financial data pre-processing; outlier treatment; missing value treatment; regression; differential iterations; iterative clustering

PDF

Paper 91: System Design and Case Study Reporting on AQASYS: A Web-based Academic Quality Assurance System

Abstract: The demands of modern education have evolved from a teacher-centric requirement to a learner-centric requirement. Knowledge, skill, and competence are the most sought-after attributes of a graduate. Features such as an objective focus of learning, curriculum planning, a set of high expectations, and extended opportunities for the learner after completion of education are at the center of all planning. Skill-oriented, outcome-based standardization has been infused by societal stakeholders into the modern global education system to create work-ready human capital. In this paper, a software product for academic quality assurance is presented. The software provides a generic framework for any educational institution that aims to implement known international standards of education. The software accepts the data and computes the quality parameters as per the selected standards. It has an analytical module that provides summary analytics and generates course reports in the given format automatically. The software is tested with a case study, and the results are presented. The paper also presents the system design approach, with discussion of the technologies selected for the development.

Author 1: Adel Alfozan
Author 2: Mohammad Ali Kadampur

Keywords: Outcome-based education; quality standards; automated software; system design; education technology; accreditation

PDF

Paper 92: A Systematic Literature Review of the Types of Authentication Safety Practices among Internet Users

Abstract: The authentication system is one of the most important methods for maintaining information security in smart devices. There are many authentication methods, such as password authentication, biometric authentication, signature authentication, and so on, to protect cloud users' data. However, online information is not yet effectively authenticated. The purpose of this systematic literature review is to examine the current types of authentication methods used as safety practices for information security among Internet users. The PRISMA method was adopted to present a systematic literature review of 28 articles from three main databases (20 articles from Scopus, one article from Google Scholar, and seven articles from Dimensions). This study used the Prediction Study Risk of Bias Assessment Tool to appraise the quality of the included studies. From the findings of the study, three main themes were identified: password authentication, biometric authentication, and multi-factor authentication. Multi-factor authentication was found to be the most secure and most frequently recommended authentication method. It is highly recommended to implement three-factor authentication and multi-biometric models in the future, as they provide a higher level of information security among cloud computing users.

Author 1: Krishnapriyaa Kovalan
Author 2: Siti Zobidah Omar
Author 3: Lian Tang
Author 4: Jusang Bolong
Author 5: Rusli Abdullah
Author 6: Akmar Hayati Ahmad Ghazali
Author 7: Muhammad Adnan Pitchan

Keywords: Password authentication; biometric authentication; multi-factor authentication; information security; safety practices

PDF

Paper 93: Development of Learning Analytics Dashboard based on Moodle Learning Management System

Abstract: Digitalization catalyzes drastic changes in a particular subject or area; it is a process of transforming operational structures, for example in the educational domain. Digitalization in the academic field has brought the classroom to users' fingertips through the prevalence of e-learning applications, learning management systems, and similar tools. However, with the increasing number of digital learning platform users, educators find it hard to monitor their students' progress. Analytics that analyze the data generated from users' usage patterns give educators insight into the performance of their students. With that insight, they can apply early interventions and modify their delivery method to suit the students' needs and, at the same time, increase the quality of the content. This study illustrates the development of a learning analytics dashboard that can improve learning outcomes for educators and students.

Author 1: Ong Kiat Xin
Author 2: Dalbir Singh

Keywords: Learning analytics; learning management system; moodle

PDF

Paper 94: Real-time Driver Drowsiness Detection using Deep Learning

Abstract: Every year thousands of lives are lost worldwide due to vehicle accidents, and the main reason behind this is driver drowsiness. A drowsiness detection system will help to reduce such accidents and save many lives around the world. To address this problem, we propose a methodology based on Convolutional Neural Networks (CNNs) that formulates drowsiness detection as an object detection task: it detects and localizes whether the eyes are open or closed based on a real-time video stream of the driver. The MobileNet CNN architecture with a Single Shot Multibox Detector (SSD) is the technology used for this object detection task, and a separate algorithm operates on the output of the SSD_MobileNet_v1 architecture. A dataset consisting of around 4500 images was labeled with the object classes yawn, no-yawn, open eye, and closed eye to train the SSD_MobileNet_v1 network. Around 600 randomly selected images were used to test the trained model using the PASCAL VOC metric. The proposed approach ensures good accuracy and computational efficiency. It is also affordable, as it can process incoming video streams in real time and does not need any expensive hardware support; only a standalone camera is required, and the system can be implemented in cars using cheap devices such as a Raspberry Pi 3 or other IP cameras.
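
The "separate algorithm" layered on top of the per-frame detector output could, for example, raise an alert when closed-eye detections persist over consecutive frames. The sketch below illustrates that kind of rule; the frame threshold and detection format are assumptions, not the paper's exact logic.

```python
# Hedged sketch of decision logic on top of per-frame detector output: raise a drowsiness
# alert when "closed eye" detections persist over N consecutive frames. The threshold and
# the per-frame label format are assumptions.
from typing import Iterable, List

CLOSED_FRAMES_THRESHOLD = 15          # roughly 0.5 s at 30 fps (illustrative)

def drowsiness_alerts(frame_labels: Iterable[set]) -> List[int]:
    """frame_labels: per-frame sets of detected classes, e.g. {'open_eye'} or {'closed_eye', 'yawn'}."""
    alerts, run = [], 0
    for idx, labels in enumerate(frame_labels):
        run = run + 1 if "closed_eye" in labels and "open_eye" not in labels else 0
        if run >= CLOSED_FRAMES_THRESHOLD:
            alerts.append(idx)        # frame index at which an alarm would be triggered
            run = 0                   # reset after alerting
    return alerts

# Example: 20 closed-eye frames in a row trigger one alert.
stream = [{"open_eye"}] * 10 + [{"closed_eye"}] * 20 + [{"open_eye"}] * 5
print(drowsiness_alerts(stream))
```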

Author 1: Md. Tanvir Ahammed Dipu
Author 2: Syeda Sumbul Hossain
Author 3: Yeasir Arafat
Author 4: Fatama Binta Rafiq

Keywords: Deep learning; drowsiness detection; object detection; MobileNets; Single Shot Multibox Detector

PDF

Paper 95: Wireless Intrusion and Attack Detection for 5G Networks using Deep Learning Techniques

Abstract: A Wireless Intrusion Detection System (WIDS) is an important part of any system or company that is connected to the Internet and has a wireless network inside it, because of the increasing number of internal and external attacks on the network. WIDSs are used to predict and detect wireless network attacks, such as flooding, DoS, and evil-twin attacks, that badly affect system availability. Artificial intelligence techniques (machine learning, deep learning) are popular and effective solutions for building network intrusion detection, because of the ability of these algorithms to learn complicated behaviors and then use the learned system to discover and detect network attacks. In this work, we combine an autoencoder with a deep neural network (DNN) to protect companies by detecting intrusions and attacks in 5G wireless networks. We used the Aegean Wi-Fi Intrusion Dataset (AWID). Our WIDS achieved very good performance, with an accuracy of 99% for the dataset attack types flooding, impersonation, and injection.
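
A minimal Keras sketch of the autoencoder component is given below: it is trained to reconstruct benign traffic features, and frames with high reconstruction error are treated as suspected attacks. The feature dimension, layer sizes, and the 3-sigma threshold are assumptions; the paper additionally attaches a DNN classifier and trains on the AWID dataset.

```python
# Hedged sketch of the autoencoder component only (Keras). Feature dimension, layer sizes
# and the reconstruction-error threshold are assumptions, not the paper's configuration.
import numpy as np
import tensorflow as tf

N_FEATURES = 74                                    # placeholder for preprocessed AWID features

inputs = tf.keras.Input(shape=(N_FEATURES,))
encoded = tf.keras.layers.Dense(32, activation="relu")(inputs)
encoded = tf.keras.layers.Dense(16, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(32, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(N_FEATURES, activation="linear")(decoded)
autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

normal_traffic = np.random.rand(2048, N_FEATURES)  # stand-in for benign Wi-Fi frames
autoencoder.fit(normal_traffic, normal_traffic, epochs=3, batch_size=64, verbose=0)

# Frames whose reconstruction error exceeds a threshold are treated as suspected attacks.
errors = np.mean((autoencoder.predict(normal_traffic, verbose=0) - normal_traffic) ** 2, axis=1)
threshold = errors.mean() + 3 * errors.std()
print("suspected attacks in this batch:", int(np.sum(errors > threshold)))
```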

Author 1: Bayana Alenazi
Author 2: Hala Eldaw Idris

Keywords: Wireless intrusion detection system; 5G; autoencoder; deep learning; attack detection

PDF
