
IJACSA Volume 12 Issue 8

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

View Full Issue

Paper 1: A New Approach of e-Commerce Web Design for Accessibility based on Game Accessibility in Chinese Market

Abstract: China is the largest e-commerce market globally, with a share of more than 40% of the total value of e-commerce transactions in the world, up from just 1% a decade ago. Chinese consumers are among the world's heaviest users of electronic payments, ordering services, and video watching on smart devices. E-commerce, a branch of business administration conducted electronically over Internet networks, aims to carry out buying and selling operations. With the popularity of e-commerce, people from more and more backgrounds are using e-commerce websites and apps, but some of these users are unable to use them or face barriers when doing so. Accessibility design enables anyone (regardless of ability, for example, color-blindness) to successfully navigate, understand, and use an application. Accessibility design is widely used in video games, and this experience can guide e-commerce accessibility design. This study analyzes five well-known e-commerce websites worldwide and the consumption habits of people who face barriers to use, from the perspective of accessible design, and suggests two new accessible design methods based on game accessibility and web accessibility to make these e-commerce websites/apps better suited to the usage habits of this particular group.

Author 1: Hemn Barzan Abdalla
Author 2: Lu Zhen
Author 3: Zhang Yuantu

Keywords: e-Commerce; accessibility design; color-blind; game accessibility

Download PDF

Paper 2: Research on the Relationship between Exploratory Behavior and Consumer Values using Eye Tracking Gaze Data

Abstract: In recent years, the popularity of e-commerce has witnessed a significant uptick. Physical apparel stores therefore need to implement measures that focus on the behavioral experience of shopping in a physical store, something e-commerce lacks. The purpose of this paper is to clarify the relationship between customer values and product search behavior, and to propose product placement and customer service methods based on those values. We performed factor analysis and cluster analysis on questionnaire data about customers' purchasing values. Moreover, we extracted product search behavior from eye-tracking gaze data collected in a physical apparel store. The results showed that product search behavior differed across three cluster types: a trend cluster, a self-esteem cluster, and a conservative cluster. Finally, we propose in-store product placement that takes the features of these clusters into account.

Author 1: Mei Nonaka
Author 2: Kohei Otake
Author 3: Takashi Namatame

Keywords: Consumer values; eye tracking; factor analysis; cluster analysis

Download PDF

Paper 3: Lip Detection and Tracking with Geometric Constraints under Uneven Illumination and Shadows

Abstract: In the modern era, recent advances in computer vision have drawn growing attention to lip reading. Lip reading is used to understand speech without hearing it, and an automated version of this process is referred to as a lip-reading system. To construct an automatic lip-reading system, locating the lips and defining the lip region is essential, especially under different lighting conditions, which significantly impact the robustness of the system. Unfortunately, previous studies have not adequately solved lip localization under illumination changes and shadows. In this paper, we present a local region-based approach towards a lip-reading system. It consists of four significant parts: first, detecting and localizing the human face, mouth, and lip region of interest in the first video frame; second, applying pre-processing to overcome the interference caused by illumination effects, shadows, and the appearance of teeth; third, creating a contour line using sixteen key points with geometric constraints and storing the coordinates of these points; and finally, tracking the coordinates of the sixteen points in the following frames. The proposed method adapts to lip movement and is robust against the appearance of teeth, shadows, and low-contrast environments. Extensive experiments show encouraging results and the proposed method's effectiveness compared to existing methods.

Author 1: Waqqas ur Rehman Butt
Author 2: Luca Lombardi

Keywords: Lip detection; lip tracking; illumination equalization; shadow filtering; 16 points lip model

Download PDF

Paper 4: Data Hiding Method with Principal Component Analysis and Image Coordinate Conversion

Abstract: A data hiding method that uses Principal Component Analysis (PCA) and image coordinate conversion as preprocessing for wavelet Multi-Resolution Analysis (MRA) is proposed. The method introduced in this paper recovers the secret data based on the characteristics of the original multispectral image. Through experiments, it is found that the proposed method is superior to the conventional data hiding method without any preprocessing. The method allows only a party who knows the characteristics of the original multispectral image to recover the secret data, which is useful when the information of the original image needs to be protected. Moreover, in the introduced method, the information of the secret data is protected by the existence of the eigenvectors and the oblique coordinate transformation; that is, the secret data cannot be restored unless the information of the true original image is known. The principal component transformation coefficients differ for each original image and are composed of the eigenvectors of the original image.

Author 1: Kohei Arai

Keywords: Multi-dimensional wavelet transformation; multi resolution analysis (MRA); image data hiding; secret image; Daubechies basis function

Download PDF

Paper 5: DCRL: Approach for Pattern Recognition in Price Time Series using Directional Change and Reinforcement Learning

Abstract: Developing an intelligent pattern recognition model for electronic markets has been a vital research direction in the field. Research continues into intelligent learning algorithms capable of recognizing and classifying price patterns, thereby providing investors and market analysts with better insights into price time series. In this paper, an adaptive intelligent Directional Change (DC) pattern recognition model with Reinforcement Learning (RL), called the DCRL model, is proposed. Compared with traditional analytical approaches that use a fixed time interval and specified market features, the DCRL is an alternative intelligent approach that samples price time series using an event-based time interval and RL. In this model, the environment's behavior is incorporated into the RL process to automate the identification of directional price changes. The DCRL learns the price time-series representation by adaptively selecting different price features depending on the current state. DCRL is evaluated using Saudi stock market data with different price trends. A series of analyses demonstrates its effective analytical performance in detecting price changes and the extensive applicability of the DCRL model.
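
To make the event-based sampling concrete, the sketch below detects directional-change events in the usual way: the trend reverses once the price retraces a threshold fraction from its running extreme. This is a generic illustration of the DC definition in Python, not the authors' code, and the threshold value is an arbitrary assumption.

    def dc_events(prices, theta=0.02):
        """Return (index, 'up'/'down') pairs marking confirmed DC events."""
        events = []
        trend = 'up'            # assumed initial mode, as is common in DC work
        ext = prices[0]         # running extreme: last high (up) or last low (down)
        for i, p in enumerate(prices[1:], start=1):
            if trend == 'up':
                if p > ext:
                    ext = p
                elif p <= ext * (1 - theta):   # price fell theta from the high
                    events.append((i, 'down'))
                    trend, ext = 'down', p
            else:
                if p < ext:
                    ext = p
                elif p >= ext * (1 + theta):   # price rose theta from the low
                    events.append((i, 'up'))
                    trend, ext = 'up', p
        return events

    print(dc_events([100, 103, 101, 98, 96, 99, 101]))  # [(3, 'down'), (5, 'up')]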

Author 1: Nora Alkhamees
Author 2: Monira Aloud

Keywords: Machine learning; reinforcement learning; directional-change event; pattern recognition; stock market

Download PDF

Paper 6: Constrained Quantum Optimization for Resource Distribution Management

Abstract: The cloud computing field suffers from the heavy processing caused by exponentially increasing data traffic. Therefore, optimizing network performance and achieving a better quality of service (QoS) has become a central goal. In cloud computing, the energy consumption of a resource distribution management system (RDMS) is presented as an optimization problem. Most existing classical optimization approaches, such as heuristics and metaheuristics, have high computational complexity. In this work, we propose a quantum optimization strategy, named the constrained quantum optimization algorithm (CQOA), that executes the tasks exponentially faster and with high accuracy. We exploit the CQOA in an RDMS as a toy example to point out the efficiency of the proposed quantum strategy in reducing energy consumption and computational complexity. Following that, we investigate the CQOA's implementation, setup, and computational complexity. Finally, we create a simulation environment to evaluate the efficiency of the suggested constrained quantum strategy.

Author 1: Sara El Gaily
Author 2: Sándor Imre

Keywords: Quantum computing; constrained quantum optimization algorithm; quantum extreme values searching algorithm; resource distribution management; cloud computing

Download PDF

Paper 7: Prototype Design and Experimental Evaluation e-Healthcare System based on Molecular Analysis Devices

Abstract: Recently, wireless body area networks (WBANs) and the mobile Internet of Things (IoT) have grown greatly and been integrated into different types of systems, such as electronic/mobile-healthcare (e/m-healthcare) systems. Analyzing the composition of drugs or performing rapid medical tests in field conditions, where traditional laboratory methods of analysis are not suitable, are among the main tasks of an m-healthcare system. Therefore, in this study, we propose a novel distributed e-health system structure to perform such analysis, in which portable infrared micro-spectrometers are utilized and boundary (edge) calculations are then applied. More specifically, the system uses a portable infrared micro-spectrometer with a specially designed application connected to a public communication network, which can process the analysis results using boundary calculations. Moreover, it provides remote processing and long-term storage of analysis data using artificial neural networks and cloud technologies. Finally, simulation results show that preprocessing (error checking), data buffering, and edge computing can significantly reduce network latency and the volume of transferred data.

Author 1: Maxim Zakharov
Author 2: Alexander Paramonov
Author 3: Ammar Muthanna
Author 4: Ruslan Kirichek

Keywords: Micro spectrometer; molecular analysis; NIR spectroscopy; public communication networks; internet of things; e-health; m-health; edge computing

Download PDF

Paper 8: MNN and LSTM-based Real-time State of Charge Estimation of Lithium-ion Batteries using a Vehicle Driving Simulator

Abstract: Lithium-ion batteries (a type of secondary battery) are now used as a power source in many applications due to their high energy density, low self-discharge rates, and ability to store energy long term. However, overcharging is inevitable due to the frequent charging and discharging of these batteries, and may result in property damage caused by system shutdown, accident, or explosion. Reliable and efficient use therefore requires accurate prediction of the battery state of charge (SOC). In this paper, a method of estimating SOC using a vehicle driving simulator is proposed. After manufacturing the simulator for the battery discharge experiment, voltage, current, and discharge-time data were collected. The collected data were used as input parameters for a multilayer neural network (MNN) and a recurrent-neural-network-based long short-term memory (LSTM) model to predict the SOC of the batteries and compare errors. In addition, discharge experiments and SOC estimation were performed in real time using the developed MNN and LSTM surrogate models.
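
As a rough illustration of the second model type, the sketch below trains a small Keras LSTM regressor that maps windows of voltage/current/time measurements to an SOC value. The layer sizes, window length, and placeholder data are assumptions for illustration, not the paper's configuration.

    import numpy as np
    import tensorflow as tf

    TIMESTEPS, FEATURES = 30, 3      # windows of (voltage, current, elapsed time)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(TIMESTEPS, FEATURES)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(32, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid'),  # SOC as a fraction in [0, 1]
    ])
    model.compile(optimizer='adam', loss='mse')

    x = np.random.rand(256, TIMESTEPS, FEATURES).astype('float32')  # placeholder windows
    y = np.random.rand(256).astype('float32')                       # placeholder SOC labels
    model.fit(x, y, epochs=3, batch_size=32, verbose=0)
    print(model.predict(x[:1], verbose=0))   # real-time estimate for the newest window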

Author 1: Si Jin Kim
Author 2: Jong Hyun Lee
Author 3: Dong Hun Wang
Author 4: In Soo Lee

Keywords: Lithium-ion battery; state of charge; multilayer neural network; long short-term memory; vehicle driving simulator; real time

Download PDF

Paper 9: Learning Optimum Number of Bases for Indian Languages in Non-negative Matrix Factorization based Multilingual Speech Separation

Abstract: Non-negative matrix factorization (NMF)-based audio source separation of a target source has shown significant performance improvement when the spectral bases obtained after factorization capture latent structures in a mixed audio signal comprising multiple speaker sources. If all the sources are known, the spectral bases may be inferred beforehand through a training process on a database of isolated sources. The number of bases inferred for a source should not include bases matching the spectral patterns of the interfering sources in the audio mixture; otherwise, the estimated target source after separation will incorporate undesirable spectral patterns. It is difficult to distinguish and separate similar audio sources in overlapped speech, making this a complex speech processing task. Therefore, this research attempts to learn an optimum number of bases for Indian languages, leading to successful separation of the target source in multilingual, multiple-speaker speech mixtures using non-negative matrix factorization. The languages used for the utterances are Hindi, Marathi, Gujarati, and Bengali. The speaker combinations used are female-female, male-male, and female-male. The optimum number of bases, determined by evaluating the improvement in separation performance, was found to be 40 for all the languages considered.
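
The sketch below shows the general shape of supervised NMF separation: per-speaker bases are learned from isolated speech, activations are then solved on the mixture with the concatenated dictionary held fixed, and a soft mask recovers each source. Only the 40-basis choice comes from the abstract; the spectrogram handling, iteration counts, and masking step are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import NMF

    K = 40  # bases per speaker, the optimum reported in the abstract

    def learn_bases(mag_spec):
        """Learn K bases from one speaker's isolated magnitude spectrogram
        (shape: freq_bins x frames)."""
        model = NMF(n_components=K, max_iter=400, random_state=0)
        model.fit(mag_spec.T)
        return model.components_.T           # (freq_bins, K)

    def activations(V, W, n_iter=200, eps=1e-10):
        """Multiplicative updates for H with the dictionary W fixed: V ~ W @ H."""
        H = np.random.rand(W.shape[1], V.shape[1])
        for _ in range(n_iter):
            H *= (W.T @ V) / (W.T @ W @ H + eps)
        return H

    def separate(mix_mag, W1, W2):
        """Soft-mask the mixture using each speaker's reconstruction."""
        W = np.hstack([W1, W2])
        H = activations(mix_mag, W)
        S1, S2 = W1 @ H[:K], W2 @ H[K:]
        mask1 = S1 / (S1 + S2 + 1e-10)       # Wiener-style ratio mask
        return mask1 * mix_mag, (1 - mask1) * mix_mag

    # usage with placeholder magnitude spectrograms (freq_bins x frames)
    W1 = learn_bases(np.abs(np.random.randn(257, 300)))
    W2 = learn_bases(np.abs(np.random.randn(257, 300)))
    est1, est2 = separate(np.abs(np.random.randn(257, 120)), W1, W2)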

Author 1: Nandini C Nag
Author 2: Milind S Shah

Keywords: Indian languages; optimum number of bases; non-negative matrix factorization; speech separation

Download PDF

Paper 10: Determining Local Hematology Reference Ranges: A Data-driven Approach

Abstract: Hematology is the study of blood, blood-forming organs, and blood diseases. Hematological tests such as the Full Blood Count (FBC) can be used to diagnose a wide range of infections and diseases by comparing their results with the standard hematology reference (SHR) ranges. These ranges were established many years ago based on the Caucasian population, and all countries have used them until recent times to assess people's health. However, these reference ranges can vary with dietary habits, geographical location, climate, environmental factors, and so on, so their universal use may not be correct. Many researchers have therefore started investigating Local Hematology Reference (LHR) ranges. Most of them used statistical analyses, which have their limitations; machine learning offers a way to overcome those limitations. Finding an approach to determine LHR ranges based on machine learning techniques is the goal of this research. The dataset was generated from FBC test reports in Sri Lanka. Only the LHR range of the WBC count of healthy adults in Sri Lanka is addressed in this research. A difference between the SHR range of WBC and the LHR range of WBC is observed.

Author 1: K. A. Hasara Semini
Author 2: H.A.Caldera

Keywords: Hematology science; standard hematology reference range; domestic hematology reference ranges; local hematology reference range; machine learning; white blood cell count

Download PDF

Paper 11: A WSM-based Comparative Study of Vision Tracking Methodologies

Abstract: Vision tracking is a key component of video sequence analysis. It is the process of locating single or multiple moving objects over time using one or many cameras, and its function consists of detecting, categorizing, and tracking objects. The development of trustworthy solutions for video sequence analysis opens up new horizons for a variety of applications, including intelligent transportation systems, biomedicine, agriculture, human-machine interaction, augmented reality, video surveillance, robotics, and many other crucial research areas. To build efficient models, there are challenges in video observation to deal with, such as environmental problems, light variation, pose variation, motion blur, clutter, occlusion, and so on. In this paper, we present several techniques that address the issues of detecting and tracking multiple targets in video sequences. The proposed comparative study covers different methodologies. This paper's purpose is to list various approaches, classify them, and compare them using the Weighted Scoring Model (WSM) comparison method. This includes studying these algorithms, selecting relevant comparison criteria, assigning weights to each criterion, and lastly computing scores. The results of this study reveal the strong and weak points of each algorithm mentioned and discussed.
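
Since WSM itself is a small calculation, a sketch may help: each method's score is the weighted sum of its per-criterion ratings. The criteria, weights, and ratings below are invented placeholders, not values from the paper.

    criteria_weights = {'accuracy': 0.4, 'speed': 0.3, 'robustness': 0.3}  # sums to 1

    # ratings of each tracking method per criterion, e.g. on a 1-5 scale
    ratings = {
        'method_A': {'accuracy': 5, 'speed': 2, 'robustness': 4},
        'method_B': {'accuracy': 3, 'speed': 5, 'robustness': 3},
    }

    def wsm_score(method_ratings, weights):
        """Weighted sum of the ratings: score = sum_i w_i * r_i."""
        return sum(weights[c] * r for c, r in method_ratings.items())

    scores = {m: wsm_score(r, criteria_weights) for m, r in ratings.items()}
    print(max(scores, key=scores.get), scores)   # best-scoring method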

Author 1: Sara Bouraya
Author 2: Abdessamad Belangour

Keywords: Multiple object tracking; object detection; WSM method; computer vision; video analysis

Download PDF

Paper 12: Mobile Malware Classification for iOS Inspired by Phylogenetics

Abstract: Cyber-attacks such as ransomware, data breaches, and phishing triggered by malware are increasing, including on iOS (iPhone operating system) platforms. Yet little work on malware detection has been done for the iOS platform compared to the Android platform. Hence, this paper presents an iOS malware classification inspired by phylogenetics, consisting of mobile behaviour, exploit, and surveillance features. The new iOS classification helps to identify, detect, and predict new malware variants. The experiment was conducted using hybrid analysis with twelve (12) malware datasets from the Contagio Mobile website. As a result, twenty-nine (29) new classifications were developed. One hundred (100) anonymous mobile applications (50 from the Apple Store and 50 from iOS Ninja) were used for evaluation. Based on the evaluation conducted, 13% of the mobile applications matched the developed classifications. In the future, this work can serve as guidance for other researchers with the same interest.

Author 1: Muhammad Afif Husainiamer
Author 2: Madihah Mohd Saudi
Author 3: Azuan Ahmad
Author 4: Amirul Syauqi Mohamad Syafiq

Keywords: iOS; mobile malware; reverse engineering; exploitation; phylogenetic

Download PDF

Paper 13: ANNMDD: Strength of Artificial Neural Network Types for Medical Diagnosis Domain

Abstract: The abundance of medical evidence in health institutions necessitates the creation of effective data collection methods for extracting valuable information. For several years, scholars have focused on the use of computational and data processing techniques to enhance the study of broad historical datasets. There is a deficiency in investigating the collected data on health diseases such as COVID-19, chronic kidney disease, epileptic seizure, Parkinson's disease, heart disease, hepatitis, breast cancer, and diabetes, which kill millions of people in the world. This research aims to investigate neural network algorithms for different types of medical diseases in order to select the best type of neural network for each disease. A data mining process was applied to investigate the mentioned medical disease datasets. Related works and a literature review of machine learning in the medical domain were studied in the initial stage of this research. Then, experiments were designed with several neural network algorithm styles: Multiple, Radial Basis Function Network (RBFN), Dynamic, Quick, and Prune algorithms. The extracted results for each algorithm were analyzed and compared to select the best neural network algorithm for each disease. A t-test for statistical significance was applied as one of the investigation strategies for optimal NN selection. Our findings highlight the strength of the Multiple NN algorithm in both the training and testing phases in the medical domain.
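
As a minimal illustration of the significance-testing step, the snippet below compares two models' per-fold accuracies with a paired t-test; the accuracy values are placeholders, not results from the paper.

    from scipy import stats

    acc_multiple = [0.91, 0.89, 0.93, 0.90, 0.92]   # placeholder fold accuracies
    acc_rbfn     = [0.85, 0.88, 0.86, 0.87, 0.84]

    t_stat, p_value = stats.ttest_rel(acc_multiple, acc_rbfn)
    if p_value < 0.05:
        print(f"difference is significant (t={t_stat:.2f}, p={p_value:.3f})")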

Author 1: Ahmed Hamza Osman

Keywords: Medical data; neural network algorithm; multiple; radial based function network; dynamic; quick; prune; accuracy

Download PDF

Paper 14: Optical Character Recognition Engines Performance Comparison in Information Extraction

Abstract: Named Entity Recognition (NER) is often used to acquire important information from text documents as part of the Information Extraction (IE) process. However, the quality of the text documents affects the accuracy of the data obtained, especially for documents acquired through Optical Character Recognition (OCR), which never reaches 100% accuracy. This research examines which OCR engine offers the highest performance for IE using NER by comparing three OCR engines (Foxit, PDF2GO, Tesseract) over 8,562 government human resources documents across six document categories, two document structures, and four measurements. Several essential entities, such as name, employee ID, document number, document publishing date, employee rank, and family members' names, were extracted automatically from the documents. The NER processes were implemented in the Python programming language, and the preprocessing tasks were done separately for Foxit, PDF2GO, and Tesseract. In summary, each OCR engine has its drawbacks and benefits; for example, Tesseract has better NER extraction, conversion time, and accuracy, but lags in the number of entities acquired.
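
A rough sketch of such an OCR-then-NER pipeline in Python is shown below; the Tesseract wrapper and the small English spaCy model are illustrative choices (the paper's documents would need an Indonesian model), and the file name and entity-type mapping are assumptions.

    import pytesseract                      # wrapper around the Tesseract engine
    from PIL import Image
    import spacy

    nlp = spacy.load("en_core_web_sm")      # stand-in; a matching language model
                                            # would be used for Indonesian documents

    def extract_entities(image_path):
        text = pytesseract.image_to_string(Image.open(image_path))
        doc = nlp(text)
        # keep entity types that map to the target fields (name, date, id, ...)
        return [(ent.text, ent.label_) for ent in doc.ents
                if ent.label_ in {"PERSON", "DATE", "CARDINAL"}]

    print(extract_entities("decree_page1.png"))   # hypothetical scanned page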

Author 1: Tosan Wiar Ramdhani
Author 2: Indra Budi
Author 3: Betty Purwandari

Keywords: Named entity recognition; information extraction; optical character recognition; government human resources documents

Download PDF

Paper 15: Higher Order Statistics and Phase Synchronization as Features in a Motor Imagery Paradigm

Abstract: The paper proposes an approach based on higher-order statistics and phase synchronization for the detection and classification of relevant features in electroencephalographic (EEG) signals recorded while subjects were performing motor tasks. The method was tested on two different datasets, and the performance was evaluated using a k-nearest-neighbor classifier. The results (classification rates higher than 90%) show that the method can be used to discriminate right and left motor imagery tasks in offline EEG analysis for a brain-computer interface system.
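
An illustrative sketch of the feature/classifier pairing follows: higher-order statistics (skewness and kurtosis) per EEG channel, classified with k-NN. The channel count, trial data, and k value are placeholder assumptions.

    import numpy as np
    from scipy.stats import skew, kurtosis
    from sklearn.neighbors import KNeighborsClassifier

    def hos_features(trial):
        """trial: (channels, samples) -> per-channel skewness and kurtosis."""
        return np.concatenate([skew(trial, axis=1), kurtosis(trial, axis=1)])

    # trials: list of (channels, samples) arrays; labels: 0 = left, 1 = right
    trials = [np.random.randn(8, 512) for _ in range(40)]   # placeholder EEG
    labels = np.random.randint(0, 2, size=40)

    X = np.array([hos_features(t) for t in trials])
    clf = KNeighborsClassifier(n_neighbors=5).fit(X[:30], labels[:30])
    print("accuracy:", clf.score(X[30:], labels[30:]))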

Author 1: Oana-Diana Hrisca-Eva
Author 2: Madalina-Giorgiana Murariu
Author 3: Anca Mihela Lazar

Keywords: Brain computer interface; motor imagery; higher order statistics; phase synchronization; EEG

Download PDF

Paper 16: Enhanced Framework for Big Data Requirement Elicitation

Abstract: Requirement engineering is one of the software development life cycle phases; it has been recognized as an important phase for collecting and analyzing a system's goals. Despite its importance, however, requirement engineering has several limitations, such as incomplete requirements, vague requirements, lack of prioritization, and low user involvement, all of which affect requirement quality. With the emergence of big data technology, the complexity of big data, characterized by large data volume, high velocity, and large data variety, has gradually increased, affecting the quality of big data software requirements. This study proposes a framework with four sequential phases to improve requirement engineering quality in big data software development. The framework combines traditional requirement elicitation techniques with agile methodology and mind mapping so that user requirements are collected as a complete vision; the collected requirements are then displayed graphically using mind maps to achieve high requirement accuracy with connectivity and modifiability, enabling accurate prioritization of the requirements implemented using the agile SCRUM methodology. The proposed framework improves requirement quality in big data software development, represented by accuracy, completeness, connectivity, and modifiability, to capture the value of the collected requirements and positively affect the quality of the implementation phase.

Author 1: Aya Hesham
Author 2: Osama E. Emam
Author 3: Marwa Salah

Keywords: Requirement engineering; big data requirement; agile methodology; mind mapping

Download PDF

Paper 17: Analyzing Predictive Algorithms in Data Mining for Cardiovascular Disease using WEKA Tool

Abstract: Cardiovascular Disease (CVD) is the foremost cause of death worldwide and generates a high volume of Electronic Health Records (EHRs). Analyzing the complex patterns in these EHRs is a tedious process. To address this problem, medical institutions require effective predictive algorithms for patient prognosis and diagnosis. In this work, the current state of the art was studied to identify leading predictive algorithms. These algorithms, namely Support Vector Machine (SVM), Naïve Bayes (NB), Decision Tree (DT), Random Forest (RF), Artificial Neural Network (ANN), Logistic Regression (LR), AdaBoost, and k-Nearest Neighbors (k-NN), were then analyzed against two datasets using the open-source WEKA software. This work used two similarly structured datasets, the Statlog dataset and the Cleveland dataset. For pre-processing, missing values were replaced with the mean value, and 10-fold cross-validation was used for evaluation. The performance analysis showed that SVM outperforms the other algorithms on both datasets, with an accuracy of 84.156% on the Cleveland dataset and 84.074% on the Statlog dataset. LR showed a ROC area of 0.9 on both datasets. The findings of this work will help health institutions understand the importance and usage of predictive algorithms for the automatic prediction of CVD based on symptoms.
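
The same protocol (mean imputation plus 10-fold cross-validation) can be expressed in a few lines of scikit-learn, sketched below with a stand-in dataset, since the Cleveland/Statlog loading step is not part of the abstract.

    from sklearn.datasets import load_breast_cancer   # stand-in for Cleveland/Statlog
    from sklearn.impute import SimpleImputer
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression

    X, y = load_breast_cancer(return_X_y=True)
    models = {"SVM": SVC(), "LR": LogisticRegression(max_iter=1000)}
    for name, clf in models.items():
        # mean imputation, scaling, then the classifier, scored with 10-fold CV
        pipe = make_pipeline(SimpleImputer(strategy="mean"), StandardScaler(), clf)
        print(name, cross_val_score(pipe, X, y, cv=10).mean())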

Author 1: Aman
Author 2: Rajender Singh Chhillar

Keywords: Logistic regression (LR); support vector machine (SVM); Statlog; Cleveland; WEKA

Download PDF

Paper 18: Enhanced Clustering-based MOOC Recommendations using LinkedIn Profiles (MR-LI)

Abstract: With the rapid development of massive open online courses (MOOCs), learners' interest in MOOCs has increased significantly. MOOC platforms offer thousands of varied courses with many options, which makes it difficult for learners to choose courses that suit their needs and are compatible with their interests, as they are exposed to many courses on all topics. There is therefore an urgent need for personalized recommendation systems that assist learners in filtering courses according to their interests. In this research, we target learners on the professional platform LinkedIn as the basis for user modeling; 5,039 profiles were extracted. Skill-based clustering algorithms were then applied to the LinkedIn users. Subsequently, we measured the similarity between the feature vectors of the resulting clusters and the extracted course vectors. In the experiment, four clusters were provided with top-N course recommendations. Ultimately, the proposed approach was evaluated, achieving an F1-score of 0.81.
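
A hedged sketch of that pipeline follows: TF-IDF over skill strings, k-means clusters, then cosine similarity between cluster centroids and course vectors for top-N recommendations. Only the four-cluster count comes from the abstract; the data and parameters are invented placeholders.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans
    from sklearn.metrics.pairwise import cosine_similarity

    profiles = ["python machine learning sql",        # skill strings per user
                "seo branding market research",
                "java cloud devops kubernetes",
                "nursing public health statistics"]
    courses = ["intro to machine learning", "digital marketing essentials",
               "cloud infrastructure with kubernetes", "health data analysis"]

    vec = TfidfVectorizer()
    P = vec.fit_transform(profiles)
    C = vec.transform(courses)

    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(P)  # four clusters
    sims = cosine_similarity(km.cluster_centers_, C)             # clusters x courses
    top_n = sims.argsort(axis=1)[:, ::-1][:, :2]                 # top-2 courses per cluster
    print(top_n)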

Author 1: Fatimah Alruwaili
Author 2: Dimah Alahmadi

Keywords: MOOCs; recommendation systems; content-based; clustering; term frequency-inverse document frequency (TF-IDF); LinkedIn

Download PDF

Paper 19: Constructing IoT Botnets Attack Pattern for Host-based and Network-based Platform

Abstract: The Internet of Things (IoT) consists of things or devices with software and intelligent sensors, interconnected via the Internet to send and receive data with other devices. This capacity allows things such as smartphones, smart homes, intelligent toys, baby monitors, and IP cameras to act as intelligent devices and be widely utilized in everyday life. IoT has enormous expansion potential, and many challenges have been acknowledged but are still open today. A botnet is a collection of bots running on IoT devices, used to launch extensive network attacks. Rapid growth in technology has led to an incomplete understanding of IoT, and the increasing number of IoT devices has led to the spread of malware targeting them, making IoT botnet behavior challenging to identify and characterize. To detect these IoT botnets, a preliminary experiment on flow analysis is necessary. This paper identifies IoT botnet attack patterns from the behavior observed during IoT botnet activities, in both host-based and network-based environments. First, it contributes to discovering, recognizing, categorizing, and detecting IoT botnet activities. Next, it organizes this information to give a better understanding of the IoT botnet problem and potential solutions. Then, it constructs IoT botnet attack patterns by analyzing the characteristics of IoT botnet behavior; these attack patterns are divided into two environments, host-based and network-based. As a result, this paper aims to inform people about the attack patterns that appear when an IoT device has been infected and has become part of a botnet.

Author 1: Wan Nur Fatihah Wan Mohd Zaki
Author 2: Raihana Syahirah Abdullah
Author 3: Warusia Yassin
Author 4: Faizal M.A
Author 5: Muhammad Safwan Rosli

Keywords: IoT; botnet; IoT botnet; host-based; network-based

Download PDF

Paper 20: A Classification of Essential Factors for the Development and Implementation of Cyber Security Strategy in Public Sector Organizations

Abstract: To ensure quality security that safeguards business objectives, implementing and maintaining an effective Cyber Security Strategy (CSS) is crucial. Inevitably, we need to recognize and evaluate the essential factors, such as technological, cultural, regulatory, and economic ones, that may hinder the efficacy of CSS development and implementation. From the literature review, it is evident that such factors are either stated abstractly or assessed only from a singular viewpoint, and they are scattered across the literature. Moreover, there is a lack of holistic studies that could help us comprehend the critical factors affecting a CSS. In this paper, we present a systematic classification, a distinct, structured, and comprehensive list of key factors covering multiple aspects of an organization's CSS, including organizational, cultural, economic, legal and political, and security aspects, to provide a more complete view for understanding the essentials and analyzing the aptitude of a planned or given CSS. The proposed classification is further evaluated, and the critical factors verified, through semi-structured interviews with security experts in different public sector organizations. Furthermore, we present a comparison of our work with recent attempts, which shows that a significantly more complete set of essential factors has been identified holistically in this study.

Author 1: Waqas Aman
Author 2: Jihan Al Shukaili

Keywords: Cyber security strategy; critical factors; risk treatment; culture; threats

Download PDF

Paper 21: Automatic Speech Recognition Features Extraction Techniques: A Multi-criteria Comparison

Abstract: Feature extraction is an important step in Automatic Speech Recognition; it consists of determining the audio signal components that are useful for identifying linguistic content while removing background noise and irrelevant information. The main objective of feature extraction is to identify discriminative and robust features in the acoustic data. The derived feature vector should possess the characteristics of low dimensionality, long-term stability, insensitivity to noise, and no correlation with other features, which makes the application of a robust feature extraction technique a significant challenge for Automatic Speech Recognition. Many comparative studies have compared different speech recognition feature extraction techniques, but none of them have evaluated the criteria to be considered when applying a feature extraction technique. The objective of this work is to answer some of the questions that may arise when deciding which feature extraction technique to apply, through a multi-criteria comparison of different feature extraction techniques using the Weighted Scoring Method.
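
As a short illustration of one technique compared in the paper, the snippet below extracts MFCC features with librosa from a synthetic signal; the sample rate and coefficient count are assumed parameters.

    import numpy as np
    import librosa

    sr = 16000
    signal = np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)  # 1 s tone
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    print(mfcc.shape)    # (13 coefficients, n_frames)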

Author 1: Maria Labied
Author 2: Abdessamad Belangour

Keywords: Automatic speech recognition; feature extraction; comparative study; MFCC; PCA; LPC; DWT; WSM

Download PDF

Paper 22: An Approach for Requirements Engineering Analysis using Conceptual Mapping in Healthcare Domain

Abstract: Healthcare systems aim to achieve the best possible support for patient care and to provide good medical care. Good analysis of requirements is essential to avoid crises, and eliciting healthcare system requirements is an emerging and critical phase. It is challenging to deal with constraints from stakeholders and the restrictions of legal issues. In this research, an approach called "Conceptual Mapping for non-functional Healthcare Requirements" (CMHR) is proposed to analyze and evaluate the relationships between the clinical non-functional requirements of medical devices (ventilators, as an example in this research) according to five attributes: prioritization of requirements, suitability, feasibility, achievability, and risk. Requirements are automatically clustered using the K-means++ algorithm, with the optimal number of clusters determined, and the clusters are then visualized as a concept map. Clustering is applied to different combinations of the attributes to sort the requirements and visualize them. Label names are assigned to the classes of requirements so that each requirement belongs to an appropriate class; consequently, the class of a new requirement can be predicted automatically. The approach achieved less rework, fast delivery of the project with good quality, and a higher level of user satisfaction.
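
One common way to pick the cluster count for k-means++, sketched below, is to scan k and keep the best silhouette score. The attribute matrix here is random placeholder data, and silhouette is our assumption, since the abstract does not name the selection criterion.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    # rows = requirements, cols = (priority, suitability, feasibility,
    # achievability, risk) ratings -- placeholder values
    X = np.random.rand(60, 5)

    best_k, best_s = None, -1.0
    for k in range(2, 10):
        labels = KMeans(n_clusters=k, init='k-means++', n_init=10,
                        random_state=0).fit_predict(X)
        s = silhouette_score(X, labels)
        if s > best_s:
            best_k, best_s = k, s
    print("optimal k:", best_k)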

Author 1: Aya Radwan
Author 2: A. Abdo
Author 3: Sayed Abdel Gaber

Keywords: Conceptual mapping; healthcare systems; clustering; requirement engineering analysis

Download PDF

Paper 23: Improved Rough-fuzzy C-means Clustering and Optimum Fuzzy Interference System for MRI Brain Image Segmentation

Abstract: The categorization of brain tissues plays a vital role in various neuro-anatomical identification tasks and applications. In manual detection, the location of unwanted tissues may be misidentified due to human visual fatigue; manual detection also consumes more time and may exhibit enormous variability within and between operators. Automatic identification of brain tissues in MRI is therefore vital for research and treatment applications. This work proposes MRI tissue segmentation using an Improved Rough-Fuzzy C-Means (IRFCM) algorithm and classification using multiple fuzzy systems. The proposed work comprises four modules: pre-processing, segmentation, feature extraction, and categorization. Initially, noise in the given image is eliminated through pre-processing. After pre-processing, the brain image is segmented into tissues based on a clustering concept using the Improved Rough-Fuzzy C-Means algorithm. Gray-Level Co-Occurrence Matrix (GLCM) features are then extracted from the segmented images and applied to the Optimum Fuzzy Interference System (OFIS), and the entire system's parameters are optimized using the Enhanced Grasshopper Optimization Algorithm (EGOA). Finally, the novel OFIS classifier classifies the brain tissue images as Gray Matter (GM), White Matter (WM), Cerebrospinal Fluid (CSF), or Tumor Tissue (TT). Results on MRI datasets are analyzed and compared with other existing techniques through performance metrics to show the superiority of the proposed methodology.
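
The GLCM step is standard enough to sketch: below, scikit-image (0.19+ naming) computes a co-occurrence matrix for a placeholder patch and derives a few texture properties. The distance, angle, and property choices are assumptions, not the paper's settings.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    region = (np.random.rand(64, 64) * 255).astype(np.uint8)   # placeholder tissue patch
    glcm = graycomatrix(region, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    features = [graycoprops(glcm, p)[0, 0]
                for p in ("contrast", "homogeneity", "energy", "correlation")]
    print(features)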

Author 1: D. Maruthi Kumar
Author 2: D. Satyanarayana
Author 3: M. N. Giri Prasad

Keywords: Cerebrospinal fluid; fuzzy interference system; enhanced grasshopper optimization algorithm; improved rough fuzzy c-means clustering

Download PDF

Paper 24: Empirical Validation of WebQMDW Model for Quality-based External Web Data Source Incorporation in a Data Warehouse

Abstract: In recent years, the World Wide Web has emerged as the most promising external data source for organizations' Data Warehouses, offering the valuable insights required for the comprehensive decision making that yields a competitive edge. However, when a Data Warehouse uses external data sources from the Web without quality evaluation, its own quality can be adversely impacted. Quality models have been proposed in the research literature to evaluate and select Web data sources for integration into a Data Warehouse, but these models have only been proposed conceptually and not empirically validated. Therefore, in this paper, the authors present an empirical validation conducted on a set of 57 subjects to thoroughly validate the set of 22 quality factors and the initial structure of the multi-level, multi-dimensional WebQMDW quality model. The validated and restructured WebQMDW model thus obtained can significantly enhance decision-making in the DW through the selection of high-quality Web data sources.

Author 1: Priyanka Bhutani
Author 2: Anju Saha
Author 3: Anjana Gosain

Keywords: Data warehouse; external data sources; web data sources; quality evaluation model; quality model validation

Download PDF

Paper 25: Correlating Discriminative Quality Factors (CDQF) for Optimal Resource Scheduling in Cloud Networks

Abstract: Correlating Discriminative Quality Factors (CDQF) for optimal resource scheduling in cloud networks is addressed in this manuscript. Resources under a cloud platform are loosely coupled according to the SLA between the cloud platform and the resource partakers, which makes it possible for multiple resources from diversified partakers to be intended for similar services. Resource scheduling aims to select one resource among those available to accomplish the scheduled task(s). Contemporary contributions to resource scheduling are specific to traditional QoS factors, including cost, deadline constraints, and power consumption. However, the quality of service is often influenced by the contextual factors of the IAAS. Hence, this manuscript presents a novel resource scheduling strategy that orders resources by a proposed heuristic measure. Unlike traditional resource scheduling methods, it defines a set of context-related factors that are used to compute this measure, called the "Degree of Optimality." An experimental study in a simulated environment shows the proposal's performance advantage over other existing methods.

Author 1: B. Ravindra Babu
Author 2: A.Govardhan

Keywords: Resource management (RM); resource scheduling (RS); resource provisioning (RP); QoS; infrastructure-as-a-service (IAAS)

Download PDF

Paper 26: Development of Novel Algorithm for Data Hiding on Mobile Application

Abstract: In the current era of communication, computers, mobile phones, and the Internet are widely used mediums, and in such an environment information security is an important issue. The most common techniques used for information security are cryptography and steganography. Much research has been done on image steganography, with images in formats such as BMP, GIF, and PNG. Several compression methods are available, but compression applied to BMP, GIF, and PNG images is comparatively less effective than JPEG compression. JPEG-based steganography is complex to implement, yet some algorithms have been developed for it; they provide single-level or two-level information security. This paper presents a method combining cryptography and steganography on Android mobile phones. Encryption is performed twice on the data to be hidden, and the encrypted data is then hidden in both text and image. We use the Advanced Encryption Standard (AES) algorithm to hide text within text, the Discrete Cosine Transform (DCT) for image steganography, and the Data Encryption Standard (DES) for text encryption. This paper thus presents an approach to three-level information security in the Android environment.

Author 1: Roopesh Kumar
Author 2: Ajay Kumar Yadav

Keywords: Steganography encryption; discrete cosine transform; DES; F5Algorithm

Download PDF

Paper 27: Fuzzy and Genetic Algorithm-based Decision-making Approach for Collaborative Team Formation: A Study on User Acceptance using UTAUT

Abstract: Forming an optimal collaborative team relies on members' characteristics to improve team efficiency; a team's performance may suffer when the team is formed randomly. Moreover, it is practically impossible to form an optimal team manually, as the possible formations expand into countless combinations. Hence, this paper presents a decision-making framework for collaborative team formation that incorporates fuzzy logic and a genetic algorithm (Fuzzy-GA). The framework combines effective team formation factors such as skills, trust, leadership, and individual performance. The Unified Theory of Acceptance and Use of Technology (UTAUT) is utilised to survey the readiness and technology acceptance of organisations' employees in adopting the proposed decision-making approach to form collaborative teams. The UTAUT survey showed that behavioural intention (BI) had a positive relationship with performance expectancy (PE), effort expectancy (EE), social influence (SI), and facilitating conditions (FC). However, behavioural intention (BI) had a negative relationship with voluntariness of use (VU); thus, the transformation of collaborative team formation must be explored further to increase teams' voluntarism towards automated collaborative team formation.
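
For the GA side of Fuzzy-GA, a toy sketch follows: candidate teams are chromosomes of member indices, scored by an invented stand-in for the fuzzy aggregation of skill, trust, leadership, and performance, and evolved with selection, crossover, and mutation. All weights, sizes, and the scoring function are placeholders, not the paper's design.

    import random

    random.seed(0)
    POOL = [{'skill': random.random(), 'trust': random.random(),
             'lead': random.random(), 'perf': random.random()} for _ in range(50)]
    TEAM_SIZE, POP_SIZE, GENERATIONS = 5, 30, 100

    def fitness(team):
        # crude stand-in for the fuzzy aggregation of the four member factors
        base = sum(POOL[i]['skill'] + POOL[i]['trust'] + POOL[i]['perf'] for i in team)
        return base + max(POOL[i]['lead'] for i in team)   # one strong leader suffices

    def crossover(a, b):
        genes = list(set(a) | set(b))                      # union of both parents
        return random.sample(genes, TEAM_SIZE)

    def mutate(team, rate=0.2):
        if random.random() < rate:                         # swap in a random member
            out = set(team)
            out.discard(random.choice(team))
            out.add(random.choice([i for i in range(len(POOL)) if i not in out]))
            return list(out)
        return team

    population = [random.sample(range(len(POOL)), TEAM_SIZE) for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        elite = population[:POP_SIZE // 2]                 # simple truncation selection
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(POP_SIZE - len(elite))]
        population = elite + children
    print(max(population, key=fitness))                    # best team found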

Author 1: Azleena Mohd Kassim
Author 2: Norhanizah Minin
Author 3: Yu-N Cheah
Author 4: Fazilah Othman

Keywords: Collaborative team formation; genetic algorithm; fuzzy logic; unified theory of acceptance and use of technology (UTAUT)

Download PDF

Paper 28: Secured SECS/GEM: A Security Mechanism for M2M Communication in Industry 4.0 Ecosystem

Abstract: The manufacturing industry has been revolutionized by Industry 4.0, which has vastly improved the manufacturing process and increased production quality and capacity. Machine-to-Machine (M2M) communication protocols were developed to strengthen and bind this ecosystem by allowing machines to communicate with each other. The SECS/GEM protocol is at the heart of the manufacturing industry, having thrived as a communication protocol and control system for years; it is a manufacturing equipment protocol used for equipment-host data communications. However, despite being widely adopted by leading industries, it is not without drawbacks: SECS/GEM does not offer any security features, as it was designed to work in a closed network. These shortcomings allow attackers to steal secrets, such as manufacturing processes, by looking at recipes, and to perform reconnaissance prior to sabotage attempts, with potentially severe implications for the entire industry. This paper proposes a mechanism to secure SECS/GEM data messages with AES-GCM encryption and evaluates its performance against the standard SECS/GEM protocol. The results of our evaluations show that the proposed mechanism achieves data confidentiality and authenticity with a negligible overhead of 0.8 milliseconds when sending and 0.37 milliseconds when receiving a message, compared to the standard protocol.
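
The sketch below shows AES-GCM protection of a message payload using the Python 'cryptography' package; SECS/GEM framing and key exchange are out of scope here, and the key handling and sample payload are purely illustrative.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)    # shared equipment-host key
    aesgcm = AESGCM(key)

    def protect(payload: bytes, header: bytes) -> bytes:
        nonce = os.urandom(12)                   # must be unique per message
        return nonce + aesgcm.encrypt(nonce, payload, header)

    def unprotect(blob: bytes, header: bytes) -> bytes:
        nonce, ct = blob[:12], blob[12:]
        return aesgcm.decrypt(nonce, ct, header)  # raises on any tampering

    msg = protect(b"S1F13 establish communications", b"session-1")
    print(unprotect(msg, b"session-1"))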

Author 1: Ashish Jaisan
Author 2: Selvakumar Manickam
Author 3: Shams A. Laghari
Author 4: Shafiq Ul Rehman
Author 5: Shankar Karuppayah

Keywords: SECS/GEM; HSMS; cybersecurity; industry-4.0; machine-to-machine communication; AES-GCM

Download PDF

Paper 29: Evaluating and Comparing the Usability of Privacy in WhatsApp, Twitter, and Snapchat

Abstract: With the increased use of social networking platforms, especially with the inclusion of sensitive personal information, it has become important for those platforms to have adequate levels of security and privacy. This research aimed to evaluate the usability of privacy in the WhatsApp, Twitter, and Snapchat applications. The evaluation was conducted based on the structured analysis of privacy (STRAP) framework. Seven expert evaluators performed heuristic evaluations and applied the 11 STRAP heuristics to the privacy policy statements and settings provided in the WhatsApp, Twitter, and Snapchat applications. This study provides useful information for designers and developers of social media applications as well as users of the apps. In particular, the results indicate that Snapchat had the highest severity rating, followed by Twitter and WhatsApp. Moreover, the most notable severity rating for all the apps was regarding the ability to revoke consent, where all of the apps had a very high number of usability problems.

Author 1: Abdulmohsen S. Albesher
Author 2: Thamer Alhussain

Keywords: HCI; usability; heuristics evaluation; STRAP; privacy; usable privacy; social media

Download PDF

Paper 30: Genetic Algorithm and Ensemble Learning Aided Text Classification using Support Vector Machines

Abstract: Text classification is one of the areas where machine learning algorithms are used. The size of the dataset and the method used for converting textual words into vectors play a major role in classification. This paper proposes a heuristic-based approach to classifying documents using genetic-algorithm-aided Support Vector Machines (SVM) and an ensemble learning approach. The real-valued vector representation of the textual data is obtained by applying Term Frequency-Inverse Document Frequency (TF-IDF) and Bi-Normal Separation (BNS). The common misclassification issue in SVM is overcome by introducing two algorithms that add weight to accurate classification. The first algorithm applies BNS and TF-IDF along with ensemble learning and constructs a voting classifier for classifying the textual documents; the results show that TF-IDF produces better results with the voting classifier than BNS, so TF-IDF is applied in the subsequent approach for vector generation. Second, a genetic algorithm is applied along with the OneVsRest strategy in SVM to overcome the drawbacks of multiclass, multilabel classification. The results show that the genetic algorithm improves classification accuracy even with a very small labelled dataset, as it applies mutation and crossover across many generations to learn the pattern of correct classification.
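
A hedged sketch of the first approach follows: TF-IDF vectors fed to a hard-voting ensemble that includes an SVM. The member models beyond the SVM and the tiny training set are assumptions for illustration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import SVC
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import VotingClassifier
    from sklearn.pipeline import make_pipeline

    docs = ["stocks fell sharply", "the team won the match",
            "parliament passed the bill", "striker scores twice"]
    labels = ["finance", "sport", "politics", "sport"]

    clf = make_pipeline(
        TfidfVectorizer(),
        VotingClassifier([
            ("svm", SVC()),
            ("nb", MultinomialNB()),
            ("lr", LogisticRegression(max_iter=1000)),
        ], voting="hard"),           # majority vote over the members
    )
    clf.fit(docs, labels)
    print(clf.predict(["the midfielder was injured"]))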

Author 1: Anshumaan Chauhan
Author 2: Ayushi Agarwal
Author 3: Razia Sulthana

Keywords: Genetic algorithm; ensemble learning; support vector machines; text classification

Download PDF

Paper 31: Object Detection Approaches in Images: A Weighted Scoring Model based Comparative Study

Abstract: Computer vision is a branch of artificial intelligence that trains computers to acquire a high-level understanding of images and videos. Some of the most well-known areas in computer vision are object detection, object tracking, and motion estimation, among others. Our focus in this paper is the object detection subarea of computer vision, which aims at recognizing instances of predefined sets of object classes using bounding boxes or object segmentation. Object detection relies on various algorithms belonging to various families that differ in terms of speed and quality of results. Hence, we propose in this paper a comparative study of these algorithms based on a set of criteria. In this comparative study, we start by presenting each of these algorithms, then select a set of comparison criteria and apply a comparative methodology to obtain results. The methodology we chose for this purpose is the WSM (Weighted Scoring Model), which fits our needs exactly: the WSM method allows us to assign a weight to each criterion and calculate a final score for each of the compared algorithms. The obtained results reveal the weaknesses and strengths of each algorithm and open avenues for their future enhancement.

Author 1: Hafsa Ouchra
Author 2: Abdessamad Belangour

Keywords: Computer vision; object detection; images; WSM method; object detection algorithms

Download PDF

Paper 32: On the Training of Deep Neural Networks for Automatic Arabic-Text Diacritization

Abstract: Automatic Arabic diacritization is one of the most important and challenging problems in Arabic natural language processing (NLP). Recurrent neural networks (RNNs) have recently proved to achieve state-of-the-art results for sequence transcription problems in general, and Arabic diacritization in particular. In this work, we investigate the effect of varying the size of the training corpus on the accuracy of diacritization. We produce a cleaned corpus of approximately 550k sequences extracted from the full Tashkeela dataset and use subsets of this corpus in our training experiments. Our base model is a deep bidirectional long short-term memory (BiLSTM) RNN that transcribes undiacritized sequences of Arabic letters into fully diacritized sequences. Our experiments show that error rates improve as the size of the training corpus increases. Our best-performing model achieves average diacritic and word error rates of 1.45% and 3.89%, respectively. Compared with state-of-the-art diacritization systems, we reduce the word error rate by 12% over the best published results.
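
A compact sketch of such a character-level BiLSTM tagger is shown below: an embedding over the undiacritized alphabet, stacked bidirectional LSTMs, and a per-letter softmax over diacritic labels. The vocabulary size, label count, and layer sizes are placeholder assumptions, not the paper's configuration.

    import tensorflow as tf

    VOCAB, DIACRITICS = 50, 16       # assumed alphabet and diacritic-label sizes

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(None,)),            # variable-length letter id sequences
        tf.keras.layers.Embedding(VOCAB, 64, mask_zero=True),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True)),
        tf.keras.layers.Dense(DIACRITICS, activation='softmax'),  # one label per letter
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
    model.summary()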

Author 1: Asma Abdel Karim
Author 2: Gheith Abandah

Keywords: Arabic text; automatic diacritization; bidirectional neural network; long short-term memory; natural language processing; recurrent neural networks; sequence transcription

Download PDF

Paper 33: Unsupervised Clustering of Comments Written in Albanian Language

Abstract: Nowadays, social media and the communications within it have become very important for service providers, playing a key role in service quality improvement as well as in decision making. Consumers' discussions are usually written in their local languages, and extracting important knowledge from them is sometimes very hard and problematic. Natural language processing (NLP) techniques are helpful here, but different languages have their own specifics and difficulties, and some languages, especially local dialects, are not well resourced in NLP techniques and methods. In this paper, we address this problem for the Albanian language as spoken in Kosovo. Using unsupervised clustering techniques, we cluster a dataset of social media comments written in the local Albanian dialect according to the topic of discussion in each comment. Different text feature extraction techniques (vectorization and others) and clustering algorithms (K-means, spectral, agglomerative, etc.) are used with the aim of identifying the most appropriate techniques for Albanian. We present the results of the conducted experiments and discuss what to use for the Albanian language and for similar, under-resourced languages in its group.

Author 1: Mërgim H. HOTI
Author 2: Jaumin AJDARI

Keywords: Unsupervised clustering; k-means; spectral; agglomerative; vectorization; Albanian language

Download PDF

Paper 34: Mitigating Traffic Congestion at Road Junction using Fuzzy Logic

Abstract: The timing of traffic lights at intersections is determined by the local authority. It is based on peak-hour statistics, and the same timing is maintained even during off-peak hours. This single standard green time is used throughout the day regardless of the number of vehicles and the road width, so the green phase stays long even when only a few vehicles are present, causing long waiting times at the intersections. The aim of this study is therefore to vary the timing of traffic lights at junctions according to the number of vehicles. This paper also considers the road width variable, which has not been considered so far. Fuzzy logic rules are used to classify the number of vehicles, the road width, and the time taken for vehicles to move through the intersection, as proposed in a previous work. The new timing is commensurate with the number of vehicles and the road width. Field test data were gathered from the Sala Benda and Semplak intersections, which are among the busiest intersections in Bogor, Indonesia. Comparisons show that the green light timing obtained is appropriate to the two factors considered, and the waiting time for vehicles in each traffic cycle was also reduced. This study formulated optimal green light timing for each intersection, which local authorities can use to determine the timing of green traffic lights and hence implement traffic control with appropriate waiting times.
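
A toy Mamdani-style sketch of fuzzy green-time control from vehicle count and road width is shown below; the membership functions, rules, and output times are invented for illustration, not taken from the paper.

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def green_time(vehicles, width_m):
        few, many = tri(vehicles, -1, 0, 25), tri(vehicles, 10, 40, 70)
        narrow, wide = tri(width_m, 0, 3, 8), tri(width_m, 5, 10, 15)
        # rule strengths (min for AND), each mapped to a crisp green time in seconds
        rules = [
            (min(few, narrow), 15),    # few vehicles, narrow road -> short green
            (min(few, wide), 20),
            (min(many, narrow), 45),
            (min(many, wide), 60),     # many vehicles, wide road -> long green
        ]
        num = sum(w * t for w, t in rules)
        den = sum(w for w, _ in rules) or 1.0
        return num / den               # weighted-average defuzzification

    print(green_time(vehicles=30, width_m=7))   # -> 55.0 seconds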

Author 1: Amir Hamzah Pohan
Author 2: Liza A. Latiff
Author 3: Rudzidatul Akmam Dziyauddin
Author 4: Nur Haliza Abdul Wahab

Keywords: Traffic fuzzy classification; congestion; traffic lights control

Download PDF

Paper 35: Adaptive Demand Response Management System using Polymorphic Bayesian Inference Supported Multilayer Analytics

Abstract: The most significant limitation of stand-alone microgrid systems is the challenge of meeting unexpected additional demand. If demand exceeds the capacity of a stand-alone system, the system may be unable to satisfy it. This issue is alleviated in grid-connected technology, since the utility system provides more power on demand. As a result, load scheduling is an integral element of the demand response of a stand-alone system. There are two components to this problem. If the capacity of a battery-supported power system is restricted, it will not be able to meet the entire demand for the period that the source is available; accordingly, demand must be dispersed over time until the next charge becomes available. In addition, some demands may be disregarded in order to accomplish peak load trimming, or when the system is incapable of meeting demand without compromising other important technical and consumer objectives. This is a challenging task. This article develops an Adaptive Demand Response Management System (ADRMS) capable of load scheduling and load shedding using interwoven multidimensional Bayesian inference supported by multiple mathematical models. A two-stage hardware architecture is developed: the first stage measures demand and source capacity and sends the data to the second stage via LPWAN for mathematical analysis. In the first phase, two approaches are used to forecast demand: the Gaussian Naive Bayes Model (GNBM) and Bayesian Structural Time Series (BSTS) analysis. GNBM is fast but fails to forecast properly when the zero-frequency problem occurs, whereas BSTS offers more precise results than GNBM but is slower; hence the two approaches are employed in tandem. The next stage assigns demand-source integration. This is accomplished using Bayesian Reinforcement Learning (BRL), which is based on a number of incentives, including anomaly, cost factors, usefulness, reliability, and size. All the Bayesian models share much of the common Bayes rule, which allows a blended polymorphic model that reduces computing time and memory allocation and improves processing reliability. The Isolation Forest (IF) method is used to identify and avoid vulnerable loads by detecting demand anomalies. The last step employs a Dynamic Preemptive Priority Round Robin (DPPRR) algorithm for preemptive, priority-based load scheduling on the forecasted data to allocate the next loads to be added.
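
A small sketch of the anomaly step follows: flagging abnormal demand readings with scikit-learn's Isolation Forest. The feature choice, contamination rate, and data are placeholder assumptions.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # columns: (demand_kw, hour_of_day) for past readings -- placeholder data
    history = np.column_stack([np.random.normal(3.0, 0.5, 500),
                               np.random.randint(0, 24, 500)])
    detector = IsolationForest(contamination=0.02, random_state=0).fit(history)

    new_readings = np.array([[3.2, 14], [9.8, 3]])   # the second looks anomalous
    print(detector.predict(new_readings))            # 1 = normal, -1 = anomaly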

Author 1: Sarin CR
Author 2: Geetha Mani

Keywords: Adaptive control; Bayesian; demand response; energy management system; load scheduling

Download PDF

Paper 36: Determine the Main Target Audience Characteristics in M-learning Applications in Saudi Arabian University Communities

Abstract: In the fourth economic revolution, which is based on digital transformation, e-learning represents one of the most important processes needed to deal with the revolution through increased skills and knowledge. Thus, automation services in the e-learning field represent one of the most important and supportive means of transferring and disseminating knowledge to a diverse and wide segment. This study focuses on defining the parameters and characteristics of the target audience interested in e-learning through smart device applications in higher education institutions in Saudi Arabia. The study used a quantitative method, with data collected from 539 participants from several universities and institutes, to determine their characteristics. The studied segment represents one of the basic aspects of, and key motivations for, accepting new technology: 70% of Saudi smart device users belong to the youth segment, which is the university age group. This is the category expected to make the most use of e-learning in light of the coronavirus pandemic, which has cast a shadow over the six continents of the world. This approach could help the adoption of M-learning applications by the target audience according to a number of technical and design requirements, which are presented in this study.

Author 1: Alaa Badwelan
Author 2: Adel A. Bahaddad

Keywords: M-learning; mobile learning; UTAUT; KSA; MOE; application quality; qualitative study

Download PDF

Paper 37: Online Discussion Support System with Facilitation Function

Abstract: In this paper, we present a discussion board system (DBS) with a facilitator function that we developed for the purpose of facilitating discussions that require decision-making that incorporates diverse values and opinions. Its function is to constantly extract nouns from the utterances of discussion participants and display them on the DBS on each participant's personal computer. Items to be decided in the discussion are displayed together with a frame (called a “box”), and each participant puts the displayed keywords in the box according to their own opinions and intentions. No other participant can see what the individual is doing. The color of any keyword that all participants put in the same box changes to green. Furthermore, comments are automatically presented based on the time when each participant last spoke and the time when the keyword was moved. This is intended to encourage participants who appear to be less involved to join the discussion. Experiments with the DBS suggested that it might be possible to capture the will of participants who disagree but do not speak. However, it was also deemed necessary to post comments that encourage participants to express their intentions independently, and to have a mechanism that can link the motivated manifestation of intentions to appropriate actions.

Author 1: Chihiro Sasaki
Author 2: Tatsuya Oyama
Author 3: Chika Oshima
Author 4: Shin Kajihara
Author 5: Koichi Nakayama

Keywords: Discussion board system; decision-making; facilitation function; diverse values and opinions

Download PDF

Paper 38: Hybrid Spelling Correction and Query Expansion for Relevance Document Searching

Abstract: A digital library is a type of information retrieval (IR) system. Existing IR methodologies generally have problems with keyword searching; some search engines are unable to provide search results under partial matching or typographical errors. Therefore, a system is required that can provide search results relevant to the keywords supplied by the user. We propose a model to solve this problem by combining spelling correction and query expansion. Searching starts with indexing the document titles: the titles of all incoming documents are preprocessed and then weighted with Term Frequency – Inverse Document Frequency (TF-IDF) across all terms of the whole collection. The Levenshtein distance algorithm is used in the search process to correct keywords with suspected typos. Before calculating the relevance between the keywords and the documents using cosine similarity, the keywords are expanded using query expansion to increase the number of documents retrieved. The cosine similarity scores are then combined with the query expansion weights to obtain the final ranking. Results show improvements over an IR system without spell checking and query expansion. The study was realized as a web-based application and tested 50 times on a collection of 2,045 records. The system was able to correct typo-indicated keywords and search documents with an average recall of 95.91%, an average precision of 63.82%, and an average Non-Interpolated Average Precision (NIAP) of 86.29%.
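
A minimal sketch of the spell-correction-plus-ranking idea, assuming a toy title collection and vocabulary: a hand-written Levenshtein distance corrects the query terms, and TF-IDF vectors with cosine similarity rank the titles. The query expansion and NIAP evaluation steps are omitted.

```python
# Hedged sketch: correct a misspelled keyword with Levenshtein distance,
# then rank document titles by TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

titles = ["information retrieval system", "digital library indexing",
          "query expansion for search"]
vocab = sorted({w for t in titles for w in t.split()})

query = "informaton retreival"  # typo-ridden keywords
corrected = " ".join(min(vocab, key=lambda w: levenshtein(q, w))
                     for q in query.split())

vec = TfidfVectorizer().fit(titles)
scores = cosine_similarity(vec.transform([corrected]), vec.transform(titles))
print(corrected, scores.round(2))
```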

Author 1: Dewi Soyusiawaty
Author 2: Denny Hilmawan Rahmatullah Wolley

Keywords: Cosine similarity; information retrieval; Levenshtein distance; TF-IDF; typographical error; query expansion

Download PDF

Paper 39: Usability Perceptions of the Madrasati Platform by Teachers in Saudi Arabian Schools

Abstract: As a result of the COVID-19 pandemic, the Saudi Ministry of Education launched the Madrasati Platform for distance teaching and learning in schools. Recognizing the design and usability challenges linked to this new platform is therefore significant. This paper reports the results of a study that examines the usability level of the Madrasati Platform from the perspectives of schoolteachers in Saudi Arabia. It also investigates the usability issues that teachers faced when using the platform. A total of 759 teachers responded to the Computer System Usability Questionnaire (CSUQ), and semi-structured interviews were conducted with ten teachers. The findings indicated that the usability of the Madrasati Platform for schoolteachers is inadequate and needs further improvement. Navigation issues were the most frequently reported by the participants. Finally, the paper presents some recommendations for improving the usability of, and experience with, the Madrasati Platform.

Author 1: Wesam Shishah

Keywords: Usability; Madrasati Platform; e-learning; Saudi Arabia; schools

Download PDF

Paper 40: IDD-HPO: A Proposed Model for Improving Diabetic Detection using Hyperparameter Optimization and Cloud Mapping Storage

Abstract: Readmission to the hospital is an important and critical indicator for the quality of health care: it is very costly, and it helps in determining the quality of the care the hospital provides to the patient. This paper proposes a group model to predict readmission by choosing between Machine Learning and Deep Learning algorithms based on performance improvement. The Machine Learning algorithms used are Logistic Regression, K-Nearest Neighbors, and Support Vector Machine, while the Deep Learning algorithms used are a Convolutional Neural Network and a Recurrent Neural Network. The efficiency of a model depends on the preparation of correct parameters, the values that control the learning. This paper aims to enhance the performance of both machine learning and deep learning based readmission models using hyperparameter optimization, in both Personal Computer environments and Mobile Cloud Computing systems. The proposed model, called Improving Diabetic Detection using Hyperparameter Optimization (IDD-HPO), aims to achieve the best prediction accuracy for hospital readmission while minimizing resources such as time delay and energy consumption. Results achieved by the proposed model for Logistic Regression, K-Nearest Neighbors, and Support Vector Machine are (accuracy=0.671, 0.883, 0.901; time delay=5, 7, 20; energy consumed=25, 32, 48) respectively, and for the Recurrent Neural Network and Convolutional Neural Network (accuracy=0.854, 0.963; time delay=25, 660; energy consumed=89, 895) respectively. However, the model consumes considerable time and energy, especially for the Convolutional Neural Network. The experiments were therefore repeated in a cloud environment with two types of storage, preserving accuracy while decreasing time and energy: in the cloud, the proposed model achieves for Logistic Regression, K-Nearest Neighbors, and Support Vector Machine (accuracy=0.671, 0.883, 0.901; time delay=2, 3, 8; energy consumed=8, 9, 11) respectively, and for the Recurrent Neural Network and Convolutional Neural Network (accuracy=0.854, 0.963; time delay=15, 220; energy consumed=20, 301) respectively.
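
As an illustration of the hyperparameter optimization step, the sketch below grid-searches the regularization strength of a logistic regression on synthetic data; the grid, data, and scoring choice are assumptions, not the paper's configuration.

```python
# Hedged sketch: grid-search hyperparameter optimization for one of the
# listed models (logistic regression) on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1, 10], "penalty": ["l2"]},
    cv=5, scoring="accuracy",
)
grid.fit(X_tr, y_tr)
print(grid.best_params_, round(grid.score(X_te, y_te), 3))
```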

Author 1: Eman H. Zaky
Author 2: Mona M. Soliman
Author 3: A. K. Elkholy
Author 4: Neveen I. Ghali

Keywords: Machine learning; deep learning; diabetes; hospital readmission; hyper parameter optimization; cloud computing; mobile cloud computing

Download PDF

Paper 41: Emotional Cascade Model and Deep Learning

Abstract: The Emotional Cascade Model proposes that the emotional and behavioral dysregulation of individuals with Borderline Personality Disorder can be understood through emotional cascades. Emotional cascades are vicious cycles of intense rumination and negative affect that may induce aversive emotional states, generating abnormal behaviors aimed at reducing the effect of the intense rumination. Borderline Personality Disorder is a psychiatric disorder whose main diagnostic symptoms are mood instability and impulsivity; the disorder often involves risky behaviors such as non-suicidal self-injury or substance abuse. Recently, Selby and collaborators have shown that the Emotional Cascade Model has a high explanatory and diagnostic capacity using Temporal Bayesian Networks. Taking into consideration the meta-analytic study developed by Richman et al., this article designs a deep learning model based on cascading artificial neural networks, following the correlations established for the Emotional Cascade Model. The predictive power of this model relative to the various types of rumination that influence some of the basic classes of symptoms of Borderline Personality Disorder has been confirmed, with accuracy estimates reaching up to 99%.

Author 1: Carlos Pelta

Keywords: Emotional cascade model; borderline personality disorder; rumination; deep learning; cascade-correlation algorithm

Download PDF

Paper 42: Diabetes Classification using an Expert Neuro-fuzzy Feature Extraction Model

Abstract: Diabetes is one of the most challenging diseases prevailing in recent times. Due to incompleteness, uncertainty and imprecision in the data, classification of diabetes using machine learning algorithms is turning out to be even more challenging. The efficiency of a classification model is influenced by the data present in the dataset. This study enhances the classification of diabetes by using a neuro-fuzzy model with special attention to feature extraction. The main goal of the present study is to enhance the diabetes prediction technique so that medical practitioners can easily identify the disease and diagnose it appropriately, reducing the complications that diabetes may cause the patient in the future. The proposed model initially applies fuzzification to the diabetes data to produce membership values. The membership values are then examined by the proposed model to check the contribution of the features to diabetes classification. The feature extraction algorithm passes the significant features to a neural network. The proposed model is tested on the standard PIMA diabetes dataset to evaluate its performance and is able to outperform all the existing machine learning algorithms considered.

Author 1: P. Bharath Kumar Chowdary
Author 2: R. Udaya Kumar

Keywords: Diabetes; neuro-fuzzy model; feature extraction; artificial neural network

Download PDF

Paper 43: A Multiagent and Machine Learning based Hybrid NIDS for Known and Unknown Cyber-attacks

Abstract: The objective of this paper is to propose a hybrid Network Intrusion Detection System (NIDS) for the detection of cyber-attacks that may target modern computer networks. Indeed, in the era of technological evolution that the world is currently experiencing, hackers are constantly inventing new attack mechanisms that can bypass traditional security systems. Thus, NIDS are now an essential security building block to be deployed in corporate networks to detect both known and zero-day attacks. In this research work, we propose a hybrid NIDS model based on the use of both a signature-based NIDS and an anomaly-detection NIDS. The proposed system is based on agent technology, the SNORT signature-based NIDS, and machine learning techniques, and the CICIDS2017 dataset is used for training and evaluation purposes. The CICIDS2017 dataset underwent several pre-processing actions, namely dataset cleaning, dataset balancing, and reduction of the number of attributes (from 79 to 33). In addition, a set of machine learning algorithms is used, such as decision tree, random forest, Naive Bayes and multilayer perceptron, and evaluated using metrics such as recall, precision, F-measure and accuracy. The detection methods used give very satisfactory results in terms of modeling benign network traffic, and the accuracy reaches 99.9% for some algorithms.

Author 1: Said OUIAZZANE
Author 2: Malika ADDOU
Author 3: Fatimazahra BARRAMOU

Keywords: Intrusion detection; zero-day attacks; machine learning; multi-agent systems; security

Download PDF

Paper 44: Software Quality Analysis for Halodoc Application using ISO 25010:2011

Abstract: The rapid spread of the Covid-19 disease and the ongoing social distancing have disrupted community activities. This consequently makes people use information and communication technology, especially in the fields of health, education, and business. The use of information and communication technology in the health sector plays an important role in preventing the spread of the Covid-19 disease; one of its uses is telemedicine, and one of the developing telemedicine services in Indonesia is Halodoc. This study tests the Halodoc application to determine its quality using the International Organization for Standardization (ISO) model ISO 25010:2011. Software quality assurance can use this model as the basis for testing; the test therefore covers 8 characteristics and 29 sub-characteristics. The assessment uses weights determined with the Analytical Hierarchy Process (AHP) method. Testing was carried out using black-box testing, stress testing, and questionnaires distributed to 100 respondents for usability testing. Functional Suitability, Compatibility, Reliability, and Maintainability obtained the maximum score of 5; Performance Efficiency scored 4.886, Usability scored 4, Security scored 3.549, and Portability scored 3.718. Overall, the Halodoc application obtained a total score of 4.515 out of a maximum of 5.
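
The AHP weighting mentioned above can be sketched briefly: weights are the normalized principal eigenvector of a pairwise comparison matrix, followed by a consistency check. The 3x3 judgment matrix below is a made-up example, not the paper's actual comparisons of ISO characteristics.

```python
# Hedged sketch of AHP weighting via the principal eigenvector.
import numpy as np

# A[i, j] = how much more important criterion i is than criterion j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, eigvals.real.argmax()].real
weights = principal / principal.sum()          # normalized priority weights

# Consistency ratio (random index RI = 0.58 for n = 3); CR < 0.1 is the
# conventional acceptability threshold.
n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)
print(weights.round(3), "CR =", round(ci / 0.58, 3))
```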

Author 1: Aditia Arga Pratama
Author 2: Achmad Benny Mutiara

Keywords: Halodoc; ISO 25010:2011; software quality assurance; telemedicine

Download PDF

Paper 45: The Novel CPW 2.4 GHz Antenna with Parallel Hybrid Electromagnetic Solar for IoT Energy Harvesting and Wireless Sensors

Abstract: The novelty of this design and implementation lies in simultaneously utilizing the antenna's frequency, polarization, and feed structure to maximize the harvested RF energy while serving as a microstrip communication circuit for wireless sensor or communication systems in IoT devices. In addition, the optimization of the parallel circuit configuration employs a voltage doubler model with an integrated parallel system and thin-film solar cells. The antenna structure implements two line feeds in one antenna; both feeds serve the same function, with CPW circular polarization. Another advantage is that there is no misconfiguration from port exchanges when using both output ports simultaneously. The 2-port antenna has an area of 1/2 per port (accessible wavelengths work well at the 2.4 GHz frequency). It has been shown to achieve a relatively narrow bandwidth of 86.5 percent, covering WiFi frequency band networks and IoT communications. It does not require additional filters and analog matching circuits, which cause power loss in the transmission process in parallel voltage doubler circuits. Integrating a reflector on the CPW antenna with two ports, for placement of the thin-film solar cells, provides antenna gain of up to 8.2 dB and a wide beam range with directional radiation. Using a multi-stage parallel circuit to increase the voltage output, integrated with a thin-film solar cell converter, proves efficient in the 2.4 GHz frequency band. When the transmission power density is -16.15 dBm with a tolerance of 0.023, the novel energy harvester configuration circuit can produce an output voltage of 54 mV dc without adding solar cell energy. With the integrated thin-film solar cell under a 300 lux light beam in the -16.15 dBm radiation beam area, the energy obtained reaches 1.74467 V. This shows that the configuration can produce an optimal dc output voltage in actual indoor and outdoor ambient settings. The optimization of the antenna implementation and the communication process with multiple signal classification improves the configuration of antennas that are close to each other and have identical phase outputs. It is instrumental and efficient when applied to IoT devices.

Author 1: Irfan Mujahidin
Author 2: Akio Kitagawa

Keywords: Double CPW antenna; energy harvesting; wireless sensors; IoT communications

Download PDF

Paper 46: Smart Air Pollution Monitoring System with Smog Prediction Model using Machine Learning

Abstract: Air pollution is a harsh reality of today's times. With rapid industrialization and urbanization, and the polluting gases emitted by the burning of fossil fuels in industries, factories and vehicles, cities around the world have become “gas chambers”. Unfortunately, New Delhi too happens to be among the most polluted cities in the world. The present paper designs and demonstrates an IoT (Internet of Things) based smart air pollution monitoring system that could be installed at various junctions and high-traffic zones in urban metropolises and megalopolises to monitor pollution locally. It is designed in a novel way: it not only monitors air pollution by taking varied inputs from various sensors (temperature, humidity, smoke, carbon monoxide, gas) but also presents them on a smart mirror. Its unique feature is the demonstration of a smog prediction model that determines PM10 (Particulate Matter 10) concentration using the most efficient machine learning model, selected after an extensive comparison taking environmental conditions into account. The data generated can also be sent as feedback to the traffic department, to avoid incessant congestion and maintain a uniform flow of traffic, and to environmental agencies to keep pollution levels in check.
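
As a rough illustration of the prediction component, the sketch below fits one candidate regressor to synthetic temperature/humidity data to estimate PM10; the paper compares several models to pick the most efficient one, and nothing here reflects its real data.

```python
# Hedged sketch: predicting PM10 from simple weather features with one
# candidate regressor on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
temp = rng.uniform(5, 45, 300)          # degrees Celsius
humidity = rng.uniform(10, 95, 300)     # percent
pm10 = 40 + 1.2 * temp - 0.3 * humidity + rng.normal(0, 5, 300)

X = np.column_stack([temp, humidity])
X_tr, X_te, y_tr, y_te = train_test_split(X, pm10, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE:", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```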

Author 1: Salman Ahmad Siddiqui
Author 2: Neda Fatima
Author 3: Anwar Ahmad

Keywords: Air pollution; IoT; machine learning; smart mirror; temperature and humidity sensor

Download PDF

Paper 47: Novel Secure Validation Scheme to Assess Device Legitimacy in Internet of Things

Abstract: The increasing security concerns in the Internet-of-Things (IoT) have led researchers to come up with multiple levels of research-based solutions for identifying and protecting against lethal threats. After reviewing the existing literature, it is found that existing approaches are highly specific, stopping threats via predefined threat information. Hence, the proposed system introduces a new computational model capable of building a flexible validation model for evaluating the legitimacy of IoT nodes. The proposed system develops an algorithm that uses simplified generation of secret keys, performs encryption, and generates validation tokens to ensure a higher degree of privacy and data integrity. The proposed model also contributes a unique energy allocation approach to ensure better energy conservation while performing security operations. The simulated study outcome shows better security and data transmission performance compared to the existing scheme.
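
The abstract outlines key generation, encryption, and validation tokens without giving the construction, so the sketch below shows one generic way such a token could be issued and checked with an HMAC over a shared secret; it is an assumption-laden illustration, not the paper's scheme.

```python
# Hedged sketch: issue and check a validation token for an IoT node using
# a shared secret and HMAC. Generic illustration only.
import hashlib, hmac, secrets, time

SECRET = secrets.token_bytes(32)        # provisioning-time shared secret

def issue_token(node_id: str) -> str:
    ts = str(int(time.time()))
    mac = hmac.new(SECRET, f"{node_id}|{ts}".encode(), hashlib.sha256)
    return f"{node_id}|{ts}|{mac.hexdigest()}"

def validate(token: str, max_age_s: int = 300) -> bool:
    node_id, ts, mac = token.split("|")
    expected = hmac.new(SECRET, f"{node_id}|{ts}".encode(), hashlib.sha256)
    fresh = time.time() - int(ts) <= max_age_s
    return fresh and hmac.compare_digest(mac, expected.hexdigest())

t = issue_token("sensor-07")            # "sensor-07" is a hypothetical node id
print(validate(t))                      # True while fresh and untampered
```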

Author 1: Ayasha
Author 2: M Savitha Devi

Keywords: Internet of things; security; validation; privacy; integrity; attacks

Download PDF

Paper 48: An Efficient IoT-based Smart Water Meter System of Smart City Environment

Abstract: Water is a precious need of our lives. Due to rapid population growth and urbanization, monitoring water usage is a significant problem facing our society. One solution is to control, analyze, and reduce the water consumption of households. The recent emergence of the Internet of Things (IoT) concept has offered the opportunity to establish water-usage-efficient smart devices, systems and applications for buildings and cities. Many studies have suggested designs for IoT-based smart meter systems; however, the IoT sensor node has received limited study, especially regarding battery life. Therefore, this study aims to implement and analyze an efficient data collection algorithm for IoT-based smart metering applications with energy consumption in mind. The system components used are an Arduino Uno, a Wi-Fi ESP8266 module, and water flow sensors. The applied algorithm is an efficient data collection algorithm for the water meter (EDCDWM) that reduces the number of packet transmissions. The system was implemented on Arduino, while the simulation and analysis were performed in MATLAB R2019b. The average percentage of energy saved across all nodes is around 60% for EDCDWM with absolute change and 93% for EDCDWM with relative difference.
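
The core of a transmission-reduction algorithm of this kind can be sketched as a filter that sends a reading only when it changes enough, in absolute or relative terms; the thresholds and flow values below are illustrative, and the actual EDCDWM logic may differ.

```python
# Hedged sketch: transmit a reading only when it differs from the last
# transmitted value by more than a threshold (absolute or relative).
def filter_readings(readings, threshold, mode="absolute"):
    sent, last = [], None
    for r in readings:
        if last is None:
            send = True                              # always send first sample
        elif mode == "absolute":
            send = abs(r - last) > threshold         # change in litres/min
        else:  # relative difference
            send = abs(r - last) / max(abs(last), 1e-9) > threshold
        if send:
            sent.append(r)
            last = r
    return sent

flow = [5.0, 5.1, 5.1, 6.5, 6.6, 9.0, 9.0, 9.1]
print(filter_readings(flow, threshold=1.0))                   # absolute change
print(filter_readings(flow, threshold=0.2, mode="relative"))  # relative change
```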

Author 1: Raad AL-Madhrahi
Author 2: Nayef.A.M. Alduais
Author 3: Jiwa Abdullah
Author 4: Hairulnizam B. Mahdin
Author 5: Abdul-Malik H. Y. Saad
Author 6: Abdullah B. Nasser
Author 7: Husam Saleh Alduais

Keywords: Internet of things; smart water metering; energy consumption; smart city

Download PDF

Paper 49: White-Grey-Black Hat Hackers Role in World and Russian Domestic and Foreign Cyber Strategies

Abstract: The article aims at establishing the role of three different types of hackers in the domestic cyber space, policy and welfare, international relations and warfare of the Russian Federation, compared to the situation in the world as of the beginning of 2020. The character and structure of hackers' participation in information policy and cybersecurity are characterized in connection with the intensity and duration of their intervention, national interests, and technical and political outcomes. The new role of cyberwarfare as the fifth sphere of military activity is highlighted and substantiated. The positive and negative influences of hackers' activities, and methods for their detection and control, are differentiated at the level of the individual Internet user, the company, the government, and international organizations. Examples of the activities of certain criminal hacker groups are given. The main organizational measures, tools and cryptography techniques for protection against hackers' intrusions are proposed. The article reveals and analyzes the specifics of the various hacker types: white, gray and black hat. It emphasizes that cybersecurity worldwide is now the object of hacker attacks that can affect the functioning not only of national or private corporations but also of government agencies.

Author 1: Mikhail A. Shlyakhtunov

Keywords: Cyber wars; cybercriminals; disinformation; espionage; hackers; information security; military technology

Download PDF

Paper 50: Mobile Computational Vision System in the Identification of White Quinoa Quality

Abstract: Quinoa is currently in high commercial demand due to its great benefits and vitamin content. The process of selecting this grain is mostly done manually and is prone to errors, because the work is subject to fatigue and to the subjective criteria of those in charge, causing quality to decrease when an adequate, standards-based selection is not made. For this reason, this study focuses on determining the influence of a computer vision system on the identification of the quality of white quinoa, following the standards and techniques for developing a computer vision system through the phases of digital image processing (PDI). The study concludes that, thanks to technological advances in mobile devices, it is possible to implement robust computer vision systems to solve such problems.

Author 1: Percimil Lecca-Pino
Author 2: Daniel Tafur-Vera
Author 3: Michael Cabanillas-Carbonell
Author 4: José Luis Herrera Salazar
Author 5: Esteban Medina-Rafaile

Keywords: Computer vision system; quinoa quality; digital image processing

Download PDF

Paper 51: Radiofrequency Temperature Control System for Fish Capture

Abstract: Artisanal fishing is one of the activities with the greatest economic impact worldwide. The present research on a radiofrequency temperature monitoring system for fish capture aimed to determine its influence on satisfaction with fish catches. The dimensions considered for fish capture were time delay and direct cost, and for the monitoring system, usability and reliability. It was shown that the use of the radiofrequency temperature monitoring system decreased the direct cost incurred, reduced the time required to locate hot spots, and increased satisfaction and confidence in the system by 95%.

Author 1: Walter Vásquez-Cubas
Author 2: Marvin Zarate-Cuzco
Author 3: Michael Cabanillas-Carbonell
Author 4: José Luis Herrera Salazar
Author 5: Oswaldo Casazola-Cruz

Keywords: Temperature monitoring system; satisfaction; radiofrequency; fish capture

Download PDF

Paper 52: Ontology based Semantic Query Expansion for Searching Queries in Programming Domain

Abstract: Information on the web is growing rapidly, and learning programming languages has become one of the most widely searched topics. Programmers and people from several backgrounds now use Web search engines to acquire programming information, including information about a specific language or topic. Nonetheless, due to a lack of programming knowledge, many laypeople have difficulty forming appropriate queries to articulate their inquiries, so they tend to write short queries, which leads to vocabulary problems and contextless queries. Moreover, semantics is almost neglected in traditional search, since it relies on keyword-based matching, which makes searches imprecise, with irrelevant results due to the use of unclear keywords. A semantic query expansion method is proposed for disambiguating queries in the computer programming domain using an ontology. Integrating the cosine similarity method into the proposed model improved the expanded query and, accordingly, the search results. The proposed system has been tested with several ambiguous and misspelled queries, and the generated expansions proved to retrieve more relevant results when applied to a search engine. The quality of the retrieved results for the expanded queries is much higher than that for the crude queries. The proposed technique was implemented and then tested with external independent testers, who confirmed the mechanical test results, showing improvement with average precision@10 of 82.2% and 91.1%, respectively. The results were promising and therefore open further research directions.

Author 1: Manal Anwer Khedr
Author 2: Fatma A. El-Licy
Author 3: Akram Salah

Keywords: Query expansion; ontology; semantic search; short queries; cosine similarity

Download PDF

Paper 53: A Cryptographic Technique for Communication among IoT Devices using Tiger192 and Whirlpool

Abstract: The heterogeneous standards and operational platforms of IoT devices introduce additional security loopholes into the network, thereby increasing the attack surface of the IoT. Most of the devices used in these IoT systems are not secure by design, and such vulnerable devices pose a great threat to the IoT system. In recent times, there has been a lot of research on improving existing mechanisms for securing IoT data at both the software and hardware levels. Although there exist cryptographic research solutions to secure data at the node level in IoT systems, few of these security solutions target both securing the IoT data and validating the IoT nodes. The authors propose a cryptographic solution that uses double hashing to provide improved security for IoT node data and validation of nodes in an IoT system. In this paper, a cryptographic mechanism composed of the Tiger192 cryptographic hash and the Whirlpool hash function is proposed for authenticating IoT data and validating devices. The use of digital ledger technology and a cryptographic double-hashing algorithm provides enhanced security, privacy, and integrity of data among IoT nodes, and also assures the availability of IoT data.
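
A sketch of the double-hashing composition follows, with one important caveat: Python's hashlib does not ship Tiger192, and Whirlpool support is build-dependent, so SHA-512 and SHA3-512 stand in purely to show the chaining of two distinct digests; the paper's scheme uses Tiger192 and Whirlpool.

```python
# Hedged sketch of double hashing: chain two different hash functions over
# node data. SHA-512 / SHA3-512 are stand-ins, not the paper's choices.
import hashlib

def double_hash(payload: bytes) -> str:
    first = hashlib.sha512(payload).digest()        # stand-in for Tiger192
    second = hashlib.sha3_512(first).hexdigest()    # stand-in for Whirlpool
    return second

reading = b'{"node":"pump-3","temp":21.4}'          # hypothetical node data
print(double_hash(reading)[:32], "...")             # digest kept on the ledger
```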

Author 1: Bismark Tei Asare
Author 2: Kester Quist-Aphetsi
Author 3: Laurent Nana

Keywords: IoT devices; whirlpool; Tiger192; internet of things security; cryptographic communication

Download PDF

Paper 54: User-centred Design and Evaluation of Web and Mobile based Travelling Applications

Abstract: Travelling has been known as one of the top-rated activities people do during their leisure time. In this digital age, people usually do research before visiting a new place to avoid unpleasant events and to have a well-planned trip. Due to the complexity of search engine browsing, people have been switching to dedicated travelling applications. Travelling applications should be designed taking into consideration users' needs, requirements, and usability. This research aims to design a travelling application based on a user-centred design approach and compare its performance on different platforms. Two prototypes of travelling applications were designed and evaluated: web-based and mobile-based. The System Usability Scale (SUS) questionnaire was then used to evaluate the usability of the two prototypes, and the Pearson correlation coefficient test and t-test were used to analyse the data collected from the questionnaire. The results showed no statistically significant difference in SUS scores between the two prototypes, which indicates that the participants did not prefer either prototype over the other.
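
SUS scoring itself is standard and easy to reproduce: odd items contribute (response - 1), even items contribute (5 - response), and the sum is multiplied by 2.5 to yield a 0-100 score. The responses below are invented.

```python
# Sketch of standard System Usability Scale (SUS) scoring.
def sus_score(responses):
    """responses: list of ten 1-5 Likert answers, item 1 first."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i=0 is item 1 (odd item)
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```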

Author 1: Nor Azman Ismail
Author 2: Siti Fatimah Nizam
Author 3: Simon Yuen
Author 4: Layla Hasan
Author 5: Su Elya Mohamed
Author 6: Wong Yee Leng
Author 7: Khalid Krayz Allah

Keywords: Usability; travelling application; SUS questionnaire; low-fidelity prototype; user-centred design (UCD)

Download PDF

Paper 55: Experimental Study of Hybrid Genetic Algorithms for the Maximum Scatter Travelling Salesman Problem

Abstract: We consider the maximum scatter travelling salesman problem (MSTSP), a variant of the travelling salesman problem (TSP). The problem aims to maximize the shortest edge in a tour that visits each city exactly once in the given network. It is a very complicated NP-hard problem, and hence exact solutions are obtainable for small sizes only. For large sizes heuristic algorithms must be applied, and genetic algorithms (GAs) have been observed to be very successful in dealing with such problems. In our study, a simple GA (SGA) and four hybrid GAs (HGAs) are proposed for the MSTSP. The SGA starts with an initial population produced by a sequential sampling approach and improved by 2-opt search, and then gradually improves the population through a proportionate selection procedure, sequential constructive crossover, and adaptive mutation. A maximum-generation stopping condition is adopted. The HGAs add a selected local search and perturbation procedure to the proposed SGA; each HGA uses one of three local search procedures based on insertion, inversion and swap operators, applied directly or randomly. An experimental study was carried out among the proposed SGA and HGAs by solving TSPLIB asymmetric and symmetric instances of various sizes. Our computational experience reveals that the suggested HGAs are very effective. Finally, our best HGA is compared with a state-of-the-art algorithm by solving TSPLIB symmetric instances of many sizes, and our computational experience reveals that our best HGA is better.
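
The MSTSP objective is compact enough to state in code: the fitness of a tour is its shortest edge, to be maximized. The sketch below pairs it with a naive random-swap improvement loop on a random instance; the paper's SGA/HGA machinery (sequential constructive crossover, 2-opt, adaptive mutation, perturbation) is far richer than this.

```python
# Hedged sketch of the MSTSP objective plus a naive swap-based local move.
import random

random.seed(0)
n = 8
cities = [(random.random(), random.random()) for _ in range(n)]

def dist(i, j):
    (x1, y1), (x2, y2) = cities[i], cities[j]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def fitness(tour):
    """MSTSP objective: length of the shortest edge in the closed tour."""
    return min(dist(tour[k], tour[(k + 1) % n]) for k in range(n))

tour = list(range(n))
best = fitness(tour)
for _ in range(2000):                        # naive swap-based improvement
    i, j = random.sample(range(n), 2)
    tour[i], tour[j] = tour[j], tour[i]
    f = fitness(tour)
    if f > best:
        best = f                             # keep the improving swap
    else:
        tour[i], tour[j] = tour[j], tour[i]  # undo a non-improving swap
print(round(best, 3))
```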

Author 1: Zakir Hussain Ahmed
Author 2: Asaad Shakir Hameed
Author 3: Modhi Lafta Mutar
Author 4: Mohammed F. Alrifaie
Author 5: Mundher Mohammed Taresh

Keywords: Hybrid genetic algorithm; maximum scatter travelling salesman problem; sequential constructive crossover; adaptive mutation; local search; perturbation procedure

Download PDF

Paper 56: Knowledge Extraction and Data Visualization: A Proposed Framework for Secure Decision Making using Data Mining

Abstract: Making decisions promptly is a crucial success factor in large organizations. Generally, the data warehouses of these organizations grow rapidly with the data generated from various business activities. This huge volume of data needs to be analyzed, and decisions must be made quickly to meet market challenges. Accurate knowledge extraction from big data, and its visualization, can guide decision-makers to conduct key analyses and make correct predictions. This paper proposes a decision-making framework that takes into account not only knowledge extraction and visualization but also the security of the data. The proposed framework uses data mining techniques to extract useful patterns, then visualizes those patterns for further analysis and decision making. The significance of the proposed framework lies in the mechanism through which it protects the data from intruders: the data is first processed and then stored in an encrypted format in the cloud, and when the data is needed for analysis and decision making, a temporary copy is decrypted and the important patterns are visualized. The proposed framework will assist managers and other decision-makers in analyzing and visualizing data in real time with an enhanced security mechanism.

Author 1: Hazzaa N. Alshareef
Author 2: Ahmed Majrashi
Author 3: Maha Helal
Author 4: Muhammad Tahir

Keywords: Big data; data mining; data visualization; classification; cloud computing; security

Download PDF

Paper 57: Automated Labeling of Hyperspectral Images for Oil Spills Classification

Abstract: The constant increase in oil demand has caused huge losses in the form of oil spills during the export of the product, which leads to increased pollution, especially in the marine environment. This research contributes a solution to this problem through modern technology, detecting oil spills using satellite imagery, more specifically hyperspectral images (HSI). The dataset obtained from the AVIRIS sensor is raw, which means a vast amount of unlabeled data is available. This was one of the main reasons to propose a method that classifies the HSI by first labeling the raw data automatically through unsupervised K-means clustering. The automatically labeled HSI is used to train various classifiers, namely Support Vector Machine (SVM), Random Forest (RF), and K-Nearest Neighbor (K-NN), to reach an accuracy comparable with other research. The results for the first region of interest (ROI) indicate that the SVM with RBF kernel obtains 99.89% with principal component analysis (PCA) and 99.86% without PCA, better than RF and K-NN, while in the second ROI the RF obtains 99.9% with PCA and 99.91% without PCA, better than K-NN and SVM. The regions of interest lie within the Gulf of Mexico area, selected based on how frequently it has been used in previous oil spill detection research.
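
The auto-labeling idea reduces to: cluster the unlabeled pixel spectra, treat cluster ids as labels, then train a supervised classifier. The sketch below does this with K-means, an optional PCA step, and an RBF-kernel SVM on synthetic two-band "spectra" standing in for real AVIRIS data.

```python
# Hedged sketch: K-means auto-labeling followed by SVM classification.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
water = rng.normal([0.2, 0.6], 0.05, size=(300, 2))  # synthetic water spectra
oil = rng.normal([0.7, 0.3], 0.05, size=(300, 2))    # synthetic oil spectra
pixels = np.vstack([water, oil])                     # unlabeled pixel data

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)

X = PCA(n_components=2).fit_transform(pixels)        # optional PCA step
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy on auto-labels:", round(clf.score(X_te, y_te), 4))
```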

Author 1: Madonna Said
Author 2: Monica Hany
Author 3: Monica Magdy
Author 4: Omar Saleh
Author 5: Maha Sayed
Author 6: Yomna M.I. Hassan
Author 7: Ayman Nabil

Keywords: Oil spills; hyperspectral imagery; unlabeled data; k-means cluster; classification

Download PDF

Paper 58: Logistic Regression Modeling to Predict Sarcopenia Frailty among Aging Adults

Abstract: Sarcopenia and frailty have been associated with low exercise capacity and high metabolic instability in the aging population. To date, current models merely support one classification, with an accuracy of 83%. The models also overfit complex datasets when predicting accuracy and detecting misclassifications of rare diseases. As multiple classifications led to incongruent data analyses and methods, each evaluation yielded inaccurate results with respect to high prediction accuracy. This study intends to contribute to the current medical informatics literature by comparing models to identify the most optimal one, along with relevant patterns and parameters for prediction model development. The methods were assessed on a real dataset together with the classification model, and the obesity physical frailty (OPF) model was presented as the study's conceptual model. A matrix of accuracy, classification, and feature selection was also utilized to compare the computer output and deep learning models against current counterparts. Essentially, the study findings predicted that an individual's risk of sarcopenia corresponds to physical frailty. Each model was compared with an accuracy matrix to determine the best-fitting model. Logistic regression produced the highest results, with an accuracy rate of 97.69%, compared to the other four study models.

Author 1: Sukhminder Kaur
Author 2: Azween Abdullah
Author 3: Noran Naqiah Hairi
Author 4: Siva Kumar Sivanesan

Keywords: Sarcopenia; frailty; logistic regression model; prediction

Download PDF

Paper 59: Adaptive Continuous Authentication System for Smartphones using Hyper Negative Selection and Random Forest Algorithms

Abstract: Smartphones have become a part of our daily lives, including payment and banking transactions; therefore, improving current data and privacy protection models is essential. A continuous authentication model aims to track the smartphone user's interaction after the initial login. However, current continuous authentication models are limited by dynamic changes in smartphone user behavior. This paper aims to enhance smartphone user privacy and security using continuous authentication based on touch dynamics, proposing a framework for smartphone devices based on user touch behavior that provides a more accurate and adaptive learning model. We adopt a hybrid model based on the Hyper Negative Selection Algorithm (HNSA), an artificial immune system (AIS), and the random forest ensemble classifier to instantly classify user behavior. With the new approach, a decision model can detect normal/abnormal user behavior and update the user profile continuously while the smartphone is in use. The proposed approach was compared with the v-detector and HNSA, showing a high average accuracy of 98.5%, a low false alarm rate, and an increased detection rate. The new model is significant as it could be integrated into a smartphone to increase user privacy instantly. It is concluded that the proposed approach is efficient and valuable for increasing smartphone users' privacy as dynamic user behaviors evolve.

Author 1: Maryam M. Alharbi
Author 2: Rashiq Rafiq Marie

Keywords: Continuous authentication (CA); artificial immunes system (AIS); negative selection algorithm (NSA); random forest algorithm (RFA); smartphones

Download PDF

Paper 60: Lightweight Chain for Detection of Rumors and Fake News in Social Media

Abstract: Social media has become one of the most important sources of news in our lives, but the process of validating news and limiting rumors remains an open research issue. Many researchers have suggested using blockchain to solve this problem, but it has traditionally failed due to the large volume of data and users in such environments. In this paper, we propose modifying the structure of the blockchain while preserving its main characteristics. We achieve this by integrating a customized blockchain with a text mining (TM) algorithm to create a modified lightweight chain (LWC). The LWC speeds up the verification process, which is carried out through proof of good history, where nodes are weighted according to their previous posts. Moreover, the LWC is compatible with different applications, such as verifying the authenticity of news or legal religious rulings (fatwas). In this research, we implemented a simple model to simulate the proposed LWC for the detection of fake news while preserving the characteristics and features of the traditional blockchain. The results on experimental data reflect the effectiveness of the proposed algorithm in establishing the chain.
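
The hash-linking that any such chain relies on can be shown in a few lines: each block commits to the previous block's hash, a post, and a poster weight, and verification recomputes every digest. This is a generic illustration of the chaining, not the LWC's proof-of-good-history protocol.

```python
# Hedged sketch of a minimal hash-linked chain with per-post weights.
import hashlib, json, time

def make_block(prev_hash: str, post: str, weight: float) -> dict:
    body = {"prev": prev_hash, "post": post,
            "weight": weight, "ts": time.time()}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def verify(chain) -> bool:
    """Recompute every hash and check the prev-links."""
    for i, blk in enumerate(chain):
        body = {k: v for k, v in blk.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != blk["hash"]:
            return False
        if i and blk["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, "genesis", 1.0)]
chain.append(make_block(chain[-1]["hash"], "verified news item", 0.9))
print(verify(chain))   # True; altering any field breaks verification
```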

Author 1: Yazed Alsaawy
Author 2: Ahmad Alkhodre
Author 3: Nour M. Bahbouh
Author 4: Adnan Abi Sen
Author 5: Adnan Nadeem

Keywords: Fake news detection; text mining; blockchain; detection algorithms

Download PDF

Paper 61: Auditor's Perception on Technology Transformation: Blockchain and CAATs on Audit Quality in Indonesia

Abstract: The purpose of this study is to analyze auditors' perception of the implementation of technology transformations such as blockchain and CAATs that can affect audit quality at the Big Four public accounting firms in Jakarta. This study uses quantitative research methods with a combination of primary and secondary data; the data collection technique was a questionnaire, and the sample was taken using the purposive sampling method, which resulted in 60 respondents. Data analysis was carried out with SmartPLS 3.0 and IBM SPSS 26 software, leading to the conclusion that the auditors' perception of the implementation of blockchain had a significant positive effect on audit quality, while their perception of the implementation of CAATs had no significant positive effect on audit quality.

Author 1: Meiryani
Author 2: Monika Sujanto
Author 3: ASL Lindawati
Author 4: Arif Zulkarnain
Author 5: Suryadiputra Liawatimena

Keywords: Blockchain; CAATs; audit quality; auditor’s perception; technology transformation

Download PDF

Paper 62: The Effect of using Flipped Learning Strategy on the Academic Achievement of Eighth Grade Students in Jordan

Abstract: This study aimed to reveal the effect of using the flipped learning strategy on the development of achievement, and of attitudes towards it, among eighth grade students in the English language subject in the Hashemite Kingdom of Jordan. The sample consisted of eighth-grade students studying in government schools affiliated with the Directorate of Education in the Northern Mazar District: 50 students were randomly distributed into two groups of 25, a control group taught using the usual method and an experimental group taught using the flipped learning strategy. To achieve the objectives of the study, the researcher used the descriptive approach and the experimental approach (with a quasi-experimental design); the study tools were an achievement test and a questionnaire to measure attitude, and the study materials also included educational software. The results indicated a statistically significant difference between the average scores of the experimental group and the control group in the post-application of the attitude scale, in favour of the experimental group, indicating that the flipped classroom strategy had an impact on developing students' attitudes towards it. In light of this, the researcher recommends taking advantage of the standards and the proposed educational model in the field of English language learning; applying multimedia programs within the flipped classroom strategy to raise the achievement of students in the basic stage; expanding the application of e-learning and blended learning to improve students' attitudes towards using the flipped classroom strategy to learn English; holding training courses for teachers on the use of the flipped learning strategy; and employing modern technologies and social networks in the educational process.

Author 1: Firas Ibrahem Mohammad Al-Jarrah
Author 2: Mustafa Ayasreh
Author 3: Fadi Bani Ahmad
Author 4: Othman Mansour

Keywords: Strategy; flipped learning; academic achievement; eighth grade; introduction

Download PDF

Paper 63: Analysis and Optimization of Delegation-based Sequenced Packet Exchange (SPX) Protocol: A Kailar Logic Approach

Abstract: Accountability has tremendous significance within electronic commerce protocols, especially those that require answerability for the actions taken by participants. In this study, the authors evaluate the delegation of accountability based on the Sequenced Packet Exchange (SPX) protocol. The study emphasizes the concept of provability as a benchmark for formalizing accountability. Moreover, this paper proposes a new framework that enables principals to delegate individual rights to other principals and specifies how the delegator's accountability is handed over or retained, providing the crucial functionality of tracing how accountability is distributed among principals within a system. The study provides a novel solution to accountability challenges and protocol analysis, introducing novel conditions for distributing essential credentials between the grantor and the grantee and analyzing delegation-based protocols. The approach adopted will help prevent potential compromises of the integrity of online transactions and, by extension, will also serve as a best-practice solution for settling legal disputes among principals.

Author 1: Ebrima Jaw
Author 2: Mbemba Hydara
Author 3: Wang Xue Ming

Keywords: Delegator; accountability; grantor; Kailar logic; principal; client; delegate; grantee

Download PDF

Paper 64: Analysis of Big Data Storage Tools for Data Lakes based on Apache Hadoop Platform

Abstract: When developing large data processing systems, the question of data storage arises. One of the modern tools for solving this problem is the so-called data lake. Many implementations of data lakes use Apache Hadoop as a base platform. Hadoop does not have a default data storage format, which leads to the task of choosing a data format when designing a data processing system. To solve this problem, candidates must be evaluated according to several criteria. In turn, experimental evaluation does not always give a complete understanding of the possibilities of working with a particular storage format; in that case, it is necessary to study the features of the format, its internal structure, recommendations for use, and so on. The article describes the features of both widely used data storage formats and formats that are currently gaining popularity.

Author 1: Vladimir Belov
Author 2: Evgeny Nikulchev

Keywords: Big data formats; data lakes; Apache Hadoop; data warehouses

Download PDF

Paper 65: An IoT-based Coastal Recreational Suitability System using Effective Messaging Protocol

Abstract: Coastal recreational activities are one of the main attractions for local beachgoers and overseas tourists. Access to better-quality coastal water enhances safety and public health awareness when the information is available. Existing platforms showing whether a beach is suitable for public recreational use are scarce in Malaysia. An Internet of Things (IoT) based system designed specifically for coastal recreational suitability may differ from existing configurations depending on the environment and requirements. This paper reports the design and implementation of an IoT-based system to capture coastal environmental data and recommend recreational suitability. The system captures sensor data, stores it in a database, and displays the result on a dashboard. The variables include temperature, humidity, rain, pH, turbidity, oxidation-reduction potential (ORP), and total dissolved solids (TDS) in a coastal area. The hardware used in the design consists of development boards such as the Raspberry Pi, Arduino Uno, and ESP32 controller. The system is developed using PHP, MySQL, and Apache Web Server and can be accessed online at https://ipantai.xyz. Using Message Queuing Telemetry Transport (MQTT) as the messaging protocol with the HiveMQ broker, the results showed improvements in message size, throughput, and power consumption. The further potential of such an IoT-based system is to bring value to coastal management and serve as a powerful tool to determine whether a coastal area is suitable for public water recreational activities.
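
Publishing a reading over MQTT is the part of the pipeline that is simplest to sketch. The broker address, topic, and payload fields below are hypothetical (the deployed system uses a HiveMQ broker), and the constructor shown follows the paho-mqtt 1.x API.

```python
# Hedged sketch: publish one sensor reading over MQTT with paho-mqtt.
import json
import paho.mqtt.client as mqtt

# paho-mqtt 1.x style constructor; paho-mqtt 2.x additionally requires a
# CallbackAPIVersion argument to Client().
client = mqtt.Client()
client.connect("broker.example.org", 1883)   # hypothetical broker address

# Illustrative payload; field names are assumptions, not the paper's schema.
reading = {"site": "pantai-01", "temp_c": 29.4, "ph": 8.1,
           "turbidity_ntu": 3.2}
client.publish("coastal/suitability/raw", json.dumps(reading), qos=1)
client.disconnect()
```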

Author 1: Farashazillah Yahya
Author 2: Ahmad Farhan Ahmad Zaki
Author 3: Ervin Gubin Moung
Author 4: Hasimi Sallehudin
Author 5: Nur Azaliah Abu Bakar
Author 6: Rio Guntur Utomo

Keywords: Coastal recreational; internet of things; message queuing telemetry transport; sensor; water quality

Download PDF

Paper 66: Threat Analysis using N-median Outlier Detection Method with Deviation Score

Abstract: An organization can only operate optimally if all employees fulfil their roles and responsibilities. For the majority of tasks and activities, each employee must collaborate with other employees, and every employee must log the activities associated with their roles, responsibilities, and access permissions. Some users may deviate from their work or abuse their access rights in order to gain a benefit, such as money, or to harm an organization's reputation. Insider threats are caused by these types of users/employees, who are known as insiders. Detecting insiders after they have caused damage is more difficult than preventing them from posing a threat. We propose a method for determining how much a user deviates from other users in the same role group in terms of log activities. This deviation score can be used by role managers to double-check before sharing sensitive information or granting access rights to the entire role group. We first identify the abnormal users in each individual role and then use distance measures to calculate their deviation scores. We treat the problem of identifying abnormal users in a large data space as outlier detection. The user log activities are first converted using statistics, the data is then normalized using Min-Max scalar standardization, and PCA transforms the normalized data to a two-dimensional plane to reduce dimensionality. The results of N-Median Outlier Detection (NMOD) are then compared to those of neighbour-based and cluster-based outlier detection algorithms.
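
A minimal sketch of the deviation-score computation described above, under assumed synthetic features: Min-Max normalization, PCA to two dimensions, then each user's Euclidean distance from the role group's median point.

```python
# Hedged sketch: normalize per-user activity features, project to 2-D with
# PCA, and score each user by distance from the role group's median point.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(2)
role_group = rng.normal(10, 1.5, size=(40, 6))   # 40 users x 6 log features
role_group[-1] += 8                              # one user deviates strongly

X = MinMaxScaler().fit_transform(role_group)
X2 = PCA(n_components=2).fit_transform(X)

median_point = np.median(X2, axis=0)
deviation = np.linalg.norm(X2 - median_point, axis=1)  # per-user score
print("most deviant user index:", int(deviation.argmax()))
```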

Author 1: Pattabhi Mary Jyosthna
Author 2: Konala Thammi Reddy

Keywords: Organizational roles; insider threats; outlier detection; deviation score

Download PDF

Paper 67: Methods and Architectural Patterns of Storage, Analysis and Distribution of Spatio-temporal Data

Abstract: The work describes the key principles of building digital spatial data infrastructures for effective decision-making in the management of natural systems and for the sustainable development of the regional economy. The following reference points are considered in detail: increasing the accuracy of deep learning and neural network algorithms and software for analyzing spatial data; developing storage systems for large spatio-temporal data through new physical and logical storage models; and introducing effective geoportal technologies and new architectural patterns for the presentation and further dissemination of spatio-temporal data using modern web technologies. The plan for working out the scientific problem of developing methods and architectural patterns for the storage, analysis and distribution of spatio-temporal data determined the structure of the article. The first section specifies the efficiency criteria for information processes in the digital spatial data infrastructure (SDI), the second section discusses algorithmic support for the process of spatial data analysis, the third covers the integration of spatial data, and the final section addresses the implementation and project-oriented use of geoportal systems.

Author 1: Stanislav A. Yamashkin
Author 2: Anatoliy A. Yamashkin
Author 3: Ekaterina O. Yamashkina
Author 4: Sergey M. Kovalenko

Keywords: Spatial data infrastructure; deep learning; neural networks; spatial data; geoportals

Download PDF

Paper 68: Development of Architecture and Software Implementation of Deep Neural Network Models Repository for Spatial Data Analysis

Abstract: The article presents the key aspects of designing and developing a repository of deep neural network models for analyzing and predicting the development of spatial processes based on spatial data. The framework of the system operates on the basis of the MVC pattern, in which the system is decomposed into modules for its business logic, its data, and its graphical interfaces. The characteristics of the developed web interfaces, a visual programming module for editing models, and an application programming interface for unified interaction with the repository are given. The stated aim of the study determined the structure of the article and the results obtained. The paper describes the functional requirements for the repository as a significant part of the software design, presents the developed formalized storage scheme for neural network models, and describes the development of the repository of neural network models and its API. The developed system allows us to approach the scientific problem of integrating neural networks with the possibility of their subsequent use to solve design problems of the digital economy.

Author 1: Stanislav A. Yamashkin
Author 2: Anatoliy A. Yamashkin
Author 3: Ekaterina O. Yamashkina
Author 4: Milan M. Radovanovic

Keywords: Repository; deep learning; artificial neural network; spatial data; visual programming; software design

Download PDF

Paper 69: The Development of Green Software Process Model

Abstract: Software process and development are fundamental activities in software engineering. Increasing software usage, whether developed in-house or outsourced, requires improving the software process accordingly to minimise adverse effects on the environment. Hardware resources and power consumption are driven by the software processes running on them, which can result in high power and energy emissions. Thus, most existing work on software development is aimed at the efficiency of hardware operation through the CPU, memory, and processor. Although sustainability is still at an early stage in software engineering, a green software process can be achieved through sustainable development that concerns the preservation of the environment. However, there is still a lack of study and effort on software processes that emphasise sustainability perspectives and the elimination of software waste. Therefore, this study proposes green factors for the software process that consider sustainability elements and waste reduction during development. The green factors are the benchmark for measuring a sustainable and green software process. In addition, this paper presents the qualitative interview design and a pilot study; the pilot study analysis demonstrated the reliability of the interview protocol, and the actual interviews and data analysis are currently in progress.

Author 1: Siti Rohana Ahmad Ibrahim
Author 2: Jamaiah Yahaya
Author 3: Hasimi Salehudin
Author 4: Aziz Deraman

Keywords: Software process; green factor; waste reduction; qualitative instrument; green software process model

Download PDF

Paper 70: Robot Chat System (Chatbot) to Help Users “Homelab” based on Deep Learning

Abstract: Homelab is a discussion platform for course materials and assignments for students, packaged as an Android application and a website. The Homelab website is built using Laravel. For the Android-based Homelab application, a dedicated Application Programming Interface (API) with JWT security was developed in this research. Besides the question-and-answer feature, Homelab includes a deep-learning-based virtual conversation agent (chatbot) built on a retrieval model that uses a multilayer perceptron and a purpose-built text dataset for conversations about Homelab products. The chatbot uses the Sastrawi library and natural language processing to facilitate the processing of user messages in Indonesian. The output of this research is the chatbot's response together with the probability value from the classification over the available response classes. The system achieves an accuracy rate of 96.43 percent with an average processing time of 0.3 seconds to produce a response.
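
A retrieval-style chatbot of this kind can be sketched as intent classification followed by a canned reply; the tiny dataset, intent names, and replies below are invented, and the multilayer perceptron stands in for the paper's trained model.

```python
# Minimal retrieval-chatbot sketch: classify a user message into a
# response class with a multilayer perceptron, then return a canned
# reply plus the class probability (all training data is invented).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

train_msgs = ["what is homelab", "how do i submit an assignment",
              "thanks", "hello"]
train_intents = ["about", "assignment", "thanks", "greeting"]
replies = {"about": "Homelab is a course discussion platform.",
           "assignment": "Open the assignment tab and press submit.",
           "thanks": "You're welcome!", "greeting": "Hi! How can I help?"}

vec = CountVectorizer()
X = vec.fit_transform(train_msgs)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, train_intents)

def respond(message):
    probs = clf.predict_proba(vec.transform([message]))[0]
    intent = clf.classes_[probs.argmax()]
    return replies[intent], probs.max()  # reply and its probability

print(respond("hello friend"))
```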

Author 1: Aji Naufal Aqil
Author 2: Burhanuddin Dirgantara
Author 3: Istikmal
Author 4: Umar Ali Ahmad
Author 5: Reza Rendian Septiawan
Author 6: Alex Lukmanto Suherman

Keywords: API; Chatbot; deep learning; Homelab; question and answer forum

Download PDF

Paper 71: Optimized Energy-efficient Load Balance Routing Protocol for Wireless Mesh Networks

Abstract: Wireless mesh network (WMN) technology has gained users' attention due to its deployment flexibility. The main challenge in WMNs is the provision of Quality of Service (QoS) because of unbalanced traffic in communication. Moreover, recent advances in wireless technology have fueled users' attraction to delay-sensitive services, which places an additional burden on WMNs to provide QoS. This paper aims to provide QoS in WMNs through load balancing and energy efficiency. In the literature, various mechanisms have been designed to address the issue, but they fail to achieve an optimal solution in terms of throughput and energy efficiency. Thus, this work designs an energy-efficient load-balancing routing metric to address the limitations of existing conventional methods. Load balancing is accomplished by selecting non-congested nodes, and energy efficiency is attained by selecting the nodes with the greatest packet-processing capability for communication. Both kinds of nodes are used to compute the route between source and destination. The performance of the proposed routing protocol is analyzed with a network simulator, and the results outperform recent existing methods.

Author 1: M Kiran Sastry
Author 2: Arshad Ahmad Khan Mohammad
Author 3: Arif Mohammad Abdul

Keywords: Wireless mesh network; routing; energy efficiency; optimization

Download PDF

Paper 72: Vertical Handover Algorithm for Telemedicine Application in 5G Heterogeneous Wireless Networks

Abstract: In this fast-paced technology era, the advancement of telecommunication systems has made many advanced technologies possible. With the help of 5G technology, more technologies will become reality, and telemedicine is one of them. Numerous studies have shown that the fatality rate of ischemic heart disease cases can be reduced by sending real-time patient health data from an ambulance to the medical centre so that healthcare professionals can prepare early and give immediate treatment in the golden hour. 5G technology offers a high data rate and low latency; however, the coverage of 5G is small compared to 4G. This induces a high number of unnecessary handovers when an ambulance traverses 5G networks at high speed, leading to degradation of service quality. Therefore, a fast and accurate vertical handover decision-making algorithm is needed to minimize unnecessary handovers in high-speed scenarios. This paper proposes a handover algorithm that integrates Travelling Time Estimation, the Fuzzy Analytic Hierarchy Process (FAHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to reduce unnecessary handovers in 5G heterogeneous networks. The simulation results show that the proposed algorithm reduces handovers by up to 80.3% compared with the FAHP-TOPSIS based handover algorithm in the high-speed scenario. The proposed handover algorithm can improve the quality of telemedicine services in high-speed scenarios.
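
The TOPSIS ranking step used in such handover decisions can be shown compactly; the criteria, weights, and candidate-network values below are invented, and a real system would take its weights from the FAHP stage.

```python
# Sketch of TOPSIS ranking over candidate networks (values invented).
import numpy as np

# rows: candidate networks; columns: data rate, latency, RSSI
scores = np.array([[100.0, 20.0, -70.0],
                   [ 50.0,  8.0, -60.0],
                   [ 20.0,  5.0, -55.0]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, True])  # latency is a cost criterion

norm = scores / np.linalg.norm(scores, axis=0)   # vector normalization
v = norm * weights                               # weighted matrix
ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
d_pos = np.linalg.norm(v - ideal, axis=1)
d_neg = np.linalg.norm(v - anti,  axis=1)
closeness = d_neg / (d_pos + d_neg)  # higher = closer to the ideal network
print("best candidate:", int(closeness.argmax()))
```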

Author 1: Boh Wen Diong
Author 2: Mark Irwin Goh
Author 3: Seng Kheau Chung
Author 4: Ali Chekima
Author 5: Hoe Tung Yew

Keywords: Mobile terminal; vertical handover; heterogeneous networks; unnecessary handover; telemedicine; TOPSIS

Download PDF

Paper 73: Analysis of Electroencephalography Signals using Particle Swarm Optimization

Abstract: Brain-computer interface devices monitor brain signals and convert them into control commands in an attempt to imitate certain human cognitive functions. Numerous studies and applications have been developed in recent years because of researchers' interest in such systems. The capacity to categorize electroencephalograms is essential for building effective brain-computer interfaces. In this paper, three experiments were performed to categorize brain signals with the goal of improving a model for EEG data analysis. An investigation is carried out to determine whether characteristics derived from interactions across channels can be more accurate than features taken from individual channels. Several machine learning techniques were applied, such as K-Nearest Neighbors, Long Short-Term Memory, and Decision Tree, to detect and analyze the EEG signals from three different datasets, and the best accuracy results were obtained using the particle swarm optimization algorithm, which markedly reduced the dimension of the feature vector and improved the accuracy results.
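
Particle swarm optimization for dimensionality reduction can be sketched as a swarm over feature masks; the synthetic dataset, fitness function, and all constants below are illustrative, and the paper's exact PSO variant may differ.

```python
# Toy PSO for feature selection (dataset and constants are illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return acc - 0.01 * mask.sum()  # reward accuracy, penalize dimension

n_particles, dim, w, c1, c2 = 10, X.shape[1], 0.7, 1.5, 1.5
pos = rng.random((n_particles, dim))      # continuous positions in [0, 1]
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([fitness(p > 0.5) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(20):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    fits = np.array([fitness(p > 0.5) for p in pos])
    improved = fits > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", np.flatnonzero(gbest > 0.5))
```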

Author 1: Shereen Essam Elbohy
Author 2: Laila Abdelhamed
Author 3: Farid Mousa Ali
Author 4: Mona M. Nasr

Keywords: Electroencephalographic; k-nearest neighbors; long short-term memory; epileptic seizure recognition; decision tree

Download PDF

Paper 74: Fuzzy C-mean Missing Data Imputation for Analogy-based Effort Estimation

Abstract: The accuracy of effort estimation is one of the major factors in the success or failure of software projects. Analogy-Based Estimation (ABE) is a widely accepted estimation model since it follows the human approach of selecting analogies similar in nature to the target project. The accuracy of prediction in the ABE model is strongly associated with the quality of the dataset since it depends on previously completed projects for estimation. Missing Data (MD) is one of the major challenges in software engineering datasets. Several missing data imputation techniques have been investigated by researchers for the ABE model. Identifying the most similar donor values from the completed software projects dataset for imputation is a challenging issue in the existing missing data techniques adopted for the ABE model. In this study, Fuzzy C-Mean Imputation (FCMI), Mean Imputation (MI) and K-Nearest Neighbor Imputation (KNNI) are investigated to impute missing values in the Desharnais dataset under different missing data percentages (Desh-Miss1, Desh-Miss2) for the ABE model. The ABE-FCMI technique is proposed in this study. An evaluation comparison among MI, KNNI, and ABE-FCMI is conducted to identify the suitable MD imputation method for the ABE model. The results suggest that the use of ABE-FCMI, rather than MI and KNNI, imputes more reliable values for incomplete software projects in the missing datasets. It was also found that the proposed imputation method significantly improves the software development effort prediction of the ABE model.
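
The two baseline imputers (MI and KNNI) are available off the shelf in scikit-learn, as the sketch below shows on invented project data; scikit-learn has no fuzzy c-means imputer, so an FCMI analogue would instead replace each missing cell with a membership-weighted combination of cluster centroids.

```python
# Sketch comparing mean imputation (MI) and k-nearest-neighbour
# imputation (KNNI) on an incomplete effort dataset (values invented).
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

# columns: team size, duration (months), effort (person-hours)
projects = np.array([[5.0, 12.0, 4000.0],
                     [8.0, np.nan, 6500.0],
                     [3.0,  6.0, np.nan],
                     [9.0, 14.0, 7200.0]])

print(SimpleImputer(strategy="mean").fit_transform(projects))
print(KNNImputer(n_neighbors=2).fit_transform(projects))
```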

Author 1: Ayman Jalal AlMutlaq
Author 2: Dayang N. A. Jawawi
Author 3: Adila Firdaus Binti Arbain

Keywords: Analogy-based effort estimation; imputation; missing data; fuzzy c-mean

Download PDF

Paper 75: Simplified IT Risk Management Maturity Audit System based on “COBIT 5 for Risk”

Abstract: In recent years, risk management has emerged as a key success factor in ensuring both the growth and the survival of any organization. Moreover, dependence on IT has become systematic within organizations. This dependence implies the importance of implementing an IT risk management system in order to manage IT risks well. There are several standards that deal with enterprise risk management in general or information security in particular; however, few standards deal with IT risk management. One example is COBIT 5 (Control Objectives for Information and related Technology), which deals with IT risk management but is complicated to deploy. The purpose of this article is to describe a simplified IT risk management maturity audit system based on “COBIT 5 for Risk”. This system aims to evaluate the maturity of IT risk management before proceeding to the implementation or update of an IT risk management system within an organization.

Author 1: Hasnaa Berrada
Author 2: Jaouad Boutahar
Author 3: Souhaïl El Ghazi El Houssaïni

Keywords: IT risk management; COBIT 5 for risk; maturity audit system; COBIT 5 enablers; analysis axes; maturity scale and score; maturity audit report

Download PDF

Paper 76: ExMrec2vec: Explainable Movie Recommender System based on Word2vec

Abstract: Based on the user profile, a recommender system aims to offer items that may interest the user. Recommendations have been applied successfully in various fields; recommended items include movies, books, travel and tourism services, friends, research articles, research queries, and much more. Hence recommender systems are present in many areas, in particular movie recommendation. Most current machine learning recommender systems act as black boxes that do not provide the user with any insight into, or justification for, the system's logic, which puts users at risk of losing their confidence. Recommender systems also suffer from information overload, which poses numerous problems, including high cost, slow data processing, and poor time complexity. That is why researchers have been using graph embedding algorithms, which have been successful in recent years, in the recommendation field to reduce the quantity of data. This work aims to improve the quality of recommendations and the simplicity of recommendation explanations based on the word2vec graph embedding model.
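
One common way to apply word2vec to recommendation, shown below as a hedged sketch rather than the paper's exact pipeline, is to treat each user's interaction history as a "sentence" of item IDs; the histories and movie names are invented.

```python
# Minimal word2vec-based item recommendation sketch with gensim:
# each user's watch history becomes one training "sentence" of IDs.
from gensim.models import Word2Vec

histories = [["toy_story", "shrek", "up", "cars"],
             ["alien", "blade_runner", "dune", "up"],
             ["shrek", "cars", "up", "dune"]]

model = Word2Vec(histories, vector_size=16, window=3,
                 min_count=1, sg=1, epochs=200, seed=0)

# Items embedded near a liked item become recommendation candidates,
# and the neighbour list itself doubles as a simple explanation.
print(model.wv.most_similar("up", topn=3))
```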

Author 1: Amina SAMIH
Author 2: Abderrahim GHADI
Author 3: Abdelhadi FENNAN

Keywords: Recommender system; explainable artificial intelligence; machine learning; Word2vec

Download PDF

Paper 77: Using Blockchain in the Internet of Things Coordination

Abstract: Nowadays, the Internet of Things (IoT) has generated enormous interest from industry in creating distributed and innovative solutions. However, achieving this goal is a tedious task and presents several open challenges, as the literature points out. One of the most complex is the IoT coordination service. Unfortunately, most research works rarely give importance to this service in their proposed models or architectures. Our contribution therefore deals with this open issue and proposes a solution capable of implementing advanced processes based on orchestration, choreography, or both mechanisms. Moreover, to conduct both coordination mechanisms efficiently when sharing knowledge or tasks between connected objects, we integrate smart contracts to guarantee the modalities of behavior change in the coordination mechanism. Smart contracts are a safe way to decide on the coordination mechanism based on the state of the system environment. To prove our approach, we have built a technical architecture based on a multi-agent system to abstract the connected objects of IoT systems, blockchain technology, and the frameworks and languages required for collaboration processes, such as BPMN, BPEL and BPEL4CHOR. Carbon leakage is used as a case study for experimentation.

Author 1: Radia Belkeziz
Author 2: Zahi Jarir

Keywords: Internet of things; IoT; Internet of things coordination; blockchain; smart contract; multi-agent systems

Download PDF

Paper 78: Improving Data Security using Compression Integrated Pixel Interchange Model with Efficient Encryption Technique

Abstract: We live in a digital era in which communication is largely based on the transfer of digital information over data networks, and it is difficult to maintain the privacy of that information as files travel from source to destination. An efficient, simple, yet secure encoding model must therefore be developed for quick and prompt transmission. The encryption of images is vital for protecting sensitive information or images from unauthorized readers. By using a selective encryption technique, the original image pixel values are completely obscured so that an intruder cannot retrieve statistical data from the original image. This paper introduces a new methodology for securely transmitting digital data over public networks. Image compression, a well-established field, is employed for speedy transmission and efficient information storage. During the initial process, the original image is divided into blocks of the same size and sub-segmentation is performed for accurate extraction of image content within the boundaries. A random matrix is used to swap the pixels of neighboring sub-blocks: each pixel is randomly exchanged with the neighboring blocks according to the random matrix, each block is then encrypted with the proposed function, and the encrypted data can be stored in the cloud. The proposed method uses an Image Segmentation based Compression Model with Pixel Interchange Encryption (ISbCPIE) to provide high security for images transmitted over the network. Compression is needed to achieve rapid transmission and efficient storage. The proposed model is compared with traditional models, and the results show that its security levels are better than those of existing models.
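
The block-wise pixel interchange idea can be demonstrated in a few lines; the block size, keyed random matrix, and stand-in image below are illustrative, not the ISbCPIE model's actual parameters.

```python
# Sketch of block-wise pixel interchange: split the image into equal
# blocks and swap pixels between neighbouring blocks according to a
# keyed random matrix (block size and key are illustrative).
import numpy as np

data_rng = np.random.default_rng(0)
key_rng = np.random.default_rng(42)          # the seed acts as the key
img = data_rng.integers(0, 256, (8, 8), dtype=np.uint8)  # stand-in image
B = 4                                        # block size

blocks = [img[r:r+B, c:c+B].copy()
          for r in range(0, 8, B) for c in range(0, 8, B)]

# keyed random matrix: which positions are exchanged between neighbours
swap_mask = key_rng.random((B, B)) > 0.5
for a, b in [(0, 1), (2, 3)]:                # neighbouring block pairs
    tmp = blocks[a][swap_mask].copy()
    blocks[a][swap_mask] = blocks[b][swap_mask]
    blocks[b][swap_mask] = tmp

# Repeating the same swaps with the same key restores the original,
# so the interchange is its own inverse under the shared key.
```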

Author 1: Naga Raju Hari Manikyam
Author 2: Munisamy Shyamala Devi

Keywords: Image segmentation; image pixel extraction; pixel compression; pixel interchange; image security

Download PDF

Paper 79: Cyclic Path Planning of Hyper-redundant Manipulator using Whale Optimization Algorithm

Abstract: This paper develops a path planning algorithm for hyper-redundant manipulators to achieve a cyclic property. The basic idea is based on a geometrical analysis of a 3-link planar series manipulator in which there is an orientation angle boundary for a prescribed path. To achieve repetitive behavior for hyper-redundant manipulators consisting of 3-link components, an additional path is chosen in such a way that it is a repetitive curve with the same curve frequency as the prescribed end-effector path. To solve the redundancy resolution, meta-heuristic optimizations, namely the Genetic Algorithm (GA) and the Whale Optimization Algorithm (WOA), are applied to search for optimal trajectories inside local orientation angle boundaries. Results show that, using constant local orientation angle trajectories for the 3-link component, the cyclic properties can be achieved. The performance of the swarm-based meta-heuristic optimization, namely the WOA, is very promising: it generally obtains a lower fitness value than the GA. Depending on the complexity of the path planning, dividing the path into several stages via intermediate points may be necessary to achieve a good posture. Using the developed approach, not only is the cyclic property obtained but the optimal movement of the hyper-redundant manipulator is also achieved.
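
For readers unfamiliar with WOA, the core position updates (encircling, random search, and the bubble-net spiral) are sketched below on a placeholder objective; the manipulator fitness function and all constants are stand-ins, not the paper's.

```python
# Standard WOA position updates on a stand-in objective (minimize x^2).
import numpy as np

rng = np.random.default_rng(1)
fitness = lambda x: np.sum(x**2)           # placeholder objective
pop = rng.uniform(-5, 5, (20, 3))
best = min(pop, key=fitness).copy()

T = 100
for t in range(T):
    a = 2 - 2 * t / T                      # a decreases linearly 2 -> 0
    for i in range(len(pop)):
        r = rng.random(3)
        A, C = 2 * a * r - a, 2 * rng.random(3)
        if rng.random() < 0.5:             # encircling / random search
            ref = best if np.all(np.abs(A) < 1) else pop[rng.integers(len(pop))]
            pop[i] = ref - A * np.abs(C * ref - pop[i])
        else:                              # bubble-net spiral update
            l = rng.uniform(-1, 1)
            D = np.abs(best - pop[i])
            pop[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
        if fitness(pop[i]) < fitness(best):
            best = pop[i].copy()

print("best solution:", best)
```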

Author 1: Affiani Machmudah
Author 2: Setyamartana Parman
Author 3: Aijaz Abbasi
Author 4: Mahmud Iwan Solihin
Author 5: Teh Sabariah Abd Manan
Author 6: Salmia Beddu
Author 7: Amiruddin Ahmad
Author 8: Nadiah Wan Rasdi

Keywords: Hyper-redundant; path planning; whale optimization algorithm; sustainable manufacturing

Download PDF

Paper 80: Deep Learning Predictive Model for Colon Cancer Patient using CNN-based Classification

Abstract: In recent years, the area of medicine and healthcare has made significant advances with the assistance of computational technology. During this time, new diagnostic techniques were developed. Cancer is the world's second-largest cause of mortality, claiming the lives of one out of every six individuals. Of the numerous kinds of cancer, colon cancer is among the most frequent and lethal. Identifying the illness at an early stage, however, substantially increases the odds of survival. A cancer diagnosis can be automated using the power of Artificial Intelligence (AI), allowing more cases to be evaluated in less time and at a lower cost. In this research, CNN models are employed to analyse imaging data of colon cells. For colon cell image classification, CNNs with max pooling and average pooling layers and the MobileNetV2 model are utilized. The models are trained and evaluated at various epochs to determine the learning rate. It is found that the accuracies of the max pooling and average pooling variants are 97.49% and 95.48%, respectively, and MobileNetV2 outperforms the other two models with the most remarkable accuracy of 99.67% with a data loss of 1.24.
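
The two pooling variants compared above can be set up symmetrically in Keras, as in the hedged skeleton below; the input size, filter counts, and class count are illustrative, not the paper's exact configuration.

```python
# Skeleton of the max-pooling vs. average-pooling comparison in Keras
# (architecture details are illustrative).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(pooling_layer):
    return models.Sequential([
        layers.Input(shape=(64, 64, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        pooling_layer(),
        layers.Conv2D(64, 3, activation="relu"),
        pooling_layer(),
        layers.Flatten(),
        layers.Dense(2, activation="softmax"),  # benign vs. malignant
    ])

max_pool_model = build_cnn(layers.MaxPooling2D)
avg_pool_model = build_cnn(layers.AveragePooling2D)
max_pool_model.compile(optimizer="adam",
                       loss="sparse_categorical_crossentropy",
                       metrics=["accuracy"])
# avg_pool_model is compiled and trained the same way, and the two
# validation accuracies are then compared across epochs.
```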

Author 1: Zarrin Tasnim
Author 2: Sovon Chakraborty
Author 3: F. M. Javed Mehedi Shamrat
Author 4: Ali Newaz Chowdhury
Author 5: Humaira Alam Nuha
Author 6: Asif Karim
Author 7: Sabrina Binte Zahir
Author 8: Md. Masum Billah

Keywords: Colon cancer; MobileNetV2; Max pooling; Average pooling; data loss; accuracy

Download PDF

Paper 81: Towards Indian Sign Language Sentence Recognition using INSIGNVID: Indian Sign Language Video Dataset

Abstract: Sign language, used by the Deaf community, is a fully visual language with its own grammar. Deaf people find it very difficult to express their feelings to other people, since others lack knowledge of the sign language used by the Deaf community. Due to differences in the vocabulary and grammar of sign languages, complete adoption of methods used for other international sign languages is not possible for Indian Sign Language (ISL) recognition. It is difficult to handle continuous sign language sentence recognition and translation into text because no large video dataset of ISL sentences is available. INSIGNVID, the first Indian Sign Language video dataset, is therefore proposed, and with this dataset as input, a novel approach is presented that converts videos of ISL sentences into appropriate English sentences using transfer learning. The proposed approach gives promising results on our dataset with MobileNetV2 as the pretrained model.

Author 1: Kinjal Mistree
Author 2: Devendra Thakor
Author 3: Brijesh Bhatt

Keywords: Indian sign language; sign language recognition; pretrained models; transfer learning; vision-based approaches

Download PDF

Paper 82: Automated Pavement Distress Detection, Classification and Measurement: A Review

Abstract: Road surface distress is unavoidable due to age, vehicle overloading, temperature changes, etc. In the past, pavement maintenance actions took place only after too much pavement damage had accumulated, which leads to costly corrective actions. Scheduled road surface inspections can therefore extend service life while guaranteeing users' security and comfort. Traditional manual and visual inspections do not meet today's criteria and consume a relatively large amount of time. The Smart City preventive approach to pavement management requires accurate and scalable data to derive significant indicators and plan efficient maintenance programs. However, the quality of the data depends on the sensors used and the conditions during scanning. Many studies focusing on different sensors, machine learning algorithms and deep neural networks have tried to find a sustainable solution. Despite all these studies, pavement distress measurement is still a challenge in Smart Cities, because distress detection alone is not enough to decide on the required maintenance actions: damage localization, dimensions and future development should be detected in real time. This paper summarizes the state-of-the-art methods and technologies used in recent years for pavement distress detection, classification and measurement. The aim is to evaluate current methods and highlight their limitations, to lay out a blueprint for future research. A PMS (Pavement Management System) in Smart Cities requires automated pavement distress monitoring and maintenance with high accuracy for large road networks.

Author 1: Brahim Benmhahe
Author 2: Jihane Alami Chentoufi

Keywords: Automated pavement distress detection; smart cities; pavement management system; machine learning; deep neural networks

Download PDF

Paper 83: A Model-driven Architecture for Collaborative Business Processes

Abstract: Model Driven Engineering (MDE) was developed to make application development more flexible; it provides a comprehensive interoperability framework for defining interconnected systems and aims to reduce the inherent complexity that partners must face when developing their systems. In collaborative environments, where systems are built through the collaboration of several departments or companies, the MDA (Model Driven Architecture) approach is effective for maintaining and developing this type of system. This paper shows the use of MDE in the context of business process management and presents in detail an architecture for the development of collaborative business processes.

Author 1: Leila Amdah
Author 2: Naima Essadi
Author 3: Adil Anwar

Keywords: Model Driven Engineering (MDE); Business Process Model and Notation (BPMN); Business Process Execution Language (BPEL); ATLAS Transformation Language (ATL); BPMN to BPEL Transformation

Download PDF

Paper 84: Comprehensive Study on Machine Learning Techniques for Software Bug Prediction

Abstract: Software bugs are defects or faults in computer programs or systems that cause incorrect or unexpected operation. They negatively affect software quality, reliability, and maintenance cost; therefore many researchers have already built and developed models for software bug prediction. So far, however, only a few works have applied machine learning techniques to software bug prediction. The aim of this paper is to present a comprehensive study on machine learning techniques that have been successfully used to predict software bugs. The paper also presents a software bug prediction model based on the supervised machine learning algorithms Decision Tree (DT), Naïve Bayes (NB), Random Forest (RF) and Logistic Regression (LR), evaluated on four datasets. We compared the results of our proposed models with those of other studies. The results of this study demonstrate that our proposed models performed better than other models that used the same datasets. The evaluation process and the results of the study show that machine learning algorithms can be used effectively for bug prediction.
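
The four-model comparison can be reproduced in outline with scikit-learn; the synthetic dataset below stands in for the four software-metric datasets, which are not shown here.

```python
# Sketch of the four supervised models, scored with cross-validation
# on a stand-in dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=1)
models = {"DT": DecisionTreeClassifier(random_state=1),
          "NB": GaussianNB(),
          "RF": RandomForestClassifier(random_state=1),
          "LR": LogisticRegression(max_iter=1000)}

for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10).mean()
    print(f"{name}: mean 10-fold accuracy = {acc:.3f}")
```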

Author 1: Nasraldeen Alnor Adam Khleel
Author 2: Károly Nehéz

Keywords: Static code analysis; software bug prediction; software metrics; machine learning techniques

Download PDF

Paper 85: The Effect of Adaptive Learning Rate on the Accuracy of Neural Networks

Abstract: Learning rates in gradient descent algorithms have significant effects, especially on the accuracy of a Convolutional Neural Network (CNN). Choosing an appropriate learning rate is still an open issue, and many developers have trouble selecting a learning rate for a CNN, leading to low classification accuracy. This gap motivated this study to assess the effect of the learning rate on the accuracy of a developed CNN. There are no predefined learning rates for CNNs, and therefore it is hard for researchers to know in advance which learning rate will give good results. This work therefore focused on assessing the effect of the learning rate on the accuracy of a CNN by using different learning rates and observing the best performance. The contribution of this work is to suggest an appropriate learning rate for CNNs to improve classification accuracy, based on an assessment of different learning rates for CNN plant leaf disease classification. Part of the images used in this work came from the PlantVillage dataset, while others came from the Nepal database. The images were pre-processed and then fed to the CNN model for classification. With a learning rate of 0.0001, the best performance was 99.4% on testing and 100% on training. With a learning rate of 0.00001, the highest performance was 97% on testing and 99.9% on training. The lowest performance observed was 81% accuracy on testing and 99% on training with a learning rate of 0.001. This work observed that the CNN was able to achieve its highest accuracy with a learning rate of 0.0001, with the best Convolutional Neural Network accuracy observed at 98% on testing and 100% on training.
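
The underlying effect is easiest to see on a one-parameter example: the same gradient descent either converges, crawls, or (for too large a step) diverges depending only on the learning rate. The quadratic loss and step sizes below are chosen purely for illustration.

```python
# Tiny gradient-descent demonstration of learning-rate sensitivity.
def gradient_descent(lr, steps=50):
    w = 5.0                     # start far from the minimum at w = 0
    for _ in range(steps):
        grad = 2 * w            # derivative of loss(w) = w**2
        w -= lr * grad
    return w

for lr in (0.001, 0.0001, 0.00001):
    print(f"lr={lr}: final w = {gradient_descent(lr):.4f}")
# Very small rates barely move the parameter, while a rate above 1.0
# would overshoot and diverge here, hence the empirical tuning above.
```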

Author 1: Jennifer Jepkoech
Author 2: David Muchangi Mugo
Author 3: Benson K. Kenduiywo
Author 4: Edna Chebet Too

Keywords: CNN; ConvNet; learning rate; gradient descent

Download PDF

Paper 86: A Systematic Review of Web Content Mining Tools and their Applications

Abstract: In recent years, the emergence of the WWW (World Wide Web) has led to the accumulation of huge amounts of information and data. The web thus consists of unstructured and structured information that impacts the day-to-day life of society. Because of the availability of such huge amounts of information, making use of the required information becomes ever more challenging. This paper provides a comprehensive survey of the current situation and recent trends in web content mining (WCM) and its applications, thereby contributing to the enhancement of upcoming research in WCM. The paper focuses mainly on mining and retrieval techniques, various WCM approaches, challenges, and the processes of information retrieval and information extraction. The paper describes in detail the four major tasks of web content mining, namely information retrieval, information extraction, generalization and validation. WCM concentrates on orchestrating, sorting, classifying, collecting and congregating web data to provide improved data that can be easily accessed by users. Web content mining tools are needed to scan text, images and HTML documents and provide results to the search engine, guiding it to deliver more productive results for every search based on their importance. The paper also analyses different web content mining tools for the extraction of relevant information from the corresponding web pages.

Author 1: Manjunath Pujar
Author 2: Monica R Mundada

Keywords: Web content mining; web structure mining; web usage mining; data mining; information retrieval; information extraction

Download PDF

Paper 87: Usability and Learning Environment of a Virtual Reality Simulator for Laparoscopic Surgery Training

Abstract: One of the critical aspects of laparoscopic surgery is the training of medical students to acquire experience and the correct use of the equipment, which is usually difficult due to various circumstances. One way to improve these activities is through Virtual Reality technology, which allows immersion in scenarios that simulate reality; to use it correctly, however, its usability and learning environment must be considered. This work aims to develop Virtual Reality simulator software that supports the training of students in laparoscopic surgery, evaluating its usability and learning environment. The proposal was developed in five levels so that the student can concentrate better on each task. The levels developed are: clamps, clamps for the camera, cut, camera cut, and cut with clamps; each level has different tasks to perform, with which the student can interact in a more orderly way. The evaluation of usability and the learning environment was carried out through a survey using a questionnaire. The reliability of the instrument was analysed using Cronbach's alpha, and the correlation of its items was assessed with Spearman's coefficient. The results showed that 85.85% of the responses were positive for the learning environment area and 81.18% were positive for the usability area, so it is concluded that the proposal can help train medical students procedurally and practically to develop skills in laparoscopic surgery.

Author 1: Karina Rosas-Paredes
Author 2: José Esquicha-Tejada
Author 3: Héctor Manrique Morante
Author 4: Agueda Muñoz del Carpio Toia

Keywords: Usability; learning environment; virtual reality; training; simulator; medical; laparoscopic surgery

Download PDF

Paper 88: Detection of Hepatoma based on Gene Expression using Unitary Matrix of Singular Vector Decomposition

Abstract: Hepatoma is a long-term disease with a high risk of mortality. The disease is often detected late, at stage four, due to silent symptoms. The hepatitis B virus gene HBx is a viral genome component that triggers liver disease: the virus inserts genetic material into the host and disturbs the cell cycle, blocking the regulation of gene expression so that cells work abnormally, especially in repair and degradation. A microarray is a tool that quantifies RNA gene expression in huge volumes without any prior information about the related potential genes. Therefore, this study proposes a feature extraction method using the unitary matrix of singular value decomposition to simplify the classification model for hepatoma detection. In principle, the features are decomposed using singular vectors to obtain a rank-k pattern. This matrix is applied with representative machine learning algorithms, including KNN, Naïve Bayes, the C5.0 Decision Tree, and SVM. The experimental results achieved high performance, with an Area Under the Curve (AUC) above 90% on average.
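
Rank-k SVD feature extraction of this kind can be sketched in a few lines; the random matrix below only mimics the shape of an expression dataset (samples by gene probes), and k, the split, and the classifier are illustrative.

```python
# Sketch of rank-k SVD feature extraction: project each sample onto the
# top singular vectors, then classify the reduced features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(60, 2000))       # 60 samples x 2000 gene probes
y = rng.integers(0, 2, 60)            # hepatoma vs. normal labels

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 10
X_reduced = X @ Vt[:k].T              # rank-k projection of each sample

clf = KNeighborsClassifier().fit(X_reduced[:40], y[:40])
print("held-out accuracy:", clf.score(X_reduced[40:], y[40:]))
```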

Author 1: Lailil Muflikhah
Author 2: Nashi Widodo
Author 3: Wayan Firdaus Mahmudy
Author 4: Solimun
Author 5: Ninik Nihayatul Wahibah

Keywords: Hepatoma; gene expression; feature extraction; unitary matrix

Download PDF

Paper 89: Anti-Islamic Arabic Text Categorization using Text Mining and Sentiment Analysis Techniques

Abstract: The aim of this research is to detect and classify websites whose content encourages the spreading of hate speech toward Islam and Muslims, or Islamophobia, using sentiment analysis and web text mining techniques. In this research, a large dataset corpus has been collected to identify and classify anti-Islamic online content. Our target is to automatically detect the content of websites that are hostile to Islam and transmit extremist ideas against it, with the main purpose of reducing the spread of webpages that give a wrong idea about Islam. A proper dataset was collected from different sources, and two datasets for the Arabic language (balanced and unbalanced) were produced. The framework of the proposed approach is described. The approach is based on supervised Machine Learning (ML) using Support Vector Machine (SVM) and Multinomial Naive Bayes (MNB) models as classifiers, with Term Frequency-Inverse Document Frequency (TF-IDF) for feature extraction. Different experiments, including word-level and tri-gram-level features on the two datasets, were conducted and the obtained results compared. The experimental results show that the supervised ML approach using word-level features performs best for both datasets, producing a high accuracy of 97% on the balanced Arabic dataset using the SVM algorithm with TF-IDF feature extraction. Finally, an interactive web-application prototype was developed to detect and classify toxic language such as anti-Islamic online text content.
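
The best-performing setup (word-level TF-IDF with SVM) corresponds to a very small scikit-learn pipeline; the two training texts and labels below are invented placeholders for the labelled Arabic corpus.

```python
# Word-level TF-IDF plus SVM pipeline (training texts are placeholders).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["example of hostile text", "example of neutral text"]
labels = ["anti-islamic", "neutral"]

clf = make_pipeline(TfidfVectorizer(analyzer="word"), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["another neutral example"]))
# A tri-gram variant could swap in TfidfVectorizer(ngram_range=(3, 3)).
```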

Author 1: Rawan Abdullah Alraddadi
Author 2: Moulay Ibrahim El-Khalil Ghembaza

Keywords: Web Text mining; text classification; Arabic computational linguistics; natural language processing; SVM; MNB; opinion mining; hate speech; toxic language detection

Download PDF

Paper 90: Machine Learning Model to Analyze Telemonitoring Dysphonia Factors of Parkinson’s Disease

Abstract: For many years, many people all over the world have been suffering from Parkinson’s disease (PD), and several datasets have been generated by recording important PD features for reliable diagnostic decision-making. However, a dataset can contain correlated data points and outliers that affect its output. In this work, a framework is proposed in which the performance of an original dataset is compared with the performance of its reduced version after removing correlated features and outliers. The dataset is collected from the UCI Machine Learning Repository, and many machine learning (ML) classifiers are used to evaluate its performance in various categories. The same process is repeated on the reduced dataset, and some improvement in prediction accuracy is observed. Among the ANOVA F-test, RFE, MIFS, and CSFS methods, the Logistic Regression classifier with RFE-based feature selection outperforms all other classifiers. We observed that our improved system demonstrates 82.94% accuracy, 82.74% ROC, and an 82.9% F-measure, along with a 17.46% false positive rate and a 17.05% false negative rate, which are better than the prediction accuracy metric values of the primary dataset. Therefore, we hope that this model can help physicians diagnose PD more explicitly.
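
The winning combination (RFE wrapped around logistic regression) looks like this in scikit-learn; the synthetic data stands in for the actual Parkinson's telemonitoring features, and the number of features to keep is illustrative.

```python
# Sketch of RFE-based feature selection with logistic regression.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=22, random_state=3)
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10)
selector.fit(X, y)

kept = [i for i, keep in enumerate(selector.support_) if keep]
print("kept feature indices:", kept)
print("accuracy on training data:", selector.score(X, y))
```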

Author 1: Mohimenol Islam Fahim
Author 2: Syful Islam
Author 3: Sumaiya Tun Noor
Author 4: Md. Javed Hossain
Author 5: Md. Shahriar Setu

Keywords: Parkinson’s disease; correlation; outliers; machine learning; RFE-based analysis

Download PDF

Paper 91: Vietnamese Sentence Paraphrase Identification using Pre-trained Model and Linguistic Knowledge

Abstract: The paraphrase identification task identifies whether two text segments share the same meaning, thereby playing a crucial role in various applications, such as computer-assisted translation, question answering, machine translation, etc. Although the literature on paraphrase identification in English and other popular languages is vast and growing, research on this topic in Vietnamese remains relatively untapped. In this paper, we propose a novel method to classify Vietnamese sentence paraphrases, which deploys both a pre-trained model to exploit the semantic context and linguistic knowledge to provide further information in the identification process. Two branches of neural networks built in the Siamese architecture are also responsible for learning the differences among the sentence representations. To evaluate the proposed method, we present experiments on two existing Vietnamese sentence paraphrase corpora. The results show that, for the same corpora, our method using PhoBERT features yields a 94.97% F1-score on the VnPara corpus and a 93.49% F1-score on the VNPC corpus, better than the results of the Siamese LSTM method and the pre-trained models.
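
The PhoBERT feature-extraction step, though not the trained Siamese branches themselves, can be sketched with the Hugging Face transformers library; mean pooling is one common choice, and real Vietnamese input would normally be word-segmented first, which is omitted here.

```python
# Hedged sketch: encode two sentences with PhoBERT and compare their
# mean-pooled vectors (the paper trains Siamese branches on top of
# features like these; this shows only the extraction step).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
model = AutoModel.from_pretrained("vinai/phobert-base")

def embed(sentence):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, tokens, 768)
    return hidden.mean(dim=1).squeeze(0)            # mean-pooled vector

v1 = embed("Tôi thích đọc sách.")
v2 = embed("Tôi rất thích đọc sách.")
print(float(torch.cosine_similarity(v1, v2, dim=0)))
```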

Author 1: Dien Dinh
Author 2: Nguyen Le Thanh

Keywords: Paraphrase identification; Vietnamese; pre-trained model; linguistics; neural networks

Download PDF

Paper 92: Optimized Design of Decoder 2 to 4, 3 to 8 and n to 2n using Reversible Gates

Abstract: The design of low-power CMOS circuits, nanotechnologies and quantum computing has become increasingly tied to reversible logic. A set of gates has recently been exploited in reversible computing for the design of certain circuits, among them decoders. In this paper we build on a recent study of the design of the 2-to-4, 3-to-8, and n-to-2n decoders. Our work aims to enhance the previous designs by replacing some reversible gates with others while maintaining their functionality and improving their performance criteria, namely the number of gates (CG), number of garbage outputs (NGO), number of constant inputs (NCI), quantum cost (QC) and hardware complexity (HC). Compared with our baseline study and other recent studies, we have obtained remarkable results.

Author 1: Issam Andaloussi
Author 2: Sedra Moulay Brahim
Author 3: Mariam El Ghazi

Keywords: Decoder 2 to 4; Decoder 3 to 8; Decoder n to 2n; Number of Gates (CG); Number of Garbage Output (NGO); Number of Constant Inputs (NCI); Quantum Cost (QC); Hardware Complexity (HC)

Download PDF

Paper 93: Grammatical Error Correction with Denoising Autoencoder

Abstract: A denoising autoencoder sequence-to-sequence model based on the transformer architecture has proved useful for downstream tasks such as summarization, machine translation, and question answering. This paper investigates the possibility of using this model type for grammatical error correction and introduces a novel method of remark-based combining of model checkpoint outputs. The approach was evaluated on the BEA 2019 shared task, achieving state-of-the-art F-scores of 73.90 on the test set and 56.58 on the development set. This was done without any GEC-specific pre-training, only by fine-tuning the autoencoder model and combining checkpoint outputs, which proves that an efficient model for GEC can be trained in a matter of hours using a single GPU.

Author 1: Krzysztof Pajak
Author 2: Adam Gonczarek

Keywords: Denoising autoencoder transformer; sequence-to-sequence; grammatical error correction; model ensembling; error remarks filtering; fine-tuning

Download PDF

Paper 94: A Unique Glottal Flow Parameters based Features for Anti-spoofing Countermeasures in Automatic Speaker Verification

Abstract: The domain of Automatic Speaker Verification (ASV) is blooming with growing developments in feature engineering and artificial intelligence. In spite of this, such systems are liable to spoofing attacks in the form of synthetic or replayed speech. The difficulty in detecting synthetic speech is due to recent advances in voice conversion and text-to-speech systems, which produce natural, indistinguishable speech. Preventing such attacks requires robust spoof detection systems. To achieve this goal, we propose estimating Glottal Flow Parameters (GFP) from genuine speech and synthetic spoof samples. The GFP are further parameterized using time, frequency and Liljencrants–Fant (LF) models. Along with the GFP features, Linear Frequency Cepstral Coefficients (LFCC) and statistical parameters are computed. The GFP features are investigated to prove their usefulness in distinguishing spoofed from genuine speech. The ASVspoof 2019 corpus is used to test the framework and evaluate it against the baseline models. The proposed spoof detection framework produces an Equal Error Rate (EER) of 2.39% and a tandem Detection Cost Function (t-DCF) of 0.0562, which is better than the state-of-the-art technique.
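
For context, the reported EER is the operating point where the false acceptance and false rejection rates meet; the small helper below computes it by a threshold sweep over invented genuine and spoof scores.

```python
# Computing an Equal Error Rate from detection scores (scores invented).
import numpy as np

def equal_error_rate(genuine, spoof):
    thresholds = np.sort(np.concatenate([genuine, spoof]))
    best_gap, eer = 1.0, 0.0
    for t in thresholds:
        far = np.mean(spoof >= t)    # spoof accepted as genuine
        frr = np.mean(genuine < t)   # genuine rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer                       # rate where FAR ~= FRR

genuine = np.array([0.9, 0.8, 0.85, 0.7, 0.95])
spoof   = np.array([0.2, 0.4, 0.1, 0.75, 0.3])
print(f"EER = {100 * equal_error_rate(genuine, spoof):.2f}%")
```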

Author 1: Ankita Chadha
Author 2: Azween Abdullah
Author 3: Lorita Angeline

Keywords: Spoof detection; synthetic speech; glottal excitation; speaker verification; voice conversion; text-to-speech

Download PDF

Paper 95: Knowledge Base Driven Automatic Text Summarization using Multi-objective Optimization

Abstract: Automatic text summarization aims to automatically generate a condensed summary from a large set of documents on the same topic. We formulate the text summarization task as a multi-objective optimization problem by defining information coverage and diversity as two conflicting objective functions. With this formulation, we propose a novel technique to improve performance using a knowledge base. The main rationale of the approach is to extract important text features of the original text by detecting important entities in a knowledge base. Next, an improvement to the multi-objective optimization algorithm is also proposed for the automatic text summarization problem. The focus is on improving the efficiency of each step in the evolutionary multi-objective optimization process, which is applicable to all tasks with the same problem formulation. The resulting summary of the suggested method ensures maximum coverage of the original documents and diversity among the sentences in the summary. Experiments on the DUC2002 and DUC2004 multi-document summarization task datasets show that the proposed model is effective compared with other methods.
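
The two conflicting objectives can be written down concretely, as in the hedged sketch below: coverage pulls the selected sentences toward the document centroid, while diversity pushes them apart. The random vectors stand in for real sentence embeddings, and the exact objective definitions in the paper may differ.

```python
# Coverage and diversity objectives over a candidate sentence subset
# (sentence vectors are random stand-ins for TF-IDF embeddings).
import numpy as np

rng = np.random.default_rng(5)
sentences = rng.random((30, 50))               # candidate sentence vectors
centroid = sentences.mean(axis=0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def objectives(idx):
    chosen = sentences[idx]
    coverage = np.mean([cosine(s, centroid) for s in chosen])
    pairwise = [cosine(chosen[i], chosen[j])
                for i in range(len(idx)) for j in range(i + 1, len(idx))]
    diversity = 1.0 - np.mean(pairwise)        # low mutual similarity
    return coverage, diversity                 # both to be maximized

print(objectives([0, 5, 9]))   # one candidate summary's objective pair
```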

Author 1: Chihoon Jung
Author 2: Wan Chul Yoon
Author 3: Rituparna Datta
Author 4: Sukhwan Jung

Keywords: Multi-document summarization; evolutionary multi-objective optimization; knowledge base; named entity recognition

Download PDF

Paper 96: An Adaptive Discrete Brain Storm Algorithm Solves 3D Protein Structure Prediction

Abstract: Brain Storm Optimization (BSO) is one of the most effective swarm intelligence algorithms; it simulates the human brainstorming process to find optima for optimization problems. The BSO method has been successfully applied to many real-world problems. This study employs a BSO method, called BSO-IP, to solve the integer programming problem. Our method collects the best solutions to generate new solutions and then searches for optimal solutions in all areas of the search space. The BSO-IP method solves some benchmark integer programming problems to test its efficiency. To confirm the viability and usefulness of our proposed algorithm, BSO-IP is applied to the 3D protein structure prediction problem, which is mathematically formulated as an integer programming problem. The experimental results on different benchmark protein structures show that our proposed method offers high performance, convergence, and stability in predicting protein structure, and our results are promising compared with those of other methods.

Author 1: Alaa Fahim
Author 2: Nehad Abdelraheem

Keywords: Brain storm optimization; integer programming problem; three dimensional protein structure prediction

Download PDF

Paper 97: Design of a Web System to Optimize the Logistics and Costing Processes of a Chocolate Manufacturing Company

Abstract: This research work focuses on solving a problem faced by a company dedicated to the manufacture of chocolates. The company does not have a computer system to help it improve its management: the information recorded from the logistics and cost processes is stored locally in Excel files, and the information from the operational processes is recorded on paper sheets and then transferred to Excel files. Since information is very valuable to a company, it must be orderly, accessible, and secure. Therefore, the Scrum methodology was applied to develop a prototype web system for the company. The prototype was designed with Adobe XD software because it is easy to use and complete for designing web page interfaces. As a result of developing the prototype, information is recorded in an orderly, interactive, easy, fast, and above all safe way, and it was concluded that the management of the company's logistics and cost area was optimized and improved.

Author 1: Richard Arias-Marreros
Author 2: Keyla Nalvarte-Dionisio
Author 3: Laberiano Andrade-Arenas

Keywords: Adobe XD; costing; logistics; scrum methodology; web system

Download PDF

Paper 98: A RED-BET Method to Improve the Information Diffusion on Social Networks

Abstract: Information diffusion in social networks is widely used in many fields today, from online marketing and e-government campaigns to predicting large social events. Some studies focus on discovering methods to accelerate the parameter calculation for information diffusion forecasting in order to improve the efficiency of the information diffusion problem. Betweenness centrality is a significant indicator for identifying the important people in social networks who should be targeted to maximize information diffusion. Thus, in this paper, we propose the RED-BET method to improve information diffusion on social networks through a hybrid approach that quickly determines the nodes with high betweenness centrality. The main idea of the proposed method is to combine graph reduction with parallelization of the betweenness centrality calculation. Experimental results on the currently popular large datasets of SNAP and AMiner demonstrate that our proposed method improves performance by 1.2 to 1.41 times compared with the teexGraph toolkit, 1.76 to 2.55 times compared with NetworKit, and 1.05 to 1.1 times compared with the bigGraph toolkit.
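
The reduce-then-rank idea can be illustrated with networkx; the pruning rule below (removing degree-1 nodes, which have zero betweenness themselves) is a simplified stand-in for the paper's reduction, and it only approximately preserves the scores of the remaining nodes.

```python
# Illustration of graph reduction before betweenness computation.
import networkx as nx

G = nx.barabasi_albert_graph(500, 3, seed=0)

# Simplified reduction: degree-1 nodes have zero betweenness, so
# pruning them shrinks the graph while roughly preserving the ranking.
reduced = G.copy()
leaves = [n for n, d in reduced.degree() if d == 1]
reduced.remove_nodes_from(leaves)

# The per-source accumulation inside this computation is the part
# that the paper parallelizes.
bc = nx.betweenness_centrality(reduced)
top = sorted(bc, key=bc.get, reverse=True)[:5]
print("high-betweenness seed nodes:", top)
```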

Author 1: Son N. Duong
Author 2: Hanh P. Du
Author 3: Cuong N. Nguyen
Author 4: Hoa N. Nguyen

Keywords: Information diffusion; graph reduction; betweenness centrality; parallel computing

Download PDF

Paper 99: A Comprehensive Study on Intrusion and Extrusion Phenomena

Abstract: This paper presents a comprehensive survey of intrusion and extrusion phenomena and their existing detection and prevention techniques. Intrusion and extrusion events breach the security system and hamper the protection of devices or systems. Needless to say, security threats are flourishing with new levels of complexity, making them difficult to recognize. Security is therefore the key issue at the core of developing a boundless, constant and reliable web. In this paper, our purpose is to unveil and categorize all possible intrusion and extrusion events, bring out the issues related to these events, and explore the solutions associated with them. We also offer recommendations to improve security with respect to these issues. We strongly believe that this survey can help in understanding intrusion and extrusion phenomena, and pave the way for better designs to protect against security threats.

Author 1: Md. Abdul Hamid
Author 2: Marjia Akter
Author 3: M. F. Mridha
Author 4: Muhammad Mostafa Monowar
Author 5: Madini O. Alassafi

Keywords: Intrusion; extrusion; intrusion detection; security; survey

Download PDF

Paper 100: Identifying Central Nodes in Directed and Weighted Networks

Abstract: An issue of critical interest in complex network analysis is the identification of key players or important nodes. Centrality measures quantify the notion of importance and hence provide a mechanism to rank nodes within a network. Several centrality measures have been proposed for unweighted, undirected networks, but applying or modifying them for networks in which edges are weighted and directed is challenging. Existing centrality measures for weighted, directed networks are by and large domain-specific. Depending upon the application, these measures prefer either the incoming or the outgoing links of a node to measure its importance. In this paper, we introduce a new centrality measure, Affinity Centrality, that leverages both the weighted in-degrees and the weighted out-degrees of a node's local neighborhood. A tuning parameter permits the user to give preference to a node's neighbors in either the incoming or the outgoing direction. To evaluate the effectiveness of the proposed measure, we use three types of real-world networks: migration, trade, and animal social networks. Experimental results on these weighted, directed networks demonstrate that our centrality measure ranks nodes in consonance with the ground truth much better than the other established measures.
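
The stated idea, blending weighted in- and out-degrees under a tuning parameter, can be sketched as below; the exact formula is the paper's own and may differ from this simple convex combination, and the toy graph is invented.

```python
# Hedged sketch of an in/out-degree blend with a tuning parameter alpha
# (a simplified stand-in for the paper's Affinity Centrality formula).
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([("a", "b", 3.0), ("b", "c", 1.0),
                           ("c", "a", 2.0), ("a", "c", 4.0)])

def blended_centrality(G, alpha=0.5):
    scores = {}
    for n in G:
        w_in = sum(d["weight"] for _, _, d in G.in_edges(n, data=True))
        w_out = sum(d["weight"] for _, _, d in G.out_edges(n, data=True))
        # alpha = 1 favours incoming links, alpha = 0 outgoing ones
        scores[n] = alpha * w_in + (1 - alpha) * w_out
    return scores

print(blended_centrality(G, alpha=0.7))
```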

Author 1: Sharanjit Kaur
Author 2: Ayushi Gupta
Author 3: Rakhi Saxena

Keywords: Centrality; weighted network; directed network; migration network; world input output trade network; community structure

Download PDF

Paper 101: Real Time Vehicle Detection, Tracking, and Inter-vehicle Distance Estimation based on Stereovision and Deep Learning using YOLOv3

Abstract: In this paper, we propose a robust real-time vehicle tracking and inter-vehicle distance estimation algorithm based on stereovision. Traffic images are captured by a stereoscopic system installed on the road, and moving vehicles are detected with the YOLOv3 deep neural network. The real-time video then goes through a stereoscopy-based measurement algorithm to estimate the distance between detected vehicles. Detecting objects in real time has always been a challenging task because of occlusion, scale, illumination, etc. Many convolutional neural network models for object detection have been developed in recent years, but most cannot be used for real-time analysis because of their slow recognition speed. The model that currently performs excellently is the unified object detection model You Only Look Once (YOLO). In our experiments, however, we found that despite its very good detection precision, YOLO still has some limitations: it processes every image separately, even in a continuous video, so much important identity information can be lost across frames. Therefore, after vehicle detection and tracking, inter-vehicle distance estimation is performed.
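
The stereoscopic distance step rests on the standard relation Z = f * B / d, where f is the focal length in pixels, B the camera baseline, and d the disparity between the two views; the calibration values and pixel coordinates below are invented for illustration.

```python
# Depth from disparity for a calibrated stereo rig (values invented).
def stereo_depth(focal_px, baseline_m, x_left, x_right):
    disparity = x_left - x_right          # pixel shift between cameras
    if disparity <= 0:
        raise ValueError("point must appear shifted between views")
    return focal_px * baseline_m / disparity

# centre of a YOLO bounding box seen at x=640 (left) and x=610 (right)
z = stereo_depth(focal_px=1200.0, baseline_m=0.5,
                 x_left=640.0, x_right=610.0)
print(f"vehicle distance ~ {z:.1f} m")    # 1200 * 0.5 / 30 = 20 m
```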

Author 1: Omar BOURJA
Author 2: Hatim DERROUZ
Author 3: Hamd AIT ABDELALI
Author 4: Abdelilah MAACH
Author 5: Rachid OULAD HAJ THAMI
Author 6: Francois BOURZEIX

Keywords: Stereovision; stereo image; YOLOv3 deep neural network; convolutional neural network; vehicle detection; tracking; bounding boxes; distance estimation

Download PDF

Paper 102: Blind, Secured and Robust Watermarking for 3-D Polygon Mesh using Vertex Curvature

Abstract: In this paper, a blind, imperceptible, robust and secure watermarking scheme for 3-D mesh models is presented. The watermark is embedded in deeper surface vertices to minimize perceivable distortion. Deeper surface vertices are selected on the basis of their mean curvature (less than zero) and converted to spherical coordinates. Of the three spherical coordinates, the radial distance represents the approximate mesh and is invariant to distortionless attacks; therefore, watermark bits are embedded by modifying the distribution of radial distances, making the proposed scheme robust against such attacks. Radial distances are divided into bins and normalized to the range [0, 1], and each bin accommodates one watermark bit. The watermark is embedded repeatedly in the 3-D mesh to resist cropping and simplification attacks. To ensure higher security, a 128-bit unique watermark is generated by hashing (MD5 algorithm) the mean of the histogram map obtained from a grayscale watermark image. Watermark bits are extracted from the bins by comparing the mean of each bin with a reference value. Since the original mesh is not required at extraction time, the proposed scheme is blind. Experimental results demonstrate that the proposed scheme has good visual masking and high robustness against various attacks, and it shows improved performance compared with some prominent schemes.
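 
Two of the building blocks described here, normalizing radial distances into bins and shifting each bin's mean to carry a bit, can be sketched as follows; the embedding strength, threshold, and random vertices are illustrative, and the paper's exact embedding function may differ.

```python
# Sketch of bin-based radial-distance watermark embedding
# (mesh vertices, payload, and strength are illustrative).
import numpy as np

rng = np.random.default_rng(2)
vertices = rng.normal(size=(1000, 3))             # centred mesh vertices
r = np.linalg.norm(vertices, axis=1)              # radial distances

bits = [1, 0, 1, 1]                               # watermark payload
bins = np.array_split(np.sort(r), len(bits))
for b, bit in zip(bins, bits):
    lo, hi = b.min(), b.max()
    u = (b - lo) / (hi - lo)                      # normalize bin to [0, 1]
    # push the bin mean above/below 0.5 to encode a 1/0
    u = u ** 0.8 if bit else u ** 1.25
    b[:] = lo + u * (hi - lo)

# Blind extraction compares each bin's normalized mean against 0.5,
# so the original mesh is never needed.
```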

Author 1: Priyanka Singh
Author 2: K Jyothsna Devi

Keywords: 3-D mesh watermarking; mean curvature; radial distance; spherical coordinates; visual masking

Download PDF

Paper 103: Optimized Design of a Converter Decimal to BCD Encoder and a Reversible 4-bit Controlled Up/Down Synchronous Counter

Abstract: The fields of quantum computing, reversible logic, and nanotechnology have earned much attention from researchers in recent years due to their low power dissipation. Quantum computing has been a guiding light for nanotechnology, optical information computing, low-power CMOS design, and computer science. Moreover, energy dissipation in combinatorial logic circuits is one of the most important effects to be avoided. This problem is remedied by reversible logic, which favors the mapping of inputs to outputs without unused bits: every unused bit generates a loss of information, causing a loss of energy in the form of heat, whereas reversible logic leads, in principle, to zero heat dissipation. Among the components affected by reversible logic are the reversible binary counter and the converter from decimal to BCD encoder (D2BE), which are considered essential elements. This article proposes an optimized reversible design of a converter from decimal to BCD encoder (D2BE) and an optimized design of a reversible binary up/down counter. Our designs show an improvement over previous works, replacing some reversible gates with others while keeping the same functionality and improving performance criteria in terms of the number of gates, garbage outputs, constant inputs, quantum cost, delay, and hardware complexity.

Author 1: Issam Andaloussi
Author 2: Sedra Moulay Brahim
Author 3: Mariam El Ghazi

Keywords: Decimal to BCD Encoder (D2BE); Reversible Bi-nary Counter; Number of Gates (CG); Number of Garbage Output (NGO); Number of Constant Inputs (NCI); Quantum Cost (QC); Hardware Complexity (HC)

Download PDF

Paper 104: Financial Stumbling Detection Model for SMEs

Abstract: Access to financing is still one of the greatest obstacles facing Small and Medium Enterprises (SMEs) all over the world and prevents them from developing, because a large percentage of these projects fail, stumble, or go bankrupt due to failures in their financial management and asset management decisions. SMEs not only play an important role in the global economy but are also a source of social stability and play an effective role in a country's economy. Previous research indicates that BI systems are mainly applied in large enterprises, while the use of BI in SMEs is very rare. Thus, to enhance financial decisions, this study uses ICT tools such as business intelligence (BI). This research develops a Financial Stumbling Detection (FSD) model using BI that can help SME stakeholders make proper financial decisions. The FSD model identifies the initial stumble/defect in SMEs using several financial ratios, and it was created following the Design Science Research (DSR) methodology. Using BI in SMEs gives the necessary insight into a business's financial data and highlights whether the business is heading for a financial crisis and potential failure.
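
An FSD-style dashboard of the kind described could run ratio checks like the one sketched below; the two ratios and the warning thresholds are textbook examples, not the model's actual rules.

```python
# Illustrative financial-ratio check (thresholds are textbook values,
# not the FSD model's actual rules).
def stumble_signals(current_assets, current_liabilities,
                    total_debt, total_assets):
    signals = []
    current_ratio = current_assets / current_liabilities
    debt_ratio = total_debt / total_assets
    if current_ratio < 1.0:      # cannot cover short-term obligations
        signals.append(f"liquidity warning (current ratio {current_ratio:.2f})")
    if debt_ratio > 0.6:         # heavily leveraged
        signals.append(f"leverage warning (debt ratio {debt_ratio:.2f})")
    return signals or ["no early stumble detected"]

print(stumble_signals(current_assets=80_000, current_liabilities=100_000,
                      total_debt=150_000, total_assets=200_000))
```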

Author 1: Reham Nasser Farag
Author 2: Nashaat Al-Wakeel
Author 3: Mohamed Sameh Hassanein

Keywords: SMEs; financial statements; business intelligence (BI); DSR methodology; financial decisions

Download PDF
