The Science and Information (SAI) Organization
IJACSA Volume 11 Issue 2

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

View Full Issue

Paper 1: A Review of Asset-Centric Threat Modelling Approaches

Abstract: The threat landscape is constantly evolving. As attackers continue to evolve and seek better methods of compromising a system, defenders likewise continue to evolve and seek better methods of protecting it. Threats are events that could harm the confidentiality, integrity, or availability of information systems through unauthorized disclosure, misuse, alteration, or destruction of information or information systems. The process of developing and applying a representation of those threats, to understand the possibility of the threats being realized, is referred to as threat modelling. Threat modelling approaches provide defenders with a tool to characterize potential threats systematically. They include the prioritization of threats and mitigations based on the probability of the threats being realized, the business impact, and the cost of countermeasures. In this paper, we provide a review of asset-centric threat modelling approaches, i.e. threat modelling techniques that focus on the assets of the system being threat modelled. First, we discuss the most widely used asset-centric threat modelling approaches. Then, we present a gap analysis of these methods. Finally, we examine the features of asset-centric threat modelling approaches with a discussion of their similarities and differences.

Author 1: Livinus Obiora Nweke
Author 2: Stephen D. Wolthusen

Keywords: Threat modelling; asset-centric; asset-centric threat modelling approaches

PDF

Paper 2: Fast and Accurate Fish Detection Design with Improved YOLO-v3 Model and Transfer Learning

Abstract: Object detection is one of the challenging Computer Vision (CV) problems, with countless applications. We propose a real-time object detection algorithm based on an improved You Only Look Once version 3 (YOLOv3) for detecting fish. The demand for a robust automated system for monitoring the marine ecosystem is increasing day by day, benefiting researchers who collect information about marine life. The proposed work mainly applies CV techniques to detect and classify marine life. In this paper, we improve YOLOv3 by increasing the number of detection scales from 3 to 4, applying k-means clustering to refine the anchor boxes, introducing a novel transfer learning technique, and improving the loss function to boost model performance. We performed object detection on a custom dataset of four fish species using the YOLOv3 architecture and obtained a mean Average Precision (mAP) of 87.56%. Moreover, comparing the original YOLOv3 model with the improved one, we observed the mAP increase from 87.17% to 91.30%, showing that the improved version outperforms the original YOLOv3 model.

Author 1: Kazim Raza
Author 2: Song Hong

Keywords: Deep learning; computer vision; transfer learning; improved YOLOv3; anchor box; custom dataset

PDF
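The anchor-box refinement described in the abstract above can be illustrated with the 1 − IoU k-means commonly used in the YOLO family of detectors. The sketch below is an illustration under that assumption, not the authors' code; the box dimensions and the value of k are hypothetical.

```python
import random

def iou_wh(a, b):
    # IoU of two boxes aligned at a common corner (width/height only),
    # the usual distance basis for anchor clustering in YOLO
    inter = min(a[0], b[0]) * min(a[1], b[1])
    union = a[0] * a[1] + b[0] * b[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    # k-means over (width, height) pairs with 1 - IoU as the distance
    random.seed(seed)
    centroids = random.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for box in boxes:
            best = max(range(k), key=lambda i: iou_wh(box, centroids[i]))
            clusters[best].append(box)
        new = []
        for i, cl in enumerate(clusters):
            if cl:
                new.append((sum(w for w, _ in cl) / len(cl),
                            sum(h for _, h in cl) / len(cl)))
            else:
                new.append(centroids[i])  # keep an empty cluster's centroid
        if new == centroids:
            break
        centroids = new
    return sorted(centroids)
```

Running this over the ground-truth box sizes of a dataset yields anchors matched to the objects' typical shapes, which is what motivates re-clustering anchors for a fish-specific dataset.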

Paper 3: Drying Process Simulation Methodology based on Chemical Kinetics Laws

Abstract: It is shown that the existing approaches to drying process modeling, based either on a system of interconnected differential equations of heat and mass transfer or on statistical processing of experimental drying data, have significant drawbacks that greatly complicate the development of computer systems for controlling production processes. Instead, it is proposed to consider the drying process from the standpoint of physical chemistry, as a quasi-topochemical heterogeneous reaction, and to model it mathematically on the basis of the laws of chemical kinetics. The basic issues of this methodology are reviewed: the study of the drying rate equation during the removal of free and bound moisture; methods for determining the composition of aqueous fractions with different forms and energies of moisture in materials; methods for determining the activation energy of moisture; and the influence of moisture concentration and other process factors on the drying speed. The methodological approach considered in the article allows the development of reliable mathematical models of drying kinetics for computer-based management of production processes, avoiding the errors that the authors note in previously published works.

Author 1: Vladimir M. Arapov
Author 2: Dmitriy A. Kazartsev
Author 3: Igor A. Nikitin
Author 4: Maria V. Babaeva
Author 5: Svetlana V. Zhukovskaya
Author 6: Svetlana N. Tefikova
Author 7: Galina V. Posnova
Author 8: Igor V. Zavalishin

Keywords: Modeling; drying; chemical kinetics; activation energy

PDF
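The chemical-kinetics view of drying described above can be sketched with a first-order rate law and an Arrhenius temperature dependence for the rate constant. This is a minimal illustration, not the authors' model; the pre-exponential factor, activation energy, and moisture contents below are hypothetical.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_rate(k0, Ea, T):
    # Rate constant k = k0 * exp(-Ea / (R*T)), where Ea is the
    # activation energy and T the absolute temperature
    return k0 * math.exp(-Ea / (R * T))

def drying_curve(X0, Xe, k, times):
    # First-order kinetics: moisture content decays exponentially from
    # the initial value X0 toward the equilibrium moisture content Xe
    return [Xe + (X0 - Xe) * math.exp(-k * t) for t in times]
```

Raising the drying temperature increases k via the Arrhenius term, which is one way such a model captures the influence of process factors on drying speed.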

Paper 4: Vision-based Indoor Localization Algorithm using Improved ResNet

Abstract: The output of a residual network fluctuates greatly with changes in the weight parameters, which strongly affects its performance. To deal with this problem, an improved residual network is proposed. Based on the classical residual network, batch normalization, an adaptive dropout random deactivation function, and a new loss function are added to the proposed model. Batch normalization is applied to avoid vanishing/exploding gradients. Adaptive dropout is applied to increase the stability of the model, with the dropout method selected adaptively by adjusting a parameter. The new loss function combines the cross-entropy loss and the center loss to enhance inter-class dispersion and intra-class aggregation. The proposed model is applied to the indoor positioning of a mobile robot in a factory environment. The experimental results show that the algorithm can achieve high indoor positioning accuracy even with a small training dataset. In the real-time positioning experiment, the accuracy reaches 95.37%.

Author 1: Zeyad Farisi
Author 2: Tian Lianfang
Author 3: Li Xiangyang
Author 4: Zhu Bin

Keywords: Deep learning; residual network; loss function; dropout; indoor localization

PDF
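The combined loss described above (cross-entropy plus center loss) can be sketched for a single sample as below. This is a minimal interpretation, assuming Euclidean class centers and a weighting factor lam; the paper's exact weighting and center-update rule are not published here.

```python
import math

def softmax_cross_entropy(logits, label):
    # Numerically stable cross-entropy on one sample
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    return -math.log(exps[label] / sum(exps))

def center_loss(feature, centers, label):
    # Half the squared distance between a feature vector and its class
    # center, pulling same-class features together (intra-class aggregation)
    c = centers[label]
    return 0.5 * sum((f - ci) ** 2 for f, ci in zip(feature, c))

def joint_loss(logits, feature, centers, label, lam=0.1):
    # Combined objective: cross-entropy drives inter-class dispersion,
    # the weighted center loss drives intra-class aggregation
    return softmax_cross_entropy(logits, label) + lam * center_loss(feature, centers, label)
```

A feature lying on its class center incurs only the cross-entropy term, so the optimizer is rewarded for compact per-class clusters.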

Paper 5: Predicting Book Sales Trend using Deep Learning Framework

Abstract: Deep learning frameworks like the Generative Adversarial Network (GAN) have gained popularity in recent years for handling many computer vision related problems. In this research, instead of focusing on generating near-real images with a GAN, the aim is to develop a comprehensive GAN framework for book sales rank prediction, based on historical sales rankings and different attributes collected from the Amazon site. Several analysis stages were conducted. Comprehensive data preprocessing is required before modeling and evaluation: extensive predevelopment of the data, selection of features relevant to predicting the sales rankings, and several data transformation techniques are applied before generating the models. Various models are then trained and their prediction results evaluated. In the GAN architecture, the generator network used to generate the features is built, and the discriminator network used to differentiate between real and fake features is trained before making the predictions. Lastly, the regression GAN model's prediction results are compared against different neural network models such as the multilayer perceptron, deep belief network, and convolutional neural network.

Author 1: Tan Qin Feng
Author 2: Murphy Choy
Author 3: Ma Nang Laik

Keywords: Generative adversarial network; deep learning framework; book sales forecasting; regression

PDF

Paper 6: Sea Breeze Damage Estimation Method using Sentinel of Remote Sensing Satellite Data

Abstract: A sea breeze damage estimation method using Sentinel remote sensing satellite data is proposed. There are two kinds of sea breeze damage: vegetation degradation due to sea salt carried by the breeze, and leaf lodging due to strong winds. Kyushu, Japan suffered a severe storm due to Typhoon #17 from 21 to 23 September 2019. The optical sensor and Synthetic Aperture Radar (SAR) onboard remote sensing satellites are used for disaster relief. NDVI and false-color imagery derived from Sentinel-1 and Sentinel-2 data are used for this purpose. Through experiments, it is found that sea salt damage, on rice paddy fields in particular, can be assessed with NDVI and false-color imagery, while rice lodging can be assessed with SAR data.

Author 1: Kohei Arai

Keywords: Sentinel; disaster relief; satellite remote sensing; flooding; oil spill; synthetic aperture radar; optical sensor; vegetation index

PDF
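The NDVI referred to above is the standard normalized difference of near-infrared and red reflectance (for Sentinel-2, typically bands B8 and B4). A minimal pixel-wise sketch, with illustrative reflectance values:

```python
def ndvi(nir, red, eps=1e-9):
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red);
    # eps guards against division by zero over water or no-data pixels
    return (nir - red) / (nir + red + eps)

def ndvi_image(nir_band, red_band):
    # Pixel-wise NDVI for two equally sized reflectance grids
    return [[ndvi(n, r) for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir_band, red_band)]
```

Healthy vegetation reflects strongly in NIR and absorbs red, so a drop in NDVI over a paddy field between acquisitions is the kind of signal that flags sea-salt damage.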

Paper 7: JEPPY: An Interactive Pedagogical Agent to Aid Novice Programmers in Correcting Syntax Errors

Abstract: Programming is a complicated task, and correcting syntax errors is just one among the many subtasks that make it difficult. Error messages produced by the compiler let novice learners know about their errors. However, these messages are puzzling and often misleading due to the cascading of errors, which can be detrimental to producing a syntax-error-free program. In most laboratory settings, it is the role of the teachers to assist their students during activities. However, in our experience, considering the large number of students in a class, it is difficult for teachers to assist their students one-by-one given the time constraints. In this paper, the design and implementation of an interactive pedagogical agent named JEPPY is presented. It is intended to assist novice learners learning to program in C++. In order to see how students struggle or progress in dealing with errors, the proponents implemented the Error Quotient (EQ) developed by Jadud. The principles of the cognitive requirements of an agent-based learning environment were followed. The agent was put to the test by novice learners in a laboratory setting. Logs of the interaction between the embodied agent and the participants were recorded, alongside the compile errors and edit actions. These records give us some insight into the learners' interaction behavior with the agent.

Author 1: Julieto E. Perez
Author 2: Dante D. Dinawanao
Author 3: Emily S. Tabanao

Keywords: Pedagogical agent; error quotient; syntax-error correction; compile errors; human computer interaction

PDF
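Jadud's Error Quotient, mentioned above, scores pairs of consecutive compilation events: repeated errors, especially of the same type, indicate a struggling student. The sketch below implements a simplified variant of that pairwise idea (the published weights and normalization differ slightly); the event data in the usage are hypothetical.

```python
def error_quotient(events):
    # events: list of (ended_in_error: bool, error_type: str or None),
    # one entry per compilation, in chronological order.
    # Simplified pair scoring in the spirit of Jadud's EQ: consecutive
    # errors score 2, plus 3 more if they are the same error type.
    pairs = list(zip(events, events[1:]))
    if not pairs:
        return 0.0
    total = 0
    for (e1, t1), (e2, t2) in pairs:
        score = 0
        if e1 and e2:
            score += 2
            if t1 == t2:
                score += 3
        total += score
    return total / (5 * len(pairs))  # normalize to [0, 1]
```

A session of repeated identical errors scores near 1, a clean session near 0, which is what lets an agent like JEPPY decide when to intervene.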

Paper 8: Intelligent Haptic Virtual Simulation for Suture Surgery

Abstract: The aim of this study is to develop an intelligent haptic virtual simulation for suture surgery, enabled with an AI assistant. This haptic VR suture simulation is mainly composed of three parts: visuo-haptic rendering for surgery, replica training for surgery, and an AI assistant for surgery. In the simulation, a trainee surgeon can practice the suturing procedure through tactile touches in a stereoscopic 3D virtual environment. The simulation adopts a “precise haptic collision detection method using subdivision surface and sphere clustering” developed in the authors’ previous studies. In addition, an expert surgeon’s suture operations can be replicated in the medical trainee’s simulation so they can be experienced as they are; this has the advantage of reproducing the expert’s ideal movements of the surgical procedures. The recorded data of the skilled surgeon’s motions and operations are normalized into a form suitable for AI learning. The AI assistant distinguishes five typical types of suture methods and learns the proper method for various wounds using deep learning techniques, then suggests the most appropriate suture method. The suture simulation can reduce the cost and time required for surgical training and eventually support safe and accurate physical surgery.

Author 1: Mee Young Sung
Author 2: Byeonghun Kang
Author 3: Jungwook Kim
Author 4: Taehoon Kim
Author 5: Hyeonseok Song

Keywords: Haptic; virtual simulation; suture surgery; artificial intelligence; assistant

PDF

Paper 9: A Microservices based Approach for City Traffic Simulation

Abstract: The paper proposes a city traffic software simulation based on actors which run independently of one another and have specific characteristics in their behavior. To run independently, the actors are modeled as microservices and run within an orchestration framework. Their behavior is modeled with specific algorithms for each actor type, embedded in each actor type's code. Actors may act based on the data about all the other actors, which is gathered together by a single entity called the city simulator. An orchestration model is proposed, and all the actors use a communication protocol to offer data to the city simulator and request data from it.

Author 1: Toma Becea
Author 2: Honoriu Valean

Keywords: Traffic simulation; microservices; distributed computing

PDF

Paper 10: Genres and Actors/Actresses as Interpolated Tags for Improving Movie Recommender Systems

Abstract: A movie recommender system has been proven to be a convincing instrument for carrying out comprehensive and complicated recommendations, helping users find appropriate movies conveniently. It follows a mechanism whereby a user can be accurately recommended movies based on other users' similar interests, e.g. collaborative filtering, or on the movies themselves, e.g. content-based filtering. The systems should therefore come with predetermined information either about users or about movies. One interesting research question should be asked: “what if this information is missing or not manually curated?” The problem has not been addressed in the literature, especially for the 100K and 1M variations of the MovieLens datasets. This paper exploits a movie recommender system based on movies' genres and actors/actresses themselves as the input tags, or tag interpolation. We apply tag-based filtering and collaborative filtering that can effectively predict a list of movies similar to the movie a user has watched. By not depending on users' profiles, our approach eliminates the effect of the cold-start problem. The experimental results obtained on the MovieLens datasets indicate that the proposed model may contribute adequate performance regarding efficiency and reliability, and thus provide better-personalized movie recommendations. A movie recommender system has been deployed to demonstrate our work. The collected datasets have been published on our Github repository to encourage further reproducibility and improvement.

Author 1: Nghia Duong-Trung
Author 2: Quynh Nhut Nguyen
Author 3: Dung Ngoc Le Ha
Author 4: Xuan Son Ha
Author 5: Tan Tai Phan
Author 6: Hiep Xuan Huynh

Keywords: MovieLens; movie recommender systems; tag interpolation; collaborative filtering

PDF
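The tag-based filtering described above can be sketched by treating each movie as a set of genre and actor tags and ranking candidates by set similarity. Jaccard similarity and the small catalogue in the usage below are illustrative choices, not the paper's exact method.

```python
def tag_vector(genres, actors):
    # Represent a movie as the set of its genre and actor/actress tags
    return {f"genre:{g}" for g in genres} | {f"actor:{a}" for a in actors}

def tag_similarity(m1, m2):
    # Jaccard similarity between two tag sets (one simple choice of
    # tag-based similarity measure)
    inter = len(m1 & m2)
    union = len(m1 | m2)
    return inter / union if union else 0.0

def recommend(seed, catalogue, top_n=3):
    # Rank all other movies by tag similarity to the one the user watched;
    # no user profile is needed, which sidesteps the cold-start problem
    scored = [(title, tag_similarity(catalogue[seed], tags))
              for title, tags in catalogue.items() if title != seed]
    scored.sort(key=lambda x: (-x[1], x[0]))
    return [t for t, _ in scored[:top_n]]
```

Usage with a hypothetical three-movie catalogue: from "Heat", a crime film sharing a genre and an actor with "The Godfather", that title ranks first, while an unrelated animation ranks last.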

Paper 11: Document Length Variation in the Vector Space Clustering of News in Arabic: A Comparison of Methods

Abstract: This article addresses the effect of document length variation on measuring semantic similarity in the text clustering of news in Arabic. Despite the development of different approaches for addressing the issue, there is no strong conclusion recommending any one approach. Furthermore, many of them have not been tested on the clustering of news in Arabic. The problem is that different length normalization methods can yield different analyses of the same data set, and there is no obvious way of selecting the best one. The choice of an inappropriate method, however, negatively impacts the accuracy, and thus the reliability, of clustering performance. Given the lack of agreement and disparity of opinions, we set out to comprehensively evaluate the existing normalization techniques and establish empirically which one is best for normalizing text length to improve the clustering performance on news in Arabic. For this purpose, a corpus of 693 stories representing different categories and lengths was designed. The data is analyzed using different document length normalization methods along with vector space clustering (VSC), and the analysis whose clustering structure agrees most closely with the bibliographic information of the news stories is selected. The analysis of the data indicates that the clustering structure based on the byte length normalization method is the most accurate. One main problem with this method, however, is that the lexical variables within the data set are not ranked, which makes it difficult to retain only the most distinctive lexical features for generating clustering structures based on semantic similarity. Thus, the study proposes integrating TF-IDF to rank the words within all the documents so that only those with the highest TF-IDF values are retained.
It can finally be concluded that the proposed model proved effective in improving the byte normalization method, and thus the performance and reliability of news clustering in Arabic. The findings of the study can also be extended to IR applications in Arabic: the proposed model can usefully support Arabic retrieval systems in finding the most relevant documents for a given query based on semantic similarity rather than document length.

Author 1: Abdulfattah Omar
Author 2: Wafya Ibrahim Hamouda

Keywords: Arabic; document length; news clustering; semantic similarity; TF-IDF; VSC

PDF
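The combination of byte-length normalization with TF-IDF ranking described above can be sketched as follows. This is an interpretation of the described pipeline, not the authors' implementation; tokenization here is naive whitespace splitting, and the documents in the usage are hypothetical English stand-ins.

```python
import math

def byte_length_normalize(doc):
    # Term frequencies divided by the document's length in bytes, so long
    # and short news stories become comparable
    size = len(doc.encode("utf-8"))
    tf = {}
    for w in doc.split():
        tf[w] = tf.get(w, 0) + 1
    return {w: c / size for w, c in tf.items()}

def tfidf_rank(docs, top_n=10):
    # Rank terms by length-normalized TF-IDF so only the most distinctive
    # ones are retained before clustering
    n = len(docs)
    df = {}
    for d in docs:
        for w in set(d.split()):
            df[w] = df.get(w, 0) + 1
    scores = {}
    for d in docs:
        for w, tf in byte_length_normalize(d).items():
            idf = math.log(n / df[w])
            scores[w] = max(scores.get(w, 0.0), tf * idf)
    return sorted(scores, key=lambda w: -scores[w])[:top_n]
```

Terms concentrated in few documents float to the top regardless of how long those documents are, which is the point of combining the two steps.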

Paper 12: Impact of Information Technologies on HR Effectiveness: A Case of Azerbaijan

Abstract: This article explores the impact of implementing information technologies (IT) in the work of human resource departments to increase their effectiveness. Modern relations at the enterprise require this most important network-based unit of the enterprise to be a strategic, flexible, cost-effective and service-oriented division of the organization. Although the influence of IT on Human Resources Management (HRM) has been a focus of scientists' attention, no empirical research had been conducted in this area in Azerbaijan. The authors use the experience and initiatives of enterprises and national banks to show the current state and results of the implementation of IT in HRM. The obtained data show that IT is not widely used in Azerbaijani organizations to perform HRM functions. The results also show that, although IT should have a certain impact on all sectors in terms of HRM, the types of IT used should vary significantly across recruitment, maintenance, and development tasks.

Author 1: Aida Guliyeva
Author 2: Ulviyya Rzayeva
Author 3: Aygun Abdulova

Keywords: HR effectiveness; recruitment needs; maintenance and development tasks; management and planning tasks; performance of enterprises and banks; human capital; return on investment

PDF

Paper 13: Discovering the Relationship between Heat-Stress Gene Expression and Gene SNPs Features using Rough Set Theory

Abstract: Over the years of applying machine learning in bioinformatics, we have learned that scientists working in many areas of the life sciences call for deeper knowledge of the modeled phenomenon than just the information used to classify objects with a certain quality. As a dynamic measure of gene activity, transcriptome profiling by RNA sequencing (RNA-seq) is becoming increasingly popular; it measures not only gene expression but also structural variations such as mutations and fusion transcripts. Moreover, single nucleotide polymorphisms (SNPs) hold great potential in genetics, breeding, and ecological and evolutionary studies. Rough sets can be successfully employed to tackle various problems such as gene expression clustering and classification. This study provides general guidelines for accurate SNP discovery from RNA-seq data. The SNP annotations are used to find the relationship between their biological features and the differential expression of the genes to which those SNPs belong. Rough sets are utilized to express this relationship as a finite set of rules. The set of 32 generated rules showed good results in terms of strength, certainty and coverage. This strategy is applied to the analysis of SNPs in the A. thaliana plant under heat stress.

Author 1: Heba Zaki
Author 2: Mohammad Nassef
Author 3: Amr Ahmed Badr
Author 4: Ahmed Farouk Al-Sadek

Keywords: RNA sequencing (RNA-seq); variant calling; Single Nucleotide Polymorphisms (SNPs) analysis; rough sets; gene expression

PDF
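The rough-set machinery referred to above rests on indiscernibility classes and the lower/upper approximations of a target concept. The sketch below shows that core idea on a toy table of genes described by a single SNP feature; the gene names, attribute, and target set are entirely hypothetical.

```python
def indiscernibility_classes(objects, attrs):
    # Partition objects by their values on the chosen attributes:
    # objects sharing all attribute values are indiscernible
    classes = {}
    for name, row in objects.items():
        key = tuple(row[a] for a in attrs)
        classes.setdefault(key, set()).add(name)
    return list(classes.values())

def lower_upper(objects, attrs, target):
    # Lower approximation: classes certainly inside the target concept.
    # Upper approximation: classes that possibly overlap it.
    lower, upper = set(), set()
    for cls in indiscernibility_classes(objects, attrs):
        if cls <= target:
            lower |= cls
        if cls & target:
            upper |= cls
    return lower, upper
```

Rules with certainty 1 correspond to the lower approximation, while the gap between upper and lower approximations measures how roughly the SNP features describe the expression concept.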

Paper 14: Feature Selection in Text Clustering Applications of Literary Texts: A Hybrid of Term Weighting Methods

Abstract: Recent years have witnessed increasing use of automated text clustering approaches, and more particularly Vector Space Clustering (VSC) methods, in the computational analysis of literary data, including genre classification, theme analysis, stylometry, and authorship attribution. In spite of the effectiveness of VSC methods in resolving different problems in these disciplines and providing evidence-based research findings, the problem of feature selection remains a challenging one. For reliable text clustering applications, a clustering structure should be based on only, and all, the most distinctive features within a corpus. Although different term weighting approaches have been developed, identifying the most distinctive variables within a corpus remains challenging, especially in document clustering applications on literary texts. For this purpose, this study proposes a hybrid of statistical measures, including variance analysis, term frequency-inverse document frequency (TF-IDF), and Principal Component Analysis (PCA), for selecting only, and all, the most distinctive features, in order to generate more reliable document clustering structures for authorship attribution tasks. The study is based on a corpus of 74 novels written by 18 novelists representing different literary traditions. Results indicate that the proposed model proved effective in extracting the most distinctive features within the datasets and thus in generating reliable clustering structures that can be used in different computational applications on literary texts.

Author 1: Abdulfattah Omar

Keywords: Feature selection; frequency; PCA; term weight; text clustering; TF-IDF; variance; VSC

PDF
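The hybrid feature selection described above can be sketched as the intersection of a variance ranking and a TF-IDF ranking (the PCA reduction step is omitted from this sketch). The term-document counts in the usage are hypothetical.

```python
import math

def variance(values):
    # Population variance of a term's counts across documents
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def select_features(term_doc, top_k=2):
    # term_doc: {term: [count in doc1, count in doc2, ...]}
    # Keep terms that score highly on BOTH variance across documents and
    # TF-IDF; common function words score low on both and drop out.
    n_docs = len(next(iter(term_doc.values())))

    def tfidf(term):
        counts = term_doc[term]
        df = sum(1 for c in counts if c > 0)
        return max(counts) * math.log((1 + n_docs) / (1 + df))

    by_var = sorted(term_doc, key=lambda t: -variance(term_doc[t]))[:top_k]
    by_tfidf = sorted(term_doc, key=lambda t: -tfidf(t))[:top_k]
    return sorted(set(by_var) & set(by_tfidf))
```

A ubiquitous word such as "the" has both low variance relative to its mean and zero-ish IDF, so it never survives the intersection, while author-distinctive content words do.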

Paper 15: Fast FPGA Prototyping based Real-Time Image and Video Processing with High-Level Synthesis

Abstract: Programming at a high abstraction level is known for its benefits: it can facilitate the development of digital image and video processing systems. Recently, high-level synthesis (HLS) has played a significant role in advancing this field of study. Real-time image and video processing solutions needing high throughput rates are often implemented on dedicated hardware such as FPGAs. Previous studies relied on traditional design flows, using VHDL and Verilog to synthesize and validate the hardware; these flows are technically complex and time consuming. This paper introduces an alternative novel approach: a Model-Based Design (MBD) workflow based on HDL Coder, Vision HDL Toolbox, Simulink and MATLAB, with the purpose of accelerating the design of image and video solutions. The main purpose of the present paper is to study the complexity of design development and to minimize the development time (time to market, TM) of conventional FPGA design. In this paper, the development time can effectively be reduced by 60% by automatically generating the IP cores and deploying the modeled design through the Xilinx tools, giving FPGAs further advantages over other devices such as ASICs and GPUs.

Author 1: Refka Ghodhbani
Author 2: Layla Horrigue
Author 3: Taoufik Saidani
Author 4: Mohamed Atri

Keywords: High-level synthesis; FPGA; fast prototyping; real-time image processing; video surveillance; computer-aided design; model-based design; HDL coder

PDF

Paper 16: Design and Analysis of an Arithmetic and Logic Unit using Single Electron Transistor

Abstract: The demand for low power dissipation and increasing speed has elicited numerous research efforts in the field of nano CMOS technology. The Arithmetic Logic Unit (ALU) is the core of any central processing unit. In this paper, we design a 4-bit ALU using the Single Electron Transistor (SET). The SET is a new type of switching nanodevice that uses controlled single-electron tunneling to amplify current; it is highly scalable and possesses ultra-low power consumption compared to conventional semiconductor devices. Reversible logic gates designed using SETs are used to perform the 4-bit arithmetic operations. We modelled a symmetric single-gate SET operating at room temperature using Verilog-A code, with the design carried out in the Cadence simulation environment. The 4-bit SET-based ALU design exhibits a power of 0.52 nW and a delay of 350 ps.

Author 1: Aarthy M
Author 2: Sriadibhatla Sridevi

Keywords: Single electron transistor; reversible logic gates; low power; speed

PDF

Paper 17: A Review and Development Methodology of a LightWeight Security Model for IoT-based Smart Devices

Abstract: The Internet of Things (IoT) ushers in a new era of the Internet, comprising smart objects connected over the Internet. IoT has numerous applications, for example, smart cities, smart homes, smart grids and healthcare. Typically, an IoT system comprises heterogeneous devices that generate and exchange vast amounts of safety-critical as well as privacy-sensitive data. Connected devices can give a business a genuine lift, yet anything connected to the Internet can be vulnerable to cyberattacks. Most present IoT deployments rely on a centralized architecture, connecting to cloud servers through the Internet. The public cloud comprises computing services offered by third-party providers over the Internet, making them accessible to anybody who wants to use or buy them. This solution provides excellent, flexible computation and data management capabilities as IoT systems grow increasingly complex; nonetheless, it still faces a variety of security issues. One weakness is that data moving between IoT devices via the public cloud could be at risk even when an attacker is not explicitly targeting you, and with the public cloud you have minimal control over how the system grows. A secure protocol for IoT is therefore vital to ensure optimum security for the information exchanged between connected devices. To overcome these limitations, in this paper we conduct a comprehensive review of existing security protocols and propose a development methodology for a blockchain-based lightweight security model that provides end-to-end security. Using the lightweight model, an authenticated client can access the data of IoT sensors remotely. The performance analysis shows that the lightweight model offers better security, fewer overheads, and lower communication cost.

Author 1: Mathuri Gurunathan
Author 2: Moamin A. Mahmoud

Keywords: Lightweight; security model; IoT; smart devices

PDF

Paper 18: SentiFilter: A Personalized Filtering Model for Arabic Semi-Spam Content based on Sentimental and Behavioral Analysis

Abstract: Unwanted content in online social network services is a substantial issue that is continuously growing and negatively affecting the user browsing experience. Current practices do not provide personalized solutions that meet each individual's needs and preferences. Therefore, there is a potential demand to provide each user with a personalized level of protection against what he/she perceives as unwanted content. Thus, this paper proposes a personalized filtering model, named SentiFilter. It is a hybrid model that combines both sentimental and behavioral factors to detect unwanted content for each user towards pre-defined topics. An experiment involving 80,098 Twitter messages from 32 users was conducted to evaluate the effectiveness of the SentiFilter model. Effectiveness was measured in terms of the consistency between the implicit feedback derived from the SentiFilter model towards five selected topics and the explicit feedback collected from participants towards the same topics. Results reveal that commenting behavior is more effective than liking behavior for detecting unwanted content because of its high consistency with users' explicit feedback. Findings also indicate that the sentiment of users' comments does not reflect users' perception of unwanted content. The implicit feedback derived from the SentiFilter model agrees closely with users' explicit feedback, as indicated by the statistically insignificant difference between the two sets. The proposed model is expected to provide an effective automated solution for filtering semi-spam content in favor of personalized preferences.

Author 1: Mashael M. Alsulami
Author 2: Arwa Yousef AL-Aama

Keywords: Personalization; sentiment analysis; behavioral analysis; spam detection; recommendation systems

PDF
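The combination of sentimental and behavioral factors described above can be illustrated with a toy scoring function. The weights, signals, and threshold below are entirely hypothetical; the paper does not publish such a formula, and the weighting of commenting over liking merely echoes its finding that commenting is the more indicative signal.

```python
def sentifilter_score(sentiment, liked, commented, w_sent=0.5, w_behav=0.5):
    # Hypothetical hybrid score: sentiment in [-1, 1] plus simple
    # behavioral signals, combined into a "wantedness" value in [0, 1]
    behaviour = 0.0
    if commented:
        behaviour += 0.7   # commenting weighted above liking
    if liked:
        behaviour += 0.3
    # Map sentiment from [-1, 1] to [0, 1]; higher means more wanted
    return w_sent * (sentiment + 1) / 2 + w_behav * behaviour

def is_unwanted(sentiment, liked, commented, threshold=0.4):
    # Content scoring below the threshold is filtered for this user
    return sentifilter_score(sentiment, liked, commented) < threshold
```

Per-user personalization would then amount to fitting the weights and threshold to each user's own explicit feedback.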

Paper 19: Conceptual Framework for Developing an ERP Module for Quality Management and Academic Accreditation at Higher Education Institutions: The Case of Saudi Arabia

Abstract: Universities in Saudi Arabia give high priority to implementing quality systems and achieving international and local academic accreditation, especially NCAAA accreditation. Accreditation standards require the provision of a set of data, documents, reports and evidence distributed among the various (academic and non-academic) departments of the university, which must be provided periodically and annually. These universities therefore need an integrated system linking these departments to cover the requirements of quality assurance and academic accreditation. On the other hand, existing ERP systems suit the organizational environment of the university, but they do not implement a module covering quality assurance or academic accreditation requirements. This study proposes a framework that includes an ERP module for quality and academic accreditation requirements. The module facilitates the collection of the required data from various sources within the university and helps provide the necessary reports and statistics, organizing the processes needed for quality and accreditation through a dedicated database and links to the university's other subsystems.

Author 1: Mohammad Samir Abdel-Haq

Keywords: Quality assurance; academic accreditation framework; ERP module

PDF

Paper 20: IoT and Blockchain in the Development of Smart Cities

Abstract: With the advent and proliferation of the Internet, the fourth industrial revolution is in full swing, and different technologies have the potential to shape the course of human development. Worldwide, populations are moving toward growing urban centers, and smart cities are emerging as the integration of human activities and technologies. These smart cities are built on top of technologies such as blockchain and the Internet of Things (IoT). Consequently, the applications of these technologies in current and future smart cities will change not only the nature of human interaction and governance but also how business is conducted. This paper proposes an experimental study (qualitative and quantitative) to determine the impact of blockchain and IoT technologies on the development of smart cities. It aims to derive insight from questions such as how current business models are preparing for this disruption, the challenges they will face, and the potential contributions the two technologies will make to business development. The study’s outcomes will provide the rationale for why businesses should pay attention to these technologies and start an early adoption plan that will gradually transform their business models as smart cities mature.

Author 1: Laith T. Khrais

Keywords: Internet of things; blockchain technologies; smart cities; emerging markets; electronic commerce

PDF

Paper 21: Automatic Detection of Plant Disease and Insect Attack using EFFTA Algorithm

Abstract: Diagnosing plant disease by computer vision using digital image processing is key to timely intervention and treatment in healthy agricultural practice and to increasing yield by natural means. Timely treatment of these ailments can be the difference between the preservation and the perishing of an ecosystem. To make the system more efficient and feasible, we propose an algorithm called Enhanced Fusion Fractal Texture Analysis (EFFTA). The proposed method uses a feature-fusion technique that combines the Scale Invariant Feature Transform (SIFT) with Discrete Wavelet Transform (DWT)-based Segment-based Fractal Texture Analysis (SFTA). An image as a whole can be characterized by shape, texture, and color. SIFT is used to detect texture features: it extracts a set of descriptors that are very useful in local texture recognition and captures accurate keypoints for detecting the diseased area. Further texture extraction is performed by the wavelet-based SFTA (WSFTA) method, which adopts intra-class and inter-class analysis. The extracted features are trained using a back-propagation neural network. The method improves the success rate and accuracy of extraction and provides higher precision and efficiency than other traditional methods.

Author 1: Kapilya Gangadharan
Author 2: G. Rosline Nesa Kumari
Author 3: D. Dhanasekaran
Author 4: K. Malathi

Keywords: Texture analysis; features; computer vision; inter-class; intra-class

PDF
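The DWT stage of the fusion pipeline above can be sketched with the Haar wavelet, the simplest case (the abstract does not specify which wavelet the authors use, so this is only illustrative):

```python
import math

def haar_dwt_1d(signal):
    """One level of the 1-D Haar discrete wavelet transform.

    Returns (approximation, detail) coefficient lists; assumes an even
    length. Applying this along rows and then columns of an image gives
    the 2-D transform used for texture analysis."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail
```

Smooth regions produce near-zero detail coefficients, while textured (e.g. diseased) regions produce large ones, which is what makes the wavelet bands useful texture features.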

Paper 22: A Robust Deep Learning Model for Financial Distress Prediction

Abstract: This paper investigates the ability of deep learning networks to predict financial distress. The study uses three deep learning models: the Multi-layer Perceptron (MLP), Long Short-Term Memory (LSTM), and Convolutional Neural Networks (CNN). In the first phase, different optimization techniques are applied to each model, creating different model structures, to generate the best model for prediction; the top results are presented and analyzed with various optimization parameters. In the second phase, the MLP, the best classifier identified in the first phase, is further optimized through variations in its architectural configuration. The study thus seeks a robust deep neural network model for financial distress prediction with the best optimization parameters. Prediction performance is evaluated on different real-world datasets, one containing samples from Kuwaiti companies and another with samples of companies from GCC countries. Resampling is used in all experiments to obtain accurate and unbiased results. The simulation results show that the proposed deep network model far exceeds classical machine learning models in predictive accuracy. Based on the experiments, guidelines are provided for practitioners to build a robust model for financial distress prediction.

Author 1: Magdi El-Bannany
Author 2: Meenu Sreedharan
Author 3: Ahmed M. Khedr

Keywords: Financial distress prediction; multi-layer perceptron; long short-term memory; convolutional neural network; deep neural network; optimized deep learning model

PDF

Paper 23: Investigation of a 7-Level Inverter-based Electric Spring Subjected to Distribution Network Dynamics

Abstract: This paper aims to provide a solution for mitigating the voltage variations at critical loads caused by the high penetration of DGs into the distribution system, using Electric Springs (ES). In this regard, the ES needs to be explored with various converter circuits. The improvised topology opens new avenues in renewable-energy-powered microgrids by implementing an ES with a Multi-Level Inverter (MLI) and a voltage balancing circuit, providing better power system stability and voltage regulation. The paper captures the voltage dynamics of a distribution system dominated by renewable variability, for varying reactive power of the DGs and constantly changing consumer demand. These dynamics are analyzed and explained using voltage profiles and power flows in the Matlab/Simulink environment. It is shown that with the developed ES topology, the %THD in the system is conspicuously reduced and voltage regulation is seamlessly improved.

Author 1: K. K. Deepika
Author 2: G.Kesava Rao
Author 3: J. Vijaya Kumar
Author 4: Satya Ravi Sankar Rai

Keywords: Electric spring; critical load; multilevel inverter; voltage balancing circuit; voltage regulation

PDF
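For reference, the %THD figure the abstract reports as reduced is computed from the RMS magnitudes of a waveform's harmonic components; a minimal sketch:

```python
import math

def thd_percent(harmonic_rms):
    """Total harmonic distortion in percent.

    harmonic_rms[0] is the RMS of the fundamental component; the remaining
    entries are the RMS magnitudes of the higher-order harmonics."""
    fundamental = harmonic_rms[0]
    distortion = math.sqrt(sum(h * h for h in harmonic_rms[1:]))
    return 100.0 * distortion / fundamental
```

For example, a fundamental of 100 V RMS with 3 V and 4 V harmonics gives a THD of 5%.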

Paper 24: On Exhaustive Evaluation of Eager Machine Learning Algorithms for Classification of Hindi Verses

Abstract: Applying supervised machine learning to a Hindi corpus for classification and prediction of verses is an untouched and useful area. Classification and prediction benefit many applications, such as organizing a large corpus and information retrieval. The metalinguistic facilities provided by websites make Hindi a major language in the digital domain of information technology today. Text classification algorithms, together with Natural Language Processing (NLP), provide fast, cost-effective, and scalable solutions. Performance evaluation of these predictors is a challenging task. Classification of text data is important to reduce the manual effort and time spent reading documents. In this paper, 697 Hindi poems are classified into four topics using four eager machine-learning algorithms. In the absence of any other technique that achieves prediction on a Hindi corpus, misclassification error is used and compared to demonstrate the improvement of the technique. The support vector machine performs best among all.

Author 1: Prafulla B. Bafna
Author 2: Jatinderkumar R. Saini

Keywords: Classification; eager machine learning algorithm; Hindi; prediction

PDF
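The evaluation measure used above, misclassification error, can be stated in a few lines; this sketch assumes gold and predicted topic labels as parallel lists:

```python
def misclassification_error(y_true, y_pred):
    """Fraction of documents assigned the wrong topic label."""
    wrong = sum(1 for t, p in zip(y_true, y_pred) if t != p)
    return wrong / len(y_true)
```

A lower value means a better classifier; it is simply 1 minus accuracy.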

Paper 25: Teachers’ Experiences in the Development of Digital Storytelling for Cyber Risk Awareness

Abstract: Although the Internet has positively impacted people's lives, it also has its dark side. There have been reports on the increase of cases of violence, racial abuse, cyber-bullying, online fraud, addiction to gaming and gambling, and pornography. A vital issue has emerged that Internet users still lack awareness of these online risks. In this study, our respondents were involved in the development of educational videos related to cyber risk topics using a storytelling approach. The participants in this study were 28 in-service teachers who took a master class on Resource and Information Technology. This study aims to examine the issues that participants took into consideration while planning and developing their digital stories, and their experiences developing digital stories about cyber risks. The data was collected using a written reflection. The data was then analyzed thematically using NVivo Software. The findings indicate how the respondents valued their experience in planning, developing, and evaluating their storytelling videos. The impact of learning from the videos on the students’ affective domain is also discussed. We further discuss the benefits of the storytelling approach for behavior change.

Author 1: Fariza Khalid
Author 2: Tewfiq El-Maliki

Keywords: Cybersecurity; awareness; education; video; digital storytelling; media; case study

PDF

Paper 26: e-Participation Model for Kuwait e-Government

Abstract: The Internet influences every aspect of modern life. Increasing interest in e-government has led to increased public expenditure on communication technologies. Technology provides and facilitates opportunities for citizens to interact with e-government, so-called e-participation; it increases citizens’ involvement in service delivery, administration, and decision making. People need to engage and participate in e-government for it to achieve its objectives. The e-government literature has explored the factors that influence people to participate in e-government; however, the study of e-participation is new in Kuwait. This paper therefore aims to identify the critical factors affecting e-participation in Kuwait. To this end, a conceptual model was developed with the context of Kuwaiti society in view, and a questionnaire was designed and used to test the conceptual model. The results indicate that technical factors, social influence, political factors, perceived usefulness, and perceived ease of use are the significant factors influencing citizens’ intention to participate in Kuwait e-government. Consequently, these results should be adopted by the government to enhance e-participation in Kuwait e-government.

Author 1: Zainab M. Aljazzaf
Author 2: Sharifa Ayad Al-Ali
Author 3: Muhammad Sarfraz

Keywords: e-Government; e-participation; e-participation factors; e-participation model; e-information; e-consultation

PDF

Paper 27: Dynamic Changes of Multiple Sclerosis Lesions on T2-FLAIR MRI using Digital Image Processing

Abstract: Multiple Sclerosis (MS) is a complex autoimmune neurological disease affecting the myelin sheath of the nervous system. There are about 2.5 million MS patients in the world, and the ratio of MS is high in South and East Asia. The disease affects young and middle-aged people. MS is a fatal disease, and the numbers and volumes of MS lesions can be used to determine the degree of disease severity and track its progression. Detecting multiple sclerosis in MRI images is a critical problem because MS frequently involves lesions that appear in a scan at one time point and disappear at subsequent time points. On T2-FLAIR MRI, MS most often manifests as focal changes in the substance of the brain and spinal cord, which complicates dynamic monitoring from MRI data. Detecting and extracting MS lesion features is not only a tedious and time-consuming process but also requires experts and trained physicians, so computer-aided tools are very important to overcome these obstacles. In this paper, we present a novel computer-aided approach based on digital image processing methods for enhancing structures, removing undesired signals, segmenting MS lesions from the background, and finally measuring lesion size to provide information about the current status of MS lesions, i.e., whether they are new, growing, or shrinking. The accuracy of the proposed methodology was 96% on the presented data; the remaining inaccuracy is related to segmentation errors.

Author 1: Marwan A. A. Hamid
Author 2: Walid Al-haidri
Author 3: Waseemullah
Author 4: Najeed Ahmed Khan
Author 5: Bilal Ahmed Usmani
Author 6: Syed. M. Wasim Raza

Keywords: Multiple sclerosis; T2-FLAIR; magnetic resonance imaging; digital image processing; image segmentation

PDF
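A highly simplified sketch of the segmentation-and-measurement idea above: MS lesions are hyperintense on T2-FLAIR, so a toy pipeline can threshold intensities and report lesion area. The paper's actual pipeline also enhances structures and removes undesired signals first; the threshold and pixel size below are hypothetical.

```python
def segment_lesions(image, threshold):
    """Binary mask of pixels brighter than `threshold` (candidate lesions).

    `image` is a 2-D list of intensity values (one MRI slice)."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def lesion_area(mask, pixel_mm2=1.0):
    """Total lesion area: count of mask pixels times the area of one pixel."""
    return sum(sum(row) for row in mask) * pixel_mm2
```

Comparing `lesion_area` across scans from different time points is what lets the method report whether a lesion is new, growing, or shrinking.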

Paper 28: Scientific VS Non-Scientific Citation Annotational Complexity Analysis using Machine Learning Classifiers

Abstract: This paper evaluates the annotation complexity of citation sentences in both scientific and non-scientific articles to identify the major causes of complexity, by performing sentiment analysis of scientific- and non-scientific-domain articles using corpora we developed separately for each domain. For this research, we selected different data sources to prepare our corpora for sentiment analysis. We then performed a manual annotation procedure, assigning polarities according to our defined annotation guidelines, and developed a classification system to check the quality of the annotation work for both domains. The results show that the scientific domain gave more accurate results than the non-scientific domain. We also explored the reasons for the less accurate results and concluded that non-scientific text, especially linguistics, is of a complex nature that leads to poor understanding and incorrect annotation.

Author 1: Hassan Raza
Author 2: M. Faizan
Author 3: Naeem Akhtar
Author 4: Ayesha Abbas
Author 5: Naveed-Ul-Hassan

Keywords: Classification; machine learning; sentiment analysis; scientific citations; non-scientific citations

PDF

Paper 29: Arabic Morphological Analysis Techniques

Abstract: Recently, activity surrounding Arabic natural language processing has increased significantly. Morphological analysis is the basis of most tasks related to Arabic natural language processing. There are many scientific studies on Arabic morphological analysis, yet most of them lack an accurate classification of Arabic morphology and fail to cover both recent and traditional techniques. This paper aims to survey Arabic morphological analysis techniques from 2005 to 2019 and to organize them into a reasonable and expandable classification system. To facilitate and support new research, this paper compares the currently available Arabic morphological analyzers, reaches certain conclusions, and proposes some promising directions for future research in Arabic morphological analysis.

Author 1: Ameerah Alothman
Author 2: AbdulMalik Alsalman

Keywords: Arabic analyzer; Arabic lexicon; classification morphology; morphological analysis; natural language processing

PDF

Paper 30: Using Social Network Analysis to Understand Public Discussions: The Case Study of #SaudiWomenCanDrive on Twitter

Abstract: Social media analytics has experienced significant growth over the past few years due to the crucial importance of analyzing and measuring public social behavior on different social networking sites. Twitter is one of the most popular social networks and means of online news; it allows users to express their views and participate in a wide range of issues in the world. Opinions expressed on Twitter are based on diverse experiences and represent a broad set of valuable data that can be analyzed and used for many purposes. This study aims to understand the public discussions conducted on Twitter about essential topics and to develop an analytics framework for analyzing these discussions. The focus of this research is the analysis of Arabic public discussions using the hashtag #SaudiWomenCanDrive, one of the hot trends of Twitter discussions. The proposed framework analyzed more than two million tweets using methods from social network analysis. It uses graph centrality metrics to reveal essential people in the discussion and community detection methods to identify the communities and topics involved. Results show that @SaudiNews50, @Algassabinasser, and @Abdulrahman were the top users in two networks, while @KingSalman and @LoujainHathloul were the top two users in another network. Consequently, the “King Salman” and “Loujain Hathloul” Twitter accounts were identified as influencers, whereas “Saudi News” and “Algassabi Nasser” were the leading distributors of news. Similar behavior on other public discussions could be analyzed using the proposed framework.

Author 1: Zubaida Jastania
Author 2: Mohammad Ahtisham Aslam
Author 3: Rabeeh Ayaz Abbasi
Author 4: Kawther Saeedi

Keywords: Social network analysis; Twitter; public discussion; network science

PDF
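One of the graph-centrality metrics such a framework relies on is degree centrality, sketched here for an undirected interaction graph (e.g. mentions or retweets) given as (user, user) edge pairs. This is an illustration of the metric, not the authors' code.

```python
from collections import defaultdict

def degree_centrality(edges):
    """Degree of each node, normalized by the maximum possible degree n - 1."""
    deg = defaultdict(int)
    nodes = set()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        nodes.update((u, v))
    n = len(nodes)
    return {node: deg[node] / (n - 1) for node in nodes}
```

Accounts with centrality near 1 interact with almost everyone in the network, which is how influencers and news distributors stand out.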

Paper 31: Cross-Language Plagiarism Detection using Word Embedding and Inverse Document Frequency (IDF)

Abstract: The purpose of cross-language textual similarity detection is to estimate the similarity of two textual units written in different languages. This paper applies distributed representations of words to cross-language textual similarity detection using word embeddings and IDF. It introduces a novel cross-language plagiarism detection approach built on the distributed representation of the words in a sentence. To improve textual similarity, a novel method called CL-CTS-CBOW is used; a syntax feature is then added through a novel method called CL-WES, and the approach is further improved by IDF weighting. The corpora used in this study are four Arabic–English corpora, namely books, Wikipedia, EAPCOUNT, and MultiUN, comprising more than 10,017,106 sentences in parallel and comparable collections. The proposed method combines these techniques to confirm their complementarity. In the experiments, the proposed system obtains 88% English–Arabic similarity detection at the word level and 82.75% at the sentence level across the corpora.

Author 1: Hanan Aljuaid

Keywords: NLP; cross-language plagiarism detection; word embedding; similarity detection; IDF

PDF
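The IDF-weighting step can be sketched as a cosine similarity between two bags of tokens with each term weighted by its inverse document frequency. In the cross-language setting, one side would first be mapped into the shared embedding space (CL-WES); that mapping is omitted in this monolingual sketch.

```python
import math
from collections import Counter

def idf(term, corpus):
    """log(N / df) over `corpus`, a list of token lists."""
    df = sum(1 for doc in corpus if term in doc)
    return math.log(len(corpus) / df) if df else 0.0

def idf_weighted_cosine(doc_a, doc_b, corpus):
    """Cosine similarity between two token lists with IDF-weighted counts."""
    a, b = Counter(doc_a), Counter(doc_b)
    terms = set(a) | set(b)
    w = {t: idf(t, corpus) for t in terms}
    dot = sum(a[t] * b[t] * w[t] ** 2 for t in terms)
    na = math.sqrt(sum((a[t] * w[t]) ** 2 for t in terms))
    nb = math.sqrt(sum((b[t] * w[t]) ** 2 for t in terms))
    return dot / (na * nb) if na and nb else 0.0
```

Rare (high-IDF) terms dominate the score, so shared distinctive vocabulary signals potential plagiarism more strongly than shared common words.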

Paper 32: A Study of LoRa Performance in Monitoring of Patient’s SPO2 and Heart Rate based IoT

Abstract: In this research, a sensor that measures blood oxygen saturation (SpO2) and heart rate, the MH-ET Live MAX30102 sensor with the MAX30105 library, is used. The advantage of this sensor is its compatibility with the ATmega328P on the Arduino board; the first experiment uses an Arduino Uno. The sensor data are integrated with Wireless Sensor Network (WSN) devices, i.e., LoRa (Long Range) at 915 MHz, and the WSN path loss is calculated when sending sensor data in mountainous areas. The model used to represent signal analysis and measurements in this study is the ground-reflection (2-ray) model. The scenario is that patients send their data from hilly areas, while the hospital or medical facility, called the receiving node or coordinator node, lies in a much lower area; adding routers in the same situation is expected to show whether the data are sent faster or whether there is no impact. This study is thus expected to give clear results on the function of the router in forwarding pulse-sensor data. The aim is that for patients in higher areas who cannot be moved because of their condition, the patient's SpO2 and heart-rate data can be known quickly by medical staff through the sensor node attached to the patient's body. The Adaptive Data Rate (ADR) algorithm is used to optimize data rate, time on air (ToA), and energy consumption in the network; the End Device (ED) in the ADR algorithm must therefore be static (non-mobile). The ADR algorithm is measured while sending n-bit uplink data to n gateways. The application server used is ThingSpeak or The Things Network (TTN).

Author 1: Puput Dani Prasetyo Adi
Author 2: Akio Kitagawa

Keywords: Pulse; heart rate; adaptive; data rate; long range; bitrate

PDF
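The ground-reflection (2-ray) model named above has a standard large-distance approximation in which received power falls off as d^4 and depends on the antenna heights, so path loss in dB is PL = 40·log10(d) − 10·log10(Gt) − 10·log10(Gr) − 20·log10(ht) − 20·log10(hr). A minimal sketch:

```python
import math

def two_ray_path_loss_db(d, ht, hr, gt=1.0, gr=1.0):
    """Path loss in dB under the 2-ray ground-reflection approximation.

    d      : transmitter-receiver distance in metres
    ht, hr : transmitter / receiver antenna heights in metres
    gt, gr : linear antenna gains
    Valid only when d is much larger than ht * hr."""
    return (40.0 * math.log10(d)
            - 10.0 * math.log10(gt) - 10.0 * math.log10(gr)
            - 20.0 * math.log10(ht) - 20.0 * math.log10(hr))
```

Note the height terms: raising the sensor node (e.g. a patient on a hill) reduces path loss, which matches the paper's interest in hilly terrain.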

Paper 33: Comparison of Anomaly Detection Accuracy of Host-based Intrusion Detection Systems based on Different Machine Learning Algorithms

Abstract: Among the different host-based intrusion detection systems, an anomaly-based intrusion detection system detects attacks based on deviations from normal behavior; however, such a system has a low detection rate. Therefore, several studies have been conducted to increase the accurate detection rate of anomaly-based intrusion detection systems; recently, some of these studies involved the development of intrusion detection models using machine learning algorithms to overcome the limitations of existing anomaly-based intrusion detection methodologies as well as signature-based intrusion detection methodologies. In a similar vein, in this study, we propose a method for improving the intrusion detection accuracy of anomaly-based intrusion detection systems by applying various machine learning algorithms for classification of normal and attack data. To verify the effectiveness of the proposed intrusion detection models, we use the ADFA Linux Dataset which consists of system call traces for attacks on the latest operating systems. Further, for verification, we develop models and perform simulations for host-based intrusion detection systems based on machine learning algorithms to detect and classify anomalies using the Arena simulation tool.

Author 1: Yukyung Shin
Author 2: Kangseok Kim

Keywords: Anomaly detection; host based intrusion detection system; system calls; cyber security; machine learning; simulation

PDF
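A minimal sketch of anomaly detection on system-call traces (the kind of data in the ADFA Linux Dataset): build a profile of n-grams seen in normal traces, then score a new trace by the fraction of its n-grams never observed during training. This is a classic baseline for illustration, not the paper's exact machine-learning models.

```python
def ngrams(trace, n=3):
    """All length-n windows of a system-call trace."""
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

def train_normal_profile(normal_traces, n=3):
    """Set of all n-grams observed during normal operation."""
    profile = set()
    for trace in normal_traces:
        profile.update(ngrams(trace, n))
    return profile

def anomaly_score(trace, profile, n=3):
    """Fraction of the trace's n-grams that are unseen (0 = normal-looking)."""
    grams = ngrams(trace, n)
    if not grams:
        return 0.0
    return sum(1 for g in grams if g not in profile) / len(grams)
```

A trace scoring above some threshold would be flagged as an intrusion; the machine-learning classifiers in the paper replace this simple set-membership rule with learned decision boundaries.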

Paper 34: A Smart Home System based on Internet of Things

Abstract: The Internet of Things (IoT) describes a network infrastructure of identifiable things that share data through the Internet. A smart home is one of the applications for the Internet of Things. In a smart home, household appliances could be monitored and controlled remotely. This raises a demand for reliable security solutions for IoT systems. Authorization and authentication are challenging IoT security operations that need to be considered. For instance, unauthorized access, such as cyber-attacks, to a smart home system could cause danger by controlling sensors and actuators, opening the doors for a thief. This paper applies an extra layer of security of multi-factor authentication to act as a prevention method for mitigating unauthorized access. One of those factors is face recognition, as it has recently become popular due to its non-invasive biometric techniques, which is easy to use with cameras attached to most trending computers and smartphones. In this paper, the gaps in existing IoT smart home systems have been analyzed, and we have suggested improvements for overcoming them by including necessary system modules and enhancing user registration and log-in authentication. We propose software architecture for implementing such a system. To the best of our knowledge, the existing IoT smart home management research does not support face recognition and liveness detection within the authentication operation of their suggested software architectures.

Author 1: Rihab Fahd Al-Mutawa
Author 2: Fathy Albouraey Eassa

Keywords: Internet of Things (IoT); smart home; system; architecture; security; management

PDF

Paper 35: Integrated Fuzzy based Decision Support System for the Management of Human Disease

Abstract: To eliminate some of the inaccuracies in the diagnosis of human diseases, decision support systems based on algorithms and technologies such as artificial neural networks and fuzzy logic have been used. The results of such diagnoses are used for treatment and management purposes; inaccurate and imprecise diagnosis may lead to wrong treatment methods, which in turn may result in death or complications. Although treatment is widely carried out using drugs, other methods exist, such as alternative and complementary medicine. We propose an Integrated Fuzzy Based Decision Support System that focuses on integrating alternative and pure medicine for the management of malaria. The results obtained show that integrating these two treatment and management methods eliminates the limitations of the individual methods, bridging the gap between alternative and pure medicine in the treatment and management of human diseases. The system is implemented in C#.

Author 1: Blessing Ekong
Author 2: Idara Ifiok
Author 3: Ifreke Udoeka
Author 4: James Anamfiok

Keywords: Human diseases; fuzzy based decision support system; human disease; fuzzy logic; C#

PDF
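The fuzzification step at the heart of such a fuzzy decision support system can be sketched with a triangular membership function mapping a crisp input (e.g. body temperature) to a degree of membership in a linguistic term such as "high fever". The breakpoints below are hypothetical, and Python is used for brevity although the paper's system is implemented in C#.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at or below a, rising to 1 at b,
    falling back to 0 at or above c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```

A fuzzy rule base then combines such membership degrees across symptoms (e.g. temperature, headache severity) to produce a graded diagnosis rather than a hard yes/no.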

Paper 36: A Hybrid Intrusion Detection System for SDWSN using Random Forest (RF) Machine Learning Approach

Abstract: It is an established fact that network security systems have technical problems that tend to lead to security risks. Attackers continue to abuse security vulnerabilities and compromise systems and networks, and it is expensive, and sometimes extremely difficult, to resolve all design and computing faults. This suggests that methodologies relying on preventive measures alone are no longer sufficient and that intrusion detection is necessary as a last line of defense. In this paper, a hybrid Intrusion Detection System for Software-Defined Wireless Sensor Networks (SDWSN) is designed, incorporating the benefits of the Salp Swarm Optimization (SSO) algorithm and a machine-learning classifier based on Random Forest (RF). We propose SSO optimization procedures to select the ideal features for the intrusion detector and to improve the detection efficiency of the RF classifier. To assess the reliability of the proposed approach, we use the generic NSL-KDD dataset. The proposed hybrid IDS-SSO-RF classifier then analyzes the detected abnormal activities, identifying both known and unknown attacks. Experimental results show that the hybrid framework can reliably detect anomalous behavior and obtains better results in terms of delay, delivery ratio, drop overhead, energy consumption, and throughput.

Author 1: Indira K
Author 2: Sakthi U

Keywords: SDWSN; IDS; Salp Swarm Optimization; Random Forest Classifier

PDF

Paper 37: The Internet of Things for Crowd Panic Detection

Abstract: Crowd behavior detection is important for smart city applications such as people gathering for different events. However, it is a challenging problem due to the internal states of the crowd itself and the surrounding environment. This paper proposes a novel crowd behavior detection framework based on a number of parameters. We first exploit a computer vision approach based on the scale invariant feature transform (SIFT) to classify crowd behavior as either panic or normal. We then consider a number of other parameters from the surroundings, namely crowd coherency, social interaction, motion information, randomness in crowd speed, internal chaos level, crowd condition, crowd temporal history, and crowd vibration status, along with a time stamp. These parameters are fed to a deep learning model during the training stage, and the behavior of the crowd is detected during the testing stage. The experimental results show that the proposed method achieves significant performance in crowd behavior detection.

Author 1: Habib Ullah
Author 2: Ahmed B. Altamimi
Author 3: Rabie A. Ramadan

Keywords: VANET; smart cities; crowd behavior; deep learning; Recurrent Neural Networks (RNN); Convolutional Neural Networks (CNN); Scale Invariant Feature Transform (SIFT)

PDF

Paper 38: Short Poem Generation (SPG): A Performance Evaluation of Hidden Markov Model based on Readability Index and Turing Test

Abstract: We developed a Hidden Markov Model (HMM) that automatically generates short poems. The HMM was trained using the forward-backward algorithm, also known as the Baum-Welch algorithm; the training process ran for hundreds of iterations using recursion. We then used the Viterbi algorithm to decode the most likely hidden states to predict the next word; from each predicted word the model generates another, and so on, until it reaches the desired word length set in the program. Afterwards, the model was evaluated using several readability metrics that measure the reading difficulty and comprehensibility of the generated poems. We then performed a Turing test with 75 college students who are well versed in poetry; they judged whether each generated poem was created by a human or a machine. Based on the evaluation results, the highest readability score of the generated short poems is at the 16th-grade level, and 69.2% of the Turing-test participants agreed that most of the machine-generated poems were likely created by well-known poets and writers.

Author 1: Ken Jon M. Tarnate
Author 2: May M. Garcia
Author 3: Priscilla Sotelo-Bator

Keywords: Evaluation metrics; Hidden Markov Model; poetry generation; readability test; turing test

PDF
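The Viterbi decoding step described above can be sketched for a generic discrete HMM, with probabilities given as nested dicts. The tiny two-state example in the test is hypothetical; the poem model's actual trained parameters are of course not reproduced here.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable hidden-state sequence for `obs`."""
    # V[t][s] = (best probability of a path ending in state s at time t,
    #            the predecessor state on that path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))
```

In the generation setting, the decoded hidden states condition the choice of the next word, which is then fed back in to extend the poem.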

Paper 39: A Novel Image Fusion Scheme using Wavelet Transform for Concealed Weapon Detection

Abstract: The aim of this paper is to detect concealed weapons, especially in high-security places such as airports, train stations, and large crowds, where concealed weapons are not allowed, and to identify suspicious persons who may be carrying one. An image fusion technique using pixel alignment and the discrete wavelet transform is proposed, mainly for concealed weapon detection. Image fusion can be defined as extracting information from two or more images into a single image to enhance detection. It allows detection of weapons concealed underneath a person’s clothing with imaging sensors such as infrared or passive millimeter-wave sensors. A data fusion scheme for simpler sensors based on correlation coefficients is proposed and utilized. The proposed image fusion scheme applies fusion dependency rules using the wavelet transform (WT) and inverse wavelet transform (IWT); the fusion rule is to select the coefficient with the highest correlation, since higher correlation indicates a stronger co-existing feature. Experimental results show the superiority of the proposed algorithm in both quality and real-time requirements: its response time is about 40% lower than that of comparable algorithms, while it retains higher quality, outperforming comparable algorithms with a PSNR more than 10% higher on average.

Author 1: Hanan A. Hosni Mahmoud

Keywords: Concealed weapon detection; image fusion; pixel alignment; wave sensors

PDF
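The paper's fusion rule selects wavelet coefficients by correlation; as a simplified, hedged stand-in, this sketch shows the common absolute-maximum rule, which keeps, at each position, the coefficient carrying the stronger feature from either source image's wavelet band (e.g. one band from the infrared image and one from the millimeter-wave image).

```python
def fuse_max_abs(coeffs_a, coeffs_b):
    """Per-position fusion of two wavelet coefficient sequences,
    keeping whichever coefficient has the larger magnitude."""
    return [a if abs(a) >= abs(b) else b for a, b in zip(coeffs_a, coeffs_b)]
```

The fused coefficients are then passed through the inverse wavelet transform to reconstruct a single image combining features from both sensors.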

Paper 40: The Extent to which Individuals in Saudi Arabia are Subjected to Cyber-Attacks and Countermeasures

Abstract: In light of the rapid development of technology and the increase in the number of Internet users on computers and smart devices, the impact of cybercrime on enterprises, organizations, governments, and individuals has been significant. Research and reports on the impact of cybercrime and on methods of prevention and protection are published regularly; however, the majority focus on the impact on organizations and governments. This paper uses a survey methodology to highlight the impact of cybercrimes on individuals in Saudi Arabia and to measure their awareness of cybersecurity. In addition, it investigates the common cybercrimes that target individuals in Saudi Arabia and the countermeasures they take.

Author 1: Abdullah A H Alzahrani

Keywords: Cybercrimes; cybersecurity; identity theft; cyberattacks

PDF

Paper 41: Towards Social Network Sites Acceptance in e-Learning System: Students Perspective at Palestine Technical University-Kadoorie

Abstract: This study aims to examine social network sites acceptance in an e-Learning system and to propose a model encompassing the determining factors that affect students’ intentions to use social network sites in the e-Learning system. The proposed model was built on the Technology Acceptance Model (TAM), extended with perceived enjoyment, social influence, and perceived information security drawn from the literature review. The quantitative method of data collection using a questionnaire survey is used in the current study. The data were analysed using a structural equation modeling (SEM) approach with partial least squares (PLS) software, version 3. The results indicated that perceived ease of use, perceived usefulness, perceived enjoyment, social influence, and perceived information security have a significant and positive impact on students’ acceptance of social network sites in an e-Learning system at Palestine Technical University-Kadoorie. Theoretical and practical implications are discussed.

Author 1: Mohannad Moufeed Ayyash
Author 2: Fadi A.T. Herzallah
Author 3: Waleed Ahmad

Keywords: Social network sites; e-Learning system; perceived usefulness; perceived ease of use; perceived enjoyment; social influence; perceived information security; Palestine

PDF

Paper 42: Managing an External Depot in a Production Routing Problem

Abstract: This paper addresses a production and distribution problem in a supply chain. The supply chain consists of a plant with no storage capacity that produces only one type of product. The manufactured products are then transported to a depot for storage. Customer demand is met by a homogeneous fleet of vehicles that begin and end their trips at the depot. The objective of the study is to minimize the overall cost of production, inventory and transport throughout the supply chain. A Branch-and-Cut method and a hybrid Two-Phase Decomposition Heuristic combining Mixed Integer Programming and a Genetic Algorithm have been developed to solve the problem.

Author 1: Bi Kouaï Bertin Kayé
Author 2: Moustapha Diaby
Author 3: Tchimou N’Takpé
Author 4: Souleymane Oumtanaga

Keywords: Production; inventory; distribution; transport; branch-and-cut; decomposition heuristic; MIP; genetic algorithm

PDF

Paper 43: Machine Learning based Access Control Framework for the Internet of Things

Abstract: The main challenge facing the Internet of Things (IoT) in general, and IoT security in particular, is that humans have never handled such a huge number of nodes and quantity of data. Fortunately, it turns out that Machine Learning (ML) systems are very effective in the presence of these two elements. However, can IoT devices support ML techniques? In this paper, we investigated this issue and propose a twofold contribution: a thorough study of the IoT paradigm and its intersections with ML from a security perspective, and a holistic ML-based framework for access control, which is a central line of defense in modern IT systems. In addition to learning techniques, this second pillar is based on organization and attribute concepts to avoid the role-explosion problem, and is applied to a smart city case study to prove its effectiveness.

Author 1: Aissam Outchakoucht
Author 2: Anas Abou El Kalam
Author 3: Hamza Es-Samaali
Author 4: Siham Benhadou

Keywords: Access control; internet of things; machine learning; security; smart city

PDF

Paper 44: Enhancing the Bitrate and Power Spectral Density of PPM TH-IR UWB Signals using a Sub-Slot Technique

Abstract: Increasing the receiver’s bitrate and suppressing spectral lines are issues of major interest in the design of compliant Time-Hopping Impulse Radio (TH-IR) Ultra-Wide Band (UWB) systems. Suppression of spectral lines has commonly been addressed by randomizing the position of each pulse to make the period as large as possible. Our analysis suggests that this influences the overall shape of a signal’s Power Spectral Density (PSD) in a way that is useful for spectral line suppression, or for diminishing the PSD’s maximum peak power. A method for utilizing the system to generate a Dynamic-Location Pulse-Position Modulated (DLPPM) signal for transmission across a UWB communications channel is presented, and an analytical derivation of the PSD of the proposed DLPPM TH-IR UWB signal is introduced. Our proposed method can be applied without affecting the users of other concurrent applications. The theoretical PSD model for DLPPM TH-IR is compared with the PSD of conventional DLPPM TH-IR. The results show that spectral estimation methods based on the Fast Fourier Transform (FFT) significantly overestimate the continuous part of the PSD for small and medium signal lengths, which has implications for assessing interference margins by means of simulation. Another purpose of this paper is to improve a predesigned system by increasing the receiver’s bitrate. This is achieved by using the bits that control the sub-slot technique as information and designing a receiver capable of detecting them; the bitrate is effectively doubled. Finally, the proposed DLPPM TH-IR system has been built in Simulink/MATLAB to test its results against a conventional DLPPM TH-IR system.
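The effect of randomizing pulse positions on the PSD can be illustrated with a toy NumPy experiment (a simplification: unit impulses instead of shaped UWB pulses, and a plain periodogram as the FFT-based spectral estimate; frame sizes are arbitrary):

```python
import numpy as np

def pulse_train(n_frames, frame_len, jitter, rng):
    """One unit impulse per frame; its offset is dithered by up to `jitter`."""
    sig = np.zeros(n_frames * frame_len)
    for k in range(n_frames):
        off = rng.integers(0, jitter + 1) if jitter else 0
        sig[k * frame_len + off] = 1.0
    return sig

def periodogram(sig):
    """FFT-based estimate of the power spectral density."""
    return np.abs(np.fft.rfft(sig)) ** 2 / len(sig)

rng = np.random.default_rng(1)
periodic = periodogram(pulse_train(256, 32, 0, rng))
dithered = periodogram(pulse_train(256, 32, 16, rng))

# Excluding the DC bin, the strictly periodic train concentrates power in
# discrete spectral lines; dithering the pulse positions lowers that peak.
peak_periodic = periodic[1:].max()
peak_dithered = dithered[1:].max()
```

With these parameters the periodic train's line power sits at N_pulses²/N_samples per harmonic bin, while dithering spreads that energy across the band.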

Author 1: Bashar Al-haj Moh’d
Author 2: Nidal Qasem

Keywords: Bitrate; FFT; PPM; PSD; spectral estimation; sub-slot; TH-IR; UWB

PDF

Paper 45: Three-Dimensional Shape Reconstruction from a Single Image by Deep Learning

Abstract: Reconstructing a three-dimensional (3D) shape from a single image is one of the main topics in the field of computer vision. Some of the methods for 3D reconstruction adopt machine learning. These methods use machine learning to acquire the relationship between a 3D shape and a 2D image, and reconstruct 3D shapes by using the learned relationship. However, since only predefined features (pixels in the image) are used, it is not possible to obtain the desired 2D image features for 3D reconstruction. Therefore, this paper presents a method for reconstructing 3D shapes by learning features of 2D images using deep learning. This method uses a Convolutional Neural Network (CNN) for feature learning to reconstruct a 3D shape. The pooling layers and convolutional layers of the CNN capture spatial information about the images and automatically select valuable image features. This paper presents two types of reconstruction methods. The first estimates the normal vector of the object and then reconstructs the 3D shape from the normal vector by deep learning. The second directly reconstructs the 3D shape from an image with a deep neural network. The experimental results using human face images showed that the proposed method can reconstruct 3D shapes with higher accuracy than the previous methods.

Author 1: Kentaro Sakai
Author 2: Yoshiaki Yasumura

Keywords: Computer vision; 3D reconstruction; deep learning; convolutional neural network; feature learning; normal vector

PDF

Paper 46: An Attribution of Cyberattack using Association Rule Mining (ARM)

Abstract: With the rapid development of computer networks and information technology, attackers have taken advantage of the situation to launch complicated cyberattacks. These complicated cyberattacks cause many problems for organizations because mitigating them and reducing the infection rate requires effective cyberattack attribution. Cyber Threat Intelligence (CTI) has gained wide coverage in the media due to its capability to provide CTI feeds from various data sources that can be used for cyberattack attribution. In this paper, we study the relationships among basic Indicators of Compromise (IOC) in a network traffic dataset using a data mining approach. This dataset is obtained using a crawler that is deployed to pull security feeds from Shadowserver. Then an association analysis method using the Apriori Algorithm is implemented to extract rules that can reveal interesting relationships between large sets of data items. Finally, the extracted rules are evaluated with the interestingness measures of support, confidence and lift to quantify the value of the association rules generated by the Apriori Algorithm. By applying the Apriori Algorithm to the Shadowserver dataset, we discover association rules among several IOCs which can help attribute the cyberattack.
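A minimal, self-contained version of this rule-mining step can be sketched as follows (toy, hypothetical IOC transactions; the paper's actual Shadowserver feed and thresholds are not reproduced):

```python
from itertools import combinations

def apriori_rules(transactions, min_support=0.4, min_conf=0.6):
    """Mine association rules scored by support, confidence and lift."""
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    # Level-wise search: every subset of a frequent itemset is frequent.
    frequent = {}
    level = [frozenset([i]) for i in {i for t in transactions for i in t}]
    while level:
        level = [s for s in level if support(s) >= min_support]
        frequent.update({s: support(s) for s in level})
        level = list({a | b for a in level for b in level
                      if len(a | b) == len(a) + 1})

    rules = []
    for itemset, sup in frequent.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for lhs in map(frozenset, combinations(itemset, r)):
                rhs = itemset - lhs
                conf = sup / support(lhs)
                if conf >= min_conf:
                    rules.append((lhs, rhs, sup, conf, conf / support(rhs)))
    return rules

# Toy Indicator-of-Compromise transactions (hypothetical feed entries).
feeds = [frozenset(t) for t in (
    {"bad_ip", "c2_domain"}, {"bad_ip", "c2_domain", "md5_hash"},
    {"bad_ip", "md5_hash"}, {"c2_domain", "md5_hash"}, {"bad_ip", "c2_domain"},
)]
rules = apriori_rules(feeds)
```

Each returned tuple pairs an antecedent and consequent with the three interestingness measures the abstract names, so rules can be ranked or filtered on any of them.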

Author 1: Md Sahrom Abu
Author 2: Siti Rahayu Selamat
Author 3: Robiah Yusof
Author 4: Aswami Ariffin

Keywords: CTI; association rule mining; Apriori Algorithm; attribution; interestingness measures

PDF

Paper 47: Extending Tangible Interactive Interfaces for Education: A System for Learning Arabic Braille using an Interactive Braille Keypad

Abstract: Learning Braille enables people with visual impairments to read, write and communicate with others. Several educational tools exist for learning Braille. Unfortunately, for Arabic Braille, there is a lack of interactive educational tools, and what is mostly used are traditional learning tools such as the Braille block. Replacing those tools with more effective and interactive e-learning tools would help to improve the learning process. This paper introduces a new educational system with a tangible and interactive interface. This system aims to help blind children learn Arabic Braille letters and numbers using an interactive tactile Braille keypad together with an educational website. The interactive tactile Braille keypad was built using an Arduino connected to the educational website. A usability test was conducted; the results showed that the system is easy to use and suggested that using the interactive Braille keypad with the educational website will improve the learning outcomes for blind children.

Author 1: Hind Taleb Bintaleb
Author 2: Duaa Al Saeed

Keywords: Braille; tangible interface; e-learning; Arduino; accessibility; usability; visually impaired; blind

PDF

Paper 48: Very High-Performance Echo Canceller for Digital Terrestrial Television in Single Frequency Network

Abstract: The principal aim of this paper is to cancel out the natural and man-made echoes in single-frequency networks (SFN). The challenge is to detect and remove feedback echoes and enhance the essential parameters in SFNs of digital terrestrial television broadcasting (DTTB) transmitter systems, especially the Modulation Error Ratio (MER), while optimizing coverage areas. We suggest a Digital Video Broadcasting (DVB) gap filler (GF) with two types of echo cancellation: a Digital Adaptive Equalizer (DAE) and a Doppler Enhanced Echo Canceller (DEEC). The proposed GF outperforms standard GF (SGF), finite impulse response filter (FIR GF), and adaptive GF (AGF) techniques by 33%, 17%, and 13%, respectively. Furthermore, the obtained MER makes the proposed GF (PGF) ideal for operating in SFNs using the Coded Orthogonal Frequency Division Multiplex (COFDM) technique.
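The adaptive-equalizer idea behind echo cancellation can be illustrated with a generic least-mean-squares (LMS) filter in NumPy (a textbook sketch, not the paper's DAE/DEEC designs; the echo path and parameters are simulated):

```python
import numpy as np

def lms_echo_cancel(reference, received, taps=8, mu=0.02):
    """Adaptive FIR echo canceller trained with the LMS algorithm: the
    filter learns the echo path from `reference` (the transmitted signal)
    and subtracts its echo estimate from `received`."""
    w = np.zeros(taps)
    residual = np.zeros(len(received))
    for n in range(len(received)):
        x = reference[max(0, n - taps + 1):n + 1][::-1]   # newest first
        x = np.pad(x, (0, taps - len(x)))
        e = received[n] - w @ x        # residual after cancellation
        residual[n] = e
        w += 2 * mu * e * x            # LMS weight update
    return residual, w

# Simulated single-path echo: a delayed, attenuated copy of the reference.
rng = np.random.default_rng(0)
ref = rng.standard_normal(5000)
echo = 0.5 * np.concatenate([np.zeros(3), ref[:-3]])
residual, w = lms_echo_cancel(ref, echo)
```

After convergence the learned tap at the echo delay approaches the echo gain (here, w[3] ≈ 0.5) and the residual shrinks toward zero.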

Author 1: El Miloud Ar-Reyouchi
Author 2: Yousra Lamrani
Author 3: Kamal Ghoumid
Author 4: Salma Rattal

Keywords: Gap filler; Digital Adaptive Equalizer (DAE); Doppler Enhanced Echo Canceller (DEEC); Single-Frequency Networks (SFN); Coded Orthogonal Frequency Division Multiplex (COFDM)

PDF

Paper 49: Assessing Advanced Machine Learning Techniques for Predicting Hospital Readmission

Abstract: Predicting the probability of hospital readmission is one of the most important healthcare problems for delivering satisfactory, high-quality service in chronic diseases such as diabetes, in order to identify needed resources such as rooms, medical staff, beds, and specialists. Unfortunately, not many studies in the literature address this issue; most involve forecasting the probability of diseases. Several machine learning methods can be implemented for prediction. Nonetheless, comparative studies that identify the most effective approaches are also scarce. With this aim, our paper presents a comparative study of five popular methods for predicting the probability of hospital readmission in patients suffering from diabetes. The selected techniques cover linear discriminant analysis, instance-based learning (K-nearest neighbors), and ensemble-based learning (random forest, AdaBoost, and gradient boosting). The study showed that random forest achieved the best performance, whereas linear discriminant analysis performed worst.
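Of the compared families, the instance-based one is simple enough to show end-to-end; below is a minimal K-nearest-neighbours classifier in plain NumPy on hypothetical toy data (the study itself uses real diabetes readmission records and library implementations):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Minimal K-nearest-neighbours classifier: Euclidean distance,
    majority vote among the k closest training points."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(d)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

# Hypothetical toy data: two blobs standing in for "readmitted" vs "not".
rng = np.random.default_rng(42)
X0 = rng.normal(0.0, 0.5, size=(50, 2))
X1 = rng.normal(2.0, 0.5, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)
acc = (knn_predict(X, y, X, k=5) == y).mean()
```

On real patient features one would standardize the columns first, since Euclidean distance is scale-sensitive.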

Author 1: Samah Alajmani
Author 2: Kamal Jambi

Keywords: Boosting; random forest; linear discriminant analysis; k-nearest neighbor; machine learning; hospital readmission; predictive analytics

PDF

Paper 50: 3D Trilateration Localization using RSSI in Indoor Environment

Abstract: The Received Signal Strength Indicator (RSSI) is one of the most popular techniques for outdoor and indoor localization. There has been much previous research on RSSI-based indoor localization systems. However, most of them lack a solid classification method that reduces localization errors and yields better accuracy. This paper focuses on indoor localization methods to provide a technological perspective on indoor positioning systems. It proposes an indoor localization system that uses the 3D trilateration method to locate target tags from RFID readers, with RSSI measurements used for range determination. There are six test cases for each reader. This system can track any target within the selected area with lower localization error.
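The two core steps, converting RSSI to range and solving the trilateration, can be sketched as follows (a standard log-distance path-loss model and a linearized least-squares solve; reader positions and model parameters are illustrative, not the paper's measured values):

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-40.0, n=2.0):
    """Log-distance path-loss model: distance (m) from RSSI (dBm).
    `tx_power` is the RSSI expected at 1 m; `n` the path-loss exponent."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate_3d(anchors, distances):
    """Linearized least-squares 3D trilateration from >= 4 anchors."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(distances, float)
    # Subtract the first anchor's sphere equation from the others to
    # obtain a linear system A p = b in the unknown position p.
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Hypothetical reader positions (m) and a tag at (2, 3, 1).
readers = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]
tag = np.array([2.0, 3.0, 1.0])
dists = [np.linalg.norm(tag - np.array(r)) for r in readers]
est = trilaterate_3d(readers, dists)
```

With noisy RSSI-derived ranges the least-squares formulation degrades gracefully, which is why more than the minimum four anchors are typically used.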

Author 1: Nur Diana Rohmat Rose
Author 2: Low Tan Jung
Author 3: Muneer Ahmad

Keywords: RSSI; RFID; Indoor Positioning System (IPS); trilateration; 3D localization

PDF

Paper 51: Cloud based Power Failure Sensing and Management Model for the Electricity Grid in Developing Countries: A Case of Zambia

Abstract: In most developing countries, huge parts of the electric power grid are not monitored, making it difficult for the service provider to determine when there is a power failure in the electric grid, especially if the failure occurs at the low voltage level. Clients usually have to call and inform the utility’s customer service centre to report a power failure. However, this way of addressing power outages is not very effective and usually results in long durations of system interruptions. This paper proposes a cloud based power failure sensing system to enable automatic power failure sensing and reporting, as well as monitoring of the low voltage power network in Zambia, a developing country in Southern Africa. A baseline study was conducted to determine the challenges faced by both the electric power utility company, the Zambia Electricity Supply Corporation (ZESCO), and the electricity consumers in the current power failure reporting model. The results of the baseline study indicate that electricity consumers face challenges when reporting power failures. These include failure to get through to the customer call centre due to constantly engaged lines, unanswered calls, failed calls and network failure. The challenge faced by the electricity service provider is the inability to attend to all customers through the call centre, as customer calls are rejected due to limited call centre system resources. To address these challenges, the proposed cloud based power failure sensing model makes use of a voltage sensor circuit, an Arduino microcontroller board, a SIM808 GSM/GPRS/GPS module, cloud architecture, a web application and the Google Maps API. Results from the proposed model show improved reporting time, location information and quick response to power failures.

Author 1: Janet Nanyangwe Sinkala
Author 2: Jackson Phiri

Keywords: Cloud architecture; power failure sensing; low voltage network; electric grid

PDF

Paper 52: JobChain: An Integrated Blockchain Model for Managing Job Recruitment for Ministries in Sultanate of Oman

Abstract: Industries around the world have been revolutionized by the arrival of blockchain technology. Blockchain applications and use cases are under development in different domains. This research presents a blockchain platform, “JobChain”, to manage job recruitment. The case study covers job recruitment in various Ministries in the Sultanate of Oman. Currently, in Oman, citizens learn of job vacancies through advertisements posted in newspapers or on social media. A job seeker then applies for the desired job, and thereafter the qualified candidates are called for tests/interviews. To ease this process, a blockchain-based solution which includes various Ministries and the citizens/residents of the Sultanate of Oman is proposed in this research. Ministries can post job vacancies on the blockchain and qualified citizens can submit their applications. Relevant cryptographic functions are used to verify the authenticity of the participants in the blockchain network. This gives citizens confidence in a trusted, secure government, which is essential for the development of a country. Unlike traditional models, blockchain eliminates the need for intermediary agents (e.g. job consultancies), thereby providing direct communication between the participants of the blockchain. The proposed blockchain framework helps citizens in Oman stay updated about job vacancies. Hyperledger Composer Playground is used to design and test the proposed blockchain business network. Preliminary results show that the participants and assets are created successfully and that the transactions to approve a job vacancy and a job application are carried out through the proposed blockchain network.
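The paper's network is built on Hyperledger Composer; the tamper-evidence property underlying any such ledger can be illustrated with a minimal hash-chained structure in plain Python (a generic sketch, not the Composer data model; the event payloads are hypothetical):

```python
import hashlib, json

def make_block(data, prev_hash):
    """Each block commits to its payload and the previous block's hash."""
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def append(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append(make_block(data, prev))

def verify(chain):
    """True iff every block's hash is intact and links to its predecessor."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        if block["hash"] != make_block(block["data"], block["prev"])["hash"]:
            return False
        prev = block["hash"]
    return True

# Hypothetical recruitment events: a vacancy posting and an application.
ledger = []
append(ledger, {"type": "vacancy", "ministry": "Health", "post": "analyst"})
append(ledger, {"type": "application", "applicant": "A123", "post": "analyst"})
```

Because each hash covers the previous one, altering any recorded vacancy or application invalidates every later block, which `verify` detects.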

Author 1: Vinu Sherimon
Author 2: Sherimon P.C
Author 3: Alaa Ismaeel

Keywords: Blockchain; permissioned; chaincode; hyperledger composer playground; job recruitment

PDF

Paper 53: Automated Machine Learning Tool: The First Stop for Data Science and Statistical Model Building

Abstract: Machine learning techniques are designed to derive knowledge from existing data. Increased computational power and the use of natural language processing and image processing methods have made the creation of rich data easy. Good domain knowledge is required to build useful models. Uncertainty remains around choosing the right sample data, variable reduction and the selection of a statistical algorithm. A suitable statistical method coupled with explanatory variables is critical for model building and analysis. There are multiple choices around each parameter. An automated system which could help scientists to select an appropriate dataset coupled with a learning algorithm would be very useful. A freely available web-based platform, named the automated machine learning tool (AMLT), is developed in this study. AMLT automates the entire model building process. It is equipped with the most commonly used variable selection methods and statistical methods for both supervised and unsupervised learning, and can also perform clustering. AMLT uses statistical principles like R² to rank the models and performs automatic test-set validation. The tool is validated for connectivity and capability by reproducing two published works.
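The R²-based model ranking the abstract mentions can be sketched with a plain-NumPy computation (model names and predictions below are hypothetical stand-ins for AMLT's candidate models):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination, used here to rank candidate models."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rank_models(y_true, predictions):
    """Sort candidate models by held-out R², best first."""
    scores = {name: r_squared(y_true, p) for name, p in predictions.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical held-out predictions from three candidate models.
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
preds = {"linear": y + 0.1, "svm": y + 0.5, "ann": np.full(5, y.mean())}
ranking = rank_models(y, preds)
```

A model that only predicts the mean scores R² = 0, so the ranking naturally pushes uninformative models to the bottom.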

Author 1: DeepaRani Gopagoni
Author 2: P V Lakshmi

Keywords: Automated machine learning; regression models; support vector machines; QSAR; QSPR; artificial neural networks; k-means clustering; R program; shiny web app; drug design; market analysis; supervised learning; Naive Bayes classification

PDF

Paper 54: Towards a Powerful Solution for Data Accuracy Assessment in the Big Data Context

Abstract: Data accuracy is one of the main dimensions of data quality; it measures the degree to which data are correct. Knowing the accuracy of an organization's data reflects the level of reliability it can assign to them in decision-making processes. Measuring data accuracy in a Big Data environment is a process that involves comparing the data to be assessed with some "reference data" considered by the system to be correct. However, such a process can be complex or even impossible in the absence of appropriate reference data. In this paper, we focus on this problem and propose an approach to obtain the reference data thanks to the emergence of Big Data technologies. Our approach is based on the upstream selection of a set of criteria that we define as "Accuracy Criteria". We furthermore use a set of techniques such as Big Data Sampling, Schema Matching, Record Linkage, and Similarity Measurement. The proposed model and experimental results make us more confident in the importance of the data quality assessment solution and in configuring the accuracy criteria to automate the selection of reference data in a Data Lake.
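One way to operationalize the comparison against reference data is a similarity threshold; the sketch below scores hypothetical records with edit-distance similarity (the paper combines such measurement with sampling, schema matching and record linkage, which are omitted here):

```python
def levenshtein(a, b):
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def similarity(a, b):
    """Normalized similarity in [0, 1]."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def accuracy(records, reference, threshold=0.9):
    """Share of records close enough to their reference counterpart."""
    matches = sum(similarity(r, ref) >= threshold
                  for r, ref in zip(records, reference))
    return matches / len(records)

# Hypothetical linked record pairs (assessed value vs reference value).
data = ["Casablanca", "Rabbat", "Marrakech", "Tangier"]
ref  = ["Casablanca", "Rabat", "Marrakech", "Tanger"]
score = accuracy(data, ref)
```

The threshold plays the role of an accuracy criterion: tightening it makes the measure stricter about near-miss values such as "Rabbat" vs "Rabat".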

Author 1: Mohamed TALHA
Author 2: Nabil ELMARZOUQI
Author 3: Anas ABOU EL KALAM

Keywords: Big data; data quality; data accuracy assessment; big data sampling; schema matching; record linkage; similarity measurement

PDF

Paper 55: Semantic Architecture for Modelling and Reasoning IoT Data Resources based on SPARK

Abstract: The Internet of Things (IoT) is one of the most valuable technologies today. Through it, everything around the globe becomes connected and intelligent, eliminating the need for human-to-human interaction to perform tasks. It does this by turning objects such as humans, machines and devices into addressable Internet Protocol (IP) endpoints in the network environment, expressed through different sensor and actuator devices which facilitate the interaction between all of them. These different types of sensors generate a large volume of varied data. Such sensor data is often rendered useless by its heterogeneity and lack of interoperability, since it is produced in unstructured form. Leveraging semantic web techniques can handle these main challenges facing IoT applications. Hence, the main contribution of this research is to boost the performance and quality of sensor information retrieved from IoT resources and applications by using semantic web technologies to resolve the problems of heterogeneity and interoperability, converting the unstructured sensor data into structured form in order to reach the next level of exploitation of the sensors employed in IoT applications. This research also aims to improve the performance of handling the tremendous amount of IoT data by utilizing Big Data techniques such as Spark and its query language, SPARK-SQL, as a streaming query language for a colossal amount of information. The proposed architecture demonstrates that using semantic techniques to model streaming sensor data improves the value of the information and permits us to gather new information. Moreover, using SPARK improves the performance of exploiting this sensor information in terms of query retrieval time, particularly when compared with running the same queries using the conventional SPARQL query language.

Author 1: Ahmed Salama
Author 2: Masoud E. Shaheen
Author 3: Haytham Al-Feel

Keywords: Big Data; Internet of Things; Semantic Modelling; Semantic Reasoning; Semantic Rules; Sensors; Apache SPARK; SPARK-SQL

PDF

Paper 56: Performance Evaluation of a LoRa-GPRS Integrated Electricity Usage Monitoring System for Decentralized Mini-Grids

Abstract: Emerging Internet of Things (IoT) technologies such as Long Range (LoRa), combined with traditional cellular communications technologies such as the General Packet Radio Service (GPRS), offer decentralized mini-grid companies the opportunity to have cost-effective monitoring systems for mini-grid resources. Nevertheless, most existing decentralized mini-grid companies still rely on traditional cellular networks to fully monitor electricity consumption information, which is not a feasible solution, especially for resource-constrained mini-grid systems. This paper presents the performance evaluation of the proposed LoRa-GPRS integrated power consumption monitoring system for decentralized mini-grid centers. Each mini-grid center consists of a network of custom designed smart meters equipped with LoRa modules for local data collection, while a GPRS gateway is used to transmit collected data from the local monitoring centre to the cloud server. Performance testing was conducted using five electrical appliances, whose power consumption data from the cloud server was compared with the same data collected by a reference digital meter. The correlation between the two data sets was used as the key performance metric of the proposed system. The performance results show that the proposed system has good accuracy, hence provides a cost-effective framework for monitoring and managing power resources in decentralized mini-grid centers.
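The key metric, correlation between the two data sets, reduces to a one-line NumPy computation; the readings below are hypothetical stand-ins for the five appliances' consumption data:

```python
import numpy as np

def agreement(meter_kwh, reference_kwh):
    """Pearson correlation between smart-meter readings and the
    reference digital meter's readings for the same appliances."""
    return np.corrcoef(meter_kwh, reference_kwh)[0, 1]

# Hypothetical consumption readings (kWh) for five appliances.
smart_meter = np.array([0.96, 1.52, 0.33, 2.10, 0.75])
reference   = np.array([1.00, 1.50, 0.35, 2.05, 0.78])
r = agreement(smart_meter, reference)
```

A correlation near 1 indicates the LoRa-collected data tracks the reference meter closely, which is the sense in which the abstract reports "good accuracy".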

Author 1: Shaban Omary
Author 2: Anael Sam

Keywords: Internet of Things; LoRa; GPRS; decentralized mini-grid systems; electricity usage monitoring

PDF

Paper 57: An Intellectual Detection System for Intrusions based on Collaborative Machine Learning

Abstract: The need for information security in networks has grown due to the impressive growth of web applications. Several methods of intrusion detection are used to detect irregularities; these depend on precision, detection frequency and other parameters, and are expected to adapt to dynamically changing threat landscapes. To accomplish consistent anomaly detection in a network, many machine learning algorithms have been formulated by researchers. A technique based on unsupervised machine learning that uses two separate machine learning algorithms to identify anomalies in a network, namely a convolutional autoencoder and a softmax classifier, is proposed. These deep models were trained on the NSL-KDD training dataset and evaluated on the NSL-KDD test dataset. Using well-known classification metrics such as accuracy, precision and recall, these machine learning models were assessed. The experimental findings of the developed intrusion detection system model showed promising outcomes for anomaly detection in real-world implementation, and are compared with prevailing definitive machine learning techniques. This strategy increases the detection of network intrusions and offers a renewed approach to intrusion detection research.

Author 1: Dhikhi T
Author 2: M.S. Saravanan

Keywords: Intrusion detection; machine learning; deep learning; convolutional autoencoder; softmax classifier; NSL-KDD dataset

PDF

Paper 58: Dataset Augmentation for Machine Learning Applications of Dental Radiography

Abstract: The performance of any machine learning algorithm depends heavily on the quality and quantity of the training data. Machine learning algorithms, driven by training data, can accurately predict and produce the right outcome when trained with a sufficient amount of quality data. In medical applications, being more critical, accuracy is of utmost importance. Obtaining enough medical imaging data to train a machine learning algorithm is difficult for a variety of reasons. An effort has been made to produce an augmented dental radiography dataset to train machine learning algorithms. 116 panoramic dental radiographs have been manually segmented for each tooth, producing 32 classes of teeth. Out of 3712 images of individual teeth, 2910 were used for machine learning through general augmentation methods, including rotation, intensity transformation and flipping of the images, creating a massive dataset of 5.12 million unique images. The dataset is labeled and classified into 32 classes. This dataset can be used to train deep convolutional neural networks to perform classification and segmentation of teeth in X-rays, Cone-Beam CT scans and other radiographs. We retrained AlexNet on a subset of 80,000 images of the entire dataset and obtained a classification accuracy of 98.88% on 10 classes. Retraining on the original dataset yielded 88.31%. The result is evidence of nearly a 10% increase in the performance of the classifier trained on the augmented dataset. The training and validation datasets include teeth affected by metal objects. The manually segmented dataset can be used as a benchmark to evaluate the performance of machine learning algorithms for tooth segmentation and tooth classification.
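The three augmentation transforms named in the abstract can be sketched in a few lines of NumPy (an illustrative combination; the paper's exact rotation angles and intensity gains are not specified here, so the values below are assumptions):

```python
import numpy as np

def augment(img):
    """Generate augmented variants of a 2D image array via rotation,
    horizontal flipping and intensity transformation."""
    variants = []
    for k in range(4):                         # 0/90/180/270 degree rotations
        rot = np.rot90(img, k)
        for flip in (rot, np.fliplr(rot)):     # original + horizontal flip
            for gain in (0.8, 1.0, 1.2):       # intensity scaling
                variants.append(np.clip(flip * gain, 0, 255))
    return variants

# Stand-in for a segmented tooth patch from a panoramic radiograph.
tooth = np.arange(16, dtype=float).reshape(4, 4)
augmented = augment(tooth)
```

Each source patch yields 4 × 2 × 3 = 24 variants here; chaining finer rotation steps and more intensity levels is how a few thousand segmented teeth can grow into millions of unique training images.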

Author 1: Shahid Khan
Author 2: Altaf Mukati

Keywords: Data augmentation; Cone-Beam Computed Tomography; dental X-Rays; panoramic; dataset; classification; deep convolutional neural network; benchmark

PDF

Paper 59: An Optimal Prediction Model’s Credit Risk: The Implementation of the Backward Elimination and Forward Regression Method

Abstract: The purpose of this paper is to verify whether there is a relationship between credit risk, the main threat to banks, and the demographic, marital, cultural and socio-economic characteristics of a sample of 40 credit applicants, using the optimal backward elimination model and the forward regression method. Following the statistical modeling, the final result identifies the variables that have a degree of significance lower than 5%, and therefore a significant relationship with credit risk, namely the socio-occupational category (CSP), the amount of credit requested, the repayment term and the type of credit. By implementing the second method, however, the place-of-residence variable was additionally selected as an impacting variable for the chosen model. Overall, these features will help us better predict the risk of bank credit.
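A simplified version of the backward elimination loop can be sketched in NumPy, using the rule of thumb |t| ≥ 2 in place of an exact 5% p-value (the data below are synthetic and the variable names illustrative, not the paper's applicant data):

```python
import numpy as np

def backward_elimination(X, y, names, t_crit=2.0):
    """Iteratively drop the weakest predictor until all |t| >= t_crit.
    (|t| ~ 2 roughly corresponds to the 5% significance level.)"""
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        A = np.column_stack([np.ones(len(y)), X[:, keep]])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        dof = len(y) - A.shape[1]
        sigma2 = resid @ resid / dof
        cov = sigma2 * np.linalg.inv(A.T @ A)
        t = beta[1:] / np.sqrt(np.diag(cov)[1:])   # skip the intercept
        worst = int(np.argmin(np.abs(t)))
        if abs(t[worst]) >= t_crit:
            break                                   # all predictors significant
        keep.pop(worst)                             # drop the weakest one
    return [names[i] for i in keep]

# Synthetic sample of 40 "applicants": y depends on amount and term only;
# age has no real effect and is typically the first variable eliminated.
rng = np.random.default_rng(7)
n = 40
amount, term, age = rng.random(n), rng.random(n), rng.random(n)
y = 3.0 * amount + 2.0 * term + 0.1 * rng.standard_normal(n)
selected = backward_elimination(np.column_stack([amount, term, age]),
                                y, ["amount", "term", "age"])
```

Forward regression works in the mirror direction, starting from an empty model and adding the most significant candidate at each step.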

Author 1: Sara HALOUI
Author 2: Abdeslam El MOUDDEN

Keywords: Credit risk; prediction; optimal model; backward elimination; statistical modeling

PDF

Paper 60: Digital Twins Development Architectures and Deployment Technologies: Moroccan use Case

Abstract: With the initiation of the fourth industrial revolution and the advent of information and communication technologies, which reinforce the development of advanced technological solutions engaging data science, artificial intelligence, and cyber-physical systems, many long-established research concepts have been revived for in-depth application within manufacturing plants. Interest is thus turning more and more towards technologies and approaches that can combine the virtual world, with its increased capacities in computer science and processing, and the physical world, with its complex systems and constantly evolving requirements. A relevant concept in this context is that of digital twins. Digital twins, as defined by their founder Dr. Michael Grieves, are virtual replicas of a physical system that evolve within a virtual environment in order to mirror their real counterparts’ life cycle and evolution within the physical environment, with applications in numerous domains. This paper aims to present a literature review of the digital twin concept, its different development and deployment architectures, and its potential for application across Moroccan industrial ecosystems.

Author 1: Mezzour Ghita
Author 2: Benhadou Siham
Author 3: Medromi Hicham

Keywords: Digital twins; industry 4.0; digital twins challenges and opportunities; Moroccan industrial context

PDF

Paper 61: An Ontology-Driven IoT based Healthcare Formalism

Abstract: Recent developments in Internet of Things (IoT) paradigms have significantly influenced human life, making it much more comfortable, secure and relaxed. With the remarkable upsurge of smart systems and applications, people are becoming accustomed to using these devices and depend on them heavily. The advent of modern smart healthcare systems and significant advancements in IoT-enabled technologies have made it possible for patients and physicians to be connected in real time, providing healthcare services whenever and wherever needed. These systems often consist of tiny sensors and usually run on smart devices through mobile applications. However, they become even more challenging when intelligent decisions must be made dynamically in a highly decentralized environment. In this paper, we propose a Belief-Desire-Intention (BDI) based multi-agent formalism for ontology-driven healthcare systems that performs BDI-based reasoning to make intelligent decisions dynamically in order to achieve the desired goals. We illustrate the use of the proposed approach with a simple case study and a prototypal implementation of a heart monitoring application.
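The perceive-deliberate-act cycle of a BDI agent can be sketched as a minimal Python class (an illustrative skeleton, not the paper's ontology-driven formalism; thresholds and action names are hypothetical):

```python
class HeartMonitorAgent:
    """Minimal Belief-Desire-Intention loop for a heart-rate monitor:
    beliefs are revised from sensor readings, the desire is a fixed goal,
    and intentions are the actions selected to pursue it."""

    def __init__(self, low=50, high=120):
        self.beliefs = {}                              # current world model
        self.desires = ["keep_heart_rate_normal"]      # standing goal
        self.intentions = []                           # committed actions
        self.low, self.high = low, high

    def perceive(self, heart_rate):
        self.beliefs["heart_rate"] = heart_rate        # belief revision

    def deliberate(self):
        """Reconsider intentions against current beliefs and desires."""
        self.intentions.clear()
        hr = self.beliefs.get("heart_rate")
        if hr is not None and not (self.low <= hr <= self.high):
            self.intentions.append("alert_physician")

    def act(self):
        return list(self.intentions)

agent = HeartMonitorAgent()
agent.perceive(135)       # abnormal reading from the sensor
agent.deliberate()
actions = agent.act()     # the agent commits to alerting the physician
```

In the paper's multi-agent setting, several such agents would coordinate, with an ontology supplying the shared vocabulary for beliefs and goals.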

Author 1: Salwa Muhammad Akhtar
Author 2: Makia Nazir
Author 3: Kiran Saleem
Author 4: Hafiz Mahfooz Ul Haque
Author 5: Ibrar Hussain

Keywords: Internet of Things; BDI reasoning agents; ontology; smart healthcare

PDF

Paper 62: Ransomware Behavior Attack Construction via Graph Theory Approach

Abstract: Ransomware has become a prominent cyberattack trend, with a reputation among malware for imposing massive recovery costs and time on its victims. Previous studies and solutions have shown that, for malware detection, malware behavior must be prioritized and analyzed in order to recognize attack patterns. Although current state-of-the-art solutions and frameworks use dynamic analysis approaches such as machine learning, which provide more impact than static approaches, there is no approachable way of representing the analysis, especially for detection that relies on malware behavior. Therefore, this paper proposes a graph theory approach in which ransomware behavior is analyzed and visualized as a graph-based pattern. An experiment was conducted with ten ransomware samples, verified using VirusTotal. The file system was then selected among the features as a medium for understanding ransomware behavior using data capturing tools. The result of the analysis was visualized as a graph pattern in Neo4j, a graph database tool. Using the graph as a base, we discuss how each type of ransomware acts differently in the file system and analyze which nodes have the most impact during the analysis.
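The abstract above turns captured file-system events into a graph and asks which node has the most impact. As a rough illustration only (not the authors' Neo4j pipeline; the event triples, file names, and the degree-based "impact" measure below are all hypothetical), the idea can be sketched in Python with a plain adjacency structure:

```python
from collections import defaultdict

# Hypothetical file-system events captured during dynamic analysis:
# (process, action, target-file) triples for one ransomware sample.
events = [
    ("ransom.exe", "READ",   "doc1.txt"),
    ("ransom.exe", "WRITE",  "doc1.txt.locked"),
    ("ransom.exe", "READ",   "doc2.txt"),
    ("ransom.exe", "WRITE",  "doc2.txt.locked"),
    ("ransom.exe", "DELETE", "doc1.txt"),
    ("ransom.exe", "DELETE", "doc2.txt"),
    ("ransom.exe", "WRITE",  "README_RANSOM.txt"),
]

def build_graph(events):
    """Directed graph: process -> file, edges labelled by action."""
    adj = defaultdict(list)
    for proc, action, target in events:
        adj[proc].append((action, target))
    return adj

def most_impacted(adj):
    """Node touched by the most events (a crude 'impact' measure)."""
    degree = defaultdict(int)
    for proc, edges in adj.items():
        degree[proc] += len(edges)
        for _, target in edges:
            degree[target] += 1
    return max(degree, key=degree.get)

graph = build_graph(events)
print(most_impacted(graph))  # the process node dominates this toy trace
```

In a real Neo4j workflow the same question would be expressed as a Cypher degree query over the stored event graph; the dictionary here only stands in for that store.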

Author 1: Muhammad Safwan Rosli
Author 2: Raihana Syahirah Abdullah
Author 3: Warusia Yassin
Author 4: Faizal M.A
Author 5: Wan Nur Fatihah Wan Mohd Zaki

Keywords: Ransomware; behavior analysis; graph theory; file activity system; Neo4j

PDF

Paper 63: A Framework for Producing Effective and Efficient Secure Code through Malware Analysis

Abstract: Malware attacks are creating huge inconveniences for organizations and security experts. Due to insecure web applications, small businesses and personal systems are the most vulnerable targets of malware attacks. In the wake of these burgeoning cyber security breaches, this article proposes a framework for a complete malware analysis process including dynamic analysis, static analysis, and reverse engineering. Further, the article provides an approach to malicious code identification, mitigation, and management through a hybrid process of malware analysis, a priority-based vulnerability mitigation process and various source code management approaches. The framework delivers a combined package of identification, mitigation and management that simplifies the process of malicious code handling. The proposed framework also gives a solution for reused code in the software industry. Successful implementation of the framework will make code more robust in the face of unexpected behavior and deliver a revolutionary stage-wise process for malicious code handling in the software industry.

Author 1: Abhishek Kumar Pandey
Author 2: Ashutosh Tripathi
Author 3: Mamdouh Alenezi
Author 4: Alka Agrawal
Author 5: Rajeev Kumar
Author 6: Raees Ahmad Khan

Keywords: Malware analysis; reuse code; framework; static analysis; dynamic analysis; reverse engineering; manual analysis

PDF

Paper 64: Person Re-Identification System at Semantic Level based on Pedestrian Attributes Ontology

Abstract: Person Re-Identification (Re-ID) is a very important task in video surveillance systems such as tracking people, finding people in public places, or analysing customer behavior in supermarkets. Although there have been many works on this problem, challenges remain, such as large-scale datasets, imbalanced data, viewpoint variation, and fine-grained data (attributes); moreover, local features are not employed at the semantic level in the online stage of the Re-ID task, and the imbalanced-data problem of attributes is not taken into consideration. This paper proposes a unified Re-ID system consisting of three main modules: a Pedestrian Attribute Ontology (PAO), a Local Multi-task DCNN (Local MDCNN), and an Imbalance Data Solver (IDS). The main novelty of our Re-ID system is the mutual support of the PAO, Local MDCNN and IDS to exploit the inner-group correlations of attributes and pre-filter mismatched candidates from the gallery set based on semantic information such as fashion and facial attributes, and to solve the imbalanced-data problem of attributes without adjusting the network architecture or using data augmentation. We experimented on the well-known Market1501 dataset. The experimental results show the effectiveness of our Re-ID system, which achieves higher performance on the Market1501 dataset than some state-of-the-art Re-ID methods.

Author 1: Ngoc Q. Ly
Author 2: Hieu N. M. Cao
Author 3: Thi T. Nguyen

Keywords: Person Re-Identification (Re-ID); Pedestrian Attributes Ontology (PAO); Deep Convolution Neuron Network (DCNN); Multi-task Deep Convolution Neuron Network (MDCNN); Local Multi-task Deep Convolution Neuron Network (Local MDCNN); Imbalanced Data Solver (IDS)

PDF

Paper 65: Geo Security using GPT Cryptosystem

Abstract: This paper describes an implementation of location-based encryption using a public key cryptosystem based on rank error-correcting codes. In any code-based cryptosystem, the public and private keys are matrices over a finite field. This work proposes an algorithm for calculating the public and private key matrices based on the geographic location of the intended receiver. The main idea is to calculate a location-specific parity check matrix and then the corresponding public key. Data is encrypted using the public key. Some information about the parity check matrix, along with other private keys, is sent to the receiver as cipher-text, encrypted with another instance of the public-key (GPT) cryptosystem using the receiver's public key. The proposed scheme also introduces a method of calculating a different parity check matrix for each user.
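To make the role of a parity check matrix concrete, here is a minimal sketch over GF(2) using the well-known (7,4) Hamming code. This is only an illustration of syndrome checking; the GPT cryptosystem itself operates on rank-metric (Gabidulin) codes over extension fields, and nothing below implements its key construction:

```python
# Parity-check matrix H of the (7,4) Hamming code over GF(2).
# (Toy example only -- H here is public structure, not a GPT key.)
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(H, word):
    """s = H * word^T over GF(2); an all-zero syndrome means 'valid codeword'."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

codeword = [0, 1, 1, 0, 0, 1, 1]          # a valid Hamming(7,4) codeword
print(syndrome(H, codeword))               # [0, 0, 0]: no error detected

received = codeword[:]
received[4] ^= 1                           # channel flips bit 5
s = syndrome(H, received)
print(s)                                   # [1, 0, 1]
# For Hamming codes the syndrome, read as binary, names the error position:
print(s[0] + 2 * s[1] + 4 * s[2])          # 5
```

The scheme in the paper would derive a location-specific analogue of such a matrix and hide it behind a scrambled public key; that derivation is not shown here.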

Author 1: Eraj Khan
Author 2: Abbas Khalid
Author 3: Arshad Ali
Author 4: Muhammad Atif
Author 5: Ahmad Salman Khan

Keywords: Location based security; code based cryptosystem; cipher-text; GPT

PDF

Paper 66: Localization of Mobile Submerged Sensors using Lambert-W Function and Cayley-Menger Determinant

Abstract: This paper demonstrates a new mechanism to localize mobile submerged sensors using only a single beacon node. In range-based localization, fast and accurate distance measurement is vital in underwater wireless sensor networks (UWSN), and knowledge of the exact coordinates of the sensors is as important as the actuated data. Mostly, a bouncing technique is used to determine the distance between the beacon and the sensors, and to determine coordinates, trilateration and multilateration techniques are used, where employing multiple beacons (usually three or more) is the most common approach. Nevertheless, because of many factors, this method gives less accurate distance measurements, which ultimately leads to erroneous coordinates. TDOA is very ponderous to achieve in an underwater environment because of time synchronization, and using AOA is extremely difficult and challenging; TOA is therefore the most common approach and is widely employed, although it still needs precise synchronization. So, to determine the distances between the beacon and sensor nodes, we use a method based on the Lambert-W function, an RSS-based approach that avoids any synchronization. The coordinates of the mobile sensors are then calculated using the Cayley-Menger determinant. In this paper, the method is derived and its accuracy is verified by simulation results.
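The coordinate-recovery side of the abstract rests on the Cayley-Menger determinant, which relates pairwise distances to the volume of the simplex they span. A minimal sketch for the planar case is shown below (for three points the determinant equals -16 times the squared triangle area); the paper's actual Lambert-W RSS ranging and 3-D localization are not reproduced here:

```python
from math import sqrt

def det(m):
    """Determinant by Laplace expansion along the first row (fine for small matrices)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def triangle_area(d12, d13, d23):
    """Area of a triangle from its pairwise distances via the
    Cayley-Menger determinant: det = -16 * area^2 for three points."""
    cm = [
        [0, 1,       1,       1      ],
        [1, 0,       d12**2,  d13**2 ],
        [1, d12**2,  0,       d23**2 ],
        [1, d13**2,  d23**2,  0      ],
    ]
    return sqrt(-det(cm) / 16.0)

print(round(triangle_area(3, 4, 5), 6))  # 6.0, the classic right triangle
```

The same determinant family extends to four points in 3-D (tetrahedron volume), which is what makes distance-only coordinate recovery possible underwater.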

Author 1: Anirban Paul
Author 2: Miad Islam
Author 3: Md. Ferdousur Rahman
Author 4: Anisur Rahman

Keywords: Lambert-W function; Cayley-Menger determinant; submerged mobile sensor; single beacon; localization

PDF

Paper 67: A Proposed Method to Solve Cold Start Problem using Fuzzy user-based Clustering

Abstract: With the elevation of online accessibility to almost everything, many logics, systems and algorithms have had to be revised to match the pace of trends among social networks. One such system, the recommendation system, has become very important as far as social networks are concerned, given the paced and vibrant environment of online accessibility and the heavy and large amounts of data uploaded to the internet, such as movies, books, research articles and much more. Recommendation both provides social networks between operators and, at the same time, provides references for users to assess other users, which affects their social relations directly or indirectly. Collaborative filtering is the technique used for recommending picks matching the user's taste; it is accomplished through users' mutual collaboration and is mostly used by social networking sites. Nowadays this technique is not only popular but common for recommending data to users; it also motivates researchers to find more effective systems and algorithms so that user satisfaction can be achieved by recommending data according to search history. This paper suggests a CF (Collaborative Filtering) model that is based on the user's truthful information and FCM (Fuzzy C-means) clustering. This study proposes that the fuzzy truthful information of the user be combined with ratings of the content by other users to produce a recommender system formula with a coupled coefficient and new parameters. To obtain results, the MovieLens dataset is used, which shows significant improvement in recommendation under the cold-start condition.

Author 1: Syed Badar Ud Duja
Author 2: Baoning Niu
Author 3: Bilal Ahmed
Author 4: M. Umar Farooq Alvi
Author 5: Muhammad Amjad
Author 6: Usman Ali
Author 7: Zia Ur Rehman
Author 8: Waqar Hussain

Keywords: Recommender system; collaborating filtering; cold start problem; clustering; user based clustering

PDF

Paper 68: Significance of Electronic Word of Mouth (e-WOM) in Opinion Formation

Abstract: In the realm of an interconnected digital world, social ranking systems are readily used in different sections of society, for several reasons. Both the private and public sectors make use of social ranking systems as a tool to engineer human behavior, crafting a digitally stimulated form of social control. Online reviews and ratings are one of the significant marketing strategies of online sellers to shape consumers' opinions and ultimately their purchasing decisions. Buyers usually go through these reviews and ratings when purchasing products or hiring services online. Online consumer reviews, recommendations for products and services, and peer viewpoints play a significant role in customers' opinion formation. Different online forums for product reviews, ratings and recommendations differ in their objectives, functions, and characteristics. This paper presents a systematic literature review and comparative study of the influence that positive and negative reviews and ratings of products, automobile services, movies, restaurants, and products and services on OLX & eBay, etc., have on opinion formation. Moreover, how these reviews influence others' opinions on buying and using products, services and apps is analyzed.

Author 1: Javaria Khalid
Author 2: Aneela Abbas
Author 3: Rida Akbar
Author 4: Muhammad Qasim Mahmood
Author 5: Rafia
Author 6: Arslan Tariq
Author 7: Madiha Khatoon
Author 8: Ayesha Akbar
Author 9: Samreen Azhar
Author 10: Asra Meer
Author 11: Muhammad Junaid Ud Din

Keywords: Component; E-WOM (Electronic word of mouth); opinion formation; positive reviews; negative reviews

PDF

Paper 69: Design and Development of Autonomous Pesticide Sprayer Robot for Fertigation Farm

Abstract: The management of insect pests is a critical component of agricultural production, especially in fertigation-based farms. Although fertigation farms in Malaysia have advantages in fertilization and irrigation management, they still lack a pest management system. Since most insects and pests live under the crop's leaves, spraying under the leaves is difficult, hard labor. Most agricultural plants are damaged, weakened, or killed by insect pests, which results in reduced yields, lowered quality, and damaged plants or plant products that cannot be sold. Even after harvest, insects continue their damage in stored or processed products. Therefore, the aim of this study is to design and develop an autonomous pesticide sprayer for a chili fertigation system, and to implement a flexible sprayer arm to spray pesticide under the crop's leaves. The study involves the development of an unmanned pesticide sprayer that can be mobilized autonomously, because pesticide is a hazardous substance that can affect human health if workers are exposed to it during manual spraying, especially in a closed area such as a greenhouse. The flexible sprayer boom can also be controlled in greenhouses and in outdoor environments such as open-space farms. A successful pesticide management system in fertigation-based farms is expected using the autonomous pesticide sprayer robot. Besides, the proposed autonomous sprayer can also be used for various types of crops such as rockmelon, tomato, papaya, pineapple and vegetables.

Author 1: A. M. Kassim
Author 2: M. F. N. M. Termezai
Author 3: A. K. R. A. Jaya
Author 4: A. H. Azahar
Author 5: S Sivarao
Author 6: F. A. Jafar
Author 7: H.I Jaafar
Author 8: M. S. M. Aras

Keywords: Pesticide spryer; autonomous robot; fertigation; farm; under crop leaves

PDF

Paper 70: Task Sensitivity in Continuous Electroencephalogram Person Authentication

Abstract: This research investigates task sensitivity in multimodal stimulation tasks for continuous person authentication using electroencephalogram (EEG) signals. Pattern analysis aims to train on historical examples for prediction on unseen data. However, data trials in EEG stimulation consist of inseparable cognitive information, so it is difficult to ensure that testing trials contain cognitive information matching the training data. Since EEG signals are unique across individuals, we assume that a multimodal stimulation task in EEG analysis is not sensitive to train-test trial control: data trials that are inconsistent between training and testing can still be used as biometrics to authenticate a person. EEG signals were collected using the 10-20 system from 20 healthy subjects. During data acquisition, subjects were asked to operate a computer and perform various computer-related tasks (e.g. mouse clicking, mouse scrolling, keyboard typing, browsing, reading, video watching, music listening, and playing computer games) as they preferred, without interruption. Features extracted from Welch's estimated power spectral density in different frequency bands were tested. The designed authentication approach computed intra- and inter-personal variability using the Mahalanobis distance to authenticate subjects. The proposed continuous EEG authentication approach succeeded: data collected from multimodal stimuli, regardless of task sensitivity, were able to authenticate subjects, with the highest verification performance in the low-beta frequency band. The effective frequency region in the middle band was anticipated because the data were based on subjects' voluntary actions. Future research will focus on the effect of subjects' voluntary and involuntary actions on the effective frequency region.
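The verification step described above compares a probe's Mahalanobis distance to a subject's template. A minimal 2-D sketch follows; the enrolment feature values are made-up placeholders (e.g. two band powers), and a simple distance comparison stands in for the paper's actual intra-/inter-personal decision rule:

```python
from math import sqrt

def mean(xs):
    return sum(xs) / len(xs)

def covariance_2d(points):
    """Sample mean and covariance matrix of 2-D feature vectors."""
    mx = mean([p[0] for p in points])
    my = mean([p[1] for p in points])
    n = len(points) - 1
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return (mx, my), [[sxx, sxy], [sxy, syy]]

def mahalanobis(x, mu, cov):
    """sqrt((x - mu)^T cov^-1 (x - mu)) for 2-D vectors."""
    a, b = cov[0]
    c, d = cov[1]
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx, dy = x[0] - mu[0], x[1] - mu[1]
    q = dx * (inv[0][0] * dx + inv[0][1] * dy) + dy * (inv[1][0] * dx + inv[1][1] * dy)
    return sqrt(max(q, 0.0))  # clamp tiny negative rounding noise

# Hypothetical enrolment features for one subject (e.g. band powers):
enrolled = [(1.0, 2.0), (1.2, 1.9), (0.9, 2.2), (1.1, 2.1)]
mu, cov = covariance_2d(enrolled)
# A genuine-looking probe should sit closer to the template than an impostor:
print(mahalanobis((1.05, 2.05), mu, cov) < mahalanobis((3.0, 0.5), mu, cov))  # True
```

In practice the accept/reject threshold would be calibrated from intra- and inter-personal distance distributions rather than fixed by hand.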

Author 1: Rui-Zhen Wong
Author 2: Yun-Huoy Choo
Author 3: Azah Kamilah Muda

Keywords: Electroencephalogram; continuous authentication; task sensitivity; multimodal stimuli; Mahalanobis distance

PDF

Paper 71: Exploiting White Spaces for Karachi through Artificial Intelligence: Comparison of NARX and Cascade Feed Forward Back Propagation

Abstract: The marriage of Internet of Everything (IoE) and cognitive-radio-driven technologies seems near under the umbrella of the 6G and 6G+ communication standards. The new services expected in 6G communication will require high data rates for transmission. Learning-based algorithms will play a key role in the successful implementation of these novel technologies and evolving next-generation wireless standards for providing ubiquitous connectivity. This paper investigates the performance of two artificial neural network (ANN) based algorithms for Karachi: the nonlinear autoregressive exogenous (NARX) algorithm and the cascade feed forward back propagation neural network (CFFBNN) scheme. A dataset for Karachi is also developed for 1805 MHz. Comparing the two algorithms, the Mean Square Error (MSE) for CFFBNN is 6.8877e-5 at epoch 16, while the MSE for NARX is 3.1506e-11 at epoch 26. Hence, in terms of computational performance, NARX performs much better than the classic CFFBNN algorithm.

Author 1: Shabbar Naqvi
Author 2: Minaal Ali
Author 3: Aamir Zeb Shaikh
Author 4: Yamna Iqbal
Author 5: Abdul Rahim
Author 6: Saima Khadim
Author 7: Talat Altaf

Keywords: 6G; cognitive radio; NARX; cascaded feed forward neural network; learning

PDF

Paper 72: Comparative Study of Truncating and Statistical Stemming Algorithms

Abstract: Word stemming is a significant component of search and indexing systems, text mining applications, IR frameworks and natural language processing systems. A fundamental theme in search and indexing has been to improve retrieval through the automated reduction and fusion of words into word roots. Stemming removes any attached prefixes and suffixes from an index term, so that the stem represents a broader concept than the original word. In an IR framework, the number of retrieved documents is increased by the stemming process.
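To illustrate the truncating family of stemmers named in the keywords, here is a naive suffix-stripping sketch. Real algorithms such as Lovins, Porter or Paice/Husk apply ordered rule sets with conditions and recoding steps, which this toy suffix list does not capture:

```python
# A naive truncating stemmer: strip the longest matching suffix from a
# small hand-written list, keeping at least a 3-character stem.
SUFFIXES = ["ational", "ization", "ations", "ingly", "ings", "ing", "ers", "ed", "es", "s"]

def stem(word):
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word

for w in ["indexing", "indexed", "indexes", "stemmers"]:
    print(w, "->", stem(w))
```

Note how the three surface forms of "index" conflate to one stem while "stemmers" truncates to the non-word "stemm"; statistical stemmers (N-gram, HMM, YASS) aim to avoid such over- and under-stemming without hand-written rules.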

Author 1: Sanaullah Memon
Author 2: Ghulam Ali Mallah
Author 3: K.N.Memon
Author 4: AG Shaikh
Author 5: Sunny K.Aasoori
Author 6: Faheem Ul Hussain Dehraj

Keywords: Stemming; truncating; statistical; NLP; IR; Lovins; Porters; Paice/Husk; Dawson; N-gram; HMM; YASS

PDF

Paper 73: Testing different Channel Estimation Techniques in Real-Time Software Defined Radio Environment

Abstract: In modern wireless communication, OFDM (orthogonal frequency-division multiplexing) is used to maximize spectral efficiency and minimize the bit error rate. OFDM is used broadly in networks using various protocols, including IEEE 802.11p wireless vehicular environments, IEEE 802.16d/e Wireless Metropolitan Area Networks, 3GPP Long-Term Evolution networks and IEEE 802.11a/g/n Wireless Local Area Networks. The main challenges when using OFDM for wireless communications are the short channel-coherence bandwidth and the narrow coherence time, both of which have a major effect on the reliability and latency of data packet communication. These properties increase the difficulty of channel equalization because the channel may change drastically over the period of a single packet. Spectral Temporal Averaging (STA) is an enhanced decision-directed channel equalization technique that improves communication performance (in terms of frame delivery ratio (FDR) and throughput) in typical channel conditions. This paper reports tests of STA channel equalization in an IEEE 802.11a network, compared with other channel equalization techniques in terms of FDR in a real-time environment. A software-defined radio (SDR) platform was used for estimating the channel. The system is shown to provide over 90% delivery ratio at 25 dB signal-to-noise ratio (SNR) for various digital modulation techniques. For this purpose, an experimental setup was used consisting of a software-defined radio, a Universal Software Radio Peripheral (USRP) N210 with a wide-bandwidth daughterboard as hardware, and GNU Radio.

Author 1: Ponnaluru Sowjanya
Author 2: Penke Satyanarayana

Keywords: Channel Estimation; GNU Radio Companion (GRC); Orthogonal frequency-domain multiplexing (OFDM); software-defined radio (SDR); Spectral Temporal Averaging (STA); Universal Software Radio Peripheral (USRP)

PDF

Paper 74: Measures of Organizational Training in the Capability Maturity Model Integration (CMMI)

Abstract: Training has a major impact on organizational commitment. Organizational objectives can be met by executing several training strategies and programs. Organizational training aims at developing employees' knowledge and skills; it should enable employees to carry out their duties efficiently and effectively. The two goals and seven practices of the organizational training process area in the Capability Maturity Model Integration (CMMI) framework are analyzed in this study in order to define common measures for organizational training. CMMI is a framework for assessing and improving software systems. The researcher applied the Goal Question Metric (GQM) model to the two goals and seven specific practices of the organizational training process area in CMMI to derive the measures, and confirmed that the defined measures are true measures for each of the seven specific practices.

Author 1: Mahmoud Khraiwesh

Keywords: Organizational training; training; measures; CMMI; GQM

PDF

Paper 75: Adaptive Sequential Constructive Crossover Operator in a Genetic Algorithm for Solving the Traveling Salesman Problem

Abstract: Genetic algorithms are widely used metaheuristic algorithms for combinatorial optimization problems, built on the survival-of-the-fittest principle. They obtain near-optimal solutions in reasonable computational time, but do not guarantee optimality. They start with a random initial population of chromosomes and apply three operators, namely selection, crossover and mutation, to produce new and hopefully better populations in consecutive generations. Of the three, the crossover operator is the most important, and many crossover operators exist in the literature. In this paper, we propose a new crossover operator, named adaptive sequential constructive crossover, to solve the benchmark travelling salesman problem. We then compare the efficiency of the proposed operator with some existing crossover operators, such as greedy crossover, sequential constructive crossover and partially mapped crossover, under the same genetic settings, on some benchmark TSPLIB instances. The experimental study shows the effectiveness of our proposed crossover operator, which is found to be the best among those compared.
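The abstract does not spell out the proposed adaptive sequential constructive crossover, so as background the sketch below implements one of the baseline operators it names, partially mapped crossover (PMX), for permutation-coded TSP tours:

```python
def pmx(parent1, parent2, cut1, cut2):
    """Partially mapped crossover (PMX) for permutation-coded tours.
    The segment parent1[cut1:cut2] is copied into the child; remaining
    cities come from parent2, following the segment mapping on conflicts
    so the child stays a valid permutation."""
    size = len(parent1)
    child = [None] * size
    child[cut1:cut2] = parent1[cut1:cut2]
    mapping = {parent1[i]: parent2[i] for i in range(cut1, cut2)}
    for i in list(range(0, cut1)) + list(range(cut2, size)):
        gene = parent2[i]
        while gene in child[cut1:cut2]:   # conflict: chase the mapping
            gene = mapping[gene]
        child[i] = gene
    return child

p1 = [1, 2, 3, 4, 5, 6, 7, 8]
p2 = [3, 7, 5, 1, 6, 8, 2, 4]
print(pmx(p1, p2, 3, 6))  # [3, 7, 8, 4, 5, 6, 2, 1]
```

Sequential constructive variants instead build the child city by city, greedily picking the cheaper legitimate successor from either parent; the adaptive version proposed in the paper refines that idea.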

Author 1: Zakir Hussain Ahmed

Keywords: Genetic algorithm; adaptive sequential constructive crossover; traveling salesman problem; NP-hard

PDF

Paper 76: Performance Evaluation of Deep Autoencoder Network for Speech Emotion Recognition

Abstract: Learning methods with multiple levels of representation are called deep learning methods; a deep-learning model results from the composition of simple but non-linear modules. Deep learning will see many more successes in the near future, because it requires very little hand engineering and can easily exploit ample amounts of data. In this paper a deep learning network is used to recognize speech emotions. A deep autoencoder is constructed to learn the speech emotions (angry, happy, neutral, and sad) of normal and autistic children. Experimental results show that the categorical classification accuracy of speech is 46.5% and 33.3% for normal and autistic children's speech, respectively, whereas the autoencoder shows a very low classification accuracy of 26.1% for the happy emotion only and no classification accuracy for the angry, neutral and sad emotions.

Author 1: Maria AndleebSiddiqui
Author 2: Wajahat Hussain
Author 3: Syed Abbas Ali
Author 4: Danish-ur-Rehman

Keywords: Auto-encoder; emotions; DNN; classification accuracy; autism

PDF

Paper 77: Evaluating the Impact of GINI Index and Information Gain on Classification using Decision Tree Classifier Algorithm*

Abstract: The decision tree is a supervised machine learning algorithm suitable for solving classification and regression problems. Decision trees are built recursively by applying split conditions at each node that divide the training records into subsets whose output variable is of the same class. The process starts from the root node and progresses by applying split conditions at each non-leaf node, yielding homogeneous subsets. However, achieving perfectly homogeneous subsets is not possible, so the goal at each node is to identify an attribute and a split condition on that attribute that minimize the mixing of class labels, resulting in nearly pure subsets. Several splitting indices have been proposed to evaluate the goodness of a split, common ones being the GINI index and information gain. The aim of this study is to conduct an empirical comparison of the GINI index and information gain. Classification models are built using the decision tree classifier algorithm by applying the GINI index and information gain individually. The classification accuracy of the models is estimated using different metrics such as the confusion matrix, overall accuracy, per-class accuracy, recall and precision. The results of the study show that, regardless of whether the dataset is balanced or imbalanced, the classification models built by applying the two splitting indices give the same accuracy; in other words, the choice of splitting index has no impact on the performance of the decision tree classifier algorithm.
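The two splitting criteria compared in the abstract can be computed directly. The sketch below evaluates the Gini decrease and the information gain of the same binary split on a made-up label set:

```python
from math import log2
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum(p_i^2) over class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    """Shannon entropy: -sum(p_i * log2(p_i)) over class proportions."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def split_gain(parent, left, right, impurity):
    """Impurity reduction of a binary split under the given criterion."""
    n = len(parent)
    weighted = (len(left) / n) * impurity(left) + (len(right) / n) * impurity(right)
    return impurity(parent) - weighted

parent = ["yes"] * 5 + ["no"] * 5
left, right = ["yes"] * 4 + ["no"], ["yes"] + ["no"] * 4
print(round(split_gain(parent, left, right, gini), 3))     # 0.18  (Gini decrease)
print(round(split_gain(parent, left, right, entropy), 3))  # 0.278 (information gain)
```

Both criteria are maximized by the same kinds of class-separating splits, which is consistent with the study's finding that the choice between them rarely changes the resulting tree's accuracy.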

Author 1: Suryakanthi Tangirala

Keywords: Supervised learning; classification; decision tree; information gain; GINI index

PDF

Paper 78: A Review of Data Gathering Algorithms for Real-Time Processing in Internet of Things Environment

Abstract: Today, Wireless Sensor Networks (WSNs) have become an enabler technology for Internet of Things (IoT) applications. The emergence of various applications has created the need for robust and efficient data collection and transfer algorithms. This paper presents a comprehensive review of the existing data gathering algorithms and the technologies they adopt. After reviewing the algorithms, we discuss the challenges related to them: although they extend the physical reach of the monitoring capability, they are subject to several constraints such as limited energy availability, low memory size, and low processing speed, which are the principal obstacles to designing efficient management protocols for WSN-IoT integration.

Author 1: Atheer A. Kadhim
Author 2: Norfaradilla Wahid

Keywords: Internet of Things (IoT); Wireless Sensor Network (WSN) and Data Gathering; Virtual Machine (VM); Virtualization Cloud (VC); Data Reduction (DR); Access point (AP); Mobile Ad hoc Network (MANET)

PDF

Paper 79: An Investigation of a Convolution Neural Network Architecture for Detecting Distracted Pedestrians

Abstract: The risk of pedestrian accidents has increased due to the rise in distracted walking. Research in the autonomous vehicle industry aims to minimize this risk by enhancing route planning to produce safer routes. Detecting distracted pedestrians plays a significant role in identifying safer routes and hence decreases pedestrian accident risk. Thus, this research investigates how to use convolutional neural networks to build an algorithm that significantly improves the accuracy of detecting distracted pedestrians based on gathered cues. In particular, this research involves the analysis of pedestrians' images to identify distracted pedestrians who are not paying attention when crossing the road. This work tested three different convolutional neural network architectures: Basic, Deep, and AlexNet. The performance of the three architectures was evaluated on two datasets. The first is a new training dataset called SCIT, created by this work from recorded videos of volunteers from the Sheridan College Institute of Technology. The second is a public dataset called PETA, made up of images with various resolutions. The ConvNet model with the Deep architecture outperformed the Basic and AlexNet architectures in detecting distracted pedestrians.

Author 1: Igor Grishchenko
Author 2: El Sayed Mahmoud

Keywords: Convolutional neural networks; computer vision; cognitive load; distractive behavior

PDF

Paper 80: Performance Tuning of Spade Card Antenna using Mean Average Loss of Backpropagation Neural Network

Abstract: Microstrip antennas take different dimensions to obtain the desired performance, especially microstrip antennas with complex components and dimensions. The target performance here is a frequency range of 2.4 GHz to 3.6 GHz, a power gain between 3 dB and 5.83 dB, and a directivity between 3.32 and 6.22. Consequently, a new, suitable design is needed for adaptive matching that tunes the antenna's operating frequency, which normally requires complex mathematical methods and simulation. This paper presents a novel design to tune the performance of a spade card microstrip antenna that can operate on single, dual or multiple bands and can produce circular or linear polarization, using a backpropagation neural network to obtain an optimum design and simplify the design process. After 20000 epochs, the training loss is around 0.044 and the testing loss around 0.058. The model performs well despite using only a small amount of training data.

Author 1: Irfan Mujahidin
Author 2: Dwi Arman Prasetya
Author 3: Nachrowie
Author 4: Samuel Aji Sena
Author 5: Putri Surya Arinda

Keywords: Spade card antenna; mean average loss; neural network; performance tuning antenna

PDF

Paper 81: A Review of Vision and Challenges of 6G Technology

Abstract: With the accelerated evolution of smart terminals and the rise of new applications, wireless data traffic has increased sharply, and current cellular networks (even 5G) cannot fully meet the rapidly emerging technical requirements. A new wireless communication framework, the sixth generation (6G) system, supported by artificial intelligence, is anticipated to be deployed between 2027 and 2030. This paper presents a critical analysis of the vision of 6G wireless communication and its network structure, and outlines a number of important technical challenges, along with possible solutions related to 6G, including physical-layer transmission procedures, network designs, and security methods.

Author 1: Faiza Nawaz
Author 2: Jawwad Ibrahim
Author 3: Muhammad Awais Ali
Author 4: Maida Junaid
Author 5: Sabila Kousar
Author 6: Tamseela Parveen

Keywords: Wireless communication; visions; 6G; cellular network; generations; digital technology; satellite networks; cell less architecture

PDF

Paper 82: Towards a Dynamic Scalable IoT Computing Platform Architecture

Abstract: The Internet of Things (IoT) has become an interesting topic among technology titans and different business groups. IoT platforms have been introduced to support the development of IoT applications and services. Such platforms connect the real and virtual worlds of objects, systems and people. Even though IoT platforms increasingly target various domains, they still suffer from several limitations. (1) Integrating hardware devices from different providers/vendors (hereafter referred to as heterogeneous hardware) is still a subtle task. (2) Providing a scalable solution without compromising end-user privacy (e.g., through the use of cloud platforms) is hard to achieve. (3) The reliability of IoT applications, as well as platform reliability, is still not fully supported. (4) The needs of safety-critical applications are still not covered by such platforms. A novel scalable dynamic computing platform architecture is proposed to address these limitations and provide simultaneous support for five non-functional requirements: scalability, reliability, privacy, timing for real-time systems, and safety. The proposed architecture uses a novel network topology design, virtualization and containerization concepts, along with a service-oriented architecture. We present and use a smart home case study to evaluate how traditional IoT platform architectures compare to the proposed architecture in terms of supporting the five non-functional requirements.

Author 1: Desoky Abdelqawy
Author 2: Amr Kamel
Author 3: Soha Makady

Keywords: Internet of Things (IoT); IoT platforms; IoT architecture; edge computing

PDF

Paper 83: KWA: A New Method of Calculation and Representation Accuracy for Speech Keyword Spotting in String Results

Abstract: This study proposes a new method to measure and represent accuracy for the Keyword Spotting (KWS) problem in non-aligned string results. Our approach, called Keyword Spotting Accuracy (KWA), improves on the Levenshtein Distance algorithm, which is used to evaluate keyword accuracy in KWS by measuring the minimum edit distance between two strings. The main improvement is to show the status of each keyword during the training phase for predicted and true labels: which words are correct, and which need to be inserted, substituted or deleted when comparing the predicted labels with the true ones. In addition, a new method of presenting multiple keywords in the results is proposed to indicate the accuracy of each keyword. This method can display detailed results by keyword, from which we can obtain the accuracy, distribution and balance of the keywords in the training dataset by actual speech variance, rather than by simply counting keywords in the true labels as usual.

Author 1: Nguyen Tuan Anh
Author 2: Hoang Thi Kim Dung

Keywords: Speech Keyword Spotting; KWS; keyword accuracy; Keyword Spotting Accuracy (KWA); speech recognition

PDF
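The keyword statuses the abstract describes fall out of a standard Levenshtein alignment with a backtrace; a minimal sketch (the keyword lists are illustrative, not from the paper):

```python
# Levenshtein alignment between predicted and true keyword sequences,
# recovering a per-keyword status (correct / substitute / insert /
# delete) as the KWA abstract describes.

def keyword_alignment(pred, true):
    m, n = len(pred), len(true)
    # dp[i][j] = edit distance between pred[:i] and true[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == true[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # delete from pred
                           dp[i][j - 1] + 1,          # insert into pred
                           dp[i - 1][j - 1] + cost)   # match / substitute
    # Backtrace to label each keyword position.
    ops, i, j = [], m, n
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                dp[i][j] == dp[i - 1][j - 1] + (0 if pred[i - 1] == true[j - 1] else 1)):
            ops.append(("correct" if pred[i - 1] == true[j - 1] else "substitute",
                        pred[i - 1], true[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            ops.append(("delete", pred[i - 1], None))
            i -= 1
        else:
            ops.append(("insert", None, true[j - 1]))
            j -= 1
    return dp[m][n], list(reversed(ops))

# One keyword was misrecognized: distance 1, status "substitute".
dist, ops = keyword_alignment(["on", "off", "stop"], ["on", "start", "stop"])
```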

Paper 84: A Visual Analytics System for Route Planning and Emergency Crowd Evacuation

Abstract: Emergency evacuation from crowded public spaces is of great importance to authorities around the world. Many systems have been developed by researchers to address the optimization of emergency evacuation routing and planning. This paper presents a visual analytics system for route planning and emergency crowd evacuation: a web-based visualization and simulation system that allows stakeholders to develop and assess route planning and evacuation procedures for emergency scenarios. The system takes advantage of the comprehensive OpenStreetMap spatial database to enable users to implement evacuation scenarios almost anywhere an OpenStreetMap dataset is available. Using multiple infrastructure-specific scenarios, such as adjusting the capacities of roads/pathways or closing them, the tool can identify bottleneck areas, thus allowing the assessment of potential improvements to the pedestrian and transportation network to relieve bottlenecks and improve evacuation time. As a case study, we use this system for the city of Makkah in Saudi Arabia and the city of Minnesota in the United States.

Author 1: Saleh Basalamah

Keywords: Emergency evacuation; crowd management; visualization; intelligent systems

PDF

Paper 85: Detecting Video Surveillance Using VGG19 Convolutional Neural Networks

Abstract: The meteoric growth of data over the internet in the last few years has created the challenge of mining and extracting useful patterns from large datasets. In recent years, the growth of digital libraries and video databases has made it more challenging and important to extract useful information from raw data in order to prevent and detect crimes automatically. Street crime snatching and theft detection is a major challenge in video mining. The main target is to select the features/objects that usually occur at the time of snatching. The number of moving targets affects the performance, speed and amount of motion in the anomalous video. The dataset used in this paper is Snatch 101; the videos in the dataset are divided into frames, which are labelled and segmented for training. We applied the VGG19 Convolutional Neural Network architecture, extracted the features of objects, and compared them with the original video features and objects. The main contribution of our research is to create frames from the videos and then label the objects; the objects are selected from frames where anomalous activities can be detected. The proposed system has not been used before for crime prediction, and it is computationally efficient and effective compared to state-of-the-art systems, which it outperformed with 81% accuracy.

Author 1: Umair Muneer Butt
Author 2: Sukumar Letchmunan
Author 3: Fadratul Hafinaz Hassan
Author 4: Sultan Zia
Author 5: Anees Baqir

Keywords: Anomalous detection; surveillance video; VGG16; VGG19; ConvoNet; AlexNet

PDF

Paper 86: Understanding Attribute-based Access Control for Modelling and Analysing Healthcare Professionals’ Security Practices

Abstract: In recent years, there has been an increase in the application of attribute-based access control (ABAC) in electronic health (e-health) systems. E-health systems are used to store electronic versions of a patient's medical records. These records are usually classified according to their usage, i.e., the electronic health record (EHR) and the personal health record (PHR). EHRs are electronic medical records held by healthcare providers, while PHRs are electronic medical records held by the patients themselves. Both EHRs and PHRs are critical assets that require access control mechanisms to regulate the manner in which they are accessed. ABAC has been demonstrated to be an efficient and effective approach for providing fine-grained access control to these critical assets. In this paper, we conduct a survey of the existing literature on the application of ABAC in e-health systems to understand the suitability of ABAC for e-health systems and the possibility of using ABAC access logs for observing, modelling and analysing the security practices of healthcare professionals. We categorize the existing works according to the application of ABAC in PHRs and EHRs. We then present a discussion of the lessons learned and outline future challenges. This can serve as a basis for selecting and further advancing the use of ABAC in e-health systems.

Author 1: Livinus Obiora Nweke
Author 2: Prosper Yeng
Author 3: Stephen D. Wolthusen
Author 4: Bian Yang

Keywords: Attribute-Based Access Control (ABAC); e-health systems; Personal Health Record (PHR); Electronic Health Record (EHR)

PDF
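The core ABAC idea the survey builds on can be sketched in a few lines: a request is permitted only if the subject, resource and environment attributes all satisfy a policy rule. The attribute names and the policy below are hypothetical, not drawn from any specific e-health system:

```python
# Hypothetical ABAC decision point for an EHR resource: every
# attribute condition in the policy must hold for the request to be
# permitted. Attribute names and values are illustrative only.

def abac_decide(policy, subject, resource, environment):
    request = {"subject": subject, "resource": resource, "environment": environment}
    for category, conditions in policy.items():
        for attr, allowed in conditions.items():
            if request[category].get(attr) not in allowed:
                return "deny"
    return "permit"

# Example rule: a nurse or physician may access an EHR in the
# cardiology ward while on shift.
policy = {
    "subject": {"role": {"nurse", "physician"}, "ward": {"cardiology"}},
    "resource": {"type": {"EHR"}},
    "environment": {"on_shift": {True}},
}
decision = abac_decide(policy,
                       subject={"role": "nurse", "ward": "cardiology"},
                       resource={"type": "EHR"},
                       environment={"on_shift": True})
```

Each decision like this, written to an access log, is the kind of record the paper suggests mining to model professionals' security practices.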

Paper 87: Beyond the Horizon: A Meticulous Analysis of Clinical Decision-Making Practices

Abstract: Clinical advancements are one of the major outcomes of the technological phase shift in data science. The significance of information technology in medical sciences, realized through the Clinical Decision Support System (CDSS), has opened the spillways of exponentially improved predictive models. The latest classification algorithms are widely applied to clinical data for prognostic assessments. Medical experts have to make decisions that are crucial in nature, and if research can develop a mechanism that assists them in developing solid reasoning, inferring knowledge and clearly expressing their clinical decisions by justifying the assertions made, it will be a win-win situation. However, this field of science is still an unknown world for many clinicians, despite the fact that the enormous amount of medical data cannot be exploited to its maximum without technological support. The objective of this research is to introduce clinicians and policymakers in the medical domain to the renowned computer-based methodologies employed to construct a clinical decision support system. We expect that stakeholders gaining this technical insight will ensure the commissioning of accurate and effective CDSSs for improved healthcare delivery.

Author 1: Bilal Saeed Raja
Author 2: Sohail Asghar

Keywords: Decision support system; clinical decision support; classification; clustering; association rule mining; multi-objective evolutionary optimization

PDF

Paper 88: Missing Data Prediction using Correlation Genetic Algorithm and SVM Approach

Abstract: Data exists in large volumes in the modern world; it becomes very useful when decoded correctly to inform decision-making on real-world issues. However, when the data is conflicting or incomplete, obtaining information becomes a daunting task. Handling missing data has become a very important task in big data analysis. This paper considers the handling of missing data using the Support Vector Machine (SVM), based on a technique called Correlation-Genetic Algorithm-SVM. The data is subjected to SVM classification after identifying the correlations between attributes and applying the genetic algorithm. Applying the correlation gives a clear view of which attributes are highly correlated within a particular dataset. The results indicate that, compared with the SVM alone, the proposed hybrid algorithm produces better outcomes when identification rate and accuracy are considered. The proposed approach is also compared with the mean identification rate of a neural network; the results indicate consistent accuracy, making the hybrid approach the better option.

Author 1: Aysh Alhroob
Author 2: Wael Alzyadat
Author 3: Ikhlas Almukahel
Author 4: Hassan Altarawneh

Keywords: Missing data; Support Vector Machine (SVM); genetic algorithm; hybrid algorithm; correlation

PDF
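The correlation step the abstract describes can be sketched as follows: Pearson correlation between attributes identifies which complete attribute is most informative for imputing a target attribute. The SVM and genetic-algorithm stages are omitted, and the data are illustrative:

```python
# Correlation screening before imputation: pick the attribute most
# strongly correlated (in magnitude) with the attribute that has
# missing values. Toy data, not from the paper.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def best_predictor(dataset, target):
    # The attribute with the highest |correlation| to the target.
    return max((a for a in dataset if a != target),
               key=lambda a: abs(pearson(dataset[a], dataset[target])))

data = {
    "age":    [25, 32, 47, 51, 62],
    "income": [30, 41, 60, 66, 80],   # tracks age closely
    "noise":  [7, 3, 9, 1, 4],
}
best = best_predictor(data, "income")  # "age" dominates "noise"
```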

Paper 89: Ranking System for Ordinal Longevity Risk Factors using Proportional-Odds Logistic Regression

Abstract: Longevity improvements have traditionally been analysed and extrapolated for future actuarial projections of longevity risk using a range of statistical methods with different combinations of statistical data types. These methods have shown great performance in explaining the trend movements of the longevity rate. However, actuaries believe that knowing the trend movements is not enough, especially for controlling the impact of longevity risk. Assessing the effects of each level of the risk factors, especially ordinal risk factors, on improvements in the longevity rate would provide significant additional knowledge beyond the trend movements. Therefore, this study was conducted to determine the potential of Proportional-Odds Logistic Regression for ranking the levels of ordinal risk factors based on their effects on longevity improvements. The results show that this method successfully reordered the levels of each risk factor according to their effects on improving the longevity rate. Hence, a more meaningful ranking system has been developed based on these newly ordered risk factors. This new ranking system will help improve the ability of statistical methods to project longevity risk when handling ordinal variables.

Author 1: Nur Haidar Hanafi
Author 2: Puteri Nor Ellyza Nohuddin

Keywords: Longevity risk; ordinal risk factors; risk ranking; proportional-odds ratios; effect analysis

PDF
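For readers unfamiliar with the model, the proportional-odds (cumulative logit) form is P(Y <= k | x) = 1 / (1 + exp(-(theta_k - beta * x))), with per-category probabilities obtained as differences of adjacent cumulative probabilities. The thresholds and coefficient below are made-up illustrative values, not estimates from the study:

```python
# Cumulative-logit probabilities for an ordinal outcome with three
# ordered categories. Thresholds/coefficient are illustrative.
import math

def cumulative_probs(thresholds, beta, x):
    # P(Y <= k) for each threshold theta_k; the top category closes at 1.
    cum = [1.0 / (1.0 + math.exp(-(t - beta * x))) for t in thresholds]
    cum.append(1.0)
    # Differences of cumulative probabilities give category probabilities.
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

probs = cumulative_probs(thresholds=[-1.0, 1.0], beta=0.8, x=1.0)
# probs is a valid distribution over the three ordered categories
```

Ranking factor levels then amounts to ordering them by their estimated effects (the betas), which is the reordering the abstract reports.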

Paper 90: Lexical Variation and Sentiment Analysis of Roman Urdu Sentences with Deep Neural Networks

Abstract: Sentiment analysis is the computational study of reviews, emotions, and sentiments expressed in text. In the past several years, sentiment analysis has attracted much attention from industry and academia. Deep neural networks have achieved significant results in sentiment analysis. Current methods mainly focus on the English language; for minority languages, such as Roman Urdu, which has more complex syntax and numerous lexical variations, little research has been carried out. In this paper, for sentiment analysis of Roman Urdu, a novel "Self-attention Bidirectional LSTM (SA-BiLSTM)" network is proposed to deal with the sentence structure and inconsistent manner of text representation. This network addresses the limitation of the unidirectional nature of the conventional architecture. In SA-BiLSTM, self-attention takes charge of the complex formation by correlating the whole sentence, and the BiLSTM extracts context representations to tackle the lexical variation of attended embeddings in the preceding and succeeding directions. In addition, to measure and compare the performance of the SA-BiLSTM model, we preprocessed and normalized the Roman Urdu sentences. Due to its efficient design, SA-BiLSTM uses fewer computational resources and yields accuracies of 68.4% and 69.3% on the preprocessed and normalized datasets, respectively, indicating that SA-BiLSTM achieves better efficiency than other state-of-the-art deep architectures.

Author 1: Muhammad Arslan Manzoor
Author 2: Saqib Mamoon
Author 3: Song Kei Tao
Author 4: Ali Zakir
Author 5: Muhammad Adil
Author 6: Jianfeng Lu

Keywords: Sentiment analysis; Self-Attention Bidirectional LSTM (SA-BiLSTM); Roman Urdu language; review classification

PDF

Paper 91: Knowledge based Authentication Techniques and Challenges

Abstract: Knowledge-Based Authentication (KBA) is an authentication approach that verifies a user's identity when accessing services such as financial websites. KBA requests specific information to prove the personal identity of the owner. This paper discusses the challenges faced by KBA techniques. Memorability is the main obstacle in KBA, since users tend to choose simple passwords or reuse the same password across services, a practice that causes problems with compliance with security policies. Furthermore, the technique of combining username and password is considered another important challenge of KBA due to its reliance on recall-based authentication. The discussion includes a comparative analysis of KBA techniques based on trade-off criteria to support decision-making. The results of this study can support organizations in the process of recommending a suitable KBA technique.

Author 1: Hosam Alhakami
Author 2: Shouq Alhrbi

Keywords: Knowledge-based authentication; artifact-based authentication; biometric-based authentication; usability; vulnerabilities; memorability; performance; cost

PDF

Paper 92: Map Reduce based REmoving Dependency on K and Initial Centroid Selection MR-REDIC Algorithm for clustering of Mixed Data

Abstract: In machine learning, clustering is widely used to find the hidden structure of data. When handling massive amounts of data, traditional clustering algorithms degrade in performance due to the size and mixed type of the attributes. The Removing Dependency on K and Initial Centroid Selection (REDIC) algorithm is designed to handle mixed data with frequency-based dissimilarity measurement for categorical attributes. The selection of initial centroids and the prior decision on the number of clusters improve the efficiency of the REDIC algorithm. To deal with large-scale data, the REDIC algorithm is migrated to the MapReduce paradigm, and the MapReduce-based REDIC (MR-REDIC) algorithm is proposed. The large amount of data is divided into small chunks, and a parallel approach is used to reduce the execution time of the algorithm. The proposed algorithm inherits the clustering features of REDIC. It is implemented in a Hadoop environment with three different configurations and evaluated using five benchmark datasets. Experimental results show that the speed-up value gradually approaches linear as the number of data nodes increases from one to four. The algorithm also achieves a close-to-ideal value for the scale-up parameter, while maintaining accuracy.

Author 1: Khyati R. Nirmal
Author 2: K.V.V Satyanarayana

Keywords: Machine learning; clustering; similarity measurement; initial centroid selection; number of clusters; map reduce paradigm

PDF
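The map/reduce split the abstract describes can be illustrated with one round of centroid-based clustering: mappers emit (nearest-centroid, point) pairs over their data chunks, and the reducer averages each group. Numeric 1-D points keep the sketch short; REDIC's mixed-type dissimilarity measure is not reproduced here:

```python
# One toy MapReduce round of centroid-based clustering. Each chunk
# plays the role of a data node's split; the reducer recomputes
# centroids from the emitted (centroid, point) pairs.

def map_phase(chunk, centroids):
    # Emit (index of nearest centroid, point) for every point.
    return [(min(range(len(centroids)),
                 key=lambda c: abs(p - centroids[c])), p) for p in chunk]

def reduce_phase(pairs, k):
    groups = {c: [] for c in range(k)}
    for c, p in pairs:
        groups[c].append(p)
    # New centroid = mean of the points assigned to it.
    return [sum(g) / len(g) if g else None for g in groups.values()]

chunks = [[1.0, 2.0, 9.0], [10.0, 1.5, 11.0]]   # two "data nodes"
centroids = [1.0, 10.0]
pairs = [kv for chunk in chunks for kv in map_phase(chunk, centroids)]
new_centroids = reduce_phase(pairs, k=2)         # [1.5, 10.0]
```

Because the map phase is independent per chunk, adding data nodes divides the assignment work, which is the source of the near-linear speed-up the paper measures.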

Paper 93: A Technique for Panorama-Creation using Multiple Images

Abstract: Image stitching, the process of integrating multiple images to create a panoramic image with all contents fitted into one frame, finds widespread application in medical imaging, high-resolution digital maps, satellite imaging and video imaging. This paper proposes a framework to develop a panoramic image from multiple images. The framework is an automatic process that takes multiple images, checks the correlation of the sequential images, removes any overlapping area, and creates the panorama. We have experimented with different image sets consisting of multiple images with and without overlap and obtained satisfactory results.

Author 1: Moushumi Zaman Bonny
Author 2: Mohammad Shorif Uddin

Keywords: Panorama; image stitching; correlation; multiple images; image features

PDF
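The correlation-and-remove-overlap step can be sketched in one dimension: slide the leading edge of the second image over the trailing edge of the first, score each candidate overlap width, and drop the duplicated region. Single pixel rows stand in for full images here; real stitching would match 2-D patches:

```python
# Find the overlap width between two sequential "images" (1-D rows)
# by minimizing squared difference over candidate widths, then stitch.

def find_overlap(img_a, img_b, max_overlap):
    def score(w):
        a, b = img_a[-w:], img_b[:w]
        return -sum((x - y) ** 2 for x, y in zip(a, b))  # higher = better match
    return max(range(1, max_overlap + 1), key=score)

def stitch(img_a, img_b, max_overlap=4):
    w = find_overlap(img_a, img_b, max_overlap)
    return img_a + img_b[w:]   # drop the duplicated region from img_b

row_a = [10, 20, 30, 40, 50]
row_b = [40, 50, 60, 70]       # its first two pixels repeat row_a's tail
panorama = stitch(row_a, row_b)
```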

Paper 94: Comparison of Accuracy between Long Short-Term Memory-Deep Learning and Multinomial Logistic Regression-Machine Learning in Sentiment Analysis on Twitter

Abstract: This paper presents sentiment analysis research on Twitter. In this research, data with the keyword 'Russian Hacking', concerning the 2016 US presidential election, was collected from Twitter as a dataset using the Twitter API with the Python programming language. The first step in sentiment analysis is cleaning the tweet data; a Lexicon-based method is then used to assign a positive, negative, or neutral sentiment value to each tweet. The cleaned and classified data is processed by a deep learning method, the Long Short-Term Memory (LSTM) algorithm, and a machine learning method, the Multinomial Logistic Regression (MLR) algorithm. The accuracy of these two classification methods is calculated using the confusion-matrix method. The accuracy obtained from the LSTM classification method is 93% and that of the MLR classification method is 92%. Thus, it can be concluded that LSTM classifies sentiment better than MLR.

Author 1: Aries Muslim
Author 2: Achmad Benny Mutiara
Author 3: Rina Refianti
Author 4: Cut Maisyarah Karyati
Author 5: Galang Setiawan

Keywords: Sentiment analysis; deep learning; machine learning; Long Short-Term Memory (LSTM); Multinomial Logistic Regression (MLR)

PDF
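The confusion-matrix accuracy used to compare the two classifiers is simply the fraction of predictions on the matrix diagonal. A minimal computation with an illustrative three-class matrix (not the paper's data):

```python
# Accuracy from a confusion matrix: correct predictions sit on the
# diagonal; total predictions are the sum of all cells.

def accuracy(confusion):
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# rows = true class, cols = predicted class: negative / neutral / positive
cm = [[50, 3, 2],
      [4, 40, 1],
      [2, 2, 46]]
acc = accuracy(cm)   # (50 + 40 + 46) / 150
```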

Paper 95: A Review of Intelligent Tutorial Systems in Computer and Web based Education

Abstract: Intelligent Tutoring Systems (ITS) are integrated and complex systems, designed and developed using artificial intelligence (AI) approaches and methods to address the problems and requirements of teaching/learning activities in the education and training of students and the workforce, based on computers and emerging web-based resources. These systems can establish the level of student knowledge and the learning strategies used to improve that knowledge, supporting the detection and correction of student misconceptions. Their purpose is to contribute to the process of teaching and learning in a given area of knowledge while respecting the individuality of the student. This paper presents a review of intelligent tutoring systems (ITS) from the perspective of their application and usability in modern learning concepts. The methodology used was a bibliographical review of classic works in the printed and digital literature on ITS and e-Learning systems, as well as searches of various databases, theses, university collections and digital repositories. The main weakness of the research lies in the fact that the search was limited to documents published in English, Spanish and Portuguese.

Author 1: Luis Alfaro
Author 2: Claudia Rivera
Author 3: Elisa Castaneda
Author 4: Jesus Zuniga-Cueva
Author 5: Maria Rivera-Chavez
Author 6: Francisco Fialho

Keywords: Intelligent learning systems; computer assisted learning environments; web based education

PDF

Paper 96: An Algorithmic Approach for Maritime Transportation

Abstract: Starting from the 3rd millennium BC, Indian maritime trade has augmented the lives of common people and businesses alike. This study finds that India can leverage its 7,500 km coastline to achieve holistic development, in terms of interconnected ports with hinterland connectivity, and realize lower expenditure coupled with reduced carbon emissions. This research analyzed a decade of cargo data from origin to destination and found that around 82.95 per cent (953 MMTPA in 2017-18) of road-based consignments in India comprised fertilizers, hydrocarbons, coal, lubricants and oil. A substantial quantum of this, i.e. 78.39 per cent of MMTPA cargo consignments (state-owned hydrocarbons), traverses Indian roads. The study drew parameters from this transportation paradigm and modeled them using artificial intelligence to depict a monumental opportunity to rationalize costs, improve efficiency and reduce carbon emissions, strengthening the argument for employing multimodal logistics in the maritime sector. Following model derivation, the same set of parameters is plotted as an efficient transit map of interstate transit lines connecting 16 major hubs which now handle bulk cargo shipped by all modes of transport. For the pollution segment, a collaborative game-theoretic approach, the Shapley value, is proposed for improved decision making. This study presents data-driven and compelling research evidence portraying the benefits of collaboration between firms in terms of time and cost. The study also proposes the need for, and a method to, improve hinterland connectivity using a scalable greedy algorithm, which is tested with real-time data on coal and bulk cargo. As a scientific value addition, this study presents a mathematical model that can be implemented seamlessly across geographies using information and communication technology.

Author 1: Peri Pinakpani
Author 2: Aruna Polisetty
Author 3: G Bhaskar N Rao
Author 4: Harrison Sunil D
Author 5: B Mohan Kumar
Author 6: Dandamudi Deepthi
Author 7: Aneesh Sidhireddy

Keywords: Maritime transport; multimodal logistics; game theory; greedy algorithm; freight management; intermodal transportation

PDF
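The Shapley value the study proposes for cost sharing assigns each firm its average marginal contribution over all orders in which the coalition could form. A self-contained sketch with a made-up three-firm characteristic function (the savings figures are illustrative, not from the study):

```python
# Shapley value by direct enumeration of join orders: each player's
# share is its marginal contribution averaged over all permutations.
from itertools import permutations

def shapley(players, v):
    shares = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            shares[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: s / len(orders) for p, s in shares.items()}

# v(S) = shipping-cost savings a coalition S of firms achieves jointly.
savings = {frozenset(): 0, frozenset("A"): 0, frozenset("B"): 0,
           frozenset("C"): 0, frozenset("AB"): 4, frozenset("AC"): 4,
           frozenset("BC"): 4, frozenset("ABC"): 9}
values = shapley(["A", "B", "C"], lambda s: savings[frozenset(s)])
# Symmetric game: each firm's fair share of the savings is 3.0
```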

Paper 97: Opportunistic use of Spectral Holes in Karachi using Convolutional Neural Networks

Abstract: The wireless services appearing in the next-generation wireless standard, 6G, including the Internet of Everything (IoE), holographic communications, smart transportation and smart cities, require an exponential rise in bandwidth in addition to other requirements. The current static spectrum allocation policy does not allow any new entrant to exploit the already grid-locked Radio Frequency (RF) spectrum. Hence, the quest for larger bandwidth must be fulfilled through other technologies, including exploiting the sub-terahertz band, Visible Light Communication, and Cognitive Radio schemes that exploit RF bands in an opportunistic fashion. Cognitive Radio is one engine for exploiting the RF spectrum in a secondary style, and it can use artificial-intelligence-driven algorithms to complete the task. Several intelligent algorithms can be used for better forecasting of spectral holes. The Convolutional Neural Network (CNN) is a deep learning algorithm that can be used to predict the presence of spectral holes that can be opportunistically exploited for efficient secondary utilization of the RF spectrum. This paper investigates the performance of a CNN for the metropolitan city of Karachi, Pakistan, so that users can be provided with uninterrupted network access even during busy hours. The dataset for the proposed setup was collected for the 1805 MHz frequency band through NI 2901 Universal Software Radio Peripheral (USRP) devices. The root mean square error (RMSE) of the predicted results using the CNN is 81.02 at epoch 200, with a mini-batch loss of 3281.8. Based on the predicted results, it was concluded that CNN can be useful for investigating possible opportunistic usage of the RF spectrum; however, further investigation is required with different datasets.

Author 1: Aamir Zeb Shaikh
Author 2: Shabbar Naqvi
Author 3: Minaal Ali
Author 4: Yamna Iqbal

Keywords: Cognitive Radio; Spectral hole; Deep Learning; Convolutional neural network (CNN)

PDF
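The RMSE figure the paper reports is the square root of the mean squared prediction error; a minimal computation with illustrative channel-occupancy predictions (not the paper's data):

```python
# Root mean square error between predicted and actual values.
import math

def rmse(predicted, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

# Toy predictions of whether a channel was occupied (1) or free (0).
err = rmse([0.9, 0.2, 0.7, 0.1], [1.0, 0.0, 1.0, 0.0])
```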

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org