The Science and Information (SAI) Organization
IJACSA Volume 12 Issue 1

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.


Paper 1: A Novel Traffic Shaping Algorithm for SDN-Sliced Networks using a New WFQ Technique

Abstract: Managing traditional networks comes with a number of challenges due to their limitations, in particular the absence of central control. Software-Defined Networking (SDN) is a relatively new idea in networking that enables networks to be centrally controlled, or programmed, using software applications. Novel traffic shaping (TS) algorithms are proposed for the implementation of a Quality of Service (QoS) bandwidth management technique to optimise performance and solve network congestion problems. Specifically, two algorithms, namely “Packet Tagging, Queueing and Forwarding to Queues” and “Allocating Bandwidth”, are proposed for implementing a Weighted Fair Queuing (WFQ) technique, as a new methodology in an SDN-sliced testbed to reduce congestion and facilitate a smooth traffic flow. This methodology aims to improve QoS by doing two things simultaneously: first, it makes traffic conform to an individual rate, using WFQ to select the appropriate queue for each packet; second, it combines this with buffer management, which decides whether to put a packet into the queue according to the proposed algorithm defined for this purpose. In this way, latency and congestion remain in check, thus meeting the requirements of real-time services. The Differentiated Services (DiffServ) protocol is used to define classes in order to make network traffic patterns more sensitive to the video, audio and data traffic classes, by specifying a precedence for each traffic type. The SDN networks are controlled by Floodlight controller(s) and FlowVisor, the slicing controller, which characterise the behaviour of such networks. The network topology is then modelled and simulated on the Mininet testbed emulator platform. To achieve the highest level of accuracy, the SPSS statistical package's Analysis of Variance (ANOVA) is used to analyse particular traffic measures, namely throughput, delay and jitter, as separate performance indices, all of which contribute to QoS. The results show that the TS algorithms do, indeed, permit more advanced allocation of bandwidth, and that they reduce critical delays compared to standard FIFO queueing in SDN.
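The WFQ scheduling idea described above, serving each queue in proportion to a configured weight, can be sketched in a few lines. This is a minimal illustration of the technique, not the paper's algorithm; the queue names and weights are invented:

```python
def wfq_order(packets, weights):
    """Order packets by WFQ virtual finish time.

    packets: list of (queue_id, size_bytes); weights: {queue_id: weight}.
    A packet advances its queue's finish time by size / weight, so
    heavier-weighted queues (e.g. video) drain proportionally faster.
    """
    finish = {q: 0.0 for q in weights}     # running virtual finish time per queue
    tagged = []
    for i, (q, size) in enumerate(packets):
        finish[q] += size / weights[q]
        tagged.append((finish[q], i))
    return [i for _, i in sorted(tagged)]  # transmit lowest finish time first

# Video weighted 4x over bulk data: its packets are scheduled ahead.
order = wfq_order([("video", 100), ("data", 100), ("video", 100)],
                  {"video": 4, "data": 1})
```

Because ordering is by virtual finish time rather than arrival, a heavy queue cannot starve a light one: each queue receives bandwidth in proportion to its weight.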

Author 1: Ronak Al-Haddad
Author 2: Erika Sanchez Velazquez
Author 3: Arooj Fatima
Author 4: Adrian Winckles

Keywords: Network congestion; SDN; slicing; QoS; queueing; OpenFlow (OF); Weighted Fair Queuing (WFQ); SPSS Analysis of Variance (ANOVA)

PDF

Paper 2: Human-Robot Interaction and Collaboration (HRI-C) Utilizing Top-View RGB-D Camera System

Abstract: In this study, a smart and affordable system that utilizes an RGB-D camera to measure the exact position of an operator with respect to an adjacent robotic manipulator was developed. This technology was implemented in a simulated human operation in an automated manufacturing robot to achieve two goals: enhancing the safety measures around the robot by adding an affordable smart system for human detection and robot control, and developing a system that allows human-robot collaboration to finish a predefined task. The system utilized an Xbox Kinect V2 sensor/camera and a Scorbot ER-V Plus to model and mimic the selected applications. To achieve these goals, a geometric model for the Scorbot and Xbox Kinect V2 was developed, a robotic joint calibration was applied, a background-segmentation algorithm was utilized to detect the operator, a dynamic binary mask for the robot was implemented, and the efficiency of both systems was analyzed in terms of response time and localization error. The first application, an add-on safety device, aims to monitor the workspace and control the robot to avoid any collisions when an operator enters or gets closer. This application will reduce or remove physical barriers around robots, expand the physical work area, reduce proximity limitations, and enhance human-robot interaction (HRI) in an industrial environment while sustaining a low cost. The system was able to respond to human intrusion and prevent any collision within 500 ms on average, and it was found that the system's bottleneck was the PC-robot inter-communication speed. The second application was a successful collaborative scenario between a robot and a human operator, in which the robot deposits an object on the operator's hand, mimicking a real-life human-robot collaboration (HRC) task. The system was able to detect the operator's hand and its location and then command the robot to place an object on the hand; it placed the object within a mean error of 2.4 cm, and its limitation was the internal variables and the data transmission speed between the robot controller and the main computer. These results are encouraging, and ongoing work aims to experiment with different operations and implement gesture detection in real-time collaboration tasks while keeping the human operator safe and predicting their behavior.

Author 1: Tariq Tashtoush
Author 2: Luis Garcia
Author 3: Gerardo Landa
Author 4: Fernando Amor
Author 5: Agustin Nicolas Laborde
Author 6: Damian Oliva
Author 7: Felix Safar

Keywords: Robotics manipulator; robot end-effector; computer vision; human-robot interaction (HRI); human-robot collaboration (HRC); robotics safety; scorbot; Kinect; RGB camera; industrial system modeling; manufacturing systems design

PDF

Paper 3: Application-based Evaluation of Automatic Terminology Extraction

Abstract: The aim of this paper is to evaluate the performance of several automatic term extraction methods which can be easily utilized by translators themselves. The experiments are conducted on German newspaper articles in the domain of politics, on the topic of Brexit. However, they can easily be replicated on any other topic or language, as long as it is supported by all three tools used. The paper first provides an extensive introduction to the field of automatic terminology extraction. Next, selected terminology extraction methods are assessed using precision with respect to a gold standard compiled on the same corpus. Moreover, the corpus has been completely annotated to allow for the calculation of recall. The effects of using five cut-off points are examined in order to find an optimal value for use in translation practice.
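The cut-off evaluation described above reduces to computing precision and recall over the top-k ranked candidate terms against the gold standard. A minimal sketch; the example terms and ranking below are invented, not taken from the paper's corpus:

```python
def precision_recall_at_k(ranked_terms, gold, k):
    """Precision and recall of the top-k extracted terms vs. a gold standard."""
    hits = sum(1 for term in ranked_terms[:k] if term in gold)
    return hits / k, hits / len(gold)

# Hypothetical ranked extractor output and a 3-term gold standard.
ranked = ["brexit", "withdrawal agreement", "the", "backstop"]
gold = {"brexit", "withdrawal agreement", "backstop"}
for k in (2, 4):                     # two example cut-off points
    p, r = precision_recall_at_k(ranked, gold, k)
```

Raising the cut-off trades precision for recall, which is exactly the balance the five cut-off points in the study probe.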

Author 1: Marija Brkic Bakaric
Author 2: Nikola Babic
Author 3: Maja Matetic

Keywords: Terminology extraction; hybrid methods; evaluation; precision; recall; gold standard; language resources

PDF

Paper 4: Deep Learning Architectures and Techniques for Multi-organ Segmentation

Abstract: Deep learning architectures used for automatic multi-organ segmentation in the medical field have gained increased attention in recent years, as their results and achievements outweigh those of older techniques. Due to improvements in computer hardware and the development of specialized network designs, deep learning segmentation presents exciting developments and opportunities for future research. We have therefore compiled a review of the most interesting deep learning architectures applicable to medical multi-organ segmentation. We have summarized over 50 contributions, most of which are less than three years old. The papers were grouped into three categories based on architecture: “Convolutional Neural Networks” (CNNs), “Fully Convolutional Neural Networks” (FCNs) and hybrid architectures that combine multiple designs, including “Generative Adversarial Networks” (GANs) or “Recurrent Neural Networks” (RNNs). Afterwards, we present the most used multi-organ datasets, and we conclude with a general discussion of current shortcomings and potential future research paths.

Author 1: Valentin Ogrean
Author 2: Alexandru Dorobantiu
Author 3: Remus Brad

Keywords: Deep Learning; Multi-Organ Segmentation; Fully Convolutional Neural Networks (FCNs); Generative Adversarial Networks (GANs); Recurrent Neural Networks (RNNs)

PDF

Paper 5: Development of a Physical Impairment Prediction Model for Korean Elderly People using Synthetic Minority Over-Sampling Technique and XGBoost

Abstract: Older people's 'physical functioning' is a key factor of active ageing, as well as a major factor in determining the quality of life and the need for long-term care in old age. Previous studies that identified factors related to activities of daily living (ADL) mostly used regression analysis to predict groups at high risk of physical impairment. Regression analysis is useful for confirming individual risk factors, but has limitations in capturing multiple risk factors. As methods for resolving this limitation of regression models, machine learning ensemble boosting models such as random forest and eXtreme Gradient Boosting (XGBoost) are widely used. Nonetheless, the prediction performance of XGBoost, such as accuracy and sensitivity, remains to be verified by follow-up studies. This article proposes an effective method of dealing with imbalanced data for the development of ensemble-based machine learning, by comparing the performance of disease-data sampling methods. This study analyzed 3,351 older people aged 65 or above who resided in local communities and completed the survey. As machine learning models to predict physical impairment in old age, this study compared a logistic regression model, XGBoost and random forest with respect to the predictive performance measures of accuracy, sensitivity, and specificity. The final model was selected as the one whose sensitivity and specificity were 0.6 or above and whose accuracy was highest. As a result, synthetic minority over-sampling technique (SMOTE)-based XGBoost, whose accuracy, sensitivity, and specificity were 0.67, 0.81, and 0.75, respectively, showed the best predictive performance. The results of this study suggest that when developing a predictive model using imbalanced data, such as disease data, it is efficient to use the SMOTE-based XGBoost model.
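The selection rule stated above (keep only models whose sensitivity and specificity reach 0.6, then take the most accurate) can be written directly in code. The SMOTE-XGBoost metrics below are the values reported in the abstract; the competitor's metrics are purely illustrative:

```python
def select_model(results, floor=0.6):
    """results: {name: (accuracy, sensitivity, specificity)}.
    Keep models whose sensitivity and specificity reach the floor,
    then return the name of the most accurate survivor (None if none)."""
    eligible = {name: m for name, m in results.items()
                if m[1] >= floor and m[2] >= floor}
    return max(eligible, key=lambda n: eligible[n][0]) if eligible else None

best = select_model({
    "SMOTE-XGBoost": (0.67, 0.81, 0.75),        # values from the abstract
    "logistic regression": (0.70, 0.45, 0.90),  # illustrative competitor
})
```

Note that the more accurate competitor is rejected because its sensitivity falls below the floor; the rule deliberately prioritizes balanced detection over raw accuracy.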

Author 1: Haewon Byeon

Keywords: Random forest; XGBoost; GBM; gradient boosting machine; physical impairment prediction model

PDF

Paper 6: Data-Forensic Determination of the Accuracy of International COVID-19 Reporting: Using Zipf’s Law for Pandemic Investigation

Abstract: Severe outbreaks of infectious disease occur throughout the world, with some reaching the level of international pandemic: Coronavirus (COVID-19) is the most recent to do so. In this paper, a mechanism is set out using Zipf’s law to establish the accuracy of international reporting of COVID-19 cases via a determination of whether an individual country’s COVID-19 reporting follows a power law for confirmed, recovered, and death cases of COVID-19. The probability of Zipf’s law (P-values) for COVID-19 confirmed cases shows that Uzbekistan has the highest P-value of 0.940, followed by Belize (0.929) and Qatar (0.897). For COVID-19 recovered cases, Iraq had the highest P-value of 0.901, followed by New Zealand (0.888) and Austria (0.884). Furthermore, for COVID-19 death cases, Bosnia and Herzegovina had the highest P-value of 0.874, followed by Lithuania (0.843) and Morocco (0.825). China, where the COVID-19 pandemic began, is a significant outlier, recording P-values lower than 0.1 for the confirmed, recovered, and death cases. This raises important questions, not only for China, but also for any country whose data exhibit P-values below this threshold. The main application of this work is to serve as an early warning for the World Health Organization (WHO) and other health regulatory bodies to perform more investigations in countries where COVID-19 datasets deviate significantly from Zipf’s law. To this end, this paper provides a tool for illustrating Zipf’s law P-values on a global map in order to convey the geographic distribution of reporting anomalies.
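A rank-frequency check against Zipf's law can be sketched without a statistics package: on a log-log plot of frequency against rank, Zipfian data fall on a line with slope near -1. The paper's P-value test is a more formal version of this idea; the counts below are synthetic, not real case data:

```python
import math

def zipf_slope(counts):
    """Least-squares slope of log(frequency) against log(rank).
    Counts that follow Zipf's law give a slope close to -1."""
    freqs = sorted(counts, reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic counts generated as 1000 / rank: near-perfect Zipf, slope ~ -1.
slope = zipf_slope([1000, 500, 333, 250, 200, 167, 143, 125])
```

Data whose slope (or, in the paper's framing, P-value) deviates strongly from the Zipfian expectation would be flagged for further investigation.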

Author 1: Aamo Iorliam
Author 2: Anthony T S Ho
Author 3: Santosh Tirunagari
Author 4: David Windridge
Author 5: Adekunle Adeyelu
Author 6: Samera Otor
Author 7: Beatrice O. Akumba

Keywords: COVID-19; power-law; pandemic; Zipf’s Law; WHO

PDF

Paper 7: Clustering K-Means Algorithms and Econometric Lethality Model by Covid-19, Peru 2020

Abstract: Objective: The study examines the Covid-19 wave in Peru: where and when it began and where and when it culminated. It also examines the shortcomings that were detected as the wave was faced, and especially how little an emerging country could do to confront the disease. The wave began in May, peaked in August with the greatest number of deaths, and then fell. Methodology: Basic research at the explanatory level, with SINADEF data by region from the situation room, to obtain the number of deaths between January and September of 2020 and 2019. Results: The relationship between infected and deceased showed a Pearson's rho of 0.94. The total death toll model depends on Lima, Huánuco, and Piura. The differences between the deaths of 2019 and 2020 were corroborated with ANOVA, where a two-tailed significance of 0.042 was obtained. The COVID cycle appears in the cluster algorithm model: in 44.4% of the nine months, between May and August, it generated the highest lethality. Conclusion: It is shown that COVID devastated regions of Peru. The model generated by the K-Means algorithm indicates that the COVID-19 cycle began in March, reached its highest peak of deaths, and then descended.

Author 1: Javier Pedro Flores Arocutipa
Author 2: Jorge Jinchuña Huallpa
Author 3: Gamaniel Carbajal Navarro
Author 4: Luís Delfín Bermejo Peralta

Keywords: Infected; lethality; COVID 19 cycle; razing

PDF

Paper 8: Optimize the Cost of Resources in Federated Cloud by Collaborated Resource Provisioning and Most Cost-effective Collated Providers Resource First Algorithm

Abstract: Cloud computing provides many of its services to cloud consumer agents with differing requests for huge computational VMs with large storage capacity. The instance requests of cloud consumers change dynamically as their application requirements grow with business demand, and a single-vendor cloud becomes a constraint on satisfying these needs. A federated cloud can contribute solution approaches to meet these dynamic needs for resource instances. The interoperability of clouds was made realistic with cloud federation. This paper provides an optimized solution approach in which a set of collaborating cloud providers provides services to satisfy consumer agents' multiple requests. It presents a two-phase collaborated resource provisioning (CCRP) approach and the Most Cost-Effective Collated Providers Resources First (MCECPRF) algorithm. The algorithm's efficiency has been tested with a specific data set for optimizing the cost for cloud consumer agents, and the cancellation of requests and the decision time for provisioning are analyzed for different VM configurations within specific time slots.
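A "cheapest provider first" allocation can be sketched as a greedy loop. This is only an illustration of the cost-first idea, not the paper's MCECPRF algorithm; the provider names, prices and capacities are invented:

```python
def provision(requests, providers):
    """Serve each VM request from the cheapest provider with spare capacity.

    requests: list of VM counts; providers: list of dicts with
    'name', 'price' (per VM) and 'capacity'. Unservable requests are
    recorded as None (a cancellation, as analyzed in the paper).
    """
    plan, total_cost = [], 0.0
    for vms in requests:
        for p in sorted(providers, key=lambda p: p["price"]):
            if p["capacity"] >= vms:
                p["capacity"] -= vms
                plan.append(p["name"])
                total_cost += vms * p["price"]
                break
        else:
            plan.append(None)          # no provider can serve: cancelled
    return plan, total_cost

plan, cost = provision([2, 2, 4], [
    {"name": "A", "price": 1.0, "capacity": 3},
    {"name": "B", "price": 2.0, "capacity": 5},
])
```

The greedy choice minimizes per-request cost but, as the final cancelled request shows, capacity fragmentation across providers still matters, which is the kind of trade-off a collaborated provisioning phase addresses.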

Author 1: V Pradeep Kumar
Author 2: Kolla Bhanu Prakash

Keywords: Cloud computing; federated cloud; collaborated resource provisioning; optimized cost

PDF

Paper 9: Impact of Artificial Intelligence-enabled Software-defined Networks in Infrastructure and Operations: Trends and Challenges

Abstract: The emerging technologies trending up in information and communication technology are tuning enterprises for the better. The existing infrastructure and operations (I&O) support enterprises with services and functionalities that consider the diverse requirements of end-users. However, they are not free of challenges and issues to address as technology has advanced. This paper explains the impact of artificial intelligence (AI) on enterprises using software-defined networking (SDN) in I&O. The fusion of artificial intelligence with software-defined networking in infrastructure and operations makes it possible to automate processes based on experience and provides opportunities for management to make quick decisions. But this fusion has many challenges to be addressed. This research discusses the trends and challenges impacting infrastructure and operations, the role of AI-enabled SDN in I&O, and the benefits it provides that influence the directional path. Furthermore, the challenges to be addressed in implementing AI-enabled SDN in I&O show future directions to explore.

Author 1: Mohammad Riyaz Belgaum
Author 2: Zainab Alansari
Author 3: Shahrulniza Musa
Author 4: Muhammad Mansoor Alam
Author 5: M. S. Mazliham

Keywords: Artificial intelligence; infrastructure and operations; software-defined network; virtualization

PDF

Paper 10: Predicting the Depression of the South Korean Elderly using SMOTE and an Imbalanced Binary Dataset

Abstract: Since the number of healthy people is much larger than that of ill people, the problem of imbalanced data is highly likely to occur when predicting the depression of the elderly living in the community using big data. When raw data are analyzed directly, without supplementary techniques such as a sampling algorithm for datasets that have imbalanced class ratios, machine learning performance can decrease because of prediction errors in the analysis process. Therefore, it is necessary to use a data sampling technique to overcome this imbalanced data issue. This study therefore tried to identify an effective way of processing imbalanced data for developing ensemble-based machine learning, by comparing the performance of sampling methods on the depression data of the elderly living in South Korean communities, which had quite imbalanced class ratios. This study developed a model for predicting the depression of the elderly living in the community using a logistic regression model, a gradient boosting machine (GBM), and random forest, and compared their accuracy, sensitivity, and specificity to evaluate their prediction performance. This study analyzed 4,085 elderly people (≥60 years old) living in the community. The depression data used in this study had an imbalance issue: the result of the depression screening test showed that 87.5% of subjects did not have depression, while 12.5% of them did. This study used oversampling, undersampling, and SMOTE methods to overcome the imbalance problem of the binary dataset, and the prediction performance (accuracy, sensitivity, and specificity) of each sampling method was compared. The results confirmed that the SMOTE-based random forest algorithm, showing the highest accuracy (with a sensitivity ≥ 0.6 and a specificity ≥ 0.6), gave the best prediction performance among random forest, GBM, and logistic regression analysis. Further studies are needed to compare the accuracy of SMOTE, undersampling, and oversampling for imbalanced data with high-dimensional y-variables.
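The SMOTE idea being compared above, synthesizing minority-class points by interpolating between a minority sample and one of its nearest minority neighbours, can be sketched in pure Python. In practice a library such as imbalanced-learn would be used; this toy version, with invented 2-D feature points, is for illustration only:

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic samples from minority-class points.

    Each synthetic point lies on the segment between a random minority
    sample and one of its k nearest minority neighbours.
    """
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)))[:k]
        nb = rng.choice(neighbours)
        t = rng.random()                  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Three minority-class points (illustrative 2-D features) grown by five more.
new_points = smote([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], n_new=5)
```

Unlike plain oversampling, which duplicates existing minority rows, the interpolated points populate the region between minority samples, which is why SMOTE often improves sensitivity on imbalanced data.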

Author 1: Haewon Byeon

Keywords: Random forests; gradient boosting machine; SMOTE; undersampling; imbalanced data; oversampling

PDF

Paper 11: Validation of the Components and Elements of Computational Thinking for Teaching and Learning Programming using the Fuzzy Delphi Method

Abstract: Computational thinking is a phrase employed to describe the growing focus on developing students' knowledge of designing computational solutions to problems, algorithmic thinking, and coding. The difficulty of learning computer programming is a challenge for students and teachers. Students' ability in programming is closely related to their problem-solving skills and their cognitive abilities. Even though computational thinking is a problem-solving skill of the 21st century, its use for programming needs to be planned systematically, taking into account the appropriate components and elements. Therefore, this study aims to validate the main components and elements of computational thinking for solving problems in programming. At the beginning of the study, the researchers conducted a literature review to determine the components and elements of computational thinking that could be used in teaching and learning programming. This validation involved the consensus of a group of experts using the Fuzzy Delphi method. The data were analysed using the Fuzzy Delphi technique, where the experts individually evaluated the components and elements agreed upon in a prior discussion. A group of 15 experts validated 14 components and 35 elements. The results showed that all components and elements reached a threshold (d) value of less than 0.2, a percentage of agreement exceeding 75%, and a Fuzzy score (A) exceeding 0.5. The findings indicate that the proposed main components and elements of computational thinking are suitable for problem-solving approaches in programming.
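The consensus test reported above (threshold d below 0.2, over 75% expert agreement, fuzzy score above 0.5) can be sketched with triangular fuzzy numbers. The distance formula and the averaging defuzzification used here are common Fuzzy Delphi choices assumed for illustration, not necessarily the exact variants in the paper:

```python
def fuzzy_delphi(ratings, d_max=0.2, agree_min=75.0, score_min=0.5):
    """ratings: one triangular fuzzy number (m1, m2, m3) per expert.
    Returns (mean_d, pct_agree, fuzzy_score, accepted)."""
    n = len(ratings)
    avg = tuple(sum(r[i] for r in ratings) / n for i in range(3))
    # distance of each expert's rating from the group-average fuzzy number
    ds = [(sum((r[i] - avg[i]) ** 2 for i in range(3)) / 3) ** 0.5
          for r in ratings]
    pct_agree = 100.0 * sum(1 for d in ds if d <= d_max) / n
    score = sum(avg) / 3               # simple average defuzzification
    accepted = (sum(ds) / n <= d_max and pct_agree >= agree_min
                and score >= score_min)
    return sum(ds) / n, pct_agree, score, accepted

# Five hypothetical experts rating one element; four agree closely.
result = fuzzy_delphi([(0.6, 0.8, 1.0)] * 4 + [(0.4, 0.6, 0.8)])
```

An element passes only when the experts cluster tightly around a sufficiently high group rating, which is what the three thresholds jointly enforce.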

Author 1: Karimah Mohd Yusoff
Author 2: Noraidah Sahari Ashaari
Author 3: Tengku Siti Meriam Tengku Wook
Author 4: Noorazean Mohd Ali

Keywords: Expert consensus; focus group; problem-solving; components; elements

PDF

Paper 12: Ground Control Point Generation from Simulated SAR Image Derived from Digital Terrain Model and its Application to Texture Feature Extraction

Abstract: Ground Control Point (GCP) generation from a simulated topographic map derived from a Digital Terrain Model (DTM) is proposed. Texture feature extraction from the simulated image is also attempted. In this study, the simulated image is derived from elevation data only, under the assumptions of a simple scattering model without consideration of the complex dielectric constant of the targets of interest. The performance of the acquired GCPs was evaluated using several measures based on the texture features of the GCP chip images. This paper describes the details of the proposed method for the acquisition of GCPs, together with simulated results on the relationship between texture features and the GCP matching success rate corresponding to the cross-correlation between reference and distorted GCP chip images.
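GCP matching by cross-correlation, as evaluated above, can be sketched as normalized cross-correlation between a reference chip and a candidate chip. Both chips are flattened to 1-D lists here for brevity, and the pixel values are invented:

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-size image chips.
    Returns a value in [-1, 1]; values near 1 indicate a good match."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

# A chip matched against a brightness-scaled copy of itself correlates at 1,
# because mean subtraction and normalization remove gain and offset.
score = ncc([10, 20, 30, 40], [20, 40, 60, 80])
```

Low-texture chips give flat correlation surfaces and ambiguous matches, which is why the paper relates texture features to the matching success rate.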

Author 1: Kohei Arai

Keywords: Ground Control Point: GCP; Digital Terrain Model: DTM; scattering model; complex dielectric constant; texture feature; matching success rate; GCP chip

PDF

Paper 13: Modeling of the Factors Affecting e-Commerce Use in Turkey by Categorical Data Analysis

Abstract: e-Commerce use is a subject that has been frequently studied in recent years. The aim of this study was to detect the socio-demographic and economic factors affecting individuals' e-commerce use in Turkey. The micro dataset obtained from the Information and Communication Technology (ICT) Usage Survey in Households, performed by the Turkish Statistical Institute in 2014-2018, was employed in this study. Multinomial logistic and multinomial probit regression analyses were performed to detect the factors affecting individuals' e-commerce use in Turkey. The data of 129,643 individuals who participated in the ICT Usage Survey in Households in 2014-2018 were employed in the regression analyses. According to the analysis results, the variables of survey year, age, gender, educational level, occupation, income level, region and household size were found to affect online shopping. The results of the study indicated that e-commerce use is gradually increasing. It was determined that more educated and younger individuals, and individuals living in relatively more developed regions, were more inclined towards online shopping. Policies should also be developed to increase e-commerce use among less educated individuals and individuals over middle age. In particular, small and medium-sized businesses (SMBs) should pay more attention to the use of e-commerce in order to increase their activities by taking these situations into consideration. Indeed, the importance of e-commerce use has become evident in epidemics/pandemics such as COVID-19, which cause people to lock themselves at home.

Author 1: Ömer Alkan
Author 2: Hasan Küçükoglu
Author 3: Gökhan Tutar

Keywords: Electronic commerce; online shopping; online purchase; e-commerce; Turkey; multinomial probit regression

PDF

Paper 14: Customer Profiling for Malaysia Online Retail Industry using K-Means Clustering and RM Model

Abstract: Malaysia's online retail industry has grown more sophisticated over the past years and is not expected to stop growing in the following years. Meanwhile, customers are becoming smarter about buying. Online retailers have to identify and understand their customers' needs to provide appropriate services/products to demanding customers and to attract new ones. Customer profiling is a method that helps retailers understand their customers. This study examines the usefulness of the LRFMP model (Length, Recency, Frequency, Monetary, and Periodicity), the models comprising subsets of its variables, and its predecessor, the RFM model, using the Silhouette Index test. Furthermore, an automated Elbow Method was employed and its usefulness compared against conventional visual analytics. As a result, the RM model was selected as the best model for performing K-Means clustering in the given context. Although the full LRFMP model proved less useful in K-Means clustering, some of its variables remained useful in the customer profiling process by providing extra information on cluster characteristics. Moreover, the effect of sample size on cluster validity was investigated. Lastly, the limitations and recommendations for future research are discussed to provide a bridge to future work.
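The R, F and M variables at the heart of the profiling above are straightforward to derive from a transaction log before clustering. A minimal sketch with invented customers, dates and amounts:

```python
from datetime import date

def rfm(transactions, today):
    """transactions: (customer, purchase_date, amount) triples.
    Returns {customer: (recency_days, frequency, monetary)}."""
    latest, freq, spend = {}, {}, {}
    for cust, d, amount in transactions:
        latest[cust] = max(latest.get(cust, d), d)   # most recent purchase
        freq[cust] = freq.get(cust, 0) + 1           # purchase count
        spend[cust] = spend.get(cust, 0.0) + amount  # total spend
    return {c: ((today - latest[c]).days, freq[c], spend[c]) for c in latest}

profile = rfm([
    ("alice", date(2021, 1, 3), 50.0),
    ("alice", date(2021, 1, 20), 30.0),
    ("bob",   date(2020, 12, 1), 200.0),
], today=date(2021, 2, 1))
```

These per-customer triples (or the R and M columns alone, for an RM model) would then be standardized and fed to K-Means, with the silhouette index or an elbow criterion choosing the cluster count.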

Author 1: Tan Chun Kit
Author 2: Nurulhuda Firdaus Mohd Azmi

Keywords: Customer Profiling; LRFMP; RFM; Data Mining; K-Means Clustering

PDF

Paper 15: Amalgamation of Machine Learning and Slice-by-Slice Registration of MRI for Early Prognosis of Cognitive Decline

Abstract: Brain atrophy is the degradation of brain cells and tissues to the extent that it is clearly indicated during the Mini-Mental State Exam and other psychological analyses. It is an alarming state of the human brain that progressively results in Alzheimer's disease, which is not curable. Timely detection of brain atrophy, however, can help millions of people before they reach the state of Alzheimer's. In this study we analyzed the longitudinal structural MRI of older adults aged 42 to 96 from the OASIS-3 Open Access Database. The nth slice of one subject does not match the nth slice of another subject because the head position under the magnetic field is not synchronized. As a radiologist analyzes MRI data slice-wise, our system also compares the MRI images slice-wise; we devised a method of slice-by-slice registration by deriving the mid-slice location in each MRI image, so that slices from different MRI images can be compared with the least error. Machine learning is a technique that helps to exploit the information available in an abundance of data; it can detect patterns in data that give an indication and detection of particular events and states. Each MRI slice was analyzed using simple statistical determinants and Gray-Level Co-occurrence Matrix (GLCM) based statistical texture features from whole-brain MRI images. The study explored various classifiers (Support Vector Machine, Random Forest, K-nearest neighbor, Naive Bayes, AdaBoost and Bagging) to predict how normal brain atrophy differs from brain atrophy causing cognitive impairment. Different hyperparameters of the classifiers were tuned to get the best results. The study indicates that Support Vector Machine and AdaBoost are the most promising classifiers for automatic medical image analysis and early detection of brain diseases. AdaBoost gives an accuracy of 96.76% with a specificity of 95.87%, a sensitivity of 87.37% and a receiver operating characteristic (ROC) accuracy of 96.3%. The SVM gives an accuracy of 96% with 92% specificity, 87% sensitivity and an ROC accuracy of 95.05%.
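The GLCM texture features relied on above count co-occurrences of gray levels at a fixed pixel offset. A minimal pure-Python sketch for one horizontal offset, with the Haralick contrast feature as an example (a real pipeline would use an image-processing library and several offsets):

```python
def glcm(image, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for offset (dx, dy).
    image: 2-D list of integer gray levels in range(levels)."""
    P = [[0] * levels for _ in range(levels)]
    for y in range(len(image) - dy):
        for x in range(len(image[0]) - dx):
            P[image[y][x]][image[y + dy][x + dx]] += 1
    total = sum(map(sum, P))
    return [[v / total for v in row] for row in P]

def contrast(P):
    """Haralick contrast: local intensity variation, sum of P[i][j]*(i-j)^2."""
    return sum(P[i][j] * (i - j) ** 2
               for i in range(len(P)) for j in range(len(P)))

# A tiny 2-level 'slice': uniform regions keep contrast low, edges raise it.
feature = contrast(glcm([[0, 0, 1], [0, 0, 1], [0, 1, 1]], levels=2))
```

Scalar features like contrast (and energy, homogeneity, correlation) computed per registered slice form the feature vectors that the classifiers above consume.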

Author 1: Manju Jain
Author 2: C.S. Rai
Author 3: Jai Jain
Author 4: Deepak Gambhir

Keywords: Brain atrophy; registration; Freesurfer; GLCM; texture features; FDR; decision support system; SVM; AdaBoost; Randomforest Bagging; KNN; Naive Bayes; classification; hyperparameters; GridsearchCV; Sklearn; Python

PDF

Paper 16: Full Direction Local Neighbors Pattern (FDLNP)

Abstract: In this paper, we propose the Full Direction Local Neighbors Pattern (FDLNP) algorithm, a novel method for content-based image retrieval. FDLNP consists of several steps, starting with generating Max and Min quantizers, followed by building two matrix types (the Eight Neighbors Euclidean Decimal Coding matrix and the Full Direction matrices). After that, we extract a Gray-Level Co-occurrence Matrix (GLCM) from those matrices to derive the important features from each GLCM, finishing by merging the output of the previous steps with the Local Neighbor Patterns (LNP) histogram. To decrease the feature vector length, we propose five extensions of FDLNP that choose specific direction matrices. Our results demonstrate the effectiveness of the proposed algorithm on color and texture databases in comparison with recent works, with regard to precision, recall, mean Average Precision (mAP), and Average Retrieval Rate (ARR). To enhance image retrieval accuracy, we propose a novel framework that combines the image retrieval system with clustering and classification algorithms. Moreover, we propose a distributed model that uses our FDLNP method with Hadoop to gain the ability to process a huge number of images in a reasonable time.

Author 1: Maher Alrahhal
Author 2: Supreethi K.P

Keywords: Content-Based image retrieval; full direction local neighbor patterns; local neighbor pattern; gray-level co-occurrence matrix; ensemble classifiers; k-means clustering; hadoop

PDF

Paper 17: Stanza Type Identification using Systematization of Versification System of Hindi Poetry

Abstract: Poetry covers a vast part of the literature of any language. Similarly, Hindi poetry constitutes a massive portion of Hindi literature. In Hindi poetry construction, it is necessary to take care of various verse-writing rules. This paper focuses on the automatic generation of metadata from such poems using advanced, systematic, prosody rule-based modeling and detection procedures, integrated with computational linguistics and specially designed for Hindi poetry. The paper covers the various challenges and the best possible solutions to those challenges, describing the methodology to generate automatic metadata for “Chhand” based on the poems’ stanzas. It also provides some advanced information and techniques for metadata generation for “Muktak Chhands”. The rules of the “Chhands” incorporated in this research were identified, verified, and modeled from a computational linguistics perspective for the very first time, which required a lot of effort and time. In this research work, 111 different “Chhand” rules were found. This paper presents rule-based modeling of all of the “Chhands”. Of all the modeled “Chhands”, the research work covers 53 “Chhands” for which at least 20 to 277 examples were found and used for automatic processing of the data for metadata generation. The automatic metadata generator processed 3120 UTF-8 based inputs of 53 Hindi “Chhand” types, achieving 95.02% overall accuracy with an overall failure rate of 4.98%. The minimum time taken for processing a “Chhand” for metadata generation was 1.12 seconds, and the maximum was 91.79 seconds.

Author 1: Milind Kumar Audichya
Author 2: Jatinderkumar R. Saini

Keywords: Chhand; computational linguistics; Hindi; metadata; poetry; prosody; stanza; verse

PDF

Paper 18: Detection and Recognition of Moving Video Objects: Kalman Filtering with Deep Learning

Abstract: Research in object recognition has lately found that Deep Convolutional Neural Networks (CNN) provide a breakthrough in detection scores, especially in video applications. This paper presents an approach to object recognition in videos that combines a Kalman filter with a CNN. The Kalman filter is first applied for detection, removing the background and then cropping the object. Kalman filtering performs three important functions: predicting the future location of the object, reducing noise and interference from incorrect detections, and associating multiple objects with tracks. After detecting and cropping the moving object, a CNN model predicts the category of the object. The CNN model is built on more than 1000 images of humans, animals, and other objects, with an architecture that consists of ten layers. The input layer takes images of size 100 * 100. The convolutional layer contains 20 masks of size 5 * 5, followed by a normalization layer and then max-pooling. The proposed hybrid algorithm was applied to 8 different videos with a total duration of 15.4 minutes, containing 23100 frames. In this experiment, recognition accuracy reached 100%, with the proposed system outperforming six existing algorithms.

Author 1: Hind Rustum Mohammed
Author 2: Zahir M. Hussain

Keywords: Convolution Neural Network (CNN); Kalman filter; moving object; video tracking

PDF
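The noise-reduction role the Kalman filter plays in such a pipeline can be illustrated with a minimal scalar predict/update loop. This is an illustrative sketch only; the paper's filter additionally predicts object location in 2D and associates multiple objects with tracks.

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Scalar Kalman filter smoothing noisy position measurements.
    q: process noise variance, r: measurement noise variance."""
    x, p = measurements[0], 1.0   # initial state estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        # Predict: the state is assumed locally constant; uncertainty grows.
        p = p + q
        # Update: blend the prediction with the new measurement.
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

A full tracker would use a vector state (position and velocity) with matrix covariance, but the predict/update structure is the same.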

Paper 19: Performance Evaluation of Different Mobile Ad-hoc Network Routing Protocols in Difficult Situations

Abstract: Performance evaluation of Mobile Ad-hoc Network (MANET) routing protocols is essential for selecting the appropriate protocol for a network, and many routing protocols and simulation tools have been proposed for this task. This paper introduces an overview of MANET routing protocols and evaluates MANET performance using three reactive protocols—Dynamic Source Routing (DSR), Ad-Hoc On-demand Distance Vector (AODV), and Dynamic MANET On-Demand (DYMO)—in three different scenarios. These scenarios are designed carefully in OMNET++ to mimic real situations. The first scenario evaluates performance as the number of nodes increases. In the second scenario, network performance is evaluated in the presence of obstacles. In the third scenario, a group of nodes is suddenly shut down during communication. The network evaluation is carried out in terms of packets received, end-to-end delay, transmission count (routing overhead), throughput, and packet ratio.

Author 1: Sultan Mohammed Alkahtani
Author 2: Fahd Alturki

Keywords: Ad-hoc network; performance evaluation; network simulation; MANET; DSR; AODV; DYMO; routing protocols; Omnet++

PDF

Paper 20: The Design and Implementation of Mobile Heart Monitoring Applications using Wearable Heart Rate Sensor

Abstract: Heart monitoring is important to avert catastrophic events caused by heart failure. Continuous real-time heart monitoring could prevent sudden death due to heart attack. Nevertheless, the major challenge of continuous heart monitoring in the traditional approach is the need to undertake regular medical check-ups at a hospital or clinic. Hence, the aim of this study is to develop a mobile app with which patients can monitor their heart rate (HR) in real time and detect abnormal HR whenever it occurs. Caregivers are notified when a patient is detected with an abnormal HR. The mobile app was developed for Android-based smartphones. A wearable HR sensor collects RR data and transmits it to the smartphone via a Bluetooth connection. A user acceptance test was conducted to gauge the intention and satisfaction level of prospective users. It shows that compatibility, perceived usefulness, perceived ease of use, trust, and behavioral intention to use all had high acceptance rates. The developed app is expected to provide a more plausible tool for monitoring HR personally, conveniently, and continuously, at any time and anywhere.

Author 1: Ummi Namirah Hashim
Author 2: Lizawati Salahuddin
Author 3: Raja Rina Raja Ikram
Author 4: Ummi Rabaah Hashim
Author 5: Ngo Hea Choon
Author 6: Mohd Hariz Naim Mohayat

Keywords: Personalized healthcare; heart monitoring; wearable sensor; mobile; android

PDF

Paper 21: A Survey on Junction Selection based Routing Protocols for VANETs

Abstract: Objectives: To compare significant position-based routing protocols based on their underlying techniques, such as junction selection mechanisms, that provide vehicle-to-vehicle communication in city scenarios. Background: The Vehicular Ad-hoc Network is the most significant offshoot of Mobile Ad-hoc Networks and is capable of organizing itself in an infrastructure-less environment. The network enables smart transportation that facilitates driving, in terms of traffic safety, by exchanging timely information in a proficient manner. Findings: The main features of vehicular ad-hoc networks in the city environment—high mobility, network segmentation, sporadic interconnections, and impediments—are the key challenges for the development of an effective routing protocol, and these features of the urban environment have a great impact on a routing protocol's performance. This study presents a brief survey of the most substantial position-based routing schemes designed for urban inter-vehicular communication scenarios. These protocols are presented with their operational techniques for exchanging messages between vehicles. A comparative analysis is also provided, based on various important factors such as intersection selection mechanisms, forwarding strategies, vehicular traffic density, local-maximum recovery methods, mobility of vehicular nodes, and secure message exchange. Application/Improvements: The outcomes of this paper motivate improvements to routing protocols in terms of security, accuracy, and reliability in vehicular ad-hoc networks. Furthermore, it can be employed as a foundation of references for identifying literature relevant to routing in vehicular communications.

Author 1: Irshad Ahmed Abbasi
Author 2: Elfatih Elmubarak Mustafa

Keywords: Position-based; inter-vehicular; urban scenario; algorithms; reliability

PDF

Paper 22: Design of a Rule-based Personal Finance Management System based on Financial Well-being

Abstract: Financial planning plays an important role in people’s lives. The recent COVID-19 outbreak caused sudden unemployment for many people across the globe, leaving them in financial crisis. Recent surveys indicate that financial matters continue to be the leading cause of stress for employees. Further, many millennials overspend and make unfortunate financial decisions due to their inability to manage their earnings, which prevents them from maintaining financial satisfaction. Financial well-being, as defined by the American Consumer Financial Protection Bureau (CFPB), is a state where one fully meets current and ongoing financial obligations, feels secure in one’s financial future, and is able to make choices that allow enjoyment of life. This work proposes a Personal Finance Management (PFM) system with a new architecture that aims to guide users toward the state of financial well-being, as defined by the CFPB. The proposed system contains a rule-based component that provides users with actionable advice to make informed spending decisions and achieve their financial goals.

Author 1: Alhanoof Althnian

Keywords: Artificial intelligence; rule-based; deductive reasoning; forward chaining; personal finance; financial well-being

PDF
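The forward-chaining inference that typically drives such a rule-based advisor (forward chaining appears in the paper's keywords) can be sketched as follows. This is a generic illustration with hypothetical finance rules, not the paper's actual rule base.

```python
def forward_chain(facts, rules):
    """Naive forward chaining: repeatedly fire any rule whose antecedents
    all hold until no new facts can be derived.
    Each rule is a pair (set_of_antecedents, consequent)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

# Hypothetical personal-finance rules, for illustration only.
rules = [
    ({"spending > income"}, "overspending"),
    ({"overspending", "no emergency fund"}, "advise: build emergency fund"),
]
```

Given the facts `{"spending > income", "no emergency fund"}`, both rules fire in turn and the advice fact is derived.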

Paper 23: Robust Control Approach of SISO Coupled Tank System

Abstract: This paper presents the design principles of a sliding mode controller implemented in a coupled tank system. The Sliding Mode Control (SMC) controller exhibits robust stability, which can overcome nonlinearities and reduce the disturbances and noise that occur in the coupled tank system. The work starts with mathematically modelling the coupled tank system using a second-order single-input single-output (SISO) technique. Then, the sliding mode controller design begins by deriving the sliding surface according to the second-order coupled tank system. The control variables in this system, C1 and C2, are manipulated to obtain the best performance of the SMC. From the simulations, the performance characteristics of the SMC are analysed and investigated. The output response is obtained by implementing the SMC on the plant and is compared with a proportional, integral, and derivative (PID) controller as a benchmark. The results show that the robust SMC has a better output response than the PID controller.

Author 1: Mohd Hafiz Jali
Author 2: Ameelia Ibrahim
Author 3: Rozaimi Ghazali
Author 4: Chong Chee Soon
Author 5: Ahmad Razif Muhammad

Keywords: Sliding Mode Control; PID controller; robustness; coupled tank system

PDF

Paper 24: Text Coherence Analysis based on Misspelling Oblivious Word Embeddings and Deep Neural Network

Abstract: Text coherence analysis is more challenging than other subfields of Natural Language Processing (NLP), such as text generation, translation, or text summarization. There are many text coherence methods in NLP, most of them graph-based or entity-based methods for short text documents. For long text documents, however, the existing methods yield low accuracy, which is the biggest challenge in text coherence analysis in both English and Bengali. This is because existing methods do not consider misspelled words in a sentence and therefore cannot accurately assess text coherence. In this paper, a text coherence analysis method is proposed based on the Misspelling Oblivious Word Embedding Model (MOEM) and a deep neural network. The MOEM model replaces all misspelled words with the correct words and captures the interaction between different sentences by calculating their matches using word embedding. A deep neural network architecture is then used to train and test the model. This study examines two datasets, one in Bengali and the other in English, to analyze text coherence based on sentence sequence activities and to evaluate the effectiveness of the model. In the Bengali dataset, 7121 Bengali text documents were used, of which 5696 (80%) were used for training and 1425 (20%) for testing. In the English dataset, of 7500 text documents, 6000 (80%) were used for training and 1500 (20%) for model evaluation. The efficiency of the proposed model is compared with existing text coherence analysis techniques. Experimental results show that the proposed model significantly improves automatic text coherence detection, with 98.1% accuracy in English and 89.67% in Bengali. Finally, the proposed model is compared with other existing text coherence models on both the English and Bengali datasets.

Author 1: Md. Anwar Hussen Wadud
Author 2: Md. Rashadul Hasan Rakib

Keywords: Coherence analysis; deep neural network; distributional representation; misspellings; NLP; word embedding

PDF

Paper 25: Comprehensive Multilayer Convolutional Neural Network for Plant Disease Detection

Abstract: Agriculture has a dominant role in the world’s economy. However, losses due to crop diseases and pests significantly reduce the contribution made by the agricultural sector. Plant diseases and pests recognized at an early stage can help limit the economic losses in agricultural production around the world. In this paper, a comprehensive multilayer convolutional neural network (CMCNN) is developed for plant disease detection that can analyze visible symptoms on a variety of leaf images: laboratory images with a plain background, complex images with real field conditions, and images of individual disease symptoms or spots. The model’s performance is evaluated on three public datasets: the Plant Village repository with whole-leaf images on a plain background, the Plant Village repository with complex backgrounds, and the Digipathos repository with images of lone lesions and spots. Hyperparameters such as the learning rate, dropout probability, and optimizer are fine-tuned so that the model can classify the various types of input leaf images. The overall classification accuracy of the model is 99.85% on laboratory images, 98.16% on real field-condition images, and 99.6% on images with individual disease symptoms. The proposed design is also compared with popular CNN architectures such as GoogLeNet, VGG16, VGG19, and ResNet50. The experimental results indicate that the suggested generic model is more robust in handling various types of leaf images and has better classification capability for plant disease detection. The obtained results suggest the favorable use of the proposed model in a decision support system to identify diseases in several plant species across a large range of leaf images.

Author 1: Radhika Bhagwat
Author 2: Yogesh Dandawate

Keywords: Crop diseases; plant disease detection; hyperparameters; deep learning; convolutional neural network

PDF

Paper 26: Personalized Book Recommendation System using Machine Learning Algorithm

Abstract: As the number of online books increases exponentially due to the COVID-19 pandemic, finding relevant books in a vast e-book space has become a tremendous challenge for online users. Personalized recommendation systems have emerged to conduct effective searches that mine related books based on user ratings and interests. Most existing systems are based on user ratings, using content-based and collaborative learning methods. A flaw in these systems is their rating technique, which counts users who have already unsubscribed from the service and no longer rate books. This paper proposes an effective system for recommending books to online users that clusters rated books and then finds books similar to a given book in order to suggest new ones. The proposed system uses K-means with a cosine distance function to measure distance and a cosine similarity function to find similarity between the book clusters. Sensitivity, specificity, and F-score were calculated for ten different datasets. The average specificity was higher than the sensitivity, which means that the classifier could remove boring books from the reader's list. In addition, a receiver operating characteristic curve was plotted to give a graphical view of the classifiers' accuracy. Most of the datasets were close to the ideal diagonal classifier line and far from the worst classifier line. The results suggest that recommendations based on a particular book are more effective than a user-based recommendation system.

Author 1: Dhiman Sarma
Author 2: Tanni Mittra
Author 3: Mohammad Shahadat Hossain

Keywords: Personalize book recommendation; recommendation system; clustering; machine learning

PDF
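The cosine distance and similarity measures named in the abstract above can be sketched in a few lines. This is an illustrative implementation of the standard definitions, not the authors' code; the vectors would be book feature or rating vectors.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1 means identical
    direction, 0 means orthogonal (no overlap)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def cosine_distance(a, b):
    """Distance form usable inside K-means-style clustering."""
    return 1.0 - cosine_similarity(a, b)
```

K-means would assign each book vector to the centroid with the smallest `cosine_distance`, and recommendations would rank books in the query book's cluster by `cosine_similarity`.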

Paper 27: Learning Management System based on Machine Learning: The Case Study of Ha'il University - KSA

Abstract: Online learning environments have become an established presence in higher education in the Kingdom of Saudi Arabia, especially in the wake of the Covid-19 pandemic. At present, supporting e-learning with interactive virtual campuses is a future aim in education. In order to solve the problems of interactivity and adaptability in the e-learning systems of Saudi universities, this paper proposes a module, based on digital learning, to be used in learning management systems to meet these challenges. The e-learning system should be intelligent and able to infer the specific characteristics (i.e., metadata) of a student by accessing their social media profiles.

Author 1: Mohamed Hédi Maâloul
Author 2: Younès Bahou

Keywords: Learning management systems; blackboard; machine learning; semi-supervised learning; personalization system; SPS System; user profile; social profile

PDF

Paper 28: Using IndoWordNet for Contextually Improved Machine Translation of Gujarati Idioms

Abstract: Gujarati is an Indo-Aryan language spoken by the Gujaratis, the people of the Indian state of Gujarat, and is one of the 22 official languages recognized by the Indian government. The Gujarati script was adapted from the Devanagari script. Approximately 3000 idioms are available in the Gujarati language. Machine translation of idioms is a challenging task because contextual information is important for translating a particular idiom. For translating Gujarati idioms into English or any other language, surrounding contextual words are considered in cases where the meaning of the idiom is ambiguous. This paper experiments with the IndoWordNet for the Gujarati language to obtain synonyms of surrounding contextual words. It uses an n-gram model and experiments with various window sizes surrounding the particular idiom, as well as with the role of stop-words, for correct context identification. The paper demonstrates the usefulness of the context window in disambiguating idioms with multiple meanings. The results of this research could be consumed by any destination-independent machine translation system for the Gujarati language.

Author 1: Jatin C. Modh
Author 2: Jatinderkumar R. Saini

Keywords: Contextual information; Gujarati; idiom; IndoWordNet; Machine Translation System (MTS); n-gram model

PDF
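Context-window extraction around an idiom, with optional removal of stop-words, can be sketched as follows. This is an illustrative example with a small hypothetical English stop-word list; the paper works with Gujarati text and IndoWordNet synonyms.

```python
STOP_WORDS = {"the", "a", "an", "of", "in"}  # illustrative stop-word list

def context_window(tokens, idiom_start, idiom_len, size, drop_stop_words=True):
    """Return up to `size` contextual tokens on each side of an idiom
    occupying tokens[idiom_start : idiom_start + idiom_len]."""
    left = tokens[max(0, idiom_start - size):idiom_start]
    right = tokens[idiom_start + idiom_len:idiom_start + idiom_len + size]
    window = left + right
    if drop_stop_words:
        window = [t for t in window if t.lower() not in STOP_WORDS]
    return window
```

Varying `size` and toggling `drop_stop_words` reproduces the kind of window-size and stop-word experiments the abstract describes; each surviving context word could then be expanded with its synonyms for matching.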

Paper 29: “iSAY”: Blockchain-based Intelligent Polling System for Legislative Assistance

Abstract: “iSAY” is a blockchain-based polling system created for legislative assistance. Sri Lanka is a democratic country: it follows a representative democracy, and voters in Sri Lanka vote for their preferred government based on its election mandate. However, governments implement legislative decisions that are not stated in the election mandate. People do not get a chance to state their opinion on such legislative matters, and the government does not know whether people support them. To solve this issue, the authors propose a blockchain-based intelligent polling application for legislative assistance. “iSay” is an application in which blockchain technology is combined with machine learning to add value to public opinion. The government can create a poll about a legislative decision, and people can state their opinion, which can then be discussed further in the legislature. As a significant change to blockchain-based e-voting solutions, this paper proposes a novel feature whereby users can add their own ideas to a relevant poll. Using machine learning algorithms, all these user ideas are classified and analyzed before being presented to the government. Through this research, it is expected to deploy scalable elections among the general public and to gather their votes and ideas about specific legislation, generating an overview of public opinion on legislative decisions.

Author 1: Deshan Wattegama
Author 2: Pasindu S.Silva
Author 3: Chamika R. Jayathilake
Author 4: Kalana Elapatha
Author 5: Kavinga Abeywardena
Author 6: N. Kuruwitaarachchi

Keywords: Blockchain; machine learning; distributed systems; e-voting; legislative assistance; liquid democracy; natural language processing

PDF

Paper 30: Modeling the Estimation Errors of Visual-based Systems Developed for Vehicle Speed Measurement

Abstract: This paper aims to model the relationship between the error of visual-based systems developed for vehicle speed estimation (the dependent variable) and each of the detection region length, the camera angle, and the volume-to-capacity ratio (V/C), as independent variables. Simulation software (VISSIM) is used to generate a set of video clips of predefined traffic based on different values of the independent variables. These videos are analyzed with a video-based detection and tracking model (VBDATM) developed in 2015. Errors are expressed as the difference between the actual speed generated by VISSIM and the speed computed by the VBDATM, divided by the actual speed. The results of a forward stepwise regression analysis show that the V/C ratio does not affect the accuracy of the estimate, and that there are only weak relationships between the estimation error and each of the camera angle and the detection region length.

Author 1: Abdulrazzaq Jawish Alkherret
Author 2: Musab AbuAddous

Keywords: Intelligent transportation systems; image processing; vehicle detection; vehicle tracking; speed estimation; traffic simulation; linear regression analysis

PDF
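The error measure described in the abstract above amounts to a relative error. A minimal sketch of that computation (illustrative only, with hypothetical helper names):

```python
def relative_speed_error(actual, estimated):
    """Relative estimation error: (actual - estimated) / actual,
    as described in the abstract."""
    return (actual - estimated) / actual

def mean_absolute_relative_error(actual_speeds, estimated_speeds):
    """Aggregate the per-observation errors, e.g. for regression input."""
    errors = [abs(relative_speed_error(a, e))
              for a, e in zip(actual_speeds, estimated_speeds)]
    return sum(errors) / len(errors)
```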

Paper 31: Characterization of Quaternary Deposits in the Bou Ahmed Coastal Plain (Chefchaouen, Morocco): Contribution of Electrical Prospecting

Abstract: The Bou Ahmed plain, which is part of the internal zone of the Rif, is located along the Mediterranean coast, 30 kilometers from the town of Oued Laou. This basin is made up of a Quaternary filling formed mainly by detrital fluvial facies: channeled conglomerates surmounted by fluvial sand interlayered with pebbles. These facies may constitute new potential aquifer formations. Therefore, the main goal of this study is to build a three-dimensional lithostratigraphic model to identify the hydrogeological units and the reservoir geometry of the Bou Ahmed plain. To achieve this goal, we created a database of Vertical Electrical Sounding surveys and drilling data integrated into a Geographic Information System. This database allowed us to establish a three-dimensional model of the basin bottom, along with geoelectric cross-sections, isopach maps, and isoresistivity maps of the new potential aquifer units. This approach allowed us to explain the deposition modalities of the Quaternary deposits of the Bou Ahmed plain and to identify potential hydrogeological reservoirs. These results will also be used to develop a hydrodynamic model of the Bou Ahmed aquifer based on the MODFLOW code.

Author 1: Yassyr Draoui
Author 2: Fouad Lahlou
Author 3: Imane Al Mazini
Author 4: Jamal Chao
Author 5: Mohamed Jalal El Hamidi

Keywords: Bou Ahmed plain; hydrogeological units; reservoir geometry; vertical electrical sounding surveys; geographic information system; quaternary deposits

PDF

Paper 32: BiDETS: Binary Differential Evolutionary based Text Summarization

Abstract: In extraction-based automatic text summarization (ATS), feature scoring is the cornerstone of the summarization process, since it is used for selecting the candidate summary sentences. Handling all features equally leads to generating poor-quality summaries. Feature Weighting (FW) is an important approach for weighting the feature scores based on their importance in the current context. Accordingly, some ATS researchers have proposed evolutionary machine learning methods, such as Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA), to extract superior weights for their assigned features; the extracted weights are then used to tune the scored features in order to generate a highly qualified summary. In this paper, the Differential Evolution (DE) algorithm is proposed as a feature weighting machine learning method for extraction-based ATS. To enable the DE to represent and control the assigned features in a binary dimension space, it was modulated into a binary-coded format. Simple mathematical calculation features were selected from the literature and employed in this study. The sentences in the documents are first clustered according to a multi-objective clustering concept. The DE approach simultaneously optimizes two objective functions: measuring the compactness of the sentence clusters and separating them. In order to automatically detect the number of sentence clusters contained in a document, representative sentences from the various clusters are chosen with certain sentence scoring features to produce the summary. The method was tested and trained on the DUC2002 dataset to learn the weight of each feature. For comparative and competitive findings, the proposed DE method was compared with the evolutionary methods PSO and GA, as well as against the best and worst benchmark systems of DUC 2002. The BiDETS model scored 49% in ROUGE-1, close to human performance (52%); 26% in ROUGE-2, exceeding human performance (23%); and 45% in ROUGE-L, close to human performance (48%). These results show that the proposed method outperformed all other methods in terms of F-measure using the ROUGE evaluation tool.

Author 1: Hani Moetque Aljahdali
Author 2: Ahmed Hamza Osman Ahmed
Author 3: Albaraa Abuobieda

Keywords: Differential evolution; text summarization; PSO; GA; evolutionary algorithms; optimization techniques; feature weighting; ROUGE; DUC

PDF
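One common way to binarize the classic DE/rand/1/bin step is to treat the vector difference as bitwise disagreement between two donor vectors. The sketch below illustrates that idea; it is a generic binary-DE variant, not necessarily the exact binary coding used by BiDETS.

```python
import random

def binary_de_trial(target, a, b, c, f=0.8, cr=0.9, rng=random):
    """Generate one trial vector in a binary-coded DE step.
    Mutation: flip a bit of base vector `a` wherever donors `b` and `c`
    disagree, with probability `f` (the binary analogue of a + F*(b - c)).
    Crossover: inherit the mutant bit with probability `cr`, otherwise
    keep the target bit; one position always takes the mutant bit."""
    n = len(target)
    mutant = [ai ^ 1 if (bi != ci and rng.random() < f) else ai
              for ai, bi, ci in zip(a, b, c)]
    j = rng.randrange(n)  # guarantee at least one mutant gene survives
    return [mutant[i] if (i == j or rng.random() < cr) else target[i]
            for i in range(n)]
```

In a feature-weighting setting, each bit pattern encodes which features (or which quantized weight bits) are active; the trial vector replaces the target if its summary scores better under the fitness function.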

Paper 33: Semi-Direct Routing Approach for Mobile IP

Abstract: The Mobile IP (MIP) protocol is used to maintain device connectivity while the device moves between networks, through a permanent IP address and a temporary care-of address (CoA). There are two techniques for implementing MIP: direct and indirect. The indirect technique is commonly used in industry due to its stability when the mobile host (MH) frequently moves from one network to another. However, it suffers from delays and from enlargement of the packet size. The direct technique is more sensitive to frequent mobility, yet it requires less transformation overhead under stable mobility. Accordingly, to overcome the disadvantages of both techniques, a semi-direct technique is proposed in this paper. The proposed technique is implemented by minimizing the interference of the home agent (HA), using a push notification to the correspondent node (CN) concerning any modification of the moving MH's CoA. Simulations of the proposed, indirect, and direct routing techniques showed the advantages of the semi-direct routing technique over the conventional approaches: the semi-direct approach outperformed them in terms of delay and overhead with a frequently moving MH.

Author 1: Basil Al-Kasasbeh

Keywords: Mobile IP; direct routing; indirect routing; care-of address; home agent; foreign agent

PDF

Paper 34: An Automated Convolutional Neural Network Based Approach for Paddy Leaf Disease Detection

Abstract: Bangladesh and India are significant paddy-cultivating countries. Paddy is the key crop produced in Bangladesh, where agriculture has contributed about 15.08 percent of Gross Domestic Product (GDP) over the last 11 years. Unfortunately, the farmers who work so hard to grow this crop face huge losses from crop damage caused by various paddy diseases. There are more than 30 known paddy leaf diseases, of which about 7-8 are quite common in Bangladesh. Diseases such as Brown Spot Disease, Blast Disease, and Bacterial Leaf Blight are well known and among the most damaging. These diseases hamper the growth and productivity of paddy plants, which can lead to great ecological and economic losses. If these diseases can be detected at an early stage, with high accuracy and in a short time, damage to the crops can be greatly reduced and farmers' losses prevented. This paper works on four disease classes and one healthy-leaf class of paddy. Its main goal is to provide the best results for paddy leaf disease detection through an automated detection approach with deep learning CNN models, achieving the highest possible accuracy instead of the traditional, lengthy, manual disease detection process, whose accuracy is also greatly questionable. Four models (VGG-19, Inception-ResNet-V2, ResNet-101, and Xception) were analyzed, with Inception-ResNet-V2 achieving the best accuracy of 92.68%.

Author 1: Md. Ashiqul Islam
Author 2: Md. Nymur Rahman Shuvo
Author 3: Muhammad Shamsojjaman
Author 4: Shazid Hasan
Author 5: Md. Shahadat Hossain
Author 6: Tania Khatun

Keywords: Paddy leaf disease; deep convolutional neural network (CNN); transfer learning; VGG-19; ResNet-101; Inception-ResNet-V2; Xception

PDF

Paper 35: Quantification of Surface Water-Groundwater Exchanges by GIS Coupled with Experimental Gauging in an Alluvial Environment

Abstract: Surface water and groundwater are two interrelated components, where the state of one automatically affects the quantity and quality of the other. Their exchange flows are strongly influenced by mechanisms such as permeability, the lithological nature of the soil, and the landscape, in addition to the difference between the hydrometric height of the river and the piezometric level of the groundwater. The study area, the Bou Ahmed plain, is vulnerable to intensive pumping, mainly in the coastal fringe. The increase in water demand due to demographic development is accompanied by pressure on groundwater abstraction, which causes significant drops in the groundwater level. The main objectives of this study are to develop a Geographic Information System (GIS) database and mathematical models to analyze the spatial and temporal hydrogeological characteristics and the hydrodynamic functioning of groundwater flow in the Bou Ahmed aquifer. The present work exhibits the characteristics of river-groundwater exchanges in an alluvial plain. We quantified the flows exchanged between a river and its groundwater using GIS tools, together with parameter measurements obtained by differential gauging carried out in the field and hydrogeological borehole data. These quantified flows, moreover, enabled us to estimate the uncertainties related to the use of the GIS method. The results will also be used to support a set of groundwater simulations of the Bou Ahmed aquifer based on the MODFLOW code. These models, together with the developed GIS, will help to better plan, manage, and control the groundwater resources of this aquifer.

Author 1: Yassyr Draoui
Author 2: Fouad Lahlou
Author 3: Jamal Chao
Author 4: Lhoussaine El Mezouary
Author 5: Imane Al Mazini
Author 6: Mohamed Jalal El Hamidi

Keywords: Surface water and groundwater; river-groundwater exchanges; geographic information system; differential gauging

PDF

Paper 36: Simplified Framework for Benchmarking Standard Downlink Scheduler over Long Term Evolution

Abstract: Downlink scheduling is one of the essential operations for improving quality of service in Long Term Evolution (LTE). With an increasing user base, resource provisioning also becomes an extensive challenge. A review of existing approaches shows significant room for improvement in this regard; hence, this manuscript presents a benchmarking model for evaluating the Best-Channel Quality Indicator (Best-CQI), Round Robin, and Hybrid Automatic Repeat Request (HARQ) schedulers. The outcome shows that HARQ scheduling offers better performance, with higher throughput, higher fairness, and lower delay across different test cases.

Author 1: Srinivasa R K
Author 2: Hemanth Kumar A.R

Keywords: eNodeB; Scheduler; HARQ; Best-CQI; Round Robin

PDF
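Of the three schedulers compared above, Round Robin is the simplest to state: it cycles through users regardless of channel quality, trading throughput for fairness. A minimal sketch of that allocation logic (illustrative only, abstracting away LTE subframe timing):

```python
from collections import deque

def round_robin_schedule(users, num_blocks):
    """Assign num_blocks resource blocks to users cyclically, ignoring
    channel quality, so every user receives a near-equal share."""
    queue = deque(users)
    allocation = {u: 0 for u in users}
    for _ in range(num_blocks):
        u = queue.popleft()   # next user in cyclic order
        allocation[u] += 1
        queue.append(u)       # back of the queue for the next round
    return allocation
```

A Best-CQI scheduler would instead pick, for each block, the user reporting the best channel quality, which raises throughput but can starve users at the cell edge; this trade-off is what the benchmarking in the paper quantifies.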

Paper 37: The Cognizance of Green Computing Concept and Practices among Secondary School Students: A Preliminary Study

Abstract: The use of information and communication technology (ICT) is growing and has become a norm in society. However, the increased use of ICT facilities in developing countries has contributed to higher energy use and leads to environmental pollution. This study explores the extent of awareness among the younger generation of green computing concepts and practices. A total of 94 secondary school students were sampled across Selangor state. The data were gathered using a questionnaire comprising 20 items on the harmful environmental effects of using computers and communication gadgets and on awareness of green computing concepts and practices. The findings reveal that secondary school students are still not aware of the green computing concept. It is observed that 54.35% of students may not realize that computers and communication devices can be disposed of in an eco-safe manner. Furthermore, 61.96% of students do not realize that computer hardware can be recycled, and 75% of them have no experience in disposing of their computers. Surprisingly, they mostly practice green computing when it comes to reducing energy consumption. This study contributes to determining students' current level of green computing awareness in sustaining the environment. In conclusion, students need to be educated on utilizing ICT resources and practicing green computing to boost environmental sustainability.

Author 1: Shafinah Kamarudin
Author 2: Siti Munirah Mohd
Author 3: Nurul Nadwa Zulkifli
Author 4: Rosli Ismail
Author 5: Ribka Alan
Author 6: Philip Lepun
Author 7: Muhd Khaizer Omar

Keywords: Awareness; energy consumption; green computing; environment pollution; secondary students

PDF

Paper 38: Recognizing Activities of Daily Living using 1D Convolutional Neural Networks for Efficient Smart Homes

Abstract: Human activity recognition is considered a challenging task in sensor-based monitoring systems. In ambient intelligent environments, such as smart homes, collecting data from ambient sensors is useful for recognizing activities of daily living, which can then be used to provide assistance to inhabitants. Activities of daily living are composed of complex multivariable time series data that has high dimensionality, is huge in size, and is updated constantly. Thus, developing methods for analyzing time series data to extract meaningful features and specific characteristics would help solve the problem of human activity recognition. Based on the noticeable success of deep learning in the field of time series classification, we developed a model called a deep one-dimensional convolutional neural network (Deep 1d-CNN) for recognizing activities of daily living in smart homes. Our model contains several one-dimensional convolution layers coupled with a max-pooling technique to learn the internal representation of time series data and automatically generate very deep features for recognizing different activity types. For the performance evaluation, we tested our deep model on a new real-life dataset, ContextAct@A4H, and the results showed that our model achieved a high F1 score (0.90). We also extended our study to show the potential energy saving in smart homes through recognizing activities of daily living. We built a recommendation system based on the activities recognized by our deep model to detect the devices that are wasting energy, and to recommend that the user execute energy optimization actions. The experiment indicated that recognizing activities of daily living can result in energy savings of around 50%.

Author 1: Sumaya Alghamdi
Author 2: Etimad Fadel
Author 3: Nahid Alowidi

Keywords: Deep learning; one-dimensional convolutional neural networks; time-series classification; Activities of Daily Living (ADLs); smart home; recommendation system

PDF
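
The building block such a network stacks, a 1-D convolution followed by max pooling, can be sketched in plain Python (an illustrative toy, not the authors' Deep 1d-CNN):

```python
def conv1d(x, kernel):
    """Valid 1-D cross-correlation of a signal with a small kernel."""
    k = len(kernel)
    return [sum(xi * wi for xi, wi in zip(x[i:i + k], kernel))
            for i in range(len(x) - k + 1)]

def max_pool1d(x, size):
    """Non-overlapping max pooling: keep the largest value per window."""
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, size)]
```

Stacking several such conv/pool pairs is what lets the model turn long raw sensor sequences into short, discriminative feature vectors.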

Paper 39: A Cryptocurrency-based E-mail System for SPAM Control

Abstract: Sending bulk e-mail is commercially cheap and technically easy, making it profitable for spammers even if only a tiny percentage of recipients fall for the attacks or turn into customers. Some researchers have proposed making e-mail paid, so that sending bulk e-mail becomes expensive and spamming unprofitable unless many victims respond, while the small sending fee remains negligible for legitimate e-mail users. Making e-mail paid is challenging if implemented using a conventional payment system or a newly created cryptocurrency: traditional payment systems are difficult to integrate with e-mail systems, and a new cryptocurrency would face challenges in adoption by users at the required scale. This work proposes using cryptocurrency payments to make e-mail senders pay for sending an e-mail without creating a new cryptocurrency or a new blockchain. In the proposed system, the recipients of the e-mail can collect the payments and use the collected revenues to send e-mail messages or even sell them on an exchange. The proposed solution has been implemented using Ropsten, an Ethereum test network, and tested using enhanced e-mail client and server software.

Author 1: Shafiya Afzal Sheikh
Author 2: M. Tariq Banday

Keywords: E-mail; SPAM; blockchain; cryptocurrency; Ethereum

PDF

Paper 40: Discovery Engine for Finding Hidden Connections in Prose Comprehension from References

Abstract: Reading is one of the essential practices of modern human learning. Comprehending prose from the text alone is particularly challenging, as the comprehension of prose generally requires the use of external knowledge or references. Although the processes of reading comprehension have been widely studied in psychology, no algorithm-level models of comprehension have yet been developed. This paper proposes a comprehension engine consisting of knowledge induction, which connects the knowledge space by augmenting associations within it. The connections are achieved through automatic incremental reading of external references and the capturing of high-familiarity knowledge associations between prose concepts. An Ontology Engine is used to find lexical knowledge associations among concept pairs, with the objective of obtaining a knowledge-space graph with a single giant component to establish a base model for prose comprehension. The comprehension engine is evaluated through experiments with various selected prose texts. Akin to human readers, it can mine reference texts from modern knowledge corpora such as Wikipedia and WordNet. The results demonstrate the potential of the comprehension engine to enhance the quality of reading comprehension while also reducing reading time. Compared with existing works, this comprehension engine is considered the first algorithm-level model of comprehension.

Author 1: Amal Babour
Author 2: Javed I. Khan
Author 3: Fatema Nafa
Author 4: Kawther Saeedi
Author 5: Dimah Alahmadi

Keywords: Knowledge graph; ontology engine; text comprehension; text summarization; Wikipedia; WordNet

PDF

Paper 41: Impact of the Mining Activity on the Water Quality in Peru Applying the Fuzzy Logic with the Grey Clustering Method

Abstract: Mining activity in the department of Junín, Peru, is intense owing to the great mining-metallic potential of the area. The Yauli and Andaychagua rivers, located in Yauli Province in Junín, receive a large volume of discharges, causing deterioration of the quality of their water. To evaluate this quality in an integral way, fuzzy logic is applied with the Grey Clustering methodology, defining center-point triangular whitening weight functions (CTWF) and taking as grey classes the Environmental Quality Standards for water (ECA-Water), Category 3, which were modified for research purposes. Four monitoring points were evaluated: upstream (PY-01) and downstream (PY-02) of the Yauli River, and upstream (PA-01) and downstream (PA-02) of the Andaychagua River. The analysis determined that the water quality index at PY-01 is 0.7302, with 0.8795 in the dry season and 0.5980 in the wet season; at PY-02, values of 0.5448, with 0.6448 in the dry season and 0.5628 in the wet season, were obtained. At PA-01 the index is 0.8213, with 0.8691 in the dry season and 0.7902 in the wet season; at PA-02, values of 0.8385, with 0.8827 in the dry season and 0.8118 in the wet season, were obtained. It is concluded that the water quality is good, decreasing in wet seasons owing to the influence of rains on the contact waters. The research integrates the parameters considered in the ECA-Water with other international standards, allowing a more precise evaluation of the quality status of the Yauli and Andaychagua rivers after they receive the effluents generated by mining activity. This benefits the relevant authorities in decision making and provides a methodology that improves the analysis of the results obtained from the specific parameters evaluated in environmental monitoring.

Author 1: Alexi Delgado
Author 2: Anabel Fernandez
Author 3: Brigitte Chirinos
Author 4: Gabriel Barboza
Author 5: Enrique Lee Huamaní

Keywords: Mining activity; water quality; grey clustering

PDF
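
The center-point triangular whitening weight function (CTWF) at the heart of the grey clustering step assigns each measured value a membership in a grey class. A minimal sketch (the class boundaries in the test are hypothetical, not the modified ECA-Water thresholds):

```python
def ctwf(x, lo, center, hi):
    """Center-point triangular whitening weight: 1 at `center`, falling
    linearly to 0 at the neighbouring class centers `lo` and `hi`."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= center:
        return (x - lo) / (center - lo)
    return (hi - x) / (hi - center)
```

A sample's clustering coefficient for each grey class is then a weighted sum of such memberships over all parameters, and the class with the largest coefficient is assigned.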

Paper 42: The Role of Electronic Means in Enhancing the Intellectual Security among Students at the University of Jordan

Abstract: The study aims to identify the role of electronic means in enhancing intellectual security among students at the University of Jordan. To achieve this objective, a study instrument was developed, and the study sample consisted of 525 male and female students. The results show that the university students' assessment of the role of electronic means in enhancing intellectual security has an arithmetic mean of 3.07 and a standard deviation of 1.128, a score considered medium. In light of these results, the researchers recommend employing electronic means to activate intellectual security among university students in Jordan.

Author 1: Mohammad Salim Al-Zboun
Author 2: Mamon Salim Al-Zboun
Author 3: Hussam N. Fakhouri

Keywords: Electronic means; intellectual security; enhancement

PDF

Paper 43: Enhancement of 3D Seismic Images using Image Fusion Techniques

Abstract: Seismic images are data collected by sending seismic waves into the earth's subsurface and recording the reflections, providing subsurface structural information. Seismic attributes are quantities derived from seismic data that provide complementary information. Enhancing seismic images by fusing them with seismic attributes improves subsurface visualization and reduces processing time. In seismic data interpretation, fusion techniques have been used to enhance the resolution and reduce the noise of a single seismic attribute. In this paper, we investigate the enhancement of 3D seismic images using image fusion techniques and neural networks to combine seismic attributes. The paper evaluates the feasibility of using image fusion models pretrained on specific image fusion tasks; these models achieved the best results on their respective tasks and are tested here for seismic image fusion. The experiments showed that image fusion techniques are capable of combining up to three seismic attributes without distortion; future studies could increase this number. This is the first study to use models pretrained on other types of images for seismic image fusion, and the results are promising.

Author 1: Abrar Alotaibi
Author 2: Mai Fadel
Author 3: Amani Jamal
Author 4: Ghadah Aldabbagh

Keywords: Image fusion; seismic image; seismic attribute; neural networks

PDF

Paper 44: Multi-beam Antenna Array Operating Over Switch On/Off Element Condition

Abstract: This work presents the design of a linear multi-beam antenna. The design procedure focuses on the possibility of switching off part of the antenna array elements in active antenna systems in order to preserve resources (power and heat dissipation). The behavior of the original multi-beam antenna design is investigated with respect to the radiation-pattern alteration caused by the switched-off elements. Choosing to switch antenna elements on or off requires less computational effort from the algorithms incorporated in the active antenna system. Designing the array beams using progressive phase shifts permits beam orthogonality, which is valuable when multiple beams are used. Turning off part of the antenna elements inevitably changes the beam orthogonality conditions; despite this, the case presented in this paper shows beam-space discrimination better than 10 dB. To rank the behavior of the modified antenna with the turned-off elements, both Euclidean and Hausdorff distances are used to measure the changes in the modified array's performance. The obtained solutions show the applicability of binary operation on an existing antenna array, and the metrics shown here can be used effectively as ranking criteria.

Author 1: Julian Imami
Author 2: Elson Agastra
Author 3: Aleksandër Biberaj

Keywords: Multi-beam; phased antenna array; Hausdorff distance; Woodward-Lawson

PDF

Paper 45: A Survey on Image Encryption using Chaos-based Techniques

Abstract: Encryption methods such as AES (Advanced Encryption Standard) and DES (Data Encryption Standard) are not well suited to image encryption, because images contain a huge amount of redundant data, there is a high correlation between neighboring pixels, and image sizes are very large. Chaos-based techniques have properties suitable for image encryption, including sensitivity to initial conditions, pseudorandomness, ergodicity, and density of periodic orbits. In this paper, a survey of image encryption using chaotic maps such as the logistic map, the piecewise linear chaotic map (PWLCM), and the tent map is carried out in order to choose the best map for image encryption. Image encryption with the different chaotic maps is compared using parameters such as key space and correlation analysis.

Author 1: Veena G
Author 2: Ramakrishna M

Keywords: Chaos theory; image encryption; logistic map; PWLCM; tent map

PDF
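
To illustrate why chaotic maps suit this task: a logistic-map keystream is cheap to generate, extremely sensitive to the key (x0, r), and reversible by XOR. A toy sketch (not any particular scheme surveyed in the paper, and far from a complete cipher):

```python
def logistic_keystream(x0, r, n):
    """Iterate the logistic map x -> r*x*(1-x) and quantize each state
    to a byte, producing n pseudo-random keystream bytes."""
    x, stream = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)
    return stream

def xor_pixels(pixels, keystream):
    """XOR each pixel byte with the keystream; applying twice decrypts."""
    return [p ^ k for p, k in zip(pixels, keystream)]
```

Real chaos-based schemes add a permutation (pixel shuffling) stage on top of such diffusion, which is what the surveyed correlation analysis evaluates.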

Paper 46: Classification of Arabic-Speaking Website Pages with Unscrupulous Intentions and Questionable Language

Abstract: This study aims to put forward a comprehensive and detailed classification system to categorize Arabic-speaking website pages with unscrupulous intentions and questionable language. The methodology is based on a quantitative approach, using supervised algorithms to build a classification model from manually categorized data. The model is constructed from quantitative information extracted by the Posit and SAFAR textual analysis frameworks, and operates on 58 features combining Posit n-grams with the morphological SAFAR V2 POS tools. The results of this study revealed that the best performance, reaching 94% precision, was achieved by combining Posit + SAFAR + 18 Posit + SAFAR n-gram attributes. Moreover, the most reliable results were obtained with a Random Forest classification algorithm using regression. The research recommends further work on this topic using new algorithms and techniques.

Author 1: Haya Mesfer Alshahrani

Keywords: Extremism; textual analysis; classification; Posit; SAFAR

PDF

Paper 47: Cyber Situation Awareness Perception Model for Computer Network

Abstract: With the increase in cyber threats, computer network security has raised many issues in various companies. To guard against these threats, a formidable Intrusion Detection System (IDS) is needed. Various Machine Learning (ML) algorithms, such as Artificial Neural Networks (ANN), Decision Trees (DT), Support Vector Machines (SVM) and Naïve Bayes, have been used for threat detection. In light of novel threats, a combination of tools is needed to enhance the accuracy of intrusion detection in computer networks, because intruders are gaining ground in the cyber world and the side effects on organizations cannot be quantified. The aim of this work is to provide an enhanced model for the detection of threats on computer networks. A combination of DT and ANN is proposed to predict threats accurately; with this model, a network administrator can rest assured to some extent based on the model's predictions. Two different supervised machine learning algorithms were hybridized in this research. The NSL-KDD dataset was deployed for the simulation process in the WEKA environment. The proposed model achieved 0.984 precision, 0.982 sensitivity and 0.987 accuracy.

Author 1: Olofintuyi Sunday Samuel

Keywords: Situation awareness; intrusion detection system; artificial neural network based decision tree; decision tree; classification

PDF

Paper 48: Visualization of Arabic Entities in Online Social Media using Machine Learning

Abstract: In recent years, the use of social media and the amount of exchangeable data have increased considerably. This increase makes mining, analyzing and visualizing relevant information a challenging task. This research work assesses, categorizes, and analyzes Arabic entities on social media selected by users at certain time intervals. To accomplish this aim, the authors built a highly efficient classification model that classifies entities into three categories: person, location, and organization. The developed model takes an entity and a specific time, collects all the Twitter posts that refer to the entity at that time, and then classifies and visualizes the entity through three methods. It starts by classifying the entity through a corpus model that depends on a customized corpus. If the entity is not classified by that model, it is sent to an indicators model, which uses pre-indicators or post-indicators for classification. Finally, the entity is passed to a gazetteer model, which searches for the entity in three gazetteers (person, location, and organization) and accordingly determines the number of times the entity reference is repeated. This work allows scholars and researchers in different fields to visualize the frequency of entities referenced by a community, and to compare how references to entities change over time. The experimental results show that the accuracy of the developed model in classifying tweets is nearly 90%.

Author 1: Khowla Mohammed Alyamani
Author 2: Abdul Khader Jilani Saudagar

Keywords: Machine learning; classification; visualization; Arabic entities; social media

PDF
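
The three-stage cascade described in the abstract (corpus lookup, then indicator words adjacent to the entity, then gazetteer membership) can be sketched as follows (the sample corpus, indicators and gazetteers in the test are invented for illustration):

```python
def classify_entity(entity, context, corpus, indicators, gazetteers):
    """Three-stage cascade: (1) corpus lookup, (2) pre/post indicator
    words around the entity in its context, (3) gazetteer membership.
    Returns a category string or 'unknown'."""
    # Stage 1: customized corpus of already-labelled entities.
    if entity in corpus:
        return corpus[entity]
    # Stage 2: indicator words immediately before or after the entity.
    words = context.split()
    if entity in words:
        i = words.index(entity)
        neighbours = words[max(0, i - 1):i] + words[i + 1:i + 2]
        for w in neighbours:
            if w in indicators:
                return indicators[w]
    # Stage 3: person / location / organization gazetteers.
    for category, names in gazetteers.items():
        if entity in names:
            return category
    return "unknown"
```

Each stage only fires when the cheaper one before it fails, which is what keeps the cascade efficient on large tweet volumes.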

Paper 49: Systematic Review of Methodologies for the Development of Embedded Systems

Abstract: Embedded systems encompass software and hardware components developed in parallel. These systems have been the focus of many scholars, who have emphasized development issues related to embedded systems and proposed different approaches to facilitate the development process. The aim of this work is to identify desirable characteristics of existing development methodologies, which provide a good foundation for the development of new methodologies. For that purpose, a systematic mapping methodology was applied to the area of embedded systems, resulting in a classification scheme graphically represented by a multilayer conceptual network. Afterwards, the most significant clusters were identified using the k-means algorithm and the squared Euclidean distance. Overall, the results provide guidelines for further research aiming to propose a holistic approach to the development of a special case of embedded systems.

Author 1: Kristina Blaškovic
Author 2: Sanja Candrlic
Author 3: Alen Jakupovic

Keywords: Embedded systems; development; methodology; multilayer conceptual network; cluster analysis; k-means algorithm

PDF
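
The clustering step named in the abstract, k-means with the squared Euclidean distance, can be sketched as below (initial centroids are fixed for reproducibility; the paper's feature space is not reproduced):

```python
def squared_euclidean(a, b):
    """Squared Euclidean distance between two equal-length point tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, centroids, iterations=10):
    """Lloyd's k-means: alternately assign points to the nearest centroid
    (by squared Euclidean distance) and move centroids to cluster means."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            idx = min(range(len(centroids)),
                      key=lambda i: squared_euclidean(p, centroids[i]))
            clusters[idx].append(p)
        centroids = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl
                     else centroids[i] for i, cl in enumerate(clusters)]
    return centroids, clusters
```

Minimizing the squared distance (rather than the plain distance) is what makes the cluster mean the optimal centroid update.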

Paper 50: B-droid: A Static Taint Analysis Framework for Android Applications

Abstract: Android is currently the most popular smartphone operating system in use, with its success attributed to the large number of applications available from the Google Play Store. However, these applications raise issues relating to the storage of the user’s sensitive data, including contacts, location, and the phone’s unique identifier (IMEI). Using these applications therefore risks exfiltration of this data, including unauthorized tracking of users’ behavior and violation of their privacy. Sensitive data leaks are currently detected with taint analysis approaches. This paper addresses these issues by proposing a new static taint analysis framework specifically for Android platforms, termed “B-Droid”. B-Droid is based on static taint analysis using a large set of sources and sinks, applied side by side with fuzz testing, to detect privacy leaks, whether malicious or unintentional, by analysing the behavior of Applications Under Test (AUTs). This has the potential to offer improved precision in comparison to earlier approaches. To ensure the quality of our analysis, we undertook an evaluation testing a variety of Android applications installed on a mobile device after filtering according to the relevant permissions. We found that B-Droid efficiently detected five of the most prevalent commercial spyware applications on the market, issuing an immediate warning so that users can decide not to continue with the AUTs. This paper provides a detailed analysis of this method, along with its implementation and results.

Author 1: Rehab Almotairy
Author 2: Yassine Daadaa

Keywords: Static analysis; taint analysis; fuzz testing; android applications; mobile malwares; data flow analysis

PDF

Paper 51: HoloLearn: An Interactive Educational System

Abstract: The HoloLearn project is a sophisticated interactive educational system that attempts to simplify the educational process, mainly in the field of medicine, through the use of hologram technology. Hologram technology is used in conjunction with user interaction to take the educational process to a completely new level, providing students with a different learning experience. The system is dedicated mainly to medical students, as they must study the diverse, complicated structures of human body anatomy and its internal organs. HoloLearn aims to replace traditional educational techniques with one that involves user interaction with real-sized 3D objects. Based on interviews conducted with medical students from different universities and educational levels in Saudi Arabia, and on the questionnaire results, it was found that traditional learning techniques are insufficient and inefficient, as they lack the quality and most of the criteria that would qualify them as highly effective, reliable learning materials. There is therefore an increasing need for new learning strategies capable of giving students the chance to perceive every concept they study: rather than depending on their imagination to picture what a human body looks like from the inside, they need visual learning methods. From another perspective, teachers also face difficulties when explaining medical concepts, especially those related to human body structure and behavior. The currently available materials and sources are mostly theoretical; they promote indoctrination and a result-driven approach instead of engaging teacher and students in a process of sharing knowledge and ideas.
In fact, students have to listen and read instead of practicing and exploring; consequently, students are repeatedly prone to loss of concentration and mental distraction during lessons, while also suffering from long study hours and difficulties in retrieving information. The results of this project indicate that when hologram technology is combined with user interaction, the educational process can be greatly improved and made much more creative and entertaining.

Author 1: Shoroog Alghamdi
Author 2: Samar Aloufi
Author 3: Layan Alsalea
Author 4: Fatma Bouabdullah

Keywords: Interactive educational system; hologram technology; user interaction

PDF

Paper 52: Prototype of Web System for Organizations Dedicated to e-Commerce under the SCRUM Methodology

Abstract: This research work is based on developing a prototype e-commerce system applying the Scrum methodology, because the systems of many organizations are still developed following traditional methodologies; that is, the system does not meet the requirements that should provide value to the organization, in addition to poor information security, leaving the organization's data vulnerable to loss or theft. This article therefore aims to design a web system for organizations dedicated to e-commerce under the agile Scrum methodology. The methodology made it possible to design a prototype that meets the needs of the organization through frequent retrospectives and continuous communication between stakeholders, in addition to addressing the security of the company's data. In the results of this research, the user stories were analyzed and divided across four Sprint deliverables covering the general modules proposed in the article, with a maximum of 21 story points per Sprint: 16 story points were assigned to the first Sprint, 20 to the second, 21 to the third and 16 to the fourth. The proposed e-commerce system will benefit the organization and its customers, since it will give them a system that meets their requirements and needs in a secure manner.

Author 1: Ventocilla Gomero-Fanny
Author 2: Aguila Ruiz Bengy
Author 3: Laberiano Andrade-Arenas

Keywords: Agile; e-commerce; scrum; sales; user stories; sprint

PDF

Paper 53: Blockchain in Insurance: Exploratory Analysis of Prospects and Threats

Abstract: Ever since the first generation of blockchain technology became very successful and the FinTech sector benefited enormously from it with the advent of cryptocurrency, the second and third generations, championed by Ethereum and Hyperledger, have extended blockchain into other domains such as IoT, supply chain management, healthcare, business, privacy, and data management. A field as large as the insurance industry, however, has been underrepresented in the literature. Therefore, this paper presents how investments in blockchain technology can benefit the insurance industry. We discuss the basics of blockchain technology and popular platforms in use today, and provide a simple theoretical explanation of the insurance sub-processes that blockchain can positively transform. We also discuss the hurdles to be crossed to fully implement blockchain solutions in the insurance domain.

Author 1: Anokye Acheampong AMPONSAH
Author 2: Adebayo Felix ADEKOYA
Author 3: Benjamin Asubam WEYORI

Keywords: Blockchain technology; insurance industry; hyperledger; ethereum

PDF

Paper 54: Dual Frequencies Usage by Full and Incomplete Ring Elements

Abstract: Full and incomplete ring patch antenna elements can be combined to generate a variety of responses with respect to the direction of polarization. The dual-frequency requirement of numerous applications, such as tracking radar, can be met by combining the patches. To obtain the desired operating frequency, a straightforward ring patch design with an added gap is utilized. Various gap sizes and gap positions within the ring are examined in the study. To investigate the element behavior, the surface current distribution, return loss, and reflection phase are monitored. The evaluation revealed that a ring element with a smaller width offers a sharp reflection-phase gradient, which lowers its bandwidth performance. Meanwhile, element performance was not affected by placing the gap at the upper or lower part of the ring. However, a gap of 0.2 mm placed at the left or right position of the ring shifted the resonant frequency up from 8.1 GHz to 11.8 GHz. This study shows that the mixture of full and incomplete ring elements has the potential to be utilized as an antenna realizing monopulse operation.

Author 1: Kamarulzaman Mat
Author 2: Norbahiah Misran
Author 3: Mohammad Tariqul Islam
Author 4: Mohd Fais Mansor

Keywords: Incomplete ring; full ring; dual frequencies; polarization-dependent

PDF

Paper 55: An Efficient Data Replication Technique with Fault Tolerance Approach using BVAG with Checkpoint and Rollback-Recovery

Abstract: Data replication has been one of the pathways for distributed database management as well as computational intelligence, as it continues to improve data access and reliability. The performance of a data replication technique can be crucial when failures interrupt transactions. To develop a more efficient data replication technique that can cope with failure, a fault tolerance approach needs to be applied to the data replication transaction. Fault tolerance is a core issue for transaction management, as it keeps a transaction operating in a failure-prone environment. In this study, a data replication technique known as Binary Vote Assignment on Grid (BVAG) is combined with a fault tolerance approach named Checkpoint and Rollback-Recovery (CR) to evaluate the effectiveness of applying fault tolerance to a data replication transaction. A Binary Vote Assignment on Grid with Checkpoint and Rollback-Recovery Transaction Manager (BVAGCRTM) is used to run the proposed BVAGCR method. The performance of the proposed BVAGCR is compared to standard BVAG in terms of the total execution time of a single data replication transaction. The experimental results reveal that BVAGCR improves the BVAG total execution time in a failure environment by about 31.65% using the CR fault tolerance approach. Besides improving total execution time, BVAGCR also reduces the time taken to execute the most critical phase of BVAGCRTM, the Update (U) phase, by 98.82%. Based on these benefits, BVAGCR is recommended as a new and efficient technique for obtaining reliable data replication performance under failure conditions in distributed databases.

Author 1: Sharifah Hafizah Sy Ahmad Ubaidillah
Author 2: A. Noraziah
Author 3: Basem Alkazemi

Keywords: Data replication; computational intelligence; fault tolerance; binary vote assignment on grid; checkpoint and rollback-recovery

PDF
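
The checkpoint-and-rollback idea behind CR can be sketched generically: snapshot the transaction state before a failure-prone phase, and on failure restore the snapshot instead of restarting the whole transaction. This toy transaction manager is ours, not BVAGCRTM:

```python
import copy

class CheckpointedTransaction:
    """Holds mutable transaction state plus one saved checkpoint."""
    def __init__(self, state):
        self.state = state
        self._checkpoint = None

    def checkpoint(self):
        self._checkpoint = copy.deepcopy(self.state)

    def rollback(self):
        self.state = copy.deepcopy(self._checkpoint)

def run_update(txn, updates, fail_at=None):
    """Apply key/value updates; on a simulated failure, roll back to the
    checkpoint taken just before the phase began."""
    txn.checkpoint()
    try:
        for i, (key, value) in enumerate(updates):
            if i == fail_at:
                raise RuntimeError("replica failure")
            txn.state[key] = value
    except RuntimeError:
        txn.rollback()  # recover from the checkpoint, not from scratch
        return False
    return True
```

Recovering from the checkpoint rather than re-running everything is the mechanism by which CR shortens the Update phase under failure.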

Paper 56: Evaluation of Water Quality in the Lower Huallaga River Watershed using the Grey Clustering Analysis Method

Abstract: The evaluation of water quality is currently a topic of global interest owing to its socio-cultural, environmental and economic importance, but in recent years water quality has been deteriorating due to inadequate management of water conservation, disposal and use by the competent authorities, private and state entities and the population itself. An alternative for determining the quality of a water body in an integrated manner is the Grey Clustering Method, which was used in this study with the Prati Quality Index as indicator, with the objective of analyzing the quality of the water bodies under study objectively. The case study is the Lower Watershed of the Huallaga River, located between the regions of Loreto and San Martin, along which 12 monitoring stations were established to evaluate surface water quality through the analysis of 7 parameters: pH, BOD, COD, Total Suspended Solids (TSS), Ammonia Nitrogen, Substrates and Nitrates. It was determined that the water quality at eleven monitoring stations in the Lower Huallaga River Watershed falls within the "Uncontaminated" category, while one monitoring station falls within the "Highly Contaminated" category of the Prati Index, owing to its proximity to a landfill. The results obtained in this study could be useful to the authorities responsible for the protection and sustainable conservation of the Huallaga River Watershed in proposing appropriate measures to improve its quality. Additionally, this study could serve as a reference for future studies, since the proposed method made it possible to prioritize the quality level of the water bodies and identify critical areas.

Author 1: Alexi Delgado
Author 2: Diego Cuadra
Author 3: Karen Simon
Author 4: Katya Bonilla
Author 5: Katherine Tineo
Author 6: Enrique Lee Huamaní

Keywords: Water quality; prati index; grey clustering method; protection and sustainable conservation

PDF

Paper 57: A Comparative Analysis of Machine Learning Models for First-break Arrival Picking

Abstract: First-break (FB) picking is an important and necessary step in seismic data processing, and there is a need to develop precise and accurate auto-picking solutions. Our investigation in this study covers eight machine learning models. We use 1195 raw traces to extract several features, train the models for accurate picking, and monitor the performance of each model using well-defined evaluation metrics. Careful investigation of the scores shows that a single metric alone is not sufficient to evaluate arrival-picking models in real time. Correlation analysis of the predicted probabilities and predicted classes of the machine learning models confirms that performance metrics that use predicted probabilities score higher than those that use predicted classes. Our study, which compares different machine learning models based on different performance metrics, training time, and feature importance, indicates that the approach developed here is helpful and provides an opportunity to determine the real-time suitability of different methodologies for automatic FB arrival picking with clear, deep insight. Based on the performance scores, we benchmark the Extra Trees classifier as the most efficient model for FB arrival picking, with accuracy and F1-score above 95%.
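
The observation that a single metric is insufficient can be made concrete with a small sketch: on imbalanced data (few true arrivals among many non-arrivals), a degenerate model that never predicts an arrival still scores high accuracy, while the F1-score on the arrival class exposes it. The labels below are synthetic, not the paper's data.

```python
# Hedged sketch: accuracy vs. F1 on an imbalanced picking task.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 90 non-arrival samples (0) and 10 arrival samples (1):
y_true = [0] * 90 + [1] * 10
majority = [0] * 100                 # always predicts "no arrival"
print(accuracy(y_true, majority))    # 0.9 -- looks good
print(f1(y_true, majority))          # 0.0 -- reveals the failure
```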

Author 1: Mohammed Ayub
Author 2: SanLinn I. Kaka

Keywords: First-break arrival picking; seismology; neural networks; machine learning; feature ranking

PDF

Paper 58: Olive Oil Ripping Time Prediction Model based on Image Processing and Neural Network

Abstract: The agriculture sector in Jordan depends heavily on the planting of olive trees; more than ten million olive trees are planted in Jordanian soil. Olive fruits are harvested for two purposes: to produce oil or to produce table olives (pickled olives). The harvesting time for extracting oil from the olive fruit is crucial: harvesting the fruit at ripping time gives the best amount and quality of oil, while 15% to 20% of its value can be lost because of poor harvesting time. Olive ripping time varies, since it depends on rainfall, temperature and cultivation. A system to predict the optimal time for harvesting olive fruit for oil production is introduced. It is based on Digital Image Processing (DIP) and an artificial neural network. Four features were extracted from the olive fruit image based on the red, green and blue colors. The proposed system tested olive fruits in three ripping stages: under ripping, on ripping and over ripping. The classification accuracy achieved was 97.51% in the under-ripping stage, 95.10% in the ripping stage, and 96.12% in the over-ripping stage. The overall system performance was 96.14%.

Author 1: Mutasem Shabeb Alkhasawneh

Keywords: Neural network; image processing; olive ripping time; prediction; classifications

PDF

Paper 59: A New Discretization Approach of Bat and K-Means

Abstract: The Bat algorithm is one of the optimization techniques that mimic the behavior of bats, and it is powerful at finding the optimum feature subset. Classification is one of the data mining tasks useful in knowledge representation, but high-dimensional data is an issue in classification that degrades classification accuracy. From the literature, feature selection and discretization are able to overcome this problem. Therefore, this study aims to show that the Bat algorithm has potential both as a discretization approach and as a feature selection method to improve classification accuracy. In this paper, a new hybrid Bat-K-Means algorithm, referred to as hBA, is proposed to convert continuous data into discrete data, called the optimized discrete dataset. Then, Bat is used as a feature selection method to select the optimum features from the optimized discrete dataset in order to reduce the dimension of the data. The experiment is conducted using k-Nearest Neighbor to evaluate the effectiveness of discretization and feature selection in classification, comparing against the continuous dataset without feature selection, the discrete dataset without feature selection, and the continuous dataset without discretization or feature selection, and to show that Bat has potential as a discretization approach and feature selection method. The experiments were carried out using a number of benchmark datasets from the UCI machine learning repository. The results show that classification accuracy is improved with Bat-K-Means optimized discretization and Bat optimized feature selection.
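
The discretization idea can be sketched with plain 1-D k-means: cluster a continuous attribute and replace each value by the index of its nearest centroid. This is only the k-means half of the story; the paper's hBA additionally tunes the centroids with the Bat algorithm, a step omitted here.

```python
# Hedged sketch: k-means-based discretization of one continuous
# attribute (the Bat-optimization step of hBA is not included).

def kmeans_1d(values, k, iters=50):
    # spread the initial centroids across the sorted values
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centroids[j]))
            clusters[i].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def discretize(values, centroids):
    """Replace each continuous value by the index of its nearest centroid."""
    return [min(range(len(centroids)), key=lambda j: abs(v - centroids[j]))
            for v in values]

data = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9, 9.8, 10.1, 10.0]
cents = kmeans_1d(data, 3)
print(discretize(data, cents))   # each value mapped to a discrete bin
```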

Author 1: Rozlini Mohamed
Author 2: Noor Azah Samsudin

Keywords: Classification; discretization; feature selection; optimization algorithm; bat algorithm

PDF

Paper 60: Development of a System to Manage Letters of Recommendation

Abstract: A Letter of Recommendation (LoR) is a letter which describes the qualifications, skills, and abilities of the person being recommended. Students need to request letters of recommendation from their instructors when applying for jobs, internships and academic studies. The main obstacle many students face is that instructors do not respond quickly, especially if the students have graduated. On the other hand, instructors find it difficult to manage the many requests they receive, especially at the end of the semester. The work in this paper presents the design, development and testing of a system that aims to replace the traditional method of requesting and issuing LoRs with a more systematic, standardized one. The developed system may be adopted to simplify, unify, and improve the process. The results showed that the developed system enhanced communication between requesters and issuers, reduced the amount of time and effort compared with the traditional way, and achieved the usability requirements.

Author 1: Reham Alabduljabbar

Keywords: Letter of recommendation; mobile application; letter of reference; android mobile application; LoR; standardized letters of recommendations

PDF

Paper 61: Conceptual Temporal Modeling Applied to Databases

Abstract: We present a different approach to developing a concept of time for specifying temporality in the conceptual modeling of software and database systems. In the database field, various proposals and products address temporal data. The difficulty with most of the current approaches to modeling temporality is that they represent and record time as just another type of data (e.g., values of a bank balance or amounts of money), instead of appreciating that time and its values are unique, in comparison to typical data attributes. Time is an engulfing phenomenon that lifts a system’s entire model from staticity to dynamism and beyond. In this paper, we propose a conceptualization of temporality involving the construction of a multilevel modeling method that progresses from static representation to system compositions that form regions of dynamism. Then, a chronology of events is used to define the system’s behavior. Lastly, the events are viewed as data sources with which to build a temporal model. A case-study model of a temporal banking-management system database that extends UML and the object-constraint language is re-modeled using thinging machine (TM) modeling. The resultant TM diagrammatic specification delivers a new approach to temporality that can be extended to be a holistic monitoring system for historic data and events.

Author 1: Sabah Al-Fedaghi

Keywords: Conceptual modeling; temporal database; static model; events model; behavioral model

PDF

Paper 62: Towards a Real Time Distributed Flood Early Warning System

Abstract: Since the beginning of humanity, floods have caused a lot of damage and killed many people, and to this day they cause heavy losses in many countries every year. When facing such a disaster, effective flood-management decisions must be made using real-time data, which must be analyzed and, more importantly, controlled. In this paper, we present a distributed decision support system that can be deployed to support flood-management decision makers. Our system is based on Multi-Agent Systems and an Anytime Algorithm, and it has two modes of processing: a Pre-Processing mode to test and control the information sent by sensors in real time, and the Main Processing mode, which has three different parts. The first part is the Trigger Mode for monitoring rainfall and triggering the second part, the offline mode, which predicts the flood based on historical data without going through the real-time decision support system. Finally, the online mode predicts the flood based on real-time data and on a combination of communications among different modules: hydrodynamic data, a Geographic Information System (GIS), decision support, and a remote sensing module to determine information about the flood.

Author 1: EL MABROUK Marouane

Keywords: Flood; forecasting; distributed decision support system; multi-agent system; anytime algorithm

PDF

Paper 63: Modelling Health Process and System Requirements Engineering for Better e-Health Services in Saudi Arabia

Abstract: This systematic review aimed to examine the published works on e-health modelling system requirements and suggest one applicable to Saudi Arabia. The PRISMA method was adopted to search, screen and select the papers to be included in this review. Google Scholar was used as the search engine to collect relevant works. From an initial 74 works, 20 were selected after all screening procedures as per the PRISMA flow diagram. The 20 selected works were discussed under various sections. The review revealed that goal setting is the first step: using the goals, a model can be created, based on which system requirements can be elicited. Different studies used different approaches within this broad framework and applied the procedures to varying healthcare contexts. Based on the findings, an attempt has been made to set the goals and elicit the system requirements for a diabetes self-management model for the entire country in the Saudi Arabian context. This is a preliminary model which needs to be tested, improved and then implemented.

Author 1: Fuhid Alanazi
Author 2: Valerie Gay
Author 3: Mohammad N. Alanazi
Author 4: Ryan Alturki

Keywords: e-health; e-health systems; e-health modelling; e-health modelling system requirements

PDF

Paper 64: Comparison of Deep and Traditional Learning Methods for Email Spam Filtering

Abstract: Electronic mail, or email, is a method of communicating over the internet which is inexpensive, effective, and fast. Spam is a type of email in which unwanted messages, usually unwanted commercial messages, are distributed in large quantities by a spammer. The objective of such behavior is to harm email users; these messages need to be detected and prevented from reaching users in the first place. In order to filter these emails, developers have used machine learning methods. This paper discusses deep learning methods, such as Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models with and without a GloVe embedding model, for classifying spam and non-spam messages. These models are based only on email data, and the set of features is extracted automatically. In addition, our work provides a comparison between traditional machine learning and deep learning algorithms on spam datasets to find the best approach to spam detection. The results indicate that deep learning offers improved precision, recall, and accuracy. As far as we are aware, deep learning methods show great promise in filtering email spam; therefore we have performed a comparison of various deep learning methods with traditional machine learning methods. Using a benchmark dataset consisting of 5,243 spam and 16,872 non-spam email and SMS messages, the highest achieved accuracy score is 96.52%, using CNN with the GloVe model.
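
One of the traditional baselines such comparisons typically include is a multinomial Naive Bayes filter over word counts. The sketch below, with a toy four-message corpus (not the paper's benchmark dataset), shows the Laplace-smoothed log-probability scoring such a baseline uses.

```python
# Hedged sketch: a multinomial Naive Bayes spam filter with Laplace
# smoothing -- a traditional baseline, not the paper's CNN/GloVe model.
import math
from collections import Counter

def train(docs):
    """docs: list of (tokens, label) with label in {'spam', 'ham'}."""
    word_counts = {'spam': Counter(), 'ham': Counter()}
    class_counts = Counter()
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return word_counts, class_counts, vocab

def predict(tokens, word_counts, class_counts, vocab):
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label in class_counts:
        lp = math.log(class_counts[label] / total)      # class prior
        n = sum(word_counts[label].values())
        for w in tokens:                                 # smoothed likelihoods
            lp += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("win cash prize now".split(), 'spam'),
        ("cheap prize click now".split(), 'spam'),
        ("meeting agenda attached".split(), 'ham'),
        ("lunch meeting tomorrow".split(), 'ham')]
model = train(docs)
print(predict("win prize".split(), *model))         # -> 'spam'
print(predict("meeting tomorrow".split(), *model))  # -> 'ham'
```

The deep models in the paper replace these hand-counted features with automatically learned representations.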

Author 1: Abdullah Sheneamer

Keywords: Spam filtering; machine learning; deep learning; LSTM; CNN

PDF

Paper 65: Mobile Application Design with IoT for Environmental Pollution Awareness

Abstract: In the present study, we observed that many people are affected by environmental pollution, so we propose a mobile application prototype to facilitate awareness of this environmental problem. The methodology that will help us build this application is Scrum, since it is adaptable to the constant changes of the mobile application development process. We will also use Internet of Things and Firebase-based technology to collect data from the various air pollution sensors, since users require a visualization of the contamination in real time. The mobile application, to which users will register automatically, will provide easy access for monitoring and controlling atmospheric pollution, whose data will be received through sensors. The result obtained is that, with the implementation of the application, people become aware of the damage that these pollutants cause to the environment.

Author 1: Anthony Ramos-Romero
Author 2: Brighitt Garcia-Yataco
Author 3: Laberiano Andrade-Arenas

Keywords: Environmental pollution; internet of things; scrum; mobile app

PDF

Paper 66: An Integrated Imbalanced Learning and Deep Neural Network Model for Insider Threat Detection

Abstract: The insider threat is a vital security concern in both the private and public sectors. A lot of approaches are available for detecting and mitigating insider threats; however, the implementation of an effective system for insider threat detection is still a challenging task. In previous work, Machine Learning (ML) techniques were proposed in the insider threat detection domain, since they promise a better detection mechanism. Nonetheless, ML techniques can be biased and less accurate when the dataset used is hugely imbalanced. Therefore, in this article, an integrated insider threat detection model named AD-DNN is proposed, which is an integration of the adaptive synthetic sampling approach (ADASYN) and a deep neural network (DNN). In the proposed model, ADASYN is used to solve the imbalanced data issue and the DNN performs insider threat detection. The proposed model uses the CERT dataset for the evaluation process. The experimental results show that the proposed integrated model improves overall insider threat detection performance, and its significant impact on accuracy makes it a better solution compared with current insider threat detection systems.
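
The ADASYN idea can be sketched in a few lines: for each minority (insider) point, the share of majority points among its k nearest neighbours measures how hard that region is, and harder regions receive proportionally more synthetic points, generated by interpolating towards minority neighbours. The 2-D toy data below is illustrative, not CERT features, and this simplified version omits ADASYN's density normalization details.

```python
# Hedged sketch: a simplified ADASYN-style oversampler on 2-D toy data.
import random

def adasyn(minority, majority, k=3, n_new=6, seed=0):
    rng = random.Random(seed)
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    labelled = [(p, 0) for p in minority] + [(p, 1) for p in majority]
    # 1) difficulty ratio: share of majority points among k nearest neighbours
    ratios = []
    for m in minority:
        nbrs = sorted((q for q in labelled if q[0] != m),
                      key=lambda q: dist(m, q[0]))[:k]
        ratios.append(sum(lbl for _, lbl in nbrs) / k)
    total = sum(ratios) or 1.0
    # 2) generate more synthetic points where the ratio is higher
    synthetic = []
    for m, r in zip(minority, ratios):
        g = round(n_new * r / total)
        min_nbrs = sorted((p for p in minority if p != m),
                          key=lambda p: dist(m, p))[:k]
        for _ in range(g):
            n = rng.choice(min_nbrs)
            lam = rng.random()           # interpolate between m and neighbour
            synthetic.append((m[0] + lam * (n[0] - m[0]),
                              m[1] + lam * (n[1] - m[1])))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
majority = [(0.2, 0.1), (0.9, 0.2), (2.0, 2.0), (2.1, 1.9), (0.1, 0.9)]
print(len(adasyn(minority, majority)))
```

The balanced set (original plus synthetic minority points) is what the DNN would then be trained on.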

Author 1: Mohammed Nasser Al-Mhiqani
Author 2: Rabiah Ahmed
Author 3: Z Zainal Abidin
Author 4: S.N Isnin

Keywords: Security; insider threat; insider threats detection; machine learning; deep learning; imbalanced data

PDF

Paper 67: A Systematic Study of Duplicate Bug Report Detection

Abstract: Defects are an integral part of any software project. They can arise at any time, during any phase of software development or maintenance. In open source projects, open bug repositories are used to maintain the bug reports. When a new bug report arrives, a person called a “triager” analyzes the bug report and assigns it to a responsible developer. But before assigning it, the triager has to check whether it is a duplicate or not. Duplicate bug reports are one of the big problems in the maintenance of bug repositories, and reporters' lack of knowledge and vocabulary skills sometimes increases the effort required for this task. Bug tracking systems are usually used to maintain the bug reports and are the most consulted resource during the maintenance process. Because of the uncoordinated nature of the submission of bug reports to the tracking system, the same bug is often reported by many users. Duplicate bug reports lead to a waste of resources and money, create problems for triagers, and require a lot of analysis and validation. A lot of work has been done in the field of duplicate bug report detection. In this paper, we systematically present the research done in this field by classifying the works into three categories and listing the methods used in the classified research. The paper considers papers up to January 2020 for the analysis. It describes the strengths, limitations, datasets, and major approaches used by the popular papers in this field, and also lists the challenges and future directions of this research area.

Author 1: Som Gupta
Author 2: Sanjai Kumar Gupta

Keywords: AUSUM; feature-based; deep learning; semantic; unsupervised

PDF

Paper 68: A Blockchain-based Crowdsourced Task Assessment Framework using Smart Contract

Abstract: In today’s world, crowdsourcing is a fast-rising paradigm in which masses of people are engaged in solving a problem. Though this system has a lot of advantages, people are still not interested in working on this platform. Thus, we surveyed people to find out the constraints of this platform and the main reasons behind their unwillingness. 59% of people think that security and privacy are the major challenges of a crowdsourcing platform. Therefore, we propose a blockchain-based crowdsourcing system which can protect the security and privacy of users' information. We have also used a smart contract to verify each task so that users get exactly the output they wanted. We implemented our system and compared its performance with existing systems. Our proposed approach outperforms the current methods in terms of cost and properties.

Author 1: Linta Islam
Author 2: Syada Tasmia Alvi
Author 3: Mafizur Rahman
Author 4: Ayesha Aziz Prova
Author 5: Md. Nazmul Hossain
Author 6: Jannatul Ferdous Sorna
Author 7: Mohammed Nasir Uddin

Keywords: Blockchain; crowdsourcing; task allocation; smart contract

PDF

Paper 69: Fake Reviews Detection using Supervised Machine Learning

Abstract: With the continuous evolution of e-commerce systems, online reviews are considered a crucial factor for building and maintaining a good reputation. Moreover, they have an effective role in the decision-making process of end users. Usually, a positive review for a target object attracts more customers and leads to a high increase in sales. Nowadays, deceptive or fake reviews are deliberately written to build virtual reputation and attract potential customers. Thus, identifying fake reviews is a vivid and ongoing research area. Identifying fake reviews depends not only on the key features of the reviews but also on the behaviors of the reviewers. This paper proposes a machine learning approach to identify fake reviews. In addition to the feature extraction process for the reviews, this paper applies several feature-engineering techniques to extract various behaviors of the reviewers. The paper compares the performance of several experiments done on a real Yelp dataset of restaurant reviews, with and without the features extracted from user behaviors. In both cases, we compare the performance of several classifiers: KNN, Naive Bayes (NB), SVM, Logistic Regression and Random Forest. Also, different n-gram language models, in particular bi-gram and tri-gram, are taken into consideration during the evaluations. The results reveal that KNN (K=7) outperforms the rest of the classifiers in terms of f-score, achieving a best f-score of 82.40%. The results show that the f-score increased by 3.80% when taking the extracted reviewers' behavioral features into consideration.
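
The reviewer-level feature engineering can be illustrated with a sketch that aggregates, per reviewer, features such as review count, burst rate (reviews per day), and deviation from the per-product consensus rating. The field names and the three-row sample are illustrative, not the Yelp schema or the paper's exact feature set.

```python
# Hedged sketch: behavioural features aggregated per reviewer.
from collections import defaultdict
from statistics import mean

def reviewer_features(reviews):
    """reviews: list of dicts with reviewer, product, rating, date."""
    by_reviewer = defaultdict(list)
    ratings_by_product = defaultdict(list)
    for r in reviews:
        by_reviewer[r["reviewer"]].append(r)
        ratings_by_product[r["product"]].append(r["rating"])
    product_avg = {p: mean(v) for p, v in ratings_by_product.items()}
    feats = {}
    for rev, rs in by_reviewer.items():
        per_day = defaultdict(int)
        for r in rs:
            per_day[r["date"]] += 1
        feats[rev] = {
            "n_reviews": len(rs),
            "avg_rating": mean(r["rating"] for r in rs),
            # how far this reviewer deviates from the product consensus:
            "avg_deviation": mean(abs(r["rating"] - product_avg[r["product"]])
                                  for r in rs),
            "max_reviews_per_day": max(per_day.values()),
        }
    return feats

reviews = [
    {"reviewer": "a", "product": "p1", "rating": 5, "date": "2020-01-01"},
    {"reviewer": "a", "product": "p2", "rating": 5, "date": "2020-01-01"},
    {"reviewer": "b", "product": "p1", "rating": 2, "date": "2020-01-03"},
]
print(reviewer_features(reviews)["a"]["max_reviews_per_day"])  # 2
```

These aggregates would then be concatenated with the text (n-gram) features before classification.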

Author 1: Ahmed M. Elmogy
Author 2: Usman Tariq
Author 3: Ammar Mohammed
Author 4: Atef Ibrahim

Keywords: Fake reviews detection; data mining; supervised machine learning; feature engineering

PDF

Paper 70: An Adaptive Genetic Algorithm for a New Variant of the Gas Cylinders Open Split Delivery and Pickup with Two-dimensional Loading Constraints

Abstract: This paper studies a combination of two well-known problems in distribution logistics: the truck loading problem and the vehicle routing problem. In our context, a customer's daily demand exceeds the truck capacity. As a result, the demand has to be split into several routes. In addition, it is required to assign customers to depots, which means that each customer is visited just once by any truck in the fleet. Moreover, we take customer time windows into consideration. The studied problem can be defined as a multi-depot open split delivery and pickup vehicle routing problem with two-dimensional loading constraints and time windows (2L-MD-OSPDTW). A mathematical formulation of the problem is proposed as a mixed-integer linear programming model. Then, a set of four instance classes is used in a way that reflects the real-life case study. Furthermore, a genetic algorithm is proposed to solve large-scale datasets. Finally, preliminary results are reported and show that the MILP performs very well for small test instances, while the genetic algorithm can be efficiently used to solve the problem for wide-reaching test instances.
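
The skeleton of such a genetic algorithm can be sketched on a deliberately tiny routing toy: permutation individuals, tournament selection, swap mutation, and elitism. Real 2L-MD-OSPDTW encodings also carry split-delivery, depot-assignment and two-dimensional loading information; this toy only orders customer visits on a one-dimensional map.

```python
# Hedged sketch: a minimal permutation GA for a toy routing instance.
import random

def route_cost(route, pos):
    """Cost of depot (node 0) -> customers -> depot on a 1-D map."""
    path = [0] + list(route) + [0]
    return sum(abs(pos[a] - pos[b]) for a, b in zip(path, path[1:]))

def ga(pos, pop_size=30, gens=100, mut_rate=0.3, seed=1):
    rng = random.Random(seed)
    cost = lambda r: route_cost(r, pos)
    customers = list(range(1, len(pos)))
    pop = [rng.sample(customers, len(customers)) for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(gens):
        nxt = [best]                                  # elitism
        while len(nxt) < pop_size:
            a, b = rng.sample(pop, 2)                 # tournament of two
            child = list(min(a, b, key=cost))
            if rng.random() < mut_rate:               # swap mutation
                i, j = rng.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
        best = min(pop, key=cost)
    return best

pos = {0: 0.0, 1: 2.0, 2: 8.0, 3: 4.0, 4: 6.0}        # depot + 4 customers
best = ga(pos)
print(route_cost(best, pos))   # the optimum for this instance is 16.0
```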

Author 1: Anouar Annouch
Author 2: Adil Bellabdaoui

Keywords: Vehicle routing problem; split delivery and pickup; multi-depot; two-dimensional loading; genetic algorithm

PDF

Paper 71: Study of Post-COVID-19 Employability in Peru through a Dynamic Model, Between 2020 and 2025

Abstract: This research work focuses on the sector of the population that will have a job, taking into account that the pandemic caused many people to lose their jobs due to the economic crisis that affected all countries: in the first half of 2020 the equivalent of 400 million full-time jobs were lost and there was a 14% drop in working hours worldwide, while in Lima 1.2 million people were left without work. For this reason, a dynamic analysis was developed for the projection of post-COVID-19 employability in Peru from 2020 to 2025 to obtain approximate knowledge of the population's labor outlook, implementing system dynamics as the methodology, given that any model built through its application will be based on the opinion of those involved in the system to be represented. In this work, system dynamics is presented as a very useful methodology for the analysis of complex problems, developing the causal and Forrester diagrams with the help of the Vensim software. As a result, the approximate number of jobs that will be available was visualized, and it was observed that the future of employability will be at risk, which is why a good government strategy is necessary to prevent this, since people need to satisfy their professional, economic and development needs.

Author 1: Richard Ronny Arias Marreros
Author 2: Keyla Vanessa Nalvarte Dionisio
Author 3: Luis Alberto Romero Tuanama
Author 4: Juber Alfonso Quiroz Gutarra
Author 5: Laberiano Andrade-Arenas

Keywords: Employability; forrester diagram; population; system dynamics; vensim

PDF

Paper 72: Topic based Sentiment Analysis for COVID-19 Tweets

Abstract: The incessant Coronavirus pandemic has had a detrimental impact on nations across the globe. The essence of this research is to demystify social media sentiment regarding Coronavirus. The paper specifically focuses on Twitter and extracts the most discussed topics during and after the first wave of the Coronavirus pandemic. The extraction was based on a dataset of English tweets pertinent to COVID-19. The study focuses on two main periods: the first from March 01, 2020 to April 30, 2020 and the second from September 01, 2020 to October 31, 2020. Latent Dirichlet Allocation (LDA) was adopted for topic extraction, whereas a lexicon-based approach was adopted for sentiment analysis. For the implementation, the paper utilized the Spark platform with Python to enhance the speed and efficiency of analyzing and processing large-scale social data. The research findings revealed the appearance of conflicting topics throughout the two Coronavirus pandemic periods. Besides, the expectations and interests of individuals regarding the various topics were well represented.
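
The lexicon-based side of such a pipeline reduces to summing word polarities, usually with simple negation handling. The tiny lexicon below is illustrative; published tweet lexicons are far larger and weight-calibrated.

```python
# Hedged sketch: a minimal lexicon-based sentiment scorer for tweets.

LEXICON = {"good": 1, "great": 2, "safe": 1,
           "bad": -1, "fear": -2, "sick": -2}
NEGATIONS = {"not", "no", "never"}

def sentiment(tweet):
    score, tokens = 0, tweet.lower().split()
    for i, tok in enumerate(tokens):
        w = LEXICON.get(tok, 0)
        if w and i > 0 and tokens[i - 1] in NEGATIONS:
            w = -w                     # flip polarity after a negation
        score += w
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("Vaccines are great news"))   # positive
print(sentiment("I am not sick anymore"))     # positive (negated term)
```

At scale, a function like this would be mapped over the tweet collection with Spark, with LDA topics computed separately.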

Author 1: Manal Abdulaziz
Author 2: Alanoud Alotaibi
Author 3: Mashail Alsolamy
Author 4: Abeer Alabbas

Keywords: Social media analysis; COVID-19; topics extraction; sentiment analysis; LDA; spark; twitter

PDF

Paper 73: Building a Personalized Fitness Recommendation Application based on Sequential Information

Abstract: Nowadays, sport plays a very important role in human life: it keeps people healthy, active, and of sound mind. However, the practice of a sport can have negative effects on the body and on human health if it is practiced incorrectly or is not adapted to the person's body or health. This is why, in this paper, we propose a recommendation system that matches each person with the right sport according to several factors such as heart rate, speed and size. The implementation was applied to the FitRec dataset with the help of the Spark tool, and the results show that the proposed method is capable of generating the appropriate training for different groups according to their information, where each group gets the appropriate training. The grouping of this data was done by the k-means method.
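
Once k-means has produced group centroids over the user metrics, recommendation reduces to assigning a new user to the nearest centroid and returning that group's plan. The centroid values and plan names below are illustrative placeholders, not values learned from FitRec.

```python
# Hedged sketch: nearest-centroid group assignment after k-means.

CENTROIDS = {                       # (avg heart rate bpm, avg speed km/h)
    "low_intensity":  (95.0, 6.0),
    "moderate":       (130.0, 9.0),
    "high_intensity": (165.0, 13.0),
}
PLANS = {"low_intensity": "brisk walking",
         "moderate": "steady jogging",
         "high_intensity": "interval running"}

def recommend(heart_rate, speed):
    def d2(group):
        hr, sp = CENTROIDS[group]
        return (heart_rate - hr) ** 2 + (speed - sp) ** 2
    group = min(CENTROIDS, key=d2)   # nearest centroid wins
    return group, PLANS[group]

print(recommend(120.0, 8.5))   # -> ('moderate', 'steady jogging')
```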

Author 1: Manal Abdulaziz
Author 2: Bodor Al-motairy
Author 3: Mona Al-ghamdi
Author 4: Norah Al-qahtani

Keywords: Big data; big data processing; recommendation system; sport analysis; K-means

PDF

Paper 74: Inventory Management Analysis under the System Dynamics Model

Abstract: In this work, system dynamics modeling of inventory management has been carried out in order to achieve a correct analysis of that management and thus make decisions that benefit the company. The problem lies in the mismanagement of inventories, which are run by companies with little or vast knowledge of inventory management; hence the desire to use system dynamics modeling so that a correct analysis of the management is achieved, focused on the dynamics of the system. The result obtained from the methodology applied in this work was a correct and adequate system dynamics analysis of inventory management, achieved using the Vensim simulation software and a methodology based on three stages: the causal diagram, the Forrester diagram, and the mathematical equations.
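
The stock-and-flow structure behind a Forrester diagram reduces to a difference equation: the inventory stock integrates the net of its inflow (production) and outflow (sales). The rates below are illustrative, not calibrated model parameters.

```python
# Hedged sketch: an inventory stock with a balancing feedback loop,
# the kind of structure Vensim simulates from a Forrester diagram.

def simulate(inventory, target, periods, adjust_time=4.0, sales=100.0):
    """Each period, production covers sales plus a fraction
    (1 / adjust_time) of the gap between target and current stock."""
    history = [inventory]
    for _ in range(periods):
        production = sales + (target - inventory) / adjust_time
        inventory += production - sales      # stock integrates net flow
        history.append(inventory)
    return history

h = simulate(inventory=200.0, target=600.0, periods=20)
print(round(h[-1], 1))   # the stock closes most of the gap to 600
```

The balancing loop makes the stock approach its target exponentially, which is the classic behavior such causal diagrams encode.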

Author 1: Shalóm Adonai Huaraz Morales
Author 2: Laberiano Andrade-Arenas

Keywords: Causal diagram; dynamics of system; forrester diagram; inventory management; Vensim

PDF

Paper 75: Cuckoo-Neural Approach for Secure Execution and Energy Management in Mobile Cloud Computing

Abstract: Along with the explosive growth in mobile applications and the emergence of cloud computing, mobile cloud computing (MCC) has been introduced as a potential technology for mobile users. Employing MCC to let mobile users realize the benefits of cloud computing in an environmentally friendly way is an effective strategy to meet today's industrial demands. With the ever-increasing demand for MCC technology, energy efficiency has become extremely relevant in mobile cloud computing infrastructure. The concept of mobile cloud computing offers low cost and high availability to mobile cloud users on a pay-per-use basis. However, challenges such as resource management and energy consumption are still faced by mobile cloud providers. If the allocation of resources is not managed in a secure manner, false allocation will lead to more energy consumption. This article demonstrates the importance of energy-saving mechanisms in cloud data centers and elaborates on the "energy efficiency" relationship to promote the adoption of these mechanisms in practical scenarios. Resource utilization is maximized by minimizing energy consumption. To achieve this, an integrated approach using Cuckoo Search (CS) with an Artificial Neural Network (ANN) is presented here. Initially, the Virtual Machines (VMs) are sorted by CPU utilization using the Modified Best Fit Decreasing (MBFD) approach. This suffers from an increase in Service Level Agreement (SLA) violations along with many VM migrations: if a migration is not made to an appropriate host, one which can hold the VM for long, Service Level Agreement Violation (SLAV) will be high.
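
The Best Fit Decreasing step the paper starts from can be sketched directly: VMs are sorted by CPU demand in decreasing order, and each is placed on the host that leaves the least spare capacity. The host and VM sizes below are illustrative; the paper's MBFD variant and the CS/ANN refinement are not reproduced here.

```python
# Hedged sketch: Best Fit Decreasing placement of VMs onto hosts.

def bfd(vms, hosts):
    """vms: {name: cpu_demand}; hosts: {name: cpu_capacity}.
    Returns {vm: host}; raises if some VM cannot be placed."""
    free = dict(hosts)
    placement = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        candidates = [h for h, cap in free.items() if cap >= demand]
        if not candidates:
            raise RuntimeError(f"no host can take {vm}")
        # best fit: the host left with the least spare capacity
        best = min(candidates, key=lambda h: free[h] - demand)
        free[best] -= demand
        placement[vm] = best
    return placement

vms = {"vm1": 0.5, "vm2": 0.3, "vm3": 0.2, "vm4": 0.4}
hosts = {"h1": 1.0, "h2": 0.5}
print(bfd(vms, hosts))   # vm1 fills h2 exactly; the rest pack onto h1
```

Packing tightly this way is what lets idle hosts be switched off, which is where the energy saving comes from.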

Author 1: Vishal
Author 2: Bikrampal Kaur
Author 3: Surender Jangra

Keywords: Mobile cloud computing; VM migration; energy consumption; SLA violation; VM selection; overloading; underloading

PDF

Paper 76: An Improved Biometric Fusion System of Fingerprint and Face using Whale Optimization

Abstract: In the field of wireless multimedia authentication, unimodal biometric models are commonly used, but they suffer from spoofing and limited accuracy. The present work proposes the fusion of the features of face and fingerprint recognition systems into an Improved Biometric Fusion System (IBFS), leading to improved performance: by integrating multiple biometric traits, recognition performance is improved and fraudulent access is thereby reduced. The paper introduces an IBFS comprising two authentication systems, an Improved Fingerprint Recognition System (IFPRS) and an Improved Face Recognition System (IFRS). The whale optimization algorithm is used with minutiae features for IFPRS and with Maximally Stable Extremal Regions (MSER) for IFRS. To train the designed IBFS, a Pattern net model is used as the classification algorithm. Pattern net works on the processed dataset along with an SVM to train the IBFS model and achieve better classification accuracy. It is observed that the proposed fusion system exhibited an average true positive rate and accuracy of 99.8% and 99.6%, respectively.

Author 1: Tajinder Kumar
Author 2: Shashi Bhushan
Author 3: Surender Jangra

Keywords: Biometric fusion; face recognition; fingerprint recognition; feature extraction; feature optimization; classifier

PDF

Paper 77: Implementation of an e-Commerce System for the Automation and Improvement of Commercial Management at a Business Level

Abstract: At present, micro and small businesses engaged in the production and marketing of products, which have a single means of sale, whether stalls or physical stores, have been affected by the current crisis caused by the pandemic, which arrived in Europe and several Latin American countries in early 2020 and is causing terrible damage to the economy of enterprises that have no virtual sales channel through which to offer and market their products so that trade can continue to operate during the pandemic. Accordingly, we designed a prototype e-commerce system meeting the requirements set by the organizations. It was based on the Scrum methodology as an agile development framework for carrying out the project, and the Marvel design tool allowed the creation of web platform prototypes. The result is a set of prototypes for an e-commerce system complying with the development procedures established by the Scrum team, which yields a novel proposal and a productive approach for implementing e-commerce within the sales processes of each business area. Therefore, this e-commerce system prototype proposal can be implemented by the different micro companies that wish to have a new online sales channel and improve their commercial processes, allowing them to increase their client portfolio as well as their production.

Author 1: Anthony Tupia-Astoray
Author 2: Laberiano Andrade-Arenas

Keywords: Agile development; e-commerce; scrum methodology; prototype; system

PDF

Paper 78: Heuristic Evaluation of Peruvian Government Web Portals, used within the State of Emergency

Abstract: The development of web platforms is abundant at present and has grown exponentially due to the state of emergency. There is therefore a need to evaluate the quality of these platforms to ensure a good user experience, especially when they are governmental. Several platforms related to COVID-19 have been developed by governments of different nations; these should be evaluated from a heuristic perspective to detect the usability problems users may encounter when interacting with a product and to identify ways to solve them. This article presents a heuristic evaluation of Peruvian government web portals used during the state of emergency. For this purpose, a heuristic evaluation based on the list of 15 heuristic principles proposed by Toni Granollers is carried out on two government platforms: Aprendo en casa and Covid19 Minsa. The evaluation found that the Aprendo en casa platform has fewer usability problems than the Covid19 Minsa platform. There is therefore a need to renew or update the Covid19 Minsa platform using the results of the heuristic evaluation performed. Although this work covers these two Peruvian government platforms, it is recommended to continue with research that applies other usability evaluation methodologies to other platforms in daily use, such as the S/ 380 bonus platform, the Bonus for independent workers, or AFP withdrawal.

Author 1: Flores Quispe Percy Santiago
Author 2: Mamani Condori Kevin Alonso
Author 3: Paniura Huamani Jose Maykol
Author 4: Anampa Chura Diego David
Author 5: Richart Smith Escobedo Quispe

Keywords: Heuristic evaluation; usability; heuristic principles; government web portals; Covid-19

PDF

Paper 79: k-Integer-Merging on Shared Memory

Abstract: The k integer-merging problem is to merge k sorted arrays A1, …, Ak into a new sorted array that contains all elements of every Ai. We propose a new parallel algorithm based on exclusive-read exclusive-write (EREW) shared memory. The algorithm runs in O(log n) time using n/log n processors, performs linear work, O(n), and has optimal cost. Furthermore, the total work done by the algorithm is less than that of the best-known previous parallel algorithms for the k-merging problem.

Author 1: Ahmed Y Khedr
Author 2: Ibrahim M Alseadoon

Keywords: Merging; parallel algorithm; shared memory; optimality; linear work

PDF
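The paper's EREW PRAM algorithm is not reproduced here. As a point of reference for the problem it solves, the standard sequential heap-based k-way merge, which costs O(n log k) for n total elements, can be sketched as:

```python
import heapq

def k_merge(arrays):
    """Merge k sorted arrays into one sorted array using a min-heap.

    The heap holds at most one (value, array index, element index) entry
    per input array, so each of the n pops and pushes costs O(log k).
    """
    heap = [(arr[0], i, 0) for i, arr in enumerate(arrays) if arr]
    heapq.heapify(heap)
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)
        out.append(val)
        if j + 1 < len(arrays[i]):
            # Advance within array i and reinsert its next element.
            heapq.heappush(heap, (arrays[i][j + 1], i, j + 1))
    return out

print(k_merge([[1, 4, 7], [2, 5], [3, 6, 8]]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```

The parallel algorithm in the paper improves on this sequential bound by distributing the O(n) total work across n/log n processors.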

Paper 80: An Accessibility Evaluation of the Websites of Top-ranked Hospitals in Saudi Arabia

Abstract: Hospital websites offer the potential to improve healthcare service delivery. They can provide up-to-date information and services to patients at low cost, regardless of users' level of ability. This, in turn, can reduce overcrowding in hospitals and reduce the spread of disease, especially in circumstances like the current COVID-19 pandemic. It is, therefore, imperative for designers to ensure the accessibility of hospital websites to the widest possible range of people. This study evaluates the accessibility of the websites of top-ranked hospitals in Saudi Arabia using AChecker. The sample included the websites of the top ten hospitals from each of the public and private sectors. The results show that only 20% of the evaluated websites conformed fully to the Web Content Accessibility Guidelines 2.0. No significant difference was found in accessibility compliance between the websites of public and private hospitals. The most frequently observed accessibility errors related to the structure of information, non-text content, labels and instructions, headings, and keyboard access. The study concludes that Saudi hospitals are not doing an adequate job of meeting accessibility guidelines, thereby denying many of their web customers the ability to fully use their websites.

Author 1: Obead Alhadreti

Keywords: Accessibility; hospital websites; Saudi Arabia

PDF
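AChecker's actual rule set is not reproduced here, but the kind of automated check such tools run can be illustrated with one of the errors the study reports most often: non-text content. A minimal sketch using Python's standard-library html.parser flags img elements that lack an alt attribute:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Counts <img> elements without an alt attribute, one of the
    WCAG 2.0 'non-text content' failures automated checkers report."""

    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag.
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt += 1

checker = AltTextChecker()
checker.feed('<img src="logo.png"><img src="map.png" alt="Hospital map">')
print(checker.missing_alt)  # 1
```

A full conformance check covers many more success criteria (labels, headings, keyboard access, and so on), which is why dedicated tools such as AChecker are used in studies like this one.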

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org