IJACSA Volume 13 Issue 1

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

Paper 1: Performance Impact of Type-I Virtualization on a NewSQL Relational Database Management System

Abstract: For more than 40 years, the relational database management system (RDBMS) and the atomicity, consistency, isolation, durability (ACID) transaction guarantees provided through its use have been the standard for data storage. The advent of Big Data created a need for new storage approaches that led to NoSQL technologies, which rely on basic availability, soft-state, eventual consistency (BASE) transactions. Over the last decade, NewSQL RDBMS technology has emerged, providing the benefits of RDBMS ACID transaction guarantees and the performance and scalability of NoSQL databases. The reliance on virtualization in IT has continued to grow, but an investigation of current academic literature identified a void regarding the performance impact of virtualization of NewSQL databases. To help address the lack of research in this area, a quantitative experimental study was designed and carried out to answer the central research question, "What is the performance impact of Type-I virtualization on a NewSQL RDBMS?" VMware ESXi virtualization software, the NuoDB RDBMS, and OLTP-Bench software were used to execute a mixed-load benchmark. Performance metrics were collected comparing bare metal and virtualized environments, and the data were analyzed statistically to evaluate five hypotheses related to CPU utilization, memory utilization, disk and network input-output (I/O) rates, and database transactions per second. Findings indicated a negative performance impact on CPU and memory utilization, as well as network I/O rates. Performance improvements were noted in disk I/O rates and database transactions per second.

Author 1: J. Bryan Osborne

Keywords: Database benchmarking; NewSQL; relational database; virtualization
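
As an illustration of the statistical comparison this abstract describes, the sketch below contrasts bare-metal and virtualized transactions-per-second samples with a Welch's t-test; the sample values, the 0.05 significance level, and the choice of Welch's test are assumptions, since the paper's exact procedure is not given here.

    # Compare bare-metal vs. virtualized TPS samples with Welch's t-test.
    # Illustrative sketch only; the arrays are placeholders, not the paper's data.
    import numpy as np
    from scipy import stats

    tps_bare_metal = np.array([1210.0, 1198.5, 1225.3, 1203.1, 1217.8])
    tps_virtualized = np.array([1251.2, 1243.9, 1260.4, 1248.7, 1255.0])

    # Welch's t-test does not assume equal variances between environments.
    t_stat, p_value = stats.ttest_ind(tps_bare_metal, tps_virtualized,
                                      equal_var=False)
    if p_value < 0.05:
        print(f"Significant difference in TPS (t={t_stat:.2f}, p={p_value:.4f})")
    else:
        print("No significant difference detected")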

Paper 2: Knock Knock, Who’s There: Facial Recognition using CNN-based Classifiers

Abstract: Artificial intelligence (AI) has captured the public’s imagination. Performance gains in computing hardware and the ubiquity of data have enabled new innovations in the field. In 2014, Facebook’s DeepFace AI took the facial recognition industry by storm with its splendid performance on image recognition. While newer models exist, DeepFace was the first to achieve near-human level performance. To better understand how this breakthrough performance was achieved, we developed our own facial image detection models. In this paper, we developed and evaluated six Convolutional Neural Network (CNN) models inspired by the DeepFace architecture to explore facial feature identification. This research used the YouTube Faces (YTF) dataset, which includes 621,126 images of 1,595 identities. Three models leveraged pretrained layers from VGG16 and InceptionResNetV2, whereas the other three did not. Our best model achieved an 84.6% accuracy on the test dataset.

Author 1: Qiyu Sun
Author 2: Alexander Redei

Keywords: Face recognition; deep learning; convolutional neural networks; DeepFace
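
A minimal Keras sketch of the pretrained-layer variant described above, with frozen VGG16 features feeding a classifier over the 1,595 YTF identities; the input shape and head sizes are assumptions, not the authors' exact architecture.

    # Transfer-learning sketch: frozen VGG16 base + dense classifier head.
    # Input shape and head sizes are assumptions, not the paper's exact model.
    import tensorflow as tf

    NUM_IDENTITIES = 1595  # identities in the YouTube Faces dataset

    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
    base.trainable = False  # keep the pretrained convolutional features fixed

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(NUM_IDENTITIES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])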

Paper 3: Design of Smart IoT Device for Monitoring Short-term Exposure to Air Pollution Peaks

Abstract: Air pollution spikes cause harm to human beings and the environment. Exposure to air pollution spikes has a demonstrated impact on mental health, especially in children at an early age, and can lead to depression or suicide. Previous research concentrated on air pollution in general, and existing monitoring systems do not consider short-term air pollution peaks. This paper presents the co-design of the hardware and software of an IoT system for real-time monitoring of short-duration air pollution spikes. The system combines two technologies: edge computing to capture short-term exposure and a mathematical distribution model to analyze the captured data. The system establishes the start and end of the spikes for each pollutant. Monte Carlo simulation is used to predict the next spike of each pollutant, and artificial intelligence is applied to immutable data for short-term prediction. Based on the analysis, smart contracts created using blockchain can then support legislators in reducing pollution at its source.

Author 1: Eric Nizeyimana
Author 2: Jimmy Nsenga
Author 3: Ryosuke Shibasaki
Author 4: Damien Hanyurwimfura
Author 5: JunSeok Hwang

Keywords: Short-duration air pollution peak/spike; real-time monitoring; short-term prediction; immutable data; blockchain; AI

Paper 4: Robust Facial Recognition System using One Shot Multispectral Filter Array Acquisition System

Abstract: Face recognition in the visible and near-infrared range has received a lot of attention in recent years. Current multispectral (MS) imaging systems used for facial recognition are based on multiple cameras with multiple sensors. These acquisition systems are normally slow because they take one MS image in several shots, which makes them unable to acquire images in real time or to capture moving scenes. On the other hand, there are now snapshot multispectral imaging systems that integrate a single sensor with a Multispectral Filter Array (MSFA), allowing each acquisition to produce an image over several spectral bands. These systems drastically reduce image acquisition time and are able to capture moving scenes in real time. This paper proposes a study of robust facial recognition using an MSFA acquisition system. For this goal, an MSFA one-shot camera covering the spectral range from 650 nm to 950 nm was used to collect the images, and a robust facial recognition method based on the Fast Discrete Curvelet Transform and a Convolutional Neural Network is proposed. The facial recognition system using the MSFA camera is compared with systems that use multiple cameras. Experimental results show that face recognition systems whose acquisition systems are designed using an MSFA perform more efficiently, reaching an accuracy of 100%.

Author 1: M. Eléonore Elvire HOUSSOU
Author 2: A. Tidjani SANDA MAHAMA
Author 3: Pierre GOUTON
Author 4: Guy DEGLA

Keywords: Multispectral image database; multispectral imaging; multispectral filter array (MSFA); one-shot camera; facial recognition system

Paper 5: Detecting Distributed Denial of Service in Network Traffic with Deep Learning

Abstract: COVID-19 has altered the way businesses throughout the world perceive cyber security. It resulted in a series of unique cyber-crime-related conditions that impacted society and business. Distributed Denial of Service (DDoS) attacks have dramatically increased in recent years. Automated detection of this type of attack is essential to protect business assets. In this research, we demonstrate the use of different deep learning algorithms to accurately detect DDoS attacks. We show the effectiveness of Long Short-Term Memory (LSTM) algorithms in detecting DDoS attacks in computer networks with high accuracy. The LSTM algorithms have been trained and tested on the widely used NSL-KDD dataset. We empirically demonstrate our proposed model achieving high accuracy (~97.37%). We also show the effectiveness of our model in detecting 22 different types of attacks.

Author 1: Muhammad Rusyaidi
Author 2: Sardar Jaf
Author 3: Zunaidi Ibrahim

Keywords: Cybersecurity; cyber-attack; DDoS attack; machine learning; deep learning; recurrent neural networks; long short-term memory
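
A minimal Keras sketch of an LSTM detector of the kind the abstract evaluates; the 41-feature input reflects the standard NSL-KDD encoding, while the layer sizes and training settings are assumptions rather than the paper's configuration.

    # LSTM sketch for NSL-KDD-style records (41 features per connection).
    # Layer sizes and hyperparameters are assumptions, not the paper's setup.
    import tensorflow as tf

    NUM_FEATURES = 41  # NSL-KDD connection features after encoding

    model = tf.keras.Sequential([
        # Treat each record as a length-41 sequence of single values.
        tf.keras.layers.Input(shape=(NUM_FEATURES, 1)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # attack vs. normal
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    # model.fit(X_train, y_train, epochs=10, batch_size=128) on encoded data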

Paper 6: Proficient Networking Protocol for BPLC Network Built on Adaptive Multicast, PNP-BPLC

Abstract: The data link layer networking protocol of the existing broadband power line carrier (BPLC) communication standard IEEE 1901.1 has several problems: multiple primary nodes that receive the beacon send association request messages, the central coordinator (CCO) must send an association confirmation message promptly after receiving each request to confirm the node's role, and untimely CCO confirmation replies reduce the network-access success rate while raising access delay and control overhead. Based on the characteristics of the BPLC network, a proficient networking protocol for broadband PLC built on adaptive multicast (PNP-BPLC) is proposed in this paper. The simulation results show that using adaptive multicast when the CCO replies to the primary nodes can markedly improve the network-access success rate of low-voltage PLC carrier communication nodes, decrease the network access delay, and cut the network control overhead. The results were validated using the OPNET simulation software.

Author 1: Ali Md Liton
Author 2: Zhi Ren
Author 3: Dong Ren
Author 4: Xin Su

Keywords: Powerline communication; network delay; control overhead; central coordinator

Paper 7: Method for Improvement of Ocean Wind Speed Estimation Accuracy by Taking into Account the Relation between Wind Speed and Wind Direction

Abstract: A method for improving ocean wind speed estimation accuracy by taking into account the relation between wind speed and wind direction is proposed. The brightness temperature observed with a satellite-borne microwave radiometer is modified with the microwave-radiometer-derived wind direction proposed by Frank Wentz. Using the modified brightness temperature, a more precise wind speed is estimated. Experiments with AMSR-E and NCEP GDAS data show improvements in wind speed estimation in comparison to the existing method based on the geophysical model of Frank Wentz together with the retrieval algorithm of Akira Shibata.

Author 1: Kohei Arai
Author 2: Kenta Azuma

Keywords: Sea surface wind speed (WS); advanced microwave scanning radiometer for earth observing system (AMSR-E); NCEP Global Data Assimilation System (GDAS); relative wind direction (RWD)

Paper 8: A Study of Security Impacts and Cryptographic Techniques in Cloud-based e-Learning Technologies

Abstract: e-Learning has transformed the perception of teaching and learning in terms of knowledge delivery and knowledge acquisition. Today, e-learning participants access and upload their materials at any time and from any place, since e-learning technologies are typically hosted on the cloud. Cloud computing has become the base platform for the future of e-learning; however, security and privacy remain major concerns. Because they are accessed over the internet, cloud-hosted e-learning technologies suffer from the usual risks to the information security aspects of availability, confidentiality, and integrity. In such a context, data authenticity, privacy, access rights, and digital footprints are vulnerable in the cloud. Research in this domain focuses on specific components of cloud and e-learning without covering a holistic view of applied cryptographic techniques and practical implementation. Hence, targeting the various security aspects and impacts of cloud-based e-learning technologies, this paper reviews the cryptographic techniques used to secure data across the whole end-to-end cloud-based e-learning service spectrum, using a systematic review and an exploratory method. The results define several sets of criteria to evaluate the requirements of cryptographic techniques and propose an implementation framework across an end-to-end cloud-based e-learning architecture using multi-agent software.

Author 1: Lavanya-Nehan Degambur
Author 2: Sheeba Armoogum
Author 3: Sameerchand Pudaruth

Keywords: e-Learning; cloud computing; data management; pseudonymization; data deduplication

Paper 9: Various Antenna Structures Performance Analysis based Fuzzy Logic Functions

Abstract: The antenna is a critical component of a communication system. In wireless communication, the antenna is used for signal transmission and reception over long distances. There are numerous types of antennas, such as wire antennas, traveling wave antennas, reflector antennas, microstrip antennas, and so on. The application of an antenna is determined by its attributes as well as its frequency range of operation. As a result, it is vital to understand the behavior of antennas over a wide range of operation and to select the optimum antenna for the application. The performance parameters of an antenna determine its efficiency; these include VSWR, return loss, directivity, and bandwidth, among others. Antenna analysis is therefore one of the primary areas of focus. In this study, we simulate various antenna types and derive performance parameters such as return loss, directivity, and so on. MATLAB is used to simulate the antennas at various frequencies. When all of the parameters are taken into account, the analysis becomes quite difficult. To handle this ambiguity, we use fuzzy logic to calculate the antenna's performance index. A variety of antenna parameters are fed into a fuzzy inference system, which makes a judgment based on a set of rules. The crisp numbers are turned into fuzzy values using the fuzzification process, then evaluated and defuzzified to obtain the antenna's performance index. The fuzzy inference system is developed in MATLAB, and the overall system is modeled in Simulink.

Author 1: Chafaa Hamrouni
Author 2: Aarif Alutaybi
Author 3: Slim Chaoui

Keywords: Antenna; antenna element; function; fuzzy logic function; fuzzy inference system; Matlab; Simulink

Paper 10: Empirical Analysis Measuring the Performance of Multi-threading in Parallel Merge Sort

Abstract: Sorting is one of the most frequent concerns in computer science, and various sorting algorithms have been invented for specific requirements. As these requirements and capabilities grow, sequential processing becomes inefficient, so algorithms are being enhanced to run in parallel to achieve better performance. Parallel performance differs depending on the degree of multithreading. This study determines the optimal number of threads to use in parallel merge sort and provides a comparative analysis of various degrees of multithreading. The empirical experiment uses a group of devices with various specifications; for each device, it takes a fixed-size data set and executes merge sort sequentially and in parallel, and the lowest average runtime is used to measure efficiency. In all experiments, the single-threaded version is more efficient when the data size is less than 10^5, as it claimed the lowest runtime in 53% of cases compared with the multithreaded executions. The overall average of the experiments shows that either four or eight threads, in 72% and 28% of cases respectively, are most efficient when data sizes exceed 10^5.

Author 1: Muhyidean Altarawneh
Author 2: Umur Inan
Author 3: Basima Elshqeirat

Keywords: Parallel merge sort; sort; multithread; degree of multithreading
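
A sketch of degree-controlled parallel merge sort in the spirit of the study; it uses processes rather than the paper's threads (CPython threads do not run CPU-bound sorting in parallel), and the chunk-then-merge structure is one common way to realize a given degree of parallelism.

    # Parallel merge sort with a configurable degree of parallelism.
    # Sketch only: uses processes (CPython threads won't speed up CPU-bound sorts).
    from concurrent.futures import ProcessPoolExecutor
    from heapq import merge

    def merge_sort(data):
        if len(data) <= 1:
            return data
        mid = len(data) // 2
        return list(merge(merge_sort(data[:mid]), merge_sort(data[mid:])))

    def parallel_merge_sort(data, degree=4):
        # Split into `degree` chunks, sort them in parallel, then merge.
        chunk = max(1, len(data) // degree)
        parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
        with ProcessPoolExecutor(max_workers=degree) as pool:
            sorted_parts = list(pool.map(merge_sort, parts))
        result = []
        for part in sorted_parts:
            result = list(merge(result, part))
        return result

    if __name__ == "__main__":
        import random
        data = [random.random() for _ in range(10**5)]
        assert parallel_merge_sort(data, degree=8) == sorted(data)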

Paper 11: Special Negative Database (SNDB) for Protecting Privacy in Big Data

Abstract: Despite the importance of big data, it faces many challenges. The most important big data challenges are data storage, heterogeneity, inconsistency, timeliness, security, scalability, visualization, fault tolerance, and privacy. This paper concentrates on privacy, which is one of the most pressing issues with big data. As discussed in the literature review, there are numerous methods for safeguarding privacy in big data. This paper introduces an efficient technique called the Special Negative Database (SNDB) for protecting privacy in big data. SNDB is proposed to avoid the drawbacks of all previous techniques. SNDB is based on deceiving malicious users and hackers by replacing only the sensitive attribute with its complement, so that such users cannot differentiate between the original data and the data after applying this technique.

Author 1: Tamer Abdel Latif Ali
Author 2: Mohamed Helmy Khafagy
Author 3: Mohamed Hassan Farrag

Keywords: Big data; big data challenges; privacy violations; privacy-preserving techniques; special negative database; data integrity
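
The core idea, replacing only the sensitive attribute with its complement so that the stored value is meaningless to an intruder yet trivially invertible by authorized users, can be sketched as follows; the record fields and the fixed 16-bit width are illustrative assumptions, not the paper's schema.

    # SNDB-style sketch: store the complement of the sensitive attribute only.
    # Field names and the 16-bit width are illustrative assumptions.
    WIDTH = 16  # fixed bit-width assumed for the sensitive attribute

    def complement(value, width=WIDTH):
        return value ^ ((1 << width) - 1)  # flip every bit

    def protect(record):
        # Non-sensitive attributes are kept as-is; only "salary" is complemented.
        out = dict(record)
        out["salary"] = complement(record["salary"])
        return out

    def recover(record):
        out = dict(record)
        out["salary"] = complement(record["salary"])  # complement is its own inverse
        return out

    row = {"name": "Alice", "dept": "IT", "salary": 4200}
    stored = protect(row)
    assert recover(stored) == row  # authorized users can invert the transform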

Paper 12: Drug Sentiment Analysis using Machine Learning Classifiers

Abstract: In recent times, one of the most emerging sub-dimensions of natural language processing is sentiment analysis, which refers to analyzing opinion on a particular subject from plain text. Drug sentiment analysis has become very significant, as classifying medicines based on their effectiveness through analyzing reviews from users can assist potential future consumers in gaining knowledge and making better decisions about a particular drug. The objective of this research is to measure the effectiveness level of a particular drug. Currently, most text mining research is based on unsupervised machine learning methods that cluster data. When supervised learning methods are used for text mining, the usual primary concern is to classify the data into two classes, and the lack of technical terms in similar datasets makes the categorization even more challenging. The proposed research focuses on finding keywords through tokenization and lemmatization so that better accuracy can be achieved when categorizing drugs based on their effectiveness using different algorithms. Such categorization can be instrumental for treating illness as well as improving one's health and well-being. Four machine learning algorithms have been applied for binary classification and one for multiclass classification on the drug review dataset acquired from the UCI machine learning repository. The machine learning algorithms used for binary classification are the naive Bayes classifier, random forest, support vector classifier (SVC), and multilayer perceptron; in addition, linear SVC was used for multiclass classification. Results obtained from these four classifier algorithms have been analyzed to evaluate their performance, and random forest proved to perform best among them. However, multiclass classification was found to have low performance when applied to natural language processing; the applied linear SVC algorithm performed better for class 2, with an AUC of 0.82.

Author 1: Mohammed Nazim Uddin
Author 2: Md. Ferdous Bin Hafiz
Author 3: Sohrab Hossain
Author 4: Shah Mohammad Mominul Islam

Keywords: Machine learning algorithms; natural language processing; drug sentiment analysis; text mining
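
A minimal scikit-learn sketch of the binary-classification stage described above, with TF-IDF standing in for the paper's tokenization and lemmatization pipeline; the inline reviews are placeholders for the UCI drug review dataset.

    # Sketch: TF-IDF features + two of the abstract's binary classifiers.
    # The inline reviews are placeholders for the UCI drug review dataset.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.pipeline import make_pipeline

    reviews = ["worked wonders for my migraines",
               "severe nausea and no improvement",
               "effective with mild side effects",
               "made my symptoms much worse"]
    effective = [1, 0, 1, 0]  # 1 = effective, 0 = not effective

    for clf in (MultinomialNB(), RandomForestClassifier(n_estimators=100)):
        model = make_pipeline(TfidfVectorizer(stop_words="english"), clf)
        model.fit(reviews, effective)
        print(type(clf).__name__,
              model.predict(["no side effects, very effective"]))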

Paper 13: Effective Malware Detection using Shapely Boosting Algorithm

Abstract: Malware constitutes a prime exploitation tool used to attack vulnerabilities in software that lead to security threats. The volume of malware generated as an exploitation tool calls for effective methods to detect it. Machine learning methods are effective in detecting malware, and their effectiveness can be increased by analyzing how the features that build a model contribute to the detection of malware. The model can be made robust by gaining insight into how the features contribute for each sample fed to the trained model. In this paper, a boosting machine learning model based on LightGBM is enhanced with Shapley values to determine the contribution of the top nine features to correct classifications (true positives and true negatives) and to misclassifications (false positives and false negatives). This insight into the model can be used for effective and robust malware detection and to avoid wrong detections such as false positives and false negatives. Comparing the top features and their Shapley-value contributions for each category of sample gives insight and inductive learning into the model, revealing the reasons for misclassification. Such inductive learning can be transformed into rules, and the predictions of the trained model can be re-evaluated with these rules to ensure effective and robust prediction and to avoid misclassification. The performance of the models ranges from 97.45 at minimum to 98.48 at maximum under 10-fold cross-validation.

Author 1: Rajesh Kumar
Author 2: Geetha S

Keywords: Artificial intelligence; machine learning; malware detection; Shapley value; decision plot; waterfall plot
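
A minimal sketch of pairing a LightGBM classifier with Shapley-value explanations, as the abstract describes; the synthetic nine-feature data stands in for the paper's malware feature set.

    # LightGBM + SHAP sketch: per-sample, per-feature contribution values.
    # Synthetic data stands in for the paper's malware features.
    import numpy as np
    import lightgbm as lgb
    import shap

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 9))            # nine features, as in the abstract
    y = (X[:, 0] + X[:, 3] > 0).astype(int)  # synthetic benign/malware labels

    model = lgb.LGBMClassifier(n_estimators=200).fit(X, y)

    # TreeExplainer yields one Shapley value per feature per sample, which can
    # be inspected separately for true/false positives and negatives.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)
    # For binary classification shap may return one array or a list of two.
    vals = shap_values[1] if isinstance(shap_values, list) else shap_values
    print("mean |contribution| per feature:", np.abs(vals).mean(axis=0))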

Paper 14: Using a Rule-based Model to Detect Arabic Fake News Propagation during Covid-19

Abstract: Since the emergence of Covid-19, both factual and false information about the new virus has been disseminated. Fake news harms societies and must be combated. This research aims to identify Arabic fake news tweets and classify them into six categories: entertainment, health, politics, religious, social, and sports. The study also aims to uncover patterns in the spread of Arabic fake news associated with the Covid-19 pandemic. The researchers created an Arabic dictionary and used text classification based on a rule-based system to detect and categorize fake news. A dataset consisting of 5 million tweets was analyzed. The developed model achieves an overall accuracy of 78.1%, with 70% precision and 98% recall, and detected more than 26,006 fake news tweets. Interestingly, we found an association between the number of fake news tweets and dates: as more information and knowledge about Covid-19 became available over time, people's awareness increased while the number of fake news tweets decreased. The categorization of false news indicates that the social category was highest in all Arab countries except Palestine, Qatar, Yemen, and Algeria. Conversely, fake news in the entertainment category showed the weakest dissemination in most Arab countries.

Author 1: Fatimah L. Alotaibi
Author 2: Muna M. Alhammad

Keywords: Fake news; Covid-19; text classification; rule-based system; trends

Paper 15: Hybrid Deep Neural Network Model for Detection of Security Attacks in IoT Enabled Environment

Abstract: The extensive use of Internet of Things (IoT) appliances has greatly contributed to the growth of smart cities. A smart city deploys IoT-enabled applications, communications, and technologies to improve the quality of life and people’s wellbeing, raise the quality of services for the service providers, and increase operational efficiency. Nevertheless, the expansion of smart city networks has heightened exposure to cyber security attacks and threats. Consequently, it is important to develop system models that prevent attacks and protect IoT devices from hazards. This paper presents a novel deep hybrid attack detection method. The input data is first subjected to a preprocessing phase in which it is normalized. From the preprocessed data, statistical and higher-order statistical features are extracted. Finally, the extracted features are fed to a hybrid deep learning model that detects the presence of an attack. The proposed hybrid classifier combines a Convolutional Neural Network (CNN) and a Deep Belief Network (DBN). To make the detection more precise and accurate, the training of the CNN and DBN is carried out using the Seagull Adopted Elephant Herding Optimization (SAEHO) model to tune the optimal weights.

Author 1: Amit Sagu
Author 2: Nasib Singh Gill
Author 3: Preeti Gulia

Keywords: Internet of things; deep learning; optimization; convolutional neural network; security attack detection

Paper 16: A Node Monitoring Agent based Handover Mechanism for Effective Communication in Cloud-Assisted MANETs in 5G

Abstract: As nodes often join or leave the network, communication between the cloud and the MANET remains unreliable in a Cloud-Assisted MANET. A connection failure presents several challenges to the network, in particular the handover issue and the high energy consumption of route re-establishment in D2D (device-to-device) communication networks. To address this problem of D2D mobile communication in 5G, we propose a Node Monitoring Agent Based Handover Mechanism (NMABHM). To improve the network's efficiency, we use the K-means algorithm for clustering and cluster head selection in a hybrid MANET and maintain a backup routing table based on the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to quickly recover the route. Additionally, a Node Monitoring Agent (NMA) is introduced to handle the handover issue if a node moves out of range during the communication phase. The NMABHM-based handover mechanism is proposed with group mobility over a cluster-based architecture involving agents. The simulation results indicate that, in terms of lower energy consumption and higher throughput, the proposed mechanism is more effective than current routing mechanisms.

Author 1: B. V.S Uma Prathyusha
Author 2: K. Ramesh Babu

Keywords: MANET; clustering; cloud computing; 5G Wireless networks; cloud-assisted MANET

Paper 17: Design of an Intelligent Hydroponics System to Identify Macronutrient Deficiencies in Chili

Abstract: Nutrient content is important for plants, and a lack of macronutrients causes plant damage. Several macronutrient deficiencies exhibit similar visual characteristics that are difficult for ordinary farmers to identify. The collaboration between computer vision technology and IoT has become a non-destructive method for nutrient monitoring and control, including in hydroponic systems. Computer vision plays a role in processing plant image data based on specific characteristics; however, the analysis of a single characteristic cannot represent plant health. In addition, knowing the percentage of a macronutrient deficiency is also needed to support precision agriculture systems. Therefore, we propose a Multi Layer Perceptron architecture that can perform multiple tasks, namely identification and estimation. The optimal architecture is also sought based on combinations of three features: texture, color, and leaf shape. Based on the analysis and design, our proposed model has high potential for identifying and estimating macronutrient deficiency at the same time and can be applied to support precision agriculture in Indonesia.

Author 1: Deffa Rahadiyan
Author 2: Sri Hartati
Author 3: Wahyono
Author 4: Andri Prima Nugroho

Keywords: Multi layer perceptron; internet of things; feature combination; leaf image; nutrient deficiency

Paper 18: Automatic Fake News Detection based on Deep Learning, FastText and News Title

Abstract: Fake news has quickly become a longstanding issue affecting individuals and the public and private sectors. This major challenge of the connected and modern world can cause severe and real damage, such as manipulating public opinion, damaging reputations, contributing to losses in stock market value, and posing many risks to global health. With the fast spread of online misinformation, manually checking fake news becomes an ineffective solution: it is difficult and takes a long time. Improvements in Deep Learning Networks (DLN) can support the classical processes of fake news spotting with a high degree of accuracy and efficiency. Key improvement strategies include optimizing the Word Embedding Layer (WEL) and finding relevant fake news predictive features. In this context, and based on six DLN architectures, the FastText process as the WEL, and the Inverted Pyramid as the news article pattern (IPP), the present paper focuses on assessing the first news article feature hypothesized to affect fake news prediction performance: the news title. By assessing the impact that the Embedding Vector Size (EVS), Window Size (WS), and Minimum Frequency of Words (MFW) in the news titles corpus can have on DLN, the experiments carried out in this paper show that the news title feature and the FastText process can significantly improve DLN fake news detection, with accuracy rates exceeding 98%.

Author 1: Youssef Taher
Author 2: Adelmoutalib Moussaoui
Author 3: Fouad Moussaoui

Keywords: Fake news; automatic detection; deep learning; FastText; news title

Paper 19: Determine the Level of Concentration of Students in Real Time from their Facial Expressions

Abstract: In teaching environments, students' facial expressions give the traditional classroom teacher a clue for gauging their level of concentration in the course. With the rapid development of information technology, e-learning is taking off because students can learn anywhere and anytime they feel comfortable, which also opens the possibility of self-learning. Analyzing student concentration can help improve the learning process. When a student is working alone on a computer in an e-learning environment, this task is particularly challenging to accomplish: due to the distance between the teacher and the students, face-to-face communication is not possible. This article proposes using transfer learning and data augmentation techniques to determine the concentration level of learners from their facial expressions in real time. We found that expressed emotions correlate with students' concentration, and we designed three distinct levels of concentration (highly concentrated, nominally concentrated, and not at all concentrated).

Author 1: Bouhlal Meriem
Author 2: Habib Benlahmar
Author 3: Mohamed Amine Naji
Author 4: Elfilali Sanaa
Author 5: Kaiss Wijdane

Keywords: Emotion recognition; level of concentration; transfer learning; data augmentation

Paper 20: Medical Image Cryptanalysis using Adaptive, Lightweight Neural Network based Algorithm for IoT based Secured Cloud Storage

Abstract: Modern medical systems generate large amounts of data, such as computerized patient records and digital medical pictures, which must be kept securely for future reference, yet existing storage technologies are not capable of storing large amounts of data efficiently. Secure medical storage is a key and overriding topic of technical, social, and medical significance, as well as a subject of general interest. Cars, consumer devices, essential Internet of Things industry segments, sensors, and other everyday objects are being fused with internet connectivity and solid information capabilities that promise to alter the way we operate and live in the future. The suggested work demonstrates a lightweight symmetric-key technique for the secure transmission of images and text, which uses an image encryption system and a reversible data hiding system to demonstrate the program's implementation. Cloud storage services can meet this demand thanks to features such as flexibility and availability, and cloud computing is enabled by internet innovation as well as cutting-edge electrical equipment. However, even though medical images may be stored on the cloud, most cloud service providers store client data in plain text, so cloud users must take responsibility for protecting medical data as part of their overall strategy. Because attackers' increasing computing power and creativity are opening up more and more attack avenues, most existing image encryption schemes are vulnerable to chosen-plaintext attacks. This article presents an image encryption method inspired by an Adaptive IoT-based Hopfield Neural Network (AIHNN) that can resist such assaults while optimizing and improving the system through continuous learning and updating.

Author 1: M V Narayana
Author 2: Ch Subba Lakshmi
Author 3: Rishi Sayal

Keywords: Cloud storage; IoT; medical image; neural network

Paper 21: Analysis of Logistics Service Quality and Customer Satisfaction during COVID-19 Pandemic in Saudi Arabia

Abstract: Logistics companies' success is inextricably linked to the quality of their services, particularly when dealing with customer issues. Nowadays, social media is the first place that users turn to in order to express their thoughts on services or to communicate with customer service representatives to resolve problems. Businesses can retrieve and analyze these data to gain a better understanding of the factors that affect their operations, both positively and negatively. During the COVID-19 pandemic, we conducted a sentiment analysis to assess customer satisfaction with logistics services in Saudi Arabia's private and public sectors. Using a lexicon-based approach, 67,124 tweets were collected and classified as positive, negative, or neutral. A support vector machine (SVM) model was used for classification, with an average accuracy of 82%. Following that, we conducted a thematic analysis of negative opinions in order to identify the factors that influenced the effectiveness and quality of logistics services. The findings reveal five negative themes: delay, customer service issues, damaged shipments, delivery issues, and hidden prices. Finally, we make suggestions to improve the efficiency and quality of logistics services.

Author 1: Amjaad Bahamdain
Author 2: Zahyah H. Alharbi
Author 3: Muna M. Alhammad
Author 4: Tahani Alqurashi

Keywords: Logistics services; sentiment analysis; lexicon-based approach; SVM; sentiment classification
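
A minimal sketch of the lexicon-based labeling step the abstract mentions, before the SVM classification stage; the miniature English lexicon and example tweets are illustrative placeholders for the study's Arabic data.

    # Lexicon-based sentiment sketch: count polarity hits, label by the sign.
    # The miniature lexicon and tweets are illustrative placeholders.
    POSITIVE = {"fast", "excellent", "reliable", "thanks"}
    NEGATIVE = {"delay", "damaged", "lost", "expensive"}

    def label(tweet):
        words = tweet.lower().split()
        score = (sum(w in POSITIVE for w in words)
                 - sum(w in NEGATIVE for w in words))
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    print(label("excellent service, fast delivery"))      # positive
    print(label("two weeks of delay and a damaged box"))  # negative

Labels produced this way can then serve as training targets for a supervised classifier such as the SVM the study uses.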

Paper 22: Investigate the Ensemble Model by Intelligence Analysis to Improve the Accuracy of the Classification Data in the Diagnostic and Treatment Interventions for Prostate Cancer

Abstract: The class imbalance problem has become a major issue in data mining, as imbalanced data appears in daily applications, especially in health care. This research investigates the application of an ensemble model with intelligent analysis to improve the classification accuracy of imbalanced prostate cancer data sets. The primary requirements for this study included the datasets, relevant pre-processing tools to identify missing values, models for attribute selection and cross-validation, a data resampling framework, and intelligent algorithms for base classification. Additionally, the ensemble model and meta-learning algorithms were acquired in preparation for performance evaluation by embedding feature-selection capabilities into the classification model. The experimental results lead to the conclusion that applying an ensemble learning algorithm to resampled data sets provides highly accurate classification results with the single classifier J48. The study further suggests that gain ratio and ranker techniques are highly effective for attribute selection in the analysis of prostate cancer data. The lowest error rate and optimal performance accuracy in the classification of imbalanced prostate cancer data are achieved when the AdaBoost algorithm is combined with the single classifier J48.

Author 1: Abdelrahman Elsharif Karrar

Keywords: Ensemble model; intelligence analysis; classification of imbalanced data; prostate cancer
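
A minimal sketch of resampling followed by boosting, in the spirit of the reported best combination; scikit-learn's DecisionTreeClassifier stands in for J48 (a C4.5 implementation), which is an assumption, and synthetic imbalanced data replaces the prostate cancer dataset.

    # SMOTE resampling + AdaBoost over a decision tree (stand-in for J48).
    # Synthetic imbalanced data replaces the prostate cancer dataset.
    from imblearn.over_sampling import SMOTE
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=600, weights=[0.9, 0.1],
                               random_state=0)  # 9:1 class imbalance
    X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)  # balance classes

    model = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                               n_estimators=100, random_state=0)
    print(cross_val_score(model, X_res, y_res, cv=10).mean())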

Paper 23: Dynamic Deployment of Road Side Units for Reliable Connectivity in Internet of Vehicles

Abstract: The Internet of Vehicles (IoV) promises ubiquitous information exchange among moving vehicles and reliable connectivity to the internet, and it is becoming more and more popular as the number of connected vehicles increases. However, the existing vehicular communication infrastructure cannot guarantee reliable connectivity, because all-time information exchange for every travelling vehicle is not assured due to the lack of the required number of roadside units (RSUs), especially along intercity highways. This study explores the cost-effective dynamic deployment of RSUs based upon road traffic density, while ensuring Line of Sight (LOS) among RSUs and cellular network antennas. Unmanned aerial vehicles (UAVs) have the potential to serve as economical dynamic RSUs. Therefore, the use of UAVs along the roadside for providing reliable and ubiquitous information exchange among vehicles is proposed. The UAVs will be deployed along the roadside, and their placement will be changed dynamically based upon the current traffic density in order to ensure all-time connectivity with the travelling vehicles and the other UAVs and cellular network antennas. The reliability of the proposed network is tested in terms of signal strength and packet delivery ratio (PDR) using simulation.

Author 1: Abdulwahab Ali Almazroi
Author 2: Muhammad Ahsan Qureshi

Keywords: VANET; roadside unit; internet of vehicles; social internet of vehicles; unmanned aerial vehicle; line of sight vehicular communication

Paper 24: Customer Satisfaction with Digital Wallet Services: An Analysis of Security Factors

Abstract: This study aimed to determine an efficient framework that caters to security and consumer satisfaction in digital wallet systems. A quantitative online survey was carried out to test whether six factors (transaction speed, authentication, encryption mechanisms, software performance, privacy details, and information provided) positively or negatively impact customer satisfaction. The questionnaire was divided into two sections: the respondents’ demographic data and a survey on the security factors that influence customer satisfaction. The questionnaires were distributed to professors and students of the National University of Malaysia, and a sample of 300 respondents undertook the survey. The results suggest that many respondents agreed that the stated security factors influenced their satisfaction when using digital wallets. Previous studies indicated that financial security, privacy, system security, cybercrime, and trust impact online purchase intention; the framework proposed in this research explicitly covers the security factors of the digital wallet. This study may help digital wallet providers understand the customer's perspective on digital wallet security aspects, motivating providers to implement appropriately designed regulations that will attract customers to digital wallet services. Formulating appropriate security regulations will generate long-term value, leading to greater digital wallet adoption rates.

Author 1: Dewan Ahmed Muhtasim
Author 2: Siok Yee Tan
Author 3: Md Arif Hassan
Author 4: Monirul Islam Pavel
Author 5: Samiha Susmit

Keywords: Cashless transaction; electronic payment; internet security; consumer satisfaction; e-commerce

Paper 25: Towards a Strategic IT GRC Framework for Healthcare Organizations

Abstract: The rapidly changing healthcare market requires healthcare institutions to adjust their operations to address regulatory, strategic, and other risks. Healthcare organizations use a wide range of IT systems producing large amounts of sensitive and confidential data, yet few tools are available to measure the data governance activities of healthcare institutions and to align healthcare data management with legislation. The Governance, Risk, and Compliance (GRC) model focuses on integrating these capabilities to achieve organizational goals, and corporate governance is crucial for protecting the healthcare system from risks. A modified version of the model that includes strategy, process, technology, and people, as well as legal and business requirements, was developed to analyze the factors affecting IT GRC implementation in healthcare organizations. Although about 48% of participants reported that their organizations had implemented IT GRC programs, 16% stated that they are considering implementing IT GRC programs soon. In almost 71% of healthcare organizations, IT governance, risk management, and compliance are integrated. Among the factors influencing the implementation of IT GRC programs in Saudi healthcare organizations, the legal context ranked as the most critical, followed by the process, strategy, technology, business, and finally people contexts. This study shows that healthcare organizations must assess various factors for the effective implementation of IT GRC activities.

Author 1: Fawaz Alharbi
Author 2: Mohammed Nour A. Sabra
Author 3: Nawaf Alharbe
Author 4: Abdulrahman A. Almajed

Keywords: Information technology; strategic; healthcare; governance; compliance

Paper 26: Ambulatory Monitoring of Maternal and Fetal using Deep Convolution Generative Adversarial Network for Smart Health Care IoT System

Abstract: With the increase in the number of high-risk pregnancies, it is important to monitor the health of the fetus during pregnancy. Major advances in the field have led to the development of intelligent automation systems that enable clinicians to predict and monitor Maternal and Fetal Health (MFH) with the aid of the Internet of Things (IoT). This paper provides a solution for monitoring high-risk MFH based on IoT sensors, data-analysis-based feature extraction, and an intelligent system built on a Deep Convolutional Generative Adversarial Network (DCGAN) classifier. Various clinical indicators, such as maternal and fetal heart rate, oxygen saturation, blood pressure, and maternal uterine tonus, are monitored continuously. Many data sources produce large amounts of data in different formats and at different rates. The smart health analytics system extracts several features and measures linear and non-linear dimensions. Finally, a DCGAN is proposed as a predictive mechanism for the simultaneous classification of MFH status, considering more than four possible outcomes. The results show that the proposed system for ambulatory MFH monitoring is a practical IoT-based solution.

Author 1: S. Venkatasubramanian

Keywords: Deep convolutional generative adversarial network; fetal health monitoring; high-risk pregnancies; internet of things; smart healthcare system

Paper 27: Periapical Radiograph Texture Features for Osteoporosis Detection using Deep Convolutional Neural Network

Abstract: Research on osteoporosis examination using dental radiographic images is currently increasing rapidly. Many researchers have used various methods and subject data, indicating that osteoporosis has become a widespread disease that should be studied more deeply. This study proposes a deep Convolutional Neural Network architecture as a texture feature of dental periapical radiographs for osteoporosis detection. The subjects of this study are postmenopausal Javanese women aged over 40, together with their Bone Mineral Density measurements. The proposed model is divided into two stages: 1) image acquisition and RoI selection; 2) feature extraction and classification. Various experiments with different numbers of convolution layers (3 to 6), various input block sizes, and other hyperparameters were used to find the best model. The best model is obtained when the input image size is greater than 100 and less than 150 with five convolution layers, together with the other hyperparameters epochs=100, dropout=0.5, learning rate=0.0001, batch size=16, and the Adam optimizer. The validation and testing accuracy achieved by the best model are 98.10% and 92.50%, respectively. The research shows that bigger images provide additional information about trabecular patterns in the normal, osteopenia, and osteoporosis classes, so the proposed method, using a deep convolutional neural network as a textural feature of the periapical radiograph, achieves good performance for osteoporosis detection.

Author 1: Khasnur Hidjah
Author 2: Agus Harjoko
Author 3: Moh. Edi Wibowo
Author 4: Rurie Ratna Shantiningsih

Keywords: Osteoporosis; dental periapical radiograph; convolutional neural network; texture features; bone mineral density

Paper 28: Prediction of Diabetic Obese Patients using Fuzzy KNN Classifier based on Expectation Maximization, PCA and SMOTE Algorithms

Abstract: Diabetes is a long-term disease. Inappropriate blood sugar control in diabetic patients can lead to serious issues like kidney and heart diseases, and obesity is widely regarded as a major risk factor for type 2 diabetes. In this research, a model is proposed to predict diabetic obese patients based on the Expectation Maximization, PCA, and SMOTE algorithms in the preprocessing and feature extraction phases, with a fuzzy KNN classifier in the prediction phase. The model was applied to a real dataset, and the accuracy of the prediction results reflects the positive effect of the preprocessing techniques. The accuracy of the proposed model is 95.97%, outperforming other models applied to the same dataset.

Author 1: Ibrahim Eldesouky Fattoh
Author 2: Soha Safwat

Keywords: KNN classifier; SMOTE; PCA; diabetic obese patients
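
A minimal NumPy sketch of the fuzzy k-NN decision rule used in the prediction phase, with Keller-style inverse-distance memberships; the toy data and parameter choices (k=3, m=2) are assumptions, and the EM, PCA, and SMOTE preprocessing is omitted.

    # Fuzzy k-NN sketch: class memberships weighted by inverse distance.
    # Toy data; preprocessing (EM imputation, PCA, SMOTE) is omitted.
    import numpy as np

    def fuzzy_knn_predict(X_train, y_train, x, k=3, m=2):
        d = np.linalg.norm(X_train - x, axis=1)
        nn = np.argsort(d)[:k]  # indices of the k nearest neighbours
        w = 1.0 / np.maximum(d[nn], 1e-9) ** (2 / (m - 1))
        memberships = {}
        for cls in np.unique(y_train):
            mask = (y_train[nn] == cls)
            memberships[cls] = w[mask].sum() / w.sum()
        return max(memberships, key=memberships.get), memberships

    X = np.array([[85, 30], [90, 35], [70, 22], [65, 20]], dtype=float)
    y = np.array([1, 1, 0, 0])  # 1 = diabetic obese, 0 = not
    label, mu = fuzzy_knn_predict(X, y, np.array([80.0, 28.0]))
    print(label, mu)  # predicted class plus its fuzzy membership degrees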

Paper 29: Evaluation Optimal Prediction Performance of MLMs on High-volatile Financial Market Data

Abstract: The present study evaluates the prediction performance of multiple machine learning models (MLMs) on highly volatile financial market data sets from 2007 to 2020. The linear and nonlinear empirical data sets comprise stock price returns of the Karachi Stock Exchange (KSE) 100-Index of Pakistan and the exchange rates of the Pakistani Rupee (PKR) against five major currencies (USD, Euro, GBP, CHF, and JPY). Support vector regression (SVR), random forest (RF), and a machine learning linear regression model (ML-LRM) are evaluated for comparative prediction performance. The findings demonstrate that SVR gives comparatively optimal prediction performance on group 1, while RF gives the best prediction performance on group 2. The study concludes that the RF algorithm is most appropriate for nonlinear approximation and evaluation, and the SVR algorithm is most useful for high-frequency time-series estimation. The present study contributes by exploring comparatively optimal machine learning models on data sets of differing natures. This empirical study should be helpful for finance and machine learning students, data analysts, and researchers, especially those who deploy machine learning approaches for financial analysis.

Author 1: Yao HongXing
Author 2: Hafiz Muhammad Naveed
Author 3: Muhammad Usman Answer
Author 4: Bilal Ahmed Memon
Author 5: Muhammad Akhtar

Keywords: Support vector regression; random forest; machine learning-linear regression model; optimal prediction performance; currencies exchange rates; stock price returns

Paper 30: Optimize and Secure Routing Protocol for Multi-hop Wireless Network

Abstract: A Multi-hop Wireless Network (MWN) requires wireless nodes that communicate over a wireless channel, so selecting optimal paths between communicating nodes is a major challenge. Many researchers are focusing on this topic and have proposed routing protocols that help the nodes learn multi-hop paths. Multi-hop wireless networks are used for several types of applications, such as military, medical care, and national security. These applications are important and critical, so they require a certain level of performance and security during data communication. Securing the transmission of data in a multi-hop network is a challenge, since the devices have limited resources like memory and battery. In this paper, we propose an optimal and secure routing protocol. The main goal of this proposal is to improve the performance and the security of such networks by selecting a secure route between the source and its target destination. To secure the data transmission phase, we propose to create a key shared between the source and the destination. Since the devices have limited energy, we also take into consideration the energy of the intermediate nodes of the selected route. Extensive simulations are performed using the Network Simulator (NS2) to validate the proposed protocol. The proposal is compared with the secured Ad-Hoc On-demand Distance Vector (SAODV) protocol in terms of end-to-end delay, overhead, and number of compromised devices.

Author 1: Salwa Othmen
Author 2: Wahida Mansouri
Author 3: Somia Asklany
Author 4: Wided Ben Daoud

Keywords: Multi-hop wireless network; routing protocol; Diffie-Hellman; Weil Pairing; NS2
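
Since the keywords point to Diffie-Hellman, the sketch below shows how a source and destination could derive a shared key without ever transmitting it; the toy-sized prime is for illustration only, and the paper's Weil-pairing construction is not reproduced here.

    # Diffie-Hellman sketch for the source/destination shared key.
    # Toy 64-bit prime (2**64 - 59); real use needs a >= 2048-bit group.
    import secrets

    P = 2**64 - 59  # far too small for real deployments
    G = 2

    a = secrets.randbelow(P - 2) + 2  # source's private exponent
    b = secrets.randbelow(P - 2) + 2  # destination's private exponent
    A = pow(G, a, P)                  # source sends A over the route
    B = pow(G, b, P)                  # destination sends B back

    # Both ends derive the same shared secret without transmitting it.
    assert pow(B, a, P) == pow(A, b, P)
    shared_key = pow(B, a, P)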

Paper 31: A Machine Learning Approach to Weather Prediction in Wireless Sensor Networks

Abstract: Weather prediction is a key requirement for saving lives from environmental disasters like landslides, earthquakes, floods, forest fires, and tsunamis. Disaster monitoring and issuing forewarnings to people living in disaster-prone places can help protect lives. In this paper, a Multiple Linear Regression (MLR) model is proposed for humidity prediction. After exploratory data analysis and outlier treatment, the multiple linear regression technique was applied to predict humidity. The Intel Lab dataset, collected from a wireless sensor network (an advanced networking technology at the frontier of computer networks) of 54 deployed sensors, is used to build the solution. Inputs to the model are various meteorological variables for predicting the weather precisely. The model is evaluated using the metrics Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE). In the experiments, the applied method generated results with a minimum error of 11%; hence the model is statistically significant and its predictions are more reliable than those of other methods.

Author 1: Suvarna S Patil
Author 2: B. M. Vidyavathi

Keywords: Data mining; wireless sensor network; multiple linear regression; outliers treatment; r-square; adjusted r-square
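
A minimal sketch of the multiple linear regression and the three error metrics named above; the random data stands in for the Intel Lab sensor readings, and MAPE is computed by hand for portability across scikit-learn versions.

    # Multiple linear regression sketch with MAE, RMSE, and MAPE evaluation.
    # Random data stands in for the Intel Lab meteorological inputs.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error, mean_squared_error

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 3))  # e.g. temperature, light, voltage
    humidity = 40 + 3*X[:, 0] - 2*X[:, 1] + rng.normal(scale=2, size=1000)

    X_tr, X_te, y_tr, y_te = train_test_split(X, humidity, random_state=1)
    pred = LinearRegression().fit(X_tr, y_tr).predict(X_te)

    mae = mean_absolute_error(y_te, pred)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    mape = np.mean(np.abs((y_te - pred) / y_te)) * 100
    print(f"MAE={mae:.2f}  RMSE={rmse:.2f}  MAPE={mape:.1f}%")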

Paper 32: Detecting Diabetic Retinopathy in Fundus Images using Combined Enhanced Green and Value Planes (CEGVP) with k-NN

Abstract: Diabetic Retinopathy (DR) is a disease that damages the blood vessels of the retina, especially in patients with high, uncontrolled blood sugar levels, and may lead to complications in the eyes or loss of vision. Thus, early detection of DR is essential to avoid complete blindness. Automatic screening through computational techniques would help diagnose the disease more accurately. Traditional DR detection techniques identify abnormalities such as microaneurysms, hemorrhages, hard exudates, and soft exudates in diabetic retinopathy images individually. When these abnormalities occur in combination, it becomes difficult to predict them, and the accuracy of individual detection (traditional 4-class classification) decreases. Hence, there is a need for separate combinational classes (16-class classification) that help classify these abnormalities in a group or one by one. The objective of our work is to develop an automated DR prediction scheme that classifies the abnormalities either individually or in combination in retinal fundus images. The proposed system uses Combined Enhanced Green and Value Planes (CEGVP) for processing the fundus images, Principal Component Analysis (PCA) for feature extraction, and k-nearest neighbor (k-NN) for classification of DR. The suggested technique yields an average accuracy of 97.11 percent using a k-NN classifier. This is the first time that a 16-class classification is introduced that precisely gives the ability and flexibility to map the combinational complexity in a single step. The proposed method can assist ophthalmologists in efficiently detecting the abnormalities and starting the diagnosis on time.

Author 1: Minal Hardas
Author 2: Sumit Mathur
Author 3: Anand Bhaskar

Keywords: Combined enhanced green and value plane; diabetic retinopathy; fundus image; image processing; k-NN; principal component analysis
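
A minimal OpenCV sketch of the plane combination the CEGVP name suggests, blending an enhanced green plane from RGB with the value plane from HSV; the CLAHE enhancement, the equal-weight blend, and the file path are assumptions, not the authors' exact procedure.

    # Sketch: combine an enhanced green plane (RGB) with the value plane (HSV).
    # CLAHE and the 50/50 blend are assumptions, not the paper's exact steps.
    import cv2

    img = cv2.imread("fundus.jpg")  # hypothetical input path
    assert img is not None, "provide a fundus image at this path"

    green = img[:, :, 1]  # green channel (OpenCV loads as BGR)
    value = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 2]

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    green_enhanced = clahe.apply(green)

    combined = cv2.addWeighted(green_enhanced, 0.5, value, 0.5, 0)
    cv2.imwrite("cegvp_plane.png", combined)  # input to the PCA + k-NN stages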

Paper 33: A Novel Secure Transposition Cipher Technique using Arbitrary Zigzag Patterns

Abstract: Symmetric cipher cryptography is an efficient technique for encrypting bits and letters to maintain secure communication and data confidentiality. Compared to asymmetric cipher cryptography, a symmetric cipher has the speed advantage required for various real-time applications. Yet, with the distribution of micro-devices and the wider utilization of the Internet of Things (IoT) and Wireless Sensor Networks (WSN), lightweight algorithms are required to operate on such devices. This paper proposes a symmetric cipher based on a scheme consisting of multiple zigzag patterns, a secret key of variable length, and data blocks of variable size. The proposed system uses transposition principles to generate various encryption patterns, each with a particular initial point over a grid. The total number of cells in the grid and its dimensions are variable, and various patterns can be created for the same grid, leading to different outcomes on different grids. For a grid of n cells, a total of n! × (n−1)! patterns can be generated. This information is encapsulated in the private key; thus, the huge number of possible patterns and the variation of the grid size, which are kept hidden, maintain the security of the proposed technique. Moreover, variable padding can be used: two paddings with different lengths lead to completely different outputs even with the same pattern and the same inputs, which further improves the security of the proposed system.

Author 1: Basil Al-Kasasbeh

Keywords: Cryptography; symmetric cipher; block cipher; transposition algorithms
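
A minimal sketch of a pattern-driven transposition of the kind described above: the plaintext fills a grid row by row and is read out along a secret zigzag traversal; the specific pattern, grid size, and padding character are illustrative choices, not the paper's construction.

    # Pattern-driven transposition sketch: write row-wise, read along a secret path.
    # The example pattern, 3x4 grid, and 'X' padding are illustrative choices.
    def encrypt(plaintext, rows, cols, pattern, pad="X"):
        assert sorted(pattern) == list(range(rows * cols))  # visits every cell
        text = plaintext.ljust(rows * cols, pad)
        return "".join(text[cell] for cell in pattern)

    def decrypt(ciphertext, rows, cols, pattern):
        cells = [""] * (rows * cols)
        for i, cell in enumerate(pattern):
            cells[cell] = ciphertext[i]
        return "".join(cells)

    # Secret key material: grid size plus one zigzag traversal of a 3x4 grid
    # (down column 0, up column 1, down column 2, up column 3).
    PATTERN = [0, 4, 8, 9, 5, 1, 2, 6, 10, 11, 7, 3]
    ct = encrypt("ATTACKATDAWN", 3, 4, PATTERN)
    assert decrypt(ct, 3, 4, PATTERN).rstrip("X") == "ATTACKATDAWN"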

Paper 34: Feature Concatenation based Multilayered Sparse Tensor for Debond Detection Optical Thermography

Abstract: Composites are key ingredients of manufacturing in the aerospace, aircraft, civil, and related industries, so it is important to check their quality and health during manufacture and in service. The most common problem found in CFRPs is debonding. As debonds are subsurface defects, general methods are not very effective and require destructive tests. Optical Pulse Thermography (OPT) is a promising technology for detecting debonds. However, the thermographic time sequences from an OPT system contain a lot of noise, and the defect information is normally unclear. To solve this problem, an improved tensor nuclear norm (I-TNN) decomposition in the concatenated feature space with multilayer tensor decomposition is proposed. The proposed algorithm utilizes the frontal slices of the tensor to define the TNN, and the core singular matrix is further decomposed to utilize the information in the third mode of the tensor. The concatenation helps embed the low-rank and sparse data jointly for weak defect extraction. To show the efficacy and robustness of the algorithm, experiments are conducted and comparisons with other algorithms are presented.

Author 1: Junaid Ahmed
Author 2: Abdul Baseer
Author 3: Guiyun Tian
Author 4: Gulsher Baloch
Author 5: Ahmed Ali Shah

Keywords: Improved tensor nuclear norm; low-rank decomposition; concatenated feature space; optical thermography

Paper 35: Unmoderated Remote Usability Testing: An Approach during Covid-19 Pandemic

Abstract: Online Nurse Test for Indonesian Nurse Competency (ONT UKNI) is a mobile application that was developed to help increase the success rate of nurse competency test participants. Using this application, users can study the tested materials and take tryouts as competency test simulations. However, ONT UKNI had not yet passed adequate testing stages, especially in terms of User Interface/User Experience (UI/UX). The Covid-19 pandemic presented challenges for the UI/UX testing process: testing that is ideally carried out face-to-face with respondents to gain further insight had to be carried out with another approach that follows the new-normal protocol. This study aims to test the usability of the UI/UX with an unmoderated remote testing approach on the ONT UKNI application using the USE questionnaire. The test was performed with 26 respondents, all nursing profession students of Universitas Muhammadiyah Yogyakarta. Respondents performed 8 tasks on ONT UKNI and answered a questionnaire that was then tabulated and analyzed. The results indicate that the usefulness, ease of learning, and satisfaction variables fall in the Very Good category, while the ease of use variable falls in the Good category. Overall, usability testing using an unmoderated remote testing approach can be carried out and is able to provide information about the areas where users are satisfied with the ONT UKNI application. However, some areas still have room for improvement, such as better UI design and the implementation of gamification.

Author 1: Ambar Relawati
Author 2: Guntur Maulana Zamroni
Author 3: Yanuar Primanda

Keywords: Mobile learning; nurse competency test; unmoderated remote testing; usability

PDF

Paper 36: 4PCDT: A Quantifiable Parameter-based Framework for Academic Software Project Management

Abstract: Authorities like the Project Management Body of Knowledge (PMBOK) and Capability Maturity Model Integration for Development (CMMI-Dev) lend a hand to software development organizations in the management of their crucial projects. Though this area needs focused research, such models are not dedicatedly available for the academic projects developed by students of computer science and engineering, where software project development is considered one of the criteria for the award of a degree to the future professionals of the IT industry. With this motivation, we explored 4PTRB, 3PR, and the software project management practices, approaches, and processes framed and provided by PMBOK and CMMI-Dev. The main aim of this research is to introduce and propose a software project management framework for the academic domain. The proposed framework identifies and describes 7 quantifiable parameters and 26 sub-parameters. The framework is called 4PCDT, for People, Process, Product, Project, Complexity, Duration, and Technology, for academic software projects. To validate the proposed framework, an online survey of 113 faculty members was conducted to rank and weigh the quantifiable parameters. The results show that the People, Process, and Technology management parameters are the top three ranked parameters. The robustness of the approach is further evident from the results of experimentation on 18 actual academic software projects of final-year postgraduate students in the IT domain. To the best of the authors' knowledge, the proposed work is the first of its kind and is expected to stimulate further research in the community.

Author 1: Vikas S. Chomal
Author 2: Jatinderkumar R. Saini
Author 3: Hema Gaikwad
Author 4: Ketan Kotecha

Keywords: Academic; CMMI-Dev; PMBOK; project management; software project; student

PDF

Paper 37: Neuromarketing Solutions based on EEG Signal Analysis using Machine Learning

Abstract: Marketing campaigns that promote and market various consumer products are a well-known strategy for increasing sales and market awareness, which in turn increases the profit of a manufacturing unit. "Neuromarketing" refers to the use of unconscious mechanisms to determine customer preferences for decision-making and behavior prediction. In this work, a predictive modeling method is proposed for recognizing consumer preferences for online (e-commerce) products as "Likes" and "Dislikes". Volunteers of various ages were exposed to a variety of consumer products, and their EEG signals and product preferences were recorded. Artificial Neural Networks and other classifiers such as Logistic Regression, Decision Tree, K-Nearest Neighbors, and Support Vector Machine were used to perform product-wise and subject-wise classification using a user-independent testing method. The subject-wise classification results were relatively low, with artificial neural networks (ANN) achieving 50.40 percent and k-Nearest Neighbors achieving 60.89 percent. The product-wise classification results were higher, at 81.23 percent using Artificial Neural Networks and 80.38 percent using Support Vector Machine.

Author 1: Asad Ullah
Author 2: Gulsher Baloch
Author 3: Ahmed Ali
Author 4: Abdul Baseer Buriro
Author 5: Junaid Ahmed
Author 6: Bilal Ahmed
Author 7: Saba Akhtar

Keywords: Electroencephalogram (EEG); brain-computer interface; neuromarketing; machine learning; artificial neural networks

PDF

Paper 38: Tomato Leaf Disease Detection using Deep Learning Techniques

Abstract: Plant diseases cause low agricultural productivity and are challenging for the majority of farmers to control and identify. In order to reduce future losses, early disease diagnosis is necessary. This study examines how to identify tomato plant leaf diseases using machine learning techniques, including the Fuzzy Support Vector Machine (Fuzzy-SVM), Convolutional Neural Network (CNN), and Region-based Convolutional Neural Network (R-CNN). The findings were confirmed using images of tomato leaves with six diseases as well as healthy samples. Image scaling, color thresholding, flood-filling approaches for segmentation, gradient local ternary pattern, and Zernike moment features are used for training. R-CNN classifiers are used to classify the disease type. The classification methods of Fuzzy SVM and CNN are analyzed and compared with R-CNN to determine the most accurate model for plant disease prediction. The R-CNN-based classifier achieves the highest accuracy, 96.735 percent, compared to the other classification approaches.

Author 1: Nagamani H S
Author 2: Sarojadevi H

Keywords: Fuzzy Support Vector Machine (SVM); Convolution Neural Network (CNN); Region-based Convolution Neural Network (R-CNN); color thresholding; flood filling

PDF

Paper 39: Feature Selection Pipeline based on Hybrid Optimization Approach with Aggregated Medical Data

Abstract: For quite some time, the use of multiple sources of data (data fusion) and the aggregation of that data have been underappreciated. For the purposes of this study, trials using several medical datasets were conducted, with the results serving as a single aggregated source for identifying eye illnesses. This paper proposes that a diagnostic system capable of detecting diabetic retinopathy, glaucoma, and cataract can be built as an alternative to current methods. The data fusion and data aggregation techniques used to create this multi-model system made it feasible; it is a way of compiling data from a large number of legitimate sources. A pipeline of algorithms was developed through iterative trials and hyperparameter tweaking. CLAHE (Contrast Limited Adaptive Histogram Equalization) improves segmentation by raising the contrast between picture edges. The Gabor filter was shown to be the most effective method of feature selection; it was selected using a hybrid optimization method (LION + Cuckoo) developed by the authors. For automation, the radial-kernel Support Vector Machine (SVM) is the most effective method, since it delivers excellent stability as well as high accuracy, precision, and recall. The findings and approaches detailed here provide a more solid foundation for future image-based diagnostics researchers to build on. Eventually, the findings of this study will help to improve healthcare workflows and practices.
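
A minimal sketch of this kind of pipeline, assuming OpenCV and scikit-learn; the CLAHE and Gabor parameters below are illustrative defaults, not the values found by the LION + Cuckoo optimizer, and the data is hypothetical.

import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_features(gray: np.ndarray) -> np.ndarray:
    # CLAHE raises local contrast so edges separate better before filtering.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):   # four orientations
        kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        resp = cv2.filter2D(enhanced, cv2.CV_32F, kernel)
        feats += [resp.mean(), resp.std()]         # simple response statistics
    return np.array(feats)

# X = np.stack([gabor_features(img) for img in grayscale_eye_images])
# clf = SVC(kernel="rbf").fit(X, labels)           # "SVM radial" classifier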

Author 1: Palwinder Kaur
Author 2: Rajesh Kumar Singh

Keywords: Content-based image retrieval system; CLAHE; Gabor filter; Cuckoo search; LION optimization; support vector machine

PDF

Paper 40: Educational Data Mining to Identify the Patterns of Use made by the University Professors of the Moodle Platform

Abstract: Due to the events caused by the COVID-19 pandemic and social distancing measures, learning management systems have gained importance; while preserving quality standards, they can be used to implement remote education or to support face-to-face education. Consequently, it is important to know how teachers and students use them. In this work, clustering techniques are used to analyze how university professors use the resources and activities of the Moodle platform. The CRISP-DM methodology was applied to implement a data mining process based on the Simple K-Means algorithm; to identify associated groups of teachers, it was necessary to categorize the data obtained from the platform. The Apriori algorithm was applied to identify associations in the use of resources and activities. Performance scales were established for the use of Moodle functionalities; the results show that the use made by teachers was very low. Rules were generated to identify the associations between activities and resources, and as a result the functionalities that need to be reinforced in teacher training processes were identified. Having identified the patterns of use of the Moodle platform, it is concluded that a Likert scale was needed to transform the frequency of use of activities and resources and to identify the association rules that establish profiles of teachers and the tools that should be promoted in future training actions.
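
The two mining steps can be sketched with scikit-learn and mlxtend as below; the activity columns and thresholds are hypothetical, standing in for the categorized Moodle usage data.

import pandas as pd
from sklearn.cluster import KMeans
from mlxtend.frequent_patterns import apriori, association_rules

# One row per teacher; True means the functionality was used.
usage = pd.DataFrame({"quiz": [1, 0, 1, 1], "forum": [1, 1, 0, 1],
                      "assignment": [0, 1, 1, 1]}, dtype=bool)

# Step 1: group teachers with similar usage profiles (k is illustrative).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(usage)

# Step 2: find association rules between co-used activities and resources.
frequent = apriori(usage, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
print(labels, rules[["antecedents", "consequents", "confidence"]], sep="\n")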

Author 1: Johan Calderon-Valenzuela
Author 2: Keisi Payihuanca-Mamani
Author 3: Norka Bedregal-Alpaca

Keywords: Clustering; educational data mining; Moodle; usage patterns; k-means algorithm; Apriori algorithm

PDF

Paper 41: Assessing the Quality of Educational Websites in Sudan using Quality Model Criteria through an Electronic Tool

Abstract: The use of the internet has grown in recent years, and with it the number of websites and the diversity of the services they offer. This has led researchers to study website quality in order to set standards and models for maintaining it. The main objective of these standards is to support trust and speed, which are the cornerstones of website use. Various statistical reports show that the sites of institutions and companies that applied quality standards achieved high rates of user satisfaction and visitor numbers. This study covers the concept of websites and electronic portals, their objectives, advantages, and types, in addition to the quality and standards of e-websites. It also reviews previous studies on websites conducted worldwide, in the Arab world and Africa, and in Sudan, in support of the development of Sudanese websites. The proposed model consists of important metrics to evaluate the application, quality of content, aesthetic aspects, multimedia, reputation, security, etc. This paper also proposes an application for evaluating the quality of websites based on this model. The model is applied to Sudanese websites, including governmental, educational, and commercial sites. The authors used an object-oriented programming approach to build the proposed tool using the PHP language in combination with CSS and JavaScript.

Author 1: Asim Seedahmed Ali Osman

Keywords: Website quality; e-websites; education; information technology; PHP language

PDF

Paper 42: Secure Inter-Domain Routing for Resisting Unknown Attacker in Internet-of-Things

Abstract: With the increasing adoption of the Internet of Things (IoT) over massively connected devices, there is a rising security concern. A review of existing security schemes in IoT shows a significant trade-off due to the non-adoption of inter-domain routing over larger domains of heterogeneous nodes in an IoT via gateway nodes. Hence, the purpose of the proposed study is to bridge this trade-off by adopting a new security scheme that works over inter-domain routing without any a priori information about the attacker. The goal of the proposed framework is to identify the malicious intention of an attacker by evaluating its increasing attention to different types of hop-link information. Upon identification, the framework also resists the attacker node's participation in the IoT environment by advertising counterfeited route information, with the target of misleading attackers into autonomous self-isolation. The study outcome shows that the proposed scheme is more secure than existing schemes.

Author 1: Bhavana A
Author 2: Nanda Kumar A N

Keywords: Internet-of-things; security; inter-domain routing; gateway node; attacker

PDF

Paper 43: Snowball Framework for Web Service Composition in SOA Applications

Abstract: Service Oriented Architecture (SOA) has emerged as a promising architectural style that provides software applications with a high level of flexibility and reusability. However, in several cases where legacy software components are wrapped to be used as web services, the final solution does not completely satisfy the SOA aims of flexibility and reusability. The literature and industrial applications show that SOA lacks a formal definition of, and measurement for, the optimal granularity of web services. Indeed, wrapping several business functionalities as coarse-grained web services reduces reusability and flexibility. On the other hand, a huge number of fine-grained web services results in high coupling between services and large messages transferred over the Internet. The main research question remains: "How can an optimal level of service granularity be determined when wrapping business functionalities as web services?" This research proposes the Snowball framework as a promising approach to integrate and compose web services. The framework is made up of a three-step process that uses rules to decide which web services have an optimal granularity that maintains the required performance. To demonstrate and evaluate the framework, we realized a car insurance application that had already been implemented with a traditional approach. The results show the efficiency of the Snowball framework over other approaches.

Author 1: Mohamed Elkholy
Author 2: Youcef Baghdadi
Author 3: Marwa Marzouk

Keywords: Service oriented architecture (SOA); web service granularity; web service composition; software flexibility; snowball composition framework

PDF

Paper 44: Selection of Requirement Elicitation Techniques: A Neural Network based Approach

Abstract: Requirement elicitation is a key activity of requirement engineering and has a strong impact on design and the other phases of the software development life cycle. Poor requirement engineering practices lead to project failure; a sound requirement elicitation process is the foundation for the overall quality of a software product. Due to the criticality and high impact of this phase on the overall success or failure of projects, it is necessary to perform requirement elicitation activities in a precise and systematic manner. One of the most difficult and demanding jobs during the requirement elicitation phase is to select an appropriate and specific technique from a wide array of techniques and tools. In this paper, a new approach is proposed that uses an artificial neural network for the selection of a requirement elicitation technique from the wide variety of tools and techniques available. The neural network is trained with the backpropagation algorithm, and the trained network can be used as a basis for selecting requirement elicitation techniques.
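
As an illustration of the idea, a small backpropagation-trained network can map project attributes to a suggested technique; the features, labels, and tiny training set below are hypothetical, not the paper's data or network design.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Rows: [team size, user availability, domain novelty, requirements volatility]
X = np.array([[5, 0.9, 0.2, 0.3],
              [20, 0.2, 0.8, 0.7],
              [8, 0.6, 0.5, 0.4],
              [3, 0.9, 0.1, 0.2]])
y = ["interview", "prototyping", "workshop", "interview"]

# MLPClassifier trains with backpropagation-based gradient descent.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)
print(clf.predict([[10, 0.8, 0.3, 0.5]]))   # suggested elicitation technique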

Author 1: Mohd Muqeem
Author 2: Sultan Ahmad
Author 3: Jabeen Nazeer
Author 4: Md. Faizan Farooqui
Author 5: Afroj Alam

Keywords: Requirement elicitation; requirement engineering; neural network; back propagation

PDF

Paper 45: The Performance of Personality-based Recommender System for Fashion with Demographic Data-based Personality Prediction

Abstract: Currently, the common method of predicting personality implicitly (implicit personality elicitation) is Personality Elicitation from Text (PET), which predicts personality based on statuses written on social media. The weakness of this method, when applied to a recommender system, is the requirement to have at least one social media account; a user without one cannot use such a system. To overcome this shortcoming, a new method of predicting personality implicitly based on demographic data is proposed. This proposal is based on findings by previous researchers that there is a correlation between demographic data and personality traits. To predict personality based on demographic data, a personality model (rule) that correlates demographic data and personality is needed. To apply this model to a recommender system, another model is needed, namely a preference model, which connects personality and preference. These two models are then applied to a personality-based recommender system for fashion. In the performance evaluation, the precision of, and user satisfaction with, the recommendations are 60.19% and 87.50%, respectively. Compared to the precision and user satisfaction of a PET-based recommender system (82% and 79%, respectively), the precision of the demographic data-based recommender system is lower, whereas the satisfaction is higher.

Author 1: Iman Paryudi
Author 2: Ahmad Ashari
Author 3: Khabib Mustofa

Keywords: Implicit personality elicitation; demographic data; personality-based recommender system; personality trait

PDF

Paper 46: A Greedy-based Algorithm in Optimizing Student’s Recommended Timetable Generator with Semester Planner

Abstract: A semester planner plays an essential role in student life, helping students maintain the self-discipline and determination needed to complete their studies. During the COVID-19 pandemic, however, students faced difficulty managing their time and making schedules manually, which resulted in substantial disruptions to learning, disturbances to internal assessment, and the cancellation of public evaluations. Hence, this research aims to optimize a recommended semester planner, the Timetable Generator, using a greedy algorithm to increase student productivity. We identified three control functions for each piece of entered information: 1) validation of the inserted information to ensure valid data and no redundancy, 2) a focus scale, and 3) the number of hours needed to finish the activity. We calculate the priority task sequence to achieve the best solution; the greedy algorithm solves the optimization problem by taking the best available choice in each situation, and we use it to produce the recommended semester planner. In the tests conducted, all features passed the functionality checks. We validated the system's reliability by comparing its accuracy against a brute-force algorithm, with accuracy trending upward from 60% to 100%.
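
A minimal sketch of the greedy idea, assuming a hypothetical scoring of tasks by focus per hour; the paper's actual control functions and priority formula are not reproduced here.

def plan(tasks, hours_per_day):
    # Greedy choice: schedule the task with the highest focus-per-hour first.
    ordered = sorted(tasks, key=lambda t: t["focus"] / t["hours"], reverse=True)
    day, used, schedule = 1, 0.0, []
    for t in ordered:
        if used + t["hours"] > hours_per_day:   # current day is full
            day, used = day + 1, 0.0
        schedule.append((day, t["name"]))
        used += t["hours"]
    return schedule

tasks = [{"name": "revision", "focus": 9, "hours": 2},
         {"name": "assignment", "focus": 7, "hours": 3},
         {"name": "reading", "focus": 4, "hours": 1}]
print(plan(tasks, hours_per_day=4))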

Author 1: Khyrina Airin Fariza Abu Samah
Author 2: Siti Qamalia Thusree
Author 3: Ahmad Firdaus Ahmad Fadzil
Author 4: Lala Septem Riza
Author 5: Shafaf Ibrahim
Author 6: Noraini Hasan

Keywords: Greedy algorithm; optimization; recommendation system; semester planner

PDF

Paper 47: Development of an Efficient Electricity Consumption Prediction Model using Machine Learning Techniques

Abstract: Electricity consumption has continued to rise rapidly, following the rapid growth of the economy. Detecting anomalies in buildings' energy data is therefore considered one of the most essential techniques for detecting anomalous events in buildings. This paper aims to optimize the electricity consumption of households by forecasting their consumption and, consequently, identifying anomalies. Further, as the dataset used is huge and publicly available, many studies have used parts of it according to their needs. In this paper, the dataset is grouped into daily and monthly consumption so that the network topologies of other works that used the same dataset can be compared on the selected part. The proposed methodology depends on long short-term memory (LSTM) because it is powerful, flexible, and able to handle complex multi-dimensional time-series data. The model can accurately predict the future consumption of individual households on a daily or monthly basis, even if the household was not included in the original training set. The proposed daily model achieves a root mean square error (RMSE) of 0.362 and a mean absolute error (MAE) of 19.7%, while the monthly model achieves an RMSE of 0.376 and an MAE of 17.8%. Our model achieved the lowest error among the compared network topologies: the lowest RMSE achieved by the other topologies is 0.37 and the lowest MAE is 18%, whereas our model achieved an RMSE of 0.362 and an MAE of 17.8%. Further, the model can detect anomalies efficiently in both daily and monthly electricity consumption data. However, daily electricity consumption readings are far better for detecting anomalies than monthly readings because of the distinct peaks that appear in the daily data.
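
A hedged sketch of an LSTM forecaster of this kind in Keras; the window size, layer width, and synthetic series below are placeholders, not the authors' configuration, and anomalies are flagged with a simple three-sigma rule on prediction error.

import numpy as np
from tensorflow import keras

window = 30
series = np.sin(np.linspace(0, 60, 1000)) + 0.1 * np.random.randn(1000)
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = keras.Sequential([
    keras.layers.Input(shape=(window, 1)),
    keras.layers.LSTM(64),
    keras.layers.Dense(1),                    # next-step consumption
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

err = np.abs(model.predict(X, verbose=0).ravel() - y)
anomalies = np.where(err > err.mean() + 3 * err.std())[0]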

Author 1: Ghaidaa Hamad Alraddadi
Author 2: Mohamed Tahar Ben Othman

Keywords: Anomalies detection; deep learning; electricity consumption forecasting; LSTM

PDF

Paper 48: Critical Review of Technology-Enhanced Learning using Automatic Content Analysis

Abstract: Technology-enhanced learning (TEL) continues to grow gradually while considering a multitude of factors, which underpins the need to develop a TEL maturity assessment as a guideline for this gradual improvement. This study investigates the potential of using TEL expert knowledge presented in various research articles as qualitative data for developing assessment questionnaires. A mixed-method approach is applied to analyze the qualitative data, using a systematic literature review (SLR) with automated content analysis (ACA) as quantitative data processing to strengthen the trustworthiness of the findings and reduce researcher bias. This process is carried out in six steps: conducting the SLR, processing the data with ACA using Leximancer, organizing the resulting concepts with facet analysis, contextualizing each TEL facet, constructing the assessment questionnaire for each context, and establishing the TEL maturity dimensions. The study generates 64 questionnaire statements grouped according to the target respondents, namely students, teachers, or institutions. This set of questions is also grouped into dimensions representing aligned contexts: student performance, learning process, applied technology, contents, accessibility, teachers and teaching, and strategy and regulation. Further research is required to distribute this questionnaire to pilot respondents, design the improvement roadmap, and check data patterns to formulate maturity appraisals and scoring methods.

Author 1: Amalia Rahmah
Author 2: Harry B. Santoso
Author 3: Zainal A. Hasibuan

Keywords: Automatic content analysis; ACA; assessment questionnaire; concept; facet analysis; key terms; Leximancer; systematic literature review; SLR; technology-enhanced learning; TEL; text analysis; theme

PDF

Paper 49: A Knowledge-based Expert System for Supporting Security in Software Engineering Projects

Abstract: Building secure software systems requires the intersection of two engineering disciplines: software engineering and security engineering. The lack of a defined security mechanism for each of the software development phases intensively affects the quality of the software system. In this paper, the authors propose a framework that considers security aspects in all phases of the software development process, from requirements until deployment of the software product, with three additional phases that are important for automatically producing a secure system. The framework was developed after analyzing existing models for secure system development. Its key elements are the additional phases, namely physical, training, and auditing, which improve the level of security in software engineering projects. As a substitute for the knowledge of a security engineer, the authors constructed an intelligent knowledge-based system that provides the software developer with the security rules needed in each phase of the software development lifecycle and improves the developer's awareness of the security-related issues in each phase. The framework and the expert system were tested on a variety of software projects, where a significant improvement in security in each phase of the software development process was achieved.

Author 1: Ahmad Azzazi
Author 2: Mohammad Shkoukani

Keywords: Knowledge-based systems; security engineering; software development process; expert systems

PDF

Paper 50: Performance Comparison between Lab-VIEW and MATLAB on Feature Matching-based Speech Recognition System

Abstract: Speech recognition systems are widely used in smart applications, and different smart applications place different requirements on processing the human voice. Observing the most common performance metrics of a speech recognition system is therefore essential when designing smart application-based speech recognition systems for people's needs. Moreover, feature matching is the principal part of a speech recognition system, since it plays a key role in authentication and in distinguishing one human voice, with its different articulations, from another. Therefore, this work presents a performance comparison of feature matching-based speech recognition systems implemented in Lab-VIEW and MATLAB. Feature extraction involves calculating Mel Frequency Cepstral Coefficients (MFCC) for each frame. For the matching process, the system was tested 100 times for each of five utterances, varying the articulation with the same vocal cords. The matching process uses Dynamic Time Warping (DTW), and the testing compares Lab-VIEW and MATLAB on the most common performance metrics of speech recognition systems: accuracy rate, execution time, and CPU usage. The experimental results show that the average accuracy rate of MATLAB is better than that of Lab-VIEW, while the execution time testing indicates that Lab-VIEW has a shorter execution time than MATLAB. On the other hand, Lab-VIEW and MATLAB have almost the same CPU usage. These results indicate that the comparison can be used according to the requirements of smart application-based speech recognition systems.
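
The matching stage the abstract describes can be sketched in Python with librosa (the file names are placeholders; this mirrors the MFCC + DTW method, not the paper's Lab-VIEW/MATLAB code).

import numpy as np
import librosa

def mfcc_of(path):
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, frames)

def dtw_cost(a, b):
    D, _ = librosa.sequence.dtw(X=a, Y=b, metric="euclidean")
    return float(D[-1, -1])                  # lower cumulative cost = better

templates = {"open": mfcc_of("open.wav"), "close": mfcc_of("close.wav")}
query = mfcc_of("test_utterance.wav")
print("Recognized:", min(templates, key=lambda k: dtw_cost(templates[k], query)))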

Author 1: Edita Rosana Widasari
Author 2: Barlian Henryranu Prasetio
Author 3: Dian Eka Ratnawati

Keywords: Speech; articulation; feature matching; Lab-VIEW; MATLAB

PDF

Paper 51: A New Priority Rule for Initial Ordering of Jobs in Permutation Flowshop Scheduling Problems

Abstract: Scheduling in a permutation flowshop refers to processing jobs on a set of available machines in the same order. Among the several possible performance measures of a flowshop, makespan has remained one of the most preferred metrics for researchers over the past six decades. The constructive heuristic proposed by Nawaz, Enscore, and Ham (NEH) is one of the best for makespan minimization, and its performance essentially depends on the initial ordering of jobs according to a particular priority rule (PR). Popular priority rules include the non-increasing order of the jobs' total processing time; the sum of average processing time and standard deviation; and the sum of average processing time, standard deviation, and absolute skewness, among others. The objective of this paper is to propose and analyse a new job priority rule for the permutation flowshop. The popular priority rules in the literature are studied, and one of the best, the sum of average processing time and standard deviation, is slightly modified by replacing the standard deviation with the mean absolute deviation (MAD). To assess the performance of the new rule, four benchmark datasets are used. The computational results and statistical analyses demonstrate the better performance of the new rule.
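
A sketch of the proposed rule inside NEH, assuming the standard insertion heuristic: jobs are first ordered by mean processing time plus mean absolute deviation (MAD), then inserted one by one at the position minimising makespan. The instance data is illustrative.

import numpy as np

def makespan(seq, p):                  # p[job, machine] processing times
    c = np.zeros(p.shape[1])
    for j in seq:
        c[0] += p[j, 0]
        for k in range(1, p.shape[1]):
            c[k] = max(c[k], c[k - 1]) + p[j, k]
    return c[-1]

def neh_mad(p):
    mean = p.mean(axis=1)
    mad = np.abs(p - mean[:, None]).mean(axis=1)   # replaces std. deviation
    order = np.argsort(-(mean + mad))              # non-increasing priority
    seq = [order[0]]
    for j in order[1:]:                            # NEH best-insertion step
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq

p = np.array([[3, 7, 2], [6, 2, 5], [4, 4, 4], [5, 1, 8]])  # 4 jobs, 3 machines
best = neh_mad(p)
print(best, makespan(best, p))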

Author 1: B. Dhanasakkaravarthi
Author 2: A. Krishnamoorthy

Keywords: Priority rule; flowshop scheduling; makespan; NEH algorithm

PDF

Paper 52: Preserving Location Privacy in the IoT against Advanced Attacks using Deep Learning

Abstract: Location-based services (LBSs) have received a significant amount of recent attention from the research community due to their valuable benefits in various aspects of society. In addition, dependency on LBSs for performing daily tasks has increased dramatically, especially since the spread of the COVID-19 pandemic. LBS users must use their real location to build LBS queries, which makes location privacy vulnerable to attacks. The privacy issue is accentuated if the attacker is the LBS provider, since all information about users is then accessible, and the attacker can apply advanced attacks such as map matching and semantic location attacks. In response to these issues, this work employs artificial intelligence to build a robust defense against advanced location privacy attacks. The key idea behind protecting the location privacy of LBS users is to generate smart dummy locations: false locations that have the same query probability as the real location but are far from both the real location and each other. Relying on these two conditions, the deep-learning-based intelligent finder ensures a high level of location privacy protection against advanced attacks; the attacker can neither distinguish the dummies from the real location nor isolate the real location by filtering. In terms of entropy (the privacy protection metric), accuracy (the deep learning metric), and total execution time (the performance metric), and compared to the well-known DDA and BDA systems, the proposed system shows better results, with entropy = 15.9, accuracy = 9.9, and total execution time = 17 sec.
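
For intuition, the two dummy-selection conditions can be mimicked with a simple heuristic: keep candidate cells whose query probability is close to the real location's, then pick dummies greedily to maximise mutual distance. This stands in for the paper's deep-learning finder, which is not reproduced here; the grid and probabilities are synthetic.

import math, random

def pick_dummies(real, cells, k=3, eps=0.1):
    # cells: {(x, y): query probability}; keep probability-similar candidates.
    p_real = cells[real]
    cands = [c for c, p in cells.items() if c != real and abs(p - p_real) <= eps]
    chosen = [real]
    for _ in range(k):                    # greedy farthest-point selection
        best = max(cands, key=lambda c: min(math.dist(c, s) for s in chosen))
        chosen.append(best)
        cands.remove(best)
    return chosen[1:]                     # dummies sent alongside the query

random.seed(0)
cells = {(x, y): random.random() for x in range(10) for y in range(10)}
real = (4, 4)
cells[real] = 0.5
print(pick_dummies(real, cells))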

Author 1: Abdullah S. Alyousef
Author 2: Karthik Srinivasan
Author 3: Mohamad Shady Alrahhal
Author 4: Majdah Alshammari
Author 5: Mousa Al-Akhras

Keywords: LBS; dummy; deep-learning; attacks; accuracy; resistance; performance

PDF

Paper 53: A Conceptual User Experience Evaluation Model on Online Systems

Abstract: Online systems have become a priority for organisations and companies in many countries, as they allow many processes to be conducted via online platforms, which contributes to profit. Different types of user experience (UX) evaluation models have been proposed to guide the measurement and development process. However, most of these models only define dimensions, and there is no guidance for UX measurement on online systems. The lack of evaluation models for online system measurement requires further investigation. This paper aims to identify the gaps in UX evaluation models and develop a conceptual UX evaluation model for online systems. The method used in this study includes reviewing the literature and shortlisting relevant publications on UX and online systems. The gaps were then identified from the existing UX evaluation models in the relevant publications based on the ISO standard, after which the study identified the important components of UX and proposed a new conceptual UX evaluation model for online systems. The results of the study are the identification of the gaps in existing UX evaluation models and the development of a new conceptual UX evaluation model specifically for online systems. The results therefore help in considering UX dimensions, criteria, metrics, and potential UX components for evaluation and measurement. The paper contributes to system developers, designers, and researchers for the future development of UX evaluation models for online systems. Future studies could use the reviewed UX evaluation models to identify relevant dimensions of online systems and hence improve the models they develop. The findings may also benefit organisations that own online systems by providing guidelines on the important dimensions involved in their UX-based evaluations.

Author 1: Norhanisha Yusof
Author 2: Nor Laily Hashim
Author 3: Azham Hussain

Keywords: User experience; UX evaluation; UX model; online system; conceptual model

PDF

Paper 54: Applying Artificial Intelligence in Retrieving Design Solution

Abstract: Design is a very important step in the product life cycle because it is generally the key to the success or failure of the product. The field of design theories and methodologies is filled with theories and methods that have been taught and developed throughout the years. Most of them rely on subdividing the design process into phases, where the transition between each pair of phases relies on using certain design tools. One of the main challenges today is to find a way to integrate artificial intelligence (AI) into the design process. This integration could be very beneficial, because AI can quickly learn the relationship between the input and output of any phenomenon and can predict the behavior when the input parameters vary. In our previous work, we shed light on how to improve the transition between design phases by storing and retrieving design solutions using morphological analysis and design tools like DFX. In this work, we present a different methodology to perform this transition, which relies on using an artificial intelligence tool, Artificial Neural Networks (ANN), instead of morphological analysis to retrieve the right design solution. To illustrate this method, we take the same example from our previous work and show how ANN can be used to learn and predict the right design solution.

Author 1: Y. Moubachir
Author 2: B. Hamri
Author 3: S. Taibi

Keywords: Artificial intelligence; ANN; design methodologies; DFX; morphological analysis

PDF

Paper 55: State-of-the-Art Approach to e-Learning with Cutting Edge NLP Transformers: Implementing Text Summarization, Question and Distractor Generation, Question Answering

Abstract: Amid the worldwide wave of pandemic lockdowns, there has been remarkable growth in e-learning. Online learning has become a challenge for students, who find it difficult to locate the content they need. The mounting accessibility of textual content has necessitated comprehensive study in the areas of automatic text summarization and question generation. Multiple-choice questions are convenient for evaluation, and their assessment can be implemented through computerized applications so that results are declared within a few hours and the evaluation is fully objective. The proposed system is an interactive reading platform where the user can upload an e-book, get a textual summary, and generate questions such as MCQs, fill-in-the-blanks, and one-word questions. The user's answers can also be evaluated, making the system an all-in-one interactive reading platform.

Author 1: Spandan Patil
Author 2: Lokshana Chavan
Author 3: Janhvi Mukane
Author 4: Deepali Vora
Author 5: Vidya Chitre

Keywords: Machine intelligence; natural language processing; neural networks; predictive models; text processing

PDF

Paper 56: A Regression Model to Predict Key Performance Indicators in Higher Education Enrollments

Abstract: Key Performance Indicators (KPIs) are essential factors in the success of an organization: they measure current performance and track ongoing progress toward specified business objectives. The Ministry of Higher Education (MoHE) in Palestine has used established formulas to predict KPIs, which are vital for charting the organization's aims. This study applies regression models to student enrollment datasets to predict accurate KPIs that can be used and adapted for any higher education system. The predictive engine determines the KPI using linear regression techniques such as Lasso and Elastic Net, and non-linear regression techniques such as Support Vector Regression (SVR) and K-Nearest Neighbors (KNN). The MoHE provided the datasets, covering enrollment and graduation data for different Higher Education Institutions (HEIs). The regression algorithms were evaluated by mean absolute error (MAE), mean square error (MSE), root mean square error (RMSE), and R-squared. The experiments demonstrate that a 40% training / 60% testing split using linear regression gives the best result.
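
A minimal sketch of the comparison on synthetic enrollment-style data, using the 40/60 split the abstract reports as best; the feature set and coefficients are invented for illustration.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Lasso, ElasticNet
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                      # e.g. past intakes, fees, ...
y = X @ np.array([3.0, -1.5, 0.0, 2.0, 0.5]) + rng.normal(scale=0.3, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.4, random_state=0)

for name, model in [("Lasso", Lasso(alpha=0.1)),
                    ("ElasticNet", ElasticNet(alpha=0.1)),
                    ("SVR", SVR()),
                    ("KNN", KNeighborsRegressor())]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: RMSE={rmse:.3f}, R2={r2_score(y_te, pred):.3f}")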

Author 1: Ashraf Abdelhadi
Author 2: Suhaila Zainudin
Author 3: Nor Samsiah Sani

Keywords: Data mining; KPI; regression; higher education; prediction model

PDF

Paper 57: A Novel Stance based Sampling for Imbalanced Data

Abstract: While the world suffers from the coronavirus pandemic (COVID-19), a parallel battle against an infodemic, the proliferation of fake news online, is also taking place. The spread of fake news during this global pandemic has dangerous consequences, and this is the driving force behind this study: relying on incorrect information obtained from the internet or social media can be fatal. According to a World Health Organization survey, at least 800 people have lost their lives because of COVID-19 misinformation during this time, highlighting the need for accurate automated classification of fake news. However, the data at our disposal for classification is imbalanced: the Internet has a vast repository of authentic healthcare news, whereas fake news on COVID-19 healthcare is not abundant. This imbalance leads to incorrect classification. This paper studies alternative approaches to text sampling and proposes a stance-based sampling method for balancing news data. The disparity between the title and content of news items is utilized to sample data points selectively and rectify the imbalance. The key finding is that the proposed stance-based sampling strategies consistently enhance classification performance for varying degrees of imbalance. The proposed techniques can better detect misleading news in the healthcare sector.

Author 1: Isha Agarwal
Author 2: Dipti Rana
Author 3: Aemie Jariwala
Author 4: Sahil Bondre

Keywords: Fake news; healthcare; sampling; stance; COVID-19; imbalance

PDF

Paper 58: Energy Efficient and Quality-of-Service Aware Routing using Underwater Wireless Sensor Networks

Abstract: In recent years, there has been increasing attention on underwater wireless sensor networks (UWSNs), which can be applied for many different purposes. To address the routing issue, a Cuckoo Search Optimization Algorithm with Energy-Efficient and QoS-Aware routing (CSOA-EQ) is proposed in this paper. Every application is important in its own right, but some can help improve sea investigation for a variety of underwater purposes, such as catastrophic event alert systems (e.g., tsunami and seismic monitoring), assisted navigation, oceanographic data collection, underwater surveillance, ecological applications (such as monitoring organic water quality and contamination), and industrial applications (such as marine investigation). For example, in offshore engineering applications, sensors can assess specific metrics, such as base intensity and securing pressure, to monitor the condition of the securing environment. UWSNs have also improved our understanding of underwater environments, including climate change, underwater creature life, and the populations of coral reefs.

Author 1: P. Sathya
Author 2: P. Sengottuvelan

Keywords: Underwater wireless sensor networks (UWSNs); Cuckoo search optimization with energy-efficient and QoS-aware routing (CSOA-EQ); underwater environments

PDF

Paper 59: The Trend of Segmentation for Arabic Handwritten Touching Characters

Abstract: This paper is a comprehensive study of existing research trends in the Arabic-language sector, with a focus on state-of-the-art methods, illustrating the current state of various theories in the field with the goal of facilitating their adaptation and extension into new systems and applications. The Arabic alphabet has 28 letters, and every letter has more than one shape depending on its place in the word; a single character may have from one to four shapes. Touching and overlapping between characters occur in handwriting. Historical documents contain a massive amount of knowledge and culture, and there are many old books that need to be converted into a readable format, which would take a very long time if done by humans. However, the main problem is the lack of research on Arabic handwriting, especially on the segmentation of touching characters. Thus, current segmentation techniques are investigated to identify the state of the art in segmenting touching characters in other domains, in order to construct enhanced techniques for Arabic touching characters. This paper reviews approaches for the segmentation of touching characters and presents the trends in approaches for the recognition and segmentation of Arabic handwritten touching characters, highlighting the strength of each technique, the method used, and its drawbacks. The outcome provides a good foundation for constructing a better technique for the segmentation of Arabic touching characters, especially in degraded documents.

Author 1: Ahmed Mansoor Mohsen Algaradi
Author 2: Mohd Sanusi Azmi
Author 3: Intan Ermahani A. Jalil
Author 4: Abdulwahab Fuad Ayyash Hashim
Author 5: Afrah Abdullah Muhammad Al-Malki

Keywords: Component; character segmentation; Arabic handwritten; character touching; recognition

PDF

Paper 60: What Influences Customer’s Trust on Online Social Network Sites (SNSs) Sellers?

Abstract: Customer trust has been recognized as an essential part of the rising trend of social commerce. A lack of trust makes customers hesitate to shop online or avoid it completely. Therefore, it is essential to implement and analyze a way of establishing the buyer-seller relationship that will improve customers' trust. This paper aims to develop a trust model for Social Network Site (SNS) sellers and to assess the dimensions and criteria that affect customers' trust in online SNS sellers using the Analytic Hierarchy Process (AHP) approach. The study was carried out among those who transact with Malaysian online SNS sellers at least every three months. The findings indicate the top three influencing criteria: recommendation, transaction safety, and rating. This study provides insight into customers' thoughts about placing trust in online SNS sellers for selling and purchasing activities.
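
The AHP step can be illustrated in a few lines: a pairwise comparison matrix over the three top criteria (the judgment values here are hypothetical, not the survey data) whose principal eigenvector gives the priority weights.

import numpy as np

criteria = ["recommendation", "transaction safety", "rating"]
A = np.array([[1.0, 2.0, 3.0],       # recommendation vs. the others
              [1/2, 1.0, 2.0],       # transaction safety
              [1/3, 1/2, 1.0]])      # rating

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w /= w.sum()                         # normalised priority weights

for c, weight in sorted(zip(criteria, w), key=lambda t: -t[1]):
    print(f"{c}: {weight:.3f}")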

Author 1: Ramona Ramli
Author 2: Asmidar Abu Bakar
Author 3: Fiza Abdul Rahim

Keywords: Online commerce; trust; social commerce; multi-criteria decision–making

PDF

Paper 61: New Textual Authentication Method to Resistant Shoulder-Surfing Attack

Abstract: Textual passwords suffer from the trade-off between security and usability. Password policies are usually adopted by system administrators to force users to choose strong passwords. However, users often choose simple passwords that are easy to remember, which reduces password strength and makes them vulnerable to information security threats. When users enter their passwords in public places like airports or cafes, they are exposed to shoulder surfing attacks, a kind of social engineering. With little effort, an attacker can capture a password by recording the individual's authentication session or by direct observation. To overcome this vulnerability, we propose a new textual-password approach that uses camouflage characters and a virtual keyboard, leading to passwords that are strong yet easy to remember. Usability and security were evaluated in experimental studies conducted with 65 users and then compared with recent studies. The results show that the proposed technique has the lowest shoulder surfing success rate, just 3.63%, with reasonable usability.

Author 1: Islam Abdalla Mohamed Abass
Author 2: Loay F.Hussein
Author 3: Tarak kallel
Author 4: Anis Ben Aissa

Keywords: Shoulder surfing; caesar cipher; virtual keyboard; graphical password; social engineering

PDF

Paper 62: CovSeg-Unet: End-to-End Method-based Computer-Aided Decision Support System in Lung COVID-19 Detection on CT Images

Abstract: The COVID-19 epidemic continues to threaten public health with the appearance of new, more severe mutations, and given the delay in the vaccination process, the situation becomes more complex. Thus, implementing rapid solutions for the early detection of this virus is an immediate priority. To this end, we provide a deep learning method called CovSeg-Unet to diagnose COVID-19 from chest CT images. The CovSeg-Unet method first preprocesses the CT images to eliminate noise and bring all images to the same standard; it then uses an end-to-end architecture to form the network. Since CT images are not balanced, we propose a loss function to balance the pixel distribution of infected/uninfected regions. CovSeg-Unet achieves high performance in localizing COVID-19 lung infections compared to other methods. We performed qualitative and quantitative assessments on two public datasets (Dataset-1 and Dataset-2) annotated by expert radiologists. The experimental results show that our method is a real solution that can better assist the COVID-19 diagnosis process.
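
The paper's exact loss is not reproduced here, but the general pixel-weighting idea can be sketched as an inverse-frequency weighted binary cross-entropy in PyTorch (weights and tensors below are illustrative).

import torch

def balanced_bce(pred, target):
    # pred: sigmoid probabilities, target: {0,1} mask, both (N, 1, H, W).
    pos_frac = target.mean().clamp(1e-6, 1 - 1e-6)    # infected-pixel fraction
    w_pos, w_neg = 1.0 / pos_frac, 1.0 / (1.0 - pos_frac)
    loss = -(w_pos * target * torch.log(pred.clamp_min(1e-7))
             + w_neg * (1 - target) * torch.log((1 - pred).clamp_min(1e-7)))
    return loss.mean()

pred = torch.rand(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.9).float()     # ~10% positive pixels
print(balanced_bce(pred, target))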

Author 1: Fatima Zahra EL BIACH
Author 2: Imad IALA
Author 3: Hicham LAANAYA
Author 4: Khalid MINAOUI

Keywords: Deep learning; COVID-19; loss function; balanced data

PDF

Paper 63: Elevint: A Cloud-based Internet of Elevators

Abstract: With the significant growth in the number of high-rise buildings nowadays, dependence on elevators has also increased. The issue facing elevator passengers in case of breakdowns is the long wait for maintenance engineers to arrive and perform the repair, as the reporting process is done manually; the safety concern increases when people are trapped. Most state-of-the-art approaches detect faults without providing means to facilitate communication between elevator owners, maintenance companies, and engineers, or to notify them in case of breakdowns. Moreover, none of the proposed fault detection solutions rely on rules specified by experts in the field. This paper addresses these issues by proposing a system that manages, monitors, detects faults, and informs users of any fault instantly by sending notifications. Specifically, the paper proposes a mobile application, Elevint, that is cloud-based and exploits Internet of Things (IoT) technology. Elevint provides real-time monitoring of elevator operating conditions collected from sensors. The data is transferred to the cloud, where faults are detected by applying rules that compare the current conditions with severity levels determined by experts. When a fault is detected, elevator owners and maintenance companies are automatically notified. Moreover, through Elevint, maintenance companies can assign engineers to repair the fault, and elevator owners can view and re-schedule the engineer's visit if needed. Testing our system on an elevator model shows 98% accuracy. In the future, we intend to test it on real elevators to verify its applicability in practice.

Author 1: Sarah Mohammed Aljadani
Author 2: Shahd Mohammed Almutairi
Author 3: Saja Saeed Ghaleb
Author 4: Lama Al Khuzayem

Keywords: Internet of things; elevator; fault detection; monitoring; notification; real-time

PDF

Paper 64: Moving Object Detection over Wireless Visual Sensor Networks using Spectral Dual Mode Background Subtraction

Abstract: Wireless Visual Sensor Networks (WVSNs) play an essential role in tracking moving objects, but their key constraints are storage, power, and bandwidth. Background subtraction is used in the early stages of target tracking to extract moving targets from video images. Many standard background subtraction methods are unsuitable for embedded devices because they use complex statistical models to manage small changes in lighting. This paper introduces a system based on the Partial Discrete Cosine Transform (PDCT), which greatly reduces the dimensions of the processed data while retaining most of the important information, thereby reducing processing and transmission energy. It also uses a dual-mode single Gaussian model (SGM) for accurate detection of moving objects. The proposed system's performance is assessed on the standard CDnet 2014 benchmark dataset in terms of detection accuracy and time complexity, and the suggested method is compared to previous WVSN background subtraction methods. Simulation results show that the proposed method consistently achieves 15% better accuracy and is up to 3 times faster than state-of-the-art object detection methods for WVSNs. Finally, we demonstrate the practicality of the suggested method by simulating it in a sensor network environment using the Contiki OS Cooja simulator and implementing it in a real testbed using the Cortex M3 open nodes of IoT-LAB.

Author 1: Ahmed M. AbdelTawab
Author 2: M.B. Abdelhalim
Author 3: S.E.D. Habib

Keywords: Background subtraction; discrete cosine transform; embedded camera networks; Gaussian mixture models; wireless visual sensor networks

PDF

Paper 65: Enhancing the Security of Digital Image Encryption using Diagonalize Multidimensional Nonlinear Chaotic System

Abstract: This paper describes a new efficient cryptosystem for color image encryption based on a combination of proposed multidimensional chaos systems. The chaos system consists of six components, T1(x), T2(x), T2(y), T3(x), T3(y), and T3(z), which induce three chaotic matrix keys and three chaotic vector keys. We use the multidimensional chaotic system together with an encryption algorithm to provide better security and wide key spaces. The proposed cryptosystem applies four levels of random pixel diffusions and permutations simultaneously, together with an ω-times interchange between rows and columns. The correlations between the RGB components of the plain image are reduced, and the level of security, the computational complexity, and the quality of the decrypted image under closure threat are improved. The simulation results show that the algorithm achieves a high level of security and that the image recovered at the receiving point is identical to the image at the transmission point.
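
As a toy illustration of chaos-based scrambling (a one-dimensional logistic map stands in here for the paper's multidimensional system), a key-dependent permutation plus an XOR diffusion stream can be built as follows.

import numpy as np

def chaotic_stream(x0, r, n):
    xs, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1.0 - x)            # logistic map iteration
        xs[i] = x
    return xs

def encrypt(img, x0=0.7, r=3.99):
    flat = img.ravel()
    s = chaotic_stream(x0, r, flat.size)
    perm = np.argsort(s)                 # permutation (confusion)
    key = (s * 255).astype(np.uint8)     # byte stream (diffusion)
    return (flat[perm] ^ key).reshape(img.shape), perm, key

def decrypt(ct, perm, key):
    flat = ct.ravel() ^ key
    out = np.empty_like(flat)
    out[perm] = flat
    return out.reshape(ct.shape)

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
ct, perm, key = encrypt(img)
assert np.array_equal(decrypt(ct, perm, key), img)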

Author 1: Mahmoud I. Moussa
Author 2: Eman I. Abd El-Latif
Author 3: Nawaz Majid

Keywords: Chaotic system; encryption; decryption; image; algorithms; cryptosystem

PDF

Paper 66: A Visual-Range Cloud Cover Image Dataset for Deep Learning Models

Abstract: Coastal and offshore oil and gas structures and operations are continuously exposed to environmental conditions (ECs) such as varying air and water temperatures, rough sea conditions, strong winds, high humidity, rain, and varying cloud cover. To monitor ECs, weather and wave sensors are installed on these facilities. However, the capital expenditure (CAPEX) and operational expenses (OPEX) of these sensors are high, especially for offshore structures. For observable ECs, such as cloud cover, a cost-effective deep learning (DL) classification model can be employed as an alternative solution. However, to train and test a DL model, a cloud cover image dataset is required. In this paper, we present a novel visual-range cloud cover image dataset for cloud cover classification using deep learning models. Various visual-range sky images were captured on nine different occasions, covering six cloud cover conditions, and 100 images were manually classified for each condition. To increase the size and quality of the dataset, multiple label-preserving data augmentation techniques were applied, expanding the dataset to 9,600 images. Moreover, to evaluate the usefulness of the proposed dataset, three DL classification models, i.e., GoogLeNet, ResNet-50, and EfficientNet-B0, were trained and tested, and their results are presented. Even though EfficientNet-B0 had better generalization ability and marginally higher classification accuracy, ResNet-50 proved the best choice for cloud cover classification due to its lower computational cost and competitive classification accuracy. Based on these results, it is concluded that the proposed dataset can be used in further research on DL-based cloud cover classification models.

Author 1: Muhammad Umair
Author 2: Manzoor Ahmed Hashmani

Keywords: Cloud cover; dataset; classification; GoogLeNet; ResNet-50; EfficientNet-B0

PDF

Paper 67: Blockchain in the Quantum World

Abstract: Blockchain is one of the most discussed and most widely accepted technologies, primarily due to its application in almost every field where third parties are needed for trust. Blockchain technology relies on distributed consensus for trust, which is accomplished using hash functions and public-key cryptography. However, most of the cryptographic algorithms in use today are vulnerable to quantum attacks. In this work, a systematic literature review is conducted in a repeatable manner, starting with the identification of research questions; the literature is then analysed to answer these questions, and the survey concludes by answering the research questions and identifying the research gaps. We find that 30% of the research solutions in the literature apply to the data layer, 24% to the application and presentation layer, 23% to the network layer, 16% to the consensus layer, and only 1% to the hardware and infrastructure layer. We also find that 6% of the solutions are not blockchain-based but instead use other distributed ledger technologies.

Author 1: Arman Rasoodl Faridi
Author 2: Faraz Masood
Author 3: Ali Haider Thabet Shamsan
Author 4: Mohammad Luqman
Author 5: Monir Yahya Salmony

Keywords: Blockchain; quantum computers; distributed ledger technology; security; systematic literature review; quantum attacks

PDF

Paper 68: Design and Implementation of Deep Depth Decision Algorithm for Complexity Reduction in High Efficiency Video Coding (HEVC)

Abstract: High Efficiency Video Coding (HEVC) made its mark as a codec that compresses video at lower bit rates than its predecessor, H.264, but the factor that keeps HEVC out of many applications is its complex encoding procedure. In particular, the rate-distortion optimisation (RDO) cost calculation in HEVC consumes a large share of the computation. In this paper, we propose a method to eliminate these complex calculations by replacing the traditional brute-force search of the inter-prediction RDO procedure with a deep convolutional neural network that predicts and performs this process. In the first step, the deep depth decision algorithm is modelled with optimum specifications using a convolutional neural network (CNN). Next, the model is designed, trained on the dataset, and validated. The trained model is then tested by pipelining it into the original HEVC encoder to check its performance. We also evaluate the efficiency of the model by comparing the average encoding time for video inputs of various resolutions. The testing is done with mutually independent inputs to maintain the accuracy of the evaluation. The system shows a substantial saving in encoding time, which demonstrates the complexity reduction in HEVC.

Author 1: Helen K Joy
Author 2: Manjunath R Kounte
Author 3: B K Sujatha

Keywords: CNN; HEVC; deep learning; RDO; encoding time; complexity reduction

PDF

Paper 69: The Pragmatics of Function Words in Fiction

Abstract: This paper uses computer-aided text analysis (CATA) to decipher the ideologies pertaining to function words in fictional discourse, represented by Edward Bond's Lear. In literary texts, function words such as pronouns and modal verbs display a very high frequency of occurrence. Although these linguistic units are often employed to serve a mere grammatical function pertaining to their semantic nature, they sometimes exceed their grammatical and semantic functionality to serve further ideological and pragmatic purposes, such as persuasion and manipulation. This study investigates the extent to which function words, linguistically manifested in two personal pronouns (I, we) and two modal verbs (will, must), are utilized in Bond's Lear to convey persuasive and/or manipulative ideologies. The paper sets three main objectives: (i) to explore the persuasive and/or manipulative ideologies the four function words under investigation communicate in the selected text, (ii) to highlight the extent to which CATA software helps in deciphering the ideological weight of function words in Bond's Lear, and (iii) to clarify the integrative relationship between discourse studies and computer-aided text analysis. Two findings are reported: first, function words do not only carry semantic functions but also go beyond their semantic functionality toward pragmatic purposes that serve to achieve specific ideologies in discourse. Second, the application of CATA software proves useful in extracting ideologies from language and helps better understand the power of function words, which, in turn, accentuates the analytical integration between discourse studies and computing, particularly in the linguistic analysis of large text corpora.

Author 1: Ayman Farid Khafaga

Keywords: Computer-aided text analysis (CATA); concordance; function words; persuasion; manipulation; ideology; Bond’s Lear

PDF

Paper 70: Fusion of BIFFOA and Adaptive Two-Phase Mutation for Helmetless Motorcyclist Detection

Abstract: Road traffic injuries and deaths cause considerable economic losses to individuals, families, and nations as a whole. One of the strategies needed to curtail these fatalities is the surveillance of helmetless motorcyclists, carried out by developing an automatic detection system based on computer vision. Generally, this system consists of three subsystems, namely moving object segmentation, motorcycle classification, and helmetless head detection. The HOPG-LDB (Histogram of Oriented Phase and Gradient - Local Difference Binary) descriptor used in this system produced good accuracy; however, it still has a drawback related to its large number of features. Based on these observations, this paper proposes an Adaptive Two-phase Mutation Binary Improved Fruit Fly Optimization Algorithm (ATMBIFFOA) to reduce the features. ATMBIFFOA is a new feature selection algorithm that improves BIFFOA (Binary Improved Fruit Fly Optimization Algorithm) with an adaptive two-phase mutation algorithm. BIFFOA produced good accuracy but is weak at reducing feature dimensionality; the adaptive two-phase mutation algorithm is used to cover this weakness. The experimental results show that the proposed method effectively reduces the number of features and the computation time compared with BIFFOA. The proposed method produced a motorcycle classification accuracy of 96.06% for the JSC1 dataset and 96.85% for the JSC2 dataset. For helmetless head detection, the proposed method produced an average precision of 66.29% for the JSC1 dataset and 63.95% for the JSC2 dataset.

Author 1: Sutikno
Author 2: Agus Harjoko
Author 3: Afiahayati

Keywords: Motorcycle classification; helmetless head detection; BIFFOA; two-phase mutation algorithm

PDF

Paper 71: AI-based System for the Detection and Prevention of COVID-19

Abstract: The COVID-19 pandemic has had catastrophic consequences all over the world since the detection of the first case in December 2019, and exponential growth in cases is still expected. To stop the spread of this pandemic, it is necessary to respect sanitary protocols such as the mandatory wearing of masks. In this research paper, we present an affordable artificial-intelligence-based solution to increase protection against COVID-19, covering several relevant aspects that facilitate the detection and prevention of this pandemic: non-contact temperature measurement, mask detection, automatic gel dispensing, and automatic sterilization. Our main contribution is to provide high-quality, real-time learning and analysis. To achieve this goal, we used a deep convolutional neural network (CNN) based on the MobileNetV2 architecture as the learning algorithm and the Advanced Encryption Standard (AES) as the encryption protocol for sending secure data to notify hospital staff. The experimental results show the effectiveness of our model, which provides 99.7% accuracy in detecting masks with a runtime of 1.54 s.

Author 1: Sofien Chokri
Author 2: Wided Ben Daoud
Author 3: Wasma Hanini
Author 4: Sami Mahfoudhi
Author 5: Amel Makhlouf

Keywords: Face mask detection; coronavirus; COVID-19; deep learning; MobileNetV2; AES

PDF

Paper 72: Human Emotion Recognition by Integrating Facial and Speech Features: An Implementation of Multimodal Framework using CNN

Abstract: Emotion recognition plays a prominent role in today's intelligent system applications. Human-computer interfaces, health care, law, and entertainment are a few of the applications where emotion recognition is used. Humans convey their emotions in the form of text, voice, and facial expressions, so developing a multimodal emotion recognition system plays a crucial role in human-computer and intelligent-system communication. The majority of established emotion recognition algorithms identify emotions in only one kind of data, such as text, audio, or image data. A multimodal system uses information from a variety of sources and fuses it using fusion techniques to improve recognition accuracy. In this paper, a multimodal system for recognising emotions is presented that fuses features obtained from heterogeneous modalities, namely audio and video. For audio feature extraction, energy, zero-crossing rate, and Mel-Frequency Cepstral Coefficient (MFCC) techniques are considered; of these, MFCC produced promising results. For video feature extraction, the videos are first converted to frames and stored in a linear scale space by using a spatio-temporal Gaussian kernel. The features are then extracted from the images by applying a Gaussian weighted function to the second moment matrix of the linear scale-space data. The Marginal Fisher Analysis (MFA) fusion method is used to fuse the audio and video features, and the resulting features are given to the FERCNN model for evaluation. For experimentation, the RAVDESS and CREMAD datasets, which contain audio and video data, are used. Accuracies of 95.56, 96.28, and 95.07 on the RAVDESS dataset and 80.50, 97.88, and 69.66 on the CREMAD dataset are achieved in the audio, video, and multimodal modalities respectively, outperforming existing multimodal systems.
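
As a sketch of the audio branch, the frame-level descriptors named above can be computed and pooled into a clip-level feature vector. librosa is an assumed tooling choice here; the file path and mean-pooling step are illustrative, not the paper's exact pipeline.

```python
import numpy as np
import librosa

# Load one utterance (path is illustrative; RAVDESS/CREMAD clips are short .wav files).
signal, sr = librosa.load("speech_clip.wav", sr=22050)

# The three candidate audio features from the abstract:
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)   # MFCC (best performer)
energy = librosa.feature.rms(y=signal)                    # short-time energy
zcr = librosa.feature.zero_crossing_rate(signal)          # zero-crossing rate

# Mean-pool frame-level features into one fixed-length descriptor per clip,
# ready to be fused with the video features (e.g., via MFA).
audio_vec = np.concatenate([mfcc.mean(axis=1), energy.mean(axis=1), zcr.mean(axis=1)])
print(audio_vec.shape)   # (15,) = 13 MFCCs + energy + ZCR
```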

Author 1: P V V S Srinivas
Author 2: Pragnyaban Mishra

Keywords: Emotion recognition; multimodal; fusion; MFCC; MFA; FERCNN; CREMAD; RAVDESS

PDF

Paper 73: BERT based Named Entity Recognition for Automated Hadith Narrator Identification

Abstract: Hadith serves as a second source of Islamic law for Muslims worldwide, especially in Indonesia, which has the world's largest Muslim population at 228.68 million people. However, not all Hadith texts have been certified and approved for use, and several falsified Hadiths make it challenging to distinguish between authentic and fabricated ones. In Hadith science, determining the authenticity of a Hadith can be accomplished by examining its Sanad and Matn. The Sanad is an essential aspect of the Hadith because it indicates the chain of Narrators who transmit the Hadith. The research reported in this paper provides an advanced Natural Language Processing (NLP) technique for identifying and authenticating the Narrator of a Hadith as part of the Sanad, utilizing Named Entity Recognition (NER) to address the need to authenticate the Hadith. The NER technique described in the research adds an extra feed-forward classifier to the last layer of the pre-trained BERT model. In testing with Cahya/bert-base-indonesian-1.5G, the proposed solution achieved an overall F1-score of 99.63 percent. On Hadith Narrator Identification using other Hadith passages, the final examination yielded a 98.27 percent F1-score.
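
A minimal sketch of the described setup, i.e. a token-classification (feed-forward) head on top of the named pre-trained BERT model, using the Hugging Face transformers library as an assumed tool choice; the narrator tag scheme shown is an assumption for illustration, and the head would still need fine-tuning on labelled Sanad data.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-NARRATOR", "I-NARRATOR"]   # assumed tag scheme for narrator spans
model_name = "cahya/bert-base-indonesian-1.5G"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# AutoModelForTokenClassification attaches a feed-forward classifier to the last layer.
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=len(labels))

text = "Telah menceritakan kepada kami Abu Bakar bin Abi Syaibah"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # (1, seq_len, num_labels)

for tok, pred in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]),
                     logits.argmax(-1)[0]):
    print(tok, labels[pred])                 # per-token narrator tags (untrained head)
```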

Author 1: Emha Taufiq Luthfi
Author 2: Zeratul Izzah Mohd Yusoh
Author 3: Burhanuddin Mohd Aboobaider

Keywords: Hadith narrator; hadith authentication; natural language processing; named entity recognition; NLP; NER; BERT; BERT fine-tune

PDF

Paper 74: An Early Intervention Technique for At-Risk Prediction of Higher Education Students in Cloud-based Virtual Learning Environment using Classification Algorithms during COVID-19

Abstract: Higher education is considered vital for societal development, leading to many benefits including a prosperous career and financial security. Virtual learning through cloud platforms has become popular as it is expedient and flexible for students, and new student learning models and prediction outcomes can be developed using these platforms. Applying machine learning techniques to identify at-risk students is a challenging and pressing problem in virtual learning environments: when there are few students, identification is easy, but it becomes impractical with larger numbers of students. This study included 530 higher-education students from various regions of India, and the outcomes generated from online survey data were analyzed. The main objective of this research is the early identification of at-risk students in a cloud virtual learning environment by analyzing their demographic characteristics, previous academic achievement, learning behavior, device type, mode of access, connectivity, self-efficacy, cloud platform usage, and readiness and effectiveness in participating in online sessions, using four machine learning algorithms, namely K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), and Random Forest (RF). The predictive system helps provide solutions for low-performing students. It has been implemented on real data of higher-education students taking various courses in a virtual learning environment, and a deep analysis is performed to estimate the at-risk students. The experimental results show that random forest achieved the highest accuracy, 88.61%, compared with the other algorithms.
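
A minimal sketch of the best-performing classifier from this comparison, using scikit-learn; the file name and the "at_risk" column are hypothetical stand-ins for the survey attributes listed above.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical survey export: one row per student, features as in the abstract,
# and a binary "at_risk" label.
df = pd.read_csv("student_survey.csv")
X = df.drop(columns=["at_risk"])     # demographics, behaviour, device type, ...
y = df["at_risk"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```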

Author 1: Arul Leena Rose. P. J
Author 2: Ananthi Claral Mary.T

Keywords: Prediction; at-risk; machine learning; virtual learning environment; cloud platforms; classification; COVID-19; random forest; student academic performance

PDF

Paper 75: Balanced Schedule on Storm for Performance Enhancement

Abstract: In recent years, real-time big data has arisen and received a lot of attention due to the spread of embedded systems in almost everything in life. This has led to many challenges that need to be solved to enhance and improve systems that work on big real-time data. Apache Storm is a system used for computing and analyzing big real-time data in distributed systems. This paper aims to develop a scheduler that improves the placement of applications, represented by topologies, on the Storm cluster. The proposed scheduler is a hybrid of the scheduling algorithms of A3 Storm and the Workload scheduler. Its objective is to minimize the communication between tasks while balancing the workload across all cluster machines. The proposed scheduler is compared with A3 Storm and Fischer and Bernstein's scheduling algorithm. The comparison has been made using four different topologies. The experimental results show that our proposed scheduler outperforms the two other schedulers in throughput and complete latency.

Author 1: Arwa Z. Selim
Author 2: Noha E. El-Attar
Author 3: I. M. Hanafy
Author 4: Wael A. Awad

Keywords: Real-time; big data; apache storm; scheduling

PDF

Paper 76: Extract Concept using Subtitles in MOOC

Abstract: Massive open online courses (MOOCs) are courses offered online, paid or unpaid, and have evolved into an excellent learning resource for students. The course design is mainly linear, with video lectures provided either by professors from several universities or by people with expertise in the particular subject, and students are usually graded weekly through quizzes or peer-graded assignments. The objective of this paper is to extract the concepts taught in the videos from their subtitles; these concepts could later be used to enhance recommendations for learners based on their clickstream data, and teachers could also use them to gauge the demand for their courses. We evaluate two keyword extraction methods, BERT and LDA, using different Coursera courses. The experimental results show that BERT outperforms LDA in terms of topic coherence.

Author 1: Aarika Kawtar
Author 2: Habib Benlahmar
Author 3: Mohamed Amine Naji
Author 4: Elfilali Sanaa
Author 5: Zouheir Banou

Keywords: LDA; BERT; topic coherence; overlap coefficient

PDF

Paper 77: Identification of Coronary Heart Disease through Iris using Gray Level Co-occurrence Matrix and Support Vector Machine Classification

Abstract: Nowadays, coronary heart disease is one of the deadliest diseases in the world. An unfavorable lifestyle, lack of physical activity, and tobacco consumption are causes of coronary heart disease, aside from genetic inheritance. Sometimes patients do not know whether they have abnormalities in heart function. Therefore, this study proposes a system that can detect heart abnormalities through the iris, known as the iridology method. The system runs automatically, from iris detection through to the classification results. Feature extraction applies the Gray Level Co-occurrence Matrix (GLCM) method with five characteristics. The classification process uses a Support Vector Machine (SVM) with linear, polynomial, and Gaussian kernel variations to obtain the best accuracy for the system. The system simulation results show that the Gaussian kernel can be relied on for classifying iris conditions, with an accuracy rate of 91%, while the polynomial kernel reaches 89% and the linear kernel 87%. This study has succeeded in detecting heart conditions through the iris by dividing irises into normal and abnormal classes.
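
A minimal sketch of the GLCM-feature plus SVM stage, using scikit-image and scikit-learn as assumed tooling; the five properties shown are common GLCM characteristics, chosen for illustration since the abstract does not list its exact five.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(gray_patch: np.ndarray) -> np.ndarray:
    """Five GLCM texture properties for an 8-bit grayscale iris region."""
    glcm = graycomatrix(gray_patch, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity", "dissimilarity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X = np.array([glcm_features(p) for p in iris_patches]); y: 0 = normal, 1 = abnormal.
# The Gaussian (RBF) kernel gave the best accuracy (91%) in the study:
# clf = SVC(kernel="rbf").fit(X, y)
```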

Author 1: Vincentius Abdi Gunawan
Author 2: Leonardus Sandy Ade Putra
Author 3: Fitri Imansyah
Author 4: Eka Kusumawardhani

Keywords: Iris; iridology; coronary heart; circle hough transform; gray level co-occurrence matrix; support vector machine

PDF

Paper 78: Performance of Data Reduction Algorithms for Wireless Sensor Network (WSN) using Different Real-Time Datasets: Analysis Study

Abstract: This paper investigates the effect of data reduction methods on the performance of Wireless Sensor Networks (WSNs) using a variety of real-time datasets. Simulation tests are carried out in MATLAB for several methods of reducing the quantity of transmitted data. These approaches are data reduction based on Neural Network Fitting (NNF), Neural Network Time Series (NNTS), Linear Regression with Multiple Variables (LRMV), An Efficient Data Collection and Dissemination (EDCD2), and Fast Independent Component Analysis (FICA). The selected algorithms NNF, NNTS, EDCD2, LRMV, and FICA are evaluated using real-time datasets. The performance indicators are energy consumption, data accuracy, and data reduction percentage. The results show that the selected algorithms help reduce the amount of transferred data and the energy consumed, but each algorithm performs differently depending on the dataset used.

Author 1: M. K. Hussein
Author 2: Ion Marghescu
Author 3: Nayef.A.M. Alduais

Keywords: Data reduction algorithms; WSN; energy consumption; accuracy; neural network; independent component analysis

PDF

Paper 79: A Global Survey of Technological Resources and Datasets on COVID-19

Abstract: The application and successful utilization of technological resources in developing solutions to the health, safety, and economic issues caused by COVID-19 indicate the importance of technology in curbing the pandemic. The medical field has also had to race against time to develop and distribute COVID-19 vaccines; this endeavour became successful with vaccines created and approved in less than a year, a feat in medical history. Currently, much work is being done on data collection, where all significant factors impacting the disease are recorded. These factors include confirmed cases, death rates, vaccination rates, hospitalization data, and the geographic regions affected by the pandemic. Continued research and use of technological resources are highly recommended. This paper surveys the packages, applications, and datasets used to analyse COVID-19.

Author 1: Manoj Muniswamaiah
Author 2: Tilak Agerwala
Author 3: Charles C. Tappert

Keywords: Vaccination; hospitalization; confirmed cases; datasets; data science; COVID-19

PDF

Paper 80: A Comparison between Online and Offline Health Seeking Information using Social Networks for Patients with Chronic Health Conditions

Abstract: The patient is now better connected with other patients, just as the consumer is now better connected with other consumers, in particular through the growing adoption of social media and online peer-to-peer communities. These relationships, which become collaborative, have positive or negative consequences that may either endorse or have implications for a firm's products [32]. The aim of this research was to gain an understanding of the impact social media has on patient influence over healthcare provision, especially in relation to information seeking and clinical product choice. It compares a group of patients who are predominantly online information seekers with a group who are predominantly offline information seekers. Bias was reduced by utilising probability sampling techniques so that statistical analysis could be performed on the results obtained. This study capitalises on access to more than 8,000 direct-to-patient consumers who are currently receiving devices for the management of their bladder problems. The intention of this research project is to gain an understanding of how two-way online interactions have developed between patients with similar chronic medical conditions and how firms can use online social media to improve their relationships with patients. The key research question of this paper is: have online social media tools affected demand for healthcare intermediation among patients who experience chronic medical conditions and need to become better informed? The findings of this pre-Covid research were that patient groups with chronic conditions who spend more time in developed peer-to-peer communities are more trusting of online information and spend more time online.

Author 1: Andrew Kear
Author 2: Simon Talbot

Keywords: Social media; healthcare; peer to peer networks; patient networks; pre-Covid

PDF

Paper 81: Predicting Cyber-Attack using Cyber Situational Awareness: The Case of Independent Power Producers (IPPs)

Abstract: The increasing critical dependence on the Internet of Things (IoT) has raised security concerns; its application in the critical infrastructures (CIs) for power generation has come under massive cyber-attack over the years. Prior research efforts to understand cybersecurity from a Cyber Situational Awareness (CSA) perspective fail to critically consider the various CSA security vulnerabilities from a human behavioural perspective in line with the CI. This study evaluates CSA elements to predict cyber-attacks in the power generation sector. Data for this research article were collected from IPPs using the survey method, and Partial Least Squares Structural Equation Modeling (PLS-SEM) was employed to assess the proposed model. The results revealed a negative but significant effect of the people element in predicting cyber-attacks. The study also indicated that information handling is significant and positively influences cyber-attacks. In addition, the study reveals no mediation effect in the associations between People and Attack and between Information and Attack, which could result from effective cyber-security controls implemented by the IPPs. Finally, the study shows no significant prediction of cyber-attacks from network infrastructure; the reason could be that managers of IPPs had adequate access policies and security measures in place.

Author 1: Akwetey Henry Matey
Author 2: Paul Danquah
Author 3: Godfred Yaw Koi-Akrofi

Keywords: Internet of things; cyber situational awareness; critical infrastructures; power generation; cyber-attack; cyber security; human behavioural and independent power producers

PDF

Paper 82: Design and Performance Analysis of Anti-Surge Control Mechanism for Compressor System using Neural Networks

Abstract: Surge is an instability phenomenon that affects compressor systems in most gas-processing and oil industries. The issue is addressed by using a recycle valve that avoids surge and provides higher mass flow in the compressor system. An advanced controller-based anti-surge control mechanism is needed in the compressor system to improve stability and surge behaviour. In this manuscript, an efficient Neural-Network Predictive Controller (NNPC) based variable-speed compressor recycle system is modeled with an anti-surge control mechanism. When the mass flow is deficient, the recycle system is introduced; it acts as a safety system and feeds the compressed gas back to the upstream system. Other controllers, such as the Proportional Integral Derivative (PID) controller, Fuzzy Logic Controller (FLC), and Neuro-Fuzzy Controller (NFC), are also applied to the anti-surge control mechanism in the compressor recycle system to compare stability and performance metrics with the NNPC. The NNPC-based compressor system provides a better operating position and dynamic response with less error than the other controller-based compressor systems.

Author 1: Divya M. N
Author 2: Narayanappa C.K
Author 3: S L Gangadhariah
Author 4: V Nuthan Prasad

Keywords: Anti-surge; fuzzy logic controller (FLC); neuro-fuzzy controller (NFC); compressors; neural-network

PDF

Paper 83: Design and Implementation of True Parallelism Quad-Engine Cybersecurity Architecture on FPGA

Abstract: Applications such as the Internet of Things deal with a huge amount of transmitted, processed, and stored images that require high computing capability. Therefore, there is a need for a computing architecture that contributes to increasing throughput by exploiting modern technologies in both spatial and temporal parallelism. This paper presents a parallel quad-engine cybersecurity architecture with a new configuration to increase throughput, implemented using DE1-SoC and Neek FPGA boards and HDL. In this architecture, each engine operates at a maximum frequency of 600 MHz. Each image is divided into four parts of equal size, and each part is processed by a single engine concurrently to achieve spatial parallelism. Internally, each engine handles its image part with temporal parallelism, and deep pipelining is applied in every engine by dividing it into sub-modules that execute different tasks concurrently. All data processed in the engines is encrypted via the AES algorithm, which is implemented as a significant part of the engine architecture. The obtained results show a fourfold increase in throughput, reaching 153,600 Mbps, which makes this computing architecture efficient and suitable for fast applications such as IoT and cybersecurity-level processing.

Author 1: Nada Qaim Mohammed
Author 2: Amiza Amir
Author 3: Muataz Hammed Salih
Author 4: Badlishah Ahmad

Keywords: Field programmable gate array (FPGA); spatial parallelism; cybersecurity; throughput component

PDF

Paper 84: Cotton Crop Yield Prediction using Data Mining Technique

Abstract: Cotton is a very important crop: India leads the world in its production, and a vast amount of manpower is engaged in farming as well as in post-harvest processing and management of its different derivatives. Weather is crucial for the productivity of the crop. The challenges of climate change, the availability of limited land and water for farming, and farmers' lack of knowledge of good cultivation practices and the judicious use of agricultural inputs are critical hindrances to improving productivity. This requires thorough research on land preparation and use, how to improve soil fertility, good agronomic practices under variable climatic conditions, etc. All the talukas of the three districts of North Gujarat where cotton is cultivated were selected purposively for this study. Soil type, soil pH, soil organic carbon, phosphorus, potassium, precipitation, and temperature were selected as independent factors, and the yield of the cotton crop has a positive correlation with the selected parameters. The datasets were analysed in WEKA. The difference between the average predicted and actual yields across all talukas for the high-rainfall year 2013 was only 1.55 per cent, and for the low-temperature year (2015) it was only 0.44 per cent.

Author 1: Amiksha Ashok Patel
Author 2: Dhaval Kathiriya

Keywords: Data mining; cotton crop yield prediction; agriculture; data processing; data visualization

PDF

Paper 85: Data Analysis of Coronavirus CoVID-19: Study of Spread and Vaccination in European Countries

Abstract: Humanity has gone through several pandemics over time, such as H1N1 in 2009 and the Spanish flu in 1918. In December 2019, the health authorities of China detected unexplained cases of pneumonia, and the World Health Organization (WHO) declared the emergence of the novel coronavirus (CoVID-19), which caused a global pandemic in 2020. In data analysis, multiple approaches and diverse techniques are used to extract useful information from multiple heterogeneous sources and to discover knowledge and new information for decision-making; they are used in different business and science domains. In this context, we propose to use multidimensional analysis techniques based on two concepts: facts (subjects of analysis) and dimensions (axes of analysis). This technique allows decision makers to observe data from various heterogeneous sources and analyze them according to several viewpoints or perspectives. More precisely, we propose a multidimensional model for analyzing Coronavirus CoVID-19 data (spread and vaccination in European countries). This model is based on a constellation schema that contains several facts surrounded by common dimensions.

Author 1: Hela Turki
Author 2: Kais Khrouf

Keywords: Multidimensional model; constellation schema; coronavirus covid-19; vaccination; European countries

PDF

Paper 86: Design of Low Cost Bio-impedance Measuring Instrument

Abstract: It is a well-established fact that the electrical bio-impedance of a part of the human body can provide valuable information regarding physiological parameters, if the signal is correctly detected and interpreted. Accordingly, an efficient low-cost bio-electrical impedance measuring instrument was developed, implemented, and tested in this study. It is primarily based on a low-cost, component-level approach so that it can be easily used by researchers and investigators in the domain. The measurement setup was tested on adult human subjects to obtain the impedance signal of the forearm, the region under investigation in this case; however, depending on the illness or activity under examination, the instrument can be used on any other part of the body. The current injected by the instrument is within safe limits, and the gain of the biomedical instrumentation amplifier is highly reasonable. The technique is easy and user-friendly and does not necessitate any special training, so it can be effectively used to collect bio-impedance data and interpret the findings for medical diagnostics. Moreover, this paper extensively explores several existing methods and associated approaches, with in-depth coverage of their working principles, implementations, merits, and disadvantages, as well as other technical aspects. Lastly, the paper deliberates upon the present status, future challenges, and scope of various other possible bio-impedance methods and techniques.

Author 1: Rajesh Birok
Author 2: Rajiv Kapoor

Keywords: Noninvasive; bio-electrical; Impedance; bio-impedance; bio-medical; instrumentation

PDF

Paper 87: Detecting Irony in Arabic Microblogs using Deep Convolutional Neural Networks

Abstract: A considerable amount of research has been developed lately to analyze social media with the intention of understanding and exploiting the available information. Recently, irony has taken a significant role in human communication, as it is increasingly used on many social media platforms. In Natural Language Processing (NLP), irony recognition is an important yet difficult problem to solve. It is considered a complex linguistic phenomenon in which people mean the opposite of what they literally say. Due to its significance, it becomes essential to analyze and detect irony in subjective texts to improve the analysis tools that classify people's opinions automatically. This paper explores how deep learning methods can be employed for the detection of irony in the Arabic language, with the help of Word2vec term representations that convert words to vectors. We applied two different deep learning models: a Convolutional Neural Network (CNN) and a Bidirectional Long Short-Term Memory (BiLSTM) network. We tested our frameworks with a manually annotated dataset collected using Tweet Scraper. The best result was achieved by the CNN model, with an F1-score of 0.87.
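
A minimal sketch of a text CNN of the kind described, under stated assumptions: the vocabulary size, sequence length, and filter settings are illustrative, and the embedding layer would be initialized from the pre-trained Word2vec matrix rather than trained from scratch.

```python
from tensorflow.keras import layers, models

vocab_size, embed_dim, max_len = 20000, 300, 50   # assumed hyper-parameters

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    # In the described setup, these weights would be initialized from Word2vec.
    layers.Embedding(vocab_size, embed_dim),
    layers.Conv1D(128, 5, activation="relu"),     # n-gram-like filters over the tweet
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),        # ironic vs. non-ironic
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```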

Author 1: Linah Alhaidari
Author 2: Khaled Alyoubi
Author 3: Fahd Alotaibi

Keywords: Verbal irony; natural language processing; machine learning; automatic irony detection

PDF

Paper 88: Analysis about Benefits of Software-Defined Wide Area Network: A New Alternative for WAN Connectivity

Abstract: This article analyzes the benefits of emerging trends in communications and networking technology, such as software-defined wide area networks. Using Waterfall as the methodology, the main objective is to carry out a technical comparison at the design and configuration level, creating a virtual environment that simulates traditional and SDWAN (Software-Defined Wide Area Network) infrastructures. The results obtained verify that SDWAN's benefits maintain business continuity, anticipate situations in which the infrastructure can act intelligently, optimize connectivity while maintaining security, and improve the management of the entire infrastructure. Readers will be able to see the results obtained for both technologies and validate the benefits that SDWAN offers.

Author 1: Catherine Janiré Mena Diaz
Author 2: Laberiano Andrade-Arenas
Author 3: Javier Gustavo Utrilla Arellano
Author 4: Miguel Angel Cano Lengua

Keywords: Networking technology; connectivity; SDWAN; wide area network; Waterfall

PDF

Paper 89: Data Recovery Approach for Fault-Tolerant IoT Node

Abstract: The Internet of Things (IoT) has a wide range of applications in many sectors, such as industry, health care, homes, the military, and agriculture. IoT-based safety-critical applications, in particular, must be more secure and reliable, and need to operate continuously even in the presence of errors and faults; in such applications, maintaining data reliability and security is the critical task. IoT suffers from node failures due to limited resources and the nature of deployment, which consequently result in data loss. This paper proposes a Data Recovery Approach for Fault-Tolerant (DRAFT) IoT node algorithm, which is fully distributed, with data replication and recovery implemented through redundant local database storage on other nodes in the network. DRAFT ensures high data availability to preserve the data even in the presence of node failures. When an IoT node fails in any cluster in the network, its data can be retrieved from the redundant storage with the help of neighbor nodes in the cluster. The proposed algorithm is simulated for 100-150 IoT nodes and enhances network lifetime and throughput by 5%. Performance metrics such as Mean Time to Data Loss (MTTDL), throughput, network lifetime, and reliability are computed, and the results are found to be improved.

Author 1: Perigisetty Vedavalli
Author 2: Deepak Ch

Keywords: Internet of things; data recovery; RAID; node failures; reliability; network lifetime

PDF

Paper 90: An Enhanced Traffic Split Routing Heuristic for Layer 2 and Layer 1 Services

Abstract: Virtual Private Networks (VPNs) have now taken an important place in computer and communication networks. A virtual private network is the extension of a private network over links through shared or public networks, such as the Internet. A VPN is a transmission network service for businesses with two or more remote locations, offering a range of access speeds and options depending on the needs of each site. This service supports voice, data, and video and is fully managed by the service provider, including the routing equipment installed at the customer's premises. Given these characteristics, VPNs were widely deployed during COVID-19, offering extensive services to connect roaming employees to their corporate networks with access to all company information and applications. Hence, VPNs focus on two important issues: security and Quality of Service. The latter has a direct relationship with network performance metrics such as delay, bandwidth, throughput, and jitter. Traditionally, Internet Service Providers (ISPs) accommodate static point-to-point resource demand, named Layer 1 VPN (L1VPN). The primary disadvantage of L1VPN is that data plane connectivity does not guarantee control plane connectivity. Layer 2 VPN (L2VPN) is designed to provide end-to-end layer 2 connections by transporting layer 2 frames between distributed sites, and is suitable for supporting heterogeneous higher-level protocols. In this paper, we propose an enhanced routing protocol based on the Traffic Split Routing (TSR) and Shortest Path Routing (SPR) algorithms. Simulation results show that our proposed scheme outperforms Shortest Path Routing (SPR) in terms of network resource usage: 72% of network links are used by the Enhanced Traffic Split Routing, compared with only 44% by SPR.

Author 1: Ahlem Harchay
Author 2: Abdelwahed Berguiga
Author 3: Ayman Massaoudi

Keywords: Virtual private network; enhanced traffic split routing; quality of service; shortest path routing; layer 1 VPN; layer 2 VPN

PDF

Paper 91: AuSDiDe: Towards a New Authentication System for Distributed and Decentralized Structure based on Shamir’s Secret Sharing

Abstract: Nowadays, connected devices are growing exponentially, and the data traffic they produce has increased unprecedentedly. Information systems security and cybersecurity are critical because the data typically contain sensitive personal information that requires strong protection. An authentication system manages and controls access to these data, allowing the system to ensure the legitimacy of the access request. Most current identification and authentication systems are based on a centralized architecture; however, concepts such as cloud computing and blockchain use distributed and decentralized architectures, respectively. Users, rather than a central server, will own the platforms and applications of the next generation of the Internet and Web3. This paper proposes AuSDiDe, a new authentication system for distributed and decentralized structures. The solution divides keys and shares them among different distributed nodes. The main objective of AuSDiDe is to securely store and manage passwords, private keys, and authentication based on the Shamir secret sharing algorithm. This new proposal significantly reinforces data protection in information security.
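
The scheme AuSDiDe builds on is standard and compact enough to sketch: the secret becomes the constant term of a random degree-(k-1) polynomial over a prime field, each node holds one point on the polynomial, and any k points reconstruct the secret by Lagrange interpolation at zero. This is a textbook illustration, not the authors' implementation.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for illustration

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME       # product of (0 - xj)
                den = den * (xi - xj) % PRIME   # product of (xi - xj)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, k=3, n=5)  # one share per distributed node
assert reconstruct(shares[:3]) == 123456789       # any 3 of the 5 nodes suffice
```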

Author 1: Omar SEFRAOUI
Author 2: Afaf Bouzidi
Author 3: Kamal Ghoumid
Author 4: El Miloud Ar-Reyouchi

Keywords: Shamir’s secret sharing; authentication system; de-centralized; distributed; blockchain

PDF

Paper 92: Keyphrases Concentrated Area Identification from Academic Articles as Feature of Keyphrase Extraction: A New Unsupervised Approach

Abstract: Extracting high-quality keyphrases and summarising documents at a high level has become more difficult in current research due to technological advancements and the exponential expansion of textual data and digital sources. Using features for keyphrase extraction to achieve these ends has become more popular. A new unsupervised keyphrase concentrated area (KCA) identification approach is proposed in this study as a feature for keyphrase extraction; it is corpus, domain, and language independent, free of document-length effects, and usable by both supervised and unsupervised techniques. The proposed system has three phases: data pre-processing, data processing, and KCA identification. The system employs various text pre-processing methods before transferring the acquired datasets to the data processing step, where the pre-processed data is subsequently used. Statistical approaches, curve plotting, and curve fitting are applied in the KCA identification step. The proposed system is then tested and evaluated using benchmark datasets collected from various sources. To demonstrate the effectiveness, merits, and significance of our approach, we compared it with other proposed techniques. The experimental results on eleven (11) datasets show that the proposed approach effectively recognizes the KCA in articles and significantly enhances current keyphrase extraction methods across various text sizes, languages, and domains.

Author 1: Mohammad Badrul Alam Miah
Author 2: Suryanti Awang
Author 3: Md. Saiful Azad
Author 4: Md Mustafizur Rahman

Keywords: Keyphrase concentrated area; KCA identification; feature extraction; data processing; keyphrase extraction; curve fitting

PDF

Paper 93: Transfer Learning based Performance Comparison of the Pre-Trained Deep Neural Networks

Abstract: Deep learning has grown tremendously in recent years, having a substantial impact on practically every discipline. Transfer learning allows us to transfer the knowledge of a model that was previously trained for a particular task to a new model that is attempting to solve a related but not identical problem. To adapt a pre-trained model to a new task effectively, specific layers must be retrained while the others remain unmodified. Selecting the layers to be enabled for training and the layers to be frozen, and setting hyper-parameter values, are typical issues, and all these concerns have a substantial effect on training capability as well as classification performance. The principal aim of this study is to compare the network performance of selected pre-trained models based on transfer learning, to help in the selection of a suitable model for image classification. To accomplish this goal, we examined the performance of five pre-trained networks, namely SqueezeNet, GoogleNet, ShuffleNet, Darknet-53, and Inception-V3, with different epochs, learning rates, and mini-batch sizes, and compared and evaluated each network's performance using a confusion matrix. Based on the experimental findings, Inception-V3 achieved the highest accuracy of 96.98%, as well as the best values of the other evaluation metrics, with precision, sensitivity, specificity, and F1-score of 92.63%, 92.46%, 98.12%, and 92.49%, respectively.
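
The freeze-and-retrain step described here can be sketched in a few lines; torchvision is an assumed tool choice, and the target class count is illustrative.

```python
import torch.nn as nn
from torchvision import models

# Load the study's best-performing network with ImageNet weights.
net = models.inception_v3(weights="IMAGENET1K_V1")

for p in net.parameters():        # freeze all pre-trained layers...
    p.requires_grad = False

num_classes = 5                   # hypothetical target-task classes
net.fc = nn.Linear(net.fc.in_features, num_classes)   # ...and retrain only a new head

trainable = [p for p in net.parameters() if p.requires_grad]
# e.g. torch.optim.Adam(trainable, lr=1e-3); the epochs, learning rate, and
# mini-batch size are exactly the hyper-parameters the paper varies.
```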

Author 1: Jayapalan Senthil Kumar
Author 2: Syahid Anuar
Author 3: Noor Hafizah Hassan

Keywords: Transfer learning; deep neural networks; image classification; Convolutional Neural Network (CNN) models

PDF

Paper 94: Augmented Reality: Prototype for the Teaching-Learning Process in Peru

Abstract: In recent years, augmented reality has been playing an important role in the world of mobile technology, since it is a way to facilitate teaching-learning processes. Easing the teaching-learning process offers a great contribution to organisations, either creating opportunities or changing the way in which they approach and interact with their end customers, which can mean remarkable growth for the organisation. For this reason, an augmented reality prototype was built in this work using the Scrum methodology at the University of Sciences and Humanities of Lima, Peru, with a focus on the nursing career. The problem addressed is the limited learning that students acquire in classrooms, for which we make use of augmented reality to improve the education provided to university students. The result obtained from developing the case study was an augmented reality prototype for improving education at the University of Sciences and Humanities of Lima, Peru, which shows a virtual model (depending on the image shown) that is able to interact with the user, making it attractive and motivating for the student. This prototype was achieved using Unity (a 3D development platform), Vuforia (an augmented reality software development kit), Microsoft Visual Studio (an integrated development environment), the Scrum methodology (Scrum pillars, product backlog, product backlog estimation, velocity, backlog prioritization, and sprint planning), and the C# language.

Author 1: Shalom Adonai Huaraz Morales
Author 2: Laberiano Andrade-Arenas
Author 3: Alexi Delgado
Author 4: Enrique Lee Huamani

Keywords: Augmented reality; teaching; education; scrum; unity

PDF

Paper 95: On the Long Tail Products Recommendation using Tripartite Graph

Abstract: The growth in the number of e-commerce users and the items being sold presents both opportunities and challenges for e-commerce marketplaces. Given the long-tail phenomenon, marketplaces need to pay attention to the high number of rarely sold items. The failure to sell these products would be a threat to B2C e-commerce companies that apply a non-consignment sale system, because the products cannot be returned to the manufacturer. Thus, it is important for the marketplace to boost the promotion of long-tail products. The objective of this study is to adapt a graph-based technique to build a recommendation system for long-tail products. The sets of products, customers, and categories are represented as nodes in a tripartite graph. The Absorbing Time and Hitting Time algorithms are employed together with a Markov random walker that traverses the nodes of the graph. We find that using Absorbing Time achieves better accuracy than Hitting Time for recommending long-tail products.
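
The Absorbing Time computation has a compact closed form, which the toy example below illustrates: treating long-tail product nodes as absorbing states, the expected number of steps before the walker is absorbed follows from the fundamental matrix of the transient block. The four-node transition matrix is invented purely for illustration and is not the paper's graph.

```python
import numpy as np

# Toy transition matrix P over mixed node types (customer, categories, product).
# The long-tail product node is absorbing (its row is a row of the identity).
P = np.array([
    [0.0, 0.5, 0.5, 0.0],   # customer  -> categories
    [0.2, 0.0, 0.3, 0.5],   # category1 -> mixed neighbours
    [0.3, 0.3, 0.0, 0.4],   # category2 -> mixed neighbours
    [0.0, 0.0, 0.0, 1.0],   # long-tail product (absorbing)
])

transient = [0, 1, 2]
Q = P[np.ix_(transient, transient)]      # transient-to-transient block
N = np.linalg.inv(np.eye(len(Q)) - Q)    # fundamental matrix N = (I - Q)^-1
absorbing_time = N.sum(axis=1)           # expected steps until absorption
print(absorbing_time)                    # lower values -> stronger recommendation
```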

Author 1: Arlisa Yuliawati
Author 2: Hamim Tohari
Author 3: Rahmad Mahendra
Author 4: Indra Budi

Keywords: Long tail; recommender system; tripartite graph; random walker; hitting time; absorbing time

PDF

Paper 96: Machine Learning Applied to Prevention and Mental Health Care in Peru

Abstract: The present research aims to develop an application that allows the early and timely detection of signs of mental health problems among citizens. An agile methodology was used with its Scrum framework, developing its four steps. In addition, technological tools such as artificial intelligence, mobile applications, social networks, and the Python programming language were used, along with SQL Server, Android Studio, and the Marvel application, the latter for the design of the prototypes. Sentiment analysis and machine learning were applied in order to create a mobile application that is as accurate as possible in its results. Several types of algorithm were evaluated in order to select the most appropriate one, since the system works with information collected through the social networks Facebook and Twitter. The result was an application that uses machine learning to prevent and care for mental health in Peru, thus benefiting the citizens of society.

Author 1: Edwin Kcomt Ponce
Author 2: Melissa Flores Cruz
Author 3: Laberiano Andrade-Arenas

Keywords: Artificial intelligence; machine learning; mental health; scrum; sentiment analysis

PDF

Paper 97: Modeling and Predicting Blood Flow Characteristics through Double Stenosed Artery from Computational Fluid Dynamics Simulations using Deep Learning Models

Abstract: Establishing patient-specific finite element analysis (FEA) models for computational fluid dynamics (CFD) of double stenosed artery models involves time and effort, restricting physicians' ability to respond quickly in time-critical medical applications. Such issues might be addressed by training deep learning (DL) models to learn and predict blood flow characteristics using a dataset generated by CFD simulations of simplified double stenosed artery models with different configurations. When blood flow patterns are compared through an actual double stenosed artery model derived from IVUS imaging, it is revealed that the sinusoidal approximation of the stenosed neck geometry, which has been widely used in previous research, fails to effectively represent the effects of a real constriction. As a result, a novel geometric representation of the constricted neck is proposed which, in terms of a generalized simplified model, outperforms the former assumption. The sequential change in artery lumen diameter and flow parameters along the length of the vessel presented opportunities for the use of LSTM and GRU DL models. However, with the small dataset of short doubly constricted blood vessels, the basic neural network model outperforms the specialized RNNs for most flow properties. LSTM, on the other hand, performs better for predicting flow properties with large fluctuations, such as the varying blood pressure over the length of the vessels. Despite having good overall training and testing accuracies across all the properties of the vessels in the dataset, the GRU model underperforms for individual vessel flow predictions in all cases. The results also point to the need for individually optimized hyperparameters for each property in any model, rather than aiming for overall good performance across all outputs with a single set of hyperparameters.
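
A minimal sketch of the kind of sequence model described: an LSTM mapping the axial sequence of lumen diameters to a flow property at each position along the vessel. The input shapes and layer sizes are assumptions; the paper's dataset and exact feature set are not reproduced here.

```python
import torch
import torch.nn as nn

class FlowLSTM(nn.Module):
    """Predicts one flow property (e.g., pressure) per axial position."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, diameters):            # (batch, length, 1) lumen diameters
        out, _ = self.lstm(diameters)         # hidden state at every axial step
        return self.head(out)                 # (batch, length, 1) predicted property

model = FlowLSTM()
pred = model(torch.randn(8, 50, 1))          # 8 dummy vessels, 50 axial samples each
print(pred.shape)                            # torch.Size([8, 50, 1])
```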

Author 1: Ishat Raihan Jamil
Author 2: Mayeesha Humaira

Keywords: Double stenosed artery; CFD; neural network; LSTM; GRU

PDF

Paper 98: NLI-GSC: A Natural Language Interface for Generating SourceCode

Abstract: There are many different programming languages, and each has its own structure or way of writing code, so it becomes difficult to learn and frequently switch between them. For this reason, a person working with multiple programming languages needs to consult documentation frequently, which costs time and effort. In the past few years, there has been a significant increase in the number of papers published on this topic, each providing a unique solution to the problem. Many of these papers apply NLP concepts in unique configurations to get the desired results; some have used AI along with NLP to train a system to generate source code in a specific language, and some have trained the AI directly without pre-processing the dataset with NLP. All of these papers face two problems: a lack of a proper dataset for this particular application, and the fact that each approach can convert natural language into source code for only one specified programming language. The proposed system shows that a language-independent solution is a feasible alternative for writing source code without full knowledge of a programming language. It uses Natural Language Processing to convert natural language into programming-language-independent pseudo code using custom Named Entity Recognition, saving it in XML (eXtensible Markup Language) format as an intermediate step; then, using traditional programming, the system converts the generated pseudo code into programming-language-dependent source code. In this paper, another novel method is also proposed to create the dataset from scratch using predefined structures filled with predefined keywords, creating unique combinations for the training dataset.

Author 1: Aaqib Ahmed R.H. Ansari
Author 2: Deepali R. Vora

Keywords: Natural Language Processing (NLP); Natural Language Interface (NLI); Entity Recognition (ER); Artificial Intelligence (AI); source code generation; pseudocode generation

PDF

Paper 99: Improving Arabic Cognitive Distortion Classification in Twitter using BERTopic

Abstract: Social media platforms allow users to share thoughts, experiences, and beliefs. These platforms represent a rich resource for natural language processing techniques to make inferences in the context of cognitive psychology. Certain inaccurate and biased thinking patterns are defined as cognitive distortions, and detecting these distortions helps users restructure how they perceive thoughts in a healthier way. This paper proposes a machine-learning-based approach to improve the classification of cognitive distortions in Arabic content on Twitter. One of the challenges facing this task is text shortness, which results in sparse co-occurrence patterns and a lack of context information (semantic features). The proposed approach enriches the text representation by identifying the latent topics within tweets. Although classification is a supervised learning concept, the enrichment step uses unsupervised learning. The proposed algorithm utilizes transformer-based topic modeling (BERTopic), employing two types of document representations and performing averaging and concatenation to produce contextual topic embeddings. A comparative analysis of F1-score, precision, recall, and accuracy is presented. The experimental results demonstrate that our enriched representation outperformed the baseline models at different rates. These encouraging results suggest that using the latent topic distribution obtained from the BERTopic technique can improve the classifier's ability to distinguish between different CD categories.
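
A minimal sketch of the enrichment step with the bertopic and sentence-transformers packages (assumed tooling); it shows only how per-tweet topic distributions can be obtained and appended to document embeddings, not the paper's exact averaging and concatenation scheme, and the model name and placeholder corpus are illustrative.

```python
import numpy as np
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

tweets = ["...preprocessed Arabic tweet 1...", "...tweet 2..."]  # placeholder corpus
# (a real run needs many tweets for the topic model to fit)

# calculate_probabilities=True yields a full per-tweet topic distribution.
topic_model = BERTopic(language="multilingual", calculate_probabilities=True)
topics, probs = topic_model.fit_transform(tweets)        # probs: (n_docs, n_topics)

embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
doc_embeddings = embedder.encode(tweets)                 # contextual document vectors

# Concatenate topic distributions with document embeddings before classification.
enriched = np.hstack([doc_embeddings, probs])
```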

Author 1: Fatima Alhaj
Author 2: Ali Al-Haj
Author 3: Ahmad Sharieh
Author 4: Riad Jabri

Keywords: Arabic tweets; cognitive distortions’ classification; machine learning; social media; supervised learning; unsupervised learning; transformers; BERTopic; topic modeling

PDF

Paper 100: Investigation Framework for Cloud Forensics using Dynamic Genetic-based Clustering

Abstract: Cloud computing allows a pool of resources, such as storage, computation power, and communication bandwidth, to be shared and accessed by many users from different locations. High dependency on shared resources among different cloud users allows attackers to hide and commit crimes using cloud resources, and as a result cloud computing forensics becomes essential. Many solutions and frameworks for cloud computing forensics have been developed to deal with cloud-based crimes; however, many problems and issues face these proposed solutions and frameworks. In this paper, a new framework for cloud computing forensics is proposed to enhance the performance and accuracy of the investigation process by adding a new stage to the conventional stages. This new stage implements a new matching method based on the LSH (locality-sensitive hashing) algorithm. The evaluation results for the proposed framework show improved matching and accurate cluster retrieval in the collection process.
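
A minimal sketch of LSH-based matching using the datasketch package (an assumed choice; the abstract names only "the LSH algorithm"): similar evidence items hash into the same buckets, so candidate matches are retrieved without exhaustive pairwise comparison. The evidence tokens are invented for illustration.

```python
from datasketch import MinHash, MinHashLSH

def minhash(tokens):
    """Build a MinHash signature from a set of evidence tokens."""
    m = MinHash(num_perm=128)
    for t in tokens:
        m.update(t.encode("utf8"))
    return m

lsh = MinHashLSH(threshold=0.7, num_perm=128)   # Jaccard-similarity threshold

evidence = {"log_a": ["login", "root", "10.0.0.5"],
            "log_b": ["login", "root", "10.0.0.9"]}
for key, tokens in evidence.items():
    lsh.insert(key, minhash(tokens))

query = minhash(["login", "root", "10.0.0.5", "ssh"])
print(lsh.query(query))   # candidate artefacts likely belonging to the same cluster
```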

Author 1: Mohammed Y. Alkhanafseh
Author 2: Mohammad Qatawneh
Author 3: Wesam Almobaideen

Keywords: Cloud computing forensics; genetic clustering algorithms; genetic dynamic clustering; forensics framework; digital forensics

PDF

Paper 101: Towards a Low-Cost FPGA Micro-Server for Big Data Processing

Abstract: The development of big data in the era of data explosion, and the growing demand for micro-servers in place of traditional servers to handle lightweight tasks, has raised the question of how to integrate and make use of these two important domains. Over the same period, CPU performance growth has reached a certain maturity. To surpass these limits and reach high-performance computing, a current trend is to use multiple processing units or heterogeneous components in micro-servers to reduce computational complexity. Implementing big data processing algorithms on embedded heterogeneous architectures raises new challenges due to the constraints of system-on-chip architectures, which require special attention and impose new demands on our work. In this article, we focus on using an embedded FPGA accelerator to address this problem. Precisely, we prototype a micro-server for big data processing on an FPGA and compare its performance with a high-end GPGPU using existing benchmarks. The implementation on the FPGA uses OpenCL-based High-Level Synthesis (HLS) instead of a traditional hardware description language. The obtained results show that the FPGA is an interesting alternative and can be a promising platform for designing a micro-server when it comes to processing huge amounts of data, in particular with the emerging technologies for FPGA programming using the HLS approach and by adopting OpenCL optimization strategies.

Author 1: Mohamed Abouzahir
Author 2: Khalifa Elmansouri
Author 3: Rachid Latif
Author 4: Mustapha Ramzi

Keywords: Arria 10 FPGA (Field Programmable Gate Arrays); GPGPU (General Purpose Graphics Processing Unit) ; big data; parallel computing; (HLS) High-Level Synthesis

PDF

Paper 102: Assessing and Proposing Countermeasures for Cyber-Security Attacks

Abstract: Cyber-attacks on IT domain infrastructure directly affect the security of businesses' operational processes, potentially leading to system failure. Some industries are at higher risk than others due to the sensitivity of their data, including the transportation industry, which has recently moved from traditional data management to digitalization. This study aims to identify the main cyber threats in the transportation sector by analyzing related works and highlighting the main countermeasures used to respond to such threats and enhance overall cybersecurity. This paper presents a comprehensive cybersecurity risk assessment for transportation companies, identifying the most common attacks and proposing methods to minimize risk as much as possible. A risk assessment analysis that included previous cyberattack scenarios was prepared by industry experts. The results identify the most critical attacks on a transportation company's booking system and recommend suitable countermeasures to minimize the risk of those attacks.

Author 1: Ali Al-Zahrani

Keywords: Cyber-attacks; cyber-security; risk assessment; countermeasures

PDF

Paper 103: ASM-ROBOT: A Cyber-Physical Home Automation Controller with Memristive Reconfigurable State Machine

Abstract: In the next 5 to 10 years, digital Artificial Intelligence with Machine Circuit Learning Algorithms (MCLA) will become mainstream in complex automated robots. Its power concerns and ethical perspectives, including the issues of digital sensing, actuation, mobility, efficient process computation, and wireless communication, will require advanced neuromorphic process-variable controls. Existing home automation robots lack memristive associative memory. This work presents a Cyber-Physical Home Automation System (CPHAS) using a Memristive Reconfigurable Algorithmic State Machine (MRASM) chart. A process control architecture that supports Concurrent Wireless Data Streams and Power-Transfer (CWDSPT) is developed. Unlike legacy systems with power-splitting (PS) and time-switching (TS) controls, the MRASM-ROBOT explores granular wireless signal controls through an unmodulated high-power continuous wave (CW), transmitting continuous streams of process variables using an Orthogonal Space-Time Block Code (OSTBC) for interference reduction. The CWDSPT transmitter and receiver circuits for signal processing are designed with complexity and noise-error reduction during telemetry data decoding. Received signals are error-buffered while the status of control variables is gathered. Memristive neuromorphic transceiver circuits are introduced for computational acceleration in the design. The hardware circuit design is tested for system reliability, considering the derived schematic models for all process variables. Under small-range space diversity, the system demonstrated significant memory stabilization at synchronous iteration of the synaptic circuitry.

Author 1: Kennedy Chinedu Okafor
Author 2: Omowunmi Mary Longe

Keywords: Cloud computing; cyber-physical systems; complex robot; computational science; IoT; machine learning

PDF
