IJACSA Volume 13 Issue 3

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

View Full Issue

Paper 1: Helping People with Social Anxiety Disorder to Recognize Facial Expressions in Video Meetings

Abstract: According to previous research on social anxiety disorder (SAD) and facial expressions, those with SAD tend to view all faces as portraying negative emotions; thus, they are afraid of chatting with others when they cannot understand the real emotions being communicated. The advancement of facial recognition technology has given people opportunities to obtain more precise estimates of the emotions conveyed by facial expressions. This study investigates the practical effects of apps that detect facial expressions of emotion (e.g., AffdexMe) on people with SAD when communicating with other people through video chatting. We conducted empirical research to examine whether facial emotion recognition software can help people with SAD overcome the fear of chatting with others in video meetings and help them understand others’ emotions to reduce communication conflicts. This paper presents the design of an experiment to measure participants’ reactions when they video-chat with others using the AffdexMe application, followed by interviews to get in-depth feedback. The results show that people with SAD could better recognize the emotions of others using AffdexMe, resulting in more reasonable responses and better interaction during video chats. We also propose design suggestions to make the described approach better and more convenient to use. This research sheds light on the future design of emotion recognition in video chatting for people with disabilities or even ordinary users.

Author 1: Jieyu Wang
Author 2: Abdullah Abuhussein
Author 3: Hanwei Wang
Author 4: Tian Qi
Author 5: Xiaoyue Ma
Author 6: Amani Alqarni
Author 7: Lynn Collen

Keywords: Social phobia/social anxiety disorder; video meeting; facial expression recognition; emotion recognition; empirical research

PDF

Paper 2: Can the Futures Market be Predicted? Perspective based on AutoGluon

Abstract: This paper discusses how to improve the efficiency of predicting the Chinese futures market correlation coefficient. First, the prediction periods are divided by major events, and the predictability of the different periods is compared. Second, on this basis, an automated machine learning framework, AutoGluon, is applied to compare the predictive ability of deep learning models such as LSTM and GRU. The results demonstrate that: (1) compared with LSTM and GRU, AutoGluon can indeed improve prediction efficiency; (2) the changes in prediction error between periods can explain the influence of major events in the futures market; (3) although the predictive ability of many models declines over time, the performance of XGBoost is relatively stable, which can provide a useful tool for market participants.
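
A minimal sketch of the AutoGluon tabular workflow described above, assuming a regression target; the DataFrame here is synthetic and the column names (x0..x4, corr_coef) are placeholders, not the paper's features.

```python
# Sketch: AutoML regression with AutoGluon's TabularPredictor on placeholder data.
import numpy as np
import pandas as pd
from autogluon.tabular import TabularPredictor

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(500, 6)),
                  columns=[f"x{i}" for i in range(5)] + ["corr_coef"])
train, test = df.iloc[:400], df.iloc[400:]      # time-ordered split, no shuffling

predictor = TabularPredictor(label="corr_coef", problem_type="regression").fit(train)
print(predictor.leaderboard(test))              # ranks all models AutoGluon trained
```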

Author 1: YangChun Xiong
Author 2: ZiXuan Pan
Author 3: BaiFu Chen

Keywords: AutoGluon; LSTM; GRU; Chinese futures market

PDF

Paper 3: Stochastic Rounding for Image Interpolation and Scan Conversion

Abstract: The stochastic rounding (SR) function is proposed to evaluate and demonstrate the effects of stochastically rounding row and column subscripts in image interpolation and scan conversion. The proposed SR function is based on a pseudorandom number, enabling pseudorandom rounding up or down of any non-integer row and column subscripts. In addition, the SR function enables rounding up in any case where the subscript input is less than the pseudorandom number. The algorithm of interest is nearest-neighbor interpolation (NNI), which is traditionally based on the deterministic rounding (DR) function. Experimental simulation results are provided to demonstrate the performance of the NNI-SR and NNI-DR algorithms before and after applying smoothing and sharpening filters of interest. Additional results demonstrate the performance of NNI-SR and NNI-DR interpolated scan conversion algorithms in cardiac ultrasound videos.
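
A minimal sketch of stochastic rounding of fractional subscripts for nearest-neighbor interpolation (NNI-SR); this follows the common SR definition (round up with probability equal to the fractional part), which may differ in detail from the paper's exact rule.

```python
# Sketch: NNI upscaling with stochastically rounded row/column subscripts.
import numpy as np

rng = np.random.default_rng(0)

def sr(x):
    """Round each element up with probability equal to its fractional part."""
    frac = x - np.floor(x)
    return (np.floor(x) + (rng.random(x.shape) < frac)).astype(int)

def nni_sr(img, scale):
    rows = np.clip(sr(np.arange(img.shape[0] * scale) / scale), 0, img.shape[0] - 1)
    cols = np.clip(sr(np.arange(img.shape[1] * scale) / scale), 0, img.shape[1] - 1)
    return img[rows[:, None], cols[None, :]]

up = nni_sr(np.random.rand(8, 8), 2)   # 8x8 -> 16x16
print(up.shape)
```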

Author 1: Olivier Rukundo
Author 2: Samuel Emil Schmidt

Keywords: Cardiac ultrasound; deterministic rounding; image quality; interpolation; pseudorandom number; scan conversion; stochastic rounding; video quality

PDF

Paper 4: Analysis of the Elderly's Internet Accessed Time using XGB Machine Learning Model for Solving the Level of the Information Gap of the Elderly

Abstract: This study aims to construct machine learning models to predict the elderly's internet access time. These models can help resolve present and future information gaps by analyzing information-use factors such as internet access and mobile device usability. We analyzed 2,300 adults 55 years of age and older who participated in a national survey. This study followed a pipeline of five steps: primary data selection, data imputation to process missing data, feature ranking to identify the most important features, machine learning algorithms to develop classifier models, and model evaluation. We applied the Extremely Randomized Trees classifier (Extra Trees), the Random Forest classifier (RF), and the Extreme Gradient Boosting classifier (XGB) to rank the features and select the most important ones. All classification models were evaluated using the accuracy score. In our study, the most accurate model for predicting the internet access time of the elderly was the XGB model, and its evaluation scores are very positive. To address the information gap among the elderly, these models can be used to make predictions for elderly individuals, and solutions can then be offered to help them in a society with a strong information technology base.
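
A minimal sketch of the feature-ranking step with the three tree-based classifiers the study names; the synthetic data and column names below are placeholders, not the survey's variables.

```python
# Sketch: rank features by importance with Extra Trees, RF, and XGB.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2300, n_features=20, random_state=0)
X = pd.DataFrame(X, columns=[f"f{i}" for i in range(20)])

for model in (ExtraTreesClassifier(), RandomForestClassifier(), XGBClassifier()):
    model.fit(X, y)
    ranking = pd.Series(model.feature_importances_, index=X.columns)
    print(type(model).__name__, ranking.sort_values(ascending=False).head(), sep="\n")
```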

Author 1: Hung Viet Nguyen
Author 2: Haewon Byeon

Keywords: Information gap; machine learning; prediction model; elderly

PDF

Paper 5: Accessibility of Bulgarian Regional Museums Websites

Abstract: Web accessibility is an inclusive practice that ensures everyone, including people with disabilities, can successfully work and interact with websites and use all their functionality. The research in this paper investigates the web accessibility of regional museums in Bulgaria and the compliance of their websites with the recommendations of the Web Content Accessibility Guidelines 2.1 (WCAG 2.1), published by the World Wide Web Consortium (W3C). The study presents the results of the user experience of people with disabilities regarding the accessibility of the museums and the exhibits in them. A methodology for automated testing of web accessibility with several software tools is described. Results from these tests are analysed and visualized with graphical tools, and some important conclusions about the most common accessibility problems are given.

Author 1: Todor Todorov
Author 2: Galina Bogdanova
Author 3: Mirena Todorova–Ekmekci

Keywords: Accessibility; museums; web accessibility; visual disability; disabled person; testing; automatic validation tools; WCAG criteria

PDF

Paper 6: Can Ready-to-Use RNNs Generate “Good” Text Training Data?

Abstract: There is much research on state-of-the-art techniques for generating training data through neural networks. However, many of these techniques are not easily implemented or available due to factors such as the copyright of their research code. Meanwhile, other neural network codes are readily accessible for individuals to generate text data; this paper explores the quality of the text data generated by these ready-to-use neural networks for classification tasks. The paper's experiment showed that using the text data generated by a default-configured RNN to train a classification model can closely match baseline accuracy.

Author 1: Jia Hui Feng

Keywords: Neural networks; machine learning; text generation; classification; natural language processing; data augmentation; artificial intelligence

PDF

Paper 7: Spectrum Pricing in Cognitive Radio Networks: An Analysis

Abstract: Wireless technology is applied in developing various applications in different thrust areas, and because of this there is huge demand for the spectrum band. The available spectrum can be shared among primary users and secondary users, with secondary users utilizing the spectrum on a rental basis. In this competitive world, primary users provide good quality of service to end users in order to retain the spectrum band. Pricing is one of the vital components in Cognitive Radio Networks (CRN) for owning or renting the spectrum, which secondary users utilize when it is idle. This research work focuses on spectrum pricing for secondary users based on the price paid by the primary user. The primary users generate revenue, which is used for maintenance or the annual fees to be paid to the governing telecommunication department. Pricing and trading issues are among the research areas in allocating spectrum to primary users. This work focuses on providing spectrum to secondary users at a minimal price for utilization during a specified time. It highlights the open fact that spectrum is extremely scarce and prices are high and unaffordable for individuals; hence, primary users lease or rent out idle bandwidth to secondary users. To utilize the spectrum for a dedicated period of time, the secondary user pays usage charges to the primary user. In this research work, various methods are presented for determining the price for secondary users. The pricing components are analyzed using one-way ANOVA, which compares the values among groups. The results indicate that the group means are not all the same and that they are independent variables.
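
A minimal sketch of the one-way ANOVA comparison used on the pricing components; the group values below are placeholders, not the paper's data.

```python
# Sketch: one-way ANOVA across pricing-component groups with SciPy.
from scipy.stats import f_oneway

group_a = [12.1, 11.8, 12.5, 13.0]   # e.g. prices under pricing method A
group_b = [10.2, 10.9, 11.1, 10.4]
group_c = [14.3, 13.8, 14.9, 14.1]

stat, p = f_oneway(group_a, group_b, group_c)
print(f"F={stat:.3f}, p={p:.4f}")    # small p: not all group means are equal
```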

Author 1: Reshma C R
Author 2: Arun kumar B. R

Keywords: Price; game theory; analysis of price; trading; usage

PDF

Paper 8: Mobile-based Vaccine Registry to Improve Collection and Completeness of Maternal Immunization Data

Abstract: Immunization during pregnancy and infancy significantly reduces the morbidity and mortality of mothers, unborn fetuses, and young infants. Several studies show the merits of getting complete, quality, and accurate data on time to enhance policy and decision-making for societal or national development. Despite the efforts of nations to ensure the success of maternal immunization through electronic immunization registries, limited resources such as poor internet access, shortage of electricity, and digital illiteracy in developing countries hinder the goal of full immunization of mothers and infants. Since 2015, immunization programs in Tanzania have used internet-based information systems to collect immunization data from health facilities and submit them to the responsible authority for further decision-making, such as the allocation of vaccines to health facilities. Internet-based reporting is not fully achievable in developing countries due to its cost and resource constraints; thus, the responsible authority does not receive timely data to update its vaccine inventory and management activities, which often results in partial immunization due to the unavailability of vaccines in some facilities. This challenge can be addressed by an affordable system that instantly incorporates and transmits vaccination details, such as vaccine utilization and demand, from each health facility to the responsible authority with fewer resources. The present study proposes a USSD platform to enhance the receipt of real-time data by immunization authorities from health facilities with both poor and good internet connectivity at a lower cost. Most health facilities in Tanzania prefer to use both online and offline platforms for collecting and recording immunization data. As electronic immunization registries have been introduced in areas with limited resources, the use of online and offline platforms for data collection is recommended so that facilities can submit immunization data in real time without the delays caused by poor resource settings.

Author 1: Zubeda S. Kilua
Author 2: Mussa A. Dida
Author 3: Devotha N. Nyambo

Keywords: Maternal immunization; electronic immunization registry; USSD; data collection; limited resource setting

PDF

Paper 9: Personalized Desire2Learn Recommender System based on Collaborative Filtering and Ontology

Abstract: In this century, attention to recommendation systems (RS) has grown, especially in e-learning, to solve the problem of information overload in e-learning systems. E-learning providers also play a major role in helping learners find appropriate courses that fit their learning plan using Desire2Learn at Majmaah University. Although recommendation systems generally have a clear advantage in solving information-overload problems in various areas of e-business and making accurate recommendations, e-learning recommendation systems still have problems handling information about the characteristics of the learner, such as the appropriate learning style, the level of skills provided, and the student's level of education. In this paper, we propose a recommendation technique combining collaborative filtering and ontology to recommend courses to learners through Desire2Learn. The ontology integrates the characteristics of the learner and their classifications into the recommendation process, while collaborative filtering computes predictions and generates recommendations for e-learning. In addition, ontological knowledge is employed by the educational RS in the early stages, when no ratings are yet available, to mitigate the cold-start problem. The results of this study show that the proposed recommendation technique outperforms pure collaborative filtering in terms of personalization and recommendation accuracy.
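
A minimal sketch of the collaborative-filtering half of the approach: user-based CF with cosine similarity on a tiny learner-course rating matrix (placeholder values); the ontology integration and cold-start handling are not shown.

```python
# Sketch: user-based collaborative filtering with cosine similarity.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

ratings = np.array([[5, 3, 0, 1],     # rows: learners, cols: courses, 0 = unrated
                    [4, 0, 0, 1],
                    [1, 1, 0, 5],
                    [0, 1, 5, 4]], dtype=float)

sim = cosine_similarity(ratings)                 # learner-learner similarity
np.fill_diagonal(sim, 0)                         # ignore self-similarity
pred = sim @ ratings / (np.abs(sim).sum(axis=1, keepdims=True) + 1e-9)
print(pred[0])   # predicted scores for learner 0; recommend unrated courses with high scores
```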

Author 1: Walid Qassim Qwaider

Keywords: Collaborative filtering; Desire2Learn; ontology; recommender system (RS); personalized Desire2Learn; PDRS

PDF

Paper 10: Design Level Class Decomposition using the Threshold-based Hierarchical Agglomerative Clustering

Abstract: Refactoring activity is essential to maintain the quality of a software's internal structure, which decays under the impact of software changes and evolution. Class decomposition is one of the refactoring processes for maintaining internal quality. Mostly, the refactoring process is done at the source code level. Shifting from the source code level to the design level is necessary as a quick step to refactoring that stays close to the requirements. The design artifact has a higher abstraction level than the source code and carries limited information. The challenge is to define the new metrics needed for class decomposition using the design artifact's information. Syntactic and semantic information from the design artifact provides valuable data for the decomposition process, so class decomposition can be done at the design artifact level (class diagram) using syntactic and semantic information. The dynamic threshold-based Hierarchical Agglomerative Clustering produces more specific clusters that are considered to yield single-responsibility classes.
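
A minimal sketch of threshold-based hierarchical agglomerative clustering with SciPy; the random vectors stand in for the syntactic/semantic features extracted from the class diagram, and the fixed threshold here is a simplification of the paper's dynamic threshold.

```python
# Sketch: agglomerative clustering cut at a distance threshold.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

features = np.random.default_rng(0).random((12, 5))   # placeholder member-level vectors
Z = linkage(features, method="average")               # agglomerative merge tree
labels = fcluster(Z, t=0.8, criterion="distance")     # cut the tree at a threshold
print(labels)  # each cluster is a candidate single-responsibility class
```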

Author 1: Bayu Priyambadha
Author 2: Tetsuro Katayama

Keywords: Refactoring; design level refactoring; software refactoring; hierarchical clustering; class decomposition

PDF

Paper 11: A Comparative Analysis of Multi-Criteria Decision Making Techniques for Ranking of Attributes for e-Governance in India

Abstract: e-Governance is the system in which all public services are made available on an online platform with the help of a secured cyber architecture. Governments, along with the people, have praised the ability of Information and Communications Technology (ICT) around the world to stimulate various vital sectors of the economy. Advanced technologies have provided fast, inexpensive, and convenient methods of interaction and communication. In various developing and developed countries, these newly adopted technologies have shown a direct positive impact on the country's productivity and efficiency, and thus lead to rapid development. This work presents a comparative study of Multi-Criteria Decision Making (MCDM) techniques, namely the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), the Weighted Sum Model (WSM), and the Weighted Product Model (WPM), to rank the attributes responsible for better decision making when implementing successful e-Governance in a developing country, India.
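
A minimal sketch of the three MCDM techniques compared in the paper, applied to a small benefit-criteria decision matrix (rows: attributes, columns: criteria); the matrix and weights are placeholders, not the paper's data.

```python
# Sketch: attribute ranking with WSM, WPM, and TOPSIS.
import numpy as np

D = np.array([[7., 9., 9.], [8., 7., 8.], [9., 6., 8.], [6., 7., 8.]])
w = np.array([0.5, 0.3, 0.2])                     # criteria weights (sum to 1)

wsm = (D * w).sum(axis=1)                         # weighted sum model
wpm = np.prod(D ** w, axis=1)                     # weighted product model

R = D / np.sqrt((D ** 2).sum(axis=0))             # TOPSIS: vector normalization
V = R * w
best, worst = V.max(axis=0), V.min(axis=0)        # all criteria treated as benefits
d_best = np.linalg.norm(V - best, axis=1)
d_worst = np.linalg.norm(V - worst, axis=1)
topsis = d_worst / (d_best + d_worst)             # closeness to the ideal solution

for name, score in (("WSM", wsm), ("WPM", wpm), ("TOPSIS", topsis)):
    print(name, np.argsort(-score) + 1)           # attribute ranking per technique
```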

Author 1: Bhaswati Sahoo
Author 2: Rabindra Narayana Behera
Author 3: Prasant Kumar Pattnaik

Keywords: e-Governance; information and communication technology; multi-criteria decision making; ranking; technique for order of preference by similarity to ideal solution (TOPSIS); usability; weighted sum model (WSM); weighted product model (WPM)

PDF

Paper 12: Comparative Analysis of Lexicon and Machine Learning Approach for Sentiment Analysis

Abstract: Opinion mining and text analysis are other terms for sentiment analysis. The fundamental objective is to extract meaningful information and data from unstructured text using natural language processing, statistical, and linguistic methodologies. This is then used to derive qualitative and quantitative results on a scale of ‘positive’, ‘neutral’, or ‘negative’ for the overall sentiment. In this research, we worked with both approaches: machine learning and an unsupervised lexicon-based algorithm for sentiment calculation and model performance. Stochastic gradient descent (SGD) is utilized for optimizing the support vector machine (SVM) and logistic regression, and the AFINN and VADER lexicons are used for the lexicon models. Both TF-IDF and bag-of-words features are used for classification. The dataset consists of around 20k TripAdvisor hotel reviews; cleaned and preprocessed data were used in our work. A classifier's accuracy is measured using evaluation metrics. Of the two classifiers used to assess machine learning accuracy, the support vector machine is the more accurate: its classification rate was 95.2 percent with bag-of-words and 96.3 percent with TF-IDF. Among the lexicon models, VADER outperforms AFINN with an accuracy of 88.7% versus 86.0%. Comparing the supervised and unsupervised lexicon approaches, the support vector machine model performs best, with a TF-IDF accuracy of 96.3 percent against the VADER lexicon accuracy of 88.7%.
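
A minimal sketch of the best-performing configuration: a linear SVM trained with SGD on TF-IDF features (hinge loss in SGDClassifier yields a linear SVM); the review texts and labels are placeholders for the TripAdvisor data.

```python
# Sketch: TF-IDF + SGD-optimized linear SVM for sentiment classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

texts = ["great room and friendly staff", "dirty bathroom, terrible service",
         "average stay, nothing special", "loved the breakfast and the view"]
labels = ["positive", "negative", "neutral", "positive"]

clf = make_pipeline(TfidfVectorizer(), SGDClassifier(loss="hinge"))
clf.fit(texts, labels)
print(clf.predict(["the staff was rude but the view was great"]))
```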

Author 1: Roopam Srivastava
Author 2: P. K. Bharti
Author 3: Parul Verma

Keywords: NLP; sentiment analysis; SGD (stochastic gradient descent); machine learning; TFIDF; BoW; VADER; SVM; AFINN

PDF

Paper 13: COVID-19 Detection from X-Ray Images using Convoluted Neural Networks: A Literature Review

Abstract: This paper reviews a host of peer-reviewed articles related to the detection of COVID-19 infection from X-ray images using Convoluted Neural Network (CNN) approaches. It stems from the background of a pandemic that has hit the world and negatively affected all spheres of life. The currently available testing mechanisms are invasive, expensive, time-consuming, and not available everywhere. The paper considered 33 main articles supported by several other articles. The measurement metrics considered in this review are accuracy, precision, recall, F1-score, and specificity. The inclusion criteria for studies were that the article should have been written after the pandemic began, deliberate on CNNs, and attempt to detect the disease from X-ray images. Findings suggest that transfer learning, support vector machines, long short-term memory, and other CNN approaches are highly effective in predicting the likelihood of the disease from X-rays. However, multi-class predictions tended to score lower in accuracy than their binary counterparts, and data augmentation significantly improved the performance of the models. Hence, the paper concluded that all reviewed approaches are effective. It recommends that analysts integrate transfer learning procedures into the model formulation process, engage in data augmentation practices, and focus on classifying data based on binary classes.

Author 1: Othman A. Alrusaini

Keywords: Convoluted neural networks; COVID-19; chest x-ray; transfer learning; support vector machines; long short-term memory

PDF

Paper 14: Development of Mathematics Web-based Learning on Table Set-Up Activities

Abstract: This paper discusses the product design and expert validation of web-based mathematics learning built around table set-up activities in the hospitality industry. This research is of the Research and Development type, which aims to develop a new product. Four experts were involved in this study: two experts in the field of learning technology as media validators and two experts in the field of mathematics education as material validators. The validation process used a questionnaire prepared as the research instrument. The research produced web-based mathematics learning consisting of five parts: an initial part recalling Cartesian coordinates, a translation sub-material section, a reflection sub-material section, a rotation sub-material section, and a dilatation sub-material section. In the expert review, the average score from the material validators was eighty-five percent (very good), and the average score from the media validators was ninety-five percent (also very good). This shows that this web-based mathematics learning can be considered proper for use.

Author 1: Gusti Ayu Dessy Sugiharni
Author 2: I Made Ardana
Author 3: I Gusti Putu Suharta
Author 4: I Gusti Putu Sudiarta

Keywords: Development; web-based learning; mathematics; table set-up; activities

PDF

Paper 15: A Risk Management Framework for Large Scale Scrum using Metadata Outer Request Management Methodology

Abstract: Recently, most software projects have naturally become Distributed Agile Development (DAD) projects. The main benefits of DAD projects are cost savings and proximity to markets due to their distributed nature, as in Large-Scale Scrum (LeSS). Developing LeSS projects gives rise to challenges in risk management, especially team collaboration challenges, since there is no standardized process for teams to communicate collaboratively. Team collaboration and knowledge sharing are vital resources for a large Scrum team's success; hence, a dynamic technique that facilitates team collaboration in the LeSS environment is necessary. This paper proposes a risk management framework for LeSS using metadata outer requests. The proposed framework manages the outer requests among the distributed teams, thereby avoiding losses in team collaboration and mitigating risks and threats to project completion. It also contributes to the exchange of team skills and experience. The framework is evaluated by applying it to two different case studies of large-scale Scrum projects, and the evaluation results prove its effectiveness.

Author 1: Rehab Adel
Author 2: Hany Harb
Author 3: Ayman Elshenawy

Keywords: Distributed agile development; knowledge sharing; risk management; large scale scrum; metadata outer request management

PDF

Paper 16: Technique for Balanced Load Balancing in Cloud Computing Environment

Abstract: Resource sharing by means of load balancing in cloud computing environments enables efficient utilization of cloud resources and higher overall throughput. However, poor load balancing algorithms may cause some virtual machines to starve for additional cloud resources, and a poorly crafted mechanism for priority-oriented load balancing may leave low-priority virtual machines starving. We suggest an improved resource sharing mechanism for load balancing in cloud computing environments that avoids starvation. To provide efficient load balancing, the proposed resource sharing technique takes the virtual machines' priority levels into consideration. An implementation of the suggested load balancing algorithm in a cloud environment reduces the waiting time of starving virtual machines that are looking for additional resources. The proposed algorithm has been deployed on a prototype cloud computing infrastructure testbed established with the open-source software OpenStack, supported in the backend by a minimal setup of the open-source CentOS Linux operating system. Experimental results on the prototype infrastructure indicate a reduction in the waiting time of overloaded, starving virtual machines. The proposed mechanism accomplishes priority-oriented and starvation-free resource sharing for load balancing in cloud computing environments, and in the future it can be further enhanced to implement load balancing in collaborative cloud computing environments.

Author 1: Narayan A. Joshi

Keywords: Cloud environment; resource sharing; load balancing; starvation; priority oriented resource allocation

PDF

Paper 17: A Heuristic Feature Selection in Logistic Regression Modeling with Newton Raphson and Gradient Descent Algorithm

Abstract: Decision-making can always be expressed as binary choices, such as success or failure, acceptance or rejection, high or low, heavy or light, and so on. Based on known predictor feature values, a classification model can be used to predict an unknown categorical value. The logistic regression model is a commonly used classification approach in a variety of scientific domains. The goal of this research is to create a logistic regression model with a heuristic approach for selecting input features and to compare the Newton-Raphson and gradient descent (GD) algorithms for estimating its parameters. Among the predictor features, four met the criterion of being both dependent on the target and independent of one another. Using the Chi-square test on data from Malang, Indonesia, four significant characteristics that increase the incidence of preeclampsia in pregnant women were identified: age (X1), parity (X2), history of hypertension (X3), and salty food consumption (X6). The logistic regression model developed using the gradient descent approach had a lower risk of error than the model generated using the Newton-Raphson algorithm: the gradient descent model has a precision of 98.54 percent and an F1-score of 97.64 percent, while the Newton-Raphson model has a precision of 86.34 percent and an F1-score of 72.55 percent.
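
A minimal sketch contrasting the two parameter-estimation rules for logistic regression on synthetic data; the step size and iteration counts are illustrative, not the paper's settings.

```python
# Sketch: gradient ascent vs. Newton-Raphson updates for logistic regression.
import numpy as np

rng = np.random.default_rng(0)
X = np.c_[np.ones(200), rng.normal(size=(200, 4))]    # intercept + 4 features
true_beta = np.array([0.5, 1.0, -2.0, 0.0, 1.5])
y = (rng.random(200) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

beta_gd = np.zeros(5)
for _ in range(5000):                                 # gradient-based update
    beta_gd += 0.01 * X.T @ (y - sigmoid(X @ beta_gd))

beta_nr = np.zeros(5)
for _ in range(10):                                   # Newton-Raphson update
    p = sigmoid(X @ beta_nr)
    W = np.diag(p * (1 - p))                          # Hessian weights
    beta_nr += np.linalg.solve(X.T @ W @ X, X.T @ (y - p))

print(beta_gd.round(2), beta_nr.round(2))             # both approach true_beta
```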

Author 1: Samingun Handoyo
Author 2: Nandia Pradianti
Author 3: Waego Hadi Nugroho
Author 4: Yusnita Julyarni Akri

Keywords: Classification model; feature selection; gradient descent; logistic regression; Newton Raphson

PDF

Paper 18: HEMClust: An Improved Fraud Detection Model for Health Insurance using Heterogeneous Ensemble and K-prototype Clustering

Abstract: Health insurance plays an integral part in society's economic well-being, and the existence of fraud creates innumerable challenges in providing affordable health care for the people. In order to reduce the losses incurred due to fraud, a powerful model is needed to accurately predict fraud in the data. The purpose of this paper is to implement a more sophisticated machine learning technique for fraud detection: HEMClust (Heterogeneous Ensemble Model with Clustering). The first phase of the model improves the quality of claims data through effective preprocessing. The second stage addresses overlapping instances in provider specialties by grouping them using k-prototype clustering. The final stage builds the model using a heterogeneous stacking ensemble that performs classification on multiple levels, with four base learners at level 0 and a meta learner at level 1. The results were assessed using evaluation metrics and statistical tests such as the Friedman and Nemenyi tests to compare the performance of the base classifiers against the proposed HEMClust. The empirical results show that HEMClust produced 94% and 96% overall precision-recall rates on the dataset, an increase of 45% to 50% in the fraud detection rate for each class in the data.
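
A minimal sketch of the level-0/level-1 stacking structure; the paper's specific base learners are not listed in the abstract, so the four below are assumptions, and the k-prototype clustering stage and claims data are not shown.

```python
# Sketch: heterogeneous stacking with four base learners and a meta learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Imbalanced synthetic data, loosely mimicking rare fraud labels.
X, y = make_classification(n_samples=1000, weights=[0.9], random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier()), ("dt", DecisionTreeClassifier()),
                ("nb", GaussianNB()), ("knn", KNeighborsClassifier())],  # level 0
    final_estimator=LogisticRegression())                                # level 1
stack.fit(X, y)
print(stack.score(X, y))
```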

Author 1: Shamitha S Kotekani
Author 2: V Ilango

Keywords: Fraud detection; health insurance; ensemble learners; meta-level learning; clustering; classification algorithms

PDF

Paper 19: An Authorization Framework for Preserving Privacy of Big Medical Data via Blockchain in Cloud Server

Abstract: In recent years, cloud-based medical record sharing has greatly improved the process of researching diseases and diagnosing patients. However, since cloud systems are centralized, there is serious concern about data security and privacy. Blockchain technology is viewed as a promising way to deal with privacy issues and data security because of its distinctive features of distributed ledgers, secrecy, verifiability, and enhanced security. The literature shows significant work on integrating blockchain technology with cloud systems for managing and sharing healthcare data. Analysis shows that previous works depend primarily on a centralized data storage approach, which raises privacy concerns; they also do not emphasize handling big medical data and lack reliable end-to-end security features. This paper presents an authorization framework for ensuring data security and privacy preservation using blockchain technology with IPFS as a decentralized file storage and sharing system. The proposed study devises a proof-of-replication algorithm using smart contracts to provide a better access control mechanism. The implementation of the proposed framework is based on symmetric encryption and the Ethereum blockchain platform. The study outcome illustrates the efficiency and availability of the proposed scheme compared to the typical cloud-based blockchain method.

Author 1: Hemanth Kumar N P
Author 2: Prabhudeva S

Keywords: Medical data; cloud; blockchain; data sharing; access control; security; privacy preservation

PDF

Paper 20: Dynamic and Optimized Routing Approach (DORA) in Vehicular Ad hoc Networks (VANETs)

Abstract: Vehicular Ad hoc Networks (VANETs), a subfield of ad hoc networks, are one of the significant areas of research, mainly focused on improving road safety and reducing the total number of accidents. The network has no central coordination, nodes are mobile, and the topology is dynamic, so the routing process is a big challenge: messages must be delivered with small overhead and delay. Routing is a tedious task because huge changes occur in the network topology while data packets must be delivered within a limited period. Many existing routing protocols have been introduced in VANETs to overcome various issues, but they are not efficient enough to overcome all the routing issues. Routing has a huge impact on other parameters such as data transmission rate (DTR), packet delivery ratio (PDR), packet drop ratio (PDRatio), average propagation delay (APD), and throughput. In this paper, the dynamic and optimized routing approach (DORA) is introduced in VANETs to overcome these issues and improve performance, measured through DTR, PDR, and PDRatio. Comparisons among Ant Colony Optimization (ACO), improved distance-based ant colony optimization routing (IDBACOR), and DORA are shown.

Author 1: Satyanarayana Raju K
Author 2: Selvakumar K

Keywords: Data transmission rate (DTR); packet delivery ratio (PDR); packet drop ratio (PDRatio); throughput

PDF

Paper 21: Comparative Analysis of RSA and NTRU Algorithms and Implementation in the Cloud

Abstract: The emergence of cloud computing platforms makes it easier to connect and collaborate globally without setting up additional infrastructure such as servers and data centers, but it also gives rise to threats against the security of digital information. These security threats can be countered with cryptography; examples of cryptographic algorithms are RSA and NTRU. The main concern of this research is how to perform a comparative analysis between the asymmetric cryptographic algorithms RSA (Rivest-Shamir-Adleman) and NTRU (Nth-Degree Truncated Polynomial Ring) and their implementation in cloud storage. Comparing the performance of the RSA and NTRU algorithms at security levels of 80, 112, 128, 160, 192, and 256 bits on runs of 5 to 1000 data items shows that the running times of the NTRU algorithm's key generation and encryption processes are more efficient than those of RSA. Wiener's attack was tested on the RSA algorithm and LLL lattice basis reduction on the NTRU algorithm; the NTRU algorithm showed a more secure level of resilience, so NTRU is the more recommended algorithm for cloud storage security.

Author 1: Bambang Harjito
Author 2: Henny Nurcahyaning Tyas
Author 3: Esti Suryani
Author 4: Dewi Wisnu Wardani

Keywords: Attacks; privacy; cryptography; RSA; NTRU; cloud storage

PDF

Paper 22: A Conceptual Framework for using Big Data in Egyptian Agriculture

Abstract: Agriculture is a typical contributor to the Egyptian economy, which could benefit from the comprehensive capabilities of Big Data (BD). In this work, we review the role of BD in the agriculture sector by responding to two main questions: 1) which techniques, frameworks, and data types have been adopted, and 2) what gaps exist in the data sources, modeling, and analysis techniques. The contribution of this paper can be outlined in four main aspects: 1) popular BD frameworks are summarized and thoroughly compared; 2) the potential data sources are described and characterized; 3) a conceptual framework for Egyptian agricultural practice based on BD analytics is introduced; and 4) challenges and extensive recommendations are provided, which could guide future development.

Author 1: Sayed Ahmed
Author 2: Amira S. Mahmoud
Author 3: Eslam Farg
Author 4: Amany M. Mohamed
Author 5: Marwa S. Moustafa
Author 6: Mohamed A. E. AbdelRahman
Author 7: Hisham M. AbdelSalam
Author 8: Sayed M. Arafat

Keywords: Agriculture; big data (BD); big data paradigm; BD processing framework; conceptual BD framework; geographical information systems (GIS); Hadoop; spark

PDF

Paper 23: Design and Development for a Vehicle Tracking System

Abstract: In recent years, vehicle thefts have increased at an alarming rate around the world, yet existing vehicle tracking devices have certain limitations, including the inability to determine whether the vehicle is on the right route. To address this problem, this study focused on the design and development of a vehicle tracking prototype with route detection, an emergency button, and a STATUS command to monitor the current location of the vehicle. An Arduino Mega 2560, a SIM900 Global System for Mobile communication (GSM) module, and a NEO-6M Global Positioning System (GPS) module were used to develop the prototype. The GPS module, push buttons, and SMS command served as inputs. The Arduino Mega 2560 was programmed with an algorithm to determine whether the device deviated from its route, detect whether the emergency button was pressed, and detect whether a STATUS command was received. The system sends an SMS if the vehicle deviates from its path, the emergency button is pressed, or a STATUS command from the operator is received. Results showed that, after several trials, the prototype successfully performed its functional objectives. The prototype was limited to a prototyping-grade GPS module, which used a built-in antenna and took time to connect to satellites; it is recommended to use an industrial-grade GPS module and connect an external antenna to improve signal strength.

Author 1: Tim Abe P. Andutan
Author 2: Rosanna C. Ucat

Keywords: Arduino; global positioning system; GPS; global system for mobile communications; GSM; vehicle tracking; route deviation detection

PDF

Paper 24: EMOGAME: Digital Games Therapy for Older Adults

Abstract: EmoGame is a cognitive and emotional game for helping older adults who experience Mild Cognitive Impairment (MCI). EmoGame was developed with a memory therapy approach, which can support cognition and positive emotions by introducing objects through pictures, such as pictures of old objects, and old music from the users' earlier years. This study aims to build a game application for older adults with MCI on the Android platform to support improved cognitive abilities and positive emotions. The app has two games: a memory puzzle and a memory exploration. The study uses a mixture of quantitative and qualitative methods: questionnaires, 3E diary entries (Expressing, Experiences and Emotions), and interviews. Respondents aged 50 years and above were selected through the Mini-Mental State Examination (MMSE). The findings show that memory therapy through digital games can help older adults increase positive emotions. In the diary and 3E entries, respondents' feelings and experiences described positive emotions (happy, smiling, liking). The PANAS questionnaire (Positive and Negative Affect Schedule) was administered pre- and post-test to measure positive emotions with EmoGame. Analysis of mean scores showed positive emotion at pre-interaction (M = 3.39, SD = 0.89) versus post-interaction (M = 4.02, SD = 0.97), meaning there was a significant difference in the positive emotions of the older adults. The memory therapy applied in the EmoGame app is effective in helping to reduce memory decline and promote positive emotions in older adults with MCI.

Author 1: Nita Rosa Damayanti
Author 2: Nazlena Mohamad Ali

Keywords: Digital games; therapy; older adults; mild cognitive impairment

PDF

Paper 25: A Paradigm for DoS Attack Disclosure using Machine Learning Techniques

Abstract: Cybersecurity is one of the main concerns of governments, businesses, and even individuals, because a vast number of attacks target their core assets. One of the most dangerous attacks is the Denial of Service (DoS) attack, whose primary goal is to make resources unavailable to legitimate users. In general, Intrusion Detection and Prevention Systems (IDPS) hinder DoS attacks using advanced techniques. This study develops a model to detect DoS attacks using machine learning techniques. Utilizing the NSL-KDD dataset, the suggested DoS attack detection model was investigated with the Naive Bayes, K-nearest neighbor, Decision Tree, and Support Vector Machine algorithms, and the four techniques were compared using the Accuracy, Recall, Precision, and Matthews Correlation Coefficient (MCC) metrics. In general, all techniques perform well with the proposed model; however, the Decision Tree technique outperformed all the others in all four metrics, while the Naive Bayes technique showed the lowest performance.
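
A minimal sketch of the four-classifier comparison with the four metrics named above; synthetic binary data stands in for the preprocessed NSL-KDD features.

```python
# Sketch: compare NB, K-NN, DT, and SVM on accuracy, recall, precision, and MCC.
from sklearn.datasets import make_classification
from sklearn.metrics import (accuracy_score, matthews_corrcoef,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for clf in (GaussianNB(), KNeighborsClassifier(), DecisionTreeClassifier(), SVC()):
    pred = clf.fit(Xtr, ytr).predict(Xte)
    print(type(clf).__name__,
          round(accuracy_score(yte, pred), 3), round(recall_score(yte, pred), 3),
          round(precision_score(yte, pred), 3), round(matthews_corrcoef(yte, pred), 3))
```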

Author 1: Mosleh M. Abualhaj
Author 2: Ahmad Adel Abu-Shareha
Author 3: Mohammad O. Hiari
Author 4: Yousef Alrabanah
Author 5: Mahran Al-Zyoud
Author 6: Mohammad A. Alsharaiah

Keywords: DoS attack; machine learning; NSL-KDD; IDPS systems

PDF

Paper 26: A Prediction Error Nonlinear Difference Expansion Reversible Watermarking for Integrity and Authenticity of DICOM Medical Images

Abstract: It is paramount to ensure the integrity and authenticity of medical images in telemedicine. This paper proposes an imperceptible and reversible Medical Image Watermarking (MIW) scheme based on image segmentation, image prediction, and nonlinear difference expansion for the integrity and authenticity of medical images and the detection of both intentional and unintentional manipulations. The metadata from the Digital Imaging and Communications in Medicine (DICOM) file constitutes the authentication watermark, while the integrity watermark is computed from the Secure Hash Algorithm (SHA)-256. The two watermarks are combined and compressed using the Lempel-Ziv 77 (LZ77) algorithm. The scheme takes advantage of the large smooth areas prevalent in medical images: it predicts smooth regions with zero or near-zero error, while non-smooth areas are predicted with large error values. The binary watermark is encoded into and extracted from the zero-prediction-error locations using a nonlinear difference expansion, and it is concentrated more on the Region of Non-Interest (RONI) than the Region of Interest (ROI) to ensure high visual quality while maintaining high capacity. The paper also presents a separate low-degradation side information processing algorithm to handle overflow. Experimental results show that the scheme is reversible and has remarkable imperceptibility and capacity, comparable to current works reported in the literature.
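
For intuition, a minimal sketch of classic (linear) difference expansion on a single pixel pair (Tian's method); the paper's scheme embeds in prediction errors with a nonlinear variant and overflow handling, which are not reproduced here.

```python
# Sketch: reversibly hide one bit in a pixel pair via difference expansion.
def de_embed(x, y, bit):
    l, h = (x + y) // 2, x - y          # average and difference
    h2 = 2 * h + bit                    # expand the difference, hide the bit in its LSB
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    l, h2 = (x2 + y2) // 2, x2 - y2
    bit, h = h2 & 1, h2 >> 1            # recover the bit, restore the difference
    return bit, l + (h + 1) // 2, l - h // 2

x2, y2 = de_embed(206, 201, 1)
print(de_extract(x2, y2))               # -> (1, 206, 201): bit and original pixels
```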

Author 1: David Muigai
Author 2: Elijah Mwangi
Author 3: Edwell T. Mharakurwa

Keywords: Medical Image Watermarking (MIW); Digital Imaging and Communication in Medicine (DICOM); region of interest (ROI) and region of non-interest (RONI); prediction error (PE); nonlinear difference expansion (NDE); authenticity; integrity

PDF

Paper 27: Deep Learning Framework for Physical Internet Hubs Inbound Containers Forecasting

Abstract: This article presents a framework for forecasting inbound containers at physical internet hubs based on deep learning and time series analysis. Inbound container forecasting is essential for planning, scheduling, and resource allocation. The proposed framework consists of three main phases. First, the historical inbound transactions are processed to find the training window size (lags) using the autocorrelation function (ACF) and partial autocorrelation function (PACF). Second, the framework uses a convolutional neural network (CNN) and a recurrent neural network (RNN) to train on the historical time series data in two techniques, using both univariate and multivariate time series analysis to explore the best forecasting outcomes. Last, the framework measures accuracy and compares the forecasting output of both approaches using the mean absolute error (MAE) metric. The experiments showed that the RNN forecasts univariate inbound transactions with a total MAE of 5.0954, versus 5.0236 for the CNN, while the CNN outperforms on multivariate inbound container forecasting with an MAE of 0.7978. All results have been compared with the autoregressive integrated moving average (ARIMA) and support vector regression (SVR) models.
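
A minimal sketch of the first phase, choosing candidate lags from the ACF/PACF; the Poisson series below is a placeholder for the hub's inbound counts, and the 95% band is the usual large-sample approximation.

```python
# Sketch: pick training-window lags where ACF/PACF exceed the confidence band.
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

series = np.random.default_rng(0).poisson(50, size=500).astype(float)
conf = 1.96 / np.sqrt(len(series))                 # approximate 95% band

acf_vals = acf(series, nlags=48)
pacf_vals = pacf(series, nlags=48)
candidate_lags = [k for k in range(1, 49)
                  if abs(acf_vals[k]) > conf or abs(pacf_vals[k]) > conf]
print(candidate_lags)                              # feed these as the training window
```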

Author 1: El-Sayed Orabi Helmi
Author 2: Osama Emam
Author 3: Mohamed Abdel-Salam

Keywords: Physical internet hubs (π hubs); deep learning; convolutional neural network (CNN); recurrent neural network (RNN); time series forecasting

PDF

Paper 28: An Intelligent Anti-Jamming Mechanism against Rule-based Jammer in Cognitive Radio Network

Abstract: The Cognitive Radio Network (CRN) has become a promising technology for overcoming the problem of insufficient spectrum utilization. However, the CRN is susceptible to the well-known jamming attack, which reduces its spectrum utilization efficiency. Existing jamming identification schemes and their countermeasures typically require prior statistical information about the communication channel and the jamming pattern, which is quite an impractical assumption in a real context; existing schemes are also mainly associated with high computational costs and communication overhead. To address this problem, this manuscript presents a non-device-centric and efficient anti-jamming mechanism, driven by reinforcement learning techniques, that achieves higher spectrum utilization. The proposed anti-jamming mechanism is modeled in two implementation phases. First, a customized environment is designed as a single wideband cognitive-communication channel in which a jammer signal sweeps across the entire band of interest. Second, an intelligent agent is designed based on a model-free off-policy algorithm that operates over the same spectrum band. The agent uses its frequency-band knowledge discovery capability to learn frequency band selection and preference strategies to detect and avoid jamming signals, maximizing its successful transmission rate. The simulation results show that the proposed anti-jamming mechanism can effectively eliminate interference and is efficient in power usage and Signal-to-Noise Ratio (SNR) compared to other existing advanced algorithms.
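
A minimal sketch assuming the model-free off-policy learner is tabular Q-learning (the canonical instance; the paper's actual agent, rewards, and environment are richer than this toy), with a deterministic sweeping jammer over a small set of channels.

```python
# Sketch: Q-learning agent avoiding a sweeping jammer across channels.
import numpy as np

n_channels, episodes = 10, 5000
rng = np.random.default_rng(0)
Q = np.zeros((n_channels, n_channels))   # state: last jammed channel; action: channel to use
alpha, gamma, eps = 0.1, 0.9, 0.1

jammed = 0
for _ in range(episodes):
    a = rng.integers(n_channels) if rng.random() < eps else Q[jammed].argmax()
    nxt = (jammed + 1) % n_channels      # jammer sweeps across the band
    r = -1.0 if a == nxt else 1.0        # transmission fails on the jammed channel
    Q[jammed, a] += alpha * (r + gamma * Q[nxt].max() - Q[jammed, a])
    jammed = nxt

print(Q.argmax(axis=1))  # learned channel choice given the last jammed channel
```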

Author 1: Sudha Y
Author 2: Sarasvathi V

Keywords: Anti-jamming; agent; cognitive radio network; reinforcement learning

PDF

Paper 29: Rainfall Forecasting using Support Vector Regression Machines

Abstract: Heavy rainfall as a consequence of climate change has immensely impacted the ecology, the economy, and the lives of many. With the variety of available predictive tools, it is imperative that performance analysis of rainfall forecasting models be properly conducted as a measure for disaster preparedness and mitigation. A Support Vector Regression Machine (SVRM) was utilized in predicting the rainfall of a city in a tropical country using a dataset of 4 years and 17 months captured from an automated rain gauge (ARG) in the southern Philippines. The work involved identifying the cost and gamma parameters that capture the relationship between past and present values, determining the optimal parameter values to improve prediction accuracy, and evaluating the forecasting model. The SVRM model that utilized a Radial Basis Function (RBF) kernel with the parameters c=100, g=1, e=0.1, and p=0.001, and a lag variable using 12-hour reports with lags up to 672 timesteps (i-672), demonstrated a Mean Square Error (MSE) of 3.461315. With forecasts close to the actual rainfall values, the results of this study show that SVRM has the potential to be a viable rainfall forecasting model given proper data preparation, kernel function selection, parameter value selection, and lag variable selection.
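
A minimal sketch of the SVRM setup with the reported RBF parameters (C=100, gamma=1, epsilon=0.1) on lagged placeholder data; the series is synthetic and the 672-timestep lag structure is truncated to 24 lags for brevity.

```python
# Sketch: RBF-kernel SVR on lagged rainfall values.
import numpy as np
from sklearn.svm import SVR

rain = np.abs(np.random.default_rng(0).normal(size=1000))  # placeholder rainfall series
lags = 24
X = np.array([rain[i:i + lags] for i in range(len(rain) - lags)])
y = rain[lags:]

model = SVR(kernel="rbf", C=100, gamma=1, epsilon=0.1).fit(X[:-100], y[:-100])
pred = model.predict(X[-100:])
print(np.mean((pred - y[-100:]) ** 2))                     # MSE on the held-out tail
```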

Author 1: Lemuel Clark Velasco
Author 2: Johanne Miguel Aca-ac
Author 3: Jeb Joseph Cajes
Author 4: Nove Joshua Lactuan
Author 5: Suwannit Chareen Chit

Keywords: Support vector regression machines; support vector machines; rainfall forecasting

PDF

Paper 30: Using Decision Tree Classification Model to Predict Payment Type in NYC Yellow Taxi

Abstract: Taxi services are growing rapidly as reliable services, and demand and competition between service providers are high. Billions of trip records need to be analyzed to raise the spirit of competition, understand the service users, and improve the business. Although decision tree classification is a common algorithm that generates rules that are easy to understand, there has been no implementation of such classification on the taxi dataset. This research applies the decision tree classification model to the taxi dataset to classify instances correctly, build a decision tree, and calculate accuracy. The experiment combines the decision tree algorithm with the Spark framework and shows good performance and high accuracy when predicting payment type. Applying the decision tree algorithm to different aspects of the NYC taxi dataset results in high accuracy.
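
A minimal sketch of decision tree classification on Spark; the file path and the feature columns (trip_distance, fare_amount, tip_amount) are assumptions standing in for the NYC yellow taxi schema.

```python
# Sketch: predict payment_type with a Spark ML decision tree.
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("taxi-payment-type").getOrCreate()
df = spark.read.csv("yellow_tripdata.csv", header=True, inferSchema=True)  # hypothetical path

df = StringIndexer(inputCol="payment_type", outputCol="label").fit(df).transform(df)
df = VectorAssembler(inputCols=["trip_distance", "fare_amount", "tip_amount"],
                     outputCol="features").transform(df)
train, test = df.randomSplit([0.8, 0.2], seed=42)

model = DecisionTreeClassifier(labelCol="label", featuresCol="features").fit(train)
accuracy = MulticlassClassificationEvaluator(metricName="accuracy").evaluate(model.transform(test))
print(accuracy)
```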

Author 1: Hadeer Ismaeil
Author 2: Sherif Kholeif
Author 3: Manal A. Abdel-Fattah

Keywords: Big data analytics; apache spark; decision tree classification; taxi trips; machine learning

PDF

Paper 31: An Extended DBSCAN Clustering Algorithm

Abstract: Finding clusters of different densities is a challenging task. The DBSCAN (“Density-Based Spatial Clustering of Applications with Noise”) method has trouble discovering clusters of various densities since it uses a fixed radius. This article proposes an extended DBSCAN for finding clusters of different densities. The proposed method uses a dynamic radius and assigns a regional density value to each object, then counts the objects of similar density within the radius. If the neighborhood size is at least MinPts, the object is a core and a cluster can grow from it; otherwise, the object is temporarily labeled as noise. Two objects are similar in local density if their similarity is at least a threshold. The proposed method can effectively discover clusters of any density in the data. The method requires three parameters: MinPts, Eps (the distance to the kth neighbor), and a similarity threshold. The practical results show the superior ability of the suggested method to detect clusters of different densities, even with no discernible separations between them.
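
A minimal sketch of the core-point idea described above: each point's distance to its kth neighbor serves as a dynamic radius / local-density value, and a point is called core when enough of its neighbors have similar local density. The ratio-based similarity rule and the parameter values are assumptions; the paper's exact similarity measure and cluster-growing step are not reproduced.

```python
# Sketch: dynamic-radius core-point test for density-varying clustering.
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.random.default_rng(0).normal(size=(300, 2))
k, min_pts, sim_threshold = 8, 5, 0.7

nn = NearestNeighbors(n_neighbors=k + 1).fit(X)       # +1: the query point itself
dist, idx = nn.kneighbors(X)
density = dist[:, k]                                  # distance to the kth neighbor (per-point Eps)

def similar(i, j):
    lo, hi = sorted((density[i], density[j]))
    return lo / hi >= sim_threshold                   # ratio-based density similarity

core = [i for i in range(len(X))
        if sum(similar(i, j) for j in idx[i, 1:]) >= min_pts]
print(len(core), "core points")
```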

Author 1: Ahmed Fahim

Keywords: Cluster analysis; density-based clustering; varied density clusters; data mining; extended density-based spatial clustering of applications with noise (E-DBSCAN)

PDF

Paper 32: Unsupervised Chest X-ray Opacity Classification using Minimal Deep Features

Abstract: Data privacy has been a concern in medical imaging research. One important step toward minimizing the sharing of patients' information is limiting the use of original images in the workflow. This research aimed to use minimal deep learning features to detect anomalies in chest X-ray (CXR) images. A total of 3,504 CXRs were processed using a pre-trained deep learning convolutional neural network to output ten discriminatory features, which were then used in the k-means algorithm to find underlying similarities between the features for further clustering. Two clusters were set to distinguish between “Opacity” and “Normal” CXRs, with accuracy, sensitivity, specificity, and positive predictive value of 80.9%, 86.6%, 71.5%, and 83.1%, respectively. With only ten features required to build the unsupervised model, this paves the way for future federated learning research in which actual CXRs can remain distributed over multiple centers without sacrificing the anonymity of patients.
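
A minimal sketch of the clustering stage: k-means with two clusters over ten features per image; random vectors stand in for the CNN outputs, and the mapping of clusters to "Opacity"/"Normal" would be done against reference labels afterwards.

```python
# Sketch: two-cluster k-means over ten deep features per CXR.
import numpy as np
from sklearn.cluster import KMeans

features = np.random.default_rng(0).normal(size=(3504, 10))   # 10 features per image
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels))   # cluster sizes; map clusters to classes afterwards
```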

Author 1: Mohd Zulfaezal Che Azemin
Author 2: Mohd Izzuddin Mohd Tamrin
Author 3: Mohd Adli Md Ali
Author 4: Iqbal Jamaludin

Keywords: Unsupervised classification; minimal deep features; convolution neural network; chest x-ray; airspace opacity

PDF

Paper 33: IoT based Speed Control for Semi-Autonomous Electric On-Road Cargo Vehicle

Abstract: This paper develops an investigative GSM-enabled, IoT-based speed control scheme suitable for electric on-road cargo vehicles. The design involves bounding the parameters that include vehicle speed, motor speed, truck payload, battery SoC (State of Charge), battery SoH (State of Health), real-time navigation points using GPS, tire pressure, motor temperature and current consumption, driver fatigue detection, and vehicle proximity detection, which enter the system through GSM-enabled wireless sensors and IoT-based maps, for arriving at the recommended speed. It engages a state-of-the-art microcontroller-based embedded system to govern the operation of the three-phase induction motor in accordance with the changes that the vehicle either experiences or must negotiate. It incorporates a close monitoring methodology, evolving a sequence of steps that enable the system to remain in operation over scheduled time frames. The results obtained from a simulation process carried out using embedded C firmware on an ARM-core STM32 microcontroller exemplify the merits and illustrate the performance of the chosen vehicle in terms of its ability to be used in real-world systems.

Author 1: P. L. Arunkumar
Author 2: M. Ramaswamy
Author 3: T. S. Murugesh

Keywords: Electric vehicle; IoT; speed control; battery SoC; battery SoH; micro-controller; embedded system; GSM; proximity sensor; payload; real time navigation; GPS

PDF

Paper 34: A Novel Approach of Hyperspectral Imaging Classification using Hybrid ConvNet

Abstract: In recent years, remote sensing applications have been booming, and with this, hyperspectral imaging (HSI) has been used in many real-life applications. However, the classification of HSI is a significant problem due to the complex features of the captured hyperspectral scene. Moreover, HSI data is often inherently nonlinear and very high-dimensional. Recent years have seen a rise in deep learning applications for addressing nonlinear problems; however, deep learning tends to overfit when training data is sparse or limited. The proposed work addresses the trade-off between classification performance and limited training samples for classifying hyperspectral image data in a single training process. The study presents a hybrid multilayer learning system based on the joint use of 2D and 3D convolutional kernels. The main reason is to utilize the spectral-spatial and spatial correlations in the learning process to achieve improved generalization of features in the training process for better HSI classification. The study outcome exhibits higher precision, recall rate, and F1-score performance; the overall accuracy is 99.9%, with a better convergence rate. The results show that the proposed model is effective for HSI classification even with fewer training data samples.
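
A minimal sketch of one common hybrid 3D+2D layout for HSI patches in Keras: 3D kernels mix spectral-spatial information, then the output is reshaped so 2D kernels refine spatial features. All shapes, filter counts, and the class count are illustrative, not the paper's architecture.

```python
# Sketch: joint 3D+2D convolutional network for hyperspectral patches.
from tensorflow.keras import layers, models

inp = layers.Input(shape=(25, 25, 30, 1))          # patch: 25x25 spatial, 30 bands
x = layers.Conv3D(8, (3, 3, 7), activation="relu")(inp)    # spectral-spatial kernels
x = layers.Conv3D(16, (3, 3, 5), activation="relu")(x)
shp = x.shape                                      # (None, H, W, bands, channels)
x = layers.Reshape((shp[1], shp[2], shp[3] * shp[4]))(x)   # fold bands into channels
x = layers.Conv2D(64, (3, 3), activation="relu")(x)        # spatial refinement
x = layers.Flatten()(x)
out = layers.Dense(16, activation="softmax")(x)    # e.g. 16 land-cover classes

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```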

Author 1: Soumyashree M Panchal
Author 2: Shivaputra

Keywords: Hyperspectral image; convolution neural network; classification; spatial feature; spectral feature

PDF

Paper 35: Non-Repudiation-based Network Security System using Multiparty Computation

Abstract: Security has always been a prominent concern in networks, and an efficient security system must cater to various essential requirements. Non-repudiation, the requirement that services cannot be denied, acts as a bridge between the seamless relaying of services or data and an efficient security implementation. Various studies have been carried out toward strengthening non-repudiation systems, but certain pitfalls render them inapplicable to dynamic cases of vulnerability. Conventional two-party non-repudiation schemes have been widely explored in the existing literature, but this paper also advocates the adoption of multi-party computation, which is more feasible for strengthening a distributed security system. The current work surveys the existing approaches to non-repudiation to investigate their effectiveness in multi-party systems. The prime aim of the proposed work is to analyze current research progress and draw out the research gap as the prominent contribution of the study. The manuscript begins by highlighting the issues concerning multi-party strategies and cryptographic approaches, briefly discussing security requirements and standardization. It then describes the essentials of non-repudiation and examines state-of-the-art mechanisms. Finally, the study summarizes and discusses the research gaps identified through the review analysis.

Author 1: Divya K. S
Author 2: Roopashree H. R
Author 3: Yogeesh A C

Keywords: Future network; multiparty computation; nonrepudiation; security

PDF

Paper 36: An Algorithm based on Convolutional Neural Networks to Manage Online Exams via Learning Management System Without using a Webcam

Abstract: Cheating attempts in educational assessments have long been observed. Because students today are characterized by their great digital intelligence, this negative conduct has intensified throughout the emergency remote teaching period. First, this article discusses the most innovative methods for combating cheating throughout the online evaluation procedure. Then, for this aim, a Convolutional Neural Networks for Cheating Detection System (CNNCDS) is presented. The proposed solution has the advantage of not requiring the use of a camera: it recognizes and identifies IP addresses, records and analyzes exam sessions, and prevents internet browsing during exams. The K-Nearest Neighbor (K-NN) algorithm has been adopted as a classifier, while Principal Component Analysis (PCA) was used for exploratory data analysis and for making predictive models. The CNNCDS was trained, tested, and validated using data extracted from a face-to-face exam session. Its main output is a binary classification of students in real time (normal or abnormal). The CNNCDS surpasses the fundamental classifiers Multi-class Logistic Regression (MLR), Support Vector Machine (SVM), Random Forest (RF), and Gaussian Naive Bayes (GNB) in terms of mean accuracy (98.5%). Furthermore, it accurately detected screen pictures in an acceptable processing time, with a sensitivity average of 99.8 percent and a precision average of 1.8 percent. This strategy has been shown to be successful in minimizing cheating in several colleges. This solution is useful for higher education institutions that operate entirely online and do not require the use of a webcam.
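
A minimal sketch of the PCA-plus-K-NN stage named in the abstract is shown below; the session features and binary labels are random placeholders, not the paper's data:

```python
# Sketch of the PCA + K-NN stage described in the abstract; the session
# features X and binary labels y (normal/abnormal) are placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = np.random.rand(500, 40)            # placeholder exam-session features
y = np.random.randint(0, 2, 500)       # 0 = normal, 1 = abnormal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```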

Author 1: Lassaad K. SMIRANI
Author 2: Jihane A. BOULAHIA

Keywords: Artificial intelligence; convolutional neural network; learning assessment; online cheating; online examination; higher education; emergency remote teaching

PDF

Paper 37: A Cubic B-Splines Approximation Method Combined with DWT and IBP for Single Image Super-resolution

Abstract: The process of converting low-resolution images into high-resolution images by removing noise and estimating high-frequency information is known as image super-resolution. Aliased and decimated versions of the actual scenes are considered low-resolution images. The edges of high-resolution images produced by super-resolution from a single image are typically blurred. This paper proposes an approach to generate high-resolution images with sharp edges by combining a cubic B-Splines approximation, a discrete wavelet transform (DWT), and an iterative back-projection (IBP) edge-preserving weighted guided filter. A two-stage cubic B-Splines approximation, which includes pre-filtering and interpolation, is employed to up-sample the low-resolution image. The pre-filtering step transforms pixel values into B-Splines coefficients; this minimizes blurring in the up-sampled image. The lost high-frequency information is then estimated using a one-level discrete wavelet transform based on the db1 wavelet. Finally, using a weighted guided filter, the resulting image is subjected to back-projection to obtain a high-resolution image. The proposed single-image super-resolution approach is applied to RGB colour images. The proposed method outperforms the other approaches selected for comparison, both objectively in terms of PSNR and SSIM, and in visual quality.
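
The up-sampling and wavelet steps can be sketched as follows: SciPy's spline interpolation applies a cubic B-spline pre-filter before resampling, and the PyWavelets library provides the one-level db1 decomposition. This is an illustrative outline under assumed sizes, not the authors' implementation:

```python
# Sketch of the up-sampling and DWT steps: scipy's spline interpolation
# converts pixels to cubic B-spline coefficients before resampling, and
# PyWavelets provides the one-level db1 decomposition.
import numpy as np
import pywt
from scipy import ndimage

lr = np.random.rand(64, 64)                    # placeholder low-res channel
# Stage 1: cubic B-spline up-sampling (prefilter=True converts pixel
# values to B-spline coefficients before interpolation, reducing blur).
hr = ndimage.zoom(lr, 2, order=3, prefilter=True)
# Stage 2: one-level DWT with the db1 (Haar) wavelet to separate the
# approximation from the high-frequency detail sub-bands.
cA, (cH, cV, cD) = pywt.dwt2(hr, 'db1')
print(hr.shape, cA.shape, cH.shape)
```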

Author 1: Victor Kipkoech Mutai
Author 2: Elijah Mwangi
Author 3: Ciira wa Maina

Keywords: Single-image super-resolution; pre-filtering; cubic B-Splines approximation; discrete wavelet transform (DWT); iterative back-projection (IBP); B-Splines coefficients

PDF

Paper 38: Supervised Learning Techniques for Intrusion Detection System based on Multi-layer Classification Approach

Abstract: The goal of this study is to discover a solution to two problems: first, enabling the signature-based intrusion detection system SNORT to identify a new attack signature without human intervention; and second, the inability of signature-based IDS to detect multi-stage attacks. The interesting aspect of this study is the set of approaches developed to address the aforementioned issues. We introduce a multi-layer classification strategy in this study, in which we employ two layers: the first is based on a decision tree, and the second includes the machine learning techniques of fuzzy logic and neural networks. If the first layer fails to identify new attacks, the second layer takes over, detects new signature attacks, and updates the SNORT signature automatically.
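
A minimal sketch of the two-layer idea follows: a decision tree handles traffic it classifies confidently, and uncertain flows fall through to a second-layer neural network (standing in here for the fuzzy-logic/neural-network layer). The confidence threshold and placeholder features are assumptions:

```python
# Minimal sketch of the two-layer classification strategy; thresholds
# and flow features are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X_train = np.random.rand(1000, 20)          # placeholder flow features
y_train = np.random.randint(0, 2, 1000)     # 0 = benign, 1 = attack

layer1 = DecisionTreeClassifier(max_depth=8).fit(X_train, y_train)
layer2 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_train, y_train)

def detect(x, threshold=0.9):
    proba = layer1.predict_proba(x.reshape(1, -1))[0]
    if proba.max() >= threshold:                   # first layer is confident
        return layer1.predict(x.reshape(1, -1))[0]
    return layer2.predict(x.reshape(1, -1))[0]     # fall back to layer two

print(detect(np.random.rand(20)))
```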

Author 1: Mansoor Farooq

Keywords: IDS; SNORT; fuzzy logic; neural networks; decision tree; Naïve Bayes

PDF

Paper 39: A Comprehensive Study of Different Types of Deduplication Technique in Various Dimensions

Abstract: In the current digital era, the growth of digital data is highly exceptional, and there are various sources for these digital data. The quantity of digital data being produced has risen exponentially over time because of organizations and even individuals, finally ending up in the need for huge storage space. Cloud storage provides the storage space for such requirements. Since the storage space is utilized by many different users, having duplicate data cannot be avoided, so it is necessary to make use of some storage optimization technique to handle such duplicate contents. Deduplication is a technique used to prevent redundant data from being stored. Among the various kinds of digital data, the possibility of having duplicate copies is high. In this research work, we review the benefits of deduplication in optimizing the usage of storage space and study the various types of deduplication techniques in different dimensions. This helps in selecting the appropriate data deduplication technique to increase effective storage utilization and reduce the wastage of storage space caused by duplicate data.

Author 1: G. Sujatha
Author 2: Jeberson Retna Raj

Keywords: Digital data; deduplication; storage optimization; cloud storage service; duplicate copies; bandwidth utilization

PDF

Paper 40: PLA Mechanical Performance Before and After 3D Printing

Abstract: PLA, or polylactic acid, is a thermoplastic made from renewable sources. Thanks to its environmental value compared to petroleum-sourced materials, it is widely used in the 3D printing industry. Due to the advantages of additive manufacturing in terms of cost and time consumption, many industries are using these technologies to re-engineer parts or assemblies to optimize their products. However, the properties given by the supplier do not conform to those of the final printed product. This issue can be dangerous, especially when these products are used in biomedical fields, in toys for children, or in other sensitive areas. The aim of this paper is to outline the difference between the final properties and the primary ones. The samples are tested in traction following the ASTM D638 standard, and the specificities of the standard in terms of specimen dimensions and test methodologies have been respected. The results demonstrate that there is a difference between the performance of the material before and after using a 3D printer.

Author 1: Houcine SALEM
Author 2: Hamid ABOUCHADI
Author 3: Khalid ELBIKRI

Keywords: Additive manufacturing; PLA; test sample; traction; 3D printing

PDF

Paper 41: Detecting Hate Speech on Twitter Network Using Ensemble Machine Learning

Abstract: Twitter is habitually exploited nowadays to propagate torrents of hate speech and misogynistic and misandrist tweets written in slang. Machine learning methods have been explored in manifold studies to address the inherent challenges of hate speech detection in online spaces. Nevertheless, language has subtleties that can make it difficult for machines to adequately comprehend and disambiguate the semantics of words that are heavily dependent on the usage context. Deep learning methods have demonstrated promising results for automatic hate speech detection, but they require a significant volume of training data. Classical machine learning methods suffer from the innate problem of high variance, which in turn affects the performance of hate speech detection systems. This study presents a voting ensemble machine learning method that harnesses the strengths of logistic regression, decision trees, and support vector machines for the automatic detection of hate speech in tweets. The method was evaluated against ten widely used machine learning methods on two standard tweet data sets using well-known performance evaluation metrics, achieving an improved average F1-score of 94.2%.
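
A minimal scikit-learn sketch of a hard-voting ensemble over the three named classifiers is shown below; the two placeholder tweets and the TF-IDF features are assumptions, not the paper's pipeline:

```python
# Sketch of a hard-voting ensemble of the three classifiers named in the
# abstract; the tweets and labels are placeholders.
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = ["example hateful tweet", "example normal tweet"]   # placeholders
labels = [1, 0]                                              # 1 = hate speech

X = TfidfVectorizer().fit_transform(tweets)
ensemble = VotingClassifier(
    estimators=[('lr', LogisticRegression()),
                ('dt', DecisionTreeClassifier()),
                ('svm', SVC())],
    voting='hard')   # majority vote over the three base models
ensemble.fit(X, labels)
print(ensemble.predict(X))
```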

Author 1: Raymond T Mutanga
Author 2: Nalindren Naicker
Author 3: Oludayo O Olugbara

Keywords: Classical learning; deep learning; ensemble learning; hate speech; social media; twitter network; voting ensemble

PDF

Paper 42: Methodology for Infrastructure Site Monitoring using Unmanned Aerial Vehicles (UAVs)

Abstract: Monitoring an infrastructure project makes it possible to know its state and the efficiency of the workers. Follow-up is a task carried out by the auditor, who verifies that the work corresponds with the design plans, keeps to the budget, and complies with the established schedule. This task traditionally uses classical topography elements, which demand time and money and raise safety concerns for non-construction personnel. To avoid this, this project implements a methodology capable of carrying out the task of monitoring civil works using an unmanned aerial vehicle, or drone: a small, remotely controlled flying device that in recent years has become an extremely useful tool for activities that human beings cannot perform or that threaten their integrity. For this work, a Phantom 3 Standard quadcopter drone is used to take photographs; these are loaded into the Agisoft Metashape Professional software, which, through photogrammetry techniques, allows digital processing of the images, generating a 3D view, a point cloud, a digital surface model, and distance measurements. With this information, it is possible to match the results against the work schedule and detect delays or advances precisely.

Author 1: Cristian Benjamin Garcia Casierra
Author 2: Carlos Gustavo Calle Sanchez
Author 3: Javier Ferney Castillo Garcia
Author 4: Felipe Munoz La Rivera

Keywords: Topography; unmanned aerial vehicle; infrastructure work monitoring; digital image processing component; construction site

PDF

Paper 43: Development of Pipe Inspection Robot using Soft Actuators, Microcontroller and LabVIEW

Abstract: Pipeline transportation is particularly significant nowadays because it can transfer liquids or gases over a long distance, usually to a market area for use, using a system of pipes. The pipeline's numerous fittings, such as elbows and tees, as well as the various sizes and types of materials utilized, make routine inspection and maintenance challenging for the technician. Therefore, compact and portable pipe inspection robots with pneumatic actuators are required for use in industry, especially in hazardous areas. Flexible pneumatic actuators powered by clean and safe pneumatic energy have the mobility to move in complex pipelines. High safety features, such as the absence of oil or electrical leakage that would be dangerous in an explosive environment, are a major factor in their wide use nowadays. As a result, the goal of this study is to propose and present the development of a pipe inspection robot that employs soft actuators and is monitored by LabVIEW, for usage in a variety of pipe sizes and types. This research focuses on the movement of the robot in the pipeline by proposing several important mechanisms, such as a sliding mechanism, a holding mechanism, and a bending unit, to move easily and effectively in the pipeline. Experiments show that with an appropriate pneumatic pressure source of 4 bar, a flexible robot using the soft pneumatic actuator can bend and move in a 2-inch diameter pipe smoothly and efficiently. It has been discovered that the proposed mechanism can readily travel pipe corners while bending in any required direction.

Author 1: Mohd Aliff
Author 2: Mohammad Imran
Author 3: Sairul Izwan
Author 4: Mohd Ismail
Author 5: Nor Samsiah
Author 6: Tetsuya Akagi
Author 7: Shujiro Dohta
Author 8: Weihang Tian
Author 9: So Shimooka
Author 10: Ahmad Athif

Keywords: Soft pneumatic actuator; pipe inspection robot; flexible actuator; microcontroller; sliding and holding mechanism

PDF

Paper 44: Deep Learning-based Detection System for Heavy-Construction Vehicles and Urban Traffic Monitoring

Abstract: In this era of intelligent transportation systems, traffic congestion analysis in terms of vehicle detection followed by tracking of vehicle speed is gaining tremendous attention due to its complicated intrinsic ingredients. Specifically, in the existing literature, vehicle detection on highway roads is studied extensively while, to the best of our knowledge, the identification and tracking of heavy-construction vehicles such as rollers is not yet fully explored. More specifically, heavy-construction vehicles such as road rollers, trenchers and bulldozers significantly aggravate congestion on urban roads during peak hours because of their extremely slow movement rates accompanied by their occupation of the majority of the road. For these reasons, promising frameworks that can identify heavy-construction vehicles moving on urban traffic-prone roads are very much needed, so that appropriate congestion evaluation strategies can be adopted to monitor traffic situations. To solve these issues, this article proposes a new deep-learning based detection framework, which employs a Single Shot Detector (SSD)-based object detection system consisting of CNNs. The experimental evaluations, extensively carried out on three different datasets including the benchmark MIO-TCD localization dataset, clearly demonstrate the enhanced performance of the proposed detection framework in terms of confidence scores and time efficiency when compared to existing techniques.

Author 1: Sreelatha R
Author 2: Roopa Lakshmi R

Keywords: Intelligent transportation systems; heavy-construction vehicles detection; traffic monitoring; SSD-based CNN; deep learning

PDF

Paper 45: Classification of Autism Spectrum Disorder and Typically Developed Children for Eye Gaze Image Dataset using Convolutional Neural Network

Abstract: Autism is a neurobehavioral problem that hinders interaction with others. Autistic Spectrum Disorder (ASD) is a psychological disorder that hampers the acquisition of linguistic, communication, cognitive, and social skills and capabilities and produces stereotypical motor behaviors. Recent research reveals that Autism Spectrum Disorder can be diagnosed using gaze structures, which has opened up a new field where visual focus modelling could be highly useful. Diagnosis of ASD is a difficult task due to the wide range of symptoms and severities of ASD. Deep neural networks have been widely employed and have been shown to perform well in a variety of visual data processing applications. In this paper, typically developed (TD) or ASD children are classified using Convolutional Neural Networks (CNN) on the fixation maps of the corresponding observer's gaze at a given image. The objective of this paper is to observe whether eye-tracking fixation-map data can classify children with ASD and typical development (TD). We further investigated whether features of visual fixation would attain better classification performance. The proposed CNN model achieves 75.23% accuracy on validation.
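
A minimal Keras sketch of a binary CNN classifier over fixation maps is given below; the 64x64 input size and layer widths are illustrative assumptions, not the authors' architecture:

```python
# Minimal sketch of a binary CNN classifier over fixation maps;
# the input size and layer widths are illustrative assumptions.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),          # grayscale fixation map
    layers.Conv2D(16, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid'),    # ASD vs. TD
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```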

Author 1: Praveena K N
Author 2: Mahalakshmi R

Keywords: Autism spectrum disorder; classification; fixation maps; eye expression; visual focus; gaze pattern; CNN

PDF

Paper 46: Technological Affordances and Teaching in EFL Mixed-ability Classes during the COVID-19 Pandemic

Abstract: With the wide spread of COVID-19 in Saudi Arabia, the educational authorities issued firm directions to convert to virtual classes exploiting the available Learning Management System (LMS). However, during the academic year 2020-2021, the researchers observed that EFL writing instructors at Prince Sattam bin Abdulaziz University (PSAU), Saudi Arabia, faced diverse challenges due to having online mixed-ability classes, i.e. those classes where students have varying levels of readiness, motivation, and academic caliber. Though many previous studies explored the influence of the COVID-19 pandemic on teaching and learning practices, very few studies addressed the way technological affordances pose challenges for instructors teaching mixed-ability classes. Therefore, the present study, using mixed quantitative and qualitative research methods, sought to explore challenges that evolved due to the technological affordances of the LMS, to spot the persistent problems, and to offer relevant solutions for upgrading writing teaching and learning practices. The basic research design relied on an online questionnaire followed by semi-structured interviews. Findings showed that differentiated instruction proved to be the most successful strategy for teaching writing in mixed-ability online classes as it allowed the adaptation of materials, teaching and learning practices, and assessment tools to motivate low-achievers. In addition, the collaborative tools offered by Blackboard, such as the White Board, Discussion Board, Blogs, and Breakout Groups, helped to meet the preferences of visual, auditory, and kinesthetic learners. Finally, further studies are recommended to explore the affordances of educational technologies regularly to identify potential benefits and limitations for offering the best teaching and learning practices.

Author 1: Waheed M. A. Altohami
Author 2: Mohamed Elarabawy Hashem
Author 3: Abdulfattah Omar
Author 4: Mohamed Saad Mahmoud Hussein

Keywords: Technological affordances; blackboard; writing teaching; EFL mixed-ability classes; differentiation instruction; COVID-19

PDF

Paper 47: Deep Learning Applications in Solid Waste Management: A Deep Literature Review

Abstract: Solid waste management (SWM) has recently received more attention, especially in developing countries, for smart and sustainable development. The SWM system encompasses various interconnected processes which contain numerous complex operations. Recently, deep learning (DL) has gained momentum in providing alternative computational techniques for solving various SWM problems. Researchers have focused on this domain; therefore, significant research has been published, especially in the last decade. The literature shows that no study has evaluated the potential of DL to solve the various SWM problems. This study performs a systematic literature review (SLR) which compiles 40 studies published between 2019 and 2021 in reputed journals and conferences. The selected research studies implemented various DL models and analyzed the application of DL in different SWM areas, namely waste identification and segregation and prediction of waste generation. The study defines a systematic review protocol that comprises various criteria and a quality assessment process to select the research studies for review. The review presents a comprehensive analysis of the different DL models and techniques implemented in SWM. It also highlights the application domains and compares the reported performance of the selected studies. Based on the reviewed work, it can be concluded that DL exhibits plausible performance in detecting and classifying different types of waste. The study also explains the deep convolutional neural network with its computational requirements and determines the research gaps with future recommendations.

Author 1: Sana Shahab
Author 2: Mohd Anjum
Author 3: M. Sarosh Umar

Keywords: Solid waste management; systematic literature review; deep learning; convolutional neural networks

PDF

Paper 48: Proposal of an Automated Tool for the Application of Sentiment Analysis Techniques in the Context of Marketing

Abstract: Currently, the opinions and comments made by customers on e-commerce portals regarding different products and services have great potential for identifying customer perceptions and preferences. Based on the above, there is a growing need for companies to have automated tools based on sentiment analysis through polarity analysis, which allow the examination of customer opinions to obtain quantitative indicators from qualitative information that enable decision-making in the context of marketing. In this article, we propose the construction of an automated tool for conducting opinion mining studies, which the marketing units of companies can use for decision making in a way that is transparent with respect to the algorithmic process. The functionality of the proposed tool was verified through a case study, in which opinions obtained from an e-commerce website concerning one of the best-selling technological products were investigated.
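
As one possible realization of the polarity-analysis step (the abstract does not name a specific library), a minimal sketch using TextBlob could look like this:

```python
# Sketch of polarity analysis over customer reviews using the TextBlob
# library as one possible backend; the reviews are placeholders.
from textblob import TextBlob

reviews = ["Great product, fast delivery!",
           "Terrible quality, would not buy again."]

for r in reviews:
    polarity = TextBlob(r).sentiment.polarity   # -1 (negative) .. +1 (positive)
    label = "positive" if polarity > 0 else "negative" if polarity < 0 else "neutral"
    print(f"{polarity:+.2f}  {label}  {r}")
```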

Author 1: Gabriel Elias Chanchi Golondrino
Author 2: Manuel Alejandro Ospina Alarcon
Author 3: Wilmar Yesid Campo Munoz

Keywords: E-commerce; marketing; opinion mining; polarity analysis; sentiment analysis

PDF

Paper 49: Affinity Degree as Ranking Method

Abstract: In machine learning, ranking is a fundamental problem that attempts to rank a list of items based on their relevance to a certain task. Ranking can be helpful, especially for future decision making. The ranking frameworks in machine learning have been classified into three primary approaches: pointwise, pairwise, and listwise. However, learning to rank in all three approaches still lacks continuous learning ability, particularly when it comes to determining the degree of relevancy of ranking orders. In this paper, an affinity degree technique for ranking is proposed as another potential machine learning framework. The definition and attributes of the affinity degree technique are discussed, along with the results of an experiment adopting the affinity degree approach as a ranking mechanism. The experiment's performance is measured using assessment metrics such as Mean Average Precision (MAP).
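
For reference, the Mean Average Precision metric mentioned above can be computed as in the following minimal sketch over binary relevance judgements of ranked lists:

```python
# Sketch of the Mean Average Precision (MAP) metric: the mean, over
# queries, of the average precision at each relevant rank.
def average_precision(relevance):
    """relevance: list of 0/1 flags in ranked order."""
    hits, score = 0, 0.0
    for k, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            score += hits / k          # precision at each relevant rank
    return score / hits if hits else 0.0

def mean_average_precision(rankings):
    return sum(average_precision(r) for r in rankings) / len(rankings)

# Two example queries with relevance flags for their ranked results.
print(mean_average_precision([[1, 0, 1, 0], [0, 1, 1, 0]]))  # ~0.7083
```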

Author 1: Rosyazwani Mohd Rosdan
Author 2: Wan Suryani Wan Awang
Author 3: Samhani Ismail

Keywords: Affinity; affinity degree; rank; machine learning

PDF

Paper 50: Developing a Credit Card Fraud Detection Model using Machine Learning Approaches

Abstract: The growing application and usage of e-commerce applications have given an exponential rise to the number of online transactions. Though there are several methods for completing online transactions, credit cards are most commonly used. The increased number of transactions has given fraudsters the opportunity to mislead customers and make them execute fraudulent transactions. Therefore, there is a need for a method that can automatically classify and detect fraudulent transactions. This research study aims to develop a credit-card fraud detection model that can effectively classify an online transaction as fraudulent or genuine. Three supervised machine learning approaches have been applied to develop a credit-card fraud classifier: logistic regression, artificial neural networks and support vector machines. The classification accuracy achieved by all the classifiers is almost similar. This research has used the confusion matrix and area under the curve to demonstrate the scores of the different performance measures and evaluate the overall performance of the classifiers. Several performance measures such as accuracy, precision, recall, F1-measure, Matthews correlation coefficient, and the receiver operating characteristic curve have been computed and analysed to evaluate the performance of the credit-card fraud detection classifiers. The analysis demonstrates that the support vector machine-based classifier outperforms the other classifiers.
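
A minimal sketch of the support-vector-machine classifier together with the evaluation measures listed in the abstract is given below; the transaction features and labels are random placeholders:

```python
# Sketch of the SVM classifier plus the evaluation measures named in the
# abstract; the transaction features and fraud labels are placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import (confusion_matrix, precision_score, recall_score,
                             f1_score, matthews_corrcoef, roc_auc_score)

X = np.random.rand(1000, 30)             # placeholder transaction features
y = np.random.randint(0, 2, 1000)        # 0 = genuine, 1 = fraudulent

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(probability=True).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print(confusion_matrix(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("recall:   ", recall_score(y_te, pred))
print("F1:       ", f1_score(y_te, pred))
print("MCC:      ", matthews_corrcoef(y_te, pred))
print("ROC AUC:  ", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```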

Author 1: Shahnawaz Khan
Author 2: Abdullah Alourani
Author 3: Bharavi Mishra
Author 4: Ashraf Ali
Author 5: Mustafa Kamal

Keywords: Credit card fraud detection; neural network; support vector machine; logistic regression; performance measures

PDF

Paper 51: Bayesian Hyperparameter Optimization and Ensemble Learning for Machine Learning Models on Software Effort Estimation

Abstract: In recent decades, various software effort estimation (SEE) algorithms have been suggested. Unfortunately, generating high-precision accuracy is still a major challenge in the context of SEE. The use of traditional techniques and parametric approaches is largely inaccurate because they produce biased and subjective accuracy, while none of the machine learning methods has performed well on its own. This study applies the AdaBoost ensemble learning method with random forest (RF), while the Bayesian optimization method is applied to determine the hyperparameters of this model. The PROMISE repository and the ISBSG dataset were used to build the SEE model. The developed model was comprehensively compared with four machine learning methods (classification and regression tree, k-nearest neighbor, multilayer perceptron, and support vector regression) under 3-fold cross validation (CV). The results show that the RF method based on AdaBoost ensemble learning and Bayesian optimization outperforms these approaches. In addition, the AdaBoost-based model assigns a feature importance rating, which makes it a promising tool in software effort prediction.
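
A minimal sketch of Bayesian hyperparameter search over an AdaBoost-boosted random forest, using scikit-optimize's BayesSearchCV, is shown below. The search space, fold count, and placeholder effort data are assumptions, not the paper's setup:

```python
# Sketch of Bayesian hyperparameter optimization for AdaBoost + RF using
# scikit-optimize; data, search space and fold count are assumptions.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor, RandomForestRegressor
from skopt import BayesSearchCV

X = np.random.rand(100, 8)        # placeholder project features
y = np.random.rand(100) * 1000    # placeholder effort values

model = AdaBoostRegressor(RandomForestRegressor(n_estimators=50))
search = BayesSearchCV(
    model,
    {'n_estimators': (10, 100),                   # boosting rounds
     'learning_rate': (0.01, 1.0, 'log-uniform')},
    n_iter=20, cv=3, scoring='neg_mean_squared_error')
search.fit(X, y)
print(search.best_params_, search.best_score_)
```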

Author 1: Robert Marco
Author 2: Sakinah Sharifah Syed Ahmad
Author 3: Sabrina Ahmad

Keywords: Bayesian optimization; adaboost ensemble learning; random forest; software effort estimation

PDF

Paper 52: A Robust Reversible Data Hiding Framework for Video Steganography Applications

Abstract: Reversible Data Hiding (RDH) is a special form of data hiding for data integrity and confidentiality protection, in which the secret image bits (SI) are embedded into Cover Media (CM) by altering its intrinsic pixel attributes, and in which the CM along with the secret message is recovered exactly at the end of the computing phase. However, despite their potential for enhancing embedding performance, when it comes to security for various network standards, traditional RDH mechanisms cannot fully comply with the standards for different sets of attacks during bit-stream transmission scenarios. Therefore, the proposed study contributes a computational framework for robust RDH in Video Steganography (VS), which is modeled and simulated under various attack effects; the observed outcomes are produced for before- and after-attack situations to justify the improvement in Embedding Capacity (EC) and Peak Signal-to-Noise Ratio (PSNR) performance for both the CM and the secret message, unlike traditional difference-expansion-based (DE) methods. The outcome of the study shows that the formulated RDH method not only achieves better reversibility at a lower computing cost but also ensures effective PSNR and imperceptibility outcomes for both the CM and the secret image.
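
For context, the classic difference-expansion (DE) baseline that the abstract contrasts with can be sketched as follows: one secret bit is embedded into a pixel pair by expanding their difference, and both the bit and the original pair are recovered exactly (overflow handling omitted):

```python
# Sketch of classic difference-expansion (DE) embedding: a secret bit b
# is hidden in a pixel pair (x, y) by expanding their difference, and
# extraction restores the bit and the original pair exactly.
def de_embed(x, y, b):
    l, h = (x + y) // 2, x - y          # integer average and difference
    h2 = 2 * h + b                      # expand the difference, append bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    h2 = x2 - y2
    b, h = h2 % 2, h2 // 2              # recover bit and original difference
    l = (x2 + y2) // 2
    return l + (h + 1) // 2, l - h // 2, b

x2, y2 = de_embed(100, 97, 1)
print(de_extract(x2, y2))               # -> (100, 97, 1)
```

In practice, DE implementations must also guard against pixel overflow and underflow, which is one source of the capacity limitations the proposed framework aims to improve upon.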

Author 1: Manjunath Kamath K
Author 2: R. Sanjeev Kunte

Keywords: Reversible data hiding; data integrity; embedding capacity; video steganography

PDF

Paper 53: Automated Feature Extraction for Predicting Multiple Sclerosis Patient Disability using Brain MRI

Abstract: Predicting a Multiple Sclerosis (MS) patient's disability level is an important issue, as this could help in better diagnosis and in monitoring the progression of the disease. The Expanded Disability Status Scale (EDSS) is a common protocol used to manually score the disability level. However, it is time-consuming, requires expert knowledge, and is exposed to inter- and intra-subject variation. Many previous studies focused on predicting patients' disability from multiple MRI scans with manual or semi-automated feature extraction; furthermore, all of them require patient follow-up. This study aims to predict MS patients' disability using fully automated feature extraction, a single MRI scan, a single MRI protocol, and no patient follow-up. Data from 65 MS patients were used in this study, collected from multiple centers in Iraq and Saudi Arabia. Automated brain abnormality segmentation, automated brain lobe segmentation, and brain periventricular area segmentation were used to extract a large set of scan features. A linear regression algorithm was used to predict different types of MS patient disability. Initially, weak performance was found, until MS patients were divided into four groups according to the MRI Tesla model and whether or not the patient had a lesion in the spinal cord. The best performance was an average RMSE of 0.6 for predicting the EDSS with a step of 2. These results demonstrate the possibility of prediction with fully automated feature extraction, a single MRI scan, a single MRI protocol, and no patient follow-up.

Author 1: Ali M. Muslim
Author 2: Syamsiah Mashohor
Author 3: Rozi Mahmud
Author 4: Gheyath Al Gawwam
Author 5: Marsyita binti Hanafi

Keywords: Multiple sclerosis; expanded disability status scale prediction; multiple sclerosis disability; magnetic resonance imaging

PDF

Paper 54: Random and Sequence Workload for Web-Scale Architecture for NFS, GlusterFS and MooseFS Performance Enhancement

Abstract: The problem of finding a data storage method that can support data processing speed in the network is one of the key problems in big data. As computing speed and cluster size increase, I/O and network processes related to intensive data usage cannot keep up with the growth rate and data processing speed, so data processing applications experience latency issues from long I/O. Distributed data storage systems can use web-scale technology to assist centralized data storage in a computing environment to meet the needs of data science. By analyzing several distributed data storage models, namely NFS, GlusterFS and MooseFS, a distributed data storage method is proposed. The parameters used in this study are transfer rate, IOPS and CPU resource usage. Through testing sequential and random reading and writing of data, it is found that GlusterFS has faster performance and the best performance for sequential and random data reading when using 64k data storage blocks. MooseFS obtains its best performance in random data read operations using 64k storage blocks. Using 32k data storage blocks, NFS achieves the best results in random writes. The performance of a distributed data storage system may be affected by the size of the data storage block: using a larger data storage block can achieve faster performance in data transmission and in performing operations on data.

Author 1: Mardhani Riasetiawan
Author 2: Nashihun Amien

Keywords: network storage; container; NFS; GlusterFS; MooseFS; random workload; sequence workload

PDF

Paper 55: A Data Security Algorithm for the Cloud Computing based on Elliptic Curve Functions and Sha3 Signature

Abstract: The rapid development of distributed system technologies brings numerous challenges. For example, one of the most critical challenges facing cloud computing is ensuring the security of confidential data during both transfer and storage. Indeed, many techniques are used to enhance data security in the cloud computing storage environment; nevertheless, the most significant method for data protection is encryption. Thus, it has become an interesting topic of research, and different encryption algorithms have been put forward in the last few years in order to provide data security, integrity, and authorized access. However, they still have some limitations. In this paper, we study the security concept in cloud computing applications. Then, an ECC (Elliptic Curve Cryptography) based algorithm is designed and tested to ensure cloud security. The experimental results demonstrate the efficiency of the proposed algorithm, which presents a strong security level and reduced execution time compared to widely used existing techniques.

Author 1: Sonia KOTEL
Author 2: Fatma SBIAA

Keywords: Cloud; IaaS simulation upon SimGrid (SCHIaas); elliptic curve encryption; one-time pad symmetrical encryption method (OTP); confidentiality; integrity

PDF

Paper 56: Sentiment Analysis on Customer Satisfaction of Digital Banking in Indonesia

Abstract: Southeast Asia, including Indonesia, is seeing an increase in digital banking adoption, owing to changing customer expectations and increasing digital penetration. The Covid-19 pandemic has hastened this tendency toward digital transformation. However, customer satisfaction should not be left unmanaged during this transition. This research aims to assess customer satisfaction with digital banking in Indonesia based on sentiment analysis of Twitter data. The data collected related to three digital banks in Indonesia, namely Jenius, Jago, and Blu. A total of 34,605 tweets were collected and analyzed within the period from August 1st, 2021 to October 31st, 2021. Sentiment analysis was conducted using nine stand-alone classifiers: Naïve Bayes, Logistic Regression, K-Nearest Neighbours, Support Vector Machines, Random Forest, Decision Tree, Adaptive Boosting, eXtreme Gradient Boosting and Light Gradient Boosting Machine. Two ensemble methods were also used for this research: hard voting and soft voting. The results of this study show that SVM has the best performance among the stand-alone classifiers when used to predict sentiments, with an F1-score of 73.34%. The ensemble methods performed better than the stand-alone classifiers, and soft voting with the five best classifiers performed best overall, with an F1-score of 74.89%. The results also show that Jago sentiments were mainly positive, Jenius sentiments were mostly negative, and for Blu, most sentiments were neutral.

Author 1: Bramanthyo Andrian
Author 2: Tiarma Simanungkalit
Author 3: Indra Budi
Author 4: Alfan Farizki Wicaksono

Keywords: Sentiment analysis; ensemble method; customer satisfaction; digital bank

PDF

Paper 57: System Architecture for Brain-Computer Interface based on Machine Learning and Internet of Things

Abstract: Brain functions need to be read in order to treat neurological illness. A Brain-Computer Interface (BCI) connects the brain to the digital world for receiving, recording, processing, and comprehending brain signals. With a BCI, information from the user's brain is fed into actuation devices, which then carry out the actions programmed into them. The Internet of Things (IoT) has made it possible to connect a wide range of everyday devices. Asynchronous BCIs can benefit from the improved system architecture proposed in this paper; individuals with severe motor impairments will particularly benefit from this feature. In traditional BCI systems, control commands were translated using a rule-based translation algorithm that relied only on EEG recordings of brain signals. Examining BCI technology's varied and cross-disciplinary applications, this work draws speculative conclusions about how BCI instruments combined with machine learning algorithms could affect forthcoming procedures and practices. Compressive sensing and neural networks are used to compress and reconstruct the ECoG data presented in this article, and the neural networks are used to combine the classifier outputs adaptively based on feedback. A stochastic gradient descent solver is employed to train a multi-layer perceptron regressor. An example network is shown to achieve a 50% compression ratio and 89% reconstruction accuracy after training with real-world, medium-sized datasets.
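
A minimal sketch of the compress-and-reconstruct idea is given below: a random measurement matrix compresses signals at a 50% ratio, and an SGD-trained multi-layer perceptron regressor learns the reconstruction. The sizes are assumptions, and random vectors stand in for real ECoG segments:

```python
# Sketch of compressive sensing + MLP reconstruction: signals are
# compressed by a random measurement matrix (50% ratio) and an
# SGD-trained MLP regressor learns to map measurements back to signals.
import numpy as np
from sklearn.neural_network import MLPRegressor

n, d = 2000, 64
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))             # placeholder ECoG segments
Phi = rng.standard_normal((d // 2, d))      # 50% compression matrix
Y = X @ Phi.T                               # compressed measurements

model = MLPRegressor(hidden_layer_sizes=(128,), solver='sgd',
                     learning_rate_init=0.01, max_iter=500)
model.fit(Y, X)                             # learn measurements -> signal
recon = model.predict(Y)
print("reconstruction MSE:", np.mean((recon - X) ** 2))
```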

Author 1: Shahanawaj Ahamad

Keywords: Brain-computer interface; machine learning; internet of things; EEG; system architecture

PDF

Paper 58: Wave Parameters Prediction for Wave Energy Converter Site using Long Short-Term Memory

Abstract: Forecasting the behaviour of various wave parameters is crucial for the safety of maritime operations as well as for optimal operation of wave energy converter (WEC) sites. For coastal WEC sites, the wave parameters of interest are significant wave height (Hs) and peak wave period (Tp). Numerical and statistical modeling, along with machine and deep learning models, have been applied to predict these parameters for the short- and long-term future. For near-future prediction of Hs and Tp, this study investigates the possibility of optimally training a Long Short-Term Memory (LSTM) model on historical values of Hs and Tp only. Additionally, the study investigates the minimum amount of training data required to predict these parameters with acceptable accuracy. The Root Mean Square Error (RMSE) measure is used to evaluate the prediction ability of the model. As a result, it is identified that LSTM can effectively predict Hs and Tp given their historical values only. For Hs, it is identified that a 4-year dataset, 20 historical inputs, and a batch size of 256 produce the best results for three, six, twelve, and twenty-four-hour prediction windows at a half-hourly step. It is also established that the future values of Tp can be optimally predicted using a 2-year dataset, 10 historical inputs, and a batch size of 128. However, due to the much more dynamic nature of the peak wave period, the LSTM model yielded relatively low prediction accuracy for Tp as compared to Hs.
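
A minimal Keras sketch of the Hs predictor with the reported best settings (20 historical inputs, batch size 256) could look like the following; the half-hourly series is a random placeholder for real buoy data:

```python
# Sketch of an LSTM predictor for Hs using 20 historical inputs and a
# batch size of 256, as reported; the series hs is a placeholder.
import numpy as np
from tensorflow.keras import layers, models

hs = np.random.rand(10000)                       # placeholder Hs series
win = 20
X = np.array([hs[i:i + win] for i in range(len(hs) - win)])[..., None]
y = hs[win:]

model = models.Sequential([
    layers.Input(shape=(win, 1)),
    layers.LSTM(64),
    layers.Dense(1),                             # next half-hourly Hs value
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, batch_size=256, epochs=10, validation_split=0.1)
```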

Author 1: Manzoor Ahmed Hashmani
Author 2: Muhammad Umair
Author 3: Horio Keiichi

Keywords: Wave energy converter; significant wave height; peak wave period; LSTM

PDF

Paper 59: Dynamic User Activity Prediction using Contextual Service Matching Mechanism

Abstract: The significance of context-based services is increasing with the advancement of integrated sensor technologies and ubiquitous computing. A review of existing approaches shows that the identification of a user's activity has much scope for improvement. After reviewing the current literature on context-based methodologies, it is found that existing methods do not consider dynamic context; the modelling perspective mainly considers predefined and static contextual information. Further, existing models include neither a potential belief system nor any service matching mechanism. In addition, real-world case studies are characterized by complex user activity, and it is quite challenging to extract the accurate contextual information associated with it. From a practical deployment perspective, existing systems offer little support for collaborative networks, which is highly essential for constraint modelling in user activity detection. Therefore, this manuscript contributes a solution to these research problems by introducing Dynamic User Activity Prediction using a Contextual Service Matching Mechanism. A mixed research methodology is used to show how a service matching mechanism supports contextual service discovery using multimodal activity data. The first contribution is a novel and simplified belief system that considers both static contextual parameters and dynamic activity-based contextual parameters. The second contribution is a novel service matching module that takes input from a service repository, user calendar events, and collaborative units to assist a similarity-based recommendation system. The model uses a Hidden Markov Model for activity determination considering states of activity. With the combined usage of user activity context, feature management, and a collaborative model, the proposed system offers better granularity in investigating user activity. The experimental and simulation analysis shows the enhanced accuracy of the proposed system under different test environments. The study also investigates the impact of the service matching mechanism as well as relevance feedback on accuracy, finding that the proposed system achieves better accuracy.

Author 1: M. Subramanyam
Author 2: S. S. Parthasarathy

Keywords: Contextual information; service discovery; prediction; ubiquitous computing; user activity

PDF

Paper 60: Progressive 3-Layered Block Architecture for Image Classification

Abstract: Convolutional Neural Networks (CNNs) have been used to handle a wide range of computer vision problems, including image classification and object detection. Image classification refers to automatically classifying a huge number of images, and various techniques have been developed for accomplishing this goal. The focus of this article is to enhance the image classification accuracy of CNN models by using the concepts of transfer learning and progressive resizing with a split-and-train strategy. Furthermore, the Parametric Rectified Linear Unit (PReLU) activation function, which generalizes the standard rectified unit, has also been applied to the dense layers of the model. PReLU enhances model fitting with little additional computational cost and low over-fitting risk. A "Progressive 3-Layered Block Architecture" model is proposed in this paper, which considers the fine-tuning of hyperparameters and optimizers of the deep network to achieve state-of-the-art accuracy on benchmark datasets with fewer parameters.
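
A minimal Keras sketch of PReLU applied to the dense layers of a transfer-learning head is given below; the MobileNetV2 backbone, layer widths, and 10-class output are illustrative assumptions, not the proposed architecture itself:

```python
# Sketch of PReLU on the dense layers of a transfer-learning head;
# the backbone choice and layer sizes are illustrative assumptions.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

base = MobileNetV2(include_top=False, pooling='avg', input_shape=(224, 224, 3))
base.trainable = False                      # transfer learning: freeze backbone

model = models.Sequential([
    base,
    layers.Dense(256),
    layers.PReLU(),                         # learnable negative slope
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```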

Author 1: Munmi Gogoi
Author 2: Shahin Ara Begum

Keywords: CNN; transfer learning; progressive resizing; PReLU; deep network

PDF

Paper 61: Face Recognition using Principal Component Analysis and Clustered Self-Organizing Map

Abstract: Face recognition is one of the cornerstones of the face processing schemes that compose contemporary intelligent vision-based interactive systems between computers and humans. Instead of using the neurons of the Self-Organizing Map (SOM) neural network to cluster the facial data directly, in this work we applied agglomerative hierarchical clustering to cluster the neurons of the SOM network, which, in turn, are used to cluster the facial dataset. Beforehand, Principal Component Analysis (PCA) is employed to reduce the dimension of the facial data as well as to establish the initial state of the SOM neurons. The design of the clustered-SOM recognition engine involves post-training steps that label the clustered SOM neurons, resulting in a supervised SOM network. The effectiveness of the proposed model is demonstrated using the well-known ORL database. Using five images per person for SOM training, the proposed recognizer achieves a recognition rate of 94.7%, whereas using nine images raises the recognition rate to 99.33%. The facial recognizer attained notable reliability and robustness against additive white Gaussian noise: increasing the noise variance from 0 to 0.09 decreased the recognition rate by only 8%. Furthermore, the time cost is analyzed: training with 200 images takes less than 4 seconds, whereas testing on a new set of 200 images takes less than 0.013 seconds, which is competitive with many artificial intelligence and machine learning based schemes.
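
The pipeline can be sketched as follows using the MiniSom library: PCA compresses the face vectors, a SOM is trained on them, and the SOM neurons (rather than the faces) are grouped by agglomerative clustering. Grid size and dimensions are illustrative assumptions:

```python
# Sketch of PCA -> SOM -> agglomerative clustering of SOM neurons;
# grid size, component count and placeholder faces are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering
from minisom import MiniSom

faces = np.random.rand(200, 10304)          # placeholder 92x112 face vectors
X = PCA(n_components=40).fit_transform(faces)

som = MiniSom(8, 8, X.shape[1], sigma=1.0, learning_rate=0.5)
som.train_random(X, 5000)

neurons = som.get_weights().reshape(-1, X.shape[1])   # 64 neuron prototypes
clusters = AgglomerativeClustering(n_clusters=40).fit_predict(neurons)

# A face inherits the cluster of its best-matching neuron.
i, j = som.winner(X[0])
print("face 0 -> cluster", clusters[i * 8 + j])
```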

Author 1: Jasem Almotiri

Keywords: Artificial intelligence; machine learning; clustering; agglomerative hierarchical clustering; face recognition; neural network; self-organizing map; principal component analysis

PDF

Paper 62: Performance Evaluation of Safe Avoidance Time and Safety Message Dissemination for Vehicle to Vehicle (V2V) Communication in LTE C-V2X

Abstract: VANET offers many opportunities to manage vehicle safety on the road efficiently. The standards from the European Telecommunications Standards Institute (ETSI) for Intelligent Transport Systems (ITS) provide the necessary upper-layer specifications for safety message dissemination between vehicles using Cooperative Awareness Messages (CAM) and Decentralized Environmental Notification Messages (DENM). Besides, the Long-Term Evolution (LTE) mobile radio technology in Release 14 comes with two modes of communication, mode 3 and mode 4, to support vehicle-to-vehicle communication. The relationship between vehicle time gap, speed, and UE transmit power significantly impacts the Packet Delivery Ratio (PDR) and throughput. At higher vehicle speeds, longer safe distances must be kept to ensure safety. However, at longer safe distances, we have shown that communication may be lost because CAM messages cannot be exchanged successfully; as a result, vehicle safety cannot be guaranteed using V2V communication. This may get worse in urban or city environments where interference is dominant. Simulation results provide evidence that the variable distance between vehicles cannot be ignored if vehicle safety with successful message communication among them is to be ensured.

Author 1: Hakimah Abdul Halim
Author 2: Azizul Rahman Mohd Shariff
Author 3: Suzi Iryanti Fadilah
Author 4: Fatima Karim

Keywords: Time gap; safe distance; collision; VANET; CAM

PDF

Paper 63: Design and Implementation of a Low-cost CO₂ Monitoring and Control System Prototype to Optimize Ventilation Levels in Closed Spaces

Abstract: High concentrations of CO₂ are significantly present in closed environments that do not have proper ventilation. Such high concentrations generate negative health consequences such as dizziness, headaches and various respiratory problems. For this reason, the design and implementation of a low-cost CO₂ monitoring and control prototype is proposed to optimize ventilation levels in closed spaces. The parameters that the proposed device measures are the concentration of carbon dioxide, humidity and temperature. A digital PID controller was implemented, using the C++ programming language and an exhaust fan, to stabilize carbon dioxide levels within a closed space. The aforementioned parameters can be viewed in two ways: locally, through an LCD screen and LED indicators, and remotely, using the free Arduino IoT Cloud platform. The closed environment was emulated using a cardboard box, and in the tests the prototype managed to keep the CO₂ concentration levels below the established limit. However, this can be further improved by using more precise sensors for more accurate results. It is expected that this model can be successfully scaled to closed spaces such as classrooms and offices.
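
A minimal sketch of the discrete PID loop described above is given below, written in Python for consistency with the other sketches (the prototype itself uses C++). The gains, setpoint, and sensor/fan stubs are illustrative assumptions:

```python
# Sketch of a discrete PID loop driving an exhaust fan from CO2 readings;
# gains, setpoint and the sensor/fan stubs are illustrative assumptions.
import time

KP, KI, KD = 2.0, 0.5, 0.1              # illustrative gains
SETPOINT_PPM = 800                      # assumed CO2 limit

def read_co2_sensor():                  # stub standing in for the real sensor
    return 950.0

def set_fan_speed(percent):             # stub standing in for the exhaust fan
    print(f"fan at {percent:.0f}%")

integral, prev_error = 0.0, 0.0
for _ in range(3):                      # one-second control loop
    dt = 1.0
    error = read_co2_sensor() - SETPOINT_PPM   # positive when CO2 too high
    integral += error * dt
    derivative = (error - prev_error) / dt
    prev_error = error
    output = KP * error + KI * integral + KD * derivative
    set_fan_speed(max(0.0, min(100.0, output)))  # clamp to 0-100%
    time.sleep(dt)
```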

Author 1: Ramces Cavallini-Rodriguez
Author 2: Jesus Espinoza-Valera
Author 3: Carlos Sotomayor-Beltran

Keywords: CO₂ monitoring; IoT; low-cost indoor ventilation system; NodeMCU; open source software

PDF

Paper 64: BCSM: A BlockChain-based Security Manager for Big Data

Abstract: The amount of data generated globally is increasing rapidly. This growth in big data poses security and privacy issues. Organizations that collect data from numerous sources could face legal or business consequences resulting from a security breach and the exposure of sensitive information. The traditional tools used for decades to handle, manage, and secure data are no longer suitable in the case of big data. Furthermore, most current security tools rely on third-party services, which have numerous security problems. More research must investigate the protection of sensitive user information, which can be abused and altered from several sides. Blockchain is a promising technology that provides decentralized backend infrastructure. Blockchain keeps track of transactions indefinitely and protects them from alteration, providing a secure, tamper-proof database that may be used to track the past state of the system. In this paper, we present our big data security manager based on Hyperledger Fabric, which provides end-to-end big data security, including data storage, transmission, and sharing as well as access control and auditing mechanisms. The manager components and modular architecture are illustrated. The metadata and permissions related to stored datasets are stored in the blockchain to be protected. Finally, we have tested the performance of our solution in terms of transaction throughput and average latency. The performance metrics are provided by Hyperledger Caliper, a benchmark tool for analyzing Hyperledger blockchain performance.

Author 1: Hanan E. Alhazmi
Author 2: Fathy E. Eassa

Keywords: Big data security; blockchain; access control; hyperledger fabric

PDF

Paper 65: A Review-based Context-Aware Recommender Systems: Using Custom NER and Factorization Machines

Abstract: Recommender Systems depend fundamentally on user feedback to provide recommendations. Classical recommenders are based only on historical data and also suffer from several problems linked to the lack of data, such as sparsity. Users' reviews represent a massive amount of valuable and rich knowledge, but they are still ignored by most current recommender systems. Information such as users' preferences and contextual data could be extracted from reviews and integrated into recommender systems to provide more accurate recommendations. In this paper, we present a Context-Aware Recommender System model based on a Bidirectional Encoder Representations from Transformers (BERT) pretrained model to customize Named Entity Recognition (NER). The model automatically extracts contextual information from reviews, then feeds the extracted data into a contextual Factorization Machine to compute and predict ratings. Empirical results show that our model improves the quality of recommendation and outperforms existing recommender systems.
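
As an illustration of BERT-based NER over review text, the following sketch uses the Hugging Face transformers pipeline with a generic pretrained checkpoint; dslim/bert-base-NER stands in for the authors' custom-trained model:

```python
# Sketch of review entity extraction with a BERT-based NER pipeline;
# the generic pretrained checkpoint is a stand-in, not the paper's model.
from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")
review = "The pasta at Luigi's in Rome was wonderful on a rainy evening."
for ent in ner(review):
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 2))
```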

Author 1: Rabie Madani
Author 2: Abderrahmane Ez-zahout

Keywords: Recommender systems; context aware recommender systems; factorization machines; bidirectional encoder representations from transformers; named entity recognition

PDF

Paper 66: A Computer Vision-based System for Surgical Waste Detection

Abstract: The world population is going through a difficult time due to the COVID-19 pandemic while other disasters prevail. However, a new environmental catastrophe is coming because surgical masks and gloves are discarded anywhere, leading to massive spreading of COVID-19 and environmental disasters. A significant number of masks and gloves are not properly managed; they are scattered around us on roads, in rivers, on beaches, in oceans and in other places. These types of waste turn into microplastics and chemicals that are deadly harmful to the environment, human health and other species, especially aquatic animals. During the outbreaks of the corona pandemic, surgical waste in open places or seawater can create a fatally contagious environment, while putting it in a particular area can protect us from the spread of infectious diseases. This study proposes a system that can detect surgical masks, gloves and infectious/biohazard symbols so that infectious waste can be put down in a specific place or container. Among the various types of surgical waste, this study focuses on masks and gloves since they are currently the most widely used items due to COVID-19. A novel dataset named MSG (Mask, Bio-hazard Symbol and Gloves) is created, containing 1153 images and their corresponding annotations. Different versions of You Only Look Once (YOLO) are applied as the architecture of this study; the YOLOX model performs best.

Author 1: Md. Ferdous
Author 2: Sk. Md. Masudul Ahsan

Keywords: COVID-19; You Only Look Once (YOLO); surgical waste; deep learning; image dataset; real-time detection

PDF

Paper 67: Human-Computer Interaction in Mobile Learning: A Review

Abstract: Mobile learning mainly concerns mobility and high-quality education, regardless of location or time. Human-computer interaction comprises the concepts and methods by which humans interact with computers, including designing, implementing, and evaluating computer systems that are accessible and provide an intuitive user interface. Some studies showed that mobile learning could help overcome multiple limitations and improve learning in educational systems. This study investigates the HCI design challenges, including the guidelines and methods in mobile HCI for education. An existing mobile learning tool, Udemy, is discussed in terms of its current and possible future design enhancements. The study then further discusses future mobile learning to identify possible improvements for learners based on the challenges of mobile HCI in education.

Author 1: Nurul Amirah Mashudi
Author 2: Mohd Azri Mohd Izhar
Author 3: Siti Armiza Mohd Aris

Keywords: Human-computer interaction; education technology; digital technology; mobile learning; e-learning

PDF

Paper 68: A Secure and Trusted Fog Computing Approach based on Blockchain and Identity Federation for a Granular Access Control in IoT Environments

Abstract: Fog computing is a new computing paradigm that extends the standard cloud computing model and can be adopted as a cost-effective strategy for managing connected objects, by enabling real-time computing and communication for analytics and decision making. Nonetheless, even though Fog-based Internet of Things networks optimize the standard architecture by moving computing, storage, communication, and control decisions closer to the edge network, the technology becomes open to malicious attackers, and many business risks remain unresolved. In fact, access control, privacy and trust risks present major challenges in Internet of Things environments based on Fog computing due to the large-scale distributed nature of devices at the Fog layer. In addition, traditional authentication methods are not adequate in Fog-based Internet of Things contexts since they consume significantly more computation power and incur high latency. To address these gaps, we present in this paper a secure and trusted Fog computing approach based on Blockchain and Identity Federation technologies for granular access control in IoT environments. The proposed scheme uses the Smart Contract concept and an Attribute-Based Access Control model to ensure the level of security and scalability required for data integrity, without resorting to a central authority to make access decisions.
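
A minimal sketch of the kind of attribute-based access decision a smart contract could encode is shown below; the attribute names and the example policy are illustrative assumptions:

```python
# Sketch of an attribute-based access control (ABAC) decision of the kind
# a smart contract could encode; attributes and policy are assumptions.
def abac_decide(subject, resource, action, environment, policies):
    for p in policies:
        if (p["action"] == action
                and p["role"] == subject.get("role")
                and p["max_sensitivity"] >= resource.get("sensitivity", 0)
                and environment.get("fog_node_trusted", False)):
            return "PERMIT"
    return "DENY"

policies = [{"action": "read", "role": "nurse", "max_sensitivity": 2}]
print(abac_decide({"role": "nurse"},
                  {"sensitivity": 1},
                  "read",
                  {"fog_node_trusted": True},
                  policies))   # -> PERMIT
```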

Author 1: Samia EL HADDOUTI
Author 2: Mohamed Dafir ECH-CHERIF EL KETTANI

Keywords: Access control; blockchain; fog computing; identity federation; IoT; smart contracts

PDF

Paper 69: Intraday Trading Strategy based on Gated Recurrent Unit and Convolutional Neural Network: Forecasting Daily Price Direction

Abstract: Forex, or FX, is the short form of the Foreign Exchange Market, known as the largest financial market in the world, where investors can buy a certain amount of currency, hold it until the exchange rate moves, then sell it to make money. This operation is not as easy as it looks; due to the strong fluctuation of this market, investors find it a risky area to trade. A successful strategy in Forex should reduce the rate of risk and increase the profitability of investment by considering economic and political factors and avoiding emotional investment. In this article, we propose a trading strategy based on machine learning algorithms to reduce the risks of trading on the forex market and increase benefits at the same time. For that, we use an algorithm that generates technical indicators and technical rules containing information that may explain the movement of the price; the generated data is fed to a machine-learning algorithm to learn and recognize price patterns. Our algorithm is the combination of two deep learning algorithms, the Gated Recurrent Unit (GRU) and the Convolutional Neural Network (CNN); it aims to predict the next day's signal (BUY, HOLD or SELL). The model performance is evaluated for USD/EUR using different metrics generally used for machine learning algorithms; another method is used to evaluate profitability by comparing the returns of the strategy with the returns of the market. The proposed system showed a good improvement in the prediction of the price.
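
A minimal Keras sketch of the CNN-plus-GRU combination for the three-class next-day signal is given below; the window length, feature count, and layer sizes are illustrative assumptions:

```python
# Sketch of a CNN + GRU model over windows of technical indicators that
# outputs a three-class signal; sizes are illustrative assumptions.
from tensorflow.keras import layers, models

WINDOW, FEATURES = 30, 12        # days of history x technical indicators

model = models.Sequential([
    layers.Input(shape=(WINDOW, FEATURES)),
    layers.Conv1D(32, 3, activation='relu'),   # local price-pattern features
    layers.MaxPooling1D(2),
    layers.GRU(64),                            # temporal dependencies
    layers.Dense(3, activation='softmax'),     # BUY / HOLD / SELL
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
```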

Author 1: Nabil MABROUK
Author 2: Marouane CHIHAB
Author 3: Zakaria HACHKAR
Author 4: Younes CHIHAB

Keywords: Forex; trading; machine learning; deep learning; random forest; technical indicators; technical rules; convolutional neural network; gated recurrent unit

PDF

Paper 70: Remote Healthcare Monitoring using Expert System

Abstract: With the introduction of the novel coronavirus and the ensuing epidemic, health care has become a primary priority for all governments. In this context, the best course of action is to implement an Internet of Things (IoT)-based remote health monitoring system. As a result, IoT systems have attracted significant attention in academia and industry, and this trend is likely to continue as wearable sensors and smartphones become more prevalent. Even if the doctor is a substantial distance away, IoT health monitoring enables the prevention of illness and the accurate diagnosis of one’s current state of health through the use of a portable physiological monitoring framework that continually monitors the patient’s systolic blood pressure, blood glucose, oxygen saturation, and diastolic blood pressure. The expert system generates a diagnosis of the patient’s health status based on the sensor data. Once the patient’s sensor data is transmitted to the cloud via a WiFi module, the expert system uses it to diagnose the patient’s health status in order to facilitate any medical attention or critical care that may be required for his condition. The simulation is carried out in Matlab, and the results of the study are presented to demonstrate the suggested system’s significance.

Author 1: Prajoona Valsalan
Author 2: Najam ul Hasan
Author 3: Imran Baig
Author 4: Manaf Zghaibeh

Keywords: Internet of Things (IoT); remote health care monitoring; wearable sensors; fuzzy logic

PDF
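
Since Paper 70's expert system is fuzzy-logic based, the following toy Python fragment shows triangular membership functions over one vital sign (systolic blood pressure). The thresholds are illustrative assumptions, not the paper's clinical rule base.

```python
# A toy fuzzy-membership fragment for one vital sign; thresholds are illustrative.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def systolic_memberships(mmhg):
    return {
        "low":    tri(mmhg, 70, 90, 110),
        "normal": tri(mmhg, 100, 120, 140),
        "high":   tri(mmhg, 130, 160, 200),
    }

print(systolic_memberships(125))  # {'low': 0.0, 'normal': 0.75, 'high': 0.0}
```

A full expert system would combine such memberships across several sensors with fuzzy rules to produce a health-status diagnosis.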

Paper 71: An End-to-End Method to Extract Information from Vietnamese ID Card Images

Abstract: Information extraction from ID cards plays an important role in many daily activities, such as legal, banking, insurance, and health services. However, in many developing countries, such as Vietnam, it is mostly carried out manually, which is time-consuming, tedious, and prone to errors. Therefore, in this paper, we propose an end-to-end method to extract information from Vietnamese ID card images. The proposed method consists of three steps built on four neural networks and two image processing techniques: U-Net, VGG16, contour detection, and the Hough transformation to pre-process input card images; CRAFT and the Rebia neural network for optical character recognition; and Levenshtein distance and regular expressions to post-process the extracted information. In addition, a dataset of 3,256 Vietnamese ID cards, 400k manually annotated text samples, and more than 500k synthetic text samples was built to verify our method. The results of an empirical experiment conducted on our self-collected dataset indicate that the proposed method achieves high accuracies of 94%, 99.5%, and 98.3% for card segmentation, classification, and text recognition, respectively.

Author 1: Khanh Nguyen-Trong

Keywords: Optical character recognition; U-Net network; VGG16 network; CRAFT network; rebia network

PDF
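
Paper 71's post-processing step uses Levenshtein distance to correct noisy OCR output. Here is a minimal Python sketch that snaps an OCR'd field to the closest entry in a known vocabulary; the vocabulary and helper names are illustrative assumptions.

```python
# Minimal Levenshtein-based OCR correction: snap a noisy field to the
# closest known vocabulary entry (e.g. a province name).

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def snap(field: str, vocabulary: list) -> str:
    return min(vocabulary, key=lambda w: levenshtein(field, w))

print(snap("Ha Nol", ["Ha Noi", "Da Nang", "Can Tho"]))  # 'Ha Noi'
```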

Paper 72: A New Text Summarization Approach based on Relative Entropy and Document Decomposition

Abstract: In the era of the fourth industrial revolution, rapidly growing reliance on the Internet has made online resources grow explosively. This revolution emphasizes the demand for new approaches to exploiting online resources such as texts. The difficulty of comparing unstructured resources (texts) therefore urges the development of a new approach, which is the core of this paper. Text summarization technology is a vital part of text processing, and its focus is on semantic information, not just surface information: it requires mining topic features in order to obtain topic-word and topic-sentence relationships. The proposed automatic text summarization performs document decomposition according to relative entropy analysis, i.e., it measures the difference between probability distributions to quantify the correlation between sentences. This paper introduces a new method for document decomposition that categorizes sentences into three types of content. The measured performance demonstrates the efficiency of using the relative entropy of the topic probability distribution over sentences, which enriches the horizon of the text processing and summarization research field.

Author 1: Nawaf Alharbe
Author 2: Mohamed Ali Rakrouki
Author 3: Abeer Aljohani
Author 4: Mashael Khayyat

Keywords: Natural language processing; text summarization; extractive methods; relative entropy

PDF
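
The relative entropy (KL divergence) at the core of Paper 72 can be illustrated with a short numpy sketch that scores each sentence's topic distribution against the document's. The distributions below are made-up placeholders, not the paper's data.

```python
# Score sentences by KL divergence between sentence-level and document-level
# topic distributions; low divergence = on-topic sentence.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

doc_topics = [0.5, 0.3, 0.2]             # document-level topic distribution
sentences = {
    "s1": [0.55, 0.25, 0.20],            # close to the document -> low KL
    "s2": [0.05, 0.05, 0.90],            # off-topic -> high KL
}
for sid, dist in sentences.items():
    print(sid, round(kl_divergence(dist, doc_topics), 4))
```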

Paper 73: A Game Theory-based Virtual Machine Placement Algorithm in Hybrid Cloud Environment

Abstract: This paper deals with the problem of virtual machine placement in a hybrid cloud environment from economic and QoS perspectives: excessive investment in resources in a cloud computing environment results in wasted resources, while too few resources generate QoS issues. This paper uses a game theory model to describe the problem and find the balance between these contradictory goals. Based on this model, a virtual machine placement algorithm for scheduling virtual resources is proposed. In contrast to traditional game-theoretic formulations, our LBOGT algorithm proposes a game among three parties: users, individual providers, and provider groups. Experiments show that the proposed algorithm reduces physical machines' energy consumption by 6.16% and increases provider profit by 10.6% while preserving users' QoS.

Author 1: Nawaf Alharbe
Author 2: Mohamed Ali Rakrouki

Keywords: Cloud computing; virtual machine placement; game theory; quality of service; load balancing; energy consumption

PDF
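
The trade-off Paper 73 formalizes can be caricatured with a toy utility function: over-provisioning wastes energy, under-provisioning violates QoS. The cost model and weights below are illustrative assumptions, not the LBOGT payoff functions.

```python
# Toy utility trade-off between wasted capacity (energy) and unmet demand (QoS).

def utility(n_machines, demand, energy_cost=1.0, qos_penalty=5.0):
    idle = max(0, n_machines - demand)         # wasted capacity -> energy cost
    shortfall = max(0, demand - n_machines)    # unmet demand -> QoS loss
    return -(energy_cost * idle + qos_penalty * shortfall)

demand = 7
best = max(range(1, 16), key=lambda n: utility(n, demand))
print(best)  # 7: the balance point between waste and QoS violations
```

The actual game adds strategic interaction among the three parties; the sketch only shows the underlying cost tension a single player faces.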

Paper 74: Ensuring Privacy Preservation Access Control Mechanism in Cloud based on Identity based Derived Key

Abstract: Cloud computing is a dominant technology that involves massive amounts of data storage and access via the Internet. Because a large amount of data is stored in data centers, it is critical to implement appropriate access control mechanisms over data stored in a cloud. Today, numerous access control mechanisms are available to provide confidentiality, privacy, and data origin authentication in a cloud environment. However, the available access control techniques may incur high computational overhead and raise security concerns. In this paper, we design and implement privacy-preserving access control in cloud computing using derived-key identity-based encryption. The proposed method reduces the computational overhead of key generation while increasing the robustness of the cryptographic keys. A trusted key center (TKC) is involved in the key generation process. The experimental results show that the proposed method reduces computational overhead and provides an easy way to implement an access control mechanism in a cloud environment.

Author 1: Suresha D
Author 2: K Karibasappa
Author 3: Shivamurthy

Keywords: Access control; cloud storage; confidentiality; data origin authentication; key derivation

PDF
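
To illustrate per-identity key derivation in general terms, the sketch below derives a user key from a master secret with HKDF (RFC 5869) using only the standard library. This stands in for the idea of identity-derived keys; it is not the paper's identity-based encryption scheme or its TKC protocol.

```python
# Derive a per-identity key from a master secret with HKDF (RFC 5869).
import hashlib, hmac

def hkdf_sha256(master: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    prk = hmac.new(salt, master, hashlib.sha256).digest()   # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                # expand step
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

master = b"master-secret-held-by-trusted-key-center"   # hypothetical TKC secret
user_key = hkdf_sha256(master, salt=b"tkc-salt",
                       info=b"identity:alice@example.com")
print(user_key.hex())
```

Binding the derivation `info` to the user's identity means the key center never needs to store per-user keys; it can re-derive them on demand.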

Paper 75: Multiple Hydrophone Arrays based Underwater Localization with Matching Field Processing

Abstract: Matched field processing (MFP) is a general passive localization method for underwater sound sources due to its advantages in ultra-long-distance positioning. In this paper, assuming the total number of hydrophones remains unchanged, a single hydrophone array is divided into multiple hydrophone sub-arrays that perform positioning independently, and the positioning results of the sub-arrays are fused to reduce the impact of noise and improve the robustness of the positioning system. Based on the traditional Bartlett processor, we derive a formula for the average positioning error as it varies with the signal-to-noise ratio (SNR) and the number of hydrophones. The formula is used to decide the optimal structure of the sub-arrays, i.e., the number of sub-arrays and the number of hydrophones in each sub-array. Experiments and simulations prove that multiple sub-arrays can improve positioning accuracy compared with a single hydrophone array in a noisy environment. The average positioning errors produced by the experiments are consistent with the numerical results based on the theoretical analysis.

Author 1: Shuo Jin
Author 2: Xiukui Li

Keywords: Matched Field Processing (MFP); hydrophone array; source localization; underwater acoustic

PDF
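
The Bartlett processor that Paper 75 builds on correlates the measured array snapshot with replica fields computed for candidate source positions and picks the peak. The numpy sketch below uses random stand-in replicas instead of a real acoustic propagation model; sizes and noise levels are illustrative assumptions.

```python
# Bartlett matched-field estimator sketch: B = |w^H d|^2 with normalized replica w.
import numpy as np

def bartlett_power(data: np.ndarray, replica: np.ndarray) -> float:
    w = replica / np.linalg.norm(replica)        # normalized replica vector
    return float(np.abs(np.vdot(w, data)) ** 2)  # Bartlett output |w^H d|^2

rng = np.random.default_rng(0)
n_hydrophones, n_candidates = 16, 50
true_field = (rng.standard_normal(n_hydrophones)
              + 1j * rng.standard_normal(n_hydrophones))
snapshot = true_field + 0.3 * rng.standard_normal(n_hydrophones)  # noisy snapshot

replicas = (rng.standard_normal((n_candidates, n_hydrophones))
            + 1j * rng.standard_normal((n_candidates, n_hydrophones)))
replicas[17] = true_field  # grid point matching the true source position
powers = [bartlett_power(snapshot, r) for r in replicas]
print(int(np.argmax(powers)))  # 17: the ambiguity surface peaks at the source
```

Sub-array fusion, the paper's contribution, would compute such a surface per sub-array and combine them before taking the peak.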

Paper 76: High-quality Voxel Reconstruction from Stereoscopic Images

Abstract: Volumetric reconstruction from one or multiple RGB images has shown significant advances in recent years, but the approaches used so far do not take advantage of stereoscopic features such as distance blur, perspective disparity, and textures that are useful for shaping object volumes. This study evaluates a convolutional neural network architecture for the reconstruction of 128³ voxel models from 960 pairs of stereoscopic images. The preliminary results show 80% agreement with the original models in two categories using the Intersection over Union metric. These results indicate that good reconstructions can be made from a small dataset, which reduces the time and memory usage for this task.

Author 1: Arturo Navarro
Author 2: Manuel Loaiza

Keywords: Voxel reconstruction; stereoscopy; convolutional neural networks; disparity maps

PDF
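
The Intersection over Union metric used in Paper 76 compares a predicted occupancy grid against a ground-truth voxel model. A minimal numpy sketch, with random volumes standing in for real network output:

```python
# Voxel IoU: |pred AND truth| / |pred OR truth| over 128^3 occupancy grids.
import numpy as np

def voxel_iou(pred: np.ndarray, truth: np.ndarray, threshold: float = 0.5) -> float:
    p, t = pred > threshold, truth > 0.5
    intersection = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return intersection / union if union else 1.0

rng = np.random.default_rng(1)
truth = rng.random((128, 128, 128)) > 0.7       # synthetic ground-truth volume
flip = rng.random(truth.shape) < 0.05           # corrupt 5% of voxels
pred = np.where(flip, ~truth, truth)            # stand-in for network output
print(round(voxel_iou(pred.astype(float), truth.astype(float)), 3))
```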

Paper 77: Structural Information Retrieval in XML Documents: A Graph-based Approach

Abstract: Although retrieval engines are becoming more and more functional and efficient, they still have the drawback of not being able to locate the relevant documentary granularity, which means the structural aspect is ignored. In the context of XML documents, information retrieval systems can return documentary granules to the user. Several studies have used graphs to represent XML documents. In the scope of this research, the structure of a semi-structured document and that of a user's query are seen as arborescences composed of a hierarchy of nested elements. Using graph theory, we calculate the structural proximity, and in particular the intersection, between these two arborescences. The article presents a graph-based model for structural information retrieval. A collection of multimedia documents randomly extracted from INEX (Initiative for the Evaluation of XML Retrieval) 2010 is used to validate the approach. The first results show the interest of such an approach.

Author 1: Imane Belahyane
Author 2: Mouad Mammass
Author 3: Hasna Abioui
Author 4: Assmaa Moutaoukkil
Author 5: Ali Idarrou

Keywords: Semi-structured document; XML document; largest common sub-graph; structural Information retrieval

PDF
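
A toy version of the structural proximity Paper 77 computes between two nested-element trees can be expressed as the overlap of their root-to-node label paths. This is a simplification of the paper's largest-common-sub-graph matching; the tree encoding and example are illustrative assumptions.

```python
# Structural proximity of two (label, children) trees via path-set overlap.

def label_paths(tree, prefix=()):
    """Yield every root-to-node label path of a (label, children) tree."""
    label, children = tree
    path = prefix + (label,)
    yield path
    for child in children:
        yield from label_paths(child, path)

def structural_proximity(doc_tree, query_tree):
    d, q = set(label_paths(doc_tree)), set(label_paths(query_tree))
    return len(d & q) / len(d | q)   # Jaccard overlap of structural paths

doc = ("article", [("title", []), ("body", [("section", [("p", [])])])])
query = ("article", [("body", [("section", [])])])
print(round(structural_proximity(doc, query), 2))  # 0.6
```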

Paper 78: Dynamic Support Range based Rare Pattern Mining over Data Streams

Abstract: Rare itemset mining is a relatively recent topic of study in data mining. In certain application domains, such as online banking transaction analysis, sensor data analysis, and stock market analysis, rare patterns are patterns with low support and high confidence that are extremely interesting compared to frequent patterns. Numerous applications generate large amounts of continuous data streams, and we require efficient algorithms capable of processing these streams in order to analyze them and find rare patterns. The strategies developed for static databases cannot be applied to data streams; as a result, we require algorithms created expressly for data stream processing in order to extract critical rare patterns. Rare pattern mining is still in its infancy, with only a few approaches available. To address this, we develop the Dynamic Support Range-based Hybrid-Eclat Algorithm (DSRHEA), an Eclat-based technique for mining rare patterns from a data stream using bit-set vertical mining with two item-based optimizations. The detected patterns are kept in a prefix-based rare pattern tree that uses double hashing to maintain the rare patterns in the data stream. Testing showed that the proposed method performs well in terms of running time, the number of rare patterns generated, and accuracy.

Author 1: Sunitha Vanamala
Author 2: L. Padma Sree
Author 3: S. Durga Bhavani

Keywords: Depth first search; Hybrid-Eclat algorithm; SRP-tree; itemset; frequent-pattern support; rare-pattern support; pivot; data stream; rare itemset; infrequent itemset

PDF
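
Eclat-style vertical mining, the basis of Paper 78's DSRHEA, represents each item by its transaction-ID set and computes supports by intersection. The sketch below keeps only 2-itemsets whose support falls inside a "rare" band; the support bounds and transactions are illustrative, and none of the paper's stream or tree optimizations are reproduced.

```python
# Vertical (tidset) Eclat sketch with a rare-pattern support range.
from itertools import combinations

transactions = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b", "d"}, {"a", "d"}]
tidsets = {}
for tid, items in enumerate(transactions):
    for item in items:
        tidsets.setdefault(item, set()).add(tid)

min_sup, max_sup = 1, 2   # "rare" band: low support, above a noise floor
rare = {}
for (i1, t1), (i2, t2) in combinations(sorted(tidsets.items()), 2):
    common = t1 & t2                  # tidset intersection = support count
    if min_sup <= len(common) <= max_sup:
        rare[(i1, i2)] = len(common)
print(rare)  # {('a', 'b'): 2, ('a', 'c'): 2, ('a', 'd'): 1, ...}
```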

Paper 79: Energy Efficient Hop-by-Hop Retransmission and Congestion Mitigation of an Optimum Routing and Clustering Protocol for WSNs

Abstract: In the past few decades, wireless sensor networks, which serve a growing number of applications in surroundings beyond human reach, have risen in popularity. Various routing algorithms have been suggested for network optimization, emphasizing energy efficiency, network longevity, and clustering processes. This paper extends the existing load-balancing, energy-efficient, sleep-awake aware smart sensor network routing protocol with a modified version that takes network homogeneity into account. The modified protocol is an optimum clustering and routing protocol for wireless sensor networks (OCRSN) that simulates an enhanced network coupled node pair model. Our modified approach studies and enhances factors such as network stability, network lifetime, and the cluster-monitor selection mechanism. Combining sensor endpoints is applied to maximize energy efficiency. The proposed protocol significantly improved network parameters in simulations, showing that it could be a valuable option for WSNs. In addition to memory considerations and dependable transport, this paper presents a hop-by-hop retransmission strategy and congestion mitigation for wireless sensor networks, which is its major contribution; it is a highly consistent method based on a pipe flow model. After the additional optimization overhead to improve network lifespan, the proposed algorithm can be compared with the Low Energy Adaptive Clustering Hierarchy protocol. The optimal clustering in the multipath and multihop technique aims to minimize the energy consumption in a circular area around a sink by replacing one-hop communication with efficient multihop communication. The optimal number of clusters is determined, and energy consumption is reduced by splitting the network into clusters of nearly equal size. The obtained simulation results demonstrate an increase in network lifetime compared to previous clustering strategies such as Low Energy Adaptive Clustering Hierarchy.

Author 1: Prakash K Sonwalkar
Author 2: Vijay Kalmani

Keywords: Wireless sensor network; network lifetime; clustering strategies; clustering process

PDF
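
Since Paper 79 benchmarks against LEACH, the baseline's randomized cluster-head election is worth a sketch. Each node that has not recently been a head elects itself with probability T(n) = p / (1 - p (r mod 1/p)); parameters below are illustrative assumptions.

```python
# LEACH-style cluster-head election sketch (the comparison baseline).
import random

def leach_threshold(p: float, current_round: int) -> float:
    """Probability threshold T(n) for a node not yet a head in this epoch."""
    return p / (1 - p * (current_round % round(1 / p)))

p, n_nodes, rnd = 0.1, 20, 3     # desired head fraction, network size, round
random.seed(42)
heads = [node for node in range(n_nodes)
         if random.random() < leach_threshold(p, rnd)]
print(heads)  # nodes elected cluster heads this round
```

Rotating the head role this way spreads the energy cost of aggregation and long-range transmission across the network, which is the property the proposed OCRSN protocol aims to improve on.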

Paper 80: A Novel Approach for Small Object Detection in Medical Images through Deep Ensemble Convolution Neural Network

Abstract: Small object detection in medical images has become an interesting field of research that helps medical practitioners focus on the in-depth evaluation of diseases. Accurate localization and classification of objects face tremendous difficulty due to the low intensity of the images and distracting pixel points that vary the decision when identifying shape, structure, etc. In many real-world cases, the detection and classification of tiny objects in medical images is mandatory. The proposed system addresses these criteria by considering the semantic segmentation of tiny objects in medical images. The system design focuses on implementing the model for different human organs, namely the lung and liver; axial CT or PET images of the lung and liver are the primary input to the system. Detecting tiny objects in CT-PET images, segmenting them from the background, and classifying the segmented parts as tumor or nodule are discussed. After morphology segmentation, which determines the structural features of the segmented tiny objects, features are extracted from the preprocessed images. The feature vectors consist of the feature points from KAZE feature extraction and the morphology-segmented image. These two inputs are fed to a deep ensemble convolutional neural network (DECNN) to obtain the dual classification results. Quantitative measurements evaluate the decision-making system for the nodule and tumor classes. Performance is measured using accuracy, precision, recall, and F1 score.

Author 1: J. Maria Arockia Dass
Author 2: S. Magesh Kumar

Keywords: Medical image processing; convolution neural network; lung tumor detection; early prediction; image enhancement

PDF
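
The KAZE feature-extraction step that feeds Paper 80's ensemble network can be run with OpenCV. In this minimal sketch, a synthetic bright blob stands in for a preprocessed CT/PET slice, and the downstream DECNN is not reproduced.

```python
# KAZE keypoint/descriptor extraction on a synthetic stand-in slice.
import cv2
import numpy as np

slice_img = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(slice_img, (64, 64), 10, 255, -1)   # bright blob as a stand-in nodule

kaze = cv2.KAZE_create()
keypoints, descriptors = kaze.detectAndCompute(slice_img, None)
print(len(keypoints), None if descriptors is None else descriptors.shape)
```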

Paper 81: Software Reliability Prediction by using Deep Learning Technique

Abstract: The importance of software systems and their impact on all sectors of society is undeniable, and it increases every day as more services are digitized. This necessitates the evolution of development and quality processes to deliver reliable software. One of the important criteria for reliable software is that it should be fault-free. Reliability models are designed to evaluate software reliability and predict faults. Software reliability prediction has long been an area of interest in the field of software engineering. It can be performed using numerous available models, but with the inception of computational intelligence techniques, researchers are exploring techniques such as machine learning, genetic algorithms, and deep learning to develop better prediction models. In the current study, a software reliability prediction model is developed using a deep learning technique and evaluated on twelve real datasets from different repositories. The results of the proposed model are analyzed and found to be quite encouraging; they are also compared with previous studies based on various performance metrics.

Author 1: Shivani Yadav
Author 2: Balkishan

Keywords: Software reliability; deep learning; performance metrics; prediction; dense neural network; fault prediction

PDF
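
Paper 81's keywords name a dense neural network, so a minimal Keras regression sketch is given below. The 10-feature input and layer sizes are assumptions for illustration, not the paper's tuned architecture.

```python
# A minimal dense network for fault-count prediction from software metrics.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(10,)),           # assumed: 10 static code metrics
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                     # predicted fault count
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
```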

Paper 82: Performance Evaluation of Temporal and Frequential Analysis Approaches of Electromyographic Signals for Gestures Recognition using Neural Networks

Abstract: Nowadays, human-machine interfaces are increasingly intuitive and straightforward to design, but capturing electromyographic signal data with a minimal amount of hardware remains difficult. This work takes the signals of a human forearm as input parameters describing a series of five gestures, using a dataset of 8 channels of electromyographic signals captured with the Myo armband, a device from Thalmic Labs Inc. The aim is to compare the performance of an artificial neural network using time-domain data as input to the learning system against the same data pre-processed into the frequency domain, looking for an improvement in the network's performance, since transforming the input signals to the frequency domain minimizes the problems inherent to this type of signal. This transformation is achieved using the fast Fourier transform. The goal is a neural network architecture, designed with the TensorFlow libraries for Python, that recognizes the gestures captured with the Myo armband at a performance level suitable for stand-alone applications. As a result, a comparison of the neural network trained with time-domain data versus the same data expressed in the frequency domain is obtained, in terms of the increase in performance and the percentage of gesture detection.

Author 1: Edwar Jacinto Gomez
Author 2: Fredy H. Martinez Sarmiento
Author 3: Fernando Martinez Santa

Keywords: Neural networks; electromyographic signals; Myo armband; TensorFlow; fast Fourier transform

PDF
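
The time-to-frequency preprocessing step in Paper 82 can be shown with numpy's FFT: take the magnitude spectrum of each EMG channel window. The window length and sampling rate below are assumptions, the 8-channel layout follows the abstract, and the signal is synthetic.

```python
# FFT-based feature extraction for one 8-channel EMG analysis window.
import numpy as np

fs, window = 200, 64                         # assumed sampling rate and window size
rng = np.random.default_rng(0)
emg = rng.standard_normal((8, window))       # 8 channels, one analysis window

spectra = np.abs(np.fft.rfft(emg, axis=1))   # magnitude spectrum per channel
freqs = np.fft.rfftfreq(window, d=1 / fs)
print(spectra.shape, freqs[:5])              # (8, 33) feature matrix for the net
```

Feeding the network these magnitude spectra, rather than raw samples, is the frequency-domain variant the paper compares against the time-domain baseline.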
