The Science and Information (SAI) Organization

IJACSA Volume 12 Issue 5

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

View Full Issue

Paper 1: Rain Attenuation in 5G Wireless Broadband Backhaul Link and Develop (IoT) Rainfall Monitoring System

Abstract: Climate change is causing more frequent and intense rainfall, which affects wireless communications by severely attenuating the power of the transmitted signal. These losses reduce network coverage and, therefore, system availability. The proposed solution integrates an Internet of Things (IoT) rainfall monitoring system able to collect real-time data on the rainfall depth occurring at a particular place. These data help determine where base stations should be installed and whether link distances need to be changed to reduce the harmful effects of rainfall. Predicting rain-induced attenuation is therefore an essential task for both terrestrial and satellite links. The present study uses the ITU-R P.838 and ITU-R P.530 models to theoretically calculate losses in a 5G wireless broadband link with 99.9% link availability. Three frequency bands, 24 GHz, 28 GHz, and 38 GHz, are studied for Palo Alto, California. The path length is 5 km, and the rainfall rate for the analyzed area corresponds to zone D. The results show that the attenuation increases with frequency and rainfall rate and depends on polarization.

Author 1: Konstantinos Zarkadas
Author 2: George Dimitrakopoulos

Keywords: Rain attenuation; Internet of Things; wireless broadband

PDF
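To make the ITU-R power-law model used in Paper 1 concrete, the short sketch below computes specific attenuation and total path attenuation for an illustrative link. The coefficient values, rain rate, and distance-reduction factor here are placeholders chosen for the example, not the figures or official ITU-R coefficients from the paper.

```python
# Illustrative rain-attenuation estimate following the ITU-R P.838 power law
# gamma_R = k * R^alpha  [dB/km]; k and alpha below are placeholder values,
# not the official ITU-R coefficients for any specific band or polarization.

def specific_attenuation(rain_rate_mm_h, k, alpha):
    """Specific attenuation in dB/km for a given rain rate (mm/h)."""
    return k * rain_rate_mm_h ** alpha

def path_attenuation(gamma, path_km, reduction_factor=1.0):
    """Total attenuation (dB) over an effective path length."""
    return gamma * path_km * reduction_factor

if __name__ == "__main__":
    R001 = 42.0               # rain rate exceeded 0.01% of the time (mm/h), assumed
    k, alpha = 0.124, 1.061   # placeholder power-law coefficients
    gamma = specific_attenuation(R001, k, alpha)
    print(f"Specific attenuation: {gamma:.2f} dB/km")
    print(f"Attenuation over 5 km: {path_attenuation(gamma, 5.0, 0.9):.2f} dB")
```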

Paper 2: Development of Wearable Heart Sound Collection Device

Abstract: In recent years, the mortality rate of cardiovascular diseases, and its increase among younger people, has attracted growing attention. At the same time, there is an increasing demand for devices that can monitor the physiological parameters of the heart. In this research, a wearable device was designed and developed for heart sound collection. Microphones wrapped in urethane-resin holders were fixed directly on a vest for heart sound collection. The device received many positive reviews in terms of comfort. The cumulative contribution rate of the two common factors obtained through factor analysis (a material factor and a clothing-design factor) was 75.371%; these were the main factors affecting the experience of using the device. Finally, the heart sounds of 11 healthy young people were collected and fed into the trained convolutional neural network for detection, and an accuracy of 71.3% was obtained. Therefore, it can be concluded that the device improves the user experience and performs well for heart sound collection and detection.

Author 1: Ximing HUAI
Author 2: Shota Notsu
Author 3: Dongeun Choi
Author 4: Panote Siriaraya
Author 5: Noriaki Kuwahara

Keywords: Cardiovascular diseases; wearable devices; heart sound collection; convolutional neural network

PDF

Paper 3: Matters of Neural Network Repository Designing for Analyzing and Predicting of Spatial Processes

Abstract: The article addresses the scientific problem of accumulating and systematizing machine learning models and algorithms by developing a repository of deep neural network models for analyzing and predicting spatial processes, with the aim of supporting managerial decision-making in ensuring conditions for the sustainable development of regions. The issues of architecture development and software implementation of a repository of deep neural network models for spatial data analysis are considered, based on a new ontological model that makes it possible to systematize models in terms of their application to design problems. The ontological model of the repository is decomposed into the domains of deep machine learning models, the problems being solved, and the data. Special attention is paid to storing data in the repository and to the development of a subsystem for visualizing neural networks using a graph model. The authors show that, to organize a repository of deep neural network models, it is advisable to use a scientifically grounded set of database management systems integrated into multi-model storage, combining relational and NoSQL stores.

Author 1: Stanislav A. Yamashkin
Author 2: Anatoliy A. Yamashkin
Author 3: Ekaterina O. Yamashkina
Author 4: Anastasiya A. Kamaeva

Keywords: Repository; deep learning; artificial neural network; spatial data; visual programming

PDF

Paper 4: Bluetooth-based WKNNPF and WKNNEKF Indoor Positioning Algorithm

Abstract: An Indoor Positioning System (IPS) generally operates as a network of devices that wirelessly locates objects or people inside a building. An IPS may rely on nearby anchors or be entirely local to a smartphone. With the rapid growth and sharply increasing demand for IPS worldwide, many researchers are trying to invent new algorithms for IPS development. This paper proposes a Bluetooth-based indoor positioning algorithm. RF-fingerprinting systems based on characteristics such as RSSI and WLAN RSSI are normally formed in two phases, an offline phase and an online phase; the fingerprinting system handles both offline and online data to estimate the user's location. The proposed design combines Weighted K-Nearest Neighbors (WKNN) with filtering algorithms based on the Kalman filter. To mitigate common IPS problems and obtain better accuracy, two algorithms are proposed, Weighted K-Nearest Neighbors Particle Filter (WKNNPF) and Weighted K-Nearest Neighbors Extended Kalman Filter (WKNNEKF), and compared against KNN and WKNN. The comparison shows that WKNNPF and WKNNEKF outperform KNN and WKNN: the probability of positioning within 3 m is about 79% for WKNN, about 89% for WKNNEKF, and about 95.1% for WKNNPF. Of the two proposed algorithms, WKNNPF achieves better accuracy than WKNNEKF (1.7-2 meters) with a 42.2 m/s response time.

Author 1: Sokliep Pheng
Author 2: Ji Li
Author 3: Luo Xiaonan
Author 4: Yanru Zhong

Keywords: Indoor Positioning System (IPS); Bluetooth low energy; WLAN; RSSI; WKNNPF; WKNNEKF; KNN; WKNN

PDF
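A minimal sketch of the Weighted K-Nearest Neighbors (WKNN) fingerprinting step that underlies Paper 4's WKNNPF/WKNNEKF algorithms. The fingerprint database and RSSI vectors below are made up for illustration, and the particle/Kalman filtering stage is omitted.

```python
import math

# Offline fingerprint database: reference point -> ((x, y), RSSI vector), assumed values.
fingerprints = {
    "RP1": ((0.0, 0.0), [-55, -70, -80]),
    "RP2": ((5.0, 0.0), [-60, -62, -78]),
    "RP3": ((0.0, 5.0), [-72, -58, -66]),
    "RP4": ((5.0, 5.0), [-75, -65, -60]),
}

def wknn_estimate(online_rssi, k=3):
    """Estimate (x, y) as the inverse-distance-weighted mean of the k closest fingerprints."""
    dists = []
    for name, (pos, vec) in fingerprints.items():
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(online_rssi, vec)))
        dists.append((d, pos))
    dists.sort(key=lambda t: t[0])
    weights = [1.0 / (d + 1e-6) for d, _ in dists[:k]]
    total = sum(weights)
    x = sum(w * p[0] for w, (_, p) in zip(weights, dists[:k])) / total
    y = sum(w * p[1] for w, (_, p) in zip(weights, dists[:k])) / total
    return x, y

print(wknn_estimate([-58, -64, -79]))   # estimated position from an online RSSI reading
```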

Paper 5: Exploring Machine Learning Techniques for Coronary Heart Disease Prediction

Abstract: Coronary Heart Disease (CHD) is one of the leading causes of death nowadays. Prediction of the disease at an early stage is crucial for many health care providers to protect their patients and save lives and costly hospitalization resources. The use of machine learning in the prediction of serious disease events using routine medical records has been successful in recent years. In this paper, a comparative analysis of different machine learning techniques that can accurately predict the occurrence of CHD events from clinical data was performed. Four machine learning classifiers, namely Logistic Regression, Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Multi-Layer Perceptron (MLP) Neural Networks, were identified and applied to a dataset of 462 medical instances with 9 features plus the class feature from the South African Heart Disease data retrieved from the KEEL repository. The dataset consists of 302 records of healthy patients and 160 records of patients who suffer from CHD. In order to handle the imbalanced classification problem, the K-means algorithm along with the Synthetic Minority Oversampling Technique (SMOTE) was used in this study. The empirical results of applying the four machine learning classifiers on the oversampled dataset have been very promising. The results reported using different evaluation metrics showed that SVM achieved the highest overall prediction performance.

Author 1: Hisham Khdair
Author 2: Naga M Dasari

Keywords: Coronary heart disease; machine learning; prediction; classification

PDF
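The sketch below illustrates the kind of oversample-then-classify pipeline Paper 5 describes, using SMOTE to balance the minority CHD class before training an SVM. It runs on synthetic data with the scikit-learn and imbalanced-learn APIs rather than the authors' exact setup, and the K-means step they combine with SMOTE is omitted.

```python
# Illustrative oversample-then-classify pipeline (synthetic data, not the
# South African Heart Disease dataset used in the paper).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=462, n_features=9, weights=[0.65, 0.35],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Balance the training set only, so the test set keeps its natural distribution.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = SVC(kernel="rbf").fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te)))
```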

Paper 6: A Survey of Specification-based Intrusion Detection Techniques for Cyber-Physical Systems

Abstract: Cyber-physical systems (CPS) integrate computation and communication capabilities to monitor and control physical systems. Even though this integration improves the performance of the overall system and facilitates the application of CPS in several domains, it also introduces security challenges. Over the years, intrusion detection systems (IDS) have been deployed as one of the security controls for addressing these security challenges. Traditionally, there are three main approaches to IDS, namely: anomaly detection, misuse detection and specification-based detection. However, due to the unique attributes of CPS, the traditional IDS need to be modified or completely replaced before it can be deployed for CPS. In this paper, we present a survey of specification-based intrusion detection techniques for CPS. We classify the existing specification-based intrusion detection techniques in the literature according to the following attributes: specification source, specification extraction, specification modelling, detection mechanism, detector placement and validation strategy. We also discuss the details of each attribute and describe our observations, concerns and future research directions. We argue that reducing the efforts and time needed to extract the system specification of specification-based intrusion detection techniques for CPS and verifying the correctness of the extracted system specification are open issues that must be addressed in the future.

Author 1: Livinus Obiora Nweke

Keywords: Cyber-physical systems; intrusion detection systems; specification-based intrusion detection mechanism; security

PDF

Paper 7: Exploring Factors Associated with Subjective Health of Older-Old using ReLU Deep Neural Network and Machine Learning

Abstract: Resolving the health issues of the elderly has emerged as an important task in today's society. This study developed models that could predict the subjective health of the older-old based on gradient boosting machine (GBM), naive Bayes, classification and regression trees (CART), deep neural network, and random forest by using health survey data of the elderly, and compared the prediction performance (i.e., accuracy, sensitivity, specificity) of the models. This study analyzed 851 older-old people (≥75 years old) who resided in the community. The accuracy, sensitivity, and specificity of the developed models were compared to evaluate their prediction performance, and 5-fold cross-validation was conducted to validate the developed models. The results of this study showed that the deep neural network, with an accuracy of 0.75, a sensitivity of 0.73, and a specificity of 0.81, was the model with the best prediction performance. The normalized importance of variables derived from the deep neural network analysis showed that depression, subjective stress recognition, the number of accompanying chronic diseases, subjective oral conditions, and the number of days walking more than 30 minutes were major predictors of the subjective health of the older-old. Further studies are needed to identify factors associated with the subjective health of the older-old while considering age-period-cohort effects.

Author 1: Haewon Byeon

Keywords: Gradient boosting machine; classification and regression trees; Naive Bayes model; deep learning; subjective health

PDF

Paper 8: Propose Vulnerability Metrics to Measure Network Secure using Attack Graph

Abstract: With the increase in the use of computer networking, the security risk has also increased. To protect the network from attacks, attack graphs have been used to analyze the vulnerabilities of the network. However, properly securing networks requires quantifying the level of security offered by these actions, as you cannot enhance what you cannot measure. Security metrics provide a qualitative and quantitative representation of a system's or network's security level. However, using existing security metrics can lead to misleading results. This work proposes three metrics: the Number of Vulnerabilities (NV), the Mean Vulnerabilities on Path (MVoP), and the Weakest Path (WP). The experiments in this work used two networks to test the metrics. The results show the effect of these metrics on finding the weaknesses of the network that an attacker may use.

Author 1: Zaid. J. Al-Araji
Author 2: Sharifah Sakinah Syed Ahmad
Author 3: Raihana Syahirah Abdullah

Keywords: Attack graph; security metrics; attack path; path analysis; attack graph uses

PDF
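A toy illustration of how metrics like those proposed in Paper 8 (Number of Vulnerabilities, Mean Vulnerabilities on Path, Weakest Path) might be computed over an attack graph. The graph, the exploitability scores, and the concrete metric definitions here are assumptions made for the example, not the paper's formal definitions.

```python
import networkx as nx

# Toy attack graph: nodes are exploitable vulnerabilities, edges are attack steps.
G = nx.DiGraph()
G.add_edges_from([("entry", "v1"), ("v1", "v2"), ("v1", "v3"),
                  ("v2", "target"), ("v3", "target")])
# Hypothetical exploitability score per vulnerability (higher = easier to exploit).
score = {"v1": 0.7, "v2": 0.4, "v3": 0.9}

paths = list(nx.all_simple_paths(G, "entry", "target"))

nv = len(score)                                                            # Number of Vulnerabilities
mvop = sum(len([n for n in p if n in score]) for p in paths) / len(paths)  # mean vulnerabilities per path

def path_strength(p):
    """'Weakest path' taken here as the path whose product of scores is largest."""
    prod = 1.0
    for n in p:
        prod *= score.get(n, 1.0)
    return prod

weakest = max(paths, key=path_strength)

print("NV =", nv)
print("MVoP =", mvop)
print("Weakest path:", " -> ".join(weakest))
```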

Paper 9: Creativity Training Model for Game Design

Abstract: The popularity of digital games is increasing, with a global market value of RM197.6 billion. However, locally produced games still have little impact. One reason is that there is no emphasis on the game design process in game development education programs. Locally designed games have problems in terms of creativity, and there is still no specific method for training creative thinking. This study aims to identify and validate the creativity components of game design and to develop a Creativity Training Model for Game Design (LK2RBPD Model) verified through a Game Design Document Tool (GDD Tool) prototype. This research has four main phases: requirements planning, design, development, and implementation and testing. In the requirements analysis phase, the components of the LK2RBPD Model were identified. The LK2RBPD Model contains elements from industry practices of game designing, creative and innovative thinking skills, creativity dimensions, Sternberg Creativity, and Cultural Activity theories. The GDD Tool prototype implementing the model was developed and tested. The LK2RBPD Model was evaluated using a questionnaire survey, SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis, and verification of ideas in the GDD Tool prototype. Evaluation using a five-point Likert scale shows that the GDD Tool prototype is effective in implementing the 19 components. Expert verification of the game design ideas and creativity building yielded a Cohen's kappa of 0.94, indicating excellent agreement. The results show that the LK2RBPD Model can be effectively used to train creativity in game design. This research's contributions are the LK2RBPD Model, a creative game design ideation process guideline, and the GDD Tool prototype design.

Author 1: Raudyah Md Tap
Author 2: Nor Azan Mat Zin
Author 3: Hafiz Mohd Sarim
Author 4: Norizan Mat Diah

Keywords: Creativity training; game design; creative ideas; creative thinking

PDF

Paper 10: Spiritual User Experience (iSUX) for Older Adult Users using Mobile Application

Abstract: The increasing number of aging populations worldwide, set against vast developments in mobile technology, raises questions about how older adults adapt to and apply mobile technology in their daily lives. This research focused on the spiritual user experience of older adult users because older adults are claimed to become more spiritually inclined as they age. Despite high-profile calls for research in the area of spirituality, research pertaining to spirituality in HCI is still in its infancy. Recent literature shows that most studies focus on design for spiritual user experience and on the evaluation of spiritual applications for adult users, but work on the fundamentals of spirituality and its elements from the view of user experience is limited. Therefore, this study employs a qualitative method within an interpretive paradigm to propose a model of Spiritual User Experience from the perspective of Islamic older adult users. The Geneva Emotional Musical Scale (GEMS) was adopted as a theoretical lens in order to gain deeper insights into the spirituality elements. A single case study was conducted with a total of 11 participants to investigate the spiritual user experience elements among older adults. Qualitative data were collected through triangulation of a 3E diary, interviews and observations. All data were transcribed verbatim and analyzed using thematic analysis. Six themes emerged from the analysis: effectiveness, efficiency, learnability, satisfaction, sublimity and vitality. These themes are further categorized into 10 attributes: effectiveness (accessibility features), efficiency (simplicity and portability), learnability, satisfaction (attractiveness and reliability), sublimity (transcendence and peacefulness) and vitality (energy and joyful activation). These are embedded into a model known as Spiritual User Experience (iSUX), which was evaluated by Islamic religious experts, a user experience expert and older adults' representatives. This model could be a reference for developers building spiritual apps and could help researchers in the HCI domain. In conclusion, the Spiritual User Experience (iSUX) model is hoped to increase the understanding of spirituality within the user experience domain.

Author 1: Nahdatul Akma Ahmad
Author 2: Zirawani Baharum
Author 3: Azaliza Zainal
Author 4: Fariza Hanis Abdul Razak
Author 5: Wan Adilah Wan Adnan

Keywords: Techno-spiritual; user experience; human computer interaction; Geneva emotional musical scale; 3e diary; older people

PDF

Paper 11: Reversible Data Hiding using Block-wise Histogram Shifting and Run-length Encoding

Abstract: Histogram shifting-based Reversible Data Hiding (RDH) is a well-explored information security domain for secure message transmission. In this paper, we propose a novel RDH scheme that considers the block-wise histograms of the image. Most of the existing histogram shifting techniques require additional overhead information to recover the overflow and/or the underflow pixels. In the new scheme, the meta-data required for a block is embedded within the same block in such a way that the receiver can perform image recovery and data extraction. As per the proposed data hiding process, not all blocks need to be used for data hiding, so marker information is used to distinguish the blocks that carry data from those that do not. Since the marker information needs to be embedded within the image, it is compressed using run-length encoding, and the run-length encoded sequence is represented by an Elias gamma encoding procedure. The compression of the marker information ensures a better Embedding Rate (ER) for the proposed scheme. The proposed RDH scheme is also useful for secure message transmission in scenarios where restoration of the cover image is required. The experimental analysis is conducted on the USC-SIPI image dataset maintained by the University of Southern California, and the results show that the proposed scheme performs better than existing schemes.

Author 1: Kandala Sree Rama Murthy
Author 2: V. M. Manikandan

Keywords: Histogram shifting; run-length encoding; secure message transmission; overflow; Elias gamma

PDF
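Paper 11 compresses its block marker information with run-length encoding and represents the run lengths with Elias gamma codes. The sketch below shows those two standard encodings on a toy marker bit-string; it is not the authors' embedding code.

```python
def run_lengths(bits):
    """Run-length encode a binary marker string, e.g. '0001100' -> [('0',3),('1',2),('0',2)]."""
    runs, current, count = [], bits[0], 1
    for b in bits[1:]:
        if b == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = b, 1
    runs.append((current, count))
    return runs

def elias_gamma(n):
    """Elias gamma code: (len(binary(n)) - 1) zeros followed by the binary form of n."""
    assert n >= 1
    binary = bin(n)[2:]
    return "0" * (len(binary) - 1) + binary

marker = "0001111001100000"          # hypothetical per-block marker bits
runs = run_lengths(marker)
encoded = "".join(elias_gamma(length) for _, length in runs)
print(runs)
print(encoded)
```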

Paper 12: A Bird’s Eye View of Natural Language Processing and Requirements Engineering

Abstract: Natural Language Processing (NLP) has demonstrated effectiveness in many application domains. NLP can assist software engineering by automating various activities. This paper examines the interaction between software requirements engineering (RE) and NLP. We reviewed the current literature to evaluate how NLP supports RE and to examine research developments. This literature review indicates that NLP is being employed in all the phases of the RE domain. This paper focuses on the phases of elicitation and the analysis of requirements. RE communication issues are primarily associated with the elicitation and analysis phases of the requirements. These issues include ambiguity, inconsistency, and incompleteness. Many of these problems stem from a lack of participation by the stakeholders in both phases. Thus, we address the application of NLP during the process of requirements elicitation and analysis. We discuss the limitations of NLP in these two phases. Potential future directions for the domain are examined. This paper asserts that human involvement with knowledge about the domain and the specific project is still needed in the RE process despite progress in the development of NLP systems.

Author 1: Assad Alzayed
Author 2: Ahmed Al-Hunaiyyan

Keywords: Automated text understanding; natural language processing; requirements engineering; requirements elicitation

PDF

Paper 13: An Evaluation of Automatic Text Summarization of News Articles: The Case of Three Online Arabic Text Summary Generators

Abstract: Digital news platforms and online newspapers have multiplied at an unprecedented speed, making it difficult for users to read and follow all news articles on important, relevant topics. Numerous automatic text summarization systems have thus been developed to address the increasing needs of users around the world for summaries that reduce reading and processing time. Various automatic summarization systems have been developed and/or adapted in Arabic. The evaluation of automatic summarization performance is as important as the summarization process itself. Despite the importance of assessing summarization systems to identify potential limitations and improve their performance, very little has been done in this respect on systems in Arabic. Therefore, this study evaluated three text summarizers AlSummarizer, LAKHASLY, and RESOOMER using a corpus built of 40 news articles. Only articles written in Modern Standard Arabic (MSA) were selected as this is the formal and working language of Arab newspapers and news networks. Three expert examiners generated manual summaries and examined the linguistic consistency and relevance of the automatic summaries to the original news articles by comparing the automatic summaries to the manual (human) summaries. The scores for the three automatic summarizers were very similar and indicated that their performance was not satisfactory. In particular, the automatic summaries had serious problems with sentence relevance, which has negative implications for the reliability of such systems. The poor performance of Arabic summarizers can mainly be attributed to the unique morphological and syntactic characteristics of Arabic, which differ in many ways from English and other Western languages (the original language/s of automatic summarizers), and are critical in building sentence relevance and coherence in Arabic. Thus, summarization systems should be trained to identify discourse markers within the texts and use these in the generation of automatic summaries. This will have a positive impact on the quality and reliability of text summarization systems. Arabic summarization systems need to incorporate semantic approaches to improve performance and construct more coherent and meaningful summaries. This study was limited to news articles in MSA. However, the findings of the study and their implications can be extended to other genres, including academic articles.

Author 1: Fahad M. Alliheibi
Author 2: Abdulfattah Omar
Author 3: Nasser Al-Horais

Keywords: AlSummarizer; Arabic; automatic summarization; discourse markers; extraction; LAKHASLY; news articles; RESOOMER; sentence relevance

PDF

Paper 14: Online Parameter Estimation of DC-DC Converter through OPC Communication Channel

Abstract: System identification is a very powerful tool for determining a system model and its parameters from sets of observable input and output data. Once the system parameters are obtained, the system's dynamic behavior, including all its characteristics (time constant, overshoot, settling time, etc.), can be assessed and evaluated. Despite the difficulty and the communication channel lag, online parameter estimation outperforms offline system identification due to the ability to remotely monitor and control the system as well as to improve the system's controller, making it more accurate and reliable. With the rapid development of technology, the importance of combining wireless networks with closed-loop automatic control systems has emerged. This connection facilitates communication between the different units in the control loop for remote control of the output. However, some errors affect such systems, resulting from the communication channel, the A/D and D/A conversion processes, the identification process, or the presence of additive white Gaussian noise. In this paper, these errors were investigated using a real system, and then a suitable controller was tuned and optimized in order to reduce and eliminate the various errors. The results show excellent dynamic behavior of the system during the transmitting and receiving process.

Author 1: Mohammad A Obeidat
Author 2: Malek Al Anani
Author 3: Ayman M Mansour

Keywords: Online parameters estimation; Open Platform Communication; OPC; communication channel; ARMAX model (autoregressive-moving average with exogenous terms); DC-DC Converter; chopper circuit

PDF

Paper 15: Image Contrast Optimization using Local Color Correction and Fuzzy Intensification

Abstract: Global image enhancement techniques are used to enhance contrast in images, but they tend to under-enhance or over-enhance differently illuminated regions of the image. Local color correction methods work on local pixel regions to optimize color contrast enhancement, but they have been found to lag when covering overexposed pixel regions compared to underexposed ones, causing local artifacts. In this work, we overcome the shortcomings of both local and global color correction. The method uses local color correction in the Hue Saturation Luminance (HSL) domain, and fuzzy intensification operators are used to control the color fidelity of the locally color-corrected images. It is thus able to resolve the problem of overexposed and underexposed regions and provide optimized contrast enhancement in color images. Several experiments have been performed to analyze the performance and feasibility of the proposed method compared to existing techniques. Performance parameters such as Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), Structural Similarity Index Measurement (SSIM) and Naturalness Image Quality Evaluator (NIQE) are evaluated, and a comparison with some existing contrast enhancement techniques for color images is performed. The obtained results have good contrast and confirm the better performance of the proposed method, supported by quantitative measures of the perceptual appearance of the processed images and low computational time.

Author 1: Avadhesh Kumar Dixit
Author 2: Rakesh Kumar Yadav
Author 3: Ramapati Mishra

Keywords: Contrast enhancement; local color correction; fuzzy operators; optimization

PDF
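The fuzzy intensification (INT) operator mentioned in Paper 15 is commonly defined piecewise on normalized membership values; the sketch below applies that textbook form to a normalized luminance channel. The exact operator and parameters used by the authors may differ.

```python
def intensify(mu):
    """Classic fuzzy INT operator on a membership value in [0, 1]."""
    return 2 * mu ** 2 if mu <= 0.5 else 1 - 2 * (1 - mu) ** 2

# Apply to a normalized luminance channel (values in [0, 1]), here a made-up row.
luminance_row = [0.10, 0.35, 0.50, 0.65, 0.90]
enhanced = [round(intensify(v), 3) for v in luminance_row]
print(enhanced)   # dark values get darker, bright values get brighter -> more contrast
```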

Paper 16: Exploring Factors Associated with the Social Discrimination Experience of Children from Multicultural Families in South Korea by using Stacking with Non-linear Algorithm

Abstract: The number of children from multicultural families is increasing rapidly along with the quickly growing number of multicultural families. However, there are not enough surveys and basic research for understanding the characteristics of multicultural children and issues such as social discrimination. This study identified the machine learning model with the best performance for predicting the social discrimination experience of children from multicultural families by comparing the prediction performance (accuracy) of individual prediction models and stacking ensemble models. This study analyzed 19,431 adolescents (between 19 and 24 years old: 9,835 males and 9,596 females) among the children of marriage immigrants. Random forest (RF), rotation forest, artificial neural network (ANN), and support vector machine (SVM) were used as base models, and a logistic regression algorithm was applied as the meta model. Each machine learning model was built through 5-fold cross-validation. Root-mean-square error (RMSE), index of agreement (IA), and variance of errors (Ev) were used to evaluate the prediction performance of the developed models. The results of this study indicated that the rotation forest-logistic regression model had the best prediction performance. Future studies need to explore stacking ensemble models with the best performance by combining a base model and a meta model using various machine learning algorithms such as clustering and boosting.

Author 1: Haewon Byeon

Keywords: Stacking ensemble; meta model; root-mean-square-error; index of agreement; rotation forest

PDF
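A compact sketch of a stacking ensemble in the spirit of Paper 16, with tree, neural-network and SVM base learners and a logistic-regression meta-learner. It runs on synthetic data, and since rotation forest is not available in scikit-learn, a random forest stands in for it here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("ann", MLPClassifier(max_iter=500, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(),   # meta model
    cv=5,                                   # base predictions built with 5-fold CV
)
print(cross_val_score(stack, X, y, cv=5).mean())
```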

Paper 17: Predicting DOS-DDOS Attacks: Review and Evaluation Study of Feature Selection Methods based on Wrapper Process

Abstract: Nowadays, cybersecurity attacks are becoming increasingly sophisticated and present a growing threat to individuals and to the private and public sectors, especially the Denial of Service (DOS) attack and its variant, the Distributed Denial of Service (DDOS) attack. Dealing with these dangerous threats using traditional mitigation solutions suffers from several limitations and performance issues. To overcome these limitations, Machine Learning (ML) has become one of the key techniques to enrich, complement and enhance traditional security practice. In this context, we focus on one of the key processes that improve and optimize Machine Learning DOS-DDOS prediction models: the DOS-DDOS feature selection process, particularly the wrapper process. By studying different DOS-DDOS datasets, algorithms and results of several research projects, we have reviewed and evaluated the impact of the wrapper strategies used, the number of DOS-DDOS features, and many commonly used metrics for evaluating DOS-DDOS prediction models based on the optimized DOS-DDOS features. In this paper, we present three important dashboards that are essential to understand the performance of three wrapper strategies commonly used in DOS-DDOS ML systems: heuristic search algorithms, meta-heuristic search and random search methods. Based on this review and evaluation study, we observe that certain wrapper strategies, algorithms, and DOS-DDOS features with a relevant impact can be selected to improve existing DOS-DDOS ML solutions.

Author 1: Kawtar BOUZOUBAA
Author 2: Youssef TAHER
Author 3: Benayad NSIRI

Keywords: DOS-DDOS attacks; feature selection; wrapper process; machine learning

PDF
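To make the wrapper process reviewed in Paper 17 concrete, here is a tiny greedy forward-selection wrapper around a classifier, one of the heuristic search strategies the paper surveys. The data, estimator, and scoring are synthetic placeholders rather than any of the reviewed DOS-DDOS datasets.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=15, n_informative=5, random_state=0)

def greedy_forward_selection(X, y, estimator, max_features=5):
    """Wrapper search: add, one at a time, the feature that most improves CV accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = 0.0
    while remaining and len(selected) < max_features:
        scores = [(cross_val_score(estimator, X[:, selected + [f]], y, cv=3).mean(), f)
                  for f in remaining]
        score, feat = max(scores)
        if score <= best_score:          # stop when no candidate improves the score
            break
        best_score = score
        selected.append(feat)
        remaining.remove(feat)
    return selected, best_score

print(greedy_forward_selection(X, y, LogisticRegression(max_iter=1000)))
```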

Paper 18: Intelligent Data Aggregation Framework for Resource Constrained Remote Internet of Things Applications

Abstract: Internet of Things (IoT) is a technology that can connect everything to the Internet. IoT can be used in a wide range of applications, including remote applications such as underwater networks. Remote applications involve the deployment of several low-power, low-cost interconnected sensor nodes in a specific region. With a massive number of devices connected to the IoT and the considerable amount of data associated with them, there remain concerns about data management. Also, the amount of data generated in an extensive IoT-based remote sensing network is usually too large for the servers to process, and much of the generated data is redundant. Hence there is a need for a framework that addresses both aggregation of data and security-related issues at the various aggregation points. In this paper, we propose an intelligent data aggregation mechanism for IoT-based remote sensing networks. This method avoids redundant data transmission by adapting spatial aggregation techniques. The proposed method was tested through simulations, and the results prove the efficiency of the proposed work.

Author 1: Abhijith H V
Author 2: H S Ramesh Babu

Keywords: Wireless sensor networks; Internet of Things; intelligent boundary determination; sensor nodes; data aggregation

PDF

Paper 19: Investigative Study of the Effect of Various Activation Functions with Stacked Autoencoder for Dimension Reduction of NIDS using SVM

Abstract: Deep learning is one of the most remarkable artificial intelligence trends. It lies behind numerous recent achievements in various domains, such as speech processing and computer vision, to mention a few. Likewise, these achievements have sparked great interest in utilizing deep learning for dimension reduction. It is known that deep learning algorithms built on neural networks contain a number of hidden layers, activation functions and an optimizer, which makes the computation of deep neural networks challenging and, sometimes, complex. The reason for this complexity is that obtaining an outstanding and consistent result from such a deep architecture requires identifying the number of hidden layers and a suitable activation function for dimension reduction. To investigate these issues, linear and non-linear activation functions are chosen for dimension reduction using a Stacked Autoencoder (SAE) applied to Network Intrusion Detection Systems (NIDS). To conduct the experiments for this study, various activation functions, namely linear, Leaky ReLU, ELU, Tanh, sigmoid and softplus, have been identified for the hidden and output layers. The Adam optimizer and the Mean Square Error loss function are adopted for optimizing the learning process. The SVM-RBF classifier is applied to assess the classification accuracies of these activation functions using the CICIDS2017 dataset, because it contains contemporary attacks on cloud environments. The performance metrics accuracy, precision, recall and F-measure are evaluated, and classification time is also considered as an important metric. Finally, it is concluded that ELU performs with low computational overhead and a negligible difference in accuracy, namely 97.33%, when compared to other activation functions.

Author 1: Nirmalajyothi Narisetty
Author 2: Gangadhara Rao Kancherla
Author 3: Basaveswararao Bobba
Author 4: K.Swathi

Keywords: Auto-encoder; cloud computing; dimension reduction; intrusion detection system; machine leaning

PDF
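A minimal Keras sketch of the kind of stacked autoencoder Paper 19 investigates, with ELU activations in the encoder, the Adam optimizer, and MSE loss. The layer sizes and the random input data are illustrative assumptions, not the CICIDS2017 configuration used by the authors.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 78               # illustrative input width, not the paper's exact value
X = np.random.rand(1000, n_features).astype("float32")   # placeholder for CICIDS2017 features

# Encoder stacks two ELU layers down to a low-dimensional code.
inputs = keras.Input(shape=(n_features,))
h = layers.Dense(32, activation="elu")(inputs)
code = layers.Dense(16, activation="elu")(h)
# Decoder mirrors the encoder back to the input dimension.
h_dec = layers.Dense(32, activation="elu")(code)
outputs = layers.Dense(n_features, activation="linear")(h_dec)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=64, verbose=0)

# The trained encoder yields reduced features that a downstream SVM-RBF could classify.
encoder = keras.Model(inputs, code)
reduced = encoder.predict(X, verbose=0)
print(reduced.shape)   # (1000, 16)
```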

Paper 20: GAAR: Gross Anatomy using Augmented Reality Mobile Application

Abstract: The Covid-19 pandemic has forced teaching and learning activities to move away from real-time, real-world educational meetings. Traditional physical, face-to-face meetings are avoided in order to reduce close physical contact among individuals. Thus, a new paradigm shift in teaching and learning needs to be strongly enforced. Teaching and learning in the medical field especially require real-world anatomy of the human or a living body. In response to providing this facility for medical teachers and learners, Gross Anatomy Augmented Reality (GAAR) is introduced. GAAR is an Android mobile Augmented Reality (AR) learning tool to assist educators and learners in internalizing 3D human anatomy with more fun and interactivity. The AR methodology is applied to convey the personal impact and feeling of operating on a close-to-'real' organ during anatomy practice. Traditional learning methods are changed with AR technology delivered through a small digital device. This application is able to show students the actual form of human gross anatomy and assist teachers and educators in explaining the science behind the human body in a more interactive and interesting way. Furthermore, this application uses 3-dimensional objects, video and interactive information so that students are interested in using it. AR for education and learning is vital in bridging the digital divide among all generations through the conversion of static pictures into real-like 3D animation. The implementation results show that, through realistic visualization, young to adult learners can grasp the real appearance of human organs, which can motivate them to take care of their bodies, leading to healthier living styles as well as easier memorization of the subject content.

Author 1: Wan Aezwani Wan Abu Bakar
Author 2: Mustafa Man
Author 3: Mohd Airil Solehan
Author 4: Ily Amalina Ahmad Sabri

Keywords: Augmented reality; gross anatomy; learning tool; android mobile application; 3D human anatomy

PDF

Paper 21: GRASP Combined with ILS for the Vehicle Routing Problem with Time Windows, Precedence, Synchronization and Lunch Break Constraints

Abstract: In this era of pandemic, especially with COVID-19, many hospitals and care structures are at full capacity in terms of bed availability. This situation makes it necessary to provide specific care to people in need, whether ill or disabled, in their own homes. Home Health Care (HHC) offers this kind of service to patients who request it. These services must be delivered at the request of the patient, who is effectively the client, in a way that satisfies the requester of the service. Often, these demands are bound by a specific time window that the workforce (caregivers) is obliged to respect, in addition to precedence (priority) constraints. The main purpose of HHC structures is to provide a good-quality service, minimize overall costs and reduce losses. To cut the costs of these HHC structures, sensible levers must be found; since it is not permissible to touch caregivers' salaries, HHC structures must optimize by other means, such as reducing the travel cost. Note that these structures provide care in patients' homes, which means that the travel aspect is important and constitutes the core spending of the institution. Another aspect is patient satisfaction with caregivers, an essential element to optimize in order to obtain a good-quality service; to give the problem a realistic aspect, the caregivers' lunch break is introduced as a parameter. For these reasons, designing an efficient caregiver schedule requires decision tools and optimization methods. A caregiver (vehicle) is assigned to a patient (customer) to perform a number of care tasks with several options according to the customer's wishes: time window requirements are often specified by the client; precedence constraints apply when one care task has to be performed before another and may require the intervention of more than one caregiver; and each caregiver must have at least one lunch break a day, not always taken at a set time, and must remain flexible to optimize customer satisfaction. To solve this problem, called VRPTW-SPLB, a mathematical model is proposed and formulated as a Mixed Integer Linear Program (MILP), and a greedy heuristic based on a Greedy Randomized Adaptive Search Procedure (GRASP) is proposed, together with two local-search strategies, two metaheuristics, and a metaheuristic resulting from the hybridization of the two. At the end of the paper, results are reported on a benchmark taken from the literature.

Author 1: Ettazi Haitam
Author 2: Rafalia Najat
Author 3: Jaafar Abouchabaka

Keywords: Optimization; VRP; home health care; ILS; tabu search; metaheuristics

PDF
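Below is a schematic GRASP loop (greedy randomized construction followed by local search), the general template that Paper 21 combines with ILS for its VRPTW variant. The routing-specific construction, neighborhoods, time-window, precedence, and lunch-break constraints are reduced here to a toy distance-minimization problem for illustration.

```python
import random

# Toy distance matrix standing in for travel costs between care visits.
D = [[0, 4, 9, 5], [4, 0, 3, 8], [9, 3, 0, 2], [5, 8, 2, 0]]

def route_cost(route):
    return sum(D[a][b] for a, b in zip(route, route[1:]))

def greedy_randomized_construction():
    """Build a route by repeatedly picking at random among the best next visits (the RCL)."""
    route, remaining = [0], set(range(1, len(D)))
    while remaining:
        cand = sorted(remaining, key=lambda j: D[route[-1]][j])
        rcl = cand[:min(2, len(cand))]          # simplified restricted candidate list
        nxt = random.choice(rcl)
        route.append(nxt)
        remaining.remove(nxt)
    return route

def local_search(route):
    """First-improvement swap of two visits (the depot at position 0 stays fixed)."""
    best, improved = route[:], True
    while improved:
        improved = False
        for i in range(1, len(best)):
            for j in range(i + 1, len(best)):
                trial = best[:]
                trial[i], trial[j] = trial[j], trial[i]
                if route_cost(trial) < route_cost(best):
                    best, improved = trial, True
    return best

def grasp(iterations=20):
    best = None
    for _ in range(iterations):
        sol = local_search(greedy_randomized_construction())
        if best is None or route_cost(sol) < route_cost(best):
            best = sol
    return best, route_cost(best)

print(grasp())
```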

Paper 22: Development and Usability Testing of a Consultation System for Diabetic Retinopathy Screening

Abstract: This study aims to develop a novel web-based decision support system for diabetic retinopathy screening and the classification of eye fundus images for medical officers. The research delivers diabetic retinopathy information in a web-based environment according to the needs of the users. The proposed research also intends to evaluate the usability of the developed system with the target users. The complex characteristics of diabetic retinopathy signs contribute to the difficulty of detecting diabetic retinopathy; therefore, professional and skilled retinal screeners are required to produce accurate diabetic retinopathy detection and diagnosis. The proposed system assists communication and consultation between the medical experts in the hospital and the primary health care providers located at the health clinics. The agile software development model is the methodology used for the development of this research project. The project collaborates with the Department of Ophthalmology, Hospital Melaka, Malaysia for medical content expertise and testing. Representative medical officers from Hospital Melaka and all the public health clinics in Melaka were involved in the preliminary study and system testing. This research consists of web development producing an interactive web-based application for diabetic retinopathy consultation, which comprises image processing and editing features as the core of the system. It is envisaged that this research project will contribute to the management of diabetic retinopathy screening among medical officers.

Author 1: Nurul Najihah A’bas
Author 2: Sarni Suhaila Rahim
Author 3: Mohamad Lutfi Dolhalit
Author 4: Wan Sazli Nasarudin Saifudin
Author 5: Nazreen Abdullasim
Author 6: Shahril Parumo
Author 7: Raja Norliza Raja Omar
Author 8: Siti Zakiah Md Khair
Author 9: Khavigpriyaa Kalaichelvam
Author 10: Syazwan Izzat Noor Izhar

Keywords: Consultation; diabetic retinopathy; eye screening; image editing; image processing; web development; testing

PDF

Paper 23: Secure Data Transmission Framework for Internet of Things based on Oil Spill Detection Application

Abstract: Internet of Things (IoT) is a leading technology that can interlink anything to the Internet and make everything intelligent and smart. IoT is not just a single technology; it is a combination of various technologies such as communication, data analytics, sensors and actuators, cloud computing, artificial intelligence, and machine learning. Applications of IoT are spread across various domains. IoT is most suitable for remote applications such as underwater networks. One such application is oil spill detection in the ocean. An oil spill in the ocean is a critical challenge that causes damage to the marine ecosystem. Detecting oil spills in real time helps to resolve the problem quickly and minimize the damage. IoT can be used to detect oil spills by making use of sensors deployed at various locations in the ocean. With a massive number of sensors deployed and the huge amount of data associated with them, there remain concerns about data management. Also, the amount of data generated in an IoT-based remote sensing network is usually too large for the servers to process, and much of the generated data is redundant. Hence there is a need for a framework that addresses both aggregation of data and security-related issues at the various aggregation points. In this paper, we propose a secure data transmission framework for detecting oil spills through IoT, which avoids redundant data transmission through data aggregation and ensures secure data transmission through authentication and lightweight encryption.

Author 1: Abhijith H V
Author 2: H S Rameshbabu

Keywords: Internet of things; wireless sensor networks; sensor nodes; data aggregation; authentication; light weight cryptography

PDF

Paper 24: A Contemporary Ensemble Aspect-based Opinion Mining Approach for Twitter Data

Abstract: Aspect-based opinion mining is one of the thought-provoking research fields, focusing on the extraction of salient aspects from opinionated texts and the polarity values associated with them. The principal aim is to identify user sentiments about specific features of a product or service rather than the overall polarity. This fine-grained polarity identification for the myriad aspects of an entity is highly beneficial for individuals and business organizations. Extracting these implicit or explicit aspects can be very challenging, and this paper elaborates numerous aspect extraction techniques, which are decisive for aspect-based sentiment analysis. This paper presents a novel idea of combining several approaches, namely Part of Speech tagging, dependency parsing, word embedding, and deep learning, to enrich aspect-based sentiment analysis specially designed for Twitter data. The results show that combining deep learning with traditional techniques can produce better results than lexicon-based methods.

Author 1: Satvika
Author 2: Vikas Thada
Author 3: Jaswinder Singh

Keywords: Aspect-based sentiment analysis; dependency parsing; long short-term memory (LSTM); part of speech (POS) tagging; term frequency-inverse document frequency (TF-IDF)

PDF
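As a small illustration of the POS-tagging stage of aspect extraction discussed in Paper 24, the sketch below pulls noun candidates out of a sample tweet with NLTK. The authors' full pipeline additionally uses dependency parsing, word embeddings, and an LSTM, which are not shown here, and the example tweet is invented.

```python
# Candidate aspect terms as nouns/noun phrases via POS tagging (NLTK).
# Requires: pip install nltk; nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
import nltk

tweet = "The battery life of this phone is great but the camera quality is disappointing"
tokens = nltk.word_tokenize(tweet)
tagged = nltk.pos_tag(tokens)

# Keep consecutive nouns as a single candidate aspect (e.g. "battery life").
aspects, current = [], []
for word, tag in tagged:
    if tag.startswith("NN"):
        current.append(word)
    elif current:
        aspects.append(" ".join(current))
        current = []
if current:
    aspects.append(" ".join(current))

print(aspects)   # e.g. ['battery life', 'phone', 'camera quality']
```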

Paper 25: Ultra-key Space Domain for Image Encryption using Chaos-based Approach with DNA Sequence

Abstract: Recently, image encryption has gained importance, especially after the dramatic evolution of the Internet and network communication. The importance of securing image contents stems from the ease of capturing and transferring digital images over various communication media. Although there are many approaches to image encryption, the chaos-based approach is considered one of the most appropriate because of its simplicity, security, and sensitivity to the input parameters. This paper presents a new technique for encrypting the RGB components of an image using a nonlinear chaotic function and a DNA sequence. A new image with the same dimensions as the plain-image is used as a key for the confusion and diffusion processes applied to each RGB component of the plain-image. Experimental results show the efficiency of the proposed technique, its simplicity, and its high level of resistance against several cryptanalysis attacks.

Author 1: Ibrahim AlBidewi
Author 2: Nashwan Alromema

Keywords: Chaos-based; image encryption; confusion; diffusion; color image; RGB components; DNA sequence

PDF
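A toy demonstration of the chaos-based idea behind Paper 25: a logistic-map keystream XORed with pixel bytes, where the map's initial value and control parameter act as the secret key. The paper's actual scheme additionally uses a key image and DNA-sequence confusion/diffusion, which this sketch does not reproduce.

```python
def logistic_keystream(x0, r, n):
    """Generate n pseudo-random bytes from the logistic map x -> r*x*(1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

pixels = bytes([12, 200, 57, 143, 99, 250])                   # a few sample channel values
key = logistic_keystream(x0=0.3561, r=3.99, n=len(pixels))    # (x0, r) are the secret parameters

cipher = xor_bytes(pixels, key)
recovered = xor_bytes(cipher, key)    # XOR again with the same keystream decrypts
print(list(cipher), list(recovered))
```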

Paper 26: Integrated Model to Develop Grammar Checker for Afaan Oromo using Morphological Analysis: A Rule-based Approach

Abstract: This study implemented a rule-based approach to grammar checking by integrating a spell-checker with a morphological analyzer to improve the Afaan Oromo grammar checker. A corpus containing about 300,000 words was prepared for the spell-checker. About 300 grammar rules were constructed to detect grammar errors within Afaan Oromo text and to suggest possible corrections. The developed framework was evaluated on a document containing 100 pairs of correct and incorrect sentences. The experimental results for checking spelling errors scored 73% recall, 76% precision, and 75% F-measure. The scores for suggesting the correct spelling were 78% recall, 62% precision, and 70% F-measure, while the evaluation results for detecting grammar errors were 47% recall, 90% precision and 68% F-measure. For suggesting possible corrections to the detected grammar errors, the system scored 61% recall, 71% precision and 66% F-measure. Overall, the developed system performs well; however, further research is still needed to improve the Afaan Oromo grammar checker.

Author 1: Jemal Abate
Author 2: Vijayshri Khedkar
Author 3: Sonali Kothari Tidke

Keywords: Grammar checker; spell checker; part-of-speech tag; error detection; syntactic analysis; semantic analysis; morphological analyzer; NLP

PDF
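For reference, the F-measure figures reported for evaluations like the one in Paper 26 are conventionally computed as the harmonic mean of precision and recall (F1):

```latex
F_1 \;=\; 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}
```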

Paper 27: IoT Soil Monitoring based on LoRa Module for Oil Palm Plantation

Abstract: Internet of Things (IoT) Soil Monitoring based on a Long Range (LoRa) module for oil palm plantation is a prototype that sends data from a sender to a receiver using LoRa technology. This realises the implementation of Industrial Revolution 4.0 in the agriculture sector. The prototype uses the TTGO development board for Arduino with built-in ESP32 and LoRa, a pH sensor and a moisture level sensor as its main components, and utilises LoRa communication between the sender and the receiver. The sensors detect soil pH along with the moisture level. The data are then sent to the receiver, where they are displayed on an Organic Light-Emitting Diode (OLED) display. At the same time, the data are uploaded to the ThingSpeak database using wireless communication. Users can monitor the collected data by accessing ThingSpeak's website from smartphones or laptops. The prototype is easy to set up and use, helping users monitor the pH level and moisture level percentage. As a future enhancement, the project could incorporate temperature and tilt sensors to obtain more comprehensive data about the soil's condition.

Author 1: Ahmad Alfian Ruslan
Author 2: Shafina Mohamed Salleh
Author 3: Sharifah Fatmadiana Wan Muhamad Hatta
Author 4: Aznida Abu Bakar Sajak

Keywords: Internet of Things (IoT); Long Range (LoRa); Organic Light-Emitting Diodes (OLED); ThingSpeak; Arduino

PDF
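Paper 27 uploads its sensor readings to ThingSpeak. The sketch below shows the general shape of a ThingSpeak channel update over HTTP from Python; the prototype itself runs on a TTGO ESP32, so this is only an illustration, the field mapping is an assumption, and the API key is a placeholder.

```python
import requests

THINGSPEAK_WRITE_KEY = "YOUR_WRITE_API_KEY"     # placeholder, not a real key

def upload_reading(ph_value, moisture_percent):
    """Push one soil reading to a ThingSpeak channel (field1 = pH, field2 = moisture %, assumed)."""
    resp = requests.get(
        "https://api.thingspeak.com/update",
        params={"api_key": THINGSPEAK_WRITE_KEY,
                "field1": ph_value,
                "field2": moisture_percent},
        timeout=10,
    )
    return resp.text   # ThingSpeak returns the new entry id, or "0" on failure

print(upload_reading(6.4, 57))
```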

Paper 28: Workload Partitioning of a Bio-inspired Simultaneous Localization and Mapping Algorithm on an Embedded Architecture

Abstract: Many algorithms have been developed to perform visual simultaneous localization and mapping (SLAM) for robotic applications. These algorithms use monocular or stereovision systems to handle constraints related to navigation in unknown or dynamic environments. The requirements of SLAM systems in terms of processing time and precision are factors that limit their use in many embedded applications such as UAVs or autonomous vehicles. Meanwhile, trends towards low-cost and low-power processing require massive parallelism on hardware architectures. The emergence of recent heterogeneous embedded architectures should help in designing embedded systems dedicated to visual SLAM applications. It was demonstrated in previous work that bio-inspired algorithms are competitive with classical methods based on image processing and environment perception. This paper is a study of a bio-inspired SLAM algorithm with the aim of making it suitable for implementation on a heterogeneous architecture dedicated to embedded applications. An algorithm-architecture adequation approach is used to achieve workload partitioning on a CPU-GPU architecture and hence speed up processing tasks.

Author 1: Amraoui Mounir
Author 2: Latif Rachid
Author 3: Abdelhafid El Ouardi
Author 4: Abdelouahed Tajer

Keywords: Simultaneous localization and mapping (SLAM); Bio-inspired algorithms; CPU-GPU workload partitioning; embedded systems; visual acuity (VA); hardware/software codesign

PDF

Paper 29: An Approach based on Machine Learning Algorithms for the Recommendation of Scientific Cultural Heritage Objects

Abstract: The Scientific Cultural Heritage (SCH) of the Drâa-Tafilalet region in south-eastern Morocco is a rich source of data testifying to the ingenuity of an older generation that has shaped the past of the region. These data must be preserved for future generations, particularly with new technologies and the semantic web. Recommendation systems (RS) are intended to assist prospective users by recommending the most suitable services based on their profile and expectations. Collaborative filtering (CF), content-based filtering (CB) and hybrid RSs have shown promising results in addressing the problems encountered, especially in cultural heritage. However, some limitations remain to be resolved, mostly concerning the ability of these methods to build a stable and complete framework that can provide a complete picture of the user profile and suggest the most appropriate offers. This paper presents a hybrid recommender system for SCH data, a field little explored despite its historical importance and the value it generates. The results presented in this paper are based on data collected from the region of Drâa-Tafilalet in southern Morocco.

Author 1: Fouad Nafis
Author 2: Khalid AL FARARNI
Author 3: Ali YAHYAOUY
Author 4: Badraddine AGHOUTANE

Keywords: Cultural heritage; CIDOC-CRM; ontologies; OWL; recommender system; semantic web; RDF

PDF

Paper 30: Comprehensive Survey and Research Directions on Blockchain IoT Access Control

Abstract: The Internet of Things (IoT) has been a widely used technology in many different applications over the last decade. IoT devices communicate over wireless or wired links to store, compute and track various real-time scenarios. This survey mainly discusses the core problems of Internet of Things security and access control against unauthorized users, along with the security requirements for IoT. IoT devices are heterogeneous and have low memory and limited processing power because of their small size. Nowadays, IoT systems are insecure and largely unable to protect themselves against cyber attacks, mainly due to the inadequate space in IoT gadgets, immature standards, and the lack of secure hardware and software design, development, and deployment. To meet IoT requirements, the authors discuss the limitations of traditional access control. The authors then examine the potential to extend access control by implementing the secure architecture offered by the Blockchain, and address how the Blockchain can be used to work with and resolve some of the standards relevant to IoT security issues. Finally, the survey presents open problems and challenges and shows how the Blockchain can potentially ensure reliable, scalable, and more efficient security solutions for IoT, together with directions for further research work.

Author 1: Hafiz Adnan Hussain
Author 2: Zulkefli Mansor
Author 3: Zarina Shukur

Keywords: Blockchain; Internet of Things; IoT; access control; access control management

PDF

Paper 31: Improving Performance of ABAC Security Policies Validation using a Novel Clustering Approach

Abstract: Cloud computing offers several services, such as storage, software, networking, and other computing services. Cloud storage is a boon for big data and big data owners. Although big data owners can easily avail cloud storage without spending much on infrastructure and software to manage their data, security is a big issue, and protecting the outsourced big data is challenging and ongoing research. Cloud service providers use the attribute-based access control model to detect malicious intruders and address the security requirements of today’s new computing technologies. Anomalies in security policies are removed to improve the efficiency of the access control model. This paper implements a novel clustering approach to cluster security policies. Our proposed approach uses a rule-specific cluster merging technique that compares the rule with the clusters where the probability of similarity is high. Hence this technique reduces the cost, time, and complexity of clustering. Rather than verifying all rules, detecting and removing anomalies in every cluster of rules improve the performance of the intrusion detection system. Our novel clustering approach is useful for the researchers and practitioners in the ABAC policy validation.

Author 1: K. Vijayalakshmi
Author 2: V.Jayalakshmi

Keywords: Anomalies; attribute-based access control model; big data; cloud storage; clustering; intrusion detection system; security policy

PDF

Paper 32: Multi-Robot based Control System

Abstract: One of the most important challenges in Robotic Flexible Manufacturing Systems (RFMS) is how to develop a multi-robot based control system in which the robots are able to make intelligent decisions in response to a changing environment. The problem is how to ensure flexibility with the proposed multi-robot based control system built on triggering strategies. The flexibility of the whole system is expanded by the capacity of the flexible robots to effectively carry out the tasks assigned to them. This paper presents three contributions: (i) the RFMS-based control architecture, with its main components and methods presented in detail; (ii) the planning model; and (iii) the different levels of flexibility in RFMS.

Author 1: Atef Gharbi

Keywords: Robotic Flexible Manufacturing Systems (RFMS); multi-robot based control system; RFMS control architecture; planning model; flexibility

PDF

Paper 33: ICS: Interoperable Communication System for Inter-Domain Routing in Internet-of-Things

Abstract: The Internet-of-Things consists of heterogeneous smart appliances connected by a global network with self-configuring capabilities, requiring interoperable communication schemes when performing inter-domain routing. A review of existing interoperable approaches shows that there is still large scope for improving IoT interoperability. The proposed system introduces an Interoperable Communication System (ICS) by developing a novel inter-domain routing scheme for IoT with two modes: a preemptive communication scheme targeting mainly emergency-based routing, which demands faster transmission, and a non-preemptive scheme for dedicated transmission, which demands accountability in communication. A simulation study of the proposed system shows that it offers approximately 90% reduced delay, 57% increased packet delivery ratio, and 98% faster processing time when compared with existing approaches to accomplishing interoperability in IoT.

Author 1: Bhavana A
Author 2: Nandha Kumar A N

Keywords: Internet-of-Things (IoT); interoperability; heterogeneous; gateway protocol; inter-domain routing

PDF

Paper 34: Security and Threats of RFID and WSNs: Comparative Study

Abstract: The Internet of Things (IoT) has garnered significant attention as it changes human life in many ways. IoT is a network of smart devices that use sensors to collect information and act on their environments; the information can then be shared over the Internet. IoT uses a range of technologies and finds various applications such as smart homes, environmental monitoring, and healthcare. In this paper, we conducted a comparative study to analyze the differences between two of these technologies—Wireless Sensor Networks (WSNs) and Radio Frequency Identification (RFID). It is pertinent to note that these technologies would not be effective without incorporating security aspects, due to the large number of potential threats and attacks on the network. This paper provides a comprehensive review of recent approaches to securing RFID and WSNs; the studies were carefully chosen to cover only recent techniques from 2017 to 2020. The paper also highlights common attacks on RFID and WSNs and the secure authentication mechanisms used in these technologies, and further describes different ways of detecting various attacks in RFID and WSNs.

Author 1: Ghada Hisham Alzeer
Author 2: Ghada Sultam Aljumaie
Author 3: Wajdi Alhakami

Keywords: Security; IoT; WSN; RFID

PDF

Paper 35: Proposal of a Method to Measure Test Suite Quality Attributes for White-Box Testing

Abstract: As the test suite is an important asset in software testing, measuring its quality attributes is important for describing the quality of software. This research proposes a method to measure test suite quality attributes for white-box testing. The attributes are usability, efficiency, reliability, functionality, portability, and maintainability, selected from 28 software quality attributes. Using the proposed method, the test suite quality attributes are calculated, yielding various levels of quality. The validity of the measurement results is then assessed through reliability analysis: Cohen’s kappa coefficient is used to validate the measured test suite quality attributes based on the level of agreement between the measurement results and expert assessment. The reliability analysis finds that the attributes most strongly related, based on the minimum percentage of the level-of-agreement value, are usability, reliability, and functionality. Hence, our proposed method is useful for measuring test suite quality attributes.
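
As a minimal illustration of the validation step described above (not taken from the paper), Cohen's kappa between measured quality levels and hypothetical expert ratings can be computed as follows:

from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal quality levels (low/medium/high) assigned to the same
# test suites by the proposed measurement method and by an expert assessor.
measured = ["high", "high", "medium", "low", "high", "medium"]
expert   = ["high", "medium", "medium", "low", "high", "medium"]

kappa = cohen_kappa_score(measured, expert)
print(f"Cohen's kappa (level of agreement): {kappa:.2f}")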

Author 1: Mochamad Chandra Saputra
Author 2: Tetsuro Katayama

Keywords: Test case; test suite quality attributes; white-box testing; reliability analysis; software quality

PDF

Paper 36: Intelligent Scroll Order Generator Software from View Movements in People with Disabilities

Abstract: People with motor disabilities face problems such as difficulty moving around independently, as well as difficulties in taking advantage of the technological tools developed for their rehabilitation. This research is based on computer vision and robotics and was carried out with the objective of generating displacement orders, using smoothing and binarization algorithms, to assist the movement of disabled people. The intelligent software for people with disabilities or motor deficiencies generates movement commands within an acceptable time from visual commands made by the person, through communication between the camera and the software, which constantly captures images. In the resulting pipeline, the image is cropped to the face region, the eye region is then cropped from the face region, and the pupil is located using the necessary algorithms; the complexity lies not only in locating the pupil, but also in identifying when a command is being sent and when it is not. Finally, the unit processes the movement command (left or right) to turn the LEDs on and off.

Author 1: Juan Ríos-Kavadoy
Author 2: Harold Guerrero-Bello
Author 3: Michael Cabanillas-Carbonell

Keywords: Displacement; motor disability; pupil; computer vision of images

PDF

Paper 37: Onion Crop Monitoring with Multispectral Imagery using Deep Neural Network

Abstract: The world’s growing population requires the government of Pakistan to increase the food supply for the coming years in a well-organized manner. Sustainable agriculture plays a vital role in maintaining food production and preserving the environment from unnecessary chemicals through the use of technology for good management. This research presents the design and development of a multi-spectral imaging system for precision agriculture tasks. The imaging system includes an RGB camera and a Pi NoIR camera controlled by a Raspberry Pi mounted on a drone. The images are captured by an Unmanned Aerial Vehicle (UAV) and then sent to a Java application, where they are sharpened and resized. The Normalized Difference Vegetation Index (NDVI) is calculated to determine crop health status from real-time data. A Deep Learning (DL) technique is used to recognize the onion crop growth stage from the captured dataset, and we show how to implement a progressive deep neural network model for this task. The performance accuracy of the system is 96.10% for batch size 16 and 93.80% for batch size 32.
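
For reference, NDVI is computed per pixel from the near-infrared and red bands as (NIR - Red) / (NIR + Red); a minimal sketch (not the authors' code) with hypothetical band values:

import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-9)  # small epsilon avoids division by zero

# Hypothetical 2x2 pixel bands from the NoIR and RGB cameras.
nir_band = np.array([[0.60, 0.55], [0.70, 0.65]])
red_band = np.array([[0.10, 0.20], [0.15, 0.12]])
print(ndvi(nir_band, red_band))  # values near +1 indicate healthy vegetation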

Author 1: Naseer U Din
Author 2: Bushra Naz
Author 3: Samer Zai
Author 4: Bakhtawer
Author 5: Waqar Ahmed

Keywords: UAV; deep neural networks; onion crop; NDVI; crop monitoring; VGG16

PDF

Paper 38: Combined Non-parametric and Parametric Classification Method Depending on Normality of PDF of Training Samples

Abstract: A classification method that combines nonparametric and parametric classification, depending on the normality of the Probability Density Function (PDF) of the training samples, is proposed. The proposed method is also based on spatial information for high-spatial-resolution optical satellite sensor images. In addition, a classification method that takes into account not only spectral but also spatial features of LANDSAT-4 and 5 Thematic Mapper (TM) data is proposed. Treatment of the spatial-spectral variability existing within a region is especially important for such high-spatial-resolution satellite imagery. Standard deviations computed in small cells, such as 2x2, 3x3 and 4x4 pixels, were used as measures of the spatial-spectral variability. This information can be used together with conventional spectral features in a unified way in a traditional classifier such as the pixelwise Maximum Likelihood Decision Rule (MLHDR). The classification performance is assessed for new clear cuts and alpine meadows, which are very close in spectral space and difficult to distinguish with conventional methods. Experiments show a substantial improvement in overall classification accuracy for TM forestry data: the Probability of Correct Classification (PCC) for the new clear cuts and alpine meadows classes rose by 7%, to 97% correct, and the confusion between alpine meadows and new clear cuts was reduced from 9% to 3%.
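
A minimal sketch (not the authors' code) of the spatial-variability feature described above, i.e. the per-pixel standard deviation over a small cell, which can be stacked with the spectral bands before classification:

import numpy as np
from scipy.ndimage import uniform_filter

def local_std(band: np.ndarray, size: int = 3) -> np.ndarray:
    """Per-pixel standard deviation over a size x size neighbourhood,
    used as a spatial-variability feature alongside the spectral bands."""
    band = band.astype(np.float64)
    mean = uniform_filter(band, size=size)
    mean_sq = uniform_filter(band * band, size=size)
    var = np.maximum(mean_sq - mean * mean, 0.0)  # clamp tiny negatives from rounding
    return np.sqrt(var)

# Hypothetical single TM band; local_std(band, 3) would be stacked with the
# spectral bands before feeding a maximum-likelihood classifier.
band = np.random.rand(64, 64)
spatial_feature = local_std(band, size=3)
print(spatial_feature.shape)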

Author 1: Kohei Arai

Keywords: Spectral information; spatial information; maximum likelihood decision rule; satellite image; image classification; classification performance; instantaneous field of view

PDF

Paper 39: Online Training and Serious Games in Clinical Training in Nursing and Midwife Education

Abstract: The article examines the application, methods, and trends in the online training of health care and medical professionals in Bulgaria. Attention is paid to modern methods for the effective application of online training and to the extent to which online training can replace traditional training. The article presents the results of a survey on online training conducted in April 2021 at Bulgarian universities in the Health Care professional field, specialties Nurse and Midwife. The results of the survey can serve to improve online education in Bulgaria by including the educational resources recommended by respondents. The creation of new web-based educational resources (video materials, serious games, virtual simulations, video presentations, webinars, etc.) can complement the traditional methods of training students in the Health Care professional field, specialties Nurse and Midwife, in Bulgaria.

Author 1: Galya Georgieva-Tsaneva
Author 2: Ivanichka Serbezova

Keywords: Health care; medical education; serious educational games; nurse; midwife; online training

PDF

Paper 40: An Implementation of Hybrid Enhanced Sentiment Analysis System using Spark ML Pipeline: A Big Data Analytics Framework

Abstract: Today, we live in the Big Data age. Social networks, online shopping, and mobile usage are the main sources of the huge volumes of text data generated by users. This text data can provide companies with useful insight into how customers view their brand and encourage them to shape business strategies proactively in order to maintain their trade. Hence, it is essential for enterprises to analyse the sentiment of social media big data to make predictions. Because of the variety and volume of the data, sentiment analysis on big data has become difficult; it requires open-source Big Data platforms and machine learning techniques to process large amounts of text in real time. Advances in Big Data and Deep Learning technology have helped overcome the traditional restrictions of distributed computing. The primary aim is to perform sentiment analysis on the pipelined architecture of Apache Spark ML to speed up the computations and improve machine efficiency in different environments. Therefore, a hybrid CNN-SVM model is designed and developed, in which a CNN is pipelined with an SVM for sentiment feature extraction and classification in Spark ML to improve accuracy; the result is more flexible, fast, and scalable. In addition, Naive Bayes, Support Vector Machine (SVM), Random Forest, and Logistic Regression classifiers are used to measure the efficiency of the proposed system in a multi-node environment. The experimental results demonstrate that, in terms of different evaluation metrics, the hybrid sentiment analysis model outperforms the conventional models and handles large sentiment datasets effectively, helping corporations, government, and individuals derive greater value from their data.
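
For orientation only, the sketch below shows the general mechanics of a Spark ML text-classification pipeline; it uses Logistic Regression (one of the baseline classifiers mentioned above) rather than the authors' hybrid CNN-SVM model, and the example tweets are hypothetical:

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, IDF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("sentiment-pipeline").getOrCreate()

# Hypothetical labelled tweets (1.0 = positive, 0.0 = negative).
train = spark.createDataFrame(
    [("great product, fast delivery", 1.0), ("terrible support, very slow", 0.0)],
    ["text", "label"],
)

tokenizer = Tokenizer(inputCol="text", outputCol="words")
tf = HashingTF(inputCol="words", outputCol="tf")
idf = IDF(inputCol="tf", outputCol="features")
clf = LogisticRegression(maxIter=20)

# The pipeline chains tokenization, feature weighting and the classifier.
model = Pipeline(stages=[tokenizer, tf, idf, clf]).fit(train)
model.transform(train).select("text", "prediction").show(truncate=False)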

Author 1: Raviya K
Author 2: Mary Vennila S

Keywords: Big data; sentiment analysis; machine learning; apache spark; ML pipeline

PDF

Paper 41: Traffic Engineering in Software-defined Networks using Reinforcement Learning: A Review

Abstract: With the exponential increase in connected devices and the accompanying complexity of network management, dynamic Traffic Engineering (TE) solutions for Software-Defined Networking (SDN) using Reinforcement Learning (RL) techniques have emerged in recent times. The SDN architecture empowers network operators to monitor network traffic with agility, flexibility, robustness, and centralized control. The separation of the control and forwarding planes in SDN has enabled the integration of RL agents into the networking architecture to enforce changes in traffic patterns during network congestion. This paper surveys the major RL techniques adopted for efficient TE in SDN. We review the use of RL agents in modelling TE policies for SDNs, with the agents’ actions on the environment guided by future rewards and a new state. We further look at the single-agent (SARL) and multi-agent (MARL) RL algorithms the agents deploy in forming policies for the environment. The paper finally examines agent design architectures in SDN and possible research gaps.

Author 1: Delali Kwasi Dake
Author 2: James Dzisi Gadze
Author 3: Griffith Selorm Klogo
Author 4: Henry Nunoo-Mensah

Keywords: Software defined networking; reinforcement learning; machine learning; traffic engineering

PDF

Paper 42: How Enterprise must be Prepared to be “AI First”?

Abstract: Among disruptive technologies, Artificial Intelligence (AI), Robotic Process Automation (RPA) and Machine Learning (ML) play a very important role in business transformation and continue to show great promise for creating new sources of wealth and new business models. The reality of AI in the company is not reduced to simple process optimization. In fact, AI introduces new organizational schemes, new ways of working, new optimization niches, new services, and other ways of thinking about interactions with customers, and therefore a new way of doing business. It thus reshuffles the competitive landscape and inspires innovative processes to create new business models, offering new opportunities not only for IT solution providers but also for innovators, investors, and business owners. Even though the contribution of Artificial Intelligence is no longer in doubt, many companies face difficulties in adopting this technology, mainly due to the lack of a pragmatic approach highlighting the roles and responsibilities of the various stakeholders, especially IT professionals and business owners, and the key steps to follow to make this experience a real success. This research aims to answer fundamental questions, in particular: What will the implementation of this technology bring to the company's business? How should companies prepare for this adoption? If the decision to go ahead is confirmed, what kind of adoption approach should companies follow? And finally, how can enterprises monitor this shift to the intelligent edge?

Author 1: Mustapha Lahlali
Author 2: Naoual Berbiche
Author 3: Jamila El Alami

Keywords: Artificial intelligence; machine learning; RPA; business transformation; AI adoption

PDF

Paper 43: The Effect of Augmented Reality in Improving Visual Thinking in Mathematics of 10th-Grade Students in Jordan

Abstract: Augmented reality is one of the key issues in the area of improving visual thinking in science courses such as Mathematics. Augmented reality also offers a significant and effective role in the educational process. The current study aimed to investigate the effect of augmented reality in improving visual thinking of 10th-grade students in mathematics in Jordan. To achieve the objectives of the study, the methodology used includes the application of the semi-experimental approach and augmented reality technology. The methodology used also includes preparing a test to measure visual thinking comprising (20) multiple-choice items used as a pre-and post-test, and its validity and reliability are verified. The study sample consists of (57) female students purposefully selected from the 10th-grade students at the Jerash Model Schools for the first semester of 2020/2021. The study sample is divided into two groups as follows: one is an experimental group consisting of (28) female students taught by the augmented reality technology, and the second is a control group consisting of (29) female students taught in the traditional method. The results of the study show that there are statistically significant differences at the level of (α = 0.05) in the development of visual thinking in favor of the experimental group students taught by the augmented reality technology. The study also shows that there are differences in the performance of the experimental group students in each skill of visual thinking.

Author 1: Fadi Abdul Raheem Odeh Bani Ahmad

Keywords: Augmented reality technology; visual thinking development; 10th grade; mathematics

PDF

Paper 44: Speeding up an Adaptive Filter based ECG Signal Pre-processing on Embedded Architectures

Abstract: Medical applications increasingly require complex calculations under tight processing-time constraints. These applications are therefore oriented towards the integration of high-performance embedded architectures. In this context, the detection of cardiac abnormalities remains a high priority in emergency medicine. ECG analysis is a complex task that requires significant computing time, since a large amount of information must be analyzed in parallel at high frequencies. Real-time processing is the biggest challenge for researchers when dealing with applications that have timing constraints, such as cardiac activity monitoring. This work evaluates the Adaptive Dual Threshold Filter (ADTF) algorithm dedicated to ECG signal filtering on various embedded architectures: a Raspberry Pi 3B+ and an Odroid XU4. The implementation is based on C/C++ and OpenMP to exploit the parallelism of the target architectures. The evaluation was validated using several ECG signals from the MIT-BIH Arrhythmia database with a sampling frequency of 360 Hz. Based on an algorithmic complexity study and the parallelization of the functional blocks with significant workloads, the evaluation results show a mean execution time of 7.5 ms on the Raspberry Pi 3B+ and 0.34 ms on the Odroid XU4. With efficient parallelization on the Odroid XU4 architecture, real-time performance can be achieved.

Author 1: Safa Mejhoudi
Author 2: Rachid Latif
Author 3: Amine Saddik
Author 4: Wissam Jenkal
Author 5: Abdelhafid El Ouardi

Keywords: ECG signal denoising; ADTF algorithm; OpenMP programming; embedded architectures

PDF

Paper 45: Efficient Rain Simulation based on Constrained View Frustum

Abstract: Realistic real-time rendering of rain streaks has been treated as a very difficult problem because of the variety of the underlying natural phenomena, and creating and managing the many particles in a rain streak consumes considerable resources. This paper proposes an efficient real-time rain-streak simulation algorithm that generates view-dependent rain particles, which can express a large amount of rain streaks even with a small number of particles. By creating a ‘constrained view frustum’ that depends on the camera moving in real time, particles are rendered only in that space. Accordingly, particles are rendered correctly even if the camera keeps moving or rotating rapidly. Because only a small number of particles are used and the simulation is performed in the limited space viewed by the user, the effect of simulating many particles can be obtained. This enables very efficient real-time simulation of rain streaks.

Author 1: JinGi Im
Author 2: Mankyu Sung

Keywords: View-dependent rendering; realistic real-time simulation; view frustum

PDF

Paper 46: A Secure Communication Process of Wireless Sensor Network Architecture for Smart Urban Environment Monitoring Applications

Abstract: Wireless Sensor Networks (WSNs) have been increasingly used for remote monitoring systems, and their adoption is growing exponentially for larger applications too. However, various challenges associated with both resource management and security arise when the deployment becomes massive and distributed. The proposed system considers a case study of smart city management in which the problems associated with data transmission and security are addressed. This is carried out through the provisioning of an urban environment monitoring system, an essential component of smart city projects for assuring citizens' well-being. A scalable and effective urban environment monitoring system requires seamless transmission of data from the sensor nodes to the analytics engine, while existing architectures are mostly designed for very specific use-cases. As a contribution, the proposed system introduces a cost-effective architecture for environmental monitoring in urban zones of a smart city, named the Smart Sensor Surveillance System (4S-UEM). The core idea is to offer a balance between resource efficiency and resilient, secure communication in large-scale WSN deployments, with a smart city as the deployment and assessment area. The proposed system makes use of an urban geographical clustering process in order to develop an organized structure of sensor nodes. Unlike existing studies, the proposed system introduces a data analytical engine followed by secure routing using a gateway. The design is carried out using a layered communication architecture targeting cost-effective, energy-optimal, and secure data transmission to the analytics engine.

Author 1: Rashmi S Bhaskar
Author 2: Veena S Chakravarthi

Keywords: Wireless sensor network (WSN); sensors; smart city security; secure communication process

PDF

Paper 47: Customer Opinion Mining by Comments Classification using Machine Learning

Abstract: In this era of digital and competitive markets, every business entity is trying to adopt a digital marketing strategy to gain global business benefits. To obtain such competitive advantages, it is necessary for e-commerce organizations to understand the feelings, thinking, and reasons of their customers regarding their products and services. The major objective of this study is to investigate customers’ buying and consumer behavior so that customers can evaluate an online product from various perspectives, such as variety, convenience, trust, and time. The study performs data analysis on e-commerce customer data collected through intelligent agents (automated scripts) or web-scraping techniques, enabling customers to quickly understand a product from these perspectives through other customers’ opinions at a glance. This is a qualitative and quantitative e-commerce content analysis using methods such as data crawling, manual annotation, text processing, feature engineering, and text classification. We obtained manually annotated data from e-commerce experts, employed BOW and N-gram techniques for feature engineering, and applied KNN, Naïve Bayes, and VSM classifiers with different feature-extraction combinations to obtain better results. The study also incorporates data mining and data analytics evaluation and validation measures such as precision, recall, and F1-score.
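
A minimal sketch (not the authors' pipeline) of bag-of-words/n-gram feature engineering with a Naive Bayes classifier and precision/recall/F1 reporting, using hypothetical comments and labels:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

# Hypothetical annotated customer comments (labels: convenience, trust, ...).
comments = ["delivery was quick and easy", "not sure I trust this seller",
            "easy checkout, very convenient", "seller looks fake, avoid"]
labels   = ["convenience", "trust", "convenience", "trust"]

# Bag-of-words with unigrams and bigrams, classified with Naive Bayes.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(comments, labels)

pred = model.predict(comments)
print(classification_report(labels, pred))  # precision, recall, F1-score per class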

Author 1: Moazzam Ali
Author 2: Farwa yasmine
Author 3: Husnain Mushtaq
Author 4: Abdullah Sarwar
Author 5: Adil Idrees
Author 6: Sehrish Tabassum
Author 7: BaburHayyat
Author 8: Khalil Ur Rehman

Keywords: Customer comments; behavior mining; data mining; machine learning

PDF

Paper 48: Spoken Language Identification on Local Language using MFCC, Random Forest, KNN, and GMM

Abstract: Spoken language identification is an active field of research, and many techniques have been proposed for speech processing, such as Support Vector Machines, Gaussian Mixture Models, Decision Trees, and others. This paper builds a system that uses Mel-Frequency Cepstral Coefficient (MFCC) features of the input speech signal; Random Forest (RF), Gaussian Mixture Model (GMM), and K-Nearest Neighbor (KNN) as classifiers; 3 s, 10 s, and 30 s segments as the scoring conditions; and a dataset consisting of the Javanese, Sundanese, and Minang languages, which are traditional languages of Indonesia. K-Nearest Neighbor achieves 98.88% accuracy for 30 s of speech, followed by Random Forest with 95.55% accuracy for 30 s of speech, while GMM achieves 82.24% accuracy.
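
A minimal sketch (not the authors' implementation) of the MFCC-plus-KNN idea, assuming hypothetical audio files and the librosa and scikit-learn libraries:

import librosa
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def mfcc_features(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Mean MFCC vector for one utterance (a common fixed-length summary)."""
    signal, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical training files and language labels (Javanese/Sundanese/Minang).
train_files = ["jv_01.wav", "su_01.wav", "mi_01.wav"]
train_labels = ["javanese", "sundanese", "minang"]

X = np.vstack([mfcc_features(f) for f in train_files])
knn = KNeighborsClassifier(n_neighbors=3).fit(X, train_labels)
print(knn.predict([mfcc_features("test_30s.wav")]))  # predicted language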

Author 1: Vincentius Satria Wicaksana
Author 2: Amalia Zahra S.Kom

Keywords: Gaussian mixture model; random forest; K-Nearest Neighbor; spoken language recognition; MFCC; GMM; KNN

PDF

Paper 49: Conceptualizing Smart Sustainable Cities: Crossing Visions and Utilizing Resources in Africa

Abstract: Recent advancements in technology have made the development of smart cities more effective and feasible. Smart cities depend on intelligent systems, artificial intelligence, the Internet of Things, control systems, and many other advanced technologies. Smart and sustainable city concepts address largely shared goals in the face of worldwide sustainability challenges: improving and providing essential services for all people efficiently while relying on sustainable, clean, and renewable energy, and taking into account the economic, educational, health, social, and environmental aspects of the city. In this research, a cost analysis process is carried out to ease the implementation and resource utilization of smart and sustainable cities in Africa, and the challenges and difficulties of such implementations are summarized.

Author 1: Ahmed Al-Gindy
Author 2: Aya Al-Chikh Omar
Author 3: Mariam Aerabe
Author 4: Ziad Elkhatib

Keywords: Smart cities; sustainable energy; renewable energy; internet of things; artificial intelligence

PDF

Paper 50: Monophonic Guitar Synthesizer via Mobile App

Abstract: Among guitarists, it is common to work with guitar synthesizers because they can emulate a great variety of sounds produced by different musical instruments from nothing more than the guitar being played: a piece of music is played on a guitar, but other instruments are actually heard, such as a saxophone, a violin, a piano, or percussion, depending on the instrument selected. The problem addressed in this article is that synthesizers are expensive and, due to their size, transporting the equipment is often impractical. Consequently, the development of a mobile application that functions as a monophonic synthesizer is proposed as a solution. In this way the cost is greatly reduced, and the user can install the application on an Android mobile device and connect it to an electric or electro-acoustic guitar through an audio interface, obtaining a functional technological instrument that offers guitarists an alternative to conventional synthesizers. The application uses the Radix-2 Fast Fourier Transform as its signal recognition algorithm, which allows the fundamental frequencies generated by the guitar to be obtained, converted into MIDI notation, and later used for sound emulation.
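
A minimal sketch of the underlying signal path described above—estimating the fundamental from an FFT peak and mapping it to a MIDI note—using NumPy's FFT in place of a hand-written Radix-2 implementation; the test frame is synthetic:

import numpy as np

def dominant_frequency(frame: np.ndarray, sample_rate: int) -> float:
    """Fundamental estimate from the strongest FFT bin of a windowed frame."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def frequency_to_midi(freq_hz: float) -> int:
    """Standard mapping: MIDI 69 = A4 = 440 Hz, 12 semitones per octave."""
    return int(round(69 + 12 * np.log2(freq_hz / 440.0)))

# Synthetic 1024-sample frame approximating A4 (440 Hz) at 44.1 kHz.
sr = 44100
t = np.arange(1024) / sr
frame = np.sin(2 * np.pi * 440.0 * t)
print(frequency_to_midi(dominant_frequency(frame, sr)))  # -> 69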

Author 1: Edgar García Leyva
Author 2: Elena Fabiola Ruiz Ledesma
Author 3: Rosaura Palma Orozco
Author 4: Lorena Chavarría Báez

Keywords: Monophonic synthesizer; guitar; sound emulation; mobile application

PDF

Paper 51: Implementation of Artificial Neural Network in Forecasting Sales Volume in Tokopedia Indonesia

Abstract: Predicting sales is one way for a company to secure its profits. Tokopedia Indonesia is a marketplace of the customer-to-customer (C2C) e-commerce type. This research was conducted to help sellers on the Tokopedia Indonesia marketplace predict the sales of their merchandise, so that sellers can prepare or stock items whose sales are predicted to increase, by implementing Artificial Neural Networks. Artificial neural networks can help predict future sales values. The data are divided into training data and testing data. The results of the analysis indicate that the network model obtained reaches an accuracy rate of 95%.
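
A minimal sketch (not the authors' network) of framing sales forecasting as supervised learning with a small backpropagation-trained neural network; the monthly sales figures are hypothetical:

import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical monthly sales volumes for one Tokopedia seller.
sales = np.array([120, 135, 150, 160, 172, 181, 195, 210, 220, 238, 251, 263], dtype=float)

# Supervised framing: predict next month from the previous three months.
window = 3
X = np.array([sales[i:i + window] for i in range(len(sales) - window)])
y = sales[window:]

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)  # trained with backpropagation

next_month = model.predict(sales[-window:].reshape(1, -1))
print(f"Forecast for the next period: {next_month[0]:.1f}")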

Author 1: Meiryani
Author 2: Dezie Leonarda Warganegara

Keywords: Forecasting; e-commerce; backpropagation; artificial neural network

PDF

Paper 52: Design and Evaluation of Bible Learning Application using Elements of User Experience

Abstract: Technological developments can encourage children to learn easily and help solve problems that often arise in the learning process. Sunday School students need learning media that make it easier to understand Christian Education. The method used in Sunday Schools is still conventional, consisting of face-to-face teaching and learning in class, and it often faces challenges such as students' lack of focus. One solution proposed in this paper is to design an Android-based learning application to support the learning process. The application's User Interface and User Experience (UI/UX) design is built based on the Elements of User Experience methodology, which is used in the analysis and design process to maximize the usability and engagement level of the application. The learning materials are designed based on the Attention, Relevance, Confidence, and Satisfaction (ARCS) framework, which helps the material design process ensure the clarity and appropriateness of the material. The application is implemented and tested on students to measure its effectiveness. The application trial has shown promising improvement, especially in students' engagement with the materials.

Author 1: Frederik Allotodang
Author 2: Herman Tolle
Author 3: Nataniel Dengen

Keywords: Christian education; Sunday school; element of user experience; ARCS; android application

PDF

Paper 53: A Markerless-based Gait Analysis and Visualization Approach for ASD Children

Abstract: This study proposes a new method for gait acquisition and analysis for autistic children based on a markerless technique, compared against the gold-standard marker-based technique. Gait acquisition is conducted using a depth camera with a customizable skeleton-tracking function, the Microsoft Kinect sensor, to record walking gait trials of 23 children with autism spectrum disorder (ASD) and 30 typically developing (TD) children. The Kinect depth sensor output is then translated into kinematic gait features. The kinematic angles of the hip, knee, and ankle are analyzed and their patterns visualized against the kinematic plots acquired from the marker-based Vicon motion system. In addition, these kinematic angles are validated using a statistical method, the Analysis of Variance (ANOVA). Results show that the p-values are insignificant for all angles when computing both intra-group and inter-group normalization. Hence, these findings show that the proposed markerless gait technique is indeed apt to be used as a new alternative markerless method for gait analysis of ASD children.
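
A minimal sketch of how a kinematic joint angle can be derived from three skeleton joint positions (hip, knee, ankle); the coordinates are hypothetical and this is not the authors' code:

import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at joint b (degrees) formed by segments b->a and b->c,
    e.g. the knee angle from hip, knee and ankle 3-D positions."""
    v1, v2 = a - b, c - b
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Hypothetical Kinect skeleton coordinates (metres) for one frame.
hip = np.array([0.0, 0.9, 2.5])
knee = np.array([0.0, 0.5, 2.6])
ankle = np.array([0.0, 0.1, 2.5])
print(f"Knee angle: {joint_angle(hip, knee, ankle):.1f} degrees")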

Author 1: Nur Khalidah Zakaria
Author 2: Nooritawati Md Tahir
Author 3: Rozita Jailani

Keywords: Autism spectrum disorder (ASD); kinematic; marker-based; markerless-based; gait analysis

PDF

Paper 54: Increasing the Steganographic Resistance of the LSB Data Hide Algorithm

Abstract: The robustness of a security algorithm is one of the most important properties determining how difficult it is to break. Increasing the robustness of the algorithm directly affects the degree of secrecy when it is used for confidential transmission. The paper analyzes the Least Significant Bit (LSB) steganographic algorithm and presents a method of counteracting the "visual attack" and the statistical methods used against stego-containers generated with the LSB algorithm. To demonstrate the increase in resistance, the study used the PSNR index and the Chi-square test. The proposed technique involves the use of a uniform distribution and a compression method. The paper presents the results of computer experiments demonstrating the effectiveness of the proposed technique.
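
For reference, a minimal sketch of plain LSB embedding together with the PSNR measure used in the study; the cover image and payload are synthetic, and the proposed uniform-distribution/compression counter-technique is not shown:

import numpy as np

def embed_lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write the payload bits into the least significant bit of the first len(bits) pixels."""
    stego = cover.copy().ravel()
    stego[:len(bits)] = (stego[:len(bits)] & 0xFE) | bits
    return stego.reshape(cover.shape)

def psnr(original: np.ndarray, modified: np.ndarray) -> float:
    """Peak Signal-to-Noise Ratio for 8-bit images: 10*log10(255^2 / MSE)."""
    mse = np.mean((original.astype(np.float64) - modified.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# Hypothetical 8-bit grayscale cover image and a short random payload.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
payload = rng.integers(0, 2, size=512, dtype=np.uint8)

stego = embed_lsb(cover, payload)
print(f"PSNR after embedding: {psnr(cover, stego):.2f} dB")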

Author 1: A. Y. Buchaev
Author 2: A. G. Mustafaev
Author 3: V.S. Galyaev
Author 4: A. M. Bagandov

Keywords: Steganography; steganalysis; visual attack; least significant bit

PDF

Paper 55: Twitter based Data Analysis in Natural Language Processing using a Novel Catboost Recurrent Neural Framework

Abstract: In recent years, sentiment analysis of Twitter data has been one of the most prevalent themes in Natural Language Processing (NLP). However, existing sentiment analysis approaches suffer from low classification performance and accuracy due to inadequate labeled data and failure to analyze complex sentences. This research therefore develops a novel hybrid machine learning model, the Catboost Recurrent Neural Framework (CRNF), with an error-pruning mechanism, to analyze Twitter data based on user opinion. Initially, a Twitter dataset of tweets about the coronavirus COVID-19 vaccine is collected, pre-processed, and used to train the system. The proposed CRNF model then classifies the sentiments as positive, negative, or neutral. The sentiment analysis is implemented in Python, and the performance parameters are calculated. Finally, the results obtained for parameters such as precision, recall, accuracy, and error rate are validated against existing methods.

Author 1: V. Laxmi Narasamma
Author 2: M. Sreedevi

Keywords: Natural language processing; sentiment analysis; twitter data; Catboost; recurrent neural network

PDF

Paper 56: Image-based Onion Disease (Purple Blotch) Detection using Deep Convolutional Neural Network

Abstract: Agriculture is humanity's most basic need for sustenance. Over the years, many farming methods and components have become computerized to guarantee faster production with higher quality. Because of the increased demand in the farming industry, agricultural produce must be cultivated using an efficient process. Onion (Allium cepa L.) is an economically valuable crop and the second-largest vegetable crop in the world, and the spread of various diseases strongly affects onion production. One of the most serious and common onion diseases worldwide is purple blotch. To compensate for the limited amount of training data of healthy and infected onion crops, the proposed method employs a pre-trained, enhanced InceptionV3 model. The proposed model detects onion disease (purple blotch) from images by recognizing the abnormalities caused by the disease and achieves a classification accuracy of 85.47% in recognizing the disease. This research investigates a novel approach for the rapid and accurate diagnosis of plant/crop diseases, laying a theoretical foundation for the use of deep learning in agricultural information.
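
A minimal transfer-learning sketch (not the authors' enhanced model) showing how a pre-trained InceptionV3 backbone can be reused for a binary healthy-versus-purple-blotch classifier in TensorFlow/Keras; the image directory is hypothetical:

import tensorflow as tf

# Pre-trained InceptionV3 backbone with a new binary head (healthy vs. purple blotch).
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         input_shape=(299, 299, 3), pooling="avg")
base.trainable = False  # keep ImageNet features, train only the classifier head

model = tf.keras.Sequential([
    # InceptionV3 expects inputs scaled to [-1, 1].
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0, input_shape=(299, 299, 3)),
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical directory of labelled onion leaf images, resized to 299x299.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "onion_leaves/train", image_size=(299, 299), batch_size=32, label_mode="binary")
model.fit(train_ds, epochs=5)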

Author 1: Muhammad Ahmed Zaki
Author 2: Sanam Narejo
Author 3: Muhammad Ahsan
Author 4: Sammer Zai
Author 5: Muhammad Rizwan Anjum
Author 6: Naseer u Din

Keywords: Disease detection; disease classification; artificial intelligence; inceptionv3; deep convolutional neural network

PDF

Paper 57: Design of Multi-band Microstrip Patch Antennas for Mid-band 5G Wireless Communication

Abstract: Recently, the microstrip patch antenna has been considered one of the best antenna structures due to its simple construction, low cost, minimum weight, and the ease with which it can be integrated with circuits. To achieve multi-band operation, an antenna is designed with rectangular and circular slots etched on the surface of the patch, providing multi-band frequency capabilities for mid-band 5G applications. All antennas use an inset-fed structure and are printed and fabricated on a Rogers RT5880 substrate. Prototype structures of the microstrip patch antenna were produced during the design process until the desired antennas were achieved. Antenna_1 achieves tri-band characteristics covering the WiMAX band (2.51 – 2.55 GHz), the WLAN and S-band (3.80 – 3.87 GHz), and the C- and X-band (6.19 – 6.60 GHz). Antenna_2 gives dual-band characteristics covering the C-band and X-band (6.72 – 7.92 GHz) with a peak below -45 dB, suitable for mid-band 5G applications. The impedance bandwidth increases to between 70 MHz and 1.25 GHz for wireless applications. The proposed microstrip patch antennas were simulated using CST MWS-2015 and experimentally tested to verify the fundamental characteristics of the proposed design; they offer multi-band operation with stable high gain and good directional radiation characteristics.

Author 1: Karima Mazen
Author 2: Ahmed Emran
Author 3: Ahmed S. Shalaby
Author 4: Ahmed Yahya

Keywords: Bandwidth; microstrip; multi-band; notch slot; rectangle slot; 5G

PDF

Paper 58: Stacked Autoencoder based Feature Compression for Optimal Classification of Parkinson Disease from Vocal Feature Vectors using Immune Algorithms

Abstract: Parkinson’s disease (PD) is a progressive neurological disorder and is most common among people above 60 years old. It affects brain nerve cells due to a deficiency in dopamine secretion; dopamine acts as a neurotransmitter and helps in the movement of body parts. As brain cells/neurons die with aging, dopamine levels decrease. The symptoms of Parkinson’s include difficulty in performing regular/habitual movements, uncontrollable shaking of the hands and limbs, memory loss, stiff muscles, sudden temporary loss of control, and more. The severity of the disease worsens if it is not diagnosed and treated at an early stage. This paper concentrates on developing a Parkinson’s disease diagnosis system using machine learning techniques and algorithms. Machine learning, an integral part of artificial intelligence, takes large amounts of data as input and trains on them using existing algorithms to learn the patterns in the data; based on the recognized patterns, the machine then acts without human intervention. In this work, two major approaches are employed to diagnose PD. Initially, 26 vocal features of PD-affected and healthy individuals, obtained from the UCI Machine Learning repository, are taken as the raw features. In pre-processing, the mRMR feature selection algorithm is employed to reduce the feature count and increase the accuracy rate. The selected features are further compressed using the Stacked Autoencoder technique to improve the accuracy and quality of classification with reduced run time. K-fold cross-validation is used to evaluate the predictive capability of the model and the effectiveness of the extracted features. The Artificial Immune Recognition System – Parallel (AIRS-P), an immune-inspired algorithm, is employed to classify the data from the extracted features. The proposed system attains 97% accuracy, outperforms the benchmark algorithms, and proves its significance for PD classification.
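
A minimal sketch of stacked-autoencoder feature compression on a 26-dimensional vocal feature vector, assuming TensorFlow/Keras and synthetic data; the compressed codes would then be passed to a classifier (AIRS-P in the paper):

import numpy as np
import tensorflow as tf

n_features = 26          # vocal features per recording, as in the UCI dataset used here
encoding_dims = (16, 8)  # two stacked encoding layers compressing 26 -> 16 -> 8

inputs = tf.keras.Input(shape=(n_features,))
h = tf.keras.layers.Dense(encoding_dims[0], activation="relu")(inputs)
code = tf.keras.layers.Dense(encoding_dims[1], activation="relu", name="code")(h)
h = tf.keras.layers.Dense(encoding_dims[0], activation="relu")(code)
outputs = tf.keras.layers.Dense(n_features, activation="linear")(h)

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Hypothetical standardized vocal feature matrix (rows = recordings).
X = np.random.rand(200, n_features).astype("float32")
autoencoder.fit(X, X, epochs=50, batch_size=16, verbose=0)

# The encoder output is the compressed representation fed to the classifier.
encoder = tf.keras.Model(inputs, autoencoder.get_layer("code").output)
compressed = encoder.predict(X)
print(compressed.shape)  # (200, 8)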

Author 1: K. Kamalakannan
Author 2: G.Anandharaj

Keywords: Immune algorithms; Parkinson’s disease; stacked autoencoder; airs-parallel; machine learning

PDF

Paper 59: Gender Diversity in Computing and Immersive Games for Computer Programming Education: A Review

Abstract: This paper provides a review of the current state of the gender gap in computer science and highlights how immersive games can mitigate this issue. Game-based learning (GBL) applications have been shown to successfully incite motivation in students and increase learning efficiency in both formal and non-formal educational settings. With the rise of GBL, researchers have also used virtual reality to provide pupils with a more immersive learning experience. Both GBL and virtual reality techniques are also used for computer programming education. However, there is a paucity of applications that utilize these techniques to incite interest in computer science from a female perspective. This is a cause for concern as immersive games have been proven to be capable of inciting affective motivation and fostering positive attitudes towards specific subjects. Hence, this review summarises the benefits and limitations of GBL and virtual reality; how males and females respond to certain game elements; and suggestions to aid in the development of immersive games to increase female participation in the field of computer science.

Author 1: Chyanna Wee
Author 2: Kian Meng Yap

Keywords: Computer science education; game-based learning; gender; virtual reality

PDF

Paper 60: Fairness Embedded Adaptive Recommender System: A Conceptual Framework

Abstract: In the current fast-paced and constantly changing environment, companies should ensure that their way of interacting with users is both relevant and highly adaptive. In order to stay competitive, companies should invest in state-of-the-art technologies that optimize the relationship with the user using increasingly available data. The most popular applications used to develop the user relationship are Recommender Systems. The vast majority of traditional recommender systems treat recommendation as a static procedure and focus on a specific type of recommendation, making them not very agile in adapting to new situations. Also, when implementing a Recommender System, there is a need to ensure fairness in the way decisions are made upon customer data. This paper proposes a novel Reinforcement Learning-based recommender system that is highly adaptive to changes in customer behavior and focuses on ensuring both producer and consumer fairness: the Fairness Embedded Adaptive Recommender System (FEARS). The approach overcomes Reinforcement Learning’s main drawback in the recommendation area by using a small but meaningful action space. Two fairness metrics are also presented, together with their calculation and adaptation for use with Reinforcement Learning, ensuring that the system reaches the optimal trade-off between personalization and fairness.

Author 1: Alina Popa

Keywords: Algorithmic fairness; reinforcement learning; recommender systems; system adaptability

PDF

Paper 61: Online Learning Acceptance Model during Covid-19: An Integrated Conceptual Model

Abstract: Because of Covid-19, many countries shut down schools in order to prevent the spread of the virus in their communities, and schools have opted to use online learning technologies that support distance learning for students. As a consequence, the Ministry of Higher Education and Scientific Research encourages higher education institutes to adopt blended learning in their programs. However, different students react in different ways to online learning, and some students were able to make more productive use of online learning strategies than others. A conceptual model based on 15 variables was constructed from UTAUT2, TAM, and other models to investigate the factors that affect students’ acceptance of online learning. Twenty-nine hypotheses were investigated to study the relationships among the variables that affect online learning acceptance and online learning community building at Al-Ahliyya Amman University. The collected responses were analyzed using a structural equation modeling (SEM) approach, with SPSS and AMOS used to analyze the data.

Author 1: Qasem Kharma
Author 2: Kholoud Nairoukh
Author 3: AbdelRahman Hussein
Author 4: Mosleh Abualhaj
Author 5: Qusai Shambour

Keywords: Online learning; technology acceptance; learning assistance; learning community building assistance

PDF

Paper 62: Data Analytics in Investment Banks

Abstract: Capital Markets are one of the most important pillars of worldwide economy. They gather skilled finance and IT professionals as well as economists in order to take the best investment decisions and choose the most suitable funding solutions every time. Data analytics projects in Capital Markets can definitely be very beneficial as all optimizations and innovations would have a financial impact, but can also be very challenging as the field itself has always incorporated a research component, thus finding out what could really be of an added value might be a tricky task. Based on a comprehensive literature review, this paper aims to structure the thoughts around data analytics in investment banks, and puts forward a classification of relevant data analytics use cases. Lastly, it also discusses how transforming to a data-driven enterprise is the real change investment banks should aim to achieve, and discusses some of the challenges that they might encounter when engaging in this transformation process.

Author 1: Basma Iraqi
Author 2: Lamia Benhiba
Author 3: Mohammed Abdou Janati Idrissi

Keywords: Capital markets; data analytics; data analytics use cases; data-driven transformation; investment banks

PDF

Paper 63: Early Prediction of Plant Diseases using CNN and GANs

Abstract: Plant diseases enormously affect agricultural crop production and quality, causing huge economic losses to farmers and the country. This in turn increases the market price of crops and food, which increases the purchase burden on customers. Therefore, early identification and diagnosis of plant diseases at every stage of the plant life cycle is a critical approach to protecting and increasing crop yield. In this paper, using a deep-learning model, we present a classification system based on real-time images for early identification of plant infection, prior to the onset of severe disease symptoms, at different life stages of a tomato plant infected with Tomato Mosaic Virus (TMV). The proposed classification is applied to each stage of the plant separately to obtain the largest dataset and the manifestation of each disease stage; the plant stages are named in relation to the disease stage as healthy (uninfected), early infection, and diseased (late infection). The classifier is designed using a Convolutional Neural Network (CNN) model and achieves an accuracy rate of 97%. When Generative Adversarial Networks (GANs) are used to increase the number of real-time images and the CNN is applied to these new images, the accuracy rate reaches 98%.

Author 1: Ahmed Ali Gomaa
Author 2: Yasser M. Abd El-Latif

Keywords: Plants diseases; deep learning; early detection; convolutional neural network; generative adversarial networks

PDF

Paper 64: Mammogram Segmentation Techniques: A Review

Abstract: There has been significant development in computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems in recent years, coinciding with the evolution of computing power and the growth of data. CAD systems support the detection and diagnosis of significant diseases, including cancer. Breast cancer is one of the most prevalent cancers affecting women and causing death around the world, and its early detection has a significant effect on treatment. A typical CAD system goes through various steps, including image segmentation, feature extraction, and image classification. Image segmentation plays an important role in CAD systems and simplifies further processing. This review explores popular mammogram segmentation techniques. A mammogram is a medical image that uses a low-dose X-ray system to view the inner tissues of the breast. The many segmentation techniques used on medical images can be grouped into five main categories: region-based methods, boundary-based methods, atlas-based methods, model-based methods, and deep learning. A ground truth image is needed to measure the performance of a segmentation algorithm, and different performance measurements are used to evaluate the segmentation process, including accuracy, precision, recall, F1 score, Hausdorff distance, Jaccard index, and Dice index. Research in mammogram segmentation has yielded promising results, but there is room for improvement.
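
For reference, the Dice and Jaccard overlap measures mentioned above can be computed for binary masks as follows (the 4x4 masks are hypothetical):

import numpy as np

def dice_and_jaccard(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Overlap metrics between a binary segmentation mask and its ground truth."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    dice = 2 * intersection / (pred.sum() + truth.sum())
    jaccard = intersection / np.logical_or(pred, truth).sum()
    return float(dice), float(jaccard)

# Hypothetical 4x4 masks (1 = segmented region, 0 = background).
pred  = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
print(dice_and_jaccard(pred, truth))  # (Dice, Jaccard)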

Author 1: Eman Justaniah
Author 2: Areej Alhothali
Author 3: Ghadah Aldabbagh

Keywords: Mammogram; medical imaging; segmentation; preprocessing; breast cancer

PDF

Paper 65: Sensed-Lexicon based Approach for Identification of Similarity among Punjabi Documents

Abstract: Textual similarity among documents often leads to copyright issues, and manual measurement of similarity among documents is a time-consuming, infeasible activity. In this paper, we propose a technique for measuring similarity at the sensed-lexicon level for documents written in the Punjabi language using the Gurmukhi script. Fifty Punjabi document pairs were manually collected with the help of native Punjabi writers. The proposed technique consists of four major levels. Level 0 is the data collection phase. Level 1 consists of noise removal and stop-word removal sub-levels. In level 2, the extracted tokens are stemmed and lemmatized, synonyms are replaced based on part-of-speech tagging, and a vector space representation of each document leads to n-gram generation; the extracted n-grams are weighted by term frequency. In level 3, string-based token-level similarity indices such as the Jaccard Similarity Index (JSI), Cosine Similarity Index (CSI), and Levenshtein Distance Index (LDI) are computed over the weighted tokens. In this work, Human Intelligence Task (HIT) based rating is used to score the similarity among documents between 0 and 100. The results obtained from the HIT-based rating are compared with the results obtained from the proposed technique under various combinations of pre-processing levels. The results reveal that, on the basis of majority voting, the combination of stop-word removal with stemming and noun-based synonym replacement is the best combination with bi-gram tokens. Statistical analysis indicates a strong correlation between CSI and the HIT-based rating.
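
A minimal sketch (not the authors' implementation) of bigram-level Jaccard and term-frequency-weighted cosine similarity; the token streams are hypothetical and shown transliterated, and the Levenshtein index is omitted for brevity:

import math
from collections import Counter

def ngrams(tokens, n=2):
    """Consecutive n-grams over a token stream."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(a, b):
    """Cosine similarity over term-frequency weighted n-grams."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[g] * cb[g] for g in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

# Hypothetical pre-processed (stemmed, stop-word-free) token streams.
doc1 = ["punjab", "lok", "geet", "virsa", "sangeet"]
doc2 = ["punjab", "lok", "geet", "sangeet", "virsa"]

bi1, bi2 = ngrams(doc1), ngrams(doc2)
print("Jaccard (bigrams):", round(jaccard(bi1, bi2), 3))
print("Cosine  (bigrams):", round(cosine(bi1, bi2), 3))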

Author 1: Jasleen Kaur
Author 2: Jatinderkumar R Saini

Keywords: Cosine Similarity Index (CSI); Jaccard Similarity Index (JSI); Levenshtien Distance Index (LDI); n-gram; Punjabi; similarity checker

PDF

Paper 66: A Review on Feature Selection and Ensemble Techniques for Intrusion Detection System

Abstract: Intrusion detection has drawn considerable interest as researchers endeavor to produce efficient models that offer high detection accuracy. Nevertheless, the challenge remains to develop a reliable and efficient Intrusion Detection System (IDS) that is capable of handling large amounts of data whose trends evolve in real time. The design of such a system relies on the detection methods used, particularly the feature selection techniques and machine learning algorithms. Thus motivated, this paper presents a review of the feature selection and ensemble techniques used in anomaly-based IDS research. Dimensionality reduction methods are reviewed, followed by a categorization of feature selection techniques to illustrate their effect on training and detection. Selecting the most relevant features in the data has been proven to increase detection efficiency in terms of accuracy and computational cost, hence its important role in the design of an anomaly-based IDS. We then analyze and discuss a variety of IDS-oriented machine learning techniques with various detection models (single-classifier or ensemble-based) to illustrate their significance and success in the intrusion detection area. Besides supervised and unsupervised learning methods, ensemble methods combine several base models to produce one optimal predictive model and improve the accuracy of IDSs. The review consequently focuses on the ensemble techniques employed in anomaly-based IDS models and illustrates how their use improves the performance of those models. Finally, the paper discusses open issues in the area and offers research directions to be considered when designing efficient anomaly-based IDSs.
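
A minimal sketch (not tied to any specific surveyed work) of combining filter-based feature selection with a soft-voting ensemble, using synthetic stand-in data for labelled network-flow records:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for labelled network-flow records (normal vs. attack).
X, y = make_classification(n_samples=2000, n_features=40, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Filter-based feature selection followed by a soft-voting ensemble of base learners.
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB())],
    voting="soft")
model = make_pipeline(SelectKBest(mutual_info_classif, k=15), ensemble)
model.fit(X_train, y_train)
print("Detection accuracy:", model.score(X_test, y_test))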

Author 1: Majid Torabi
Author 2: Nur Izura Udzir
Author 3: Mohd Taufik Abdullah
Author 4: Razali Yaakob

Keywords: Intrusion detection system (IDS); anomaly-based IDS; feature selection (FS); ensemble

PDF

Paper 67: Design, Aggregation and Analysis of Power Consumption Data using the Jump Process

Abstract: This work seeks a pragmatic approach to assessing electricity consumption at the level of households, buildings, and neighborhoods. The main concern is to propose aggregation methods based on a jump process, in a customer environment that is intrinsically linked to the implementation of a centralized system. The approach presents data aggregations derived from a data model in order to facilitate the processing of electricity data at different scales of analysis. Such a smart-meter data management process calls for the design of an aggregated database that can store data for a house, a building, and a neighborhood. The advantage of this system lies in easier data interpretation and in its ability to guide decision-makers in the management of electricity consumption. An analysis of electricity consumption behavior is also proposed, based on monitoring the consumption of the various devices connected to a smart meter.

Author 1: Yazid Hambally Yacouba
Author 2: Amadou Diabagaté
Author 3: Michel Babri
Author 4: Adama Coulibaly

Keywords: Design; aggregation; analysis; jump process; electricity consumption; smart meter

PDF

Paper 68: Assessing Data Sharing's Model Fitness Towards Open Data by using Pooled CFA

Abstract: This study demonstrates the step-by-step procedure for performing Pooled Confirmatory Factor Analysis (CFA) in the measurement part of Structural Equation Modelling (SEM). CFA is crucial for the SEM measurement model to obtain an acceptable model fit before modeling the structural model. There are two techniques in CFA: individual CFA and pooled CFA. Pooled CFA is usually performed when there is a high number of constructs and items; if the model is complicated, with many constructs and items, pooled CFA is recommended to simplify the model's appearance while keeping it easy to understand. The perception of Malaysia Technical University Network (MTUN) academics regarding data sharing towards open data was analysed using pooled CFA. Three main constructs were analysed: data sharing, with its four sub-constructs (technological factor, organizational factor, environmental factor, and individual factor); the mediator construct (open data licenses); and the open data construct. Furthermore, the factor loadings of the second-order constructs towards their corresponding sub-constructs were investigated. This research collected primary data from 442 respondents using a stratified random sampling technique. The paper explains the theoretical framework before revealing the results of pooled CFA on data sharing towards open data.

Author 1: Siti Nur’asyiqin Ismael
Author 2: Othman Mohd
Author 3: Yahaya Abd Rahim

Keywords: Pooled CFA; data sharing; open data; measurement model; validity

PDF

Paper 69: Towards the Development of a Brain Semi-controlled Wheelchair for Navigation in Indoor Environments

Abstract: Several technological advancements have emerged to provide technical assistance that supports people with special needs in tackling their everyday tasks. In particular, cost-effective Brain-Computer Interfaces (BCI) can be very useful for people with disabilities to improve their quality of life. This paper investigates the usability of a low-cost BCI for navigation in an indoor environment, one of the daily challenges facing individuals with mobility impairment. A software framework is proposed to control a wheelchair using three modes of operation—brain-controlled, autonomous, and semi-autonomous—taking into consideration usability and safety requirements. A prototype system based on the proposed framework was developed; it can detect an obstacle in front of, to the right of, and to the left of the wheelchair and can stop the movement automatically to avoid a collision. The usability evaluation of the proposed system, in terms of effectiveness, efficiency, and satisfaction, shows that it can be very helpful in the daily life of mobility-impaired people. An experiment was conducted to assess the usability of the proposed framework using the prototype system; subjects steered the wheelchair effectively using the three different operation modes by controlling the direction of motion.

Author 1: Hailah AlMazrua
Author 2: Abir Benabid Najjar

Keywords: Usability; wheelchair navigation; indoor navigation; mobility impairment; obstacle avoidance; obstacle detection; path planning; BCI; brain-computer interaction

PDF

Paper 70: Computing Academics’ Perceived Level of Awareness and Exposure to Software Engineering Code of Ethics: A Case Study of a South African University of Technology

Abstract: The need for awareness of ethical computing is becoming increasingly important. This challenges all stakeholders in the software engineering profession, including educators, to improve their efforts to raise awareness of professional codes of ethics, which provide a framework for ethical reference. However, the several compromises seen in software engineering practice suggest that there are some in the profession who are not familiar with the profession's codes of ethics and are consequently unable to practice them or teach students about them. This research investigates the extent of awareness of codes of ethics among practitioners who teach software development courses in an academic environment. An online questionnaire with indicators for measuring awareness of the software engineering code of ethics was deployed and answered by 44 educators. Graphical, univariate and bivariate analyses were conducted on the data to determine the profile of the respondents and the extent of their awareness of the codes of ethics. The results indicate that the majority of the lecturers (54.5%) are not aware of software engineering codes of ethics, and most of those who are aware were exposed to them through self-study or personal development. Furthermore, the inclusion of codes of ethics in learning activities is minimal, inhibited by lack of awareness and failure to apply the codes practically. This study recommends that lecturing staff, as professional software engineers serving as an academic corps, should be placed on programmes that expose them to professional software engineering codes of ethics. Moreover, the study calls for accreditation of software engineering courses, as is the case in other professional engineering disciplines, to improve awareness and subsequent practical application of the codes of ethics.

Author 1: Robert T. Hans
Author 2: Senyeki M. Marebane
Author 3: Jacqui Coosner

Keywords: Software engineering ethics; code of ethics; ethics awareness; ethics education; moral development ethics

PDF

Paper 71: Automating and Optimizing Software Testing using Artificial Intelligence Techniques

Abstract: The final product of the software development process is a software system, and testing is one of the important stages in this process. The success of this process can be determined by how well it accomplishes its goal. Due to the advancement of technology, various software testing tools have been introduced in the software engineering discipline. Because the use of software is increasing day by day, the complexity of software functions is challenging, and software must be released within short quality-evaluation periods, there is a high demand for adopting automation in software testing. The emergence of automated software testing tools and techniques helps enhance quality and reduce time and cost in the software development activity. Artificial Intelligence (AI) techniques are widely applied in different areas of Software Engineering (SE). Applying AI techniques can help achieve good performance in software testing and increase the productivity of software development firms. This paper briefly presents the state of the art in software testing through the application of AI techniques.

Author 1: Minimol Anil Job

Keywords: Software testing; artificial intelligence; testing automation; software engineering; software quality

PDF

Paper 72: Cultural Events Classification using Hyper-parameter Optimization of Deep Learning Technique

Abstract: Through digitization, efforts to maintain and promote cultural heritage are being strengthened. Against this background, this study presents a new Indonesian cultural events dataset and automatic image classification of cultural events. The dataset was developed using the Flickr image platform, and images of five cultural events were collected: the Baliem Festival, Jember Fashion Festival, Nyepi Festival, Pacu Jawi, and Pasola Festival. A Convolutional Neural Network (CNN) was developed as the classification method. A comparison of CNN models (VGG16 and VGG19) using several optimization configurations was performed to obtain the best model. The results showed that VGG16 with image augmentation and dropout regularization performed best, with 94.66% accuracy. This study is intended to support the digital documentation process and help preserve Indonesia's cultural heritage.

Author 1: Feng Zhipeng
Author 2: Hamdan Gani

Keywords: Cultural events; convolutional neural network (CNN); very depth convolutional network (VGG); multi-class classification

PDF

Paper 73: Towards using Single EEG Channel for Human Identity Verification

Abstract: Biometrics is an interesting area of research as a result of tremendous technological advances, especially in security. It is an automated technology used for identification based on biological or behavioral human traits. An electroencephalogram (EEG) records the brain's electrical activity signals, which are considered biological traits usable in biometric systems. The primary goal of this work is to find a single EEG channel that can be used for human identification purposes. A single EEG channel recording is used in identity verification mode, which is preferred when there are many subjects and instant, real-time system decisions are required. The percent residual difference (PRD), a common quantitative measure of the distance between two signals, is used to determine human identity. The proposed system achieves 100% sensitivity using certain single channels placed in the parietal and occipital lobes. The proposed system takes a short time in the enrolment process and gives an instant decision in verification mode, which is preferred with a large number of subjects. Also, the use of imaginary tasks is preferred for human identity verification.
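
The PRD measure named in the abstract can be summarized with a short sketch. Below is a minimal Python illustration of computing the percent residual difference between an enrolled template signal and a probe signal; the variable names, the synthetic signals and the acceptance threshold are hypothetical and not taken from the paper.

```python
import numpy as np

def percent_residual_difference(reference, test):
    """PRD (%): normalized distance between two equal-length signals."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    return 100.0 * np.sqrt(np.sum((reference - test) ** 2) / np.sum(reference ** 2))

# Hypothetical verification decision: accept the claimed identity when the
# distance between the enrolled EEG template and the probe is small enough.
enrolled = np.sin(np.linspace(0, 10, 500))        # stand-in for a stored EEG channel
probe = enrolled + 0.05 * np.random.randn(500)    # stand-in for a new recording
THRESHOLD = 15.0                                  # assumed, tuned on validation data
prd = percent_residual_difference(enrolled, probe)
print("PRD = %.2f%%, accepted = %s" % (prd, prd < THRESHOLD))
```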

Author 1: Marwa A. Elshahed

Keywords: Biometric; EEG; single channel; verification; brain lobes

PDF

Paper 74: Evaluation of Machine Learning Algorithms for Intrusion Detection System in WSN

Abstract: Technology has evolved into connecting "things" together with the rise of the global network called the Internet of Things (IoT). This is achieved through Wireless Sensor Networks (WSN), which introduce new security challenges for Information Technology (IT) scientists and researchers. This paper addresses security issues in WSN by establishing potential automated solutions for identifying associated risks. It also evaluates the effectiveness of various machine learning algorithms on two types of datasets, namely the KDD99 and WSN datasets. The aim is to analyze and protect WSN networks in combination with firewalls, Deep Packet Inspection (DPI), and Intrusion Prevention Systems (IPS), all specialized for the overall protection of WSN networks. Multiple testing options were investigated, such as cross validation and percentage split. Based on the findings, the most accurate algorithm and the one with the least processing time were suggested for both datasets.
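
As a rough illustration of the two testing options named in the abstract (cross validation and percentage split), the hedged scikit-learn sketch below evaluates one classifier both ways on a placeholder feature matrix; the actual KDD99/WSN preprocessing, feature set and algorithm list are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data standing in for preprocessed KDD99/WSN feature vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = rng.integers(0, 2, size=1000)          # 0 = normal traffic, 1 = attack

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Option 1: 10-fold cross validation.
cv_scores = cross_val_score(clf, X, y, cv=10)
print("10-fold CV accuracy: %.3f" % cv_scores.mean())

# Option 2: percentage split (e.g., 66% train / 34% test).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.34, random_state=0)
clf.fit(X_tr, y_tr)
print("Percentage-split accuracy: %.3f" % accuracy_score(y_te, clf.predict(X_te)))
```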

Author 1: Mohammed S. Alsahli
Author 2: Marwah M. Almasri
Author 3: Mousa Al-Akhras
Author 4: Abdulaziz I. Al-Issa
Author 5: Mohammed Alawairdhi

Keywords: Internet of Things (IoT); Wireless Sensor Network (WSN); Information Technology (IT); Denial of Service (DoS); artificial intelligence (AI); machine learning (ML)

PDF

Paper 75: Improving Packet Delivery Ratio in Wireless Sensor Network with Multi Factor Strategies

Abstract: In the design of wireless sensor networks (WSN), the packet delivery ratio is an important parameter to be maximized. In existing schemes, a secure zone-based routing protocol was implemented to improve lifetime in WSNs. In multi-hop communication, a new routing criterion was formulated for packet transmission. Security against message tampering, dropping and flooding attacks was incorporated into the routing metric. The approach skipped risky zones entirely during routing and chose alternative paths to route packets securely with less energy consumption. Although energy conservation and attack resilience are achieved, congestion in the WSN increases and, because of it, the packet delivery ratio diminishes. To address this problem, we propose a solution to improve the packet delivery ratio with multi-factor strategies involving routing, differentiation of flows, flow-based congestion control with retransmission, and redundant packet coding. Detailed analysis and simulations are undertaken to evaluate the efficiency of the proposed work compared to existing solutions.

Author 1: Venkateswara Rao M
Author 2: Srinivas Malladi

Keywords: Multi factor strategies; novel routing metric; packet coding; packet delivery ratio

PDF

Paper 76: UML Sequence Diagram: An Alternative Model

Abstract: The UML sequence diagram is the second most common UML diagram; it represents how objects interact and exchange messages over time. Sequence diagrams show how events or activities in a use case are mapped into operations of object classes in the class diagram. The general acceptance of sequence diagrams can be attributed to their relatively intuitive nature and their ability to describe partial behaviors (as opposed to such diagrams as statecharts). However, studies have shown that over 80% of graduating students were unable to create a software design or even a partial design, and many students had no idea how sequence diagrams were constrained by other models. Many students exhibited difficulties in identifying valid interacting objects and constructing messages with appropriate arguments. Additionally, according to authorities, even though many different semantics have been proposed for sequence diagrams (e.g., translations to state machines), there exists no suitable semantic basis for refinement of required sequence diagram behavior, because direct-style semantics do not precisely capture required sequence diagram behaviors and translations to other formalisms disregard essential features of sequence diagrams such as guard conditions and critical regions. This paper proposes an alternative to sequence diagrams: a generalized model that provides further understanding of sequence diagrams and assimilates them into a new modeling language called the thinging machine (TM). The sequence diagram is extended horizontally by removing the superficial vertical-only dimensional limitation of expansion, while preserving the logical chronology of events. TM diagramming is spread nonlinearly in terms of actions. Events and their chronology are constructed on a second plane of description that is superimposed on the initial static description. The result is a more refined representation that simplifies the modeling process. This is demonstrated by remodeling sequence diagram cases from the literature.

Author 1: Sabah Al-Fedaghi

Keywords: Requirements elicitation; conceptual modeling; static model; events model; behavioral model

PDF

Paper 77: A Succinct Novel Searching Algorithm

Abstract: Searching algorithms are essential for producing the results needed in the operation of data structures. Searching is one of the most common operations performed in various kinds of algorithms. Binary search and linear search occupy a central place among searching techniques, yet each technique has its own inherent limitations. The versatility of the different techniques in practice helps in bringing out hybrid search techniques built around them. For any tree representation, a sorted order is expected to achieve the best performance. This paper presents a new technique, named the biform tree approach, for producing a sorted order of elements and performing efficient searching.

Author 1: Celine
Author 2: Shinoj Robert
Author 3: Maria Dominic

Keywords: Time complexities; space complexities; searching algorithm; biform tree; pre-order traversal

PDF

Paper 78: Earthquake Prediction using Hybrid Machine Learning Techniques

Abstract: This research proposes two earthquake prediction models using seismic indicators and hybrid machine learning techniques for the region of southern California. Seven seismic indicators were calculated mathematically and statistically from previously recorded seismic events in the earthquake catalogue of that region. These indicators are: the time taken during the occurrence of n seismic events (T), the average magnitude of n events (M_mean), the magnitude deficit, i.e. the difference between the observed magnitude and the expected one (ΔM), the curve slope for n events using the Gutenberg-Richter inverse power law (b), the mean square deviation for n events using the Gutenberg-Richter inverse power law (η), the square root of the energy released during time T (DE1/2) and the average time between events (µ). Two hybrid machine learning models are proposed to predict the earthquake magnitude over fifteen days. The first model is FPA-ELM, a hybrid of the flower pollination algorithm (FPA) and the extreme learning machine (ELM). The second is FPA-LS-SVM, a hybrid of FPA and the least square support vector machine (LS-SVM). The performance of these two models is compared and assessed using four criteria: Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Symmetric Mean Absolute Percentage Error (SMAPE), and Percent Mean Relative Error (PMRE). The simulation results show that the FPA-LS-SVM model outperformed the FPA-ELM, LS-SVM, and ELM models in terms of prediction accuracy.
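
To make two of the quantities mentioned in the abstract concrete, the sketch below estimates the Gutenberg-Richter b-value for a window of n events (using the Aki maximum-likelihood estimate, an illustrative choice rather than the authors' exact procedure) and computes the four reported error measures under commonly used definitions, which are assumptions here.

```python
import numpy as np

def b_value(mags, m_min):
    """Aki maximum-likelihood estimate of the Gutenberg-Richter b-value
    for a window of n observed magnitudes (an illustrative choice)."""
    mags = np.asarray(mags, dtype=float)
    return np.log10(np.e) / (mags.mean() - m_min)

def evaluation_metrics(actual, predicted):
    """RMSE, MAE, SMAPE and PMRE under commonly used definitions (assumed)."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((a - p) ** 2))
    mae = np.mean(np.abs(a - p))
    smape = 100.0 * np.mean(np.abs(a - p) / ((np.abs(a) + np.abs(p)) / 2.0))
    pmre = 100.0 * np.mean(np.abs(a - p) / np.abs(a))
    return rmse, mae, smape, pmre

window = [3.2, 3.5, 4.1, 3.8, 4.6, 3.3]        # magnitudes of the last n events (toy values)
print("b-value: %.3f" % b_value(window, m_min=3.0))
print("RMSE=%.3f MAE=%.3f SMAPE=%.2f%% PMRE=%.2f%%"
      % evaluation_metrics([4.0, 4.5, 5.1], [3.8, 4.9, 4.7]))
```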

Author 1: Mustafa Abdul Salam
Author 2: Lobna Ibrahim
Author 3: Diaa Salama Abdelminaam

Keywords: Extreme learning machine; least square support vector machine; flower pollination algorithm; earthquake prediction

PDF

Paper 79: New Smart Encryption Approach based on Multidimensional Analysis Tools

Abstract: In the last decade, and with the new situation forced by the Covid-19 pandemic, information systems are often required to operate remotely and must communicate and share confidential data with several interlocutors. In such a context, ensuring the confidentiality of communications becomes a complex and difficult task. Hence the need for a flexible system that can adapt to the different parameters involved in every exchange of information. We recently presented in [1] a new smart approach to data encryption that serves this purpose. That approach uses the concept of artificial intelligence and applies the BNL skyline algorithm to decide on the most suitable algorithm to ensure the best data privacy. However, as the dimensions and criteria to be considered for this smart encryption grow, the complexity of the BNL algorithm increases, the response time of the application increases, and the quality of the skyline encryption decreases. In this work, we propose a new idea to resolve this problem. Our contribution consists in adding another intelligence layer that dynamically selects the skyline algorithm depending on the type and number of dimensions. In this paper, we provide an analysis and comparison of several skyline algorithms for multidimensional search. The results obtained show the performance of this new approach both in terms of execution time and in the quality of the dominant encryption solution.
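
For readers unfamiliar with the BNL skyline step mentioned in the abstract, here is a minimal block-nested-loop skyline sketch over a set of candidate encryption algorithms scored on hypothetical criteria (speed, security and key agility); the candidate names, criteria and scores are invented for illustration and are not the paper's dimensions.

```python
def dominates(a, b):
    """a dominates b if a is at least as good in every criterion
    and strictly better in at least one (all criteria maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def bnl_skyline(points):
    """Block-nested-loop skyline: keep a window of non-dominated points."""
    window = []
    for p in points:
        if any(dominates(w, p) for w in window):
            continue                                   # p is dominated, discard it
        window = [w for w in window if not dominates(p, w)]
        window.append(p)
    return window

# Hypothetical candidates: (speed score, security score, key-agility score).
candidates = {
    "alg_A": (0.9, 0.6, 0.7),
    "alg_B": (0.5, 0.9, 0.8),
    "alg_C": (0.4, 0.5, 0.6),   # dominated by alg_B
}
sky = bnl_skyline(list(candidates.values()))
print([name for name, score in candidates.items() if score in sky])
```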

Author 1: Salima TRICHNI
Author 2: Fouzia OMARY
Author 3: Mohammed BOUGRINE

Keywords: Security; confidentiality; artificial intelligence; smart encryption; cryptography; skyline

PDF

Paper 80: Natural Language Processing Applications: A New Taxonomy using Textual Entailment

Abstract: Textual entailment recognition is one of the recent challenges in the Natural Language Processing (NLP) domain. Deep learning strategies are used in textual entailment work instead of traditional machine learning or raw coding to achieve new, enhanced results. Textual entailment is also used in substantial NLP applications such as summarization, machine translation, sentiment analysis, and information verification. Textual entailment is more precise than traditional NLP techniques at extracting emotions from text, because the sentiment of any text can be clarified by textual entailment. When textual entailment is combined with deep learning, it can show a large improvement in performance accuracy and aid new applications such as depression detection. This paper lists and describes applications of natural language processing that rely on textual entailment. Various applications and approaches are discussed. Moreover, the datasets, algorithms, resources, and performance evaluation for each model are included. It also compares textual entailment application models according to the method used, the result of each model, and the pros and cons of each model.

Author 1: Manar Elshazly
Author 2: Mohammed Haggag
Author 3: Soha Ahmed Ehssan

Keywords: Textual entailment; deep learning; summarization; sentiment analysis; information verification; machine learning; depression detection

PDF

Paper 81: Improved Exemplar based Image Inpainting for Partial Instance Occlusion Handling with K-means Clustering and YCbCr Color Space

Abstract: Images acquired in real-time outdoor environments are often subject to uneven illumination, cloudy weather and varying lighting conditions. Instances of partial occlusion deteriorate the background modeling of such scenes. This investigative work addresses sets of outdoor-scene images with varying illumination and partial occlusions. There is a need for a restoration method that improves subjective perception and execution time. The proposed work focuses on a novel amended exemplar model to improve subjective perception. The exemplar inpainting method is improved through color quantization with a K-means clustering approach in the YCbCr color space. Experimental validation shows that the proposed method improves on existing methods in both qualitative and objective measures. The proposed method achieves an average Peak Signal to Noise Ratio (PSNR) of 28.2869 and Structural SIMilarity index (SSIM) of 0.9759, showing better results visually with a tradeoff in time.
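
The color-quantization step described in the abstract can be sketched as follows: the image is converted to the YCbCr color space and its pixels are clustered with K-means, each pixel being replaced by its cluster centre. This is only an outline under assumed parameters (k = 8, a hypothetical input file), using OpenCV and scikit-learn; it is not the authors' full inpainting pipeline.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def quantize_ycbcr(bgr_image, k=8):
    """Color quantization in YCbCr: cluster pixels with K-means and
    replace each pixel by its cluster centre (illustrative sketch)."""
    ycc = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)          # OpenCV stores Y, Cr, Cb
    pixels = ycc.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    quantized = km.cluster_centers_[km.labels_].reshape(ycc.shape).astype(np.uint8)
    return cv2.cvtColor(quantized, cv2.COLOR_YCrCb2BGR)

img = cv2.imread("scene.jpg")                                   # hypothetical input image
if img is not None:
    cv2.imwrite("scene_quantized.jpg", quantize_ycbcr(img, k=8))
```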

Author 1: Deepa Abin
Author 2: Sudeep D. Thepade

Keywords: Partial occlusion; exemplar inpainting; K-means clustering; YCbCr color space

PDF

Paper 82: Predicting the Appropriate Mode of Childbirth using Machine Learning Algorithm

Abstract: A woman's satisfaction with childbirth may have immediate and long-term effects on her health as well as on her relationship with her newborn child. The mode of delivery is genuinely vital to the mother and her infant, and it can be a crucial factor in ensuring the safety of both. During delivery, decision-making within a short time becomes very challenging for the physician, and humans may make wrong decisions when selecting the appropriate mode of childbirth. A wrong decision increases the mother's life risk and can also be harmful to the newborn baby's health. Computer-aided decision-making can be an excellent solution to this problem. Considering this scope, we have built a supervised machine learning based decision-making model to predict the most suitable childbirth mode and thereby reduce this risk. This work applied 32 supervised classifier algorithms and 11 training methods to a real childbirth dataset from the Tarail Upazilla Health Complex, Kishorganj, Bangladesh. We also analyzed and compared the results using various statistical parameters to determine the best-performing model. Quadratic discriminant analysis showed the highest accuracy of 0.979992 with an F1 score of 0.979962. Using this model to decide the appropriate labor mode may significantly reduce maternal and infant health risks.
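
As a hedged sketch of the best-performing step reported above, the snippet below trains scikit-learn's quadratic discriminant analysis on a placeholder table of delivery-related features and reports accuracy and F1; the synthetic features, labels and split are stand-ins, not the preprocessing of the Tarail dataset.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# Placeholder features (age, gestational week, blood pressure, ...) and labels
# (0 = vaginal delivery, 1 = caesarean); purely synthetic stand-ins.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
y = rng.integers(0, 2, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
qda = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)
pred = qda.predict(X_te)
print("accuracy = %.4f, F1 = %.4f"
      % (accuracy_score(y_te, pred), f1_score(y_te, pred)))
```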

Author 1: Md. Kowsher
Author 2: Nusrat Jahan Prottasha
Author 3: Anik Tahabilder
Author 4: Kaiser Habib
Author 5: Md. Abdur-Rakib
Author 6: Md. Shameem Alam

Keywords: Childbirth; labour mode; supervised machine learning; maternal death; infant

PDF

Paper 83: Power-based Side Channel Analysis and Fault Injection: Hacking Techniques and Combined Countermeasure

Abstract: Over the last years, physical attacks have been massively researched due to their capability to extract secret information from cryptographic engines. These hacking techniques exploit information from physical implementations instead of flaws in cryptographic algorithms. Fault-injection attacks (FA) and side-channel analysis (SCA) are the most popular implementation attacks. Aiming to secure cryptographic devices against such attacks, many studies have proposed a variety of sophisticated countermeasures. However, most of these secured approaches target a single, specific attack and find it difficult to thwart hybrid attacks, such as combined power and fault attacks. In this work, the Advanced Encryption Standard is used as a case study to analyse the most well-known physical hacking techniques: Differential Fault Analysis (DFA) and Correlation Power Analysis (CPA). Consequently, with the knowledge of such contemporary hacking techniques, we propose a low-overhead countermeasure for the AES implementation that combines correlated power-noise generation with a combined-approach fault detection scheme.
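
To illustrate the CPA technique analysed in the paper, the sketch below correlates a Hamming-weight leakage hypothesis with synthetic power traces for each key-byte guess. The leakage model is deliberately simplified (XOR with the key byte rather than the full AES S-box output) and the traces are artificial, so this conveys only the flavour of the attack, not the paper's setup.

```python
import numpy as np

HW = np.array([bin(v).count("1") for v in range(256)])   # Hamming-weight lookup table

def cpa_key_byte(plaintexts, traces):
    """Return the key-byte guess whose Hamming-weight hypothesis correlates
    best with the measured traces (simplified model: HW(pt XOR guess))."""
    best_guess, best_corr = 0, 0.0
    for guess in range(256):
        hypothesis = HW[plaintexts ^ guess].astype(float)
        # Correlate the hypothesis with every sample point of the traces.
        corr = np.abs(np.corrcoef(hypothesis, traces.T)[0, 1:]).max()
        if corr > best_corr:
            best_guess, best_corr = guess, corr
    return best_guess, best_corr

# Synthetic experiment: traces leak HW(pt XOR secret_key) plus Gaussian noise.
rng = np.random.default_rng(2)
secret_key = 0x3C
plaintexts = rng.integers(0, 256, size=2000, dtype=np.uint8)
leak = HW[plaintexts ^ secret_key].astype(float)
traces = leak[:, None] + rng.normal(0, 1.0, size=(2000, 50))
print(cpa_key_byte(plaintexts, traces))    # expected to recover 0x3C (decimal 60)
```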

Author 1: Noura Benhadjyoussef
Author 2: Mouna Karmani
Author 3: Mohsen Machhout

Keywords: Advanced encryption standard; fault attack; power attacks; combined countermeasure; hardware implementation

PDF

Paper 84: Drone Security: Issues and Challenges

Abstract: Retracted: After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

Author 1: Rizwan Majeed
Author 2: Nurul Azma Abdullah
Author 3: Muhammad Faheem Mushtaq
Author 4: Rafaqut Kazmi

Keywords: Drone technology; security; internet of things; threats; privacy

PDF

Paper 85: Blockchain Technology in Education System

Abstract: The aim of this paper is to review blockchain technology and its benefits in relation to education systems. Blockchain technology is widely researched and highly evaluated and appraised for its unique infrastructure. In general, blockchain is researched for its association with Bitcoin and the advantages of cryptocurrency. In this survey, the plan is to conduct a full review of previous literature focused on blockchain in education systems; to provide an overall review of blockchain concepts and the architecture behind the technology; and to examine the verification software used by the technology to improve security and immutability. The consensus algorithms and hashing functions, how they operate, and the different types of blockchain are also briefly discussed. In addition, the existing technology used in Saudi Arabia is reviewed. In-depth research was conducted on over 70 papers, of which 35 are cited in this survey. Emerging blockchains promise real-time democracy and justice to users all over the world. Educational institutions are said to be able to revolutionize their communication systems and accessibility and extend their market globally by widening their admissions and providing secure, cost-effective, transparent and immutable communication across their educational platforms.
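
As a toy illustration of the hashing-based immutability this survey discusses, the sketch below chains records by storing in each block the hash of the previous block, so altering any stored credential invalidates every later link. This is a generic example with invented records, not a description of any specific educational platform or of the systems reviewed in the paper.

```python
import hashlib
import json

def block_hash(block):
    """SHA-256 over a canonical JSON encoding of the block contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, record):
    """Append a record, linking it to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev})

def chain_is_valid(chain):
    """A single tampered record breaks every subsequent link."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
add_block(ledger, {"student": "S-001", "credential": "BSc Computer Science"})
add_block(ledger, {"student": "S-002", "credential": "MSc Data Science"})
print(chain_is_valid(ledger))                       # True
ledger[0]["record"]["credential"] = "PhD"           # tampering attempt
print(chain_is_valid(ledger))                       # False
```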

Author 1: Afnan H. Alsaadi
Author 2: Doaa M. Bamasoud

Keywords: Blockchain; certifications; authentication; decentralized-education; transparency; immutability; smart contracts; learning accessibility; fraud prevention; sustainability; ledger; consensus

PDF

Paper 86: Supporting Multi-interface Entities in Software-Defined Wireless Networks

Abstract: Software-Defined Networking (SDN) has gained growing momentum, from its earlier application to wired networks (e.g., data center networks) to its application to wireless and mobile networks. In addition, state-of-the-art wireless and mobile networks (cellular networks and mesh networks) have been enhanced through the integration of multiple radio access technologies or multiple interfaces. This paper considers how to evolve multi-interface wireless mobile networks according to a future SDN-based paradigm and deals with the technical problems therein. It presents the design and implementation of mechanisms that support SDN-based control of two types of multi-interface wireless mobile networks: one with multi-interface user devices and the other with multi-interface switching entities. As a methodology for demonstrating the feasibility of the proposed solution, a novel testing suite incorporating a real SDN controller and a standardized network simulator is designed and built. The functional verification and performance of the proposed solution are demonstrated in a virtual network topology but with the orchestration of the real SDN controller. The results show that multi-interface wireless entities can exploit multi-radio and multi-channel wireless resources with the help of the SDN approach.

Author 1: Jun-Hyuk Park
Author 2: Wonyong Yoon

Keywords: Software-defined networking; multi-interface; multi-interface switch; flow-precision mobility; flow-precision routing

PDF

Paper 87: A Machine Learning based Analytical Approach for Envisaging Bugs

Abstract: A software defect is a shortcoming, bug, fault, mistake, breakdown or glitch in software that causes it to produce an unsuitable or unanticipated result. The most hazardous consequences of a software defect that is not identified at an early stage of software development are losses of time, quality, cost and effort, and wastage of resources. Faults can appear at any stage of software development, and thriving software businesses emphasize software quality, predominantly in the early stages of development. To overcome this setback, researchers have formulated various bug prediction methodologies. However, developing a robust bug prediction model is a demanding task, and several approaches have been proposed in the literature. This paper presents a software fault prediction model based on Machine Learning (ML) algorithms. The simulation in the paper aims to predict the existence or non-existence of a fault using machine learning classification models. Five supervised ML algorithms are used to predict future software defects based on historical data: Naïve Bayes (NB), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Decision Tree (DT) and Random Forest (RF). The assessment procedure indicated that ML algorithms can be applied efficiently with a high accuracy rate. Moreover, a comparison is made to evaluate the proposed prediction model against other methods. The results indicate that the ML methodology performs better.
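
The five classifiers named in the abstract can be compared with a brief scikit-learn sketch like the one below; the defect dataset, its features and any tuning are synthetic placeholders rather than the paper's experimental setup.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for historical software-metric data (1 = defective module).
rng = np.random.default_rng(3)
X = rng.normal(size=(800, 12))
y = rng.integers(0, 2, size=800)

models = {
    "NB":  GaussianNB(),
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DT":  DecisionTreeClassifier(random_state=0),
    "RF":  RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```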

Author 1: Anjali Munde

Keywords: Software bug prediction; prediction model; data mining; machine learning; Naïve Bayes (NB); support vector machine (SVM); k-nearest neighbors (KNN); decision tree (DT); random forest (RF); python programming

PDF

Paper 88: Multi-category Bangla News Classification using Machine Learning Classifiers and Multi-layer Dense Neural Network

Abstract: Online and offline newspaper articles have become an integral part of our society. News articles have a significant impact on our personal and social activities, but picking an appropriate news article from the ocean of sources is a challenging task for users. Recommending the appropriate news category helps readers find the desired articles, but categorizing news articles manually is laborious, sluggish and expensive. Moreover, it becomes more difficult for a resource-insufficient language like Bengali, which is the fourth most spoken language of the world. However, very few approaches have been proposed for categorizing Bangla news articles, and those applied only a few machine learning algorithms with limited resources. In this paper, we evaluate multiple machine learning approaches, including a neural network, to categorize Bangla news articles on two different datasets. News articles were collected from the popular Bengali newspaper Prothom Alo to build Dataset I, and Dataset II was gathered from the machine learning competition platform Kaggle. We develop a modified stop-word set and apply it in the preprocessing stage, which leads to a significant improvement in performance. Our results show that the multi-layer neural network, Naïve Bayes and support vector machine provide better performance. Accuracies of 94.99%, 94.60% and 95.50% were achieved for SVM, logistic regression and the multi-layer dense neural network, respectively.
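
A hedged outline of the described pipeline (custom stop-word removal, feature extraction, then an SVM classifier) is given below; the tiny corpus, labels and stop-word list are placeholders, not the modified Bangla stop-word set or the Prothom Alo/Kaggle articles used in the paper.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Tiny placeholder corpus and stop-word list (the paper uses a modified
# Bangla stop-word set and real news articles).
documents = ["dhaka team wins cricket match", "stock market falls sharply",
             "new film released this week", "parliament passes budget bill"]
labels = ["sports", "economy", "entertainment", "politics"]
custom_stop_words = ["this", "new"]

model = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words=custom_stop_words)),  # preprocessing + features
    ("svm", LinearSVC()),                                      # multi-class linear SVM
])
model.fit(documents, labels)
print(model.predict(["team wins the match"]))
```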

Author 1: Sharmin Yeasmin
Author 2: Ratnadip Kuri
Author 3: A R M Mahamudul Hasan Rana
Author 4: Ashraf Uddin
Author 5: A. Q. M. Sala Uddin Pathan
Author 6: Hasnat Riaz

Keywords: Bangla news classification; supervised learning; feature extraction; category prediction; machine learning; neural network

PDF

Paper 89: The Effect of Using Light Stemming for Arabic Text Classification

Abstract: Arabic is an ancient Semitic language and one of the six official languages of the UN. Arabic text classification also plays a significant and essential role in modern applications. There is a big difference between handling English text and Arabic text classification, and preprocessing is particularly challenging for Arabic text. This paper presents the implementation of a Naïve Bayes classifier for Arabic text with and without a stemmer. A set of four categories and 800 documents from the Text Retrieval Conference (TREC) 2001 dataset was used. The results show that Naïve Bayes with a light stemmer achieves better results than Naïve Bayes without a stemmer. The classifier accuracy was measured with and without stemming as preprocessing: Naïve Bayes classification with the light stemmer reached 35.0745%, higher than the 33.831% obtained without a stemmer. Hence, the classifier with light stemming achieved better accuracy than the classifier without it.

Author 1: Jaffar Atwan
Author 2: Mohammad Wedyan
Author 3: Qusay Bsoul
Author 4: Ahmad Hamadeen
Author 5: Ryan Alturki
Author 6: Mohammed Ikram

Keywords: Arabic language; light stemming; information retrieval; Naïve Bayes classification

PDF

Paper 90: Travel Behavior Modeling: Taxonomy, Challenges, and Opportunities

Abstract: Personal daily movement patterns have a longitudinal impact on an individual's decision-making in traveling. Recent observations of human travel raise concerns about the impact of travel behavior changes on many aspects. Many travel-related aspects, such as traffic congestion management and effective land use, are significantly affected by travel behavior changes. Existing travel behavior modeling (TBM) focuses on assessing traffic trends and generating improvement insights for urban planning, infrastructure investment, and policymaking. However, the literature indicates limited discussion of recent TBM adaptation towards future technological advances such as the integration of autonomous vehicles and intelligent traveling. This survey paper aims to provide an overview of recent advances in TBM, including notable classifications, emerging challenges, and rising opportunities. In this survey, we reviewed and analyzed recently published works on TBM from high-quality publication sources. A taxonomy was devised based on notable characteristics of TBM to guide the classification and analysis of these works. The taxonomy classifies recent advances in TBM by type of algorithm, application, data source, technology, behavior analysis, and dataset. Furthermore, emerging research challenges and limitations encountered by recent TBM studies are characterized and discussed. Subsequently, this survey identifies and highlights open issues and research opportunities arising from recent TBM advances for future undertakings.

Author 1: Aman Sharma
Author 2: Abdullah Gani
Author 3: David Asirvatham
Author 4: Riyath Ahmed
Author 5: Muzaffar Hamzah
Author 6: Mohammad Fadhli Asli

Keywords: Travel behavior; travel behavior modeling; prediction modeling; intelligent traveling

PDF

Paper 91: A Novel Pornographic Visual Content Classifier based on Sensitive Object Detection

Abstract: With the increasing amount of pornography being uploaded to the Internet today arises the need to detect and block pornographic websites, especially in Eastern-culture countries. Studies of pornographic images and videos show that explicit sensitive objects are typically one of the main characteristics portraying the unique aspect of pornographic content. This paper proposes a classification method for pornographic visual content that involves detecting sensitive objects using object detection algorithms. Initially, an object detection model is used to identify sensitive objects in visual content. The detection results are then used as high-level features, combined with two other high-level features: skin-body and human-presence information. These high-level features are finally fed into a fusion Support Vector Machine (SVM) model, which draws the final decision. Based on 800 videos from the NDPI-800 dataset and 50,000 manually collected images, the evaluation results show that our proposed approach achieved 94.06% and 94.88% accuracy, respectively, which is comparable with cutting-edge pornographic classification methods. In addition, a pornography alerting and blocking extension was developed for Google Chrome to prove the proposed architecture's effectiveness and capability. Working with 200 websites, the extension achieved an outstanding result of 99.50% classification accuracy.

Author 1: Dinh-Duy Phan
Author 2: Thanh-Thien Nguyen
Author 3: Quang-Huy Nguyen
Author 4: Hoang-Loc Tran
Author 5: Khac-Ngoc-Khoi Nguyen
Author 6: Duc-Lung Vu

Keywords: Computer vision; image processing; object detection; pornographic recognition and classification; blocking extension; machine learning; deep learning; CNN

PDF

Paper 92: Deep Learning-based Natural Language Processing Methods Comparison for Presumptive Detection of Cyberbullying in Social Networks

Abstract: Due to the development of ICT in recent years, users have been able to satisfy many social experiences through several digital media such as blogs, the web and especially social networks. However, not all social media users have had good experiences with these media, since there are malicious users able to cause negative psychological effects on other people; this is called cyberbullying. For this reason, social networks such as Twitter are looking to implement models based on deep learning or machine learning capable of recognizing harassing comments on their platforms. However, most of these models focus on the English language and very few are adapted to Spanish. This is why, in this paper, we propose the evaluation of an RNN+LSTM neural network, as well as a BERT model, through sentiment analysis, to perform cyberbullying detection in Spanish for Ecuadorian accounts of the social network Twitter. The results obtained show a balance between execution time and accuracy for the RNN+LSTM model. In addition, evaluations of comments that are not explicitly offensive show better performance for the BERT model, which outperforms its counterpart by 20%.

Author 1: Diego A. Andrade-Segarra
Author 2: Gabriel A. León-Paredes

Keywords: Bidirectional Encoder Representations from Transformers; BERT; Cyberbullying; Natural Language Processing; Recurrent Neural Network + Long Short Term Memory; RNN+LSTM; Sentiment Analysis; Semantics; Spanish Language Processing

PDF

Paper 93: Situation Awareness Levels to Evaluate the Usability of Augmented Feedback to Support Driving in an Unfamiliar Traffic Regulation

Abstract: Driving under an unfamiliar traffic regulation using an unfamiliar vehicle configuration contributes to an increased number of traffic accidents. In these circumstances, a driver needs what is referred to as 'situation awareness' (SA). SA is divided into (level 1) perception of environmental cues, (level 2) comprehension of the perceived cues in relation to the current situation and (level 3) projection of the status of the situation in the near future. Augmented feedback (AF), on the other hand, is used to enhance the performance of a certain task. In driving, AF can be provided to drivers via in-vehicle information systems. In this paper, we hypothesize that considering the SA levels when designing AF can reduce driving errors and thus enhance road safety. To evaluate this hypothesis, we conducted a quantitative study to test the usability of a certain set of feedback and an empirical study using a driving simulator to test the effectiveness of that feedback in terms of improving driving performance, particularly at roundabouts and intersections in an unfamiliar traffic system. The results of the first study enhanced the ability of the in-vehicle information system to provide feedback considering SA levels. This information was incorporated into a driving simulator and provided to drivers. The results of the second study revealed that considering SA levels when designing augmented feedback significantly reduces driving errors at roundabouts and intersections under an unfamiliar traffic regulation.

Author 1: Hasan J. Alyamani
Author 2: Ryan Alturkiy
Author 3: Arda Yunianta
Author 4: Nashwan A. Alromema
Author 5: Hasan Sagga
Author 6: Manolya Kavakli

Keywords: Situation awareness; unfamiliar traffic regulation; augmented feedback; in-vehicle information systems

PDF

Paper 94: An Interactive Tool for Teaching the Central Limit Theorem to Engineering Students

Abstract: The sole purpose of this paper is to guide students in learning introductory statistical concepts such as probability distributions and the central limit theorem (CLT) in an intuitive way through an interactive tool. For data with different probability distributions, this paper clarifies the notions of the CLT and the use of samples in hypothesis testing of a population by demonstrating step-by-step procedures and a hands-on simulation approach. The paper discusses the relationship between the sample size and the nature of the sampling distribution, a vital element of the CLT, for different population distributions using the developed interactive tool. Finally, the impact of the developed interactive tool is measured via a survey experiment that illustrates the success of the tool in teaching the CLT.
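
The core demonstration behind such a tool can be reproduced in a few lines: draw repeated samples from a clearly non-normal population and watch the spread of the sample means shrink as the sample size grows, in line with the CLT. The snippet below is a minimal NumPy sketch of that idea with an arbitrary exponential population, not the interactive tool itself.

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)   # skewed, non-normal population

for n in (2, 10, 50):                                   # increasing sample size
    sample_means = rng.choice(population, size=(5000, n)).mean(axis=1)
    print(f"n={n:3d}: mean of sample means = {sample_means.mean():.3f}, "
          f"std of sample means = {sample_means.std():.3f} "
          f"(CLT predicts ~{population.std() / np.sqrt(n):.3f})")
```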

Author 1: Anas Basalamah

Keywords: Probability distribution; CLT; population; interactive tool; sampling distribution

PDF

Paper 95: Improving the Effectiveness of e-Learning Processes through Dynamic Programming: A Survey

Abstract: E-learning has been widely adopted as an important tool for distance education, especially in these days of the Covid-19 pandemic. However, several problems and challenges have been reported in different processes of e-learning that need to be addressed for its effective use. These include the development of student-focused content, giving learners partial control, addressing different learning styles, and so on. Recently, several efforts have been made to solve e-learning process problems using dynamic programming techniques. Dynamic programming techniques divide a problem situation into several sub-problems and dynamically solve each sub-problem based on student needs. This allows student-focused customization at each step and provides adaptive e-learning to support students with different capabilities. The objective of this study is to review different e-learning problems and challenges and how they can be addressed using dynamic programming techniques. We conclude by highlighting the importance of different dynamic programming techniques for different processes of e-learning.
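
To make the dynamic-programming idea concrete, here is one hypothetical sub-problem decomposition of the kind the abstract alludes to: choosing which learning modules to schedule within a student's available study time so that total learning value is maximized, formulated as a 0/1 knapsack. The modules, durations and values are invented purely for illustration and are not taken from the surveyed works.

```python
def plan_modules(modules, time_budget):
    """0/1 knapsack DP: best[t] = max learning value achievable in t minutes.
    Each sub-problem (a smaller time budget) is solved once and reused."""
    best = [0] * (time_budget + 1)
    choice = [[] for _ in range(time_budget + 1)]
    for name, minutes, value in modules:
        for t in range(time_budget, minutes - 1, -1):
            if best[t - minutes] + value > best[t]:
                best[t] = best[t - minutes] + value
                choice[t] = choice[t - minutes] + [name]
    return best[time_budget], choice[time_budget]

# Hypothetical modules: (name, required minutes, estimated learning value).
modules = [("intro video", 20, 3), ("quiz", 15, 4),
           ("simulation lab", 40, 9), ("reading", 30, 5)]
print(plan_modules(modules, time_budget=60))   # -> (13, ['quiz', 'simulation lab'])
```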

Author 1: Norah Alqahtani
Author 2: Farrukh Nadeem

Keywords: e-learning; e-learning challenge; dynamic programming

PDF

Paper 96: The Feasibility of Implementing a Secure C2C Credit Scoring Platform

Abstract: The continuous development of social media and online commerce, which permeate all aspects of our lives, leads to the need for a mechanism similar to the financial credit score used in traditional business. A realistic classification of users on social media is also needed, to be used in all aspects of the relationships between users themselves or between users and organizations. In this article, new metrics are established to classify users according to their creditworthiness in the transactions that take place over the Internet. The objective of this article is to design a social credit system model (SCSM) based on these new metrics, covering how users behave on the Internet, attacks on people on social media, violations of people's privacy and similar conduct, as well as buying and selling operations, executing purchase and sale orders, paying amounts of money easily and quickly, and so on. These data and their degree of importance were determined through several questionnaires directed at several segments of society. This creditworthiness measure can be used by banks, Uber, online transactions and so on.

Author 1: Mariam Musa Al-Oqabi
Author 2: Wahid Rajeh

Keywords: Social media; online commerce; social credit system; creditworthiness

PDF

Paper 97: Extended Graph Convolutional Networks for 3D Object Classification in Point Clouds

Abstract: Point clouds are a popular way to represent 3D data. Due to the sparsity and irregularity of point cloud data, learning features directly from point clouds is complex, which gives great importance to methods that consume points directly. This paper focuses on interpreting point cloud inputs using graph convolutional networks (GCN). Further, we extend this model to detect the objects found in autonomous driving datasets and the miscellaneous objects found in non-autonomous driving datasets. We propose to reduce the runtime of a GCN by allowing it to stochastically sample fewer input points from point clouds to infer their larger structure while preserving accuracy. Our proposed model offers improved accuracy while drastically decreasing graph-building and prediction runtime.
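
The runtime-reduction idea described above, stochastically sampling a subset of the input points and building the graph only over that subset, can be sketched as follows; the sampling ratio and the k-nearest-neighbour graph construction are assumptions for illustration and do not reproduce the authors' GCN.

```python
import numpy as np

def sample_and_build_graph(points, keep_ratio=0.25, k=8, seed=0):
    """Randomly keep a fraction of the points, then connect each kept point
    to its k nearest neighbours (adjacency usable as GCN input)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=max(k + 1, int(len(points) * keep_ratio)),
                     replace=False)
    sub = points[idx]
    d = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=-1)  # pairwise distances
    neighbours = np.argsort(d, axis=1)[:, 1:k + 1]                  # skip self (column 0)
    adj = np.zeros((len(sub), len(sub)), dtype=np.float32)
    rows = np.repeat(np.arange(len(sub)), k)
    adj[rows, neighbours.ravel()] = 1.0
    return sub, np.maximum(adj, adj.T)                              # symmetrize

cloud = np.random.default_rng(1).normal(size=(2048, 3))             # stand-in point cloud
sub_points, adjacency = sample_and_build_graph(cloud)
print(sub_points.shape, adjacency.shape)
```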

Author 1: Sajan Kumar
Author 2: Sai Rishvanth Katragadda
Author 3: Ashu Abdul
Author 4: V. Dinesh Reddy

Keywords: Object classification; graph convolution networks; non-autonomous driving

PDF

Paper 98: Techniques for Solving Shortest Vector Problem

Abstract: Lattice-based cryptosystems are regarded as secure, and are believed to remain secure even against quantum computers. Lattice-based cryptography relies on problems such as the Shortest Vector Problem. The Shortest Vector Problem is an instance of the lattice problems that are used as a basis for secure cryptographic schemes. For more than 30 years now, the Shortest Vector Problem has been at the heart of a thriving research field, and finding a new efficient algorithm has turned out to be out of reach. This problem has a great many applications, such as optimization, communication theory and cryptography. This paper introduces the Shortest Vector Problem and other related problems such as the Closest Vector Problem. We present the average-case and worst-case hardness results for the Shortest Vector Problem. Further, this work explores efficient algorithms for solving the Shortest Vector Problem and presents their efficiency. More precisely, this paper presents four algorithms: the Lenstra-Lenstra-Lovasz (LLL) algorithm, the Block Korkine-Zolotarev (BKZ) algorithm, a Metropolis algorithm, and a convex relaxation of SVP. Experimental results on various lattices show that the Metropolis algorithm works better than the other algorithms for varying lattice sizes.
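
As background for the algorithms surveyed above, the sketch below solves SVP exactly in the simplest non-trivial case, a two-dimensional lattice, using Lagrange-Gauss reduction (the 2D ancestor of LLL). It is meant only to convey the flavour of basis reduction; it does not reproduce LLL, BKZ, the Metropolis approach or the convex relaxation, and the example basis is arbitrary.

```python
import numpy as np

def lagrange_gauss(b1, b2):
    """Lagrange-Gauss reduction of a 2D lattice basis: after termination,
    b1 is a shortest non-zero vector of the lattice spanned by b1 and b2."""
    b1, b2 = np.array(b1, dtype=float), np.array(b2, dtype=float)
    if np.dot(b1, b1) > np.dot(b2, b2):
        b1, b2 = b2, b1
    while True:
        m = round(np.dot(b1, b2) / np.dot(b1, b1))     # nearest-integer Gram coefficient
        b2 = b2 - m * b1
        if np.dot(b2, b2) >= np.dot(b1, b1):
            return b1, b2
        b1, b2 = b2, b1

shortest, second = lagrange_gauss([201, 37], [1648, 297])   # arbitrary example basis
print(shortest, np.linalg.norm(shortest))
```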

Author 1: V. Dinesh Reddy
Author 2: P. Ravi
Author 3: Ashu Abdul
Author 4: Mahesh Kumar Morampudi
Author 5: Sriramulu Bojjagani

Keywords: Lattice; SVP; CVP; post quantum cryptography

PDF

Paper 99: Black-box Fuzzing Approaches to Secure Web Applications: Survey

Abstract: Web applications are increasingly important tools in our modern daily lives, for example in education, business transactions, and social media. Because of their prevalence, they are becoming more susceptible to different types of attacks that exploit security vulnerabilities. Exploiting these vulnerabilities may cause damage to web applications as well as to end-users. Thus, web application developers should identify vulnerabilities and fix them before an attacker exploits them. Using black-box fuzzing techniques for vulnerability identification is very popular during the web application development life cycle. These techniques promise to find vulnerabilities in web applications by constructing attacks without accessing their source code. This survey explores the research that has been done on black-box vulnerability finding and exploit construction in web applications and proposes future directions.

Author 1: Aseel Alsaedi
Author 2: Abeer Alhuzali
Author 3: Omaimah Bamasag

Keywords: Black-box fuzzing; web application security; vulnerability scanning; automatic web app testing; vulnerability detection; survey

PDF

Paper 100: Improved Trust Model to Enhance Availability in Private Cloud

Abstract: In the process of cloud service selection, it is difficult for users to choose trusted, available, and reliable cloud services. A trust model is a suitable solution to this service selection problem. In cloud computing, data availability and reliability have always been major concerns. According to research, around $285 million is lost per year due to cloud service failures, even with a 99.91 percent availability rate. Replication has long been used to improve the data availability of large-scale cloud storage systems where errors are anticipated. Compared to a small-scale environment, where each data node can have different capabilities and can only accept a limited number of requests, replica placement in cloud storage systems becomes more complicated. As a result, deciding where to keep replicas in the system so as to meet the availability criteria is an issue. To address this issue, this paper proposes a trust model that helps in selecting an appropriate node for replica placement. The trust model generates a comprehensive trust value for a data center node based on its dynamic trust value combined with QoS parameters. Simulation experiments show that the model can reflect the dynamic change of data center node subject trust, enhance the predictability of node selection, and effectively decrease the failure rate of nodes.
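
One plausible reading of the comprehensive trust value described above is a weighted combination of a node's dynamic behavioural trust with normalized QoS indicators. The sketch below shows such a combination with invented weights, indicator names and node scores, purely to illustrate the node-selection step for replica placement; it is not the paper's actual model.

```python
def comprehensive_trust(node, weights):
    """Weighted combination of dynamic trust and normalized QoS indicators
    (weights and indicator names are assumptions, not the paper's model)."""
    return sum(weights[k] * node[k] for k in weights)

weights = {"dynamic_trust": 0.4, "availability": 0.3, "bandwidth": 0.2, "latency_score": 0.1}
data_centres = {
    "dc-1": {"dynamic_trust": 0.82, "availability": 0.95, "bandwidth": 0.70, "latency_score": 0.60},
    "dc-2": {"dynamic_trust": 0.91, "availability": 0.88, "bandwidth": 0.65, "latency_score": 0.80},
    "dc-3": {"dynamic_trust": 0.60, "availability": 0.99, "bandwidth": 0.90, "latency_score": 0.70},
}
# Place the replica on the node with the highest comprehensive trust value.
best = max(data_centres, key=lambda n: comprehensive_trust(data_centres[n], weights))
print(best, round(comprehensive_trust(data_centres[best], weights), 3))
```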

Author 1: Vijay Kumar Damera
Author 2: A Nagesh
Author 3: M Nagaratna

Keywords: Trust; trust model; cloud; availability; reliability

PDF

Paper 101: Supervised Learning-based Cancer Detection

Abstract: The segmentation, detection and extraction of infected tumors from Magnetic Resonance Imaging (MRI) images are key concerns for radiologists and clinical experts, but the process is tedious and time consuming, and its accuracy depends on their experience alone. This paper suggests a new methodology in which segmentation, recognition, classification and detection of different types of cancer cells from both MRI and RGB (Red, Green, Blue) images are performed using supervised learning, a Convolutional Neural Network (CNN) and morphological operations. In this methodology, the CNN is used to classify cancer types and semantic segmentation is used to segment cancer cells. The system is trained using pixel-labeled ground truth, where every image is labeled as cancerous or non-cancerous. The system was trained with 70% of the images and validated and tested with the remaining 30%. Finally, the segmented cancer region is extracted and its percentage area is calculated. The research was carried out on the MATLAB platform on MRI and RGB images of infected cells from the BreCaHAD dataset for breast cancer, the SN-AM dataset for leukemia, the Lung and Colon Cancer Histopathological Images dataset for lung cancer and the Brain MRI Images for Brain Tumor Detection dataset for brain cancer.
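
The final step described above, extracting the segmented cancer region and computing its percentage area, reduces to counting labelled pixels in the segmentation mask. Although the paper works in MATLAB, a NumPy equivalent is sketched below under the assumption that the mask marks cancerous pixels with 1; the toy mask is invented for illustration.

```python
import numpy as np

def cancer_area_percentage(mask):
    """Percentage of image area labelled as cancerous (mask: 1 = cancer, 0 = background)."""
    mask = np.asarray(mask)
    return 100.0 * np.count_nonzero(mask) / mask.size

# Toy 4x4 segmentation mask standing in for the network's output.
mask = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])
print(f"segmented cancer region covers {cancer_area_percentage(mask):.1f}% of the image")
```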

Author 1: Juel Sikder
Author 2: Utpol Kanti Das
Author 3: Rana Jyoti Chakma

Keywords: Semantic segmentation; CNN; brain; breast; leukemia; lung

PDF

Paper 102: Analysis of the Use of Videoconferencing in the Learning Process During the Pandemic at a University in Lima

Abstract: Due to the health emergency, which forced universities to stop using their campuses as a means of teaching, many of them opted for virtual education. This affected the learning process of students and led many of them to become familiar with this new way of learning, making the use of virtual platforms more common. Many educational centers have come to rely on digital tools such as Discord, Google Meet, Microsoft Teams, Skype and Zoom. The objective of this research is to report on the impact on student learning of the use of the aforementioned videoconferencing tools. Surveys were conducted with teachers and students: 66% stated that their educational development was not affected, and most became familiar with the platforms; however, fewer than 24% considered that their academic performance had improved, and some teachers still experience difficulties at a psychological level due to this new teaching modality. In conclusion, teachers and students agree that these tools are a great help for virtual classes.

Author 1: Angie Del Rio-Chillcce
Author 2: Luis Jara-Monge
Author 3: Laberiano Andrade-Arenas

Keywords: Digital tools; health emergency; universities; video conferencing; virtual education

PDF

Paper 103: Network Forensics: A Comprehensive Review of Tools and Techniques

Abstract: With the evolution and popularity of computer networks, a tremendous number of devices are increasingly being added to global internet connectivity. Additionally, more sophisticated tools, methodologies, and techniques are being used to enhance global internet connectivity. It is also worth mentioning that individuals, enterprises, and corporate organizations are quickly appreciating the need for computer networking. However, the popularity of computer and mobile networking brings various drawbacks, mostly associated with security and data breaches. Each day, cyber criminals explore and devise complicated means of infiltrating and exploiting the security of individual and corporate networks. This means cyber or network forensic investigators must be equipped with the necessary mechanisms for identifying the nature of security vulnerabilities and must be able to identify and apprehend the respective cyber offenders correctly. Therefore, this research's primary focus is to provide a comprehensive analysis of the concept of network forensic investigation and to describe the methodologies and tools employed in network forensic investigations, with an emphasis on the study and analysis of the OSCAR methodology. Finally, this research provides an evaluative analysis of the relevant literature on network forensic investigation.

Author 1: Sirajuddin Qureshi
Author 2: Saima Tunio
Author 3: Faheem Akhtar
Author 4: Ahsan Wajahat
Author 5: Ahsan Nazir
Author 6: Faheem Ullah

Keywords: Network forensics; Tshark; Dumpcap; Wireshark; OSCAR; network security

PDF

Paper 104: Cloud Computing in Remote Sensing: Big Data Remote Sensing Knowledge Discovery and Information Analysis

Abstract: With the rapid development of remote sensing technology, our ability to obtain remote sensing data has improved to an unprecedented level, and we have entered an era of big data. Remote sensing data clearly show the characteristics of big data, such as hyperspectral bands, high spatial resolution and high temporal resolution, resulting in a significant increase in the volume, variety, velocity and veracity of data. This paper proposes a feature-supporting, scalable, and efficient data cube for time-series analysis applications, and uses spatial feature data and remote sensing data for a comparative study of water cover and vegetation change. The spatial-feature remote sensing data cube (SRSDC) described in this paper is a data cube whose goal is to provide a spatial-feature-supported, efficient, and scalable multidimensional data analysis system to handle large-scale RS data. The paper provides a high-level architectural overview of the SRSDC, which offers spatial feature repositories for storing and managing vector feature data, as well as feature translation for converting spatial feature information into query operations. The paper describes the design and implementation of a feature data cube and a distributed execution engine in the SRSDC, and uses long time-series remote sensing production and analysis as examples to evaluate their performance. Big data has become a strategic highland in the knowledge economy and a new strategic resource for humanity. The core knowledge discovery methods include supervised learning, unsupervised learning, and their combinations and variants.

Author 1: Yassine SABRI
Author 2: Fadoua Bahja
Author 3: Aouad Siham
Author 4: Aberrahim Maizate

Keywords: Remote sensing; data integration; cloud computing; big data

PDF
