IJACSA Volume 12 Issue 2

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

View Full Issue

Paper 1: Model-driven Framework for Requirement Traceability

Abstract: In software development, requirements traceability is often mandated. It is important because it supports various software development activities such as result evaluation, regression testing, and coverage analysis. Model-Driven Testing is one approach that provides a way to verify and validate requirements. However, it faces many challenges in test generation, as well as in the creation and maintenance of traceability information across test-related artifacts. This paper presents a model-based methodology for requirements traceability that leverages model transformation traceability techniques to achieve compliance with the DO-178C standard as defined in the software verification process. The paper also demonstrates and evaluates the proposed methodology using avionics case studies focusing on the functional aspects of requirements specified with the UCM (Use Case Maps) modeling language.

Author 1: Nader Kesserwan
Author 2: Jameela Al-Jaroodi

Keywords: Requirements; traceability; model transformation; DO-178C; model-driven testing; traceability scheme

PDF

Paper 2: Space Mining Robot Prototype for NASA Robotic Mining Competition Utilizing Systems Engineering Principles

Abstract: The 2017 National Aeronautics and Space Administration (NASA) Robotic Mining Competition (RMC) is an outstanding opportunity for engineering students to implement all the knowledge and experience they gained in their undergraduate years by building a robot that provides intellectual insight to NASA and helps develop innovative robotic excavation concepts. For this competition, multiple universities from all over the U.S. create teams of students and faculty members to design and build a mining robot that can traverse the challenging simulated Martian terrain, mine and excavate at least 10 kg of regolith, and deposit it in a bin. Our team’s goal was to improve on our current design and overcome DustyTRON 2.0’s limitations by analyzing them and implementing new engineering solutions. The process of improving this system enabled our team members to learn mechanical, electrical, and software engineering. DustyTRON 3.0 was divided into three sub-teams: Mechanical, Circuitry, and Software. The Mechanical sub-team focused on the mechanical structure, robot mobility, stability, and weight distribution. The Circuitry sub-team focused on electrical components such as batteries, wiring, and motors. The Software sub-team focused on programming the NVIDIA TK1, the Arduino controller, and camera integration. This paper outlines the detailed work, following systems engineering principles, to complete this project, from research through the design process and robot building to competing at the Kennedy Space Center. Only 54 teams from all over the U.S. were invited to participate; the DustyTRON team represented the state of Texas, placed 29th, and received the “Innovative Design” award.

Author 1: Tariq Tashtoush
Author 2: Jesus A. Vazquez
Author 3: Julian Herrera
Author 4: Liliana Hernandez
Author 5: Lisa Martinez
Author 6: Michael E. Gutierrez
Author 7: Osiris Escamilla
Author 8: Rosaura E. Martinez
Author 9: Alejandra Diaz
Author 10: Jorge Jimenez
Author 11: Jose Isaac Segura
Author 12: Marcus Martinez

Keywords: NASA robotic mining competition; mining robot; ice regolith; autonomous; NASA; space exploration; systems life-cycle; mechanical structure design; control system; systems engineering; software development

PDF

Paper 3: Evaluating the Accuracy of Models for Predicting the Speech Acceptability for Children with Cochlear Implants

Abstract: This study developed a model for predicting the speech acceptability, to listeners with healthy hearing, of children with cochlear implants using multiple regression analysis, support vector regression, and random forest, and evaluated the prediction performance of the models by comparing mean absolute errors and root mean squared errors. The study targeted 91 hearing-impaired children between four and eight years old who had worn cochlear implants (CI) for at least one year and less than five years. Speech data of children wearing CI were collected through two tasks: speaking and reading. The outcome variable, speech acceptability of children wearing CI, was evaluated by 80 college students (freshmen and sophomores) who had no prior knowledge of children with cochlear implants. The results showed that the random forest algorithm (mean absolute error = 0.81 and root mean squared error = 0.108) was the best model for predicting the speech acceptability of children wearing CI. These results imply that the predictive performance of random forest will be the best among ensemble models when developing a machine learning model using speech data of children wearing CI.
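
A minimal sketch of the evaluation the abstract describes, training the three regressors and comparing MAE and RMSE, assuming scikit-learn and synthetic stand-in data (the study's speech features are not public):

```python
# Sketch: compare multiple regression, SVR, and random forest by MAE/RMSE.
# Feature matrix and target below are made-up placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(91, 10))                        # e.g. acoustic features per child
y = 0.5 * X[:, 0] + rng.normal(scale=0.3, size=91)   # stand-in acceptability score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "multiple regression": LinearRegression(),
    "SVR": SVR(kernel="rbf"),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mae = mean_absolute_error(y_te, pred)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: MAE={mae:.3f} RMSE={rmse:.3f}")
```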

Author 1: Haewon Byeon

Keywords: Cochlear implants; speech acceptability; support vector regression; random forest; mean absolute errors

PDF

Paper 4: Advanced Debugger for Arduino

Abstract: This article describes an improved version of our source-level debugger for Arduino. The debugger can be used to debug Arduino programs using the GNU debugger GDB with Eclipse or Visual Studio Code as the visual front-end. It supports all the functionality expected from a debugger, such as stepping through the code, setting breakpoints, or viewing and modifying variables. These features are otherwise not available for the popular AVR-based Arduino boards without an external debug probe and modification of the board. With the presented debugger it is only necessary to add a program library to the user program and optionally replace the bootloader. The debugger can speed up program development and make the Arduino platform even more usable as a tool for controlling various experimental apparatus or for teaching computer programming. The article focuses on the new features and improvements we have made in the debugger since its introduction in 2016. The most important improvement over the old version is support for inserting breakpoints into program memory, which allows debugging without affecting the speed of the debugged program and allows inserting breakpoints into interrupt service routines. Further enhancements include loading the program via the debugger and newly added support for Arduino Mega boards.

Author 1: Jan Dolinay
Author 2: Petr Dostálek
Author 3: Vladimír Vašek

Keywords: Arduino; debugger; microcontroller; software debugging

PDF

Paper 5: Transliterating Nôm Scripts into Vietnamese National Scripts using Statistical Machine Translation

Abstract: Nôm scripts were used as the Vietnamese writing system from the 10th century to the early 20th century. During this period, Nôm scripts were the means to record a broad range of historical events, literary works, and medical knowledge, as well as the wisdom of many other domains. Unfortunately, since hardly any native Vietnamese speaker can read Nôm scripts nowadays, these valuable documents have not been fully harnessed. To address this gap, it is necessary to build an automatic transliteration system that can support us in decoding the ancient scripts and gaining knowledge of our Vietnamese ancestors. This study focuses on categorizing and reviewing the current progress of Statistical Machine Translation (SMT) approaches to transliterating Nôm scripts into Vietnamese national scripts. In this paper, we discuss the differences between Nôm scripts and Vietnamese national scripts, systematically compare SMT models for transliterating Nôm scripts into Vietnamese national scripts, and offer a thorough outlook on several promising research directions.

Author 1: Dien Dinh
Author 2: Phuong Nguyen
Author 3: Long H. B. Nguyen

Keywords: Statistical machine translation; automatic transliteration; Nôm script (chữ Nôm); Vietnamese national script (chữ Quốc ngữ)

PDF

Paper 6: Improve the Effectiveness of Image Retrieval by Combining the Optimal Distance and Linear Discriminant Analysis

Abstract: In image retrieval with relevance feedback, classification and distance calculation have a great influence on image retrieval accuracy. In this paper, we propose an image retrieval method called ODLDA (image retrieval using the optimal distance and linear discriminant analysis). The proposed method can effectively exploit the user’s feedback from relevant and irrelevant image sets, using linear discriminant analysis to find a linear projection with an improved similarity measure. Experimental results on two benchmark datasets confirm the superiority of the proposed method.

Author 1: Phuong Nguyen Thi Lan
Author 2: Tao Ngo Quoc
Author 3: Quynh Dao Thi Thuy
Author 4: Minh-Huong Ngo

Keywords: Content-based image retrieval; deep learning; similarity measures; Mahalanobis metric distance; linear discriminant analysis

PDF

Paper 7: HADOOP: A Comparative Study between Single-Node and Multi-Node Cluster

Abstract: Data analysis has become a challenge in recent years as the volume of data generated has become difficult to manage; therefore, more hardware and software resources are needed to store and process this huge amount of data. Apache Hadoop is a free framework, widely used thanks to the Hadoop Distributed File System (HDFS) and its ability to integrate with other data processing and analysis components such as MapReduce for processing data, Spark for in-memory data processing, Apache Drill for SQL on Hadoop, and many others. In this paper, we analyze the Hadoop framework implementation, making a comparative study between single-node and multi-node clusters on Hadoop. We explain in detail the two layers at the base of the Hadoop architecture: the HDFS layer with its daemons NameNode, Secondary NameNode, and DataNodes, and the MapReduce layer with its JobTracker and TaskTracker daemons. This work is part of a larger effort aiming to perform data processing in Data Lake structures.
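
A minimal sketch of how the MapReduce layer described above processes data, using a Hadoop Streaming word-count job written in Python; the same script runs unchanged on a single-node or multi-node cluster, and the paths and jar name in the usage comment are placeholders:

```python
#!/usr/bin/env python3
# Sketch of a Hadoop Streaming word-count job (mapper and reducer in one file).
# Hypothetical invocation:
#   hadoop jar hadoop-streaming.jar -input /data/in -output /data/out \
#     -mapper "wordcount.py map" -reducer "wordcount.py reduce"
import sys

def map_phase():
    # Emit "word<TAB>1" for every word; Hadoop sorts by key between phases.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reduce_phase():
    # Input arrives grouped by key; sum consecutive counts per word.
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    map_phase() if sys.argv[1:] == ["map"] else reduce_phase()
```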

Author 1: Elisabeta ZAGAN
Author 2: Mirela DANUBIANU

Keywords: Hadoop; HDFS; single-node cluster; multi-node cluster; namenode; secondary namenode; datanodes; jobtracker; tasktrackers

PDF

Paper 8: Technology in Education: Attitudes Towards using Technology in Nutrition Education

Abstract: Digital technologies have influenced how teachers conduct their daily practice and how students learn in classrooms. In addition, technology is increasingly being deployed in the classroom environment via a combination of kinesthetic, visual, and auditory approaches. This paper aims to investigate teachers’ and students’ attitudes towards using technology in nutrition education. It then discusses the impact of online games on enhancing the nutrition education of students, and the implications and findings of applying learning games to the curriculum from both teachers’ and students’ perspectives.

Author 1: Asrar Sindi
Author 2: James Stanfield
Author 3: Abdullah Sheikh

Keywords: Technology; application; online games; nutrition; education

PDF

Paper 9: Gender Differences in the Perception of a Student Information System

Abstract: There is growing recognition that electronic student information systems support college administrations and enhance student performance. These systems must fulfill their users’ needs, which requires understanding gender differences among users. This study analyzes gender variations in the utilization of online student information systems (SIS), with its central concern being how the dynamics of user experience (UX) are affected. A broad agreement is evident throughout the literature that gender is a crucial aspect when assessing human-computer interactions. Consequently, usability factors are brought into question, although there is some indication among researchers that too much weight is being applied. Study findings are gathered to represent the hedonic and pragmatic qualities of users, with clarifications of students’ perspectives deduced from qualitative methods, together with a UX examination conducted at Kuwait’s Public Authority for Applied Education and Training (PAAET) institute. Results suggest that the differing approaches and habits the two genders have toward UX should not be considered substantial, with the overall sample recording a perception of UX that is “slightly positive”. Furthermore, this research highlights difficulties with usability that developers may wish to take on board for system upgrades.

Author 1: Rana Alhajri
Author 2: Ahmed Al-Hunaiyyan
Author 3: Bareeq Alghannam
Author 4: Abdullah Alshaher

Keywords: Gender differences; student information system; human-computer interaction; usability; user experience; perceptions

PDF

Paper 10: Student Information System: Investigating User Experience (UX)

Abstract: There is growing recognition that electronic student information systems support college administrations and enhance student performance. These systems must fulfill their users’ needs (efficiently achieving their academic goals) while also providing a positive user experience (UX). This study used quantitative and qualitative approaches to elucidate students’ perceptions and investigate UX with the SIS currently used at the Public Authority for Applied Education and Training (PAAET), a higher education institution in Kuwait. Survey data collected from 645 PAAET students were analyzed to determine their perceptions of and experiences using this SIS. The findings revealed that students had a slightly positive UX with this SIS. The system’s perspicuity, stimulation, and dependability were rated slightly higher than its novelty, attractiveness, and efficiency. The most pertinent usability issues concerning human interaction with systems were identified and discussed, in the hope that this will allow officials and SIS developers alike to make relevant and impactful improvements to newer versions of these systems. These results shed light on the need for continuous SIS evaluation and a broad research scope to develop innovative SIS with intelligent functions for novel activities. Such features enhance students’ interactivity and productivity, which encourages their academic success.

Author 1: Ahmed Al-Hunaiyyan
Author 2: Rana Alhajri
Author 3: Bareeq Alghannam
Author 4: Abdullah Al-Shaher

Keywords: Student information system; user experience; usability; human-computer interaction; e-learning

PDF

Paper 11: Mitigating Denial of Service Signaling Threats in 5G Mobile Networks

Abstract: With the advent of 5th generation (5G) technology, the mobile paradigm is witnessing a tremendous evolution involving the development of a plethora of new applications and services. This enormous technological growth is accompanied by a huge signaling overhead among 5G network elements, especially with the emergence of massive device connectivity. This heavy signaling load will certainly be associated with a significant security threat landscape, including denial of service (DoS) attacks against the 5G control plane. In this paper, we analyse the performance of a defense mechanism based on a randomization technique designed to mitigate the impact of DoS signaling attacks in 5G systems. Based on massive machine-type communications (mMTC) traffic patterns, the simulation results show that the proposed randomization mechanism significantly decreases the signaling volume generated by the new 5G Radio Resource Control (RRC) model under normal and malicious operating conditions, by up to 70%, while avoiding unnecessary resource consumption.

Author 1: Raja Ettiane
Author 2: Rachid EL Kouch

Keywords: 5G New Radio (NR) network; Radio Resource Control (RRC) state model; Denial of Service (DoS); signaling threats; randomization

PDF

Paper 12: Regulation Proposal for the Implementation of 5G Technology in Peru

Abstract: Telecommunications play a very important role in people’s lives. For years, mobile communication technology has evolved, reaching 5G, which is more advanced than 4G and thus more comfortable for the user. In Peru, however, 5G technology has not yet been implemented because a large part of the population fears its antennas. Another current fear is the spread of COVID-19, fueled by a lot of false information with poor scientific support; even though the Ministry of Transport and Communications (MTC) of Peru has denied this information, people still hold on to these fears. For these reasons, the present investigation aims to assess the benefits that 5G technology would bring to the country and proposes a regulatory framework for the radio spectrum that this technology will occupy in Peru. By evaluating a regulation proposal for 5G technology in Peru, it is shown that the implementation of this technology will bring benefits to the social and economic sectors of the country.

Author 1: Luis Nunez-Tapia

Keywords: 5G; regulation; antennas; radio spectrum

PDF

Paper 13: A Meta-analysis of Educational Data Mining for Predicting Students Performance in Programming

Abstract: An essential skill amid the 4th industrial revolution is the ability to write good computer programs. Therefore, higher education institutions are offering computer programming as a module not only in computer-related programmes but in other programmes as well. However, the number of students that underperform in programming is significantly higher than in non-programming modules. It is, therefore, crucial to be able to accurately predict the performance of students pursuing programming, since this will help in identifying students that may underperform so that the necessary support interventions can be timeously put in place to assist them. The objective of this study is to identify the most effective Educational Data Mining approaches used to detect students that may underperform in computer programming. The PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) approach was used in conducting the meta-analysis. The databases searched were ACM, Google Scholar, IEEE, ProQuest, Science Direct, and Scopus. A total of 11 scientific research publications were included in the meta-analysis, out of 220 articles identified through database searching. The residual amount of heterogeneity was high (τ² = 0.03; I² = 99.46%, heterogeneity chi-square = 1210.91, degrees of freedom = 10, P < 0.001). The estimated pooled performance of the algorithms was 24% (95% CI: 13%, 35%). Meta-regression analysis indicated that none of the included moderators influenced the heterogeneity of the studies. The plot of effect estimates against their standard errors indicated publication bias, with a P-value of 0.013. These meta-analysis findings indicate that the pooled estimate of the algorithms is high.
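
The heterogeneity statistics reported above (τ², I², Cochran's Q) follow from standard random-effects pooling. A minimal sketch of the DerSimonian-Laird computation, with made-up effect sizes and variances in place of the eleven included studies:

```python
# Sketch: DerSimonian-Laird random-effects meta-analysis pooling.
# The effect sizes and variances are illustrative placeholders.
import numpy as np

effects = np.array([0.21, 0.30, 0.18, 0.27, 0.24])     # per-study effect sizes
variances = np.array([0.002, 0.003, 0.001, 0.004, 0.002])

w = 1.0 / variances                      # fixed-effect (inverse-variance) weights
fixed = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed) ** 2)   # Cochran's Q heterogeneity statistic
df = len(effects) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)            # between-study variance tau^2
I2 = max(0.0, (Q - df) / Q) * 100        # % of variability due to heterogeneity

w_re = 1.0 / (variances + tau2)          # random-effects weights
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"tau^2={tau2:.4f}  I^2={I2:.1f}%  Q={Q:.2f} (df={df})")
print(f"pooled={pooled:.3f}  95% CI=({pooled - 1.96*se:.3f}, {pooled + 1.96*se:.3f})")
```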

Author 1: Devraj Moonsamy
Author 2: Nalindren Naicker
Author 3: Timothy T. Adeliyi
Author 4: Ropo E. Ogunsakin

Keywords: Data mining; educational data mining; machine learning; performance; programming

PDF

Paper 14: Adaptive Congestion Window Algorithm for the Internet of Things Enabled Networks

Abstract: Heterogeneous, resource-constrained devices in the Internet of Things (IoT) communicate, collect, and share information from the environment using sensors and other high-speed technologies, which generates tremendous traffic and leads to congestion in IoT networks. This paper proposes an Adaptive Congestion Window (ACW) algorithm for the Internet of Things that adapts to traffic changes in the network. The main objective is to increase the packet delivery ratio and reduce delay while enhancing throughput, which can be attained by avoiding congestion. In the proposed algorithm, the congestion window size depends on the transmission rate of the source node, the available bandwidth of the path, and the receiving rate of the destination node. The congestion window size is altered when a link on the path needs to be shared with, or released by, other transmission paths in the network. The proposed ACW algorithm is simulated and evaluated in terms of packet delivery ratio, throughput, and delay. Its performance is compared with the IoT Congestion Control Algorithm (IoT-CCA) and the Improved Stream Control Transmission Protocol (IMP-SCTP), proving better by 27.4%, 11.8%, and 33.7% than IoT-CCA and by 44.1%, 22.6%, and 50% than IMP-SCTP with respect to packet delivery ratio, throughput, and delay, respectively. The variation of congestion window size with time is also presented.
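
A schematic sketch of the window rule the abstract describes: the window tracks the tightest of the sender's transmission rate, the path bandwidth, and the receiver's rate, and shrinks when the link is shared. All names, units, and the exact update form are illustrative assumptions, not the authors' ACW code:

```python
# Sketch: congestion window derived from the bottleneck of three rates.
def adaptive_window(send_rate_pps, path_bw_pps, recv_rate_pps, rtt_s,
                    shared_paths=1):
    """Return a congestion window (in packets) for one RTT."""
    bottleneck = min(send_rate_pps, path_bw_pps, recv_rate_pps)
    window = bottleneck * rtt_s      # packets in flight per RTT (bandwidth-delay product)
    window /= shared_paths           # shrink when the link is shared by other paths
    return max(1, int(window))

# Example: 1000 pkt/s sender, 800 pkt/s path, 900 pkt/s receiver, 50 ms RTT,
# link shared by two transmissions -> window of 20 packets.
print(adaptive_window(1000, 800, 900, 0.05, shared_paths=2))
```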

Author 1: Ramadevi Chappala
Author 2: Ch.Anuradha
Author 3: P. Sri Ram Chandra Murthy

Keywords: Congestion window; internet of things; packet delivery ratio; throughput

PDF

Paper 15: AMBA: Adaptive Monarch Butterfly Algorithm based Information of Transfer Scheduling in Cloud for Big Information Application

Abstract: Cloud computing is currently a prominent technology with a great deal of research potential in areas such as resource allocation, data transfer scheduling, security, and privacy. Data transfer scheduling is one of the significant issues in improving the efficiency of all cloud-based services. In cloud computing, data transfer scheduling is used to allocate tasks to the most suitable resources for execution. There are various types of data transfer scheduling algorithms. Existing work has examined issues such as execution time, execution cost, high delay, complexity, and high data transfer cost, as well as various optimization problems. To tackle these problems, this paper proposes a new adaptive data transfer scheduling approach that combines the Monarch Butterfly and Genetic algorithms (AMBA). The concept is to develop an optimal algorithm for scheduling data transfers efficiently, which helps reduce data transfer time. The performance of the proposed methodology is analyzed in terms of standard evaluation metrics.

Author 1: D. Sugumaran
Author 2: C. R. Bharathi

Keywords: Information transfer scheduling; AMBA; throughput maximization; migration operation

PDF

Paper 16: Robotic Education in the 21st Century: Teacher Acceptance of Lego Mindstorms as Powerful Educational Tools

Abstract: Acceptance of robotic technology in education is a crucial issue in the Industry 4.0 era. This study aims to explore teachers’ acceptance of Lego Mindstorms EV3, one kind of robotic technology, as a learning resource that can develop teachers’ and students’ skills. The Technology Acceptance Model (TAM) was applied using questionnaires, which were answered by 22 elementary school teachers who had experience with Lego Mindstorms EV3 kits in a workshop. The data were analyzed using descriptive statistics, correlation, and regression analyses. Based on the acceptance testing of Lego Mindstorms EV3 with the TAM model, the results showed that subjective norms (SN) and self-efficacy (SE), as external variables, influenced the acceptance of Lego Mindstorms EV3 as a learning tool by teachers. Teachers’ SN had a positive correlation with perceived usefulness (PU), perceived ease of use (PE), and behavioral intention to use (BI). Teachers’ self-efficacy was significant in predicting PE and BI. PU and PE had a positive effect on teachers’ attitude toward using Lego Mindstorms EV3 and their continued use. Finally, most teachers showed positive reactions to Lego Mindstorms EV3 as an educational tool.

Author 1: Mardhiah Masril
Author 2: Ambiyar
Author 3: Nizwardi Jalinus
Author 4: Ridwan
Author 5: Billy Hendrik

Keywords: Education; TAM; teacher acceptance; Lego; Robotic

PDF

Paper 17: Simulation Study on Blood Flow Mechanism of Vein in Existence of Different Thrombus Size

Abstract: Blood velocity is expected to serve as a parameter for detecting blood abnormalities such as the existence of a thrombus. Proper blood flow in veins is important to ensure the effective return of deoxygenated blood to the heart. However, it is very challenging to recognize the vessel condition due to the inability to visualize the presence of a thrombus in the vessel; the noise in images obtained from ultrasound scanning is one of the obstructions to recognizing it. Considering this difficulty, this study aims to assess the velocity and vorticity at the vein valve region using the Computational Fluid Dynamics (CFD) method. The velocity of blood and the size of the valve orifice are considered important parameters in modeling the vein, since stenosis and velocity irregularities in blood vessels are known risk factors for thrombus formation. From the simulation, the velocity contour plot of the blood flow can be visualized clearly. The blood distribution was presented using velocity profiles, while the movement of fluid particles was shown by the velocity vector. The low blood velocity clearly shows the low-velocity regions, which reside at the cusp area and at the beginning of the valve leaflets. Therefore, the present study is able to visualize and evaluate the probable location of thrombus development in the blood vessel.

Author 1: Nabilah Ibrahim
Author 2: Nur Shazilah Aziz
Author 3: Muhammad Kamil Abdullah
Author 4: Gan Hong Seng

Keywords: Blood velocity profile; velocity contour plot; computational fluid dynamic (CFD); thrombus; vein valve

PDF

Paper 18: Singer Gender Classification using Feature-based and Spectrograms with Deep Convolutional Neural Network

Abstract: The task of music information retrieval (MIR) is gaining much importance as the digital cloud grows rapidly. An important attribute in MIR is the singer ID, which helps effectively during the recommendation process. It is highly difficult to identify a singer in music because the number of singers available in the digital cloud is large. Identifying the gender of a singer may simplify the task of singer identification and also helps with recommendation. Hence, an effort has been made to detect the gender of a singer. Two different datasets have been considered: one collected from the Indian film industry, containing details of 20 singers across four regional languages, and the standard Artist20 dataset. Various spectral, temporal, and pitch-related features have been used to obtain better accuracy. The features considered for this task are Mel-frequency cepstral coefficients (MFCCs), pitch, and the velocity and acceleration of MFCCs. Experiments were conducted on various combinations of these features using artificial neural networks (ANNs) and random forest (RF). Further, genetic algorithm-based feature selection (GAFS) was used to select suitable features from the best combination obtained. Moreover, we also utilized the recently popular convolutional neural networks (CNNs) with spectrograms to obtain better accuracy than the traditional feature vector. An average accuracy of 91.70% is obtained for both Indian and Western clips, an improvement of 3% over hand-engineered features.
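
A minimal sketch of the feature extraction the abstract describes (MFCCs plus their velocity and acceleration and a pitch track, pooled into one clip-level vector), assuming librosa; the audio file name is a placeholder:

```python
# Sketch: MFCC + delta + delta-delta + pitch features for one clip,
# averaged over time to form a fixed-length vector for an ANN/RF classifier.
import numpy as np
import librosa

y, sr = librosa.load("clip.wav", sr=22050)           # hypothetical audio clip
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # spectral envelope
vel = librosa.feature.delta(mfcc)                    # velocity of MFCCs
acc = librosa.feature.delta(mfcc, order=2)           # acceleration of MFCCs
f0 = librosa.yin(y, fmin=65, fmax=600, sr=sr)        # pitch track

# One fixed-length feature vector per clip: mean of each coefficient/track.
features = np.concatenate([mfcc.mean(axis=1), vel.mean(axis=1),
                           acc.mean(axis=1), [np.nanmean(f0)]])
print(features.shape)   # (40,) -> input to ANN / random forest
```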

Author 1: Mukkamala S.N.V. Jitendra
Author 2: Y. Radhika

Keywords: Gender identification; spectrogram; genetic algorithm-based feature selection (GAFS); music information retrieval (MIR); music recommendation; singer’s gender identification

PDF

Paper 19: A Hybrid Approach for Cost Effective Prediction of Software Defects

Abstract: Identifying software defects during the early stages of the software development life cycle reduces project effort and cost. Hence, much research has been done on finding the defect proneness of a software module using machine learning approaches. The main problems with software defect data are cost sensitivity and class imbalance. The cost problem refers to the fact that predicting a defective module as non-defective incurs a much higher penalty than predicting a non-defective module as defective. In our work, we propose a hybrid approach to address the cost problem in software defect data, using a bagging technique with an Adaptive Neuro-Fuzzy Inference System (ANFIS) as the base classifier. In addition, we address the class imbalance and high dimensionality problems using ANFIS and principal component analysis, respectively. We conducted experiments with our proposed approach on software defect datasets downloaded from the NASA dataset repository and compared it with approaches from the literature. We observed that the area under the ROC curve (AUC) for the proposed approach improved by approximately 15% compared with the most efficient approach reported in the literature.

Author 1: Satya Srinivas Maddipati
Author 2: Malladi Srinivas

Keywords: Cost effective problem; principal component analysis; adaptive neuro fuzzy inference system; area under ROC curve

PDF

Paper 20: Design of Modern Distributed Systems based on Microservices Architecture

Abstract: Distributed systems are very commonplace nowadays and have seen enormous growth in use during the past few years. The drive to design systems that are robust, scalable, reliable, secure, and fault-tolerant is among the many reasons for this development and growth. Distributed systems represent a shift from traditional ways of building systems, where the whole system is concentrated in a single and indivisible unit. The latest architectural changes are progressing toward what is known as microservices. Monolithic systems, which can be considered the ancestors of microservices, cannot fulfill the requirements of today’s big and complex applications. In this paper, we decompose a monolithic application into microservices using three different architectural patterns and draw comparisons between the two architectural styles using detailed metrics generated with the Apache JMeter tool. The application, which is fictive, is created with the .NET Framework and uses the MVC pattern. Before testing with Apache JMeter, the two comparable apps are deployed in nearly identical hosting environments in order to obtain meaningful results. Using the generated data, we deduce the advantages and disadvantages of the two architectural styles.

Author 1: Isak Shabani
Author 2: Endrit Mëziu
Author 3: Blend Berisha
Author 4: Tonit Biba

Keywords: Distributed systems; microservice; monolithic; web services; Jmeter

PDF

Paper 21: Feature Engineering for Human Activity Recognition

Abstract: Human activity recognition (HAR) techniques can significantly contribute to the enhancement of health and life care systems for elderly people. These techniques, which generally operate on data collected from wearable sensors or those embedded in most smart phones, have therefore attracted increasing interest recently. In this paper, a random forest-based classifier for human activity recognition is proposed. The classifier is trained using a set of time-domain features extracted from raw sensor data after being segmented into 5-second windows. A detailed study of model parameter selection is presented using the statistical t-test. Several simulation experiments are conducted on the WHARF accelerometer benchmark dataset to compare the performance of the proposed classifier with support vector machines (SVM) and artificial neural networks (ANN). The proposed model shows high recognition rates for different activities in the WHARF dataset compared to other classifiers using the same set of features. Furthermore, it achieves an overall average precision of 86.1%, outperforming the recognition rate of 79.1% reported in the literature using Convolutional Neural Networks (CNN) on the WHARF dataset. From a practical point of view, the proposed model is simple and efficient. Therefore, it is expected to be suitable for implementation in hand-held devices such as smart phones, with their limited memory and computational resources.
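
A minimal sketch of the pipeline described above: raw tri-axial accelerometer data segmented into 5-second windows, simple time-domain features per window, and a random forest classifier. The 32 Hz sampling rate matches WHARF's wrist-worn recordings, but the data, labels, and feature set here are synthetic stand-ins:

```python
# Sketch: windowing + time-domain features + random forest for HAR.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 32                      # assumed sampling rate (Hz)
WIN = 5 * FS                 # 5-second windows

def window_features(sig):
    """Simple time-domain features for one window of shape (WIN, 3)."""
    feats = []
    for axis in range(sig.shape[1]):
        x = sig[:, axis]
        feats += [x.mean(), x.std(), x.min(), x.max(),
                  np.percentile(x, 25), np.percentile(x, 75)]
    return feats

rng = np.random.default_rng(0)
raw = rng.normal(size=(FS * 600, 3))                    # 10 min of fake tri-axial data
labels = rng.integers(0, 4, size=raw.shape[0] // WIN)   # fake activity labels

X = np.array([window_features(raw[i * WIN:(i + 1) * WIN])
              for i in range(len(labels))])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.score(X, labels))
```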

Author 1: Basma A. Atalaa
Author 2: Ibrahim Ziedan
Author 3: Ahmed Alenany
Author 4: Ahmed Helmi

Keywords: Human activity recognition; random forest; feature engineering; sensor signal processing

PDF

Paper 22: Digitization of Supply Chains as a Lever for Controlling Cash Flow Bullwhip: A Systematic Literature Review

Abstract: Due to the new possibilities offered by digital technologies, more and more companies are embarking on a process of digitizing their supply chains. This dynamic seems to be the opportunity to analyse the impact that digital technologies may have on one of the phenomena that disrupt financial flows within supply chains and can affect companies’ treasury, namely the cash flow bullwhip (CFB). The results of the systematic literature review that was carried out make it possible to affirm that several technologies can contribute positively to limiting this phenomenon by acting on its operational causes, which are the reliability of forecasts, batch orders, the fluctuation of sales prices, rationing games, and lead times.

Author 1: Hicham Lamzaouek
Author 2: Hicham Drissi
Author 3: Naima El Haoud

Keywords: Cash flow bullwhip; digital technologies; digitization; supply chain; cash flow; bullwhip effect

PDF

Paper 23: IoT System for Vital Signs Monitoring in Suspicious Cases of Covid-19

Abstract: The world is currently going through a pandemic caused by Covid-19, and the World Health Organization recommends staying isolated from other people. This research presents the development of a prototype based on the Internet of Things that measures three very important vital signs: heart rate, blood oxygen saturation, and body temperature. These are measured through sensors connected to a NodeMCU board with an integrated Wi-Fi module, which transmits the data to an IoT platform where it can be displayed, achieving real-time monitoring of the vital signs of a patient suspected of having Covid-19.

Author 1: John Amachi-Choqque
Author 2: Michael Cabanillas-Carbonell

Keywords: Covid-19; vital signs; internet of things; NodeMCU; IoT platform

PDF

Paper 24: Factors Influencing Master Data Quality: A Systematic Review

Abstract: Master data refers to the data that represents the core business of an organization; it is shared among different applications, departments, and organizations and is highly valued as an important organizational asset. Despite the clear benefits of master data, mainly in decision making and organizational performance, its quality is at risk due to the critical challenges organizations face in managing master data quality. Hence, the primary aim of this study is to identify factors influencing master data quality through the lens of total quality management, adopting the systematic literature review method. The study proposes 19 factors that influence the quality of master data, namely data governance, information system, data quality policy and standard, data quality assessment, integration, continuous improvement, teamwork, data quality vision and strategy, understanding of the systems and data quality, data architecture management, personnel competency, top management support, business driver, legislation, information security management, training, change management, customer focus, and data supplier management, which can be categorized into five components: organizational, managerial, stakeholder, technological, and external. Another important finding is the identification of factors that distinguish master data from other data domains, which are business driver, organizational structure, organizational culture, performance evaluation and rewards, evaluation of cost/benefit tradeoffs, physical environment, risk management, storage management, usage of data, internal control, input control, staff participation, middle management’s commitment, the role of data quality and the data quality manager, audit, and personnel relations. It is expected that the findings of this study will contribute to a deeper understanding of the factors that lead to improved master data quality.

Author 1: Azira Ibrahim
Author 2: Ibrahim Mohamed
Author 3: Nurhizam Safie Mohd Satar

Keywords: Quality management; total quality management; data quality; data quality management; master data; master data quality; master data quality management; systematic literature review

PDF

Paper 25: Hybrid Feature Selection and Ensemble Learning Methods for Gene Selection and Cancer Classification

Abstract: A promising research field in bioinformatics and data mining is the classification of cancer based on gene expression data. Not all genes support efficient sample classification; thus, to identify the genes that help distinguish samples efficiently, a robust feature selection method is needed. Redundancy in gene expression data contributes to low classification performance. This paper presents combinations of gene selection and classification methods using ranking and wrapper approaches. In the ranking stage, information gain was used to reduce the dimensionality to 1% and 5% of the original feature set. Then, in the wrapper stage, K-nearest neighbors and Naïve Bayes were used with Best First, Greedy Stepwise, and Rank Search. Several combinations were investigated because it is known that no single model can give the best results on different datasets in all circumstances. Therefore, combining multiple feature selection methods and applying different classification models could provide a better decision on the final predicted cancer types. Compared with existing classifiers, the proposed ensemble gene selection methods obtained comparable performance.
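
A minimal sketch of the two-stage selection described above, with scikit-learn stand-ins: mutual information approximates the information-gain ranking that keeps the top 5% of genes, and forward sequential selection with KNN stands in for the Greedy Stepwise wrapper; the microarray data are synthetic:

```python
# Sketch: ranking filter (top 5%) followed by a greedy wrapper with KNN.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=100, n_features=500, n_informative=20,
                           random_state=0)   # stand-in for microarray data

# Stage 1 (ranking): keep the top 5% of features by mutual information.
scores = mutual_info_classif(X, y, random_state=0)
top = np.argsort(scores)[-int(0.05 * X.shape[1]):]
X_ranked = X[:, top]

# Stage 2 (wrapper): greedy forward selection guided by KNN accuracy.
knn = KNeighborsClassifier(n_neighbors=3)
sfs = SequentialFeatureSelector(knn, n_features_to_select=10,
                                direction="forward", cv=5)
sfs.fit(X_ranked, y)
print("selected genes:", top[sfs.get_support()])
```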

Author 1: Sultan Noman Qasem
Author 2: Faisal Saeed

Keywords: Microarray; gene selection; ensemble classification; cancer classification; gene expression

PDF

Paper 26: Visibility and Ethical Considerations of Pakistani Universities Researchers on Google Scholar

Abstract: Maximizing visibility through academic profiling sites is crucial in the academic world to improve the readership of published research papers and support the constant evaluation of research quality. In this article, the authors focus on the visibility of Pakistani university scholars on Google Scholar (GS). An intelligent web bot (MAKGBOT) was developed to collect the scholarly data of all Pakistani scholars whose data is publicly available on Google Scholar. The findings of this research show that 87% of Pakistani universities have a presence on Google Scholar. The research performance of scholars is analyzed based on the last five years of data, from 2016 to 2020. Furthermore, the analysis reports the level of scholarly activity in all provinces and autonomous areas of Pakistan. This paper concludes by discussing the ethical issue of misrepresentation of information on public profiles and its consequences for the rankings of legitimate scholars.

Author 1: Muhammad Asghar Khan
Author 2: Tariq Rahim Soomro

Keywords: Google scholar; research visibility; Pakistani researchers; ethical considerations; web bot; research in Pakistan

PDF

Paper 27: Early Detection of Severe Flu Outbreaks using Contextual Word Embeddings

Abstract: The purpose of automated health surveillance systems is to predict the emergence of a disease. In most cases, these systems use a text categorization model to classify any clinical text into a category corresponding to an illness. The problem arises when the target classes refer to diseases sharing much information, such as symptoms. The classifier will then have difficulty discriminating the disease under surveillance from other conditions of the same family, causing an increase in the misclassification rate. Clinical texts contain keywords carrying information relevant to distinguishing diseases with similar symptoms. However, these specific words are rare and sparse; therefore, they have a minor impact on the performance of machine learning models. Assuming that emphasizing specific terms contributes to improving classification performance, we propose an algorithm that enriches training samples with terms semantically similar to specific terms using the deep contextualized word embeddings ELMo. Next, we devise a weighting scheme combining chi-square and semantic scores to reflect the relatedness between features and the disease under surveillance. We evaluate our model using the SVM algorithm trained on the i2b2 dataset supplemented by documents collected from Ibn Sina hospital in Rabat. Experimental results show a clear improvement in classification performance over baseline methods, with an F-measure reaching 86.54%.
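
A minimal sketch of the weighting scheme described above: each term's chi-square relevance score is combined with a semantic score. The semantic scores and the mixing weight alpha below are illustrative placeholders (the paper derives the semantic component from ELMo embeddings, which this sketch does not reproduce):

```python
# Sketch: combine chi-square and (stand-in) semantic scores per term.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

docs = ["fever cough flu", "fever rash measles", "cough flu fatigue"]
labels = [1, 0, 1]                       # 1 = disease under surveillance

vec = CountVectorizer()
X = vec.fit_transform(docs)
chi, _ = chi2(X, labels)                 # statistical relevance per term

# Stand-in semantic scores; the paper computes these from ELMo neighbors.
semantic = np.random.default_rng(0).uniform(0.2, 1.0, size=len(chi))

alpha = 0.5                              # assumed mixing weight
weight = alpha * chi / chi.max() + (1 - alpha) * semantic
for term, wt in sorted(zip(vec.get_feature_names_out(), weight),
                       key=lambda t: -t[1]):
    print(f"{term:10s} {wt:.3f}")
```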

Author 1: Redouane Karsi
Author 2: Mounia Zaim
Author 3: Jamila El Alami

Keywords: ELMo; SVM; contextual word embeddings; semantic term weighting; health surveillance; text classification

PDF

Paper 28: An Enhanced Artificial Bee Colony: Naïve Bayes Technique for Optimizing Software Testing

Abstract: Software-driven technology has become a part of life, and the quality of software largely depends on the extent of effective testing performed during the various phases of development. A wide range of nature-inspired search techniques have been employed over the years to automate the testing process and provide promising solutions that avoid the infeasibility of exhaustive testing. These techniques use metaheuristics and work by converting the problem space into a search space; a subset of optimized solutions is searched, which reduces overall time by shortening the testing time. Objective: An enhanced Artificial Bee Colony-Naïve Bayes optimizer for test case selection is proposed in this paper. The article also provides brief insights into the emergence of hybrid swarm-inspired techniques over the last two decades. Method: The modified Artificial Bee Colony is applied after component selection, and further optimization is achieved using a Naïve Bayes classifier. The proposed technique is implemented and evaluated on three benchmark programs and is also compared to other competitive swarm intelligence-based techniques of its class. Results: The experimental results show that the proposed technique outperforms other swarm-inspired techniques in terms of execution time in a given scenario and is capable of detecting more faults with minimal test case selection. Conclusion: The proposed approach is an improvement over existing techniques and yields substantial time and cost savings. It will contribute to the testing community and enhance the overall quality of software.

Author 1: Palak
Author 2: Preeti Gulia
Author 3: Nasib Singh Gill

Keywords: Software testing; artificial bee colony; swarm intelligence; Naïve Bayes; test case selection

PDF

Paper 29: Intelligent Climate Control System inside a Greenhouse

Abstract: An agricultural greenhouse is an environment for ensuring intensive agricultural production. The climatological conditions favorable for agricultural production (temperature, lighting, humidity, etc.) must be reproduced artificially by controlling these parameters using several actuators (heating/air conditioning, ventilation, and humidifier/dehumidifier). The objective of this study is to control the humidity inside the greenhouse, a problem that remains challenging. To that end, an actuator based on a humidifier and a dehumidifier was installed in an experimental greenhouse and activated by a fuzzy logic controller to achieve the desired optimal indoor humidity in the greenhouse.

Author 1: A Labidi
Author 2: A. Chouchaine
Author 3: A. Mami

Keywords: Greenhouse; climate; humidity; fuzzy logic controller; humidification; dehumidification

PDF

Paper 30: Selection of Social Media Applications for Ubiquitous Learning using Fuzzy TOPSIS

Abstract: The exponential advancements in Information and Communications Technology have led to its prevalence in education, especially with the arrival of COVID-19. Ubiquitous learning (u-learning) is everyday learning that happens irrespective of time and place, enabled by m-learning, e-learning, and social computing such as social media. Due to its popularity, there has been an expansion of social media applications for u-learning. The aim of this research paper was to establish the most relevant social media applications for u-learning in schools. Data were collected from 260 respondents, comprising learners and instructors in high schools, who were asked to rank 14 of the top social media applications for ubiquitous learning. Fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) was the method employed for ranking the 14 most popular social media applications using 15 education requirements, 15 technology criteria, and 260 decision makers. The simulation was implemented in MATLAB R2020a. The results showed that YouTube was the most likely social media application to be selected for u-learning, with a closeness coefficient of 0.9188, and that Viber was the least likely, with a closeness coefficient of 0.0165. The inferences of this research study will guide researchers in the intelligent decision support systems field in reducing the time and effort required by instructors and learners to select the most beneficial social media application for u-learning.
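
A minimal sketch of the TOPSIS ranking step that produces closeness coefficients like those reported above, shown for the crisp variant (the paper's fuzzy version replaces crisp ratings with triangular fuzzy numbers aggregated over the 260 decision makers); the decision matrix and weights are illustrative:

```python
# Sketch: crisp TOPSIS ranking of alternatives by closeness coefficient.
import numpy as np

# Rows: alternatives (apps), columns: benefit criteria rated 1-9.
M = np.array([[9.0, 8.0, 7.0],     # e.g. "YouTube"
              [6.0, 5.0, 7.0],     # e.g. "WhatsApp"
              [2.0, 3.0, 1.0]])    # e.g. "Viber"
w = np.array([0.5, 0.3, 0.2])      # criteria weights (sum to 1)

V = M / np.linalg.norm(M, axis=0) * w        # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)   # positive/negative ideal solutions
d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to positive ideal
d_neg = np.linalg.norm(V - anti, axis=1)     # distance to negative ideal
cc = d_neg / (d_pos + d_neg)                 # closeness coefficient in [0, 1]
print(np.argsort(cc)[::-1], cc.round(4))     # rank alternatives, best first
```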

Author 1: Caitlin Sam
Author 2: Nalindren Naicker
Author 3: Mogiveny Rajkoomar

Keywords: Social media applications; Ubiquitous learning; fuzzy Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS); Multiple criteria decision-making tools

PDF

Paper 31: Nonlinear Rainfall Yearly Prediction based on Autoregressive Artificial Neural Networks Model in Central Jordan using Data Records: 1938-2018

Abstract: Jordan suffers from a chronic water resources shortage, and rainfall is the primary input for all water resources in the country. Acceptable accuracy of rainfall prediction is of great importance for managing water resources and climate change issues. The present study includes the analysis of time series trends of climate change with regard to rainfall. Available rainfall data for five stations in central Jordan, covering the interval 1938-2018, were obtained from the Ministry of Water and Irrigation. The data were analyzed using Nonlinear Autoregressive Artificial Neural Networks (NAR-ANN) based on the Levenberg-Marquardt algorithm. The NAR model was applied to the rainfall data using one input layer, one hidden layer, and one output layer, with different combinations of the number of neurons in the hidden layer and the number of epochs; the best combination used 25 neurons and 12 epochs. The quality of the result is measured by the mean squared error (MSE). For all the meteorological stations, the MSE values were negligible, ranging between 4.32×10⁻⁴ and 1.83×10⁻⁵. The rainfall prediction results show that forecast rainfall values on a calendar-year basis are almost identical to those estimated for the seasonal year when dealing with a long record of years. In comparison with the long-term rainfall average, the average predicted rainfall values for the coming ten years show a strong decline for Dana station, some decrease for Rashadia station, a huge increase for Abur station, and relatively limited change for Busira and Muhai stations.
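
A minimal sketch of a NAR-style forecaster: a one-hidden-layer network maps the previous p annual rainfall values to the next one and is iterated for a ten-year forecast. The 25 hidden neurons echo the paper's best configuration; the lag order p, the LBFGS solver (scikit-learn does not ship Levenberg-Marquardt), and the synthetic series are assumptions:

```python
# Sketch: nonlinear autoregressive (NAR) forecasting with an MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
rain = 300 + 50 * np.sin(np.arange(80) / 5) + rng.normal(0, 20, 80)  # mm/year

p = 5                                    # autoregressive order (assumed)
X = np.array([rain[i:i + p] for i in range(len(rain) - p)])
y = rain[p:]

# One hidden layer of 25 neurons, echoing the paper's best configuration.
nar = MLPRegressor(hidden_layer_sizes=(25,), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(X, y)
print("train MSE:", np.mean((nar.predict(X) - y) ** 2))

# Iterated one-step-ahead forecast for the next ten years.
window = list(rain[-p:])
for _ in range(10):
    window.append(nar.predict([window[-p:]])[0])
print("10-year forecast:", np.round(window[p:], 1))
```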

Author 1: Suhail Sharadqah
Author 2: Ayman M Mansour
Author 3: Mohammad A Obeidat
Author 4: Ramiro Marbello
Author 5: Soraya Mercedes Perez

Keywords: Jordan; rainfall distribution; time series analyses; Levenberg-Marquardt algorithm; climate change

PDF

Paper 32: Fungal Blast Disease Detection in Rice Seed using Machine Learning

Abstract: The economy of Pakistan relies mainly upon agriculture alongside other vital industries. Fungal blast is one of the most significant plant diseases found in rice crops, leading to reduced agricultural output and hindering the country's economic development. Plant disease detection is an initial step towards improving the yield and quality of agricultural products. Manual analysis of plant health is tiresome, time-consuming, and costly. Machine learning offers an alternative inspection method, providing the benefits of automated inspection, ease of availability, and cost reduction. The visual patterns on the rice plants are processed using machine learning classifiers such as support vector machine (SVM), logistic regression, decision tree, Naïve Bayes, random forest, linear discriminant analysis (LDA), and principal component analysis (PCA), and based on the classification results plants are recognized as healthy or unhealthy. For this process, a dataset containing 1000 images of the rice seed crop was collected from different fields of Kashmore, and the whole pipeline of image acquisition, pre-processing, and feature extraction was applied to the rice seed only. The dataset was annotated with healthy and unhealthy samples with the help of a plant disease expert. The algorithms used are evaluated in terms of F1-score and testing accuracy. This paper contains results from traditional classifiers, and alongside these classifiers, transfer learning has been used to compare the results. Finally, a comparative analysis is made between the results of the traditional classifiers and deep learning networks.

Author 1: Raj Kumar
Author 2: Gulsher Baloch
Author 3: Pankaj
Author 4: Abdul Baseer Buriro
Author 5: Junaid Bhatti

Keywords: Fungal blast; machine learning; support vector machine (SVM); logistic regression; decision tree; Naïve Bayes; random forest; linear discriminant analysis (LDA); principal component analysis (PCA); image acquisition; pre-processing; feature extraction; F1-Score; convolutional classifier; deep learning

PDF

Paper 33: Investigation of Factors Affecting Employee Satisfaction of IT Sector

Abstract: Job satisfaction or employee satisfaction has various definitions, but it can be generalized as how gratified an individual is with his or her job. Happy employees help to strengthen the company by lowering turnover and increasing loyalty. Job satisfaction also promotes a healthy working environment that helps to attract talent and increase productivity. However, little research has focused specifically on the IT sector. The goal of this research is to measure the level of satisfaction among Kuwaiti IT workers and discover the tangible and intangible factors affecting their job satisfaction. To highlight factors contributing to positive satisfaction in IT jobs in Kuwait, we propose a six-factor structural model comprising compensation, workplace, intangible benefits, support, communication, and satisfaction. A targeted snowball descriptive survey was distributed via WhatsApp messages to information technology workers; 209 responses were retained after data cleaning. SPSS statistical software was used to analyze the data, with results indicating that IT employees felt an average level of satisfaction. Additionally, several work-related variables were significantly associated with job satisfaction; work position showed a statistically significant association with work satisfaction. Finally, individuals in leading positions reported higher satisfaction than individuals in non-leading positions.

Author 1: Eiman Tamah Al-Shammari

Keywords: Job satisfaction; IT sector; productivity; intangible benefits; communication

PDF

Paper 34: Fuzzy based Search in Motion Estimation for Real Time Video Compression

Abstract: Video compression ratio, quality, and efficiency are determined by the motion estimation algorithm. Motion estimation is used to perform inter-frame prediction in video sequences. The individual frames are divided into blocks, and motion estimation is computed by a video codec such as H.264. The codec computes the displacement of each block between the previous frame (the reference frame) and the current frame: for each block in the current frame, the best-matching block, and hence the best motion vector, is determined in the reference frame. In this research paper, a novel technique is presented for motion vector calculation using a fuzzy Gaussian membership function. The motion estimation block uses the fuzzy membership function to estimate the connectedness of blocks of the current frame to those of the reference frame. Fuzzy decision matching is done based on the matching criterion, and the best-matching block is selected; the motion vectors are thus calculated with respect to the reference frame. The fuzzification process produces optimally matched blocks, which are then utilized to calculate the motion vectors of the predicted frame. With fuzzy based search, the search area is updated automatically, and adaptive search steps provide an optimized search result. Since in real-time streaming no file is exchanged during transmission and the user cannot download the file, smooth transmission depends on frame management; fuzzy based search for motion estimation provides better compression for the predicted frames.
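
A schematic sketch of the fuzzy matching idea described above: each candidate block in the reference frame receives a Gaussian membership grade computed from its matching error (here the sum of absolute differences), and the best-graded candidate defines the motion vector; the search range and sigma are illustrative choices, not the authors' parameters:

```python
# Sketch: block matching where a Gaussian membership grades each candidate.
import numpy as np

def fuzzy_motion_vector(cur_block, ref_frame, top, left, search=7, sigma=500.0):
    """Return (dy, dx) of the best-graded match for one block."""
    n = cur_block.shape[0]
    best, best_mv = -1.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue
            sad = np.abs(ref_frame[y:y + n, x:x + n].astype(float)
                         - cur_block).sum()
            grade = np.exp(-(sad / sigma) ** 2)   # Gaussian membership of "matched"
            if grade > best:
                best, best_mv = grade, (dy, dx)
    return best_mv

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64))
cur = np.roll(ref, (2, -3), axis=(0, 1))          # frame shifted by a known amount
print(fuzzy_motion_vector(cur[16:32, 16:32].astype(float), ref, 16, 16))
```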

Author 1: Upendra Kumar Srivastava
Author 2: Rakesh Kumar Yadav

Keywords: Fuzzy logic; motion estimation; compression; current frame; reference frame; predicted frame

PDF

Paper 35: Using Blockchain based Authentication Solution for the Remote Surgery in Tactile Internet

Abstract: Since the Tactile Internet has been considered a new era of the Internet, delivering real-time interactive systems as well as ultra-reliable and ultra-responsive network connectivity, tremendous efforts have been made to ensure authentication between communicating parties to secure remote surgery. Since a human-to-machine interaction like remote surgery is critical, and the communication between the surgeon and the tactile actor (i.e., robot arms) should be fully protected during the surgical procedure, a fully secure mutual user authentication scheme is needed to establish a secure session among the communicating parties. Existing methods usually require a server to ensure authentication among the communicating parties, which makes the system vulnerable to a single point of failure and ill-suited to the design of such a critical distributed environment as the tactile internet. To address these issues, we propose a new decentralized blockchain-based authentication solution for the tactile internet. In our proposed solution, there is no need for a trusted party; moreover, its decentralized nature makes authentication immutable, efficient, and secure while meeting the low-latency requirement. Our proposed solution is deployed on Ropsten, the official Ethereum test network. The experimental results show that our solution is efficient, highly secure, and flexible.

Author 1: Tarik HIDAR
Author 2: Anas ABOU EL KALAM
Author 3: Siham BENHADOU
Author 4: Oussama MOUNNAN

Keywords: Tactile internet; blockchain; human to machine interaction; authentication; remote surgery

PDF

Paper 36: PHY-DTR: An Efficient PHY based Digital Transceiver for Body Coupled Communication using IEEE 802.3 on FPGA Platform

Abstract: Body coupled communication (BCC) is an efficient networking approach to body area networks (BAN) based on human-centric communication; BCC confines interference to humans in very close proximity. In this work, an efficient Physical layer (PHY) based digital transceiver is designed for BCC. The digital transceiver module mainly contains a digital transmitter (TX) with a Manchester encoder, a clock synchronization unit, and a digital receiver (RX) with a Manchester decoder. The TX and RX modules are designed as finite state machines per the IEEE 802.3 standard. The complete design is also verified for BAN applications by connecting two application layer transceivers and two physical layer based digital transceivers. The architecture is simulated in the ModelSim simulator. The complete module is synthesized for different FPGA families, and the hardware design constraints are contrasted. The digital transceiver works at a 231.28 MHz operating frequency, consumes 0.113 W of power, and provides a 7.7 Mbps data rate and 4.67 Kbps/Slice efficiency on an Artix-7 FPGA. The proposed transceiver is also compared with existing digital transceivers, showing improvements in hardware constraints.
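
A minimal software sketch of Manchester coding under the IEEE 802.3 convention (logical 0 as a high-to-low half-symbol pair, logical 1 as low-to-high); the paper realizes the same mapping as a hardware finite state machine:

    def manchester_encode(bits):
        """IEEE 802.3 convention: 0 -> high-to-low (1, 0), 1 -> low-to-high (0, 1)."""
        out = []
        for b in bits:
            out.extend((0, 1) if b else (1, 0))
        return out

    def manchester_decode(symbols):
        """Each data bit spans two half-symbols; the mid-bit transition carries it."""
        bits = []
        for pair in zip(symbols[::2], symbols[1::2]):
            if pair == (0, 1):
                bits.append(1)
            elif pair == (1, 0):
                bits.append(0)
            else:
                raise ValueError("invalid Manchester pair (clock slip?)")
        return bits

    assert manchester_decode(manchester_encode([1, 0, 1, 1])) == [1, 0, 1, 1]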

Author 1: Sujaya B. L
Author 2: S.B. Bhanu Prashanth

Keywords: Body coupled communication; physical layer; digital; FPGA; radiofrequency; human body

PDF

Paper 37: Towards an Ontological Learner’s Modeling During and After the COVID-19 Pandemic

Abstract: The health crisis and the unprecedented upheaval it caused in education systems are far from over. Consequently, adaptation of the learning experience is needed more than ever, and it should take into consideration the criteria of this specific crisis and its impact on the physical and mental health of learners. In this article, we present an ontology-based learner model that brings together pedagogical and psychological characteristics as well as the health risks the epidemic poses to learners, following a knowledge-engineering methodology. We developed an ontology that combines the IMS-LIP standard features and learner characteristics. It is ready for use in different systems and situations during and after the COVID-19 pandemic, and it gives a global representation of the learner in order to provide the best-adapted courses.

Author 1: Amina OUATIQ
Author 2: Kamal El Guemmat
Author 3: Khalifa Mansouri
Author 4: Mohammed Qbadou

Keywords: Learner model; personalization; adaptive learning; ontology; COVID-19

PDF

Paper 38: A Survey on Dental Imaging for Building Classifier to Benefit the Dental Implant Practitioners

Abstract: Endo-osseous implants are considered an ideal dental fixture. They are becoming the preferred choice of edentulous patients for rehabilitating toothlessness because of their aesthetic and functional outcomes. Despite successful surgery and implant placement, complications occur, which may be related to several factors, such as operative assessment, treatment planning, patient-related factors, surgical procedures, and the surgeon's experience. Comprehensive radiological assessment plays a vital role in clinical analysis for better treatment planning, avoiding complications, and increasing the implant's success rate. However, despite the variety of dental imaging, choosing the right imaging technology has become difficult for clinical experts. The investigative survey conducted in this paper aims to determine the correlation between different imaging modalities and their essential role in implant therapy. The review extensively discusses which types of computational operations have been applied to image modalities in the existing literature to address various noises and other relevant issues. The findings reveal significant issues with various dental imaging modalities and provide an understanding to bridge existing research gaps towards building cost-effective classification and predictive models for accurate dental treatment planning and higher implant success rates.

Author 1: Shashikala J
Author 2: Thangadurai N

Keywords: Dental implant; complication; implant failure; dental imaging; pre-processing

PDF

Paper 39: Emerging Line of Research Approach in Precision Agriculture: An Insight Study

Abstract: The present state of agriculture and its demands are very different from two decades ago. Hence, Precision Agriculture (PA) is increasingly in demand to address these challenges. With consistent pressure to develop multiple products over the same agricultural land, farmers find PA's adoption the best rescue-based solution with restricted resources. PA accelerates yield and can assist in meeting the growing demand for food. With the increasing adoption of PA-based technologies in farming, there are good possibilities to explore efficient farming practices and better decision-making facilitated by real-time data availability. Various novel technologies have evolved to boost agricultural performance, e.g., variable rate technology, geomapping, remote sensing, automated steering systems, and satellite positioning systems. Apart from this, the Internet-of-Things (IoT) and Wireless Sensor Networks (WSN) have been slowly penetrating this area to accelerate PA's technological advancement. The adoption of sensing technology is a common factor in almost all techniques used in PA; however, there is no clear picture of the most dominant approach. Therefore, this paper discusses existing approaches to standard conventional PA and sensing-based PA using WSN. The study contributes learning outcomes indicating that WSN and IoT are well suited to boost PA.

Author 1: Vanishree K
Author 2: Nagaraja G S

Keywords: Precision agriculture; smart farming; wireless sensor network; internet-of-things; remote sensing; variable rate technology

PDF

Paper 40: Optimal Power Allocation in Downlink Non-Orthogonal Multiple Access (NOMA)

Abstract: The fifth generation of wireless cellular networks promises to enable better services anytime and anywhere. Non-orthogonal multiple access (NOMA) stands as a suitable multiple-access scheme due to its ability to allow multiple users to share the same radio resource simultaneously via different domains (power, code, etc.). In the power domain, users are multiplexed on the same radio resource at different power levels. This paper studies power allocation in downlink NOMA; an optimization problem is formulated that aims to maximize the system's sum rate. To solve the problem, a genetic algorithm based power allocation (GAPA) is proposed, which employs the heuristic search of a genetic algorithm (GA) to find suitable solutions. The performance of the proposed power allocation algorithm is compared with full search power allocation (FSPA), which gives optimal performance. Results show that GAPA reaches performance near that of FSPA with lower complexity. In addition, GAPA is simulated with various user pairing algorithms; channel state sorting based user pairing with GAPA achieves the best performance compared to random user pairing and exhaustive user pairing.
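
A minimal sketch of GA-based power allocation for one two-user downlink NOMA pair, assuming successive interference cancellation (SIC) at the strong user; the channel gains, GA operators, and bounds are illustrative assumptions rather than the paper's exact setup:

    import numpy as np

    rng = np.random.default_rng(0)
    g1, g2, n0, p_tot = 2.0, 0.4, 0.1, 1.0     # channel gains, noise power, power budget

    def sum_rate(alpha):
        """alpha = power fraction for the strong user; SIC removes its intra-pair
        interference, while the weak user sees the strong user's power as noise."""
        p1, p2 = alpha * p_tot, (1 - alpha) * p_tot
        r1 = np.log2(1 + p1 * g1 / n0)
        r2 = np.log2(1 + p2 * g2 / (p1 * g2 + n0))
        return r1 + r2

    pop = rng.uniform(0.01, 0.49, 30)          # weak user always gets the larger share
    for _ in range(100):
        fitness = np.array([sum_rate(a) for a in pop])
        parents = pop[np.argsort(fitness)[-10:]]                      # elitist selection
        children = rng.choice(parents, 20) + rng.normal(0, 0.02, 20)  # mutation
        pop = np.concatenate([parents, np.clip(children, 0.01, 0.49)])

    best = max(pop, key=sum_rate)
    print("best power split:", round(best, 3), "sum rate:", round(sum_rate(best), 3))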

Author 1: Wajd Fahad Alghasmari
Author 2: Laila Nassef

Keywords: Non-orthogonal multiple access; power allocation; genetic algorithm; user pairing

PDF

Paper 41: Comparing the Accuracy and Developed Models for Predicting the Confrontation Naming of the Elderly in South Korea using Weighted Random Forest, Random Forest, and Support Vector Regression

Abstract: Since dementia patients clearly show regression of linguistic ability from the early stages, evaluating cognitive and language abilities is very important when diagnosing dementia. Among these abilities, naming is an essential item (sub-test) that is always included in dementia-screening tests. This study developed confrontation naming prediction models using support vector regression (SVR), random forest, and weighted random forest for the elderly in the community and identified the algorithm showing the best performance by comparing the accuracy of the models. The study used 485 elderly subjects (248 men and 237 women) living in Seoul and Incheon who were 74 years old or older. Prediction models were developed using the SVR, random forest, and weighted random forest algorithms. The study revealed that the root mean squared error of the weighted random forest was the lowest when comparing prediction performance across the three models. Future studies are needed to compare the prediction performance of weighted random forest with other machine learning models by calculating various performance indices such as sensitivity, specificity, and harmonic mean, using data from various fields, to prove the superior prediction performance of weighted random forest.
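
A minimal sketch of the model comparison with scikit-learn on placeholder data; "weighted" is approximated here via per-sample weights in fit(), which is only one possible reading of weighted random forest and may not match the paper's formulation:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    # Placeholder features and confrontation-naming scores (485 subjects, 10 predictors)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(485, 10))
    y = 2 * X[:, 0] + rng.normal(size=485)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for name, model in [("SVR", SVR()), ("RF", RandomForestRegressor(random_state=0))]:
        model.fit(X_tr, y_tr)
        print(name, "RMSE:", mean_squared_error(y_te, model.predict(X_te)) ** 0.5)

    # One possible "weighted" variant: per-sample weights passed to fit()
    w = 1.0 + (y_tr < np.median(y_tr))          # double weight for below-median scores
    wrf = RandomForestRegressor(random_state=0).fit(X_tr, y_tr, sample_weight=w)
    print("weighted RF RMSE:", mean_squared_error(y_te, wrf.predict(X_te)) ** 0.5)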

Author 1: Haewon Byeon

Keywords: Confrontation naming; generative naming; support vector regression; random forest; weighted random forest

PDF

Paper 42: Towards the Development of Computational Thinking and Mathematical Logic through Scratch

Abstract: Currently, the need to provide quality education to future generations has led to the development of new teaching methodologies, and the tools provided by information technologies have been positioned as the future of learning. In this sense, learning to program is no longer considered a selective skill in the field of computing; it is today a necessity for any student who wants to be competent in this globalized and dynamic world. Within this context, the present research aims to analyze to what extent the use of the Scratch programming language supports the development of computational thinking skills and mathematical logic. The methodology consisted of teaching programming fundamentals through Scratch 3.0 to an experimental group of 25 students randomly selected from a population of 100 students. Data collection was carried out through a logical reasoning test standardized by Acevedo and Oliva and a computational thinking levels test standardized by González. According to the results, there is a significant difference in student performance on both tests, with the most considerable improvement in the criteria of Loops, Control of Variables (CV), Probability (PB), and Combinatorial Operations (CB). It is therefore concluded that teaching basic Computer Science concepts such as computational thinking and mathematical logic is important, since it contributes to the internalization of concepts when developing algorithms in problem-solving.

Author 1: Benjamín Maraza-Quispe
Author 2: Ashtin Maurice Sotelo-Jump
Author 3: Olga Melina Alejandro-Oviedo
Author 4: Lita Marianela Quispe-Flores
Author 5: Lenin Henry Cari-Mogrovejo
Author 6: Walter Cornelio Fernandez-Gambarini
Author 7: Luis Ernesto Cuadros-Paz

Keywords: Scratch; computational thinking; logic reasoning; teaching

PDF

Paper 43: Survey of Centralized and Decentralized Access Control Models in Cloud Computing

Abstract: In recent years, cloud computing has become a popular option for a number of different business sectors. It is a paradigm employed to deliver a range of computing services, such as sharing resources via the Internet. Security issues in cloud computing necessitate a mechanism to keep the system safe and reliable; an access control mechanism is one that permits or denies access to cloud services. This paper presents a survey of access control models in cloud computing. Several existing surveys on access control mechanisms in cloud computing focus mainly on traditional access control models and encryption-based access control models, while others focus on applying blockchain technology to cloud access control. However, access models possess different characteristics, such as reliance on a centralized, trusted cloud system administrator to manage the access policy, or adoption of a decentralized approach. This paper reviews and analyses existing access control mechanisms in cloud computing based on centralized and decentralized access control models, provides detailed comparisons of each model's advantages and limitations, and discusses the challenges of, and future research directions for, access control.

Author 1: Suzan Almutairi
Author 2: Nusaybah Alghanmi
Author 3: Muhammad Mostafa Monowar

Keywords: Cloud computing; access control; cloud security; centralized; decentralized

PDF

Paper 44: An Efficient Color LED Driver based on Self-Configuration Current Mirror Circuit

Abstract: A string channel for a color LED driver with precise current balancing is proposed. A string of multiple LEDs should be driven by a proper current source, because the brightness level of an LED depends on the quantity of current flowing through it. In LED production, the variation in forward voltage between individual LEDs is significantly high, and this variation causes different levels of brightness. Limiting the LED current with a series resistor ties the load current directly to the instantaneous forward voltage. Current sources are designed to be immune to this problem, since they regulate the current flowing through the LEDs rather than the voltage; hence, a constant current source is the essential requirement for driving LEDs. The problem is more complex for color LEDs, where it depends on the number of control nodes and the dimming configuration. To deliver an accurate load current to different sets of color LED strings, an efficient LED driver is required in which the current sharing complements each LED string. Therefore, this paper proposes a color LED driver with self-configuring, enhanced current mirrors in multiple LED strings. The load currents are efficiently balanced among identical and unequal loads, and the comparative efficiency and losses of the color LED strings are shown thoroughly.

Author 1: Shaheer Shaida Durrani
Author 2: Abu Zaharin Bin Ahmad
Author 3: Bakri Bin Hassan
Author 4: Atif Sardar Khan
Author 5: Asif Nawaz
Author 6: Naveed Jan
Author 7: Rehan Ali Khan
Author 8: Rohi Tariq
Author 9: Ahmed Ali Shah
Author 10: Tariq Bashir
Author 11: Zia Ullah Khan
Author 12: Sheeraz Ahmed

Keywords: Color LED driver; current mirror circuit; super diode

PDF

Paper 45: Prediction of Sunspots using Fuzzy Logic: A Triangular Membership Function-based Fuzzy C-Means Approach

Abstract: Fuzzy logic works on a "degree of truth", instead of conventional crisp logic where the possible answer is 1 or 0. Fuzzy logic resembles human thinking, as it considers all possible outcomes between 1 and 0 and tries to reflect reality. Generation of membership functions is the key step in applying fuzzy logic. This research considers an approach for generating fuzzy Gaussian and triangular membership functions using fuzzy c-means. The problem of sunspot prediction is considered and the approach's accuracy is calculated. It is evident from the results that the proposed technique of generating membership functions using fuzzy c-means can be adopted for predicting sunspots.
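
A minimal sketch of deriving triangular membership functions from fuzzy c-means cluster centers on 1-D data; the clustering update follows the standard FCM formulation, and the mapping of centers to membership-function peaks is one plausible construction, not necessarily the paper's exact one:

    import numpy as np

    def fuzzy_c_means(x, c=3, m=2.0, iters=100, seed=0):
        """Minimal 1-D fuzzy c-means; returns the sorted cluster centers."""
        rng = np.random.default_rng(seed)
        u = rng.dirichlet(np.ones(c), size=len(x))            # membership matrix (N x c)
        centers = np.zeros(c)
        for _ in range(iters):
            um = u ** m
            centers = (um.T @ x) / um.sum(axis=0)             # weighted cluster centers
            d = np.abs(x[:, None] - centers[None, :]) + 1e-9  # distances to centers
            u = 1.0 / d ** (2 / (m - 1))
            u /= u.sum(axis=1, keepdims=True)                 # standard FCM update
        return np.sort(centers)

    def triangular_mf(x, a, b, c):
        """Triangular membership function with feet a, c and peak b."""
        return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0)

    sunspots = np.array([5., 20., 30., 80., 95., 150., 180., 200.])  # toy counts
    c_low, c_mid, c_high = fuzzy_c_means(sunspots)
    low_mu = triangular_mf(sunspots, sunspots.min(), c_low, c_mid)   # centers become peaks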

Author 1: Muhammad Hamza Azam
Author 2: Mohd Hilmi Hasan
Author 3: Said Jadid Abdul Kadir
Author 4: Saima Hassan

Keywords: Fuzzy logics; fuzzy c-means (FCM); Gaussian membership function; prediction; sunspots; triangular membership function

PDF

Paper 46: Optimum Spatial Resolution of Satellite-based Optical Sensors for Maximizing Classification Performance

Abstract: The optimum spatial resolution of satellite based optical sensors for maximizing classification performance is investigated, and a classification performance assessment method considering the spatial resolution of satellite based optical imagers is proposed. The optimum spatial resolution, which yields the highest classification accuracy, is determined from spatial frequency components, the spectral features of objects, and the classification method. First, based on the relationship between pixel variance and classification accuracy, classification accuracy is shown for Landsat Multispectral Scanner (MSS) images with various Instantaneous Fields of View (IFOV), and the variance of pixel values for images with various IFOV is clarified. Second, assuming the boundary between adjacent categories is a circle, the relationship among IFOV, the ratio of mixed pixels (Mixels), and classification accuracy is clarified under the supposition that the number of Mixels equals the number of misclassified pixels. Finally, the aforementioned relationships and the optimum spatial resolution are confirmed using airborne MSS data of the Sayama district in Japan.

Author 1: Kohei Arai

Keywords: Spectral information; spatial information; maximum likelihood decision rule; satellite image; image classification; mixed pixel (Mixels); optimum spatial resolution; classification performance; spatial and spectral features

PDF

Paper 47: Disruptive Technologies for Labor Market Information System Implementation Enhancement in the UAE: A Conceptual Perspective

Abstract: In December 2019, the world learned about the first outbreak of the novel coronavirus (COVID-19), which broke out in Wuhan, China. This limited outbreak in a single province of China rapidly evolved into a global pandemic that has led to a health and economic crisis: millions of individuals have lost their lives, and others have lost their jobs in the recession of 2020. While skills and educational mismatch have been a prevalent problem in the UAE labor market, it is logical to assume that the global pandemic has likely increased the extent of this problem. Therefore, there is an urgent need to adopt an agile, innovative solution to address the upcoming challenges in the labor markets due to the lack of skilled resources and fears about the future of work amid the COVID-19 pandemic. Since industry and academia have identified skills and educational mismatch as a complex and multivariate problem, the paper builds a conceptual case from a systems engineering perspective to solve this problem efficiently. Based on the literature on disruptive technologies and labor market management systems, the paper proposes a new implementation approach for an integrated labor market information system enabled by the most widely used disruptive technology components in the UAE (Machine Learning, AI, Blockchain, Internet of Things, Big Data Analytics, and Cloud Computing). The proposed approach is considered one of the immediate courses of action required to minimize the negative impact of the skills and educational mismatch phenomenon on the UAE economy.

Author 1: Ghada Goher
Author 2: Maslin Masrom
Author 3: Astuty Amrin
Author 4: Noorlizawati Abd Rahim

Keywords: Disruptive technologies; labor market information systems; skills and educational mismatch; future of work; system engineering; system design thinking; COVID-19

PDF

Paper 48: Exploratory Study of Some Machine Learning Techniques to Classify the Patient Treatment

Abstract: Numerous studies have been carried out on computation and its applications to medical data, with proven benefits for improving the quality of public health. However, not all research results or practical applications transfer to all conditions; they must fit various contexts such as community culture, geography, or citizen behavior. Unfortunately, the use of digital data in Indonesia is still very limited. The study objective is to assess various data mining techniques that utilize laboratory test results collected from a private hospital in Indonesia to predict the next patient treatment. Various machine learning classification techniques were explored for this purpose. Based on the experiments, XGBoost with hyperparameter tuning produced the best accuracy, at 0.7579, compared to the other classifiers. A better level of accuracy could be obtained by enriching the dataset, for example with the patient's medical record history.
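
A minimal sketch of hyperparameter tuning for XGBoost with cross-validated grid search; the synthetic dataset, grid, and metric are illustrative placeholders for the hospital laboratory data, not the paper's exact experimental setup:

    from sklearn.datasets import make_classification
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import GridSearchCV, train_test_split
    from xgboost import XGBClassifier

    # Placeholder for laboratory-test features and treatment labels
    X, y = make_classification(n_samples=600, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    grid = GridSearchCV(
        XGBClassifier(eval_metric="logloss"),
        {"max_depth": [3, 5, 7],
         "learning_rate": [0.05, 0.1, 0.3],
         "n_estimators": [100, 300]},
        cv=5, scoring="accuracy")
    grid.fit(X_tr, y_tr)
    print("best params:", grid.best_params_)
    print("held-out accuracy:", accuracy_score(y_te, grid.predict(X_te)))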

Author 1: Mujiono Sadikin
Author 2: Ida Nurhaida
Author 3: Ria Puspita Sari

Keywords: Electronic health record; XGBoost; patient treatment; patient laboratory test data

PDF

Paper 49: Sentiment Analysis using Social and Topic Context for Suicide Prediction

Abstract: In many fields, analysing large volumes of user-generated microblogs is crucial, and it is drawing many researchers' attention. However, processing such short and noisy microblogs is difficult and challenging. Most prior studies use only the text to find sentiment polarity and presume that microblog posts are independent and identically distributed, ignoring the networked data around them; consequently, their performance is unsatisfying from the standpoint of emotional and sentimental sociological approaches. This paper proposes a new methodology that incorporates social and topic context to analyze sentiment in microblogs, introducing the notion of structural similarity into the social context. Unlike previous research employing direct user relations, a new method is suggested to quantify structural similarity. In addition, topic context is introduced to model semantic relations between microblogs. The Laplacian matrix of the graph produced by these contexts combines social and topic context, and Laplacian regularization is applied to the microblog sentiment model. Experimental results on two datasets show that the suggested model reliably and substantially outperforms the baseline methods and is helpful for suicide prediction.
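
A minimal sketch of how a combined context graph yields a Laplacian regularizer; the blending weight, the toy similarity matrices, and the sentiment scores are illustrative stand-ins for the paper's learned quantities:

    import numpy as np

    # Toy pairwise similarities among 4 microblogs (symmetric, zero diagonal)
    W_social = np.array([[0, .8, .1, 0], [.8, 0, .2, 0], [.1, .2, 0, .9], [0, 0, .9, 0]])
    W_topic  = np.array([[0, .5, .4, 0], [.5, 0, .1, .2], [.4, .1, 0, .6], [0, .2, .6, 0]])

    alpha = 0.5
    W = alpha * W_social + (1 - alpha) * W_topic   # combined social + topic context
    L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian L = D - W

    # Laplacian regularization penalizes sentiment scores f that differ strongly
    # between well-connected microblogs: penalty = f^T L f
    f = np.array([0.9, 0.7, -0.8, -0.6])
    print("smoothness penalty:", f @ L @ f)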

Author 1: E. Rajesh Kumar
Author 2: K.V.S.N. Rama Rao

Keywords: Social context; topic context; microblogging; Laplacian matrix; emotional and sentimental

PDF

Paper 50: A DNA Cryptographic Solution for Secured Image and Text Encryption

Abstract: In recent years, DNA cryptography has been gaining popularity for providing better security for image and text data. This paper presents a DNA based cryptographic solution for images and textual information. Image encryption involves scrambling at the pixel and bit levels based on hyperchaotic sequences. Both image and text encryption involve basic DNA encoding rules, key combination, and conversion of data into binary and other forms. This new DNA cryptographic approach adds more dynamicity and randomness, making the cipher and keys harder to break. The proposed image encryption technique presents better results than existing approaches for various parameters, such as image histogram, correlation coefficient, information entropy, Number of Pixels Change Rate (NPCR), Unified Average Changing Intensity (UACI), key space, and sensitivity. Improved time and space complexity and random key generation for text encryption show that DNA cryptography can be a strong security solution for new applications.
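
A minimal sketch of DNA digital coding with a key XOR for text; the single bit-pair-to-base rule and the one-byte key below are illustrative simplifications (the paper combines several encoding rules and keys):

    # One common DNA digital-coding rule (the paper rotates among several rules)
    ENC = {"00": "A", "01": "T", "10": "G", "11": "C"}
    DEC = {v: k for k, v in ENC.items()}

    def text_to_dna(text, key=0b10110010):
        """XOR each byte with a key, then map bit pairs to DNA bases."""
        out = []
        for ch in text.encode():
            bits = format(ch ^ key, "08b")
            out.append("".join(ENC[bits[i:i + 2]] for i in range(0, 8, 2)))
        return "".join(out)

    def dna_to_text(dna, key=0b10110010):
        """Reverse mapping: bases -> bit pairs -> XOR-decrypted bytes."""
        chars = []
        for i in range(0, len(dna), 4):
            bits = "".join(DEC[b] for b in dna[i:i + 4])
            chars.append(chr(int(bits, 2) ^ key))
        return "".join(chars)

    assert dna_to_text(text_to_dna("secret")) == "secret"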

Author 1: Bahubali Akiwate
Author 2: Latha Parthiban

Keywords: DNA cryptography; image encryption; text encryption; DNA digital coding; DNA sequences

PDF

Paper 51: Smart Control System for Smart City using IoT

Abstract: As technologies are introduced and improved day by day, there is tremendous change in applications like the "Smart City". The Internet of Things (IoT) is the best approach to combine various sensors with embedded devices to create solutions for real-time problems and connect them to the Internet. The term IoT means controlling things through the Internet; in other words, objects "talk" to each other and communicate in order to work or react. In this project, we implement a model of a Smart City with six elements: a smart garbage system, a smart irrigation system, a smart building, a smart parking system, a restaurant menu ordering system, and a manhole detection and monitoring system, all in an advanced, automated way. The advancements include sending messages via a GSM module based on sensor responses, sending information to and controlling components through a cloud platform (ADAFRUIT), and accessing web pages through IP addresses using networking, enabling technologies, and connectivity models. The main objective of this project is for all the systems to work without any human involvement, to make life easier. The technologies required in this project are the Internet of Things (IoT), embedded systems, and networking.

Author 1: Parasa Avinash
Author 2: B Krishna Vamsi
Author 3: Thumu Srilakshmi
Author 4: P V V Kishore

Keywords: Internet society; attention-grabbing; networking; enabling technologies; connectivity models; IP address; ADAFRUIT; embedded systems

PDF

Paper 52: Public Sentiment Analysis on Twitter Data during COVID-19 Outbreak

Abstract: The COVID-19 pandemic, also known as the coronavirus pandemic, is an ongoing, serious global problem. The outbreak first came to light in December 2019 in Wuhan, China, and was declared a pandemic by the World Health Organization on 11th March 2020. The COVID-19 virus infected and killed hundreds of thousands of people in the United States, Brazil, Russia, India, and several other countries. As the pandemic continued to affect millions of lives, a number of countries resorted to either partial or full lockdown, and people took to social media platforms to share their emotions and opinions and to find a way to relax and calm down. In this research work, sentiment analysis was conducted on the tweets of people from the top ten infected countries, with the addition of one country from the Gulf region, the Sultanate of Oman. A dataset of more than 50,000 tweets with hashtags like #covid-19, #COVID19, #CORONAVIRUS, #CORONA, #StayHomeStaySafe, #Stay Home, #Covid_19, #CovidPandemic, #covid19, #Corona Virus, #Lockdown, #Qurantine, #qurantine, #Coronavirus Outbreak, #COVID, etc., posted between June 21, 2020 and July 20, 2020 was considered. Sentiment analysis was performed on the tweets posted in English to understand how people from different infected countries coped with the situation. The tweets were collected and pre-processed, text mining algorithms were applied, and finally sentiment analysis was performed and the results presented, revealing the sentiments of people from COVID-19 infected countries.
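
A minimal sketch of polarity scoring for cleaned tweets; TextBlob is used here only as an example scorer and is an assumption, since the abstract does not name the sentiment tool used:

    from textblob import TextBlob  # example polarity scorer; the paper's tool is not named

    tweets = ["Staying home and staying safe, we will get through this #COVID19",
              "The lockdown is exhausting and I feel hopeless #Lockdown"]

    for t in tweets:
        polarity = TextBlob(t).sentiment.polarity     # -1 (negative) .. +1 (positive)
        label = "positive" if polarity > 0 else "negative" if polarity < 0 else "neutral"
        print(f"{label:8} {polarity:+.2f}  {t}")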

Author 1: Mohammad Abu Kausar
Author 2: Arockiasamy Soosaimanickam
Author 3: Mohammad Nasar

Keywords: COVID-19; corona virus; corona; pandemic; social media; sentiment analysis; Twitter

PDF

Paper 53: The Enrichment of Texture Information to Improve Optical Flow for Silhouette Image

Abstract: Recent advances in computer vision with machine learning have enabled detection, tracking, and behavior analysis of moving objects in video data. Optical flow is fundamental information for such computations; therefore, an accurate algorithm to calculate it has long been desired. This study focuses on the problem that silhouette data has edge information but no texture information. Since popular optical flow algorithms do not work well in this setting, a method is proposed that artificially enriches the texture information of silhouette images by drawing a shrunk edge inside the silhouette in a different color. This additional texture information is expected to give popular optical flow algorithms a clue for calculating better flows. Through experiments using 10 videos of animals from the DAVIS 2016 dataset and the TV-L1 algorithm for dense optical flow calculation, two error measures (MEPE and AAE) were evaluated, and it was revealed that the proposed method improved optical flow calculation for various videos. In addition, the experimental results suggested relationships between the size of the shrunk edge and the type and speed of movement.
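
A minimal sketch of the enrichment step with OpenCV, followed by TV-L1 dense flow from the opencv-contrib package; the shrink width, gray level, and toy circle masks are illustrative parameters:

    import cv2
    import numpy as np

    def enrich_silhouette(mask, shrink_px=5, inner_gray=128):
        """Draw the silhouette's shrunk edge inside it with a second gray level."""
        eroded = cv2.erode(mask, np.ones((shrink_px, shrink_px), np.uint8))
        contours, _ = cv2.findContours(eroded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        enriched = mask.copy()
        cv2.drawContours(enriched, contours, -1, inner_gray, 1)
        return enriched

    prev_mask = np.zeros((64, 64), np.uint8); cv2.circle(prev_mask, (30, 32), 18, 255, -1)
    next_mask = np.zeros((64, 64), np.uint8); cv2.circle(next_mask, (34, 32), 18, 255, -1)

    # Dense TV-L1 flow over the texture-enriched frames (needs opencv-contrib-python)
    tvl1 = cv2.optflow.DualTVL1OpticalFlow_create()
    flow = tvl1.calc(enrich_silhouette(prev_mask), enrich_silhouette(next_mask), None)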

Author 1: Bedy Purnama
Author 2: Mera Kartika Delimayanti
Author 3: Kunti Robiatul Mahmudah
Author 4: Fatma Indriani
Author 5: Mamoru Kubo
Author 6: Kenji Satou

Keywords: Optical flow; silhouette image; artificial increase of texture information

PDF

Paper 54: Verb Sense Disambiguation by Measuring Semantic Relatedness between Verb and Surrounding Terms of Context

Abstract: Word sense disambiguation (WSD) is considered an AI-complete problem, defined as the ability to resolve the intended meaning of ambiguous words occurring in a language. Language has a complex, highly ambiguous structure with deep-rooted relations between its components: words, sentences, and paragraphs. Human beings can easily comprehend and resolve the intended meanings of ambiguous words, but this ambiguity makes building a highly accurate machine translation or information retrieval system difficult. A number of algorithms have been devised to resolve ambiguity, but their success rates are limited. Context plays a decisive role in human judgment when deciphering the meaning of polysemous words, and a significant number of psychological models have been proposed to emulate the way human beings understand the meaning of words, sentences, or text depending on context. The pertinent question researchers want to address is how meanings are represented in human mental memory and whether this can be simulated with a computational model. Latent Semantic Analysis (LSA) is a mathematical technique that represents meanings as vectors in a space that closely approximates the human semantic space. By comparing vectors in the LSA-generated semantic space, the closest neighbours of a word vector can be derived, which indirectly provides a lot of information about the word. However, LSA does not provide a complete theory of meaning, which is why psychological process models are combined with LSA to make the theory of meaning concrete. The predication algorithm with LSA proposed by Kintsch (2001) captures various word senses and succeeds in homonym disambiguation. A word may have many senses, verbs especially: the verb "run" has 42 senses in WordNet. Finding the correct sense of a verb is a daunting task, and work on resolving verb ambiguity with psycholinguistic models is very limited. The proposed method exploits the high-dimensional LSA vector space built from training samples, applying the predication algorithm to derive the most appropriate semantic neighbours of the target polysemous verb. Finally, the vector space of the test samples is checked against the training samples, i.e., the semantic neighbours, to classify the senses of polysemous words accurately.
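
A minimal sketch of building an LSA space and retrieving nearest semantic neighbours with scikit-learn; the toy corpus and component count are illustrative, and the predication step itself is not reproduced here:

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    docs = ["the athlete will run the race",
            "rivers run down to the sea",
            "she will run the company office",
            "the race was fast and the athlete won"]

    X = CountVectorizer().fit_transform(docs)        # term-document counts
    lsa = TruncatedSVD(n_components=2, random_state=0)
    doc_vecs = lsa.fit_transform(X)                  # documents in the LSA semantic space

    sims = cosine_similarity(doc_vecs[:1], doc_vecs)[0]
    print("nearest context to doc 0:", int(np.argsort(sims)[-2]))  # [-1] is doc 0 itself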

Author 1: Arpita Dutta
Author 2: Samir Kumar Borgohain

Keywords: Word sense disambiguation; ambiguous verb; context; semantic space; latent semantic analysis; polysemy; machine translation

PDF

Paper 55: Water Level Monitoring and Control System in Elevated Tanks to Prevent Water Leaks

Abstract: Water shortages are recurrent in Lima, Perú, and around the world, whether due to natural disasters, aging pipes, or breakage by external agents such as heavy trucks and machinery, which damage underground pipes, causing flooding and shortages in the affected area. As a partial solution, many inhabitants have elevated tanks, but these lack automatic control, a view of the water level in the tank, and the ability to recognize water leaks, which are an economic detriment to the user. This research work aims to avoid short-term shortages by controlling and monitoring water for use at home or in industry. The implementation uses an Arduino Uno board, a 16x2 LCD screen, an ultrasonic sensor, and a mini pump fed with a DC voltage; manual control is available at all times, and otherwise the system is fully automatic. The results obtained were as expected: the LCD screen always displays the status, from the beginning of the process with the tank empty (with a corresponding alarm on an LED diode), through the percentage of water that gradually rises, to the end of the process with the full-tank message and its corresponding alarm on another LED diode. The implementation of this project is economical, which makes it a very viable alternative for many households and companies.
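
A minimal sketch of the level computation from the ultrasonic echo distance, written in Python for illustration (the project itself runs on an Arduino); the tank depth, full-level offset, and hysteresis thresholds are hypothetical:

    TANK_DEPTH_CM = 100.0       # hypothetical: sensor mounted at the top, facing the water
    FULL_CM = 10.0              # echo distance when the tank is full

    def level_percent(echo_cm):
        """Convert sensor-to-surface distance into a fill percentage."""
        level = (TANK_DEPTH_CM - echo_cm) / (TANK_DEPTH_CM - FULL_CM) * 100.0
        return max(0.0, min(100.0, level))

    def pump_should_run(echo_cm, low=20.0, high=95.0, running=False):
        """Hysteresis control: start below `low` percent, stop above `high` percent."""
        pct = level_percent(echo_cm)
        return True if pct < low else False if pct > high else running

    print(level_percent(55.0), pump_should_run(55.0))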

Author 1: Christian Baldeon-Perez
Author 2: Brian Meneses-Claudio
Author 3: Alexi Delgado

Keywords: Arduino; ultrasonic sensor; stockouts; algorithm; monitoring and control; water leaks; elevated tanks

PDF

Paper 56: An Evaluation of the Localization Quality of the Arabic Versions of Learning Management Systems

Abstract: Recent years have witnessed the development of numerous Learning Management Systems (LMSs) to address the increasing needs of individuals and institutions all over the world. For accessibility and commercial purposes, many of these LMSs are released in different languages using what are known as localization systems; the development of LMSs and of localization systems have thus proceeded in parallel. One main aspect of recent evaluation systems and studies of LMSs is localization quality. Despite the prolific literature on localization quality, very little has been done on Arabic localization. Thus, this study is concerned with evaluating the localization quality of the Arabic versions of LMSs. In order to explore users' perceptions of the Arabic versions, an online questionnaire was conducted. Participants were asked about their familiarity with LMSs and whether they used the Arabic versions of these systems; they were also asked about their experiences with the Arabic localization and whether they faced any problems with the Arabic versions. The findings indicate that translation inconsistencies are the main problems with the Arabic versions of different LMSs, including Blackboard Learn, Microsoft Teams, and Zoom. These problems have negative impacts on the effectiveness and reliability of these systems in schools, universities, and training institutions. For the proper implementation of LMSs, localization and translation should go hand-in-hand, and localization developers and LMS designers need to consider the peculiar linguistic features of Arabic. The findings have implications for translation programs in Arab universities and training institutions: program designers should integrate translation technologies and localization systems into translation studies and consider the changes within the translation industry. The study was limited to translation quality in the Arabic versions of localized LMSs; the localization quality of other software programs, games, websites, and applications remains to be explored. Finally, it is recommended to develop a quality matrix that encompasses all the dimensions and peculiarities of Arabic localization.

Author 1: Abdulfattah Omar

Keywords: Ambiguity; Arabic; language inconsistencies; Learning Management Systems (LMSs); localization quality

PDF

Paper 57: Comparative Analysis of the Impact on Air Quality Due to the Operation of La Oroya Metallurgical Complex using the Grey Clustering Method

Abstract: Air pollution is one of the biggest problems worldwide due to the increased burning of fossil fuels by industries around the world. In the present work, an air quality study is carried out with the grey clustering method, since the data obtained present a certain level of uncertainty. To obtain a correct analysis of air quality, the comparison was made across two different years with the same monitoring stations. The air quality assessment was carried out at three monitoring stations located in three districts of the province of La Oroya (La Oroya Antigua, the minor town of Huari, and Santa Rosa de Sacco), where sampling equipment was installed to evaluate particulate matter (PM10) and sulfur dioxide (SO2). At each study point a positive result was obtained, showing an improvement in air quality over the years, due to the reduction of mining activity in the study area. Finally, this method can also be used by any organization in the nation for water or air quality studies.

Author 1: Alexi Delgado
Author 2: Luis Vasquez
Author 3: Luis Espinoza
Author 4: Manuel Mejía
Author 5: Erick Yauri
Author 6: Chiara Carbajal
Author 7: Enrique Lee Huamaní

Keywords: Air quality assessment; Grey clustering method; particulate matter (PM10); sulfur dioxide (SO2)

PDF

Paper 58: Deep Wavelet Neural Network based Robust Text Recognition for Overlapping Characters

Abstract: This paper presents a deep learning based intelligent text recognition system for touching and overlapped characters. The robustness and effectiveness of the proposed model are enhanced through a modified neural network configuration known as the Deep Wavelet Neural Network (DWNN). The capability of deep learning networks to learn efficiently from unlabeled datasets has attracted the attention of many researchers over the last decade; however, the performance of these networks depends on the quality of the dataset and on invariant image representation. Numerous optical character recognition techniques have been presented in recent years, but overlapped and touching characters have not been addressed much. The nonlinear and uncertain representation of image data in overlapped text adds severe complexity to feature extraction and the corresponding learning. The proposed DWNN architecture uses fast-decaying wavelet functions as activation functions in place of the conventional sigmoid function to cope with the uncertainty and nonlinearity of data representation in overlapped text images. It comprises a cascaded, layered architecture of translated and dilated versions of wavelets as activation functions for training and feature extraction at multiple levels. Local transformation and deformation variation in the visual data are also handled efficiently through the modified DWNN architecture. Comprehensive experimental analysis has been performed over various test images to verify the effectiveness of the proposed text recognition system. Performance is assessed with the metrics of estimation error, cost function, and accuracy. The proposed approach is implemented in MATLAB.
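
A minimal numpy sketch of a dense layer whose units apply translated and dilated Mexican-hat (Ricker) wavelets instead of sigmoids; the wavelet choice, initialization, and layer sizes are illustrative assumptions rather than the paper's exact architecture:

    import numpy as np

    def ricker(x, a=1.0):
        """Fast-decaying Mexican-hat (Ricker) wavelet used as an activation."""
        t = x / a
        return (1 - t ** 2) * np.exp(-t ** 2 / 2)

    class WaveletLayer:
        """Dense layer whose units apply translated (b) and dilated (a) wavelets."""
        def __init__(self, n_in, n_out, seed=0):
            rng = np.random.default_rng(seed)
            self.W = rng.normal(0, 0.5, (n_in, n_out))
            self.b = rng.normal(0, 0.5, n_out)            # translation per unit
            self.a = np.abs(rng.normal(1, 0.2, n_out))    # dilation per unit

        def forward(self, x):
            return ricker(x @ self.W - self.b, self.a)

    layer = WaveletLayer(64, 32)                          # e.g. an 8x8 character patch in
    features = layer.forward(np.random.rand(10, 64))      # 10 patches -> 32 wavelet features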

Author 1: Neha Tripathi
Author 2: Pushpinder Singh Patheja

Keywords: Text recognition; overlapped characters; deep wavelet neural network; feature extraction; segmentation; basis function; optical character recognition

PDF

Paper 59: Performance Improvement of Network Coding for Heterogeneous Data Items with Scheduling Algorithms in Wireless Broadcast

Abstract: This is the age of information: nowadays everyone communicates through digital systems, often on the go. On-demand broadcasting is an efficient way to broadcast information according to user requests; in an on-demand broadcasting network, a single broadcast can satisfy multiple clients, which helps fulfill the enormous demand for information. The optimized flow of digital data in a network through the transmission of "digital evidence" about messages, where the evidence is composed of two or more messages, is called network coding. Network coding combined with data scheduling algorithms can further improve the performance of on-demand broadcasting networks: a single broadcast of coded data items can satisfy the needs of more clients. This work shows that network coding cannot always maintain its superiority over non-network coding when the system handles data items of different sizes. The causes of this performance reduction are analyzed, and a THETA based dynamic threshold value integration strategy is proposed, through which network coding can overcome its limitation in handling heterogeneous data items. In the proposed strategy, the THETA based dynamic threshold controls which data items are selected from the Client Relationship graph (CR-graph), so that large data items are not encoded with small ones. Simulation results show interesting performance comparisons.
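
A minimal sketch of XOR-based network coding with a size-ratio guard in the spirit of the paper's threshold idea; the fixed-ratio rule shown is an illustrative stand-in for the THETA based dynamic threshold, not its actual formula:

    def xor_encode(items):
        """XOR-combine padded data items into one coded broadcast packet."""
        size = max(len(d) for d in items)
        coded = bytearray(size)
        for d in items:
            for i, byte in enumerate(d.ljust(size, b"\x00")):
                coded[i] ^= byte
        return bytes(coded)

    def can_encode_together(sizes, theta=0.5):
        """Size guard: reject mixes where the smallest item is much smaller than
        the largest, since padding would waste the broadcast slot."""
        return min(sizes) / max(sizes) >= theta

    a, b = b"weather-update", b"stock-quote!!!"
    if can_encode_together([len(a), len(b)]):
        packet = xor_encode([a, b])
        # a client already holding item a recovers b by XOR-ing the packet with a
        recovered = bytes(x ^ y for x, y in zip(packet, a.ljust(len(packet), b"\x00")))
        assert recovered.rstrip(b"\x00") == b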

Author 1: Romana Rahman Ema
Author 2: Md. Alam Hossain
Author 3: Nazmul Hossain
Author 4: Syed Md. Galib
Author 5: Md. Shafiuzzaman

Keywords: Network coding; scheduling algorithms; CR graph; wireless broadcast; simulation; LTSF; STOBS; performance metric

PDF

Paper 60: Optimality Assessments of Classifiers on Single and Multi-labelled Obstetrics Outcome Classification Problems

Abstract: It is indisputable that clinicians cannot exactly state the outcome of pregnancies through conventional knowledge and methods, even as human knowledge continues to surge. Hence, several computational techniques have been adapted for precise pregnancy outcome (PO) prediction. Obstetric datasets for PO determination exist as single label learning (SLL), multi-label learning (MLL), and multi-target (MTP) problems. There is, however, no single classifier recommended to optimally satisfy the needs of all the classification types. This work therefore identifies six widely used PO classifiers and investigates their performance in all three classification categories, to find the best-performing classifier. The obstetric dataset, exposed to input-rank analysis via Principal Component Analysis, produced thirteen (13) significant features for the experiment. Accuracy, F1-measure, and build/test time were used as evaluation metrics. The decision tree (DT) had average accuracy and F1 scores of 89.23% and 88.23%, respectively, with a 1.0 average rank. Under the MLL configuration, average accuracy (91.71%) and F1 score (94.28%) were highest for the random forest (RF), which had a 1.0 average test-time rank. Using MTP, DT had an average accuracy of 88.80% and an average F1 score of 71.13%, while the multi-layered perceptron (MLP) had the best time cost with an average rank of 2.0. From the results, RF is optimal in terms of accuracy and average rank, while DT is the most efficient in terms of time cost. The comparative analysis of global averages of the six base classifiers shows that RF is the optimal algorithm, with an average accuracy of 87.3% across all three data setups in the study. MLP, on the other hand, had an unexpectedly high time cost, making it unsuitable for similar data classifications if time is the main criterion. It is recommended that the choice of classifier be either RF or DT, depending on the application domain and whether time cost is a major consideration.

Author 1: Udoinyang G. Inyang
Author 2: Samuel A. Robinson
Author 3: Funebi F. Ijebu
Author 4: Ifiok J. Udo
Author 5: Chuwkudi O. Nwokoro

Keywords: Pregnancy outcome; random forest; multi-label learning; comparative analytics; machine learning algorithms; single label learning; maternal outcome prediction; decision tree

PDF

Paper 61: Pixel Value Difference based Face Recognition for Mitigation of Secret Message Detection

Abstract: Data security is an important aspect of the modern digital world, and authentication is necessary to protect data from intruders and hackers. Most existing systems use textual passwords, which provide only a single layer of security. Textual passwords are simple, but they may be prone to spyware and dictionary attacks; hence there is a need for a highly secure, multilayer security method. Steganography, the art of hiding the existence of a message by embedding it into another medium, can be exploited in an authentication system; its emergence has in turn spurred steganalysis to detect hidden information. In this approach, the multimedia file to be transferred over the medium is the input. On the transmitter side, the audio and video are handled separately: a secret audio file is embedded in an audio file using the LSB method, while the face of the authenticated person is embedded into the video frame using the Pixel Value Differencing (PVD) method. At the receiver side, the face is extracted using the reverse PVD method and authenticated using a Convolutional Neural Network based face recognition method. After authentication, the secret audio is extracted using the reverse LSB method. The results show MSE, RMSE, PSNR, and SSIM values of 0.0000045303, 0.0021, 53.5877, and 0.9957, respectively.
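
A minimal sketch of PVD embedding on a single pixel pair, using the classic Wu-Tsai range table; this is the textbook formulation, which may differ in detail from the paper's variant:

    # Classic Wu-Tsai range table: wider pixel-pair differences can carry more bits
    RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

    def embed_pair(p1, p2, bitstream):
        """Embed leading bits of `bitstream` into one pixel pair; returns new pair + rest."""
        d = abs(p2 - p1)
        lo, hi = next(r for r in RANGES if r[0] <= d <= r[1])
        n = (hi - lo + 1).bit_length() - 1       # capacity of this pair in bits
        value = int(bitstream[:n], 2)
        delta = (lo + value) - d                 # shift the difference to encode the bits
        if p1 <= p2:
            p1, p2 = p1 - delta // 2, p2 + (delta + 1) // 2
        else:
            p1, p2 = p1 + (delta + 1) // 2, p2 - delta // 2
        return p1, p2, bitstream[n:]

    p1, p2, rest = embed_pair(60, 80, "10110")   # extraction re-reads |p2 - p1| - lo
    assert abs(p2 - p1) == 16 + int("1011", 2)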

Author 1: Alaknanda S. Patil
Author 2: G. Sundari

Keywords: Audio; face recognition; information security; LSB; steganography; video

PDF

Paper 62: Machine Learning based Optimization Scheme for Detection of Spam and Malware Propagation in Twitter

Abstract: Social networking sites are a new generation of web services providing a global community of users in an online environment. Twitter is one such popular social network, with more than 152 million daily active users making half a billion tweets per day. Owing to its immense popularity, the accounts of legitimate Twitter users are always at risk from spammers and hackers; spam and malware are the most frequently reported threats on the Twitter platform. To preserve privacy and ensure data safety for the online Twitter community, it is necessary to develop a framework to safeguard accounts from such malicious attackers. Machine Learning is a recently matured and widely used technique for preventing the propagation of such malicious activities in social media. Machine Learning based techniques have yielded promising results in filtering undesired content from user tweets, but their efficiency remains restricted by the technological limits of the techniques used. To devise a more efficient model for detecting the propagation of spam and malware on Twitter, this research proposes a Machine Learning based optimization scheme built on a hybrid similarity measure (Cosine and Jaccard) in conjunction with a Genetic Algorithm (GA) and an Artificial Neural Network (ANN). The hybrid Cosine-Jaccard measure is applied to the Twitter dataset to identify tweets containing spam and malware. The GA is used to enhance the training rate and reduce training error by automatically selecting the designed fitness function, while the ANN classifies malicious tweets through a voting rule. Simulation experiments were conducted to compute precision, recall, and F-measure, and the results of the proposed scheme were compared with existing state-of-the-art techniques in this area. The comparative study reveals that the proposed model is faster and more precise than the existing models.
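
A minimal sketch of the hybrid Cosine-Jaccard similarity over tokenized tweets; the equal mixing weight and the toy token vectors are illustrative (the paper tunes the overall pipeline with its GA and ANN stages):

    import math
    from collections import Counter

    def cosine(a, b):
        """Cosine similarity over token-count vectors."""
        num = sum(a[t] * b[t] for t in set(a) & set(b))
        den = (math.sqrt(sum(v * v for v in a.values()))
               * math.sqrt(sum(v * v for v in b.values())))
        return num / den if den else 0.0

    def jaccard(a, b):
        """Jaccard similarity over token sets."""
        sa, sb = set(a), set(b)
        return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

    def hybrid(a, b, w=0.5):
        """Weighted blend of both measures; w is a tunable mixing weight."""
        return w * cosine(a, b) + (1 - w) * jaccard(a, b)

    spam = Counter("win a free prize now".split())
    tweet = Counter("claim your free prize".split())
    print(round(hybrid(spam, tweet), 3))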

Author 1: Savita Kumari Sheoran
Author 2: Partibha Yadav

Keywords: Social networking sites; Twitter; spam; malware; Cosine similarity; Jaccard similarity; genetic algorithm; artificial neural network

PDF

Paper 63: Secure Intruder Information Sharing in Wireless Sensor Network for Attack Resilient Routing

Abstract: Securing the routing process against attacks in wireless sensor networks (WSN) is vital to ensure the reliability of the network. In existing work, a secure attack resilient routing scheme for WSN using a zone-based topology was proposed against message drops, message tampering, and flooding attacks; it protects against attacks by skipping routes through less secure zones. However, the existing work did not consider the detection and isolation of malicious nodes in the zone based wireless sensor network. To solve this issue, we propose enhanced attack resilient routing that detects malicious zones and isolates malicious nodes. We propose a three-tier framework that adopts a sequential probability test to detect and isolate malicious nodes. Attacker information is shared in the network in a secure manner, so that route selection decisions can be made locally, in addition to the attack-resilient route selection provided at the sink. The overhearing rate is calculated for all nodes in each zone to detect blackhole attackers. Simulation results show that the proposed three-tier framework provides more security, reduced network overhead, and an improved packet delivery ratio in WSNs compared with existing works.
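
A minimal sketch of a sequential probability ratio test (SPRT) over overheard forwarding events, as one way to flag blackhole behavior; the drop-rate hypotheses and error bounds below are illustrative assumptions, not the paper's calibrated values:

    import math

    def sprt_step(llr, dropped, p0=0.1, p1=0.5):
        """Accumulate the log-likelihood ratio for one overheard forwarding event.
        p0: drop rate of an honest node, p1: drop rate of a blackhole (assumed)."""
        p_h1 = p1 if dropped else 1 - p1
        p_h0 = p0 if dropped else 1 - p0
        return llr + math.log(p_h1 / p_h0)

    alpha, beta = 0.01, 0.01                       # false-alarm / miss probabilities
    upper, lower = math.log((1 - beta) / alpha), math.log(beta / (1 - alpha))

    llr = 0.0
    for dropped in [1, 0, 1, 1, 0, 1, 1, 1]:       # overheard forwarding results
        llr = sprt_step(llr, dropped)
        if llr >= upper:
            print("malicious: isolate node and share attacker info"); break
        if llr <= lower:
            print("benign: stop testing"); break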

Author 1: Venkateswara Rao M
Author 2: Srinivas Malladi

Keywords: Flooding; malicious zone; network overhead; overhearing rate; packet delivery ratio

PDF

Paper 64: Mobile Technologies’ Utilization and Competency among College Students

Abstract: The focus of this study is to (1) determine the level of mobile technology utilization and competency and (2) report separately the levels of basic operation, communication and collaboration, information seeking, digital citizenship, and creativity and innovation skills in mobile technologies among first-year undergraduate students in the College of Computer Science and Information Technology at Hadhramout University. The sample consists of 148 freshman students. Using descriptive statistical analysis, the results reveal that undergraduates in the College of Computer Science and Information Technology have highly utilized mobile technologies and are highly capable of using these devices. Moreover, undergraduates' competency and utilization levels were especially high for communication purposes, due to certain social and educational reasons. Based on the results of the study, wider implementation of this method in the College of Computer Science and Information Technology and other colleges at Hadhramout University is recommended, along with activation of the university's apps.

Author 1: Mokhtar Hood Bindhorob
Author 2: Khaled Salmen Aljaaidi

Keywords: Mobile technologies; utilization; competency; college students

PDF

Paper 65: Infrastructure Study for Solving Connectivity Problems Through the Nile River

Abstract: Fiber optic cables present various benefits over regular cables when used as a data transportation medium in today's communication networks. There are significant challenges in the connectivity of inner cities located far inland, away from coastal areas. Most of the networks developed in Africa, and especially in Egypt, are connected via submarine cables running along coastal areas; very few connections are constructed to connect inner cities by crossing the Nile. The Nile River is characterized by a wide area, offering a natural path for laying underwater cables. In this study, laying these cables along the bed of the Nile River in Egypt, rather than crossing it, is analyzed and evaluated. There are many issues with laying fiber optic cables across the Nile River; among them, each crossing requires additional nodes on the fiber optic cable, and as the number of nodes increases, the cost of installation and the drilling effort increase with each node. The fiber optic cable path along the Nile River is simulated with a numerical model (Delft3D). Two different scenarios for laying cables were applied and analyzed to evaluate the effect of the predicted water surface and sediment profiles on the fiber optic cable path. Based on the results obtained, a fiber-optic network infrastructure is proposed to solve connectivity problems by laying fiber optic cables along the Nile River.

Author 1: Noha Kamal
Author 2: Ibrahim Gomaa

Keywords: Communications; optical fiber cables; delft3d; underwater / river crossing cable

PDF

Paper 66: A Meta Analysis of Attention Models on Legal Judgment Prediction System

Abstract: Artificial Intelligence in legal research is transforming the legal area in manifold ways. The pendency of court cases is a long-standing problem in the judiciary, owing to reasons such as the lack of judges, the lack of technology in legal services, and legal loopholes. The judicial system has to become more competent and more reliable in providing justice on time, and one major cause of pending cases is the lack of legal intelligence to assist litigants. The study in this paper reviews the challenges that lengthy case facts pose for judgment prediction systems built on deep learning models. A legal judgment prediction system can help lawyers, judges, and civilians predict the win or loss rate, the punishment term, and the applicable law articles for new cases. The paper also reviews the current encoder-decoder architecture with the attention mechanism of the transformer model, which can be used for legal judgment prediction. Natural Language Processing using deep learning is a growing field, and there is a need for research evaluating the current state of the art at the intersection of good text processing and feature representation with deep learning models. This paper aims to provide a systematic review of existing methods used in legal judgment prediction systems, and of the Hierarchical Attention Network model in detail. These techniques can also be used in other applications such as legal document classification, sentiment analysis, news classification, text translation, and medical reports.
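
A minimal numpy sketch of the additive attention used in Hierarchical Attention Networks: each encoded position is scored against a learned context vector, and the softmax weights pool the sequence into one vector; the shapes and random parameters are illustrative:

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def attention_pool(H, W, b, u):
        """HAN-style additive attention: u_t = tanh(W h_t + b), alpha = softmax(u_t . u),
        then the sequence is pooled as the alpha-weighted sum of hidden states."""
        scores = np.tanh(H @ W + b) @ u
        alpha = softmax(scores)
        return alpha @ H, alpha

    T, d = 6, 8                                     # 6 words, 8-dim encodings
    rng = np.random.default_rng(0)
    H = rng.normal(size=(T, d))
    vec, alpha = attention_pool(H, rng.normal(size=(d, d)), np.zeros(d), rng.normal(size=d))
    print("attention weights:", np.round(alpha, 2))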

Author 1: G. Sukanya
Author 2: J.Priyadarshini

Keywords: Legal judgment prediction; hierarchical attention neural network; text processing; transformer

PDF

Paper 67: Development of a Virtual Pet Simulator for Pain and Stress Distraction for Pediatric Patients using Intelligent Techniques

Abstract: Pediatric medical procedures are often stressful and painful for children, who may resist and complicate the work of doctors and nurses. This research aims to develop a virtual pet simulator to distract pediatric patients from pain and stress using intelligent techniques. The methodology used is SUM. The primary data for the development of the simulator were gravity, the player's position, speed, and mass, used to calculate the predictive physics of the toy with which the player interacts with the pet. As part of the intelligent techniques, the A-star algorithm was used so the pet can follow the user, and the flocking algorithm was used to give a group of animals natural behavior and thus a higher level of immersion. Trials were conducted with pediatric patients: those who used the virtual pet simulator during the medical procedure felt less pain and stress than those who did not. Therefore, alternatives such as the one developed are highly recommended for reducing pain and stress in pediatric patients.
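
A minimal grid-based A* sketch with a Manhattan heuristic, of the kind used for the pet-follows-player behavior; the grid and obstacle layout are illustrative:

    import heapq

    def a_star(grid, start, goal):
        """Grid A* with a Manhattan heuristic (1 = blocked cell)."""
        def h(p):
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        frontier, came, g = [(h(start), start)], {}, {start: 0}
        while frontier:
            _, cur = heapq.heappop(frontier)
            if cur == goal:                          # rebuild the path backwards
                path = [cur]
                while cur in came:
                    cur = came[cur]
                    path.append(cur)
                return path[::-1]
            for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                nxt = (cur[0] + dy, cur[1] + dx)
                if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                        and not grid[nxt[0]][nxt[1]] and g[cur] + 1 < g.get(nxt, 1 << 30)):
                    g[nxt], came[nxt] = g[cur] + 1, cur
                    heapq.heappush(frontier, (g[nxt] + h(nxt), nxt))
        return None

    room = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]         # 1 = obstacle between pet and player
    print(a_star(room, (0, 0), (2, 0)))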

Author 1: Angie Solis-Vargas
Author 2: Iam Contreras-Alcázar
Author 3: Jose Sulla-Torres

Keywords: Virtual pet; pediatric patients; pain; stress; smart techniques; A-star algorithm; flocking algorithm

PDF

Paper 68: Identifying Communication Issues Contributing to the Formation of Chaotic Situation: An AGSD View

Abstract: Software can be constructed in many different contexts using various development approaches: Global Software Development (GSD), Agile Software Development (ASD), and, in a globally distributed way, Agile Global Software Development (AGSD), a coalescence of GSD and ASD. GSD is becoming increasingly important. Although communication is important for sharing information between team members, multi-site software development adds barriers such as different time zones and cultures and disparate IT infrastructure, and it delays communication activities that are already problematic. In Agile Global Software Development (AGSD), communication is even more critical and plays a primary role in interaction. The aim of this paper is to tackle the chaos problems associated with AGSD. We gathered knowledge from previous works and from web reviews worldwide through a literature review. The chaos issues are then illustrated using a conceptual model, tabulated by author, and discussed, and we identify the most and least discussed issues in the literature. Defining the chaos issues is necessary to illustrate the genuine problems that exist in AGSD.

Author 1: Hina Noor
Author 2: Babur Hayat Malik
Author 3: Zeenat Amjad
Author 4: Mahek Hanif
Author 5: Sehrish Tabussum
Author 6: Rahat Mansha
Author 7: Kinza Mubasher

Keywords: Chaotic situation; chaos; issues; communication; agile; distributed software development; global distributed software development; communication challenges; AGSD

PDF

Paper 69: Urban Addressing Practices and Geocoding Algorithm Validity in Developing Countries

Abstract: Addressing systems have a key role in understanding and managing economic connections and social conditions, especially in urban territories. Developing countries need to learn from previous experiences and adapt solutions and techniques to their local contexts. A review of the World Bank's experience in addressing cities in Africa during the 1990s provides valuable lessons: it gives an understanding of the operational issues and the key success factors of such operations, and it helps to understand the conceptual components of these systems and the efforts required to build them in the field before the creation of their IT infrastructure. An addressing experience from a private sector initiative in Casablanca, Morocco is also reviewed, where efforts concern the creation of a comprehensive database of addresses; the methods used to collect the data in the field are presented, as is the conceptual model for its integration. The validity of geocoding techniques, which represent the core computing tools of addressing systems, is discussed. In the Moroccan context, the official addressing rules follow Western models and standards, which are used by default in geocoding algorithms. The study of data collected in Casablanca, processed with GIS tools and algorithms, shows that the percentage of cases not respecting these rules is far from negligible. The analysis focused in particular on the two main criteria for address numbers, "parity" and "respect of intervals", analyzed by street segment. Compliance with these conditions was observed in only about 53% of cases. It is concluded that a geocoding system based on a linear model is not sufficiently valid in the Moroccan context.
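
The two compliance rules analyzed per street segment can be expressed compactly in code. The sketch below is a hedged illustration of such a check, assuming the Western convention of odd numbers on one side and even on the other, with numbers increasing along the segment; the data layout is invented, not the study's GIS schema.

```python
def segment_compliant(addresses):
    """addresses: (house_number, side) pairs for one street segment,
    side in {"odd_side", "even_side"}; True if parity and interval rules hold."""
    for side, want_even in (("odd_side", False), ("even_side", True)):
        nums = [n for n, s in addresses if s == side]
        # parity rule: one side of the street odd, the other even
        if any((n % 2 == 0) != want_even for n in nums):
            return False
        # interval rule: numbers must increase monotonically along the segment
        if nums != sorted(nums):
            return False
    return True

print(segment_compliant([(1, "odd_side"), (3, "odd_side"),
                         (2, "even_side"), (4, "even_side")]))  # True
print(segment_compliant([(2, "odd_side"), (1, "even_side")]))   # False (parity)
```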

Author 1: Mohamed El Imame MALAAININE
Author 2: Hatim LECHGAR

Keywords: Addressing system; geocoding; Geographic Information System (GIS)

PDF

Paper 70: A Self Supervised Defending Mechanism Against Adversarial Iris Attacks based on Wavelet Transform

Abstract: In biometric applications, deep neural networks have delivered significant improvements. However, when presented with carefully designed inputs known as adversarial examples, their performance is severely degraded. Such attacks are termed adversarial attacks, and any biometric security system is greatly affected by them. In the proposed work, an effective defensive mechanism has been developed against adversarial attacks introduced into iris images. The proposed mechanism follows the concept of wavelet-domain processing and investigates the mid- and high-frequency components of the wavelet domain; based on these, the model produces various denoised copies of the input iris images. The proposed strategy denoises each sub-band of the wavelet domain and assesses the sub-bands most likely to be affected by the adversary using the reconstruction error measured for each sub-band. We tested the effectiveness of the proposed adversarial protection mechanism against various attack methods and compared the results with other state-of-the-art defense approaches.
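
To make the wavelet-domain idea concrete, the sketch below decomposes an image, soft-thresholds the detail sub-bands, reconstructs, and measures the reconstruction error, using the PyWavelets library. The wavelet choice, decomposition level, and threshold are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
import pywt

def denoise_and_score(img, wavelet="db4", level=2, thr=0.1):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    denoised = [coeffs[0]]                   # keep low-frequency approximation
    for cH, cV, cD in coeffs[1:]:            # mid/high-frequency sub-bands
        denoised.append(tuple(pywt.threshold(c, thr, mode="soft")
                              for c in (cH, cV, cD)))
    rec = pywt.waverec2(denoised, wavelet)
    rec = rec[:img.shape[0], :img.shape[1]]  # crop any padding
    return rec, float(np.mean((img - rec) ** 2))  # reconstruction error (MSE)

img = np.random.rand(64, 64)                 # stand-in for an iris image
rec, err = denoise_and_score(img)
print(round(err, 6))
```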

Author 1: Meenakshi K
Author 2: G. Maragatham

Keywords: Iris classification; deep neural networks; adversarial attack; defense method; wavelet processing; biometrics

PDF

Paper 71: Acquisition of Positional Accuracy with Comparative Analysis of GPS and EGNOS in Urban Constituency

Abstract: Over the years, precise positioning has been the ultimate goal of satellite navigation systems. The American Global Positioning System (GPS) delivers position and time information for various sectors such as vehicle tracking, oil exploration, atmospheric studies, astronomical telescope pointing, and airport and harbor security tracking. Its technological competitors, the Russian Global Navigation Satellite System (GLONASS), the European Union's GALILEO, China's BeiDou, and the Japanese Quasi-Zenith Satellite System (QZSS), are other satellite navigation systems, several of which are complemented by Satellite-Based Augmentation Systems (SBAS). Nevertheless, strict security requirements, geographical conditions, and the proliferation of diverse electronic gadgets in indoor and outdoor surroundings make it critical to acquire data about any vicinity with seamless accessibility, accuracy, and integrity over satellite links. In this paper, positional accuracy is tested through an analysis of EGNOS, EDAS, and simple GPS receiver models in the city of Rome, Italy. To support the results, various real-time experiments have been performed with the SiRF Demo GPS receiver software. The test was conducted on board a car carrying a laptop equipped with a GPS receiver plus supportive SBAS (EGNOS in particular) along three diverse local bus routes, and the outcomes of several samples taken inside the Rome city center are reported to check the availability of the desired satellite signals. Subsequently, a comparative analysis was carried out between simple GPS data and GPS + EGNOS data collected during daytime traffic. The strength of the test signals reveals the accuracy of EGNOS in open terrain with less congestion. Furthermore, Asian and European advanced GPS systems are compared in terms of performance as well as the feasibility of authentic, accurate, and swift satellite navigation.

Author 1: Zeeshan Ali
Author 2: Riaz Ahmed Soomro
Author 3: Faisal Ahmed Dahri
Author 4: Muhammad Mujtaba Shaikh

Keywords: Differential GPS; augmentation; EGNOS; EDAS; on-board equipment; urban and positional accuracy

PDF

Paper 72: Review of Public Procurement Fraud Detection Techniques Powered by Emerging Technologies

Abstract: Numerous studies and various methods have been used to detect and prevent corruption in public procurement. With the development of IT and the resulting digitization of the Public Procurement Process (PPP), the amount of available data is increasing. Studies have shown progress in this area and have revealed many challenges and open issues related to the various goals outlined in this paper. Different data mining and business intelligence techniques and methods are being used to develop models that flag suspicious public procurement processes, contracts, or economic operators, or that classify observations as corrupt. In addition to classification models, methods such as association rules and graph databases are used to find relationships between economic operators and contracting authorities, as well as to find subsidiary companies that participate in PPP collusion. Therefore, this paper offers a comprehensive review of the emerging techniques and models used for the detection of suspicious or corrupted observations, their goals, open issues, challenges, methods and metrics used, tools, and relevant data sources. The findings show that models are mostly fitted on historical data and are moving in the direction of early warning systems. Moreover, the efficiency of fraud or anomaly detection depends on data set quality and on detecting the most important red flags. The study presents a summary of identified fraud detection model objectives, such as predicting fraud risk in contracts and contractors or finding split purchases, and of the data sources used, such as public procurement process or economic operator data.

Author 1: Nikola Modrušan
Author 2: Kornelije Rabuzin
Author 3: Leo Mršic

Keywords: Public procurement; fraud detection techniques; corruption detection; fraud detection review; fraud data source

PDF

Paper 73: Mobile-based Decision Support System for Poultry Farmers: A Case of Tanzania

Abstract: Poultry farms in Tanzania are characterized by inadequate management practices which are mainly caused by the lack of adequate systems to guide the small-scale poultry farmers in decision making. It is well-established that information is a key factor in making effective decisions in numerous sectors including poultry farming. Furthermore, various researchers have identified the use of mobile decision support tools to be an effective way of aiding farmers in making informed decisions. In this paper, we present a mobile-based decision support system that will aid rural and small-scale poultry farmers in Tanzania to obtain reliable information that is crucial for making proper decisions in their farming activities. In this context, a mobile-based decision support system was achieved through a mobile application integrated with a chatbot assistant to provide a solution to various poultry farming-related problems and simplify their decision-making process. We used a data-driven approach towards developing an informational chatbot assistant for Android smartphones that is capable of interacting with small-scale poultry farmers through natural conversations by utilizing the RASA framework.

Author 1: Martha Shapa
Author 2: Lena Trojer
Author 3: Dina Machuve

Keywords: Decision support system; chatbot; mobile application; poultry farming; data-driven approach

PDF

Paper 74: Using Behaviour-driven Requirements Engineering for Establishing and Managing Agile Product Lines

Abstract: Requirements engineering in agile product line engineering addresses both the common and the variable components that constitute a software product. Although it is conventional for requirements engineering to take place in a dedicated upfront domain analysis phase, agile environments reject such a proactive practice. This paper provides an observational study examining a reactive, incremental requirements engineering approach called behaviour-driven requirements engineering. The proposed approach uses behaviour-driven development to establish and maintain agile product lines. The findings of the study are very promising and suggest the following: the approach is easy to understand and quick to learn; the approach supports the constantly changing nature of software development; and using behaviour-driven requirements engineering produces reliable and coherent requirements. In practice, the observational study showed that the proposed approach saved time for the development team and customers, decreased costs, improved software quality, and shortened the time-to-market.
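
As a flavor of what a behaviour-driven requirement looks like in practice, the sketch below binds a Gherkin scenario to executable steps with the Python "behave" library. The feature, step names, and discount logic are invented for illustration and are not taken from the study; behave expects the feature text in a features/ directory alongside this steps file.

```python
# features/steps/checkout_steps.py
#
# features/checkout.feature (companion file):
#   Feature: Checkout in a product-line variant with discounts enabled
#     Scenario: Member gets a discount
#       Given a cart totalling 100.0
#       When a 10 percent member discount is applied
#       Then the total should be 90.0
from behave import given, when, then

@given("a cart totalling {amount:f}")
def step_cart(context, amount):
    context.total = amount

@when("a {pct:d} percent member discount is applied")
def step_discount(context, pct):
    context.total *= (100 - pct) / 100.0

@then("the total should be {expected:f}")
def step_check(context, expected):
    assert abs(context.total - expected) < 1e-9
```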

Author 1: Heba Elshandidy
Author 2: Sherif Mazen
Author 3: Ehab Hassanein
Author 4: Eman Nasr

Keywords: Agile product line engineering; behaviour-driven requirements engineering; observational study; requirements engineering

PDF

Paper 75: Detecting Generic Network Intrusion Attacks using Tree-based Machine Learning Methods

Abstract: The development of Intrusion Detection Systems (IDS) has a solid impact in mitigating internal and external cyber threats, among other cybersecurity methods. The machine learning-based approach to IDS has proven effective for detecting either anomalies or multiple classes of intrusion. When a single IDS model is used to detect various types of intrusion, it is found that high overall accuracy does not translate into high accuracy for each attack type. Some intrusion attacks share similarities with other attacks and thereby evade detection; one of these is the generic attack, whose notoriety lies in the ability of a single attack to compromise an entire family of block ciphers. Therefore, this study proposes a machine learning framework to specifically detect generic network intrusion by implementing two (2) decision tree algorithms. The decision tree methods were developed using two distinct variants, namely the J48 and Random Tree algorithms. A balanced generic network dataset was curated and used for model development. A 10-fold cross-validation technique was implemented for model development and performance evaluation, and all obtainable performance scores were extracted and presented. The performance of the decision tree methods for generic network intrusion attack detection was comparatively analyzed and also evaluated against existing methods. The proposed methods are robust, stable, and empirically shown to outperform existing methods.
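
Since the paper's J48 and Random Tree variants come from Weka, a rough scikit-learn equivalent of the setup, two decision-tree variants evaluated with 10-fold cross-validation, is sketched below; the synthetic data stands in for the curated, balanced generic-attack dataset, and the hyperparameters are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# placeholder for the balanced generic-attack dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

j48_like = DecisionTreeClassifier(criterion="entropy", random_state=42)
random_tree_like = DecisionTreeClassifier(splitter="random", random_state=42)

for name, clf in (("J48-like", j48_like), ("RandomTree-like", random_tree_like)):
    scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")  # 10-fold CV
    print(name, round(scores.mean(), 3))
```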

Author 1: Yazan Ahmad Alsariera

Keywords: Generic attack; decision trees; cybersecurity; intrusion detection

PDF

Paper 76: An Extensive Analysis of the Vision-based Deep Learning Techniques for Action Recognition

Abstract: Action recognition involves localizing and classifying actions in a video over a sequence of frames; it can be thought of as an image classification task extended temporally, where the information obtained over a multitude of frames is aggregated to produce the action classification output. Applications of action recognition systems range from assistance for healthcare systems to human-machine interaction. Action recognition has proven to be a challenging task, as it poses many impediments including high computation cost, capturing extended context, designing complex architectures, and a lack of benchmark datasets. Increasing the efficiency of human action recognition algorithms can significantly improve the probability of implementing them in real-world scenarios. This paper summarizes the evolution of action localization, classification, and detection algorithms applied to data from vision-based sensors. We also review the datasets that have been used for the action classification, localization, and detection process, and further explore the areas of action classification and temporal and spatiotemporal action detection, which use convolutional neural networks, recurrent neural networks, or a combination of both.

Author 1: Manasa R
Author 2: Ritika Shukla
Author 3: Saranya KC

Keywords: Action recognition; deep learning; vision sensors; convolution neural networks (CNN); recurrent neural networks (RNN); action classification; temporal action detection; spatiotemporal action detection

PDF

Paper 77: Evaluation of Sentiment Analysis based on AutoML and Traditional Approaches

Abstract: AutoML, or Automated Machine Learning, is a set of tools that reduces or eliminates the skills a data scientist needs to build machine learning or deep learning models. These tools can automatically discover machine learning models and pipelines for a given dataset with very little user interaction. The concept arose because developing a machine learning or deep learning model with traditional methods is time-consuming and sometimes challenging even for experts. Moreover, present AutoML tools are used in many areas, such as image processing and sentiment analysis. In this research, the authors evaluate the implementation of sentiment analysis classification models based on AutoML and traditional approaches, using both deep learning and machine learning. To implement the sentiment analysis models, HyperOpt-Sklearn and TPOT were used as AutoML libraries and Scikit-learn as the traditional method; for the deep learning models, the Keras and Auto-Keras libraries were used. In the implementation process, two binary classification and two multi-class classification models were built using the above-mentioned libraries, and the findings of the AutoML and traditional approaches were then evaluated. The authors found that building a machine learning or deep learning model manually is better than using an AutoML approach.
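
To illustrate the AutoML side of such a comparison, the sketch below uses TPOT, one of the libraries named above, to search for a pipeline automatically; the synthetic dataset and search budget are placeholders, not the sentiment corpora or settings used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = make_classification(n_samples=500, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

automl = TPOTClassifier(generations=3, population_size=10,
                        random_state=0, verbosity=0)
automl.fit(X_tr, y_tr)             # evolves models and pipelines automatically
print(automl.score(X_te, y_te))    # accuracy of the best pipeline found
automl.export("best_pipeline.py")  # emits the pipeline as scikit-learn code
```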

Author 1: K. T. Y. Mahima
Author 2: T. N. D. S. Ginige
Author 3: Kasun De Zoysa

Keywords: Automated machine learning; sentiment analysis; deep learning; machine learning

PDF

Paper 78: Detecting Hate Speech using Deep Learning Techniques

Abstract: Social networking sites have seen a steep rise in the number of users in the last few years. As a result, interaction among users has also increased considerably, and along with it the posting of hostile comments based on caste, race, gender, religion, and so on. This propagation of negative messages is collectively known as hate speech. Posts containing negative comments on social networking sites often create law-and-order situations in society, leading to loss of human life and property. Detecting hate speech is one of the major challenges of recent times, and researchers in the fields of Natural Language Processing and Machine Learning have done a considerable amount of work in this area. This paper uses a simple upsampling method to balance the data and implements deep learning models, namely Long Short-Term Memory (LSTM) and Bi-directional Long Short-Term Memory (Bi-LSTM), for improved accuracy in detecting hate speech on social networking sites. LSTM was found to have better accuracy than Bi-LSTM for the dataset considered, as well as better precision and F1 score; Bi-LSTM scored higher only on recall.
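
The two architectures compared here differ by a single wrapper layer, as the hedged Keras sketch below shows; the vocabulary size, sequence length, and layer widths are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Bidirectional, Dense, Embedding, LSTM

VOCAB, MAXLEN = 20000, 100   # assumed vocabulary size and padded length

def build(bidirectional=False):
    model = Sequential([
        Embedding(VOCAB, 128),
        Bidirectional(LSTM(64)) if bidirectional else LSTM(64),
        Dense(1, activation="sigmoid"),        # hate speech vs. not
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

x = np.zeros((2, MAXLEN), dtype="int32")       # dummy padded token batch
print(build(False)(x).shape)                   # LSTM variant -> (2, 1)
print(build(True)(x).shape)                    # Bi-LSTM variant -> (2, 1)
```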

Author 1: Chayan Paul
Author 2: Pronami Bora

Keywords: Bi-directional Long Short Term Memory (Bi-LSTM); deep learning; hate speech; Long Short Term Memory (LSTM); text classification

PDF

Paper 79: Design and Implementation of a Strong and Secure Lightweight Cryptographic Hash Algorithm using Elliptic Curve Concept: SSLHA-160

Abstract: Cryptographic hash functions play a fundamental role in many parts of cryptographic algorithms and protocols, particularly in authentication, non-repudiation, and data integrity services. A cryptographic hash function takes as input a message of arbitrary size and produces a fixed, small-size hash code as output. In the proposed SSLHA-160 (a strong and secure lightweight cryptographic hash algorithm), each 512-bit block of a message is first reduced to 256 bits and then partitioned into eight equal blocks of 32 bits each; each 32-bit block is further divided into two 16-bit sub-blocks. These two sub-blocks act as two points of an elliptic curve, which are used to compute a new 16-bit point. The new point values are subsequently processed to produce the message digest. SSLHA-160 is easy to construct, simple to implement, and exhibits a strong avalanche effect when compared with SHA-1, RIPEMD-160, and MD5.
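
The avalanche effect claimed above can be quantified by flipping one input bit and counting how many digest bits change (about half is ideal). The sketch below measures this for the reference hashes available in Python's hashlib; SSLHA-160 itself is not publicly available here, so it is not included.

```python
import hashlib

def avalanche(hash_name, msg: bytes) -> float:
    """Fraction of digest bits that flip when one input bit is flipped."""
    digest_int = lambda m: int.from_bytes(hashlib.new(hash_name, m).digest(), "big")
    flipped = bytes([msg[0] ^ 0x01]) + msg[1:]   # flip the lowest bit of byte 0
    diff = digest_int(msg) ^ digest_int(flipped)
    digest_bits = hashlib.new(hash_name).digest_size * 8
    return bin(diff).count("1") / digest_bits

msg = b"an arbitrary message to be hashed and perturbed"
for name in ("sha1", "md5"):
    print(name, round(avalanche(name, msg), 3))  # ~0.5 indicates strong avalanche
```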

Author 1: Bhaskar Prakash Kosta
Author 2: Pasala Sanyasi Naidu

Keywords: Cryptography hash function; message digests; authentication; elliptic curve concepts

PDF

Paper 80: Heart Diseases Prediction for Optimization based Feature Selection and Classification using Machine Learning Methods

Abstract: Globally, heart disease is considered the major cause of death; statistics indicate that 17.9 million people lose their lives to it every year worldwide, with Chronic Kidney Disease (CKD) and breast cancer taking the next positions on the list. Disease classification is therefore an important issue that needs more attention, and using an optimized technique for such classification is a better option. In this heart disease classification, feature selection was initially done using Teaching Learning based Optimization (TLO) and Kernel Density (KD). TLO is based on the process of classroom teaching, which involves many iterations and leads to high time complexity; similarly, a certain level of misclassification has been observed when using KD. In the proposed method, K-Nearest Neighbour (KNN) is used to address the issue of NaN values, and Density based Modified Teaching Learning based Optimization (DMTLO) is used for feature selection. Finally, classification is done using a Support Vector Machine (SVM) and an ensemble (AdaBoost) method. SVM categorizes data into dissimilar classes by defining a group of support vectors, part of the set of training inputs, that span a hyperplane in the attribute space. The ensemble method is used to address statistical, computational, and representational problems. Experimental outcomes prove that the proposed DMTLO outperforms the existing methodologies with the required number of attributes.
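
The imputation-plus-classification stage described above can be sketched with scikit-learn as below: KNN-based filling of NaN values followed by SVM and AdaBoost classifiers. The breast cancer dataset and the injected missing values are stand-ins, and the DMTLO feature-selection step is omitted, so this is an illustrative pipeline rather than the paper's method.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.impute import KNNImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # stand-in medical dataset
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.05] = np.nan       # inject 5% missing (NaN) values

for name, clf in (("SVM", SVC(kernel="rbf")),
                  ("AdaBoost", AdaBoostClassifier(n_estimators=100))):
    pipe = make_pipeline(KNNImputer(n_neighbors=5), StandardScaler(), clf)
    print(name, round(cross_val_score(pipe, X, y, cv=5).mean(), 3))
```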

Author 1: N. Rajinikanth
Author 2: L. Pavithra

Keywords: Teaching learning based optimization; kernel density; support vector machine; k-nearest neighbour; ensemble learning

PDF

Paper 81: Face Recognition based on Convolution Neural Network and Scale Invariant Feature Transform

Abstract: Recently, Face Recognition (FR) has received wide attention from both the research community and cyber security companies. Low recognition accuracy is considered a main challenge when employing Artificial Intelligence (AI) for FR. In this work, the Scale Invariant Feature Transform (SIFT) and Convolutional Neural Network (CNN) feature extraction methods are used to build an AI-based classifier. The CNN extracts features through its convolutional and pooling layers, while SIFT extracts features based on the scale space, orientations, and histograms of points of interest. The features extracted by the CNN and SIFT methods are used as inputs to a KNN classifier. The experimental results with 400 images of 40 persons, of which 240 images are randomly chosen as the training set and 160 as the test set, demonstrate in terms of accuracy, sensitivity, and error rate that the CNN-based KNN classifier achieves better results than the SIFT-based KNN classifier (accuracy = 97%, sensitivity = 93%, error rate = 3%).
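
The SIFT branch of the comparison can be sketched as below: OpenCV extracts keypoint descriptors, which are pooled into one fixed-length vector per face and classified with KNN. Average pooling of descriptors is an illustrative choice, and the random images are stand-ins for the face dataset, so this may differ from the paper's exact feature construction.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

sift = cv2.SIFT_create()

def sift_feature(gray_img):
    _, desc = sift.detectAndCompute(gray_img, None)
    if desc is None:                     # no keypoints detected
        return np.zeros(128, dtype=np.float32)
    return desc.mean(axis=0)             # pool N x 128 descriptors into one vector

# stand-ins for grayscale face images (e.g. 92 x 112 pixels) and identity labels
train_imgs = [np.random.randint(0, 256, (92, 112), np.uint8) for _ in range(8)]
train_labels = [i % 4 for i in range(8)]

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit([sift_feature(im) for im in train_imgs], train_labels)
test_img = np.random.randint(0, 256, (92, 112), np.uint8)
print(knn.predict([sift_feature(test_img)]))
```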

Author 1: Jamilah ALAMRI
Author 2: Rafika HARRABI
Author 3: Slim BEN CHAABANE

Keywords: Face recognition; training; testing; CNN; SIFT; accuracy; classifier

PDF

Paper 82: Regression Test Case Prioritization: A Systematic Literature Review

Abstract: Test Case Prioritization (TCP) techniques are used to reduce the cost of regression testing, whose objective is to ensure that modifications to the target code do not impact the functionality of the updated software. The effectiveness of TCP is measured in terms of cost, code coverage, and fault detection ability, yet the regression testing techniques proposed so far focus on only one or two of these parameters. In this paper, we present a state-of-the-art review of the approaches used in regression testing in detail; a second objective is to combine these adequacy measures into a single- or multi-objective TCP task. This systematic literature review identifies the state-of-the-art research in regression TCP from 2007 to 2020. The review identifies fifty-two (52) relevant studies that focus on these three selection parameters to justify their findings. The results reveal six families of regression TCP, with meta-heuristic regression TCP reported in 38% of studies and generic regression TCP techniques in 31%. The parameters used as prioritization criteria were cost, code coverage, and fault detection ability: code coverage is reported in 38% of studies, cost in 17%, and cost together with code coverage in 31%. Three dataset sources were identified: the Software-artifact Infrastructure Repository (SIR), the Apache Software Foundation, and GitHub. The measurements and metrics used to validate effectiveness are inclusiveness, precision, recall, and retest-all.

Author 1: Ali Samad
Author 2: Hairulnizam Mahdin
Author 3: Rafaqut Kazmi
Author 4: Rosziati Ibrahim

Keywords: Software testing; regression testing; test case prioritization; cost; code coverage; fault detection ability

PDF

Paper 83: A Complexity Survey on Density based Spatial Clustering of Applications of Noise Clustering Algorithms

Abstract: Data clustering is an interesting field of unsupervised learning that has been extensively used and discussed in research papers and scientific studies. It handles several issues related to data analysis by grouping similar entities into the same set. To date, many algorithms have been developed for clustering using several techniques, including centroid, density, and dendrogram approaches. There are nowadays more than 100 diverse algorithms and many enhancements for each, so data scientists still struggle to find the best clustering method among this diversity of techniques. In this paper, we present a survey of the DBSCAN algorithm and its enhancements with respect to time requirements. A significant comparison of DBSCAN versions is also provided to help data scientists decide which version of DBSCAN to use.
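
As a reference point for the surveyed variants, baseline DBSCAN usage via scikit-learn (a stand-in implementation) is shown below; eps and min_samples are the two parameters that most enhancements revisit to cut the running time or improve cluster quality.

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=500, noise=0.05, random_state=0)
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
print(set(labels))   # cluster ids found; label -1 marks noise points
```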

Author 1: Boulchahoub Hassan
Author 2: Rachiq Zineb
Author 3: Labriji Amine
Author 4: Labriji Elhoussine

Keywords: Unsupervised learning; clustering; density clustering; DBSCAN

PDF

Paper 84: Particle Physics Simulator for Scientific Education using Augmented Reality

Abstract: In this era of the fourth industrial revolution, young learners need to be equipped with 21st century skills, such as critical thinking, creativity, communication, collaboration, innovation, and problem solving. Augmented Reality (AR) based learning systems are an effective tool for embedding these skills. This paper presents a detailed review of the latest research on AR-based learning systems. Furthermore, an AR-based learning system is proposed to demonstrate particle physics experiments, i.e., proton-proton collisions and the Higgs field. The proposed learning system's algorithms are developed using the particle system of the Unity 3D software. A Microsoft Kinect sensor is then interfaced with Unity 3D to create an immersive experience. A qualitative analysis of the proposed system against the latest AR-based learning systems is presented, followed by a quantitative analysis of the proposed system. Overall, the results suggest that 85% of the participants recommend the proposed learning system.

Author 1: Hasnain Hyder
Author 2: Gulsher Baloch
Author 3: Khawaja Saad
Author 4: Nehal Shaikh
Author 5: Abdul Baseer Buriro
Author 6: Junaid Bhatti

Keywords: Particle physics; augmented reality; proton-proton collision; Higgs field; interactive classroom; AR in education; AR based lab experiments

PDF

Paper 85: Parallelization Technique using Hybrid Programming Model

Abstract: A multi-core processor is an integrated circuit that contains multiple processing cores. For more than two decades, single-core processors dominated the computing environment. The continuous development of hardware and processors has led to the emergence of high-performance computers able to address complex scientific and engineering programs quickly. Moreover, running code sequentially increases the execution time of huge and complex programs; serial code is converted to parallel code to improve program performance and reduce execution time, so parallelization helps programmers solve computing problems efficiently. This study introduces a novel automatic translation tool that converts serial C++ code into hybrid parallel code. The study analyzes the performance of the proposed S2PMOACC tool using a linear-algebra dense matrix multiplication benchmark. In addition, we introduce Message Passing Interface (MPI) + Open Accelerator (OpenACC) as a hybrid programming model requiring no preliminary knowledge of parallel programming models or dependency analysis of the source code. The research outcomes enhance program performance and decrease implementation time. Moreover, our proposed technique offers better performance than other tools.

Author 1: Abdullah Algarni
Author 2: Abdulraheem Alofi
Author 3: Fathy Eassa

Keywords: Serial code translation; parallel code; C++; hybrid programming model; auto-translation; S2PMOACC

PDF

Paper 86: Fully Convolutional Networks for Local Earthquake Detection

Abstract: Automatic earthquake detection is widely studied as a replacement for manual detection; however, most existing methods are sensitive to seismic noise, so the need for machine and deep learning has become more and more significant. Despite successful applications of Fully Convolutional Networks (FCN) in many different fields, to the best of our knowledge they have not yet been applied to earthquake detection. In this paper, we propose an automatic earthquake detection model based on an FCN classifier. We used a balanced subset of the STanford EArthquake Dataset (STEAD) to train and validate the classifier. Each sample from the subset is re-sampled from 100 Hz to 50 Hz and then normalized. We investigated different, widely used feature normalization methods, which consist of normalizing all features to the same range, and showed that feature normalization is not suitable for our data. On the contrary, sample normalization, which consists of normalizing each sample of the dataset individually, improved the accuracy of the classifier by about 16% compared to using raw data: the classifier exceeded 99% accuracy on training data, compared to about 83% when using raw data. To test the efficiency of the classifier, we applied it to real continuous seismic data from the XB network in Morocco and compared the results to our catalog containing 77 earthquakes. The results show that we could detect 75 of the 77 earthquakes contained in the catalog.
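
The preprocessing pipeline described above, re-sampling from 100 Hz to 50 Hz followed by per-sample (rather than per-feature) normalization, might look as follows; the trace length and the peak-scaling variant of sample normalization are assumptions for illustration.

```python
import numpy as np
from scipy.signal import resample

def preprocess(waveform_100hz):
    x = resample(waveform_100hz, len(waveform_100hz) // 2)  # 100 Hz -> 50 Hz
    x = x - x.mean()                       # remove the offset of this sample
    peak = np.abs(x).max()
    return x / peak if peak > 0 else x     # scale this sample alone to [-1, 1]

trace = np.random.randn(6000)              # stand-in: 60 s recorded at 100 Hz
print(preprocess(trace).shape)             # (3000,) i.e. the same 60 s at 50 Hz
```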

Author 1: Youness Choubik
Author 2: Abdelhak Mahmoudi
Author 3: Mohammed Majid Himmi

Keywords: Earthquake detection; fully convolutional networks; data normalization; classification

PDF

Paper 87: A Hybridized Deep Learning Method for Bengali Image Captioning

Abstract: An omnipresent and challenging research topic in computer vision is the generation of captions from an input image. Numerous experiments have been conducted on image captioning in English, but caption generation from images in Bengali is still sparse and in need of refinement; only a few papers have so far worked on image captioning in Bengali. Hence, we propose a standard strategy for Bengali image caption generation on two different sizes of the Flickr8k dataset and on the BanglaLekha dataset, the only publicly available Bengali dataset for image captioning. The Bengali captions produced by our model were compared with Bengali captions generated by other researchers using different architectures. Additionally, we employed a hybrid approach based on InceptionResNetV2 or Xception as the Convolutional Neural Network and a Bidirectional Long Short-Term Memory or Bidirectional Gated Recurrent Unit on the two Bengali datasets, and different combinations of word embeddings were also adopted. Lastly, performance was evaluated using the Bilingual Evaluation Understudy (BLEU) metric, which showed that the proposed model indeed performs better on the Bengali dataset consisting of 4000 images and on the BanglaLekha dataset.

Author 1: Mayeesha Humaira
Author 2: Shimul Paul
Author 3: Md Abidur Rahman Khan Jim
Author 4: Amit Saha Ami
Author 5: Faisal Muhammad Shah

Keywords: Bengali image captioning; hybrid architecture; InceptionResNet; Xception

PDF

Paper 88: Hybrid Approaches based on Simulated Annealing, Tabu Search and Ant Colony Optimization for Solving the k-Minimum Spanning Tree Problem

Abstract: In graph theory, the k-minimum spanning tree problem is considered one of the well-known NP-hard problems. This paper addresses it by proposing several hybrid approximate approaches based on combinations of simulated annealing, tabu search, and ant colony optimization algorithms. The performance of the proposed methods is compared to other approaches from the literature using the same well-known library of benchmark instances.
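
For orientation, a generic simulated-annealing loop of the kind these hybrids build on is sketched below; the one-dimensional toy objective and neighbourhood move are invented stand-ins for the k-MST cost function and edge-exchange moves.

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.995, steps=5000):
    x = best = x0
    t = t0
    for _ in range(steps):
        y = neighbour(x)
        delta = cost(y) - cost(x)
        # always accept improvements; accept worse moves with Boltzmann probability
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = y
            if cost(x) < cost(best):
                best = x
        t *= cooling                        # geometric cooling schedule
    return best

cost = lambda x: (x - 3.0) ** 2             # toy objective with minimum at 3
neighbour = lambda x: x + random.uniform(-0.5, 0.5)
print(round(simulated_annealing(cost, neighbour, x0=10.0), 2))
```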

Author 1: El Houcine Addou
Author 2: Abelhafid Serghini
Author 3: El Bekkaye Mermri

Keywords: k-Minimum spanning tree; metaheuristics; simulated annealing; ant colony optimization algorithms; tabu search; approximation algorithms

PDF

Paper 89: Automatic Classification of Preliminary Diabetic Retinopathy Stages using CNN

Abstract: Diabetes mellitus is one of the modern world's most prominent and dominant maladies, and it can later lead to a menacing eye disease called Diabetic Retinopathy (DR). DR is a retinal disease caused by high blood sugar levels in the retina and can naturally progress to irreversible vision loss (blindness). The primary purpose of this research is the early detection and classification of this hazardous condition, to try to prevent threatening complications in the future. In recent years, Convolutional Neural Networks (CNNs) have become exceptionally popular and successful in solving image processing and object detection problems on enormous datasets. In this research, a model is proposed to detect the presence of DR and classify it into 5 distinct stages on a large dataset. The model starts by applying preprocessing techniques such as normalization, to maintain the same dimensions for all images before the main processing stage. Furthermore, diverse augmentation methods such as "Resize & Crop", "Rotation", and "Flipping" were tested in order to pinpoint the best augmentation technique. Finally, the normalized images were fed into a CNN to predict whether a person suffers from DR and to classify the level/stage of the disease. The proposed method was applied to 88,700 retinal fundus images, which are part of the full EyePACS dataset, and achieved 81.12%, 89.16%, and 84.16% for sensitivity, specificity, and accuracy, respectively.

Author 1: Omar Khaled
Author 2: Mahmoud ElSahhar
Author 3: Mohamed Alaa El-Dine
Author 4: Youssef Talaat
Author 5: Yomna M. I. Hassan
Author 6: Alaa Hamdy

Keywords: Diabetes mellitus; diabetic retinopathy; DR; convolutional neural networks (CNNs); image processing

PDF

Paper 90: Smart Home Energy Management System based on the Internet of Things (IoT)

Abstract: The global increasing demand for energy has brought attention to the need for energy efficiency. Markedly noticeable in developing areas, energy challenges can be attributed to the losses in the distribution and transmission systems, and insufficient demand-side energy management. Demand-oriented systems have been widely proposed as feasible solutions. Smart Home Energy Management Systems have been proposed to include smart Internet of Things (IoT)-capable devices in an ecosystem programmed to achieve energy efficiency. However, these systems apply only to already-smart devices and are not appropriate for the many locales where a majority of appliances are not yet IoT-capable. In this paper, we establish the need to pay attention to non-smart appliances, and propose a solution for incorporating such devices into the energy-efficient IoT space. As a solution, we propose Homergy, a smart IoT-based Home Energy Management Solution that is useful for any market, advanced or developing. Homergy consists of the Homergy Box (which is an IoT device with Internet connectivity, an in-built microcontroller and opto-coupled relays), a NoSQL cloud-based database with streaming capabilities, and a secure cross-platform mobile app (Homergy Mobile App). To validate and illustrate the effectiveness of Homergy, the system was deployed and tested in 3 different consumer scenarios: a low-consuming house, a single-user office and a high-consuming house. The results indicated that Homergy produced weekly energy savings of 0.5 kWh for the low-consuming house, 0.35 kWh for the single-user office, and a 13-kWh improvement over existing smart-devices-only systems in the high-consuming house.

Author 1: Emmanuel Ampoma Affum
Author 2: Kwame Agyeman-Prempeh Agyekum
Author 3: Christian Adumatta Gyampomah
Author 4: Kwadwo Ntiamoah-Sarpong
Author 5: James Dzisi Gadze

Keywords: Internet of things; energy efficiency; home control; smart home

PDF

Paper 91: Security, Privacy and Trust in IoMT Enabled Smart Healthcare System: A Systematic Review of Current and Future Trends

Abstract: In the past decades, healthcare has witnessed a swift transformation from a traditional specialist/hospital-centric approach to a patient-centric approach, especially in the smart healthcare system (SHS). This rapid transformation is fueled by advancements in numerous technologies, among which the Internet of Medical Things (IoMT) plays an imperative role in the development of SHS with regard to the productivity of electronic devices as well as reliability and accuracy. Recently, several researchers have shown interest in leveraging the benefits of IoMT for the development of SHS by interconnecting it with existing healthcare services and available medical resources. Though the integration of IoMT with medical resources makes it possible to revolutionize patient healthcare from a reactive to a proactive care system, the security of IoMT is still in its infancy. As IoMT devices are mainly employed to capture extremely sensitive individual health data, their security and privacy are of paramount importance and crucial to safeguarding patient life; breaches could adversely affect a patient's health and, in the worst case, lead to loss of life. Motivated by this crucial requirement, several researchers, in tandem with advancements in IoMT technologies, have continuously made noteworthy progress in tackling the security and privacy issues in IoMT, yet many potential directions remain for future investigation. This necessitates a complete overview of existing security and privacy solutions in the field of IoMT. Therefore, this paper aims to canvass the literature on the most promising state-of-the-art solutions for securing IoMT in SHS, especially in the light of security, privacy protection, authentication and authorization, and the use of blockchain for secure data sharing. Finally, it highlights the review outcome, briefing not only the benefits and limitations of existing security and privacy solutions but also summarizing the opportunities and potential future directions that can drive researchers of the next decade to improve and shape research committed to the safe integration of IoMT in SHS.

Author 1: Thavavel Vaiyapuri
Author 2: Adel Binbusayyis
Author 3: Vijayakumar Varadarajan

Keywords: Smart healthcare system; internet of medical things; authentication and authorization; security and privacy; blockchain; intrusion detection system

PDF

Paper 92: High Speed Single-Stage Face Detector using Depthwise Convolution and Receptive Fields

Abstract: At present, face detectors use large Convolutional Neural Networks (CNN), a widely used sub-area of artificial intelligence, to achieve high detection performance. These face detectors have a large number of parameters, which dreadfully reduces their detection speed on systems with low computational resources. Achieving good performance and high detection speed with finite computational power is a challenging problem. In this paper, we propose a single-stage, end-to-end trained face detector to address it. The computational cost is reduced by using depthwise convolution and by swiftly reducing the size of the input image. The early layers of the model use CReLU (Concatenated Rectified Linear Unit) activations to preserve information and generate better representative features of the input, while the Receptive Field (RF) blocks used in the model improve detection performance. The proposed model is 1.7 Megabytes in size and achieves 42 FPS (Frames Per Second) on a CPU (i5-8330H) and 179 FPS on a GPU (GTX1060). The model is evaluated on benchmark datasets such as WIDER FACE, PASCAL Faces, and AFW and achieves good performance compared to other state-of-the-art methods.
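
The parameter saving from depthwise convolution is easy to see in a side-by-side Keras sketch: a depthwise 3x3 followed by a 1x1 pointwise convolution replaces a full 3x3 convolution. The layer sizes below are illustrative, not the detector's actual configuration.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, DepthwiseConv2D, Input, ReLU

standard = Sequential([Input((64, 64, 32)),
                       Conv2D(64, 3, padding="same")])       # full 3x3 convolution

separable = Sequential([Input((64, 64, 32)),
                        DepthwiseConv2D(3, padding="same"),  # per-channel 3x3
                        ReLU(),
                        Conv2D(64, 1)])                      # 1x1 pointwise mixing

print(standard.count_params())    # 18,496 parameters
print(separable.count_params())   # 2,432 parameters for the same output shape
```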

Author 1: Rahul Yadav
Author 2: Priyanka
Author 3: Priyanka Kacker

Keywords: Artificial intelligence; computer-vision; Convolutional Neural Network (CNN); face detector

PDF

Paper 93: A Novel Framework for Modelling Wheelchairs under the Realm of Internet-of-Things

Abstract: Innovations in research labs are driven to global markets by applied, established standard engineering practices that use state-of-the-art research and most likely result in highly effective and efficient engineered products. As a technology that assists physically challenged people, wheelchairs have attracted researchers across the globe while showing increased demand for higher production. However, wheelchairs in environments implementing Internet-of-Things (IoT) devices have been mostly overlooked in assessments of global market trends. Therefore, this paper proposes an Acceptability Engineering (AE) framework to enhance the growth and expansion of markets relying on environments wherein wheelchairs coordinate with IoT to enable smart technologies. AE as a standard engineering approach would help in evaluating the characteristics of IoT-wheelchair environments, analysing their market trends, and highlighting the deficiencies between early and prevailing markets. This will significantly impact manufacturers who market wheelchairs specific to IoT environments; in addition, manufacturers would be able to identify the potential users of their manufactured products.

Author 1: Sameer Ahmad Bhat
Author 2: Muneer Ahmad Dar
Author 3: Hazem Elalfy
Author 4: Mohammed Abdul Matheen
Author 5: Saadiya Shah

Keywords: Wheelchairs IoT; acceptability engineering; human-centered engineering; innovative technologies; early adopters

PDF

Paper 94: Impact of Mobile Applications for a Lima University in Pandemic

Abstract: The current global pandemic has forced universities to opt for distance education, relying on the digital tools currently available, such as course management platforms like Moodle, videoconferencing applications like Google Meet or Zoom, and instant messaging apps like WhatsApp. This study details how these tools have made virtual education an effective alternative for providing education without a physical space where teachers and students gather. In addition, the paper shows that this form of teaching and learning does not require a computer; a cell phone is enough to access this type of education in Peru, since most of the country's homes have a smartphone. Both students and teachers affirm that, although a little more time is invested than usual, this teaching method is satisfactory. The result obtained is that mobile applications play a very important role in virtual classes, since the vast majority of students use cell phones. In conclusion, for teaching and learning in higher university education with mobile applications, both teachers and students said it was of great help due to the interaction through communication with WhatsApp, Zoom, Google Meet, and others. Moreover, being in constant communication with students through these applications strengthened the teaching.

Author 1: Carlos Diaz-Nunez
Author 2: Gianella Sanchez-Cochachin
Author 3: Yordin Ricra-Chauca
Author 4: Laberiano Andrade-Arenas

Keywords: Higher education; internet connection; mobile applications; pandemic; university

PDF

Paper 95: Deep Convolutional Neural Network for Chicken Diseases Detection

Abstract: For many years, farmers have relied on experts to diagnose and detect chicken diseases; as a result, they lose many domesticated birds due to late diagnosis or a lack of reliable experts. With the tools available from artificial intelligence and machine learning based on computer vision and image analysis, the most common diseases affecting chickens can be identified easily from images of chicken droppings. In this study, we propose a deep learning solution based on Convolutional Neural Networks (CNN) to predict which of three classes chicken faeces belong to. We also leverage pre-trained models and develop a solution for the same problem. Based on the comparison, we show that the model developed from XceptionNet outperforms the other models on all metrics used. The experimental results show the apparent gain of transfer learning: pretraining achieved a validation accuracy above 90%, outperforming the CNN fully trained on the same dataset. In general, the fully trained CNN comes second when compared with the other model. The results show that the pre-trained XceptionNet method has the best overall performance and highest prediction accuracy and is suitable for chicken disease detection applications.

Author 1: Hope Mbelwa
Author 2: Jimmy Mbelwa
Author 3: Dina Machuve

Keywords: Image classification; Convolutional Neural Networks (CNNs); disease detection; transfer learning

PDF

Paper 96: Efficient Lung Nodule Classification Method using Convolutional Neural Network and Discrete Cosine Transform

Abstract: In today's medicine, Computer-Aided Diagnosis (CAD) systems are widely used to improve the accuracy of screening tests for pulmonary nodules. Processing, classification, and detection techniques form the basis of CAD architecture. In this work, we focus on the classification step of a CAD system, where we use the Discrete Cosine Transform (DCT) along with a Convolutional Neural Network (CNN) to build an efficient classification method for pulmonary nodules. Combining DCT and CNN, the proposed method provides high accuracy that outperforms the conventional CNN model.
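
One plausible way to combine the two ingredients is to feed the CNN the low-frequency DCT coefficients of each nodule patch, where most of the signal energy concentrates; the sketch below shows that preprocessing step only, with the patch size and crop size as assumptions (the paper's exact combination may differ).

```python
import numpy as np
from scipy.fft import dctn

def dct_features(patch, keep=16):
    coeffs = dctn(patch, norm="ortho")   # 2-D Discrete Cosine Transform
    return coeffs[:keep, :keep]          # top-left low-frequency block

patch = np.random.rand(64, 64)           # stand-in for a lung CT nodule patch
x = dct_features(patch)
print(x.shape)                            # (16, 16) array to feed the CNN
```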

Author 1: Abdelhamid EL HASSANI
Author 2: Brahim AIT SKOURT
Author 3: Aicha MAJDA

Keywords: Convolutional neural network; discrete cosine transform; pulmonary nodule classification; computer aided diagnosis systems

PDF

Paper 97: Streaming of Global Navigation Satellite System Data from the Global System of Navigation

Abstract: The Big Data phenomenon has driven a revolution in data and has provided competitive advantages in business and science through data analysis. By Big Data, we mean the large volumes of information generated at high speed from various sources, including social networks, sensors on multiple devices, and satellites. One of the main problems in real applications is extracting accurate information from large volumes of unstructured data during streaming. Here, we extract information from data obtained from the GLONASS satellite navigation system. The knowledge acquired in discovering the geolocation of an object has been essential to satellite systems, yet many of these findings have suffered from errors and from the sheer amount of data. The Global Navigation Satellite System (GNSS) umbrella combines several existing navigation and geospatial positioning systems, including the Global Positioning System, GLONASS, and Galileo; we focus on GLONASS because its constellation has 31 satellites. Our research's difficulties are: (a) efficiently handling the amount of data that GLONASS produces, and (b) accelerating the data pipeline with parallelization and dynamic access to the data, since only part of it is structured. This work's main contribution is the streaming of GNSS data from the GLONASS satellite navigation system for GNSS data processing and the dynamic management of metadata. We achieve a three-fold improvement in performance when the program runs with 8 to 10 threads.

Author 1: Liliana Ibeth Barbosa-Santillan
Author 2: Juan Jaime Sanchez-Escobar
Author 3: Luis Francisco Barbosa-Santillan
Author 4: Amilcar Meneses-Viveros
Author 5: Zhan Gao
Author 6: Julio Cesar Roa-Gil
Author 7: Gabriel A. León Paredes

Keywords: GLONASS; streaming; extraction; satellites data; observation files; metadata

PDF

Paper 98: Deep Reinforcement Learning based Handover Management for Millimeter Wave Communication

Abstract: The millimeter wave (mm-wave) band has a broad spectrum capable of delivering multi-gigabit-per-second data rates. However, the band suffers seriously from obstruction and high path loss, resulting in line-of-sight (LOS) and non-line-of-sight (NLOS) transmissions, all of which lead to significant fluctuation in the signal received at the user end. Signal fluctuations present an unprecedented challenge in implementing the fifth generation (5G) use-cases of the mm-wave spectrum. They also increase the user's chances of changing the serving Base Station (BS), a process commonly known as Handover (HO). HO events become frequent in ultra-dense network scenarios, and HO management becomes increasingly challenging as the number of BSs increases. HOs reduce network throughput, and hence the significance of mm-wave to 5G wireless systems is diminished without adequate HO control. In this study, we propose a model for HO control based on an offline reinforcement learning (RL) algorithm that autonomously and smartly optimizes HO decisions, taking into account prolonged user connectivity and throughput. We conclude by presenting the proposed model's performance and comparing it with the state-of-the-art rate-based HO scheme. The results reveal that the proposed model decreases excess HOs by 70%, thus achieving higher throughput than the rate-based HO scheme.
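
A toy flavor of RL-driven handover control is sketched below as tabular Q-learning over quantized signal states, with a reward that trades throughput against a handover penalty. The states, reward shape, and sizes are invented for illustration; the paper uses an offline RL model, not this exact scheme.

```python
import numpy as np

N_STATES, N_BS = 8, 3            # quantized signal states x candidate base stations
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_BS))

def step(state, action, serving):
    throughput = rng.random()                 # stand-in for channel quality
    reward = throughput - (0.5 if action != serving else 0.0)  # penalize a HO
    return int(rng.integers(N_STATES)), reward

state, serving = 0, 0
for _ in range(10000):
    # epsilon-greedy choice of the next serving base station
    action = int(rng.integers(N_BS)) if rng.random() < EPS else int(Q[state].argmax())
    nxt, reward = step(state, action, serving)
    Q[state, action] += ALPHA * (reward + GAMMA * Q[nxt].max() - Q[state, action])
    state, serving = nxt, action

print(Q.round(2))   # learned preference for staying put vs. handing over
```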

Author 1: Michael S. Mollel
Author 2: Shubi Kaijage
Author 3: Michael Kisangiri

Keywords: Handover management; 5G; machine learning; reinforcement learning; mm-wave communication

PDF

Paper 99: Disposable Virtual Machines and Challenges to Digital Forensics Investigation

Abstract: The digital forensics field faces new challenges from emerging technologies, and virtualization is one of the most significant. Virtual Machines (VMs) have many advantages, be it optimum utilization of hardware resources or cost savings for organizations. Traditional forensic tools are not competent to analyze virtual machines, as they only support physical machines; to overcome this challenge, Virtual Machine Introspection technologies were developed to perform forensic investigation of virtual machines. Until now, we have been dealing with persistent virtual machines, which are created once and used many times. An extreme version of the virtual machine is the disposable virtual machine: once created, it is used a single time and then vanishes from the system without leaving behind any significant traces or artifacts for the digital investigator. The purpose of this paper is to discuss the various disposable virtualization technologies available and the challenges they pose to the digital forensics investigation process, and to provide some future directions for overcoming these challenges.

Author 1: Mohammed Yousuf Uddin
Author 2: Sultan Ahmad
Author 3: Mohammad Mazhar Afzal

Keywords: Digital forensics; digital investigation; disposable virtual machines; lightweight virtual machine; Microsoft sandbox; QEMU; qubes

PDF

Paper 100: Priority-Mobility Aware Clustering Routing Algorithm for Lifetime Improvement of Dynamic Wireless Sensor Network

Abstract: Wireless sensor networks with mobility have evolved rapidly in the recent decade. The cluster and hierarchical routing strategy yields major improvements in network lifespan and scalability. Latency, average energy consumption, and packet delivery ratio are highly impacted by a lack of coordination between cluster heads and highly mobile network nodes. The overall efficiency of highly mobile wireless sensor networks is reduced by current techniques such as mobility-aware media access control, sleep/wakeup scheduling, and transmission of real-time services. This paper proposes a novel Priority-Mobility Aware Clustering Routing algorithm (P-MACRON) for high packet delivery, achieved by assigning fair weightage to every packet of a node. A reinforcement learning approach is integrated to decide the scheduling policy automatically; this mixed approach of priority and self-learning results in better utilization of energy. The experimental results compare the slotted sense multiple access protocol, AODV, MEMAC, and P-MACRON, with the proposed algorithm delivering better results in terms of interval, packet size, and simulation time.

Author 1: Rajiv R. Bhandari
Author 2: K. Raja Sekhar

Keywords: Cluster; routing; sleep scheduling; priority; reinforcement

PDF

Paper 101: Cluster-based Access Control Mechanism for Cellular D2D Communication Networks with Dense Device Deployment

Abstract: In cellular device-to-device (D2D) communication networks, devices can communicate directly with each other without passing through base stations. Access control is an important function of radio resource management that aims to reduce frequency collisions and mitigate interference between users' connections. In this paper, we propose a cluster-based access control (CBAC) mechanism for heterogeneous cellular D2D communication networks with dense device deployment, where a macro base station (MBS) and smallcell base stations (SBSs) coexist. In the proposed CBAC mechanism, relying on monitored interference from its neighboring SBSs, each SBS first selects its operating bandwidth parts. It then jointly allocates channels and assigns transmission power to smallcell user equipments (SUEs) for their uplink transmissions and to users using D2D communications, so as to mitigate their interference to the uplink transmissions of macrocell user equipments (MUEs). Through computer simulations, numerical results show that the proposed CBAC mechanism can provide higher network throughput as well as user throughput than the network-assisted device-decided scheme proposed in the literature. Simulation results also show that the SINR of the uplink transmissions of MUEs and of the D2D communications managed by the MBS can be significantly improved.

Author 1: Thanh-Dat Do
Author 2: Ngoc-Tan Nguyen
Author 3: Thi-Huong-Giang Dang
Author 4: Nam-Hoang Nguyen
Author 5: Minh-Trien Pham

Keywords: D2D communications; access control; channel allocation; power assignment; interference mitigation

PDF

Paper 102: Human Recognition using Single-Input-Single-Output Channel Model and Support Vector Machines

Abstract: WiFi-based human motion recognition systems mainly rely on the availability of Channel State Information (CSI). Embedded within WiFi devices, present radio subsystems can output CSI that describes the response of a wireless communication channel. Such radio subsystems use complex hardware architectures that consume a lot of energy during data transmission and exhibit phase drift in the sub-carriers. Although human motion recognition (HMR) based on multi-carrier transmission systems shows better classification accuracy, transmitting multiple sub-carriers increases the overall energy consumption at the transmitter. Consequently, CSI-based systems can be perceived as process-intensive and power-hungry devices. To alleviate the process-intensive computing and reduce energy consumption in WiFi, this study proposes a human recognition system that uses only one radio carrier frequency. The study uses two software-defined radios and a machine learning classifier to identify four humans, and the results show that human identification is possible with 99% accuracy using only one radio carrier. The results of this study will have an impact on the development process of smart sensing systems, particularly those relating to healthcare, authentication, and passive monitoring and sensing.

Author 1: Sameer Ahmad Bhat
Author 2: Abolfazl Mehbodniya
Author 3: Ahmed Elsayed Alwakeel
Author 4: Julian Webber
Author 5: Khalid Al-Begain

Keywords: Motion detection; pattern recognition; received signal strength indicator; Software Defined Radio (SDR); supervised learning

PDF

Paper 103: Agile Fitness of Software Companies in Bangladesh: An Empirical Investigation

Abstract: With their mandate of lightweight working practices, iterative development, customer collaboration, and incremental delivery of business value, Agile software development methods have become the de facto standard for commercial software development worldwide. Consequently, this research empirically investigates the preparedness for and adoption of agile practices in prominent software companies in Bangladesh. To achieve this goal, an extensive survey of 16 established software companies in Bangladesh was carried out. Results show that Scrum is the most practiced agile methodology and that, to a great extent, these software companies have the readiness to adopt Scrum effectively. However, with regard to practicing the Scrum principles, they fall short in many key aspects.

Author 1: M M Mahbubul Syeed
Author 2: Razib Hayat Khan
Author 3: Jonayet Miah

Keywords: Agile manifesto; agile methodology; scrum; software development projects; software companies in Bangladesh

PDF
