The Science and Information (SAI) Organization
IJACSA Volume 12 Issue 10

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially as long as the original work is properly cited.


Paper 1: An Effective Design of Model for Information Security Requirement Assessment

Abstract: Information security is a major domain of analysis for enhancing the security of sensitive data held by business organizations. These days, attackers are advancing themselves by applying highly advanced technological solutions, such as artificially intelligent malicious code and advanced phishing methods, to acquire sensitive and critical data from businesses. This paper presents a novel model framework to analyze the information security requirements of a more robust information system and its assets in organizations. The framework is designed in such a fashion that both new and legacy organizations can adopt it to define the security requirements that will ensure the confidentiality, integrity and availability of information systems and their components, including sensitive business and private data that is critical to the organization. Two different model frameworks are proposed here. The first provides specifications of the security requirements, and the second provides for the audit of access logs to capture any unethical practices and violations by internal users. The proposed security requirements model provides a roadmap to analyze and build proper security requirements to secure business-sensitive data. The stepwise processes needed to analyze and define security requirements are the key factors of this security model, as they help in clearly defining the security frameworks and infrastructure of an organization. The audit model provides the framework for defining information auditing requirements, thus enabling the capture of unethical and unauthorized access to the information system components of the organization.

Author 1: Shailaja Salagrama

Keywords: Information security; network security; web security; confidentiality; integrity; availability; communication technology; information system; internet security; security framework

PDF

Paper 2: UAV Aided Data Collection for Wildlife Monitoring using Cache-enabled Mobile Ad-hoc Wireless Sensor Nodes

Abstract: Unmanned aerial vehicle (UAV) assisted data collection is not a new concept and has been used in various mobile ad hoc networks. In this paper, we propose a caching-assisted scheme as an alternative to routing in MANETs for the purpose of wildlife monitoring. Rather than deploying a routing protocol, data is collected and transported to and from a base station using a UAV. Although some literature exists on such an approach, we propose the use of intermediate caching between the mobile nodes and compare it to a baseline scenario where no caching is used. The paper puts forward our communication design, in which we simulated the movement of multiple mobile sensor nodes in a field that move according to the Levy walk model, imitating the foraging of wild animals, and a UAV that makes regular trips across the field to collect data from them. The UAV can collect not only data from the node it is currently communicating with but also data of other nodes that this node came into contact with. Simulations show that exchanging cached data is highly advantageous, as the drone can indirectly communicate with many more mobile nodes.

Author 1: Umair B. Chaudhry
Author 2: Chris I. Phillips

Keywords: UAV; caching; sensors; MANETs; WSN; waypoint

PDF
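
The Levy walk mobility model that Paper 2 uses to imitate animal foraging can be sketched in a few lines of Python; the step-length exponent `alpha` and the step bounds below are illustrative assumptions, not parameters taken from the paper.

```python
import math
import random

def levy_walk(steps, alpha=1.5, min_step=1.0, max_step=100.0, seed=42):
    """Generate 2D positions following a truncated Levy walk:
    headings are uniform, step lengths follow a power law P(l) ~ l^-alpha."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(steps):
        # Inverse-transform sampling of a truncated power-law step length.
        u = rng.random()
        a = min_step ** (1 - alpha)
        b = max_step ** (1 - alpha)
        length = (a + u * (b - a)) ** (1 / (1 - alpha))
        theta = rng.uniform(0, 2 * math.pi)
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        path.append((x, y))
    return path
```

Sampling step lengths from a truncated power law produces the characteristic mix of many short moves and occasional long relocations that makes Levy walks a common stand-in for foraging behaviour.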

Paper 3: Collaborative Recommendation based on Implication Field

Abstract: Recently, recommender systems have grown rapidly in both quantity and quality and have attracted many studies aimed at improving their quality. In particular, collaborative filtering techniques based on a rule mining model combined with the statistical implication analysis (SIA) technique have achieved some interesting results, showing the potential of SIA to improve the performance of recommender systems. However, this line of work is still limited, and several problems must be solved for better results, such as the processing of non-binary data; the bottleneck of the data partitioning method based on the number of transactions on very sparse transaction sets during training and testing of the model; and the lack of attention to exploiting the trend of variation of statistical implication. To contribute to solving these problems, this paper focuses on proposing a new data partitioning method and on developing a recommendation model based on mining the equipotential planes generated by the variation of implication intensity or implication index in the implication field, on both binary and non-binary data, to further improve recommendations. Experimental results have shown the success of this new approach through quality comparisons with collaborative filtering recommendation models as well as existing SIA-based ones.

Author 1: Hoang Tan Nguyen
Author 2: Lan Phuong Phan
Author 3: Hung Huu Huynh
Author 4: Hiep Xuan Huynh

Keywords: Implication intensity; implication rules; implication field; equipotential surface

PDF

Paper 4: Taxonomy of Cybersecurity Awareness Delivery Methods: A Countermeasure for Phishing Threats

Abstract: Phishing is a serious threat to Internet users and has become a vehicle for cybercriminals to perpetrate large-scale crimes worldwide. A wide range of technical and educational measures have been developed and used to address phishing threats. However, while technical anti-phishing measures have been widely studied in the current literature, comprehensive analysis of non-technical anti-phishing techniques has generally been ignored. To close this gap, we develop a new taxonomy of the most common cybersecurity training delivery methods and compare them along various factors. The work reported in this paper is useful for various stakeholders. For organizations conducting or considering phishing training, it helps them understand the capabilities of the various awareness training and phishing campaigns and design an appropriate program with a meaningful return. For researchers, it offers a clearer understanding of the main challenges, the existing solution space, and the potential scope of future research.

Author 1: Asma A. Alhashmi
Author 2: Abdulbasit Darem
Author 3: Jemal H. Abawajy

Keywords: Phishing attack; human factors in cybersecurity; cybersecurity threats; cybersecurity awareness; anti-phishing awareness delivery methods

PDF

Paper 5: Visual Selective Attention System to Intervene User Attention in Sharing COVID-19 Misinformation

Abstract: Information sharing on social media must be accompanied by attentive behavior so that, in a distorted digital environment, users are not rushed and distracted when deciding to share information. The spread of misinformation, especially misinformation related to COVID-19, can divide society and create the negative effects of falsehood. It can also cause feelings of fear, health anxiety, and confusion in the treatment of COVID-19. Although much research has focused on understanding human judgment from a psychological standpoint, few studies have addressed the essential issue of how technology can intervene in users' attention during the screening phase of information sharing. This research aims to intervene in the user's attention with a visual selective attention approach. This study uses a quantitative method through Studies 1 and 2, with pre- and post-intervention experiments. In Study 1, we intervened in user decisions and attention using ten items of information and misinformation as stimuli in the Visual Selective Attention System (VSAS) tool. In Study 2, we identified associations of user tendencies in evaluating information using the Implicit Association Test (IAT). The results showed that users' attention and decision behavior improved significantly after using the VSAS. The IAT results show a change in the association of user exposure: after the intervention using VSAS, users tend not to share misinformation about COVID-19. The results are expected to form the basis for developing social media applications to combat the negative impact of the COVID-19 misinformation infodemic.

Author 1: Zaid Amin
Author 2: Nazlena Mohamad Ali
Author 3: Alan F. Smeaton

Keywords: Visual selective attention; COVID-19 misinformation; user attention; information sharing; implicit association test

PDF

Paper 6: Head Position and Pose Model and Method for Head Pose Angle Estimation based on Convolution Neural Network

Abstract: A head position and pose model is created, and a method for head pose angle estimation based on a Convolution Neural Network (CNN) is proposed. A 3D head position model is created from the detected feature locations to obtain the 3D coordinates of the head position. The method proposed here uses a CNN. For the head pose detection, the open-source software tools OpenCV and Dlib are used with a Python program. The images used were RGB images, RGB images plus thermography, grayscale images, and RGB images with only the red channel elements extracted, simulating images obtained by near-infrared rays. As a result, the RGB image model was the most accurate; however, considering the criteria set, the RGB image model was used for morning and daytime detection and the near-infrared image model for nighttime and rainy weather scenes, as it turned out to be better to use the model obtained by that training. The experimental results show almost perfect head pose detection performance when the head pose angle ranges from 0 to 180 degrees in 45-degree steps.

Author 1: Kohei Arai
Author 2: Akifumi Yamashita
Author 3: Hiroshi Okumura

Keywords: CNN; head pose; OpenCV; Dlib; open-source software; python

PDF

Paper 7: Introduction to NFTs: The Future of Digital Collectibles

Abstract: This paper commences by introducing the essentials of blockchain technology and then goes into how the Ethereum blockchain revolutionized the field. Smart contracts are presented in the context of showing how they play an important role in implementing rules on the Ethereum blockchain, allowing users to regulate digital assets. The standards used in the Ethereum blockchain to build Non-Fungible Tokens (NFTs) are discussed. The paper concludes by presenting the benefits of NFTs as well as the use of the Ethereum blockchain for future applications.

Author 1: Muddasar Ali
Author 2: Sikha Bagui

Keywords: Blockchain technologies; smart contracts; cryptocurrencies; ethereum; non-fungible tokens (NFTs)

PDF

Paper 8: Hybrid e-Government Framework based on Datawarehousing and MAS for Data Interoperability

Abstract: The exponential growth in technological innovation is driven in large part by the digitization of multiple domains and assumes environments of increasing data volumes, arriving at high velocity and variety. e-Government is one such domain that exploits current ICT innovations to improve the delivery of public services to its citizens, businesses, and other stakeholders. This requires continuously maintaining information on daily operations, activities, and assets, as well as extensive profiles on citizens, institutions, and organizations. In addition, current centralized platform-based approaches suffer from the single point of failure, which may result in data breaches and leakages, leading to the need for efficient, robust mechanisms to ensure secure information sharing, data interoperability, and privacy. In this paper, we propose a business intelligence approach to designing a data interoperability framework for e-governance based on data warehousing technology to improve transparency and data accessibility. We also present a hybrid data filtering mechanism, which relies both on the Extraction, Transformation, and Loading (ETL) process and on multi-agent technology to integrate data quality and data interoperability, and which supports data transformation into a human-readable format. Finally, the framework emphasizes the availability of materialized views to enable efficient execution of analytical queries directly on the large volumes of raw data in the data warehouse.

Author 1: Barakat Oumkaltoum
Author 2: El beqqali Omar
Author 3: Ouksel Aris
Author 4: Chakir Loqman

Keywords: e-Government; interoperability; multi-agent system; materialized views; datawarehouse; business intelligence

PDF

Paper 9: Analyzing User Involvement Practice: A Case Study

Abstract: Engaging users in software development is recognized as effective in furthering the likelihood of product efficacy and a successful project, together with user contentment. Furthermore, user involvement is potentially applicable to numerous organizational contexts that can incorporate a focused user-centered group. This research analyzes the findings of a case study carried out to assess the user involvement situation within a business specializing in innovative software for general consumers, service providers, and enterprises. This company has now formed a user experience group that is devoted to applying user-centered approaches for the overall development of the organizational structure. General feedback was confirmed as the most typical means of gaining user insight, with the level of user involvement in focused development falling short. Nevertheless, the study led to recognition that a firm plan for drawing users into development processes is necessary moving forward.

Author 1: Asaad Alzayed
Author 2: Abdulwahed Khalfan

Keywords: User involvement practices; user involvement challenges; usability; user-centered practice; user feedback; end-user communication

PDF

Paper 10: Predictive Scaling for Elastic Compute Resources on Public Cloud Utilizing Deep Learning based Long Short-term Memory

Abstract: Cloud resource usage has increased exponentially because of the adoption of digitalization in government and corporate organizations. This increases the usage of cloud compute instances, resulting in massive consumption of energy by High Performance Public Cloud Data Center servers. In the cloud, some web applications experience diverse workloads at different timestamps, making workload efficiency essential. One of the major features of cloud applications is scalability: most Cloud Service Providers (CSPs) offer Infrastructure as a Service (IaaS) and have implemented auto-scaling at the Virtual Machine (VM) level. Auto-scaling is a cloud computing feature that scales resources based on demand and helps provide better results for other features such as high availability, fault tolerance, energy efficiency, and cost management. Existing reactive scaling with a fixed or smart static threshold does not allow applications to run without hurdles during peak workloads. This paper therefore focuses on increasing green tracing in cloud computing through a proposed predictive auto-scaling technique that reduces over-provisioning or under-provisioning of instances using the history of traces. It also offers right-sized instances that fit the application, satisfying users on demand with elasticity. This is done using deep learning based time-series LSTM networks, wherein virtual CPU core instances can be accurately scaled using visualization insights after the model has been trained. Moreover, the prediction accuracy of the LSTM is compared with that of a Gated Recurrent Unit (GRU) to bring business intelligence through analytics with reduced energy, cost, and environmental impact.

Author 1: Bharanidharan. G
Author 2: S. Jayalakshmi

Keywords: Predictive auto-scaling; business intelligence; virtual machines (VM’s); deep learning models; analytics; elasticity; high performance public cloud data centre (HP-PCDC); right sizing

PDF
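
To illustrate the predictive-scaling idea in Paper 10 without reproducing the LSTM itself, the sketch below substitutes Holt's linear (double exponential smoothing) method as the one-step-ahead workload forecaster; the per-VM capacity and the smoothing constants are invented for illustration and are not the paper's parameters.

```python
import math

def holt_forecast(series, alpha=0.5, beta=0.3):
    """One-step-ahead Holt linear (double exponential smoothing) forecast,
    standing in for the paper's LSTM: tracks the level and trend of a
    workload series and extrapolates one step forward."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend

def instances_needed(predicted_load, capacity_per_vm=100.0):
    """Right-size the pool: enough VMs for the predicted load, never fewer
    than one, so the scaler neither over- nor under-provisions."""
    return max(1, math.ceil(predicted_load / capacity_per_vm))
```

Scaling on the forecast rather than the current reading is what lets a predictive scaler provision capacity before a rising workload hits the threshold, instead of reacting after it.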

Paper 11: Highly Efficient Parts of Speech Tagging in Low Resource Languages with Improved Hidden Markov Model and Deep Learning

Abstract: Over the years, many different algorithms have been proposed to improve the accuracy of automatic parts-of-speech tagging. High accuracy in parts-of-speech tagging is very important for any NLP application. Powerful models like the Hidden Markov Model (HMM) used for this purpose require a huge amount of training data and are also less accurate at detecting unknown (untrained) words. Most of the world's languages lack enough resources in computable form to be used in training such models. NLP applications for such languages also encounter many unknown words during execution, resulting in a low accuracy rate. Improving accuracy for such low-resource languages is an open problem. In this paper, one stochastic method and one deep learning model are proposed to improve accuracy for such languages. The proposed language-independent methods improve unknown-word accuracy and overall accuracy with a small amount of training data. First, bigrams and trigrams of characters that are already part of training samples are used to calculate the maximum likelihood for tagging unknown words using the Viterbi algorithm and HMM. With training datasets below 10K in size, an improvement of 12% to 14% in accuracy has been achieved. Next, a deep neural network model is also proposed to work with a very small amount of training data. It is based on word-level, character-level, character-bigram-level, and character-trigram-level representations to perform parts-of-speech tagging with less training data available. The model improves the overall accuracy of the tagger as well as the accuracy for unknown words. Results for English and a low-resource Indian language, Assamese, are discussed in detail. Performance is better than many state-of-the-art techniques for low-resource languages. The method is generic and can be used with any language with a very small amount of training data.

Author 1: Diganta Baishya
Author 2: Rupam Baruah

Keywords: Hidden markov models; viterbi algorithm; machine learning; deep learning; text processing; low resource language; unknown words; parts of speech tagging

PDF
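
The Viterbi decoding that Paper 11 builds its unknown-word handling on can be sketched as follows; the toy tag set and probability tables in the test usage are invented and unrelated to the authors' trained models.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely tag sequence for `obs` under an HMM (raw probabilities
    for clarity; real taggers work in log space to avoid underflow).
    Unseen words get a tiny floor probability instead of zero."""
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 1e-12), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s].get(obs[t], 1e-12), p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    best = max(states, key=lambda s: V[-1][s][0])
    tags = [best]
    for t in range(len(obs) - 1, 0, -1):
        best = V[t][best][1]
        tags.append(best)
    return list(reversed(tags))
```

The paper's stochastic method effectively replaces the flat `1e-12` floor with likelihoods estimated from character bigrams and trigrams, which is what lifts accuracy on unknown words.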

Paper 12: Critical Success Factors Associated to Tourism e-Commerce: Study of Peruvian Tourism Operators

Abstract: The incorporation of information and communication technologies (ICT) has generated new opportunities and innovation in business models, such as electronic commerce (EC). Despite the benefits that EC offers PYMEs, its adoption is low. Several authors have argued that many factors condition the adoption of EC in developing countries. This study proposes a model of factors associated with the adoption of EC by tourism operators, with factors categorized as organizational, individual, environmental, and technological. Structural equation modeling and confirmatory factor analysis tools were used to analyze the data, collected from 116 participants (69% males and 31% females), all managers of tourism operators. The results reveal that 11 factors influence the adoption of EC. In addition, operators currently using EC consider the most influential factors to be organizational, while operators that have not implemented EC value factors involving skills, knowledge, and experience in technology. This study can be used to establish policies on ICT adoption in tourism PYMEs.

Author 1: Sussy Bayona-Oré
Author 2: Romy Estrada

Keywords: Adoption; e-commerce; tourism operators; PYMEs; critical factors; TOE

PDF

Paper 13: A Survey on Computer Vision Architectures for Large Scale Image Classification using Deep Learning

Abstract: The advancement of deep learning is increasing day by day, from image classification to language understanding tasks. In particular, convolution neural networks have been revived and have shown their performance in multiple fields such as natural language understanding, signal processing, and computer vision. The translational-invariance property of convolutions has been a huge advantage in the field of computer vision for extracting feature invariances appropriately. When trained using back-propagation, these convolutions outperform existing machine vision techniques, overcoming the various hand-engineered machine vision models. Hence, a clear understanding of current deep learning methods is crucial. These convolution neural networks have attained state-of-the-art performance in computer vision over the years when applied to humongous data. Hence, in this survey, we detail a set of state-of-the-art models in image classification, from the birth of convolutions to present ongoing research. Each state-of-the-art model is illustrated with its architecture schema, implementation details, parametric tuning and performance. It is observed that neural architecture construction, i.e. the supervised approach to an image classification problem, has evolved into data construction with cautious augmentations, i.e. a self-supervised approach. A detailed evolution from neural architecture construction to augmentation construction is illustrated, with appropriate suggestions provided to improve performance. Additionally, the implementation details and the appropriate sources for execution and reproducibility of results are tabulated.

Author 1: D. Dakshayani Himabindu
Author 2: S. Praveen Kumar

Keywords: Image classification; deep learning; computer vision survey; convolution neural networks; IMAGENET dataset

PDF

Paper 14: University Course Timetabling Model in Joint Courses Program to Minimize the Number of Unserved Requests

Abstract: This work proposes a novel course timetable model for the national joint courses program. In this model, the participants, both students and lecturers, come from different universities. It differs from most existing university course timetabling models, where the environment is physical and the system can dictate the timeslots and classrooms for students and lecturers. The courses in this model are delivered online, so physical classrooms are no longer required, as was the case in most previous course timetabling studies. The matching process is conducted based on the assigned timeslots and the requested courses. The courses are elective rather than mandatory. Three metaheuristic methods are used to optimize this model: artificial bee colony, cloud theory-based simulated annealing, and genetic algorithms. In the simulations, the cloud theory-based simulated annealing performs best in minimizing the number of unserved requests, outperforming the two other metaheuristic methods, the genetic algorithm and the artificial bee colony algorithm. According to the simulation results, when the number of students is low, the cloud theory-based simulated annealing produces 91% fewer unserved requests than the genetic algorithm. When the number of students is large, this figure drops to 62%.

Author 1: Purba Daru Kusuma
Author 2: Abduh Sayid Albana

Keywords: Course timetabling; joint course program; artificial bee colony; simulated annealing; genetic algorithm; online course

PDF
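
A plain simulated-annealing loop for a matching problem of the kind Paper 14 optimizes might look like the sketch below; the geometric cooling schedule and the iteration budget are illustrative assumptions, and the paper's cloud theory-based variant, which perturbs the annealing process stochastically, is not reproduced here.

```python
import math
import random

def anneal(initial, cost, neighbour, t0=10.0, cooling=0.95, iters=2000, seed=1):
    """Generic simulated annealing: always accept improvements, accept a
    worse neighbour with probability exp(-delta/T), cool T geometrically."""
    rng = random.Random(seed)
    current = best = initial
    c_cur = c_best = cost(initial)
    t = t0
    for _ in range(iters):
        cand = neighbour(current, rng)
        c_cand = cost(cand)
        delta = c_cand - c_cur
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current, c_cur = cand, c_cand
            if c_cur < c_best:
                best, c_best = current, c_cur
        t *= cooling
    return best, c_best
```

For the timetabling case, a state would be the timeslot assigned to each course and the cost the number of requests left unserved, with a neighbour move reassigning one course to a random timeslot.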

Paper 15: Symbolic Representation-based Melody Extraction using Multiclass Classification for Traditional Javanese Compositions

Abstract: Traditional Javanese compositions contain melodies and skeletal melodies, where skeletal melodies are a form extracted from melodies. The melody extraction problem is similar to chord detection in Western music, where chords are extracted from a melody. This research aims to develop a melody extraction system for traditional Javanese compositions. Melodies, which have a time-series data structure, were framed as a supervised learning problem to be solved using pattern recognition techniques and the Feed-Forward Neural Network (FFNN) method. The melody data source uses a symbolic format in the form of sheet music. The beats in the melody data are used as the input, and the notes in the skeletal melodies are used as the target. An FFNN multiclass classifier was built with six classes as targets, where each class represents a note of the musical scale system. The network was evaluated using accuracy, precision, recall, specificity and F1-score measurements.

Author 1: Arry Maulana Syarif
Author 2: Azhari Azhari
Author 3: Suprapto Suprapto
Author 4: Khafiizh Hastuti

Keywords: Melody extraction; symbolic representation-based; multiclass classification; feed-forward neural network; Gamelan

PDF

Paper 16: LightGBM-based Ransomware Detection using API Call Sequences

Abstract: Along with the development of technology and the explosion of digital data in the era of the fourth industrial revolution, cyberattacks using ransomware are emerging as a serious threat to many agencies and organizations. The harm of ransomware is not limited to the areas of information technology and finance but also affects areas related to people's lives, such as the medical field. Therefore, research to identify and detect these types of malicious code is urgent. This paper presents a novel approach to identifying and classifying ransomware based on dynamic analysis techniques combined with machine learning algorithms. First, this research focuses on the Application Programming Interface (API) call functions extracted during dynamic analysis of executable samples using the Cuckoo sandbox. Second, it uses LightGBM, a gradient boosting decision tree algorithm, for training and then for detecting and classifying normal software and eight different types of ransomware. Experimental results showed that the proposed approach achieves an overall accuracy rate of 98.7% when performing multiclass classification. In particular, the detection rates of ransomware and normal software were both 99.9%. At the same time, the accuracy in identifying two specific types of ransomware, WannaCry and Win32:FileCoder, reached 100%.

Author 1: Duc Thang Nguyen
Author 2: Soojin Lee

Keywords: Ransomware; machine learning; API call; dynamic analysis technique; gradient boosting decision tree; GBDT; lightGBM

PDF
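
A common way to turn API call traces like those Paper 16 extracts into tabular input for a gradient-boosted tree is n-gram counting; the sketch below shows only that feature step, with invented call names and vocabulary, and does not invoke LightGBM itself.

```python
from collections import Counter

def api_ngram_features(calls, vocab, n=2):
    """Turn an ordered API-call trace into a fixed-length vector of
    n-gram counts, the kind of tabular input a gradient boosting
    decision tree such as LightGBM expects."""
    grams = Counter(tuple(calls[i:i + n]) for i in range(len(calls) - n + 1))
    return [grams.get(g, 0) for g in vocab]
```

A trace heavy in, say, (CryptEncrypt, WriteFile) bigrams maps to a high count in that feature column, which is the kind of signal a tree ensemble can split on to separate ransomware from normal software.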

Paper 17: Integrated Document-based Electronic Health Records Persistence Framework

Abstract: Electronic health record systems work beyond just recording patients' health data. They have multiple secondary functionalities, such as data reporting and clinical decision support. As each of these workloads has conflicting needs, managing a multipurpose electronic health record system is a challenge. This paper proposes a unified healthcare data framework that can simplify health information system infrastructure. It investigates the suitability of a document-based NoSQL persistence mechanism for storing electronic health record data as a design choice for managing ad hoc queries of varied complexity used in operational business intelligence. The performance of the two most popular document-based NoSQL back-ends, Couchbase Server and MongoDB, is compared according to database size and query execution time. Results showed that while MongoDB can execute simple single-document queries nearly in milliseconds, it does not provide satisfactory response times for unplanned complex queries spanning multiple documents. By utilizing its analytics services and multi-dimensional scaling architecture, a Couchbase Server multi-node cluster outperforms the response times of MongoDB for both simple and complex healthcare data access patterns. The primary advantage of the proposed tightly coupled EHR processing framework is its flexibility to manage workload according to changing requirements.

Author 1: Aya Gamal
Author 2: Sherif Barakat
Author 3: Amira Rezk

Keywords: Electronic health records; operational business intelligence; document data model; NoSQL; health information system; persistence framework; Couchbase server

PDF

Paper 18: Cyber Threat Intelligence in Risk Management

Abstract: Cyber Threat Intelligence (CTI) has emerged to help cybersecurity professionals keep abreast of and respond to rising cyber threats (CTs) in real time. This paper aims to study the impact of cyber threat intelligence on risk management in Saudi universities in mitigating cyber risks. In this survey, a comprehensive review of CTI concepts and challenges, as well as risk management and practices in higher education, is presented. Previous literature was reviewed from theses, reviews, and books on the factors influencing the increase of cyber threats, on CTI, and on risk management in higher education. A brief discussion of previous studies, their contribution to the current paper, and the impact of CTI on risk management in reducing risk is also provided. An extensive search of more than 65 research papers was conducted, and 28 are cited in this survey. Cyber threats are changing, and there is a huge flow of information about them; dealing with these threats in time requires advance, in-depth information about their nature and how to take appropriate defensive measures, which is what defines the concept of CTI. The use of cyber threat intelligence in risk management enhances the ability of defenders to mitigate growing cyber threats.

Author 1: Amira M. Aljuhami
Author 2: Doaa M. Bamasoud

Keywords: Cyber threat intelligence; risk management; cyberthreat; cyber security

PDF

Paper 19: Expert’s Usability Evaluation of the Pelvic Floor Muscle Training mHealth App for Pregnant Women

Abstract: Pelvic floor muscle training (PFMT) is the first line in managing urinary incontinence. Unfortunately, personal and social barriers hinder pregnant women from performing PFMT. Therefore, a Kegel Exercise Pregnancy Training (KEPT) app was developed to bridge the accessibility barriers among incontinent pregnant women. This study aimed to evaluate the usability properties of the KEPT app developed for pregnant women to improve their pelvic floor muscle training. A purposive sampling method was used to recruit a sample of experts in informatics and a physician with a special interest in informatics. The design activities were planned in the following sequence: a cognitive walkthrough for the learnability of the app, a heuristic evaluation for the interface of the app, and a usability questionnaire for the quantitative assessment of the app's usability properties. The mHealth Application Usability Questionnaire (MAUQ) was used as the assessment tool for application usability. A total of four experts were involved in this study. The cognitive walkthrough revealed that the KEPT app has several major learnability issues, especially in the training interface and language consistency. The heuristic evaluation showed that the training interface must provide additional information regarding the displayed icon. On the MAUQ, the experts rated the KEPT app on ease of use, interface and satisfaction, and usefulness, with scores of 5.80/7.0, 5.57/7.0, and 5.83/7.0, respectively. Suggestions were shared to assist future researchers and developers in developing PFMT mHealth apps.

Author 1: Aida Jaffar
Author 2: Sherina Mohd Sidik
Author 3: Novia Admodisastro
Author 4: Evi Indriasari Mansor
Author 5: Lau Chia Fong

Keywords: Pregnant women; pelvic floor muscle training; mHealth app; usability evaluation; cognitive walkthrough; heuristic evaluation

PDF

Paper 20: Aligning Software System Level with Business Process Level through Model-Driven Architecture

Abstract: Information systems are intended to provide organisations with a new way of sustaining themselves, by helping them manage their activities using innovative technologies. Information systems require aligned levels for maximum effectiveness. In this context, business and information technology (IT) alignment is an important issue for the success of organisations. This paper presents the first step of the proposed approach to align the software system level, modelled by a Unified Modeling Language (UML) class diagram, with the business process level, modelled by the Business Process Model and Notation (BPMN) model. A model-driven architecture approach is proposed as a means to transform a set of BPMN models into a UML class diagram. A set of transformation rules is proposed, followed by guidelines that help apply those rules.

Author 1: Maryam Habba
Author 2: Samia Benabdellah Chaouni
Author 3: Mounia Fredj

Keywords: Information system alignment; business process; software system; Business Process Model and Notation (BPMN); Unified Modeling Language (UML); class diagram

PDF

Paper 21: A Review of Modern DNA-based Steganography Approaches

Abstract: In the last two decades, the field of DNA-based steganography has emerged as a promising domain for securing sensitive information transmitted over an untrusted channel. Researchers in this field strongly favor DNA over other covering media such as video, image, and text because of its structural characteristics: its enormous hiding capacity, high computational power, and the randomness of its building contents all attest to its suitability. There are mainly three types of DNA-based algorithms: insertion, substitution, and complementary-rule-based algorithms. In the last few years, a new generation of DNA-based steganography approaches has been proposed. These modern algorithms surpass the performance of the older ones either by exploiting a biological factor that exists in the DNA itself or by borrowing a suitable technique from another field of computer science such as artificial intelligence, data structures, or networking. The main goal of this paper is to thoroughly analyze these modern DNA-based steganography approaches by explaining their working mechanisms, stating their pros and cons, and proposing suggestions for improving them. Additionally, a biological background on DNA structure, the main security parameters, and classical concealing approaches is provided to give a comprehensive picture of the field.
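To make the insertion category concrete, the toy sketch below (all parameters hypothetical, not taken from any surveyed algorithm) maps every two secret bits to a nucleotide and interleaves the payload into a cover sequence at a fixed interval:

```python
# Minimal insertion-based DNA steganography sketch. The 2-bit-per-base map and
# the interval k are illustrative choices, not a published scheme.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def embed(cover, secret_bits, k=3):
    """Insert one payload nucleotide after every k cover nucleotides."""
    payload = [BITS_TO_BASE[secret_bits[i:i + 2]]
               for i in range(0, len(secret_bits), 2)]
    out, ci = [], 0
    for base in payload:
        out.append(cover[ci:ci + k])  # copy k cover bases
        out.append(base)              # then insert one payload base
        ci += k
    out.append(cover[ci:])
    return "".join(out)

def extract(stego, n_bits, k=3):
    """Recover the secret by reading every (k+1)-th position."""
    bits, pos = [], k
    while len(bits) * 2 < n_bits:
        bits.append(BASE_TO_BITS[stego[pos]])
        pos += k + 1
    return "".join(bits)

cover = "ACGTACGTACGTACGT"
secret = "1001"  # encodes the two nucleotides G and C
stego = embed(cover, secret)
print(extract(stego, len(secret)) == secret)
```

Real insertion schemes differ mainly in how the insertion positions are chosen (often keyed or pseudo-random rather than fixed), which is what gives them their security.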

Author 1: Omar Haitham Alhabeeb
Author 2: Fariza Fauzi
Author 3: Rossilawati Sulaiman

Keywords: Information security in bioinformatics; deoxyribonucleic acid-based steganography; modern hiding approaches

PDF

Paper 22: Adaptive Logarithmic-Power Algorithm for Preserving the Brightness in Contrast Distorted Images

Abstract: Digital images become distorted under non-uniform lighting conditions or improper acquisition settings of the digital camera, factors that lead to objects with distorted contrast. In this work, we propose an adaptive enhancement algorithm that improves contrast while preserving the mean brightness of the image. The method combines the discrete wavelet transform with gamma correction. First, the gamma scale is computed from a multi-scale decomposition using the 2D discrete wavelet transform, with the value of the gamma scale parameter obtained from a combination of logarithmic and power functions. Second, gamma correction is applied to improve the contrast of the image. Last, bilateral filtering is used to smooth the edges in the image. The approach effectively preserved brightness and optimized contrast. Objective quality measures, namely peak SNR, AMBE, entropy, an entropy-based contrast measure, and the median absolute deviation, were computed and compared with other state-of-the-art techniques.
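The core idea of brightness-adaptive gamma correction can be sketched as follows. The rule used here for picking gamma (mapping the normalized mean intensity toward 0.5) is a common heuristic standing in for the paper's logarithmic-power formulation, which is not reproduced in the abstract:

```python
import numpy as np

def adaptive_gamma(image):
    """Gamma correction with a scale derived from the image's mean brightness.

    Heuristic stand-in for the paper's logarithmic-power rule:
        gamma = log(0.5) / log(mean)
    Dark images (mean < 0.5 after normalization) get gamma < 1 (brightening);
    bright images get gamma > 1 (darkening).
    """
    img = image.astype(float) / 255.0
    gamma = np.log(0.5) / np.log(img.mean())
    return (img ** gamma * 255).astype(np.uint8)

dark = np.full((4, 4), 40, dtype=np.uint8)  # uniform dark patch
corrected = adaptive_gamma(dark)
print(int(corrected.mean()))  # pulled toward mid-grey
```

In the paper's full pipeline this correction is applied per wavelet sub-band and followed by bilateral filtering, which this sketch omits.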

Author 1: Navleen S Rekhi
Author 2: Jagroop S Sidhu

Keywords: Non-uniform images; gamma correction; multi-scale 2D- discrete wavelet transform; logarithmic-power; quality metrics

PDF

Paper 23: A Meta-analytic Review of Intelligent Intrusion Detection Techniques in Cloud Computing Environment

Abstract: Security and data privacy continue to be major considerations in the selection and study of cloud computing. Organizations are migrating more critical operations to the cloud, resulting in an increase in the number of cloud vulnerability incidents. In recent years, there have been several technological advancements for accurate detection of attacks in the cloud. Intrusion Detection Systems (IDS) are used to detect malicious attacks and reinstate network security in the cloud environment. This paper presents a systematic literature review and a meta-analysis to shed light on intelligent approaches for IDS in the cloud. The review focuses on three intelligent IDS approaches: machine learning algorithms, computational intelligence algorithms, and hybrid meta-heuristic algorithms. A qualitative review synthesis was carried out on a total of 28 articles published between 2016 and 2021. The study concludes that IDS based on hybrid meta-heuristic algorithms achieve increased accuracy, a decreased false positive rate, and an increased detection rate.

Author 1: Meghana G Raj
Author 2: Santosh Kumar Pani

Keywords: Intrusion detection system (IDS); machine learning; computational intelligence algorithms; hybrid meta-heuristic algorithms; cloud security; cloud computing

PDF

Paper 24: Machine Learning Mini Batch K-means and Business Intelligence Utilization for Credit Card Customer Segmentation

Abstract: An effective marketing strategy depends on identifying customers well, and one way to do so is customer segmentation. This study illustrates customer segmentation based on RFM (Recency, Frequency, Monetary) analysis using machine learning clustering, combined with segmentation based on demography, geography, and customer habits through data warehouse-based business intelligence. Classifying customers with RFM analysis and machine learning clustering establishes customer levels, while segmentation by demography, geography, and behavior groups customers with the same characteristics; the combination of both yields a better analysis for understanding customers. This study also shows that mini-batch k-means was the fastest machine learning model for clustering the 3-dimensional data, namely recency, frequency, and monetary.
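A minimal sketch of mini-batch k-means over RFM features, assuming a synthetic customer table (the real study uses credit card transaction data, not reproduced here):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
# Hypothetical RFM table: recency (days), frequency (count), monetary (spend)
rfm = np.column_stack([
    rng.integers(1, 365, 200),    # recency
    rng.integers(1, 50, 200),     # frequency
    rng.uniform(10, 5000, 200),   # monetary
]).astype(float)

# Scale the features so monetary values do not dominate the distance metric
X = StandardScaler().fit_transform(rfm)

# Mini-batch k-means fits on small random batches, which is what makes it
# faster than full k-means on larger customer tables
model = MiniBatchKMeans(n_clusters=4, batch_size=64, n_init=10, random_state=0)
labels = model.fit_predict(X)
print(labels.shape)  # one cluster label (customer level) per customer
```

Each cluster can then be profiled by its mean recency, frequency, and monetary values to assign a business-meaningful customer level.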

Author 1: Firman Pradana Rachman
Author 2: Handri Santoso
Author 3: Arko Djajadi

Keywords: Customer segmentation; machine learning; business intelligence; data warehouse

PDF

Paper 25: Intrusion Detection System for Energy Efficient Cluster based Vehicular Adhoc Networks

Abstract: Retracted: After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

Author 1: M V B Murali Krishna M
Author 2: C. Anbu Ananth
Author 3: N. Krishna Raj

Keywords: Clustering; intrusion detection; vehicular communication; VANET; machine learning; krill herd optimization; fuzzy logic

PDF

Paper 26: Chest Diseases Prediction from X-ray Images using CNN Models: A Study

Abstract: Chest diseases create serious health issues for human beings all over the world. Identifying these diseases in their earlier stages helps people treat them early and save their lives. Convolutional Neural Networks play an important role in the health sector, especially in predicting diseases at earlier stages, and X-rays are one of the major inputs that help identify chest diseases accurately. In this paper, we study the prediction of chest diseases such as pneumonia, COVID-19, and tuberculosis (TB) from X-ray images. The prediction of these diseases is analyzed with three CNN models, VGG19, ResNet50V2, and DenseNet201, and the results are reported in terms of accuracy and loss. Although all three models are highly accurate and consistent, considering factors such as architectural size and training speed, ResNet50V2 is the best model for all three diseases: it trained to F1 scores of 0.98, 0.92, and 0.97 for pneumonia, tuberculosis, and COVID-19, respectively.

Author 1: Latheesh Mangeri
Author 2: Gnana Prakasi O S
Author 3: Neeraj Puppala
Author 4: Kanmani P

Keywords: Convolutional neural networks; VGG19; ResNet50V2; DenseNet201

PDF

Paper 27: Detection of Acute Myeloid Leukemia based on White Blood Cell Morphological Imaging using Naïve Bayesian Algorithm

Abstract: The process of diagnosing AML is based on the complete blood count analysis of the patients. As such, it involves high energy consumption and long completion times, and is rather expensive compared to conventional medical practices. One of the methods for identifying tumor cells involves image-processing techniques based on the morphology of white blood cells (WBCs). The principal objective of this study is the identification of AML cells, especially of the AML M1 and AML M2 types, through morphological imaging of WBCs using the Naïve Bayes classifier. The image-processing methods used in this study include YCbCr color space classification, image thresholding, morphological operations, chain code representation, and the use of bounding boxes. All identification procedures performed in this study were based on the Naïve Bayes classifier. The test process was performed on 30 images of each of the AML M1 and M2 cell types. The cell identification method proposed in this study achieved an accuracy of 73.33%, while the accuracy of cell type identification was 54.92%. Based on these results, it is inferred that the Naïve Bayes classifier can be employed to identify the dominant AML cell types among AML M1 and AML M2 (myeloblast, promyelocyte, myelocyte, and metamyelocyte) based on the morphology of WBCs.
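The classification step can be sketched with a Gaussian Naïve Bayes model over synthetic morphological features. The two features (cell area and perimeter) and their distributions are illustrative stand-ins, not the paper's actual feature set:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
# Hypothetical morphological features (area, perimeter) for two cell classes,
# drawn around different means to mimic M1-like and M2-like morphology
m1 = rng.normal(loc=[220.0, 58.0], scale=[12.0, 4.0], size=(30, 2))
m2 = rng.normal(loc=[160.0, 45.0], scale=[12.0, 4.0], size=(30, 2))
X = np.vstack([m1, m2])
y = np.array([0] * 30 + [1] * 30)  # 0 = M1-like, 1 = M2-like

# Naïve Bayes assumes the features are conditionally independent per class
clf = GaussianNB().fit(X, y)

# A cell whose morphology sits near the M1 cluster should be labelled 0
print(int(clf.predict([[215.0, 57.0]])[0]))
```

In the actual pipeline, the features fed to the classifier come from the segmentation and chain-code stages described in the abstract.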

Author 1: Esti Suryani
Author 2: Wiharto
Author 3: Adi Prasetya Putra
Author 4: Wisnu Widiarto

Keywords: Leukemia; acute myeloid leukemia; morphology; image processing; Naïve Bayes

PDF

Paper 28: Automatic Essay Scoring: A Review on the Feature Analysis Techniques

Abstract: Automatic Essay Scoring (AES) is the automatic process of assigning a score to a particular essay answer. The task has been extensively addressed in the literature using two main learning paradigms, supervised and unsupervised, and within these paradigms a wide range of feature analyses has been utilized: morphology, frequencies, structure, and semantics. This paper addresses these feature analysis types, their subcomponents, and the corresponding approaches by introducing a new taxonomy, and then reviews recent AES studies to highlight the techniques and feature analyses they use. This critical analysis finds that traditional morphological analysis of the essay answer lacks semantic analysis, whereas utilizing a semantic knowledge source such as an ontology is restricted to the domain of the essay answer; semantic corpus-based techniques are likewise affected by the essay's domain. On the other hand, essay structural features and frequencies are insufficient on their own, but as auxiliaries to another semantic analysis technique they bring promising results. The state of the art in AES research concentrates on neural-network-based embedding techniques. Yet the major limitations of these techniques are (i) finding an adequate sentence-level embedding when using models such as Word2Vec and GloVe, (ii) 'out-of-vocabulary' words when using models such as Doc2Vec and GSE, and (iii) 'catastrophic forgetting' when using the BERT model.

Author 1: Ridha Hussein Chassab
Author 2: Lailatul Qadri Zakaria
Author 3: Sabrina Tiun

Keywords: Automatic essay scoring; automatic essay grading; semantic analysis; structure analysis; string-based; corpus-based; word embedding

PDF

Paper 29: Forensic Analysis on False Data Injection Attack on IoT Environment

Abstract: A False Data Injection Attack (FDIA) is an attack that can compromise Advanced Metering Infrastructure (AMI) devices, whereby an attacker misrepresents real power consumption by falsifying meter usage from end-users' smart meters. Due to the rapid development of the Internet, cyber attackers are keen on exploiting domains such as finance, metering systems, defense, healthcare, and governance. Securing IoT networks such as the electric power grid or water supply systems has emerged as a national and global priority because of the many vulnerabilities found in this area and the impact of attacks carried out through Internet of Things (IoT) components. In this modern era, better awareness and improved methods to counter such attacks in these domains are a necessity. This paper studies the impact of FDIA in AMI by analyzing network traffic logs to identify digital forensic traces. An AMI testbed was designed and developed to produce the FDIA logs. Experimental results show that the forensic traces found in the collected evidence logs are sufficient to confirm the attack. Moreover, this study produced a table of attributes for evidence collection when performing a forensic investigation of FDIA in the AMI environment.

Author 1: Saiful Amin Sharul Nizam
Author 2: Zul-Azri Ibrahim
Author 3: Fiza Abdul Rahim
Author 4: Hafizuddin Shahril Fadzil
Author 5: Haris Iskandar Mohd Abdullah
Author 6: Muhammad Zulhusni Mustaffa

Keywords: Advanced Metering Infrastructure (AMI); False Data Injection Attack (FDIA); man in the middle (MITM); internet of things (IoT); forensic analysis

PDF

Paper 30: Design of Decentralized Application for Telemedicine Image Record System with Smart Contract on Ethereum

Abstract: This paper discusses the implementation of smart contracts on the Ethereum blockchain for telemedicine data storage. Telemedicine is one of the currently developing digital technologies in the health and medical sectors; it can make seeking treatment more efficient because patients do not need to see a doctor face to face. However, telemedicine carries several risks and problems, one of which is a long data storage process, since a verification step must come first to ensure data security; another is the gas fee of a blockchain telemedicine system, billed on every data storage transaction. When blockchain technology is used, the stored data becomes more transparent to each node in the blockchain network, but every transaction must be verified, which takes time and costs gas. In this study, a blockchain system was introduced for managing and securing telemedicine databases, implemented as a website that can add data to and retrieve data from the blockchain. The results show that blockchain was successfully implemented to store telemedicine data with Ethereum. The analysis in this paper concerns the set and get functions: set sends data to the blockchain, and get retrieves data from it. In testing, the get function had a much faster execution time than the set function because it requires no verification to retrieve data. Across the iteration counts tested, namely 1, 10, and 100, the longest average time occurred at 100 iterations. The tests also showed that the more characters stored, the more gas must be paid; the cost increased by 0.34% per character.

Author 1: Darrell Yonathan
Author 2: Diyanatul Husna
Author 3: Fransiskus Astha Ekadiyanto
Author 4: I Ketut Eddy Purnama
Author 5: Afif Nurul Hidayati
Author 6: Mauridhi Hery Purnomo
Author 7: Supeno Mardi Susiki Nugroho
Author 8: Reza Fuad Rachmadi
Author 9: Ingrid Nurtanio
Author 10: Anak Agung Putri Ratna

Keywords: Blockchain; Ethereum; smart contract; telemedicine

PDF

Paper 31: Multi-lane LBP-Gabor Capsule Network with K-means Routing for Medical Image Analysis

Abstract: Medical images naturally occur in small quantities and are rarely balanced. Some medical domains, such as radiomics, involve analyzing images to diagnose a patient's condition; often, images of diseased, inaccessible parts of the body are taken for analysis by experts. However, medical experts are scarce, and manual analysis of the images is time-consuming, costly, and prone to error. Machine learning has been adopted to automate this task, but it is tedious, time-consuming, and requires experienced annotators to extract features. Deep learning alleviates this problem, but the threat of overfitting on smaller datasets and the "black box" problem still linger. This paper proposes a capsule network that uses Local Binary Patterns (LBP), Gabor layers, and k-means routing in an attempt to alleviate these drawbacks. Experimental results show that the model produces state-of-the-art accuracy on three datasets (KVASIR, COVID-19, and ROCT), does not overfit on smaller and imbalanced datasets, and has reduced complexity due to fewer parameters. Layer activation maps, feature clusters, predictions, and reconstructions of the input images show that the model is interpretable and has the credibility and trust required to gain the confidence of practitioners for deployment in critical areas such as health.
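The LBP layer named in the title computes a texture code per pixel. A minimal sketch for a single 3x3 patch (bit ordering is a free choice; clockwise from the top-left is used here):

```python
import numpy as np

def lbp_center(patch):
    """8-neighbour Local Binary Pattern code for the centre pixel of a 3x3 patch.

    Each neighbour greater than or equal to the centre contributes one bit;
    the 8 bits together form a texture code in [0, 255].
    """
    c = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(n >= c) << i for i, n in enumerate(neighbours))

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 8, 3]])
print(lbp_center(patch))  # texture code in [0, 255]
```

In the full model this per-pixel code is computed over the whole image, giving the capsule network an illumination-robust texture map alongside the Gabor responses.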

Author 1: Patrick Kwabena Mensah
Author 2: Anokye Acheampong Amponsah
Author 3: Kwame Baffour Agyemang
Author 4: Gabriel Kofi Armah
Author 5: Abra Ayidzoe
Author 6: Faiza Umar Bawah
Author 7: Adebayor Felix Adekoya
Author 8: Benjamin Asubam Weyori
Author 9: Mark Amo-Boateng

Keywords: Convolutional neural networks; deep learning; Gabor filters; k-means routing; local binary pattern; power squash

PDF

Paper 32: Healthcare Misinformation Detection and Fact-Checking: A Novel Approach

Abstract: Information spreads rapidly in the world of the internet, which has become people's first choice for medication tips related to their health problems. However, this ever-growing usage of the internet has also led to the spread of misinformation. Misinformation in healthcare has severe effects on people's lives, so efforts are required to detect misinformation and to fact-check information before using it. In this paper, the authors propose a model to detect and fact-check misinformation in the healthcare domain. The model extracts healthcare-related URLs from the web, pre-processes them, computes term frequency, extracts sentimental and grammatical features to detect misinformation, and computes distance measures, viz. Euclidean distance, Jaccard similarity, and cosine similarity, to fact-check the URLs as true or false against a manually generated dataset built with experts' opinions. The model was evaluated using five state-of-the-art machine learning classifiers: logistic regression, support vector machine, Naïve Bayes, decision tree, and random forest. The experimental results showed that the sentimental features are crucial in detecting misinformation, as more negative words are found in URLs containing misinformation than in URLs with true information. Naïve Bayes outperformed all other models with an accuracy of 98.7%, whereas the decision tree classifier showed the lowest accuracy at 92.88%. The Jaccard distance measure was also found to be the most accurate distance measure, compared with Euclidean distance and cosine similarity.
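Two of the three similarity measures used for fact-checking can be sketched over tokenized text with the standard library alone (the example sentences are hypothetical, not from the paper's dataset):

```python
from collections import Counter
import math

def jaccard(a_tokens, b_tokens):
    """Jaccard similarity: shared unique tokens over all unique tokens."""
    a, b = set(a_tokens), set(b_tokens)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(a_tokens, b_tokens):
    """Cosine similarity over raw term-frequency vectors."""
    a, b = Counter(a_tokens), Counter(b_tokens)
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical claim vs. reference-fact comparison
claim = "vitamin c cures the common cold".split()
fact = "vitamin c does not cure the common cold".split()
print(round(jaccard(claim, fact), 2))
print(round(cosine(claim, fact), 2))
```

In the model, each URL's text is scored this way against expert-verified reference documents, and a threshold on the similarity decides the true/false label.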

Author 1: Yashoda Barve
Author 2: Jatinderkumar R. Saini

Keywords: Misinformation detection; sentiment analysis; document similarity; fact-check; healthcare

PDF

Paper 33: Evaluating Deep and Statistical Machine Learning Models in the Classification of Breast Cancer from Digital Mammograms

Abstract: The application of artificial intelligence techniques in computer aided detection and diagnosis problems has been among the most promising areas with interest from the scientific community and healthcare industry. Recently, deep learning has become the prime tool for such application with many studies focusing on developing variants that optimize diagnostic performance. Despite the widely accepted success of this class of techniques in this application by the scientific community, it is not prudent to consider it as the only tool available for such purpose. In particular, statistical machine learning offers a variety of techniques that can also be applied at a much lower computational cost. Unfortunately, the results from both strategies cannot be directly compared due to the differences in experimental setups and datasets used in available research studies. Therefore, we focus in this study on this direct comparison using the same dataset and similar data preprocessing as the input to both. We compare statistical machine learning to deep learning in the context of computer-aided detection of breast cancer from mammographic images. The results are compared using diagnostic performance metrics and suggest that simpler statistical machine learning techniques may provide better performance with simpler architectures that allow explanation of results.

Author 1: Amel A. Alhussan
Author 2: Nagwan M. Abdel Samee
Author 3: Vidan F. Ghoneim
Author 4: Yasser M. Kadah

Keywords: Computer-aided detection; computer-aided diagnosis; statistical machine learning; deep learning

PDF

Paper 34: Arabic Document Classification by Deep Learning

Abstract: In this paper, we show how to classify Arabic document images using a convolutional neural network, one of the most common supervised deep learning algorithms. The main advantage of deep learning is its ability to automatically extract useful features from images, which eliminates the need for manual feature extraction; convolutional neural networks extract features through a convolution process involving various filters. We collected a variety of Arabic document images from various sources and passed them to a convolutional neural network classifier. We adopt a VGG16 network pre-trained on ImageNet to classify the dataset into four classes: handwritten, historical, printed, and signboard. For document image classification, we ran the dataset through the VGG16 convolutional layers and trained a classifier on top of them: the pre-trained convolutional layers are frozen as a feature extractor, and fully connected layers are added and trained on the dataset. We regularize the network with dropout, added after each max-pooling layer and after the fourteenth and seventeenth layers, which are the fully connected layers. The proposed approach achieved a classification accuracy of 92%.

Author 1: Taghreed Alghamdi
Author 2: Samia Snoussi
Author 3: Lobna Hsairi

Keywords: Arabic document; document classification; deep learning; convolutional neural network (CNN); pre-trained network

PDF

Paper 35: Comparative Analysis of Data Mining Algorithms for Cancer Gene Expression Data

Abstract: Cancer is among the most challenging disorders to diagnose, and experts still struggle to detect it at an early stage. Gene selection is significant for identifying the different cancer-causing parameters. The two deadliest cancers, colorectal cancer and breast cancer, are found predominantly in males and females, respectively. This study aims at predicting cancer at an early stage with the help of cancer bioinformatics, which, given the complexity of illness metabolic rates, signaling, and interaction, applies bioinformatics technologies such as data mining to cancer detection. The goal of the proposed study is to compare support vector machine, random forest, decision tree, artificial neural network, and logistic regression for predicting malignancy from cancer gene expression data. WEKA is used to analyze the data with these algorithms. The findings show that intelligent computational data mining techniques can be used to detect cancer recurrence in patients. Finally, the strategies that yielded the best results were identified.

Author 1: Preeti Thareja
Author 2: Rajender Singh Chhillar

Keywords: Colorectal cancer; breast cancer; bioinformatics; data mining; WEKA; machine learning

PDF

Paper 36: Proactive Virtual Machine Scheduling to Optimize the Energy Consumption of Computational Cloud

Abstract: The rapid expansion of communication and computational technology provides the opportunity to deal with bulk volumes of dynamic data. The classical computing style is not very effective for such mission-critical data analysis and processing, so cloud computing has become popular for handling such data. Cloud computing involves a large computational and network infrastructure that requires a significant amount of power and generates a carbon footprint (CO2). In this context, the cloud's energy consumption can be minimized by controlling and switching off idle machines. Therefore, in this paper, we propose a proactive virtual machine (VM) scheduling technique that can deal with frequent migration of VMs and minimize the energy consumption of the cloud using unsupervised learning techniques. The main objective of the proposed work is to reduce the energy consumption of cloud datacenters through effective utilization of cloud resources by predicting the future demand for resources. Four clustering algorithms, namely K-Means, SOM (Self-Organizing Map), FCM (Fuzzy C-Means), and K-Medoids, are used to develop the proposed proactive VM scheduling and to determine which clustering algorithm is best suited to reducing energy use through proactive VM scheduling. This predictive, load-aware VM scheduling technique is evaluated and simulated using the CloudSim simulator, with the 29-day workload trace released by Google in 2019 used to demonstrate its effectiveness. The experimental outcomes are summarized in different performance metrics, such as energy consumed and average processing time. Finally, we conclude and suggest future research directions.

Author 1: Shailesh Saxena
Author 2: Mohammad Zubair Khan
Author 3: Ravendra Singh
Author 4: Abdulfattah Noorwali

Keywords: Cloud computing; CO2; proactive scheduling; unsupervised learning; clustering; energy; prediction; cloud-sim; performance assessment

PDF

Paper 37: Expert Review on Mobile Augmented Reality Applications for Language Learning

Abstract: Many mobile applications that can increase user engagement and promote self-learning have been developed to date. Nevertheless, mobile applications specific to Malay language learning for non-native speakers, with relevant materials, are still lacking, and expert reviews are needed to identify usability issues and check whether such applications can meet the learning goals with relevant material features. This study developed an augmented reality (AR)-based mobile application called RakanBM for learning the Malay language (the language officially spoken in Malaysia), and then performed an expert review of the application's contents, text presentation, learning outcomes, assessments, effectiveness, efficiency, and satisfaction. The review was conducted by a panel of six experts from two fields, the Malay language and Human-Computer Interaction (HCI), using cognitive walkthrough (CW), semi-structured interviews, think-aloud protocols, and a survey. The results from the CW, semi-structured interviews, and think-aloud protocols show that enhancement was needed in the user interface and user experience in terms of aesthetics and interactivity. The survey results were classified into two levels: high (mean > 4.0) and satisfied (mean > 3.5). The factors rated satisfied were the application contents, text presentation, and satisfaction, while those rated high were the learning outcomes, assessments, effectiveness, and efficiency. Comments and suggestions for improvement concerned mainly the contents of the application; nevertheless, the application received good comments on its usefulness and on the topics covered, which were suitable for non-native speakers. The findings of this study can guide developers and researchers in the development of future applications that support language learning for non-native speakers in particular.

Author 1: Nur Asylah Suwadi
Author 2: Nazatul Aini Abd Majid
Author 3: Meng Chun Lam
Author 4: Nor Hashimah Jalaluddin
Author 5: Junaini Kasdan
Author 6: Aznur Aisyah Abdullah
Author 7: Afifuddin Husairi Hussain
Author 8: Azlan Ahmad
Author 9: Daing Zairi Maarof

Keywords: AR; expert review; HCI; language learning; mobile application; self-learning

PDF

Paper 38: Arabic Semantic Similarity Approach for Farmers’ Complaints

Abstract: Semantic similarity is applied in many areas of natural language processing, such as information retrieval, text classification, and plagiarism detection. Many researchers have used semantic similarity for English texts, but few for Arabic, owing to the ambiguity of Arabic concepts in both sense and morphology. The first contribution of this paper is therefore a semantic similarity approach between Arabic sentences. Nowadays, the world faces the global problem of coronavirus disease; in light of these circumstances and the imposition of distancing, it is difficult for farmers to communicate physically with agricultural experts for advice and suitable solutions to their agricultural complaints, and most farmers still follow traditional practices. Thus, our second contribution is helping farmers solve their Arabic agricultural complaints using the proposed approach. Latent Semantic Analysis is applied to retrieve the problem most semantically related to a farmer's complaint and to find the related solution. Two weighting schemes are used for data representation: Term Frequency and Term Frequency-Inverse Document Frequency. The proposed model also classifies the large agricultural dataset and the submitted farmer complaint according to crop type using a MapReduce Support Vector Machine to improve the performance of the semantic similarity results. The approach performed best with Term Frequency-Inverse Document Frequency-based Latent Semantic Analysis, achieving an F-measure of 86.7%.
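The TF-IDF + LSA retrieval step can be sketched with scikit-learn. English stand-in sentences are used below in place of the Arabic complaint corpus, and the tiny corpus size and component count are illustrative only:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical English stand-ins for archived complaints with known solutions
archive = [
    "yellow spots on wheat leaves after heavy rain",
    "tomato plants wilting despite regular irrigation",
    "insects boring holes in cotton bolls before harvest",
]
query = ["wheat leaves turning yellow with dark spots"]

# TF-IDF weighting followed by LSA: truncated SVD projects the sparse
# term vectors into a low-dimensional latent semantic space
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(archive + query)
lsa = TruncatedSVD(n_components=2, random_state=0)
Z = lsa.fit_transform(X)

# Retrieve the archived complaint most semantically similar to the query
sims = cosine_similarity(Z[-1:], Z[:-1])[0]
print(int(sims.argmax()))  # index of the best-matching complaint
```

In the full system this retrieval runs only within the crop-type class predicted by the MapReduce SVM, which narrows the candidate set before similarity scoring.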

Author 1: Rehab Ahmed Farouk
Author 2: Mohammed H. Khafagy
Author 3: Mostafa Ali
Author 4: Kamran Munir
Author 5: Rasha M.Badry

Keywords: Semantic similarity; latent semantic analysis; big data; MapReduce SVM; COVID-19; agriculture farmer's complaint

PDF

Paper 39: An NB-ANN based Fusion Approach for Disease Genes Prediction and LFKH-ANFIS Classifier for Eye Diseases Identification

Abstract: A key step in understanding the cellular mechanisms of a particular disease is disease gene identification. Computational prediction of disease genes is inexpensive and easier than biological experiments. Here, an effective deep learning-based fusion algorithm called Naive Bayes-Artificial Neural Networks (NB-ANN) is proposed for disease gene identification. Additionally, this paper proposes an effective classifier, namely the Levy Flight Krill Herd (LFKH) based Adaptive Neuro-Fuzzy Inference System (ANFIS), for the prediction of eye diseases caused by human disease genes. Using this technique, 10 distinct types of eye diseases are identified. The NB-ANN comprises four steps: a) construction of four Feature Vectors (FVs), b) selection of negative data, c) training of the FVs using NB, and d) ANN-based prediction. The LFKH-ANFIS performs Feature Extraction (FE), Feature Reduction (FR), and classification for eye disease prediction. The experimental outcomes demonstrate the method's efficiency with regard to precision and recall.

Author 1: Samar Jyoti Saikia
Author 2: S. R. Nirmala

Keywords: Disease gene identification; eye disease identification; deep learning; adaptive neuro-fuzzy inference system (ANFIS); levy flight based krill herd (LFKH); principal component analysis (PCA)

PDF

Paper 40: Load Balanced and Energy Aware Cloud Resource Scheduling Design for Executing Data-intensive Application in SDVC

Abstract: Cloud computing platforms provision numerous cloud-based Vehicular Adhoc Network (VANET) applications. To provide better bandwidth and connectivity in a dynamic manner, Software Defined VANET (SDVN) has been developed. Using SDVN, new VANET frameworks are modeled, for example, the Software Defined Vehicular Cloud (SDVC). In SDVC, vehicles enable virtualization technology through SDVN and support complex data-intensive workload execution in a scalable and efficient manner. Vehicular Edge Computing (VEC) addresses the performance and deadline requirements of various fifth-generation (5G) workload applications. VEC aids in reducing response time and delay with high reliability for workload execution. Here, the workload tasks are executed on nearby edge devices connected to a Road Side Unit (RSU) with limited computing capability. If resources are not available at the RSU, the task execution is offloaded through SDN toward a heterogeneous cloud server. Existing workload scheduling designs for cloud environments focus on minimizing cost and delay; however, very limited work has considered energy minimization for workload execution. This paper presents a Load Balanced and Energy Aware Cloud Resource Scheduling (LBEACRS) design for a heterogeneous cloud framework. Experimental outcomes show that LBEACRS achieves better makespan and energy efficiency performance when compared with standard cloud resource scheduling designs.
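As a point of reference for load-balanced scheduling, the classic greedy least-loaded-server heuristic (a textbook baseline, not the LBEACRS algorithm itself, whose details are in the paper) assigns each task to whichever server currently carries the smallest load, which bounds the makespan:

```python
import heapq

def schedule(task_times, n_servers):
    """Longest-processing-time-first greedy assignment: sort tasks by
    descending duration and give each one to the least-loaded server."""
    heap = [(0.0, s) for s in range(n_servers)]   # (current load, server id)
    heapq.heapify(heap)
    assignment = {s: [] for s in range(n_servers)}
    for t in sorted(task_times, reverse=True):
        load, s = heapq.heappop(heap)             # least-loaded server
        assignment[s].append(t)
        heapq.heappush(heap, (load + t, s))
    makespan = max(load for load, _ in heap)      # completion time of last server
    return assignment, makespan
```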

Author 1: Shalini. S
Author 2: Annapurna P Patil

Keywords: Cloud computing; data-intensive applications; heterogeneous server; IEEE 802.11p; software defined network; software defined vehicular cloud; vehicular adhoc network; workload scheduling; road side unit; vehicular edge cloud

PDF

Paper 41: Design and Implementation of Collaborative Management System for Effective Learning

Abstract: Recently, the need for online collaborative learning in educational systems has increased greatly because of the COVID-19 pandemic. The pandemic has provided an opportunity to introduce online collaboration and learning among instructors and students in Nigeria. Currently, several schools, colleges, and universities in Nigeria have discontinued face-to-face teaching and learning. Many schools resorted to ineffective alternatives such as television and radio programmes to carry out distance education (DE). These alternatives have challenges such as a lack of monitoring and evaluation of students' learning. The Collaborative Learning Management System (CLMS) is a research project that aims to assist instructors in achieving their pedagogical goals, organizing course content, collaborating, monitoring, and supporting students' online learning. It is an interactive system, both online and Android based, that has been designed, implemented, and tested. The system demonstrates that it is robust and interactive and achieves the predefined goals. It was created using the Rapid Application Development (RAD) methodology as the software development approach. It also provides a secure and reliable platform for schools, colleges, and universities to implement an online learning system.

Author 1: Tochukwu A. Ikwunne
Author 2: Wilfred Adigwe
Author 3: Christopher C. Nnamene
Author 4: Noah Oghenefego Ogwara
Author 5: Henry A. Okemiri
Author 6: Chinedu E. Emenike

Keywords: Collaborative learning; conventional education; effective learning; e-portfolio; interactive board

PDF

Paper 42: Selection of Learning Apps to Promote Critical Thinking in Programming Students using Fuzzy TOPSIS

Abstract: The aim of this research was to use intelligent decision support systems to obtain student-centred preferences among learning applications that promote critical thinking in first-year programming students. This study focuses on the visual programming environment and on critical thinking as the gateway skill for student success in understanding programming. Twenty-five critical thinking criteria were synthesized from the literature. In this quantitative study, 217 students randomly selected from an approximate target population of 500 programming students were asked to rate four learning Apps, namely Scratch, Alice, Blockly, and MIT App Inventor, against the critical thinking criteria to establish the App that best promotes critical thinking among first-year programming students. There were 175 responses received from the 217 randomly chosen students who willingly contributed to the study. The distinctiveness of this paper lies in its use of the Fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) multi-criteria decision-making algorithm to rank the critical thinking criteria, calculate their weights on the basis of informed opinion, and hence scientifically deduce the best-rated App among the available alternatives. The results showed that Scratch promoted critical thinking skills the best in first-year programming students, whilst Blockly promoted them the least. As a contribution of the study, policy-makers and academic staff can be supported to make informed decisions about the types of learning Apps to consider for students when confronted with multiple selection criteria.
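For readers unfamiliar with the method, the crisp form of TOPSIS can be sketched as below (the paper applies the fuzzy extension, which additionally represents ratings as fuzzy numbers; the example matrix here is hypothetical):

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with crisp TOPSIS.
    matrix[i][j]: rating of alternative i on criterion j;
    weights[j]: criterion weight; benefit[j]: True if larger is better."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal and anti-ideal solutions per criterion.
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, best)))
        d_worst = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))  # closeness coefficient
    return scores
```

The alternative with the highest closeness coefficient is ranked best.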

Author 1: Kesarie Singh
Author 2: Nalindren Naicker
Author 3: Mogiveny Rajkoomar

Keywords: Critical thinking; visual programming environment; multi-criteria decision-making; fuzzy TOPSIS

PDF

Paper 43: Complex Plane based Realistic Sound Generation for Free Movement in Virtual Reality

Abstract: Binaural rendering is a technology that generates realistic sound for a user wearing stereo headphones, so it is essential for stereo headphone-based virtual reality (VR) services. However, binaural rendering cannot by itself reflect the user's free movement in VR. Because the VR sound does not match the visual scene when the user moves freely in the VR space, the quality of the VR experience may be degraded. To reduce this mismatch, a complex plane-based stereo realistic sound generation method is proposed that allows the user's free movement in VR, which changes the distance and azimuth between the user and each speaker. To calculate the distance and azimuth between the user and a speaker after a position change, the 5.1 multichannel speaker playback system and the user are placed in the complex plane. The distance and azimuth between the user and a speaker can then be calculated simply as the distance and angle between two points in the complex plane. The 5.1 multichannel audio signals are scaled by the five estimated distances according to the inverse square law, and the scaled multichannel audio signals are mapped to a newly generated virtual 5.1 multichannel speaker layout using the five measured azimuths and the azimuth from the head movement. Finally, stereo realistic sound that reflects the user's position change and head movement is obtained through binaural rendering using the scaled and mapped 5.1 multichannel audio signals and the HRTF coefficients. Experimental results show that the proposed method can generate realistic sound reflecting the user's position and azimuth changes in VR with an error rate of less than about 5%.
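The complex plane geometry described above reduces to elementary operations; a minimal sketch (the function names and the reference-distance convention for the gain are illustrative assumptions, not the paper's exact formulation):

```python
import cmath
import math

def distance_azimuth(user, speaker):
    """Distance and azimuth (degrees) from the user to a speaker,
    both given as points in the complex plane."""
    d = speaker - user
    return abs(d), math.degrees(cmath.phase(d))

def amplitude_gain(ref_distance, distance):
    """Amplitude scaling consistent with the inverse square law:
    power falls off as 1/d^2, so amplitude is scaled by 1/d
    relative to a reference listening distance."""
    return ref_distance / distance
```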

Author 1: Kwangki Kim

Keywords: Virtual reality; realistic sound; binaural rendering; constant power panning; head related transfer function

PDF

Paper 44: Reverse Vending Machine Item Verification Module using Classification and Detection Model of CNN

Abstract: A reverse vending machine (RVM) is an interactive platform that can boost recycling activities by rewarding users who return recyclable items to the machine. To accomplish this, the RVM should be outfitted with a material identification module that recognizes different sorts of recyclable materials so the user can be rewarded accordingly. Since utilizing a combination of sensors for such a task is tedious, a vision-based detection framework is proposed to identify three types of recyclable material: aluminum cans, PET bottles, and tetra-pak. A self-collected dataset of 5,898 samples was fed into the classification and detection frameworks, split 85:15 into training and validation samples. For the classification model, three pre-trained models, AlexNet, VGG16, and ResNet50, were used, while for the detection model the YOLOv5 architecture was employed. The dataset was gathered by capturing pictures of the recyclable materials from various angles and augmenting the data by flipping and rotating the images. A series of thorough hyperparameter tuning experiments was conducted to determine an optimal structure that produces high accuracy. From the experiments it can be concluded that the detection model shows a more promising outcome than the classification model for accomplishing the recyclable item verification task of the RVM.

Author 1: Razali Tomari
Author 2: Nur Syahirah Razali
Author 3: Nurul Farhana Santosa
Author 4: Aeslina Abdul Kadir
Author 5: Mohd Fahrul Hassan

Keywords: Convolutional neural network (CNN); classification; detection; reverse vending machine (RVM); You Only Look Once (YOLO)

PDF

Paper 45: How to Analyze Air Quality During the COVID-19 Pandemic? An Answer using Grey Systems

Abstract: The Peruvian government declared a State of National Emergency due to the spread of COVID-19, imposing the closure of businesses and companies and home isolation from 03/15/2020 to 06/30/2020. In this context, the research focused on analyzing the characteristics of air quality in Lima during this period compared to the same periods in 2018 and 2019. For this purpose, data from two air quality monitoring stations on PM2.5, PM10, CO, and NO2 concentrations and the quality levels given by the Air Quality Index (INCA) were processed with the Grey Clustering method, which is based on grey systems. The results showed that during the quarantine, air quality improved significantly, specifically in the northern area of Lima, which was favored by meteorological conditions and is classified as good quality, with reductions of 46% in PM10, 45% in PM2.5 and, to a lesser extent, 17% in NO2 and 11% in CO. The southern zone, although it showed an improvement, is still classified as moderate quality, with reductions of 26% in PM10, 27% in PM2.5, and 19% in CO; however, the NO2 concentration registered a non-significant increase of 2%. This behaviour is explained by the lower height of the thermal inversion layer and therefore less space for the dispersion of pollutants. Finally, the study provides essential information for regulatory agencies, as it allows an understanding of the relationship between air quality and control measures at emission sources for the development of public policies on public health and the environment.

Author 1: Alexi Delgado
Author 2: Denilson Pongo
Author 3: Katherine Felipa
Author 4: Kiara Saavedra
Author 5: Lorena Torres
Author 6: Ch. Carbajal

Keywords: Air quality; COVID-19; grey systems; grey clustering

PDF

Paper 46: Indonesia Sign Language Recognition using Convolutional Neural Network

Abstract: In daily life, the deaf use sign language to communicate with others. However, the non-deaf experience difficulties in understanding this communication. To overcome this, sign recognition via human-machine interaction can be utilized. In Indonesia, the deaf use a specific language, referred to as Indonesia Sign Language (BISINDO). However, only a few studies have examined this language. Thus, this study proposes a deep learning approach, namely a new convolutional neural network (CNN), to recognize BISINDO. There are 26 letters and 10 numbers to be recognized. A total of 39,455 data points were obtained from 10 respondents by considering the lighting and the perspective of the person: specifically, bright and dim lighting, and first- and second-person perspectives. The architecture of the proposed network consists of four convolutional layers, three pooling layers, and three fully connected layers. This model was tested against two common CNN models, AlexNet and VGG-16. The results indicated that the proposed network is superior to a modified VGG-16, with a loss of 0.0201. The proposed network also has a smaller number of parameters than a modified AlexNet, thereby reducing computation time. Further, the model was tested using testing data, achieving an accuracy of 98.3%, precision of 98.3%, recall of 98.4%, and F1-score of 99.3%. The proposed model can recognize BISINDO in both dim and bright lighting, as well as signs from the first- and second-person perspectives.

Author 1: Suci Dwijayanti
Author 2: Hermawati
Author 3: Sahirah Inas Taqiyyah
Author 4: Hera Hikmarika
Author 5: Bhakti Yudho Suprapto

Keywords: Indonesia sign language (BISINDO); recognition; CNN; lighting

PDF

Paper 47: Increasing Randomization of Ciphertext in DNA Cryptography

Abstract: Deoxyribonucleic acid (DNA) cryptography is an emerging area in hiding messages, where DNA bases are used to encode binary data to enhance the randomness of the ciphertext. However, an extensive study of existing algorithms indicates that the encoded ciphertext has a low avalanche effect, failing to provide the desirable confusion property of an encryption algorithm. This property is crucial for randomizing the relationship between the plaintext and the ciphertext. Therefore, this research aims to reassess the security of existing DNA cryptography by modifying the steps of the DNA encryption technique and utilizing an existing DNA encoding/decoding table at a selected step in the algorithm to enhance the overall security of the cipher. The modified and baseline DNA cryptography techniques are evaluated for frequency analysis, entropy, avalanche effect, and hamming weight using 100 different plaintexts with high-density, low-density, and random input data. The modified technique performs well on all four measures. This work shows that the ciphertext generated by the modified model yields better randomization and can be adopted to transmit sensitive information.
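A commonly used 2-bit-per-base DNA encoding and the avalanche measurement can be sketched as follows (the base mapping shown is one conventional table, assumed for illustration, and not necessarily the table used in the paper):

```python
# Map 2-bit pairs to DNA bases; one common convention, assumed here.
ENCODE = {"00": "A", "01": "C", "10": "G", "11": "T"}
DECODE = {v: k for k, v in ENCODE.items()}

def to_dna(bits):
    """Encode a binary string (even length) as DNA bases."""
    return "".join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(seq):
    """Decode a DNA base string back to its binary form."""
    return "".join(DECODE[b] for b in seq)

def avalanche(bits_a, bits_b):
    """Fraction of differing bits between two equal-length ciphertexts;
    a value near 0.5 indicates a strong avalanche effect."""
    flips = sum(x != y for x, y in zip(bits_a, bits_b))
    return flips / len(bits_a)
```

The avalanche effect is typically measured by flipping one plaintext (or key) bit, encrypting both versions, and comparing the resulting ciphertext bit strings with a function like `avalanche`.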

Author 1: Maria Imdad
Author 2: Sofia Najwa Ramli
Author 3: Hairulnizam Mahdin

Keywords: DNA cryptography; avalanche effect; frequency test; entropy; hamming weight

PDF

Paper 48: Multistage Sentiment Classification Model using Malaysia Political Ontology

Abstract: Nowadays, people use social media platforms such as Facebook, Twitter, and Instagram to share their opinions on particular entities or services. Sentiment analysis can determine the polarity of these opinions, especially in the political domain. However, in Malaysia, current sentiment analysis can be inaccurate when netizens use combinations of Malay words in their comments. This is due to insufficient Malay corpora and sentiment analysis tools. Therefore, this study aims to construct a multistage sentiment classification model based on a Malaysia Political Ontology and a Malay Political Corpus. Reviews are carried out on sentiment analysis, classification techniques, Malay sentiment analysis, and sentiment analysis in politics. The work starts with data preparation for Malay tweets to produce tokenized Malay words, followed by construction of the corpus using corpus filtering, web search, and filtering with linguistic patterns before enhancement with political lexicons. The process continues with the classifier construction, which starts from a generic ontology adapted to the Malaysian political context. Lastly, twelve features are identified, and the extracted features are tested using different classifiers. As a result, a Linear Support Vector Machine yields an accuracy of 86.4% for the classification. This proves that the multistage sentiment classification model improves the classification of Malay tweets in the political domain.

Author 1: Nur Farhana Ismail
Author 2: Nur Atiqah Sia Abdullah
Author 3: Zainura Idrus

Keywords: Malay corpus; political ontology; sentiment analysis; sentiment classification; social media

PDF

Paper 49: Factors Impacting Users’ Compliance with Information Security Policies: An Empirical Study

Abstract: One of the main concerns for organizations in today's connected world is to find out how employees follow the information security policy (ISP), as the internal employee has been identified as the weakest link in breaches of security policies. Several studies have examined ISP compliance from a deterrence perspective; however, the results were mixed. This empirical study analyses the impact of organisational security factors and individual non-compliance on users' intentions toward information security policies. A research model and hypotheses were developed in this quantitative study. Data from 352 participants was collected through a questionnaire, which then validated the measurement model. The findings revealed that while security system anxiety and non-compliant peer behaviours negatively impact users' compliance intentions, work impediments positively influence these intentions. Security visibility negatively influences users' non-compliance, and security education systems positively impact work impediments. This research will help information security managers address the problem of information security compliance because it provides them with an understanding of one of the many factors underlying employee compliance behaviours.

Author 1: Latifa Alzahrani

Keywords: Information security; users’ compliance; compliance factors; security education systems; information security policies

PDF

Paper 50: The Application of Image Processing in Liver Cancer Detection

Abstract: Hepatic cancer is caused by the uncontrolled growth of liver cells; hepatocellular carcinoma (HCC) is the most common form of malignant liver cancer, accounting for 75 percent of cases. This tumor is difficult to diagnose and is often discovered at an advanced stage, posing a life-threatening danger. As a result, early diagnosis of liver cancer increases life expectancy. We therefore propose an automated computer-aided diagnosis of liver tumors from Magnetic Resonance Imaging (MRI) images using digital image processing. The image goes through preprocessing, segmentation, and feature extraction, all performed within the layers of an Artificial Neural Network, making it an automated operation. To make the edges continuous, this operation combines two processes: edge detection and manual labeling. On the basis of statistical characteristics, tumors are divided into four categories: cyst, adenoma, hemangioma, and malignant liver tumor. The aim of the proposed technique is to automatically highlight and categorize tumor regions in MRI images without the need for a medical practitioner.

Author 1: Meenu Sharma
Author 2: Rafat Parveen

Keywords: Liver cancer; digital image processing; magnetic resonance imaging; early stage

PDF

Paper 51: Automating Time Series Forecasting on Crime Data using RNN-LSTM

Abstract: Criminal activities, whether violent or non-violent, are major threats to the safety and security of people. Frequent crimes are an extreme hindrance to the sustainable development of a nation and thus need to be controlled. Police personnel often seek computational solutions and tools to anticipate impending crimes and to perform crime analytics. Developed and developing countries alike have been experimenting with predictive policing in recent times. With the advent of advanced machine learning and deep learning algorithms, time series analysis and the building of forecasting models on crime datasets have become feasible. Time series analysis is preferred for this dataset because crime events are recorded with time as a significant component. The objective of this paper is to mechanize and automate time series forecasting using a pure deep learning model. N-Beats Recurrent Neural Networks (RNN) are proven ensemble models for time series forecasting. Herein, future trends are forecast with better accuracy by building a model using the N-Beats algorithm on the Sacramento crime dataset. This study applied detailed data pre-processing steps, presented an extensive set of visualizations, and involved hyperparameter tuning. The current study has been compared with similar works and proved to be a better forecasting model, differing from other research studies in its data visualization and enhanced accuracy.
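Independently of the network architecture used, neural time series forecasting relies on framing the series as supervised (input window, forecast target) pairs; a generic sketch of that framing step (the lookback and horizon values are arbitrary illustrations, not the study's settings):

```python
def make_windows(series, lookback, horizon):
    """Frame a univariate series as (input window, forecast target) pairs,
    the standard supervised framing used to train forecasting networks."""
    pairs = []
    for i in range(len(series) - lookback - horizon + 1):
        pairs.append((series[i:i + lookback],
                      series[i + lookback:i + lookback + horizon]))
    return pairs
```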

Author 1: J Vimala Devi
Author 2: K S Kavitha

Keywords: Time series analysis; deep learning; RNN; forecasting; crime data; predictive policing; machine learning

PDF

Paper 52: Level Transducer Circuit Implemented by Ultrasonic Sensor and Controlled with Arduino Nano for its Application in a Water Tank of a Fire System

Abstract: This article describes the design of a level transducer circuit implemented by means of an ultrasonic sensor and controlled by an Arduino Nano, applied to the water tank of a firefighting system. Initially, the integration of the Siemens 1212C programmable logic controller is described, covering the connection between the sensor, the controller, and the interfaces that enable monitoring, control, and data recording, conditioned by a pulse-width-modulated (PWM) signal controlled by the Arduino Nano. An analysis of the linear regression model establishes that the behavior of the controlled variable with respect to time generates a linear voltage response in the range of 0 to 10 volts, with a correlation factor R2 equal to 0.997, thus establishing that the designed transducer shows no susceptibility to noise or disturbances during the start-up of the firefighting system.
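The reported R2 of 0.997 comes from an ordinary least-squares line fit of voltage against time; the coefficient of determination can be computed as follows (a generic sketch, not the authors' analysis code):

```python
def r_squared(xs, ys):
    """Coefficient of determination R^2 of a least-squares line fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot
```

A value near 1 means the fitted line explains almost all of the variance in the measured voltage, supporting the claim of low susceptibility to noise.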

Author 1: Omar Chamorro-Atalaya
Author 2: Dora Arce-Santillan
Author 3: Guillermo Morales-Romero
Author 4: Adrián Quispe-Andía
Author 5: Nicéforo Trinidad-Loli
Author 6: Elizabeth Auqui-Ramos
Author 7: César León-Velarde
Author 8: Edith Gutiérrez-Zubieta

Keywords: Level transducer; ultrasonic sensor; Arduino Nano; control; pulse width

PDF

Paper 53: Improvement of Deep Learning-based Human Detection using Dynamic Thresholding for Intelligent Surveillance System

Abstract: Human detection plays an important role in many applications of the intelligent surveillance system (ISS), such as person re-identification, human tracking, and people counting. The use of deep learning in human detection has provided excellent accuracy. Unfortunately, deep learning methods are sometimes unable to detect objects that are too far from the camera, because the confidence threshold is statically determined at the decision stage. This paper proposes a new strategy that uses dynamic thresholding based on geometry in the images. The proposed method is evaluated using a dataset we created. The experiments found that dynamic thresholding increases the F-measure by 0.11 while reducing false positives by 0.18. This shows that the proposed strategy effectively detects human objects when applied to the ISS.
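One simple way to realize geometry-based dynamic thresholding is to relax the confidence threshold as the bounding box shrinks, i.e., as the person appears farther from the camera. The linear rule and the `base`/`far` values below are illustrative assumptions, not the paper's formulation:

```python
def dynamic_threshold(box_height, frame_height, base=0.5, far=0.25):
    """Confidence threshold that relaxes for small (distant) detections.
    Boxes spanning the full frame use `base`; tiny boxes fall toward `far`."""
    ratio = min(box_height / frame_height, 1.0)   # crude proxy for proximity
    return far + (base - far) * ratio

def keep_detection(confidence, box_height, frame_height):
    """Accept a detection if its confidence clears the dynamic threshold."""
    return confidence >= dynamic_threshold(box_height, frame_height)
```

Under a static threshold of 0.5, a distant person detected at confidence 0.3 would be discarded; the dynamic rule keeps it while still rejecting low-confidence large boxes.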

Author 1: Wahyono
Author 2: Moh. Edi Wibowo
Author 3: Ahmad Ashari
Author 4: Muhammad Pajar Kharisma Putra

Keywords: Human detection; YOLO; dynamic thresholding; intelligent surveillance system

PDF

Paper 54: Forecast Breast Cancer Cells from Microscopic Biopsy Images using Big Transfer (BiT): A Deep Learning Approach

Abstract: Nowadays, breast cancer is one of the most critical health problems among men and women. A massive number of people are affected by breast cancer all over the world. An early diagnosis can help save lives with proper treatment. Recently, computer-aided diagnosis has become more popular in medical science, including in cancer cell identification. Deep learning models attract extensive attention because of their performance in identifying cancer cells. Mammography is a significant tool for detecting breast cancer; however, due to its complex structure, it is challenging for doctors to interpret. This study provides a convolutional neural network (CNN) approach to detecting cancer cells early. Dividing mammography images into benign and malignant can significantly improve detection and accuracy levels. The BreakHis 400X dataset is collected from Kaggle, and the DenseNet-201, NasNet-Large, Inception ResNet-V3, and Big Transfer (M-r101x1x1) architectures show impressive performance on it. Among them, M-r101x1x1 provides the highest accuracy of 90%. The main priority of this research work is to classify breast cancer with the highest accuracy using the selected neural networks. This study can improve the systematic early-stage detection of breast cancer and help physicians' decision-making.

Author 1: Md. Ashiqul Islam
Author 2: Dhonita Tripura
Author 3: Mithun Dutta
Author 4: Md. Nymur Rahman Shuvo
Author 5: Wasik Ahmmed Fahim
Author 6: Puza Rani Sarkar
Author 7: Tania Khatun

Keywords: Convolutional neural network (CNN); breast cancer; Big Transfer (BiT); densenet-201; NasNet-Large; Inception-Resnet-v3; mammography

PDF

Paper 55: Mobile Application with Augmented Reality to Improve Learning in Science and Technology

Abstract: Education has taken a big turn due to the current health situation, and as a result the use of technology has become a great ally of education, achieving important benefits. Augmented reality is being used by teachers and students, especially in distance and/or face-to-face learning, through didactic learning, self-instruction, and the promotion of research. This article describes the development and influence of a mobile application with augmented reality that serves as reinforcement for the learning of Science and Technology in sixth-grade primary and first-year secondary school students. The Mobile D methodology was used during the development of the application. The research design is pre-experimental, since pre-test and post-test assessments were administered to a single group of 30 students. The final results show that the students' level of interest increased to 100%, their level of understanding improved by 50%, and their level of satisfaction remained in a range of 40% satisfied and 60% very satisfied, which implies that the application helps them improve their learning.

Author 1: Miriam Gamboa-Ramos
Author 2: Ricardo Gómez-Noa
Author 3: Orlando Iparraguirre-Villanueva
Author 4: Michael Cabanillas-Carbonell
Author 5: José Luis Herrera Salazar

Keywords: Augmented reality; learning; mobile application; Mobile D methodology

PDF

Paper 56: Learning Pick to Place Objects using Self-supervised Learning with Minimal Training Resources

Abstract: Grasping objects is a critical but challenging aspect of robotic manipulation. Recent studies have concentrated on complex architectures and large, well-labeled datasets that need extensive computing resources and time to achieve generalization capability. This paper proposes an effective grasp-to-place strategy for manipulating objects in sparse and chaotic environments. A deep Q-network, a model-free deep reinforcement learning method for robotic grasping, is employed. The proposed approach is remarkable in that it executes both fundamental object pickup and placement actions using raw RGB-D images through an explicit architecture. Therefore, it needs fewer computing processes, takes less time to complete simulation training, and generalizes effectively across different object types and scenarios. Our approach learns policies that find the optimal grasp point via trial and error. A fully convolutional network is utilized to map the visual input into pixel-wise Q-values, a motion-agnostic representation that reflects the grasp's orientation and pose. In a simulation experiment, a UR5 robotic arm equipped with a parallel-jaw gripper is used to assess the proposed approach and demonstrate its effectiveness. The experimental outcomes indicate that our approach successfully grasps objects while consuming minimal time and computing resources.

Author 1: Marwan Qaid Mohammed
Author 2: Lee Chung Kwek
Author 3: Shing Chyi Chua

Keywords: Self-supervised; pick-to-place; robotics; deep q-network

PDF

Paper 57: Time Line Correlative Spectral Processing for Stratification of Blood Pressure using Adaptive Signal Conditioning

Abstract: Stratification of blood pressure is an essential input to the detection and prediction of most cardiovascular diseases and is also a great aid to medical practitioners in dealing with hypertension. Denoising based on spectral coding is developed from frequency spectral decomposition and a spectral correlative approach based on the wavelet transform. Existing approaches apply the standard deviation and mean of peak correlation in signal conditioning, and artifact filtration has been developed based on thresholding. Filtration of coefficients has an impact on the accuracy of estimation, and hence proper signal conditioning is a primary need. Whereas the threshold is measured with discrete monitoring, timeline observation can improve the accuracy of filtration under varying interference conditions. Dynamic interference from the capturing or processing source results in jitter-type noises, which are short-period deviations with varying frequency components; hence a time-frequency analysis is adopted for filtration. This paper presents a spectral correlation approach for signal conditioning in the stratification of blood pressure under cuffless monitoring. The presented approach operates on the spectral distribution of finer-resolution bands of the monitored signal for denoising and decision making. Existing approaches lack the capability of lossless denoising, which is efficiently worked out in this paper.
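Threshold-based filtration of wavelet detail coefficients, the baseline this paper improves upon, is classically done with soft thresholding; a minimal sketch of that step (the threshold value in the usage example is arbitrary for illustration):

```python
def soft_threshold(coeffs, t):
    """Soft-threshold detail coefficients: zero anything with magnitude
    at most t and shrink the remaining coefficients toward zero by t,
    the classic wavelet-denoising step."""
    out = []
    for c in coeffs:
        if abs(c) <= t:
            out.append(0.0)
        else:
            out.append(c - t if c > 0 else c + t)
    return out
```

For example, `soft_threshold([0.1, -0.05, 2.0, -1.5], 0.2)` suppresses the two small coefficients entirely and shrinks the two large ones.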

Author 1: Santosh Shinde
Author 2: Pothuraju RajaRajeswari

Keywords: Stratification of blood pressure; discrete wavelet transform; spectral coding; selective correlative approach

PDF

Paper 58: SMAD: Text Classification of Arabic Social Media Dataset for News Sources

Abstract: Due to advances in technology, social media has become the most popular means for the propagation of news. Many news items are published on social media such as Facebook, Twitter, and Instagram, but they are not categorized into different domains, such as politics, education, finance, art, sports, and health. Thus, text classification is needed to classify the news into different domains to reduce the huge amount of news available over social media, reduce the time and effort required to recognize a category or domain, and present data that improves the searching process. Most existing datasets do not follow pre-processing and filtering processes and are not organized to classification standards so as to be ready for use. Thus, Arabic Natural Language Processing (ANLP) phases are used to pre-process, normalize, and categorize the news into the right domain. This paper proposes an Arabic Social Media Dataset (SMAD) for text classification over social media using ANLP steps. The SMAD dataset consists of 15,240 Arabic news items collected from the Facebook social network. The experimental results illustrate that the SMAD corpus gives an accuracy of about 98% over five domains (Art, Education, Health, Politics, and Sport). The SMAD dataset has been trained and tested and is ready for use.
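The pre-processing and normalization phase mentioned above can be sketched with a few steps that are common in Arabic NLP pipelines (stripping diacritics and tatweel, unifying letter variants); the exact steps of the SMAD pipeline may differ.

```python
def normalize_arabic(text):
    """Common Arabic normalization steps: strip diacritics and tatweel,
    unify alef variants, map taa marbuta to haa and alef maqsura to yaa."""
    for d in "\u064b\u064c\u064d\u064e\u064f\u0650\u0651\u0652":  # diacritics
        text = text.replace(d, "")
    text = text.replace("\u0640", "")              # tatweel (kashida)
    for alef in ("\u0622", "\u0623", "\u0625"):    # alef variants -> bare alef
        text = text.replace(alef, "\u0627")
    text = text.replace("\u0629", "\u0647")        # taa marbuta -> haa
    text = text.replace("\u0649", "\u064a")        # alef maqsura -> yaa
    return text
```

Normalized text can then be tokenized and fed to any of the usual classifiers for domain categorization.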

Author 1: Amira M. Gaber
Author 2: Mohamed Nour El-din
Author 3: Hanan Moussa

Keywords: Text classification; Arabic text classification; Arabic Natural Language Processing (ANLP)

PDF

Paper 59: P Systems Implementation: A Model of Computing for Biological Mitochondrial Rules using Object Oriented Programming

Abstract: Membrane computing is a computational framework based on the behavior and structure of living cells. P systems arise from the biological processes that occur in the organelles of living cells in a non-deterministic and maximally parallel manner. This paper aims to build a powerful computational model that combines the rules of active and mobile membranes, called Mutual Dynamic Membranes (MDM). The proposed model describes the biological mechanisms of the metabolic regulation of mitochondrial dynamics carried out by mitochondrial membranes. The behaviors of the proposed model regulate the mitochondrial fusion and fission processes based on a combination of P system variants. The combination of different variants in our computational model, and their high parallelism, makes it possible to solve problems belonging to NP-complete classes in polynomial time more efficiently than other conventional methods. To evaluate this model, it was applied to solve the SAT problem, and a set of computational complexity results was calculated that confirmed the quality of our model. As another contribution of this paper, the biological models of mitochondria are presented in formal class relationship diagrams designed and illustrated using the Unified Modeling Language (UML). This mechanism will be used to define a new specification of membrane processes in Object-Oriented Programming (OOP), adding the functionality of a common programming methodology to solve a large category of NP-hard problems as an interesting initiative for future research.

Author 1: Mohammed M. Nasef
Author 2: Bishoy El-Aarag
Author 3: Amal Hashim
Author 4: Passent M. El Kafrawy

Keywords: Computational biology; P systems; membrane fusion-fission; mitochondria; Mutual Dynamic Membranes (MDM); NP-complete problems

PDF

Paper 60: Skin Lesions Classification and Segmentation: A Review

Abstract: An automated intelligent system based on imaging input for unbiased diagnosis of skin-related diseases is an essential screening tool nowadays, because visual and manual analysis of skin lesion conditions from images is a time-consuming process that puts a significant workload on health practitioners. Various machine learning and deep learning techniques have been researched to reduce and alleviate these workloads. In several early studies, standard machine learning techniques were the more popular approach, in contrast to recent studies, which rely more on deep learning. Although the recent deep learning approach, mainly based on convolutional neural networks, has shown impressive results, some challenges remain open due to the complexity of skin lesions. This paper presents a wide range of analyses covering the classification and segmentation phases of skin lesion detection using deep learning techniques. The review starts with the classification techniques used for skin lesion detection, followed by a concise review of lesion segmentation, also using deep learning techniques. Finally, this paper examines and analyzes the performance of state-of-the-art methods evaluated on various skin lesion datasets, using performance measures based on accuracy, mean specificity, mean sensitivity, and area under the curve for 12 different convolutional neural network based classification models.
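The performance measures listed at the end of the abstract (accuracy, sensitivity, specificity) can be computed from a binary confusion matrix as follows; these are the standard definitions, independent of any particular CNN model.

```python
def binary_metrics(tp, fp, tn, fn):
    """Classification metrics commonly reported for skin lesion models,
    computed from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate (recall)
        "specificity": tn / (tn + fp),   # true negative rate
    }
```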

Author 1: Marzuraikah Mohd Stofa
Author 2: Mohd Asyraf Zulkifley
Author 3: Muhammad Ammirrul Atiqi Mohd Zainuri

Keywords: Lesion segmentation; lesion classification; machine learning; deep learning; skin lesions

PDF

Paper 61: The Development of Borneo Wildlife Game Platform

Abstract: Games are a unique, interesting, and fun entertainment medium. Games can contain education, introductions to certain flora and fauna, work and daily life, intelligence, and dexterity. The game built in this study aims to introduce the flora and fauna found in the forests of East Borneo (Kalimantan), Indonesia as the subject of a platformer game. The game was built using the Game Development Life Cycle (GDLC) method in order to make a good and organized game. The GDLC method contains six stages: first, initiation, for the initial idea; second, pre-production, for asset creation; third, production, for system creation; fourth, testing, for the internal trial; fifth, beta, for the external trial; and sixth, release, for publication. The study resulted in the Borneo Wildlife platformer game. This game introduces the unique flora and fauna of East Borneo, Indonesia, such as black orchids, ironwood trees, proboscis monkeys, Mahakam dolphins, and hornbills, as well as how to protect and preserve their natural environment. The game received 46 downloads from March 1, 2021 to May 24, 2021.

Author 1: Ramadiani Ramadiani
Author 2: Erdinal Respatti
Author 3: Gubta Mahendra Putra
Author 4: Muhammad Labib Jundillah
Author 5: Tamrin Rahman
Author 6: Muhammad Dahlan Balfas
Author 7: Arda Yunianta
Author 8: Hasan Jamal Alyamani

Keywords: Game development; Kalimantan; Borneo; wildlife game

PDF

Paper 62: Design of a Novel Architecture for Cost-Effective Cloud-based Content Delivery Network

Abstract: A Content Delivery Network (CDN) offers faster transmission of massive content from content providers to users, using geographically distributed servers to offer seamless relay of service. However, a conventional CDN is not capable of catering to the larger scope of demand for data delivery, and hence the cloud-based CDN has evolved as a solution. In a real-world scenario, each requested content item has different popularity for different users. The problem lies in deciding which content objects should be placed in each content server to minimize delivery delays and storage costs. A review of existing approaches in cloud-based CDN shows that the problem of content placement has not yet been solved. A precise strategy is therefore required to select the content objects to be placed in a content server to achieve higher efficiency without affecting CCDN performance. The proposed system introduces a novel architecture that addresses this practical problem of content placement. The study treats the placement problem as an optimization problem with the ultimate purpose of maximizing the user content requests served and reducing the overall cost associated with content and data delivery. With the inclusion of a bucket-based concept for the cache proxy and content provider, a novel topology is constructed in which an optimal algorithm for content placement is implemented using the matrix operations of row reduction and column reduction. Simulation outcomes show that the proposed system achieves better performance than the existing content placement strategy for cloud-based CDN.
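The row and column reduction used by the placement algorithm can be sketched on a small cost matrix; this is the generic reduction step of assignment-style optimization, with the actual CCDN cost model left out.

```python
def reduce_matrix(cost):
    """Row reduction then column reduction: subtract each row's minimum,
    then each column's minimum, so every row and column contains at least
    one zero-cost placement candidate."""
    reduced = [[c - min(row) for c in row] for row in cost]
    for j in range(len(reduced[0])):
        col_min = min(reduced[i][j] for i in range(len(reduced)))
        for i in range(len(reduced)):
            reduced[i][j] -= col_min
    return reduced
```

Zeros in the reduced matrix mark content-to-server assignments that are locally cheapest, which an assignment procedure can then select from.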

Author 1: Suman Jayakumar
Author 2: Prakash S
Author 3: C. B Akki

Keywords: Content delivery network; content placement; cloud; optimization; data delivery; cost

PDF

Paper 63: Intelligent Locking System using Deep Learning for Autonomous Vehicle in Internet of Things

Abstract: Nowadays, we use modern locking system applications to lock and unlock our vehicles. The most common methods are using a key to unlock the car from outside and pressing the unlock button inside the car, and many vehicles use a keyless-entry remote control. However, none of these locking systems is user friendly in impaired situations, for example when the user's hands are full or the key is lost or forgotten, nor are they convenient for special cases such as disabled drivers. Hence, we propose a new way to unlock the vehicle using face recognition. Face recognition is one of the key components of future intelligent vehicle applications in the Autonomous Vehicle (AV) and is crucial for the next generation of AVs to promote user convenience. This paper proposes a locking system for AVs using a deep learning approach that adopts face recognition. It aims to design and implement the face recognition procedure using an image dataset organized into training, validation, and test folders. The methodology used is the Convolutional Neural Network (CNN), programmed in Python on Google Colab. We created two different folders to test whether the methodology is capable of recognizing different faces. Finally, after dataset training, testing was conducted, and the results show that the trained model was successfully implemented: it predicts accurate outputs and gives significant performance. The dataset consists of face angles from the front, right (30-45 degrees), and left (30-45 degrees).

Author 1: S. Zaleha. H
Author 2: Nora Ithnin
Author 3: Nur Haliza Abdul Wahab
Author 4: Noorhazirah Sunar

Keywords: Face recognition; deep learning; internet of things; convolutional neural networks

PDF

Paper 64: A Case Study on Social Media Analytics for Malaysia Budget

Abstract: Malaysian citizens always look forward to the budget announcement presented by the government each year. Because of its direct effect on the economy, the citizens' opinions are crucial to understanding what they want and whether the budget satisfies them. Social media analytics can gather netizens' opinions on Twitter and conduct sentiment analysis. Most corpora in previous sentiment analysis research are English-based. However, tweets in Malaysia currently use a combination of English and Malay words. Therefore, this study uses a hybrid of a corpus-based approach and a support vector machine. The semantic corpus combines Malay and English words. Then, a domain-specific corpus on the Malaysia Budget, the budget corpus, is constructed. Two separate analyses are performed: category classification and sentiment analysis. Overall, most netizens have a positive sentiment about Malaysia's Budget, with 56.28% of the tweets being positive. The majority of the netizens focus on social welfare and education, which have the highest numbers of tweets. The discussion highlights suggestions to improve the accuracy of this study.

Author 1: Ahmad Taufiq Mohamad
Author 2: Nur Atiqah Sia Abdullah

Keywords: Malaysia budget; twitter; social media analytics; sentiment analysis; category classification; budget corpus

PDF

Paper 65: A Pattern Language for Class Responsibility Assignment for Business Applications

Abstract: Assigning class responsibility is a design decision to be made early in the design phase of software development, bridging requirements and an analysis model. In general, assigning class responsibility relies heavily on the expertise and experience of the developer, and it is often ad hoc. Class responsibility assignment rules are hard to define uniformly across various system domains. Thus, existing work describes general stepwise guidelines without concrete methods, which limits the ability to derive an analysis model from a requirements specification without loss of information and to provide sufficient quality in the analysis model. This study tried to grasp the commonality and variation in analyzing the business application domain. By narrowing the subject of the solution, the presented patterns can help identify and assign class responsibilities for a system belonging to the business application domain. The presented pattern language consists of six segmented patterns, covering 19 variations of relationship types among conceptual classes. Each sequence of a use case specification can be analyzed as the result of weaving together a set of the six segmented patterns. A case study with a payroll system is presented to prove the patterns' feasibility, explaining how the proposed patterns can develop an analysis model. The coverage of the proposed CRA patterns and the enhancement of implementation code quality are discussed as benefits.

Author 1: Soojin Park

Keywords: Class responsibility assignment; analysis pattern; business application; sequence diagram

PDF

Paper 66: Implementing Flipped Classroom Strategy in Learning Programming

Abstract: Novice students encounter many difficulties and challenges when learning to program. They face a high cognitive load in learning and a lack of prior programming knowledge. Various strategies and approaches have been implemented to overcome these difficulties and challenges. The flipped classroom is an active learning strategy implemented in many subjects and courses, including programming. The flipped classroom strategy consists of three phases: pre-class, in-class, and post-class. A focus group discussion was conducted involving 13 participants from various learning institutions. The purpose of the study is to discuss the implementation of the flipped classroom strategy in programming. The study also identifies techniques for monitoring students' involvement in activities outside the classroom and appropriate motivation to engage students in programming. Related research questions were constructed as guidelines for the discussion, and a deductive thematic analysis was performed on the transcripts. As a result, four pre-determined codes and two additional codes were generated from the analysis. This study identifies suitable activities, tools, monitoring strategies, and motivation to support the implementation of a flipped classroom in programming. Flipped classrooms show good potential for learning programming given a systematic and carefully planned implementation.

Author 1: Rosnizam Eusoff
Author 2: Syahanim Mohd Salleh
Author 3: Abdullah Mohd Zin

Keywords: Flipped classroom; learning programming; cognitive load; active learning; focus group discussion

PDF

Paper 67: High Density Impulse Noise Removal from Color Images by K-means Clustering based Detection and Least Manhattan Distance-oriented Removal Approach

Abstract: Removal of impulse noise from color images is a demanding job in the arena of image processing. Impulse noise is fundamentally of two types: salt-and-pepper noise (SAPN) and random-valued impulse noise (RVIN). The key challenge in impulse noise removal from color images lies in tackling the randomness of the noise pattern and in handling multiple color channels efficiently. Over the years, several filters have been designed to remove impulse noise from color images, but researchers still face a stringent challenge in designing a filter that is effective at high noise densities. In this study, K-means clustering-based detection followed by a minimum-distance-based removal approach is used for high-density impulse noise removal from color images. In the detection phase, K-means clustering is applied to combined data consisting of elements from designated 5 × 5 windows of all planes of the RGB color image to segregate noisy and non-noisy elements. In the removal phase, each noisy pixel is replaced by the average of the medians of all non-noisy pixels and of the non-noisy pixels within a 7 × 7 window residing at the least Manhattan distance from the inspected noisy pixel. The performance of the proposed method is evaluated and compared against the latest filters on the basis of well-known metrics such as peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Based on these comparisons, the proposed filter is found to be superior to the compared filters in removing impulse noise at high noise densities.
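The removal phase can be illustrated with a simplified version of the Manhattan-distance rule: replace a noisy pixel with the median of the non-noisy window pixels lying at the least Manhattan distance from it. The window encoding here is illustrative; the paper additionally averages this value with the median of all non-noisy pixels in the window.

```python
import statistics

def replace_noisy(window, noisy, center=(3, 3)):
    """Replace the center pixel with the median of the non-noisy pixels at
    the least Manhattan distance from it. `window` maps (row, col) -> value;
    `noisy` is the set of coordinates flagged by the detection phase."""
    candidates = [(abs(r - center[0]) + abs(c - center[1]), v)
                  for (r, c), v in window.items()
                  if (r, c) not in noisy and (r, c) != center]
    d_min = min(d for d, _ in candidates)
    return statistics.median(v for d, v in candidates if d == d_min)
```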

Author 1: Aritra Bandyopadhyay
Author 2: Kaustuv Deb
Author 3: Atanu Das
Author 4: Rajib Bag

Keywords: Impulse noise; color image; salt and pepper noise; random valued impulse noise

PDF

Paper 68: MultiStage Authentication to Enhance Security of Virtual Machines in Cloud Environment

Abstract: The adoption of cloud computing in different areas has shown benefits and provided solutions for applications. The cloud provider offers virtualized platforms through virtual machines for cloud users to store data and perform computations. Due to the distributed nature of the cloud, there are many challenges, and security is one of them. To address this challenge, a verification method is implemented to achieve a high level of security in the cloud environment. Many researchers have provided different authentication mechanisms to safeguard virtual machines from attacks. In this paper, multistage authentication is proposed to counter threats from attackers targeting virtual machines. In order to authorize access to a virtual machine, multistage authentication incorporating factors such as username, email id, password, and OTP is carried out. A Mealy machine model is applied to analyze the state changes with the factors supplied at multiple stages and the trust built with each stage. Experimental results show that the system is safe, achieving data integrity and privacy. The proposed work gives protection against unauthorized users and provides a secure environment for cloud users accessing virtual machines.
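The Mealy-machine view of multistage authentication can be sketched as a transition table keyed on (state, verified factor), with an output emitted on each transition; the state and output names below are illustrative, not the paper's exact model.

```python
AUTH_TRANSITIONS = {
    # (current state, verified factor) -> (next state, output)
    ("START", "username"): ("S1", "ask_email"),
    ("S1", "email"):       ("S2", "ask_password"),
    ("S2", "password"):    ("S3", "send_otp"),
    ("S3", "otp"):         ("GRANTED", "access_vm"),
}

def authenticate(factors):
    """Run the Mealy-style multistage check; any missing or out-of-order
    factor leaves the machine in a denied state."""
    state, outputs = "START", []
    for factor in factors:
        key = (state, factor)
        if key not in AUTH_TRANSITIONS:
            return "DENIED", outputs
        state, out = AUTH_TRANSITIONS[key]
        outputs.append(out)
    return state, outputs
```

Because each stage's output depends on both the state and the supplied factor, trust accumulates only along the single valid path through the machine.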

Author 1: Anitha HM
Author 2: P Jayarekha

Keywords: Authentication; multi stage authentication; one time password; finite state machine; mealy machine

PDF

Paper 69: Computer Vision based Polyethylene Terephthalate (PET) Sorting for Waste Recycling

Abstract: Recycling plays a vital role in saving the planet for future generations, as it keeps the environment clean, reduces energy consumption, and saves materials. Of special interest is plastic, which may take centuries to decompose. In particular, polyethylene terephthalate (PET) is a widely used plastic for packaging various products that can be recycled. Sorting PET can be performed, either manually or automatically, at recycling facilities where post-consumer objects move on a conveyor belt. Automated sorting, in particular, can process a large number of PET objects without human intervention. In this paper, we propose a computer vision system for recognizing PET objects placed on a conveyor belt. Specifically, DeepLabv3+ is deployed to segment PET objects semantically. Such a system can be exploited by an autonomous robot to reduce the need for human intervention and supervision. The conducted experiments showed that the proposed system outperforms state-of-the-art semantic segmentation approaches, with a weighted IoU of 97% and a mean BFScore of 89%.
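The weighted IoU reported in the abstract can be computed as a per-class IoU weighted by each class's share of ground-truth pixels; this is the common definition used by typical segmentation toolboxes, shown here on flattened label arrays.

```python
def weighted_iou(pred, truth, classes):
    """Mean IoU weighted by each class's share of ground-truth pixels.
    `pred` and `truth` are equal-length flat label sequences."""
    total = len(truth)
    score = 0.0
    for cls in classes:
        inter = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
        union = sum(1 for p, t in zip(pred, truth) if p == cls or t == cls)
        weight = sum(1 for t in truth if t == cls) / total
        score += weight * (inter / union if union else 0.0)
    return score
```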

Author 1: Ouiem Bchir
Author 2: Shahad Alghannam
Author 3: Norah Alsadhan
Author 4: Raghad Alsumairy
Author 5: Reema Albelahid
Author 6: Monairh Almotlaq

Keywords: PET; recycling; computer vision; machine learning

PDF

Paper 70: A New Approach for Training Cobots from Small Amount of Data in Industry 5.0

Abstract: Machine learning is a vital part of today's world, and the current machine learning slogan is that "big data is required for a smarter AI": artificial intelligence learning techniques require training algorithms with huge amounts of data. Collecting and storing this data takes time and requires ever more computer memory. In Industry 5.0, human-robot collaboration is a challenge for artificial intelligence (AI) and its subdomains; indeed, integration of its domains is required. Many AI techniques are needed, ranging from visual processing to symbolic reasoning, task planning to theory of mind, and reactive control to action recognition and learning. The two main obstacles to this natural workflow interaction are big-data memorization and learning time that grows exponentially with problem complexity. In this article, we propose a new approach for training cobots from a small amount of data in the context of Industry 5.0, based on a common-sense capability inspired by human learning.

Author 1: Khalid Jabrane
Author 2: Mohammed Bousmah

Keywords: Small data; industry 5.0; common-sense capability; machine learning

PDF

Paper 71: Evaluation of using Parametric and Non-parametric Machine Learning Algorithms for Covid-19 Forecasting

Abstract: Machine learning prediction algorithms are considered powerful tools that can provide accurate insights into the spread and mortality of the novel Covid-19 disease. In this paper, a comparative study is introduced to evaluate the use of several parametric and non-parametric machine learning methods for modeling the total number of Covid-19 cases (TC) and total deaths (TD). A number of input features from the available Covid-19 time sequence are investigated to select the most significant model predictors. The impact of using the number of PCR tests as a model predictor is uniquely investigated in this study. The parametric regressions, including linear, log, polynomial, generalized additive, and spline regression, and the non-parametric K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Decision Tree (DT) methods have been utilized for building the models. The findings show that, for the dataset used, linear regression is more accurate than the non-parametric models in predicting TC and TD. It is also found that including the total number of tests in the mortality model significantly increases its prediction accuracy.
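The parametric baseline, simple linear regression, can be fitted in closed form; this generic least-squares sketch stands in for the paper's model, whose predictors and dataset are not reproduced here.

```python
def fit_linear(x, y):
    """Ordinary least squares for y = a + b*x: slope from the covariance
    over the variance, intercept from the means."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b   # (intercept a, slope b)
```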

Author 1: Ghada E. Atteia
Author 2: Hanan A. Mengash
Author 3: Nagwan Abdel Samee

Keywords: Covid-19; parametric regression; non-parametric regression; linear regression; log regression; polynomial regression; generalized additive regression; spline regression; k-nearest neighbors; KNN; support vector machine; SVM; decision trees; DT

PDF

Paper 72: Comparison of Machine Learning Algorithms for Sentiment Classification on Fake News Detection

Abstract: With the wide usage of the World Wide Web (WWW) and social media platforms, fake news has become rampant among users, who tend to create and share news without knowing its authenticity. This has become one of the most critical issues in society due to the dissemination of false information. Fake news therefore needs to be detected as early as possible to avoid negative influences on people who may rely on such information while making important decisions. The aim of this paper is to develop an automated sentiment classifier model that can help individuals or readers to understand the sentiment of fake news immediately. The Cross-Industry Standard Process for Data Mining (CRISP-DM) process model was applied as the research methodology. The fake news detection dataset was collected from the Kaggle website, then trained, tested, and validated with cross-validation and sampling methods. A comparison of model performance using four machine learning algorithms, namely Naive Bayes, Logistic Regression, Support Vector Machine, and Random Forest, was constructed to investigate which algorithm is most effective for sentiment text classification. A comparison between 1000 and 2500 instances from the fake news dataset was analyzed using 200 and 500 tokens. The results show that Random Forest (RF) achieved the highest accuracy among the machine learning algorithms compared.
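The cross-validation used for training and validation can be sketched as a plain k-fold index split; this shows the mechanics only, not the CRISP-DM pipeline or the classifiers themselves.

```python
def k_fold_indices(n, k):
    """Split n sample indices into k disjoint test folds, pairing each
    with the remaining indices as its training set."""
    folds = [[] for _ in range(k)]
    for i in range(n):
        folds[i % k].append(i)
    splits = []
    for j in range(k):
        test = folds[j]
        train = [i for i in range(n) if i not in set(test)]
        splits.append((train, test))
    return splits
```

Each classifier is then trained on every `train` set and scored on the matching `test` set, and the k scores are averaged.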

Author 1: Yuzi Mahmud
Author 2: Noor Sakinah Shaeeali
Author 3: Sofianita Mutalib

Keywords: Data mining; fake news; sentiment classification; supervised machine learning; text mining

PDF

Paper 73: Performance Analysis of IoT-based Healthcare Heterogeneous Delay-sensitive Multi-Server Priority Queuing System

Abstract: Previous studies have considered scheduling schemes for Internet of Things (IoT)-based healthcare systems such as First Come First Served (FCFS) and Shortest Job First (SJF). However, these scheduling schemes have limitations: large requests can starve short requests, processes can starve and take a long time to complete if short processes are continuously added, and the schemes perform poorly under overloaded conditions. To address these challenges, this paper proposes an analytical model of a prioritized scheme that provides service differentiation, with delay-sensitive packets receiving service before delay-tolerant packets and, within each class, short packets being serviced before large packets. The numerical results obtained from the derived models show that the prioritized scheme offers better performance than the FCFS and SJF scheduling schemes for both short and large packets, except for the shortest short packets, which perform better under SJF than under the prioritized scheme in terms of the mean slowdown metric. It is also observed that the prioritized scheme performs better than FCFS and SJF for all considered large packets, and the difference in performance is more pronounced for the shortest large packets. It is further observed that reducing the packet thresholds decreases the mean slowdown, and the decrease is more pronounced for short packets with larger sizes and large packets with shorter sizes.
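The service differentiation described above (delay-sensitive before delay-tolerant, short before large within a class) can be sketched with a priority queue and the mean slowdown metric; this toy model serves a fixed batch and ignores arrivals and thresholds, which the analytical model handles.

```python
import heapq

def mean_slowdown(packets):
    """Serve packets ordered by (priority, size): delay-sensitive (priority 0)
    before delay-tolerant (priority 1), short before large within a class.
    Slowdown of a packet = its completion time / its size."""
    heap = [(prio, size, name) for name, prio, size in packets]
    heapq.heapify(heap)
    t, slowdowns = 0.0, []
    while heap:
        prio, size, name = heapq.heappop(heap)
        t += size                      # service time proportional to size
        slowdowns.append(t / size)
    return sum(slowdowns) / len(slowdowns)
```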

Author 1: Barbara Kabwiga Asingwire
Author 2: Alexander Ngenzi
Author 3: Louis Sibomana
Author 4: Charles Kabiri

Keywords: Delay tolerant; delay sensitive; internet of things; mean slowdown; prioritized scheme

PDF

Paper 74: A Survey on Sentiment Analysis Approaches in e-Commerce

Abstract: Sentiment analysis is the process of judging the expression of customers' behavior and feeling as positive, negative, or neutral. A variety of different approaches to sentiment analysis is in use, reflecting the analysis of unstructured customer review datasets to generate insightful and helpful information. The aim of this paper is to highlight the research designs for sentiment analysis and the methodological choices made by other researchers on e-commerce customer reviews, to guide future development. This paper presents a study of sentiment analysis approaches, process challenges, and trends to give researchers a review and survey of the existing literature. This study then discusses feature extraction and classification methods for sentiment analysis of customer reviews to provide an exhaustive view of these methods. Knowledge of the challenges of sentiment analysis helps to clarify future directions.

Author 1: Thilageswari a/p Sinnasamy
Author 2: Nilam Nur Amir Sjaif

Keywords: Sentiment analysis; e-Commerce; feature extraction; classification; customers’ reviews

PDF

Paper 75: Assessment Framework for Defining the Maturity of Information Technology within Enterprise Risk Management (ERM)

Abstract: The process of reviewing, assessing, and improving an organization's IT risk management requires some basic information summarized in a process maturity profile. In general, IT risk management standards and frameworks do not include a mechanism for assessing the maturity level of process implementations. This study was conducted to develop a framework that can be applied to assess the maturity level of IT risk management under ISO/IEC 27005. A standards-based management system implementation can be represented as a cycle of planning, implementation, validation, and action. The proposed evaluation framework consists of templates, methods, and working papers. The templates cover the evaluation areas, namely planning, implementation, validation, and action; the evaluation area details (8 domains, 35 subdomains, 82 items); and the evaluation metrics and criteria. Meanwhile, a working paper has been created to assist in conducting the evaluation. Using this evaluation framework provides a representation of the maturity level of the entire IT risk management process, based on the provisions of ISO/IEC 27005. This framework complements existing models by providing (1) a single cycle of planning, establishment, validation, and execution, (2) evaluation tools, (3) more comprehensive data collection methods, and (4) a priority list of elements to be reformed and/or improved.

Author 1: Rokhman Fauzi
Author 2: Muharman Lubis

Keywords: Risk management; assessment framework; maturity level; PDCA cycles; ISO/IEC 27005

PDF

Paper 76: Using Eye Tracking Approach in Analyzing Social Network Site Area of Interest for Consumers’ Decision Making in Social Commerce

Abstract: The growing popularity of social network sites (SNS) in social commerce (s-commerce) has intensified interest in understanding consumers' decision making based on SNS seller generated content (SGC) and user generated content (UGC). This study examines consumers' decision making during online shopping by analyzing both seller and user generated content on SNS using an eye tracking approach. In an eye tracking experiment with 50 participants, gaze maps in terms of fixation time were collected and analyzed to measure the order in which consumers viewed the identified Areas of Interest (AOIs), and heat maps were used to measure the intensity with which consumers looked at the identified AOIs. The results identify SGC as the most important AOI compared to UGC, with the product image and description receiving the greatest attention from consumers when making decisions. Furthermore, based on fixation time, seller information serves as a key entry point for SNS-based commerce. The analysis shows no significant influence of the AOI viewing order on the intensity with which consumers look at the AOIs. The comparison between Facebook and Instagram reveals some substantial differences in the mean fixation time and intensity between AOIs. The findings suggest several AOIs that should be addressed and emphasized by sellers and companies interested in utilizing SNS in their s-commerce strategy.
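Aggregating fixation time per AOI and ranking the AOIs, as done for the gaze maps, can be sketched as follows; the AOI names are illustrative.

```python
def rank_aois(fixations):
    """Sum fixation time (ms) per AOI and rank AOIs by total viewing time,
    most-viewed first. `fixations` is a list of (aoi_name, duration_ms)."""
    totals = {}
    for aoi, ms in fixations:
        totals[aoi] = totals.get(aoi, 0) + ms
    return sorted(totals, key=totals.get, reverse=True)
```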

Author 1: Suaini Binti Sura
Author 2: Nona M. Nistah
Author 3: Sungwon Lee
Author 4: Daimler Benz Alebaba

Keywords: Eye tracking; SNS-based commerce; seller generated content; user generated content; social commerce

PDF

Paper 77: Chatbot Design for a Healthy Life to Celiac Patients: A Study According to a New Behavior Change Model

Abstract: There is an absolute need for technology in our daily life, which keeps people busy with their smartphones all day long. In the healthcare field, mobile apps have been widely used for the treatment of many diseases, yet most of these apps were designed without considering health behavior change models. Celiac disease is a significant public health problem worldwide; in Saudi Arabia, its incidence is 1.5%. Celiac patients have a natural demand for resources to facilitate care and research; however, they have not received much attention in the field of healthcare apps. This study introduces a new health behavior change model based on existing common models and adapts it to the use of technology for changing the behavior of celiac patients towards healthy, suitable food. As proof of concept, the new model was applied to a WhatsApp chatbot for patients with celiac disease. To test the impact of the chatbot, 60 Saudi celiac patients participated in three steps. First, they completed a pre-test questionnaire. Then, the participants were divided into two groups: a control group, which was left without any intervention, and a test group, which used the chatbot for 90 days. Finally, all participants completed a post-test questionnaire. The results confirmed a statistically significant difference between the two groups, and the test group improved their healthy life in terms of eating habits, reduced celiac symptoms, and commitment to the treatment plan.

Author 1: Eythar Alghamdi
Author 2: Reem Alnanih

Keywords: Celiac disease; health behavior changes models; healthcare apps; user-centered design; experiment test; WhatsApp chatbot

PDF

Paper 78: Design of Optimal Control of DFIG-based Wind Turbine System through Linear Quadratic Regulator

Abstract: This paper is devoted to implementing an optimal control approach, applying a Linear Quadratic Regulator (LQR) to control a DFIG-based wind turbine. The main goal of the proposed LQR controller is to control the active and reactive power and the DC-link voltage of the DFIG system in order to extract the maximum power from the wind turbine. To this end, the linearized state-space model of the studied system in the d-q rotating reference frame is established, and the overall system is controlled using an MPPT strategy. Simulation results are obtained with SimPowerSystems and Simulink of MATLAB in terms of steady-state values, peak amplitude, settling time and rise time. Finally, eigenvalue analysis and the simulation results are used to assess the robustness and stability of the studied system and the effectiveness of the control strategy.
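The LQR design named in the abstract reduces to solving a Riccati equation for a state-feedback gain. Below is a minimal sketch on a hypothetical two-state discrete-time plant (not the DFIG model from the paper), computing the gain by iterating the discrete Riccati recursion with NumPy; all matrices are illustrative assumptions.

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR gain K via the backward Riccati recursion:
    P <- Q + A'P(A - BK), with K = (R + B'PB)^-1 B'PA, until convergence."""
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        P_next = Q + A.T @ P @ (A - B @ K)
        if np.allclose(P_next, P, atol=1e-10):
            break
        P = P_next
    return K

# Hypothetical toy plant (NOT the DFIG state-space model from the paper)
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # input weighting

K = dlqr_gain(A, B, Q, R)
# A stabilizing gain places all closed-loop eigenvalues inside the unit circle
eigs = np.linalg.eigvals(A - B @ K)
print(np.all(np.abs(eigs) < 1.0))  # → True
```

The same eigenvalue check is what the abstract's "eigenvalue analysis" performs on the full DFIG model to confirm closed-loop stability.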

Author 1: Ines Zgarni
Author 2: Lilia ElAmraoui

Keywords: Wind turbine system; doubly fed induction generator; DFIG; optimal control; linear quadratic regulator; LQR

PDF

Paper 79: Mask RCNN with RESNET50 for Dental Filling Detection

Abstract: Teeth are essential for humans to eat food, yet they get damaged for several reasons, such as poor maintenance. Damaged teeth can cause severe pain and make it difficult to eat. To safeguard a tooth from minor damage, an inert material is used to close the gap between the live part of the tooth, or sometimes even the nerve, and the enamel. However, long-term neglect can increase the damage and inevitably result in a root canal or tooth replacement. In a root canal, the gap between the nerve and the enamel is filled with an inert material, and an X-ray is taken to verify that the filling has been done properly. As technology develops, robots are being introduced into many fields; in medicine, there are instances where robots perform surgery. Since dental treatment relies on X-rays to assess fillings, this work introduces a model that analyzes the X-ray and estimates the quality of the filling. The model is constructed using Mask RCNN with the ResNet50 architecture and trained on a dataset of different kinds of fillings. Because it performs pixel-level classification, this model can be used to enable machines to carry out dental operations.

Author 1: S Aparna
Author 2: Kireet Muppavaram
Author 3: Chaitanya C V Ramayanam
Author 4: K Satya Sai Ramani

Keywords: Dental x-rays; deep learning; mask RCNN; RESNET50

PDF

Paper 80: Performance Analysis of Qualitative Evaluation Model for Software Reuse with AspectJ using AHP

Abstract: Reusability is necessary for developing advanced software. Aspect Oriented Programming (AOP) is an emerging approach that addresses the problem of scattered software modules and tangled code. The aim of this paper is to explore the AOP approach through the implementation of real-life projects in the AspectJ language and its impact on software quality in the form of reusability. Experimental results for 11 projects (Java and AspectJ) are evaluated using the proposed Quality Evaluation Model for Software Reuse (QEMSR) and the existing Aspect Oriented Software Quality Model (AOSQ). The QEMSR model is evaluated on developers' AOP projects using Analytic Hierarchy Process (AHP) tools. The paper demonstrates the evaluation of software reusability and its positive impact on software quality. The QEMSR model is used to assess Aspect Oriented reusability quality issues, which helps developers adapt it for software development. The overall quality scores of the three models QEMSR, AOSQ and PAOSQMO were calculated as 0.62552223, 0.5283693 and 0.505815, respectively. Accordingly, the QEMSR model performs best in terms of quality on the same characteristics and sub-characteristics.
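The AHP weighting step used by such quality models can be sketched briefly: a pairwise-comparison matrix is reduced to a priority (weight) vector via its principal eigenvector, and Saaty's consistency ratio checks the judgments. The comparison values below are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Hypothetical 3x3 pairwise-comparison matrix for three quality
# sub-characteristics (illustrative values only).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# The principal eigenvector of A gives the AHP priority vector.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                      # normalize so the weights sum to 1

# Saaty's consistency ratio: CI = (lambda_max - n)/(n - 1); RI(n=3) = 0.58.
n = A.shape[0]
lambda_max = eigvals[k].real
CR = ((lambda_max - n) / (n - 1)) / 0.58
print(w.round(3), CR < 0.1)       # CR below 0.1 means acceptable consistency
```

Model-level scores like the 0.62552223 reported for QEMSR are then weighted sums of sub-characteristic measurements using such a priority vector.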

Author 1: Ravi Kumar
Author 2: Dalip

Keywords: Reusability; AspectJ; software quality metrics; analytic hierarchy process

PDF

Paper 81: Analysis of the Asynchronous Motor Controlled by Frequency Inverter Applied to Fatigue Test System

Abstract: This research analyzes the functional and operational parameters of a three-phase squirrel-cage induction motor. The experimental model consists of a fatigue test system operated by two types of control: frequency-inverter control and classic star-delta control, where the motor load consists of a standard specimen corresponding to 61.9% of the nominal load of the object of study. Experimental evaluations of this rotary machine were performed under regular operating conditions. Electrical, mechanical and thermal variables were recorded in a database, where they were classified, processed, analyzed and interpreted. The graphs highlight the quasi-constant behavior of cos(φ) at 0.754 at different regulated frequency values, which leads to low energy consumption: a current of 1.88 Ampere with the inverter compared to a weighted 2.04 Ampere without it, together with an improvement in torque from 0.71 N-m to 0.94 N-m when the drive is used. However, operating this machine at low frequencies causes some harm to normal operation, such as a rapid rise in operating temperature to 78.76 °C in a short time, with a projection to increase. Similarly, the harmonic distortion injected into the network by the electronic equipment contributes to degrading power quality.

Author 1: Nel Yuri Huaita Ccallo
Author 2: Omar Chamorro-Atalaya

Keywords: Induction; torque; frequency inverter (VDF); current; harmonic; temperature

PDF

Paper 82: Heuristic Algorithm for Automatic Extraction Relational Data from Spreadsheet Hierarchical Tables

Abstract: Spreadsheets contain critical information on various topics and are broadly utilized in many domains; there are a huge number of spreadsheet users all over the world. Spreadsheets provide considerable flexibility in organizing data structures and give their makers an enormous degree of freedom to encode their data, as they are simple to use and make it easy to store data in table format. Because of this flexibility, tables with very complex and hierarchical data structures can be generated, and this complexity makes processing such tables and reusing their data a difficult task. The growth in the volume and complexity of these tables has created a need to preserve this data and reuse it. This paper therefore presents a novel heuristic algorithm and cell classification strategy to automate relational data extraction from spreadsheet hierarchical tables without requiring any programming experience. Experiments on two different real public datasets yield average accuracies of 95% and 94.2%, respectively.
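As a rough illustration of this kind of heuristic, the sketch below flattens a toy hierarchical table into relational tuples using a single cell-classification rule (a row whose non-first cells are all empty is a category header). The rule and the sample rows are assumptions for illustration; the paper's cell classification is more elaborate.

```python
# Hypothetical rows from a hierarchical spreadsheet table.
rows = [
    ["Fruit", "", ""],
    ["Apple", "120", "0.5"],
    ["Pear",  "80",  "0.7"],
    ["Vegetable", "", ""],
    ["Carrot", "200", "0.3"],
]

def extract_relational(rows):
    """Classify each row as header or data, then emit flat relational tuples
    in which every data row inherits the most recent category header."""
    relational, category = [], None
    for first, *rest in rows:
        if all(cell == "" for cell in rest):   # heuristic: header row
            category = first
        else:                                  # data row -> flat tuple
            relational.append((category, first, *rest))
    return relational

for record in extract_relational(rows):
    print(record)  # e.g. ('Fruit', 'Apple', '120', '0.5')
```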

Author 1: Arwa Awad
Author 2: Rania Elgohary
Author 3: Ibrahim Moawad
Author 4: Mohamed Roushdy

Keywords: Spreadsheet table analysis; hierarchal table structure; cell classification; heuristic algorithm; relational data extraction

PDF

Paper 83: Effective Controlling Scheme to Mitigate Flood Attack in Delay Tolerant Network

Abstract: Conventional routing protocols break down in opportunistic networks due to long delays, frequent disconnections and resource scarcity. The Delay Tolerant Network (DTN) has been developed to cope with these conditions: in the absence of a connected link between sender and receiver, DTN mobile nodes replicate bundles and work cooperatively to improve the delivery probability. Malicious nodes may flood the network with a huge number of unwanted bundles (messages) or bundle replicas, wasting its limited resources. Denial of Service (DoS) attacks, and flooding attacks in particular, attempt to compromise the availability of the network. Traditional congestion control strategies are not suitable for DTN, so developing new mechanisms to detect and control flooding attacks is a major challenge in DTN networks. In this paper, we present a comprehensive overview of existing solutions for dealing with flooding attacks in delay tolerant networks, and we propose an effective controlling mechanism to mitigate this threat. The main goal of this mechanism is first to detect malicious nodes that flood the network with unwanted messages, and then to limit the damage caused by the attack. We also ran a large number of simulations with the ONE simulator to investigate how changing buffer capacity, message lifetime, message size and the number of message replicas affect DTN network performance metrics.

Author 1: Hanane ZEKKORI
Author 2: Saïd AGOUJIL
Author 3: Youssef QARAAI

Keywords: DTN; flooding attack; DOS; congestion; buffer capacity; bundle; ONE

PDF

Paper 84: Efficient DNN Ensemble for Pneumonia Detection in Chest X-ray Images

Abstract: Pneumonia is a disease caused by a variety of organisms, including bacteria, viruses, and fungi, and can be fatal if timely medical care is not provided. According to the World Health Organization (WHO), the most common diagnosis in severe COVID-19 is severe pneumonia. The most common method of detecting pneumonia is through chest X-rays, a very time-intensive process that requires a skilled expert. Rapid developments in deep learning and neural networks in recent years have led to drastic improvements in the automation of pneumonia detection from chest X-rays. In this paper, Convolutional Neural Networks (CNNs) pre-trained on chest X-ray images are used as feature extractors whose outputs are further processed to classify the images and predict whether a person has pneumonia. The different pre-trained networks are assessed on various parameters regarding their predictions. After examining the results of the individual pre-trained networks, an ensemble model is proposed that combines the predictions of the best ones to produce better results than any individual model.
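The combination step in such an ensemble is often simple soft voting: average the per-model class probabilities, then take the argmax. The probability values below are made-up placeholders, and the paper's actual combination rule may differ.

```python
import numpy as np

# Hypothetical per-model class probabilities for 3 chest X-ray images
# (columns: [normal, pneumonia]); real values would come from the
# pre-trained CNN classifiers described in the abstract.
p_model_a = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
p_model_b = np.array([[0.8, 0.2], [0.6, 0.4], [0.1, 0.9]])
p_model_c = np.array([[0.7, 0.3], [0.3, 0.7], [0.3, 0.7]])

# Soft-voting ensemble: average the probabilities, then pick the class.
ensemble = (p_model_a + p_model_b + p_model_c) / 3
labels = ensemble.argmax(axis=1)   # 0 = normal, 1 = pneumonia
print(labels)                      # → [0 1 1]
```

Averaging smooths out individual models' mistakes, which is why the ensemble can beat each member (here, model b alone would mislabel the second image).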

Author 1: V S Suryaa
Author 2: Arockia Xavier Annie R
Author 3: Aiswarya M S

Keywords: Deep neural networks; ensemble learning; pneumonia detection using x-ray images; transfer learning

PDF

Paper 85: Delivery of User Intentionality between Computer and Wearable for Proximity-based Bilateral Authentication

Abstract: Recent research has shown that delivering user intentionality for authentication resolves the random authentication problem in proximity-based authentication. However, existing approaches still have limitations: energy consumption, inaccurate data consistency, and vulnerability to shoulder surfing. To resolve them, this paper proposes a new method for user intent delivery and a new proximity-based bilateral authentication system that adopts it. The proposed system designs an authentication protocol that reduces energy consumption on a power-constrained wearable, applies the Needleman-Wunsch algorithm to the matching of time values, and introduces randomness into the behavior a user must perform for authentication. We developed a prototype of our authentication system and conducted a series of experiments on it. Experimental results show that the proposed method achieves more accurate data consistency than conventional methods for delivering user authentication intent. Overall, our system reduces the authentication failure rate by 66.7% compared to conventional ones.
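Needleman-Wunsch, the algorithm the abstract applies to time-value matching, is a global sequence alignment computed by dynamic programming. The sketch below aligns two hypothetical sequences of quantized event timestamps; the scoring values and the example data are illustrative assumptions, not the paper's parameters.

```python
# Illustrative scoring scheme (assumed, not from the paper).
MATCH, MISMATCH, GAP = 1, -1, -1

def needleman_wunsch(a, b):
    """Return the global alignment score of sequences a and b."""
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * GAP                  # align a[:i] against gaps
    for j in range(1, m + 1):
        score[0][j] = j * GAP                  # align b[:j] against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (MATCH if a[i-1] == b[j-1] else MISMATCH)
            score[i][j] = max(diag, score[i-1][j] + GAP, score[i][j-1] + GAP)
    return score[n][m]

# Two hypothetical sequences of quantized event timestamps (ms):
# three matches and one mismatch give a score of 3*1 - 1 = 2.
print(needleman_wunsch([10, 20, 30, 40], [10, 25, 30, 40]))  # → 2
```

A high alignment score between the sequences observed on the computer and on the wearable indicates the same user produced both, which is the consistency check the system relies on.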

Author 1: Jaeseong Jo
Author 2: Eun-Kyu Lee
Author 3: Junghee Jo

Keywords: Security; authentication; internet of things; user intentionality; proximity-based authentication; bilateral

PDF

Paper 86: Digital Preoperative Planning for High Tibial Osteotomy using 2D Medical Imaging

Abstract: The preoperative planning process for High Tibial Osteotomy (HTO) is vital to correcting deformity of the long bones. The most important step is to find the Centre of Rotation of Angulation (CORA) and simultaneously display the forecast result based on the value of the correction angles. Presently, this must be done manually, because current software can define either the CORA point or the correction angle, but only one at a time. This paper proposes computer-aided software, OsteoAid, to fully digitize preoperative planning for HTO. The software enables the user to define the mechanical or anatomical axes and to determine the CORA point and the correction angle at the same time. For testing, we compared the reliability of the osteotomy correction angle between two software packages (MedWeb and OsteoAid) in preoperative planning of open-wedge high tibial osteotomy, to ensure that the new software is reliable for the correction. Thirteen digital long-leg radiographs in long-standing position from the frontal axis, showing patients with deformities of both tibiae, were examined using intra-class correlation. The images were accessed from the picture archiving and communication system (PACS). Three medical officers (raters) involved in osteotomy assessed the same medical images twice with a two-week interval. Using the MedWeb software, the mean correction angle score of each rater was at an excellent level: 0.989 (rater 1), 0.982 (rater 2) and 0.972 (rater 3). The scores for OsteoAid were also excellent: 0.949, 0.987 and 0.986, respectively. The inter-rater reliabilities of the correction angle were 0.820 and 0.979 (p<0.001) for the two software packages, respectively. The principal finding of this study is that the new software (OsteoAid) shows excellent reliability and good consistency in preoperative planning when finding the CORA and the correction angle.

Author 1: Norazimah Awang
Author 2: Faudzi Ahmad
Author 3: Rosnita A. Rahaman
Author 4: Riza Sulaiman
Author 5: Azrulhizam Shapi’i
Author 6: Abdul Halim Abdul Rashid

Keywords: Center of rotation of angulation; CORA; HTO; software; digital; medical image

PDF

Paper 87: Using Transfer Learning for Nutrient Deficiency Prediction and Classification in Tomato Plant

Abstract: Plants need nutrients to develop normally. The essential nutrients carbon, oxygen, and hydrogen are obtained from sunlight, air, and water to prepare food for plant growth. For healthy growth, plants also need macronutrients such as potassium, calcium, nitrogen, sulphur, magnesium, and phosphorus in relatively large quantities. When a plant cannot obtain the nutrients necessary for its growth in adequate amounts, nutrient deficiency occurs, and the plant exhibits various symptoms to indicate it. Automatic identification and differentiation of these deficiencies is very important in the greenhouse environment. Deep neural networks are extremely efficient in image categorization problems. In this work, we used part of a pre-trained deep learning model, i.e. a transfer learning model, to detect nutrient stress in the plant. We compared three architectures, Inception-V3, ResNet50, and VGG16, with two classifiers, RF and SVM, to improve classification accuracy. A total of 880 greenhouse images of calcium and magnesium deficiencies in the tomato plant were collected to form a dataset: 704 (80%) images were used for training and 176 (20%) for testing the model performance. Experimental results demonstrate that the highest accuracy of 99.14% was achieved by the VGG16 model with the SVM classifier, and 98.71% by Inception-V3 with the Random Forest classifier. For a batch size of 8 and 10 epochs, the Inception-V3 architecture attained the highest validation accuracy of 99.99% and the lowest average validation loss of 0.0000384.

Author 1: Vrunda Kusanur
Author 2: Veena S Chakravarthi

Keywords: Nutrient deficiency; plant nutrients; deep neural networks; transfer learning; random forest (RF); support vector machine (SVM)

PDF

Paper 88: A New Protection Scheme for Biometric Templates based on Random Projection and CDMA Principle

Abstract: Although biometric technologies have revolutionized the world of communication and dematerialized exchanges, authentication by biometrics still has many limitations, particularly in terms of privacy concerns, due to the various potential threats to which biometric templates are subject. The existence of these vulnerabilities has created an enormous need for biometric data protection. Indeed, several protection schemes have been proposed, which are normally supposed to offer certain guarantees, including the confidentiality of the collected personal data and the reliability of the recognition system. The challenge for all these techniques is to achieve a trade-off between performance accuracy and robustness against vulnerabilities, which is not always obvious. In this paper, we propose a theoretical protection model dedicated to biometric authentication systems. The objective is to ensure a high level of security for the stored reference data in such a way that it complies with the non-invertibility and revocability properties. The main idea is to incorporate a discretization tool, namely spread spectrum technology and in particular Code Division Multiple Access (CDMA), into a biometric system based on Random Projection. We introduce and demonstrate the proposed scheme as a non-invertible transform, while proving its effectiveness and ability to meet the requirements of revocability and unlinkability.

Author 1: Ayoub Lahmidi
Author 2: Khalid Minaoui
Author 3: Chouaib Moujahdi
Author 4: Mohammed Rziza

Keywords: Biometric template; security; authentication; CDMA; random projection

PDF

Paper 89: Verifiable Homomorphic Encrypted Computations for Cloud Computing

Abstract: Cloud computing is becoming an essential part of computing, especially for enterprises. As the need for cloud computing increases, the need for cloud data privacy, confidentiality, and integrity also becomes essential. Among potential solutions, homomorphic encryption can provide the needed privacy and confidentiality: unlike traditional cryptosystems, it allows computation to be delegated to the cloud provider while the data remains in its encrypted form. Unfortunately, the solution still lacks data integrity: while on the cloud, valid homomorphically encrypted data may be swapped with other valid homomorphically encrypted data. This paper proposes a verification scheme based on modular residues to validate homomorphic encryption computation over an integer finite field for use in cloud computing, so that data confidentiality, privacy, and integrity can be enforced during an outsourced computation. The performance of the proposed scheme varies with the underlying cryptosystem. However, on the tested cryptosystems, the scheme has a 1.5% storage overhead and a computational overhead that can be configured to stay below 1%. Such overhead is an acceptable trade-off for verifying cloud computation, which is highly needed in cloud computing.
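To make the verification idea concrete, here is a toy sketch: an additively homomorphic Paillier cryptosystem with tiny primes, plus a client-side modular-residue checksum used to validate the decrypted cloud result. The residue modulus, key sizes, and protocol details are illustrative assumptions, not the paper's construction.

```python
import math, random

# Toy Paillier keypair (tiny primes, for illustration only -- never secure).
p, q = 61, 53
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
L = lambda x: (x - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return L(pow(c, lam, n2)) * mu % n

v = 97                                   # small verification modulus (client-side)
m1, m2 = 123, 456
checksum = (m1 % v + m2 % v) % v         # residues kept by the client

c = encrypt(m1) * encrypt(m2) % n2       # cloud: homomorphic addition on ciphertexts
result = decrypt(c)                       # client decrypts the returned sum
print(result == m1 + m2, result % v == checksum)  # → True True
```

If the cloud had silently swapped in a different valid ciphertext, the decrypted result would (with high probability) fail the residue check, which is the integrity property the scheme targets.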

Author 1: Ruba Awadallah
Author 2: Azman Samsudin
Author 3: Mishal Almazrooie

Keywords: Cloud computing; computation verification; data confidentiality; data integrity; data privacy; distributed processing; homomorphic encryption

PDF

Paper 90: Multi-logic Rulesets based Junction-point Movement Controller Framework for Traffic Streamlining in Smart Cities

Abstract: In the internet era, Intelligent Transportation Systems (ITS) for smart cities are gaining tremendous attention, since they offer intelligent smart services for traffic monitoring and management with the help of technologies such as micro-electronics, sensors and IoT. However, in the existing literature, very few attempts have been made towards effective traffic monitoring at road junctions that provides decision making fast enough to dynamically reroute traffic in heavily congested urban environments. To tackle this issue, this article proposes a new controller framework that can be applied at junction points to control traffic movement. Specifically, the proposed framework uses a multi-logic ruleset database to estimate the traffic density dynamically in the first stage, followed by a signal-time computation algorithm in the second stage to streamline the traffic and achieve faster clearance at the junction points. Experiments conducted in a test environment using MEMSIC nodes clearly demonstrate the improved efficiency of the proposed framework in terms of various performance metrics, including move command frequency, ruleset score and fluctuation score.

Author 1: Sreelatha R
Author 2: Roopalakshmi R

Keywords: Intelligent transportation systems; junction-point traffic monitoring; ruleset database; traffic density estimation

PDF

Paper 91: Employing Video-based Motion Data with Emotion Expression for Retail Product Recognition

Abstract: Mining approaches based on video data can help assess stores' performance by providing insight into what needs to be done to further enhance customers' experience, leading to increased business profits. To this end, this paper proposes an association rule mining approach, based on video analytic techniques, for detecting store items that are likely to be out of demand. Our approach builds upon motion tracking and facial emotion expression methods. We used a motion-tracking technique to record information about customers' regions of interest inside the store and their interactions with on-shelf products. In addition, we implemented an emotion classification model, trained on recorded video data, to identify customers' emotions towards items. Our experiments yielded several scenarios representing customer behavior towards out-of-demand store items.

Author 1: Ahmad B. Alkhodre
Author 2: Abdullah M. Alshanqiti

Keywords: Shopper behavior; motion tracking; emotion classification; machine learning; association rule learning

PDF

Paper 92: Hybrid Model of Quantum Transfer Learning to Classify Face Images with a COVID-19 Mask

Abstract: About 219 million people have contracted COVID-19, of whom 4.55 million have died. The severity of the disease has led to the implementation of security protocols to prevent its spread; one of the main protocols is to wear protective masks that properly cover the nose and mouth. The objective of this paper was to classify images of faces wearing COVID-19 protective masks into the classes correct mask, incorrect mask, and no mask, using a hybrid quantum transfer learning model. To do this, a dataset was gathered of 660 people of both sexes, with ages ranging from 18 to 86 years. The classic transfer learning model chosen was ResNet-18; the variational layers of the proposed model were built with the Basic Entangler Layers template for four qubits, and the training was optimized with Stochastic Gradient Descent with Nesterov Momentum. The main finding was 99.05% accuracy in classifying correct protective masks using the Pennylane quantum simulator in the tests performed. We conclude that the proposed hybrid model is an excellent option for detecting the correct position of the COVID-19 protective mask.

Author 1: Christian Soto-Paredes
Author 2: Jose Sulla-Torres

Keywords: Hybrid; quantum; classify; face; COVID-19; mask

PDF

Paper 93: Code Optimizations for Parallelization of Programs using Data Dependence Identifier

Abstract: In a parallelizing compiler, code transformations help to reduce data dependencies and identify parallelism in code. In an earlier paper, we proposed the Data Dependence Identifier (DDI) model, in which a program P is represented as a graph GP. Using GP, we could identify data dependencies in a program and also perform transformations such as dead code elimination and constant propagation. In this paper, we present polynomial-time algorithms for the loop-invariant code motion, live range analysis, node splitting and loop fusion code transformations using DDI.
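As a concrete example of one of these transformations, the sketch below performs loop-invariant code motion on a toy three-address IR: a statement is hoisted when none of its operands is defined inside the loop, iterated to a fixed point so one hoist can enable the next. The IR and the rule (which ignores control flow and aliasing) are simplifying assumptions, not the paper's DDI-based algorithm.

```python
# Toy loop body: (target, operands) triples in evaluation order.
loop_body = [
    ("t1", ("a", "b")),   # t1 = a + b   -> invariant (a, b defined outside)
    ("t2", ("t1", "c")),  # t2 = t1 * c  -> invariant once t1 is hoisted
    ("x",  ("x", "t2")),  # x  = x + t2  -> not invariant (x changes each trip)
]

def hoist_invariants(body):
    """Move loop-invariant statements to a pre-header, to a fixed point."""
    body, hoisted = list(body), []
    changed = True
    while changed:
        changed = False
        defined_in_loop = {target for target, _ in body}
        for stmt in list(body):
            _, operands = stmt
            if not any(op in defined_in_loop for op in operands):
                hoisted.append(stmt)
                body.remove(stmt)
                changed = True
                break                # recompute loop definitions after each hoist
    return hoisted, body

pre_header, new_body = hoist_invariants(loop_body)
print([t for t, _ in pre_header], [t for t, _ in new_body])  # → ['t1', 't2'] ['x']
```

Hoisting t1 removes it from the loop's definition set, which is what makes t2 invariant on the next pass; this cascading effect is why the transformation must iterate.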

Author 1: Kavya Alluru
Author 2: Jeganathan L

Keywords: Automatic parallelization; parallelizing compilers; code optimizations; data dependence; loop invariant code motion; node splitting; live range analysis; loop fusion

PDF

Paper 94: A Novel Deep Learning-based Online Proctoring System using Face Recognition, Eye Blinking, and Object Detection Techniques

Abstract: Distance and online learning (or e-learning) has become a norm in training and education due to a variety of benefits such as efficiency, flexibility, affordability, and usability. Moreover, the COVID-19 pandemic made online learning the only option due to its physical isolation requirements. However, monitoring attendees and students during classes, and particularly during exams, is a major challenge for online systems due to the lack of physical presence. There is a need to develop methods and technologies that provide robust instruments to detect unfair, unethical, and illegal behaviour during classes and exams. We propose in this paper a novel online proctoring system that uses deep learning to continually proctor physical places without the need for a physical proctor. The system employs biometric approaches such as face recognition using the HOG (Histogram of Oriented Gradients) face detector and the OpenCV face recognition algorithm. It also incorporates eye blinking detection to detect stationary pictures. Moreover, to enforce fairness during exams, the system is able to detect gadgets including mobile phones, laptops, iPads, and books. The system is implemented in software and evaluated on the FDDB and LFW datasets, achieving up to 97% and 99.3% accuracy for face detection and face recognition, respectively.

Author 1: Istiak Ahmad
Author 2: Fahad AlQurashi
Author 3: Ehab Abozinadah
Author 4: Rashid Mehmood

Keywords: Online learning; online proctor; student authentication; face detection; face recognition; eye blinking detection; object detection; distance learning; e-learning

PDF

Paper 95: Faculty e-Learning Adoption During the COVID-19 Pandemic: A Case Study of Shaqra University

Abstract: e-Learning is generally delivered through learning management system (LMS) platforms designed to support an instructor in developing, managing, and providing online courses to learners. During the COVID-19 pandemic, several LMS platforms, such as Moodle and Blackboard, were adopted in Saudi Arabian institutions. However, to adopt e-learning and operate LMS platforms, there is a need to investigate the factors that influence the capability of faculty to utilize e-learning and its perceived benefits for students. This paper examines how training support and LMS readiness influence the capability of faculty to adopt e-learning and the benefits perceived by students. A quantitative study was conducted using an online questionnaire survey. Research data were collected from 274 faculty members at Shaqra University in the Kingdom of Saudi Arabia (KSA), who used Moodle as their main LMS platform. The results reveal that training support and LMS readiness have a positive influence on the faculty's capability to adopt e-learning, which in turn enhances students' perceived benefits. By identifying the factors that influence e-learning adoption, universities can provide enhanced e-learning services to students and support faculty by providing adequate training and a powerful e-learning platform.

Author 1: Asma Hassan Alshehri
Author 2: Saad Ali Alahmari

Keywords: e-Learning; Learning Management System (LMS); distance learning; LMS readiness; training

PDF

Paper 96: Joint Deep Clustering: Classification and Review

Abstract: Clustering is a fundamental problem in machine learning. To address this, a large number of algorithms have been developed. Some of these algorithms, such as K-means, handle the original data directly, while others, such as spectral clustering, apply linear transformation to the data. Still others, such as kernel-based algorithms, use nonlinear transformation. Since the performance of the clustering depends strongly on the quality of the data representation, representation learning approaches have been extensively researched. With the recent advances in deep learning, deep neural networks are being increasingly utilized to learn clustering-friendly representation. We provide here a review of existing algorithms that are being used to jointly optimize deep neural networks and clustering methods.

Author 1: Arwa Alturki
Author 2: Ouiem Bchir
Author 3: Mohamed Maher Ben Ismail

Keywords: Clustering; deep learning; deep neural network; representation learning; clustering loss; reconstruction loss

PDF

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org