The Science and Information (SAI) Organization
IJACSA Volume 13 Issue 10

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially as long as the original work is properly cited.


Paper 1: Hand Motion Estimation using Super-Resolution of Multipoint Surface Electromyogram by Deep Learning

Abstract: This paper proposes a method for hand motion estimation for prosthetic hands using super-resolution of multipoint surface electromyograms. In general, recording more EMG (electromyography) signals improves the accuracy of hand motion estimation, but doing so is costly and impractical. The proposed method therefore improves estimation accuracy by reconstructing a large number of EMG signals from a small number of measured signals via super-resolution. The super-resolution step is achieved by learning the relationship between few-channel and many-channel myoelectric signals with a deep neural network; hand motions are then estimated from the high-resolution signals with a second deep neural network. Experiments using real EMG signals show that the proposed method improves the accuracy of hand motion estimation.

Author 1: Keigo FUKUSHIMA
Author 2: Yoshiaki YASUMURA

Keywords: Hand motion estimation; super-resolution; deep neural network; prosthetic hand; electromyography

Download PDF

Paper 2: IoTCID: A Dynamic Detection Technology for Command Injection Vulnerabilities in IoT Devices

Abstract: The pervasiveness of IoT devices has brought us convenience as well as the risk of security vulnerabilities. Traditional device vulnerability detection methods, however, cannot efficiently detect command injection vulnerabilities due to heavy execution overheads or false positives and false negatives. We therefore propose a novel dynamic detection solution, IoTCID. First, it generates constrained models by parsing the front-end files of the IoT device, and a static binary analysis is performed on the back-end programs to locate the interface processing function. Then, it applies a fuzzing method guided by feedback from a distance function, which selects high-quality samples through various scheduling strategies. Finally, with the help of probe code, it compares the parameters of potentially risky functions with the samples to confirm command injection vulnerabilities. We implemented a prototype of IoTCID, evaluated it on real-world IoT devices from three vendors, and confirmed six vulnerabilities, showing that IoTCID is effective in discovering command injection vulnerabilities in IoT devices.

Author 1: Hao Chen
Author 2: Jinxin Ma
Author 3: Baojiang Cui
Author 4: Junsong Fu

Keywords: Firmware vulnerability mining; command injection; dynamic detection

Download PDF

Paper 3: English and Romanian Brain-to-Text Brain-Computer Interface Word Prediction System

Abstract: Brain-Computer Interfaces (BCI) can recognise the thoughts of a human through various electrophysiological signals. These signals are detected either by electrodes (sensors) placed on the scalp or by electrodes implanted inside the brain. BCI can detect brain activity through different neuroimaging methods, the most preferred being electroencephalography (EEG) because it is non-invasive and non-critical. BCI applications are very helpful in restoring functionality to people suffering from disabilities of various causes. In this study, a novel brain-to-text BCI system is presented that predicts the word the subject is thinking. Brain-to-text can help mute people, or those who cannot communicate due to disease, to regain some of their ability to interact with the surrounding environment and express themselves; it may also be used in control or entertainment applications. An EMOTIV™ Insight headset was used to collect EEG signals from the subject's brain. Feature extraction of EEG signals is very important to the classification performance of BCI systems, and statistical feature extraction was used in this system to extract valuable features for classification. The datasets are sentences involving commonly used words in the English and Romanian languages. For English, the K-Nearest Neighbour (KNN) classifier achieved a prediction accuracy of 86.7%, Support Vector Machine (SVM) 86.1%, and Linear Discriminant Analysis (LDA) 79.2%; for Romanian, the accuracies were 96.1%, 97.1%, and 94.8% for SVM, LDA, and KNN respectively. This system is a step forward in developing advanced brain-to-text BCI prediction systems.

Author 1: Haider Abdullah Ali
Author 2: Nicolae Goga
Author 3: Andrei Vasilateanu
Author 4: Ali M. Muslim
Author 5: Khalid Abdullah Ali
Author 6: Marius-Valentin Dragoi

Keywords: Brain-to-text; Brain-Computer Interface (BCI); Electroencephalography (EEG); Natural Language Processing (NLP); English language; Romanian language
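The "statistical-based feature extraction" step described in this abstract can be sketched as per-channel summary statistics over an EEG epoch. The exact feature set used by the authors is not listed, so the features below (mean, standard deviation, skewness, kurtosis, peak-to-peak amplitude) and the synthetic signal are illustrative assumptions:

```python
import numpy as np

def statistical_features(epoch):
    """Per-channel statistical features for an EEG epoch of shape
    (channels, samples): mean, standard deviation, skewness, kurtosis,
    and peak-to-peak amplitude, concatenated into one feature vector."""
    epoch = np.asarray(epoch, dtype=float)
    mean = epoch.mean(axis=1)
    std = epoch.std(axis=1)
    safe_std = np.where(std == 0, 1.0, std)  # guard flat channels
    centered = epoch - mean[:, None]
    skew = (centered ** 3).mean(axis=1) / safe_std ** 3
    kurt = (centered ** 4).mean(axis=1) / safe_std ** 4
    ptp = epoch.max(axis=1) - epoch.min(axis=1)
    return np.concatenate([mean, std, skew, kurt, ptp])

# Example: one epoch from a 5-channel headset (the EMOTIV Insight has
# five channels) sampled 128 times; the signal here is synthetic noise.
rng = np.random.default_rng(0)
features = statistical_features(rng.normal(size=(5, 128)))
```

The resulting fixed-length vector (5 channels × 5 statistics = 25 values here) is what a classifier such as KNN, SVM, or LDA would consume.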

Download PDF

Paper 4: Fine-grained Access Control Method for Blockchain Data Sharing based on Cloud Platform Big Data

Abstract: Blockchain technology has the advantages of decentralization, trustlessness, and tamper resistance, which overcome the limitations of traditional centralized technology, so it has gradually become a key technology for the secure storage and privacy protection of power data. In the existing smart grid framework, the grid operator is a centralized key distribution organization responsible for sending all secret credentials, so a single point of failure can easily cause large-scale loss of personal information. To address the inflexible access control in smart grid data-sharing frameworks, and considering the constraints of efficiency and multi-party cooperation among grid operators, this paper constructs an attribute-based access control scheme for the smart grid that supports privacy preservation. A fine-grained access control scheme supporting privacy protection is designed and extended to the smart grid system, enabling fine-grained access control of power data, and a decryption test algorithm is added before the decryption algorithm. Finally, performance analysis and comparison with other schemes verify that the performance of the system is 7% higher than the traditional method and its storage cost is 9.5% lower, reflecting the superiority of the system. Full optimization of the access policy is achieved, and the scheme is shown to implement the coordination and cooperation of multiple authorized agencies more efficiently during system initialization.

Author 1: Yu Qiu
Author 2: Biying Sun
Author 3: Qian Dang
Author 4: Chunhui Du
Author 5: Na Li

Keywords: Power grid data; blockchain technology; data sharing; fine-grained access control; game strategy; ciphertext key

Download PDF

Paper 5: Smart System for Emergency Traffic Recommendations: Urban Ambulance Mobility

Abstract: With the continuing evolution of advanced technologies and techniques such as the Internet of Things, artificial intelligence, and big data, the traffic management systems industry has acquired new methodologies for creating advanced, intelligent services and applications for traffic management and safety. This contribution focuses on the implementation of a path recommendation service for paramedics in emergency situations, one of the most critical and complex issues in traffic management for the survival of individuals involved in emergency incidents. The work focuses mainly on the response time to life-threatening incidents, a key indicator for emergency ambulance services, and on recommending the fastest ambulance route. To this end, we propose a hybrid approach consisting of a local approach that uses machine learning techniques to predict the congestion of different sections of a map from an origin to a destination, and a global approach that suggests the fastest path to ambulance drivers in real time as they move in OpenStreetMap.

Author 1: Ayoub Charef
Author 2: Zahi Jarir
Author 3: Mohamed Quafafou

Keywords: Recommendation systems; emergency urban traffic; ambulance mobility; emergency navigation services
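The global stage described above — suggesting the fastest path once per-segment travel times have been predicted — amounts to a shortest-path search over the road graph with predicted times as edge weights. A minimal sketch follows; the graph, node names, and travel times are invented for illustration, not taken from the authors' system:

```python
import heapq

def fastest_path(graph, source, target):
    """Dijkstra's shortest-path search over a road graph whose edge weights
    are predicted travel times (e.g., output of a congestion model).
    `graph` maps node -> list of (neighbor, predicted_seconds)."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            break
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(heap, (nd, neighbor))
    # Reconstruct the route from target back to source.
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[target]

# Hypothetical road graph: predicted seconds per segment.
roads = {"A": [("B", 4), ("C", 2)],
         "C": [("B", 1), ("D", 7)],
         "B": [("D", 1)],
         "D": []}
route, seconds = fastest_path(roads, "A", "D")
```

Re-running the search as the predicted congestion values change is what keeps the recommendation current while the ambulance moves.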

Download PDF

Paper 6: A Review of Automatic Question Generation in Teaching Programming

Abstract: Computer programming is a complex field that requires rigorous practice in writing program code, which can be one of the critical challenges in learning and teaching programming. Its complicated nature requires an instructor to manage learning resources and diligently generate programming-related questions for students, covering both conceptual and procedural programming rules. In this regard, automatic question generation techniques help teachers align their learning objectives with question designs in terms of relevancy and complexity; they also reduce the costs linked with manual question writing and meet the need for a steady supply of new questions. This paper presents a theoretical review of automatic question generation (AQG) techniques related to computer programming languages from 2017 to 2022, covering a total of 18 papers. One goal is to analyze and compare the state of question generation before and after the COVID-19 period and to summarize the challenges and future directions in the field. In agreement with previous studies, the existing literature gives little focus to generating questions for learning programming languages through different techniques. Our findings show a need for further experimental studies implementing automatic question generation, especially in programming, and for an authoring tool that supports the design of more practical evaluation metrics for students.

Author 1: Jawad Alshboul
Author 2: Erika Baksa-Varga

Keywords: Question generation; question generation techniques; automatic question generation; teaching programming

Download PDF

Paper 7: Application of Stacking Ensemble Machine in Big Data: Analyze the Determinants for Vitalization of the Multicultural Support Center

Abstract: For multicultural families to successfully adapt socially and achieve desirable social integration, the role of the multicultural family support center (Multi-FSC) is crucial. It is also important to examine, from the customers' standpoint, the factors that contribute to the vitality of such centers. In this study, machine learning models based on a single learner and on a stacking ensemble were built from survey data on multicultural families to examine the determinants of multicultural family support center utilization. Based on the constructed prediction model, the study also offers foundational data for the revitalization of the centers. The study examined 281,606 adults (19 years or older), 56,273 of whom were married immigrants or naturalized citizens as of 2012. The stacking ensemble method was employed to forecast the use of the centers: in the base stage, logistic regression was employed along with Classification and Regression Tree (CART), Radial Basis Function Neural Network (RBF-NN), and Random Forest (RF) models. The RBF-NN-Logit reg model had the best prediction performance (RMSE = 0.20, Ev = 0.45, IA = 0.68). The findings suggest that stacking ensembles can improve prediction performance when building classification or prediction models from community epidemiological data.

Author 1: Raeho Lee
Author 2: Haewon Byeon

Keywords: Stacking ensemble machine; radial basis function neural network; random forest; multicultural family support centers; prediction model
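The two-stage design described above — base learners whose predictions feed a meta-model — can be sketched with scikit-learn. The survey data is the authors' own, so a synthetic dataset stands in here, and since scikit-learn has no RBF neural network estimator, an RBF-kernel SVM is used as a stand-in for that base learner:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the survey features.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Base stage: CART, Random Forest, and an RBF-kernel SVM standing in
# for the RBF neural network.
base_learners = [
    ("cart", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("rbf", SVC(kernel="rbf", probability=True, random_state=0)),
]

# Meta stage: logistic regression combines the base learners' outputs.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression())
stack.fit(X, y)
train_accuracy = stack.score(X, y)
```

Internally, `StackingClassifier` trains the meta-model on cross-validated base predictions, which is what lets the ensemble outperform any single base learner on held-out data.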

Download PDF

Paper 8: Research on Precision Marketing based on Big Data Analysis and Machine Learning: Case Study of Morocco

Abstract: With the growth of the Internet industry and the informatization of services, online services and transactions have become the mainstream channel between clients and companies. Attracting potential customers and keeping up with the big data era are important challenges for the banking sector. With the development of artificial intelligence and machine learning, it has become possible to identify potential customers and provide personalized recommendations based on transactional data, realizing precision marketing in banking. The current study proposes a potential customer prediction algorithm (PCPA) to predict potential clients using big data analysis and machine learning techniques. The proposed methodology consists of five stages: data preprocessing, feature selection using a grid search algorithm, splitting the data into training and test sets in an 80%/20% ratio, modeling, and evaluation of the results using a confusion matrix. According to the obtained results, the accuracy of the final model is the highest at 98.9%. The dataset used in this research was collected from a Moroccan bank and contains 6,000 records of banking customers, 14 predictor variables, and one outcome variable.

Author 1: Nouhaila El Koufi
Author 2: Abdessamad Belangour
Author 3: Mounir Sdiq

Keywords: Precision marketing; big data analysis; machine learning; potential customers prediction algorithm (PCPA)
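The five-stage pipeline described in the abstract can be sketched with scikit-learn. The bank dataset is not public, so synthetic data stands in, and the classifier and grid values below are illustrative choices, not the authors' exact configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the bank data (the paper's dataset has
# 6,000 records and 14 predictor variables).
X, y = make_classification(n_samples=600, n_features=14, random_state=0)

# 80/20 train/test split, as in the paper's methodology.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Grid search over model settings; the paper's exact search space is
# not given, so this grid is a placeholder.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [5, None]},
    cv=3,
)
search.fit(X_train, y_train)

# Evaluate on the held-out 20% with a confusion matrix.
cm = confusion_matrix(y_test, search.predict(X_test))
```

The confusion matrix over the 20% test split is what the accuracy figure reported in the abstract would be derived from.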

Download PDF

Paper 9: A Sequence-Aware Recommendation Method based on Complex Networks

Abstract: Online stores and service providers rely heavily on recommendation software to guide users through the vast number of available products. Consequently, the field of recommender systems has attracted increased attention from industry and academia alike, but despite this joint effort, the field still faces several challenges. For instance, most existing work models the recommendation problem as a matrix completion problem to predict the user preference for an item. This abstraction prevents the system from utilizing the rich information in the ordered sequence of user actions logged in online sessions. To address this limitation, researchers have recently developed a promising new breed of algorithms called sequence-aware recommender systems, which predict the user's next action from the time series composed of the sequence of actions in an ongoing user session. This paper proposes a novel sequence-aware recommendation approach based on a complex network generated by the hidden metric space model, which combines node similarity and popularity to generate links. We build a network model from data and then use it to predict the user's subsequent actions. The network model provides an additional information source that improves the recommendations' accuracy. The proposed method is implemented and tested experimentally on a large dataset, and the results show that the proposed approach performs better than state-of-the-art recommendation methods.

Author 1: Abdullah Alhadlaq
Author 2: Said Kerrache
Author 3: Hatim Aboalsamh

Keywords: Sequence-aware recommender systems; complex networks; similarity-popularity

Download PDF

Paper 10: Mobile Applications for Cybercrime Prevention: A Comprehensive Systematic Review

Abstract: Nowadays, cybercrime, cyberattacks, cyber security, phishing, and malware play an increasingly notorious role in people's daily lives, both nationally and internationally. Great technological leaps have brought with them new modalities of cybercrime, and the number of victims of cybercriminals has increased considerably. The objective of this study is to determine the state of the art of mobile applications and their impact on computer crime prevention; it has therefore become necessary to know what preventive measures are being taken, such as techniques for detecting computer crimes, their modalities, and their classification. To close this knowledge gap, a systematic literature review (SLR) following the methodology proposed by Kitchenham & Charters was conducted to obtain the detection techniques and classification of computer crimes, based on a review of 68 papers published between 2017 and 2022. Tables and graphs of the selected studies are also provided, offering additional information such as the most used keywords per paper and bibliometric networks, among others.

Author 1: Irma Huamanñahui Chipa
Author 2: Javier Gamboa-Cruzado
Author 3: Jimmy Ramirez Villacorta

Keywords: Computer crimes; cyberattacks; cyber security; mobile apps; phishing; machine learning; malware; systematic literature review

Download PDF

Paper 11: Evaluation of the Efficiency of the Optimization Algorithms for Transfer Learning on the Rice Leaf Disease Dataset

Abstract: Many different methods, including transfer learning, are used to improve the efficiency of recognizing and classifying image data. This study combines optimization algorithms with transfer learning using the MobileNet, MobileNetV2, InceptionV3, Xception, ResNet50V2, and DenseNet201 models, tested on a rice disease dataset of 13,186 images with backgrounds removed. The highest accuracy, 88%, was obtained by the RMSprop algorithm combined with the Xception model; similarly, the Xception and ResNet50V2 models achieved F1 scores of 87% when combined with the Adam algorithm. This shows the effect of the gradient-based optimizer on the transfer learning model. The evaluation identifies the optimal model for building a website to identify diseases on rice leaves, with main functions including image upload and the recording of disease identification points for better management of diseased areas of rice.

Author 1: Luyl-Da Quach
Author 2: Khang Nguyen Quoc
Author 3: Anh Nguyen Quynh
Author 4: Hoang Tran Ngoc

Keywords: Optimization algorithm; transfer learning; RMSprop; rice leaf disease; Adam

Download PDF

Paper 12: Associating User’s Preference and Satisfaction into Quality of Experience: A Shoulder-surfing Resistant Authentication Scheme by Visual Perception

Abstract: Authentication acts as a secure, usable gateway to certain transactions, especially online banking transactions. Existing methods are lacking in terms of usability, making the goal of usable authentication unsuccessful. A study has discovered some key concepts of usability in terms of Human-Computer Interaction (HCI) by comparing two existing models across two different factors: environmental factors and display factors. An algorithm describes the authentication step during the online transaction activity. This paper aims to show that a shoulder-surfing resistant authentication scheme using a visual colour-blind mode-based model meets all the requirements of usability and hence achieves the goal of usable authentication. The study puts forward an algorithm that examines the stated authentication scheme against the two factors, environmental and display, during the authentication activity.

Author 1: Juliana Mohamed
Author 2: Mohd Farhan Md Fudzee
Author 3: Sofia Najwa Ramli
Author 4: Mohd Norasri Ismail
Author 5: Muhamad Hanif Jofri

Keywords: Authentication; usability; algorithm; model

Download PDF

Paper 13: Integrating Computer-aided Argument Mapping into EFL Learners’ Argumentative Writing: Evidence from Saudi Arabia

Abstract: This paper aims to examine the effects of Computer-Aided Argument Mapping (CAAM) on Saudi EFL learners' argumentative writing performance, in terms of the development of writing content and coherence, and on their self-regulated learning skills. A total of 40 second-year university EFL learners were purposively selected for a one-group pre- and post-test design. Using a mixed-method approach, three research tools were utilized: pre- and post-writing tests, a Self-Regulated Learning Scale (SRLS), and semi-structured interviews. Quantitative results demonstrated that EFL learners' argumentative writing performance made noteworthy gains, as manifested by the statistically significant differences between their pre- and post-test scores. Significant positive correlations were also found between the learners' overall argumentative writing performance and the SRL factor subscales, indicating an increase in self-regulation mechanisms relative to planning, self-monitoring, evaluation, effort, and self-efficacy. Qualitative results indicate that the participants positively embraced the integration of CAAM to improve their writing skills and self-regulation processes. Recommendations for implementing digital mapping to revolutionize EFL learning classrooms in this digital era are provided.

Author 1: Nuha Abdullah Alsmari

Keywords: Argumentative writing; argument mapping; computer-aided argument mapping; self-regulated learning; Saudi EFL learners

Download PDF

Paper 14: An Efficient Computational Method of Motif Finding in the Zika Virus Genome

Abstract: The Zika virus (ZIKV) outbreak and spread is a global health emergency declared by the World Health Organization. ZIKV spread rapidly across the world, causing neurological disorders, and has gained public and scientific attention; its genome biology and molecular structure are becoming better understood through published work. Genetic regulation is better understood by finding motifs in the DNA genome sequence, since the transcription factor binding sites must be identified to understand how diverse gene expression is regulated. Motif-finding methods aim to efficiently identify repeated patterns in the genome; the ZIKV genome sequence is used in this study. Identifying motifs is still a difficult task: the probability of identifying the binding sites is low, and exhaustively finding all possible solutions is challenging because it requires a lot of time and has high space complexity for long motifs. The greedy search technique with pseudocounts finds motifs in real time: the count matrix is computed, the profile matrix is constructed from the Zika virus genome, and the calculated consensus string is used to score the motif. In this paper, the greedy motif search technique, which has not previously been applied to ZIKV, is used to find motifs in the Zika virus genome, both with and without pseudocounts.

Author 1: Pushpa Susant Mahapatro
Author 2: Jatinderkumar R. Saini

Keywords: Consensus string; genome study; greedy search technique; motif search; pseudocount; regulatory proteins; ZIKV; Zika virus
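The count matrix, profile matrix with pseudocounts, consensus scoring, and greedy scan described above can be sketched in a few functions. This is the textbook greedy motif search algorithm, not the authors' code, and the toy DNA strings below are illustrative:

```python
from collections import Counter

def profile_with_pseudocounts(motifs):
    """Profile matrix with Laplace (+1) pseudocounts from a count matrix."""
    k = len(motifs[0])
    counts = {base: [1] * k for base in "ACGT"}
    for motif in motifs:
        for i, base in enumerate(motif):
            counts[base][i] += 1
    total = len(motifs) + 4
    return {b: [c / total for c in col] for b, col in counts.items()}

def most_probable_kmer(text, k, profile):
    """k-mer of `text` with the highest probability under `profile`."""
    best, best_p = text[:k], -1.0
    for i in range(len(text) - k + 1):
        kmer = text[i:i + k]
        p = 1.0
        for j, base in enumerate(kmer):
            p *= profile[base][j]
        if p > best_p:
            best, best_p = kmer, p
    return best

def score(motifs):
    """Total mismatches against the consensus, column by column."""
    return sum(len(col) - Counter(col).most_common(1)[0][1]
               for col in zip(*motifs))

def greedy_motif_search(dna, k):
    """Greedily extend a motif set, seeding from each k-mer of dna[0]."""
    best = [seq[:k] for seq in dna]
    for i in range(len(dna[0]) - k + 1):
        motifs = [dna[0][i:i + k]]
        for seq in dna[1:]:
            motifs.append(
                most_probable_kmer(seq, k, profile_with_pseudocounts(motifs)))
        if score(motifs) < score(best):
            best = motifs
    return best

# Toy sequences standing in for windows of the ZIKV genome.
dna = ["GGCGTTCAGGCA", "AAGAATCAGTCA", "CAAGGAGTTCGC",
       "CACGTCAATCAC", "CAATAATATTCG"]
motifs = greedy_motif_search(dna, 3)
```

Dropping the pseudocounts (initializing the counts to 0 instead of 1) gives the "without pseudocount" variant the abstract mentions, at the cost of zero probabilities wiping out otherwise-close k-mers.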

Download PDF

Paper 15: Classification of Agriculture Area based on Superior Commodities in Geographic Information System

Abstract: This research presents a classification model that combines location quotient (LQ) analysis and hierarchical clustering using single linkage. The classification results form a basis for mapping the potential of agricultural areas based on superior food commodities in Merauke Regency, Indonesia. LQ analysis is used to select food commodities, while single linkage uses the production of the three commodities with an LQ value > 1 (rice, corn, and peanuts) to group sub-districts based on agricultural potential. Intelligent mapping is realized by mapping the sub-districts' agricultural areas according to cluster. The classification results show that the first cluster has sixteen sub-district members, the second consists of three sub-districts, and the third consists of one sub-district. Members of each cluster are similar according to the smallest Euclidean distance. The proposed classification model is a creative way to map agricultural areas and can present information on regional potential based on superior food crop commodities.

Author 1: Lilik Sumaryanti
Author 2: Rosmala Widjastuti
Author 3: Firman Tempola
Author 4: Heru Ismanto

Keywords: Classification; agriculture; location quotient; single linkage; geographic information system
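The two steps described above — computing location quotients to flag superior commodities, then single-linkage clustering of sub-districts on their production — can be sketched as follows. The production figures here are hypothetical, not Merauke Regency data:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def location_quotient(production):
    """LQ[i, j]: region i's share of commodity j in its own output,
    divided by commodity j's share of total output.
    LQ > 1 marks commodity j as 'superior' for region i."""
    production = np.asarray(production, dtype=float)
    region_share = production / production.sum(axis=1, keepdims=True)
    total_share = production.sum(axis=0) / production.sum()
    return region_share / total_share

# Hypothetical production (rows: sub-districts; columns: rice, corn, peanuts).
production = np.array([[120.0, 15.0, 10.0],
                       [110.0, 20.0, 12.0],
                       [30.0, 80.0, 8.0],
                       [10.0, 12.0, 60.0]])
lq = location_quotient(production)

# Single-linkage hierarchical clustering on the production features,
# using Euclidean distance as in the paper.
clusters = AgglomerativeClustering(
    n_clusters=3, linkage="single").fit_predict(production)
```

Single linkage merges the two clusters whose closest members are nearest, which matches the paper's "smallest value using the Euclidean distance" criterion.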

Download PDF

Paper 16: Analysis of Unsupervised Machine Learning Techniques for an Efficient Customer Segmentation using Clustering Ensemble and Spectral Clustering

Abstract: Customer segmentation is key to a corporate decision support system and an important marketing technique for targeting specific client categories. We create a novel customer segmentation technique based on a clustering ensemble: four fundamental clustering models, DBSCAN, K-means, Mini Batch K-means, and Mean Shift, are ensembled to deliver a consistent, high-quality result, and spectral clustering is then used to integrate the multiple clustering results and increase clustering quality. The new technique is more flexible with client data. Feature engineering cleans, processes, and transforms the raw data into features, which are then used to form clusters. The Adjusted Rand Index (ARI), Normalized Mutual Information (NMI), Dunn's Index (DI), and Silhouette Coefficient (SC) were used to compare our model's performance with individual clustering approaches. The experimental analysis found that our model has the best ARI (70.14%), NMI (71.75), DI (75.15), and SC (72.89%). After obtaining these results, we applied our model to a real dataset collected from Moroccan citizens via social networks and email between 03/06/2022 and 19/08/2022.

Author 1: Nouri Hicham
Author 2: Sabri Karim

Keywords: Machine learning; customer segmentation; marketing; clustering ensemble; spectral clustering
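One standard way to combine several base clusterings and then apply spectral clustering, consistent with the pipeline described above, is a co-association matrix: the fraction of base clusterings that group each pair of points together becomes the affinity for spectral clustering. This is a generic sketch on synthetic data, not the authors' implementation:

```python
import numpy as np
from sklearn.cluster import (DBSCAN, KMeans, MeanShift,
                             MiniBatchKMeans, SpectralClustering)
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

def co_association(labelings, n_samples):
    """Fraction of base clusterings that put each pair of points in the
    same cluster; DBSCAN noise points (label -1) never co-associate."""
    A = np.zeros((n_samples, n_samples))
    for labels in labelings:
        labels = np.asarray(labels)
        same = labels[:, None] == labels[None, :]
        valid = np.outer(labels != -1, labels != -1)
        A += same & valid
    return A / len(labelings)

# Toy data standing in for the engineered customer features.
X, y = make_blobs(n_samples=150, centers=3, cluster_std=0.6, random_state=42)

# The four base clustering models named in the abstract.
base_labelings = [
    KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X),
    MiniBatchKMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X),
    MeanShift().fit_predict(X),
    DBSCAN(eps=0.5).fit_predict(X),
]

A = co_association(base_labelings, len(X))
# Spectral clustering consumes the co-association matrix as a
# precomputed affinity.
final_labels = SpectralClustering(
    n_clusters=3, affinity="precomputed", random_state=0).fit_predict(A)
ari = adjusted_rand_score(y, final_labels)
```

Because the co-association matrix averages over the base models, a single unstable clusterer (e.g., DBSCAN with a poorly chosen `eps`) degrades the affinity only partially rather than dictating the final segmentation.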

Download PDF

Paper 17: Complexity of Web-based Application for Research and Community Service in Academic

Abstract: Research and community service data in an academic environment are very important assets that must be managed properly, and they must be applied synergistically in order to meet the quality standards of higher education. A centralized web-based application designed for managing research and community service data has been deployed to support the management of these activities. To make the application suitable for its users, it is necessary to estimate the size of the software being built. This study aims to measure the web-based application owned by research and community service in Indonesia using the feature point analysis method. Fourteen Modification Complexity Adjustment Factors (MCAF) were used to calculate the program scale with adequate precision. The main steps determine the size of the application sequentially: measuring the weighted value of the feature point components, namely Crude Function Points (CFPs); calculating the Relative Complexity Adjustment Factor (RCAF); and estimating the Function Points (FP) using the formula. The results show that the size of the application was estimated at about 18,381 lines using the FPA method, a successful estimation with a deviation of 2.2 percent.

Author 1: Fitriana Fitriana
Author 2: Sukarni Sukarni
Author 3: Zulkifli Zulkifli

Keywords: Application complexity; program scale; software size; function point analysis
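The CFP → RCAF → FP sequence described in the abstract follows the standard function/feature point arithmetic. Assuming the usual value-adjustment formula (the paper's own weights are not reproduced here), the calculation looks like this:

```python
def function_points(cfp, adjustment_ratings):
    """Adjusted function points from Crude Function Points (CFP) and the
    fourteen complexity adjustment ratings (each 0-5). RCAF is the sum of
    the ratings; assumes the standard value-adjustment formula
    FP = CFP * (0.65 + 0.01 * RCAF)."""
    if len(adjustment_ratings) != 14:
        raise ValueError("expected fourteen adjustment factor ratings")
    rcaf = sum(adjustment_ratings)
    return cfp * (0.65 + 0.01 * rcaf)

# Hypothetical example: 100 crude function points, every factor rated 3,
# so RCAF = 42 and FP = 100 * (0.65 + 0.42) = 107.
fp = function_points(100, [3] * 14)
```

The adjusted FP count is then converted to an estimated line count via a language-specific lines-per-function-point ratio, which is how a figure like the paper's 18,381 lines is obtained.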

Download PDF

Paper 18: Brain Tumor Detection using Integrated Learning Process Detection (ILPD)

Abstract: Brain tumor detection is a complicated process in medical image processing, and analyzing brain tumors is a very difficult task because of their unstructured shape. Generally, tumors are of two types: cancerous (malignant) and non-cancerous (benign). Malignant tumors are more dangerous to patients if they are not detected in the early stages, and precancerous tumors are a further type that may become cancerous if treatment is not given early. Machine learning (ML) approaches are widely used to detect complex patterns, but they have various disadvantages, such as the long time taken to detect brain tumors. In this paper, integrated learning process detection (ILPD) is introduced to detect tumors in the brain, analyze the shape and size of the tumors, and find the stage of the tumor in the given input image. To increase the tumor detection rate, advanced image filters are adopted together with Deep Convolutional Neural Networks (D-CNN). A pre-trained model, VGG19, is applied to train on the MRI brain images for effective detection of tumors. Two benchmark datasets containing MRI brain scan images were collected from Kaggle and BraTS 2019. The performance of the proposed approach is analyzed in terms of accuracy, F1-score, sensitivity, dice similarity score, and specificity.

Author 1: M. Praveena
Author 2: M. Kameswara Rao

Keywords: Machine learning (ML); deep convolutional neural network (D-CNN); brain-tumor-detection; integrated learning process detection (ILPD)

Download PDF

Paper 19: Towards Home-based Therapy: The Development of a Low-cost IoT-based Transcranial Direct Current Stimulation System

Abstract: Transcranial direct current stimulation (tDCS), a neuromodulation technique that is painless and noninvasive, has shown promising results in assisting patients suffering from brain injuries and psychiatric conditions. Recently, there has been an increased interest in home-based therapeutic applications in various areas. This study proposes a low-cost, internet of things (IoT)-based tDCS prototype that provides the basic tDCS features with internet connectivity to enable remote monitoring of the system's usage and adherence. An IoT-enabled microcontroller was programmed with C++ to supply a specific dose of direct current between the anode and cathode electrodes for a predefined duration. Each tDCS session's information was successfully synchronized with an IoT cloud server to be remotely monitored. The accuracy of the resulting stimulation currents was close to the expected values with an acceptable error range. The proposed IoT-based tDCS system has the potential to be used as a telerehabilitation approach to enhance safety and adherence to home-based noninvasive brain stimulation techniques.

Author 1: Ahmad O. Alokaily
Author 2: Ghala Almeteb
Author 3: Raghad Althabiti
Author 4: Suhail S. Alshahrani

Keywords: IoT; Internet of medical things; tDCS; home-based; brain stimulation; cloud

Download PDF

Paper 20: Semi-supervised Text Annotation for Hate Speech Detection using K-Nearest Neighbors and Term Frequency-Inverse Document Frequency

Abstract: Sentiment analysis can detect hate speech using Natural Language Processing (NLP) concepts. This process requires the text to be annotated with labels. When carried out by people, annotation must rely on experts in the field of hate speech to avoid subjectivity; manual annotation also takes a long time and admits errors when the data are extensive. To solve this problem, we propose an automatic annotation process based on semi-supervised learning using the K-Nearest Neighbors (KNN) algorithm. The process uses term frequency-inverse document frequency (TF-IDF) feature extraction to obtain optimal results. KNN with TF-IDF was able to annotate the data and increase accuracy in detecting hate speech by roughly 2%, from 57.25% in the initial iteration to 59.68%. The process annotated an initial dataset of 13,169 texts with an 80:20 split between training and testing data. There were 2,370 labeled texts and 1,317 unannotated texts for testing; after preprocessing, 9,482 texts remained. The final result of the KNN and TF-IDF annotation process is an annotated dataset of 11,235 texts.

Author 1: Nur Heri Cahyana
Author 2: Shoffan Saifullah
Author 3: Yuli Fauziah
Author 4: Agus Sasmito Aribowo
Author 5: Rafal Drezewski

Keywords: Natural language processing; text annotation; semi-supervised learning; TF-IDF; K-NN

Download PDF
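The semi-supervised annotation loop this abstract describes can be sketched in a few lines of plain Python. This is an illustrative reconstruction, not the authors' implementation: the toy documents, labels, the value k=3, and the policy of accepting every prediction are all invented for the example.

```python
import math
from collections import Counter

def tfidf(docs):
    """Map each document to a sparse TF-IDF vector (dict of term -> weight)."""
    n = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(d.split()))
    vecs = []
    for d in docs:
        tf = Counter(d.split())
        total = sum(tf.values())
        vecs.append({t: (c / total) * math.log(n / df[t]) for t, c in tf.items()})
    return vecs

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_vote(vec, labeled, k=3):
    """Label by majority vote among the k most similar labeled vectors."""
    nearest = sorted(labeled, key=lambda p: -cosine(vec, p[0]))[:k]
    return Counter(lbl for _, lbl in nearest).most_common(1)[0][0]

# toy corpus: 4 expert-labeled texts plus 2 unannotated texts (all invented)
labeled_docs = [("you are awful trash", "hate"),
                ("awful hateful people", "hate"),
                ("have a nice day", "neutral"),
                ("what a nice morning", "neutral")]
unlabeled_docs = ["awful trash people", "nice sunny day"]

all_vecs = tfidf([d for d, _ in labeled_docs] + unlabeled_docs)
pool = list(zip(all_vecs[:4], [lbl for _, lbl in labeled_docs]))

pseudo_labels = {}
for doc, vec in zip(unlabeled_docs, all_vecs[4:]):
    lbl = knn_vote(vec, pool, k=3)
    pseudo_labels[doc] = lbl
    pool.append((vec, lbl))   # semi-supervised step: grow the labeled pool

print(pseudo_labels)
```

Each newly annotated text is added back into the labeled pool, so later predictions can draw on earlier pseudo-labels; this self-training loop is the mechanism behind the iterative accuracy gain the paper reports.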

Paper 21: Comparison of Edge Detection Algorithms for Texture Analysis on Copy-Move Forgery Detection Images

Abstract: Feature extraction in Copy-Move Forgery Detection (CMFD) is crucial for image forgery analysis. Edge detection is one process for extracting specific information from Copy-Move Forgery (CMF) images: it condenses the information in the image, filtering out what is not useful while preserving the important structural properties. This paper compares five edge detection methods: the Roberts, Sobel, and Prewitt (first-derivative) operators and the Laplacian and Canny (second-derivative) edge detectors. Images from the CMFD evaluation dataset (MICC-F220) are tested with all five methods to facilitate comparison. The edge detection operators were implemented with their respective convolution masks: Roberts with a 2x2 mask, Prewitt and Sobel with 3x3 masks, and Laplacian and Canny with adjustable masks. These masks determine the quality of the detected edges, which reflect a strong intensity contrast, either darker or brighter.

Author 1: Bashir Idris
Author 2: Lili N. Abdullah
Author 3: Alfian Abdul Halim
Author 4: Mohd Taufik Abdullah Selimun

Keywords: Edge detection; first derivative; second derivatives; robert; sobel; prewitt; laplacian; canny edge detector

Download PDF
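The convolution masks named in this abstract are small and easy to write down. The sketch below applies the Sobel pair and the Prewitt horizontal mask to a synthetic image with one vertical edge; the image, the specific Laplacian variant, and the use of 'valid'-mode correlation are illustrative assumptions, not details taken from the paper.

```python
import math

# classic masks from the paper (Roberts and Laplacian listed for reference)
ROBERTS_X = [[1, 0], [0, -1]]
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]

def apply_mask(img, mask):
    """'Valid'-mode cross-correlation of a 2-D image with a mask."""
    mh, mw = len(mask), len(mask[0])
    return [[sum(mask[a][b] * img[i + a][j + b]
                 for a in range(mh) for b in range(mw))
             for j in range(len(img[0]) - mw + 1)]
            for i in range(len(img) - mh + 1)]

def gradient_magnitude(gx, gy):
    return [[math.hypot(x, y) for x, y in zip(rx, ry)]
            for rx, ry in zip(gx, gy)]

# synthetic 5x6 image with a sharp vertical edge between columns 2 and 3
img = [[0, 0, 0, 255, 255, 255] for _ in range(5)]

gx = apply_mask(img, SOBEL_X)
gy = apply_mask(img, SOBEL_Y)
grad = gradient_magnitude(gx, gy)
px = apply_mask(img, PREWITT_X)

print(grad[0])      # Sobel responds strongly only where the edge lies
print(px[0])        # Prewitt responds with a smaller peak at the same edge
```

Canny additionally involves Gaussian smoothing, non-maximum suppression, and hysteresis thresholding, which are omitted here; only the raw mask responses are shown.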

Paper 22: Deep Learning Model for Predicting Consumers’ Interests of IoT Recommendation System

Abstract: Internet of Things (IoT) technology has contributed to several domains such as health, energy, education, transportation, and industry. However, with the increased number of IoT solutions worldwide, IoT consumers find it difficult to choose the technology that suits their needs. This article describes the design and implementation of an IoT recommendation system based on consumer interests. In particular, the knowledge-based IoT recommendation system exploits a Service Oriented Architecture (SOA) in which IoT device and service providers use a registry to advertise their products. Moreover, the proposed model uses a Long Short-Term Memory (LSTM) deep learning technique to predict the consumer's interest from the consumer's data. The recommendation system then maps consumers to the related IoT devices based on those interests. The proposed knowledge-based IoT recommendation system has been validated using a real-world IoT dataset of more than 15,791 tweets collected through the Twitter Application Programming Interface (API). Overall, the results of our experiment are promising in terms of precision and recall, and the proposed model achieved the highest accuracy score compared with other state-of-the-art methods.

Author 1: Talal H. Noor
Author 2: Abdulqader M. Almars
Author 3: El-Sayed Atlam
Author 4: Ayman Noor

Keywords: Internet of things; IoT; knowledge-based; recommendation system; service-oriented architecture; SOA; long short-term memory; LSTM; deep learning

Download PDF

Paper 23: A Machine Learning Model for Personalized Tariff Plan based on Customer’s Behavior in the Telecom Industry

Abstract: In the telecommunication industry, being able to predict customers’ behavioral patterns in order to design and recommend a suitable tariff plan is the ultimate target. Behavioral patterns have a vital connection with customers’ demographic backgrounds. Various studies based on hypothesis testing, regression analysis, and conjoint analysis have examined the interdependencies among these factors and their effects on customers’ behavioral needs, which leaves ample scope for research using classification-based techniques. This work proposes a model to predict customers’ behavioral patterns from their demographic data. The model was built after investigating various classification-based machine learning techniques: traditional ones such as decision tree, k-nearest neighbors, logistic regression, and artificial neural networks, along with ensemble techniques such as random forest, AdaBoost, gradient boosting machine, extreme gradient boosting, bagging, and stacking. They were applied to a dataset collected using a questionnaire in India. Among the traditional classifiers, the decision tree gave the best result with 81% accuracy, and random forest showed the best result among the ensemble learning techniques with an accuracy of 83%. The proposed model has shown a very positive outcome in predicting customers’ behavioral patterns.

Author 1: Lewlisa Saha
Author 2: Hrudaya Kumar Tripathy
Author 3: Fatma Masmoudi
Author 4: Tarek Gaber

Keywords: Customer behavior; data analytics; ensemble learning; machine learning; telecommunication industry

Download PDF

Paper 24: A Systematic Literature Review of Deep Learning-Based Detection and Classification Methods for Bacterial Colonies

Abstract: Deep learning is an area of machine learning with substantial potential in various fields of study, such as image processing and computer vision. A large number of studies are published annually on deep learning techniques. The focus of this paper is on bacteria detection, identification, and classification. This paper presents a systematic literature review that synthesizes the evidence related to bacteria colony identification and detection published in the year 2021. The aim is to aggregate, analyse, and summarize the evidence related to deep learning detection, identification, and classification of bacteria and bacteria colonies. The significance is that the review will help experts and technicians understand how deep learning techniques can be applied in this regard and potentially support more accurate detection of bacteria types. A total of 38 studies are analysed. The majority of the published studies focus on supervised-learning-based convolutional neural networks. Furthermore, a large number of studies use laboratory-prepared datasets rather than open-source or industrial datasets. The results also indicate a lack of tools, which is a barrier to adopting academic research in industrial settings.

Author 1: Shimaa A. Nagro

Keywords: AI; bacterial-colonies; classification; deep learning; detection; literature review

Download PDF

Paper 25: Optimization of Multilayer Perceptron Hyperparameter in Classifying Pneumonia Disease Through X-Ray Images with Speeded-Up Robust Features Extraction Method

Abstract: Pneumonia is an illness that can affect practically anyone, from children to the elderly. It is an infectious disease of the lungs caused by viruses, bacteria, or fungi. Recognizing someone who has pneumonia is quite difficult because pneumonia has multiple levels of classification, so the symptoms experienced may vary. In this study, the multilayer perceptron approach is used to categorize pneumonia and determine the achievable accuracy, contributing to scientific development. The Multilayer Perceptron is employed as the classification method, with the learning rate and momentum as hyperparameters, while SURF is used to extract the features for this classification. Based on the experiments carried out, the learning rate value is generally not very influential in the learning process at momentum values of 0.1, 0.3, 0.5, 0.7, and 0.9. The best accuracy for momentum 0.1 is at a learning rate of 0.05; for momentum 0.3, at learning rates of 0.09, 0.05, and 0.07; and for momentum 0.9, at a learning rate of 0.09, with the highest accuracy overall obtained at a learning rate of 0.03. This research should be extended by varying the number of hidden layers and the number of nodes in each hidden layer; such variations will affect computation time and may yield more optimal accuracy.

Author 1: Mutammimul Ula
Author 2: Muhathir
Author 3: Ilham Sahputra

Keywords: Multilayer perceptron; SURF; pneumonia; hyperparameter

Download PDF
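The learning-rate/momentum grid this abstract explores can be illustrated with a minimal SGD-with-momentum loop. The single logistic unit, toy 1-D data, epoch count, and grid values below are invented stand-ins for the paper's MLP with SURF features.

```python
import math
from itertools import product

def train_logistic(data, lr, momentum, epochs=200):
    """Single logistic unit trained by SGD with momentum: a toy stand-in
    for one MLP neuron (the real model has hidden layers and SURF inputs)."""
    w, b, vw, vb = 0.0, 0.0, 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            z = max(-60.0, min(60.0, w * x + b))   # clamp against overflow
            p = 1.0 / (1.0 + math.exp(-z))
            gw, gb = (p - y) * x, (p - y)          # gradient of log loss
            vw = momentum * vw - lr * gw           # momentum-smoothed step
            vb = momentum * vb - lr * gb
            w, b = w + vw, b + vb
    return w, b

def accuracy(data, w, b):
    return sum((w * x + b > 0) == (y == 1) for x, y in data) / len(data)

# toy 1-D, linearly separable data (invented for illustration)
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]

results = {}
for lr, m in product([0.01, 0.05, 0.09], [0.1, 0.5, 0.9]):
    w, b = train_logistic(data, lr, m)
    results[(lr, m)] = accuracy(data, w, b)

best = max(results, key=results.get)
print(best, results[best])
```

On this trivially separable data every grid cell converges; on a real dataset the grid surfaces the accuracy differences between (learning rate, momentum) pairs that the abstract reports.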

Paper 26: Forecasting Covid-19 Time Series Data using the Long Short-Term Memory (LSTM)

Abstract: Cumulative confirmed Covid-19 case data for Riau Province (sourced from https://corona.riau.go.id/data-statistik/) showed 63,441 cases on June 7, 2021, rising to 65,883 on June 14, 67,910 on June 21, and 69,830 on June 28. Since the beginning of the pandemic outbreak, case counts have been observed to increase every week through July. This study predicts Covid-19 time series data in Riau Province using the LSTM algorithm, with a dataset of 64 rows. Long Short-Term Memory can retain information about patterns in the data over long time spans. Tests predicting historical Covid-19 case data in Riau Province produced the lowest RMSE values of 8.87 on the training data and 13.00 on the test data, both in the death column. The best MAPE on the training data, 0.23%, is in the recovered column, and the best MAPE on the test data, 0.27%, is in the positive_number column. In a test predicting the next 30 days with the trained LSTM model, the prediction performance for the positive_number and death columns was very good, the recovered column was categorized as good, and the independent_isolation and care_rs columns were categorized as poor.

Author 1: Harun Mukhtar
Author 2: Reny Medikawati Taufiq
Author 3: Ilham Herwinanda
Author 4: Doni Winarso
Author 5: Regiolina Hayami

Keywords: Time series prediction; forecasting; recurrent neural network; long short-term memory

Download PDF
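Before an LSTM can be trained on such a series, the data is typically sliced into fixed-length windows of past values paired with the next value, and predictions are scored with RMSE and MAPE as in the abstract. The sketch below uses the four weekly counts quoted above; the lookback of 2 and the naive "repeat last value" baseline are illustrative assumptions, not the paper's setup.

```python
import math

def make_windows(series, lookback):
    """Slice a series into (input window, next value) supervised pairs,
    the standard preprocessing step before LSTM training."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return X, y

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def mape(actual, predicted):
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) \
           / len(actual)

# the four weekly cumulative case counts reported in the abstract
cases = [63441, 65883, 67910, 69830]
X, y = make_windows(cases, lookback=2)
print(X, y)

# scoring a naive "repeat last observed value" baseline with the same metrics
naive = [w[-1] for w in X]
print(round(rmse(y, naive), 2), round(mape(y, naive), 2))
```

A trained LSTM would replace the naive predictor here; the windowing and the two evaluation metrics are unchanged.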

Paper 27: Design of a Dense Layered Network Model for Epileptic Seizures Prediction with Feature Representation

Abstract: Epilepsy is a neurological disorder that affects about 60 million people all over the world, and about 30% of them cannot be cured with surgery or medications. Predicting a seizure at an early stage helps prevent harm through therapeutic interventions. Certain studies have observed abnormal brain activity before the initiation of a seizure, medically termed the pre-ictal state. Various investigators have sought a reliable baseline for treating the pre-ictal stage; however, an effective prediction model with high specificity and sensitivity remains a challenging task. This work concentrates on modelling an efficient dense layered network model (DLNM) for seizure prediction using a deep learning (DL) approach. The proposed framework is composed of pre-processing, feature representation, and classification with a support-vector-based layered model (dense layered model). The model is tested on roughly 24 subjects from the CHB-MIT dataset, attaining an average accuracy of 96%. The purpose of the research is earlier seizure prediction, to reduce the mortality rate and the severity of the disease for the people suffering from it.

Author 1: Summia Parveen
Author 2: S. A. Siva Kumar
Author 3: P. MohanRaj
Author 4: Kingsly Jabakumar
Author 5: R. Senthil Ganesh

Keywords: Epilepsy seizure; pre-ictal state; deep learning; feature representation; vector model

Download PDF

Paper 28: The Effect of the Aesthetically Mobile Interfaces on Students’ Learning Experience for Primary Education

Abstract: Mobile devices such as mobile phones are becoming more important to school students today. Owing to the COVID-19 pandemic, most traditional face-to-face learning has shifted to online learning, such as learning via a mobile platform. Mobile learning, also known as m-learning, is defined as learning in numerous situations through social and content interaction using personal electronic devices. M-learning applications not only need efficient functions; they also have to attract students to learn by providing an attractive interface. The aesthetics of a mobile interface are essential, since an aesthetic interface can positively influence the user's learning experience, whereas a non-aesthetic one can do the opposite. User experience (UX) encompasses a wide range of outcomes of user-device interaction, including cognitions, attitudes, beliefs, behaviour, behavioural intentions, and affect. This study focuses on UX in terms of learnability, satisfaction, and efficiency, since most previous studies did not explicitly examine these three UX components. Thus, this study investigates the effect of aesthetic mobile interfaces on the learnability, satisfaction, and efficiency of primary school students, specifically Kelas Al-Quran and Fardu Ain (KAFA) students. The study found that aesthetic mobile interfaces significantly affected students’ learning experiences in terms of learnability, satisfaction, and efficiency. In conclusion, the findings of this study could serve as guidelines for future research in the field of mobile interface design.

Author 1: Nor Fatin Farzana Binti Zainuddin
Author 2: Zuriana Binti Abu Bakar
Author 3: Noor Maizura Binti Mohammad
Author 4: Rosmayati Binti Mohamed

Keywords: Aesthetic; non-aesthetic; mobile interfaces; primary education

Download PDF

Paper 29: Real Time Customer Satisfaction Analysis using Facial Expressions and Headpose Estimation

Abstract: One of the most exciting, innovative, and promising topics in marketing research is the quantification of customer interest. This work focuses on interest detection and provides a deep learning-based system that monitors client behaviour. The recommended method assesses customer attentiveness by estimating head pose: customers whose heads are directed toward the promotion or item of interest are identified by the system, which analyses facial expressions and records client interest. A dedicated method first recognizes frontal face poses, then splits the facial components that are critical for detecting expressions into iconized face pictures, on which consumer interest monitoring is mainly executed. Finally, the raw facial images are combined with the iconized face images' confidence ratings to estimate facial emotions. This technique combines local part-based characteristics with holistic face data for precise facial emotion identification. The method adds a new dimension to marketing and product research, and the findings indicate that the suggested architecture is suitable for deployment because it is efficient and operates in real time.

Author 1: Nethravathi P. S
Author 2: Manjula Sanjay Koti
Author 3: Taramol. K.G
Author 4: Soofi Anwar
Author 5: Gayathri Babu J
Author 6: Rajermani Thinakaran

Keywords: Customer monitoring; convolutional neural network; facial expression recognition; facial analysis; head pose estimations component; CNN Model; object localization; face boosting

Download PDF

Paper 30: An Optimized Single Layer Perceptron-based Approach for Cardiotocography Data Classification

Abstract: Uterine Contractions (UC) and Fetal Heart Rate (FHR) are the most common measures for fetal and maternal assessment during pregnancy and for detecting the changes in fetal oxygenation that occur throughout labor. By monitoring Cardiotocography (CTG) patterns, doctors can assess the fetal state, accelerations, heart rate, and uterine contractions. Several computational and machine learning (ML) methods have been applied to CTG recordings to improve the effectiveness of fetal analysis and help doctors understand variations in their interpretation. However, obtaining an optimal solution with the best accuracy remains an important concern. Among the various ML approaches, artificial neural network (ANN)-based approaches have achieved high performance in several applications. In this paper, an optimized Single Layer Perceptron (SLP)-based approach is proposed to classify CTG data accurately and predict the fetal state. The approach exploits the advantages of the SLP model and optimizes the learning rate using a grid search method, arriving at the best accuracy and converging to a local minimum. The approach is evaluated on the CTG dataset of the University of California, Irvine (UCI). The optimized SLP model is trained and tested on the dataset using 10-fold cross-validation to classify CTG patterns as normal, suspect, or pathologic. The experimental results show that the proposed approach achieved 99.20% accuracy, compared with state-of-the-art models.

Author 1: Bader Fahad Alkhamees

Keywords: Cardiotocography; machine learning; artificial neural network (ANN); learning rate; grid search; 10-fold cross-validation

Download PDF
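The core of this approach, a single layer perceptron whose learning rate is chosen by grid search, can be sketched with the classic error-driven perceptron update rule. The toy two-feature data and grid values below are invented; the paper uses the UCI CTG features, three classes, and 10-fold cross-validation rather than the binary resubstitution accuracy shown here.

```python
def train_slp(data, lr, epochs=50):
    """Single layer perceptron with the classic update w += lr*(y - pred)*x."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            if err:
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

def accuracy(data, w, b):
    return sum((1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
               for x, y in data) / len(data)

# toy two-feature, linearly separable data (invented for illustration)
data = [([1.0, 1.0], 1), ([2.0, 0.5], 1), ([-1.0, -1.0], 0), ([-2.0, -0.5], 0)]

# grid search over the learning rate, the hyperparameter tuned in the paper
grid = [0.001, 0.01, 0.1, 1.0]
scores = {lr: accuracy(data, *train_slp(data, lr)) for lr in grid}
best_lr = max(scores, key=scores.get)
print(best_lr, scores[best_lr])
```

On real data the grid cells would differ and each score would come from cross-validation; the selection logic, keeping the learning rate with the highest validation accuracy, is the same.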

Paper 31: Observe-Orient-Decide-Act (OODA) for Cyber Security Education

Abstract: A cyber range is an isolated simulation environment that can be used for cybersecurity training. As a training tool, the cyber range has a crucial role in improving the competence of its users. Isolated environmental conditions allow users to build competence through cybersecurity training based on predetermined scenarios. There is no standard for training scenarios; most use common cases. In this research, the cyber range is built on the cyber range taxonomy and uses the observe-orient-decide-act (OODA) loop, which has proven itself in military education. The OODA loop is implemented to guide each step of the attack and its handling in the built scenario. A data theft case was chosen as the scenario because such incidents occur frequently, making it easier for users to understand. The OODA loop cyber range meets 16 of the 17 characteristics in the cyber range taxonomy, and the final acceptance rate was 81.82%. These acceptance results give confidence that this new method can be used as an alternative for learning cybersecurity.

Author 1: Dimas Febriyan Priambodo
Author 2: Yogha Restu Pramadi
Author 3: Obrina Candra Briliyant
Author 4: Muhammad Hasbi
Author 5: Muhammad Adi Yahya

Keywords: Cyber security; cybersecurity education; cyber range; OODA loop; role-play scenario

Download PDF

Paper 32: Module Partition Method of Embedded Multitasking Software Based on Fuzzy Set Theory

Abstract: In order to improve the reliability of embedded multitasking software, a module partition method based on fuzzy set theory is proposed to address the large number and high frequency of software failures. First, the characteristics of embedded multitasking software are analyzed and a constraint parameter distribution model is constructed. Then a reliability parameter analysis model is built, and a multilevel fuzzy metric structure combined with a quantitative recursive analysis method is used to partition the embedded multitasking software modules. The simulation results show that the accuracy of the proposed method exceeds 99% once the number of iterations exceeds 50, demonstrating high reliability, precision, and practicability.

Author 1: Yunpeng Gu

Keywords: Multitasking; embedded; modular; fuzzy set theory

Download PDF

Paper 33: Comparison of Metaheuristic Techniques for Parcel Delivery Problem: Malaysian Case Study

Abstract: Many people turned to e-commerce following the Coronavirus Disease 2019 (COVID-19) outbreak, resulting in delivery companies receiving large quantities of parcels to deliver to clients. A hurdle emerges when a delivery person needs to convey items to a large number of households in a single journey, a situation they had never faced before; they therefore seek the quickest route to reduce delivery costs and time. Since the delivery challenge is classified as an NP-hard (non-deterministic polynomial-time hard) problem, this study aims to find the shortest distance, including the runtime, for a real case study located in Melaka, Malaysia. Two metaheuristic approaches are compared, namely Ant Colony Optimization (ACO) and the Genetic Algorithm (GA). The results show that the GA strategy outperforms the ACO technique in terms of distance, cost, and runtime for moderate data sizes of fewer than 90 locations.

Author 1: Shamine A/P S. Moganathan
Author 2: Siti Noor Asyikin binti Mohd Razali
Author 3: Aida Mustapha
Author 4: Safra Liyana binti Sukiman
Author 5: Rosshairy Abd Rahman
Author 6: Muhammad Ammar Shafi

Keywords: Ant-colony optimization; genetic algorithm; delivery problem; comparison; cost; runtime

Download PDF

Paper 34: An Approach for Optimization of Features using Gorilla Troop Optimizer for Classification of Melanoma

Abstract: The diagnosis and categorization of skin cancer, given the variation in skin textures and injuries, is a tough undertaking. Manually detecting skin lesions from dermoscopy images is a difficult and cumbersome challenge. Recent advancements in the Internet of Things (IoT) and artificial intelligence for clinical applications have shown significant gains in precision and processing time. Deep learning models receive a lot of attention because they are effective at identifying cancer cells, and categorizing benign and malignant dermoscopy images can greatly increase diagnostic accuracy. This work suggests an automated classification system based on a deep convolutional neural network (DCNN) to perform multi-classification precisely. The DCNN's structure was thoughtfully created by arranging a number of layers, each responsible for extracting different features from skin lesions. This paper proposes a deep learning approach that tackles three main tasks emerging in the field of skin lesion image processing: deep feature extraction (task 1) using transfer learning; feature selection (task 2) using metaheuristic algorithms, namely Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Gorilla Troop Optimization (GTO), whereby the extensive feature set is optimized and the number of features is reduced; and two-level classification (task 3). The proposed deep learning frameworks were assessed on the HAM10000 dataset, achieving an accuracy of 93.58 percent, which outperforms state-of-the-art (SOTA) techniques. The suggested technique is also highly scalable.

Author 1: Anupama Damarla
Author 2: Sumathi D

Keywords: Skin cancer; image enhancement; deep learning; evolutionary algorithms; Particle Swarm Optimization; Ant Colony Optimization; Gorilla Troop Optimization

Download PDF

Paper 35: Review on Multimodal Fusion Techniques for Human Emotion Recognition

Abstract: Emotions play an essential role in human planning and decision making. Emotion identification and recognition is a widely explored field in artificial intelligence and affective computing as a means of empathizing with humans and thereby improving human-machine interaction. Though audio-visual cues are vital for recognizing human emotions, they are sometimes insufficient for identifying the emotions of people who are good at hiding them or who suffer from alexithymia. Considering other modalities such as the Electroencephalogram (EEG) or text, along with audio-visual cues, can aid in improving results in such situations. Taking advantage of the complementarity of multiple modalities normally helps capture emotions more accurately than a single modality. However, to achieve precise and accurate results, correct fusion of these multimodal signals is required. This study provides a detailed review of the multimodal fusion techniques that can be used for emotion recognition. The paper presents an in-depth study of feature-level, decision-level, and hybrid fusion techniques for identifying human emotions from multimodal inputs and compares their results. The study concentrates on three modalities for experimentation, namely facial images, audio, and text, at least one of which differs in temporal characteristics. The results suggest that hybrid fusion works best at combining multiple modalities that differ in time synchronicity.

Author 1: Ruhina Karani
Author 2: Sharmishta Desai

Keywords: Feature-level fusion; decision-level fusion; hybrid fusion; artificial intelligence; EEG

Download PDF

Paper 36: A Comparative Study of Predictions of Students’ Performances in Offline and Online Modes using Adaptive Neuro-Fuzzy Inference System

Abstract: Predicting a student's performance can help educational institutions support students in improving their academic performance and provide high-quality education. Creating a model that accurately predicts a student's performance is both difficult and challenging. Before the pandemic, students were accustomed to the offline, i.e., physical, mode of learning. As Covid-19 took over the world, the offline mode of education was totally disrupted, which marked the beginning of online teaching over the Internet. In this article, these two modes are analysed and compared with reference to students’ academic performances. The article models the prediction of students' academic performance before Covid (physical mode) and during Covid (online mode) to help students improve their performance. The proposed model works in two steps. First, two sets of students’ previous semester-end (SEE) results, one from the offline mode and one from the online mode, are collected and pre-processed by normalizing the performances in order to improve efficiency and accuracy. Second, an Adaptive Neuro-Fuzzy Inference System (ANFIS) is applied to predict academic performance in both learning modes. Three membership functions of ANFIS, Gaussian (Gausmf), triangular (Trimf), and Gaussian-bell (Gbellmf), are used to generate the fuzzy rules for the prediction process proposed in this paper.

Author 1: Sudhindra B. Deshpande
Author 2: Kiran K.Tangod
Author 3: Neelkanth V.Karekar
Author 4: Pandurang S.Upparmani

Keywords: ANFIS; fuzzy systems; online learning; e-learning; classroom learning; fuzzy rules; predictions; adaptive neuro-fuzzy inference system; education technology; distance education

Download PDF

Paper 37: Combining Innovative Technology and Context based Approaches in Teaching Software Engineering

Abstract: Sustainability in learning is essential for a sustainable future, which largely depends on education. Sustainable learning requires learners to extend and rebuild their base knowledge as circumstances change and become more complex. This is particularly obvious in the information technology (IT) discipline, where technology changes rapidly and practice grows more complicated. Sustainability enables students to put their learning from formal education into practice, gives them hands-on experience (HOE), and helps them rebuild their knowledge base in complex situations. It is also essential for achieving a high graduate outcome rate (GOR), which helps the education sector itself become sustainable. Under existing policies and frameworks, institutions are moving towards more off-campus learning and less face-to-face learning. As a result, a downward trend in student engagement is being experienced across the IT discipline. This affects students’ ability to achieve HOE and appears to be one reason for the low GOR, posing a threat to sustainability in the education sector from both stakeholder and learner perspectives. This paper presents a combined approach of context-based teaching with the incorporation of innovative technology to engage students and achieve better HOE towards sustainability in learning. The proposed approach was adopted in a software engineering course taught at the School of IT at Deakin University, Australia. Students were provided with context-based teaching material and industry-standard software engineering tools for practice to achieve HOE. Student evaluations and assessment results show that the proposed approach had a significant positive impact on engaging students in classes towards improved sustainable learning.

Author 1: Shamsul Huda
Author 2: Sultan Alyahya
Author 3: Lei Pan
Author 4: Hmood Al-Dossari

Keywords: Sustainable learning and education; context-based teaching; work integrated learning; hands on experience; graduate outcome rate; positive attitude and engagement

Download PDF

Paper 38: Modified Intrusion Detection Tree with Hybrid Deep Learning Framework based Cyber Security Intrusion Detection Model

Abstract: In the modern era, the most pressing issue facing society is protection against cyberattacks on networks. The frequency of cyber-attacks today makes providing feasible security for computer systems against potential risks important and crucial. Network security cannot be effectively monitored and protected without the use of intrusion detection systems (IDSs). Deep learning techniques (DLTs) and machine learning techniques (MLTs) are being employed in information security domains to build effective IDSs capable of automatically and promptly identifying harmful attacks. IntruDTree (Intrusion Detection Tree), an MLT-based security model that detects attacks effectively, was presented in existing research. That model, however, suffers from overfitting, which occurs when the learning method fits the training data perfectly but fails to generalize to new data. To address the issue, this study introduces the MIntruDTree-HDL (Modified IntruDTree with Hybrid Deep Learning) framework, which improves the performance and prediction of IDSs. The MIntruDTree-HDL framework predicts and classifies harmful cyber assaults in the network using a Modified IntruDTree (M-IntruDTree) with convolutional recurrent neural networks (CRNNs). First, a modified tree-based generalized IDS, M-IntruDTree, is created to rank the key characteristics. Convolutional neural networks (CNNs) then use convolution to collect local information, while recurrent neural networks (RNNs) capture temporal features to increase IDS performance and prediction. The model is not only accurate in predicting unknown test cases but also reduces computational cost through dimensionality reduction. The efficacy of the proposed MIntruDTree-HDL scheme was benchmarked on cybersecurity datasets in terms of precision, recall, F-score, accuracy, and ROC. The simulation results show that the proposed MIntruDTree-HDL outperforms current IDS approaches, with a high rate of malicious attack detection accuracy.

Author 1: Majed Alowaidi

Keywords: Cybersecurity; IntruDTree model; convolution recurrent neural network (CRNN); MIntruDTree-HDL; deep learning

Download PDF

Paper 39: Performance Comparison between Meta-classifier Algorithms for Heart Disease Classification

Abstract: The rise in heart disease among the general population is alarming, as cardiovascular disease is the leading cause of death, and several studies have been conducted to assist cardiologists in identifying its primary causes. The classification accuracy of the single classifiers used in most recent studies to predict heart disease is quite low. Classification accuracy can be enhanced by integrating the output of multiple classifiers in an ensemble technique; however, existing ensemble approaches that integrate all classifiers, while able to deliver the best accuracy, are quite resource-intensive. This study therefore proposes a stacking ensemble that selects the optimal subset of classifiers for producing meta-classifiers. In addition, the research compares the effectiveness of several meta-classifiers to further enhance classification. Ten algorithms are used as base classifiers: logistic regression (LR), support vector classifier (SVC), random forest (RF), extra tree classifier (ETC), naïve Bayes (NB), extreme gradient boosting (XGB), decision tree (DT), k-nearest neighbor (KNN), multilayer perceptron (MLP), and stochastic gradient descent (SGD). Three algorithms, LR, MLP, and SVC, were used to construct the meta-classifier, with the prediction results from the base classifiers serving as input to the stacking ensemble. The study demonstrates that the stacking ensemble performs better than any single base-classifier algorithm; the logistic regression meta-classifier yielded 90.16% accuracy, better than any base classifier. In conclusion, the stacking ensemble approach can be considered an additional means of achieving better accuracy and improved classification performance.
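As a minimal, library-free sketch of the stacking data flow the abstract describes, the snippet below turns base-classifier outputs into the meta-classifier's input matrix. The three threshold rules stand in for the paper's trained base classifiers and are purely illustrative:

```python
# Sketch of the stacking data flow: each base classifier's prediction
# becomes one input column for the meta-classifier.
# The threshold "classifiers" below are illustrative stand-ins, not the
# paper's trained LR/SVC/RF/... bases.

def base_predict(rules, sample):
    """Apply each base rule to one sample, yielding a 0/1 prediction per rule."""
    return [1 if sample[feat] > thresh else 0 for feat, thresh in rules]

def build_meta_features(rules, samples):
    """Stack base-classifier outputs row-wise into the meta-classifier's input."""
    return [base_predict(rules, s) for s in samples]

# Hypothetical bases: (feature index, decision threshold)
rules = [(0, 0.5), (1, 2.0), (0, 1.5)]
samples = [[0.7, 3.0], [0.2, 1.0]]
meta_X = build_meta_features(rules, samples)
# Each row of meta_X is one sample's vector of base predictions,
# which a meta-classifier (e.g. LR) would then be trained on.
```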

Author 1: Nureen Afiqah Mohd Zaini
Author 2: Mohd Khalid Awang

Keywords: Heart disease prediction; ensemble stacking; multi classifier

Download PDF

Paper 40: Detection of Severity-based Email SPAM Messages using Adaptive Threshold Driven Clustering

Abstract: The classification of emails is a crucial part of the email filtering process, as email has become one of the key methods of communication. Identifying safe or unsafe emails is complex due to the diversified use of language. Most parallel research has demonstrated significant benchmarks in identifying email spam; however, the standard processes can only label emails as spam or ham, so a more detailed classification of emails has not been achieved. This work therefore proposes a novel method for sorting emails into various classes using a deep clustering process aided by a severity-based ranking of words. The proposed method demonstrates nearly 99.4% accuracy in detecting and classifying emails into a total of five classes.

Author 1: I V S Venugopal
Author 2: D Lalitha Bhaskari
Author 3: M N Seetaramanath

Keywords: BoW collection; web crawler; email text extraction; subsetting method; email class detection; ranking method

Download PDF

Paper 41: Alz-SAENet: A Deep Sparse Autoencoder based Model for Alzheimer’s Classification

Abstract: Precise identification of Alzheimer's Disease (AD) is vital in health care, especially at an early stage, since recognizing the likelihood of incidence and progression allows patients to adopt preventive measures before irreparable brain damage occurs. Magnetic Resonance Imaging (MRI) is an effective and common clinical strategy for diagnosing AD due to the structural detail it provides. We built an advanced deep sparse autoencoder-based architecture, named Alz-SAENet, to distinguish diseased subjects from typical controls using MRI volumes. We focused on a novel optimal feature extraction procedure combining a 3D Convolutional Neural Network (CNN) and a deep sparse autoencoder (SAE). Optimal features derived from the bottleneck layer of the hyper-tuned SAE network are subsequently passed to a deep neural network (DNN). This approach yields improved four-way categorization of AD-prone 3D MRI brain volumes and demonstrates the network's capability for AD prognosis, enabling preventive measures. The model was evaluated on ADNI and Kaggle data, achieving 98.9% and 98.215% accuracy respectively, and shows a strong response in distinguishing MRI volumes in the transitional phase of AD.

Author 1: G Nagarjuna Reddy
Author 2: K Nagi Reddy

Keywords: Alzheimer’s disease; MRI; CNN; sparse autoencoder; DNN; mild cognitive impairment

Download PDF

Paper 42: Development of Path Loss Prediction Model using Feature Selection-Machine Learning Approach

Abstract: Wireless network planning requires accurate coverage predictions to achieve good quality. An accurate path loss model must be flexible enough to cover every type of area, including land and water. The purpose of this research is to develop a Cost-Hatta model that can be applied to mixed land-water areas, using three machine learning feature selection methods. The first stage of the research was the collection of field data; the measurement data included system, weather, and geographical parameters. The next stage was feature selection to obtain the best composition of features for model development, using Univariate FS, the Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). After obtaining the best features from each method, models were built using four machine learning algorithms: Random Forest Regression (RF), Deep Neural Network (DNN), K-Nearest Neighbor Regression (KNN), and Support Vector Regression (SVR). The improved path loss prediction models were tested using the evaluation metrics Root Mean Square Error (RMSE), Mean Square Error (MSE), and Mean Absolute Percentage Error (MAPE). Testing showed that the improved Cost-Hatta model using the proposed Univariate-RF combination produced a very small RMSE value of 1.52, indicating that the proposed model framework is highly suitable for use in mixed land-water areas.
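The three evaluation metrics named in the abstract can be computed as below; this is a generic, library-free sketch rather than the authors' implementation, and the measured/predicted path-loss values are hypothetical:

```python
import math

def mse(y_true, y_pred):
    """Mean square error between measured and predicted values."""
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(mse(y_true, y_pred))

def mape(y_true, y_pred):
    """Mean absolute percentage error; assumes no zero targets."""
    return 100.0 * sum(abs((a - b) / a) for a, b in zip(y_true, y_pred)) / len(y_true)

measured = [100.0, 200.0]   # hypothetical path-loss values (dB)
predicted = [110.0, 190.0]
```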

Author 1: Bengawan Alfaresi
Author 2: Zainuddin Nawawi
Author 3: Bhakti Yudho Suprapto

Keywords: Path loss; feature selection; machine learning; mixed land-water; Cost-Hatta

Download PDF

Paper 43: Employability Prediction of Information Technology Graduates using Machine Learning Algorithms

Abstract: The ability to predict graduates' employability against labor market demands is crucial for any educational institution aiming to enhance students' performance and learning process, as graduates' employability is the metric of success for any higher education institution (HEI). This is especially true for information technology (IT) graduates, given the growing demand for IT professionals in the current era. Job mismatch and unemployment remain major challenges for educational institutions due to the various factors that influence graduates' employability. Therefore, this paper introduces a predictive model using machine learning (ML) algorithms to predict IT graduates' employability against labor market demands. Five ML classification algorithms were applied: Decision Tree (DT), Gaussian Naïve Bayes (Gaussian NB), Logistic Regression (LR), Random Forest (RF), and Support Vector Machine (SVM). The dataset used in this study was collected through a survey given to IT graduates and employers. Performance is evaluated in terms of accuracy, precision, recall, and F1 score. The results show that DT achieved the highest accuracy, followed by LR and SVM.

Author 1: Gehad ElSharkawy
Author 2: Yehia Helmy
Author 3: Engy Yehia

Keywords: Machine learning; IT graduates; higher education; employability; labor market

Download PDF

Paper 44: Otsu’s Thresholding for Semi-Automatic Segmentation of Breast Lesions in Digital Mammograms

Abstract: In Maghreb countries, breast cancer is considered one of the leading causes of mortality among females. A screening mammogram is a method of taking low-energy X-ray images of the human breast to identify early symptoms of breast cancer. The shape and contour of a lesion in digitized mammograms are two effective features that allow radiologists to distinguish between benign and malignant tumors. In this paper we propose a new approach based on Otsu's thresholding method for semi-automatic extraction of lesion boundaries from mammogram images. The approach seeks the threshold value at which the weighted within-class variance of the lesion and normal-tissue pixels is least (equivalently, the between-class variance is greatest). In the first step, a median filter removes noise within the region of interest (ROI). In the second step, a global threshold is decremented to find the proper range of pixels in which the breast lesion can be segmented by Otsu's thresholding with high accuracy. Mathematical morphology, especially the opening operation, is used to remove small objects from the ROI while preserving the shape and size of the larger objects that represent tumors. We evaluated our proposal on 21 images from the Mini-MIAS database. Experimental results show that it achieves better results than many previously explored methods.
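Otsu's method itself can be sketched in a few lines. The snippet below is an illustrative, library-free version that exhaustively searches for the threshold maximizing the between-class variance; the toy ROI values are hypothetical:

```python
def otsu_threshold(pixels):
    """Return the grey level that maximizes between-class variance.

    Pixels at or below the returned value fall in one class (e.g. normal
    tissue), pixels above it in the other (e.g. lesion).
    """
    best_t, best_var = None, -1.0
    levels = sorted(set(pixels))
    for t in levels[:-1]:  # at least one pixel must land in each class
        c0 = [p for p in pixels if p <= t]
        c1 = [p for p in pixels if p > t]
        w0, w1 = len(c0) / len(pixels), len(c1) / len(pixels)
        mu0, mu1 = sum(c0) / len(c0), sum(c1) / len(c1)
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class (weighted) variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

roi = [10, 12, 11, 200, 210, 205, 15, 198]  # toy ROI: dark tissue vs bright lesion
t = otsu_threshold(roi)
mask = [p > t for p in roi]  # True marks lesion pixels
```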

Author 1: Moustapha Mohamed Saleck
Author 2: M. H. Ould Mohamed Dyla
Author 3: EL Benany Mohamed Mahmoud
Author 4: Mohamed Rmili

Keywords: Tumor detection; lesion segmentation; mammogram images; Otsu’s thresholding

Download PDF

Paper 45: Performance Analysis of Software Test Effort Estimation using Genetic Algorithm and Neural Network

Abstract: At present, software companies frequently use software test effort estimation to allocate resources efficiently during the software development process. Different machine learning models have been developed to estimate the total effort required before a software product can be delivered; these computational models use past data to estimate effort. In the current study, software test effort is predicted using a genetic algorithm and a neural network. Attributes are selected using the genetic algorithm, and the similarity between attribute values is computed using the cosine similarity measure. Simulation experiments were conducted on the PROMISE and Kaggle repositories, with the implementation in MATLAB. The performance metrics precision, recall, and accuracy were computed for comparison against existing techniques. The proposed model achieves 91.3% accuracy, an improvement of 8.9% over the existing technique, demonstrating its superiority in predicting test effort for software development.
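The cosine similarity measure used to compare attribute values can be sketched as below; the attribute vectors are hypothetical and the snippet is not the authors' MATLAB implementation:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two attribute vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical attribute vectors for two software projects
past_project = [3.0, 1.0, 0.0]
new_project = [6.0, 2.0, 0.0]
sim = cosine_similarity(past_project, new_project)
```

Here the two vectors point in the same direction, so their similarity is 1.0 regardless of their magnitudes, which is what makes the measure useful for comparing projects of different scales.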

Author 1: Vikas Chahar
Author 2: Pradeep Kumar Bhatia

Keywords: Test effort estimation; software testing; machine learning; computational intelligence; neural network

Download PDF

Paper 46: A Computer Vision System for Street Sweeper Robot

Abstract: With the spread of Covid-19, more people wear personal protective equipment such as gloves and masks, but these items are being littered all over streets, parking lots and parks. This impacts the environment and especially damages the marine ecosystem. Such waste should not be discarded in the environment, nor should it be recycled with other plastic materials; it has to be separated from regular trash collection. Furthermore, littered gloves and masks add to the workload of street cleaners and present potential harm to them. In this paper, we design a computer vision system for a street sweeper robot that picks up masks and gloves and disposes of them safely in garbage containers. The system relies on deep learning techniques for object recognition. In particular, three deep learning models are investigated: the You Only Look Once (YOLO) model, the Faster Region-based Convolutional Neural Network (Faster R-CNN), and DeepLab v3+. The experimental results show that YOLO is the most suitable approach for the proposed system, which achieves an F1 measure of 0.94, an IoU of 0.79, an mAP of 0.94, and a processing time of 0.41 s per image.

Author 1: Ouiem Bchir
Author 2: Sultana Almasoud
Author 3: Lina Alyahya
Author 4: Renad Aldhalaan
Author 5: Lina Alsaeed
Author 6: Nada Aldalbahi

Keywords: Covid-19; street sweeper robot; personal protective equipment (PPE); computer vision; deep learning

Download PDF

Paper 47: An Experimental Study with Fuzzy-Wuzzy (Partial Ratio) for Identifying the Similarity between English and French Languages for Plagiarism Detection

Abstract: With the rapid growth of digital libraries and language translation tools, it is easy to translate text documents from one language to another, which results in cross-language plagiarism. Identifying plagiarism among documents in different languages is more challenging. The main aim of this paper is to translate French documents into English to detect plagiarism and to extract bilingual lexicons. A parallel corpus, a collection of similar and mutually complementary sentences, is used to compare multilingual text. A comparative study is presented: sentence similarity in bilingual content is measured using the proposed Fuzzy-Wuzzy (Partial Ratio) string similarity technique and three techniques from the literature, namely Levenshtein distance, Spacy, and Fuzzy-Wuzzy (Ratio) similarity. The string similarity method based on Fuzzy-Wuzzy (Partial Ratio) outperforms the Spacy and Fuzzy-Wuzzy (Ratio) techniques in terms of accuracy for identifying language similarity.
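A minimal sketch of two of the string measures involved, Levenshtein distance and a partial-ratio-style similarity, is shown below. The partial-ratio function is a simplified reconstruction of the Fuzzy-Wuzzy idea (best match of the shorter string against same-length windows of the longer), not the library's exact algorithm:

```python
import difflib

def levenshtein(a, b):
    """Classic edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def partial_ratio(a, b):
    """Best similarity of the shorter string against same-length windows
    of the longer one (the idea behind Fuzzy-Wuzzy's partial ratio)."""
    short, long_ = (a, b) if len(a) <= len(b) else (b, a)
    best = 0.0
    for i in range(len(long_) - len(short) + 1):
        window = long_[i:i + len(short)]
        best = max(best, difflib.SequenceMatcher(None, short, window).ratio())
    return best
```

Partial ratio is what lets "apple" match "apple pie" perfectly even though a plain ratio would penalize the length difference, which is why it suits sentence-fragment comparison.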

Author 1: Peluru Janardhana Rao
Author 2: Kunjam Nageswara Rao
Author 3: Sitaratnam Gokuruboyina

Keywords: Plagiarism; natural language processing; string similarity; levenshtein distance; fuzzy-wuzzy

Download PDF

Paper 48: Comparing LSTM and CNN Methods in Case Study on Public Discussion about Covid-19 in Twitter

Abstract: This study compares two deep learning methods: Long Short-Term Memory (LSTM) and the Convolutional Neural Network (CNN). The aim is to compare the performance of two fundamentally different deep learning approaches, one based on convolution (CNN) and one designed to address the vanishing-gradient problem (LSTM). The comparison uses a dataset of 4169 tweets obtained by crawling social media through the Twitter API, collected for a specific hashtag keyword, "covid-19 pandemic". The study assesses the sentiment of tweets about the Covid-19 viral epidemic to determine whether they contain positive or negative opinions. Before classification, preprocessing and word-embedding steps are completed; the number of epochs is set to 20 and the hidden layer size to 64. Following the classification process, the study concludes that both methods are appropriate for classifying public discussion of Covid-19. The LSTM method is superior, with an accuracy of 83.3%, precision of 85.6%, recall of 90.6%, and F1-score of 88.5%, while the CNN method achieved an accuracy of 81%, precision of 71.7%, recall of 72%, and F1-score of 72%.

Author 1: Fachrul Kurniawan
Author 2: Yuliana Romadhoni
Author 3: Laila Zahrona
Author 4: Jehad Hammad

Keywords: COVID-19; LSTM; CNN; sentiment analysis

Download PDF

Paper 49: ACT on Monte Carlo FogRA for Time-Critical Applications of IoT

Abstract: The need for instantaneous processing in the Internet of Things (IoT) has led to the notion of fog computing, in which computation is performed close to the data source. Though fog computing reduces latency and bandwidth bottlenecks, the scarcity of fog nodes hampers its efficiency. Moreover, due to the heterogeneity and stochastic behavior of IoT, traditional resource allocation techniques cannot satisfy the time-sensitivity of these applications. An Artificial Intelligence (AI) based reinforcement learning approach, able to self-learn and adapt to a dynamic environment, is therefore sought. The purpose of this work is to propose an Auto Centric Threshold (ACT) enabled Monte Carlo FogRA system that maximizes the utilization of the fog's limited resources with minimum termination time for time-critical IoT requests. FogRA is formulated as a Reinforcement Learning (RL) problem that obtains optimal solutions through continuous interaction with an uncertain environment. Experimental results show that the optimal value achieved by the proposed system is 41% higher than that of the baseline adaptive RA model. The efficiency of FogRA is evaluated under different performance metrics.
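The Monte Carlo return-averaging at the heart of such RL formulations can be sketched as follows; this is a generic first-visit Monte Carlo value estimator, not the FogRA system itself, and the episode data are hypothetical:

```python
def first_visit_mc_values(episodes, gamma=1.0):
    """Average the discounted return observed after each state's first visit.

    episodes: list of [(state, reward), ...] trajectories.
    """
    totals, counts = {}, {}
    for episode in episodes:
        g = 0.0
        returns = []
        for state, reward in reversed(episode):   # back-accumulate returns
            g = reward + gamma * g
            returns.append((state, g))
        returns.reverse()
        seen = set()
        for state, g in returns:                  # keep only first visits
            if state not in seen:
                seen.add(state)
                totals[state] = totals.get(state, 0.0) + g
                counts[state] = counts.get(state, 0) + 1
    return {s: totals[s] / counts[s] for s in totals}

# Two hypothetical resource-allocation trajectories of (state, reward) pairs
episodes = [[("idle", 0.0), ("alloc", 1.0)],
            [("idle", 0.0), ("alloc", 3.0)]]
values = first_visit_mc_values(episodes)
```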

Author 1: A. S. Gowri
Author 2: P. Shanthi Bala
Author 3: Zion Ramdinthara
Author 4: T. Siva Kumar

Keywords: Cloud; edge; fog; Internet of Things (IoT); Reinforcement Learning (RL)

Download PDF

Paper 50: Delay-Aware and Profit-Maximizing Task Migration for the Cloudlet Federation

Abstract: As a result of the Open Edge Computing (OEC) project, the cloudlet embodies the middle layer of the edge computing architecture. Because cloudlets are deployed close to the user, responding to user requests through a cloudlet can reduce delay, improve security, and reduce bandwidth consumption. To improve the quality of the user experience, more and more cloudlets need to be deployed, which significantly increases the deployment and management costs of Cloudlet Service Providers (CLPs). The cloudlet federation has therefore emerged as a new paradigm that reduces these costs by sharing cloudlet resources among CLPs. In the cloudlet federation scenario, attention must still be paid to the heterogeneity of resource prices, the balance of benefits among CLPs, and the more complex delay computation when exploring task migration strategies. For delay-sensitive and delay-tolerant tasks, a delay-aware and profit-maximizing task migration strategy is proposed that considers the relationship between delay and the quotation of different tasks, as well as the dynamic pricing mechanism used when resources are shared among CLPs. Experimental results show that the proposed algorithm outperforms the baseline algorithm in terms of revenue and response delay.

Author 1: Hengzhou Ye
Author 2: Junhao Guo
Author 3: Xinxiao Li

Keywords: Cloudlet federation; task migration; delay-aware; dynamic pricing; profit-maximizing; edge computing

Download PDF

Paper 51: Collaborative Ontology Construction Framework: An Attempt to Rationalize Effective Knowledge Dissemination

Abstract: Ontologies are rich domain conceptualizations that can be utilized in effective knowledge dissemination strategies, and knowledge dissemination plays a vital role in any industry. In this research, a novel framework for collaborative ontology construction is designed and evaluated. A rational process involving the iterative and incremental participation of domain specialists and ontologists is discussed and planned for the collaborative construction. Additionally, the shortcomings of current ontology construction methodologies and frameworks have been rigorously reviewed. The responses received from domain specialists and ontologists, along with the gaps identified in the literature, form the backbone of the novel framework's design. The resulting ontology increments have the potential for effective knowledge distribution once coupled with technologies such as chatbots. The proposed framework was deployed in three different domains, and three ontology increments were created for each; their efficacy was then tested with the involvement of domain-specific stakeholders, yielding an overall acceptance of 82%.

Author 1: Kaneeka Vidanage
Author 2: Noor Maizura Mohamad Noor
Author 3: Rosmayati Mohemad
Author 4: Zuriana Abu Bakar

Keywords: Collaborative; domain-specialists; framework; methodology; ontologies

Download PDF

Paper 52: BiLSTM and Multiple Linear Regression based Sentiment Analysis Model using Polarity and Subjectivity of a Text

Abstract: Sentiment analysis is increasingly requested by companies seeking to improve their services. The main contribution of this paper is a combined sentiment analysis model able to determine the binary polarity of an analyzed text. The proposed model is based on a Bidirectional Long Short-Term Memory (BiLSTM) recurrent neural network and the TextBlob model, which computes both the polarity and the subjectivity of the input text. These two models are combined in a classification stage that applies each of the Logistic Regression, k-Nearest Neighbors, Random Forest, Support Vector Machine, K-means and Naive Bayes algorithms. The training and test data come from the Twitter Airlines Sentiment dataset. Experimental results show that the proposed system gives better performance metrics (accuracy and F1 score) than the BiLSTM and TextBlob models used separately. The results can help organizations, companies and brands obtain useful information for understanding a customer's opinion of a particular product or service.

Author 1: Marouane CHIHAB
Author 2: Mohamed CHINY
Author 3: Nabil Mabrouk
Author 4: Hicham BOUSSATTA
Author 5: Younes CHIHAB
Author 6: Moulay Youssef HADI

Keywords: Sentiment analysis; textblob; long short term memory; logistic regression; k-nearest neighbors; random forest; support vector machine; k-means; naive bayes

Download PDF

Paper 53: Local Texture Representation for Timber Defect Recognition based on Variation of LBP

Abstract: This paper evaluates timber defect classification performance across four variants of the Local Binary Pattern (LBP). The light and heavy timbers used in the study are Rubberwood, KSK, Merbau, and Meranti, with eight natural timber defects involved: bark pocket, blue stain, borer holes, brown stain, knot, rot, split, and wane. A series of LBP feature sets was created by employing the basic LBP, rotation-invariant LBP, uniform LBP, and rotation-invariant uniform LBP in the feature extraction procedure. Several common classifiers were then used to separate the timber defect classes: Artificial Neural Network (ANN), J48 Decision Tree (J48), and K-Nearest Neighbor (KNN). Uniform LBP with the ANN classifier provides the best performance at 63.4%, superior to all other LBP types. Comparing the performance of the ANN classifier with uniform LBP across the timber defect classes and clean wood, features from Merbau provide the greatest F-measure, surpassing the other feature sets.
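The basic and uniform LBP variants mentioned above can be sketched as follows. This minimal, library-free version computes the 8-neighbour code for a single pixel and the standard uniformity test (at most two circular 0/1 transitions); the toy image is hypothetical:

```python
def lbp_code(img, r, c):
    """Basic 8-neighbour LBP code for pixel (r, c), clockwise from top-left;
    a neighbour >= centre contributes a 1 bit (first neighbour is the MSB)."""
    centre = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for dr, dc in offsets:
        code = (code << 1) | (1 if img[r + dr][c + dc] >= centre else 0)
    return code

def is_uniform(code):
    """Uniform patterns have at most two 0/1 transitions on the circular code."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
code = lbp_code(img, 1, 1)
```

A full feature set would histogram these codes over the timber image, with the uniform variant binning all non-uniform codes together.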

Author 1: Rahillda Nadhirah Norizzaty Rahiddin
Author 2: Ummi Raba’ah Hashim
Author 3: Lizawati Salahuddin
Author 4: Kasturi Kanchymalay
Author 5: Aji Prasetya Wibawa
Author 6: Teo Hong Chun

Keywords: Automated visual inspection; local binary pattern; timber defect classification; texture feature; feature extraction

Download PDF

Paper 54: Unethical Internet Behaviour among Students in High Education Institutions: A Systematic Literature Review

Abstract: The modern Internet era brings advantages and disadvantages: alongside better, quicker, and increased working capacity in less time comes the advent of unethical Internet conduct. Even though research on unethical Internet activity has advanced, systematic literature reviews taking a comprehensive perspective on unethical Internet behaviour among university students are still lacking. This systematic review therefore provides a theoretical foundation addressing the following research questions: RQ1 - How are unethical Internet behaviours among university students classified? RQ2 - What theoretical lenses are used in unethical Internet behaviour research? RQ3 - What demographic and risk factors are involved in unethical Internet behaviour research? RQ4 - What are the challenges and research opportunities for unethical Internet behaviour research in university settings? To answer these questions, a total of 64 publications published between 2010 and 2020 underwent a systematic review. The study illustrates how university students' unethical Internet activity is categorised, offers a comprehensive grasp of the factors that affect unethical Internet behaviour, and overviews the theories that have been utilised to explain and forecast such behaviours. It also discusses literature gaps for future research to contribute to studies of human ethical behaviour.

Author 1: Zakiah Ayop
Author 2: Aslinda Hassan
Author 3: Syarulnaziah Anawar
Author 4: Nur Fadzilah Othman
Author 5: Rabiah Ahmad
Author 6: Nor Raihana Mohd Ali
Author 7: Maslin Masrom

Keywords: Systematic literature review; unethical Internet behavior; university student; Internet; ethics

Download PDF

Paper 55: Teachers’ Attitudes Towards the Use of Augmented Reality Technology in Teaching Arabic in Primary School Malaysia

Abstract: The era of Industrial Revolution 4.0 has prompted debate about teachers' willingness to use information technology in teaching Arabic. New technologies with positive effects on teaching have emerged, such as augmented reality, which has been applied in education systems. However, research remains limited with regard to foreign language teaching. The present study therefore discusses teachers' readiness, in terms of knowledge and attitude, toward the use of augmented reality technology in teaching Arabic in Malaysia. The study was carried out using a quantitative methodology, with a survey questionnaire distributed to 36 Arabic language teachers as respondents. The questionnaire forms the basis for data collection to identify respondents' level of readiness. Data analysis was carried out using the Statistical Package for the Social Sciences version 26 (SPSS). The results show that teachers' readiness, in terms of attitude toward using augmented reality technology in teaching Arabic in Malaysia, is at a moderate level. Nevertheless, teachers' attitudes and knowledge are still found to be at a low level, especially among veteran teachers with no information technology experience, which dampens their enthusiasm for using technology in their teaching. The findings are hoped to be useful as a guide for stakeholders responsible for ensuring that augmented reality based teaching and learning of Arabic can be implemented meaningfully, thereby improving students' mastery of the Arabic language.

Author 1: Lily Hanefarezan Asbulah
Author 2: Mus’ab Sahrim
Author 3: Nor Fatini Aqilah Mohd Soad
Author 4: Nur Afiqah Athirah Mohd Rushdi
Author 5: Muhammad Afiq Hilmi Mhd Deris

Keywords: Attitude; teachers; augmented reality; Arabic language; primary school

Download PDF

Paper 56: An Algorithm for Shrinking Blood Receptacles using Retinal Internal Pictures for Clinical Characteristics Measurement

Abstract: Manual techniques for shrinking (segmenting) the blood receptacles (vessels) in retinal fundus images have significant limitations, such as high time consumption and the possibility of human error, particularly given the sophisticated structure of the blood receptacles and the huge number of retinal fundus photographs that need to be analysed. The proposed automatic algorithm segments retinal fundus photographs and extracts helpful clinical characteristics from them, guiding the eye caregiver towards early diagnosis of various retinal disorders and therapy evaluation. A precise, quick, and fully automatic algorithm for shrinking blood receptacles, together with a clinical characteristics measurement technique for internal retinal pictures, is suggested in order to increase diagnostic accuracy and reduce the ophthalmologist's burden. The proposed algorithm's main pipeline consists of two fundamental stages: picture shrinkage and medical feature elicitation. Exhaustive experiments were conducted to evaluate the efficacy of the fully automated shrinkage system in extracting retinal blood receptacles, using the exceedingly demanding fundus images of the DRIVE and HRF datasets. The accuracy of the algorithm was first tested on its ability to accurately recognize the retinal blood receptacle structure. Five quantitative performance measures were computed to validate its efficacy: accuracy (Acc.), sensitivity (Sen.), specificity (Spe.), positive predictive value (PPV), and negative predictive value (NPV). When contrasted with modern receptacle shrinking approaches on the DRIVE dataset, the produced results are greatly improved, with accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of 98.78%, 98.32%, 97.23%, and 90.
Based on the five quantitative performance indicators, the HRF dataset led to the following results: 98.76%, 98.87%, 99.17%, 96.88%, and 100%.
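The five reported performance measures are standard confusion-matrix ratios; a minimal sketch with hypothetical pixel labels:

```python
def confusion_metrics(y_true, y_pred):
    """Acc, Sen (recall), Spe, PPV and NPV from binary vessel/background labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "Acc": (tp + tn) / (tp + tn + fp + fn),
        "Sen": tp / (tp + fn),   # sensitivity / recall
        "Spe": tn / (tn + fp),   # specificity
        "PPV": tp / (tp + fp),   # positive predictive value
        "NPV": tn / (tn + fn),   # negative predictive value
    }

# Toy pixel labels: 1 = vessel, 0 = background
truth = [1, 1, 1, 0, 0, 0, 0, 0]
pred  = [1, 1, 0, 0, 0, 0, 0, 1]
m = confusion_metrics(truth, pred)
```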

Author 1: Aws A. Abdulsahib
Author 2: Moamin A Mahmoud
Author 3: Sally A. Al-Hasnawi

Keywords: Segmentation vessels / shrinking blood receptacles; clinical characteristics measurement; internal pictures for retinal; morphological filtering algorithm

Download PDF

Paper 57: Detection of Cyber-Physical Attacks using Physical Model with Nonparametric EWMA Detector

Abstract: Industrial Control Systems (ICSs) can suffer cyber-physical attacks resulting in accidents, damage, or financial loss. Attacks can be detected in either the physical space or the cyberspace of the ICS, and detection in the physical space can be based on physical models of the system. To model the physical system, this study uses a data-driven modeling approach as an alternative to an analytic one, applying dynamic mode decomposition with control (DMDc) under the assumption of full state measurement. The attack detector used in earlier research with predictive physical models is the cumulative sum (CUSUM), which only applies to normally distributed residual data. To detect cyber-physical attacks without that assumption, this research uses a nonparametric exponentially weighted moving average (EWMA) detector. The study uses a dataset from the Secure Water Treatment (SWaT) testbed. The approach successfully detected 8 out of 10 attacks on the first SWaT subsystem. This study demonstrates that DMDc yields a better goodness of fit and that the nonparametric EWMA can serve as an alternative detector when residual data do not follow a normal distribution.
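The EWMA statistic behind such a detector can be sketched as below. In the nonparametric variant, the control limits would be taken from empirical quantiles of attack-free residuals rather than from a normality assumption; the residual values, smoothing factor, and limits here are hypothetical:

```python
def ewma_alarms(residuals, lam, lo, hi):
    """Flag indices where the EWMA statistic leaves the [lo, hi] band.

    z_t = lam * x_t + (1 - lam) * z_{t-1}; with a nonparametric detector,
    lo/hi come from empirical quantiles of attack-free residuals.
    """
    z, alarms = 0.0, []
    for t, x in enumerate(residuals):
        z = lam * x + (1.0 - lam) * z
        if z < lo or z > hi:
            alarms.append(t)
    return alarms

# Residuals: near zero under normal operation, drifting under a spoofed sensor
residuals = [0.1, -0.2, 0.0, 2.0, 2.5, 3.0]
alarms = ewma_alarms(residuals, lam=0.5, lo=-1.0, hi=1.0)
```

The smoothing makes the detector respond to a sustained drift (the last three residuals) rather than to isolated noisy samples.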

Author 1: Joko Supriyadi
Author 2: Jazi Eko Istiyanto
Author 3: Agfianto Eko Putra

Keywords: Industrial control systems; cyber-physical attacks; physical model; dynamic mode decomposition method with control (DMDc); nonparametric exponentially weighted moving average (EWMA)

Download PDF

Paper 58: Performance Evaluation of New Feature based on Ordinal Pattern Analysis for Iris Biometric Recognition

Abstract: Iris recognition is currently the most efficient biometric identification technique and a common system on the practical front. Though most commercial systems use the patented Daugman algorithm, which mainly uses wavelet-based features, research is still active in identifying novel features that can provide personal identification. Here, the first proposal of using an ordinal pattern measure based on nonlinear time series analysis is put forth to characterize the unique iris pattern of individuals and thereby perform personal identification. Dispersion entropy is a nonlinear time-series analysis method that is highly efficient in characterizing the complexity of any data series, with proven effectiveness on model system dynamics as well as real-world data series. The results show that dispersion entropy can be used to identify iris images of specific individuals. The efficiency of this method is evaluated by computing the correlation and RMSE between dispersion entropy values of normalized iris image rubber-sheet data. The experimental results on the popular CASIA v1 iris database demonstrate that the proposed method can effectively discriminate iris images from different individuals. The results specifically indicate that the density of information lies along the angular direction of iris images, which falls along the rows of the rubber-sheet data. This can be efficiently utilized with the method of ordinal pattern characterization, which shows promising potential for incorporation into biometric personal identification systems.
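Dispersion entropy itself is straightforward to sketch: each sample is mapped to one of c classes via the normal CDF, m-length dispersion patterns are counted, and the normalised Shannon entropy of their frequencies is returned. This follows the standard Rostaghi-Azami formulation, not necessarily the paper's exact parameter choices:

```python
import math
from collections import Counter
from statistics import mean, pstdev

def dispersion_entropy(x, m=2, c=3):
    """Sketch of dispersion entropy: normal-CDF mapping into c classes,
    then normalised Shannon entropy of m-length dispersion patterns."""
    mu, sigma = mean(x), pstdev(x)
    # Map each sample into (0, 1) via the normal CDF, then into classes 1..c.
    y = [0.5 * (1 + math.erf((v - mu) / (sigma * math.sqrt(2)))) for v in x]
    z = [min(c, max(1, round(c * v + 0.5))) for v in y]
    # Relative frequencies of the embedding patterns of length m.
    patterns = Counter(tuple(z[i:i + m]) for i in range(len(z) - m + 1))
    n = sum(patterns.values())
    probs = [k / n for k in patterns.values()]
    return -sum(p * math.log(p) for p in probs) / math.log(c ** m)

de = dispersion_entropy([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])
```

The normalisation by log(c^m) keeps the value in (0, 1], so entropies of different rubber-sheet rows are directly comparable.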

Author 1: Sheena S
Author 2: Sheena Mathew
Author 3: Bindu M Krishna

Keywords: Dispersion entropy; iris recognition; rubber-sheet data; ordinal patterns; correlation

Download PDF

Paper 59: Mitigation of DDoS Attack in Cloud Computing Domain by Integrating the DCLB Algorithm with Fuzzy Logic

Abstract: Cloud computing is an easy way to obtain services, resources, and applications from any location on the internet. For the future of data generation, it is an unavoidable conclusion. Despite its many attractive properties, the cloud is vulnerable to a variety of attacks. One well-known attack that targets the availability of amenities is the Distributed Denial of Service (DDoS) attack. A DDoS assault overwhelms the server with massive quantities of regular or intermittent traffic. It compromises the cloud servers' services and makes it harder to reply to legitimate users of the cloud. A monitoring system with a correct resource scaling approach should be created to regulate and monitor DDoS assaults. During an attack, the network is overwhelmed with excessive traffic of resource-heavy requests, resulting in the denial of needed services to genuine users. In this research, a unique way to analyze the resources used by cloud users is presented: the resources consumed are lowered when the network is overburdened with excessive traffic, and the Dynamic Cloud Load Balancing (DCLB) algorithm is used to balance the overhead towards the server. The core premise is to monitor traffic using the fuzzy logic approach, which employs different traffic parameters in conjunction with various built-in measures to recognize DDoS attack traffic in the network. Finally, the proposed method shows a 93% average detection rate when compared to the existing model. This method is a unique attempt to comprehend the importance of DDoS mitigation techniques as well as good resource management during an attack.

Author 1: Amrutha Muralidharan Nair
Author 2: R Santhosh

Keywords: DDoS attack; resource scaling; DCLB; fuzzy logic; traffic parameters

Download PDF

Paper 60: Federated Learning Approach for Measuring the Response of Brain Tumors to Chemotherapy

Abstract: Brain tumor is a fatal disease and one of the major causes of rising death rates in adults. Predicting the methylation status of the O6-Methylguanine-DNA Methyltransferase (MGMT) gene using Magnetic Resonance Imaging (MRI) is highly important, since it is a predictor of brain tumor response to chemotherapy and reduces the number of needed surgeries. Deep Learning (DL) approaches have become powerful in extracting meaningful relationships and making accurate predictions. DL-based models require a large database and access to, or transfer of, patient data to train the model. Federated learning has recently gained popularity, as it offers practical solutions for data privacy, centralized computation, and high computing power. This study investigates the feasibility of federated learning (FL) by developing an FL-based approach to predict MGMT promoter methylation status using the BraTS2021 dataset for four MRI sequence types: Fluid Attenuated Inversion Recovery (FLAIR), T1-weighted (T1w), T1-weighted Gadolinium Post Contrast (T1wCE/T1Gd), and T2-weighted (T2w). The FL model was compared to the centralized DL-based model, and the experimental results show that, even with imbalanced and heterogeneous datasets, the FL approach reached 99.99% of the model quality achieved with centralized data after 300 communication rounds between 10 institutions, using the OpenFL framework and an improved EfficientNet-B3 neural network architecture.
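The aggregation step at the heart of such a federated setup can be sketched as federated averaging: the server combines each institution's model weights, weighted by its sample count. The paper uses the OpenFL framework for the full protocol; this only illustrates the aggregation idea with toy weight arrays:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Sketch of federated averaging (FedAvg): a sample-count-weighted
    average of per-client weight arrays, layer by layer."""
    total = sum(client_sizes)
    return [sum(n / total * w[i] for w, n in zip(client_weights, client_sizes))
            for i in range(len(client_weights[0]))]

# Two toy "institutions", each with a one-layer model, holding 100 and 300 samples.
w_a = [np.array([1.0, 2.0])]
w_b = [np.array([3.0, 4.0])]
avg = fedavg([w_a, w_b], [100, 300])
```

Because the average is weighted by sample counts, the larger institution pulls the global model proportionally harder, which matters for the imbalanced datasets the abstract mentions.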

Author 1: Omneya Atef
Author 2: Mustafa Abdul Salam
Author 3: Hisham Abdelsalam

Keywords: Federated Learning (FL); BraTS2021; Data Privacy; O6-Methylguanine-DNA Methyltransferase (MGMT); OpenFL; EfficientNet-B3; brain tumors; Deep Learning (DL)

Download PDF

Paper 61: A Fake News Detection System based on Combination of Word Embedded Techniques and Hybrid Deep Learning Model

Abstract: At present, most people prefer using different online sources for reading news. These sources can easily spread fake news for several malicious reasons. Detecting this unreliable news is an important task in the Natural Language Processing (NLP) field. Many governments and technology companies are engaged in this research field to prevent the manipulation of public opinion and to spare people and society the huge damage that can result from the spreading of misleading information on online social media. In this paper, we present a new deep learning method to detect fake news based on a combination of different word embedding techniques and a hybrid Convolutional Neural Network (CNN) and Bidirectional Long Short-Term Memory (BILSTM) model. We trained the classification model on the unbiased WELFake dataset. The best method was a combination of a pre-trained Word2Vec CBOW model and a Word2Vec Skip-Gram model with CNN on BILSTM layers, yielding an accuracy of up to 97%.

Author 1: Mohamed-Amine OUASSIL
Author 2: Bouchaib CHERRADI
Author 3: Soufiane HAMIDA
Author 4: Mouaad ERRAMI
Author 5: Oussama EL GANNOUR
Author 6: Abdelhadi RAIHANI

Keywords: Deep learning (DL); Bidirectional Long Short-Term Memory (BILSTM); Convolutional Neural Network (CNN); Natural Language Processing (NLP); fake news

Download PDF

Paper 62: Research on Blind Obstacle Ranging based on Improved YOLOv5

Abstract: An improved model based on YOLOv5s is proposed for the problem that the YOLOv5 network model does not have high localization accuracy when detecting and identifying obstacles at different distances and sizes from a blind person, which in turn leads to low accuracy in measuring distances. There are two main core ideas. First, a feature scale and a corresponding prediction head are added to YOLOv5 to improve the detection accuracy of small objects on blind paths. Second, the SK attention mechanism is introduced in the feature fusion part. It can adaptively adjust the receptive field for feature maps of different scales and more accurately extract objects of different distances and sizes on the blind path, which improves the accuracy of detection and of the subsequent distance measurement. Experiments demonstrated that the improved YOLOv5 model raised mAP by 6.29% over the original YOLOv5 model, with only a small difference in time consumption, and the per-category AP values improved by 2.13% to 8.19%. The average accuracy of the measured distance to obstacles located 1.5 m to 3.5 m from the camera is 98.20%. This shows that the improved YOLOv5 algorithm has good real-time performance and accuracy.
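The binocular ranging named in the keywords rests on the standard stereo relation depth = focal length × baseline / disparity; once the improved detector localizes an obstacle in both camera views, its disparity gives the distance. A minimal sketch with illustrative camera parameters (not the paper's setup):

```python
def binocular_depth(focal_px, baseline_m, disparity_px):
    """Standard binocular (stereo) ranging: Z = f * B / d, with the focal
    length and disparity in pixels and the baseline in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. an 800-px focal length, 6 cm baseline, and 24-px disparity
depth = binocular_depth(800, 0.06, 24)  # -> 2.0 metres
```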

Author 1: Yongquan Xia
Author 2: Yiqing Li
Author 3: Jianhua Dong
Author 4: Shiyu Ma

Keywords: Binocular ranging; object detection; attention mechanism

Download PDF

Paper 63: Classification Method for Power Load Data of New Energy Grid based on Improved OTSU Algorithm

Abstract: A classification method for power load data of new energy grids based on an improved OTSU algorithm is studied to improve the classification accuracy of power load data. Following the idea of two-dimensional visualization of time series, the Gramian Angular Field (GAF) method is used to transform the power load data of the new energy grid into a two-dimensional power load image. Intra-class dispersion is introduced, and the improved OTSU algorithm is used to segment the foreground and background of the two-dimensional image according to the pixel grey values of the image and the one-dimensional inter-class variance corresponding to the grey values of each pixel's neighborhood. The two-dimensional foreground image of the power load is taken as the input sample of a convolutional neural network, which extracts features from the foreground image through its convolutional layers. From the extracted features, the classification results for the power load data of the new energy grid are output through three steps: nonlinear processing, pooling, and classification by the fully connected layer. The experimental results show that this method can accurately classify the power load data of new energy grids, with a classification accuracy higher than 97%.
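For reference, the classic one-dimensional OTSU method that the paper improves upon picks the grey level maximising the between-class variance. The sketch below is only this baseline; the paper's variant additionally uses intra-class dispersion and neighbourhood grey values:

```python
import numpy as np

def otsu_threshold(gray):
    """Classic 1-D OTSU thresholding: exhaustively pick the grey level
    that maximises the between-class variance of the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# A toy image with a dark background (~30) and bright foreground (~200).
img = np.array([[30, 32, 200], [31, 199, 201], [30, 30, 198]], dtype=np.uint8)
t = otsu_threshold(img)
```

Any threshold between the two grey-level clusters separates foreground from background here, which is exactly the property the improved algorithm exploits on the GAF images.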

Author 1: Xun Ma
Author 2: Kai Liu
Author 3: Anlei Liu
Author 4: Xuchao Jia
Author 5: Yong Wang

Keywords: Improved OTSU algorithm; new energy grid; power load; classification method

Download PDF

Paper 64: Study on the Technical Characteristics of Badminton Players in Different Stages through Video Analysis

Abstract: Through video analysis, Tai Tzu Ying, an elite athlete, and badminton player A from Chongqing Institute of Engineering were studied in this paper. The videos of the two athletes were organized and recorded, and their use of techniques in different stages was compared. The results found that Tai Tzu Ying's serve technique was flexible, with few errors, while Player A's serve technique was monotonous, with many errors. In terms of serve receive, Tai Tzu Ying was more aggressive, mainly using the rush shot and spinning net shot, while Player A mainly used the spinning net shot and lift shot. The comparison of techniques in the front, middle, and back courts showed that Tai Tzu Ying's playing style was more aggressive, while Player A's was more conservative. This paper compared the two athletes to understand the technical characteristics of excellent athletes and gives some suggestions for the training of school badminton players.

Author 1: Jin Qiu

Keywords: Women’s singles; video analysis; athletes; badminton; technical characteristics

Download PDF

Paper 65: Deep Architecture based on DenseNet-121 Model for Weather Image Recognition

Abstract: Weather conditions have a significant effect on humans’ daily lives and production, ranging from clothing choices to travel, outdoor sports, and solar energy systems. Recent advances in computer vision based on deep learning methods have shown notable progress in both scene awareness and image processing problems. These results have highlighted network depth as a critical factor, as deeper networks achieve better outcomes. This paper proposes a deep learning model based on DenseNet-121 to effectively recognize weather conditions from images. DenseNet performs significantly better than previous models; it also uses less processing power and memory to further increase its efficiency. Since this field currently lacks adequate labeled images for training in weather image recognition, transfer learning and data augmentation techniques were applied. Using the ImageNet dataset, these techniques fine-tuned pre-trained models to speed up training and achieve better end results. Because DenseNet-121 requires sufficient data and is architecturally complex, the expansion of data via geometric augmentation—such as rotation, translation, flipping, and scaling—was critical in decreasing overfitting and increasing the effectiveness of fine-tuning. These experiments were conducted on the RFS dataset, and the results demonstrate both the efficiency and advantages of the proposed method, which achieved an accuracy rate of 95.9%.
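The geometric augmentations named above (rotation, translation, flipping, scaling) can each be sketched in a few lines of plain NumPy; the specific operations below are illustrative simplifications, not the paper's training pipeline:

```python
import numpy as np

def augment(img):
    """Geometric augmentations on an HxW image array: rotation, flip,
    a crude wrapping translation, and a naive nearest-neighbour upscale."""
    rotated = np.rot90(img)                    # 90-degree rotation
    flipped = np.fliplr(img)                   # horizontal flip
    shifted = np.roll(img, shift=1, axis=1)    # 1-px translation (wraps around)
    scaled = np.kron(img, np.ones((2, 2)))     # 2x nearest-neighbour upscale
    return rotated, flipped, shifted, scaled

img = np.array([[1, 2], [3, 4]])
rotated, flipped, shifted, scaled = augment(img)
```

Each transform preserves the weather label while changing the pixels, which is why such expansion reduces overfitting when fine-tuning a large model like DenseNet-121 on a small dataset.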

Author 1: Saleh A. Albelwi

Keywords: Weather recognition; DenseNet-121; deep learning; data augmentation; transfer learning

Download PDF

Paper 66: Research on Improved Shallow Neural Network Big Data Processing Model based on Gaussian Function

Abstract: Applications of the new generation of communication technology are gradually diversifying, and the number of global internet users is increasing, leading some large enterprises to rely increasingly on faster and more efficient big data processing technology. To address the shortcomings of current big data processing algorithms, such as slow computing speed, computing accuracy that needs improvement, and poor online real-time learning ability, this research combines incremental learning and sliding-window ideas to design two improved radial basis function (RBF) neural network algorithms with the Gaussian function as the kernel function. A Duffing equation example and the "Top 100 SKUs for Taobao Search Glasses Sales" dataset were used to verify the performance of the designed algorithms. The Duffing equation results show that, with a total sample of 100,000, the mean square errors of the IOL, SWOL, SVM, and ResNet50 algorithms are 1.86e-07, 1.59e-07, 3.37e-07, and 2.67e-07 respectively. The results on the "Top 100 SKUs for Taobao Search Glasses Sales" dataset show that, with 800 samples in the test set, the root mean square errors of the IOL, SWOL, SVM, and ResNet50 algorithms are 0.0060, 0.0056, 0.0069, and 0.0073 respectively. This shows that the RBF online learning algorithm designed in this study, which integrates sliding windows, has a stronger comprehensive ability to process big data, and has application value for improving the accuracy of online data-based commodity recommendation in e-commerce and other industries.
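The basic building block here, an RBF network with Gaussian kernels, can be sketched in batch form: hidden activations are Gaussian distances to a set of centres, and the output weights come from least squares. The incremental and sliding-window training that the paper adds is omitted in this sketch:

```python
import numpy as np

def rbf_fit_predict(X, y, centers, width, X_test):
    """Sketch of an RBF network with Gaussian kernels: hidden activations
    exp(-||x - c||^2 / (2*width^2)), output weights by least squares."""
    def phi(A):
        d2 = ((A[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))
    W, *_ = np.linalg.lstsq(phi(X), y, rcond=None)
    return phi(X_test) @ W

# Toy 1-D regression: learn y = sin(x) on [0, 3] with 10 Gaussian centres.
X = np.linspace(0, 3, 40)[:, None]
y = np.sin(X).ravel()
centers = np.linspace(0, 3, 10)[:, None]
pred = rbf_fit_predict(X, y, centers, width=0.5, X_test=X)
```

An online variant would update `W` per sample (incremental learning) or refit over only the most recent window of samples (sliding window), which is the comparison the abstract's IOL and SWOL algorithms make.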

Author 1: Lifang Fu

Keywords: Gaussian function; RBF; big data processing; incremental learning; sliding window

Download PDF

Paper 67: Conceptual Model of Augmented Reality Mobile Application Design (ARMAD) to Enhance user Experience: An Expert Review

Abstract: Rapid technological advancement has altered the demands of user experience (UX) in product design. However, research has shown a gap and paucity of conceptual and practical models in this field that may serve as guidelines for the design of immersive technologies such as augmented reality (AR) applications. Identifying the variables and components that influence AR design is critical for creating a great UX. The literature indicates that emotion is the primary driver of UX. Therefore, this study proposed a conceptual design model for AR mobile applications that incorporates user interface, interaction, and content design elements while taking users' emotions into consideration in order to improve the UX. The focus of this study is to evaluate the proposed conceptual design model of augmented reality mobile application design (ARMAD) through expert reviews. Feedback from the expert reviewers is compiled and illustrated in order to refine the initial ARMAD model. The findings showed that the majority of the expert reviewers agreed that the conceptual design model is suitable as a guideline for designing AR applications that enhance the UX through emotional elicitation. Accordingly, the ARMAD model has been refined according to the feedback and suggestions from the expert reviewers. This model will provide researchers and practitioners insight into the essence of AR design that influences the user experience.

Author 1: Nik Azlina Nik Ahmad
Author 2: Ahmad Iqbal Hakim Suhaimi
Author 3: Anitawati Mohd Lokman

Keywords: Augmented reality; conceptual design model; emotional UX; Kansei Engineering; mobile application; user experience

Download PDF

Paper 68: A Novel Prediction Model for Compiler Optimization with Hybrid Meta-Heuristic Optimization Algorithm

Abstract: Compiler designers need months or sometimes years to construct heuristic optimization rules for a specific compiler. For every new processor, the designers must readjust the heuristics to obtain the expected processor performance. The main purpose of the developed approach is to build a prediction model with optimization constraints for transforming programs with a lower training overhead. This optimization problem is addressed with a novel prediction model based on derived features and a neural network. Here, a novel compiler optimization prediction model is developed. Static and dynamic features, as well as improved Relief-based features, are derived and provided as input to a neural network (NN) scheme, in which the weights are tuned via the Honey Badger Adopted BES (HBA-BEO) model. Finally, the NN produces the final predicted output. The analysis outcomes demonstrate the superiority of the HBA-BEO model.

Author 1: Sandeep U. Kadam
Author 2: Sagar B. Shinde
Author 3: Yogesh B. Gurav
Author 4: Sunil B Dambhare
Author 5: Chaitali R Shewale

Keywords: Compiler; prediction; improved relief; HBA-BEO model; neural network

Download PDF

Paper 69: A Deep Learning and Machine Learning Approach for Image Classification of Tempered Images in Digital Forensic Analysis

Abstract: Multimedia images are a primary means of communication across social media and other websites. Multimedia security has gained the attention of modern researchers and poses dynamic challenges such as image forensics, image tampering, and deep fakes. Malicious users tamper with images by embedding noise, leading to misinterpretation of the content. Identifying and authenticating an image by detecting the forgery operations performed on it is essential. In our proposed model, we detect the forged region using the machine learning model SVM in the first iteration and a Convolutional Neural Network in the second iteration, with the Discrete Cosine Transform (DCT) used for feature extraction. The proposed model is tested with the Corel 10K dataset, and an average accuracy of 98% is obtained for all kinds of image operations, including scaling, rotation, and augmentation.

Author 1: Praveen Chitti
Author 2: K. Prabhushetty
Author 3: Shridhar Allagi

Keywords: Support Vector Machine (SVM); Discrete Cosine Transform (DCT); Convolution Neural Network (CNN); Image Forensics and Image Forgery

Download PDF

Paper 70: Evaluation of Land Use/Land Cover Classification based on Different Bands of Sentinel-2 Satellite Imagery using Neural Networks

Abstract: Spatial data analytics is an emerging technology, and artificial neural network techniques play a major role in analysing critical datasets. Integrating remote sensing data with deep neural networks has opened up several research problems. This paper aims at producing a land use/land cover (LULC) map of the Bangalore region, Karnataka, India, with various band combinations of Sentinel-2 satellite imagery obtained from Google Earth Engine. The LULC map classes include water, urban, forest, vegetation, and open land. Band combinations of satellite images represent different characteristics of the spatial data; hence, several band combinations are used to build the LULC maps. Classified maps are also generated using different neural networks with a pixel-based classification approach. Appropriate performance metrics were identified to evaluate the classification results, such as accuracy, precision, recall, F1-score, and the confusion matrix. Among the neural networks, the convolutional neural network outperformed the others, with 98.1% accuracy and lower error rates in the confusion matrix for the RGBNIR (4328) band combination of the satellite imagery.

Author 1: Pallavi M
Author 2: Thivakaran T K
Author 3: Chandankeri Ganapathi

Keywords: Sentinel-2; neural networks; convolutional neural networks; remote sensing data; land use land cover maps

Download PDF

Paper 71: Exponential Decay Function-Based Time-Aware Recommender System for e-Commerce Applications

Abstract: Unlike traditional recommendation systems that rely only on the user's preferences, context-aware recommendation systems (CARS) consider the user's contextual information, such as time, weather, and geographical location. These data are used to create more intelligent and effective recommendation systems. Time is one of the most important and influential factors affecting users' preferences and purchasing behavior. Thus, in this paper, time-aware recommendation systems are investigated using two common methods (bias and decay) to incorporate the time parameter into three different recommendation algorithms: Matrix Factorization, K-Nearest Neighbor (KNN), and the Sparse Linear Method (SLIM). The performance study is based on an e-commerce database that includes basic user purchasing actions such as add-to-cart and buy. Results are compared in terms of precision, recall, and Mean Average Precision (MAP). The results show that Decay-MF and Decay-SLIM outperform the bias-based CAMF and CA-SLIM. On the other hand, Decay-KNN reduced the accuracy of the recommender system compared to the context-unaware KNN.
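The decay method of incorporating time can be sketched with the usual exponential decay function: each interaction's weight halves every fixed number of days, so older purchases influence recommendations less. The half-life below is an illustrative choice; the abstract does not fix a specific decay constant:

```python
import math

def decayed_rating(rating, age_days, half_life_days=30.0):
    """Exponential time-decay weighting for a time-aware recommender:
    the interaction's weight halves every half_life_days."""
    lam = math.log(2) / half_life_days          # decay rate from half-life
    return rating * math.exp(-lam * age_days)

fresh = decayed_rating(4.0, 0)     # -> 4.0 (full weight)
month = decayed_rating(4.0, 30)    # ~ 2.0 (one half-life old)
```

These decayed values would replace the raw ratings in the MF, KNN, or SLIM training data, which is how the Decay-MF and Decay-SLIM variants differ from their bias-based counterparts.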

Author 1: Ayat Yehia Hassan
Author 2: Etimad Fadel
Author 3: Nadine Akkari

Keywords: Time-aware recommender system; context-aware recommender system; matrix factorization; K-Nearest Neighbor (KNN); and Sparse Linear Method (SLIM)

Download PDF

Paper 72: A Comprehensive Assessment Framework for Evaluating Adaptive Security and Privacy Solutions for IoT e-Health Applications

Abstract: There exist numerous adaptive security and privacy (S&P) solutions to manage potential threats at runtime. However, there is a lack of a comprehensive assessment framework that can holistically validate their effectiveness. Existing adaptive S&P assessment efforts either focus on privacy or security in general, or on specific adaptive S&P attributes, e.g. authentication, and at times disregard the architecture in which they should be comprehended. In this paper, we propose a holistic assessment framework for evaluating adaptive S&P solutions for IoT e-health. The framework utilizes a proposed classification of the essential attributes that must be recognized, evaluated, and incorporated for adaptive S&P solutions to be effective in the most common IoT architectures: fog-based and cloud/server-based. As opposed to existing related work, the classification comprehensively covers all the major classes of essential attributes, such as S&P objectives, contextual factors, adaptation action aptitude, and the system's self-* properties. Using this classification, the framework helps evaluate the existence of a given attribute with respect to the adaptation process and in the context of the architectural layers. It therefore stresses where an essential attribute should be realized, both in the adaptation phases and in the architecture, for an adaptive S&P solution to be effective. We also present a comparison of the proposed assessment framework with existing related frameworks and show that it exhibits substantial completeness over existing works in assessing the feasibility of a given adaptive S&P solution.

Author 1: Waqas Aman
Author 2: Fatima Najla Mohammed

Keywords: Internet of Things; Adaptive Security; IoT Architecture; e-Health; Effectiveness; Privacy

Download PDF

Paper 73: Enhanced Jaya Algorithm for Multi-objective Optimisation Problems

Abstract: Evolutionary algorithms are suitable techniques for solving complex problems, and many improvements have been made to the original algorithm structures in order to obtain more desirable solutions. The current study intends to enhance multi-objective performance on benchmark optimisation problems by incorporating a chaotic inertia weight into the existing multi-objective Jaya (MOJaya) algorithm. Jaya is a recently established population-based algorithm. Exploitation is dominant in MOJaya, giving it a propensity to be trapped in local optima. This research addresses that shortcoming by refining the MOJaya solution-update equation for a better exploration-exploitation balance, enhancing diversity, and deterring premature convergence, while retaining the algorithm's fundamentals and sustaining its parameter-free character. The proposed chaotic inertia weight multi-objective Jaya (MOiJaya) algorithm was assessed using the well-known ZDT benchmark functions with 30 variables, and its performance was analysed using the convergence metric (CM) and diversity metric (DM). The algorithm enhanced the exploration-exploitation balance and substantially prevented premature convergence. The proposed algorithm was then compared with several other algorithms. Based on this comparison, the convergence metric and diversity metric results show that the proposed MOiJaya algorithm resolves multi-objective optimisation problems better than the other algorithms.
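A chaotic inertia weight is commonly generated from the logistic map and then applied to the current solution in the update equation. The sketch below uses the generic single-objective Jaya update with an inertia weight; the paper's exact MOiJaya update and weight range are assumptions here, not taken from the abstract:

```python
import random

def logistic_chaos(z0=0.7, n=5):
    """Chaotic sequence from the logistic map z <- 4z(1-z), a common
    source for a chaotic inertia weight."""
    zs, z = [], z0
    for _ in range(n):
        z = 4.0 * z * (1.0 - z)
        zs.append(z)
    return zs

def jaya_step(x, best, worst, w):
    """One generic Jaya update with an inertia weight w on the current
    solution: move toward the best and away from the worst."""
    r1, r2 = random.random(), random.random()
    return [w * xi + r1 * (bi - abs(xi)) - r2 * (wi - abs(xi))
            for xi, bi, wi in zip(x, best, worst)]

# Map the chaotic sequence into an illustrative weight range [0.4, 0.9].
ws = [0.4 + 0.5 * z for z in logistic_chaos()]
x_new = jaya_step([1.0, -2.0], best=[0.5, 0.0], worst=[3.0, -4.0], w=ws[0])
```

Because the logistic map never settles, the weight keeps perturbing the update, which is what counters the premature convergence the abstract describes.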

Author 1: Rahaini Mohd Said
Author 2: Roselina Sallehuddin
Author 3: Nor Haizan Mohd Radzi
Author 4: Wan Fahmn Faiz Wan Ali

Keywords: MOJaya; chaotic inertia weight; ZDT benchmark function; convergence metric; diversity metric

Download PDF

Paper 74: Research on the Academic Early Warning Model of Distance Education based on Student Behavior Data in the Context of COVID-19

Abstract: The COVID-19 epidemic has had a great impact on the entire society, and the spread of the novel coronavirus has brought much inconvenience to the education industry. To ensure the sustainability of education, distance education plays a significant role. During distance education, it is necessary to examine the learning situation of students. This study proposes an academic early warning model based on Long Short-Term Memory (LSTM), which first extracts and classifies students' behavior data and then uses the optimized LSTM to establish the early warning model. The precision rate of the optimized LSTM algorithm is 0.929, the recall rate is 0.917, and the F value is 0.923, showing better convergence than the basic LSTM algorithm. In the actual case analysis, the accuracy rate of the academic early warning system is 92.5%. The LSTM neural network shows high performance after parameter optimization, and the LSTM-based academic early warning model also has high accuracy in the actual case analysis, which proves the feasibility of the established model.
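The F value quoted above is the harmonic mean of precision and recall, and the abstract's three figures are mutually consistent, as a quick check shows:

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall (the F value)."""
    return 2 * precision * recall / (precision + recall)

# The abstract's precision (0.929) and recall (0.917) yield F = 0.923.
f = round(f_measure(0.929, 0.917), 3)
```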

Author 1: Yi Qu
Author 2: Zhiyuan Sun
Author 3: Libin Liu

Keywords: COVID-19; Student behavior data; Distance education; Academic early warning model

Download PDF

Paper 75: Dynamic Polymorphism without Inheritance: Implications for Education

Abstract: Polymorphism is a core OO concept. Despite the rich pedagogical experience in teaching it, there are still difficulties in its correct and multifaceted perception by students. In this article, a method for deeper study of the concept of polymorphism is offered by extending the learning content of the CS2 C++ Programming course with an implementation variant of dynamic polymorphism by type erasure, without using inheritance. The research is based on an inductive approach with a gradual expansion of functionality when introducing new concepts. The stages of development of such a project and the details of the implementation of each functionality are traced. The results of experimental training showed higher scores for the experimental group in mastering the topics related to polymorphism. Based on these findings, recommendations for the construction of the lecture course and the organization of the laboratory work are suggested.

Author 1: Ivaylo Donchev
Author 2: Emilia Todorova

Keywords: Inheritance; polymorphism; object-oriented; C++; type erasure; pointers; templates; lambda expressions; teaching

Download PDF

Paper 76: Application of Training Load Prediction Model based on Improved BP Neural Network in Sports Training of Athletes

Abstract: With the development of data mining technology, the informatization of competitive sports has become an inevitable trend. Using data mining to help athletes train scientifically, assist coaches in rational decision-making, and improve team competitiveness has become common. In competitive sports, cyclists' adaptation to training has a complex relationship with their physical performance. To explore the correlations in the data and provide better training data for athletes, this study proposes a load prediction model based on a backpropagation (BP) neural network. Considering the local convergence and random weight initialization of the traditional BP model, an adaptive genetic algorithm with an improved selection operator is used to determine the initial weights and thresholds of the BP neural network, improving the accuracy of the prediction model. The experimental results show that the improved adaptive genetic algorithm improves the overall optimization ability of the BP neural network, that the improved model has good stability during convergence, and that the algorithm can search for better weights and thresholds. Compared with the basic BP neural network prediction model, the accuracy of the optimized model is increased by 11.86% and the average error is reduced by 26.21%, which provides guidance for improving the training of the cycling team.
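The idea of seeding BP training with GA-selected initial weights can be sketched on a toy scale: evolve candidate weights against the training loss and keep the fittest. Everything below is a deliberately simplified stand-in; the fitness uses a single linear neuron rather than a full BP network, and the selection is plain truncation, not the paper's improved adaptive selection operator:

```python
import random

def fitness(weights, data):
    """Mean squared error of a single linear neuron -- a toy stand-in
    for the BP network's training loss."""
    w, b = weights
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def ga_init_weights(data, pop_size=20, generations=30):
    """Toy GA sketch: evolve candidate initial weights (w, b) with
    truncation selection, midpoint crossover, and Gaussian mutation."""
    random.seed(0)
    pop = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda wb: fitness(wb, data))
        parents = pop[: pop_size // 2]              # truncation selection (elitist)
        children = []
        for _ in range(pop_size - len(parents)):    # crossover + mutation
            (w1, b1), (w2, b2) = random.sample(parents, 2)
            children.append(((w1 + w2) / 2 + random.gauss(0, 0.1),
                             (b1 + b2) / 2 + random.gauss(0, 0.1)))
        pop = parents + children
    return min(pop, key=lambda wb: fitness(wb, data))

# Toy data from y = 0.8x + 0.2; the GA should find weights near (0.8, 0.2).
data = [(x / 10, 0.8 * x / 10 + 0.2) for x in range(10)]
best = ga_init_weights(data)
```

In the paper's setup the evolved weights are not the final answer but the starting point for BP training, which avoids the poor local minima that random initialization can produce.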

Author 1: Lin Liu
Author 2: Guannan Sheng

Keywords: BP neural network; adaptive genetic algorithm; selection operator; training load

Download PDF

Paper 77: Analyzing Multi-stage Reverse Osmosis Desalination Using Artificial Intelligence

Abstract: Population growth has resulted in a decrease in readily available sources of potable water. Desalination is one of many approaches that have been studied and proposed as a way out of this predicament. In this study, a multistage Reverse Osmosis (RO) desalination process is modelled, since it has the potential to achieve a higher purity percentage than the single-stage RO process. Researchers have studied the distinctive tools of AI, specifically the Artificial Neural Network as a regression model and Genetic Algorithms as an optimization technique, in desalination and water treatment. This paper examines multistage RO desalination using various artificial intelligence (AI) techniques, including the Artificial Neural Network (ANN) and the Support Vector Machine (SVM). Both training methods fall under the category of regression algorithms, which establish a predictive link between variables and labels. The main finding of this study was a noticeable decrease in Mean Square Error (MSE) in the second stage when the data was trained using the ANN, whereas the MSE increased in the second stage when the data was trained using the SVM. The results indicate that applying ANN and SVM to RO desalination process modelling would yield substantial improvements. Future work will focus on predicting and improving the performance of ANN and SVM with other function variables.

Author 1: Batiseba Tekle
Author 2: Azmi Alazzam
Author 3: Abdulwehab Ibrahim
Author 4: Ghassan Malkawi
Author 5: Abdulaziz Fares NajiMoqbel
Author 6: Nissar Qureshi
Author 7: Ahmed Hamadat
Author 8: Filomento O. Corona Jr

Keywords: Artificial intelligence; artificial neural network; desalination; regression; reverse osmosis; support vector machine

Download PDF

Paper 78: Educational Platform based on Smartphone to Increase Students’ Interaction in Classroom

Abstract: Current smartphones meet all the criteria for university application. This technology opens the door to new techniques that enhance teaching methods, and it presents an interesting solution for guiding and helping students. This proposal therefore aims to provide an Android-based smartphone platform to help students and teachers manage their courses. It relies on the Internet of Things to increase digital interaction and improve the teaching process while delivering traditional lectures. The system encompasses three main parts. The first guides students to find their classroom and the teacher's desk. The second helps teachers monitor student attendance. The third improves e-learning in the classroom by managing the educational process, providing an adequate platform for data management. The platform thus succeeded in providing a solution that prevents the misuse of smartphones in the classroom and enhances learning methods through smart technologies.

Author 1: Mohamed Naser AlSubie
Author 2: Omar Ben Bahri

Keywords: Android; classroom; e-learning; data management; IoT; smartphone

Download PDF

Paper 79: A Novel Architecture for Community-Sustained Cultural Mapping System

Abstract: This paper presents a novel system architecture for implementing a cultural mapping system for the community of Buayan, a remote rural village in Sabah, East Malaysia. Considering the various shortcomings of the local environment and the need for a community-sustained system, the cultural mapping system was designed around a new architecture that achieves minimal implementation cost and the higher reliability needed to survive the rural environment. The new architecture evolves from previous Telecentre design and implementation experience targeted at larger-scale ICT systems. This paper also highlights the critical influence of power provision on digital system implementation in rural areas, which typically accounts for a significant share of the overall implementation cost; an efficient ICT system architecture significantly reduces the cost of its associated power provision. The implementation of the cultural mapping system using the new ICT architecture at Buayan is also described.

Author 1: Chong Eng Tan
Author 2: Sei Ping Lau
Author 3: Siew Mooi Wong

Keywords: Rural system architecture; telecentre; cultural mapping; sustainability

Download PDF

Paper 80: Research on Personalized Recommendation of High-Quality Academic Resources based on user Portrait

Abstract: With the advent of the big data era, information overload is becoming increasingly serious, and it is difficult for academic users to obtain the information they want quickly and accurately from massive academic resources. To optimize academic resource recommendation services, this paper constructs a multi-dimensional academic user portrait model and proposes an academic resource recommendation algorithm based on user portraits. The paper first reviews the relevant literature and information. Second, to obtain the attribute tags of multi-dimensional user portraits, a questionnaire is designed to collect real information from academic users, and the corresponding academic user portrait model is constructed. Then, the collected data is processed through defined rules and the user is quantitatively modeled by mathematical means. Finally, the completed academic user portrait model is combined with a collaborative filtering algorithm to provide personalized academic resource recommendation services for academic users. Verification and analysis through simulation experiments show that the proposed algorithm plays a great role in expanding users' fields of interest and discovering new hobbies across fields and disciplines.

Author 1: Jianhui Xu
Author 2: Mustafa Man
Author 3: Ily Amalina Ahmad Sabri
Author 4: Guoyi Li
Author 5: Chao Yang
Author 6: Mingxue Jin

Keywords: Personalized recommendation system; user portrait; academic resources; collaborative filtering

Download PDF
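The collaborative filtering step in the abstract can be sketched with a minimal user-based variant in Python; the toy ratings and the plain cosine-similarity weighting are illustrative assumptions, not the paper's portrait-weighted algorithm:

```python
from math import sqrt

# Toy user-item ratings; in the paper, user-portrait tags would refine these.
ratings = {
    "u1": {"paperA": 5, "paperB": 3, "paperC": 4},
    "u2": {"paperA": 4, "paperB": 3, "paperD": 5},
    "u3": {"paperB": 2, "paperC": 5, "paperD": 4},
}

def cosine(a, b):
    """Cosine similarity over the items two users have both rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den

def recommend(user, k=2):
    """Score unseen items by similarity-weighted ratings of the other users."""
    sims = {u: cosine(ratings[user], r) for u, r in ratings.items() if u != user}
    scores = {}
    for u, sim in sims.items():
        for item, r in ratings[u].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("u1"))  # -> ['paperD']
```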

Paper 81: An Inspection of Learning Management Systems on Persuasiveness of Interfaces and Persuasive Design: A Case in a Higher Learning Institution

Abstract: An effective Learning Management System (LMS) is an essential factor in increasing e-learning persuasiveness, and interface design is one of the components that must be addressed when designing an effective LMS. Instead of developing a new LMS at high cost, evaluating and improving the existing LMS is the best option. Low completion rates and procrastination are common issues in e-learning, and they can be addressed if academic institutions provide a proper LMS that helps students change their learning behaviors positively. Many previous studies claimed to have implemented persuasive technology in e-learning platforms to encourage positive learning behaviors; however, such claims are questionable if the persuasive e-learning systems have not gone through a proper evaluation phase. This study uses the heuristic evaluation method to assess the persuasiveness of LMS interfaces, while the Persuasive Systems Design (PSD) model is used to evaluate persuasive strategies in the LMS. The assessment involves students' perspectives as the primary users to identify potential behavior-change factors, especially regarding engagement. The objectives of this study are thus i) to investigate the persuasiveness of LMS interfaces and ii) to identify persuasive strategies in the LMS design. This study also produces a) recommendations and design examples to increase the persuasiveness of LMS interfaces and b) a mapping of LMS interfaces to the PSD framework that can be utilized by higher learning institutions.

Author 1: Wan Nooraishya Wan Ahmad
Author 2: Mohamad Hidir Mhd Salim
Author 3: Ahmad Rizal Ahmad Rodzuan

Keywords: Learning management system; e-learning; persuasive design; persuasiveness; interface design

Download PDF

Paper 82: The Development of an Ontology for Information Retrieval about Ethnic Groups in Chiang Mai Province

Abstract: This study aims to develop a semantic ontology of knowledge about ethnic groups by analyzing documentary sources collected from libraries, research, and the museum for learning about highland peoples in Chiang Mai Province. The study is based on the classification theory of ethnic groups in Chiang Mai Province, with the intention of establishing relationships within the knowledge structure regarding ethnic groups. The study procedure consists of three stages. 1) Establishing ontology requirements from online data: keywords are analyzed from the research database of Chiang Mai University Library's Online Information Resource Database (OPAC) and the Ratchamangkhalaphisek National Library, Chiang Mai, and grouped by studying Thai-language information resources such as books, textbooks, research papers, theses, research articles, academic articles, and reference books related to ethnic groups. 2) Designing classes: main classes, subclasses, hierarchies, and properties are defined to establish the relationships of the data in each class using the Protégé program. 3) Ontology evaluation, divided into two parts: an expert evaluation of the suitability of the ontology structure using the inter-class relational accuracy assessment scale, and an examination of the ethnic grouping data. The findings reveal that the specification, definition, scope, and objectives of the development are appropriate (average score = 0.97) across three areas: grouping and ordering of classes within the ontology (score = 0.98), defining affinity names and class properties (score = 0.96), and overall suitability of the ontology content (score = 0.97).

Author 1: Phichete Julrode
Author 2: Thepchai Supnithi

Keywords: Information retrieval; ontology development; ethnic groups; knowledge organization; Chiang Mai

Download PDF

Paper 83: CoSiT: An Agent-based Tool for Training and Awareness to Fight the Covid-19 Spread

Abstract: At the beginning of 2020, following the recommendation of the Emergency Committee, the WHO (World Health Organization) Director-General declared that the Covid-19 outbreak constituted a Public Health Emergency of International Concern. Given the urgency of the outbreak, the international community mobilized to find ways to significantly accelerate the development of interventions, including raising awareness of protective measures such as wearing a face mask and respecting social distancing. Unfortunately, these measures have been criticized, and the number of Covid-19 infections and deaths has only increased, on the one hand because these gestures are not respected and, on the other, because of the lack of awareness and training tools that simulate the spread of the disease. To emphasize the importance of respecting these measures, the WHO intends to propose to its member states training and awareness campaigns on the coronavirus based on simulation packages, so that the right decisions can be taken in time to save lives. A rigorous analysis of this problem led us to three lines of reflection. First, how can an IT tool be built under these constraints to make training and awareness available to all? Second, how can the prescribed measures be modeled and simulated in our current reality? Third, how can the tool be made playful, interactive, and participative so that it is flexible to the user's needs? To address these questions, this paper proposes an interactive Agent-Based Model (ABM) describing a pedagogical (training and educational) tool that can help in understanding the spread of Covid-19 and show the impact of the barrier measures recommended by the WHO. The implemented tool is simple to use and can help in making appropriate and timely decisions to limit the spread of Covid-19 in the population.

Author 1: Henri-Joel Azemena
Author 2: Franck-Anael K. Mbiaya
Author 3: Selain K. Kasereka
Author 4: Ho Tuong Vinh

Keywords: Multi-agent system; covid-19; CoSiT; modeling-simulation; barrier measures; complex systems

Download PDF

Paper 84: Parallel Hough Transform based on Object Dual and Pymp Library

Abstract: Geometric shape detection in an image is a classical problem with many applications: in cartography to highlight roads in a noisy image, in medical imaging to localize disease in a region, and in agronomy to fight weeds with pesticides. The Hough Transform method contributes effectively to the recognition of digital objects such as straight lines, circles, and arbitrary shapes. This paper deals with theoretical comparisons of object duals based on the definition of the Standard Hough Transform, and it focuses on parallelizing the Hough Transform. A generic pseudo-code algorithm using the OpenMP library for parallel computation of object duals is proposed to improve execution time. In simulation, a triangular mesh superimposed on the image is implemented with the pymp library in Python, with threads as inputs to read the image and to update the accumulator. The parallel computation reduces execution time according to the rate of lit pixels in each virtual object and the number of threads. In perspective, this work will help strengthen the development of a toolkit for the Hough Transform method.

Author 1: Abdoulaye SERE
Author 2: Moise OUEDRAOGO
Author 3: Armand Kodjo ATIAMPO

Keywords: Hough transform; parallel computing; pattern recognition

Download PDF
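The accumulator voting that the paper parallelizes with pymp can be shown serially in a few lines of Python; the 5×5 test image and 5° angle step below are illustrative assumptions:

```python
import math

# 5x5 binary image with a lit main diagonal (toy input).
img = [[1 if x == y else 0 for x in range(5)] for y in range(5)]
W = H = 5

thetas = [math.radians(t) for t in range(0, 180, 5)]
rho_max = int(math.hypot(W, H))
# accumulator[theta_index][rho + rho_max] holds votes for the line x*cos(t) + y*sin(t) = rho
acc = [[0] * (2 * rho_max + 1) for _ in thetas]

for y in range(H):
    for x in range(W):
        if img[y][x]:                      # each lit pixel votes along its dual curve
            for ti, th in enumerate(thetas):
                rho = round(x * math.cos(th) + y * math.sin(th))
                acc[ti][rho + rho_max] += 1

votes = max(max(row) for row in acc)
print(votes)   # 5: all five collinear pixels agree on one (theta, rho) cell
```

In the paper's parallel version, the outer pixel loop is split across pymp threads, each contributing votes to a shared accumulator.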

Paper 85: Prototyping a Mobile Application for Children with Dyscalculia in Primary Education using Augmented Reality

Abstract: Dyscalculia is a learning disorder that impairs the comprehension of numbers and mathematical operations, so that the child experiences greater stress when unable to solve the exercises proposed by the teacher. The objective of this research is therefore to create an innovation plan for a mobile prototype with augmented reality for children with dyscalculia in primary education. Design Thinking was used as the methodology, since it allows us to identify users' needs and implement new solutions to their problems, with the project team deciding on the best proposed idea and then applying it to a model or design. The Miro application was used for the mobile prototype, TinkerCad for the 3D design of the educational games, and the Augmented Class application for visualizing the augmented reality. The results were obtained through interviews with parents, who indicated that the mobile prototype with augmented reality is a highly impactful contribution that should be applied for children. Finally, the prototype was validated by five experts, who rated the final prototype at 86% acceptance. This research concludes with an innovation model that addresses the problems of dyscalculia by improving understanding and comprehension in mathematics.

Author 1: Misael Lazo-Amado
Author 2: Leoncio Cueva-Ruiz
Author 3: Laberiano Andrade-Arenas

Keywords: App augmented class; design thinking; dyscalculia; miro app; TinkerCad

Download PDF

Paper 86: Adaptive Lane Keeping Assist for an Autonomous Vehicle based on Steering Fuzzy-PID Control in ROS

Abstract: An autonomous vehicle is a vehicle that can drive itself under the direction of a control system. Two modern assistance systems are proposed in this research. First, we introduce a real-time approach to detect street lanes: based on multi-step image processing of camera input, the vehicle's steering angle is estimated for lane keeping. Second, a steering control system ensures that the vehicle operates stably and smoothly and adapts to various road conditions. The steering controller consists of a PID controller with a fuzzy logic control strategy that adjusts the controller parameters. Simulation experiments in the Gazebo simulator of the Robot Operating System (ROS) not only indicate that the vehicle can keep its lane safely, but also demonstrate that the proposed steering angle controller is more stable and adaptive than a conventional PID controller.

Author 1: Hoang Tran Ngoc
Author 2: Luyl-Da Quach

Keywords: Autonomous vehicles; automated steering; lane detection; fuzzy PID control; ROS; Gazebo

Download PDF
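The gain-scheduling idea — fuzzy rules adjusting PID parameters with the lane error — can be sketched as follows. The membership shapes, gain ranges, and first-order plant are illustrative assumptions, not the paper's tuned controller:

```python
def fuzzy_gains(error):
    """Toy two-rule fuzzy inference: complementary memberships for 'small'
    and 'large' |error|, defuzzified by weighted average (values illustrative)."""
    large = min(abs(error), 1.0)   # membership in 'large error'
    small = 1.0 - large            # membership in 'small error'
    kp = small * 0.8 + large * 2.0
    kd = small * 0.3 + large * 0.1
    return kp, kd

def simulate(steps=60, dt=0.1, ki=0.05):
    """First-order toy plant: lateral offset responds directly to the steering command."""
    pos, integral, prev_e = 1.0, 0.0, -1.0   # start 1 m off the lane centre
    for _ in range(steps):
        e = 0.0 - pos                        # error to lane centre
        kp, kd = fuzzy_gains(e)              # gains scheduled by the fuzzy rules
        integral += e * dt
        u = kp * e + ki * integral + kd * (e - prev_e) / dt
        prev_e = e
        pos += u * dt                        # first-order vehicle response
    return pos

print(round(simulate(), 3))
```

The fuzzy layer applies strong corrective action when the vehicle is far from the lane centre and gentler action near it, which is the adaptivity the abstract credits over a fixed-gain PID.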

Paper 87: Money Laundering Detection using Machine Learning and Deep Learning

Abstract: In recent years, money laundering activities have grown rapidly and have become a main concern for governments and financial institutions all over the world. According to recent statistics, an estimated $800 billion to $2 trillion is laundered annually, of which $5 billion involves cryptocurrency money laundering. According to the Financial Action Task Force (FATF), criminals may trade illegally obtained fiat money for cryptocurrency. Detecting and preventing illegal transactions has accordingly become a serious and challenging concern for governments. To combat money laundering, especially in cryptocurrency, effective techniques for detecting suspicious transactions must be developed, since current preventive efforts are outdated. Deep learning and machine learning techniques may provide novel methods to detect suspicious currency movements. This study investigates the applicability of deep learning and machine learning techniques for anti-money laundering in cryptocurrency. The techniques employed are a Deep Neural Network (DNN), Random Forest (RF), K-Nearest Neighbors (KNN), and Naive Bayes (NB), applied to the Bitcoin Elliptic dataset. The DNN and Random Forest classifiers achieved the highest accuracy rates, with promising results in decreasing false positives compared with the other classifiers. In particular, the Random Forest classifier outperforms the DNN with an F1-score of 0.99.

Author 1: Johrha Alotibi
Author 2: Badriah Almutanni
Author 3: Tahani Alsubait
Author 4: Hosam Alhakami
Author 5: Abdullah Baz

Keywords: Anti-money laundering; machine learning; supervised learning; cryptocurrency

Download PDF

Paper 88: Multi-Channel Speech Enhancement using a Minimum Variance Distortionless Response Beamformer based on Graph Convolutional Network

Abstract: The Minimum Variance Distortionless Response (MVDR) beamforming algorithm is frequently utilized to extract speech and noise from noisy signals captured by multiple microphones. A time-frequency mask is employed to compute the Power Spectral Density (PSD) matrices of the noise and of the speech signal of interest, from which the optimal beamformer weights are obtained. Deep Neural Networks (DNNs) are widely used for estimating such time-frequency masks. This paper adopts a novel method using Graph Convolutional Networks (GCNs) to learn spatial correlations among the different channels. GCNs are integrated into the embedding space of a U-Net architecture to estimate a Complex Ideal Ratio Mask (cIRM), which we use in an MVDR beamformer to further improve the enhancement system. We simulate room acoustics data to experiment extensively with our approach using different types of microphone arrays. Results indicate the superiority of our approach when compared to current state-of-the-art methods. The metrics obtained by the proposed method are significantly improved, except for the Scale-Invariant Source-to-Distortion Ratio (SI-SDR) score. The Perceptual Evaluation of Speech Quality (PESQ) score shows a noticeable improvement over the baseline models (i.e., 2.207 vs. 2.104 and 2.076). Our implementation of the proposed method can be found at: https://github.com/3i-hust-asr/gnn-mvdr-final.

Author 1: Nguyen Huu Binh
Author 2: Duong Van Hai
Author 3: Bui Tien Dat
Author 4: Hoang Ngoc Chau
Author 5: Nguyen Quoc Cuong

Keywords: Multi-channel speech enhancement; graph convolutional networks; minimum variance distortionless response beamformer; complex ideal ratio mask

Download PDF
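The MVDR weights themselves have a closed form, w = R⁻¹d / (dᴴR⁻¹d), into which the mask-estimated PSD matrices feed. A two-microphone numeric check in Python (the steering vector and noise covariance values are arbitrary illustrations):

```python
import cmath

d = [1 + 0j, cmath.exp(-1j * 0.5)]     # steering vector toward the target
R = [[1.0, 0.2], [0.2, 1.0]]           # noise covariance (real-valued for simplicity)

# Invert the 2x2 matrix R.
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
Rinv = [[R[1][1] / det, -R[0][1] / det],
        [-R[1][0] / det, R[0][0] / det]]

# w = R^-1 d / (d^H R^-1 d)
Rd = [Rinv[0][0] * d[0] + Rinv[0][1] * d[1],
      Rinv[1][0] * d[0] + Rinv[1][1] * d[1]]
denom = d[0].conjugate() * Rd[0] + d[1].conjugate() * Rd[1]
w = [Rd[0] / denom, Rd[1] / denom]

# Distortionless constraint: w^H d = 1 in the target direction.
resp = w[0].conjugate() * d[0] + w[1].conjugate() * d[1]
print(round(abs(resp), 6))   # 1.0
```

The distortionless constraint wᴴd = 1 holds by construction, which the final line verifies; the quality of the beamformer in practice rests on how well the cIRM estimates R for the noise.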

Paper 89: Impact of Input Data Structure on Convolutional Neural Network Energy Prediction Model

Abstract: Energy demand continues to increase with no prospect of slowing down. This increase is driven by several sociological and economic factors, such as population growth, urbanization, and technological development. In view of this growth, it becomes crucial to predict energy consumption for more accurate management and optimization. Nevertheless, consumption estimation is a complex task due to fluctuations in consumer behaviour and weather. Several efforts have been proposed in the literature; almost all of them focus on improving the prediction model to increase accuracy, using the LSTM (Long Short-Term Memory) model to capture temporal dependencies in historical data despite its spatial and temporal complexity. The main contribution of this paper is a novel and simple convolutional neural network energy prediction model based on enhancing the input data structure. The main idea is to adjust the structure of the input data instead of using a more complicated deep learning model for better performance. The proposed model was implemented, tested on real data, and compared to existing models. The results show that the proposed data structure has a great influence on model performance.

Author 1: Imen Toumia
Author 2: Ahlem Ben Hassine

Keywords: Deep learning; convolutional neural network; energy consumption; energy prediction

Download PDF
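The paper's core idea — reshaping the input rather than deepening the model — amounts to simple restructuring of the consumption series. A Python sketch with hypothetical hourly data (the paper's exact layout is not reproduced here):

```python
# One week of hourly readings; the repeating daily pattern is a toy stand-in.
hours_per_day = 24
series = [float(h % hours_per_day) for h in range(24 * 7)]

def to_matrix(series, rows):
    """Reshape a flat series into consecutive equal-length rows, e.g. day x hour,
    giving a 2-D layout a CNN can convolve over."""
    cols = len(series) // rows
    return [series[r * cols:(r + 1) * cols] for r in range(rows)]

def windows(series, width):
    """(input window, next value) pairs for supervised one-step-ahead training."""
    return [(series[i:i + width], series[i + width])
            for i in range(len(series) - width)]

day_matrix = to_matrix(series, rows=7)       # 7 days x 24 hours
pairs = windows(series, 24)                  # sliding 24-hour windows
print(len(day_matrix), len(day_matrix[0]), len(pairs))   # 7 24 144
```

Choosing between the flat windowed form and the day × hour matrix is exactly the kind of input-structure decision the abstract argues can matter more than model depth.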

Paper 90: Decode and Forward Coding Scheme for Cooperative Relay NOMA System with Cylindrical Array Transmitter

Abstract: The Non-Orthogonal Multiple Access (NOMA) technique has enormous potential for wireless communications in the fifth generation (5G) and beyond, and researchers have recently become interested in combining NOMA with cooperative relaying. Although geometry-based stochastic channel models (GBSM) have been found to provide better, practical, and realistic channel properties for massive multiple-input multiple-output (mMIMO) systems, the assessment of Cooperative Relay NOMA (CR-NOMA) with mMIMO systems is still largely based on correlation-based stochastic channel models (CBSM), which we believe is a result of computational difficulty. Moreover, there has been little academic discussion of how well CR-NOMA systems perform when large-antenna transmitters are used with a GBSM channel model. It is therefore critical to investigate the mMIMO CR-NOMA system with a GBSM channel model that takes into account channel parameters such as path loss, delay profile, and tilt angle; the coexistence of large-antenna transmitters and coding methods also requires additional research. In this research, we propose a two-stage, three-dimensional (3D) GBSM mMIMO channel model from the 3GPP, in which the transmitter is modelled as a cylindrical array (CA), to investigate the efficiency of CR-NOMA. By defining antenna element placement vectors using the actual dimensions of the antenna array and incorporating them into the 3D channel model, we increase the analytical tractability of the 3D GBSM. Bit error rates, achievable rates, and outage probabilities (OP) are investigated using the decode-and-forward (DF) coding method, and the results are compared with those of a system using the CBSM channel model. Despite the computational difficulty of the proposed GBSM system, there is no difference in performance between CBSM and GBSM.

Author 1: Samuel Tweneboah-Koduah
Author 2: Emmanuel Ampoma Affum
Author 3: Kingsford Sarkodie Obeng Kwakye
Author 4: Owusu Agyeman Antwi

Keywords: CR-NOMA; 3D GBSM; DF coding scheme; Cylindrical Array (CA); cooperative relay

Download PDF

Paper 91: A Decision Concept to Support House Hunting

Abstract: House hunting, the act of seeking a place to live, is one of the most significant responsibilities for many families around the world. Numerous criteria and factors must be evaluated and investigated; these can be quantified and expressed both quantitatively and qualitatively, and there is a hierarchical relationship between them. Furthermore, assessing qualitative characteristics objectively is difficult, resulting in data inconsistency and, consequently, uncertainty. This ambiguity must be handled with appropriate processes; otherwise, the decision to live in a particular property may be incorrect. The Analytic Hierarchy Process (AHP) is employed to compare criteria, evidential reasoning is used to evaluate houses against each criterion, and TOPSIS is used to rank house sites for selection. Arriving at the final ordering of houses required analyzing qualitative and quantitative elements as well as economic and social features of the residences, which was not an easy process. The authors therefore developed a decision support model to aid decision makers in managing the activities involved in finding a suitable dwelling. This study describes the development of a decision support system (DSS) capable of providing an overall judgment on where to live while taking into account both qualitative and quantitative factors.

Author 1: Tanjim Mahmud
Author 2: Dilshad Islam
Author 3: Manoara Begum
Author 4: Sudhakar Das
Author 5: Lily Dey
Author 6: Koushick Barua

Keywords: AHP; multiple criteria decision Making (MCDM); uncertainty; evidential reasoning

Download PDF
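Of the three techniques combined in the abstract, TOPSIS is the final ranking step, and it is compact enough to sketch directly. The candidate houses, criteria, and weights below are invented for illustration (in the paper, the weights would come from AHP and the per-criterion scores from evidential reasoning):

```python
from math import sqrt

# Rows: candidate houses; columns: price, commute (min), area (m^2), safety score.
matrix = [
    [250, 30, 120, 7],
    [180, 45, 90, 8],
    [300, 20, 150, 6],
]
weights = [0.4, 0.2, 0.2, 0.2]
benefit = [False, False, True, True]   # False = cost criterion (lower is better)

# 1. Vector-normalize each column, then apply the criterion weights.
cols = list(zip(*matrix))
norms = [sqrt(sum(v * v for v in c)) for c in cols]
V = [[w * v / n for v, n, w in zip(row, norms, weights)] for row in matrix]

# 2. Ideal and anti-ideal points per criterion.
ideal = [max(c) if b else min(c) for c, b in zip(zip(*V), benefit)]
worst = [min(c) if b else max(c) for c, b in zip(zip(*V), benefit)]

# 3. Closeness coefficient: distance to worst / (distance to ideal + distance to worst).
def dist(row, ref):
    return sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))

scores = [dist(r, worst) / (dist(r, ideal) + dist(r, worst)) for r in V]
ranking = sorted(range(len(matrix)), key=lambda i: scores[i], reverse=True)
print(ranking)   # best house first
```

House 1's low price and high safety outweigh its longer commute under these example weights, which is how the closeness coefficient trades off conflicting criteria.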

Paper 92: A Drone System with an Object Identification Algorithm for Tracking Dengue Disease

Abstract: In recent decades, epidemiological surveillance has proven to be one of the most valuable tools available to public health, since it provides an overview of the general health of the population and thus allows outbreaks to be anticipated and interventions made in time. There is currently an increase in cases of dengue disease in several regions of Peru. Therefore, to control this outbreak and to help population centers and human settlements far from the city, this work puts forward a drone system with an object recognition algorithm. Drones are very efficient for surveillance, allowing easy access to places that are difficult for humans to reach. They can thus carry out the field work required in epidemiological surveillance, capturing photographs or video in real time and identifying infectious foci of diverse diseases. In this work, an object detection algorithm based on convolutional neural networks and a stable detection model is designed to detect water reservoirs that are possible infectious sources of dengue. In addition, the efficiency of the algorithm is evaluated through the statistical precision and sensitivity curves resulting from training the neural network. To validate the efficiency obtained, the model was applied to test images related to dengue, achieving an efficiency of 99.2%.

Author 1: Diego Moran-Landa
Author 2: Maria del Rosario Damian
Author 3: Pedro Miguel Portillo Mendoza
Author 4: Carlos Sotomayor-Beltran

Keywords: Epidemiological surveillance; drones; neural networks; recognition algorithms

Download PDF

Paper 93: Analysis of the Intuitive Teleoperated System of the TxRob Multimodal Robot

Abstract: Natural disasters such as earthquakes, avalanches, and landslides leave in their path people who may be trapped in rubble and are difficult for rescue agents to find, so a reliable operating system for an exploration and rescue robot is essential. This paper evaluates the systems proposed for operating the TxRob exploration robot. Two teleoperated control systems were developed for manipulating the robot: a multimodal system with feedback from different sensors, and a GUI control system using joystick buttons. These systems were analyzed using subjective metrics, namely NASA-TLX, the System Usability Scale (SUS), and Microsoft Reaction Cards, which provide valuable data for evaluating interface performance, workload, user satisfaction, and usability. These aspects are used to determine which system is the most intuitive for performing rescue operations in a disaster. Fifteen operators were evaluated to validate the system; they ranged in age from 20 to 43 years, and 20% of them had previously used VR headsets. Priority is given to the most immersive, easiest-to-use, and most efficient system for handling the robot.

Author 1: Jeyson Carpio A
Author 2: Samuel Luque C
Author 3: Juan Chambi C
Author 4: Jesus Talavera S

Keywords: Multimodal interface; immersive teleoperation; exploration robot; gyroscope; subjective measurements

Download PDF

Paper 94: Exploring Power Advantage of Binary Search: An Experimental Study

Abstract: As exascale systems come online, more ways are needed to keep them within reasonable power budgets. This study aims to help uncover power advantages in algorithms likely ubiquitous in high-performance workloads such as searching. This study explored the power efficiency of binary search and its ternary variant, comparing consumption under different scenarios and workloads. Accurate modern on-chip integrated voltage regulators were used to get reliable power measurements. Results showed the binary version of the algorithm, which runs slower but relies on a barrel-shifter circuit, to be more power efficient in all studied scenarios offering an attractive time-power tradeoff. The cumulative savings were significant and will likely be valuable where the search may be a substantial fraction of workloads, especially massive ones.

Author 1: Muhammad Al-Hashimi
Author 2: Naif Aljabri

Keywords: Binary search; ternary search; time-power tradeoff; exascale computing; barrel shifter

Download PDF
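The two algorithms compared in the study can be sketched with iteration counters; the index-halving in the binary version is the shift-friendly operation the abstract alludes to. The array size and key below are arbitrary choices for illustration:

```python
def binary_search(a, key):
    """Classic binary search; returns (index, iterations)."""
    lo, hi, steps = 0, len(a) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) >> 1        # halving maps to a cheap shift in hardware
        if a[mid] == key:
            return mid, steps
        if a[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

def ternary_search(a, key):
    """Ternary variant: fewer iterations, but two key comparisons per iteration."""
    lo, hi, steps = 0, len(a) - 1, 0
    while lo <= hi:
        steps += 1
        third = (hi - lo) // 3
        m1, m2 = lo + third, hi - third
        if a[m1] == key:
            return m1, steps
        if a[m2] == key:
            return m2, steps
        if key < a[m1]:
            hi = m1 - 1
        elif key > a[m2]:
            lo = m2 + 1
        else:
            lo, hi = m1 + 1, m2 - 1
    return -1, steps

a = list(range(1_000_000))
print(binary_search(a, 999_999)[1], ternary_search(a, 999_999)[1])
```

Counting iterations and comparisons like this shows the time side of the tradeoff; the paper's contribution is measuring the power side with on-chip voltage regulators, which a sketch cannot reproduce.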

Paper 95: CertOracle: Enabling Long-term Self-Sovereign Certificates with Blockchain Oracles

Abstract: An identity certificate is an endorsement of identity attributes by an authoritative issuer, and it plays a critical role in many digital applications such as electronic banking. However, existing certificate schemes have two weaknesses: (1) a certificate is valid only for a short period because the issuer's private key expires, and (2) privacy leaks occur because all the attributes have to be disclosed during attribute verification. To overcome these weaknesses, this paper proposes a blockchain-based certificate scheme called CertOracle. Specifically, CertOracle allows a traditional certificate owner to encrypt the off-chain certificate attributes with fully homomorphic encryption algorithms. The uploading protocol in CertOracle then posts the encrypted off-chain attributes to the blockchain via a blockchain oracle in an authenticated way, i.e., the off-chain attributes and on-chain encrypted attributes are consistent. Finally, the attribute verification protocol in CertOracle enables anyone to verify any set of on-chain attributes under the control of the attribute owner. As the on-chain certificate attributes are immutable forever, a traditional short-term certificate is transformed into a long-term one. Besides, the owner of the on-chain certificate attributes can arbitrarily select his or her attributes to meet the requirements of target applications, i.e., the on-chain certificate has the merit of self-sovereignty. The proposed scheme is implemented with fully homomorphic encryption and secure two-party computation algorithms, and experiments show that it is viable in terms of computation time and communication overhead.

Author 1: Shaoxi Zou
Author 2: Fa Jin
Author 3: Yongdong Wu

Keywords: Digital certificate; blockchain oracle; fully homomorphic encryption; secure two-party computation

Download PDF

Paper 96: Evaluation of Online Machine Learning Algorithms for Electricity Theft Detection in Smart Grids

Abstract: Electricity theft-induced power loss is a pressing issue in both traditional and smart grid environments. In smart grids, smart meters can be used to track power consumption behaviour and detect any suspicious activity. However, smart meter readings can be compromised by deploying intrusion tactics or launching cyber attacks. In this regard, machine learning models can be used to assess the daily consumption patterns of customers and detect potential electricity theft incidents. Whilst existing research efforts have extensively focused on batch learning algorithms, this paper investigates the use of online machine learning algorithms for electricity theft detection in smart grid environments, based on a recently proposed dataset. Several algorithms including Naive Bayes, K-nearest Neighbours, K-nearest Neighbours with self-adjusting memory, Hoeffding Tree, Extremely Fast Decision Tree, Adaptive Random Forest and Leveraging Bagging are considered. These algorithms are evaluated using an online machine learning platform considering both binary and multi-class theft detection scenarios. Evaluation metrics include prediction accuracy, precision, recall, F-1 score and kappa statistic. Evaluation results demonstrate the ability of the Leveraging Bagging algorithm with an Adaptive Random Forest base classifier to surpass all other algorithms in terms of all the considered metrics, for both binary and multi-class theft detection. Hence, it can be considered as a viable option for electricity theft detection in smart grid environments.

Author 1: Ashraf Alkhresheh
Author 2: Mutaz A. B. Al-Tarawneh
Author 3: Mohammad Alnawayseh

Keywords: Smart grid; power loss; electricity theft; online machine learning

Download PDF

Paper 97: Artificial Intelligence for Automated Plant Species Identification: A Review

Abstract: Plants are very important for life on Earth. There is a wide variety of plant species, and their number increases each year. Identifying plants using conventional keys is complex, time-consuming, and frustrating for non-experts because of the specific botanical terms and techniques involved. This creates a difficult obstacle for novices interested in acquiring knowledge about species, knowledge that is essential for many environmental studies, such as climate change anticipation models. Today, there is increasing interest in automating the species identification process. The availability and omnipresence of the relevant technologies, such as digital cameras, mobile devices, and pattern recognition and artificial intelligence techniques in general, have allowed the idea of automated species identification to become a reality. In this paper, we present a review of automated plant identification covering all significant studies available in the literature. The main result of this synthesis is that the performance of advanced deep learning models, despite several remaining challenges, is approaching that of the most advanced human expertise.

Author 1: Khaoula Labrighli
Author 2: Chouaib Moujahdi
Author 3: Jalal El Oualidi
Author 4: Laila Rhazi

Keywords: Plants identification; species; artificial intelligence; machine learning; deep learning

Download PDF

Paper 98: Arduino for Developing Problem-Solving and Computing Competencies in Children

Abstract: Fostering children’s problem-solving and computational-programming competencies is crucial at the current time. As in other developing nations, children in Chile grow up with technology. Developing programming and problem-solving competencies in children seems a reachable task using high-level block-based programming languages. However, programming and electronics competencies often emerge only at higher educational levels. This article shows that using Arduino can enhance the development of programming and problem-solving competencies in children and encourages them to think in new ways. This article uses TinkerCAD, an online Arduino emulator, to teach fundamental electronic circuits and computer programming components. TinkerCAD effectively addresses various computing and electrical problems, such as turning a group of lights on and off and reading sensors to respond to the acquired values. This article seeks to develop problem-solving and computer programming competencies in primary school students, given the significance of both competencies, the open nature of Arduino, and the applicability of TinkerCAD, which permits using a block-based programming language. A critical concomitant finding is that the children who took part in the trial saw an increase in their academic performance on average. The essential drawbacks of this project were the children’s lack of knowledge of electronics and programming principles and the need for a computer with an internet connection.

Author 1: Cristian Vidal-Silva
Author 2: Claudia Jimenez-Quintana
Author 3: Erika Madariaga-Garcia

Keywords: Arduino; competencies; programming; problem-solving; children

Download PDF

Paper 99: An Integrated Hardware Prototype for Monitoring Gas leaks, Fires, and Remote control via Mobile Application

Abstract: Liquefied petroleum gas (LPG) is used in a wide range of applications such as home and industrial appliances, vehicles, and refrigerators. However, gas leakage can have dangerous and toxic effects on humans and other living organisms. In this paper, an IoT-based system is employed to monitor gas leakage, detect flames, and alert users. The MQ-5 gas sensor was used to measure the gas concentration level in a closed volume, while the infrared flame sensor was used to detect the spread of fire. The proposed system can detect fire and gas leaks and take additional action: lowering the gas concentration by ventilating the air with an exhaust fan and putting out fires with a fire extinguisher. The suggested approach will contribute to increasing safety, lowering the death toll, and minimizing harm to the environment. The overall system is implemented with IoT cloud-based remote control to prevent gas leakage through an Android application in response to individual feedback or feed-forward commands. The controller used here is the Arduino Uno Rev3 SMD. This study provides design approaches for both software and hardware.

Author 1: Md. Ashiqur Rahman
Author 2: Humayra Ahmed
Author 3: Md. Mamun Hossain

Keywords: Gas leakage; infrared flame detection; IoT; android; Arduino UNO

Download PDF

Paper 100: Performance Evaluation of Raspberry Pi as an IoT Edge Signal Processing Device for a Real-time Flash Flood Forecasting System

Abstract: The Raspberry Pi has evolved in recent years into a popular, low-cost, tiny computer for a wide range of IoT applications. Raspberry Pi is not only successful for data collection but also for data processing, including data storage and analysis. Thus, this study investigates the capability of Raspberry Pi as an edge processing device for capturing lightning strike signals in predicting flash flood locations. An electric and magnetic sensor (EMS) is connected to a Raspberry Pi in the experiment setup. The Raspberry Pi is then used to process digitised lightning signals. From the experiment, Raspberry Pi’s performance is measured using the performance metrics: central processing unit (CPU) usage and temperature. The results revealed that the Raspberry Pi could handle the real-time collection and processing of lightning signals from the EMSs without affecting the hardware capability.

Author 1: Aslinda Hassan
Author 2: Haniza Nahar
Author 3: Wahidah Md Shah
Author 4: Azlianor Abd-Aziz
Author 5: Sarah Afiqah Sahiran
Author 6: Nazrulazhar Bahaman
Author 7: Mohd Riduan Ahmad
Author 8: Isredza Rahmi A. Hamid
Author 9: Muhammad Abu Bakar Sidik

Keywords: Raspberry Pi; IoT; edge; performance

Download PDF

Paper 101: Decentralized Payment Aggregator: Hyperledger Fabric

Abstract: Blockchain has become a great trend and very popular in the present era. There are two types of blockchain technology: centralized and decentralized. The main concern of this research is the decentralized payment gateway, which is a trustworthy architecture and does not depend on third parties. For recording transactions, decentralized payment systems use a distributed ledger. Previously, the Bitcoin and Ethereum payment systems were used to verify the consistency of the blockchain ledger as well as the transaction data, including the sender and receiver addresses and the transaction value; however, because these payment systems are public, the transactions are public as well. The main concerns here are privacy and security: because anyone can easily access the network, an attacker can also attack it and expose users’ identities, transaction records, and addresses, which is a privacy challenge. This research incorporates Hyperledger Fabric, which is private, to overcome this challenge; no one can access it from outside the network, the transaction cost is low, and transactions are fast. Considering the above scenario, this research proposes a decentralized payment system architecture using Hyperledger Fabric.

Author 1: Md. Al-Amin
Author 2: Khondoker Shahrina
Author 3: Rubyet Hossain
Author 4: Debashish Sarker
Author 5: Sumya Sultana Meem

Keywords: Blockchain; decentralized; hyperledger fabric; bitcoin; payment system

Download PDF

Paper 102: Efficient HPC and Energy-Aware Proactive Dynamic VM Consolidation in Cloud Computing

Abstract: The adoption of High-Performance Computing (HPC) applications has gained extensive interest in cloud computing. Current cloud vendors utilize separate management tools for HPC and non-HPC applications, missing out on the consolidation benefits of virtualization. Non-HPC applications executed in the cloud may interfere with resource-hungry HPC applications, which is a key performance challenge. Furthermore, correlations between major application performance indicators, such as response time and throughput, and resource capacities reveal that conventional placement strategies impair virtual machine efficiency, resulting in poor resource optimization, increased operating expenses, and longer wait times. Since applications often underutilize the hardware, smart execution of HPC and non-HPC applications on the same node can boost system and energy efficiency. This research incorporates proactive dynamic VM consolidation to enhance resource usage and performance while maintaining energy efficiency. The proposed algorithm generates a workload-aware, fine-grained classification by employing machine learning techniques to build complementary profiles that alleviate cross-application interference by intelligently co-locating non-HPC and HPC applications. The research used CloudSim to simulate real HPC workloads. The results verified that the proposed algorithm outperforms all heuristic methods with respect to the metrics in key areas.

Author 1: Rukshanda Kamran
Author 2: Ali A. El-Moursy
Author 3: Amany Abdelsamea

Keywords: Cloud computing; HPC (High-Performance Computing); virtual machine consolidation; placement; optimization

Download PDF

Paper 103: Triple SVM Integrated with Enhanced Random Region Segmentation for Classification of Lung Tumors

Abstract: The rapid growth of computer vision and machine learning applications, especially in health care systems, promises a secure, innovative lifestyle for society. Applying these technologies to the early diagnosis of lung tumors helps in lung cancer detection and improves the survival rate of patients. The existing general diagnosis method for lung radiotherapy, Computed Tomography (CT) imaging, does not precisely locate the affected regions of a lung malignancy. Herein, we propose a computer vision-based diagnostic method empowered with machine learning algorithms to detect lung tumors. The primary objective of the proposed method is to develop an efficient segmentation method that enhances the classification accuracy of lung tumors: a Triple Support Vector Machine (SVM) classifies data samples as normal, malignant, or benign; Random Region Segmentation (RRS) performs image segmentation; and the SIFT and GLCM algorithms are applied for feature extraction. The model is trained on the IQ-OTH/NCCD dataset with 300 epochs, and an accuracy of 96.5% is achieved with 200 cluster formations.

Author 1: Sukruth Gowda M A
Author 2: A Jayachandran

Keywords: Benign; computed tomography; malignant; lung cancer; radiation; triple support vector machine

Download PDF

Paper 104: Optimized Automatic Course Timetabling Service Architecture for Integration with Vendor Management Systems

Abstract: Generating university course timetables is a complex problem, especially in large institutions. Currently, some universities in Saudi Arabia generate class timetables manually because they use Vendor Management Systems (VMS) for registration and management. Manually generating course timetables is time-consuming and laborious for the academic staff. Although various methods have been proposed to generate timetables, they address specific environments or systems that can be extended to, or work as separate components of, the university management system. In this paper, we propose a service-based system with a decentralized architecture that can fully automate the process of course timetable generation and can be easily integrated into a VMS. The proposed service-based system employs a genetic algorithm to optimize the process of scheduling courses and generating timetables. The system was implemented using Java RESTful web services, and the algorithm was tested by generating course timetables under various constraints. The results showed that the proposed decentralized architecture is applicable to and can be fully integrated with any VMS. Furthermore, a genetic algorithm configured with up to 200 generations and 1,000 iterations produces acceptable timetables without violating any of the defined constraints.
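To illustrate the kind of optimization involved (a sketch, not the paper's Java implementation), the minimal genetic algorithm below assigns hypothetical courses to time slots while driving hard-constraint violations to zero; the course list, slot count, and conflict pairs are invented for the example.

```python
import random

random.seed(42)

COURSES = ["C1", "C2", "C3", "C4", "C5", "C6"]
SLOTS = 4  # available time slots
# Hypothetical hard constraint: course pairs that share students
# must not be scheduled in the same slot.
CONFLICTS = [("C1", "C2"), ("C2", "C3"), ("C4", "C5"), ("C1", "C6")]

def violations(tt):
    """Count conflicting course pairs scheduled in the same slot."""
    return sum(tt[a] == tt[b] for a, b in CONFLICTS)

def random_timetable():
    return {c: random.randrange(SLOTS) for c in COURSES}

def crossover(p1, p2):
    """Uniform crossover: each course inherits a slot from either parent."""
    return {c: random.choice((p1[c], p2[c])) for c in COURSES}

def mutate(tt, rate=0.1):
    """Randomly reassign each course's slot with small probability."""
    return {c: (random.randrange(SLOTS) if random.random() < rate else s)
            for c, s in tt.items()}

def evolve(pop_size=30, generations=200):
    pop = [random_timetable() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=violations)
        if violations(pop[0]) == 0:      # feasible timetable found
            break
        elite = pop[: pop_size // 2]     # truncation selection
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=violations)

best = evolve()
```

The fitness here counts only hard-constraint violations; a production system would add weighted soft constraints (room capacity, lecturer preferences) to the same objective.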

Author 1: Marwah M. Alansari

Keywords: Courses timetable generation; genetic algorithm; course scheduling; service-based system; service-oriented architecture; optimization; web services

Download PDF

Paper 105: Cryptocurrency Price Prediction using Forecasting and Sentiment Analysis

Abstract: In recent years, many investors have turned to cryptocurrencies, prompting specialists to find out which factors affect cryptocurrency prices. One of the most popular methods used to predict cryptocurrency prices is sentiment analysis, a widespread technique applied by many researchers to social media platforms, particularly Twitter. Thus, to determine the relationship between investor sentiment and the volatility of cryptocurrency prices, this study forecasts cryptocurrency prices using the Long Short-Term Memory (LSTM) deep learning algorithm. In addition, Twitter users’ sentiments are analyzed using the Support Vector Machine (SVM) and Naive Bayes (NB) machine learning approaches. In classifying investor sentiment in the Bitcoin (BTC) and Ethereum (ETH) datasets as Positive, Negative, or Neutral, the SVM algorithm outperformed the NB algorithm with accuracies of 93.95% and 95.59%, respectively. Furthermore, the forecasting regression model achieves error rates of 0.2545 for MAE, 0.2528 for MSE, and 0.5028 for RMSE.
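An LSTM forecaster of this kind is trained on a sliding window of past prices and judged with the MAE/MSE/RMSE figures quoted above. Below is a minimal sketch of that data preparation and of the three error metrics; it is illustrative only, and the window length and price series are invented.

```python
import math

def make_windows(series, lookback):
    """Turn a price series into (window, next-value) supervised pairs,
    the standard preprocessing step before training an LSTM."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return X, y

def regression_errors(actual, predicted):
    """The three error metrics reported in the abstract."""
    n = len(actual)
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    return {"MAE": mae, "MSE": mse, "RMSE": math.sqrt(mse)}
```

For example, `make_windows([1, 2, 3, 4, 5], lookback=2)` yields the pairs ([1, 2] → 3, [2, 3] → 4, [3, 4] → 5), which would be fed to the network.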

Author 1: Shaimaa Alghamdi
Author 2: Sara Alqethami
Author 3: Tahani Alsubait
Author 4: Hosam Alhakami

Keywords: Sentiment analysis; cryptocurrencies; forecasting; bitcoin; ethereum

Download PDF

Paper 106: The Multi-Objective Design of Laminated Structure with Non-Dominated Sorting Genetic Algorithm

Abstract: The non-dominated sorting genetic algorithm has shown excellent advantages in solving complicated optimization problems with discrete variables in a variety of domains. In this paper, we implement a multi-objective genetic algorithm to guide the design of a laminated structure with two simultaneous objectives: minimizing the mass and maximizing the strength of a specified structure. Classical lamination theory and failure theory are adopted to compute the strength of a laminate. The simulation results show that the non-dominated sorting genetic algorithm has great advantages in the design of laminated composite materials. The experimental results also suggest that the optimal number of runs is 16 to 32 for the design of a glass-epoxy laminate with the non-dominated sorting genetic algorithm. We also observed that the optimization process involves two stages, in which the number of individuals in the first frontier first increases and then decreases. These simulation results are helpful for deciding the proper number of genetic algorithm runs for glass-epoxy design and for reducing computation costs.
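The core of such an algorithm is the non-dominated sort, which peels a population into successive Pareto fronts over the two objectives (mass, and strength negated so that both are minimized). A small illustrative sketch of that sorting step (not the authors' code; the example points are invented):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    """Peel off successive Pareto fronts, as in NSGA-II's sorting step."""
    remaining = list(points)
    fronts = []
    while remaining:
        # A point is in the current front if nothing left dominates it.
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts
```

For the hypothetical (mass, -strength) designs [(1, -10), (2, -20), (3, -5), (2, -10)], the first front is [(1, -10), (2, -20)]: neither design dominates the other, since one is lighter and the other stronger.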

Author 1: Huiyao Zhang
Author 2: Yuxiao Wang
Author 3: Fangmeng Zeng

Keywords: Non-dominated sorting genetic algorithm; optimization; failure theory; laminated composite material; classical lamination theory

Download PDF

Paper 107: From Monolith to Microservices: A Semi-Automated Approach for Legacy to Modern Architecture Transition using Static Analysis

Abstract: A modern system architecture may increase the maintainability of a system and promote its sustainability. Nowadays, more and more organizations are looking towards microservices due to their positive impact on the business, which translates into delivering quality products to the market faster than ever before. On top of that, native support for DevOps is also desirable. However, transforming a legacy system architecture into a modern architecture is challenging. As manual modernization is inefficient due to the significant time and effort required, software architects are looking for an automated or semi-automated approach for an easy and smooth transformation. Hence, this work proposes a semi-automated approach to transform legacy architecture into modern system architecture based on static analysis techniques. This bottom-up approach utilizes the legacy source code to adhere to the modern architecture framework. We studied the manual transformation pattern for architectural conversion and explored the possibility of providing transformation rules and guidelines. A task-based experiment was conducted to evaluate the correctness and efficiency of the approach. Two open-source projects were selected, and several software architects participated in an architectural transformation task as well as in a survey. We found that the new approach promotes an efficient migration process and produces correct software artifacts with minimal error rates.

Author 1: Mohd Hafeez Osman
Author 2: Cheikh Saadbouh
Author 3: Khaironi Yatim Sharif
Author 4: Novia Admodisastro

Keywords: Static analysis; software architecture; software modernisation; microservices

Download PDF

Paper 108: SDN Architecture for Smart Homes Security with Machine Learning and Deep Learning

Abstract: In recent decades, intelligent home systems have become popular because they improve comfort and quality of life. A growing number of homes are becoming "smarter" by incorporating Internet of Things (IoT) technology to improve comfort, energy efficiency, and safety. The increase in resource-constrained IoT devices heightens the security threats and vulnerabilities connected with them. Using SDN and virtualization, the IoT's size and adaptability can be managed at a lower cost than ever before. With intelligent security solutions, we can achieve real-time detection and automation for attack detection and prevention using artificial intelligence. Consequently, a large variety of solutions utilizing machine learning and deep learning have been developed to mitigate attacks on the IoT. The goal of this work is thus to use machine learning and deep learning to defend SDN-based smart homes. We designed smart home environments using Software-Defined Networking and Mininet, which provide instant virtual networks for IoT in smart homes. Two datasets were used in this work: the first is an SDN dataset that we acquired from smart homes by launching real attacks and creating normal traffic, and the second is the IoTID20 dataset, which is publicly available online. ML and DL experiments were conducted on both datasets. The best accuracy on the SDN dataset was 99.9%, using the XGBoost classifier; on IoTID20, LSTM achieved 98.9% in binary classification and ANN achieved 85.7% in multiclass classification.

Author 1: Wesam Abdulrhman Alonazi
Author 2: Hedi HAMDI
Author 3: Nesrine A. Azim
Author 4: A. A. Abd El-Aziz

Keywords: SDN; smart home; security; machine learning; deep learning

Download PDF

Paper 109: Skin Melanoma Classification from Dermoscopy Images using ANU-Net Technique

Abstract: Cells in any area of the body might develop cancer when they begin to grow uncontrollably, and other body regions may then become affected. The skin cancer known as melanoma develops when melanocytes, the cells that create melanin, the pigment that gives skin its color, start to grow out of control. Melanoma is deadly because, if not caught early and addressed, it has a high propensity to spread to other regions of the body. By analyzing digital dermoscopy images, we create a unique approach to categorizing melanocytic tumors as malignant or benign. Each newly formed mole has a unique shape and colour compared with pre-existing moles, which further complicates melanoma classification. To overcome these issues, this paper uses deep learning techniques. A four-step system for classifying melanoma is described. The first stage is pre-processing: hair is removed from the dermoscopic images using a Laplacian-based algorithm, and noise is then removed using a median filter. The second stage is feature extraction from the pre-processed images, extracting features including texture, shape, and color using the Principal Component Analysis (PCA) technique. Thirdly, the LeNet-5 approach is utilized to locate the lesion and segment the skin lesion. Fourth, the ANU-Net technique is used to categorize the lesion as cancerous (melanoma) or non-cancerous (non-melanoma). The system is evaluated on performance parameters such as precision, sensitivity, accuracy, and specificity. The results are compared with those of current systems and show higher accuracy.

Author 1: Vankayalapati Radhika
Author 2: B. Sai Chandana

Keywords: Melanoma; LeNet-5; ANU-Net; dermoscopy images; benign; classification

Download PDF

Paper 110: Method for Determination of Tealeaf Plucking Date with Cumulative Air Temperature: CAT and Photosynthetically Active Radiation: PAR

Abstract: A method for determining the tealeaf plucking date using cumulative air temperature and Photosynthetically Active Radiation (PAR), as provided by the remote sensing satellites Terra/MODIS and Aqua/MODIS, is proposed. In addition, the thermal environment at the intensive-study tea farm areas is confirmed with a Landsat-9 TIR (Thermal Infrared) image. Through a regression analysis between the harvested tealeaf quality and the cumulative air temperature and PAR at the intensive study areas, a highly reliable relation between the two is found. The importance of the air temperature environment at the sites is also confirmed with the Landsat-9 TIR image.

Author 1: Kohei Arai
Author 2: Yoshiko Hokazono

Keywords: Plucking date; elapsed days after sprouting; cumulative air temperature; Landsat-9 TIR; theanine; regressive analysis

Download PDF

Paper 111: Remote Monitoring Solution for Cardiovascular Diseases based on Internet of Things and PLX-DAQ add-in

Abstract: Access to health care remains a real problem in Africa, especially for the follow-up of patients with chronic diseases. Many heart attack deaths are still recorded before the victims can access treatment. This is due to several factors, namely the insufficient number of cardiologists, the inaccessibility of hospitals with adequate infrastructure, and people's carelessness and ignorance about their health. In response to these limitations, the Internet of Things, thanks to its remarkable technological contribution, allows a patient's condition to be followed easily and from afar. In this paper, we offer a ubiquitous remote-monitoring solution for patients with cardiovascular disease in order to minimize or eliminate the risk of heart attacks. The proposed solution is based on a microservice architecture and consists of two essential parts: data acquisition and data transfer. It allows patients to access their physical data and submit them in real time to the doctor through a dedicated medical application. The doctor can then analyse the data obtained and return a prescription to the patient in case of abnormality. We used the Arduino ESP8266 microcontroller; the AD8232 ECG heart rate monitor to measure the electrical activity of the heart, which can be traced as an ECG (electrocardiogram); a pulse sensor; an LDR photoresistor; and a potentiometer to regulate and modify the current flow in the circuit. We also used the PLX-DAQ add-in for data acquisition and the Jira software for data transfer to the doctor. Our solution is inexpensive and helps people not yet suffering from cardiovascular disease to prevent it.

Author 1: Jeanne Roux NGO BILONG
Author 2: Yao Gaspard Magnificat BOSSOU
Author 3: Adam Ismael Paco SIE
Author 4: Gervais MENDY
Author 5: Cheikhane SEYED

Keywords: Cardiovascular diseases; Arduino microcontroller; ESP8266; AD8232; PLX-DAQ; IoT; ECG

Download PDF

Paper 112: Anomaly Detection in Video Surveillance using SlowFast Resnet-50

Abstract: Surveillance systems are widely used in malls, colleges, schools, shopping centers, airports, etc., owing to the increasing crime rate in daily life. Monitoring and detecting abnormal activities 24x7 from a surveillance system is a very tedious task, so the detection of abnormal events in videos is a hugely demanding area of research. The framework proposed in this paper is based on deep learning concepts: SlowFast Resnet50 is used to extract and process the features, after which a deep neural network generates a class using the Softmax function. The proposed framework has been applied to the UCF-Crime dataset, which includes 1900 videos with 13 classes, using a Graphics Processing Unit (GPU). Our proposed algorithm is evaluated by accuracy and works better than existing algorithms: it achieves 47.8% higher accuracy than the state-of-the-art method and also achieves good accuracy compared with other approaches used for detecting abnormal activity on the UCF-Crime dataset.

Author 1: Mahasweta Joshi
Author 2: Jitendra Chaudhari

Keywords: Accuracy; GPU (Graphics Processing Unit); SlowFast Resnet50; Softmax; UCF-Crime dataset

Download PDF

Paper 113: Address Pattern Recognition Flash Translation Layer for Quadruple-level cell NAND-based Smart Devices

Abstract: The price of solid-state drives has become a major factor in the development of flash memory technology. Major semiconductor companies are developing quadruple-level cell (QLC) NAND-based SSDs for smart devices. Unfortunately, SSDs composed of QLC flash memory may suffer from low performance. In addition, few studies on internal page buffering mechanisms have been conducted. As a solution to these problems, an address pattern recognition flash translation layer (APR-FTL) is proposed in this study. APR-FTL gathers data in page units and separates random data from sequential data. Furthermore, APR-FTL provides an address mapping algorithm that is compatible with the page buffering algorithm. Experimental results show that APR-FTL generates a lower number of write and erase operations compared with previous FTL algorithms.

Author 1: Se Jin Kwon

Keywords: Memory management; nonvolatile memory; smart devices

Download PDF

Paper 114: Hybrid Deep Learning Signature based Correlation Filter for Vehicle Tracking in Presence of Clutters and Occlusion

Abstract: Vehicle tracking is an important task in smart traffic management. Tracking is very challenging in the presence of occlusions, clutter, variations in real-world lighting, scene conditions, and camera vantage. The joint distribution of vehicle movement, clutter, and occlusions introduces larger errors in particle-tracking-based approaches. This work proposes a hybrid tracker that adapts kernel- and particle-based filters with an aggregation signature, fusing the results of both to obtain an accurate estimate of the target vehicle in video frames. The aggregation signature of the object to be tracked is constructed using a probabilistic distribution function of lighting variation, clutter, and occlusions with a deep learning model in the frequency domain. The work also proposes a fuzzy adaptive background modeling and subtraction algorithm to remove the backgrounds and clutter affecting tracking performance. This hybrid tracker improves tracking accuracy even in the presence of large disturbances in the environment. The proposed solution is able to track objects with 3% higher precision compared with existing works, even in the presence of clutter.

Author 1: Shobha B. S
Author 2: Deepu. R

Keywords: Smart traffic management; background subtraction; vehicle detection; aggregation signature; hybrid tracker

Download PDF

Paper 115: Improving Slope Stability in Open Cast Mines via Machine Learning based IoT Framework

Abstract: Slope stability has been a matter of concern for most geologists, mainly because unstable slopes cause a greater number of accidents, which in turn reduces the efficiency of mining operations. To reduce the probability of these slope instabilities, methods like tension crack mapping, inclinometer measurements, time domain reflectometry, borehole extensometers, piezometers, radar systems, and image processing systems are deployed. These systems work efficiently for single-site slope failures, but as the number of mining sites increases, the dependency of one site's slope failure on nearby sites also increases. Current systems are not able to capture this data, which increases the probability of accidents at open cast mines. To reduce this probability, a high-efficiency Internet of Things (IoT) based continuous slope monitoring and control system is designed. This system improves the efficiency of real-time slope monitoring by using a sensor array consisting of a radar, reflectometer, inclinometer, piezometer, and borehole extensometer. All these measurements are fed to a high-efficiency machine learning classifier that uses data mining, and based on its output, suitable actions are taken to reduce accidents during mining. This information is disseminated to nearby mining sites to inform them about any inconsistencies that might occur due to slope changes at the current site. Results were simulated using the HIgh REsolution Slope Stability Simulator (HIRESSS); an efficiency improvement of 6% is achieved for slope analysis in open cast mines, while the probability of accidents is reduced by 35% compared with a traditional non-IoT-based approach.

Author 1: Sameer Kumar Das
Author 2: Subhendu Kumar Pani
Author 3: Abhaya Kumar Samal
Author 4: Sasmita Padhy
Author 5: Sachikanta Dash
Author 6: Singam Jayanthu

Keywords: Opencast; mining; slope; IoT; stability; machine learning; data mining

Download PDF

Paper 116: Cross-Event User Reaction Prediction on a Social Network Platform

Abstract: Social networks surge with tweets expressing a mixture of emotions from many users when events like rape, robbery, war, and murder occur. We use this user data to analyze user emotions across cross-events and to predict user reactions to the next such event. Cross-events are a series of events that fall under the same umbrella of topics and are related to the events occurring before them. The proposed system solves this problem using collaborative filtering with topical and social context. The TextRank algorithm, an unsupervised algorithm, is used for keyword extraction. A count vectorizer is applied to the preprocessed text to obtain word frequencies, which are used as training data to estimate emotion probabilities with a logistic regression model. We incorporated social context along with topical context to account for homophily, and used the low-rank matrix factorization method for user-topic prediction. The model outputs a total of 8 emotions: Shame, Disgust, Anger, Fear, Sadness, Neutral, Surprise, and Joy. Finally, the model is able to predict emotions with an accuracy of 95% across cross-events.
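The count-vectorization step the abstract describes can be sketched in a few lines of pure Python; this is illustrative only (a real pipeline would use a library implementation and feed the resulting count matrix to logistic regression), and the example documents are invented.

```python
from collections import Counter

def count_vectorize(docs):
    """Minimal bag-of-words counting: build a sorted vocabulary over all
    documents, then return one word-count row per document."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    rows = []
    for d in docs:
        counts = Counter(d.lower().split())
        rows.append([counts.get(w, 0) for w in vocab])
    return vocab, rows
```

Each row is the fixed-length frequency vector a logistic regression model would be trained on, one row per tweet.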

Author 1: Pramod Bide
Author 2: Sudhir Dhage

Keywords: Twitter; cross events; collaborative filtering; logistic regression; social and topical context

Download PDF

© The Science and Information (SAI) Organization Limited. Registered in England and Wales. Company Number 8933205. All rights reserved. thesai.org