The Science and Information (SAI) Organization
IJACSA Volume 13 Issue 11

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.


Paper 1: Investigating the User Experience of Mind Map Software: A Comparative Study based on Eye Tracking

Abstract: Software for creating mind maps is currently prevalent, and it should offer strong usability and a good user experience. Usability testing can uncover flaws in software usability and support its optimization. This paper took the mind map software "Xmind" and "MindMaster" as case studies and conducted comparative research on three aspects: effectiveness, efficiency, and satisfaction. The research investigated 20 participants' interactions with the two applications. Task completion rate, number of errors, and number of requests for help were collected to evaluate effectiveness. Eye tracking data and task completion time were collected to evaluate efficiency. System usability, interface quality, and emotional dimensions were measured with subjective scales to assess user satisfaction. Together, the data showed that each application has several usability issues. The use of jargon to explain functions raised the cost of learning and quickly undermined users' confidence in the software; the interface's simplicity affected satisfaction, although users tended to evaluate utility tools in terms of their ease of use and ease of learning. These findings could be used to optimize utility software.

Author 1: Junfeng Wang
Author 2: Xi Wang
Author 3: Jingjing Lu
Author 4: Zhiyu Xu

Keywords: Usability; mind map software; comparative research; eye tracking; user experience

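The abstract above measures satisfaction with subjective usability scales but does not name them; the System Usability Scale (SUS) is the most common choice in such studies, and its standard scoring rule can be sketched as follows (the use of SUS here is an assumption, not the authors' stated instrument):

```python
def sus_score(responses):
    """Score one SUS questionnaire (10 items, each rated 1-5).

    Odd-numbered items are positively worded (contribution = rating - 1),
    even-numbered items negatively worded (contribution = 5 - rating).
    The summed contributions are scaled to the 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, rating in enumerate(responses, start=1):
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5

# A neutral respondent (all 3s) lands exactly in the middle of the scale.
print(sus_score([3] * 10))
```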

Paper 2: Character Level Segmentation and Recognition using CNN Followed Random Forest Classifier for NPR System

Abstract: A number plate recognition system must identify the plate quickly and accurately in both low-light and noisy conditions, within a specified time limit. Manual registration is error-prone: human error during affirmation and enrollment is a distinct possibility, personnel at the selected location may find registering and composing information by hand difficult and time-consuming, and the printed records make the information impossible to communicate. This study therefore proposes automated authentication, which reduces security and individual workload while eliminating the requirement for human credential verification. Four processes follow the acquisition of an image: pre-processing, number plate localization, character segmentation, and character identification. Character segmentation breaks the number plate region down into individual characters, and character recognition detects the optical characters. Our approach was tested on genuine license plate images under various environmental conditions and achieved an overall recognition accuracy of 91.54% with a single license plate in an average of 2.63 seconds.

Author 1: U. Ganesh Naidu
Author 2: R. Thiruvengatanadhan
Author 3: S. Narayana
Author 4: P. Dhanalakshmi

Keywords: Character segmentation; convolutional neural networks; bilateral filter; character recognition; SVM classifier

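The abstract does not detail how character segmentation is performed; a common baseline, sketched here under that assumption, is vertical projection profiling: columns of a binarized plate image with no foreground pixels separate adjacent characters.

```python
import numpy as np

def segment_characters(binary_plate, min_width=2):
    """Split a binarized plate image (foreground = 1) into per-character
    column ranges using the vertical projection profile: columns whose
    foreground count is zero separate adjacent characters."""
    profile = binary_plate.sum(axis=0)   # foreground pixels per column
    segments, start = [], None
    for x, count in enumerate(profile):
        if count > 0 and start is None:
            start = x                    # a character region begins
        elif count == 0 and start is not None:
            if x - start >= min_width:   # ignore specks narrower than min_width
                segments.append((start, x))
            start = None
    if start is not None and binary_plate.shape[1] - start >= min_width:
        segments.append((start, binary_plate.shape[1]))
    return segments
```

Each returned `(start, stop)` column range can then be cropped out and passed to the recognition stage.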

Paper 3: Prediction of Micro Vascular and Macro Vascular Complications in Type-2 Diabetic Patients using Machine Learning Techniques

Abstract: Diabetes mellitus is a group of metabolic conditions defined by hyperglycemia resulting from deficiencies in insulin secretion, insulin action, or both. In terms of mortality rate, type-2 diabetes is 20 times higher than type-1. Based on earlier research, there is still scope to identify different risk levels of type-2 diabetes complications. To achieve this, we propose T2DC, a machine learning-based prediction system that uses a decision tree as a base estimator with a random forest to identify the severity of T2-DM complications at an early stage. Our proposed model achieved accuracies of 95.43%, 94.62%, 96.25%, 97.55%, and 97.83% for Nephropathy, Neuropathy, Retinopathy, Cardiovascular, and Peripheral Vascular complications in T2-DM patients. The proposed model has the potential to improve clinical outcomes by promoting the delivery of early and personalized care to T2-DM patients.

Author 1: Bandi Vamsi
Author 2: Ali Al Bataineh
Author 3: Bhanu Prakash Doppala

Keywords: Diabetes mellitus; micro vascular; macro vascular; machine learning; type-2 complications

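A minimal sketch of the abstract's classifier choice (decision trees as base estimators within a random forest), using scikit-learn on synthetic data since the patient dataset is not public; the feature count, class count, and sample size are placeholders, not the paper's values.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the (private) T2-DM complication data:
# each row is a patient record, the label a complication severity class.
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# A random forest is itself an ensemble of decision trees, matching the
# abstract's "decision tree as a base estimator with random forest".
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```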

Paper 4: Using Incremental Ensemble Learning Techniques to Design Portable Intrusion Detection for Computationally Constraint Systems

Abstract: Computers have evolved over the years, and as the evolution continues, we have been ushered into an era in which high-speed internet makes it possible for devices in our homes, hospitals, energy grids, and industries to communicate with each other. This era is known as the Internet of Things (IoT). IoT brings enormous benefits to a country's health, energy, transportation, and agriculture sectors. These benefits, coupled with the computational constraints of IoT devices, make it challenging to deploy enhanced security protocols on them, making IoT devices a target of cyber-attacks. One approach used in traditional computing over the years to fight cyber-attacks is the Intrusion Detection System (IDS). However, it is practically impossible to deploy an IDS meant for traditional computers in IoT environments because of the computational constraints of these devices. This study proposes a lightweight IDS for IoT devices using an incremental ensemble learning technique. We used Gaussian Naive Bayes and Hoeffding trees to build our incremental ensemble model, which was then evaluated on the TON IoT dataset and compared with other proposed state-of-the-art methods evaluated on the same dataset. The experimental results show that the proposed model achieved an average accuracy of 99.98%. We also evaluated the memory consumption of our model, which remained lightweight, ranging from 122.38 KB at the lowest to 650.11 KB at the highest.

Author 1: Promise R. Agbedanu
Author 2: Richard Musabe
Author 3: James Rwigema
Author 4: Ignace Gatare

Keywords: Cyber-security; ensemble machine learning; incremental machine learning; Internet of Things; intrusion detection; online machine learning


Paper 5: Blockchain based Framework for Efficient Student Performance Tracking (BloSPer)

Abstract: To maintain a sustainable economy, the government of Malaysia is working to improve the standards of education in higher education institutions. According to reports, around 32% of students enrolled in Malaysian public universities are unable to graduate on time, for unknown reasons. To ensure more students graduate on time with a high quality of education, continuous monitoring of students is essential; continual tracking allows both the student and the educator to identify weak performers at an early stage. Tracking student performance manually is challenging, but with advancements in information technology, keeping track of student performance has become much easier. The fundamental aim of this paper is therefore to present a novel blockchain framework for record keeping and student performance tracking, which we name BloSPer (Blockchain Student Performance Tracking System). BloSPer has an edge over existing systems, which suffer from single points of failure and unreliable data. The proposed framework enables students and educators to track student performance in a more convenient and transparent manner, making it simpler to analyze the reasons for a student's poor performance. Moreover, the data gathered through the system is more reliable and better suited to analytics because of the tamper resistance provided by blockchain, leading to better-informed institutional decisions about improving each individual candidate's performance.

Author 1: Aisha Zahid Junejo
Author 2: Anton Dziatkovskii
Author 3: Manzoor Ahmed Hashmani
Author 4: Uladzimir Hryneuski
Author 5: Ekaterina Ovechkina

Keywords: Blockchain; education; performance tracking; trackability; student data analytics; student monitoring


Paper 6: Impact of Mobile Technology Solution on Self-Management in Patients with Hypertension: Advantages and Barriers

Abstract: Hypertension is a major risk factor for cardiovascular morbidity and mortality and a condition that increases the risk of heart, liver, and other diseases. Since hypertension is one of the biggest global public health issues, patients require more interventions to manage their blood pressure. The widespread use of mobile phones and applications with medication features has turned the smartphone into a medical device, and these tools help physicians in the treatment of hypertension. Mobile health applications are currently used to manage hypertension; however, there is a lack of information regarding their efficacy. Smartphones and their applications are evolving quickly, hence the rise in mobile health application innovation. Mobile applications are helpful in patient education and reinforce behaviour through constant reminders and medication and appointment alarms. The main objective of this study is to determine the impact of mobile health applications on self-management in patients with hypertension, along with their advantages and disadvantages. We limited the search to publications from 2015 onward and examined the first five pages of results in Google Scholar, JSTOR, Hindawi, PubMed, and ResearchGate, grouping all associated terms that might return articles on this subject. We identified 213 database records; 117 duplicates were removed, leaving 96 records for screening. Of these, 31 reports were excluded based on abstract and title, and 65 full-text articles were assessed for final inclusion; 51 were excluded, and 14 studies were included in the qualitative analysis.

Author 1: Adel Alzahrani
Author 2: Valerie Gay
Author 3: Ryan Alturki

Keywords: Impact; self-management; mHealth; hypertension


Paper 7: IRemember: Memorable CAPTCHA Method for Sighted and Visually Impaired Users

Abstract: A CAPTCHA is used to automatically differentiate human users from automated software and prevent bots from accessing unauthorized websites. Most proposed CAPTCHAs are not accessible to visually impaired users because the CAPTCHA's numerical digits are hard to memorize: recalling six random spoken digits is a difficult task for any human, and visually impaired users must typically play the audio several times to memorize the spoken digits in the correct order. The authors reviewed existing CAPTCHAs for visually impaired users and concluded that the high cognitive load of long digit challenges intended for sighted users makes them susceptible to response errors. Thus, the authors propose a novel method that improves on current audio CAPTCHAs by enhancing the display of the challenge and improving the memorability of its phrasing. The proposed CAPTCHA presents short common phrases, such as "piece of cake"; after hearing or seeing a phrase, users type the first letter of each word, such as POC for "piece of cake". A study of 11 visually impaired users found that the memorability and success rate for the IRemember CAPTCHA was 82.72%, compared with only 48.18% for the audio CAPTCHA, and it also demonstrated higher memorability and a lower workload than the traditional audio method. This research indicates that using common knowledge and experience in the design of a CAPTCHA method for these users can enhance performance and minimize workload and, hence, error rates.

Author 1: Mrim Alnfiai
Author 2: Sahar Altalhi
Author 3: Duaa Alawfi

Keywords: CAPTCHA; blind users; visually impaired users; memorability; accessibility

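The challenge/response rule the abstract describes (type the first letter of each word of a short common phrase) can be sketched in a few lines; the phrase list here is illustrative, with only "piece of cake" taken from the paper.

```python
import secrets

# Illustrative phrase pool; only "piece of cake" appears in the paper.
PHRASES = ["piece of cake", "better late than never", "break the ice"]

def new_challenge(phrases=PHRASES):
    """Pick a common phrase; the expected answer is the first letter of
    each word (e.g. "piece of cake" -> "poc"), as in IRemember."""
    phrase = secrets.choice(phrases)
    answer = "".join(word[0] for word in phrase.split())
    return phrase, answer

def verify(answer, response):
    # Case-insensitive, whitespace-tolerant comparison keeps the
    # challenge forgiving for screen-reader users.
    return response.strip().lower() == answer
```

A real deployment would pair this with audio playback of the phrase; the point of the design is that the phrase itself, not an arbitrary digit string, carries the memory load.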

Paper 8: Protein Secondary Structure Prediction based on CNN and Machine Learning Algorithms

Abstract: One of the most important topics in computational biology is protein secondary structure prediction. Primary, secondary, tertiary, and quaternary structure are the four levels of complexity used to characterize the complete structure of a protein, all determined by its amino acid sequence. A secondary structure refers to the local configuration of a protein's polypeptide backbone. In this paper, three prediction methods are proposed that predict protein secondary structure using machine learning, improved by a convolutional neural network (CNN) model structure with Rectified Linear Units (ReLU) as the activation function. The 2D CNN has been combined with machine learning algorithms including Support Vector Machine (SVM), Naive Bayes (NB), and Random Forest (RF). The SVM is used to correctly classify unseen data, while NB and RF are applied to both classification and regression prediction problems. The 2D CNN and the hybrid 2D CNN-SVM, CNN-RF, and CNN-NB models are evaluated on the RS126, 25PDB, and CB513 datasets, and the Q3 prediction accuracies are compared across the datasets.

Author 1: Romana Rahman Ema
Author 2: Mt. Akhi Khatun
Author 3: Md. Nasim Adnan
Author 4: Sk. Shalauddin Kabir
Author 5: Syed Md. Galib
Author 6: Md. Alam Hossain

Keywords: Protein Secondary Structure Prediction (PSSP); Support Vector Machine (SVM); Naive Bayes (NB); Random Forest (RF); Convolutional Neural Network (CNN)

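A toy sketch of the hybrid CNN-SVM idea: a fixed convolution-ReLU-pooling stage extracts features that a Support Vector Machine then classifies. Real systems learn the filters and encode protein sequences; here random filters and synthetic images keep the example self-contained, so this shows the pipeline shape rather than the paper's model.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def conv_features(X, kernels):
    """Minimal stand-in for a CNN feature extractor: valid 2D
    cross-correlation with fixed kernels, ReLU, then mean-pooling over
    the top and bottom halves of each response map."""
    n, h, w = X.shape
    kh, kw = kernels.shape[1:]
    feats = []
    for k in kernels:
        out = np.zeros((n, h - kh + 1, w - kw + 1))
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                out[:, i, j] = (X[:, i:i + kh, j:j + kw] * k).sum(axis=(1, 2))
        relu = np.maximum(out, 0)
        half = relu.shape[1] // 2
        feats.append(relu[:, :half].mean(axis=(1, 2)))   # top-half pool
        feats.append(relu[:, half:].mean(axis=(1, 2)))   # bottom-half pool
    return np.stack(feats, axis=1)

# Toy data: class 0 has a bright top half, class 1 a bright bottom half.
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 8, 8)) * 0.1
X[y == 0, :4, :] += 1.0
X[y == 1, 4:, :] += 1.0

kernels = rng.normal(size=(4, 3, 3))            # fixed random "CNN" filters
F = conv_features(X, kernels)
svm = SVC(kernel="rbf").fit(F[:150], y[:150])   # SVM on extracted features
acc = (svm.predict(F[150:]) == y[150:]).mean()
```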

Paper 9: A Mobility Management Algorithm in the Internet of Things (IoT) for Smart Objects based on Software-Defined Networking (SDN)

Abstract: In recent decades, technological advancements have significantly improved people's living standards and given rise to the rapid development of intelligent technologies. The Internet of Things (IoT) is one of the most important research topics worldwide. However, the IoT often comprises unreliable wireless networks with hundreds of interconnected mobile sensors. A traditional sensor network typically consists of fixed sensor nodes periodically transmitting data to a pre-determined router, but current applications require sensing devices to be mobile between networks, and mobility management protocols are needed to manage these mobile nodes and provide uninterrupted service to users. Interactions between the mobile nodes are affected by the loss of signaling messages, increased latency, signaling costs, and energy consumption because of the characteristics of these networks, including constrained memory, limited processing power, and a limited energy source. Hence, an algorithm for managing smart devices' mobility on the Internet is necessary. This study proposes an efficient and effective distributed mechanism to manage mobility in IoT devices. Using Software-Defined Networking (SDN) based on the CoAP protocol, the proposed method is intended not only to reduce the signaling cost of messages but also to make mobility management more reliable and simpler.

Author 1: Lili Pei

Keywords: Internet of things (IoT); mobility management; software-defined networking (SDN); CoAP protocol


Paper 10: SPAMID-PAIR: A Novel Indonesian Post–Comment Pairs Dataset Containing Emoji

Abstract: The detection of spam content is an important task, especially on social media, and has been continually studied in the Natural Language Processing (NLP) area over the last few years. However, limited datasets are available for this research topic because most researchers collect data themselves and keep it private. Moreover, most available datasets provide only the post content without the comment content. This is a limitation because the post-comment pair is needed to determine the context of a comment on a particular post, and that context may contribute to the decision of whether a comment is spam or not. The scarcity of non-English datasets, including Indonesian, is another issue. To address these problems, the authors introduce SPAMID-PAIR, a novel post-comment pair dataset in Indonesian collected from Instagram (IG). It was gathered from 13 selected Indonesian actress/actor accounts, each with more than 15 million followers, and contains 72,874 pairs of data. The dataset has been annotated with spam/non-spam labels in Unicode (UTF-8) text format and includes many emojis/emoticons from IG. To establish baseline performance, the data was tested with several machine learning methods under several scenarios and achieved good performance. This dataset is intended to support replicable experiments in spam content detection on social media and other NLP tasks.

Author 1: Antonius Rachmat Chrismanto
Author 2: Anny Kartika Sari
Author 3: Yohanes Suyanto

Keywords: Dataset; natural language processing; spam detection; spamid-pair; post-comment pairs


Paper 11: Cedarwood Quality Classification using SVM Classifier and Convolutional Neural Network (CNN)

Abstract: Cedarwood is one of the most sought-after materials, since it can be used to create a wide variety of household appliances. Besides its unique aroma, the product's quality is its most important selling attribute, and fiber patterns allow a qualitative categorization of this wood. Traditionally, workers in the wood-processing business have relied solely on their eyesight to sort materials into categories. The resulting discrepancies in precision and efficiency hurt the reputation of the regional wood sector, and machine learning offers an answer to this issue. In this study, we compare the performance of two cedarwood quality classification systems built on different machine learning methods: a Support Vector Machine (SVM) and a Convolutional Neural Network (CNN). Each system receives images captured with a Logitech Brio 4K equipped with a joystick and ultrasonic sensors, labeled as belonging to one of five cedar classes (A, B, C, D, or E). In the first system, the Histogram of Oriented Gradients (HOG) is used to learn the wood's pattern and texture, and an SVM performs the classification; accuracy and computation time are then compared. This system achieves 90 percent accuracy with a computation time of 1.40 seconds. The second system uses a CNN, a deep learning technique, in which feature extraction occurs in the convolution, activation, and pooling layers. Experimental results demonstrated a considerable enhancement, with an accuracy of 97% and a prediction speed of 0.56 seconds.

Author 1: Muhammad Ary Murti
Author 2: Casi Setianingsih
Author 3: Eka Kusumawardhani
Author 4: Renal Farhan

Keywords: Cedarwood classification; convolutional neural network (CNN); HoG feature; SVM classification

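The HOG step above can be illustrated with a stripped-down version: a magnitude-weighted histogram of gradient orientations over the whole image. Real HOG, presumably as used in the paper, adds spatial cells and block normalization on top of this idea.

```python
import numpy as np

def hog_like(img, bins=9):
    """Simplified HOG-style descriptor: an unsigned (0-180 degree),
    magnitude-weighted gradient orientation histogram over the whole
    image, L2-normalized."""
    gy, gx = np.gradient(img.astype(float))         # row- and column-gradients
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180      # fold to unsigned angles
    idx = np.minimum((ang / (180 / bins)).astype(int), bins - 1)
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), mag.ravel())       # weight by magnitude
    return hist / (np.linalg.norm(hist) + 1e-9)
```

A vertical edge (intensity step across columns) produces purely horizontal gradients, so all the weight lands in the 0-degree bin; transposing the image moves it to the 90-degree bin. Wood-fiber patterns with a dominant grain direction concentrate mass in a few bins the same way, which is what makes the descriptor useful for the SVM.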

Paper 12: Ransomware Detection using Machine and Deep Learning Approaches

Abstract: Due to advancements in and easy accessibility of computer and internet technology, network security has become vulnerable to hacker threats. Ransomware is malware frequently used in cyber-attacks to trick victims into exposing sensitive and private information to attackers; victims may then lose access to their data until they pay a ransom for the stolen files. Different methods have been introduced to overcome these issues, but an extensive literature review shows that lexical features alone are not always sufficient to detect categories of malicious URLs. This paper proposes a model to detect ransomware using machine and deep learning approaches. The model introduces a novel classification feature based on whether a URL starts with "https://www.", a feature not considered in earlier papers on malicious URL identification. In addition, this paper introduces a novel dataset consisting of 405,836 records. Two main experiments were carried out utilizing malicious URL features to defend against ransomware using the proposed dataset. Moreover, to enhance and optimize accuracy, various hyper-parameters were tested on the same dataset to define the optimal factors of every method. According to the comparative and experimental results of the applied classification techniques, the proposed model achieved the best performance, a 99.8% accuracy rate for detecting malicious URLs using machine and deep learning.

Author 1: Ramadhan A. M. Alsaidi
Author 2: Wael M.S. Yafooz
Author 3: Hashem Alolofi
Author 4: Ghilan Al-Madhagy Taufiq-Hail
Author 5: Abdel-Hamid M. Emara
Author 6: Ahmed Abdel-Wahab

Keywords: Machine learning; ransomware; URL classification; malicious URLs; deep learning

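The abstract's novel feature, whether a URL starts with "https://www.", is easy to sketch alongside a few common lexical features from the malicious-URL literature (only the prefix flag is taken from the paper; the remaining features are illustrative additions):

```python
import re

def url_features(url):
    """Extract simple lexical features from a URL. The
    'starts_https_www' flag is the novel feature from the abstract; the
    other fields are common choices in malicious-URL work, not the
    paper's exact feature list."""
    return {
        "starts_https_www": int(url.startswith("https://www.")),
        "length": len(url),
        "num_digits": sum(c.isdigit() for c in url),
        "num_special": len(re.findall(r"[^A-Za-z0-9]", url)),
        # Raw IPv4 hosts are a classic phishing/malware indicator.
        "has_ip": int(bool(re.search(r"\d{1,3}(?:\.\d{1,3}){3}", url))),
    }
```

Feature dictionaries like this would then be vectorized and fed to the machine- or deep-learning classifiers the paper compares.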

Paper 13: Evaluation of Online Teaching in the Covid Period using Learning Analytics

Abstract: The article compares education at the Faculty of Economics of Matej Bel University before and during the coronavirus pandemic. At the same time, it tries to outline what education will look like after this situation is over, asking how the pandemic affected the education of economists and to what extent the changes it brought will be preserved. Face-to-face and distance learning in 2019 and 2020 were compared, because teaching in 2019 was carried out in the "classic", face-to-face manner, whereas in 2020, after the closure of schools in March, teaching at Matej Bel University was carried out only by the distance online method. To get the best possible view of the researched topic, several research methods were used: examination of the LMS Moodle using various Learning Analytics tools, and questionnaire research. The results showed that face-to-face education will no longer be the same after the Covid pandemic, because distance online education will also cause changes in face-to-face education in the post-pandemic period. The questionnaire research showed that up to 78% of part-time students and 61% of full-time students would like their study program to use elements of distance education in full-time study as well. Since this is a large group of students, their opinion will be considered in the future when fully returning to face-to-face teaching.

Author 1: Jolana Gubalova

Keywords: Distance online learning; learning management system; moodle; collaboration platform microsoft teams


Paper 14: Multi-Feature Extraction Method of Power Customer’s Portrait based on Knowledge Map and Label Extraction

Abstract: To visualize power customer characteristics and better provide services to power customers, a multi-feature extraction method for power customer portraits based on a knowledge map and label extraction is studied. A portrait construction model is designed in which the knowledge map construction step collects customer-related data from the power system's official website and database and cleans and converts the data. In the multi-feature analysis step, natural language processing is used to analyze customer characteristics through Chinese word segmentation, vocabulary weight determination, and emotion calculation. Based on the feature analysis results, portrait labels are extracted to generate the power customer portrait, which supports applications such as customer feature visualization, customer recommendation, and customer evaluation. The experimental results show that this method can effectively construct the knowledge map of power customers, accurately extract customer characteristics, generate labels, and realize the visualization of power customer portraits.

Author 1: Wentao Liu
Author 2: Liang Ji

Keywords: Knowledge map; label extraction; power customer’s portrait; multi-feature extraction; natural language processing; feature visualization


Paper 15: Student Acceptance Towards Online Learning Management System based on UTAUT2 Model

Abstract: Recently, education has shifted from physical learning to online and hybrid learning, and the outbreak of COVID-19 has made these modes more significant. An online learning management system (LMS) is one of the most prevalent approaches to online and distance learning. Student acceptance of the LMS matters, because student responses, good or bad, determine its success. However, Universiti Tun Hussein Onn Malaysia (UTHM) had not yet conducted any study to examine its LMS. The Unified Theory of Acceptance and Use of Technology (UTAUT2) model is used in this study to investigate students' Behavioral Intention and Use Behavior when using the LMS at UTHM, and a new construct named Online Learning Value is introduced into UTAUT2. 376 respondents took part in the survey, and the data were analyzed with descriptive statistics, reliability analysis, the Pearson correlation coefficient, and multiple linear regression. The outcome is that Performance Expectancy (β=0.129, p=0.014), Hedonic Motivation (β=0.221, p=0.000), Online Learning Value (β=0.109, p=0.036), and Habit (β=0.513, p=0.000) influence students' intention to use the LMS. Besides that, Facilitating Conditions (β=0.481, p=0.000) are the most important factor in students' use behavior toward the LMS, followed by Habit (β=0.343, p=0.000) and Behavioral Intention (β=0.239, p=0.000). By utilizing the UTAUT2 model, the constructs of technology acceptance related to students' adoption of the LMS have been identified and may serve as a reference for stakeholders in future enhancements.

Author 1: Masitah Musa
Author 2: Mohd. Norasri Ismail
Author 3: Suhaidah Tahir
Author 4: Mohd. Farhan Md. Fudzee
Author 5: Muhamad Hanif Jofri

Keywords: Online learning management system; technology acceptance; unified theory of acceptance and usage of technology 2; online learning value

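The standardized regression coefficients (β) reported above come from multiple linear regression on survey constructs; a sketch on synthetic scores, where a "Habit" predictor is made the dominant driver to echo the abstract's largest β (Habit, β = 0.513). The data and coefficients here are simulated, not the study's.

```python
import numpy as np

def standardized_betas(X, y):
    """Standardized regression coefficients (as reported in UTAUT2
    studies): z-score predictors and outcome, then fit ordinary least
    squares and drop the intercept."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    A = np.column_stack([np.ones(len(Xz)), Xz])   # add intercept column
    coef, *_ = np.linalg.lstsq(A, yz, rcond=None)
    return coef[1:]

rng = np.random.default_rng(0)
# Synthetic survey scores: Behavioral Intention driven mostly by Habit.
habit = rng.normal(size=300)
perf = rng.normal(size=300)
bi = 0.5 * habit + 0.1 * perf + rng.normal(scale=0.5, size=300)
betas = standardized_betas(np.column_stack([habit, perf]), bi)
```

Because everything is z-scored, the betas are directly comparable across predictors, which is why UTAUT2 papers rank constructs by them.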

Paper 16: Object Pre-processing using Motion Stabilization and Key Frame Extraction with Machine Learning Techniques

Abstract: Video information processing is one of the most important application areas in research, and it raises various pre-processing problems. Issues such as unstable video frame rates or capture angles, noisy data, and the large size of video data prevent researchers from applying information retrieval or categorization algorithms, while the video data itself plays a vital role in many areas. This work aims to solve motion stabilization, noise reduction, and key frame extraction without losing information and in reduced time. The work results in a 66% reduction in extracted key frames and nearly 6 ns for complete video data processing.

Author 1: Kande Archana
Author 2: V Kamakshi Prasad

Keywords: Information loss preventive; mean angle measure; key frame extraction; moving average; dynamic thresholding

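The keywords above name moving averages and dynamic thresholding for key frame extraction without giving the rule; one plausible sketch, under that assumption and not the authors' exact method, flags a frame as key when its difference from the previous frame exceeds the moving average of recent differences by k standard deviations.

```python
import numpy as np

def key_frames(frame_diffs, window=5, k=1.5):
    """Pick key frame indices from a sequence of inter-frame difference
    scores. A frame is a key frame when its difference exceeds a dynamic
    threshold: the moving average of the previous `window` differences
    plus k standard deviations."""
    keys = []
    for i, d in enumerate(frame_diffs):
        recent = frame_diffs[max(0, i - window):i]
        if not recent:
            continue                      # no history yet for frame 0
        mu, sigma = np.mean(recent), np.std(recent)
        if d > mu + k * sigma:
            keys.append(i)
    return keys
```

On a stream of mostly small differences with occasional scene changes, only the spikes clear the adaptive threshold, which is the mechanism that yields the frame-count reduction the abstract reports.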

Paper 17: A Review on Approaches in Arabic Chatbot for Open and Closed Domain Dialog

Abstract: A chatbot is a computer program that facilitates human-like communication between an artificial agent and human users. Arabic, unlike other languages, has featured in relatively few Natural Language Processing works, owing to the lack of corpora and to the complexity of a language whose many dialects extend across countries around the world. In the current scenario, little research has been conducted on Arabic chatbots. This study reviews the existing literature on Arabic chatbot studies to determine knowledge gaps and suggest areas that require additional study and research. The search was conducted using keywords such as 'utterance', 'chatbot', 'ArabChat', 'chat agent', 'dialogue', 'interactive agent', 'chatterbot', 'conversational robot', 'artificial conversational', and 'conversational agent'. The study further examines the existing approaches in open- and closed-domain dialog systems and how they work in the case of Arabic chatbots. It identifies a severe lack of studies on Arabic chatbots and observes that the majority rely on pattern matching or AIML techniques and are retrieval-based or rule-based.

Author 1: Abraheem Mohammed Sulayman Alsubayhay
Author 2: Md Sah Hj Salam
Author 3: Farhan Bin Mohamed

Keywords: Arabic chatbot; artificial intelligence; arabchat; human-machine interaction; conversational agent


Paper 18: Facial Emotion Detection using Convolutional Neural Network

Abstract: Non-verbal communication methods, e.g. facial expressions, eye movement, and gestures, are used in many applications of human-computer interaction; among them, facial emotion is most widely used, as it conveys people's emotional states and feelings. In machine learning approaches, several significant extracted features are used for modeling the face; as a result, they do not achieve a high accuracy rate, because the features depend on prior knowledge. This work applies a Convolutional Neural Network (CNN) to the recognition of facial emotional expressions. Facial expressions play an essential part in nonverbal communication, appearing as a result of a person's internal feelings reflected in the face. This paper uses the algorithm to detect features of a face such as the eyes, nose, etc., and identifies feelings from the mouth and eyes. The proposed method distinguishes anger, contempt, disgust, fear, happiness, sadness, and surprise, seven emotions, from frontal facial images of people. The final result gives an accuracy of 63% with the CNN model and 85% with the ResNet model.

Author 1: Pooja Bagane
Author 2: Shaasvata Vishal
Author 3: Rohit Raj
Author 4: Tanushree Ganorkar
Author 5: Riya

Keywords: Feature extraction; convolutional neural network; resnet; emotion recognition; emotion detection; facial recognition

PDF

Paper 19: Research on Sentiment Analysis Algorithm for Comments on Online Ideological and Political Courses

Abstract: The online course teaching platform provides a more accessible and open teaching environment for teachers and students. The sentiment tendency reflected in the online course comments becomes an essential basis for teachers to adjust the course and students to choose the course. This paper combined two deep learning algorithms, i.e., a convolutional neural network (CNN) algorithm and a long short-term memory (LSTM) algorithm, to identify and analyze the emotional tendency of comments on online ideological and political courses. Moreover, the CNN+LSTM-based sentiment analysis algorithm was simulated in MATLAB software. The influence of the text vectorization method on the recognition performance of the CNN+LSTM algorithm was tested; then, it was compared with support vector machine (SVM) and LSTM algorithms, and the comments on online ideological and political courses were analyzed. The results showed that the recognition performance of the CNN+LSTM-based sentiment analysis algorithm adopting the Word2vec text vectorization method was better than that adopting the one-hot text vectorization method; at recognizing the sentiment of comment texts, the CNN+LSTM algorithm performed best, the LSTM algorithm second best, and the SVM algorithm worst; 86.36% of the selected comments on ideological and political courses contained positive sentiment, and 13.64% contained negative sentiment. Relevant suggestions were given based on the negative comments.

Author 1: Xiang Zhang
Author 2: Xiaobo Qin

Keywords: Online courses; comment; sentiment tendency; long short-term memory

PDF
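The abstract above compares Word2vec and one-hot text vectorization. As an illustrative sketch only (not the authors' code), a one-hot scheme can be implemented as follows: each tokenized comment becomes a fixed-length 0/1 vector over a shared vocabulary, which, unlike Word2vec embeddings, carries no notion of semantic similarity between words:

```python
def one_hot_vectorize(tokenized_docs):
    """One-hot (word-presence) vectors over a shared vocabulary.

    Each document becomes a fixed-length 0/1 vector with one slot per
    vocabulary word. The vectors are sparse and treat every pair of
    distinct words as equally unrelated.
    """
    vocab = sorted({w for doc in tokenized_docs for w in doc})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for doc in tokenized_docs:
        v = [0] * len(vocab)
        for w in doc:
            v[index[w]] = 1
        vectors.append(v)
    return vocab, vectors
```

Word2vec, by contrast, maps each word to a dense real-valued vector learned from context, which is one plausible reason the paper reports better recognition performance with it.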

Paper 20: Why do Women Volunteer More than Men? Gender and its Role in Voluntary Citizen Reporting Applications Usage and Adoption

Abstract: By researching why citizens are eager to participate in citizen reporting applications, this study contributes to the understanding of citizen-government interaction in open government. Self-determination theory, gender role theory, and social role theory were employed to evaluate the impact of various motivational factors on individual behavioural intentions to participate in citizen reporting applications, as well as the role of gender in moderating their effects. The model was quantitatively tested by collecting 499 responses through a questionnaire from citizens who had previously utilized citizen reporting applications. The model was validated using partial least squares. The findings reveal that social responsibility, output quality, self-concern, and revenge are the motivational antecedents that have the most influence on individuals' motivation to participate in citizen reporting applications, managing to explain 65.9% of behavioural intention variance. Social responsibility is the most significant driver when compared to the others. The study also revealed that gender differences moderate the impact of social responsibility and revenge on user involvement in citizen reporting applications. The current study adds to the existing literature on citizen reporting adoption and usage by examining the motivational factors that affect citizens' engagement across multiple contexts and evaluating the effect of gender in moderating the influence of social responsibility and revenge. Government institutions need to consider gender differences when designing their citizen reporting applications and their associated marketing campaigns.

Author 1: Muna M. Alhammad

Keywords: Self-determination theory; gender role theory; social role theory; motivation; amotivation; gender diversity; social responsibility; citizens reporting application

PDF

Paper 21: Effect of Visuospatial Ability on E-learning for Pupils of Physics Option in Scientific Common Trunk

Abstract: This study aims to reveal the existence of a relationship between the visuospatial capacity of pupils specializing in physics, with high educational performance, and the capacity for E-learning. To conduct the study, we used the Wechsler intelligence test of cognitive ability. Our sample is composed of 204 adolescents, whose average age is 15 years, 12 months, and 11 days, with a standard deviation of 0 years, 1 month and 19 days. The selection criterion was based on the general results and specifically the physics science mark. The results of the study showed the existence of a significant relationship between visuospatial ability and scientific thinking, and statistically significant homogeneity attributed to specialization in visuospatial ability and creative thinking.

Author 1: Khalid Marnoufi
Author 2: Imane Ghazlane
Author 3: Fatima Zahra Soubhi
Author 4: Bouzekri Touri
Author 5: Elhassan Aamro

Keywords: Visuospatial; e-learning; physics; intelligence

PDF

Paper 22: Mobile Devices Supporting People with Special Needs

Abstract: Over the years, various devices designed for people with special needs have been used for a time and then replaced with more modern devices to make everyday life easier. Mobile devices in particular have advanced considerably, and they now facilitate many everyday activities, not only for people with special needs. The purpose of this paper is to present some modern mobile devices with an analysis of their operating systems, functionalities, applications and design. Based on the research, their usability for sighted users as well as visually and hearing-impaired users is described. Attention is paid to the preferences that users form when using specialized applications developed for mobile devices. Based on a survey of specific target user groups, the paper provides summary results to support the thesis on the importance of the facilities offered by modern mobile devices.

Author 1: Tihomir Stefanov
Author 2: Silviya Varbanova
Author 3: Milena Stefanova

Keywords: Mobile devices; mobile operating systems; Android; iOS; special needs; visually impaired; hearing impaired; e-learning

PDF

Paper 23: Vision based 3D Object Detection using Deep Learning: Methods with Challenges and Applications towards Future Directions

Abstract: For autonomous intelligent systems, 3D object detection can act as a basis for decision making by providing information such as an object’s size, position and direction to perceive the surrounding environment. Successful applications using robust 3D object detection can hugely impact the robotics, augmented reality and virtual reality sectors in the context of the Fourth Industrial Revolution (IR4.0). Recently, deep learning has become a potent approach for 3D object detection, learning powerful semantic object features for various tasks, i.e., depth map construction, segmentation and classification; as a result, exponential growth in potential methods has been observed in recent years. Although a good number of efforts have been made to address 3D object detection, a deep and critical review from different viewpoints is still lacking. As a result, comparison among the various methods remains challenging, yet such comparison is important for selecting a method for a particular application. Given the strong heterogeneity of previous methods, this research aims to analyze and systematize the related existing research in terms of challenges and methodologies from different viewpoints, bridging the gaps among various sensors, i.e., cameras, LiDAR and Pseudo-LiDAR, to guide future development and evaluation. First, this research critically analyzes existing sophisticated methods, identifying six significant key areas based on current scenarios, challenges, and significant problems to be addressed. Next, it presents a strict, comprehensive analysis for validating 3D object detection methods on eight authoritative 3D detection benchmark datasets, according to dataset size, and eight validation metrics. Finally, valuable insights into existing challenges are presented as future directions. Overall, the extensive review proposed in this research can contribute significantly to further investigation in multimodal 3D object detection.

Author 1: A F M Saifuddin Saif
Author 2: Zainal Rasyid Mahayuddin

Keywords: 3D object detection; deep learning; vision; depth map; point cloud

PDF

Paper 24: Emotion Estimation Method with Mel-frequency Spectrum, Voice Power Level and Pitch Frequency of Human Voices through CNN Learning Processes

Abstract: An emotion estimation method with Mel-frequency spectrum, voice power level and pitch frequency of human voices through CNN (Convolutional Neural Network) learning processes is proposed. Usually, frequency spectra are used for emotion estimation. The proposed method utilizes not only the Mel-frequency spectrum, but also the voice pressure level (voice power level) and pitch frequency to improve emotion estimation accuracy. These components are used through CNN learning processes, with training samples provided by Keio University (an emotional speech corpus) together with our own training samples collected by our students. In these processes, the target emotion is divided into two categories, confident and non-confident. Through experiments, it is found that the proposed method is superior by 15% to the traditional method that uses only the Mel-frequency spectrum.

Author 1: Taiga Haruta
Author 2: Mariko Oda
Author 3: Kohei Arai

Keywords: e-Learning; emotion estimation; Mel-frequency spectrum; fundamental frequency (pitch frequency); sound pressure level (voice power level)

PDF

Paper 25: Cybersecurity in Deep Learning Techniques: Detecting Network Attacks

Abstract: Deep learning techniques have been found to be useful in a variety of fields, and cybersecurity is one such area. In cybersecurity, both machine learning and deep learning classification algorithms can be used to monitor and prevent network attacks, and to identify system irregularities that may signal an ongoing attack; cybersecurity experts can utilize them to help make systems safer. Eleven classification techniques were employed to examine the popular HTTP DATASET CSIC 2010: machine learning algorithms (Decision Tree, Random Forest, Gradient Boosting, XGBoost, AdaBoost, Multilayer Perceptron, and Voting), the statistical technique K-Means, and three deep learning algorithms (Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and LSTM plus CNN). To evaluate the performance of such models, precision, accuracy, F1-score, and recall were used as metrics. The results showed that, comparing the three deep learning algorithms on these metrics, the LSTM with CNN produced the best performance in this paper. These findings show that this algorithm can detect multiple attacks and defend against external or internal threats to the network.

Author 1: Shatha Fawaz Ghazal
Author 2: Salameh A. Mjlae

Keywords: HTTP DATASET CSIC 2010; deep learning; cybersecurity attacks; detection attacks; network attacks

PDF
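The abstract above evaluates models with precision, accuracy, F1-score, and recall. As a self-contained sketch (illustrative only, not the authors' evaluation code), these metrics can be computed from binary predictions, here encoding 1 = attack and 0 = benign:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1-score for binary labels (1 = attack)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    # Guard against division by zero when a class is never predicted / present.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

Precision penalizes false alarms, recall penalizes missed attacks, and F1 balances the two, which is why all four are commonly reported together for intrusion detection.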

Paper 26: Permission and Usage Control for Virtual Tourism using Blockchain-based Smart Contracts

Abstract: Virtual Tourism (VT) is a booming business with potential perspectives in the entertainment and financial industry. Due to travel restrictions, safety concerns, and expensive travelling, the younger generation is showing interest in virtual tourism instead of traditional tourism. However, virtual tourism does not financially benefit the service providers as much as traditional tourism benefits its stakeholders. An online system is essential to provide a central point of access to various tourism sites along with usage, permission, and payment control. In this paper, a secure blockchain-based broker service for users and content providers is proposed, which allows tourism sites to announce their virtual tours and provides accessibility and accountability. Meanwhile, it enables users to register, subscribe, access, and be billed according to their usage. The permission control module ensures authentication and authorization, while the usage control provides accountability to the predefined service level agreement. The transactions are stored on the blockchain to ensure the integrity of data, and smart contracts are used to ensure automatic usage and permission control. An implementation on Hyperledger Fabric is provided as a proof of concept, with performance measurements as a case study.

Author 1: Muhammad Shoaib Siddiqui
Author 2: Toqeer Ali Syed
Author 3: Adnan Nadeem
Author 4: Waqas Nawaz
Author 5: Ahmad Alkhodre

Keywords: Virtual tourism; permission control; usage control; access control; blockchain

PDF

Paper 27: A Fast Multicore-based Window Entropy Algorithm

Abstract: Malware analysis is a major challenge in cybersecurity due to the regular appearance of new malware and its effect in cyberspace. The existing tools for malware analysis enable reverse engineering to understand the origin, purpose, attributes, and potential consequences of malicious software. Entropy, defined as a measure of the information encoded in a series of values based upon the probability of those values appearing, is one of the techniques used to analyze and detect malware. The window entropy algorithm is one of the methods that can be applied to calculate entropy values in an effective manner. However, it requires a significant amount of time when the file is large. In this paper, we address this problem in two ways. The first improvement is determining the best window size, which minimizes the running time of the window entropy algorithm. The second improvement is parallelizing the window entropy algorithm on a multicore system. The experimental studies using artificial data show that the improved sequential algorithm can reduce the window entropy method’s running time by 79% on average. Also, the proposed parallel algorithm outperforms the modified sequential algorithm by 77% and achieves super-linear speedup.

Author 1: Suha S.A. Shokr
Author 2: Hazem M. Bahig

Keywords: Entropy; window method; malware analysis; parallel algorithm; multicore

PDF
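The window entropy method summarized above slides a window over the file and computes the Shannon entropy of each window. A minimal sketch of the idea (illustrative only; the paper's optimized sequential and parallel algorithms are not reproduced here) updates the byte counts incrementally as the window slides, one byte in and one byte out, instead of recounting every window from scratch:

```python
import math
from collections import Counter

def window_entropy(data: bytes, window: int) -> list:
    """Shannon entropy (bits) of each sliding window over `data`.

    Counts are updated incrementally as the window slides, so each step
    costs O(alphabet) for the entropy sum rather than O(window) recounting.
    """
    if len(data) < window:
        return []
    counts = Counter(data[:window])
    entropies = []
    for i in range(len(data) - window + 1):
        h = -sum((c / window) * math.log2(c / window) for c in counts.values())
        entropies.append(h)
        if i + window < len(data):
            out_b, in_b = data[i], data[i + window]
            counts[out_b] -= 1           # byte leaving the window
            if counts[out_b] == 0:
                del counts[out_b]
            counts[in_b] += 1            # byte entering the window
    return entropies
```

Uniform regions yield entropy near 0, while packed or encrypted regions, common in malware, push the per-window entropy toward 8 bits per byte; the window size trades locality against running time, which is what the paper's first improvement tunes.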

Paper 28: Routing with Multi-Criteria QoS for Flying Ad-hoc Networks (FANETs)

Abstract: A Flying Ad-hoc Network (FANET) is a type of ad-hoc network built on a backbone of Unmanned Aerial Vehicles (UAVs). These networks are used for providing communication services in case of natural disasters. Dynamic changes in link quality and mobility distort the Quality of Service (QoS) for routing in FANETs. This work proposes a Multi-Criteria QoS Optimal Routing (MCQOR) scheme guided by prediction of link quality and the three-dimensional (3D) movement of FANET nodes. The network is clustered based on the predicted movement of nodes. Over the clustered topology, the routing path is selected in a reactive manner with joint optimization of packet delivery ratio, delay, and network overhead. In addition, cross-layer feedback is used to reduce the packet generation rate and congestion in the network. Through simulation analysis, the proposed routing protocol is found to have a 3.8% higher packet delivery ratio, 26% lower delay and 14% lower network overhead compared to existing works.

Author 1: Ch Naveen Kumar Reddy
Author 2: Krovi Raja Sekhar

Keywords: Flying ad-hoc Network; multi-criteria QoS; unmanned aerial vehicles; joint optimization

PDF

Paper 29: The Use of ICTs in the Digital Culture for Virtual Learning of University Students Applying an Artificial Neural Network Model

Abstract: Artificial neural networks are mathematical models of artificial intelligence that intend to reproduce the behavior of the human brain, and whose main objective is the construction of systems capable of demonstrating certain intelligent behavior. The purpose of the investigation is to determine the influence of the use of Information and Communication Technologies (ICTs) in the digital culture on the learning process of university students in Peru and Bolivia in the context of the COVID-19 health emergency, through the application of artificial neural network models. The investigation has a quantitative focus and is of the applied type, with a correlational level and a non-experimental design. Data was collected by means of a digital questionnaire applied to students of two universities. The population is composed of 3980 students of the Universidad Privada Domingo Savio (UPDS, Tarija, Bolivia) and 1506 of the Universidad Nacional de Moquegua (UNAM, Moquegua, Peru). The sample consists of 496 students. The hypothetical-deductive and artificial intelligence methods were used. It was determined that the ability to install software and data protection programs, the use of mobile devices for academic purposes and the command of specialized software are the most influential factors in the digital culture of the students at UNAM and UPDS.

Author 1: José Luis Morales Rocha
Author 2: Mario Aurelio Coyla Zela
Author 3: Nakaday Irazema Vargas Torres
Author 4: Helen Gaite Trujillo

Keywords: Artificial neural network; digital culture; ICT; virtual learning; COVID 19

PDF

Paper 30: A Study of Modelling IoT Security Systems with Unified Modelling Language (UML)

Abstract: The Internet of Things (IoT) has emerged as a technology with applications in many different areas. Security, however, is one of the major challenges with the potential to stifle the growth of the IoT: IoT systems are vulnerable to several cyber attacks and require rigorous techniques to achieve security. In this paper, the Unified Modelling Language (UML) is used to model IoT systems from various views. The purpose of this study is to discuss the need for more modelling in terms of security; for this reason, the paper focuses on modelling the security of IoT systems. The objective is to compare layers by describing the IoT architecture and presenting its components; in other words, the research question concerns how security is modelled in the IoT layers. There is no standard that takes the security of the IoT architecture into account; there are different proposals for the IoT layers, which means that each author has their own vision and proposition. Moreover, there is a lack of modelling languages for IoT security systems. The main interest of this study is to choose the layer on which attention should focus; the question then is: “which layer is the most relevant to model?” The obtained results were conclusive and provided the best insight into the specifications of each layer of the studied IoT architecture.

Author 1: Hind Meziane
Author 2: Noura Ouerdi

Keywords: Internet of things (IoT); IoT systems; IoT security; modelling; Unified Modelling Language (UML); UML extensions; IoT applications

PDF

Paper 31: DevOps Enabled Agile: Combining Agile and DevOps Methodologies for Software Development

Abstract: The Agile and DevOps software development methodologies have made revolutionary advancements in software engineering. These methodologies vastly improve software quality and also speed up the process of developing software products. However, several limitations have been discovered in the practical implementation of Agile and DevOps, including the lack of collaboration between the development, testing and delivery sectors of different software projects, and high skill requirements. This paper presents a solution to bridge the existing gaps between Agile and DevOps methodologies by integrating DevOps principles into Agile to devise a hybrid, DevOps Enabled Agile for software development. This study includes the development of a small-scale, experimental pilot project to demonstrate how software development teams can combine the advantages of Agile and DevOps methodologies to fill the gaps and further improve the speed and quality of the software development process while maintaining feasible skill requirements.

Author 1: Shah Murtaza Rashid Al Masud
Author 2: Md. Masnun
Author 3: Afia Sultana
Author 4: Anamika Sultana
Author 5: Fahad Ahmed
Author 6: Nasima Begum

Keywords: Agile; DevOps; gaps; collaboration; skill; DevOps Enabled Agile; software development

PDF

Paper 32: Issues in Requirements Specification in Malaysia’s Public Sector: An Evidence from a Semi-Structured Survey and a Static Analysis

Abstract: Requirement specifications (RS) are essential and fundamental artefacts in system development. The RS is the primary reference in software development and is commonly written in natural language. Poor requirement quality, such as requirement smells, may lead to project delay, cost overrun, and failure. Focusing on requirement quality in the Malaysian government, this paper investigates the methods for preparing Malay RS and personnel competencies to identify the root cause of this issue. We conducted semi-structured interviews that involved 17 respondents from eight critical Malaysian public sector agencies. This study found that ambiguity, incompleteness, and inconsistency are the top three requirement smells that cause project delays and failures. Furthermore, in a static analysis of initial Malay RS documents collected from various Malaysian public sector agencies, we found that 30% of the RS were ambiguous. Our analysis also found that respondents with more than 10 years of experience could manually identify the smells in RS. Most respondents chose the Public Sector Application Systems Engineering (KRISA) handbook as a guideline for preparing Malay RS documents. Respondents acknowledged a correlation between the quality of RS and project delays and failures.

Author 1: Mohd Firdaus Zahrin
Author 2: Mohd Hafeez Osman
Author 3: Alfian Abdul Halin
Author 4: Sa'adah Hassan
Author 5: Azlena Haron

Keywords: Ambiguity; requirements engineering; requirement smell; requirement specification; semi-structured interview

PDF

Paper 33: A Guideline for Designing Mobile Applications for Children with Autism within Religious Boundaries

Abstract: Autism spectrum disorder (ASD) is a condition related to brain development that impacts how a person perceives and socialises with others, causing problems in social interaction and communication; the disorder also includes limited and repetitive patterns of behavior. Children with ASD develop at a different rate and do not necessarily develop skills in the same order as typically developing children. Nowadays, children with ASD have difficulty gaining religious skills, owing to the lack of schools that provide special religious education for disabled children. Many technologies have been developed to help children with autism in education, and mobile applications have been used extensively to enhance their daily learning. Researchers are actively trialling their applications, but not many applications are able to meet the requirements and needs of children with autism, especially in a religious context. The lack of guidelines for religious mobile applications is a crucial gap, as such guidelines serve as a reference for researchers. This paper aims to propose a guideline for designing mobile applications for children with autism in a religious context. A systematic review of previous literature on mobile application guidelines for autism and on religious mobile application guidelines was conducted. This study yielded two key findings: (1) the multimedia elements consist of text, images and sounds; (2) the application features consist of interface, navigation, customisation and interaction. The proposed guidelines can potentially be used by researchers interested in designing religious mobile applications for children with autism.

Author 1: Ajrun Azhim Zamry
Author 2: Muhammad Haziq Lim Abdullah
Author 3: Mohd Hafiz Zakaria

Keywords: Autism Spectrum Disorder (ASD); guidelines; mobile applications; religion; assistive technology; communication

PDF

Paper 34: Fuzzy Support Vector Machine based Fall Detection Method for Traumatic Brain Injuries

Abstract: Falling is a major health issue that can lead to both physical and mental injuries. Detecting falls accurately can reduce their severe effects and improve the quality of life for disabled people. Therefore, it is critical to develop a smart fall detection system. Many approaches have been proposed in wearable-based systems, in which machine learning techniques are used to provide automatic classification and to improve accuracy. One of the most commonly used algorithms is the Support Vector Machine (SVM). However, the classical SVM can neither use prior knowledge to produce accurate classifications nor solve problems characterized by ambiguity. More specifically, some fall values are inaccurate and similar to the features of normal activities, which can also greatly impact the learning ability of SVMs. Hence, it became necessary to look for an effective fall detection method based on a combination of Fuzzy Logic (FL) and SVM algorithms, so as to reduce false positive alarms and improve accuracy. In this paper, the various training data are assigned corresponding membership degrees. Data points with a high chance of being falls are assigned a high degree of membership, yielding a high contribution to the SVM decision-making. This not only achieves accurate fall detection, but also reduces hesitation in labeling the outcomes and improves the heuristic transparency of the SVM. The experimental results achieved 100% specificity and precision, with an overall accuracy of 99.96%. Consequently, the experiment proved to be effective and yielded better results than the conventional approaches.

Author 1: Mohammad Kchouri
Author 2: Norharyati Harum
Author 3: Ali Obeid
Author 4: Hussein Hazimeh

Keywords: Fall detection; fuzzy logic; SVM; traumatic brain injuries; wearable sensor

PDF
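In the fuzzy SVM approach summarized above, each training point is assigned a membership degree so that ambiguous points contribute less to the SVM decision. A hedged sketch of one common assignment, a Gaussian of the distance to the class centroid (the `sigma` parameter and the centroid-based rule are illustrative assumptions, not the paper's exact formulation):

```python
import math

def membership_degrees(points, sigma=1.0):
    """Membership degree in (0, 1] for each training point of one class.

    Points near the class centroid (typical examples) get degrees near 1
    and contribute strongly to the SVM decision; points far away (possibly
    fall-like values overlapping normal activity) are down-weighted.
    """
    dim = len(points[0])
    centroid = [sum(p[i] for p in points) / len(points) for i in range(dim)]
    degrees = []
    for p in points:
        dist2 = sum((p[i] - centroid[i]) ** 2 for i in range(dim))
        degrees.append(math.exp(-dist2 / (2 * sigma ** 2)))
    return degrees
```

In practice such degrees would be passed as per-sample weights to an SVM trainer (e.g. a `sample_weight`-style argument), so that the margin penalty for an ambiguous point scales with its membership.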

Paper 35: An Effective Ensemble-based Framework for Outlier Detection in Evolving Data Streams

Abstract: In the last few years, data streams have drawn much research attention due to their various applications, such as healthcare monitoring systems, fraud and intrusion detection, the Internet of Things (IoT), and financial market applications. A data stream is an unbounded sequence of data continually generated over time and is prone to evolution. Outliers in streaming data are the elements that significantly deviate from the majority of elements and have to be detected, as they may be error values or events of interest. Detection of outliers is a challenging issue in streaming data and one of the most crucial tasks in data stream mining. Existing outlier detection methods for static data are unsuitable for data stream settings due to the unique characteristics of streaming data such as unpredictability, uncertainty, high dimensionality, and changes in data distribution. Thus, in this paper, a novel ensemble learning framework called Ensemble-based Streaming Outlier Detection (ESOD) is presented to accurately detect outliers over streaming data using a sliding window technique that is updated in response to the incoming events from the data streaming environment, overcoming the concept evolution inherent to streaming data. The proposed framework has three phases, namely the training phase, the testing/offline phase, and the outlier detection/online phase. A detection-weighted vote technique is used to determine the final decisions for potential outliers. In an extensive experimental study conducted on 11 real-world benchmark datasets, the proposed framework was assessed using many accuracy metrics. The experimental results showed that the proposed framework outperforms many other state-of-the-art methods.

Author 1: Asmaa F. Hassan
Author 2: Sherif Barakat
Author 3: Amira Rezk

Keywords: Outlier detection; data streams; data stream mining; ensemble learning; concept evolution

PDF
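The framework above combines base detectors through a detection-weighted vote. A minimal sketch of such a vote (the `threshold` parameter and the 0/1 vote encoding are illustrative assumptions, not the paper's exact scheme):

```python
def weighted_vote(predictions, weights, threshold=0.5):
    """Detection-weighted vote over an ensemble of base detectors.

    Each detector casts an outlier/normal vote (1/0) weighted by its
    accuracy-style weight; the point is flagged as an outlier when the
    weighted fraction of outlier votes exceeds the threshold.
    """
    total = sum(weights)
    score = sum(w for p, w in zip(predictions, weights) if p == 1) / total
    return 1 if score > threshold else 0
```

Weighting the votes lets a detector that has performed well on recent windows dominate a plain majority, which matters when concept evolution degrades some ensemble members faster than others.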

Paper 36: Towards a Fair Evaluation of Feature Extraction Algorithms Robustness in Structure from Motion

Abstract: Structure from Motion is a pipeline for 3D reconstruction in which the true geometry of an object or a scene is inferred from a sequence of 2D images. As feature extraction is usually the first phase in the pipeline, the reconstruction quality depends on the accuracy of the feature extraction algorithm. Fairly evaluating the robustness of feature extraction algorithms in the absence of reconstruction ground truth is challenging due to the considerable number of parameters that affect the algorithms' sensitivity and the tradeoff between reconstruction size and error. The evaluation methodology proposed in this paper is based on two elements. The first is using constrained 3D reconstruction, in which only fixed numbers of extracted and matched features are passed to subsequent phases. The second is comparing the 3D reconstructions using size-error curves (introduced in this paper) rather than the value of reconstruction size, error, or both. The experimental results show that the proposed methodology is more transparent.

Author 1: Dina M. Taha
Author 2: Hala H. Zayed
Author 3: Shady Y. El-Mashad

Keywords: Feature extraction; feature matching; structure from motion; 3D reconstruction

PDF

Paper 37: Ensemble Tree Classifier based Analysis of Water Quality for Layer Poultry Farm: A Study on Cauvery River

Abstract: The Indian poultry industry has evolved from a simple backyard occupation to a large commercial agri-based enterprise. Chickens dominate poultry production in India, accounting for almost 95% of total egg production. Several factors affect egg production, such as feeding material, drinking water, and environmental conditions, and analyzing water quality is one of the important tasks. The Cauvery River is taken as the study area because of its importance to several South Indian states that contribute significantly to poultry farming. The aim of the proposed study is to develop an automated approach to water quality analysis and present a novel machine learning approach combining an improved feature-ranking method and an ensemble tree classifier with majority voting. The experimental results show that the proposed approach performs well, with an accuracy of 95.12%.

Author 1: Deepika
Author 2: Nagarathna
Author 3: Channegowda

Keywords: Water quality; poultry; machine learning; Cauvery river; feature ranking; ensemble tree classifier; accuracy

PDF

Paper 38: Detection of Abnormal Human Behavior in Video Images based on a Hybrid Approach

Abstract: The analysis of human movement attracts the attention of scholars from various disciplines today. The purpose of such systems is to perceive human behavior from a sequence of video images; they monitor a population to find common properties among the pedestrians in a scene. In video surveillance, the main purpose of detecting specific or malicious events is to assist security personnel. Different methods have been used to detect human behavior from images. This paper uses an efficient computational algorithm for detecting anomalies in video images based on a combined approach of the differential evolution algorithm and a cellular neural network. In this method, the gray-level version of the input image is first generated; because several large areas may be identified in the image after thresholding, the largest white area is selected as the target area. Noise removal, image smoothing, and morphological operations are then applied to the images. The results showed that the proposed method has higher speed and accuracy than other methods. An advantage of the algorithm is its runtime of three seconds on a home computer, with an average sensitivity of 98.6% (97.2%).

Author 1: BAI Ya-meng
Author 2: WANG Yang
Author 3: WU Shen-shen

Keywords: Cellular neural network; detection of abnormalities; differential evolution algorithm; video images

PDF

Paper 39: Transformer-based Neural Network for Electrocardiogram Classification

Abstract: A transformer neural network is a powerful method used for sequence modeling and classification. In this paper, the transformer neural network is combined with a convolutional neural network (CNN) used for feature embedding to provide the transformer inputs. The proposed model accepts raw electrocardiogram (ECG) signals side by side with extracted morphological ECG features to boost classification performance. The raw ECG signal and the morphological features pass through two independent paths with the same model architecture, where the output of each transformer decoder is concatenated before going through the final linear classifier to give the predicted class. Experiments on the PTB-XL dataset with 7-fold cross-validation show that the proposed model achieves high accuracy and F-score, with averages of 99.86% and 99.85% respectively, which demonstrates the robustness of the model and its feasibility for industrial applications.

Author 1: Mohammed A. Atiea
Author 2: Mark Adel

Keywords: Electrocardiogram classification; transformer neural network; convolutional neural network

PDF

Paper 40: BCT-CS: Blockchain Technology Applications for Cyber Defense and Cybersecurity: A Survey and Solutions

Abstract: Blockchain technology has emerged as a ground-breaking technology with possible solutions to applications ranging from securing smart cities to e-voting systems. Although it started with a digital currency, or cryptocurrency, bitcoin, there is no doubt that blockchain is influencing and will continue to influence business and society in the near future. We present a comprehensive survey of how blockchain technology is applied to provide security over the web and to counter ongoing threats as well as increasing cybercrimes and cyber-attacks. During the review, we also investigate how blockchain can affect cyber data and information over the web. Our contributions include: (i) summarizing the blockchain architecture and models for cybersecurity; (ii) classifying and discussing recent and relevant works on cyber countermeasures using blockchain; (iii) analyzing the main challenges and obstacles of blockchain technology for cyber defense and cybersecurity; and (iv) recommending improvements and future research on the integration of blockchain with cyber defense.

Author 1: Naresh Kshetri
Author 2: Chandra Sekhar Bhushal
Author 3: Purnendu Shekhar Pandey
Author 4: Vasudha

Keywords: Applications; blockchain technology; blockchain solutions; countermeasures; cyber-attacks; cyber defense; cybersecurity; survey

PDF

Paper 41: Low-rate DDoS attack Detection using Deep Learning for SDN-enabled IoT Networks

Abstract: Software Defined Networks (SDN) can logically route traffic and utilize underutilized network resources, which has enabled the deployment of SDN-enabled Internet of Things (IoT) architectures in many industrial systems. SDN also removes bottlenecks and helps process IoT data efficiently without overloading the network. An SDN-based IoT in an evolving environment is vulnerable to various types of distributed denial of service (DDoS) attacks. Many research papers focus on high-rate DDoS attacks, while few address low-rate DDoS (LDDoS) attacks in SDN-based IoT networks. There is a need to enhance the accuracy of LDDoS attack detection in SDN-based IoT networks and the OpenFlow communication channel. In this paper, we propose an LDDoS attack detection approach based on a deep learning (DL) model built on the Long Short-Term Memory (LSTM) architecture to detect different types of LDDoS attacks in IoT networks by analyzing the characteristic values of different types of LDDoS attacks and normal traffic, improve the accuracy of LDDoS attack detection, and reduce malicious traffic flow. The experimental results show that the model achieves an accuracy of 98.88%. In addition, the model has been tested and validated using the benchmark Edge-IIoTset dataset, which consists of cybersecurity attacks.

Author 1: Abdussalam Ahmed Alashhab
Author 2: Mohd Soperi Mohd Zahid
Author 3: Amgad Muneer
Author 4: Mujaheed Abdullahi

Keywords: SDN; LDDoS attack; OpenFlow; deep learning; long short-term memory

PDF

Paper 42: Stock Price Forecasting using Convolutional Neural Networks and Optimization Techniques

Abstract: Forecasting the correct stock price is intriguing and difficult for investors due to its irregular, inherently dynamic, and tricky nature. Convolutional neural networks (CNN) show impressive performance in forecasting stock prices. One of the most crucial tasks when training a CNN on a stock dataset is identifying the optimal hyperparameters that increase accuracy. In this research, we propose the use of the Firefly algorithm to optimize CNN hyperparameters. The hyperparameters were tuned with the help of Random Search (RS), Particle Swarm Optimization (PSO), and Firefly (FF) algorithms over different numbers of epochs, and the CNN was trained on the selected hyperparameters. Different evaluation metrics were calculated for the training and testing datasets. The experimental findings demonstrate that the FF method finds the ideal parameters with a minimal number of fireflies and epochs. The objective function of the optimization technique is to reduce MSE. The PSO method delivers good results with increasing particle counts, while the FF method gives good results with fewer fireflies. In comparison with PSO, the MSE of the FF approach converges with increasing epochs.
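
The abstract does not give the Firefly update rule, so the sketch below is the generic algorithm (attractiveness decaying with squared distance, plus a cooled random step) applied to a toy stand-in for the CNN's validation MSE; every parameter value here is an illustrative assumption, not the authors' setting.

```python
import math
import random

def firefly_minimize(f, dim, n_fireflies=10, iters=100,
                     alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    """Generic Firefly algorithm: dimmer fireflies move toward brighter
    (lower-objective) ones, with a decaying random step for exploration."""
    rng = random.Random(seed)
    X = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n_fireflies)]
    for _ in range(iters):
        intensity = [f(x) for x in X]          # lower value = brighter firefly
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if intensity[j] < intensity[i]:
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # attractiveness
                    X[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
                    intensity[i] = f(X[i])
        alpha *= 0.97                          # cool the random walk
    best = min(X, key=f)
    return best, f(best)

# Toy objective standing in for the CNN's validation MSE, minimized at (1, 1):
mse = lambda x: sum((v - 1.0) ** 2 for v in x) / len(x)
best, val = firefly_minimize(mse, dim=2)
```

In the paper's setting, `f` would wrap a full CNN training-and-validation run over the candidate hyperparameters, which is why finding a good optimum with few fireflies and epochs matters.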

Author 1: Nilesh B. Korade
Author 2: Mohd. Zuber

Keywords: Convolutional neural networks; swarm intelligence; random search; particle swarm optimization; firefly

PDF

Paper 43: Vision-based Human Detection by Fine-Tuned SSD Models

Abstract: Human-robot interaction (HRI) and human-robot collaboration (HRC) have become more popular as industries take the initiative to realize the era of automation and digitalization. The introduction of robots is often considered a risk because robots do not possess intelligence as humans do. However, the literature that uses deep learning technologies as the basis for improving HRI safety is limited, not to mention the transfer learning approach. Hence, this study empirically examines the efficacy of the transfer learning approach for the human detection task by fine-tuning SSD models. A custom image dataset was developed using the surveillance system at TT Vision Holdings Berhad and annotated accordingly. Thereafter, the dataset was partitioned into train, validation, and test sets at a ratio of 70:20:10. The learning behaviour of the models was monitored throughout the fine-tuning process via the total loss graph. The results reveal that the SSD fine-tuned model with MobileNetV1 achieved 87.20% test AP, which is 6.1% higher than the SSD fine-tuned model with MobileNetV2. As a trade-off, the SSD fine-tuned model with MobileNetV1 attained a 46.2 ms inference time on an RTX 3070, which is 9.6 ms slower than the SSD fine-tuned model with MobileNetV2. Taking test AP as the key metric, the SSD fine-tuned model with MobileNetV1 is considered the best fine-tuned model in this study. In conclusion, the transfer learning approach within the deep learning domain can help protect humans from risk by detecting them in the first place.

Author 1: Tang Jin Cheng
Author 2: Ahmad Fakhri Ab. Nasir
Author 3: Anwar P. P. Abdul Majeed
Author 4: Mohd Azraai Mohd Razman
Author 5: Thai Li Lim

Keywords: Human detection; deep learning; transfer learning; SSD; fine-tuning; human-robot interactions

PDF

Paper 44: Data Warehouse Analysis and Design based on Research and Service Standards

Abstract: Data are not easy to organize, especially when they are large in quantity and stored manually in a non-computerized way. Therefore, in the last few years, many organizations and companies have used information systems to help organize and manage their data. Universitas Jenderal Soedirman (UNSOED) is a long-established state college with many study programs and faculties, including the Faculty of Engineering. Data organization in UNSOED is mainly performed through computerization. However, data retrieval needs to be improved because UNSOED has various information systems, and the data produced keep increasing over time. The data have yet to be evaluated against the needs of Tri Dharma, with achievement indicators as expressed in the Regulation of the Minister of Education and Culture (PERMENDIKBUD) on the National Standard of Higher Education (SNDIKTI). Data warehouse technology can be applied as a medium for storing, collecting, and processing data within a specific time frame from various data sources. The processed results in the data warehouse are then displayed using the Knowage tool, which may help the executives of the Faculty of Engineering make decisions and regularly monitor activities, mainly concerning research and service by the faculty's academic community.

Author 1: Lasmedi Afuan
Author 2: Nurul Hidayat
Author 3: Dadang Iskandar
Author 4: Arief Kelik Nugroho
Author 5: Bangun Wijayanto
Author 6: Ana Romadhona Yasifa

Keywords: Data warehouse; knowage; SNDIKTI; UNSOED

PDF

Paper 45: Toward an Ontological Cyberattack Framework to Secure Smart Cities with Machine Learning Support

Abstract: With the emergence of and movement toward the Internet of Things (IoT), one of the most significant applications that has gained a great deal of attention is smart cities. In smart cities, IoT is leveraged to manage life and services with minimal, or even no, human intervention. The IoT paradigm has created opportunities for a wide variety of cyberattacks that threaten systems and users. Many challenges have been faced in countering IoT cyberattacks, such as the diversity of attacks and the frequent appearance of new ones. This raises the need for a general and uniform representation of cyberattacks. The ontology proposed in this paper can be used to develop a generalized framework and to provide a comprehensive study of potential cyberattacks in a smart city system. Ontology can serve in building this general framework by developing a description and a knowledge base for cyberattacks as a set of concepts and the relations between them. In this article we propose an ontology to describe cyberattacks, identify the benefits of such an ontology, and discuss a case study showing how the proposed ontology can be utilized to implement a simple intrusion detection system with the assistance of Machine Learning (ML). The ontology is implemented using the Protégé ontology editor and framework, and WEKA is utilized to construct the inference rules of the proposed ontology. Results show that the intrusion detection system developed using the ontology performs well in revealing the occurrence of different cyber-attacks, reaching 97% accuracy in detecting cyber-attacks in a smart city system.

Author 1: Ola Malkawi
Author 2: Nadim Obaid
Author 3: Wesam Almobaideen

Keywords: Cyberattack; Internet of Things (IoT); ontology; machine learning; intrusion detection system

PDF

Paper 46: A Novel Annotation Scheme to Generate Hate Speech Corpus through Crowdsourcing and Active Learning

Abstract: The number of user-generated posts is growing exponentially with the growth of social media usage. Promoting violence against, or inciting hatred toward, individuals or groups based on specific attributes via social media posts is daunting. As posts are published in multiple languages with different forms of multimedia, social media platforms find it challenging to moderate them before they reach the audience, and assessing posts as hate speech becomes sophisticated due to subjectivity. Social media platforms lack the contextual and linguistic expertise and the social and cultural insights to identify hate speech accurately. Research is being carried out to detect hate speech in English social media content using machine learning algorithms and different crowdsourcing platforms. However, workers on these platforms are unavailable in countries such as Sri Lanka. The lack of a workforce with the necessary skill set and of annotation schemes signals the need for further research on low-resource language annotation. This research proposes a suitable crowdsourcing approach to label and annotate social media content to generate corpora with words and phrases for identifying hate speech using machine learning algorithms in Sri Lanka. This paper summarizes the annotation, under the proposed scheme, of 52,646 Facebook posts, comments, and replies to comments from public Sri Lankan Facebook user profiles, pages, and groups; 45,000 unlabeled tweets based on 996 Twitter search keywords; and 45,000 YouTube video instances. 9%, 21% and 14% of Facebook, Twitter and YouTube posts, respectively, were identified as containing hate content. In addition, the posts were categorized as offensive or non-offensive, and hate targets and the corpus associated with hate targets focusing on an individual or group were identified and presented in this paper.
The proposed annotation scheme could be extended to other low-resource languages to build hate speech corpora. With a well-implemented crowdsourcing platform using the proposed novel annotation scheme, it will be possible to find more subtle patterns through human judgment and filtering and to take preventive measures to create a better cyberspace.

Author 1: Nadeera Meedin
Author 2: Maneesha Caldera
Author 3: Suresha Perera
Author 4: Indika Perera

Keywords: Annotation; crowdsourcing; hate speech detection; social media data analytics

PDF

Paper 47: Detecting Brain Diseases using Hyper Integral Segmentation Approach (HISA) and Reinforcement Learning

Abstract: Medical images are most widely analyzed using various image processing approaches. Image processing is used to analyze abnormal tissues in given input images. Deep learning (DL) is one of the fastest-growing fields in computer science, and specifically in medical imaging analysis. A tumor is a mass of tissue that contains abnormal cells. Benign tumor tissues may not spread to other places, but if they contain cancerous (malignant) cells, these tissues may grow rapidly. It is very important to know the cause of brain tumors in humans, and they should be detected in the early stages. Magnetic Resonance Imaging (MRI) is most widely used to detect tumors in the brain, and is also used to detect tumors all over the body. Tumors are of various types, such as noncancerous (benign) and cancerous (malignant), and may sometimes turn into cancer depending on the stage of the tumor. In this paper, a hyper integral segmentation approach (HISA) is introduced to detect cancerous and non-cancerous tumors. Detecting cancerous cells in tumors may reduce the threat to the lives of affected persons. Agent-based reinforcement classification (ABRC) is used to classify Alzheimer's disease (AD) and cancerous and non-cancerous cells based on the abnormalities present in the MRI images. Two publicly available datasets are selected: MRI images and AD-affected MRI images. Performance is analyzed using improved metrics such as accuracy, F1-score, sensitivity, dice similarity score, and specificity.

Author 1: M. Praveena
Author 2: M. Kameswara Rao

Keywords: Benign; malignant; magnetic resonance imaging; Alzheimer's disease

PDF

Paper 48: Combining AHP and Topsis to Select Eligible Social and Solidarity Economy Actors for a Call for Grants

Abstract: The procedure for selecting projects to offer a grant to actors of the social and solidarity economy can be a delicate task for decision-makers (public or private establishments), as it is based on several eligibility and refusal criteria (economic, social, and environmental); the task can sometimes take several months before the results are returned. This study proposes an integrated framework based on two multi-criteria decision methods, the analytical hierarchy process (AHP) and the technique for order preference by similarity to an ideal solution (TOPSIS), to select and rank viable projects for a grant from the INDH (National Initiative for Human Development). Initially, the projects were randomly selected from a list of projects submitted to receive a grant. Then, AHP obtains the weights of the various criteria through pairwise comparison, and the projects are ranked using TOPSIS. The proposed methodology is empirically applied to the social and solidarity economy sector and provides a detailed and effective decision-making tool for selecting suitable actors for a grant. The results indicate that the conservation of natural resources and the rate of job creation are the essential criteria in the project selection process.
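
A compact sketch of the two-stage pipeline the abstract describes: AHP weights are approximated here by column normalization of a pairwise comparison matrix (the exact method uses the principal eigenvector), and TOPSIS then ranks alternatives by closeness to the ideal solution. The criteria, comparison values, and project scores below are invented for illustration.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority vector: normalize each column, average rows."""
    n = len(pairwise)
    col_sums = [sum(row[j] for row in pairwise) for j in range(n)]
    return [sum(pairwise[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

def topsis(matrix, weights, benefit):
    """Score alternatives; benefit[j] is True if criterion j is maximized."""
    n_crit = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    V = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*V))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*V))]
    scores = []
    for row in V:
        d_best = math.sqrt(sum((v - b) ** 2 for v, b in zip(row, best)))
        d_worst = math.sqrt(sum((v - w) ** 2 for v, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Hypothetical pairwise comparison of three criteria
# (resource conservation, job creation, cost):
pw = [[1, 2, 4], [1/2, 1, 3], [1/4, 1/3, 1]]
w = ahp_weights(pw)
# Three candidate projects scored on the three criteria (cost is minimized):
projects = [[7, 9, 3], [8, 6, 5], [5, 8, 2]]
scores = topsis(projects, w, benefit=[True, True, False])
```

The `scores` are closeness coefficients in [0, 1]; the highest-scoring project would be recommended for the grant.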

Author 1: Salma Chrit
Author 2: Abdellah Azmani
Author 3: Monir Azmani

Keywords: AHP; TOPSIS; project selection; decision making; multi criteria decision method

PDF

Paper 49: Classification of Electromyography Signal of Diabetes using Artificial Neural Networks

Abstract: Diabetes is one of the most common chronic diseases, with an increasing number of sufferers yearly. It can lead to several serious complications, including diabetic peripheral neuropathy (DPN). DPN must be recognized early so that patients receive appropriate treatment and disease exacerbation is prevented. With the rapid development of machine learning classification, as in the health science sector, it has become much easier to identify DPN in its early stages. Therefore, the aim of this study is to develop a new low-cost method for detecting neuropathy based on the myoelectric signal of diabetes patients, utilizing one of the machine learning techniques, the artificial neural network (ANN). To that end, a Muscle Sensor V3 is used to record the activity of the tibialis anterior muscle. Then, representative time-domain features, namely mean absolute value (MAV), root mean square (RMS), variance (VAR), and standard deviation (SD), are used to evaluate fatigue. During neural network training, different numbers of hidden neurons were tried, and it was found that using seven hidden neurons yielded a high accuracy of 98.6%. Thus, this work indicates the potential of a low-cost system for classifying healthy and diabetic individuals using an ANN algorithm.
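
The four time-domain features listed above have standard closed forms; a stdlib-Python sketch (population variance is assumed, and the sample window is synthetic, not real EMG data):

```python
import math

def time_domain_features(signal):
    """MAV, RMS, VAR and SD of a windowed EMG signal.
    VAR here is the population variance (divide by n)."""
    n = len(signal)
    mean = sum(signal) / n
    mav = sum(abs(s) for s in signal) / n
    rms = math.sqrt(sum(s * s for s in signal) / n)
    var = sum((s - mean) ** 2 for s in signal) / n
    return {"MAV": mav, "RMS": rms, "VAR": var, "SD": math.sqrt(var)}

# A tiny synthetic window standing in for one EMG segment:
feats = time_domain_features([0.1, -0.2, 0.3, -0.1, 0.2])
```

Each windowed EMG segment would yield one such feature vector as input to the ANN classifier.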

Author 1: Muhammad Fathi Yakan Zulkifli
Author 2: Noorhamizah Mohamed Nasir

Keywords: Electromyography; diabetic neuropathy; classification; machine learning; artificial neural networks

PDF

Paper 50: Constraints on Hyper-parameters in Deep Learning Convolutional Neural Networks

Abstract: A Convolutional Neural Network (CNN), a type of deep learning, has a very large number of hyper-parameters in contrast to an Artificial Neural Network (ANN), which makes the task of CNN training more demanding. The reason parameter tuning and optimization are difficult in a CNN is the existence of a huge optimization space comprising a large number of hyper-parameters, such as the number of layers, number of neurons, number of kernels, stride, padding, row or column truncation, parameters of the backpropagation algorithm, etc. Moreover, most of the existing techniques in the literature for selecting these parameters are based on random practice developed for specific datasets. In this work, we empirically investigated and showed that CNN performance is linked not only to choosing the right hyper-parameters but also to their implementation. More specifically, performance also depends on how the implementation behaves when CNN operations require hyper-parameter settings that do not symmetrically fit the input volume. We demonstrated two different implementations: cropping or padding the input volume to make it fit. Our analysis shows that padding performs better than cropping in terms of prediction accuracy (85.58% in contrast to 82.62%) while taking less training time (8 minutes less).
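
The crop-versus-pad question arises whenever `(width - kernel)` is not a multiple of the stride; a small sketch of the two fixes the paper compares (the sizes used are illustrative, not the paper's configuration):

```python
def fit_input(width, kernel, stride):
    """When (width - kernel) is not a multiple of stride, the last window
    does not fit. Return the padded and cropped widths that make it fit,
    plus the resulting output sizes."""
    remainder = (width - kernel) % stride
    if remainder == 0:
        padded = cropped = width
    else:
        padded = width + (stride - remainder)   # zero-pad up to the next fit
        cropped = width - remainder             # truncate rows/columns down
    out = lambda w: (w - kernel) // stride + 1
    return {"pad_to": padded, "crop_to": cropped,
            "out_padded": out(padded), "out_cropped": out(cropped)}

# Example: a 28-wide input, 5-wide kernel, stride 3 leaves a remainder of 2.
r = fit_input(width=28, kernel=5, stride=3)
```

Padding preserves the border rows/columns that cropping throws away, at the cost of zeros at the edge, and yields one extra output position in this example.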

Author 1: Ubaid M. Al-Saggaf
Author 2: Abdelaziz Botalb
Author 3: Muhammad Faisal
Author 4: Muhammad Moinuddin
Author 5: Abdulrahman U. Alsaggaf
Author 6: Sulhi Ali Alfakeh

Keywords: Neural networks; convolution; pooling; hyper-parameters; CNN; deep learning; zero-padding; stride; back-propagation

PDF

Paper 51: Design of a Speaking Training System for English Speech Education using Speech Recognition Technology

Abstract: A good English speaking training system can aid the learning of English. This paper briefly introduces an English speaking training system and describes its speaking training scoring and pronunciation resonance peak display modules. The speaking training scoring module scores pronunciation with a Long Short-Term Memory (LSTM) network. The pronunciation resonance peak display module extracts the resonance peak with a Fourier transform and visualizes it. Finally, the speaking scoring module, the pronunciation resonance peak display module, and the effect of the whole system in improving students' speaking pronunciation were tested. The results showed that the LSTM-based speaking scoring algorithm had higher scoring accuracy than pattern matching and the recurrent neural network (RNN) algorithm: its accuracy was 95.21% when scoring the LibriSpeech dataset and 90.12% when scoring the local English dataset. The pronunciation resonance peak display module displayed the change in mouth shape before and after training, and the pronunciation after training was closer to the standard pronunciation. The P value in the comparison of speaking level before and after training with the system was 0.001, i.e., the difference was significant, which indicates that the students' spoken English proficiency improved significantly.

Author 1: Hengheng He

Keywords: English speech; long short-term memory; speaking training; speech recognition

PDF

Paper 52: Development of Underwater Pipe Crack Detection System for Low-Cost Underwater Vehicle using Raspberry Pi and Canny Edge Detection Method

Abstract: The effective loading area decreases because of cracking, leading to a rise in stress and eventual structural failure. Monitoring for cracks is an important part of keeping any pipeline or building in excellent working order. Several obstacles make manual inspection and monitoring of subsea pipes challenging. The fundamental objective of this study is to create a relatively inexpensive underwater vehicle that can use an image processing technique to reliably spot cracks on the exteriors of industrial pipes. The tasks involved in this project include the planning, development, and testing of an underwater vehicle that can approach circular pipes, take pictures, and determine whether there are fractures. In this project, we utilize the Canny edge detection technique to identify the cracks. The system can function in either an online or offline mode. The paper discusses the procedures followed to locate the pipe cracks, using a Raspberry Pi and a camera, that activate the underwater vehicle. While Python is used for image processing to capture photographs, analyze images, and expose flaws in particular images, the underwater vehicle's movement is controlled via a connected remote control. Once the physical model has been built and tested, the results are recorded, and the system's benefits and shortcomings are discussed.
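
Canny begins with a gradient computation; the hedged sketch below implements only that Sobel gradient stage in pure Python on a toy image. In practice one would call OpenCV's `cv2.Canny`, which adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of this stage.

```python
def sobel_edges(img, thresh):
    """Gradient-magnitude edge map (Sobel), the first stage of Canny.
    `img` is a 2D list of grayscale values; returns a binary edge map."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(ky[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            edges[y][x] = 1 if (gx * gx + gy * gy) ** 0.5 >= thresh else 0
    return edges

# Toy frame: a dark region (0) meeting a bright region (255), like a crack edge:
img = [[0, 0, 0, 255, 255, 255]] * 6
edges = sobel_edges(img, thresh=200)
```

The vertical boundary between the dark and bright columns is marked as an edge; interior pixels on either side are not.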

Author 1: Mohd Aliff
Author 2: Nur Farah Hanisah
Author 3: Muhammad Shafique Ashroff
Author 4: Sallaudin Hassan
Author 5: Siti Fairuz Nurr
Author 6: Nor Samsiah Sani

Keywords: Crack detection; pipeline; underwater vehicle; image processing; Raspberry Pi; canny edge detection

PDF

Paper 53: Method for 1/f Fluctuation Component Extraction from Images and Its Application to Improve Kurume Kasuri Quality Estimation

Abstract: A method for extracting the 1/f fluctuation component from images is proposed. As an application of the proposed method, Kurume Kasuri textile quality evaluation is also proposed. Frequency component analysis is used for 1/f fluctuation component extraction. In addition, an attempt is made to discriminate among typical Kurume Kasuri textile qualities: (1) patterns containing relatively smooth edge lines, (2) patterns containing relatively non-smooth edge lines, and (3) patterns between (1) and (2), by using the FLANN template matching method of OpenCV. Through experiments, it is found that the proposed method works for extraction of the 1/f fluctuation component, and that Kurume Kasuri textile quality evaluation can be performed with the result of the 1/f fluctuation component extraction.
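
A signal has a 1/f fluctuation component when its power spectrum falls off with a log-log slope near -1; a stdlib-Python sketch using a direct DFT (the synthesized one-dimensional test signal is illustrative only — the paper applies frequency analysis to image data):

```python
import math

def oneoverf_slope(signal):
    """Fit the log-log slope of the power spectrum (direct DFT);
    a slope near -1 indicates a 1/f fluctuation component."""
    N = len(signal)
    xs, ys = [], []
    for k in range(1, N // 2):
        re = sum(signal[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        im = -sum(signal[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        xs.append(math.log(k))
        ys.append(math.log(re * re + im * im))   # log power at frequency k
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthesize a signal whose spectrum is exactly 1/f (amplitude 1/sqrt(k)):
N = 64
sig = [sum(math.cos(2 * math.pi * k * n / N) / math.sqrt(k)
           for k in range(1, N // 2)) for n in range(N)]
slope = oneoverf_slope(sig)
```

For image data, the same slope fit would be applied to a radially averaged 2D power spectrum rather than a 1D DFT.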

Author 1: Jin Shimazoe
Author 2: Kohei Arai
Author 3: Mariko Oda
Author 4: Jewon Oh

Keywords: 1/f fluctuation component extraction; Kurume Kasuri textile quality; FLANN; OpenCV

PDF

Paper 54: Image Verification and Emotion Detection using Effective Modelling Techniques

Abstract: The feelings expressed on the face reflect a person's manner of thinking and provide useful insights into what happens inside the brain. Face detection enables us to identify a face. Recognizing facial expressions for different emotions familiarizes the machine with the human-like capacity to perceive and identify human feelings; this involves classifying given input images of the face into one of seven classes, which is achieved by building a multi-class classifier. The proposed methodology is based on convolutional neural networks and works on 48x48 pixel grayscale images. The proposed model is tested on various images and gives the best accuracy when compared with existing approaches. It detects faces in images, recognizes them, identifies emotions, and shows improved performance because of data augmentation. The model was experimented with varying depths and pooling layers. The best results are obtained with a sequential model of six convolutional neural network layers and a softmax activation function applied to the last layer. The approach works for real-time data taken from videos or photos.

Author 1: Sumana Maradithaya
Author 2: Vaishnavi S

Keywords: Face detection; face recognition; emotion detection; data augmentation

PDF

Paper 55: Big Data Analytics Quality in Enhancing Healthcare Organizational Performance: A Conceptual Model Development

Abstract: The advancement of Big Data Analytics (BDA) has aided numerous organizations in effectively and efficiently adopting BDA as a holistic solution. However, BDA quality assessment has not yet been fully addressed; therefore, it is necessary to identify essential BDA quality factors to assure the enhancement of organizational performance, particularly in the healthcare sector. Hence, the goals of this study are to recognize and analyse the determining factors of BDA quality and to suggest a conceptual model for enhancing the performance of healthcare organizations via BDA quality assessment. The proposed conceptual model is based on a related theoretical model and previous research on BDA quality. The essential BDA quality factors selected as determinants consist of reliability, completeness, accuracy, timeliness, format, accessibility, usability, maintainability, and portability. The findings of this ongoing study are used to develop a conceptual model, proposed in line with ten research hypotheses, which may offer a better quality assessment model to improve the performance of healthcare organizations.

Author 1: Wan Mohd Haffiz Mohd Nasir
Author 2: Rusli Abdullah
Author 3: Yusmadi Yah Jusoh
Author 4: Salfarina Abdullah

Keywords: Big data analytics; BDA quality factors; BDA quality assessment; organizational performance; healthcare

PDF

Paper 56: Parkinson’s Disease Identification using Deep Neural Network with RESNET50

Abstract: Recent Parkinson's disease (PD) research has focused on recognizing vocal defects from people's prolonged vowel phonations or running speech, since 90% of Parkinson's patients demonstrate vocal dysfunction in the early stages of the illness. This research provides a hybrid of time-frequency analysis and deep learning techniques for PD signal categorization based on ResNet50. The recommended strategy eliminates the manual feature extraction procedures of machine learning. 2D time-frequency graphs give frequency and energy information while retaining PD morphology. The method transforms 1D PD recordings into 2D time-frequency diagrams using a hybrid HT/Wigner-Ville distribution (WVD). We obtained 91.04% accuracy in five-fold cross-validation and 86.86% in testing using ResNet50. The F1-score reached 0.89186. The suggested approach is more accurate than state-of-the-art models.

Author 1: Anila M
Author 2: Pradeepini Gera

Keywords: Parkinson’s disease; speech impairment; artificial intelligence; ResNet50; deep learning; HT/Wigner-Ville distribution; 2D time-frequency

PDF

Paper 57: Design of Mobile Application Auction for Ornamental Fish to Increase Farmer Sales Transactions in Indonesia

Abstract: This article focuses on designing a mobile application for ornamental fish auction transactions for fish cultivators in order to increase their sales. The mobile app was created using a prototyping methodology with a four-step process: the first step is communication; the second is quick planning and design; the third is construction of the prototype, which develops the auction application; and the last is deployment, delivery, and feedback. Data validation was carried out with users such as farmers, bidders, and buyers while developing the application. As a result, this paper proposes a mobile auction application that provides auction information and bidding for bidders and sellers. The results show that the application was validated and declared usable and feasible for conducting auctions and bids as needed. This application can increase sales and improve the economic life of ornamental fish farmers in Indonesia.

Author 1: Henry Antonius Eka Widjaja
Author 2: Meyliana
Author 3: Erick Fernando
Author 4: Stephen Wahyudi Santoso
Author 5: Surjandy
Author 6: A.Raharto Condrobimo

Keywords: Mobile application; auction; ornamental fish; prototyping model

PDF

Paper 58: An Automatic Adaptive Case-based Reasoning System for Depression Remedy Recommendation

Abstract: Social media data represent fuel for advanced analytics concerning people's behaviors and physiological and health status. These analytics include identifying users' depression levels via Twitter and then recommending remedies. Remedies come in the form of suggesting accounts to follow, displaying motivational quotes, or even recommending a visit to a psychiatrist. This paper proposes a remedy recommendation system that exploits case-based reasoning (CBR) with random forest. The system recommends the appropriate remedy for a person. The main contribution of this work is the creation of an automated, data-driven, and scalable adaptation module without human interference. The results of every stage of the system were verified by a certified psychiatrist. Another contribution of this work is setting the weights in the case similarity measurement according to the features' importance, extracted from the depression identification system. CBR retrieval accuracy (exact hit) reached 82%, while the automatic adaptation accuracy (exact remedy) reached 88%. The adaptation presented an error-tolerance advantage which enhances the overall accuracy.
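
The retrieval stage with feature-importance weights can be sketched as a weighted nearest-neighbour lookup; the cases, feature values, weights, and remedies below are invented for illustration, not the paper's data.

```python
import math

def retrieve(case_base, query, weights):
    """Return the stored case closest to the query under a
    feature-importance-weighted Euclidean distance."""
    def dist(case):
        return math.sqrt(sum(w * (a - b) ** 2
                             for w, a, b in zip(weights, case["features"], query)))
    return min(case_base, key=dist)

# Hypothetical cases: depression-indicator feature vectors with a remedy
# attached; weights would come from the identifier's feature importance.
cases = [
    {"features": [0.9, 0.8, 0.7], "remedy": "refer to psychiatrist"},
    {"features": [0.4, 0.3, 0.2], "remedy": "motivational quotes"},
    {"features": [0.6, 0.5, 0.9], "remedy": "suggest accounts to follow"},
]
match = retrieve(cases, query=[0.5, 0.4, 0.3], weights=[0.5, 0.3, 0.2])
```

The adaptation module described in the abstract would then adjust the retrieved remedy automatically rather than returning it unchanged.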

Author 1: Hatoon S. AlSagri
Author 2: Mourad Ykhlef
Author 3: Mirvat Al-Qutt
Author 4: Abeer Abdulaziz AlSanad
Author 5: Lulwah AlSuwaidan
Author 6: Halah Abdulaziz Al-Alshaikh

Keywords: Case-based reasoning (CBR); depression; remedy; adaptation; similarity; twitter

PDF

Paper 59: A Novel Hierarchical Shape Analysis based on Sampling Point-Line Distance for Regular and Symmetry Shape Detection

Abstract: Regular and symmetric shapes occur in both natural and manufactured objects. Detecting these shapes is an essential and still tricky task in computer vision. This paper proposes a novel hierarchical shape detection (HiSD) method consisting of a circularity and roundness detection phase and a regularity and symmetry detection phase. The first phase recognizes circular and elliptical shapes using aspect ratio and roundness measurements. The second phase, the main phase of HiSD, recognizes regular and symmetric shapes using a density distribution measurement (DDM) and the proposed sampling point-line distance distribution (SPLDD) algorithm. The proposed method presents an effective, low-computation-cost shape detection approach that is not sensitive to a specific category of objects; it can detect different types of objects with arbitrary, regular, and symmetric shapes. Experimental results show that the proposed method performs well compared to existing state-of-the-art algorithms.

Author 1: Kehua Xian

Keywords: Shape recognition; hierarchical shape detection; sampling point-line distance distribution; regular and symmetry shape detection

PDF

Paper 60: Development of Automatic Segmentation Techniques using Convolutional Neural Networks to Differentiate Diabetic Foot Ulcers

Abstract: The quality of computer vision systems for detecting abnormalities in various medical imaging modalities, such as dual-energy X-ray absorptiometry, magnetic resonance imaging (MRI), ultrasonography, and computed tomography, has significantly improved as a result of recent developments in deep learning. Current techniques and algorithms for identifying, categorizing, and detecting diabetic foot ulcers (DFU) are discussed. On small datasets, a variety of techniques based on traditional machine learning and image processing have been used to find DFU, but these studies have kept their datasets and algorithms private. Therefore, the need for end-to-end automated systems that can identify DFU of all grades and stages is critical. The study's goals were to create new CNN-based automatic segmentation techniques that separate surrounding skin from DFU in full-foot images, because surrounding skin serves as a critical visual cue for evaluating the progression of DFU, and to create reliable and portable deep learning techniques for localizing DFU that can be deployed on mobile devices for remote monitoring. A second goal was to examine the various diabetic foot diseases in accordance with well-known medical categorization schemes. From a computer vision viewpoint, the authors looked at the various DFU conditions, including site, infection, neuropathy, bacterial infection, area, and depth. Machine learning techniques were used in this study to identify key DFU conditions such as ischemia and bacterial infection.

Author 1: R V Prakash
Author 2: K Sundeep Kumar

Keywords: Magnetic resonance imaging (MRI); diabetic foot ulcers (DFU); convolutional neural networks; ischemia; machine learning algorithms; dual-energy X-ray absorptiometry

PDF

Paper 61: Energy Consumption Reduction Strategy and a Load Balancing Mechanism for Cloud Computing in IoT Environment

Abstract: Modern networks are built to be linked, agile, programmable, and load-efficient in order to overcome the drawbacks of an unbalanced network, such as network congestion, elevated transmission costs, and low reliability. The many technological devices in our environment have considerable potential to make the connected-world concept a reality. The Internet of Things (IoT) is a research community initiative to bring this idea to life, and cloud computing is crucial to making it happen. Load balancing and scheduling significantly increase resource utilization and provide the grounds for reliability; load balancing techniques can increase a node's efficiency whether it is under low or high load. This paper presents a scheduling technique for optimal resource allocation based on enhanced particle swarm optimization and virtual machine live migration. The proposed technique prevents excessive or insufficient server load through optimal allocation and scheduling of tasks to physical servers. The proposed strategy was implemented in the CloudSim simulator environment, and comparisons showed that the method is effective and well suited to decreasing execution time and energy consumption, reducing energy consumption in the cloud environment while decreasing execution time. The simulation results showed that energy consumption decreased by 10% compared to particle swarm scheduling and by more than 8% compared to PSO (Particle Swarm Optimization) scheduling. Execution time was reduced by 18% compared to particle swarm scheduling and by 8% compared to PSO.
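
To illustrate the core particle swarm optimization loop that such a scheduler enhances, here is a minimal, self-contained Python sketch. The toy cost function, parameter values, and search range are hypothetical stand-ins for the paper's actual allocation cost (execution time and energy), not its implementation.

```python
import random

# Minimal particle swarm optimization sketch (illustrative only). The cost
# function below is a hypothetical stand-in for an allocation's execution
# time / energy; parameter values are common textbook defaults.
def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=42):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's personal best
    gbest = min(pbest, key=cost)[:]      # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity blends inertia, pull toward personal best,
                # and pull toward the global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

# Toy cost: squared distance from an ideal (perfectly balanced) load vector.
best = pso(lambda x: sum(v * v for v in x), dim=2)
```

A scheduler built on this loop would encode a task-to-server assignment in each particle's position and replace the toy cost with the measured execution time and energy of that assignment.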

Author 1: Tai Zhang
Author 2: Huigang Li

Keywords: Internet of things; load balancing; cloud computing; virtual machine migration

PDF

Paper 62: A Review of Lightweight Object Detection Algorithms for Mobile Augmented Reality

Abstract: Augmented Reality (AR) has led to several technologies being at the forefront of innovation and change in every sector and industry. Accelerated advances in Computer Vision (CV), AR, and object detection have refined the process of analyzing and comprehending the environment. Object detection has recently drawn a lot of attention as one of the most fundamental and difficult computer vision topics. Traditional object detection techniques are fully computer-based, typically need massive Graphics Processing Unit (GPU) power, and are not usually real-time; however, an AR application requires real-time superimposed digital data to enhance the user's field of view. This paper provides a comprehensive review of the most recent lightweight object detection algorithms suitable for use in AR applications. Four sources, Web of Science, Scopus, IEEE Xplore, and ScienceDirect, were included in this review study. A total of ten papers were discussed and analyzed from four perspectives: accuracy, speed, small-object detection, and model size. Several interesting challenges are discussed as recommendations for future work in the object detection field.

Author 1: Mohammed Mansoor Nafea
Author 2: Siok Yee Tan
Author 3: Mohammed Ahmed Jubair
Author 4: Mustafa Tareq Abd

Keywords: Augmented reality (AR); object detection; computer vision (CV); non-graphics processing unit (Non-GPU); real time

PDF

Paper 63: The Influence of Virtual Secure Mode (VSM) on Memory Acquisition

Abstract: Recently, acquiring the full contents of Random Access Memory (RAM) and accessing its data has gained significant interest in digital forensics. However, a security feature of the Windows operating system, Virtual Secure Mode (VSM), presents challenges to the acquisition process by causing a system crash known as a Blue Screen of Death (BSoD). The crash is likely to occur when memory acquisition tools are in use. It disrupts the goal of memory acquisition, since the system must be restarted and the RAM content is no longer available. This study analyzes the implications of VSM for memory acquisition tools and examines the extent of its impact on the acquisition process. Two memory acquisition tools, FTK Imager and Belkasoft RAM Capturer, were used to conduct the acquisition process. Static and dynamic code analyses were performed using reverse engineering techniques, namely a disassembler and a debugger. The results were compared based on the percentage of unreadable memory with VSM active versus inactive. Static analysis showed no difference between the applications' functions for active and inactive VSM. Further Bugcheck analysis of MEMORY.DMP pointed to the ad_driver.sys module in FTK Imager as the cause of the system crash. The percentage of unreadable memory while running with active and inactive VSM for Belkasoft is about 0.6% and 0.0021%, respectively. These results are a significant reference for digital investigators, consistent with the importance of RAM dumps in live forensics.

Author 1: Niken Dwi Wahyu Cahyani
Author 2: Erwid M Jadied
Author 3: Nurul Hidayah Ab Rahman
Author 4: Endro Ariyanto

Keywords: Live forensics; memory acquisition; virtualization; virtual secure mode

PDF

Paper 64: Optimizing Faculty Workloads and Room Utilization using Heuristically Enhanced WOA

Abstract: Manually creating conflict-free schedules every academic semester is a laborious, resource-demanding duty for higher education institutions. Course timetabling optimization, an educational timetabling problem, is a popular example of an NP-hard combinatorial problem. Numerous attempts have been made over the past few decades to solve this problem, but no one has yet developed a foolproof approach that can examine all alternatives to find the best solution. In the present study, the promising swarm-based Whale Optimization Algorithm was heuristically enhanced, yielding HEWOA, and applied as a solution to the course timetabling problem. HEWOA was able to generate an efficient timetable for a large dataset of 1700 events in an average time of only 14.92 seconds, with an average of 7.2 generations and a best time of 8.38 seconds. These results reveal that HEWOA performed better than the various hybrids of the Genetic Algorithm it was compared with in the present study.

Author 1: Lea D. Austero
Author 2: Ariel M. Sison
Author 3: Junrie B. Matias
Author 4: Ruji P. Medina

Keywords: Heuristics; mutation; optimization; swarm; timetabling; whale optimization algorithm

PDF

Paper 65: Transformation Model of Smallholder Oil Palm Supply Chain Ecosystem using Blockchain-Smart Contract

Abstract: The development of new technology has the potential to disrupt and transform existing systems and information technology. This study aims to build a proposed model for transforming an old system into a blockchain-based system. The smallholder oil palm supply chain currently uses traditional information systems and technology; hence its integrity, transparency, and security are vulnerable. To solve this problem, a frontier technology such as blockchain is needed, whose trusted, transparent, and traceable characteristics improve performance and quality. The method used in this study was digital transformation in the context of operational processes and technology. In addition, the As-Is/To-Be model was used as a mechanism to develop a transformation model for the smallholder oil palm supply chain system. Specifically, the As-Is model was used to identify and analyze the information system and technology in the existing system, while the To-Be model was used to determine blockchain's potential and characteristics. The result is the proposed model for transforming the old system into one based on blockchain technology. Prospects and mechanisms for system transformation were also produced in the aspects of transactions, data, and architecture, along with the flow of change strategies needed in the transformation of the blockchain-based smallholder oil palm supply chain system.

Author 1: Irawan Afrianto
Author 2: Taufik Djatna
Author 3: Yandra Arkeman
Author 4: Irman Hermadi

Keywords: Transformation model; smallholder oil palm supply chain; blockchain; smart contract

PDF

Paper 66: Speckle Reduction in Medical Ultrasound Imaging based on Visual Perception Model

Abstract: Ultrasound imaging is one of the most important clinical imaging modalities due to its safety and low cost, in addition to its versatile applications. The main technical problem with this technology is its noisy appearance due to the presence of speckle, which makes reading images more difficult. In this study, a new method of speckle reduction in medical ultrasound images is proposed based on adaptive shifting of the contrast sensitivity function of human vision using a bias field map estimated from the original image. The aim of this work is an effective image enhancement strategy that reduces speckle while preserving diagnostically useful image features and allowing practical real-time implementation for medical ultrasound imaging applications. The new method improves the perceived image quality of ultrasound images by adding a local brightness bias to areas with speckle noise; according to the visual perception model, this changes how pixel variations due to speckle noise are perceived by the human observer. The performance of the proposed method is objectively assessed using quantitative image quality metrics and compared to previous methods. Furthermore, given that image quality perception is subjective, the level of added bias is controlled by a single parameter that accommodates the different needs of different users and applications. This method has the potential to offer better viewing conditions for ultrasound images, which translates to higher diagnostic accuracy.

Author 1: Yasser M. Kadah
Author 2: Ahmed F. Elnokrashy
Author 3: Ubaid M. Alsaggaf
Author 4: Abou-Bakr M. Youssef

Keywords: Contrast sensitivity function; image quality metrics; speckle reduction; ultrasound imaging

PDF

Paper 67: Multi-level Video Captioning based on Label Classification using Machine Learning Techniques

Abstract: Video captioning is a heuristic and essential task in the current world: it saves time by converting long, content-rich videos into simple, readable text reports, narrating the events happening in videos in natural-language sentences. Through the use of labels, tags, and terms, it paves the way for many more interesting tasks such as video content retrieval, video search, and video tagging. Video captioning is currently being attempted by many researchers using exciting deep learning techniques, but this approach instead seeks the best of machine learning for captioning videos in a different way. The novel part of the proposed approach is classifying videos by the labels existing in video frames belonging to various categories and producing consecutive multi-level captions that describe the entire video in a round-robin way. Informative features are extracted from the video frames, such as Gray Level Co-occurrence Matrix (GLCM) features, Hu moments, and statistical features, to provide optimal results. The model is designed with two classifiers, Support Vector Machine (SVM) and Naive Bayes, applied separately. The models are demonstrated on the prevailing standard dataset, the Microsoft Research Video Description corpus (MSVD), and evaluated with the benchmark classification metrics accuracy, precision, recall, and F1-score.
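
To make the GLCM feature concrete, the following pure-Python sketch computes a co-occurrence matrix for a horizontal one-pixel offset and the standard contrast feature derived from it. The 4x4 example frame is illustrative only; a real pipeline would typically use a library implementation and more offsets.

```python
# Illustrative sketch: a gray-level co-occurrence matrix (GLCM) for a
# horizontal offset of one pixel, plus the contrast texture feature.
def glcm(image, levels):
    """Count co-occurrences of gray levels (i, j) for neighbor offset (0, 1)."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

def contrast(m):
    """Contrast = sum over (i, j) of P(i, j) * (i - j)^2, with P normalized."""
    total = sum(sum(row) for row in m)
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total

# A tiny hypothetical 4-gray-level frame.
frame = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 2, 2, 2],
         [2, 2, 3, 3]]
M = glcm(frame, levels=4)
```

Texture statistics such as contrast, energy, and homogeneity computed from `M` become the per-frame feature vector fed to classifiers like SVM or Naive Bayes.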

Author 1: J. Vaishnavi
Author 2: V. Narmatha

Keywords: Video captioning; label classification; Hu moments; GLCM; statistical features; SVM; Naive Bayes

PDF

Paper 68: Students’ Perspective on Sustaining Education and Promoting Humanising Education through e-Learning

Abstract: The COVID-19 pandemic has shifted the education sector towards an e-learning approach to sustain education. Sustaining student education through e-learning significantly impacts student learning experiences and outcomes, which can be influenced by e-learning infrastructure, e-learning materials, and learning engagement. The concept of humanising education refers to a learning process that reflects students' moralities and values. Focus group discussion is a practical approach for assessing students' perspectives on sustaining and humanising their education through e-learning. This qualitative focus group study aimed to discuss the e-learning factors that will sustain education and promote humanising education in the virtual learning environment. Thirty students from Information Technology (IT) and business fields participated and provided different views on sustaining education through e-learning. Thematic analysis was used to analyse the focus group data, and five themes were identified: (1) e-learning technologies and infrastructure; (2) e-learning principles: pedagogy and materials; (3) health and wellness; (4) equality; and (5) engagement: communication and collaboration. The analysis informs e-learning design for sustainable e-learning. In addition, this paper outlines how the findings support the sustainable development goals.

Author 1: Aidrina Sofiadin

Keywords: Education; sustainability; humanizing education; sustainable e-learning

PDF

Paper 69: Diagnosis of Carcinoma from Histopathology Images using DA-Deep Convnets Model

Abstract: Cancer is a major cause of mortality around the globe, responsible for nearly one in six deaths in 2020. Cervical, lung, and breast cancers are among the most common types; cervical cancer is the fourth most common in women worldwide, killing approximately 4,280 women. Cancer-causing infections, such as human papillomavirus (HPV) and hepatitis, account for approximately 30% of cases in low- and lower-middle-income countries. Many cancers are curable if detected as early as possible. In this work, we developed the DA-Deep ConvNets model (data augmentation with a deep convolutional neural network) for the detection of cervical cancer from biopsy images. The deep convolutional neural network is one of the most widely applied deep learning approaches in medical imaging. Today, enhancements in image analysis and processing, particularly medical imaging, have become a major factor in the improvement of systems for medical prognosis, treatment, and diagnosis. The proposed model achieved 99.2% accuracy in detecting whether an input image shows cancer.

Author 1: K. Abinaya
Author 2: B. Sivakumar

Keywords: Cancer; cervical cancer; convolutional neural network; deep learning

PDF

Paper 70: KMIT-Pathology: Digital Pathology AI Platform for Cancer Biomarkers Identification on Whole Slide Images

Abstract: Analysis and identification of cancer imaging biomarkers in biopsy tissues are traditionally done through an optical microscope. Digital tissue scanners and deep learning models automate this task and produce unbiased diagnostics. The digital tissue scanner is called virtual microscopy; it digitizes the glass-slide tissues, and the digitized images, called Whole Slide Images (WSI), are multi-layered (multi-level) images of high resolution and huge size, stored as pyramidal TIFF files. As normal web browsers are unable to handle WSI, a special web imaging platform is needed to obtain, store, visualize, and process them. This platform must provide basic facilities for uploading, viewing, and annotating WSI, which are the inputs to the deep learning models. The integration of deep learning models with the platform and the WSI database provides a complete solution for cancer diagnostics and detection. This paper proposes two deep learning models for the diagnosis and detection of cancer imaging biomarkers in breast cancer and prostate cancer WSI. An EfficientNet deep learning model is used to detect the ISUP (International Society of Urological Pathology) grade for prostate cancer; trained and tested on 5000 prostate WSI, it produces 80% accuracy with a 0.6898 quadratic weighted kappa (QWK) score. An R2U-Net model is used to identify tubule structures in breast cancer, a morphological component used to grade breast cancer; trained and tested on 17432 WSI tiles, it achieves an F1 score of 0.9961 with a mean IoU of 0.8612. The paper also shows the complete execution of these two deep learning models (from uploading WSI to visualizing the AI-detected results) on the newly developed WSI imaging web platform.

Author 1: Rajasekaran Subramanian
Author 2: R. Devika Rubi
Author 3: Rohit Tapadia
Author 4: Rochan Singh

Keywords: AI for cancer prediction and diagnostics; deep learning for WSI analysis; tubule prediction on breast cancer; ISUP grading for prostate cancer; WSI imaging platform

PDF

Paper 71: Visually Impaired Person Assistance Based on Tensor FlowLite Technology

Abstract: One of the most exciting areas of computer vision is real-time object detection, which is used abundantly in many areas. With the growing development of deep learning for applications such as self-driving cars, robots, safety tracking, and guiding visually impaired people, many algorithms have improved at relating video analysis and image analysis. These algorithms differ in their network architectures but share the goal of detecting numerous objects in a complex image. It is very important to use such technology to assist visually impaired people whenever they need it, as visual impairment limits people's movement in unknown places. This paper offers an application system that identifies the everyday objects in the user's surroundings and provides speech feedback about both nearby and distant objects around them. The project was developed using two algorithms, Yolo and Yolo-v3, tested under the same criteria to measure accuracy and performance. The SSD_MobileNet model is used with Yolo in TensorFlow, and the Darknet model is used with Yolo_v3. For speech feedback, a Python library was incorporated to convert statements to speech. Both algorithms were analyzed using a web camera in a variety of circumstances to measure their correctness in every aspect.

Author 1: Nethravathi B
Author 2: Srinivasa H P
Author 3: Hithesh Kumar P
Author 4: Amulya S
Author 5: Bhoomika S
Author 6: Banashree S Dalawai
Author 7: Chakshu Manjunath

Keywords: Tensor flow; SSD; Yolo; Yolo_v3; gifts; deep learning

PDF

Paper 72: Design of Robust Quasi Decentralized Type-2 Fuzzy Load Frequency Controller for Multi Area Power System

Abstract: Interconnected power systems exchange power through tie lines. Sudden load perturbations cause uneven power distribution, resulting in sudden changes in voltage and frequency in the system (tie-line power exchange error). A Load Frequency Controller (LFC) can stabilize the system against such disturbances. In this paper, a novel load frequency controller based on a Type-2 Fuzzy Quasi-Decentralized Functional Observer (T2FQFO) is proposed. In the proposed methodology, the observer gains are obtained mathematically, which guarantees the stability of the system. The efficacy of the proposed technique has been tested on IEEE standard test systems. The results show that the proposed T2FQFO performs better than the fuzzy quasi-decentralized functional observer, the quasi-decentralized functional observer, and the classical state observer, with peak overshoot and settling time improved by approximately 25% compared with the other observers.

Author 1: Jesraj Tataji Dundi
Author 2: Anand Gondesi
Author 3: Rama Sudha Kasibhatla
Author 4: A. Chandrasekhar

Keywords: Load frequency control; type-2 fuzzy quasi-decentralized functional observer; fuzzy quasi-decentralized functional observers; state observer; type-2 fuzzy controller

PDF

Paper 73: EEG-Based Silent Speech Interface and its Challenges: A Survey

Abstract: People with speech disorders may face social and welfare difficulties; a silent speech interface (SSI) is therefore needed to help them communicate. Such an interface decodes speech from a human biosignal. Brain signals contain information from speech production, covering people with numerous speech disorders. They can be acquired non-invasively by electroencephalography (EEG) and then transformed into features for input to speech pattern recognition. This review discusses the advancement of EEG-based SSI research and its current challenges. It mainly covers the acquisition protocol, spectral-spatial-temporal characterization of EEG-based imagined speech, classification techniques with leave-one-subject-out or leave-one-session-out cross-validation, and related real-world environmental issues. It aims to aid future imagined speech decoding research in exploring proper methods to overcome these problems.

Author 1: Nilam Fitriah
Author 2: Hasballah Zakaria
Author 3: Tati Latifah Erawati Rajab

Keywords: Imagined speech; silent speech interface; electroencephalograph (EEG); speech recognition

PDF

Paper 74: Deeply Learned Invariant Features for Component-based Facial Recognition

Abstract: Face recognition under age variation is a challenging problem, because ageing is an intrinsic variation, unlike pose and illumination, which can be controlled. We propose an approach that extracts invariant features to improve facial recognition using facial components. Can facial recognition over age progression be improved by independently resizing each individual facial component? The individual facial components (eyes, mouth, and nose) were extracted using the Viola-Jones algorithm. We then utilize the upper coordinates of the eye-region rectangle to detect the forehead, and the lower coordinates together with the nose rectangle to detect the cheeks. The proposed work uses a Convolutional Neural Network with an input image size tuned for each facial component through many experiments. We sum up component scores by applying weighted fusion for a final decision. The experiments show that the nose component provides the highest score contribution among the components and the cheeks the lowest. The experiments were conducted on two facial databases, MORPH and FG-NET. The proposed work achieves state-of-the-art accuracy, reaching 100% on the FG-NET dataset, and the results obtained on the MORPH dataset outperform the accuracy results of related works in the literature.
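
The weighted score-level fusion described above can be sketched in a few lines of Python. The component weights and match scores below are hypothetical placeholders, not the paper's learned values.

```python
# Illustrative sketch of weighted score-level fusion across facial components.
# Component names follow the abstract; weights and scores are hypothetical.
def fuse(scores, weights):
    """Weighted sum of per-component match scores, normalized by total weight."""
    total = sum(weights[c] for c in scores)
    return sum(scores[c] * weights[c] for c in scores) / total

# A larger weight for the nose mirrors the finding that it contributes
# the most to the final score and the cheeks the least.
weights = {"nose": 0.35, "eyes": 0.25, "mouth": 0.2, "forehead": 0.12, "cheeks": 0.08}
scores = {"nose": 0.92, "eyes": 0.85, "mouth": 0.78, "forehead": 0.70, "cheeks": 0.6}
decision = fuse(scores, weights)  # accept the identity if above a chosen threshold
```

Normalizing by the total weight keeps the fused score on the same [0, 1] scale as the per-component scores, so a single acceptance threshold can be applied.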

Author 1: Adam Hassan
Author 2: Serestina Viriri

Keywords: Invariant features; facial components; facial recognition; convolutional neural network; weighted fusion

PDF

Paper 75: Research on the Design of Online Teaching Platform of College Dance Course based on IGA Algorithm

Abstract: As a comprehensive art form, dance plays an integral role in developing the overall quality of students. However, with the continuing progress of information technology, the traditional classroom-based teaching mode for dance courses can no longer suit the current educational environment. This research designs an e-teaching platform for college dance lessons based on an Improved Genetic Algorithm (IGA). First, it establishes a mathematical model of intelligent test questions, and then it proposes the functional design of an online teaching platform for college dance courses based on the IGA. The feasibility of the proposed platform is validated via a number of tests. With an iteration count of 500, the success rate of the IGA reaches 99%; with an iteration count of 100, the average fitness is 0.929 and the moderate calculation value is 0.936. For the traditional Genetic Algorithm (GA), the corresponding results are 88.6%, 0.73, and 0.752. Compared with the traditional teaching mode based on the GA, the proposed IGA-based method is clearly superior in many aspects.

Author 1: Yunyun Xu

Keywords: Online teaching; improved genetic algorithm; college dance course; intelligent test set; simulated annealing

PDF

Paper 76: Early-Warning Dropout Visualization Tool for Secondary Schools: Using Machine Learning, QR Code, GIS and Mobile Application Techniques

Abstract: Investment in education through the provision of secondary schools to the community is geared to develop human capital in Tanzania. However, these investments have been hampered by unacceptably high rates of school dropout, which seriously affect female students, since most schools do not have effective mechanisms for quality data management to support immediate and effective decision making. Therefore, this study aims to solve the problem of data management from the school level upwards, so that higher levels receive appropriate and effective data on time, through the use of emerging technologies such as machine learning, QR codes, and mobile applications. To implement this solution, the study explored the predictors of school dropout using a mixed approach with questionnaires and interview discussions; 600 participants took part in problem identification in the Arusha region. Following the design science research methodology, Unified Modeling Language, MySQL, QR codes, and mobile application techniques were integrated with a Support Vector Machine to develop the proposed solution. Finally, the evaluation process considered 100 participants, and the results showed that an average of 89% of participants provided positive feedback on the functionality of the developed tool to prevent dropouts in secondary schools in Africa at large.

Author 1: Judith Leo

Keywords: Dropout; education; girls; machine learning; students; QR code; mobile application

PDF

Paper 77: The Best Techniques to Deal with Unbalanced Sequential Text Data in Deep Learning

Abstract: Datasets with a balanced distribution of data are often difficult to find in real life. Although various methods have been developed and proven successful with shallow learning algorithms, handling unbalanced classes with a deep learning approach is still limited, and most studies focus only on image data using the Convolutional Neural Network (CNN) architecture. In this study, we applied several class-handling techniques to three datasets of unbalanced text data: a data-level approach using resampling techniques on word vectors, and an algorithm-level approach using Weighted Cross-Entropy Loss (WCEL), to handle imbalanced text classification with a Bidirectional Long Short-Term Memory (BiLSTM) architecture. We tested each method on three datasets with different characteristics and levels of imbalance. Based on the experiments, each technique performed differently on each dataset.
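
As a minimal illustration of the algorithm-level approach, the sketch below computes a weighted cross-entropy loss with class weights set inversely proportional to class frequency, which is one common convention; the class counts and predicted probabilities are hypothetical, and the paper may weight its loss differently.

```python
import math

# Illustrative sketch: weighted cross-entropy loss with inverse-frequency
# class weights (one common convention; counts and probabilities are
# hypothetical, not the paper's data).
def class_weights(counts):
    """Rare classes get proportionally larger weights."""
    total = sum(counts)
    n = len(counts)
    return [total / (n * c) for c in counts]

def weighted_cross_entropy(probs, label, weights):
    """Loss for one sample: -w[y] * log(p[y])."""
    return -weights[label] * math.log(probs[label])

counts = [900, 100]                  # a highly imbalanced binary text dataset
w = class_weights(counts)            # roughly [0.56, 5.0]
# Same predicted confidence (0.7) for the true class in both samples:
loss_minor = weighted_cross_entropy([0.3, 0.7], label=1, weights=w)
loss_major = weighted_cross_entropy([0.7, 0.3], label=0, weights=w)
```

At equal predicted confidence, a mistake-prone minority-class sample contributes roughly nine times as much loss here as a majority-class one, which is what pushes the model to stop ignoring the rare class.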

Author 1: Sumarni Adi
Author 2: Awaliyatul Hikmah
Author 3: Bety Wulan Sari
Author 4: Andi Sunyoto
Author 5: Ainul Yaqin
Author 6: Mardhiya Hayaty

Keywords: Imbalanced text classification; deep learning; resampling technique; weighted cross-entropy loss

PDF

Paper 78: A New Framework for Accelerating Magnetic Resonance Imaging using Deep Learning along with HPC Parallel Computing Technologies

Abstract: Magnetic resonance imaging (MRI) has played a vital role among emerging technologies because of its non-invasive principle, and MR equipment is a traditional means of imaging biological structures. In the medical domain, MRI is an important tool for staging in clinical diagnosis, with the ability to furnish rich physiological and functional information thanks to its non-ionizing, radiation-free nature. However, MRI is highly demanding in several clinical applications. In this paper, we propose a novel deep learning based method that accelerates MRI using a large number of MR images. The proposed method uses a supervised learning approach that trains a network on the given datasets and determines the network parameters that afford an accurate reconstruction of under-sampled acquisitions. We also designed an offline neural network (NN) trained to discover the relationship between MR images and k-space. All experiments were performed on computers with advanced NVIDIA GPUs (Tesla K80 and GTX Titan). The proposed model outperformed alternatives and attained a <0.2% error rate. To the best of our knowledge, this method is a leading candidate approach for the future.

Author 1: Hani Moaiteq Aljahdali

Keywords: Magnetic resonance imaging (MRI); segmentation; classification; acceleration; deep learning

PDF

Paper 79: Artificial Neural Network based Power Control in D2D Communication

Abstract: As a viable technique for next-generation wireless networks, Device-to-Device (D2D) communication has attracted interest because it encourages point-to-point communication between User Equipment (UE) without passing through base stations (BS), and it has been proposed in cellular networks as a supplementary paradigm primarily to increase network connectivity. This research considers a cellular network where users attempt D2D connections. A D2D pair is composed of two D2D users (DUEs), a transmitter and a receiver. To improve spectral efficiency, we assume that the D2D pairs employ only one communication channel. To minimize interference between D2D pairs and increase capacity, power control is required. In the scenario where only typical cellular channel gains between base stations and DUEs are known, and channel gains among DUEs are completely inaccessible, we address the D2D power control problem. For each individual D2D pair, we use an artificial neural network (ANN) to calculate the transmission power. We show that the maximum aggregate capacity for the D2D pairs may be reached while predicting the transmission power setting for D2D pairs from cellular channel gains.
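To make the gains-to-power mapping concrete, here is a minimal one-hidden-layer network that maps BS-to-DUE channel gains to a transmit power capped at a maximum; the weights, gain values, and power cap are illustrative (untrained) assumptions, not the trained ANN from the paper:

```python
import math

P_MAX = 0.2  # maximum D2D transmit power in watts (illustrative)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_power(gains, w_hidden, w_out):
    """One-hidden-layer ANN: cellular channel gains in, Tx power out.
    ReLU hidden layer; the sigmoid output is scaled to (0, P_MAX)."""
    hidden = [max(0.0, sum(w * g for w, g in zip(row, gains)))
              for row in w_hidden]
    return P_MAX * sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

# Illustrative weights for one D2D pair observing two BS-to-DUE gains.
w_hidden = [[0.8, -0.3], [-0.5, 0.9]]
w_out = [1.2, -0.7]
p = predict_power([0.6, 0.4], w_hidden, w_out)
print(p)  # predicted power always respects the cap
```

The sigmoid scaling guarantees the predicted power stays within the device's allowed range regardless of the input gains, which is one simple way to enforce a hard power constraint.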

Author 1: Nethravathi H M
Author 2: S Akhila

Keywords: Artificial Neural Network (ANN); Base Stations (BS); CUE; Device-to-Device (D2D); DUE; ML; User Equipment (UE)

PDF

Paper 80: Providing a Framework for Security Management in Internet of Things

Abstract: With the advent of Internet of Things technology, tremendous changes are taking place. Things humans never even imagined may arrive in the near future, and just as the Internet now surrounds all aspects of people's daily lives, intelligent objects will autonomously take over many aspects of them. A great deal of research and development has been done in the field of the Internet of Things, but many challenges remain, one of the most important being security. Therefore, in this paper, after reviewing the requirements, models, and security architectures of the Internet of Things, a framework for security management in the Internet of Things is proposed that takes various aspects and requirements into account. The proposed framework uses ideas such as cryptography, encryption, anomaly detection, intrusion detection, and behavior pattern analysis, and can be considered a basis for future research. The purpose of this research is to determine security requirements and provide a method to improve security management in the Internet of Things. Based on the tests, the proposed method is fully (100%) resistant to data modification attacks, and achieves detection accuracies of up to 97% against impersonation attacks and up to 89% against denial-of-service attacks.

Author 1: XUE Zhen
Author 2: LIU Xingyue

Keywords: Internet of things (IoT); security management; security requirement; security model; security architecture

PDF

Paper 81: Novel Strategies Employing Deep Learning Techniques for Classifying Pathological Brain from MR Images

Abstract: Brain tumors are among the most widespread and disturbing illnesses, with a very limited life expectancy in their most serious form. As a consequence, therapy planning is a critical component in enhancing the patient's quality of life. Image modalities like computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound are commonly used to assess malignancies in the brain, breast, etc. MRI scans are employed in this study to identify brain tumors. The application of excellent categorization systems to Magnetic Resonance Imaging (MRI) aids in the accurate detection of brain malignancies. The large quantity of data produced by MRI scans, however, renders manual classification of tumor and non-tumor images in a given time period impossible and comes with major obstacles. As a consequence, in order to decrease human mortality, a dependable and automated categorization approach is necessary. The enormous structural and anatomical heterogeneity of the environment surrounding a brain tumor makes automated classification a difficult undertaking. This paper proposes a Convolutional Neural Network (CNN) classifier for automated brain tumor diagnosis. To study and compare the findings, other convolutional neural network designs such as MobileNet V2, ResNet101, and DenseNet121 are used. Small kernels are employed to carry out the more intricate architectural design, keeping the neuron weights small. This experiment was carried out using Python and Google Colab.

Author 1: Mitrabinda Khuntia
Author 2: Prabhat Kumar Sahu
Author 3: Swagatika Devi

Keywords: Brain tumor; CT; MRI; CNN; MobileNet V2; ResNet101; DenseNet121

PDF

Paper 82: Towards a Blockchain-based Medical Test Results Management System

Abstract: The role of test results in the diagnosis and treatment of patients’ diseases at medical facilities cannot be ignored. Patients must undergo a series of tests related to their symptoms, which can be repeated many times depending on the type of disease and treatment. Critically, when patients lose their medical test records (i.e., their medical history), diagnosis is difficult due to the lack of information about the medical history and the symptoms/complications in previous treatments. Storing this treatment information at medical centers can address risks related to user failure (e.g., loss of medical test records and water- or fire-damaged documents). However, users face difficulty when they move to other medical centers for examination and treatment, since the data is stored locally and is difficult to share with others. Current solutions focus on empowering users (i.e., patients) to share medical information related to disease treatment. However, the main barrier to these approaches is the knowledge of the users: they must have some background in the technologies, risks, and rights involved in sharing with treatment facilities. To solve this problem, we propose a Blockchain-based medical test result management system where all information is stored and verified by the stakeholders. The data is stored in a decentralized manner and updated throughout the treatment process. We implement a proof-of-concept on the Hyperledger Fabric platform. To demonstrate the effectiveness of the proposed system, we conduct evaluations based on the three main tasks of the system: initializing, accessing, and updating data, in six different scenarios (i.e., increasing sizes of processing requests). The evaluation, based on Hyperledger Caliper, enabled a deeper analysis of the proposed model.

Author 1: Phuc Nguyen Trong
Author 2: Hong Khanh Vo
Author 3: Luong Hoang Huong
Author 4: Khiem Huynh Gia
Author 5: Khoa Tran Dang
Author 6: Hieu Le Van
Author 7: Nghia Huynh Huu
Author 8: Tran Nguyen Huyen
Author 9: Loc Van Cao Phu
Author 10: Duy Nguyen Truong Quoc
Author 11: Bang Le Khanh
Author 12: Kiet Le Tuan

Keywords: Blood donation; blockchain; hyperledger fabric; blood products supply chain

PDF

Paper 83: Wheat Diseases Detection and Classification using Convolutional Neural Network (CNN)

Abstract: Agriculture has long been central to the economy, a focus that can be traced back to before the industrial revolution. Wheat, like other staple harvests, supplies essential nutrients that our bodies require to function properly. However, the supply of this crop is limited by several fairly common diseases, making it difficult to meet demand. Many agricultural workers are illiterate, which prevents them from taking appropriate preventive measures when necessary, and this has reduced total wheat production. Diagnosing wheat diseases in their early stages can be quite difficult because of the wide variety of environmental factors, the diversity of agricultural products, and the illiteracy of agricultural workers. A variety of models have previously been proposed for identifying diseases in wheat crops. This study demonstrates a two-dimensional CNN model that can identify and categorize diseases affecting wheat harvests. Pre-trained models are employed to extract significant features from the images, and the proposed method then distinguishes disease-affected wheat crops from healthy ones based on these features. A total of 4800 images were collected for this study, spanning eleven classes of diseased crops and one class of healthy crops, and the reliability of the findings was assessed at 98.84 percent. To give the proposed model the capability to identify and classify diseases from a variety of angles, the images in the collection were flipped at different perspectives. These findings provide evidence that CNNs can be applied to increase the precision with which diseases in wheat crops are identified.

Author 1: Md Helal Hossen
Author 2: Md Mohibullah
Author 3: Chowdhury Shahriar Muzammel
Author 4: Tasniya Ahmed
Author 5: Shuvra Acharjee
Author 6: Momotaz Begum Panna

Keywords: Wheat crop diseases; artificial intelligence; convolutional neural networks; image processing; feature extraction

PDF

Paper 84: An Effective Decision-Making Support for Student Academic Path Selection using Machine Learning

Abstract: In Benin, after the GCSE (General Certificate of Secondary Education), learners can either enroll in Technical and Vocational Education and Training (TVET) or further their studies in general education. The majority of those who take the latter path enroll in Senior High School and choose the Biology stream or field of study. However, most of them do not have the abilities required to succeed in this field. For instance, in the last edition of the Senior Secondary Education Certificate (French baccalaureate), held in June 2022 in Benin, the Biology field of study had a low success rate of 42%. One may therefore consider that there is a problem in the orientation of the students. In recent years, Machine Learning has been used in almost every field to optimize processes or to assist in decision-making. Improving academic performance has always been of general interest, and good academic performance implies good academic orientation. The goal of this study is to optimally help learners who have just obtained their GCSE to select their field of study. For this purpose, two major elements are predicted: i) the Scientific or Literary ability of students, and ii) the Literature, Mathematics and Physical Sciences (MPS), or Biology stream of learners. More precisely, the average marks in Mathematics, Physics and Chemistry Technology (PCT), and Biology from 6th to 9th grade for 325 students are used. Machine Learning algorithms such as Decision Tree, Random Forest, Linear Support Vector Classifier (SVC), K-Nearest Neighbors (KNN), and Logistic Regression are used to predict learners' ability and stream. As a result, for learners' ability prediction, we obtained the best accuracy of 99% with the Random Forest algorithm for a split that reserved around 21% of the dataset for testing. For learners' stream prediction, we obtained the best accuracy of 95% with the Linear SVC algorithm for a split that reserved around 20% of the dataset for testing.
This study contributes to Educational Data Mining (EDM) by exploring academic data using numerous methods. Furthermore, it provides a tool to ease students' academic path selection, which may be used by educational institutes to ensure student performance. This paper presents the steps and outputs of the study we performed, along with some recommendations for future research.
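As a rough illustration of one of the listed classifiers, the sketch below implements a plain K-Nearest Neighbors vote over average subject marks; the feature layout, toy records, and stream labels are assumptions for illustration, not the study's dataset:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); query: feature_vector.
    Majority vote among the k nearest neighbors (Euclidean distance)."""
    nearest = sorted(train, key=lambda rec: math.dist(rec[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy records: (avg Mathematics, avg PCT, avg Biology) -> stream
train = [
    ((15, 14, 10), "MPS"), ((16, 15, 11), "MPS"), ((14, 13, 9), "MPS"),
    ((9, 10, 16), "Biology"), ((10, 9, 15), "Biology"), ((8, 11, 17), "Biology"),
]
print(knn_predict(train, (15, 14, 9)))   # strong in Maths/PCT
print(knn_predict(train, (9, 10, 16)))   # strong in Biology
```

In the study's setting, the same interface would be fitted on the 6th-to-9th-grade averages of the 325 students, with a held-out test split for accuracy measurement.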

Author 1: Pelagie HOUNGUE
Author 2: Michel HOUNTONDJI
Author 3: Theophile DAGBA

Keywords: Academic path; academic performance; machine learning; educational data mining

PDF

Paper 85: Towards a YouTube Verified Content System based on a Blockchain Approach

Abstract: YouTube connects people with each other through an online video-sharing service platform. With the great development of the entertainment industry, content on YouTube is accessible to many people of different ages. However, verifying whether content posted on YouTube is clean is a difficult problem. Dirty content is violent, pornographic, and vulgar content that causes serious psychological harm to users under the age of 18, especially those not yet aware of the harmful effects such toxic content can have on a child's behavior. Admittedly, Google (i.e., YouTube) has developed the YouTube Kids application, where videos are intended only for children under the age of 13. However, cultural and educational differences between regions strongly influence the selection of content for children, so the content restrictions in the YouTube Kids application have not yet met all the requirements of parents around the world. There have been many efforts to identify videos containing malicious content based on deep learning. However, there is no method that builds a tool to help parents share and identify videos with objectionable content (e.g., violence, pornography, obscene words) on the YouTube platform. In this research paper, we introduce YVC, a YouTube verified content platform that applies blockchain's distributed, public validation. This tool helps parents validate YouTube content and issue reports to reduce dirty content on YouTube. To demonstrate the effectiveness of our approach, we implement a proof-of-concept on the three most popular EVM platforms: Ethereum, Fantom, and the Binance Smart Chain. Compared to YouTube Kids (i.e., the most common video-sharing platform for kids under 13), our approach is able to capture the video preferences of parents across different areas/countries.

Author 1: Phuc Nguyen Trong
Author 2: Hong Khanh Vo
Author 3: Luong Hoang Huong
Author 4: Khiem Huynh Gia
Author 5: Khoa Tran Dang
Author 6: Hieu Le Van
Author 7: Nghia Huynh Huu
Author 8: Tran Nguyen Huyen
Author 9: The Anh Nguyen
Author 10: Loc Van Cao Phu
Author 11: Duy Nguyen Truong Quoc
Author 12: Bang Le Khanh
Author 13: Kiet Le Tuan

Keywords: Blockchain technology; public authentication; Ethereum; Fantom; Binance smart chain platform; social media platform

PDF

Paper 86: Blood and Product-Chain: Blood and its Products Supply Chain Management based on Blockchain Approach

Abstract: This paper provides a novel implementation of blockchain technology in which data is stored in a decentralized distributed ledger to assist information protection in blood supply chain management and prevent data loss or identity theft. The present blood supply comes exclusively from the blood of volunteers (known as donors), and blood and its derivatives play a significant role in treating diseases today. In particular, the products extracted from blood (e.g., red blood cells, white blood cells, platelets, plasma) require different procedures and storage environments (e.g., time, temperature, humidity). However, current blood management processes are performed manually, with medical staff doing all data entry. Additionally, data about the complete blood donation process (e.g., blood donors, blood recipients, blood inventories) is held centrally and is challenging to examine accurately. Ensuring centralized data security is therefore extremely difficult, as personal information can be stolen or data lost. In this study, we present a blockchain-based blood management process and offer Blood and Product-Chain, a decentralized distributed ledger that stores data to address these restrictions. Specifically, we target two main contributions: i) we design the Blood and Product-Chain model to manage all relevant information about blood and its products based on blockchain technology, and ii) we implement a proof-of-concept of Blood and Product-Chain on Hyperledger Fabric and evaluate it in two scenarios (i.e., data creation and data access).

Author 1: Phuc Nguyen Trong
Author 2: Hong Khanh Vo
Author 3: Luong Hoang Huong
Author 4: Khiem Huynh Gia
Author 5: Khoa Tran Dang
Author 6: Hieu Le Van
Author 7: Nghia Huynh Huu
Author 8: Tran Nguyen Huyen
Author 9: The Anh Nguyen
Author 10: Loc Van Cao Phu
Author 11: Duy Nguyen Truong Quoc
Author 12: Bang Le Khanh
Author 13: Kiet Le Tuan

Keywords: Blood donation; blockchain; hyperledger fabric; blood products supply chain

PDF

Paper 87: 360° Virtual Reality Video Tours Generation Model for Hostelry and Tourism based on the Analysis of User Profiles and Case-Based Reasoning

Abstract: This paper proposes an adaptive software architecture focused on hotel marketing based on immersive virtual reality (VRI) with 360° videos, which includes a component based on Case-Based Reasoning (CBR) to provide experiences that correspond to the analysis of user profiles. For the validation of the system, considering that the use of VR can trigger experiences in several dimensions, affective, attitudinal, and behavioral responses, as well as cognitive load, were evaluated using visualizations of 2D photographs from hotel websites, which were compared with 360° videos in a VRI environment. To test the hypotheses, a quasi-experimental study was conducted with an independent sample group, in which subjects were randomly assigned to the two types of visualizations. The contribution of the article lies in the incorporation of marketing concepts and approaches in VRI experiences with 360° videos through virtual objects used by the software architecture, as well as in the proposed validation of the effectiveness of the approach.

Author 1: Luis Alfaro
Author 2: Claudia Rivera
Author 3: Ernesto Suarez
Author 4: Alberto Raposo

Keywords: Immersive virtual reality; adaptive software architecture; case-based reasoning; user profiles

PDF

Paper 88: A Hybrid Genetic Algorithm for Service Caching and Task Offloading in Edge-Cloud Computing

Abstract: Edge-cloud computing is increasingly prevalent for Internet-of-Things (IoT) service provisioning, combining the benefits of both edge and cloud computing. In this paper, we aim to improve user satisfaction and resource efficiency through service caching and task offloading for edge-cloud computing. We propose a hybrid heuristic method that combines the global search ability of the genetic algorithm (GA) with heuristic local search, to improve the number of satisfied requests and the resource utilization. The proposed method encodes service caching strategies into chromosomes and evolves the population with the GA. Given a caching strategy decoded from a chromosome, our method exploits a dual-stage heuristic for task offloading. In the first stage, the dual-stage heuristic pre-offloads tasks to the cloud, and offloads tasks whose requirements cannot be satisfied by the cloud to edge servers, aiming to satisfy as many tasks' requirements as possible. The second stage re-offloads tasks from the cloud to edge servers, to get the utmost out of limited edge resources. Experimental results demonstrate the competitive edge of the proposed method over multiple classical and state-of-the-art techniques. Compared with five existing scheduling algorithms, our method achieves 11.3%–23.7% more accepted tasks and 1.86%–18.9% higher resource utilization.
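The chromosome encoding described above can be sketched with a toy GA over a single edge server, where each bit marks whether a service is cached; the demand values, capacity, and GA parameters are illustrative assumptions rather than the paper's setup, and the paper's dual-stage offloading heuristic is reduced here to a simple fitness function:

```python
import random

random.seed(7)
SERVICES = 8                       # candidate services (illustrative)
CAPACITY = 3                       # edge server caches at most 3 services
DEMAND = [5, 1, 4, 2, 8, 1, 3, 6]  # requests per service (toy data)

def fitness(chromo):
    """Satisfied requests; strategies over capacity score 0."""
    if sum(chromo) > CAPACITY:
        return 0
    return sum(d for bit, d in zip(chromo, DEMAND) if bit)

def evolve(pop_size=20, gens=40, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(SERVICES)]
           for _ in range(pop_size)]
    pop[0] = [1] + [0] * (SERVICES - 1)   # seed one feasible strategy
    best = max(pop, key=fitness)
    for _ in range(gens):
        parents = [max(random.sample(pop, 2), key=fitness)  # tournament
                   for _ in range(pop_size)]
        pop = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, SERVICES)              # 1-point crossover
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                pop.append([bit ^ (random.random() < p_mut)  # bit-flip mutation
                            for bit in child])
        best = max(pop + [best], key=fitness)
    return best

best = evolve()
print(best, fitness(best))
```

The optimum here caches the three highest-demand services; in the paper's full method the fitness of a chromosome would instead be computed by running the dual-stage offloading heuristic on the decoded caching strategy.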

Author 1: Li Li
Author 2: Yusheng Sun
Author 3: Bo Wang

Keywords: Cloud computing; edge computing; genetic algorithm; service caching; task offloading

PDF

Paper 89: Aspect based Sentiment & Emotion Analysis with ROBERTa, LSTM

Abstract: Internet usage has increased social media activity over the past few years, significantly impacting public opinion on online social networks. Nowadays, these websites are considered the most appropriate place to express feelings and opinions. The popular social media site Twitter offers valuable insight into people's thoughts. Throughout the conflict between Russia and Ukraine, people from all over the world have expressed their opinions. In this study, machine learning and deep learning techniques are used to understand people's emotions, and their views about this war are revealed. This study unveils a novel deep-learning approach that merges the best features of sequence and transformer models while addressing their respective flaws. The model combines RoBERTa with ABSA (Aspect-Based Sentiment Analysis) and Long Short-Term Memory for sentiment analysis. A large dataset of geographically tagged tweets related to the Ukraine-Russia war was collected from Twitter and analyzed using the RoBERTa-based sentiment model, while the Long Short-Term Memory model effectively captures long-distance contextual semantics. The Robustly Optimized BERT with ABSA approach maps words into a compact, meaningful word-embedding space. The accuracy of the suggested hybrid model is 94.7%, which is higher than the accuracy of state-of-the-art techniques.

Author 1: Uddagiri Sirisha
Author 2: Bolem Sai Chandana

Keywords: Aspect-based sentiment analysis; Twitter; LSTM; emotion analysis; Russia-Ukraine war; online social networks; RoBERTa model

PDF

Paper 90: Contactless Surveillance for Preventing Wind-Borne Disease using Deep Learning Approach

Abstract: COVID-19, caused by the SARS-CoV-2 virus, has been marked a pandemic worldwide. Different studies are being conducted with a view to preventing and lessening infections caused by COVID-19. In the future, other wind-borne diseases may also appear and even emerge as pandemics. To prevent this, various measures, such as wearing face masks, should be an integral part of our daily life. It is tough to ensure individuals' safety manually. The goal of this paper is to automate the process of contactless surveillance so that substantial prevention can be ensured against all kinds of wind-borne diseases. Automating the process requires real-time analysis and object detection, for which deep learning is the most efficient approach. In this paper, a deep learning model is used to check whether a person takes preventive measures. In our experimental analysis, we considered real-time face mask detection as a preventive measure and propose a new face mask detection dataset. Detecting a face mask along with the identity of a person achieved an accuracy of 99.5%. The proposed model decreases time consumption, as no human intervention is needed to check an individual person, and helps decrease infection risk through a contactless automation system.

Author 1: Md Mania Ahmed Joy
Author 2: Israt Jaben Bushra
Author 3: Razoana Ayshee
Author 4: Samira Hasan
Author 5: Samia Binta Hassan
Author 6: Md. Sawkat Ali
Author 7: Omar Farrok
Author 8: Mohammad Rifat Ahmmad Rashid
Author 9: Maheen Islam

Keywords: Computer vision; convolution neural network; COVID-19; deep learning; face mask detection; identity detection; object detection

PDF

Paper 91: Secure and Lightweight Authentication Protocol for Smart Metering System

Abstract: One of the main advantages of the new power grid over the traditional grid is intelligent energy management by the customer and the operator. Energy supply, demand response management, and consumption regulation are only possible with the smart metering system, whose main component is the smart meter. Hence, a compromised smart meter or a successful attack against this entity may cause data theft, data falsification, and server/device manipulation. The development of smart grids and the guarantee of their services therefore depend on the ability to avoid attacks and disasters by ensuring high security. This paper provides a secure and lightweight security protocol that respects IoT device constraints. The proposal deploys distributed OTP calculations combined with the BLAKE2s hash function and the Ascon AEAD cipher to ensure authentication, confidentiality, and integrity. We present a performance analysis, an informal security evaluation, and a formal evaluation made with the AVISPA-SPAN tool, and we compare the proposed protocol to other similar works. The assessment shows that the proposed protocol is light, valid, secure, and robust against many attacks that threaten the NAN area of the smart metering system, namely MITM and replay attacks.
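One building block can be sketched with a time-windowed one-time password derived via keyed BLAKE2s from Python's standard library; the window length, truncation, and key are illustrative assumptions, and the paper's full protocol additionally involves Ascon AEAD encryption and distributed OTP computation, which are not reproduced here:

```python
import hashlib
import time

def otp_blake2s(shared_key, now=None, interval=30):
    """Time-windowed OTP sketch: keyed BLAKE2s over the current time
    window, truncated to a 6-digit code."""
    window = int(time.time() if now is None else now) // interval
    digest = hashlib.blake2s(window.to_bytes(8, "big"), key=shared_key).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

key = b"meter-head-end-shared-secret"   # pre-shared secret (illustrative)
t = 1_700_000_000
code_meter = otp_blake2s(key, now=t)
code_server = otp_blake2s(key, now=t + 5)   # same 30-second window
print(code_meter, code_meter == code_server)
```

Because the code is bound to the time window, a replayed OTP from an earlier window fails verification, which is the property that counters replay attacks in such schemes.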

Author 1: Hind El Makhtoum
Author 2: Youssef Bentaleb

Keywords: Internet of things; confidentiality; integrity; authentication; Ascon; Blake2; AVISPA

PDF

Paper 92: Applying Logarithm and Russian Multiplication Protocols to Improve Paillier’s Cryptosystem

Abstract: Cloud computing provides on-demand access to a diverse set of remote IT services. It offers a number of advantages over traditional computing methods, including pay-as-you-go pricing, increased agility, and on-demand scalability; it also reduces costs through increased efficiency and better business continuity. The most significant barrier preventing many businesses from moving to the cloud is the security of crucial data maintained by the cloud provider. The cloud server must have complete access to the data to respond to a client request, which implies that the decryption key must be sent to the cloud by the client; this may compromise the confidentiality of data stored in the cloud. One way to allow the cloud to use encrypted data without knowing or decrypting it is homomorphic encryption. In this paper, we focus on improving the Paillier cryptosystem, first by using two protocols that allow the cloud to perform the multiplication of encrypted data, and then by comparing the two protocols in terms of key size and time.
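For context, the sketch below shows the baseline Paillier homomorphism such work builds on: multiplying ciphertexts adds plaintexts, and raising a ciphertext to a plaintext power multiplies by a known scalar. The toy primes are far too small for any real use, and the paper's two protocols (logarithm and Russian multiplication) for ciphertext-by-ciphertext multiplication are not reproduced here:

```python
import math
import random

# Toy Paillier keypair -- primes far too small for any real deployment.
p, q = 293, 433
n, lam = p * q, math.lcm(p - 1, q - 1)
n2, g = n * n, n + 1

def encrypt(m):
    """c = g^m * r^n mod n^2 with random r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """m = L(c^lam mod n^2) * mu mod n, with L(x) = (x - 1) // n."""
    L = lambda x: (x - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)
    return (L(pow(c, lam, n2)) * mu) % n

m1, m2 = 123, 45
c1, c2 = encrypt(m1), encrypt(m2)
print(decrypt((c1 * c2) % n2))   # E(m1)*E(m2) decrypts to m1 + m2
print(decrypt(pow(c1, m2, n2)))  # E(m1)^m2 decrypts to m1 * m2 (mod n)
```

The second print illustrates why plain Paillier only supports multiplication by a *known* scalar: multiplying two ciphertexts without revealing either plaintext is exactly what the paper's additional protocols provide.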

Author 1: Hamid El Bouabidi
Author 2: Mohamed EL Ghmary
Author 3: Sara Maftah
Author 4: Mohamed Amnai
Author 5: Ali Ouacha

Keywords: Cloud computing; cloud security; homomorphic encryption; Paillier cryptosystem; sockets

PDF

Paper 93: Parallelizing Image Processing Algorithms for Face Recognition on Multicore Platforms

Abstract: A good face detection system should have the ability to identify objects under varying degrees of illumination and orientation, and it should be able to respond to all possible variations in the image. The appearance of a face in an image depends on the relative camera-face pose, which affects which features, such as the nose or an eye, are visible. A face's appearance is also directly influenced by the person's facial expression and may be partially occluded by surrounding objects. One of the most important and necessary conditions for face recognition is to separate the face from the background with reliable classification techniques. However, faces can appear against complex backgrounds and in different positions, and a face recognition system can mistake some areas of the background for faces. This paper addresses several face recognition problems, including segmenting, extracting, and identifying facial features, and distinguishing candidate face regions from the background.

Author 1: Kausar Mia
Author 2: Tariqul Islam
Author 3: Md Assaduzzaman
Author 4: Tajim Md. Niamat Ullah Akhund
Author 5: Arnab Saha
Author 6: Sonjoy Prosad Shaha
Author 7: Md. Abdur Razzak
Author 8: Angkur Dhar

Keywords: Image processing; multi-core platforms; machine learning; face recognition; parallelizing

PDF

Paper 94: A Hybrid Protection Method to Enhance Data Utility while Preserving the Privacy of Medical Patients Data Publishing

Abstract: Medical patient data need to be published and made available to researchers so that they can use, analyse, and evaluate the data effectively. However, publishing medical patient data raises privacy concerns regarding protecting sensitive data while preserving the utility of the released data. The privacy-preserving data publishing (PPDP) process attempts to keep public data useful without risking the patients' privacy. Through protection methods like perturbing, suppressing, or generalizing values, which introduce uncertainty into identity inference or sensitive value estimation, PPDP aims to reduce the risk of patient data disclosure while preserving the potential use of published data. Although helpful, these protection methods inevitably incur information loss when attempting to achieve a high level of privacy. In addition, privacy-preserving techniques may affect the use of the data, resulting in imprecise or even impractical knowledge extraction. Thus, balancing privacy and utility in medical patient data is essential. This study proposes an innovative technique that uses a hybrid protection method for utility enhancement while preserving the privacy of medical patients' data. The technique partitions information horizontally and vertically, grouping data into columns and equivalence classes. Then, the attributes assumed to be easily known by any attacker are determined by upper and lower protection levels (UPL and LPL). This work also relies on false matches and value swapping to make attribute disclosure less likely. According to the results, the technique delivers about 93.4% data utility when the exchange level is 5% using LPL, and 95% using UPL, on a 4.5K medical patient dataset. In conclusion, the innovative technique minimizes disclosure risk compared with other existing works.

Author 1: Shermina Jeba
Author 2: Mohammed BinJubier
Author 3: Mohd Arfian Ismail
Author 4: Reshmy Krishnan
Author 5: Sarachandran Nair
Author 6: Girija Narasimhan

Keywords: Medical patients data publishing; anonymization; protection method for preserving the privacy

PDF

Paper 95: Swarm Intelligence-based Hierarchical Clustering for Identification of ncRNA using Covariance Search Model

Abstract: The Covariance Model (CM) has been quite effective in finding potential members of existing families of non-coding Ribonucleic Acid (ncRNA) and has provided excellent accuracy in genome sequence databases. However, it has significant drawbacks for family-specific search. An existing Hierarchical Agglomerative Clustering (HAC) technique merges overlapping sequences into what is known as a combined CM (CCM). However, structural information is discarded, and the sequence features of each family are significantly diluted as the number of original structures increases. Additionally, it can only find members of existing families and is not useful for finding potential members of novel ncRNA families. It is therefore important to construct generic sequence models that can recognize new potential members of novel ncRNA families and assign unknown ncRNA sequences as potential members of known families. To achieve these objectives, this study proposes implementing Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA) to ensure the CCMs have the best quality at every level of the dendrogram hierarchy. This study will also apply a distance matrix as the criterion to measure the compatibility between two CMs. The proposed techniques will use five gene families, with fifty sequences from each family, from the Rfam database, divided into training and testing datasets to test the CM combination method. The proposed techniques will be compared to the existing HAC in terms of identification accuracy, sum of bit-scores, and processing time, with each of these performance measurements statistically validated.

Author 1: Lustiana Pratiwi
Author 2: Yun-Huoy Choo
Author 3: Azah Kamilah Muda
Author 4: Satrya Fajri Pratama

Keywords: Covariance model; ncRNA identification; swarm intelligence; hierarchical clustering

PDF

Paper 96: COVIDnet: An Efficient Deep Learning Model for COVID-19 Diagnosis on Chest CT Images

Abstract: The novel coronavirus disease (COVID-19) has been a severe worldwide threat to humans since December 2019. The virus mainly affects the human respiratory system, making breathing difficult. Early detection and diagnosis are essential to controlling the disease. Radiological imaging, such as Computed Tomography (CT) scans, produces clear, high-quality chest images and helps quickly diagnose lung abnormalities. Recent advancements in artificial intelligence enable accurate and fast detection of COVID-19 symptoms on chest CT images. This paper presents COVIDnet, an improved and efficient deep learning model for COVID-19 diagnosis on chest CT images. We developed a chest CT dataset from 220 CT studies from Tamil Nadu, India, to evaluate the proposed model. The final dataset contains 5191 CT images (3820 COVID-infected and 1371 normal CT images). The proposed COVIDnet model aims to produce accurate diagnostics for classifying these two classes. Our experimental results show that COVIDnet achieved a superior accuracy of 98.98% compared with three contemporary deep learning models.

Author 1: Briskline Kiruba S
Author 2: Murugan D
Author 3: Petchiammal A

Keywords: Coronavirus disease; reverse transcription polymerase chain reaction; computed tomography; deep learning

PDF

Paper 97: Rao-Blackwellized Particle Filter with Neural Network using Low-Cost Range Sensor in Indoor Environment

Abstract: Implementations of the Rao-Blackwellized Particle Filter (RBPF) in grid-based simultaneous localization and mapping (SLAM) algorithms with range sensors are commonly developed using sensors with dense measurements, such as a laser rangefinder. In this paper, a more cost-effective solution is explored in which an array of infrared sensors mounted on a mobile robot platform is used. The observations from the array of infrared sensors are noisy and sparse, which adds uncertainty to the implementation of the SLAM algorithm. To compensate for the high uncertainty in the robot's observations, a neural network was integrated with the grid-based SLAM algorithm. The results show that the grid-based SLAM algorithm with the neural network has better accuracy than the grid-based SLAM algorithm without it for the aforementioned mobile robot implementation. The algorithm improves map accuracy by 21% and significantly reduces the robot's state-estimate error. The better performance is due to the improved accuracy of the grid cells' occupancy values, which affects the importance-weight computation in the RBPF algorithm and hence yields better map accuracy and robot state estimates. This finding shows that a promising grid-based SLAM algorithm can be obtained using merely an array of infrared sensors as the robot's observation source.
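
The occupancy values mentioned in this abstract are conventionally maintained per grid cell in log-odds form. The sketch below shows the standard log-odds update for a single cell under a generic inverse sensor model; it is textbook grid mapping, not the authors' neural-network-corrected variant, and the function names are illustrative.

```python
import math

def logit(p):
    """Log-odds of a probability p in (0, 1)."""
    return math.log(p / (1.0 - p))

def update_cell(l_prev, p_meas, l_prior=0.0):
    """Fuse one (noisy) range observation into a cell's log-odds value.

    l_prev  -- cell's log-odds before the measurement
    p_meas  -- inverse sensor model output: P(occupied | observation)
    l_prior -- log-odds of the occupancy prior (0.0 corresponds to P = 0.5)
    """
    return l_prev + logit(p_meas) - l_prior

def occupancy(l):
    """Recover P(occupied) from a log-odds value."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))
```

With sparse, noisy infrared readings, `p_meas` stays close to 0.5 and many observations are needed before a cell's occupancy converges, which is where a learned correction of the occupancy values can help the RBPF importance weights.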

Author 1: Norhidayah Mohamad Yatim
Author 2: Amirul Jamaludin
Author 3: Zarina Mohd Noh
Author 4: Norlida Buniyamin

Keywords: Simultaneous localization and mapping (SLAM); occupancy grid map; neural network; Rao-Blackwellized Particle Filter; infrared sensor

PDF

Paper 98: Multi-Scale ConvLSTM Attention-Based Brain Tumor Segmentation

Abstract: In computer vision, various machine learning algorithms have proven to be very effective. Convolutional Neural Networks (CNNs) are a kind of deep learning algorithm that has become widely used in image processing, with a remarkable success rate compared to conventional machine learning algorithms. CNNs are used in many computer vision fields, especially in the medical domain. In this study, we perform semantic brain tumor segmentation using a novel deep learning architecture we call the multi-scale ConvLSTM Attention Neural Network, which combines Convolutional Long Short-Term Memory (ConvLSTM) and attention units with multiple feature extraction blocks such as the Inception, Squeeze-Excitation, and Residual Network blocks. Each of these blocks is known to boost model performance when used separately; in our case, we show that their combination also has a beneficial effect on accuracy. Experimental results show that our model performs brain tumor segmentation effectively compared to the standard U-Net, Attention U-Net, and Fully Connected Network (FCN), with a 79.78 Dice score for our method compared to 78.61, 73.65, and 72.89 for Attention U-Net, standard U-Net, and FCN, respectively.
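
The Dice scores reported in this abstract are the standard overlap metric for segmentation masks. A minimal sketch of how such a score is computed from binary masks (a generic definition, not the authors' evaluation code):

```python
def dice_score(pred, truth):
    """Dice coefficient between two binary masks given as flat 0/1 sequences:
    2|A ∩ B| / (|A| + |B|)."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: conventionally treated as perfect overlap
    return 2.0 * intersection / total
```

A score of 79.78 as quoted in the abstract corresponds to a Dice coefficient of about 0.80 expressed as a percentage.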

Author 1: Brahim AIT SKOURT
Author 2: Aicha MAJDA
Author 3: Nikola S. Nikolov
Author 4: Ahlame BEGDOURI

Keywords: Convolutional neural networks; image processing; semantic brain tumor segmentation; convolutional long short term memory; inception; squeeze-excitation; residual-network; attention units

PDF

Paper 99: NGram Approach for Semantic Similarity on Arabic Short Text

Abstract: Measuring the semantic similarity between words requires a method that can simulate human thought. The use of computers to quantify and compare semantic similarities has become an important research area in various fields, including artificial intelligence, knowledge management, information retrieval, and natural language processing. Computational semantics require efficient measures for computing concept similarity, which still need to be developed. Several computational measures quantify semantic similarity based on knowledge resources such as the WordNet taxonomy. Several measures based on taxonomical parameters have been applied to optimize the expression for content semantics. This paper presents a new similarity measure for quantifying the semantic similarity between concepts, words, sentences, short text, and long text based on NGram features and synonyms of NGrams related to the same domain. The proposed algorithm was tested on 700 tweets, and the semantic similarity values were compared with cosine similarity on the same dataset. The results were analyzed manually by a domain expert, who concluded that the values provided by the proposed algorithm were better than the cosine similarity values within the selected domain regarding the semantic similarity between the datasets’ short texts.
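
The cosine-similarity baseline this abstract compares against can be sketched over word-level n-gram count vectors as follows. This is a generic illustration of the baseline, not the paper's proposed synonym-aware measure, and the function names are hypothetical.

```python
import math
from collections import Counter

def ngrams(text, n=2):
    """Word-level n-grams of a whitespace-tokenized string."""
    tokens = text.split()
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def cosine_similarity(a, b, n=2):
    """Cosine similarity between the n-gram count vectors of two short texts."""
    va, vb = Counter(ngrams(a, n)), Counter(ngrams(b, n))
    dot = sum(va[g] * vb[g] for g in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0
```

Because this baseline matches n-grams only by surface form, two texts using different but synonymous words score zero overlap, which is the gap the paper's synonym-of-NGram extension targets.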

Author 1: Rana Husni Al-Mahmoud
Author 2: Ahmad Sharieh

Keywords: Arabic text; Ngram; semantic sentences similarity; short text; ALMaany; natural language; semantic similarity of words; corpus-based measures

PDF

Paper 100: An Efficient Meta-Heuristic-Feature Fusion Model using Deep Neuro-Fuzzy Classifier

Abstract: Diabetic Retinopathy (DR) is a major cause of vision loss among adults worldwide. DR patients generally have no symptoms until they reach the final stage. The categorization of retinal images is a remarkable application for detecting DR. Due to the level of sugar in the blood, categorizing DR severity and determining the grading level of the damage caused to the retina is complicated. To address these challenges, a new DR severity classification model is proposed for detecting and treating DR. The main objective of the proposed model is to classify the severity grades occurring in the retinal region of the human eye. Initially, the gathered retinal images are enhanced, and blood vessel segmentation is performed using optic disc removal and an active contouring model. Abnormalities such as microaneurysms, hemorrhages, and exudates are segmented using Fuzzy C-Means (FCM) clustering and adaptive thresholding. The enhanced images are fed to VGG16 and ResNet to obtain the first feature set, F1; the segmented images are likewise fed to VGG16 and ResNet, and the two resulting feature sets are added to obtain the second feature set, F2. In the feature concatenation phase, the two feature sets are fused with the aid of a weight parameter optimized by the Modified Mating Probability-based Water Strider Algorithm (MMP-WSA). Finally, multi-class severity classification is performed using the Optimized Deep Neuro-Fuzzy Classifier (ODNFC), whose hyper-parameters are also optimized by the proposed MMP-WSA. The experimental results of the proposed model show precise segmentation of the abnormalities and better classification results regarding the grade level.
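
The weighted feature fusion described in this abstract can be sketched in its simplest form as an element-wise weighted combination of two backbone feature vectors. This is a minimal illustration only; the weight `alpha` stands in for the parameter the paper tunes with MMP-WSA, and the function name is hypothetical.

```python
# Minimal sketch of weighted feature fusion: two feature vectors (e.g. from
# VGG16 and ResNet backbones) combined with a single fusion weight. The
# paper's actual fusion expression and weight optimization are not reproduced.
def fuse_features(f1, f2, alpha=0.5):
    """Element-wise weighted sum of two equal-length feature vectors."""
    if len(f1) != len(f2):
        raise ValueError("feature vectors must have equal length")
    return [alpha * a + (1.0 - alpha) * b for a, b in zip(f1, f2)]
```

In the paper's pipeline, `alpha` would be replaced by the weight parameter produced by the MMP-WSA optimizer rather than fixed by hand.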

Author 1: Sri Laxmi Kuna
Author 2: A. V. Krishna Prasad

Keywords: Multi-class severity classification; diabetic retinopathy; modified mating probability-based water strider algorithm; optimized deep neuro-fuzzy classifier; fuzzy clustering model; adaptive thresholding; optic disc removal; image enhancement

PDF

Paper 101: A Comprehensive Insight into Blockchain Technology: Past Development, Present Impact and Future Considerations

Abstract: Blockchain technology is based on the idea of a distributed, consensus ledger, which it employs to create a secure, immutable data storage and management system. It is a publicly accessible and collectively managed ledger enabling unprecedented levels of trust and transparency between business and individual collaborations. It has both robust cryptographic security and a transparent design. The immutability feature of blockchain data has the potential to transform numerous industries. People have begun to view blockchain as a revolutionary technology capable of identifying "The Best Possible Solution" in various real-world scenarios. This paper provides a comprehensive insight into blockchains, fostering an objectual understanding of this cutting-edge technology by focusing on the theoretical fundamentals, operating principles, evolution, architecture, taxonomy, and diverse application-based manifestations. It investigates the need for decentralisation, smart contracts, permissioned and permissionless consensus mechanisms, and numerous blockchain development frameworks, tools, and platforms. Furthermore, the paper presents a novel compendium of existing and emerging blockchain technologies by examining the most recent advancements and challenges in blockchain-enabled solutions for a variety of application domains. This survey bridges multiple domains and blockchain technology, discussing how embracing blockchain technology is reshaping society's most important sectors. Finally, the paper delves into potential future blockchain ecosystems providing a clear picture of open research challenges and opportunities for academics, researchers, and companies with a strong fundamental and technical grounding.

Author 1: Farhat Anwar
Author 2: Burhan Ul Islam Khan
Author 3: Miss Laiha Mat Kiah
Author 4: Nor Aniza Abdullah
Author 5: Khang Wen Goh

Keywords: Blockchain; blockchain applications; consensus algorithms; distributed ledger; smart contract

PDF

Paper 102: Factors Influencing the Acceptance of Online Mobile Auctions using User-Centered Agile Software Development: An Early Technology Acceptance Model

Abstract: e-Commerce is booming everywhere, and Saudi Arabia is no exception. However, the adoption and prevalence of online mobile auctions (aka m-auctions) remain unsatisfactory in Saudi Arabia and the MENA region. This paper uncovers the enabling factors and hindering barriers affecting the use of mobile auctions by online consumers. To this end, a multiphase mixed-methods design is applied to acquire an in-depth understanding of the online mobile bidding and auctioning attitudes and practices of Saudi auctioneers and bidders. Initially, an interactive mobile auction app was developed by applying the principles of the user-centered agile software development (UCASD) methodology, which incorporated several design iterations based on feedback from 454 real users. The mobile auction requirements were collected using a mix of research methods, including a survey, focus groups, prototyping, and user testing. The UCASD methodology positively influenced the early evidence-based adoption and use of mobile auctions in the Saudi market. Subsequently, three consecutive focus groups were conducted with another 22 participants to elicit further insights into the antecedents impacting the intention to embrace online auctions using mobile phones. A taxonomy of requirements coupled with a thematic analysis of the discussions gave rise to 13 influential factors of mobile auctions, namely risk, quality of products, trust, ubiquity, usefulness, access to valuable products, ease of use, age, social influence, monetary costs, enjoyment, past experience, and facilitating conditions. Our inductive approach resulted in an early technology acceptance model of mobile auctions. We conclude by reflecting on the challenges observed and suggest some practical guidelines to pave the way for other researchers in this promising area to carry out experimental studies that refine the proposed model.

Author 1: Abdallah Namoun
Author 2: Ahmed Alrehaili
Author 3: Ali Tufail
Author 4: Aseel Natour
Author 5: Yaman Husari
Author 6: Mohammed A. Al-Sharafi
Author 7: Albaraa M. Alsaadi
Author 8: Hani Almoamari

Keywords: Online auction; mobile auction; technology acceptance model; eBay; human-centered design; agile software development; factors

PDF

Paper 103: Towards a Blockchain-based Medical Test Results Management System: A Case Study in Vietnam

Abstract: The role of the testing process in the diagnosis and treatment of patients’ diseases in medical facilities today cannot be denied. The results of this process help doctors and nurses in medical centers make preliminary and detailed assessments of symptoms and provide a specific course of treatment for their patients. In addition, these results are stored in a patient’s medical record, which serves as a reference for subsequent therapies. However, both paper-based and electronic storage of this information face difficulties. Especially in developing countries such as Vietnam, this process encounters major obstacles at health centers in rural areas. Many centralized and decentralized storage methods have been proposed to solve this problem. Moreover, the currently popular patient-centered approach (in which all information sharing is decided by the patient) can address these problems and has been adopted by many research directions. However, these methods require users (i.e., patients) to have a background in security and privacy, as well as cutting-edge technologies installed on their phones. This is extremely difficult to apply in rural areas of developing countries, where people are not yet conscious of protecting their personal information. This paper proposes a mechanism for storing and managing patients’ test results at medical centers based on blockchain technology, applicable to developing countries. We build a proof-of-concept on the Hyperledger Fabric platform and exploit Hyperledger Caliper to evaluate a variety of scenarios related to system performance (i.e., create, query, and update).

Author 1: Phuc Nguyen Trong
Author 2: Hong Khanh Vo
Author 3: Luong Hoang Huong
Author 4: Khiem Huynh Gia
Author 5: Khoa Tran Dang
Author 6: Hieu Le Van
Author 7: Nghia Huynh Huu
Author 8: Tran Nguyen Huyen
Author 9: Loc Van Cao Phu
Author 10: Duy Nguyen Truong Quoc
Author 11: Bang Le Khanh
Author 12: Kiet Le Tuan

Keywords: Blockchain-based system; hyperledger fabric; medical test results; medical institutions in developing countries

PDF

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org