The Science and Information (SAI) Organization

IJACSA Volume 14 Issue 1

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

View Full Issue

Paper 1: Eye-tracking Analysis: College Website Visual Impact on Emotional Responses Reflected on Subconscious Preferences

Abstract: This study examined students' behavior on a college website and the information they were able to obtain from it. Using an eye-tracking sensor, it investigates the effectiveness, satisfaction, and efficiency of university websites and collects data regarding users' visual impact. The research was carried out with mobile-phone neuromarketing tools: eye-tracking, facial coding, and a supplementary short-memory post-survey. Two web pages were studied, the homepage and the CARE page, and the analysis results for the two were compared and discussed. The results suggest that participants mostly elicited sadness (29.55%), neutrality (33.19%), and puzzlement (13.60%) while browsing the homepage, regardless of the areas of interest (AOI). They also elicited slight disgust (4.33%), fear (3.51%), joy (5.21%), and surprise (29.55%). The heat map for the CARE page reveals that its top section was a point of attraction for participants. The study found that participants' negative feelings were more intense than positive ones when scrolling the homepage. Their pleasant-mood intensity increased moderately when they looked at regions containing only photos in a subdued color scheme, or where brighter colors emphasized essential textual information such as upcoming events and student blogs. This indicates that the website's complexity adds to cognitive load, so making the site more accessible would benefit students. Based on the students' responses, changes to the page's design, color, and text could be implemented.

Author 1: Hedda Martina Šola
Author 2: Fayyaz Hussain Qureshi
Author 3: Sarwar Khawaja

Keywords: Neuromarketing; eye-tracking; student behavior; college website analyses; mood intensities; visual impact; website conversions

Download PDF

Paper 2: Improving MapReduce Speculative Executions with Global Snapshots

Abstract: Hadoop’s MapReduce implementation has been widely employed for distributed storage and computation. Although efficient at parallelizing large-scale data processing, the challenge of handling poor-performing jobs persists. Hadoop does not fix straggler tasks; instead, it launches equivalent tasks (called backup tasks), a process known as speculative execution. Current speculative execution approaches face challenges such as incorrect estimation of task run times, high consumption of system resources, and inappropriate selection of backup tasks. In this paper, we propose a new speculative execution approach that determines task run times with consistent global snapshots and K-Means clustering. Task run times are captured during data processing, and two categories of tasks (fast and stragglers) are detected with K-Means clustering. A silhouette score is applied as a decision tool to determine when to process backup tasks and to prevent extra iterations of K-Means, which reduces the overhead incurred in applying our approach. We evaluated the approach on different data centre configurations with two objectives: i) the overheads caused by implementing our approach and ii) job performance improvements. Our results showed that i) the overheads become more negligible as data centre sizes increase, falling by 1.9%, 1.5%, and 1.3% (comparatively) as the size of the data centre and the task run times increased, and ii) longer mapper task runs have better chances of improvement, regardless of the number of straggler tasks. The curves for the longer mappers stayed below 10% relative to the disruptions introduced, showing that the effects of the disruptions were reduced and became more negligible while job performance improved further.

Author 1: Ebenezer Komla Gavua
Author 2: Gabor Kecskemeti

Keywords: MapReduce; Hadoop; speculative executions; stragglers; consistent global snapshots; K-means algorithm

Download PDF
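The straggler-detection step the Paper 2 abstract describes, clustering task run times into fast and straggler groups with K-Means and using a silhouette score as the decision tool, can be sketched as follows. This is a minimal illustration with hypothetical run times and a hypothetical silhouette threshold, not the authors' implementation:

```python
# Sketch: K-Means straggler detection with a silhouette-score decision tool.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical task run times in seconds; the last three look like stragglers.
run_times = np.array([12, 14, 13, 15, 11, 48, 52, 50], dtype=float).reshape(-1, 1)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(run_times)
score = silhouette_score(run_times, km.labels_)

# Only schedule backup tasks when the two clusters are well separated
# (0.7 is an assumed threshold, not one taken from the paper).
if score > 0.7:
    straggler_cluster = int(np.argmax(km.cluster_centers_))
    stragglers = np.where(km.labels_ == straggler_cluster)[0]
    print("straggler task indices:", stragglers.tolist())
```

Checking cluster separation before acting is what lets the approach skip backup tasks (and extra K-Means iterations) when run times are homogeneous.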

Paper 3: Recognizing Safe Drinking Water and Predicting Water Quality Index using Machine Learning Framework

Abstract: Water quality monitoring, analysis, and prediction have emerged as important challenges in the many uses of water in our lives. Recent water quality problems have raised the need for artificial intelligence (AI) models for analyzing water quality, classifying water samples, and predicting the water quality index (WQI). In this paper, a machine learning framework is proposed for classifying drinking water samples (safe/unsafe) and predicting the WQI. The classification tier of the proposed framework consists of nine machine learning models, which have been applied, tested, validated, and compared for classifying drinking water samples into two classes (safe/unsafe) on a benchmark dataset. The regression tier consists of six regression models applied to the same dataset for predicting the WQI. The experimental results showed good classification performance for the nine models, with an average accuracy of 94.7%. However, the results showed the superiority of the Random Forest (RF) and Light Gradient Boosting Machine (LightGBM) models in recognizing safe drinking water samples with respect to training and testing accuracy compared with the other models in the framework. Moreover, the regression analysis proved the superiority of the LightGBM regression and Extra Trees regression models in predicting the WQI, with training and testing accuracy of 0.99% and 0.95%, respectively. The mean absolute error (MAE) results also showed that the same models achieved an error rate 10% lower than the other applied regression models. These findings have significant implications for understanding how novel deep learning models can be developed for predicting water quality, which is also suitable for other environmental and industrial purposes.

Author 1: Mohamed Torky
Author 2: Ali Bakhiet
Author 3: Mohamed Bakrey
Author 4: Ahmed Adel Ismail
Author 5: Ahmed I. B. EL Seddawy

Keywords: Water quality; artificial intelligence; machine learning; deep learning; classification analysis; regression analysis

Download PDF
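The classification tier the Paper 3 abstract describes can be illustrated with one of its best-performing model families, a Random Forest, on synthetic data. The features, the safe/unsafe rule, and all numbers below are hypothetical stand-ins, not the paper's benchmark dataset or results:

```python
# Sketch: a Random Forest safe/unsafe water classifier on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 400
# Hypothetical standardized measurements (e.g. pH-like, hardness-like, turbidity-like).
X = rng.normal(size=(n, 3))
# Hypothetical labeling rule: "safe" when the pH-like value is moderate
# and the turbidity-like value is not too high.
y = ((np.abs(X[:, 0]) < 1.2) & (X[:, 2] < 1.0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```

The same train/test pattern extends to the paper's regression tier by swapping in a regressor and a continuous WQI target.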

Paper 4: Leaf-based Classification of Important Indigenous Tree Species by Different Feature Extraction Techniques and Selected Classification Algorithms

Abstract: k-Nearest Neighbor (KNN), Support Vector Machine (SVM), Back-Propagation (BP) networks, and Convolutional Neural Networks (CNN) are four of the most widely used machine learning classifiers, and different application domains require different sets of input features. In this paper, a set of significant leaf features and a classification model were determined that achieve high accuracy in classifying important indigenous tree species. Leaf images were acquired with a scanner to control image quality, and the image dataset was duplicated into two sets. The first set was labeled with the correct classes, preprocessed, and segmented in preparation for feature extraction; the extracted features were leaf shape, leaf color, and leaf texture. Training and classification were then performed with KNN, SVM, and BP networks. The second set was left unlabeled for training and classification by CNN, and a CNN model was built and chosen for the best training and validation accuracy and the lowest training and validation loss. The study concluded that using all three leaf features for classification by BP networks, trained with supervised learning, resulted in 93.48% accuracy. However, the CNN achieved a higher accuracy of 98.5%, making it the best approach for classifying tree species from digital leaf images in the context of this study.

Author 1: Eugene Val D. Mangaoang
Author 2: Jaime M. Samaniego

Keywords: Machine learning; feature extraction; convolutional neural network; leaf classification

Download PDF

Paper 5: A Novel Deep-learning based Approach for Automatic Diacritization of Arabic Poems using Sequence-to-Sequence Model

Abstract: Over the last 10 years, the Arabic language has attracted researchers in the area of Natural Language Processing (NLP), and many research papers have emerged whose main focus is the processing of Arabic and its dialects. Arabic language processing has been given its own name, ANLP (Arabic Natural Language Processing), and ANLP work covering almost all NLP applications can be found in the literature. Many researchers have also been drawn to Arabic linguistic knowledge, with work ranging from basic language analysis to semantic-level analysis. Arabic text semantic analysis, however, cannot be performed without considering diacritization, which can greatly affect meaning. Many Arabic texts are written without diacritics, and diacritizing them manually is a very tiresome process that may require an expert. Automatic diacritization systems have therefore become a prerequisite for processing Arabic text in any ANLP application, since diacritization is essential for readable and understandable Arabic. For this reason, many researchers have recently worked on building systems and tools that automatically diacritize undiacritized Arabic texts. This work presents a novel deep learning-based sequence-to-sequence model to diacritize undiacritized Arabic poems. The proposed model was tested and achieved a high diacritization accuracy.

Author 1: Mohamed S. Mahmoud
Author 2: Nermin Negied

Keywords: Text diacritization; deep learning; sequence-to-sequence; regex; tokenization; ANLP

Download PDF

Paper 6: An Investigation of Cybersecurity Issues of Remote Work during the COVID-19 Pandemic in Saudi Arabia

Abstract: The COVID-19 pandemic has dramatically changed public lifestyles as well as daily work activities across the world. This has led both public and private sectors to adapt by shifting to remote work and adopting new technologies and online services in order to sustain their businesses while saving lives. Unfortunately, a considerable number of these endeavors were undertaken unwarily and in a hurry, without due diligence on all relevant aspects, including cybersecurity and privacy. This survey explores the state of practice during the COVID-19 pandemic lockdown and the attendant challenges of using and publishing online services in Saudi Arabia. It also investigates the need for investment in cybersecurity, which would increase the trust in and reliability of such services, encouraging organizations to move confidently towards genuine digital transformation.

Author 1: Gaseb N Alotibi
Author 2: Abdulwahid Al Abdulwahid

Keywords: Cybersecurity issues; investigative survey; remote work; COVID-19 Pandemic; Saudi Arabia

Download PDF

Paper 7: Patent Text Classification based on Deep Learning and Vocabulary Network

Abstract: Patent documents are a special long-text format, and traditional deep learning methods have insufficient feature extraction ability for them, resulting in a weaker classification effect than on ordinary text. This paper therefore constructs a text feature extraction method based on a vocabulary network that captures the inner relationship between words and classes. First, the inner relationship between words and classes is obtained along linear and probabilistic dimensions, and the vocabulary network is constructed. Second, the vocabulary network is fused with the features extracted by the deep learning model. Finally, the fused features are trained in the original model to obtain the final classification result. The method is a classification-enhancement technique that can classify patent text on its own or raise the accuracy of various types of neural networks on patent text classification. Experimental results demonstrate that the accuracy of BERT combined with the vocabulary network method reaches 82.73%, and the vocabulary network method increases the accuracy of CNN and LSTM by 2.19% and 2.25%, respectively. The vocabulary network feature extraction was also shown to accelerate model convergence during training and improve classification of Chinese patent texts.

Author 1: Ran Li
Author 2: Wangke Yu
Author 3: Qianliang Huang
Author 4: Yuying Liu

Keywords: Text classification; deep learning; network vocabulary; patent; feature extraction

Download PDF

Paper 8: IoT Technology for Intelligent Management of Energy, Equipment and Security in Smart House

Abstract: The Internet of Things (IoT) means that many of the devices humans use daily share their functions and information with each other, or with humans, by connecting to the Internet. Its most important ingredient is the integration of several technologies and communication solutions: identification and tracking technologies, wired and wireless sensor and actuator networks, and protocols that increase the communication capability and intelligence of objects. This article attempts to determine which IoT concepts and technologies, in the form of web-based programs, can be used to make a house smart. Since investigating the effect of every IoT technology on smart homes would be very time-consuming, after studying various lines of research the web-based IoT program is selected as the independent variable and its effect on smart home management is investigated. For this purpose, a web-based IoT program for intelligent building energy management, intelligent equipment management, and intelligent security has been designed and implemented. Experimental results show that the proposed method achieves better results than existing methods, reducing energy consumption by 33.8%.

Author 1: Fangmin Yuan
Author 2: Yan Zhang
Author 3: Junchao Zhang

Keywords: Internet of things technology; smart homes; intelligent energy management; fuzzy logic

Download PDF

Paper 9: The Effect of Augmented Reality Mobile Application on Visitor Impact Mediated by Rational Hedonism: Evidence from Subak Museum

Abstract: This study expands our comprehension of museum visitor impact in terms of system quality, information quality, and augmented reality (AR) media content quality in mobile applications. Museums face the challenge of their visitors' escalating expectations, fostered by modern technologies such as AR on mobile apps; with mobile phones now universal, AR has emerged as the latest technology museums can offer to increase visitor numbers. Through an online survey of 241 visitors, the study identifies the constructs affecting visitor impact within museums' mobile apps and the consequent results of AR-linked visitor impact. It proposes a new set of AR features, namely system quality, information quality, and AR media content quality, and establishes their influence on rational hedonism and experienced satisfaction, which in turn enhance visitor impact. The findings show that rational hedonism and experienced satisfaction fully mediate the relationship between system quality and information quality on one hand and visitor impact on the other, while only partially mediating the indirect relationship between AR media content quality and visitor impact. Moreover, the results affirm that AR media content quality within the mobile application is the most critical construct for directly enhancing visitor impact, whereas system quality and information quality have no direct influence. From a practical point of view, AR technology can help museums entice new visitors and generate more income.

Author 1: Ketut Agustini
Author 2: Dessy Seri Wahyuni
Author 3: I Nengah Eka Mertayasa
Author 4: Ni Made Ratminingsih
Author 5: Gede Ariadi

Keywords: System quality; information quality; augmented reality media content quality; rational hedonism; satisfaction experienced

Download PDF

Paper 10: Developing a Computer Simulation to Study the Behavior of Factors Affecting the Flooding of the Gash River

Abstract: In recent years, the city of Kassala has suffered frequent flooding disasters from the Gash River. The river is the city's lifeblood, but its frequent floods have made it a life-threatening nightmare. The importance of this research lies in its being one of the few attempts to discuss and study the causes and effects of the Gash River floods. It aims to identify the factors affecting the river's flooding and proposes an algorithm to simulate floods by randomly generating the different factors that effectively influence them. Descriptive, inductive, and deductive analytical approaches to desk research were used, employing a primary statistical method of observation and evaluation that relies on primary and secondary information to support scientific, practical, and objective conclusions. The research produced significant results concerning the problems that the frequent floods of the Gash River pose to the town of Kassala. The results proved that there is deviation and discrepancy in flood rates over the year, a negative indication, and that deposited quantities vary in different proportions from one period to another, posing a significant future threat. The research suggests further solutions to help reduce the problems and their effects, and proposes various recommendations that can serve as the basis for future studies to reach the required solutions and goals.

Author 1: Abdalilah G. I. Alhalangy

Keywords: Gash River; flood simulation; factors influencing flooding

Download PDF

Paper 11: Pinpointing Factors in the Success of Integrated Information System Toward Open Government Data Initiative: A Perspective from Employees

Abstract: As the supervisory institution for statistics, Badan Pusat Statistik (BPS) launched an integrated information system (IS) to exercise the Open Government Data (OGD) initiative and to implement the One Data Policy Act. Although challenges have arisen, BPS manages to provide more than 120 thousand publicly accessible datasets. With the success of OGD, many scholars have examined similar issues from the perspective of users/citizens; however, the employees' perspective remains substantial, as employees are the OGD providers. This research draws on employees' views to pinpoint the factors influencing the success of OGD adoption through an IS. The authors seek to understand these factors from both the IS and acceptance angles, integrating the Information System Success Model (ISSM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) as the measurement model. The study administered a cross-sectional questionnaire with close-ended questions to 253 IS users in BPS. Using structural equation modelling (SEM), the authors find that all ISSM constructs influence the success of the IS, while only one UTAUT construct plays a pivotal role. Information Quality, System Quality, Service Quality, User Satisfaction, and System Use remain paramount to successful implementation, while Performance Expectancy is the sole influencing UTAUT factor. The study therefore offers substantial benefits by aiding other researchers in OGD-related areas and providing in-depth evidence for practitioners implementing IS for OGD initiatives.

Author 1: Wahyu Setiawan Wibowo
Author 2: Ahmad Fadhil
Author 3: Dana Indra Sensuse
Author 4: Sofian Lusa
Author 5: Prasetyo Adi Wibowo Putro
Author 6: Alivia Yulfitri

Keywords: Open data; open government data; OGD; employees’ perspective; ISSM; UTAUT; success factors; acceptance; impact; integrated IS; One Data Policy; BPS; SEM

Download PDF

Paper 12: Autism Spectrum Disorder Detection: Video Games based Facial Expression Diagnosis using Deep Learning

Abstract: In this study, a novel method is proposed for determining whether a child between the ages of 3 and 10 has autism spectrum disorder. Video games can immerse a child in an intense environment, and with the expansion of the gaming industry over the past decade, the availability and customization of games for children has increased dramatically. When children play video games, they display a variety of facial expressions and emotions, and these expressions can aid in the diagnosis of autism: footage of children playing may yield a wealth of information about behavioral patterns, especially autistic behavior. Any video of a child playing a game can be submitted to the interface, which is powered by the algorithm presented in this work. A dataset of 2,536 facial images of autistic and typically developing children was used for this purpose. The accuracy and loss functions are presented to examine the 92.3% accurate predictions generated by the CNN model and deep learning.

Author 1: Morched Derbali
Author 2: Mutasem Jarrah
Author 3: Princy Randhawa

Keywords: Autism in children; machine learning; deep learning; convolution neural network (CNN); video games; prediction

Download PDF

Paper 13: An Improved SVM Method for Movement Recognition of Lower Limbs by MIMU and sEMG

Abstract: To address the need for improved movement recognition accuracy for the lower limbs, an SVM recognition method optimized with a voting mechanism is proposed in this paper. First, the CS algorithm is applied to optimize the kernel function parameter and the penalty factor of the SVM model. Then, a voting mechanism is used to secure the recognition accuracy of the SVM classification algorithm. Finally, experiments were conducted and different classification algorithms compared. The results show that the lower-limb movement recognition accuracy of the optimized SVM algorithm with voting is about 98.78%, higher than other commonly used classification algorithms with or without a voting mechanism. The proposed recognition method for the lower limbs can be used in fields such as rehabilitation training and smart healthcare.

Author 1: Xu Yun
Author 2: Xu Ling
Author 3: Gao Lei
Author 4: Liu Zhanhao
Author 5: Shen Bohan

Keywords: Surface electromyography; micro inertial measurement unit; support vector machine; voting mechanism

Download PDF
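The voting mechanism the Paper 13 abstract describes can be sketched as a majority vote over several SVMs. In this illustration the kernel parameters and penalty factors are hypothetical fixed values (in the paper they would come from the CS optimizer), and the Iris dataset stands in for the MIMU/sEMG movement data:

```python
# Sketch: hard majority voting over three SVMs with different parameters.
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

voter = VotingClassifier(
    estimators=[
        ("svm1", SVC(C=1.0, gamma=0.1)),    # hypothetical (C, gamma) settings;
        ("svm2", SVC(C=10.0, gamma=0.01)),  # the paper tunes these with CS
        ("svm3", SVC(C=100.0, gamma=1.0)),
    ],
    voting="hard",  # each SVM casts one vote; the majority label wins
)
voter.fit(X_tr, y_tr)
print("voted accuracy:", voter.score(X_te, y_te))
```

Hard voting lets a single mis-tuned SVM be outvoted by the other two, which is the accuracy-stabilizing effect the abstract attributes to the mechanism.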

Paper 14: Bidirectional Recurrent Neural Network based on Multi-Kernel Learning Support Vector Machine for Transformer Fault Diagnosis

Abstract: Traditional neural networks have many weaknesses in transformer fault diagnosis, such as failure to mine timing relations, poor generalization in classification, and low classification accuracy on heterogeneous data. To address these issues, this paper proposes a bidirectional recurrent neural network model based on a multi-kernel learning support vector machine. The bidirectional recurrent network performs feature extraction, fusing information from before and after each time step and outputting salient features, and the multi-kernel learning support vector machine then classifies the feature data. Fusing the kernels as a weighted average improves the accuracy of feature-data classification. Numerical simulations analyze the effect of the temporal channel length on the sequential network's diagnostic performance and the effect of multi-kernel learning on the support vector machine's generalization ability and its capacity to handle heterogeneous data, and a transformer fault data classification experiment verifies the correctness and effectiveness of the model. The experimental results show that the diagnosis performance of the bidirectional recurrent network based on a multi-kernel learning support vector machine is better, with the model's prediction accuracy improved by more than 1.78% compared with several commonly used neural networks.

Author 1: Xun Zhao
Author 2: Shuai Chen
Author 3: Ke Gao
Author 4: Lin Luo

Keywords: Multi-kernel learning; support vector machine; bidirectional recurrent neural network; fault diagnosis

Download PDF
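The kernel-fusion idea in the Paper 14 abstract, combining base kernels as a weighted average, can be sketched directly on Gram matrices. The RBF/linear pairing, the weight, and the sample points below are hypothetical; the resulting fused matrix is what a precomputed-kernel SVM would consume:

```python
# Sketch: multi-kernel fusion as a weighted average of two Gram matrices.
import numpy as np

def rbf_kernel(X, gamma=0.5):
    """Gram matrix of the RBF kernel exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def linear_kernel(X):
    """Gram matrix of the linear kernel <x_i, x_j>."""
    return X @ X.T

def fused_kernel(X, w=0.6):
    # w is a hypothetical fusion weight; a convex combination of PSD
    # kernels is itself a valid (PSD) kernel.
    return w * rbf_kernel(X) + (1 - w) * linear_kernel(X)

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
K = fused_kernel(X)
print(K.shape)  # a 3x3 Gram matrix usable by a kernel classifier
```

In practice the fused matrix is passed to an SVM that accepts precomputed kernels, and the weights are tuned alongside the classifier.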

Paper 15: DataOps Lifecycle with a Case Study in Healthcare

Abstract: The DataOps methodology has become a solution to many of the difficulties faced by data science and analytics projects. This research introduces a novel DataOps lifecycle along with a detailed description of each phase; the proposed cycle enhances the implementation of data science and analytics projects for achieving business value. As a proof of concept, the new cycle's phases are applied to a healthcare case study using the UCI Heart Disease dataset, with two goals. First, the dataset is reduced by feature analytics, selecting the four most effective features. Second, different machine learning algorithms are applied to the dataset. The recorded results show that using the four most effective features is comparable to using all thirteen features, with both approaches showing high accuracy and sensitivity. The average accuracy with the top four features is 82.32%, versus 84.28% with all thirteen, meaning the four selected features preserve 97.67% of the full-feature accuracy. The average sensitivity with the top four features is 87.94%, versus 87.12% with all thirteen. The study thus shows the interesting and significant result that modeling on the full feature set need not be done for every data science project, since the dataset can be reduced.

Author 1: Shaimaa Bahaa
Author 2: Atef Z. Ghalwash
Author 3: Hany Harb

Keywords: DataOps lifecycle; DataOps in machine learning; DataOps in healthcare; DataOps in data science; feature extraction; feature selection

Download PDF
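The dataset-reduction step the Paper 15 abstract describes, keeping only the four most effective of thirteen features, can be sketched with a simple correlation-based ranking. The data, target, and ranking criterion below are hypothetical stand-ins, not the paper's feature-analytics pipeline or the Heart Disease dataset:

```python
# Sketch: rank 13 features by |correlation| with the target, keep the top 4.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))  # 13 features, like UCI Heart Disease
# Hypothetical target driven by features 2, 4, 7, and 11 plus small noise.
y = (1.5 * X[:, 2] + X[:, 4] - 2 * X[:, 7] + X[:, 11]
     + rng.normal(scale=0.1, size=200))

corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
top4 = np.argsort(corr)[-4:]  # indices of the four strongest features
print("selected feature indices:", sorted(top4.tolist()))
```

Modeling then proceeds on `X[:, top4]` only, which is the reduction whose cost/accuracy trade-off the case study quantifies.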

Paper 16: Evaluation of e-Service Quality Impacts Customer Satisfaction: One-Gate Integrated Service Application in Indonesian Weather Agency

Abstract: Badan Meteorologi, Klimatologi, dan Geofisika (BMKG) is the weather agency of Indonesia. It operates the One-Gate Integrated Service Application, also known as the Pelayanan Terpadu Satu Pintu (PTSP) BMKG Application, a web-based application built on the e-commerce concept. Its goal is to provide users with information and services related to Meteorology, Climatology, and Geophysics (MCG) using information and communication technology, as part of Indonesia's move toward e-government. Since January 2020, all MCG service and information activities through PTSP BMKG must be conducted through the application. Using a questionnaire and multivariate analysis, this study determines how service quality affects customer satisfaction with the PTSP BMKG application, employing the E-S-Qual scale as a validated measure. The results show that efficiency, fulfillment, system availability, and privacy together affect customer satisfaction positively and significantly. Partially, customer satisfaction with the PTSP BMKG application is affected positively and considerably by how well the application works and how well it meets customers' needs. This has implications for the evaluation that BMKG needs to undertake.

Author 1: Aji Prasetyo
Author 2: Deny Irawan
Author 3: Dana Indra Sensuse
Author 4: Sofian Lusa
Author 5: Prasetyo Adi Wibowo
Author 6: Alivia Yulfitri

Keywords: e-Service quality; one-gate integrated service; e-government; multivariate analysis; Partial Least Square (PLS); Structural Equation Model (SEM)

Download PDF

Paper 17: Investigating the Input Validation Vulnerabilities in C Programs

Abstract: Input validation is a fairly universal programming practice that helps reduce the chances of producing protection-related vulnerabilities in software. In this paper, an experiment is conducted to specifically determine the input validation issues found in programs and the problematic functions that lead to such issues. The experiment evaluated 12 arbitrarily selected open source C projects written by different programmers. The top two most common input validation problems are buffer overflow/XSS and potential memory mismanagement. In addition, the functions that caused the first problem are (a) strings/text functions (e.g., strcpy and strcmp), and (b) functions that read from standard input, STDIN (e.g., scanf and gets). In contrast, the functions that caused the second problem are (a) memory allocation/deallocation functions (e.g., memmove and malloc), and (b) file manipulation functions (e.g., fopen and fseek). Furthermore, the goto construct—to a small extent—plays a role. The recommendations are that (a) developers are encouraged to use memory-safe programming languages, otherwise, they should perform different types of checks for the validity of inputs as soon as they are entered, and (b) they should have the required knowledge of secure source code and use tools/suites to manage malformed strings.

Author 1: Shouki A. Ebad

Keywords: Input validation; buffer overflow; memory mismanagement; safe C functions

Download PDF

Paper 18: An Optimized Method for Polar Code Construction

Abstract: Polar codes are traditionally constructed by calculating the reliability of the channels and then sorting them, at considerable computational cost, to select the most reliable ones. These operations become burdensome as the polar code length N grows. This paper proposes a new low-complexity procedure for polar code construction over binary erasure and additive white Gaussian noise (AWGN) channels. Using the proposed algorithm, the code construction complexity is reduced from O(N log N) to O(N), where N = 2^n (n ≥ 1). The approach stores the classification of channels by reliability in a vector of length L, from which the classification of M channels can be derived for any M ≤ L. The proposed method is consistent with the Bhattacharyya-parameter-based construction and with the Density Evolution with Gaussian Approximation (DEGA) based construction. In this paper, the Successive Cancellation Decoding algorithm (SCDA) is used, thanks to its low complexity and high error-correction capability.

Author 1: Issame El Kaime
Author 2: Reda Benkhouya
Author 3: Abdessalam Ait Madi
Author 4: Hassane Erguig

Keywords: Polar codes; SNR; successive cancellation decoding; error correction; low-complexity; code construction; additive white Gaussian noise; Bhattacharya parameter; density evolution with Gaussian approximation

Download PDF

Paper 19: Current Multi-factor of Authentication: Approaches, Requirements, Attacks and Challenges

Abstract: Nowadays, local and remote access to services on the internet is emerging rapidly and broadly, so authentication represents an important security control requirement, and multi-factor authentication (MFA) is recommended to mitigate the weaknesses of single-factor authentication (SFA). MFA techniques can be classified into two main approaches: biometric-based and non-biometric. However, maintaining the tradeoff between security and accuracy remains a problem. The reviewed studies on the two authentication mechanisms pull in opposite directions: research on biometric-based authentication tends to increase recognition accuracy, while the other line of research combines multiple authentication factors to add security layers. The main contribution of this survey is to review and spotlight the current state of the art in both authentication mechanisms for achieving a secure user identity. This paper provides a review of authentication protocols and security requirements, a detailed review and comparison of secure one-time passcode generation and distribution, and a comprehensive review of cancelable biometrics techniques, attacks, and requirements. Finally, it summarizes key challenges and future research directions.

Author 1: Ali Hameed Yassir Mohammed
Author 2: Rudzidatul Akmam Dziyauddin
Author 3: Liza Abdul Latiff

Keywords: MFA; authentication; OTP; cancelable; biometrics; security; identity

Download PDF
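One-time passcode (OTP) generation, one of the mechanisms reviewed in this survey, is standardized in RFC 4226 (HOTP) and RFC 6238 (TOTP); a minimal standard-library sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time passcode, the building block of TOTP."""
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """RFC 6238 time-based variant: the counter is the current time step."""
    return hotp(secret, int(time.time()) // step)
```

The server and the user's device share the secret and compute the same passcode independently; distribution then reduces to comparing short-lived values.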

Paper 20: Arabic Dialogue Processing and Act Classification using Support Vector Machine

Abstract: Text classification is the technique of grouping documents into classes according to their content. As a result of the vast amount of textual material available online, this procedure is becoming increasingly crucial. The primary challenge in text categorization is enhancing classification accuracy, a task receiving more attention due to its importance in the development of such systems and in the categorization of Arabic dialogue. This research defines dialogue processing and concentrates on classifying the words used in dialogue acts, which include greeting, farewell, thanking, confirming, and apologizing; the words are used without context. The proposed approach recovers the properties of function words by replacing collocations with standard number tokens and each substantive keyword with a numerical approximation token. The classification method for this study uses the linear support vector machine (SVM) technique: the act is classified with a linear SVM, and the resulting accuracy is evaluated against that of alternative algorithms. This study encompasses Arabic dialogue act corpora, annotation schema, and classification problems, and describes the outcomes of contemporary approaches to classifying Arabic dialogue acts. A custom database in the domains of banking, chat, and airline tickets is used to assess the effectiveness of the suggested solutions. The linear SVM approach produced the best results.

Author 1: Abraheem Mohammed Sulayman Alsubayhay
Author 2: Md Sah Hj Salam
Author 3: Farhan Bin Mohamed

Keywords: Dialogue processing; Act; Arabic language; linear support vector machine; without cue

Download PDF
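As a rough illustration of linear-SVM dialogue act classification, a bag-of-words model can separate two acts. The transliterated examples and the Pegasos-style trainer below are hypothetical stand-ins, not the authors' corpus or training setup:

```python
# Toy transliterated dialogue-act examples (hypothetical): greeting vs. thanks.
DATA = [
    ("marhaba ahlan", "greeting"),
    ("ahlan wa sahlan", "greeting"),
    ("shukran jazilan", "thanks"),
    ("shukran lak", "thanks"),
]

def featurize(text, vocab):
    """Bag-of-words count vector over a fixed vocabulary."""
    v = [0.0] * len(vocab)
    for w in text.split():
        if w in vocab:
            v[vocab[w]] += 1.0
    return v

def train_linear_svm(data, epochs=50, lam=0.01):
    """Pegasos-style sub-gradient descent on the hinge loss: a plain
    linear SVM trainer (greeting = +1, thanks = -1)."""
    vocab = {w: i for i, w in
             enumerate(sorted({w for t, _ in data for w in t.split()}))}
    w = [0.0] * len(vocab)
    t = 0
    for _ in range(epochs):
        for text, label in data:
            t += 1
            y = 1.0 if label == "greeting" else -1.0
            x = featurize(text, vocab)
            eta = 1.0 / (lam * t)
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            w = [(1 - eta * lam) * wi for wi in w]      # regularization step
            if margin < 1:                              # hinge sub-gradient
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w, vocab

def predict(w, vocab, text):
    s = sum(wi * xi for wi, xi in zip(w, featurize(text, vocab)))
    return "greeting" if s >= 0 else "thanks"
```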

Paper 21: Tree-based Machine Learning and Deep Learning in Predicting Investor Intention to Public Private Partnership

Abstract: Public-private partnership (PPP) is a government initiative for accelerating the growth of public infrastructure development. However, the scheme exposes the private sector to various risks, including political risk, which in turn affects the financial performance and reporting of participating firms; indeed, one of the issues facing the government is the lack of private-sector participation in such arrangements. Thus, the main objective of this study is to examine machine learning models for predicting private investors' intention to participate in the PPP program. Tree-based machine learning and deep learning are two different types of promising algorithms that have proven useful across a wide range of prediction problems but have never been tested on the problem considered here. Based on real data on investors in Indonesian listed firms, this paper assesses the selected machine learning algorithms from two points of view. The first assessment concerns the algorithms' performance in producing accurate predictions; the second identifies the variance of the PPP attributes in each prediction model. The performance results show that all the prediction models with the machine learning algorithms and the PPP attributes were well fitted, with R squared above 80%. The findings contribute significant knowledge to scholars in various fields for more in-depth analysis of machine learning methods and investor prediction.

Author 1: Ahmad Amin
Author 2: Rahmawaty
Author 3: Maya Febrianty Lautania
Author 4: Suraya Masrom
Author 5: Rahayu Abdul Rahman

Keywords: Tree-based machine learning; deep learning; prediction; investor intention; public private partnership

Download PDF
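The "well fitted at R squared above 80%" criterion can be made concrete with a minimal sketch: the R² statistic, plus a one-node regression tree (a stump), the simplest member of the tree-based family. All data and threshold values below are illustrative:

```python
def r_squared(y_true, y_pred):
    """Goodness-of-fit: 1 minus (residual sum of squares / total sum
    of squares). 1.0 means a perfect fit."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def stump_predict(x, split, left, right):
    """One-node regression tree: predict one constant on each side of a
    split. Deeper trees and ensembles compose many such splits."""
    return left if x < split else right
```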

Paper 22: VHDL based Design of an Efficient Hearing Aid Filter using an Intelligent Variable-Bandwidth-Filter

Abstract: Filtering techniques have been developed in the hearing aid (HA) field to improve signal clarity and enhance the hearing capacity of deaf people. However, public environments are highly noisy, so filtering these signals is not an easy task. Hence, the present article aims to develop a novel Ant Lion based power Noise-Variable Bandwidth Filter (ALPN-VBF) for HA applications. The proposed optimized, power-efficient filter incorporates several functions, such as de-noising and frequency tuning, based on word features. The signal's noise is removed over the maximum possible range with the help of a high-pass filter (HPF) and a low-pass filter (LPF). Finally, the developed model is tested with a few audiograms, and the filter parameters are analyzed and compared with other models. The testing results prove that the designed filter is better in frequency tuning and signal transmission than previous approaches, attaining less delay and a reduced power consumption rate.

Author 1: Ujjwala S Rawandale
Author 2: Sanjay R. Ganorkar
Author 3: Mahesh T. Kolte

Keywords: Hearing aid system; variable bandwidth filter; audiograms; matching error; power consumption; signal filtering

Download PDF

Paper 23: Performance Evaluation of Photovoltaic Projects in Latin America

Abstract: Photovoltaic solar energy has been booming worldwide due to the scarcity of non-renewable resources; from this arises the need to modernize and innovate the methodologies for using energy resources, as well as the correct installation of such systems at the urban or rural level. In recent decades, Latin America has seen advances in the implementation of photovoltaic projects, which is why this document aims to evaluate their feasibility at the technical and technological level, so that they are in line with the systems implemented in Asia, Europe, and North America. The analysis determined that the main factors affecting the feasibility of a photovoltaic project are economic and technological, in addition to the adverse impacts found on the ecosystem and the local population. In general, however, these weaknesses can be corrected, since several countries are working on strategies to educate their communities, improving the quality of life in sectors with high CO₂ pollution and a lack of fossil fuels.

Author 1: Cristian León-Ospina
Author 2: Heyner Arias-Zarate
Author 3: Cesar Hernandez

Keywords: Evaluation; Latin America; photovoltaic; project

Download PDF

Paper 24: Ransomware: Analysis of Encrypted Files

Abstract: Ransomware is a type of malware that damages a system by encrypting all the files on the computer. To regain access, the victim has to pay a ransom for a key to decrypt the data. When the virus is running on a machine, the user cannot stop it on the first try, so all files may be lost. One goal of this work is to detect ransomware based on encrypted files in real time and to minimize the cost of losing files. We analyze a received file without opening it or viewing its contents; this scanning action can prevent ransomware from spreading in the system. Most ransomware files are sent in the ".exe" format, but in this work we also consider other file formats that can carry malware, for example .doc/.docx, .xls/.xlsx, .ppt/.pptx, .jpg, etc. In fact, an attacker can focus only on the files that contain useful data. In this paper, we identify whether files are suspicious or normal, without opening them, from their headers. To that end, we first analyze each extension separately (.docx, .exe, .pptx, .xlsx, .jpg, etc.) by identifying its header and signature. We then take several files with different extensions and analyze them with a program that detects whether a file is benign or suspicious.

Author 1: Houria MADANI
Author 2: Noura OUERDI
Author 3: Abdelmalek Azizi

Keywords: Ransomware; encrypted files; signature; file format; static analysis

Download PDF
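The header/signature idea can be sketched as a magic-byte check: the file's claimed extension is compared against its actual header bytes without opening the content. The signatures below are the standard ones for these formats; the classification labels are illustrative:

```python
# Known magic bytes (file signatures). A file whose header does not match
# its claimed extension is flagged as suspicious without ever being opened
# by an application.
MAGIC = {
    ".exe":  b"MZ",              # DOS/PE executable
    ".jpg":  b"\xff\xd8\xff",    # JPEG
    ".docx": b"PK\x03\x04",      # OOXML containers (.docx/.xlsx/.pptx)
    ".xlsx": b"PK\x03\x04",      #   are all ZIP archives
    ".pptx": b"PK\x03\x04",
}

def classify(extension: str, header: bytes) -> str:
    """Static check of the first bytes of a file against its extension."""
    expected = MAGIC.get(extension)
    if expected is None:
        return "unknown"
    return "benign-looking" if header.startswith(expected) else "suspicious"
```

For example, a ".docx" file whose header begins with the executable signature "MZ" would be flagged, exactly the mismatch this kind of static analysis is designed to catch.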

Paper 25: MMZ: A Study on the Implementation of Mathematical Game-based Learning Tool

Abstract: Mathematics has always been one of the hardest subjects to learn in school. Students face this issue at every level of education, which is why Math Maze Zone (MMZ) was developed. MMZ is a mathematical game-based learning tool that helps primary school students prepare for their final examination in mathematics. The proposed game consists of mathematical questions, and its design centers on a maze in which the user must find a way out while passing through checkpoints. Each checkpoint poses a mathematical question; when the user answers correctly, they can continue exploring the maze. For now, MMZ covers only one chapter, Chapter 8: Space and Shape. Although the game covers a single chapter, primary school students can play it to enhance their knowledge and become more engaged with mathematics. The results are taken from the perspective of adults who have a close relationship with standard six students. Five main sections are also identified alongside MMZ to ensure good results.

Author 1: Nur Syaheera Binti Sulaiman
Author 2: Hamzah Asyrani Bin Sulaiman
Author 3: Nor Saradatul Akmar Binti Zulkifli
Author 4: Tuty Asmawanty Binti Abdul Kadir

Keywords: Primary school; educational mathematic game; game components

Download PDF

Paper 26: An Improved Poisson Surface Reconstruction Algorithm based on the Boundary Constraints

Abstract: Point cloud surface reconstruction for generating high-precision 3D models has been widely applied in various fields. To address the insufficient accuracy, pseudo-surfaces, and high time cost of traditional surface reconstruction algorithms for point cloud data, this paper proposes an improved Poisson surface reconstruction algorithm based on boundary constraints. For the large, dense point clouds obtained from 3D laser scanning, the proposed method first uses an octree instead of a KD-tree to search the near neighborhood; it then uses Open Multi-Processing (OpenMP) to accelerate normal estimation based on the moving least squares algorithm, while the least-cost spanning tree is employed to enforce consistency of the normal directions; finally, a screened Poisson algorithm with Neumann boundary constraints is proposed to reconstruct the point cloud. Experiments on three open datasets demonstrated that, compared with traditional methods, the proposed method effectively reduces the generation of pseudo-surfaces. Its reconstruction time is about 16% shorter than that of the traditional Poisson reconstruction algorithm, and it produces better reconstruction results in terms of both quantitative analysis and visual comparison.

Author 1: Zhouqi Liu
Author 2: Lei Wang
Author 3: Muhammad Tahir
Author 4: Jin Huang
Author 5: Tianqi Cheng
Author 6: Xinping Guo
Author 7: Yuwei Wang
Author 8: ChunXiang Liu

Keywords: Point cloud; moving least squares; Poisson reconstruction; Neumann boundary constraints

Download PDF

Paper 27: Data Augmentation for Deep Learning Algorithms that Perform Driver Drowsiness Detection

Abstract: Driver drowsiness is one of the main causes of driver-related motor vehicle collisions, as it impairs a person’s concentration whilst driving. With the enhancements of computer vision and deep learning (DL), driver drowsiness detection systems have been developed previously in an attempt to improve road safety, but these systems experienced performance degradation under real-world testing due to factors such as driver movement and poor lighting. This study proposes to improve the training of DL models for driver drowsiness detection by applying data augmentation (DA) techniques that model these real-world scenarios. This paper studies six DL models for driver drowsiness detection: four configurations of a Convolutional Neural Network (CNN), namely two custom configurations and the architectures designed by the Visual Geometry Group (VGG16 and VGG19); a Generative Adversarial Network (GAN); and a Multi-Layer Perceptron (MLP). These DL models were trained using two datasets of eye images, where the state of the eye (open or closed) is used to determine driver drowsiness. The performance of the DL models was measured with respect to accuracy, F1-score, precision, negative class precision, recall, and specificity. When comparing the performance of DL models trained on datasets with and without DA in aggregation, it was found that all metrics improved. After removing outliers from the results, the average improvement in both accuracy and F1-score due to DA was +4.3%. Furthermore, it is shown that the extent to which the DA techniques improve DL model performance is correlated with the inherent model performance: for DL models with accuracy and F1-score ≤ 90%, the results show that the DA techniques studied should improve performance by at least +5%.

Author 1: Ghulam Masudh Mohamed
Author 2: Sulaiman Saleem Patel
Author 3: Nalindren Naicker

Keywords: Data augmentation; deep learning; computer vision; drowsiness detection; road safety

Download PDF
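The real-world scenarios named above (poor lighting, driver movement) can be modeled with simple augmentation transforms. A dependency-free sketch on a 2-D list of pixel intensities; the brightness range and horizontal flip are illustrative choices, not the authors' exact DA techniques:

```python
import random

def augment(image, seed=0):
    """Generate variants of an eye image (2-D list of 0-255 intensities)
    that mimic real-world capture conditions: a dimmed copy simulating
    poor lighting, and a mirrored copy simulating lateral movement."""
    random.seed(seed)
    variants = []
    factor = random.uniform(0.4, 0.8)               # poor-lighting simulation
    variants.append([[min(255, int(p * factor)) for p in row]
                     for row in image])
    variants.append([row[::-1] for row in image])   # mirror/movement simulation
    return variants
```

Training on the original images plus such variants exposes the model to conditions it will meet on the road, which is the mechanism behind the reported accuracy gains.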

Paper 28: An Improved Ant Colony Algorithm for Virtual Resource Scheduling in Cloud Computing

Abstract: To solve the problems of uneven spatial distribution of data nodes and unclear weight relationships among virtual scheduling features in cloud computing platforms, a virtual resource scheduling method based on an improved ant colony algorithm is designed to improve the performance of virtual resource scheduling on the cloud platform. After analyzing changes in the platform's information resource sequence according to the STR-Tree partition graph, a simulated-annealing-based algorithm is employed to classify the resource types after optimal scheduling into IO, middle, and CPU types, and the time span and load balance are set as the measurement indexes. The simulation results show that after applying this method, the occupied resources of the main platform are 535 MB, much lower than with the two comparison algorithms, and the method improves allocation rationality, resource balance, maximum queue length, and energy consumption. This result indicates that the proposed virtual resource scheduling method can effectively improve the intelligent scheduling of virtual resources in the cloud computing platform.

Author 1: Chunlei Zhong
Author 2: Gang Yang

Keywords: Improved ant colony algorithm; cloud computing; virtual resources; intelligent scheduling

Download PDF

Paper 29: Deep Analysis of Risks and Recent Trends Towards Network Intrusion Detection System

Abstract: In the modern world, information security and communications concerns are growing due to increasing attacks and anomalies. Attacks and intrusions in the network may affect various fields such as social welfare, economic matters, and data storage. Intrusion detection (ID) is thus a broad research area, and various methods have emerged over the years; detecting and classifying new attacks among the many known ones remains a complicated task. This review categorizes the security threats and challenges in the network by assessing present ID techniques. The major objective of this study is to review conventional tools and datasets for implementing network intrusion detection systems (NIDS) with open-source malware scanning software. Furthermore, it examines and compares state-of-the-art NIDS approaches with regard to construction, deployment, detection, attack, and validation parameters. This review covers machine learning (ML) based and deep learning (DL) based NIDS techniques and then deliberates on future research into unknown and known attacks.

Author 1: D. Shankar
Author 2: G. Victo Sudha George
Author 3: Janardhana Naidu J N S S
Author 4: P Shyamala Madhuri

Keywords: Network; dataset; communication; intrusion detection system; attacks; deep learning; machine learning

Download PDF

Paper 30: Ọdịgbo Metaheuristic Optimization Algorithm for Computation of Real-Parameters and Engineering Design Optimization

Abstract: This paper proposes a new population-based global optimization algorithm, the Ọdịgbo Metaheuristic Optimization Algorithm (ỌMOA), for solving complex bounded-constraint, single-objective, real-parameter problems found in most engineering and scientific applications. It is inspired by the human socio-cultural informal discipleship learning pattern inherent in the behavior of the Ndịgbo people: the subject, a primary (Nwa-ahịa), grows in the mercantile cycle into a secondary (Mazi) owing to the intuitive stratagem (dialect: Ịgba) embedded in the age-old cultural model “Ịgba-ọsọ-ahịa” (meaning strategic marketing skills and practice). The model mimics the search routine for satisfying a customer’s need in the market, built into the exploration and exploitation applied in the mathematical model. About 30 complex classical unconstrained functions are tested, comparing results with five similar state-of-the-art algorithms. In addition, 29 CEC-2017 single-objective real-constraint benchmark problems of serious dimensionality were simulated and compared against the winners of that competition. Validation includes statistical (t-test, p-value) comparison, and on 50-dimension constraint problems ỌMOA demonstrated superior performance. TCS (9.18%), WBP (6.3%), PVDP (601%), RGP (319%), RBP (760%), GTCD (202%), HIMMELBLAU (4%), and CDP (88.12%) are the improvements made on 8 CEC-2020 engineering real design problems over the former best performances. ỌMOA is simple to implement and replicate and is applicable across domains. New, improved optima were also obtained on the Shubert and Schaffer 4 functions compared to the known global optima.

Author 1: Ikpo C V
Author 2: Akowuah E K
Author 3: Kponyo J J
Author 4: Boateng K O

Keywords: Human socio-cultural; nature-inspired; informal-learning; global optimization

Download PDF
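The exploration/exploitation template that ỌMOA instantiates can be sketched generically. The update rule below is a hypothetical illustration of a population-based search on the sphere function, not the authors' Ịgba-ọsọ-ahịa model:

```python
import random

def sphere(x):
    """Classic unconstrained benchmark: global optimum 0 at the origin."""
    return sum(v * v for v in x)

def population_search(f, dim=2, pop=20, iters=200, seed=1):
    """Generic population-based template: each agent moves toward the best
    solution found so far (exploitation) with random perturbations whose
    scale shrinks over time (exploration). Greedy acceptance keeps only
    improving moves."""
    rng = random.Random(seed)
    agents = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    best = min(agents, key=f)
    for t in range(iters):
        step = 1.0 * (1 - t / iters)        # exploration decays to near zero
        for i, a in enumerate(agents):
            cand = [b + rng.gauss(0, step) * (b - v) + rng.gauss(0, step)
                    for v, b in zip(a, best)]
            if f(cand) < f(a):
                agents[i] = cand
        best = min(agents + [best], key=f)
    return best, f(best)
```

Concrete metaheuristics differ mainly in how the candidate move is generated; the surrounding loop (population, decaying step, greedy best tracking) is common to the family.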

Paper 31: Modeling of Whale Optimization with Deep Learning based Brain Disorder Detection and Classification

Abstract: Brain disorders are a significant source of economic strain and unfathomable suffering in modern society. Imaging techniques help diagnose, monitor, and treat mental health, neurological, and developmental disorders. To aid the computer-aided diagnosis of brain diseases, deep learning (DL) has been used to analyse neuroimages from modalities including Positron Emission Tomography (PET), Structural Magnetic Resonance Imaging (SMRI), and functional MRI. In this study, a Whale Optimization Algorithm is used with deep learning (WOADL-BDDC) to analyse MRI scans for signs of neurological disease; the model can detect and label abnormalities in the brain based on an MRI scan. It uses a two-step pre-processing procedure: guided filtering to remove background noise, followed by U-Net segmentation to strip the non-brain region at the top of the head. QuickNAT, together with RMSProp, is used to segment the brain. When analysing data, WOADL-BDDC uses radiomics to collect information from every layer. Used in a convolutional recurrent neural network model, the Whale Optimization Algorithm can accurately categorize mental illness. WOADL-BDDC is evaluated on ADNI 3D. Compared with state-of-the-art classification results from Vgg16, Graph CNN, Modified ResNet18, non-linear SVM, ResNet50-SVM, and ResNet50-RF, the suggested technique achieved the greatest accuracy, demonstrating that it is superior to other models for classification from MRI images. In simulations, the proposed approach proves effective in optimizing hyperparameters, with an accuracy of 94.38% on the TR set and 94.87% on the TS set, a precision of 96.43% on the TR set and 97.62% on the TS set, and an F1-score of 89.35% and 92.10% on the TR and TS sets, respectively.

Author 1: Uvaneshwari M
Author 2: M. Baskar

Keywords: Brain disorder detection; magnetic resonance imaging; deep learning; convolutional recurrent neural network; whale optimization algorithm

Download PDF

Paper 32: A Study on the Designation Institution for Supercomputer Specialized Centers in Republic of Korea

Abstract: In Korea, specialized centers are designated in 10 strategic fields for the purpose of jointly utilizing supercomputer resources at the national level. Based on the “National Supercomputing Innovation Strategy,” the plan is to select 10 centers in three stages by 2030, and the designation of the first-stage specialized centers was completed in 2022. With the second designation due in 2024, it is urgent to review and improve the existing designation system for a fairer and more effective selection of specialized centers. Therefore, this paper analyzed the influence of evaluation items on evaluation results by using logistic regression analysis and network centrality analysis in order to prepare improvement plans for the existing evaluation model. As a result of the analysis, improvement measures were derived, such as subdividing evaluation items with low impact, expanding the items, and lowering the weighting of evaluation items with low impact.

Author 1: Hyungwook Shim
Author 2: Yonghwan Jung
Author 3: Jaegyoon Hahm

Keywords: Supercomputer; specialized center; evaluation system; logistic regression model; network centrality analysis

Download PDF

Paper 33: Time Series Forecasting using LSTM and ARIMA

Abstract: Time series analysis is the process of evaluating sequential data to extract meaningful statistics. In the current era, organizations rely greatly on data analysis to solve and predict possible answers to a specific problem, and these predictions help greatly in decision-making. In time series problems, data is used to train various machine and deep learning models, whose outcomes anticipate possible solutions. In this paper, two of the most effective models available in Python are used: LSTM (Long Short-Term Memory) and ARIMA (Autoregressive Integrated Moving Average), the two most recommended models for time series forecasting. The selected dataset is from Mulkia Gulf Real Estate, available at MarketWatch. The main objective of this research paper is to study and compare the results of the two models and determine which one is better suited for this particular type of prediction. Although both models are widely used, the focus of this research is the performance variance between them. LSTM became famous in 1997 as a training model that can remember patterns based on previous data, while ARIMA is famous for forecasting a variable of interest using a linear combination of its previous values. The findings state that ARIMA is better for time series forecasting than LSTM based on the mean average of the basic evaluation parameters.

Author 1: Khulood Albeladi
Author 2: Bassam Zafar
Author 3: Ahmed Mueen

Keywords: Stock analysis; machine learning; deep learning; time series; ARIMA; LSTM

Download PDF
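ARIMA's autoregressive backbone can be shown in miniature: a least-squares AR(1) fit and an iterative forecast. This is a pure-Python sketch; a real ARIMA adds differencing (the "I") and moving-average terms:

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = c + phi * x[t-1]: forecasting a variable
    of interest as a linear combination of its previous value."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))
    c = my - phi * mx
    return c, phi

def forecast(series, steps, c, phi):
    """Iterate the fitted recurrence forward from the last observation."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out
```

On a perfectly linear series the fit recovers the recurrence exactly; on real price data the residual error is what metrics like MAE/RMSE then measure when comparing models.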

Paper 34: Enhancing the Intrusion Detection Efficiency using a Partitioning-based Recursive Feature Elimination in Big Cloud Environment

Abstract: In the era of cloud computing, the effectiveness of supervised machine-learning-based intrusion detection models for categorizing and detecting malicious network attacks depends on the preparation, extraction, and selection of the optimal subset of features from the dataset. Therefore, before beginning the training phase of the machine learning classifier models, it is necessary to remove redundant data, handle missing values, extract statistical features from the dataset, and choose the most valuable and appropriate attributes, here using the Python Jupyter Notebook. In this study, a partitioning-based recursive feature elimination (PRFE) method is proposed to reduce the feature-space complexity and training time of machine learning models while increasing the accuracy of detecting malicious attacks. On the Information Security and Object Technology cloud intrusion dataset (ISOT-CID), some of the most popular supervised machine learning classification techniques, including support vector machines (SVM) and decision trees (DT), were assessed with the proposed PRFE technique. Compared with some of the most popular filter- and wrapper-based feature selection strategies, the practical experiments demonstrated an improvement in accuracy, recall, F-score, and precision after applying the PRFE technique to the ISOT-CID dataset. Additionally, the time required to train the machine learning models was reduced.

Author 1: Hesham M. Elmasry
Author 2: Ayman E. Khedr
Author 3: Hatem M. Abdelkader

Keywords: Machine learning models; big cloud environment; intrusion detection system (IDS); Jupyter Notebook; feature selection; ISOT-CID

Download PDF
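Recursive feature elimination, the core of the proposed PRFE, repeatedly ranks the remaining features and discards the weakest until the desired subset size is reached. A minimal sketch, with a simple correlation ranker standing in for the fitted-model weights a full RFE would use; the data in the test is illustrative:

```python
def correlation(xs, ys):
    """Pearson correlation; 0.0 for a constant (zero-variance) feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = (sum((a - mx) ** 2 for a in xs)
           * sum((b - my) ** 2 for b in ys)) ** 0.5
    return 0.0 if den == 0 else num / den

def rfe(X, y, keep):
    """Recursive feature elimination: score every remaining feature
    against the label, drop the lowest-scoring one, repeat."""
    features = list(range(len(X[0])))
    while len(features) > keep:
        scores = {f: abs(correlation([row[f] for row in X], y))
                  for f in features}
        features.remove(min(features, key=lambda f: scores[f]))
    return sorted(features)
```

Eliminating one feature per round (rather than picking a subset in one shot) is what makes the procedure "recursive", and it is also why partitioning the work, as PRFE does, pays off on large datasets.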

Paper 35: Parameter Extraction and Performance Analysis of 3D Surface Reconstruction Techniques

Abstract: Digital image-based 3D surface reconstruction is a streamlined and practical means of studying the features of the object being modelled, and the generation of true 3D content is a crucial step in any 3D system. A methodology to reconstruct a 3D surface of objects from a set of digital images is presented in this paper; it is simple, robust, and can be freely used for the construction of 3D surfaces from images. Digital images are taken as input to generate sparse and dense point clouds in 3D space from the detected and matched features. Poisson surface, ball pivoting, and alpha shape reconstruction algorithms have been used to reconstruct photo-realistic surfaces. Various parameters of these algorithms that are critical to the quality of reconstruction are identified, and the effect of varying their values is studied. The results presented in this study give readers an insight into how the various algorithmic parameters affect computation time and fineness of reconstruction.

Author 1: Richha Sharma
Author 2: Pawanesh Abrol

Keywords: 3D reconstruction; point cloud; feature detection; feature matching

Download PDF

Paper 36: Classifying Weather Images using Deep Neural Networks for Large Scale Datasets

Abstract: Classifying weather from outdoor images helps prevent road accidents, schedule outdoor activities, and improve the reliability of vehicle driver-assistance and outdoor video surveillance systems. Weather classification has applications in various fields such as agriculture, aquaculture, transportation, and tourism. Earlier, expensive sensors and substantial manpower were used for weather classification, making it tedious and time-consuming; automating the task of classifying weather conditions from images saves considerable time and resources. In this paper, a framework based on the transfer learning technique is proposed for classifying weather images, using features learned by pre-trained deep CNN models, in much less time. Further, the size of the training data affects the efficiency of the model: a larger amount of high-quality data often leads to more accurate results. Hence, we have implemented the proposed framework on the Spark platform, making it scalable to big datasets. Extensive experiments have been performed on a weather image dataset, and the results prove that the proposed framework is reliable. From the results, it can be concluded that weather classification with the InceptionV3 model and a Logistic Regression classifier yields the best results, with a maximum accuracy of 97.77%.

Author 1: Shweta Mittal
Author 2: Om Prakash Sangwan

Keywords: Weather classification; big data; transfer learning; deep learning; Sparkdl; convolutional networks

Download PDF

Paper 37: Exploring College Academic Performance of K to12 IT and Non-IT-related Strands to Reduce Academic Deficiencies

Abstract: Improving students' academic performance is a significant concern among academics. Despite various strategies to improve academic performance, a significant number of students still fail academically. This study investigated the possible reasons for academic deficiencies among Infotech Development Systems Colleges, Inc. Information Technology (IT) students from IT- and non-IT-related strands, whether their strand is significant to their academic performance in college, and formulated a solution based on the identified reasons and recommendations to reduce negative academic remarks. The researchers employed survey questionnaires and interviews to conduct an exploratory data analysis; the respondents' Senior High School academic performance and actual grades from AY 2018-2019 to the first semester of AY 2021-2022 were likewise examined. The tested hypothesis showed statistical significance (p < 0.05) for the IT-related strand. The study further reveals that the non-IT-related strand has more students with academic deficiencies than the IT-related strand, and it highlights a variety of cited reasons: misalignment of strand with the current program, instructors not speaking clearly, unreliable internet connection, and failure to complete and submit academic tasks. The researchers designed a model that can potentially eliminate academic inadequacies. The model takes into account both internal and external factors. The internal factors include effective time management, a positive attitude and mindset, prompt and punctual completion of requirements, and good study habits.
As external factors, competent and student-friendly instructors, a stable, strong, and accessible internet connection, a conducive learning environment, relevant available resources and facilities, adoption of limited face-to-face or hybrid classes, and alignment of the SHS strand with the college program of choice are recommended.

Author 1: Marilou S. Benoza
Author 2: Thelma Palaoag

Keywords: Academic performance; exploratory research; reduce academic deficiencies; thematic analysis

Download PDF

Paper 38: AI based Dynamic Prediction Model for Mobile Health Application System

Abstract: In recent decades, mobile health (m-health) applications have gained significant attention in the healthcare sector due to their increased support during critical cases like cardiac disease, spinal cord problems, and brain injuries. M-health services are also considered especially valuable where facilities are deficient. In addition, they support wired and advanced wireless technologies for data transmission and communication. In this work, an Artificial Intelligence (AI)-based deep learning model is implemented to predict healthcare data, where data handling is performed to improve dynamic prediction performance. It includes working modules for data collection, normalization, AI-based classification, and decision-making. Here, the m-health data are obtained from smart devices through the service providers and comprise health information such as blood pressure, heart rate, and glucose level. The main contribution of this paper is to accurately predict Cardiovascular Disease (CVD) from the patient dataset stored in the cloud using the AI-based m-health system. After obtaining the data, preprocessing is performed for noise reduction and normalization, because prediction performance highly depends on data quality. Subsequently, we use the Gorilla Troop Optimization Algorithm (GTOA) to select the most relevant features for classifier training and testing. The patient's CVD type is then classified from the selected feature set using a bidirectional long short-term memory (Bi-LSTM) network. Moreover, the proposed AI-based prediction model's performance is validated and compared using different measures.

Author 1: Adari Ramesh
Author 2: C K Subbaraya
Author 3: G K Ravi Kumar

Keywords: Artificial Intelligence (AI); M-Health System; Data Collection; Cloud Storage; Gorilla Troop Optimization (GTO); Bi-directional Long Short-Term Memory (Bi-LSTM); dynamic prediction

Download PDF

Paper 39: Reducing Cheating in Online Exams Through the Proctor Test Model

Abstract: The World Health Organization (WHO) officially declared coronavirus (COVID-19) a pandemic on March 11, 2020. Educational institutions had to move most face-to-face learning activities online. This situation forced academic institutions to change the format of assessing student learning outcomes. Online exam surveillance applications utilizing cameras and browser-locking tools (proctors) are becoming popular. However, the appearance of proctor-supervised exam systems also raises controversy. The main discussion regarding this proctor system concerns the integrity of assessment and the capacity of students to adapt to this new method of supervision. The main question is whether students feel comfortable using the proctor system in exams and whether this system affects students' scores. To answer this question, we analyzed the scores of 152 students learning Arabic at Hasanuddin University Makassar, Indonesia. The experiment involved three exam models: an online format from home using the Sikola Learning Management System (Modality 1), an online format using the Proctor System in the Sikola Learning Management System (Modality 2), and a paper exam format in person under the supervision of a lecturer (Modality 3). The results show that students prefer Modality 1 (online at home with the Sikola LMS). There is a statistical difference between the scores obtained by students from the three modalities analyzed. Student scores with Modality 1 are higher than with the other two modalities. On the other hand, there was no difference in scores between Modalities 2 and 3. The online exam system (Modality 2) can be applied to online exams in higher education institutions because it can reduce or even prevent student cheating.
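
The pairwise score comparison described above can be sketched with a Welch t-test in pure Python; the score samples below are hypothetical, not the study's data:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical score samples for two exam modalities (illustration only).
modality1 = [88, 92, 85, 90, 95, 89, 91]
modality2 = [78, 82, 80, 76, 84, 79, 81]
t, df = welch_t(modality1, modality2)
print(round(t, 2), round(df, 1))  # t ≈ 6.42, df ≈ 11.6
```

A large |t| relative to the Student-t distribution with df degrees of freedom would indicate a significant score difference between the two modalities.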

Author 1: Yusring Sanusi Baso
Author 2: Nurul Murtadho
Author 3: Syihabuddin
Author 4: Hikmah Maulani
Author 5: Andi Agussalim
Author 6: Haeruddin
Author 7: Ahmad Fadlan
Author 8: Ilham Ramadhan

Keywords: Reducing cheating; online exam; proctor test models; Indonesian learners of Arabic; Silsilat Al-Lisan

Download PDF

Paper 40: A Framework of Outcome-based Assessment and Evaluation for Computing Programs

Abstract: This paper presents a framework for student outcome-based assessment and evaluation, including the process and detailed activities leading to continuous assessment of the successes of an academic program, which is essential to its sustainability. Moreover, this paper provides a survey of the literature that reviews the different means of assessing and evaluating an academic program, together with the critical performance metrics that aid in quantifying such evaluation. The presented framework was implemented on the Information Technology program over a course of five years. The paper provides empirical insights into how careful implementation of the presented framework enabled the College of Information Technology at Ahlia University to achieve outstanding results in quality assurance and to become ABET accredited. The results of the implementation prove the effectiveness of the framework in improving student performance and the program. This paper fulfils an identified need to study how a student outcome-based assessment and evaluation model enables an academic institute to foster quality assurance instead of relying on ad hoc practices, which might lead to a trial-and-error approach. The presented framework could be followed by other institutions aiming for international accreditation.

Author 1: Wasan S. Awad
Author 2: Khadija A. Almhosen

Keywords: Student outcomes; program assessment; program evaluation; program accreditation; ABET accreditation; continuous improvement

Download PDF

Paper 41: A Hybrid Filtering Technique of Digital Images in Multimedia Data Warehouses

Abstract: The similarity search approach used for an image Data Warehouse (DW) can provide better insights into discovering the images most similar to an input query. With recent technological advances, multimedia content has grown markedly more complex, opening new research areas that depend on similarity-based multimedia content retrieval. Content-Based Image Retrieval (CBIR) algorithms are utilized to retrieve images related to a query image from huge databases or DWs. The queries used for a DW are complex, take a long time to process, and often give less accurate results. For these reasons, this paper proposes an effective technique to improve the similarity search query process and produce better results. This paper shows how to extract features from a set of images (color, shape, and texture features) by using a CBIR algorithm with the Color Edge Detection (CED) method. Once these features are extracted, the proposed method minimizes the distance between these feature vectors and that of the query image using a Genetic Algorithm (GA). This paper illustrates the extraction of robust and important features from the image database and their storage in the form of feature vectors. Accordingly, a similarity assessment with a metaheuristic algorithm (a Genetic Algorithm (GA) with Simulated Annealing (SA)) is performed between the query image features and those belonging to the database images. This paper introduces a new algorithm, CEDF (Color Edge Detection with Gaussian Blur Filter), which applies the Gaussian Blur Filter after using the CED method for feature detection in the image. Experimental results show that the CEDF method gives better results than other known methods.
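
A minimal sketch of the GA-based distance minimization the abstract describes, in pure Python; the query vector, population size, and operators are illustrative assumptions, not the paper's exact configuration:

```python
import random

def distance(u, v):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def ga_min_distance(query, dim, pop_size=30, gens=100, seed=1):
    """Toy GA: evolve candidate feature vectors toward the query vector."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: distance(ind, query))
        elite = pop[: pop_size // 2]              # selection (elitism)
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, dim)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = rng.randrange(dim)                # small Gaussian mutation
            child[i] += rng.gauss(0, 0.05)
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda ind: distance(ind, query))

query = [0.2, 0.7, 0.5, 0.9]   # hypothetical query-image feature vector
best = ga_min_distance(query, dim=len(query))
print(round(distance(best, query), 3))
```

The paper additionally hybridizes the GA with Simulated Annealing; a full memetic variant would accept some worsening moves with a temperature-dependent probability.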

Author 1: Nermin Abdelhakim Othman
Author 2: Ahmed Ayman Saad
Author 3: Ahmed Sharaf Eldin

Keywords: Data Warehouse (DW); Content-Based Image Retrieval (CBIR); Color Edge Detection (CED); Genetic Algorithm (GA); Simulated Annealing (SA); Memetic Algorithm (MA)

Download PDF

Paper 42: Agro-Food Supply Chain Traceability using Blockchain and IPFS

Abstract: Many blockchain initiatives significantly use the InterPlanetary File System (IPFS) to store user data off-chain. Centralized administration, ambiguous data, unreliable data, and the ease of creating information islands are all issues with traditional traceability systems. This study develops a monitoring system using blockchain technology to record and query product information in the supply network of Non-Perishable (NP) agro goods to address the above issues. The transparency and trustworthiness of traceability data are considerably improved by employing blockchain technology's distributed, tamper-proof, and traceable properties. To alleviate the strain on the blockchain and enable efficient information inquiry, a storage structure is built in which public and private data are stored in the blockchain and the InterPlanetary File System (IPFS) using cryptography. Because of its ability to trace the origin of food, blockchain technology contributes to the development of reliable food supply chains and the establishment of rapport between farmers and their customers. Since it provides a secure location for data to be kept, it can pave the way for implementing data-driven farming techniques. In addition to improving data security, recording farm data in IPFS and storing the encrypted IPFS file hashes in smart contracts solves the issue of blockchain storage explosion. When used in tandem with smart contracts, it also enables instantaneous transfers between parties in response to changes in data stored in the blockchain. The paper also offers simulations of the implementation and an analysis of the performance. The findings validate that our system improves security for sensitive information, safeguards supply chain data, and meets the needs of real-world applications. Furthermore, it boosts throughput efficiency while reducing latency.
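
The hash-on-chain, data-off-chain pattern can be sketched in a few lines of Python; here one dict stands in for IPFS (content-addressed by SHA-256 rather than a real CID) and another stands in for the smart contract's storage, so everything is a toy model of the architecture:

```python
import hashlib
import json

off_chain = {}   # toy stand-in for IPFS: content-addressed blob store
on_chain = {}    # toy stand-in for a smart contract's key-value storage

def ipfs_put(record: dict) -> str:
    """Store a record off-chain and return its content hash (CID stand-in)."""
    blob = json.dumps(record, sort_keys=True).encode()
    cid = hashlib.sha256(blob).hexdigest()
    off_chain[cid] = blob
    return cid

def contract_register(product_id: str, cid: str) -> None:
    """Record only the small hash on-chain; bulky data stays off-chain."""
    on_chain[product_id] = cid

def verify(product_id: str) -> bool:
    """Re-hash the off-chain blob and compare with the on-chain hash."""
    cid = on_chain[product_id]
    return hashlib.sha256(off_chain[cid]).hexdigest() == cid

cid = ipfs_put({"farm": "F-17", "crop": "wheat", "harvested": "2022-11-02"})
contract_register("batch-001", cid)
print(verify("batch-001"))  # True while the off-chain data is untampered
```

Any tampering with the off-chain blob changes its hash and is caught by `verify`, which is the integrity property that makes the hybrid storage trustworthy.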

Author 1: Subashini Babu
Author 2: Hemavathi Devarajan

Keywords: Blockchain; IPFS; supply chain management; traceability; Ethereum

Download PDF

Paper 43: Implementation of Flood Emergency Response System with Face Analytics

Abstract: Disaster management systems are developed to monitor floods, tsunamis, and earthquakes, supporting effective preparation for and response to disasters. In fact, Malaysia is building an emergency response system for managing flood disasters, since floods occur frequently there. However, the current flood emergency response system has limitations: it lacks integration, with data entered into spreadsheets and transferred to different storage systems, and it lacks data analytics to assist data collection and decision-making. Even though flood emergency response systems have been improved by introducing sirens and loudspeakers to reach flood victims, it is still difficult to access basic facilities and receive flood responses. Therefore, this study implements a flood emergency response system with face analytics to assist data acquisition, analyzing flood victims' faces from CCTV infrastructure with the Histogram of Oriented Gradients (HOG) algorithm, incorporated into a dashboard. The dashboard categorizes the number of flood occurrences, the maximum flood period (days), the number of displaced flood victims, and the loss assessment. Findings show that the dashboard helps enforcement agencies obtain real-time information about flood victims. Based on the flood frequency in Malaysia from 2017 to 2021, the percentages produced were 27%, 19%, 12%, 19%, and 23%. Moreover, the duration of floods decreased from 30% to 17% over 5 years, showing that the flood emergency response helps the Malaysian government improve its infrastructure. This study is beneficial to local enforcement units and evacuation centers in identifying flood victims.

Author 1: E. Mardaid
Author 2: Z. Zainal Abidin
Author 3: S. A. Asmai
Author 4: Z. Abal Abas

Keywords: Disaster management system; flood emergency response system; flood emergency response system with face analytics; flood emergency response system with face analytics using HOG algorithm; flood emergency response system dashboard

Download PDF

Paper 44: A Novel Smart Deepfake Video Detection System

Abstract: Rapid advancements in deep learning-based technologies have produced several synthetic video and audio generation methods that create incredibly hyper-realistic deepfakes. These deepfakes can be employed to impersonate the identity of a source person in videos by swapping the source's face with the target one. Deepfakes can also be used to clone the voice of a target person using audio samples. Such deepfakes may pose a threat to societies if utilized maliciously. Consequently, distinguishing deepfake video frames, cloned voices, or both from genuine ones has become an urgent issue. This work presents a novel smart deepfake video detection system. The video frames and audio are extracted from given videos. Two feature extraction methods are proposed, one for each modality: visual video frames and audio. The first method is an upgraded XceptionNet model, which is utilized to extract spatial features from video frames, producing a feature representation for the visual video frames. The second is a modified InceptionResNetV2 model based on the Constant-Q Transform (CQT) method, employed to extract deep time-frequency features from the audio modality and produce a feature representation for the audio. The corresponding extracted features of both modalities are fused at a mid-layer to produce a bimodal feature representation for the whole video. These three representation levels are independently fed into a Gated Recurrent Unit (GRU)-based attention mechanism, which learns and extracts deep and important temporal information per level. Then, the system checks whether the forgery is applied only to the video frames, the audio, or both, and produces the final decision about video authenticity. The newly suggested method has been evaluated on the FakeAVCeleb multimodal video dataset. The experimental results analysis confirms the superiority of the new method over current state-of-the-art methods.

Author 1: Marwa Elpeltagy
Author 2: Aya Ismail
Author 3: Mervat S. Zaki
Author 4: Kamal Eldahshan

Keywords: Deepfake; deepfake detection; bimodal; XceptionNet; InceptionResNetV2; constant-Q transform; CQT; Gated Recurrent Unit; GRU; video authenticity; deep learning; multimodal

Download PDF

Paper 45: Three on Three Optimizer: A New Metaheuristic with Three Guided Searches and Three Random Searches

Abstract: This paper presents a new swarm intelligence-based metaheuristic called the three-on-three optimizer (TOTO). This name is chosen based on its novel mechanism of adopting multiple searches in a single metaheuristic. These multiple searches consist of three guided searches and three random searches. The three guided searches are searching toward the global best solution, the global best solution searching to avoid the corresponding agent, and searching based on the interaction between the corresponding agent and a randomly selected agent. The three random searches are the local search of the corresponding agent, the local search of the global best solution, and the global search within the entire search space. TOTO is challenged to solve the classic 23 functions as a theoretical optimization problem and the portfolio optimization problem as a real-world optimization problem. There are 13 bank stocks from the Kompas 100 index that should be optimized. The results indicate that TOTO performs well in solving the classic 23 functions. TOTO can find the global optimal solution of eleven functions. TOTO is superior to five new metaheuristics in solving 17 functions. These metaheuristics are the grey wolf optimizer (GWO), marine predator algorithm (MPA), mixed leader-based optimizer (MLBO), golden search optimizer (GSO), and guided pelican algorithm (GPA). TOTO is better than GWO, MPA, MLBO, GSO, and GPA in solving 22, 21, 21, 19, and 15 functions, respectively. This means TOTO is powerful in solving high-dimension unimodal, multimodal, and fixed-dimension multimodal problems. TOTO performs as the second-best metaheuristic in solving the portfolio optimization problem.
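
A toy sketch of the six-move structure in pure Python on the sphere benchmark function; the exact update rules below are guesses for illustration, not TOTO's published equations:

```python
import random

def sphere(x):
    """Classic unimodal benchmark: global optimum 0 at the origin."""
    return sum(v * v for v in x)

def toto_sketch(dim=5, agents=10, iters=200, lo=-5.0, hi=5.0, seed=3):
    """Illustrative three-guided + three-random search loop (rules guessed)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(agents)]
    best = min(pop, key=sphere)
    for t in range(iters):
        step = (hi - lo) * (1 - t / iters) * 0.1        # shrinking step size
        for i, x in enumerate(pop):
            r = rng.choice(pop)
            cands = [
                [xi + rng.random() * (bi - xi) for xi, bi in zip(x, best)],  # 1: toward best
                [bi + rng.random() * (bi - xi) for xi, bi in zip(x, best)],  # 2: best avoiding the agent
                [xi + rng.random() * (ri - xi) for xi, ri in zip(x, r)],     # 3: interaction with random agent
                [xi + rng.gauss(0, step) for xi in x],                       # 4: local search around agent
                [bi + rng.gauss(0, step) for bi in best],                    # 5: local search around best
                [rng.uniform(lo, hi) for _ in range(dim)],                   # 6: global random search
            ]
            new = min(cands, key=sphere)                 # greedy acceptance
            if sphere(new) < sphere(x):
                pop[i] = new
                if sphere(new) < sphere(best):
                    best = new
    return best

best = toto_sketch()
print(round(sphere(best), 6))
```

Even this rough sketch converges near the sphere optimum, which shows why combining guided exploitation moves with random exploration moves is effective.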

Author 1: Purba Daru Kusuma
Author 2: Ashri Dinimaharawati

Keywords: Optimization; metaheuristic; swarm intelligence; portfolio optimization; Kompas 100; bank

Download PDF

Paper 46: An Empirical Study on the Affecting Factors of Cloud-based ERP System Adoption in Iraqi SMEs

Abstract: This paper investigates the main factors that have an impact on the adoption of cloud-based enterprise resource planning (ERP) among small- and medium-sized enterprises (SMEs) in the Republic of Iraq, using TOE, DOI, and HOT-fit as a theoretical framework. Data were collected from 136 decision makers, senior executives, and IT managers in SMEs in the Republic of Iraq. A web-based survey questionnaire was used for the data collection process. The research model and the derived hypotheses were tested using SPSS and SmartPLS. The findings indicate that several specific factors have a significant effect on the adoption of cloud-based ERP. This conclusion can be utilized to enhance strategies for approaching cloud-based ERP by pinpointing why some SMEs choose to adopt this technology and succeed during the adoption phase, while others still do not go forward with adoption. This study provides an overview and empirically identifies the main determinant factors that SMEs in the Republic of Iraq might face. The findings also help SMEs consider their information technology investments when they consider adopting cloud-based ERP.

Author 1: Mohammed G. J
Author 2: MA Burhanuddin
Author 3: Dawood F. A. A
Author 4: Alyousif S
Author 5: Alkhayyat A
Author 6: Ali M. H
Author 7: R. Q. Malik
Author 8: Jaber M. M

Keywords: SMEs; TOE; DOI and HOT-fit frameworks; Cloud-based ERP; ICT; SmartPLS; SPSS

Download PDF

Paper 47: A Learning-based Correlated Graph Model for Spinal Cord Injury Prediction from Magnetic Resonance Spinal Images

Abstract: In epidemiological research on spine surgery, machine learning represents a promising new area. It comprises several algorithms that work together to identify patterns in the data. Machine learning provides many benefits over traditional regression techniques, including a lower necessity for a priori predictor information and a higher capacity for managing huge datasets. Recent research has made significant progress toward using machine learning more effectively in spinal cord injury (SCI). Machine learning algorithms are employed to analyze non-traumatic and traumatic spinal cord injuries. Non-traumatic spinal cord injuries often reflect degenerative spine conditions that cause spinal cord compression, such as degenerative cervical myelopathy. This article proposes a novel correlated graph model (CGM) that adopts correlated learning to predict various outcomes reported in traumatic and non-traumatic SCI studies. In the studies mentioned, machine learning is used for several purposes, including imaging analysis and epidemiological dataset prediction. We discuss how these machine learning-based clinical predictive models compare to traditional statistical prediction models. Finally, we outline the actions that must be taken in the future for machine learning to become a more prevalent statistical analysis method in SCI.

Author 1: P. R. S. S. V Raju
Author 2: V. Asanambigai
Author 3: Suresh Babu Mudunuri

Keywords: Spinal cord injury; regression; machine learning; graph model

Download PDF

Paper 48: The Practices of Online Assessment in a Digital Device in the Context of University Training: The Case of Hassan II University

Abstract: This research examines online assessment in a digital device in the context of university training, aiming to improve assessment practices with emerging technologies, based on an experiment with students from Hassan II University. Online assessment is a systematic process that helps measure learners' knowledge and skills through multiple technological tools in a digital device. Indeed, digital devices intend to revolutionize higher education through the use of Information and Communication Technologies (ICT). Nevertheless, digital devices pose the problem of student identity verification during online assessment. In reality, automated online assessment systems are extremely vulnerable to cheating. The aim of this research is therefore to explore, first, the types of online assessment that could be implemented in a digital device and, second, how to verify the identity of the student during an online course on a digital device. The sample for our experiment consists of (N = 108) students from the Hassan II University of Casablanca, divided into two classes of the ITEF and MIMPA Masters, and our study was based on an online questionnaire for (N = 37) teachers at Hassan II University in Casablanca. The results suggest putting into practice, in digital devices, diagnostic and formative evaluations using biometric methods for identity verification with a limited number of students. However, biometrics is inapplicable in summative assessments due to the problem of scale and hindrances in the online exam. For this reason, measures must be put in place to promote the smooth running of online assessment.

Author 1: Fatima-ezzahra Mrisse
Author 2: Nadia Chafiq
Author 3: Mohammed Talbi
Author 4: Kamal Moundy

Keywords: Online assessment; digital device; Information and Communication Technologies (ICT); biometrics; identify digital; student

Download PDF

Paper 49: Comparative and Evaluation of Anomaly Recognition by Employing Statistic Techniques on Humanoid Robot

Abstract: This paper presents a study to differentiate between normal and anomalous conditions detected by humanoid robots using comparative statistics. The study was conducted on a robotic software platform to examine the scenario and evaluate anomalous versus normal behaviour in different conditions. This study employed a machine vision technique to run an image segmentation process and carry out semi-supervised object training within a controlled environment. The robot is trained by differentiating the measured size of the target object, its location, and the object's visibility within three different frames. The effect is measured by extracting the positive predictive value (PPV), mean, and standard deviation from the captured image using statistical techniques in machine vision. The results showed that the mean value decreased by around 50% from the normal scenario when an anomaly occurred. Aside from that, the standard deviation values were more than double those of the common scenario, especially after the object's size grew. In contrast, the deviation value is remarkably small when the target is situated in the middle of adjacent frames, compared to the value when the entire shape is positioned in the frame. Simultaneously, the mean values from the processed image showed only a minor difference.
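
The reported statistics (mean, standard deviation, PPV) and the ~50% mean-drop pattern can be sketched in pure Python; the pixel frames and the drop threshold below are invented for illustration, not the study's data:

```python
from statistics import mean, pstdev

def frame_stats(pixels):
    """Mean and population standard deviation of grayscale pixel intensities."""
    return mean(pixels), pstdev(pixels)

def ppv(tp, fp):
    """Positive predictive value: TP / (TP + FP)."""
    return tp / (tp + fp)

def is_anomaly(baseline_mean, frame_mean, drop_ratio=0.5):
    """Flag an anomaly when the mean intensity falls to half the baseline,
    mirroring the ~50% drop reported in the abstract."""
    return frame_mean <= baseline_mean * drop_ratio

# Hypothetical 4x4 grayscale frames (values 0-255).
normal_frame  = [120, 130, 125, 118, 122, 128, 131, 119,
                 124, 127, 121, 126, 129, 123, 120, 125]
anomaly_frame = [60, 55, 62, 58, 61, 57, 64, 59,
                 56, 63, 60, 58, 62, 55, 61, 59]

m_norm, s_norm = frame_stats(normal_frame)
m_anom, s_anom = frame_stats(anomaly_frame)
print(is_anomaly(m_norm, m_anom))  # True: mean dropped to roughly half
```

In the study these statistics come from segmented camera frames rather than hand-written arrays, but the decision rule has the same shape.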

Author 1: Nuratiqa Natrah Mansor
Author 2: Muhammad Herman Jamaluddin
Author 3: Ahmad Zaki Shukor
Author 4: Muhammad Sufyan Basri

Keywords: Anomaly detection; humanoid robot; vision system; statistical analysis; robot recognition

Download PDF

Paper 50: Queueing Model based Dynamic Scalability for Containerized Cloud

Abstract: Cloud computing has become a growing technology and has received wide acceptance in the scientific community and in large organizations such as government and industry. Due to the highly complex nature of VM virtualization, lightweight containers have gained wide popularity, and techniques to provision resources to these containers have drawn researchers' attention. Models or algorithms that provide dynamic scalability meeting the demands of high performance and QoS while utilizing the minimum number of resources for the containerized cloud have been lacking in the literature. Dynamic scalability enables cloud services to offer timely, on-demand computing resources that adjust dynamically to end users. This manuscript presents a technique that exploits a queueing model to perform dynamic scaling of the containers' virtual resources while reducing cost and meeting the user's Service Level Agreement (SLA). The paper aims at improving the usage of virtual resources and satisfying SLA requirements in terms of response time, drop rate, system throughput, and the number of containers. The work was simulated using CloudSim and compared with existing work; the analysis shows that the proposed approach performs better.
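
One classical way to realize queueing-model-driven scaling is an M/M/c (Erlang C) sizing rule; the sketch below picks the smallest container count whose mean queueing wait meets a response-time SLA. The workload numbers are hypothetical, and the paper's own queueing model may differ:

```python
import math

def erlang_c(c, a):
    """Erlang C probability that a job must wait in an M/M/c queue
    (a = offered load = arrival_rate / service_rate)."""
    num = a ** c / math.factorial(c) * c / (c - a)
    den = sum(a ** k / math.factorial(k) for k in range(c)) + num
    return num / den

def containers_for_sla(arrival_rate, service_rate, max_wait):
    """Smallest container count whose mean queueing wait meets the SLA."""
    a = arrival_rate / service_rate
    c = max(1, math.ceil(a))
    while True:
        if c > a:  # stability condition: capacity must exceed offered load
            wait = erlang_c(c, a) / (c * service_rate - arrival_rate)
            if wait <= max_wait:
                return c
        c += 1

# Hypothetical workload: 50 req/s arriving, each container serves 12 req/s,
# SLA: mean queueing delay under 20 ms.
n = containers_for_sla(arrival_rate=50.0, service_rate=12.0, max_wait=0.020)
print(n)  # 6 containers needed
```

An autoscaler would re-run this sizing rule as the measured arrival rate changes, scaling the container pool up or down accordingly.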

Author 1: Ankita Srivastava
Author 2: Narander Kumar

Keywords: Cloud computing; scalability; containers; containerized cloud models; queueing model

Download PDF

Paper 51: An Improved Breast Cancer Classification Method Using an Enhanced AdaBoost Classifier

Abstract: The goal of this research is to create a machine learning (ML) classifier that can improve breast cancer (BC) diagnosis and prediction. The principal component analysis (PCA) technique is used in this work to reduce the dimensionality of the BC dataset and achieve better classification metrics. The developed classifier outperformed others in terms of F1 score and accuracy. Using the original BC dataset, four different classifiers were applied to determine the best classifier in terms of performance metrics: RandomForest, DecisionTree, AdaBoost, and GradientBoosting. The RandomForest classifier obtained a 95.7% F1 score and 94.5% accuracy, the DecisionTree classifier a 93% F1 score and 91% accuracy, the GradientBoosting classifier a 95% F1 score and 93.5% accuracy, and the AdaBoost classifier a 95.8% F1 score and 94.5% accuracy. The AdaBoost classifier was used to create the final model on the reduced PCA dataset because it scored the highest performance metrics. The developed classifier is named "pcaAdaBoost". The optimized pcaAdaBoost achieved higher performance metrics, with a 99% F1 score and 98.8% accuracy. The results reveal that the optimized pcaAdaBoost scored the highest performance measures in cross-validation and testing outcomes, with an overall accuracy of 99%. The improved results justify the use of dimensionality reduction in high-dimensional datasets to reduce complexity and improve performance measures.
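
For intuition on the boosting step, here is a minimal AdaBoost with 1-D decision stumps in pure Python; the toy dataset and round count are invented, and this is a sketch of the algorithm itself, not the paper's PCA pipeline:

```python
import math

def stump_predict(x, threshold, polarity):
    """Decision stump: label +polarity above the threshold, -polarity below."""
    return polarity if x >= threshold else -polarity

def train_adaboost(xs, ys, rounds=10):
    """Minimal AdaBoost: pick the best weighted stump, then re-weight samples."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None
        for threshold in xs:
            for polarity in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(x, threshold, polarity) != y)
                if best is None or err < best[0]:
                    best = (err, threshold, polarity)
        err, threshold, polarity = best
        err = min(max(err, 1e-10), 1 - 1e-10)       # avoid log of 0
        alpha = 0.5 * math.log((1 - err) / err)      # stump weight
        ensemble.append((alpha, threshold, polarity))
        # Re-weight: boost misclassified samples, then normalize.
        w = [wi * math.exp(-alpha * y * stump_predict(x, threshold, polarity))
             for wi, x, y in zip(w, xs, ys)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(x, t, p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

# Toy 1-D dataset: label +1 at or above 4.5, -1 below.
xs = [1, 2, 3, 4, 4.5, 6, 7, 8, 9, 5.5]
ys = [-1, -1, -1, -1, 1, 1, 1, 1, 1, 1]
acc = sum(predict(train_adaboost(xs, ys), x) == y for x, y in zip(xs, ys)) / len(xs)
print(acc)
```

In the paper's setting the weak learners operate on PCA-reduced feature vectors rather than a single scalar, but the weighting and re-weighting mechanics are the same.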

Author 1: Yousef K. Qawqzeh
Author 2: Abdullah Alourani
Author 3: Sameh Ghwanmeh

Keywords: Breast cancer; diagnosis; prediction; AdaBoost; RandomForest; PCA

Download PDF

Paper 52: Efficient Multimedia Content Transmission Model for Disaster Management using Delay Tolerant Mobile Adhoc Networks

Abstract: Natural and manmade disasters such as earthquakes, floods, and unprecedented rainfall pose several threats to our society. Citizens upload disaster information in the form of multimedia content such as pictures, audio, and videos. An efficient information and communication framework is critical for disaster management. Mobile Adhoc Networks (MANETs) have been used effectively for disaster management. However, disaster management requires Quality of Service (QoS) guarantees such as bandwidth, a high delivery ratio, low overhead, and minimal latency, whereas existing data transmission schemes induce high latency and overhead among intermediate devices. In order to meet the QoS requirements of disaster management applications, this paper presents a High Delivery Efficiency and Low Latency Multimedia Content Transmission (HDELL-MCT) scheme for MANETs. An improved buffer management scheme is then presented to meet disaster management performance and latency prerequisites. The experiment was conducted using the ONE simulator; the outcome shows that the HDELL-MCT scheme achieves very good performance on different QoS metrics, improving the delivery ratio by 38.02%, reducing latency by 7.53%, and minimizing hop communication overhead by 65.1% in comparison with an existing multimedia content transmission model.
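
A buffer management scheme of the kind the abstract mentions can be sketched as priority-based eviction; the policy below (evict the lowest-priority, then oldest, message when the buffer fills) is an illustrative stand-in, not the paper's exact scheme:

```python
import heapq
import itertools

class PriorityBuffer:
    """Toy DTN node buffer: when full, evict the lowest-priority (then oldest)
    message so that high-priority disaster content survives congestion."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []                 # (priority, seq, msg_id); smallest evicted first
        self.seq = itertools.count()   # monotonically increasing arrival order
        self.dropped = 0

    def enqueue(self, msg_id, priority):
        if len(self.heap) >= self.capacity:
            lowest = self.heap[0]
            if priority <= lowest[0]:
                self.dropped += 1      # new message loses to buffered ones
                return False
            heapq.heappop(self.heap)   # evict lowest-priority buffered message
            self.dropped += 1
        heapq.heappush(self.heap, (priority, next(self.seq), msg_id))
        return True

buf = PriorityBuffer(capacity=3)
for msg, prio in [("text", 1), ("photo", 3), ("audio", 2), ("video", 5), ("log", 1)]:
    buf.enqueue(msg, prio)
kept = sorted(m for _, _, m in buf.heap)
print(kept, buf.dropped)  # ['audio', 'photo', 'video'] 2
```

A real scheme would also weigh message TTL and remaining hop count, but the eviction skeleton stays the same.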

Author 1: Sushant Mangasuli
Author 2: Mahesh Kaluti

Keywords: Buffer management; disaster management; Mobile Adhoc Network; opportunistic routing; Quality of Service

Download PDF

Paper 53: The Effect of Thermal and Electrical Conductivities on the Ablation Volume during Radiofrequency Ablation Process

Abstract: Radiofrequency ablation (RFA) is the treatment of choice for certain types of cancers, especially liver cancer. However, the main issue with RFA is that the larger the tumor volume, the longer the ablation period. That causes more pain for the patient, so surgeons perform a larger number of ablation sessions or surgeries. The current commonly used electrode material in RFA, nickel-titanium alloy, is characterized by low thermal and electrical conductivities. Using an electrode material with higher electrical and thermal conductivity delivers more thermal energy to tumors. In this paper, we design two models, a cool-tip RF electrode and a multi-hook RF electrode, to study the effect of the thermal and electrical conductivities of the electrode material on ablation volume. Gold, silver, and platinum have higher thermal and electrical conductivity than nickel-titanium alloy, and we therefore studied the effect of these materials on the ablation volume using the two designs. The proposed model reduces the ablation time and the damage to healthy tissue while increasing the ablation volume, with values ranging from 2.6 cm3 to 15.4 cm3. The results show the ablation volume increasing with materials characterized by higher thermal and electrical conductivities, thus reducing patient pain.

Author 1: Mohammed S. Ahmed
Author 2: Mohamed Tarek El-Wakad
Author 3: Mohammed A. Hassan

Keywords: Radiofrequency ablation (RFA); finite element method (FEM); COMSOL; Cool-tip RF electrode; multi-hooks electrode; large tumor ablation

Download PDF

Paper 54: A Light-weight Authentication Scheme in the Internet of Things using the Enhanced Bloom Filter

Abstract: Authenticated key exchange mechanisms are critical for security-sensitive Internet of Things (IoT) and Wireless Sensor Networks (WSNs). In this area, the Bloom Filter (BF) plays a crucial role both directly and indirectly, offering a significant advantage in space and time. Light-weight input authentication is one of the most challenging tasks in IoT. Weak or inefficient defense algorithms can allow fake information to enter the system, be shared, generate unnecessary messages, and reduce network efficiency. Utilizing an augmented Bloom filter to create an authentication primitive called the En-route Authentication Bitmap (EAB) has a substantial advantage over traditional methods that directly use Message Authentication Codes (MAC). This effective EAB method identifies fake information almost exactly, thereby stopping injection attacks within no more than two steps taken by the attacker. EAB needs only a few bytes of bandwidth for an efficient defense against at least ten forward steps of the adversary. The augmented Bloom filter and its components are thus becoming more common in network defense mechanisms.
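
The Bloom filter at the core of the EAB idea can be sketched in pure Python; the bit-array size, hash count, and node names below are arbitrary choices for illustration:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hashed bit positions per item over an m-bit array.
    Membership answers are 'definitely not present' or 'probably present'."""
    def __init__(self, m=1024, k=5):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item: str):
        # Derive k independent positions by salting SHA-256 with the index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: str) -> bool:
        """False means definitely absent; True may be a false positive."""
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter()
for node in ["node-01", "node-02", "node-03"]:   # hypothetical authorized senders
    bf.add(node)
print(bf.might_contain("node-02"), bf.might_contain("intruder-99"))
```

The space advantage the abstract cites comes from this structure: three authorized identities are checked against 128 bytes of state with no false negatives, and the false-positive rate can be tuned via m and k.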

Author 1: Xiaoyan Huo

Keywords: En-route authentication bitmap; message authentication codes; Internet of Things; bloom filter

Download PDF

Paper 55: Generalized Epileptic Seizure Prediction using Machine Learning Method

Abstract: In recent years, identifying epileptic seizures from electroencephalography (EEG) signals has become a routine procedure for diagnosing epilepsy. Manual identification of epileptic seizures by expert neurologists is a labor-intensive, time-consuming procedure that is also prone to errors, so efficient, computerized detection of epileptic seizures is required. The disordered brain function that causes epileptic seizures can have an impact on a patient's condition. Epileptic seizures can be prevented by medication with great success if they are predicted before they start. EEG signals are used to predict epileptic seizures with machine learning algorithms and complex computational methodologies. Two significant challenges that affect both the expectancy time and the true positive prediction rate are feature extraction from EEG signals and noise removal from EEG signals. As a result, we suggest a model that offers trustworthy preprocessing and feature extraction techniques. To automatically identify epileptic seizures, a variety of ensemble learning-based classifiers were applied to frequency-based features extracted from the EEG signal. Our algorithm offers a higher true positive rate and diagnoses epileptic episodes with enough foresight before they begin. On the scalp EEG CHB-MIT dataset of 24 subjects, the suggested framework detects the beginning of the preictal state, the state that begins a few minutes before the onset of the seizure, resulting in a higher true positive rate (91%) than conventional methods, an optimum estimation time of 33 minutes, and an average prediction time of 23 minutes and 36 seconds. According to the experimental findings, the maximum accuracy, sensitivity, and specificity rates in this research were 91%, 98%, and 84%, respectively.

Author 1: Zarqa Altaf
Author 2: Mukhtiar Ali Unar
Author 3: Sanam Narejo
Author 4: Muhammad Ahmed Zaki
Author 5: Naseer-u-Din

Keywords: Epilepsy; electroencephalogram; artificial intelligence; machine learning; CHB-MIT

Download PDF

Paper 56: The Impact of COVID-19 on Digital Competence

Abstract: The study looked into how COVID-19 affected the digital competence of a group of preservice teacher education students at a higher education institution in the Sultanate of Oman. The paper examined students’ digital profiles in five areas, namely information and data literacy, communication and collaboration, digital content creation, safety, and problem solving. Data from 32 undergraduate students was collected by utilizing DigComp, a European Commission digital skills self-assessment tool, and findings from a survey. The digital competence framework measures the set of skills, knowledge, and attitudes that describes what it means to be digitally competent. These skills are important for students to be effective global citizens in the 21st century. The results of the study revealed that the majority of the students scored Level 3 (Intermediate) in their self-assessment competency test. The majority of the students perceived that their digital competence improved significantly as a result of online learning, which was accelerated by the COVID-19 pandemic. The rationale of this investigation is that it helps educators understand the students’ level of digital competence and their perspectives on ICT skills. In turn, it informs us of ways to monitor the students’ digital progress and the next steps in developing their digital competency.

Author 1: Syerina Syahrin
Author 2: Khalid Almashiki
Author 3: Eman Alzaanin

Keywords: Digital competence; digital skills; digital profile; ICT skills; preservice teacher education

Download PDF

Paper 57: A Survey on Cloudlet Computation Optimization in the Mobile Edge Computing Environment

Abstract: Mobile Edge Computing (MEC) is used to perform computation at the edge of a network for mobile devices. This allows the deployment of more powerful and efficient computing resources in a cost-effective, lightweight, and scalable manner. MEC can optimize mobile device performance, enhance security and privacy, improve battery life, provide increased bandwidth, and reduce latency across wireless networks. Cloudlets are a new computation concept in which services are performed at the edge of the network. A service provider can deploy cloudlet services in a MEC environment, giving mobile devices the ability to offload their tasks to cloudlets. In the MEC environment, the offloading problem depends on the cloudlets' available computation resources. The method used to deploy cloudlets in the environment also affects task offloading. This paper investigates approaches to the cloudlet deployment and task offloading problem in the MEC environment. We first demonstrate that the problem has to be treated as a multi-objective optimization problem, since more than one objective needs to be optimized. We then prove that the problem is NP-complete, give an overview of existing solutions using meta-heuristic algorithms, and suggest future solutions for this problem. Finally, we explain the advantages of using a variable-length solution space with meta-heuristic algorithms for this problem.

Author 1: Layth Muwafaq
Author 2: Nor K. Noordin
Author 3: Mohamed Othman
Author 4: Alyani Ismail
Author 5: Fazirulhisyam Hashim

Keywords: Mobile edge computing; cloudlet deployment; task offloading; mobile device; multi-objective optimization; meta-heuristics; variable-length

Download PDF

Paper 58: Proof-of-Work for Merkle based Access Tree in Patient Centric Data

Abstract: With the advent of wearable devices and smart health care, wearable health care technology for obtaining Patient Centric Data (PCD) has gained popularity in recent years. To establish access control over encrypted data in health records, Ciphertext Policy-Attribute Based Encryption (CP-ABE) is used. The most critical element is granting secure access to the generated information. However, with the growing complexity of access policies, the computational overhead of the encryption and decryption processes also increases. As a result, ensuring both data access control and efficiency for PCD collected by wearables is crucial and challenging. This paper proposes and demonstrates a proof-of-work for the Merkle-based access tree using the notion of hiding the sensitive attributes of the access policy.
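
The access tree above builds on a Merkle tree, which commits to a set of leaves with a single root hash so that any leaf can later be verified against the root without revealing the others. This is a generic Merkle-root sketch, not the authors' exact construction.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root of a list of leaf byte strings."""
    if not leaves:
        return sha256(b"")
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        # Hash adjacent pairs to form the next level up.
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Changing any single leaf (e.g. one hidden policy attribute) changes the root, which is what makes the root usable as a compact integrity commitment.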

Author 1: B Ravinder Reddy
Author 2: T Adilakshmi

Keywords: Merkle tree; hashing; CP-ABE; access policy; PCD

Download PDF

Paper 59: Water Tank Wudhu and Monitoring System Design using Arduino and Telegram

Abstract: Manual water faucets, which are commonly used in mosques and homes, cannot control water use, resulting in a variety of issues, including water being wasted when the user forgets to close the faucet and water continuously flowing out. In addition, filling the water tank is an important factor in saving water: the water reserve in the tank must be properly controlled so that its availability is maintained. Based on these problems, a water faucet system for ablution (wudhu) and water tank monitoring was built using Arduino and Telegram. The automatic ablution faucet drains water automatically, with an ultrasonic sensor reading body movement and a solenoid valve acting as a substitute for the faucet. A water pump fills the water tank automatically, with another ultrasonic sensor measuring how much water is in the tank; a liquid crystal display and Telegram receive text messages reporting the condition of the water faucets, water pumps, and water levels.
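
The faucet and pump behaviour described above reduces to threshold logic on the two ultrasonic distance readings. The sketch below is a hypothetical Python rendering of that logic; the tank height, thresholds, and function name are invented for illustration, and on the real device this would live in the Arduino sketch.

```python
TANK_HEIGHT_CM = 100.0  # assumed distance from tank sensor to tank bottom

def control_step(hand_distance_cm, water_distance_cm, pump_was_on,
                 open_threshold_cm=15.0, low_pct=20.0, high_pct=90.0):
    """One control cycle: return (valve_open, pump_on, level_pct)."""
    # Open the solenoid valve when a hand/body is detected close to the faucet.
    valve_open = hand_distance_cm <= open_threshold_cm
    # The tank sensor measures down to the water surface, so the level
    # rises as the measured distance shrinks.
    level_pct = 100.0 * (TANK_HEIGHT_CM - water_distance_cm) / TANK_HEIGHT_CM
    # Hysteresis: start the pump when low, stop when nearly full,
    # otherwise keep the previous state.
    if level_pct < low_pct:
        pump_on = True
    elif level_pct >= high_pct:
        pump_on = False
    else:
        pump_on = pump_was_on
    return valve_open, pump_on, level_pct
```

The same three outputs would drive the status messages sent to the LCD and Telegram.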

Author 1: Ritzkal
Author 2: Yuggo Afrianto
Author 3: Indra Riawan
Author 4: Fitrah Satrya Fajar Kusumah
Author 5: Dwi Remawati

Keywords: Arduino; solenoid valves; ultrasonic sensor; water pump

Download PDF

Paper 60: Risk Analysis of Urban Water Infrastructure Systems in Cauayan City

Abstract: The City of Cauayan, Isabela is known as one of the first smart cities and leading agro-industrial centers in the Philippines. Since the center of the economy is in urban areas like Cauayan City, people and businesses tend to converge where development and activity take place; therefore, a risk analysis was conducted to analyze hazards for urban water infrastructure in the City of Cauayan. This paper includes an inventory of the existing urban water infrastructure; with the aid of Geographic Information System (GIS) software and the gathered data, maps were generated for flood hazards with 5-, 25-, and 100-year return periods, as well as liquefaction, ground shaking, and drought hazards affecting urban water infrastructure. These maps were generated to help the people of Cauayan City, Isabela. The main goal of the paper is to assess the hazard-prone areas where water infrastructure is located and to identify areas that are suitable for building such water infrastructure. Problems encountered by the people in utilizing urban water infrastructure can be minimized by proper installation of water infrastructure in suitable places, which can help the people of the city in water utilization. Since stormwater can cause widespread flooding in low-lying areas, urban water infrastructure with decision support system intervention can help the city utilize stormwater and address scarcity of water. In addition, the analysis can be used by the local government of the city for proper planning and to project the extent of the hazards.

Author 1: Rafael J. Padre
Author 2: Melanie A. Baguio
Author 3: Edward B. Panganiban
Author 4: Rudy U. Panganiban
Author 5: Carluz R. Bautista
Author 6: Justine Ryan L. Rigates
Author 7: Allisandra Pauline Mariano

Keywords: Water infrastructure; risk analysis; geographic information systems; decision support systems; storm water

Download PDF

Paper 61: Mitigate Volumetric DDoS Attack using Machine Learning Algorithm in SDN based IoT Network Environment

Abstract: Software-Defined Networking (SDN) is a recent trend that is combined with the Internet of Things (IoT) in wireless network applications. SDN focuses entirely on upper-level network management, and IoT enables monitoring the physical activity of a real-time environment via Internet connectivity. IoT clusters with SDN often face network security challenges, such as being attacked by a Distributed Denial of Service (DDoS). Network management issues are mitigated by frequent software updates of the SDN. On the other hand, security enhancement is needed to mitigate security attacks in the network. With this motivation, this research uses a machine learning based intrusion detection system to mitigate DDoS attacks in an SDN-IoT network. The control layer in the SDN is responsible for the prevention of attacks on the IoT network through a strong Intrusion Detection System (IDS) framework. The IDS provides a higher level of resistance to DDoS attacks, as the framework involves a feature selection-based classification model. A simulation was conducted to test the efficacy of the model against various levels of DDoS attacks. The simulation results show that the proposed method achieves better classification of attacks in the network than other methods.

Author 1: Kumar J
Author 2: Arul Leena Rose P J

Keywords: DDoS; SDN; IoT; machine learning

Download PDF

Paper 62: Customer Sentiment Analysis in Hotel Reviews Through Natural Language Processing Techniques

Abstract: Customer reviews of products and services play a key role in the customers' decision to buy a product or use a service. Customers' preferences and choices are influenced by the opinions of others online, on blogs or social networks. New customers are faced with many views on the web, which makes it hard for them to reach the right decision. Hence, sentiment analysis is needed to clarify whether opinions are positive, negative, or neutral. This paper suggests applying the Aspect-Based Sentiment Analysis approach to reviews extracted from tourism websites such as TripAdvisor and Booking. This approach is based on two main steps, namely aspect extraction and sentiment classification for each aspect. For aspect extraction, an approach based on topic modeling is proposed, using the semi-supervised CorEx (Correlation Explanation) method for labeling word sequences into entities. For sentiment classification, various supervised machine learning techniques are used to associate a sentiment (positive, negative, or neutral) with a given aspect expression. Experiments on opinion corpora have shown very encouraging performance.
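
The paper's pipeline (CorEx topic modeling plus supervised classifiers) is not reproduced here; the toy sketch below only illustrates the two-step shape of aspect-based sentiment analysis, with a hand-made aspect dictionary and opinion lexicon standing in for the learned models.

```python
# Step 1 stand-in: map surface terms to hotel aspects (learned by CorEx in the paper).
ASPECT_TERMS = {"room": "room", "bed": "room", "staff": "service",
                "reception": "service", "breakfast": "food"}
# Step 2 stand-in: tiny opinion lexicon (a supervised classifier in the paper).
OPINION_LEXICON = {"clean": 1, "comfortable": 1, "friendly": 1, "delicious": 1,
                   "dirty": -1, "rude": -1, "cold": -1, "noisy": -1}

def aspect_sentiments(review: str):
    """Assign a polarity to each aspect mentioned in the review, per sentence."""
    results = {}
    for sentence in review.lower().split("."):
        tokens = sentence.replace(",", " ").split()
        aspects = {ASPECT_TERMS[t] for t in tokens if t in ASPECT_TERMS}
        score = sum(OPINION_LEXICON.get(t, 0) for t in tokens)
        for a in aspects:
            results[a] = results.get(a, 0) + score
    return {a: "positive" if s > 0 else "negative" if s < 0 else "neutral"
            for a, s in results.items()}
```

The point of the two-step design is visible even in the toy: one review can carry opposite polarities for different aspects, which a single document-level label would flatten.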

Author 1: Soumaya Ounacer
Author 2: Driss Mhamdi
Author 3: Soufiane Ardchir
Author 4: Abderrahmane Daif
Author 5: Mohamed Azzouazi

Keywords: Topic modeling; aspect-based sentiments analysis; aspect extraction; sentiment classification; machine learning

Download PDF

Paper 63: Performance Comparison of the Kernels of Support Vector Machine Algorithm for Diabetes Mellitus Classification

Abstract: Diabetes Mellitus is a disease where the body cannot use insulin properly, making it one of the major health problems in various countries. Diabetes Mellitus can be fatal, can cause other diseases, and can even lead to death. It is therefore important to be able to predict the disease. The SVM algorithm is used to classify Diabetes Mellitus. The purpose of this study was to compare the accuracy, precision, recall, and F1-score values of the SVM algorithm with various kernels and data preprocessing. The data preprocessing included data splitting, data normalization, and data oversampling. This research can help address health problems related to the prevalence of Diabetes Mellitus and can serve as a source of accurate information. The results show that the highest accuracy (80%) and the highest precision (65%) were obtained with the polynomial kernel, while the highest recall (79%) and the highest F1-score (70%) were obtained with the RBF kernel.
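
For reference, the kernels compared above have the standard forms K(x, y) = (γ⟨x, y⟩ + c)ᵈ for the polynomial kernel and K(x, y) = exp(−γ‖x − y‖²) for the RBF kernel. A minimal sketch (the hyperparameter defaults are illustrative, not the paper's settings):

```python
import math

def linear_kernel(x, y):
    """K(x, y) = <x, y>"""
    return sum(a * b for a, b in zip(x, y))

def polynomial_kernel(x, y, degree=3, gamma=1.0, coef0=1.0):
    """K(x, y) = (gamma * <x, y> + coef0) ** degree"""
    return (gamma * linear_kernel(x, y) + coef0) ** degree

def rbf_kernel(x, y, gamma=0.5):
    """K(x, y) = exp(-gamma * ||x - y||^2)"""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))
```

The choice of kernel changes the geometry of the SVM's decision boundary, which is why the best-performing metric can differ between the polynomial and RBF variants as reported above.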

Author 1: Dimas Aryo Anggoro
Author 2: Dian Permatasari

Keywords: Diabetes mellitus; kernel; normalization; oversampling; SVM

Download PDF

Paper 64: Image Segmentation of Intestinal Polyps using Attention Mechanism based on Convolutional Neural Network

Abstract: The intestinal polyp is one of the common intestinal diseases, characterized by protruding lining tissue of the colon or rectum. Considering that they may become cancerous, polyps should be removed by surgery as soon as possible. In the past, identifying and diagnosing intestinal polyps took a great deal of manpower and time, which greatly affected the treatment efficiency of medical staff. Because a polyp looks similar to the normal structures of the human body, the probability of misjudgment by the human eye is high. Therefore, it is necessary to use advanced computer technology to segment intestinal polyp images. This paper proposes an image segmentation method based on a convolutional neural network. The HarDNet backbone network is used as the encoder in the model, and its feature processing results are converted into three feature images of different sizes, which are input to the decoding module. In the decoding process, each output first passes through a receptive field expansion module and is then fused with the feature image processed by the attention mechanism. The fusion results are input to the dense aggregation module for processing to improve the operating efficiency and accuracy of the model. The experimental results show that, compared with the previous PraNet and HarDNet-MSEG models, the accuracy and precision of this method are greatly improved, and it can be applied to actual medical image recognition, thus improving the treatment efficiency for patients.

Author 1: Xinyi Zheng
Author 2: Wanru Gong
Author 3: Ruijia Yang
Author 4: Guoyu Zuo

Keywords: Image segmentation; intestinal polyps; block convolutional attention mechanism; HarDNet

Download PDF

Paper 65: A Novel Hybrid DL Model for Printed Arabic Word Recognition based on GAN

Abstract: The recognition of printed Arabic words remains an open research area, since Arabic is among the most complex languages. Prior research shows that few efforts have been made to develop accurate Arabic recognition models, as most models have faced increasing performance complexity and a lack of benchmark Arabic datasets. Meanwhile, deep learning models such as Convolutional Neural Networks (CNNs) have been shown to reduce the error rate and enhance accuracy in Arabic character recognition systems. The reliability of these models increases with the depth of their layers, but the essential requirement for more layers is an extensive amount of data. Since a CNN generates features by analysing large amounts of data, its performance is directly proportional to the volume of data, and DL models are considered data-hungry algorithms. Nevertheless, this technique suffers from poor generalisation ability and overfitting, which affect the accuracy of Arabic recognition models. These issues are due to the limited availability of Arabic databases in terms of accessibility and size, a central problem facing the Arabic language today. Therefore, Arabic character recognition models still have gaps that need to be bridged, and deep learning techniques must be improved to increase accuracy by strengthening the neural network's handling of scarce datasets and its generalisation ability in model building. To solve these problems, this study proposes a hybrid model for Arabic word recognition that adapts a deep convolutional neural network (DCNN) to work as a classifier, combined with a generative adversarial network (GAN) working as a data augmentation technique, to develop a robust hybrid model with improved accuracy and generalisation ability. Each proposed model is separately evaluated and compared with other state-of-the-art models. These models are tested on the Arabic printed text image (APTI) dataset. The proposed hybrid deep learning model shows excellent accuracy, with a score of 99.76% compared to 94.81% for the proposed DCNN model on the APTI dataset. The proposed model shows highly competitive performance and enhanced accuracy compared to existing state-of-the-art Arabic printed word recognition models. The results demonstrate that the generalisation of the networks and the handling of overfitting have also improved. This study's output is comparable to other competitive models and contributes an enhanced Arabic recognition model to the body of knowledge.

Author 1: Yazan M. Alwaqfi
Author 2: Mumtazimah Mohamad
Author 3: Ahmad T. Al-Taani
Author 4: Nazirah Abd Hamid

Keywords: Deep learning; convolutional neural network; generative adversarial network; Arabic recognition; image processing

Download PDF

Paper 66: Quantum Cryptography Experiment using Optical Devices

Abstract: The study of quantum cryptography is of great interest. A straightforward and reliable quantum experiment is presented in this paper. A half-wave plate in linearly polarized light makes up a simplified polarization rotator: when the half-wave plate is rotated, the polarization rotates by twice the angle between the plate's fast axis and the polarization plane. Here, a message-sharing experiment is conducted to demonstrate quantum communication between parties. The unitary transformation is performed step by step using half-wave plates represented by Mueller matrices. A simulation created with Python has been used to test the proposed protocol's implementation. Python was chosen because it can mathematically imitate quantum superposition states.
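
The angle-doubling rule quoted above can be checked numerically with the standard Mueller matrix of an ideal half-wave plate whose fast axis sits at angle θ. This is a generic polarization-optics sketch, not the paper's simulation code.

```python
import math

def hwp_mueller(theta):
    """Mueller matrix of an ideal half-wave plate, fast axis at angle theta."""
    c, s = math.cos(4 * theta), math.sin(4 * theta)
    return [[1, 0, 0, 0],
            [0, c, s, 0],
            [0, s, -c, 0],
            [0, 0, 0, -1]]

def apply(matrix, stokes):
    """Multiply a 4x4 Mueller matrix by a Stokes vector."""
    return [sum(matrix[i][j] * stokes[j] for j in range(4)) for i in range(4)]

def linear_stokes(phi):
    """Stokes vector of fully linearly polarized light at angle phi."""
    return [1.0, math.cos(2 * phi), math.sin(2 * phi), 0.0]
```

Sending horizontal polarization (0°) through a half-wave plate at 30° yields light polarized at 60°, i.e. the polarization plane rotates by twice the plate angle.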

Author 1: Nur Shahirah Binti Azahari
Author 2: Nur Ziadah Binti Harun

Keywords: Half-wave plate; polarizer; photon beam splitter; Stokes vector

Download PDF

Paper 67: Analysis of Medical Slide Images Processing using Depth Learning in Histopathological Studies of Cerebellar Cortex Tissue

Abstract: Today, with the advancement of science and technology, artificial intelligence evolves and grows along with human beings. Clinical specialists rely only on their knowledge and experience, as well as the results of complex and time-consuming clinical trials, despite the inevitable human errors of diagnostic work. For malignant and dangerous diseases, the use of machine learning makes it clear that these techniques have the ability and capacity to help diagnose diseases correctly, reduce human error, improve diagnosis, and start treatment as soon as possible. Image processing and artificial intelligence are widely used in medicine, including stereology and histopathology. One of the essential activities for diagnosing disease using artificial intelligence and machine learning is the segmentation (fragmentation) and classification of medical images, which supports diagnosis based on patient images obtained from medical devices. In this article, we have worked on classifying medical histopathological images of brain tissue. The images are not of good quality due to sampling with standard equipment, and an attempt is made to improve their quality through preprocessing. All images are segmented using the U-Net algorithm. To improve classification performance, the segmented images, rather than the raw images, are used to classify images into two classes, normal and abnormal. The dataset used in this study contains only a small number of images; since a convolutional neural network algorithm is used to extract features and classify the images, more images are needed. Therefore, a data augmentation technique is used to overcome this problem. Finally, the convolutional neural network is used to extract features from the images and classify the segmented images. Experimental results show that the proposed method performs better than other existing methods.

Author 1: Xiang-yu Zhang
Author 2: Xiao-wen Shi
Author 3: Xing-bo Zhang

Keywords: Image processing; fragmentation of images; machine learning; image classification; stereological; histopathology

Download PDF

Paper 68: The Cloud-powered Hybrid Learning Process to Enhance Digital Natives’ Analytical Reading Skills

Abstract: Analytical reading is a necessary cognitive skill for advancing to other skills required in the digital age. Thailand is focused on instructional development and the use of digital media to enhance digital natives' analytical reading skills, which will assist learners of all ages in adapting effectively and quickly to changes in the digital environment. After the COVID-19 pandemic, educational institutions in Thailand have begun to embrace a hybrid learning approach like never before. The limitation of the existing learning process for boosting digital natives’ analytical reading skills is the lack of integration between reading techniques, hybrid pedagogies, and emerging learning technologies to provide learners with seamless learning experiences. Thus, this study aims to propose the Cloud-powered Hybrid Learning process (Cp-HL process) to enhance digital natives’ analytical reading skills. The research methodology consisted of two main stages: 1) learning process development; and 2) learning process evaluation. The developed Cp-HL process had four main learning phases: (1) preparation for hybrid learning; (2) presentation for interactive learning; (3) practice with analytical reading; and (4) progress reports on analytical reading skills. All the experts agreed that the newly developed Cp-HL process performed extremely well in terms of overall suitability.

Author 1: Sakolwan Napaporn
Author 2: Sorakrich Maneewan
Author 3: Kuntida Thamwipat
Author 4: Vitsanu Nittayathammakul

Keywords: Hybrid learning; cloud-powered learning tools; learning process; analytical reading skills; digital natives

Download PDF

Paper 69: A Model for Detecting Fungal Diseases in Cotton Cultivation using Segmentation and Machine Learning Approaches

Abstract: This research details a model for detecting fungal diseases in cotton cultivation via image-processing techniques applied to images of cotton leaves. The work involved developing a model from a set of preprocessed data, formulating the developed model, and simulating and evaluating it. The image data were collected from an online data repository consisting of images of cotton leaves infected with fungal diseases and normal leaf images. In addition, further images of infected and uninfected cotton leaves were collected in cotton production fields in the Ségbana region of the Benin Republic. The model was formulated based on the watershed segmentation technique, applying an edge detection algorithm and K-means clustering, with a Support Vector Machine (SVM) for classification. The simulation was done using MATLAB with Image Processing Toolbox 9.4. The results gave an accuracy of 99.05%, specificity of 90%, misclassification rate of 0.95%, recall of 99.5%, and precision of 99.5%. In addition, the best results were obtained with less computational effort and in under a minute, showing the efficiency of the image processing technique for detecting and classifying infected and uninfected leaves. It was concluded that this approach can be applied to detect fungal diseases on cotton leaves to promote the production and harvest of good quality cotton and valuable cotton products.
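
K-means clustering is used here to separate pixel populations before classification. The paper clusters full colour images in MATLAB; the toy 1-D sketch below only illustrates the idea on grayscale intensities (the sample values are invented).

```python
def kmeans_1d(values, k=2, iters=20):
    """Toy 1-D k-means, e.g. splitting leaf pixels into dark lesion vs bright
    healthy intensity groups."""
    # Simple initialization: extremes for k=2, otherwise the k smallest values.
    centroids = [min(values), max(values)] if k == 2 else sorted(values)[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each value to its nearest centroid.
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Recompute centroids; keep the old one if a cluster emptied.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids
```

On real images the same assignment step runs over all pixels (and typically colour channels), yielding the cluster map that watershed and edge detection then refine.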

Author 1: Odukoya O. H
Author 2: Aina S
Author 3: Dégbéssé F. W

Keywords: Fungal diseases; watershed segmentation; SVM; K-means; Edge Detection algorithm

Download PDF

Paper 70: Deep Learning Models for the Detection of Monkeypox Skin Lesion on Digital Skin Images

Abstract: This study investigates the accuracy of deep learning models in the detection of Monkeypox. The disease is relatively new and difficult for physicians to detect. Skin image data were obtained from Google via web scraping with Python’s BeautifulSoup, SERP API, and requests libraries. The images were scrutinized by professional physicians to determine their validity and classification. The researcher extracted the images’ features using two CNN models, GoogLeNet and ResNet50. Feature selection from the images involved principal component analysis. Classification employed Support Vector Machines, ResNet50, VGG-16, SqueezeNet, and InceptionV3 models. The results showed that all the models performed roughly the same; however, the most effective model was VGG-16 (accuracy = 0.96, F1-score = 0.92). This affirms the usefulness of artificial intelligence in the detection of Monkeypox. Subject to the approval of national health authorities, the technology can be used to help detect the disease faster and more conveniently. If integrated into a mobile application, it can enable members of the public to self-diagnose before seeking official diagnoses from approved hospitals. The researcher recommends further research into the models and the building of bigger image databases to power more reliable analyses.

Author 1: Othman A. Alrusaini

Keywords: Monkeypox; digital skin images; artificial intelligence; deep learning; convolutional neural networks; VGG-16

Download PDF

Paper 71: Upgraded Very Fast Decision Tree: Energy Conservative Algorithm for Data Stream Classification

Abstract: Traditional machine learning (ML) techniques model knowledge using static datasets. With the increased use of the Internet in today's digital world, a massive amount of data is generated at an accelerated rate and must be handled as soon as it arrives, because it is continuous and cannot be stored for long periods. Various methods exist for mining data from streams. When developing such methods, the machine learning community has put accuracy and execution time first; several studies have also taken energy consumption into consideration when evaluating data mining methods. This work concentrates on the Very Fast Decision Tree, the most widely used technique in data stream classification, despite the fact that it wastes a substantial amount of energy on trivial calculations. The research presents a mechanism for improving the algorithm's energy usage and limiting its computational resources without compromising its efficiency. The mechanism has two stages: the first eliminates a set of bad features that increase computational complexity and waste energy, and the second groups the good features into a candidate set to be used, instead of all the attributes, in the next iteration. Experiments were conducted on real-world benchmark and synthetic datasets to compare the proposed method with state-of-the-art algorithms from previous works. The proposed algorithm works considerably better and faster with less energy while maintaining accuracy.
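
The Very Fast Decision Tree decides when enough stream examples have been seen to commit to a split using the Hoeffding bound, ε = sqrt(R² ln(1/δ) / (2n)), where R is the range of the split criterion, δ the allowed failure probability, and n the number of examples observed. A minimal sketch of that test (variable names are illustrative):

```python
import math

def hoeffding_bound(value_range: float, delta: float, n: int) -> float:
    """epsilon such that, with probability at least 1 - delta, the true mean
    lies within epsilon of the mean observed over n samples."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def can_split(best_gain: float, second_gain: float,
              value_range: float, delta: float, n: int) -> bool:
    """VFDT split rule: split once the gap between the best and second-best
    attributes' gains exceeds epsilon."""
    return (best_gain - second_gain) > hoeffding_bound(value_range, delta, n)
```

Because ε shrinks as 1/√n, the tree can defer a split until the evidence is statistically sufficient, which is also where the energy cost lies: gains are recomputed over all attributes at each check, the waste the proposed feature-elimination stages target.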

Author 1: Mai Lefa
Author 2: Hatem Abd-Elkader
Author 3: Rashed Salem

Keywords: Classification; energy consumption; Hoeffding bound; Information gain; massive online analysis; stream data; very fast decision tree

Download PDF

Paper 72: Classification Model for Diabetes Mellitus Diagnosis based on K-Means Clustering Algorithm Optimized with Bat Algorithm

Abstract: Diabetes mellitus is a disease characterized by abnormal glucose homeostasis resulting in elevated blood sugar. According to data from the International Diabetes Federation (IDF), Indonesia ranks 7th among the 10 countries with the highest number of diabetes mellitus patients in the world; the prevalence of diabetes mellitus in Indonesia reached 11.3 percent, or 10.7 million sufferers, in 2019. Prevention, risk analysis, and early diagnosis of diabetes mellitus are necessary to reduce the impact of the disease and its complications. Clustering is one of the methods that can be used to diagnose and analyze the risk of diabetes mellitus. The K-means clustering algorithm is the most commonly used clustering algorithm because it is easy to implement and run, fast to compute, and easy to adapt. However, this method often gets stuck in local optima. This problem can be solved by combining the K-means clustering algorithm with a global optimization algorithm that can find the global optimum among many local optima, does not require derivatives, is robust, and is easy to implement. The Bat Algorithm (BA) is a global optimization method in the swarm intelligence class. BA automatically zooms in on a promising solution, accompanied by a shift from exploration to intensive local exploitation. Against this background, this article proposes a classification model for diagnosing diabetes mellitus based on the K-means clustering algorithm optimized with BA. The experimental results show that K-means clustering optimized by BA performs better than plain K-means clustering on all evaluation metrics, but its computational time is higher.
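
In the standard Bat Algorithm, each bat updates a frequency, a velocity toward the current best solution, and its position, while loudness decreases and the pulse emission rate increases as bats converge, shifting from exploration to local exploitation. The sketch below is a minimal generic BA minimizing a toy objective; in the paper it would instead search over K-means centroid positions, and all parameter values here are illustrative.

```python
import math
import random

def bat_algorithm(objective, dim, bounds, n_bats=20, n_iter=200,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9, seed=1):
    """Minimal Bat Algorithm sketch for continuous minimization."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    loud = [1.0] * n_bats    # loudness A_i, decreases on acceptance
    rate = [0.5] * n_bats    # pulse emission rate r_i, increases over time
    best = min(pos, key=objective)[:]
    for t in range(n_iter):
        for i in range(n_bats):
            # Frequency-tuned velocity update toward the global best.
            f = f_min + (f_max - f_min) * rng.random()
            vel[i] = [v + (x - b) * f for v, x, b in zip(vel[i], pos[i], best)]
            cand = [min(hi, max(lo, x + v)) for x, v in zip(pos[i], vel[i])]
            if rng.random() > rate[i]:
                # Local random walk around the best solution.
                avg_loud = sum(loud) / n_bats
                cand = [min(hi, max(lo, b + 0.01 * avg_loud * rng.gauss(0, 1)))
                        for b in best]
            # Accept improvements with probability tied to loudness.
            if rng.random() < loud[i] and objective(cand) < objective(pos[i]):
                pos[i] = cand
                loud[i] *= alpha
                rate[i] = 0.5 * (1 - math.exp(-gamma * (t + 1)))
            if objective(pos[i]) < objective(best):
                best = pos[i][:]
    return best

sphere = lambda x: sum(v * v for v in x)
best = bat_algorithm(sphere, dim=2, bounds=(-5.0, 5.0))
```

To hybridize with K-means as the paper describes, each bat's position would encode the k centroids and the objective would be the clustering's within-cluster sum of squared distances.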

Author 1: Syaiful Anam
Author 2: Zuraidah Fitriah
Author 3: Noor Hidayat
Author 4: Mochamad Hakim Akbar Assidiq Maulana

Keywords: Diabetes mellitus; disease diagnosis methods; k-means clustering algorithm; optimization; bat algorithm

Download PDF

Paper 73: Descriptive Analytics and Interactive Visualizations for Performance Monitoring of Extension Services Programs, Projects, and Activities

Abstract: Providing universities with high-technology-enabled automation tools to support administrative decision-making processes will enable them to achieve their objectives. For an institution to succeed in its everyday tasks, it should adopt emerging and modernized management services constituted, among others, by cloud, mobile, and business analytics technology. With this, the efficiency of the institution's operations and management is ensured. This study aims to develop a system with descriptive analytics, named MET Online Services, that automates and optimizes the monitoring of extension services key performance indicators (KPIs) in order to help the institution make better, data-driven decisions. The dashboards and interactive visualizations of the developed system provide quick access to the real-time progress of extension services programs, projects, and activities. Interpretation of the results showed that the developed system is feasible for implementation, proven to be fully functional, and passed the quality software standards of a Certified Software Quality Assurance Specialist. As the developed system satisfied the users' expectations and requirements, it would be an effective tool for the institution, the extension services unit, and the community to make better strategic decisions and continuously deliver quality services.

Author 1: Noelyn M. De Jesus
Author 2: Lorissa Joana E. Buenas

Keywords: Business analytics; descriptive analytics; dashboards; interactive visualizations; extension services; monitoring; key performance indicators; KPIs; community

Download PDF

Paper 74: Arabic Stock-News Sentiments and Economic Aspects using BERT Model

Abstract: Stock-market news sentiment analysis (SA) aims to identify the attitudes toward companies’ stocks expressed in news published on official platforms. It supports investors in making the right decisions and analysts in their evaluations. However, research on Arabic SA is limited compared to that on English SA due to the complexity and the limited corpora of the Arabic language. This paper develops a sentiment model to predict the polarity of Arabic stock news in microblogs based on machine learning and deep learning approaches. It also aims to extract the reasons that lead to the polarity categorization, as the main economic causes or aspects, based on semantic unity. Therefore, this paper presents an Arabic SA approach based on the logistic regression model and the Bidirectional Encoder Representations from Transformers (BERT) model. The proposed model is used to classify articles as positive, negative, or neutral. It was trained on data collected from an official Saudi stock-market article platform that was later preprocessed and labeled. Moreover, the economic reasons for the articles, divided by semantic unity into seven economic aspects to highlight the polarity, were investigated. The supervised BERT model obtained 88% article classification accuracy based on SA, and the unsupervised mean Word2Vec encoder obtained 80% economic-aspect clustering accuracy.

Author 1: Eman Alasmari
Author 2: Mohamed Hamdy
Author 3: Khaled H. Alyoubi
Author 4: Fahd Saleh Alotaibi

Keywords: Machine learning; deep learning; classification; prediction; statements

Download PDF

Paper 75: e-Government Usability Evaluation: A Comparison between Algeria and the UK

Abstract: e-Government holds the keys to improving government services provided to citizens and the private sector. Although Algeria is the largest country in Africa and has one of the most thriving economies on the continent, its EGDI ranking was a remarkable 120th in the latest UN e-government survey. This inspired the researcher to investigate the relationship between the success factors of e-services in developed countries and their counterparts in developing countries. The main aim of this study is to explore the factors that influence the level of usability of e-government services in developing and developed countries against a set of specific guidelines, to provide means for improving these services in developing countries. The researcher selectively extracted three guideline categories from the Research-Based Web Design and Usability Guidelines as a means for expert evaluation of 10 Algerian e-government services compared to British e-government services. Our results show that Algerian e-services lack mostly in Use Frames when Functions Must Remain Accessible, Highlighting Information, and Graphics Should Not Look like Banner Ads (belonging to Page Layout, Text Appearance, and Graphics, Images & Multimedia, respectively), whereas UK e-services scored highly across all three categories. These findings complement the UN e-government survey and identify the sub-categories that developing countries need to pay more attention to in order to provide more reliable and robust e-services to their users and citizens. Furthermore, this study proposes that the Research-Based Web Design & Usability Guidelines can be converted into an evaluation tool for evaluators to easily assess the usability of a website.
The combination of the relative importance, chapters, and individual guidelines gathered from the Research-Based Web Design & Usability Guidelines, along with the evaluation of these individual guidelines by evaluators, will serve as an integral tool for developers in building e-government services that satisfy users.

Author 1: Mohamed Benaida

Keywords: Human computer interaction; usability evaluation; web design; e-Government; user satisfaction

Download PDF

Paper 76: The Effect of Artificial Neural Network Towards the Number of Particles of Rao-Blackwellized Particle Filter using Laser Distance Sensor

Abstract: The Rao-Blackwellized particle filter (RBPF) algorithm aims to solve the Simultaneous Localization and Mapping (SLAM) problem. The performance of RBPF depends on the number of particles: the higher the number of particles, the better the performance. However, a higher number of particles requires more memory and computational cost. The number of particles can be reduced by using a high-end sensor, achieving high RBPF performance with fewer particles, but such sensors make the robot expensive. A robot can instead be equipped with a low-cost sensor to reduce its overall cost; however, low-cost sensors make it challenging to create accurate maps due to their low measurement accuracy. For that reason, RBPF is integrated with an artificial neural network (ANN) to interpret noisy sensor measurements and achieve better accuracy in SLAM. In this paper, RBPF integrated with ANN is evaluated using a Turtlebot3 in a real-world experiment. The experiment compares the resulting maps estimated by RBPF with ANN and RBPF without ANN. The results show that RBPF with ANN increased SLAM performance by 25.17% and achieved a closed-loop map in 10 out of 10 trials using only 30 particles, whereas RBPF without ANN needed 400 particles to achieve a closed-loop map. In conclusion, SLAM performance can be improved by integrating the RBPF algorithm with an ANN while reducing the number of particles.
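Why the particle count matters in an RBPF can be seen in the weighting/resampling core of any particle filter. The sketch below is a generic illustration, not the paper's system: the Gaussian measurement model, sigma, and toy weights are assumptions, and the ANN correction the paper adds would act on the measurements before weighting.

```python
import math
import random

def likelihood(z, x, sigma=0.5):
    # Gaussian measurement model: particle weight factor for a particle
    # state x given observation z (w_i is multiplied by this each step)
    return math.exp(-0.5 * ((z - x) / sigma) ** 2)

def effective_sample_size(weights):
    # N_eff = 1 / sum(w_i^2); a low value signals weight degeneracy,
    # the failure mode that extra particles (or better sensing) fight
    return 1.0 / sum(w * w for w in weights)

def systematic_resample(particles, weights, rng):
    # draw n particles with one random offset and evenly spaced pointers
    n = len(particles)
    cum, cums = 0.0, []
    for w in weights:
        cum += w
        cums.append(cum)
    start = rng.random() / n
    out, j = [], 0
    for i in range(n):
        pos = start + i / n
        while j < n - 1 and cums[j] < pos:
            j += 1
        out.append(particles[j])
    return out
```

With noisy low-cost sensing the likelihoods are flat and N_eff collapses quickly, which is why either many particles or a learned measurement correction is needed.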

Author 1: Amirul Jamaludin
Author 2: Norhidayah Mohamad Yatim
Author 3: Zarina Mohd Noh

Keywords: SLAM; occupancy grid map; Rao-Blackwellized particle filter; artificial neural network; laser distance sensor

Download PDF

Paper 77: Expanding Louvain Algorithm for Clustering Relationship Formation

Abstract: Community detection is a method to determine and discover the existence of clusters or groups that share the same interests, hobbies, purposes, projects, lifestyles, location, or profession. Several community detection algorithms have been developed, such as the strongly connected components algorithm, weakly connected components, label propagation, triangle count and average clustering coefficient, spectral optimization, and the Newman and Louvain modularity algorithms. The Louvain method is the most efficient algorithm for detecting communities in large-scale networks. The expansion of the Louvain algorithm is carried out by forming a community based on connections between nodes (users), developed by adding weights to nodes to form clusters, referred to as clustering relationships. The next step is to perform weighting based on user relationships using a weighting algorithm that considers user account activity, such as users giving each other recommendation comments, or to decide whether a relationship between a follower and a followee exists or not. The results of this study are a best modularity of 0.879 and a cluster test score of 0.776.
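The modularity score the abstract reports (0.879) is the quantity the Louvain method maximizes. A minimal sketch of computing modularity Q for a given partition of an unweighted, undirected graph — the toy graph is illustrative, not the paper's data:

```python
def modularity(edges, community):
    # Q = sum over communities c of [ e_c/m - (d_c/(2m))^2 ]
    # e_c: intra-community edge count, d_c: total degree in c, m: total edges
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    inside = sum(1 for u, v in edges if community[u] == community[v])
    tot = {}
    for node, d in deg.items():
        c = community[node]
        tot[c] = tot.get(c, 0) + d
    return inside / m - sum(t * t for t in tot.values()) / (4.0 * m * m)
```

Louvain greedily moves nodes between communities whenever the move increases this Q, then contracts communities and repeats.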

Author 1: Murniyati
Author 2: Achmad Benny Mutiara
Author 3: Setia Wirawan
Author 4: Tristyanti Yusnitasari
Author 5: Dyah Anggraini

Keywords: Community detection; Louvain algorithm; modularity; network clustering relationship

Download PDF

Paper 78: Implementation Failure Recovery Mechanism using VLAN ID in Software Defined Networks

Abstract: Link failure is a common problem in software-defined networks. The most commonly proposed approach for failure recovery is to use pre-configured backup paths in the switch. However, this may increase the number of traffic packets after the traffic is rerouted through the backup path. In this research, the proposed method implements a failure recovery mechanism by utilizing the fast-failover group feature in OpenFlow to store pre-configured backup paths in the switch. Disrupted traffic packets are labeled with a VLAN ID, which can be used as a matching field. Due to this capability, the VLAN ID can aggregate traffic packets under one table entry that uses it as a match field in the forwarding rules. Through implementation and evaluation, it is shown that the system can build a backup path in the switch and reroute the disrupted traffic onto it. Based on the parameters used, the results show that the proposed approach achieves a recovery time of around 1.02-1.26 ms. Additionally, it reduces the number of traffic packets and has a low amount of packet loss compared to previous methods.
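The aggregation idea — tag all disrupted flows with one VLAN ID so a single backup entry matches them — can be sketched with a toy flow-table model. This is a conceptual illustration, not OpenFlow code: the port numbers, VLAN value, and dict-based packets are hypothetical.

```python
# Toy model of an OpenFlow-style flow table. A VLAN ID pushed onto packets
# of a disrupted path aggregates them so ONE backup entry matches them all,
# instead of one backup entry per affected flow.

PRIMARY, BACKUP = 1, 2        # output port numbers (hypothetical)
FAILOVER_VLAN = 100           # VLAN ID labeling rerouted traffic (hypothetical)

flow_table = [
    # (match fields, output port) — first matching entry wins
    ({"vlan_id": FAILOVER_VLAN}, BACKUP),  # single entry for all rerouted flows
    ({}, PRIMARY),                         # default: primary path
]

def tag_on_failure(pkt, primary_up):
    # Fast-failover group behaviour: when the watched primary port goes down,
    # push the failover VLAN tag so downstream switches hit the backup entry.
    if not primary_up:
        pkt = dict(pkt, vlan_id=FAILOVER_VLAN)
    return pkt

def forward(pkt):
    for match, port in flow_table:
        if all(pkt.get(k) == v for k, v in match.items()):
            return port
    return None
```

Matching on the VLAN tag rather than per-flow headers is what keeps the number of forwarding rules (and hence table growth after a failure) low.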

Author 1: Heru Nurwarsito
Author 2: Galih Prasetyo

Keywords: Software-defined networks; openflow; link failure; failure recovery; VLAN ID; fast failover

Download PDF

Paper 79: Analysis of the Artificial Neural Network Approach in the Extreme Learning Machine Method for Mining Sales Forecasting Development

Abstract: Forecasting is an accurate indicator to support management decisions. This study aimed at mining sales forecasting for an Indonesian consumer goods company whose business warehouse is engaged in the dynamic movement of large data, using the Artificial Neural Network method. Previously, sales forecasting used a traditional method of inputting data and improvising simple patterns from collected historical sales and remaining stock. In this study, several data variables in the business warehouse were employed for sales forecasting. The study also used a qualitative method to investigate the quality of data that cannot be measured quantitatively. The results showed a Mean Square Error of 0.02716 in forecasting sales. The average accuracy generated by the Extreme Learning Machine after nine data tests is 111%. This result shows an opportunity for the company to further analyze the sales profit growth potential. The predicted value generated by the Extreme Learning Machine for the last three months reaches 132%. The company's improved decision-making and enlarged potential production line demonstrate the usefulness of this study.

Author 1: Hendra Kurniawan
Author 2: Joko Triloka
Author 3: Yunus Ardhan

Keywords: Artificial neural network; business warehouse; extreme learning machine; mining sales forecasting

Download PDF

Paper 80: Deca Convolutional Layer Neural Network (DCL-NN) Method for Categorizing Concrete Cracks in Heritage Building

Abstract: It is critical to develop a method for detecting cracks in the concrete structures of historic buildings, both to preserve the buildings and to protect visitors from a possible collapse. The purpose of this research is to determine the best method for identifying cracks in the concrete surfaces of old buildings by using crack images of old buildings. The varied surface textures, crack irregularities, and background complexity that distinguish crack detection from other forms of image detection research present challenges in crack detection for old buildings. This study presents a framework for detecting concrete cracks in old buildings in Semarang's old town using a modified Convolutional Neural Network with a combination of several convolutional layers. It employs ten convolutional layers (the Deca Convolutional Layer Neural Network, DCL-NN) to provide feature mapping for images of concrete cracks in old buildings in a preservation area. The study also compares commonly used machine learning models such as KNeighbors (n_neighbors=3), Random Forest, Support Vector Machine (SVM), and ExtraTrees (n_estimators=10), and pretrained CNN models such as VGG19, Xception, and MobileNet. Six performance indicators are used to validate each model's performance: accuracy, recall, precision, F1-score, Matthews Correlation Coefficient (MCC), and Cohen's Kappa (CK). The data set comprises primary data obtained from crack and normal images of several buildings in Semarang's old town. For the crack class, DCL-NN achieves an accuracy of 98.87%, recall of 99.40%, precision of 98.33%, F1 of 98.86%, MCC of 97.74%, and CK of 98.86%. The study found that the ten convolution layers yield higher classification performance than the comparison machine learning and CNN models and are more effective in detecting cracks in concrete structures.
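The evaluation indicators listed in the abstract can all be derived from one binary confusion matrix. A self-contained sketch of the standard formulas (the counts in the usage test are made up, not the paper's results):

```python
import math

def metrics(tp, fp, fn, tn):
    # standard binary-classification indicators from a confusion matrix
    n = tp + fp + fn + tn
    acc = (tp + tn) / n
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)
    # Matthews Correlation Coefficient
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_o = acc
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (p_o - p_e) / (1 - p_e)
    return {"accuracy": acc, "precision": prec, "recall": rec,
            "f1": f1, "mcc": mcc, "kappa": kappa}
```

MCC and kappa are included alongside accuracy because they stay informative when the crack/no-crack classes are imbalanced.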

Author 1: Dinar Mutiara Kusumo Nugraheni
Author 2: Andi Kurniawan Nugroho
Author 3: Diah Intan Kusumo Dewi
Author 4: Beta Noranita

Keywords: Cracks; concrete; Deca-CNN; features mapping; performance

Download PDF

Paper 81: Convolutional Transformer based Local and Global Feature Learning for Speech Enhancement

Abstract: Speech enhancement (SE) is an important method for improving speech quality and intelligibility in noisy environments where the received speech is severely distorted by noise. An efficient speech enhancement system relies on accurately modelling the long-term dependencies of noisy speech. Deep learning has greatly benefited from the use of transformers, where long-term dependencies can be modelled more efficiently with multi-head attention (MHA) using sequence similarity. Transformers frequently outperform recurrent neural network (RNN) and convolutional neural network (CNN) models in many tasks while utilizing parallel processing. In this paper we propose a two-stage convolutional transformer for speech enhancement in the time domain. The transformer considers global information as well as parallel computing, resulting in a reduction of long-term noise. Unlike the two-stage transformer neural network (TSTNN), the proposed work uses different transformer structures for the intra- and inter-transformers to extract the local as well as global features of noisy speech. Moreover, a CNN module is added to the transformer so that short-term noise can be reduced more effectively, based on the ability of CNNs to extract local information. The experimental findings demonstrate that the proposed model outperformed existing models in terms of STOI (short-time objective intelligibility) and PESQ (perceptual evaluation of speech quality).
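The MHA mechanism the abstract relies on reduces, per head, to scaled dot-product attention: softmax(QKᵀ/√d)V. A single-head, pure-Python sketch of that core (matrices are toy values; a real model computes Q, K, V with learned projections and runs many heads in parallel):

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    # scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)          # similarity-based mixing weights
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out
```

Because each query attends over every key position at once, distant (long-term) context is reachable in one step, unlike an RNN's sequential recurrence.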

Author 1: Chaitanya Jannu
Author 2: Sunny Dayal Vanambathina

Keywords: Convolutional neural network; recurrent neural network; speech enhancement; multi-head attention; two-stage convolutional transformer; feed-forward network

Download PDF

Paper 82: Assessing User Interest in Web API Recommendation using Deep Learning Probabilistic Matrix Factorization

Abstract: In Web 2.0, things connected to the Internet not only manage the supply of data through devices but also control the commands that flow through them. The data collected by sensors is exposed on the Web for management through new computing models. Beyond enhancing sensing efficiency through simple IoT computing processes, this is used in many cases, for example video surveillance and improved, intelligent manufacturing. Every fragment of such a system is carefully maintained and supervised by software built from a large number of reusable components. An important part of this process is accessing web APIs from various public platforms in an efficient way. Developers use different APIs to integrate different IoT devices, and much of the deployment effort this requires is unnecessary. Obtaining well-configured target APIs makes it easy to know where and how to get started with the workflow approach, and rapid industrial development can be achieved through a powerful API approach. However, due to the massive spike in the number of available APIs, finding adequately powerful APIs, and combining them, has become a major challenge. Until now, only the relationships between users and APIs have typically been considered, which makes it difficult to extract contextual value from their interpretation, so better accuracy could not be obtained. The effect of the user's temporal aspect on the latent features derived from API contextual descriptions can be captured by the Deep Learning Probabilistic Matrix Factorization (DL-PMF) method, which improves the accuracy of API recommendation by considering the latent features of the user.
In this work, we use a CNN (Convolutional Neural Network) for web elements such as APIs, and an LSTM (Long Short-Term Memory) network with an attention mechanism to find hidden features that suit the tastes of users. Finally, the recommended results are evaluated in combination with PMF (Probabilistic Matrix Factorization). The experimental results of the combined DL-PMF method were found to be better than those of the previous PMF, ConvMF, and other methods, thus improving the recommendation accuracy.
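The PMF backbone underlying DL-PMF factorizes the user-API interaction matrix into low-rank latent vectors. A minimal SGD sketch of that backbone (the toy ratings, rank, learning rate, and regularization are illustrative; the paper's deep components would replace the random item/user priors with CNN- and LSTM-derived features):

```python
import math
import random

def train_pmf(ratings, n_users, n_items, k=4, lr=0.02, reg=0.02,
              epochs=800, seed=0):
    # ratings: list of (user, item, score) observations
    rng = random.Random(seed)
    U = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(a * b for a, b in zip(U[u], V[i]))
            err = r - pred
            for f in range(k):           # gradient step on both factors
                uu, vv = U[u][f], V[i][f]
                U[u][f] += lr * (err * vv - reg * uu)
                V[i][f] += lr * (err * uu - reg * vv)
    return U, V

def predict(U, V, u, i):
    return sum(a * b for a, b in zip(U[u], V[i]))
```

Unobserved user-API pairs are then scored by the same dot product, and the top-scoring APIs become the recommendations.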

Author 1: T. Ramathulasi
Author 2: M. Rajasekhara Babu

Keywords: Implicit feature; API’s recommendation; IoT; collaborative filtering; matrix factorization

Download PDF

Paper 83: Unsupervised Learning-based New Seed-Expanding Approach using Influential Nodes for Community Detection in Social Networks

Abstract: Several recent studies focus on community structure due to its importance in analyzing and understanding complex networks. Communities are groups of nodes that are highly connected among themselves and not much connected to the rest of the network. Community detection helps us understand the properties of the dynamic processes within a network. In this paper, we propose a novel seed-centric approach based on TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution) and the k-means algorithm to find communities in a social network. TOPSIS is used to find the seeds within the network by exploiting the benefits of multiple centrality measures. Using a single centrality to determine seeds within a network, as in classical community detection algorithms, does not in the majority of cases yield the best selection of seeds. Therefore, we consider all centrality metrics as multiple attributes in TOPSIS and rank nodes based on TOPSIS' relative closeness. The Top-K nodes extracted by TOPSIS are considered seeds in the proposed approach. Afterwards, we apply the k-means algorithm using these seeds as starting centroids to detect and construct communities within the social network. The proposed approach is tested on a Facebook ego network and validated on the well-known Zachary karate club dataset, which has a ground-truth community structure. Experimental results on the Facebook ego network show that the dynamic k-means provides reasonable communities in terms of the distribution of nodes. These results are confirmed on the Zachary karate club: two communities are detected with higher normalized mutual information (NMI) and Adjusted Rand Index (ARI) than other seed-centric algorithms such as Yasca and LICOD. The proposed method is effective, feasible, and provides better results than other available state-of-the-art community detection algorithms.
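The TOPSIS step the abstract describes — ranking nodes by relative closeness over several centrality columns — can be sketched directly. The toy matrix, equal weights, and all-benefit criteria are assumptions for illustration; the paper's actual centrality values and weighting may differ.

```python
import math

def topsis_rank(matrix, weights=None, benefit=None):
    # rows: alternatives (nodes); columns: criteria (centrality metrics)
    n, m = len(matrix), len(matrix[0])
    weights = weights or [1.0 / m] * m
    benefit = benefit or [True] * m
    # vector-normalize each column, then apply criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(m)]
    X = [[row[j] / norms[j] * weights[j] for j in range(m)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*X))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*X))]
    scores = []
    for row in X:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))  # relative closeness in [0, 1]
    return scores
```

The Top-K nodes by this score become the k-means seed centroids, so the influential-node selection reflects all centralities at once rather than any single one.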

Author 1: Khaoula AIT RAI
Author 2: Mustapha MACHKOUR
Author 3: Jilali ANTARI

Keywords: Complex network; community detection; TOPSIS; seed-centric approach; ground-truth; k-means

Download PDF

Paper 84: Implementation of ICT Continuity Plan (ICTCP) in the Higher Education Institutions (HEI’S): SUC’S Awareness and its Status

Abstract: The purpose of this study was to assess the level of awareness of the management and the personnel within the academic institution and to identify the implementation status of the ICTCP in the implementing SUCs. The BCM Framework was utilized in this study as the model for identifying the level of awareness of the personnel within the institution about the ICTCP. The research respondents were personnel employed in the different State Universities and Colleges (SUCs) within the province of Negros Occidental. The respondents were selected through random sampling and were provided with a Google Form link to answer the survey questionnaire. A total of thirty-five (35) IT personnel were included in the study's sample size. It was found that most SUCs have consistent ICT system uptime because they can continuously provide services; surprisingly, this is independent of an ICT business continuity plan. Most SUCs do not entirely implement their ICT business continuity plans. Lastly, it is recommended that SUCs can significantly enhance service delivery if ICT business continuity planning is taken seriously, adopted, and entirely carried out.

Author 1: Chester L. Cofino
Author 2: Ken M. Balogo
Author 3: Jefrey G. Alegia
Author 4: Michael Marvin P. Cruz
Author 5: Benjamin B. Alejado Jr
Author 6: Felicisimo V. Wenceslao Jr

Keywords: Business Continuity Plan (BCP); Information, Communication, and Technology Continuity Plan (ICTCP); State Universities and Colleges (SUCs); Business Continuity Management (BCM) Framework

Download PDF

Paper 85: Towards a Machine Learning-based Model for Automated Crop Type Mapping

Abstract: In the field of smart farming, automated crop type mapping is a challenging task for guaranteeing fast and automatic management of the agricultural sector. With the emergence of advanced technologies such as artificial intelligence and geospatial technologies, new concepts have been developed to provide realistic solutions for precision agriculture. The present study aims to present a machine learning-based model for automated crop-type mapping with high accuracy. The proposed model is based on the use of both optical and radar satellite images for the classification of crop types with machine learning algorithms: Random Forest and Support Vector Machine were employed to classify the time series of vegetation indices. Several indices extracted from both optical and radar data were calculated. Harmonic modelization was also applied to the optical indices, which were decomposed into harmonic terms to calculate the fitted values of the time series. The proposed model was implemented using the geospatial processing services of Google Earth Engine and tested in a case study with about 147 satellite images. The results show the annual variability of crops and allowed performing classifications and crop type mapping with an accuracy that exceeds the performance of other existing models.
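The harmonic decomposition of a vegetation-index time series can be sketched as a Fourier-style fit. This is a generic illustration, not the paper's Earth Engine pipeline: it assumes an evenly sampled series covering whole annual periods, where discrete orthogonality makes the projections the least-squares coefficients directly.

```python
import math

def harmonic_fit(y, n_harmonics=1):
    # least-squares fit of y_t ≈ a0 + Σ_h [a_h cos(2π h t/N) + b_h sin(2π h t/N)]
    # for evenly sampled data the Fourier projections ARE the coefficients
    N = len(y)
    a0 = sum(y) / N
    coeffs = []
    for h in range(1, n_harmonics + 1):
        a = 2.0 / N * sum(v * math.cos(2 * math.pi * h * t / N)
                          for t, v in enumerate(y))
        b = 2.0 / N * sum(v * math.sin(2 * math.pi * h * t / N)
                          for t, v in enumerate(y))
        coeffs.append((a, b))
    return a0, coeffs

def fitted(a0, coeffs, N):
    # reconstruct the smoothed (fitted) time series from the harmonic terms
    out = []
    for t in range(N):
        v = a0
        for h, (a, b) in enumerate(coeffs, start=1):
            v += (a * math.cos(2 * math.pi * h * t / N)
                  + b * math.sin(2 * math.pi * h * t / N))
        out.append(v)
    return out
```

The fitted values replace the noisy raw index series as classifier inputs, which is what makes phenological differences between crops separable by Random Forest or SVM.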

Author 1: Asmae DAKIR
Author 2: Fatimazahra BARRAMOU
Author 3: Omar Bachir ALAMI

Keywords: Smart farming; artificial intelligence; machine learning; precision agriculture; random forest; SVM

Download PDF

Paper 86: A Hybrid Model by Combining Discrete Cosine Transform and Deep Learning for Children Fingerprint Identification

Abstract: The use of fingerprints as a biometric identification tool for children was started in the late 19th century by Sir Galton. However, even after the span of two centuries, fingerprint identification is still not as mature for children as it is for adults. There is an increasing need for biometric identification of children because more than one million children go missing every year, as per the report of the International Centre for Missing and Exploited Children. This paper presents a robust method of child identification that combines Discrete Cosine Transform (DCT) features and machine learning classifiers with deep learning algorithms. The handcrafted fingerprint features are extracted using the mid- and high-frequency bands of the DCT coefficients. The Gaussian Naïve Bayes (GNB) classifier fits best among the machine learning classifiers to find the match score between training and testing images. Further, a transfer learning model is used to extract the deep features and to get the identification score. To make the model robust and accurate, score-level fusion of both models is performed. The proposed model is validated on two publicly available children's fingerprint databases, named the CMBD and NITG databases, and is compared with state-of-the-art methods. The rank-1 identification accuracy obtained with the proposed method is 99%, which is remarkable compared to the literature.
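The handcrafted-feature step — keeping mid/high-frequency DCT coefficients — can be sketched in 1-D for intuition (the paper operates on 2-D fingerprint images; the band fractions below are illustrative assumptions, not the paper's choice).

```python
import math

def dct_ii(x):
    # DCT-II: X_k = sum_n x_n * cos(pi/N * (n + 0.5) * k)
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k)
                for n in range(N))
            for k in range(N)]

def band_features(x, lo_frac=0.25, hi_frac=0.75):
    # keep mid-to-high-frequency coefficients; the low band mostly carries
    # average intensity/illumination rather than ridge detail
    X = dct_ii(x)
    N = len(X)
    return X[int(N * lo_frac):int(N * hi_frac)]
```

Discarding the low band makes the feature vector less sensitive to lighting and pressure variation, which matters for small, low-contrast children's fingerprints.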

Author 1: Vaishali Kamble
Author 2: Manisha Dale
Author 3: Vinayak Bairagi

Keywords: Discrete Cosine Transform (DCT); Curve DCT; biometric recognition; machine learning; convolutional neural network; AlexNet

Download PDF

Paper 87: 2-D Deep Convolutional Neural Network for Predicting the Intensity of Seismic Events

Abstract: Machine learning has advanced rapidly in the last decade, promising to significantly change and improve the function of big data analysis in a variety of fields. Compared to traditional methods, machine learning provides significant advantages in complex problem solving, computing performance, uncertainty propagation and handling, and decision support. In this paper, we present a novel end-to-end strategy for improving the overall accuracy of earthquake detection by simultaneously improving each step of the detection pipeline. In addition, we propose a Conv2D convolutional neural network (CNN) architecture for processing seismic waveforms collected across a geophysical system. The proposed Conv2D method for earthquake detection was compared to various machine-learning approaches and state-of-the-art methods. All of the methods were trained and tested on real data collected in Kazakhstan from 1906 to 2022. The proposed model outperformed the other models with accuracy, precision, recall, and F-score of 63%, 82.4%, 62.7%, and 83%, respectively. Based on the results, it is possible to conclude that the proposed Conv2D model is useful for predicting real-world earthquakes in seismic zones.
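The basic operation a Conv2D layer applies to a waveform spectrogram or multi-channel seismic window is a 2-D cross-correlation with a learned kernel. A minimal valid-padding sketch (toy inputs; a real layer adds many channels, bias, and a nonlinearity):

```python
def conv2d(img, kernel):
    # 'valid' 2-D convolution (cross-correlation): slide the kernel over the
    # input and take the elementwise product-sum at each position
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out
```

Stacking such layers lets the network learn frequency-time patterns (e.g. P/S-wave onsets) directly from the raw 2-D representation instead of hand-picked features.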

Author 1: Assem Turarbek
Author 2: Yeldos Adetbekov
Author 3: Maktagali Bektemesov

Keywords: Earthquake; prediction; deep learning; machine learning; classification

Download PDF

Paper 88: User-Centered Design (UCD) of Time-Critical Weather Alert Application

Abstract: Weather alert applications can save precious lives in time-critical risk situations; however, even the most widely used applications may fall short in intuitive interface and content design, possibly due to limitations in users' participation in the design process and in the range of users considered. The objective of this study was to investigate whether the application of UCD principles and usability guidelines can improve the use of, and satisfaction with, time-critical weather alert apps by public and/or expert users. A prototype of a UCD-based weather alert application was developed and evaluated. Initially, thirty-two volunteers participated in the identification of the important features that led to the development of the prototype, and then the prototype was tested with another eighty participants (40 young and 40 elderly). The prototype includes five enhancements: auto-suggested location search, an all-inclusive interface for weather forecasts, message alerts, visual and intuitive map settings, and minimalism-oriented alert settings. The enhanced functionality was compared to similar functionality in existing commercial weather applications. Effectiveness (completion rate, error count, error severity, and error cause), efficiency (time to completion), and satisfaction (post-task and post-test surveys) were measured. The results showed that the enhancements significantly improved performance and satisfaction across both age groups compared to equivalent functionality in the existing apps. The Mann-Whitney U test showed a statistically significant difference (p<0.001) in task satisfaction and number of errors between the two apps across all tasks. Also, overall, young participants outperformed elderly participants with the existing apps, while both young and elderly participants performed very well with the enhanced app.
Therefore, the enhancements implemented through the UCD process and usability guidelines significantly improved performance and satisfaction across both age groups, facilitating the timely action necessary during a crisis.
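The Mann-Whitney U statistic used in the study's comparisons can be computed from rank sums. A minimal sketch (average ranks for ties, no tie correction of the variance; the sample data in the test is made up, not the study's):

```python
def mann_whitney_u(a, b):
    # U via rank sums: rank the pooled samples, sum group-a ranks,
    # then U1 = R1 - n1(n1+1)/2; report min(U1, U2)
    allv = sorted(a + b)
    ranks, i = {}, 0
    while i < len(allv):
        j = i
        while j < len(allv) and allv[j] == allv[i]:
            j += 1
        ranks[allv[i]] = (i + 1 + j) / 2.0   # average rank for tied values
        i = j
    r1 = sum(ranks[v] for v in a)
    u1 = r1 - len(a) * (len(a) + 1) / 2.0
    return min(u1, len(a) * len(b) - u1)
```

Because it compares ranks rather than raw values, the test suits the ordinal satisfaction scores and skewed error counts reported in the abstract.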

Author 1: Abdulelah M. Ali
Author 2: Abdulrahman Khamaj
Author 3: Ziho Kang
Author 4: Majed Moosa
Author 5: Mohd Mukhtar Alam

Keywords: User-centered design; time-critical weather alert apps; weather forecasts; map settings; message alert

Download PDF

Paper 89: Interventional Teleoperation Protocol that Considers Stair Climbing or Descending of Crawler Robots in Low Bit-rate Communication

Abstract: In the teleoperation of a crawler robot in a disaster-stricken enclosed space, distress of the crawler robot due to communication breakdown is a problem. We present a robot teleoperation system using LoRaWAN as a subcommunication infrastructure to solve this problem. In this system, the crawler robot is teleoperated using the subcommunication infrastructure to reach a place where wireless local area network (LAN) communication is possible. In this study, we assume an environment in which the crawler robot must ascend and descend stairs to evacuate to a place where wireless LAN communication is possible. In addition, the disaster-stricken environment is considered an environment where obstacles can appear suddenly, and the crawler robot has difficulty avoiding obstacles on the stairs. In this paper, we propose a teleoperation communication protocol that considers the risk of the sudden appearance of obstacles and confirm its effectiveness in evaluation experiments in a real environment.

Author 1: Tsubasa Sakaki
Author 2: Kei Sawai

Keywords: LoRaWAN; teleoperation; crawler robot; disaster-reduction activity; teleoperation protocol

Download PDF

Paper 90: Business Intelligence Data Visualization for Diabetes Health Prediction

Abstract: In today's environment, Business Intelligence (BI) is transforming the world at a rapid pace across domains. Business intelligence has been around for a long time, but when combined with technology, the results are astounding. BI also plays an important role in the healthcare domain. The Centers for Disease Control and Prevention (CDC) is the largest science-based, data-driven service provider in the country for public health protection. For over 70 years, the CDC has been using science to fight disease and keep families, businesses, and communities healthy. However, research indicates that the prevalence of diabetes in the US is rising alarmingly, and if diabetes is not treated, it can lead to life-threatening complications such as heart disease, loss of feeling, blindness, kidney failure, and amputations. This study was therefore conducted to analyze people's health conditions and daily lifestyles in order to predict which type of diabetes they would most likely be diagnosed with, through the implementation of business intelligence using a Tableau dashboard. Furthermore, background research is conducted on the CDC to understand their work, challenges, and opportunities. By the end of the project, the information obtained and visualized should be able to enhance business choices and support better decisions on controlling diabetes in the future.

Author 1: Samantha Siow Jia Qi
Author 2: Sarasvathi Nagalingham

Keywords: Diabetes; business intelligence; prediction; dashboard visualization; data analysis; centers for disease control and prevention

Download PDF

Paper 91: Augmented, Virtual and Mixed Reality Research in Cultural Heritage: A Bibliometric Study

Abstract: Heritage allows us to learn about important monuments and the traditions inherited from our ancestors. However, monuments often become partly ruined through natural wear and tear, and sometimes through attacks by invaders. To preserve cultural heritage virtually, many researchers have used augmented, virtual and mixed reality to bring ancient environments to life at heritage sites. This study aims to identify publications related to virtual, augmented and mixed reality in cultural heritage and to present a bibliometric analysis of these studies. The research articles on virtual, augmented, and mixed reality in cultural heritage are retrieved from the Scopus database. The analysis is performed using VOSviewer and covers parameters such as bibliographic coupling of countries, publications, journals and authors, and co-occurrences of author keywords. The analysis shows that augmented, virtual and mixed reality research in the domain of cultural heritage is mostly concentrated in Italy and surrounding European countries. However, research in this domain is lagging in many countries even though those countries are home to various heritage sites. This study provides an extensive analysis of the recent literature related to augmented, virtual and mixed reality research in cultural heritage. This information-science-based analysis will help researchers identify the prominent journals in this domain, recognize leading researchers in the field and follow their work, find path-breaking publications to refer to, and predict the direction of future studies.

Author 1: Nilam Upasani
Author 2: Asmita Manna
Author 3: Manjiri Ranjanikar

Keywords: Augmented reality; bibliometric analysis; cultural heritage; information science; mixed reality; virtual reality

Download PDF

Paper 92: Implementation of Business Intelligence Solution for United Airlines

Abstract: The US airline industry is recognized as the world's largest, with a massive number of daily departures and a combined fleet of over 2,700 aircraft across 18 major carriers, categorized as mainline, regional, and freight airlines. United Airlines is one of the major airlines in the US, after American Airlines and Delta Air Lines. Today, companies receive more feedback from their customers than ever before. Customers can share their opinions and emotions through social media platforms such as Twitter. Thus, collecting and understanding customer opinions becomes a key benefit for the aviation industry, yielding actionable insights while increasing competitiveness. Such insights are useful in planning and execution to strengthen relationships with customers. This study was therefore conducted to analyze customer feedback across different airlines to discover actionable insights that increase the competitiveness of United Airlines. The analysis results are visualized on Tableau dashboards and BI solutions are provided. By implementing the BI solutions, United Airlines can make accurate decisions and define its next strategies by identifying positive and negative references. Thus, United Airlines can improve its service quality, enhance customer loyalty, and boost business profitability.

Author 1: Ng Iris
Author 2: Sarasvathi Nagalingham

Keywords: Business intelligence; aviation industry; dashboard visualization; tableau; data analytics

Download PDF

Paper 93: Model Predictive Controlled Quasi Z Source Inverter Fed Induction Motor Drive System

Abstract: Ongoing advancements in inverters have paved the way for the high-gain Quasi Z-Source Inverter Circuit (QZSIC). The high-gain QZSIC sits between a semi-converter (SC) and a three-phase induction motor load (TPIML). This paper proposes a suitable controller for the closed-loop controlled QZSIC-TPIML and deals with improving the time response of the QZSIC-fed induction motor system. The objective of this effort is to design a closed-loop controlled QZSI-fed induction motor framework that provides a stable rotor speed. The QZSIC output is converted to three-phase AC, and the inverter output is filtered before it is applied to a three-phase induction motor. Closed-loop control of the QZSIC-TPIML using SMC and MPC is simulated, and their responses are compared. The Model Predictive Controller (MPC) is recommended for maintaining a constant speed. The results obtained with the MPC-controlled QZS inverter-fed induction motor drive (QZS-IIMD) are compared with a sliding-mode-controlled (SMC) QZS-IIMD system for changes in input voltage. The proposed MPC-controlled QZS-IIMD method offers benefits such as fast settling time and low steady-state speed error. PIC16F84-based hardware for a 0.5 HP QZSIC-IMDS is implemented.

Author 1: D. Himabindu
Author 2: G. Sreenivasan
Author 3: R. Kiranmayi

Keywords: QZSIC; TPIML; CLSC; SMC; MPC; IMDS

Download PDF

Paper 94: Visualization of Business Intelligence Insights into Aviation Accidents

Abstract: Despite recent tragic losses, flying is often said to be the safest form of transport, and this is true at least in terms of fatalities per distance travelled. The Civil Aviation Authority reports that the death rate per billion kilometres travelled by aircraft is 0.003, much lower than the rates of 0.27 for train travel and 2.57 for vehicle travel. Although safety has been the aviation industry's top focus for the last century, aircraft accidents continue to be a source of horror even today. Hence, the aim of this project is to identify the major causes that led to accidents in the aviation industry and to research, design, build and suggest a Business Intelligence (BI) solution to the problem. Throughout the project, problems both elementary and critical are discovered that need to be corrected or changed in order to prevent major negative events and improve the current situation. Tableau is the primary BI tool used in this process. Data visualization is the graphic depiction of information and data; data visualization tools offer an easy approach to data analysis, revealing trends, outliers, and patterns in data through visual elements such as charts, graphs, and maps. The project also covers the stages from initial design through to building and deploying the BI solution to help prevent further accidents.

Author 1: Loe Piin Piin
Author 2: Sarasvathi Nagalingham

Keywords: Aviation; accidents; business intelligence; prediction; dashboard visualization; data analysis

Download PDF

Paper 95: Metaphor Recognition Method based on Graph Neural Network

Abstract: Metaphor is a very common language phenomenon. Human language often uses metaphor to express emotion, and metaphor recognition is an important research topic in the field of NLP. Official documents adopt a formal style and do not usually use rhetorical sentences. This paper aims to identify rhetorical metaphorical sentences in official documents. The use of metaphor in a sentence depends on its context. Based on this linguistic feature, this paper proposes a BertGAT model, which uses BERT to extract the semantic features of sentences and transforms the dependency relations within Chinese sentences into connected graphs. Finally, a graph attention neural network is used to learn semantic features and syntactic structure information to complete sentence-level metaphor recognition. The proposed model is tested on a constructed domain dataset and a public sentiment dataset. Experimental results show that the proposed method can effectively improve the recognition of metaphorical emotional sentences.

Author 1: Zhou Chuwei
Author 2: SHI Yunmei

Keywords: Sentiment analysis; metaphor recognition; graph neural network; attention mechanism

Download PDF
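The graph attention mechanism underlying a model like BertGAT weighs each syntactic neighbor of a word by a learned attention score. As a rough, hypothetical sketch (the feature vectors and weight vector `a` below are toy stand-ins, not the paper's trained parameters), the attention coefficients of a single head can be computed as follows:

```python
import math

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def gat_coefficients(h_i, neighbor_feats, a):
    """Attention coefficients alpha_ij of one graph-attention head:
    e_ij = LeakyReLU(a . [h_i || h_j]); alpha_ij = softmax over j of e_ij."""
    scores = []
    for h_j in neighbor_feats:
        concat = list(h_i) + list(h_j)  # concatenate node and neighbor features
        scores.append(leaky_relu(sum(w * x for w, x in zip(a, concat))))
    mx = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

The node then aggregates its neighbors' features weighted by these coefficients; stacking such layers lets the model combine BERT semantics with dependency-graph structure.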

Paper 96: User Perceive Realism of Machine Learning-based Drone Dynamic Simulator

Abstract: Drones will become a commonly used technology for a significant portion of society, and simulating a given drone's dynamics will be an essential requirement. Drone dynamic simulation models exist for popular commercial drones, and there are many Newtonian and fluid-dynamics-based generic drone dynamic models. However, these models involve many model parameters, and it is impracticable to estimate the parameters required to simulate a custom-made drone. A simple method for developing a machine learning-based drone dynamic simulation model for custom-made drones mitigates these issues. Specifically, the authors' research concerns the development of a machine learning-based drone dynamic model integrated with a virtual reality environment, and the validation of the user-perceived physical and behavioural realism of the entire solution. A figure-of-eight manoeuvring pattern was used to collect data on drone behaviour and drone pilot inputs. A neural network-based approach was employed to develop the machine learning-based drone dynamic model. Validations were done against real-world drone manoeuvres and user tests. Validation results show that the machine learning simulations are accurate at the beginning, with accuracy decreasing over time. However, users also make mistakes and misjudgments when perceiving the real or virtual world. Hence, we explored the user-perceived motion prediction accuracy of the simulation environment, which is associated with its behavioural realism. User tests show that the entire simulation environment maintains substantial physical realism.

Author 1: Damitha Sandaruwan
Author 2: Nihal Kodikara
Author 3: Piyumi Radeeshani
Author 4: K.T.Y. Mahima
Author 5: Chathura Suduwella
Author 6: Sachintha Pitigala
Author 7: Mangalika Jayasundara

Keywords: Drone; simulation; machine learning; drone dynamics; virtual reality

Download PDF

Paper 97: Stacking Deep-Learning Model, Stories and Drawing Properties for Automatic Scene Generation

Abstract: Text-image mapping is of great interest to the scientific community, especially for educational purposes. It helps young learners, particularly those with learning difficulties, better understand the content of stories. In this paper, we propose to capture the teacher's experience in manually building relevant scenes for animal behavior stories. This manual work, which consists of pairs of texts and sets of elementary images, is fed into a Long Short-Term Memory (LSTM) network followed by a Conditional Random Field (CRF) that associates the relevant words in the text with their corresponding elementary images while preserving the drawing properties. This association is then used for scene construction. Several experiments were conducted to show that the constructed scenes convey textual information better than scenes constructed by competing models.

Author 1: Samir Elloumi
Author 2: Nzamba Bignoumba

Keywords: Text to image conversion; elementary image; image composition; deep-learning; drawing properties

Download PDF

Paper 98: A Machine Learning Hybrid Approach for Diagnosing Plants Bacterial and Fungal Diseases

Abstract: Bacterial and fungal diseases may affect the yield of stone fruit and damage the chlorophyll synthesis process, which is crucial for tree growth and fruiting. However, due to their similar visual shot-hole symptoms, novice agriculturalists and ordinary farmers usually cannot identify and differentiate these two diseases. This work investigates and evaluates the use of machine learning for diagnosing them. It aims at paving the way toward a generic deep learning-based model that can be embedded in a mobile phone application or a web service to provide fast, reliable, and cheap diagnosis of plant diseases, helping reduce the excessive, unnecessary, or improper use of pesticides, which can harm public health and the environment. The dataset consists of hundreds of samples collected from stone fruit farms in the north of Jordan under normal field conditions. Image features were extracted using a CNN pre-trained on millions of images, and the diseases were identified using three machine learning classification algorithms: 1) K-Nearest Neighbour (KNN); 2) Stochastic Gradient Descent (SGD); and 3) Random Forests (RF). The resulting models were evaluated using 10-fold cross-validation, with CNN-KNN achieving the best AUC at 98.5%. On the other hand, the CNN-SGD model performed best in Classification Accuracy (CA) with a score of 93.7%. The Confusion Matrix, ROC, Lift, and Calibration curves also confirmed the validity and robustness of the constructed models.

Author 1: Ahmed BaniMustafa
Author 2: Hazem Qattous
Author 3: Ihab Ghabeish
Author 4: Muwaffaq Karajeh

Keywords: Deep learning; machine learning; classification; plant diseases; disease diagnosis

Download PDF
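The second stage of the pipeline described above, classifying pre-extracted CNN feature vectors with KNN, can be pictured with a minimal sketch; the feature vectors here are synthetic two-dimensional placeholders, not the paper's actual CNN embeddings:

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k=3):
    """Label x by majority vote among the k closest training
    feature vectors (Euclidean distance)."""
    ranked = sorted((math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy "CNN features": two well-separated disease clusters.
X = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0), (0.9, 1.0), (1.0, 0.9), (1.1, 1.1)]
y = ["bacterial"] * 3 + ["fungal"] * 3
```

In the paper's setting the vectors would be high-dimensional activations from the pre-trained CNN, but the voting logic is the same.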

Paper 99: Enhancing Collaborative Interaction with the Augmentation of Sign Language for the Vocally Challenged

Abstract: As per the 2011 Census, India had 26.8 million differently abled people, of whom more than 25% faced difficulty in vocal communication. They use Indian Sign Language (ISL) to communicate with others. The proposed solution is a sensor-based Hand Gesture Recognition (HGR) wearable device capable of translating and conveying messages from the vocally challenged community. The proposed method designs a hand glove by integrating flex and Inertial Measurement Unit (IMU) sensors within the HGR wearable device, capturing hand and finger movements as gestures. These are mapped to the ISL dictionary using machine learning techniques that learn the spatio-temporal variations in the gestures for classification. The novelty of the work lies in enhancing HGR by extracting the spatio-temporal variations of an individual's gestures and adapting to their dynamics with aging and context factors, through a proposed Dynamic Spatio-Temporal Warping (DSTW) technique combined with a long short-term memory-based learning model. Using the sequence of identified gestures and their ISL mapping, grammatically correct sentences are constructed with transformer-based Natural Language Processing (NLP) models. The sentences are then conveyed to the user through suitable media, such as text-to-voice or text-image. The proposed HGR device with the Bidirectional Long Short-Term Memory (BiLSTM) and DSTW techniques was implemented to evaluate performance with respect to accuracy, precision and reliability of gesture recognition. Experiments were carried out to capture varied gestures and their recognition, and an accuracy of 98.91% was observed.

Author 1: Sukruth G L
Author 2: Vijaya Kumar B P
Author 3: Tejas M R
Author 4: Rithvik K
Author 5: Trisha Ann Tharakan

Keywords: Hand Gesture Recognition (HGR); wearable sensors; Long Short-Term Memory (LSTM); Natural Language Processing (NLP); Dynamic Spatio-Temporal Warping (DSTW); Indian Sign Language (ISL)

Download PDF
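DSTW, as named in the abstract, is the authors' spatio-temporal extension of dynamic time warping for gesture sequences. For orientation, the classic DTW recurrence it builds on can be sketched as follows (this is the textbook algorithm, not the authors' DSTW):

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Minimum cumulative alignment cost between two sequences,
    allowing each element to stretch over several of the other's."""
    INF = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = best cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(a[i - 1], b[j - 1])
            D[i][j] = c + min(D[i - 1][j],      # stretch a
                              D[i][j - 1],      # stretch b
                              D[i - 1][j - 1])  # advance both
    return D[n][m]
```

A gesture variant would replace the scalar `dist` with a distance over multi-sensor (flex + IMU) feature vectors, which is where the spatial part of DSTW comes in.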

Paper 100: Delivery Management System based on Blockchain, Smart Contracts and NFT: A Case Study in Vietnam

Abstract: Current traditional shipping models increasingly reveal many shortcomings that affect the interests of sellers and buyers because they depend on trusted third parties. For example, the Cash-on-Delivery (CoD) model depends on the carrier/shipper, and the Letter-of-Credit (LoC) model depends on the certifying institution (i.e., a bank). Many examples demonstrate the riskiness of these two models. Specifically, in developing countries (e.g., Vietnam), trade between sellers and buyers and the demand for exporting goods have not yet benefited from current technology to improve traditional shipping models. Two typical examples from the last five years that demonstrated the risks for both sellers and buyers under the CoD and LoC models are GNN Express withholding sellers' money (2017) and the loss of control of four containers of cashew nuts exported from Vietnam to Italy (2021). A series of studies have proposed solutions based on distributed storage, blockchain, and smart contracts to solve these problems. However, some approaches do not consider the role of the shipper or are not suitable for deployment in a developing country such as Vietnam. In this paper, we propose a model combining the traditional CoD model with blockchain technology, smart contracts, and NFTs. Specifically, our contribution includes four aspects: a) proposing a shipping model based on blockchain technology and smart contracts; b) proposing a model for storing package information based on Ethereum's NFT technology (i.e., ERC-721); c) implementing the proposed model by designing smart contracts that support the creation and transfer of NFTs between sellers and buyers; and d) deploying the smart contracts on four EVM-enabled platforms, including BNB Smart Chain, Fantom, Celo, and Polygon, to find a suitable platform for the proposed model.

Author 1: Khiem Huynh Gia
Author 2: Luong Hoang Huong
Author 3: Hong Khanh Vo
Author 4: Phuc Nguyen Trong
Author 5: Khoa Tran Dang
Author 6: Hieu Le Van
Author 7: Loc Van Cao Phu
Author 8: Duy Nguyen Truong Quoc
Author 9: Nguyen Huyen Tran
Author 10: Anh Nguyen The
Author 11: Huynh Trong Nghia
Author 12: Bang Le Khanh
Author 13: Kiet Le Tuan
Author 14: Nguyen Thi Kim Ngan

Keywords: Letter-of-Credit; cash-on-delivery; blockchain; smart contract; NFT; Ethereum; Fantom; Polygon; Binance Smart Chain

Download PDF

Paper 101: Integrated Assessment of Teaching Efficacy: A Natural Language Processing Approach

Abstract: The most significant component of the education domain is evaluation. Apart from student evaluation, teacher evaluation plays a vital role in colleges and universities. Implementing a scientific and appropriate assessment method for enhancing teaching standards in educational institutions is essential. Conventional teacher assessment techniques have always been prone to bias and injustice owing to single-dimensional assessment criteria, biased scoring, and ineffective integration. In this regard, it is crucial to develop a specialized Teacher Evaluation Assistant (TEA) system that integrates computational intelligence algorithms. This research concentrates on using Natural Language Processing (NLP) based techniques to empirically analyse teaching effectiveness. We develop a model in which a teacher is evaluated based on the content delivered during a lecture. Two techniques are employed to evaluate teacher effectiveness: topic modelling and text clustering. Topic modelling achieved an accuracy of 75%, and text clustering achieved an accuracy of 80%. Thus, the method can effectively be deployed to assess and predict the effectiveness of a teacher's teaching.

Author 1: Lalitha Manasa Chandrapati
Author 2: Ch. Koteswara Rao

Keywords: Teacher evaluation; topic modeling; clustering; Latent Dirichlet Allocation (LDA); K-means

Download PDF
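The text-clustering side of such an evaluation can be pictured with a bare-bones k-means over document feature vectors; the two-dimensional points below are hypothetical stand-ins for vectorized lecture content, not the paper's data:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: repeatedly assign each point to its nearest
    centroid, then move each centroid to its cluster mean."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep old centroid if a cluster emptied out
                centroids[i] = tuple(sum(col) / len(members) for col in zip(*members))
    return centroids, clusters
```

In practice lecture texts would first be turned into TF-IDF or embedding vectors; the clustering step itself is unchanged.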

Paper 102: An Effect Assessment System for Curriculum Ideology and Politics based on Students’ Achievements in Chinese Engineering Education

Abstract: Curriculum ideological and political education (CIPE) has attracted the attention of China's leaders and state departments, but its effect assessment is still an open issue that should be addressed for the efficient and effective implementation of CIPE. The engineering education conception has been widely adopted in Chinese higher education in recent years due to its effectiveness. Therefore, in this paper, against the background of Chinese engineering education, we study the quantification of CIPE effects. We propose a CIPE effect assessment system for higher education and a quantitative CIPE effect method based on each student's achievement of graduation requirements. The proposed system provides visualizations of achievements and CIPE effects for students and teachers. This helps students locate themselves in their major studies, and helps teachers continuously improve their teaching methods.

Author 1: Bo Wang
Author 2: Hailuo Yu
Author 3: Yusheng Sun
Author 4: Zhifeng Zhang
Author 5: Xiaoyun Qin

Keywords: Curriculum ideology and politics; assessment; engineering education; ideological and political education; outcomes-based education

Download PDF

Paper 103: Navigation of Autonomous Vehicles using Reinforcement Learning with Generalized Advantage Estimation

Abstract: This study proposes a reinforcement learning approach using Generalized Advantage Estimation (GAE) for autonomous vehicle navigation in complex environments. The method is based on the actor-critic framework, where the actor network predicts actions and the critic network estimates state values. GAE is used to compute the advantage of each action, which is then used to update the actor and critic networks. The approach was evaluated in a simulation of an autonomous vehicle navigating through challenging environments and it was found to effectively learn and improve navigation performance over time. The results suggest GAE as a promising direction for further research in autonomous vehicle navigation in complex environments.

Author 1: Edwar Jacinto
Author 2: Fernando Martinez
Author 3: Fredy Martinez

Keywords: Actor-critic; autonomous vehicles; generalized advantage estimation; navigation; reinforcement learning

Download PDF
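The GAE computation named in the abstract admits a compact recursive form: one-step TD errors are blended backwards with weight gamma*lambda, giving each action's advantage estimate. A minimal sketch of the generic computation (not the paper's specific network code):

```python
def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation for one episode.

    rewards: r_0 .. r_{T-1}
    values:  V(s_0) .. V(s_T)  (one extra bootstrap value at the end)
    delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
    A_t     = sum over l of (gamma * lam)^l * delta_{t+l}
    """
    T = len(rewards)
    advantages = [0.0] * T
    gae = 0.0
    for t in reversed(range(T)):  # backwards pass makes the sum O(T)
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages
```

lam = 0 recovers plain one-step TD errors (low variance, high bias); lam = 1 recovers the Monte Carlo return minus the value baseline (high variance, low bias).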

Paper 104: A Low-Cost Wearable Autonomous System for the Protection of Bicycle Users

Abstract: A bicycle is a form of transport that not only positively impacts the health of its users, and of the general population by reducing pollution levels, but also constitutes an accessible and affordable means of transport for developing societies. However, when bicycles coexist with other forms of transport, the accident rate is elevated and the risk is high. Among the factors contributing to accidents involving bicycles are collisions with motor vehicles. Such accidents can occur when a motor vehicle manoeuvres without seeing the bicycle or when a motorist drives distracted. These accidents can be avoided if cyclists and motorists are aware of their environment and respect traffic laws and safety regulations. This research aims to develop a low-cost autonomous electronic system that provides extra protection to bicycle users, particularly by making them visible to other road users on cloudy days or at night. The system uses a 32-bit processor with brightness and acceleration sensors that trigger visual alerts to both the bicycle user and possible nearby vehicles. It also monitors and logs the signals on a server for route evaluation. The prototype was successfully evaluated in the laboratory, demonstrating its autonomy and performance. The test results demonstrate the system's capacity to provide extra protection, in addition to its robustness and accuracy.

Author 1: Daniel Mejia
Author 2: Sergio Gomez
Author 3: Fredy Martinez

Keywords: Autonomous system; bicycle users; embedded system; protection; wearable

Download PDF

Paper 105: An Automated Impact Analysis Approach for Test Cases based on Changes of Use Case based Requirement Specifications

Abstract: Change Impact Analysis (CIA) is an essential part of the software development process that identifies the potential effects of changes made during development. Changing requirements always impacts software testing because some existing test cases may no longer be usable to test the software. New test cases must then be generated entirely from the changed version of the software requirements specification, which takes considerable time and effort to re-test the modified system. Therefore, this paper proposes a novel automatic impact analysis approach for test cases based on changes to use case based requirement specifications. The approach provides a framework and CIA algorithm in which the impact on test cases is analysed when the requirement specification changes. To detect a change, the before-change and after-change versions of the use case model are compared. The patterns representing the causes of variable changes are then classified and analysed. Existing test cases are thus analysed to determine whether they can be completely reused, partly updated, or must be newly generated. New test cases are generated automatically using the Combination of Equivalence and Classification Tree Method (CCTM). This improves testing coverage while minimising the number of test cases and eliminating redundant ones. The approach is automated in a developed prototype tool. Validation and evaluation with two real case studies from a Hospital Information System (HIS), together with the views of practising specialists, confirm the contribution of the tool.

Author 1: Adisak Intana
Author 2: Kanjana Laosen
Author 3: Thiwatip Sriraksa

Keywords: Change impact analysis approach; test case; black-box testing; use case based requirement specification; combination of equivalence and classification tree method

Download PDF
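The reuse/update/regenerate decision described above can be illustrated with a simplified, hypothetical comparison of two use case model versions; real use case models carry flows of events, conditions, and actors rather than the flat strings used here, and the paper's pattern classification is far richer:

```python
def classify_test_impact(before, after):
    """Hypothetical sketch: compare two versions of a use case model
    (use-case name -> flow description) and classify the associated
    test cases as reusable, needing update, newly required, or obsolete."""
    result = {"reuse": [], "update": [], "new": [], "obsolete": []}
    for name, flow in after.items():
        if name not in before:
            result["new"].append(name)        # no prior tests exist
        elif before[name] == flow:
            result["reuse"].append(name)      # tests unaffected by the change
        else:
            result["update"].append(name)     # tests partly invalidated
    result["obsolete"] = [n for n in before if n not in after]
    return result
```

Obsolete entries mark test cases that can be retired outright, which is where most re-testing effort is saved.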

Paper 106: Trust Management for Deep Autoencoder based Anomaly Detection in Social IoT

Abstract: Social IoT has gained huge traction with the advent of 5G and beyond communication. In this connected world of devices, trust management is crucial for protecting data. Among the many possible attacks, DDoS is the most prevalent botnet attack, and infected devices urgently require anomaly detection so that malware can be learned and curbed early. This paper considers nine IoT devices deployed in a Social IoT environment. We introduce attacks such as Bashlite and Mirai by compromising a network node, then look for traces of malicious behavior using AI algorithms. The investigation starts with a simple network approach, a Multi-Layer Perceptron (MLP), and then proceeds to machine learning with Random Forest (RF). While the MLP detected the malicious node with an accuracy of 89.39%, RF proved 90.0% accurate. Motivated by these results, a deep learning approach, the deep autoencoder, was employed and found to be more accurate than both MLP and RF. The results are encouraging and were verified for scalability, efficiency, and reliability.

Author 1: Rashmi M R
Author 2: C Vidya Raj

Keywords: Social IoT; trust management; anomaly detection; DDoS; deep autoencoder

Download PDF
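A deep autoencoder typically flags attacks by reconstruction error: the network is trained on benign traffic only, so malicious traffic reconstructs poorly. The thresholding step (generic practice, with a hypothetical k = 3 sigma rule, not the paper's exact parameters) can be sketched as:

```python
import statistics

def anomaly_threshold(benign_errors, k=3.0):
    """Threshold = mean + k * std of reconstruction errors measured
    on the benign traffic the autoencoder was trained on."""
    mu = statistics.fmean(benign_errors)
    sigma = statistics.pstdev(benign_errors)
    return mu + k * sigma

def flag_anomalies(errors, threshold):
    """True for every sample the autoencoder reconstructs poorly."""
    return [e > threshold for e in errors]
```

A Mirai or Bashlite flow pushed through an autoencoder trained on clean traffic would land far above this threshold, which is what makes the scheme usable per-device.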

Paper 107: Machine Learning Techniques to Enhance the Mental Age of Down Syndrome Individuals: A Detailed Review

Abstract: Individuals with Down syndrome are intellectually disabled, with intellectual ability classified into four categories: mild, moderate, severe, and profound. These individuals have significant limitations in learning and adaptive skills. Psychologists evaluate the mental capability of such individuals using conventional intelligence quotient methods rather than technology. The research literature shows that most studies have analyzed neuroimaging, antenatal screening, and hearing impairment in these individuals, but there is still an obvious gap in evaluating mental age using artificial intelligence. We propose an artificial neural network model that supervises how software is used to obtain a dataset through a knowledge-based decision support system. In a survey, 120 individuals were examined by a psychiatrist, a medical expert, and a teacher to assess the presence of Down syndrome by analyzing their physical and facial appearance and communication skills; only 62 individuals were identified as having Down syndrome. The selected individuals were invited to perform a mental ability assessment using the Interactive Mental Learning Software. The results show a rise in IQ from severe to moderate (20% to 35%) and from moderate to mild (35% to 75%) severity, assessed through an interactive series of software exercises based on comparison, logic, and basic mathematical operations, with initial IQ (iIQ) and enhanced IQ (eIQ) as input and output parameters.

Author 1: Irfan M. Leghari
Author 2: Hamimah Ujir
Author 3: SA Ali
Author 4: Irwandi Hipiny

Keywords: Artificial Intelligence; Artificial Neural Network (ANN); Down Syndrome Individuals (DSI); Interactive Mental Learning Software (IMLS)

Download PDF

Paper 108: AMIM: An Adaptive Weighted Multimodal Integration Model for Alzheimer’s Disease Classification

Abstract: Alzheimer’s disease (AD) is an irreversible neurological disorder, so early medical diagnosis is extremely important. Magnetic resonance imaging (MRI) is one of the main medical imaging methods used clinically to detect and diagnose AD. However, most existing computer-aided diagnostic methods only use MRI slices for model architecture design, ignoring the informational differences between slices. In addition, physicians often use multimodal data, such as medical images and clinical information, to diagnose patients; this helps them make more accurate judgments. Therefore, we propose an adaptive weighted multimodal integration model (AMIM) for AD classification. The model uses global information images, maximum information slices and clinical information as data inputs for the first time, and adopts an adaptive weight integration method for classification. Experimental results show that our model achieves an accuracy of 99.00% for AD versus normal controls (NC), and 82.86% for mild cognitive impairment (MCI) versus NC. The proposed model achieves the best classification performance in terms of accuracy compared with most state-of-the-art methods.

Author 1: Dewen Ding
Author 2: Xianhua Zeng
Author 3: Xinyu Wang
Author 4: Jian Zhang

Keywords: MRI; global information images; maximum information slices; adaptive weights; integration method

Download PDF

The Science and Information (SAI) Organization
© The Science and Information (SAI) Organization Limited. Registered in England and Wales. Company Number 8933205. All rights reserved. thesai.org