IJACSA Volume 12 Issue 9

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

View Full Issue

Paper 1: Information Flow Control for Serverless Systems

Abstract: Security for serverless systems can be viewed from two perspectives: server-level security, managed by the infrastructure provider, and application-level security, managed by the tenants. The trusted computing base for cloud systems is enormous, as it encompasses all the functions running on a system. Authentication for such systems is mostly done using access control lists (ACLs), but because most serverless systems share data, ACLs are not sufficient. Information flow control (IFC) with an appropriate label design can enforce security continuously throughout the application. IFC can increase confidence between functions, and between functions and the cloud provider, and can mitigate security vulnerabilities, making the system safer. This paper presents a survey of present IFC implementations for serverless systems, reviews system designs that are relevant to serverless systems and could be added to serverless architectures, and outlines an IFC model that could be applied effectively in a decentralised model such as serverless systems.

Author 1: Rishabh Chawla

Keywords: Information flow control; serverless systems; language-based security; cloud computing

PDF
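
To make the label-based enforcement idea concrete, here is a minimal sketch (not from the paper) of a decentralised IFC flow check, assuming labels are modelled as sets of secrecy and integrity tags; the Label class and tag names are hypothetical.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Label:
        secrecy: frozenset = frozenset()    # tags restricting who may read the data
        integrity: frozenset = frozenset()  # tags vouching for the data's origin

    def can_flow(src: Label, dst: Label) -> bool:
        # Safe flow: secrecy may only grow, integrity may only shrink.
        return src.secrecy <= dst.secrecy and dst.integrity <= src.integrity

    fn_a = Label(secrecy=frozenset({"tenant-A"}))
    fn_ab = Label(secrecy=frozenset({"tenant-A", "tenant-B"}))
    print(can_flow(fn_a, fn_ab))   # True: data only picks up an extra secrecy tag
    print(can_flow(fn_ab, fn_a))   # False: B-tainted data cannot reach an A-only function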

Paper 2: Monitoring Indoor Activity of Daily Living using Thermal Imaging: A Case Study

Abstract: Monitoring indoor activities of daily living (ADLs) of a person depends on sensor type, power-supply stability, and connectivity stability, not to mention artifacts introduced by the person himself. Multiple challenges have to be overcome in this field, such as detecting the precise spatial location of the person and estimating vital signs like an individual's average temperature. Privacy is another aspect of the problem that must be treated with care, and identifying the person's posture without a camera is a further challenge; posture identification is key to detecting a person's fall. Thermal imaging is a promising solution to most of these challenges: it allows monitoring of both the person's average temperature and spatial location while maintaining privacy. In this research, an IoT system for monitoring indoor ADLs using a thermal sensor array (TSA) is proposed. Three classes of ADLs are introduced: daily activity, sleeping activity, and no activity. Estimating a person's average temperature using TSAs is introduced as well. Results have shown that the three activity classes can be identified, as can the person's average temperature during day and night. The person's spatial location can be determined while his/her privacy is maintained.

Author 1: Hassan M. Ahmed
Author 2: Bessam Abdulrazak

Keywords: Activity monitoring; activities of daily living (ADLs); thermal imaging; indoor monitoring; thermal sensor array (TSA)

PDF

Paper 3: Improving the Quality of e-Commerce Service by Implementing Combination Models with Step-by-Step, Bottom-Up Approach

Abstract: e-Commerce, as a booming industry, plays an important role in people's lives. People visit e-commerce websites, find what they want, click buy, and complete the transaction. With the global development of electronic services, intensifying competition, and the growing experience of online shoppers, companies' awareness and understanding of the distinctive characteristics and purchasing habits of the population in a region have become critically important for e-commerce and service companies. It is imperative that companies keep pace with these developments and provide high-quality, efficient electronic services via the internet, focusing on the most important requirements for customer satisfaction, especially in light of the information and technological revolution. However, customers will have an awful experience if they visit crudely made e-commerce websites. Kunst (2019) reported that around 37.4% of customers complained of an awful shopping experience, the reason being that the service quality of e-commerce websites is not up to standard. This research aims to improve the quality of e-commerce service by using the Comprehensive and Referential Combination Model, implemented with a step-by-step, bottom-up approach. Finally, we recommend improving the quality of e-commerce service in construct and revision ways within parts of this model.

Author 1: Hemn Barzan Abdalla
Author 2: Ge Chengwei
Author 3: Baha Ihnaini

Keywords: E-commerce; website; framework; criteria; model; approach; service; quality

PDF

Paper 4: Facilitating Personalisation in Epilepsy with an IoT Approach

Abstract: The premises made in this paper put the future of personalisation in epilepsy into focus, a focus that shifts from a one-size-fits-all approach to the core of the epilepsy patients' individual characteristics. The emerging approach of personalised healthcare is known to be facilitated by the Internet of Things (IoT), and sensor-based IoT devices are in popular demand among healthcare providers due to the constant need for patient monitoring. In epilepsy, the most common and complex patients to deal with are those with multiple strands of epilepsy. These extremely varied kinds of patients should be monitored precisely according to their identified key symptoms and specific characteristics, and their treatment tailored accordingly. Consequently, paradigms are needed to personalise this information. By focusing upon personalised parameters that make epilepsy patients distinct, this paper proposes an IoT-based epilepsy monitoring model endorsing a more accurate and refined way of remotely monitoring the 'individual' patient.

Author 1: S. A McHale
Author 2: E. Pereira

Keywords: IoT; healthcare systems; smart healthcare; personalisation

PDF

Paper 5: EEG-based Brain Computer Interface Prosthetic Hand using Raspberry Pi 4

Abstract: Accidents, wars, or various diseases can affect the upper limbs in such a way that amputation is required, with dramatic effects on people's ability to perform tasks such as grabbing, holding, or moving objects. In this context, it is necessary to develop solutions that support upper-limb amputees in performing daily routine activities. A brain-computer interface (BCI) offers the ability to use the neural activity of the brain to communicate with or control robots, artificial limbs, or machines without physical movement. This article proposes an electroencephalography (EEG) mind-controlled prosthetic arm. It eliminates drawbacks such as the high price, heaviness, and dependency on intact nerves associated with myoelectric and other types of prostheses currently in use. The developed prototype is a low-cost 3D-printed prosthetic arm controllable via brain commands using EEG-based BCI technology. It includes a stepper motor controlled by a Raspberry Pi 4 to perform actions such as open/close movements and holding objects. The project successfully achieved its aim of creating a prototype of a mind-controlled prosthetic arm system, together with the necessary experimental tests and calculations regarding torque, force, and the weight the hand can carry. The paper proves the feasibility of the approach and opens the route for improving the design of the prototype so it can be attached to an upper-limb amputation stump.

Author 1: Haider Abdullah Ali
Author 2: Diana Popescu
Author 3: Anton Hadar
Author 4: Andrei Vasilateanu
Author 5: Ramona Cristina Popa
Author 6: Nicolae Goga
Author 7: Hussam Al Deen Qhatan Hussam

Keywords: Prosthetic; brain computer interface (BCI); electroencephalography (EEG); raspberry pi 4; EMOTIV

PDF

Paper 6: A Facilitator Support System that Overlooks Keywords Expressing the True Intentions of All Discussion Participants

Abstract: This paper proposes the Keyword Movement Disclose System (KMDS), which allows the facilitator of a discussion to watch a record of the moving keywords in a Discussion Board System (DBS). In the DBS, the discussion participants place each keyword in a box made for each item to be discussed. The keywords in the box were expected to show each participant's opinion and intention, because each participant's individual display was not disclosed to the other participants. Therefore, if the facilitator of the discussion can see the true opinions and intentions of all participants via the keywords in the boxes through the KMDS, the facilitator will be able to advance the discussion more appropriately and draw conclusions based on diverse opinions. Moreover, the KMDS may contribute to the development of an artificial intelligence facilitator. In this paper, we conducted an experiment in which ten facilitators were asked to listen to a recorded discussion held by nine participants using the DBS. Five of the facilitators used the KMDS while listening to the recorded discussion. The results suggest that the KMDS may allow facilitators to build a consensus from the participants' various viewpoints, although the experiment did not show much difference between the with- and without-KMDS conditions.

Author 1: Chika Oshima
Author 2: Tatsuya Oyama
Author 3: Chihiro Sasaki
Author 4: Koichi Nakayama

Keywords: Keyword movement disclose system; discussion board system; facilitator; putting keywords in box

PDF

Paper 7: A New Flipped Learning Engagement Model to Teach Programming Course

Abstract: Online learning at higher learning institutions has changed over the years as technology evolves. The main purpose of this study was to propose a new Flipped Learning Engagement (FLE) model. User testing to measure students' achievement was carried out in four separate groups, namely the Control Technology (CT), Experimental Technology (ET), Control Engineering (CE) and Experimental Engineering (EE) groups, and analysed using t-tests. The findings showed that the experimental groups (ET and EE), which underwent the learning and teaching process using the proposed FLE model, obtained higher levels of achievement than the control groups (CT and CE), which underwent the conventional teaching and learning approach. The study contributes mainly to the design and development of the FLE model. The FLE model proposed in this study can guide not only programming-related educators but all educators who use the flipped learning approach in their learning and teaching process. Future studies should examine the proposed model in depth and improve it by adding new entities, enabling its application to related courses at various levels.

Author 1: Ahmad Shaarizan Shaarani
Author 2: Norasiken Bakar

Keywords: Flipped learning engagement model; online programming course; student achievement; blended learning; technical based higher learning institutions

PDF

Paper 8: Classifying Familial Hypercholesterolaemia: A Tree-based Machine Learning Approach

Abstract: Familial hypercholesterolaemia is the most common and serious form of inherited hyperlipidaemia. It has an autosomal dominant mode of inheritance and is characterised by severely elevated low-density lipoprotein cholesterol levels. Familial hypercholesterolaemia is an important cause of premature coronary heart disease, but it is potentially treatable. However, the majority of familial hypercholesterolaemia individuals are under-diagnosed and under-treated, resulting in lost opportunities for the prevention of premature coronary heart disease. This study aims to assess the performance of machine learning algorithms for enhancing familial hypercholesterolaemia detection within the Malaysian population. We applied three machine learning algorithms (random forest, gradient boosting and decision tree) to classify familial hypercholesterolaemia among Malaysian patients and to identify relevant features from four well-known diagnostic instruments: Simon Broome, Dutch Lipid Clinic Criteria, US Make Early Diagnosis to Prevent Early Deaths and Japanese FH Management Criteria. The performance of these classifiers was compared using various measurements of accuracy, precision, sensitivity and specificity. Our results indicated that the decision tree classifier had the best performance, with an accuracy of 99.72%, followed by the gradient boosting and random forest classifiers, with accuracies of 99.54% and 99.52%, respectively. The three classifiers, combined with the Recursive Feature Elimination method, selected six common features of the familial hypercholesterolaemia diagnostic criteria (family history of coronary heart disease, low-density lipoprotein cholesterol levels, presence of tendon xanthomata and/or corneal arcus, family hypercholesterolaemia, and family history of familial hypercholesterolaemia) that generate the highest accuracy in predicting familial hypercholesterolaemia. We anticipate that machine learning algorithms will enable rapid diagnosis of familial hypercholesterolaemia by providing the tools to develop a virtual screening test for it.

Author 1: Marshima Mohd Rosli
Author 2: Jafhate Edward
Author 3: Marcella Onn
Author 4: Yung-An Chua
Author 5: Noor Alicezah Mohd Kasim
Author 6: Hapizah Nawawi

Keywords: Familial hypercholesterolaemia; predicting FH; machine learning algorithms; tree-based classifier

PDF
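
As an illustration of the tree-based pipeline described above, the sketch below trains the three classifiers with Recursive Feature Elimination in scikit-learn; the synthetic data is a stand-in for the (non-public) patient features, not the paper's dataset.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.feature_selection import RFE
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)  # stand-in data
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for clf in (DecisionTreeClassifier(random_state=0),
                GradientBoostingClassifier(random_state=0),
                RandomForestClassifier(random_state=0)):
        rfe = RFE(clf, n_features_to_select=6)   # keep six features, as in the paper
        rfe.fit(X_tr, y_tr)
        print(type(clf).__name__, accuracy_score(y_te, rfe.predict(X_te)))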

Paper 9: Development of Star-Schema Model for Lecturer Performance in Research Activities

Abstract: In this study, the researchers developed a multidimensional data model to investigate the research activities of university lecturers as part of the Three Pillars of Higher Education. Information about lecturers' research activities had been managed using spreadsheet (Excel) documents, so access to and analysis of the information were limited. Data warehouse development was carried out through several stages, namely requirement analysis, data source analysis, multidimensional modeling, the ETL process, and reporting. The information generated in this data warehouse (DW) can be used as a business intelligence (BI) model in universities. In this study, the star-schema model was used in designing the dimension and fact tables to facilitate and speed up the query process. The information generated in this study can be used by university management for decision-making and strategic planning. The results can also serve as important input in the preparation of institutional and study program accreditation data.

Author 1: M. Miftakul Amin
Author 2: Adi Sutrisman
Author 3: Yevi Dwitayanti

Keywords: Data warehouse (DW); star-schema; multidimensional data; business intelligence (BI)

PDF

Paper 10: Empirical Analysis of Feature Points Extraction Techniques for Space Applications

Abstract: Recently, advances in space research have widened the scope of many vision-based techniques. Computer vision techniques with manifold objectives require that valuable features be extracted from input data. This paper empirically analyzes known feature extraction techniques: Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Oriented FAST and Rotated BRIEF (ORB), and Convolutional Neural Networks (CNN). A methodology for autonomously extracting features using a CNN is analyzed in more detail. These techniques are studied and evaluated empirically on lunar satellite images. For the analysis, a dataset containing different affine transformations of a video frame was generated from a sample lunar descent video. The nearest-neighbor algorithm is then applied for feature matching, and for an unbiased evaluation the same matching process is repeated for all the models. Well-known metrics such as repeatability and matching scores are employed to validate the studied techniques. The results show that CNN features offer much better computational efficiency and more stable matching accuracy for lunar images than the other studied algorithms.

Author 1: Janhavi H. Borse
Author 2: Dipti D. Patil

Keywords: Artificial intelligence; convolutional neural network; computer vision; feature extraction; machine learning; satellite images; space research

PDF
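
For a concrete picture of the classical side of this comparison, the sketch below extracts and matches ORB features with OpenCV and reports a simple matching score; the file names are placeholders, and this is illustrative rather than the authors' code.

    import cv2

    img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # placeholder lunar frames
    img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force nearest-neighbour matching; Hamming distance suits ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    matching_score = len(matches) / min(len(kp1), len(kp2))
    print(f"{len(matches)} matches, matching score {matching_score:.2f}")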

Paper 11: Traffic Adaptive Deep Learning based Fine Grained Vehicle Categorization in Cluttered Traffic Videos

Abstract: Smart traffic management is being proposed for better management of traffic infrastructure and for regulating traffic in smart cities. With the surge of traffic density in many cities, smart traffic management has become an utmost necessity. Vehicle categorization, traffic density estimation, and vehicle tracking are some of its important functionalities. Vehicles must be categorized on multiple levels, such as type, speed, direction of travel, and vehicle attributes like color, for efficient tracking and traffic density estimation. Vehicle categorization is very challenging due to occlusions, cluttered backgrounds, and traffic density variations. In this work, a traffic-adaptive multi-level vehicle categorization method using deep learning is proposed. The solution is designed to address the problems of occlusions and cluttered backgrounds in vehicle categorization.

Author 1: Shobha B. S
Author 2: Deepu. R

Keywords: Vehicle categorization; deep learning; traffic density estimation; clutter

PDF

Paper 12: Wide Area Measurement System in the IEEE-14 Bus System using Multiobjective Shortest Path Algorithm for Fault Analysis

Abstract: In a large-scale interconnected power system network, several challenges exist in evaluating and maintaining overall system stability. The power system's ability to supply all types of loads during natural disasters or faults has yet to be addressed. This work focuses on developing a wide-area measurement system to manage and control the power system under all operating conditions. The IEEE-14 bus system was modeled in PSCAD software to simulate nineteen types of fault based on a multi-objective shortest path algorithm. Managing the wide-area measurement requires understanding the working principle of the multi-objective shortest path algorithm, whereby the proposed method determines a new path for the IEEE-14 bus system. To evaluate the performance of the multi-objective shortest path algorithm, all sections of the IEEE-14 bus system were simulated with faults. The distances of the normal path (without a simulated fault) and the new path (with a simulated fault) were recorded. Based on the recorded data, it was found that the location of the fault has a significant influence on the shortest path of the interconnected buses.

Author 1: Lilik J. Awalin
Author 2: Syahirah Abd Halim
Author 3: Jafferi Bin Jamaludin
Author 4: Nor Azuana Ramli

Keywords: IEEE-14 bus system; wide area measurement; multi-objective shortest path algorithm; fault location; PMU

PDF

Paper 13: An Enhanced Feature Acquisition for Sentiment Analysis of English and Hausa Tweets

Abstract: Due to the continuous and rapid growth of social media, opinionated content is actively created by users in different languages about various products, services, events, and political parties. The automated classification of this content has prompted the need for multilingual sentiment analysis research. However, the majority of research efforts are devoted to language pairs such as English-Arabic, English-German, and English-French, while a great share of information is available in other languages such as Hausa. This paper proposes multilingual sentiment analysis of English and Hausa tweets using an Enhanced Feature Acquisition Method (EFAM). The method uses a machine learning approach to integrate two newly defined Hausa features (Hausa Lexical Features and Hausa Sentiment Intensifiers) with English features, to measure classification performance and to synthesize a more accurate sentiment classification procedure. The approach has been evaluated in several experiments with different classifiers on both monolingual and multilingual datasets. The experimental results reveal the effectiveness of the approach in enhancing feature integration for multilingual sentiment analysis. Similarly, by using features drawn from multiple languages, we can construct machine learning classifiers with an average precision of over 65%.

Author 1: Amina Imam Abubakar
Author 2: Abubakar Roko
Author 3: Aminu Muhammad Bui
Author 4: Ibrahim Saidu

Keywords: Multilingual sentiment analysis; sentiment analysis; social media; machine learning

PDF

Paper 14: Development of Technology for Summarization of Kazakh Text

Abstract: This paper presents a solution to the problem of summarizing Kazakh texts. Kazakh text summarization is treated as a sequence of two tasks: extracting the most important sentences of the text and simplifying the extracted sentences. The extraction task is solved using the TF-IDF method, and the simplification task is solved using the "Seq2Seq" neural network technology. The problem with using a neural machine translation (NMT) method for Kazakh simplification was the absence of a Kazakh dataset for training. To solve this problem, this work proposes the use of transfer learning, which made it possible to use a ready-made model trained on a parallel corpus of Simple English Wikipedia instead of creating a Kazakh simplification corpus from scratch. To this end, a transfer learning technology for simplifying Kazakh sentences was developed, based on training a neural sentence-simplification model for English. The main scientific contribution of this work is the transfer learning technology for simplifying Kazakh sentences using a parallel English simplification corpus.

Author 1: Talgat Zhabayev
Author 2: Ualsher Tukeyev

Keywords: Summarization; text simplification; low-resource language; seq2seq; transfer learning

PDF
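
The extraction step lends itself to a compact illustration: score each sentence by its mean TF-IDF weight and keep the top-ranked ones. The sketch below (with English placeholder sentences, not Kazakh data) shows one plausible realisation; the paper's exact scoring may differ.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    sentences = [
        "The pipeline has two stages.",
        "Important sentences are extracted with TF-IDF.",
        "Extracted sentences are simplified with a Seq2Seq model.",
    ]

    tfidf = TfidfVectorizer().fit_transform(sentences)  # sentence-by-term weight matrix
    scores = np.asarray(tfidf.mean(axis=1)).ravel()     # mean TF-IDF weight per sentence
    top = sorted(np.argsort(scores)[::-1][:2])          # top-2 sentences, original order
    print(" ".join(sentences[i] for i in top))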

Paper 15: An Energy-aware Facilitation Framework for Scalable Social Internet of Vehicles

Abstract: The Internet of Things (IoT) has evolved into a more promising service-provisioning paradigm, namely the Social Internet of Things (SIoT). The Social Internet of Vehicles (SIoV) comprises a multitude of components from existing Vehicular Ad-Hoc Networks (VANETs), such as OBUs, RSUs, and cloud devices, all of which require energy to function properly. It is speculated that connected devices will surpass the 40-billion mark in 2022, of which ITS-related devices will constitute a significant part. The ever-increasing number of components increases communication hopping, which leads to an immense escalation in energy consumption, and energy consumption at the object level also grows with individual communication, storage, and processing. Existing research in SIoV focuses on providing state-of-the-art services and applications, while the significant goal of energy efficiency is largely ignored. Extensive research is therefore needed to produce an energy-efficient framework for a scalable SIoV system that meets the future requirements of ITS. Consequently, this study proposed, simulated, and evaluated a scheme for energy-aware, efficient deployment of RSUs. The proposed scheme is based on network energy, data acquisition energy, and data processing energy. To achieve energy efficiency, the traveling salesman problem (TSP) solved with an ant colony optimization (ACO) algorithm is utilized. The experiments were performed in an urban scenario with different numbers of RSUs, and their outcomes exhibited promising results in energy gain and energy consumption, with implications for society and consumers at large.

Author 1: Abdulwahab Ali Almazroi
Author 2: Muhammad Ahsan Qureshi

Keywords: Social Internet of Vehicles (SIoV); energy optimization; Travel Sales Person (TSP) problem; Ant Colony Optimization (ACO)

PDF

Paper 16: The Role of Ontologies through the Lifecycle of Virtual Reality based Training (VRT) Development Process: A Review Study

Abstract: The size of learning content continually challenges education and training providers. Virtual Reality (VR) has emerged as a promising technology to facilitate knowledge acquisition and skill transfer in a variety of sectors. The main challenge with this technology is the increasing cost, time, effort, and resources needed to design Virtual Reality based Training (VRT) applications as educational content. To fill such gaps, the ontology approach was introduced to support VR development. This review therefore investigates how ontologies have been applied throughout the life cycle of the VR development process. Accordingly, articles from 2015 onwards have been explored. Findings show that VR developers do not incorporate ontologies in all phases of the VR methodology life cycle, but only cover some phases, such as creation and implementation. Creating novel solutions without a complete methodology results in a long development process and an ineffective product. This can pose serious dangers in real life, especially when VRT targets fields in which fine details are vital for saving lives, such as healthcare. This research thus proposes methodological guidance on designing VR applications with an ontology approach throughout all phases of the VR construction life cycle.

Author 1: Youcef Benferdia
Author 2: Mohammad Nazir Ahmad
Author 3: Mushawiahti Mustafa
Author 4: Mohd Amran Md Ali

Keywords: Virtual reality; ontology; methodology; training and learning

PDF

Paper 17: Components and Indicators of Problem-solving Skills in Robot Programming Activities

Abstract: The objective of this research was to study the components and indicators of problem-solving skills in robot programming activities for high school students. This was done through second-order confirmatory factor analysis (CFA) of behavioral assessment data on the robot programming activities of 320 students from specialized science schools. The research revealed that problem-solving skills in robot programming activities comprise five components and 15 indicators. All the components were tested for consistency using CFA statistics with the support of the RStudio program. The model was found to be consistent with the empirical data, with Chi-Square = 98.273, df = 80.000, p-value = 0.081, GFI = 0.961, NFI (TLI) = 0.924, CFI = 0.985, RMSEA = 0.027, RMR = 0.007. This indicates that all the identified components and indicators are involved in the problem-solving skills exhibited in the robot programming activities of high school students.

Author 1: Chacharin Lertyosbordin
Author 2: Sorakrich Maneewan
Author 3: Daruwan Srikaew

Keywords: Components; indicators; problem solving; robot programming

PDF

Paper 18: A Hybrid Ensemble Word Embedding based Classification Model for Multi-document Summarization Process on Large Multi-domain Document Sets

Abstract: Contextual text feature extraction and classification play a vital role in the multi-document summarization process. Natural language processing (NLP) is one of the essential text mining tools used to preprocess and analyze large document sets. Most conventional single-document feature extraction measures ignore the contextual relationships among the different contextual feature sets in the document categorization process. Also, conventional word embedding models such as TF-IDF, ITF-IDF, and GloVe are difficult to integrate into the multi-domain feature extraction and classification process due to high misclassification rates and large candidate sets. To address these concerns, an advanced multi-document summarization framework was developed and tested on a number of large training datasets. In this work, a hybrid multi-domain GloVe word embedding model and a multi-document clustering and classification model were implemented to improve multi-document summarization for multi-domain document sets. Experimental results show that the proposed multi-document summarization approach improves on existing models in terms of accuracy, precision, recall, F-score, and run time (ms).

Author 1: S Anjali Devi
Author 2: S Sivakumar

Keywords: Word embedding models; text classification; multi-document summarization; contextual feature similarity; natural language processing

PDF

Paper 19: Integration of Value Co-creation into the e-Learning Platform

Abstract: The e-learning platform is a technology used in most academic institutions, providing services as an alternative to conventional methods. Previous studies have primarily focused on the use and acceptance of e-learning among consumers and on developing the platform's tangible attributes. Platform attributes should be available to engage all users and, if well used, lead to innovative ideas and improved value offerings. Therefore, this study explores the service science perspective on co-creation for an e-learning platform. The concepts of Service-Dominant Logic and value co-creation are adopted to extract the elements and factors that are collectively applied to the model. These concepts illustrate how user value is co-created through the value propositions on the platform and the value drivers for users. The findings identify the components of the platform's value proposition: enrichment, interaction, personalization, and environment. Meanwhile, the components of the value drivers for actors are engagement, resources, experience, and goals. The proposed components are then used to develop an e-learning conceptual model. A service-driven model of e-learning will be a significant input for developing an effective platform that provides co-creation opportunities to its users. Future research will identify the critical features available on the e-learning platform from the users' point of view.

Author 1: Eliza Annis Thangaiah
Author 2: Ruzzakiah Jenal
Author 3: Jamaiah Yahaya

Keywords: e-Learning; co-creation; S-D Logic; value propositions; value drivers; actors

PDF

Paper 20: An Efficient Aspect based Sentiment Analysis Model by the Hybrid Fusion of Speech and Text Aspects

Abstract: Aspect-based Sentiment Analysis (ABSA) is considered a challenging task in the speech domain, as it requires the fusion of acoustic and linguistic features for information retrieval and decision making. Existing studies on speech are limited to speech and emotion recognition. The main objective of this work is to combine acoustic features in speech with linguistic features in text for ABSA. A deep learning language model is implemented for acoustic feature extraction from speech, and different text feature extraction techniques are used for aspect extraction: trained lexicons, a Latent Dirichlet Allocation (LDA) model, a rule-based approach, and an Efficient Named Entity Recognition (E-NER) guided dependency parsing approach. Sentiment with respect to the extracted aspects is analyzed using natural language processing (NLP) techniques. The experimental results prove the effectiveness of hybrid-level fusion, yielding improvements of 5.7% WER and 3% CER compared with the traditional baseline individual linguistic and acoustic feature models.

Author 1: Maganti Syamala
Author 2: N. J. Nalini

Keywords: Acoustic; aspect-based sentiment analysis; decision making; emotion; extraction; hybrid; lexicon; linguistic; natural language processing; speech

PDF
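
Of the aspect-extraction variants listed in the abstract, the LDA route is easy to sketch: topics learned over review sentences supply candidate aspect terms. The toy example below uses gensim and made-up tokens, and is only one plausible configuration, not the authors' setup.

    from gensim import corpora
    from gensim.models import LdaModel

    texts = [["battery", "life", "short"],
             ["screen", "quality", "great"],
             ["battery", "drains", "fast"]]

    dictionary = corpora.Dictionary(texts)
    corpus = [dictionary.doc2bow(t) for t in texts]
    lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10, random_state=0)

    # The top words of each topic serve as candidate aspect terms.
    for topic_id in range(2):
        print(topic_id, lda.show_topic(topic_id, topn=3))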

Paper 21: Evaluating Chinese Potential e-Commerce Websites based on Analytic Hierarchy Process

Abstract: China has recently become the largest e-commerce market in the world. Following the technological revolution and its widespread adoption, as of December 2020 about 782.41 million people engage in e-commerce through modern electronic devices such as smartphones and computers. E-commerce is a branch of business administration conducted electronically over the internet, with the aim of carrying out buying and selling operations. This article applies the Analytic Hierarchy Process (AHP) to evaluate three potential Chinese e-commerce websites: JD, TAOBAO, and SUNING. We divide our model into three levels: the goal level, the strategic level, and the criteria level. At the strategic level, we take two main factors into account: website (A1) and user (A2). At the criteria level, we consider six aspects: visiting speed (B1), website stability (B2), page ranking (B3), average person visits (B4), period of average person visits (B5), and users' comments (B6). All scores are normalized to the range 0-1 to compare the websites' performance. Our results show that TAOBAO is the best e-commerce website with a score of 0.8233 under our algorithm, JD is second with a score of 0.7895, and SUNING is worst with a score of 0.5955.

Author 1: Hemn Barzan Abdalla
Author 2: Liwei Wang

Keywords: Analytic hierarchy process (AHP); e-commerce; Chinese e-commerce websites; JD (JING DONG); TAOBAO; SUNING

PDF
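
A minimal AHP sketch, assuming an illustrative pairwise-comparison matrix for the two strategic factors (the judgments below are invented, not the paper's data): the priority weights are the normalized principal eigenvector.

    import numpy as np

    # Pairwise comparison of website (A1) vs user (A2); A[i, j] = importance of i over j.
    A = np.array([[1.0, 2.0],
                  [0.5, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    weights = principal / principal.sum()    # normalized priority vector
    print(weights)                           # -> [0.6667, 0.3333]

    # Each site's final score is then the weighted sum of its normalized
    # criterion scores, aggregated up the hierarchy.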

Paper 22: Detection Technique and Mitigation Against a Phishing Attack

Abstract: Wireless networking is a central part of our daily lives; everyone wants to be connected. Nevertheless, the massive progress in Wi-Fi trends and technologies leads most people to pay no attention to security issues, and detecting a fake access point remains a hard security problem in wireless networks. All currently used methods either require hardware installation, change the protocol, or need frame analysis, and they mainly focus on identifying a single digital attack. In this paper, we propose an admin-side method for detecting a rogue access point that works against multiple cyber-attacks, especially phishing. We shed light on detecting Wi-phishing (Evil Twin), deauthentication attacks, KARMA attacks, and advanced Wi-phishing attacks, and on differentiating them from normal packets. This is done by performing frame-type analysis in real time and analyzing various static and dynamic parameters: any change in the static features is considered an Evil Twin attack, and if the value of the dynamic parameters surpasses the threshold, this likewise reflects an Evil Twin. The detector has been tested experimentally and achieves an average accuracy of 94.40%, an average precision of 87.08%, and an average specificity of 96.39% across the five attack types.

Author 1: Haytham Tarek Mohammed Fetooh
Author 2: M. M. EL-GAYAR
Author 3: A. Aboelfetouh

Keywords: Rogue access point; phishing attacks; KARMA attack; social engineering; hacking

PDF

Paper 23: A PSNR Review of ESTARFM Cloud Removal Method with Sentinel 2 and Landsat 8 Combination

Abstract: Remote sensing images with high spatial and temporal resolution (HSHT) are crucial data sources for GIS land-use monitoring. Cloud cover is a typical problem when trying to obtain HSHT-resolution images. This study examines the effect of cloud-cover reduction using ESTARFM, a spatiotemporal image fusion technique. The Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) predicts the reflectance values of the cloud-covered region by merging satellite images of different resolutions; in this study it employs medium- and high-resolution satellite images. Using Sentinel 2 and Landsat 8, the Peak Signal-to-Noise Ratio (PSNR) statistical method is then used to evaluate ESTARFM. The PSNR describes ESTARFM's cloud-removal performance by comparing the similarity of the reference image with the reconstructed image; this approach was established to obtain high-quality HSHT images in remote sensing. Based on this study, Landsat 8 images whose clouds have been removed with ESTARFM may be classed as good: PSNR values of 21.8 to 26 back this up, and the ESTARFM result looks good on visual examination.

Author 1: Dietrich G. P. Tarigan
Author 2: Sani M. Isa

Keywords: Cloud removal; RS-Remote sensing; PSNR-Peak signal noise ratio; GIS-Geographic information system; spatiotemporal image fusion

PDF
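
For reference, the PSNR metric used in the evaluation can be computed as below; this sketch assumes 8-bit bands of equal shape and uses random arrays in place of the actual Landsat 8 data.

    import numpy as np

    def psnr(reference, reconstructed, max_value=255.0):
        ref = reference.astype(np.float64)
        rec = reconstructed.astype(np.float64)
        mse = np.mean((ref - rec) ** 2)
        if mse == 0:
            return float("inf")                     # identical images
        return 10.0 * np.log10(max_value ** 2 / mse)

    ref = np.random.randint(0, 256, (64, 64))       # stand-in for a cloud-free band
    rec = np.clip(ref + np.random.randint(-8, 9, ref.shape), 0, 255)
    print(f"PSNR: {psnr(ref, rec):.1f} dB")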

Paper 24: An Improved K-anonymization Approach for Preserving Graph Structural Properties

Abstract: Privacy risks are an important issue to consider when releasing network data, to protect personal information from potential attacks. Network data anonymization is a successful procedure used by researchers to prevent an adversary from revealing a user's identity; such an attack is called a re-identification attack. However, this is a tricky task, since the primary graph structure should be maintained as much as feasible during the anonymization process. Most existing solutions apply edge-perturbation methods directly, without any concern for the structural information of the graph, whereas preserving graph structure during anonymization requires keeping the most important knowledge/edges in the graph unmodified. This paper introduces a high-utility k-degree anonymization method that uses edge betweenness centrality (EBC) as a measure to identify the edges that play a central role in the graph. Experimental results showed that preserving these edges during the modification process leads the anonymization algorithm to better preserve the most important structural properties of the graph. The method also proved efficient at preserving community structure as a trade-off between graph utility and privacy.

Author 1: A. Mohammed Hanafy
Author 2: Sherif Barakat
Author 3: Amira Rezk

Keywords: Privacy; social networks; anonymization; edge-perturbation methods

PDF
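
The EBC step translates directly into a few lines of networkx; in this illustrative sketch (the built-in karate-club graph stands in for a real social network), the highest-centrality edges are the ones the anonymizer would shield from perturbation.

    import networkx as nx

    G = nx.karate_club_graph()                  # stand-in for a released social graph
    ebc = nx.edge_betweenness_centrality(G)     # {(u, v): centrality score}

    # Protect the most central edges from k-degree edge perturbation.
    protected = sorted(ebc, key=ebc.get, reverse=True)[:10]
    print("edges to preserve:", protected)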

Paper 25: Security Enhancement in Software Defined Networking (SDN): A Threat Model

Abstract: Software Defined Networking (SDN) has emerged as a technology that can replace prevalent vendor-based, proprietary, CLI-configured networking devices. SDN introduces application-based network control and provides various opportunities and challenges for research and innovation. Despite its many advantages and opportunities, security is a matter of concern for developers who want to invest in SDN. In this paper we analyze SDN security issues together with their countermeasures. We have generalized a threat model of four use cases that should cover the security requirements of SDN: (I) protecting controllers from applications, (II) inter-controller protection, (III) protecting the data plane or switches from the controller, and (IV) protecting controllers from malicious switches. We found that these SDN components are interrelated: if one is secure, another is already secure. We also compared SDN and traditional network security in terms of these four use cases and provide insights into protection mechanisms and security enhancements. A framework for the development of an SDN security application is presented, based on the Ryu controller. We believe that our threat model will help researchers and developers understand current security requirements and provide a ready reference for tackling vulnerabilities and threats in this area. Finally, we identify some open research problems and future research directions with a proposed security architecture.

Author 1: Pradeep Kumar Sharma
Author 2: S. S Tyagi

Keywords: Software defined networking (SDN); openflow; control plane; data plane; controller; programmability

PDF

Paper 26: A Comprehensive Framework for Big Data Analytics in Education

Abstract: With the adoption of cloud services for hosting knowledge-delivery systems in the educational domain, a surplus of education data is generated every day by current learning management systems. Such data carry certain complexities that pose significant challenges for existing database management and analytics. A review of existing approaches to educational data highlights that they do not offer a full-fledged analytics solution, leaving an open-ended problem. Therefore, the proposed system introduces a comprehensive framework that offers integrated transformation, data quality, and predictive analytics operations. The emphasis is on achieving distributed analytical operation over educational data in the cloud. Implemented using an analytical research methodology, the proposed system shows better analytical performance than frequently used educational data analytics approaches.

Author 1: Ganeshayya Shidaganti
Author 2: Prakash S

Keywords: Big data; data analytics; educational big data; predictive analytics; text mining; machine learning; education technology

PDF

Paper 27: A Systematic Mapping Study of Software Usability Studies

Abstract: Among software quality attributes, software usability is considered one of the vital factors in the software engineering literature. Software usability is the ability of users to understand, use, and learn a piece of software with ease. Due to the importance of usability in software quality, a considerable amount of literature has been published in the past decade, and a few review and survey studies have critically examined it. However, there is limited research covering systematic mapping studies of software usability. Mapping studies help in analyzing the general trends and research productivity in a research area. To fill this gap, this work critically examines the overall research productivity, demographics, trends, and challenges of software usability, with the objective of classifying the current contributions and trends in the area. We retrieved 9,874 research articles from six research databases, and 62 works were selected as primary studies using an evidence-based approach. The results of this mapping study show that software usability is an active research area, with a promising number of works published in the last decade (2011-2020). The current literature spans multiple article classes, of which investigative papers, model proposals, and evaluation papers are the most frequently published types, and experiments and theoretical validations are the most common validation techniques. In terms of application domains, the web, software development, and mobile applications are the most frequent settings for usability studies. Future usability studies should focus more on field studies as well as on usability testing of scientific software packages, and should also consider ethical issues in usability testing.

Author 1: Abdulwahab Ali Almazroi

Keywords: Software usability; usability study; systematic mapping study; systematic literature review; software engineering

PDF

Paper 28: Multimedia Transmission Mechanism for Streaming Over Wireless Communication Channel

Abstract: With the evolution of wireless communication technologies (i.e., 4G/5G), the explosion of multimedia content sharing has become an integral part of users' daily lives, and further growth in Quality of Service (QoS) and Quality of Experience (QoE) performance is expected. Multimedia service providers are therefore developing new technologies to offer higher-quality video streaming content along with the video compression standards demanded by receivers. Inventing a precise and efficient quality-based media transmission protocol will thus significantly help improve multimedia QoS over wireless networks. This comprehensive study discusses the progress of standard research work on multimedia transmission protocols for wireless communication networks. It also investigates the limitations of this literature and identifies challenging factors that play a significant role in maintaining superior signal quality for digital or video content transmission under heavy traffic conditions. The final section briefly presents crucial open research issues for developing a multimedia transmission model that can seamlessly communicate multimedia content irrespective of adverse traffic conditions.

Author 1: Shwetha M
Author 2: Yamuna Devi C R

Keywords: Multimedia transmission; video encoding; multimedia streaming; quality of service; quality of experience; video compression standards

PDF

Paper 29: A Systematic Literature Review on Regression Test Case Prioritization

Abstract: Test case prioritization (TCP) is deemed valid to improve testing efficiency, especially in regression testing, as retest all is costly. The TCP schedule the test case execution order to detect bugs faster. For such benefit, test case prioritization has been intensively studied. This paper reviews the development of TCP for regression testing with 48 papers from 2017 to 2020. In this paper, we present four critical surveys. First is the development of approaches and techniques in regression TCP studies, second is the identification of software under test (SUT) variations used in TCP studies, third is the trend of metrics used to measure the TCP studies effectiveness, and fourth is the state-of-the-art of requirements-based TCP. Furthermore, we discuss development opportunities and potential future directions on regression TCP. Our review provides evidence that TCP has increasing interests. We also discovered that requirement-based utilization would help to prepare test cases earlier to improve TCP effectiveness.

Author 1: Ani Rahmani
Author 2: Sabrina Ahmad
Author 3: Intan Ermahani A. Jalil
Author 4: Adhitia Putra Herawan

Keywords: Software testing; test case prioritization; regression testing; requirements-based test case prioritization; software engineering

PDF

Paper 30: SNR based Energy Efficient Communication Protocol for Emergency Applications in WBAN

Abstract: Continuous remote monitoring of a patient's health condition in a dynamic environment poses many challenges, which multiply with the size of the body area sensor network. One such challenge is the energy efficiency of the sensors. It is very important to maintain a long lifetime for all nodes, especially those that participate in communicating vital signals from one network to another towards the base station. In this work, an energy-efficient communication protocol for the wireless body area network (WBAN) is proposed. The essential characteristics of the protocol are random deployment of nodes, formation of clusters, selection of the node with the highest signal-to-noise ratio (SNR) as cluster head (CH), random rotation of CHs within each cluster, and so on. The developed algorithm was simulated in MATLAB by varying the number of nodes and networks. The obtained results were compared with some of the most recent and relevant existing works, showing an enhancement in network lifetime of 19.5%, in throughput of 12.61%, and in average remaining energy of 57.21%.

Author 1: K. Viswavardhan Reddy
Author 2: Navin Kumar

Keywords: WBAN; energy efficiency; emergency applications; protocol; remote monitoring

PDF
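
A toy sketch of the cluster-head rule described above: within each cluster, the live node with the highest SNR is elected CH for the round. The node fields and values are hypothetical, not taken from the paper.

    import random

    def elect_cluster_head(cluster):
        # cluster: list of nodes with 'id', 'snr_db' and residual 'energy'
        alive = [n for n in cluster if n["energy"] > 0.0]
        return max(alive, key=lambda n: n["snr_db"]) if alive else None

    cluster = [{"id": i,
                "snr_db": random.uniform(5.0, 30.0),
                "energy": random.uniform(0.2, 1.0)} for i in range(8)]

    head = elect_cluster_head(cluster)
    print(f"cluster head: node {head['id']} (SNR {head['snr_db']:.1f} dB)")
    # Re-running the election each round rotates the CH as channel conditions
    # and residual energy change.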

Paper 31: Critical Success Factor of Trusted Elements for Mobile Health Records Management: A Review of Conceptual Models

Abstract: Health information technologies such as Mobile Health Record Management (MHRM) and the Electronic Health Record (EHR) depend on each other in maintaining patients' medical records. To maintain trust in health information technology development, the relationships among patients, providers, and clinicians must be sustained. The present study examines the importance of the trusted elements of mobile health (mHealth) record management implementation in government hospitals. The Covid-19 pandemic has forced the adoption of technological approaches in healthcare delivery, and technology has a great impact on an industry that deals with confidential data and human life. The misuse of mobile technology in records management leads practitioners and communities towards poor quality, security problems, and meaningless data. To meet this objective, a conceptual framework has been developed that identifies the trust elements for the implementation of mHealth apps in hospitals. Secondary data have been used and analysed to justify the objectives of the study, and the findings and discussion were developed by correlating the existing literature with the analysed data. Five trusted elements for MHRM were found: governance; professional skills and competency; mobile health records management; sustainability; and technology. This paper supports the use of electronic health records in health organizations for trusted and timely access to data. Incorporating these trust-element success factors avoids petty problems and inefficient processes while giving users convenient and instant access to patients' records.

Author 1: Fatin Nur Zulkipli
Author 2: Nurussobah Hussin
Author 3: Saiful Farik Mat Yatin
Author 4: Azman Ismail

Keywords: Electronic health record; mobile health record; records management; health information technology; records trust

PDF

Paper 32: Non-linear Multiclass SVM Classification Optimization using Large Datasets of Geometric Motif Image

Abstract: The Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel is one of the methods most frequently applied to nonlinear multiclass image classification. To overcome the constraints posed by a large image dataset divided into nonlinear multiple classes, the SVM-RBF classification process is carried out in three stages: 1) determining the feature extraction algorithms and feature dimensions to use, 2) determining the appropriate kernel and parameter values, and 3) using the correct multiclass method for the training and testing processes. The OaO, OaA, and DAGSVM multiclass methods were tested on a large dataset of batik motif images, whose geometric motifs have a variety of patterns and colors within each class and contain similar patterns across classes. DAGSVM has the advantage in classification accuracy, at 91%, but takes longer during the training and testing processes.

Author 1: Fikri Budiman
Author 2: Edi Sugiarto

Keywords: Geometric motif; image classification; multiclass; non-linear; large dataset

PDF
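
A hedged scikit-learn sketch of the multiclass strategies compared above, with a synthetic feature matrix standing in for the extracted batik-motif features (stage 1) and invented C/gamma values (stage 2):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=600, n_features=64, n_informative=20,
                               n_classes=5, random_state=0)   # stand-in features
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    base = SVC(kernel="rbf", C=10.0, gamma="scale")   # illustrative kernel parameters
    for name, clf in [("OaO", OneVsOneClassifier(base)),
                      ("OaA", OneVsRestClassifier(base))]:
        clf.fit(X_tr, y_tr)
        print(name, "accuracy:", clf.score(X_te, y_te))

    # DAGSVM trains the same pairwise SVMs as OaO but evaluates them along a
    # decision DAG at test time; scikit-learn has no built-in DAGSVM.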

Paper 33: Recent Progress, Emerging Techniques, and Future Research Prospects of Bangla Machine Translation: A Systematic Review

Abstract: Machine Translation (MT), the automatic translation of texts or documents from a source language to a target language without human intervention, has gained popularity in the growing, information-technology-driven era of globalization. Bangla is a major language, and several MT studies with different tools and techniques have been conducted over the last two decades. Considering the importance of the Bangla language and its prospects in MT studies, this study provides a comprehensive review of existing Bangla MT studies to meet the timely demand. Specifically, the basic ideas of the different MT methods (rule-based, example-based, statistical, neural, and hybrid) and MT performance measures are first presented as background. Then an overview of the Bangla language and a brief description of the available Bangla-English corpora are provided. Next, the existing Bangla MT studies are described by category in a common strategic fashion, creating a valuable reference for current researchers in the field that is also suitable for non-expert readers; the performances achieved by individual methods are also compared in tabular form. Finally, a number of future research prospects are drawn from the studies, encouraging researchers and practitioners to develop a better and more comprehensive Bangla MT system.

Author 1: M. A. H. Akhand
Author 2: Arna Roy
Author 3: Argha Chandra Dhar
Author 4: Md Abdus Samad Kamal

Keywords: Machine Translation (MT); Bangla language; rule-based MT; example-based MT; statistical MT; neural MT; hybrid MT

PDF

Paper 34: Classification of Breast Cancer Cell Images using Multiple Convolution Neural Network Architectures

Abstract: Breast cancer is a malignant tumor that affects women. It is the most prevalent cancer in women, affecting about 10% of all women at some point in their lives. Breast cancer begins to develop in the lobules or ducts of the cells. Early detection and prevention are the best ways to stop this cancer from spreading. In this study, five Convolutional Neural Network (CNN) models are used to process image data of breast cells. The AlexNet, InceptionV3, GoogLeNet, VGG19, and Xception models are used to classify Invasive Ductal Carcinoma (IDC) and Non-Invasive Ductal Carcinoma (Non-IDC) cells. The models are trained and tested over different numbers of epochs to record the learning rate. The study observes that with more epochs, the data loss decreases and accuracy increases. The accuracy of InceptionV3 and Xception is 92.48% and 90.72%, respectively. Likewise, VGG19 and AlexNet have fairly close accuracies of 94.83% and 96.74%. However, GoogLeNet dominates the other implemented models with the highest accuracy of 97.80%. The GoogLeNet model performs with high accuracy and precision in detecting the IDC cells responsible for breast cancer.

Author 1: Zarrin Tasnim
Author 2: F. M. Javed Mehedi Shamrat
Author 3: Md Saidul Islam
Author 4: Md.Tareq Rahman
Author 5: Biraj Saha Aronya
Author 6: Jannatun Naeem Muna
Author 7: Md. Masum Billah

Keywords: Breast cancer; IDC; non-IDC; AlexNet; VGG19; InceptionV3; GoogLeNet; Xception; accuracy

PDF
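
An illustrative transfer-learning setup for one of the five models (InceptionV3) in Keras; the input shape, frozen base, and binary IDC/non-IDC head are assumptions for the sketch, not the paper's published configuration.

    import tensorflow as tf

    base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                             input_shape=(299, 299, 3))
    base.trainable = False                  # freeze the ImageNet feature extractor

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # IDC vs non-IDC
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=20)
    # Per the abstract, more epochs reduce loss and raise accuracy.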

Paper 35: A Multi-dimensional Credibility Assessment for Arabic News Sources

Abstract: Due to advances in social media, it has become the most popular means of propagating news. Many news items are published on social media platforms such as Facebook, Twitter, and Instagram. Facebook is a huge source for spreading and consuming daily news, but it produces news about domains (art, health, education, sport, politics, etc.) in an unstructured way. Thus, this paper presents a model to assess the credibility of news sources in a social context, in a particular domain and over a particular period of time, from a multidimensional perspective. Based on these dimensions of credibility, the model is designed, implemented, and evaluated using machine learning algorithms and Arabic NLP approaches to compute a credibility score for Arabic news sources on Facebook. In addition, the study visualizes the scores at different levels of data analysis to make the assessment more precise and trustworthy. The proposed model has been implemented and tested on real Arabic news sources in specific domains over a period of time to produce a credibility score for each one, so that users can view these scores and choose the most credible news sources. The credibility assessment model is more specific and accurate for a specific domain and time, with an accuracy of 98%.

Author 1: Amira M. Gaber
Author 2: Mohamed Nour El-din
Author 3: Hanan Moussa

Keywords: Information credibility; social media; machine learning; Arabic Natural Language Processing (ANLP)

PDF

Paper 36: Online Programming Semantic Error Feedback using Dynamic Template Matching

Abstract: Much automated computer programming feedback is generated from static template matching, where the templates must be provided by experts. This research focuses on developing automated online programming semantic error feedback using dynamic template matching models built from students' correct answer submissions. Research using dynamic template matching models is currently scarce because such templates are complex and vary in programming structure. To formulate the dynamic templates, a new automated feedback model using front and rear n-gram sequences as the matching technique was developed, providing feedback to students based on the missing structure of the best-matched template. We tested 60 students' Java programming answers on three different types of programming questions, with dynamic templates chosen randomly for each student. An expert was assigned to manually match each student's answer against the three randomly chosen templates. The results show that 80% of the best-matched templates selected by the technique were also chosen by the expert. Based on the matched template, the student is given feedback indicating the possible next programming instruction that can be included in the answer to make it correct, as achieved by the template. This model can help automatically assist students in answering computational programming exercises.
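
A rough reconstruction of the front/rear n-gram matching idea, not the authors' exact algorithm: each correct-answer template is scored by n-gram overlap within the first and last k tokens of the student's answer, and the best-matched template drives the feedback. The sample strings and the k parameter are illustrative.

```python
# Sketch: pick the best-matched correct-answer template via front/rear n-grams.
def ngrams(tokens, n=2):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def front_rear_score(answer, template, n=2, k=8):
    """n-gram overlap within the first k tokens (front) and last k tokens (rear)."""
    front = len(ngrams(answer[:k], n) & ngrams(template[:k], n))
    rear = len(ngrams(answer[-k:], n) & ngrams(template[-k:], n))
    return front + rear

def best_template(answer, templates, n=2, k=8):
    tokens = answer.split()
    return max(templates, key=lambda t: front_rear_score(tokens, t.split(), n, k))

student = "int x = sc.nextInt(); if (x > 0)"
templates = [
    "int x = sc.nextInt(); if (x > 0) System.out.println(x);",
    "for (int i = 0; i < n; i++) sum += i;",
]
# Feedback would then point at structure present in the matched template but
# missing from the student's answer (here, the println statement).
print(best_template(student, templates))
```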

Author 1: Razali M. K. A
Author 2: S. Suhailan
Author 3: Mohamed M. A
Author 4: M. D. M. Sufian

Keywords: Dynamic; feedback; online programming; semantic error; template matching

PDF

Paper 37: Comparing MapReduce and Spark in Computing the PCC Matrix in Gene Co-expression Networks

Abstract: Correlating gene expression profiles across multiple samples and identifying inter-gene interactions are critical techniques for co-expression networking. Because computing the Pearson Correlation Coefficient (PCC) matrix is highly processing-intensive, it often takes too much time to accomplish. Therefore, in this work, Big Data techniques, including MapReduce and Spark, have been employed in a cloud environment to calculate the PCC matrix and find the dependencies between genes measured in high-throughput microarrays. A comparison of the running time of each phase in the MapReduce and Spark approaches has been conducted. Both techniques can dramatically speed up the computation, allowing users to perform highly intensive processing. However, Spark yielded better performance than MapReduce, as it performs the processing in the main memory of the worker nodes and avoids unnecessary I/O operations with the disks. Spark yielded an 80-fold speedup for calculating the PCC matrix of 22,777 genes, whereas MapReduce attained barely an 8-fold speedup.
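
A minimal PySpark sketch of the Spark side of this comparison: computing a PCC matrix over gene-expression columns with MLlib's built-in Pearson correlation. The input path and file layout are assumptions.

```python
# Sketch: Pearson correlation matrix over gene columns with Spark MLlib.
from pyspark.sql import SparkSession
from pyspark.mllib.stat import Statistics
from pyspark.mllib.linalg import Vectors

spark = SparkSession.builder.appName("gene-pcc").getOrCreate()
sc = spark.sparkContext

# Assumed input: one sample per line, comma-separated expression values,
# one column per gene (so corr() yields a gene-by-gene PCC matrix).
rows = (sc.textFile("hdfs:///data/expression.csv")
          .map(lambda line: Vectors.dense([float(v) for v in line.split(",")])))

# Spark keeps intermediate results in executor memory, which is the main
# reason it outpaces a disk-bound MapReduce pipeline for this workload.
pcc_matrix = Statistics.corr(rows, method="pearson")
print(pcc_matrix.shape)
```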

Author 1: Nagwan Abdel Samee
Author 2: Nada Hassan Osman
Author 3: Rania Ahmed Abdel Azeem Abul Seoud

Keywords: Pearson's correlation; Hadoop; MapReduce; Spark; gene co-expression networks; GCN; Affymetrix microarrays

PDF

Paper 38: Analysis of Different Attacks on Software Defined Network and Approaches to Mitigate using Intelligent Techniques

Abstract: The detection of DDoS (Distributed Denial of Service) attacks is an essential topic in network security. DDoS attacks cause network services to become unavailable by repeatedly flooding servers with unwanted traffic. The volume, magnitude, and complexity of these attacks have increased dramatically as a result of low-cost Internet connections and easily available attack tools. Both Software Defined Networking (SDN) and Deep Learning (DL) have recently found a number of practical and fascinating applications in industry and academia. SDN enables centralized management, a global view of the overall network, and configurable control planes, allowing network devices to adapt to diverse applications. DL-based approaches outperform classic machine learning techniques on diverse classification problems, while SDN offers better network monitoring and security of the managed network compared to traditional networks. By inheriting the non-linearity of neural networks, DL-based approaches improve feature extraction and reduction from high-dimensional datasets in an unsupervised way. This article presents an overview of deep learning algorithms for detecting distributed denial of service attacks in software-defined networks. Furthermore, an SDN environment is simulated in Mininet using the Ryu controller. In addition, the mitigation method of each surveyed paper is examined.

Author 1: P. Karthika
Author 2: A. Karmel

Keywords: Distributed Denial of Service (DDoS); Software Defined Networking (SDN); attack detection; Mininet; OpenFlow; mitigation; machine learning; deep learning

PDF

Paper 39: Multi-objective Batch Scheduling in Collaborative Multi-product Flow Shop System by using Non-dominated Sorting Genetic Algorithm

Abstract: Batch scheduling is a well-known topic that has been studied widely with various objectives, methods, and circumstances. Unfortunately, batch scheduling in a collaborative flow shop system is still unexplored. All batch scheduling studies found were in a single flow shop system where all arriving jobs come through a single door. In a collaborative flow shop system, every flow shop handles its own customers, although joint production among flow shops to improve efficiency is possible. This work aims to develop a novel batch scheduling model for a collaborative multi-product flow shop system. Its objective is to minimize makespan and total production cost. The model is developed using the non-dominated sorting genetic algorithm (NSGA-II), which has proven itself in many multi-objective optimization models. The model is then compared with non-collaborative models that use NSGA-II and the adjacent pairwise interchange algorithm. Based on the simulation results, the proposed model performs better than the existing models in minimizing makespan and total production cost. The makespan of the proposed model is 10 to 17 percent lower than that of the existing non-collaborative models, and its total production cost is 0.3 to 3.5 percent lower.
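
A compact NSGA-II sketch using the pymoo library with two objectives standing in for makespan and total production cost; the objective functions below are invented placeholders, not the paper's scheduling model.

```python
# Sketch: two-objective NSGA-II with pymoo (toy objectives, not the paper's model).
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class BatchSchedule(ElementwiseProblem):
    def __init__(self):
        # x: batch-size fractions for 5 products, each in [0.1, 1.0] (assumed)
        super().__init__(n_var=5, n_obj=2, xl=0.1, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        makespan = np.sum(1.0 / x)    # invented: smaller batches -> longer makespan
        cost = np.sum(x ** 2)         # invented: larger batches -> higher cost
        out["F"] = [makespan, cost]

res = minimize(BatchSchedule(), NSGA2(pop_size=40), ("n_gen", 100), seed=1)
print(res.F[:5])  # a sample of the non-dominated (Pareto) front
```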

Author 1: Purba Daru Kusuma

Keywords: Batch scheduling; flow shop; NSGA II; collaborative system

PDF

Paper 40: Power Loss Minimization using Optimal Power Flow based on Firefly Algorithm

Abstract: Conventional methods are commonly used to solve optimal power flow problems in power system networks. However, conventional methods are not suitable for solving large and non-linear optimal power flow problems, as they are influenced by initialization values and are more likely to be trapped in local optima. Hence, heuristic optimization methods such as the Firefly Algorithm have been widely implemented to overcome the limitations of conventional methods. These methods often use a random strategy that can provide better solutions, avoiding entrapment in local optima while achieving the global optimum. In this study, load flow analysis was performed using the conventional Newton-Raphson technique to calculate the real power loss. Next, the Firefly Algorithm was implemented to optimize the control variables for minimizing the real power loss of the transmission system. Generator bus voltage magnitudes, transformer tap settings, and generator output active power were taken as the control variables to be optimized. The effectiveness of the proposed Firefly Algorithm was then tested on the IEEE 14-bus and 30-bus systems using MATLAB software. The simulated results were analyzed and compared with the results of Particle Swarm Optimization in terms of consistency and execution time. The Firefly Algorithm successfully produced minimum real power loss with faster computational time than Particle Swarm Optimization. For the IEEE 14-bus system, the active power loss for the Firefly Algorithm is 6.6222 MW and the calculation time is 18.2372 seconds. Therefore, optimal power flow based on the Firefly Algorithm is a reliable technique with which the optimal settings with respect to power transmission loss can be determined effectively.
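
The core firefly update rule, shown here on a toy objective standing in for the real power loss; in the study, the decision vector would hold generator bus voltages, transformer tap settings, and generator active power, and the loss would come from a Newton-Raphson load flow. All parameter values below are illustrative.

```python
# Sketch: firefly movement x_i += beta0*exp(-gamma*r^2)*(x_j - x_i) + alpha*eps.
import numpy as np

rng = np.random.default_rng(0)
n, dim, beta0, gamma, alpha = 20, 4, 1.0, 1.0, 0.1
X = rng.uniform(-5, 5, (n, dim))           # candidate control-variable vectors
loss = lambda x: np.sum(x ** 2)            # toy stand-in for real power loss

for _ in range(100):
    f = np.array([loss(x) for x in X])
    for i in range(n):
        for j in range(n):
            if f[j] < f[i]:                # brighter (lower-loss) firefly attracts i
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=dim)
                f[i] = loss(X[i])
print(min(loss(x) for x in X))
```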

Author 1: Chia Shu Jun
Author 2: Syahirah Abd Halim
Author 3: Hazwani Mohd Rosli
Author 4: Nor Azwan Mohamed Kamari

Keywords: Optimal power flow; firefly algorithm; real power loss; control variables

PDF

Paper 41: Distance Education during the COVID-19 Pandemic: The Impact of Online Gaming Addiction on University Students’ Performance

Abstract: The COVID-19 pandemic has forced most universities worldwide to convert to distance education to ensure that the educational process remains uninterrupted. Pandemic-related confinement orders have led students to become more engaged with online games; for a minority of students, excessive playing can become problematic and addictive. Few studies have investigated the long-term effect of COVID-19 on game addiction among university students. The present study investigates the changes in online game addiction rates between May 2020 and May 2021 and aims to determine the impact of playing online games on students' academic performance. It also examines the demographic factors associated with video game addiction. A sample (n = 418) of students from one private university in the UAE was randomly selected, and the data were analyzed. The study determined a reduction in online game addiction levels in the second year of the pandemic compared with the first year. Gender and academic level were the predominant factors significantly related to online game addiction. It was also found that digital game addiction is positively associated with academic performance.

Author 1: Mahmoud Abou Naaj
Author 2: Mirna Nachouki

Keywords: Distance education; COVID-19 pandemic; game addiction; students’ performance

PDF

Paper 42: Enhancing Business Process Modeling with Context and Ontology

Abstract: A business process is a sequence of events and tasks that encompasses actions and people. Therefore, a company that pays close attention to its business processes has to clearly identify and define them and the procedures relating them. However, with the exponential evolution of ubiquitous computing, exploiting the information spread across different devices has become essential to further improving business processes. Hence, in this paper we present a new approach to business process modeling based on context-awareness and ontology. We propose a set of meta-models for the elements we consider most important, taking the context into account when modeling the business process. To validate our approach, we present a concrete case study of a transport system as proof of the applicability and utility of the model.

Author 1: Jamal EL BOUROUMI
Author 2: Hatim GUERMAH
Author 3: Mahmoud NASSAR

Keywords: Business process; business process modeling; context; context-awareness; ontology

PDF

Paper 43: Hybrid Decision Support System Framework for Leaf Image Analysis to Improve Crop Productivity

Abstract: Crop disease is one of the major problems of agriculture in India. Identifying a disease and classifying its type is most important, and can be made possible using deep learning techniques. This requires a verified dataset consisting of healthy and diseased leaf images of the crops. The proposed model uses a hybrid approach that integrates a VGG16 classifier with an attention mechanism, a transfer learning approach, and a dropout operation. Using a rice disease dataset, the proposed approach achieves a training accuracy of 96.45 percent, a training loss of 0.09, and a validation loss of 0.44. The dataset, collected from the PlantVillage project for rice leaves, consists of 4955 images covering the Brown Spot, Healthy, Hispa, and Leaf Blast classes. The attention mechanism focuses mainly on part of the image rather than the whole image, using a glimpse ratio of 3:1. The traditional method of detecting crop diseases needs great experience and expert knowledge of the field, which is time-consuming, ineffective, and costly. In this study, Deep Convolutional Neural Networks (DCNN) and Transfer Learning with Attention models are used to detect diseases associated with rice plants without overfitting the model.

Author 1: Meeradevi
Author 2: Monica R Mundada

Keywords: Deep learning; activation function; attention mechanism; dropout operation; transfer learning; VGG16

PDF

Paper 44: Weighted Clustering for Deep Learning Approach in Heart Disease Diagnosis

Abstract: An approach to heart disease diagnosis based on weighted clustering is presented in this paper. The existing diagnosis approach reaches a decision by correlating the feature vector of a querying sample with the knowledge available to the system. As the learning data available to the system increases, the search overhead increases, which tends to delay decision making. The linear mapping is improved by clustering the large database of information. However, clustering accuracy is observed to degrade as the training information grows and the characteristics of the learning features vary. To overcome the issue of accurate clustering, a weighted clustering approach based on a gain factor is proposed. This approach updates the cluster information based on dual-factor monitoring of the distance and gain parameters. The presented approach shows an improvement in mining performance in terms of accuracy, sensitivity, and recall rate.

Author 1: Bhandare Trupti Vasantrao
Author 2: Selvarani Rangasamy

Keywords: Learning approach; weighted clustering; heart disease diagnosis; gain factor

PDF

Paper 45: Research Efforts and Challenges in Crowd-based Requirements Engineering: A Review

Abstract: Eliciting software system development requirements is a challenging task, as the information comes from various sources. The most constructive source is the stakeholders of the system to be developed. It is critical yet time-consuming to capture the essential requirements for realizing a reliable and workable software system. The crowd-based Requirements Engineering (crowd-based RE) approach adapts the crowdsourcing technique to reach an extensive range of stakeholders and save time, especially for generic systems with no clearly defined stakeholders. This paper presents current research efforts and challenges in crowd-based RE. A systematic literature review method is adopted to explore the literature based on two specific research questions. The first question aims at identifying research efforts on crowd-based RE, and the second focuses on the main challenges discovered in pursuing crowd-based RE. The findings from the literature review show that many efforts have been made to explore and further improve crowd-based RE. This paper provides a foundation for research on improving crowdsourcing techniques for the benefit of requirements engineering.

Author 1: Rosmiza Wahida Abdullah
Author 2: Sabrina Ahmad
Author 3: Siti Azirah Asmai
Author 4: Seok-Won Lee
Author 5: Zarina Mat Zain

Keywords: Crowd-based requirement engineering; requirements engineering; requirements elicitation; software engineering; crowdsourcing; review

PDF

Paper 46: Application of Convolutional Neural Networks for Binary Recognition Task of Two Similar Industrial Machining Parts

Abstract: Misclassifying parts in a small-medium manufacturing enterprise can lead to serious consequences. Manual inspection, as currently practiced, allows for compromises in product traceability, since part-number inspection is not digitally recorded. Due to this lack of modern traceability, customers receive incorrect parts and the same incidents continue to occur. It is essential to transform manual inspections into digital, automated ones. AI-based technologies have recently been employed to enable smart and intelligent recognition systems for industrial machining parts. Convolutional Neural Networks (CNN) are widely used for image recognition tasks and are gaining popularity among deep learning algorithms. In this paper, a CNN model is used to perform binary recognition of two similar industrial machining parts. The model has been trained to recognise two classes of machining parts, Part A and Part B. The dataset used to train the model includes both original and augmented images, with a total of 2447 images across both classes. Performance metrics were measured during the training process, and 10 experiments were conducted to evaluate the model. The test results reveal that the CNN model achieves 98% mean accuracy, 97.1% precision for Part A, 99% precision for Part B, and an AUC value of 0.982. The results demonstrate the effectiveness of CNN-based part recognition, offering an effective alternative and a compelling method for quality assurance in small-medium manufacturing enterprises.

Author 1: Hadyan Hafizh
Author 2: Amir Hamzah Abdul Rasib
Author 3: Rohana Abdullah
Author 4: Mohd Hadzley Abu Bakar
Author 5: Anuar Mohamed Kassim

Keywords: Convolutional neural networks; binary recognition; machining parts; deep learning

PDF

Paper 47: Design and Evaluation of an Engagement Framework for e-Learning Gamification

Abstract: Recently, gamification in educational software development to improve student engagement and performance has become prevalent. Gamification is used to counter attrition and dropout issues in e-learning. A handful of methods for gamifying e-learning systems are presented in the literature. However, these methods lack consistency: the number and types of game elements they use vary. In addition, there is no engagement framework to guide the application of game elements to e-learning systems. Therefore, this paper provides insights into gamification and how it is used in e-learning systems. The study then proposes and evaluates an engagement framework that guides developers in adding game elements to e-learning to improve student engagement and performance. The framework consists of three components: game elements, learning activities, and engagement factors. Two experts evaluated the engagement framework via semi-structured interviews. The evaluation results indicate that developers can efficiently and effectively use the framework to gamify e-learning systems for improved student engagement and performance.

Author 1: Mohammed Abdulaziz Alsubhi
Author 2: Noraidah Sahari Ashaari
Author 3: Tengku Siti Meriam Tengku Wook

Keywords: e-learning; learning activities; gamification; game elements; engagement framework

PDF

Paper 48: Application of Deep Learning in Satellite Image-based Land Cover Mapping in Africa

Abstract: Deep Learning Networks (DLN), in particular Convolutional Neural Networks (CNN), have achieved state-of-the-art results in various computer vision tasks, including automatic land cover classification from satellite images. However, despite their remarkable performance and broad use in developed countries, using these advanced machine learning algorithms has remained a huge challenge on developing continents such as Africa, because the tools, techniques, and technical skills needed to utilize DL networks are very scarce or expensive. Recently, new approaches to satellite image-based land cover classification with DL have yielded significant breakthroughs, offering novel opportunities for further development and application that low-resource continents such as Africa can take advantage of. This paper reviews notable challenges to applying DL to satellite image-based classification tasks on developing continents, then reviews the emerging solutions and the prospects for their use. Harnessing the power of satellite data and deep learning for land cover mapping will help many developing regions make informed policies and decisions to address some of their most pressing challenges, including urban and regional planning, environmental protection and management, agricultural development, forest management, and disaster and risk mitigation.

Author 1: Nzurumike Obianuju Lynda
Author 2: Ezeomedo Innocent C
Author 3: Nwojo Agwu Nnanna
Author 4: Ali Ahmad Aminu

Keywords: Deep learning; satellite image classification; land cover mapping; Africa

PDF

Paper 49: Semi-supervised Deep Learning for Stress Prediction: A Review and Novel Solutions

Abstract: This research introduces a novel semi-supervised deep learning model that detects stress states from physiological parameters. The first part of the paper is a concise review of different intelligent techniques for processing physiological data and the emotional states of humans, with special attention to semi-supervised learning algorithms across the covered methods. In the second part, a novel semi-supervised deep learning model for predicting the stress state is proposed. It is the first attempt to use contrastive learning for stress prediction tasks. The model utilizes generative and contrastive features specially tailored for treating time-series data. The widely used multimodal WESAD (Wearable Stress and Affect Detection) dataset is exploited for experimental purposes; it consists of physiological and motion data recorded from wrist- and chest-worn devices. To provide an intelligent solution that will be widely applicable, only the wrist data recorded from smartwatches is used during the model's training. The proposed model is tested on a single subject's data and predicts stress and non-stress events. Because the initial data was unbalanced, with only 11% stress data, data augmentation techniques are applied within the model to provide additional reliable training information. The model shows significant potential in clustering stress conditions, and its accuracy is in the same range as other state-of-the-art solutions. Its most significant benefits are its prediction capabilities on unlabeled data and its performance when undersized datasets cannot be processed optimally by traditional intelligent methods.
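
Since the abstract does not spell out its loss function, the sketch below shows the standard NT-Xent contrastive loss in PyTorch, as commonly used on two augmented views of the same time-series windows; the batch size and embedding dimension are arbitrary.

```python
# Sketch: NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """z1, z2: (N, d) embeddings of two augmented views of the same N windows."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)       # (2N, d), unit norm
    sim = z @ z.t() / tau                              # cosine similarities as logits
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))                  # exclude self-pairs
    # positives: i <-> i+n (the other augmented view of the same window)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# e.g. two augmentations of 32 physiological windows, 128-d embeddings
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
print(nt_xent(z1, z2).item())
```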

Author 1: Mazin Alshamrani

Keywords: Deep learning; semi-supervised learning; contrastive learning; physiological data; stress prediction

PDF

Paper 50: Customer Segmentation and Profiling for Life Insurance using K-Modes Clustering and Decision Tree Classifier

Abstract: Customer segmentation and profiling have become an important marketing strategy in most businesses, both as preparation for better customer service and to enhance customer relationship management. This study presents a segmentation and classification technique for the insurance industry via two data mining approaches: K-Modes clustering and a Decision Tree classifier. Data were gathered from an insurance company. The Decision Tree algorithm was applied for customer profile classification, comparing two splitting criteria, entropy and Gini. K-Modes clustering segmented the customers into three prominent groups: “Potential High-Value Customers”, “Low-Value Customers”, and “Disinterested Customers”. The Gini-based Decision Tree with 10-fold cross-validation was found to be the best-fitting model, with an average accuracy of 81.30%. This segmentation helps the marketing teams of insurance companies strategize their marketing plans for different groups of customers, formulating different approaches to maximize customer value. Customers can receive customized insurance plans that satisfy their needs, as well as better assistance and service from insurance companies.
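
A minimal K-Modes sketch using the open-source kmodes package on invented categorical customer attributes (the insurer's actual data is not public):

```python
# Sketch: K-Modes clustering of categorical customer records.
import numpy as np
from kmodes.kmodes import KModes

customers = np.array([
    ["young",  "single",  "low",    "no_claim"],
    ["middle", "married", "high",   "claim"],
    ["senior", "married", "medium", "no_claim"],
    ["young",  "single",  "low",    "claim"],
    ["middle", "married", "high",   "no_claim"],
])

km = KModes(n_clusters=3, init="Huang", n_init=5, random_state=42)
labels = km.fit_predict(customers)
print(labels)                   # cluster index per customer
print(km.cluster_centroids_)    # modal category per attribute, per cluster
```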

Author 1: Shuzlina Abdul-Rahman
Author 2: Nurin Faiqah Kamal Arifin
Author 3: Mastura Hanafiah
Author 4: Sofianita Mutalib

Keywords: Customer segmentation; customer profiling; decision tree; insurance domain; k-modes clustering

PDF

Paper 51: Building a Standard Model of an Information System for Working with Documents on Scientific and Educational Activities

Abstract: To increase the effectiveness of research, it is necessary to have access to systematic information resources of scientific work. Work in any field of science begins with research and the search for scientific information, but with the growing number of scientific articles, books, monographs, and patents, this search becomes more and more difficult. A unified information system is therefore needed that allows scientists to quickly become acquainted with the results of other scientific research and prevents its duplication. The article discusses the technological techniques of distributed information systems that support scientific and educational activities. The main tasks in creating a model of a distributed information system that supports scientific and educational activities, the functional capabilities of the model, the concept of metadata, and the requirements for the metadata profile are described. The task, subject area, subjects, objects, and main functionality of the information system are defined, and a list of the main types of information resources is provided. The paper analyzes the functional requirements for such systems and describes a technological approach to creating a standard model of an information system to support scientific and educational activities, organized in the form of an electronic library for working with documents on scientific heritage. The article describes the architecture of the information system, the principles of integration with the digital depository, and the rules for the presentation and transformation of metadata.

Author 1: Serikbayeva Sandugash
Author 2: Tussupov Jamalbek
Author 3: Sambetbayeva Madina
Author 4: Yerzhanova Akbota
Author 5: Abduvalova Ainur

Keywords: Scientific and educational activities; distributed information systems; electronic library; metadata; model; search; interoperability; document; ontology; Z39.50; SRU/SRW; Apache Solr

PDF

Paper 52: Detection of Intruder in Cloud Computing Environment using Swarm Inspired based Neural Network

Abstract: Cloud computing services offer a resource pool with a wide range of storage for large amounts of data. Cloud services are generally used as a demand-driven private or open data forum, and the increase in use has led to security concerns. Therefore, it is necessary to design an accurate Intrusion Detection System (IDS) to identify suspect nodes in the cloud computing environment. This is possible by monitoring network traffic so that the quality of service and performance of the system can be maintained. Several researchers have worked on designing valid IDSs with the help of machine learning approaches, but a single classification algorithm appears unable to detect intruders with high accuracy. Therefore, a hybrid approach is presented, combining Cuckoo Search (CS) as an optimization algorithm with a Feed Forward Back Propagation Neural Network (FFBPNN) as a multi-class classification approach. Users' requests to access cloud data are collected, and the essential features are selected using CS. The selected features are used to train the FFBPNN with reduced training time and complexity. The experimental analysis was performed in terms of precision (85.5%), recall (86.4%), F-measure (85.9%), and accuracy (86.22%). Finally, these parameters are also compared with an existing approach.

Author 1: Nishika
Author 2: Kamna Solanki
Author 3: Sandeep Dalal

Keywords: Cloud computing; intrusion detection system; cuckoo search; feed forward back propagation neural network (FFBPNN)

PDF

Paper 53: Construction of a Model and Development of an Algorithm for Solving the Wave Problem under Pulsed Loading

Abstract: The article considers approaches and methods for modeling the wave process resulting from blasting operations. The analysis of modeling methods showed that, in the context of the task, it is advisable to conduct the study using the method of bicharacteristics, optimized with the splitting method. The defining equations were derived, the point scheme of the stencil was selected, and the resolving difference equations for dynamic boundary value problems of a seismic nature were obtained. Based on this method, an algorithm for calculating the relationship between the stress and the seismic medium was developed, which made it possible to generate code and design an information system for calculating the wave process.
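
The paper's bicharacteristics-plus-splitting scheme is not reproducible from the abstract; as a baseline illustration of the kind of computation involved, here is a standard explicit finite-difference solver for the 1-D wave equation with a pulse initial condition (all parameters invented).

```python
# Sketch: explicit finite differences for u_tt = c^2 u_xx with a pulse load.
import numpy as np

nx, nt, c = 200, 400, 1.0
dx = 1.0 / (nx - 1)
dt = 0.5 * dx / c                        # CFL number 0.5 keeps the scheme stable
r2 = (c * dt / dx) ** 2

x = np.linspace(0.0, 1.0, nx)
u_prev = np.exp(-200 * (x - 0.5) ** 2)   # initial pulse (stand-in for blast load)
u = u_prev.copy()                        # zero initial velocity

for _ in range(nt):
    u_next = np.zeros_like(u)            # fixed (zero) boundaries
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next
print(u.max())
```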

Author 1: Khabdolda Bolat
Author 2: Zhuzbayev Serik
Author 3: Sabitova Diana S
Author 4: Aitkenova Ailazzat A
Author 5: Serikbayeva Sandugash
Author 6: Badekova Karakoz Zh
Author 7: Yerzhanova Akbota Y

Keywords: Information systems; wave process; explosive technologies; method of bicharacteristics; stress tensor

PDF

Paper 54: Evaluation Study of Elliptic Curve Cryptography Scalar Multiplication on Raspberry Pi4

Abstract: The Internet of Things (IoT) is defined as a collection of autonomous devices that connect and network with each other via the Internet without the requirement for human interaction. It enhances our daily lives through personal devices, healthcare sensing, retail sensing, and industrial control, as well as smart homes, smart cities, and smart supply chains. Although the IoT offers significant benefits, it has inherent issues, including security and privacy risks, memory size limitations, and processing capability challenges. This paper describes the application of elliptic curve cryptography (ECC) in a simulated IoT environment to ensure the confidentiality of data passed between connected devices. Scalar multiplication is the main operation of ECC and is primarily used for key generation, encryption, and decryption. The aim of this paper is to evaluate and show the efficiency of adapting lightweight ECC to IoT devices. In the study outlined here, scalar multiplication was implemented on a Raspberry Pi 4, and processing time and energy consumption were measured to compare performance. The comparison covered the scalar multiplication of both fast and basic ECC algorithms. The performance test revealed that fast scalar multiplication reduced the computation time in comparison with basic scalar multiplication while consuming a similar level of energy.
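
A basic double-and-add scalar multiplication on a toy short-Weierstrass curve, corresponding to the "basic" algorithm of the comparison (fast variants use windowing, NAF, or Montgomery ladders). The curve parameters are toy values, not a secure curve, and real deployments would use constant-time code.

```python
# Sketch: double-and-add on y^2 = x^3 + a*x + b (mod p); toy parameters only.
p, a, b = 97, 2, 3

def add(P, Q):
    if P is None: return Q            # None represents the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    """Left-to-right double-and-add: O(log k) point operations."""
    R = None
    for bit in bin(k)[2:]:
        R = add(R, R)                 # double
        if bit == "1":
            R = add(R, P)             # add
    return R

G = (3, 6)   # on the toy curve: 6^2 = 36 and 3^3 + 2*3 + 3 = 36 (mod 97)
print(scalar_mult(20, G))
```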

Author 1: Fatimah Alkhudhayr
Author 2: Tarek Moulahi
Author 3: Abdulatif Alabdulatif

Keywords: IoT; elliptic curve cryptography; fast scalar multiplication; Raspberry Pi4

PDF

Paper 55: A Comparative Analysis of Scalability Issues within Blockchain-based Solutions in the Internet of Things

Abstract: Recently, enormous interest has been shown by both academia and industry in concepts and techniques for connecting heterogeneous IoT devices. The IoT is now considered a rapidly evolving technology, with billions of devices expected to be deployed around the globe in the upcoming years. These devices must be maintained, managed, traced, and secured in a timely and flexible manner. Previously, centralized approaches constituted the mainstream solutions for handling the ever-increasing number of connected IoT devices; however, these approaches may be inadequate at a massive scale. Blockchain, as a distributed approach, presents a promising solution to the concerns of IoT device connectivity. However, current blockchain platforms face several scalability issues in accommodating diverse IoT devices without losing efficiency. This paper performs a comprehensive analysis of recent blockchain-based scalability solutions applied to the Internet of Things domain. We propose an evaluation framework for scalability in IoT environments, encompassing critical criteria such as throughput, latency, and block size. Moreover, we assess the notable scalability solutions and conclude by highlighting six overarching scalability issues of blockchain-based solutions in IoT that ought to be resolved by the industry and research community.

Author 1: Ahmed Alrehaili
Author 2: Abdallah Namoun
Author 3: Ali Tufail

Keywords: Blockchain; IoT; scalability; issues; distributed ledger; throughput; latency

PDF

Paper 56: An Internet of Things (IoT) Reference Model for an Infectious Disease Active Digital Surveillance System

Abstract: Internet of Things (IoT) technological assistance for infectious disease surveillance is urgently needed when outbreaks occur, especially during pandemics. The IoT has great potential as an active digital surveillance system, since it can provide the meaningful, time-critical data needed to design infectious disease surveillance. Many studies have developed the IoT for such surveillance; however, these designs have been based on the authors' own ideas or innovations, without consideration of a specific reference model. It is therefore essential to build a model that encompasses end-to-end IoT-based surveillance system design. This paper proposes a reference model for designing an active digital surveillance system for infectious diseases with IoT technology. It consists of 14 attributes with specific indicators to accommodate IoT characteristics and to meet the needs of infectious disease surveillance design. A proof of concept was conducted by applying the reference model to an IoT system design for active digital surveillance of the Covid-19 disease. The use case of the design was a community-based surveillance (CBS) system utilizing the IoT to detect initial symptoms of Covid-19 and prevent close contacts in a nursing home. We then elaborated its compliance with the 14 attributes of the reference model, reflecting how an IoT design should meet the criteria mandated by the model. The study finds that the proposed reference model could eventually benefit engineers who develop complete IoT designs, as well as epidemiologists, governments, and the relevant policy makers working to prevent infectious diseases from worsening.

Author 1: Nur Hayati
Author 2: Kalamullah Ramli
Author 3: Muhammad Suryanegara
Author 4: Muhammad Salman

Keywords: IoT; framework; digital surveillance; infectious disease; Covid-19

PDF

Paper 57: Personally Identifiable Information (PII) Detection in the Unstructured Large Text Corpus using Natural Language Processing and Unsupervised Learning Technique

Abstract: Personally Identifiable Information (PII) has gained much attention with the rapid development of technologies and the exploitation of information relating to individuals. Corporations and other organizations store a large amount of information that is primarily disseminated in the form of emails, which include personal information about users, employees, and customers. The security aspects of PII storage have been ignored, raising serious concerns about individual privacy, and a significant concern arises in comprehending the responsibilities regarding the use of PII. In real-world scenarios, email data is unstructured text, and detecting PII in such a large unstructured text corpus is quite challenging. This paper presents an intelligent clustering approach for automatically detecting PII in a large text corpus. The focus of the proposed study is to design a model that receives text content and detects possible PII attributes. The paper therefore presents a clustering-based PII model (C-PPIM) based on NLP and unsupervised learning to address PII detection in unstructured large text corpora. NLP is used to perform topic modeling, and byte mLSTM, a distinct sequence-modeling approach, is implemented to address the clustering problems in PII detection. The performance of the proposed model is analyzed against existing hierarchical clustering with respect to silhouette and cohesion scores. The outcome indicated the effectiveness of the proposed system in highlighting significant PII attributes, with significant scope for real-time implementation, whereas existing techniques are too expensive to operate and fit in real-time environments.

Author 1: Poornima Kulkarni
Author 2: Cauvery N K

Keywords: PII; natural language processing; word2vec; machine learning; PII detection; security

PDF

Paper 58: Evaluation of Data Center Network Security based on Next-Generation Firewall

Abstract: This study aims to create a network security system that can mitigate attacks carried out by internal users and reduce attacks from internal networks, which are otherwise difficult to mitigate. The goal of this research is to analyze the effectiveness of a Next-Generation Firewall implemented to improve network security. The method used is a comparative evaluation with tests of TCP SYN attacks, UDP flood attacks, ICMP smurf attacks, and DHCP starvation attacks on a company network. From the experimental results, it can be concluded that the Next-Generation Firewall performs significantly better at mitigating attacks carried out by internal users on a company network. It can increase the security of data communication networks against threats from internal networks.

Author 1: Andi Jehan Alhasan
Author 2: Nico Surantha

Keywords: Network security; next-generation firewall; TCP SYN attack; UDP flood attack; ICMP smurf attack

PDF

Paper 59: Analogy of the Application of Clustering and K-Means Techniques for the Approximation of Values of Human Development Indicators

Abstract: The objective of this study was to apply clustering and K-Means techniques to classify the departments of Peru according to their Human Development Index (HDI). The elbow method was used to determine the optimal number of clusters, the classification algorithms were applied to group the departments of Peru according to their similarities, and the Principal Component Analysis (PCA) technique was used for better visualization of the clusters. After applying the unsupervised algorithms, the most relevant results were in clusters 2 and 4 according to their HDI, made up of the departments of Arequipa, the Constitutional Province of Callao, Ica, Lima, Moquegua, and Tacna, where the most notable indicators are life expectancy at birth, the population with completed secondary education, the number of years of education, average per capita income, and the state's density index. The K-Means algorithm produced more cohesive results than the clustering algorithm.
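
A short scikit-learn sketch of the elbow method plus the PCA projection used in the study; the 24-by-5 data matrix is randomly generated here, standing in for the departments' HDI indicators.

```python
# Sketch: inertia (within-cluster SSE) versus k, then PCA to 2-D for plotting.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(24, 5))            # 24 "departments" x 5 indicators (invented)
X = StandardScaler().fit_transform(X)

for k in range(1, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 1))     # look for the "elbow" in this curve

coords = PCA(n_components=2).fit_transform(X)   # 2-D coordinates for the cluster plot
```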

Author 1: José Luis Morales Rocha
Author 2: Mario Aurelio Coyla Zela
Author 3: Nakaday Irazema Vargas Torres
Author 4: Genciana Serruto Medina

Keywords: Clustering; K-Means; elbow method; cohesion; separation; human development index

PDF

Paper 60: Analysis of Momentous Fragmentary Formants in Talaqi-like Neoteric Assessment of Quran Recitation using MFCC Miniature Features of Quranic Syllables

Abstract: The use of speech recognition systems with a variety of approaches and techniques has grown rapidly across human-machine interaction applications. Building on this, a computerized assessment system to identify errors in reading the Qur'an can be developed to exploit the advantages of today's technology. Based on Quranic syllable utterances, which embody Tajweed rules that generally consist of Makhraj (articulation process), Sifaat (letter features or pronunciation), and Harakat (pronunciation extension), this paper presents the technological capabilities for realizing Quranic recitation assessment. The transformation of the digital signal of the Quranic voice, with identification of reading errors (based on the rules of Tajweed), is the main focus of this paper. The process involves several stages related to the representation of the Quranic syllable-based Recitation Speech Signal (QRSS), feature extraction, the non-phonetic-transcription Quranic Recitation Acoustic Model (QRAM), and threshold classification. Miniature MFCC-Formant features are hybridized with three frequency bands to represent the combined vowels and consonants of the QRSS. A human-guided threshold classification approach is used to assess recitation based on Quranic syllables, and the threshold classification performances for the low, medium, and high band groups are 87.27%, 86.86%, and 86.33%, respectively.
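
A minimal MFCC extraction sketch with librosa; the paper's band grouping and formant hybridization are not reproduced, and the file name is a placeholder.

```python
# Sketch: MFCC features for one syllable utterance.
import librosa

y, sr = librosa.load("syllable.wav", sr=16000)   # placeholder recording
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfcc.shape)         # (13 coefficients, number of frames)

# A crude per-utterance summary often used before threshold classification:
features = mfcc.mean(axis=1)
```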

Author 1: Mohamad Zulkefli Adam
Author 2: Noraimi Shafie
Author 3: Hafiza Abas
Author 4: Azizul Azizan

Keywords: Speech processing; MFCC-Formant; Quranic recitation assessment; human-guided threshold classification

PDF

Paper 61: IoT-based e-Health Framework for COVID-19 Patients Monitoring

Abstract: The COVID-19 pandemic, produced by the SARS-CoV-2 virus, has caused a global public health emergency with rapid evolution and tragic consequences. The fight against this disease, whose epidemiological, clinical, and prognostic characteristics are still being studied, is forcing a change in the form of care, including transforming some face-to-face consultations into remote ones. Recently, various initiatives have emerged to incorporate the Internet of Things (IoT) into different sectors, especially the health sector in general and e-Health systems in particular. Millions of devices are connected and generating massive amounts of data. Based on the health sector's experience in managing the pandemic caused by COVID-19, monitoring potential COVID-19 patients remains a great challenge even for the latest technologies. In this paper, an IoT-based monitoring framework is proposed to help health caregivers obtain useful information during the current COVID-19 pandemic, bringing the direct benefits of patient health monitoring, faster hospital care, and cost reduction. The proposed framework was analyzed, and a prototype system was developed and evaluated. Moreover, we evaluated the efficacy of the proposed framework in detecting potentially serious cases of COVID-19 among patients treated in home isolation.

Author 1: Fahad Albogamy

Keywords: COVID-19; IoT; healthcare; e-Health

PDF

Paper 62: Brown Spot Disease Severity Level Detection using Binary-RGB Image Masking

Abstract: Agriculture is known as one of the main factors in the growth of a country. Paddy is the most widely planted crop in Malaysia; the rice produced is the main food source for Malaysians as well as a source of income for the country. However, a disease known as Brown Spot (BS) attacks paddy plants and threatens their quality. The disease, caused by the fungus Bipolaris, manifests as oval, dark brown to purplish-brown spots on the leaf. It is observed to be among the most hazardous diseases and may degrade paddy production, and it can spread from plant to plant in the field through airborne spores. In this research, a system is developed to help people, especially farmers, detect the disease at an early stage. Real images captured in the paddy field are processed in MATLAB with image enhancement, background removal, and binary and RGB image masking. To determine the Brown Spot area, the pixel intensities of the infected and non-infected areas are compared. The severity level table developed by Horsfall and Heuberger is then used as a reference to classify the severity level of the disease. A GUI is created to detect Brown Spot disease automatically. In the study conducted, Brown Spot detection was approximately 89% accurate compared with manual evaluation by plant pathologists.
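
A rough OpenCV sketch of the masking-and-ratio idea: segment brownish spot pixels and greenish leaf pixels, then report the infected-area percentage, which can be mapped to a Horsfall-Heuberger class. The HSV thresholds and file name are invented, not the paper's values.

```python
# Sketch: infected-area ratio from color masks (thresholds are illustrative).
import cv2

img = cv2.imread("paddy_leaf.jpg")                      # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

leaf = cv2.inRange(hsv, (25, 40, 40), (90, 255, 255))   # greenish leaf pixels
spot = cv2.inRange(hsv, (5, 50, 20), (25, 255, 200))    # brownish spot pixels

infected = cv2.countNonZero(spot)
total = cv2.countNonZero(leaf) + infected
severity = 100.0 * infected / max(total, 1)
print(f"infected area: {severity:.1f}% of leaf")
# this percentage would then be looked up in the severity-level table
```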

Author 1: N. S. A. M Taujuddin
Author 2: N. H. N. A Halim
Author 3: M. Siti Norsuha
Author 4: R. Koogeethavani
Author 5: Z. H Husin
Author 6: A. R. A Ghani
Author 7: Tara Othman Qadir

Keywords: Brown spot; image enhancement; binary image; RGB image; masking process

PDF

Paper 63: A Novel Feature Extraction for Complementing Authentication in Hand-based Biometric

Abstract: With the increasing use of hand-based biometrics in authentication systems, stronger security is needed owing to the continuing evolution of threats. The security of a hand authentication system depends entirely on the unique and distinct selection of features from the hand image, with the properties of robustness, fault tolerance, and simple implementation. A review of the existing feature extraction literature shows an inclination towards sophisticated processes and reveals various other limitations. This manuscript addresses these limitations by presenting a novel feature extraction model that, unlike existing approaches, proceeds in a more progressive and less iterative form. The proposed system introduces a simplified feature extraction operation via storage, blurring, color space conversion, and binary image conversion, while the modelling aspect of the study emphasizes image enhancement along with fuzzification to yield more efficient results. An experimental study was carried out in Python on a hand-biometric dataset; the outcome shows significant support for palmprint recognition systems. The results are compared with the most standard implementations of feature extraction, finding that the proposed system offers better accuracy than existing systems.

Author 1: Mahalakshmi B S
Author 2: Sheela S V

Keywords: Biometric; security; feature extraction; hand geometric; authentication; palmprint recognition

PDF

Paper 64: Categorical Vehicle Classification and Tracking using Deep Neural Networks

Abstract: The classification and tracking of vehicles is a crucial component of modern transportation infrastructure. Transport authorities invest significantly in it, since it is one of the most critical transportation facilities for collecting and analyzing the traffic data used to optimize route utilization, increase transportation safety, and build future transportation plans. Recent improvements in fast computing technologies have produced numerous novel traffic evaluation and monitoring systems. However, camera-based systems still lag in accuracy, as most are constructed using limited traffic datasets that do not adequately account for weather conditions, camera viewpoints, and highway layouts, forcing the systems to make trade-offs in the number of actual detections. This research offers a categorical vehicle classification and tracking system based on deep neural networks to overcome these difficulties. It incorporates a generative adversarial network framework to compensate for weather variability, Gaussian models to identify roadway configurations, a single-shot multibox detector for high-precision categorical vehicle detection, and a boosted efficient binary local image descriptor for tracking multiple vehicle objects. The study also includes the publication of a high-quality traffic dataset with four different perspectives in various environments. The proposed approach was applied to the published dataset and its performance evaluated; the results verify that the proposed pipeline attains higher detection and tracking accuracy.

Author 1: Deependra Sharma
Author 2: Zainul Abdin Jaffery

Keywords: Vehicle classification; generative adversarial networks; single shot multibox detector; vehicle tracking; deep neural networks

PDF

Paper 65: Goal-oriented Email Stream Classifier with A Multi-agent System Approach

Abstract: Nowadays, email is one of the most widely used means of communication, despite the rise of other methods such as instant messaging and communication via social networks. The need to automate email stream management is increasing for reasons such as multi-folder categorization and spam email classification. There are solutions based on email content capable of contemplating elements such as the subjective nature of text and the adverse effects of concept drift, among others. This paper presents an email stream classifier with a goal-oriented approach for client and server environments. The i* language was the basis for designing the proposed classifier: the email environment was represented with the early requirements model and the proposed classifier with the late requirements model. The classifier was implemented following a multi-agent system approach supported by the JADE agent platform and the Implementation_JADE pattern. The behavior of the agents was taken from an existing classifier. The multi-agent classifier was evaluated using functional, efficacy, and performance tests that compared the existing classifier with the multi-agent approach. The results were satisfactory in all the tests, and the performance of the multi-agent approach was better than that of the existing classifier due to its use of multiple threads.

Author 1: Wenny Hojas-Mazo
Author 2: Mailyn Moreno-Espino
Author 3: José Vicente Berná Martínez
Author 4: Francisco Maciá Pérez
Author 5: Iren Lorenzo Fonseca

Keywords: Email stream classification; goal-oriented requirements; i*; multi-agent system

PDF

Paper 66: Problem based Learning: An Experience of Evaluation based on Indicators, Case of Electronic Business in Professional Career of Systems Engineering

Abstract: Universities place great emphasis on formative research in the training of their students in order to increase their knowledge, skills, and attitudes, and to achieve competences. This paper presents the experience of applying the Problem-Based Learning (PBL) methodology to assess learning based on indicators determined from criteria corresponding to the competences of the course under study, Electronic Business, at the Professional School of Systems Engineering (EPIS) of the Universidad Nacional de San Agustín de Arequipa (UNSA), Arequipa, Peru, with theory and laboratory practice taught by two teachers. The objective is to apply an evaluation strategy for the development of competences with active didactics in an engineering training course. The methodology used is Problem-Based Learning applied to a formative research project based on real problems common to many organizations. During the semester, students work in groups to solve the stated problems and then submit a deliverable report and a formative research report for each problem, which are scored using a rubric. The teachers make contributions and provide feedback in the report to improve the experience the students are acquiring in their training. The results show that the objectives are achieved: knowledge, skills, and attitudes increase, evaluation in the training of students is adequate, the competences of the course are developed, and the student outcomes are attained. This suggests that applying PBL with formative research would give good results in other courses of the professional career, allowing continuous improvement in the teaching-learning process. The conclusion is that an adequate indicator-based assessment of learning with an active didactic strategy, effectively planned and adequately applied to real-life problems, makes it possible to achieve the expected student outcomes.

Author 1: César Baluarte-Araya
Author 2: Ernesto Suarez-Lopez
Author 3: Oscar Ramirez-Valdez

Keywords: Problem-based learning; formative research; competences; evaluation; performance indicator; skills; deliverable report

PDF

Paper 67: Genetic Behaviour of Zika Virus and Identification of Motif

Abstract: The Zika virus (ZIKV) is a mosquito-borne pathogen known to cause neurological disorders and congenital disabilities in newborns. The genome sequence of the Zika virus is used for this study. Essential cell functionalities such as circadian behavior and gene expression are studied; regulatory proteins alternate their functionality between daytime and nighttime. Motif identification is performed by understanding the features of motifs, finding the count matrix, and formulating the profile matrix. The consensus string of the Zika virus is computed, and the motif score is calculated. Different motif-finding techniques, such as the Brute Force and Greedy Search techniques, are presented. In the Brute Force technique, each candidate motif is selected, its score is calculated, and the minimum score is then obtained. The Brute Force technique takes an enormous amount of time but is guaranteed to find a solution. The Greedy Search technique is not guaranteed to find the motif, as the Brute Force technique is, but can give a close answer in a realistic time. This paper presents the identification of motifs in the Zika virus genome using programming techniques.
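
The count matrix, profile-derived consensus, and motif score mentioned in the abstract have standard definitions; a small sketch follows, with invented k-mers rather than ZIKV data.

```python
# Sketch: count matrix, consensus string, and motif score for a set of k-mers.
def count_matrix(motifs):
    k = len(motifs[0])
    counts = {base: [0] * k for base in "ACGT"}
    for m in motifs:
        for i, base in enumerate(m):
            counts[base][i] += 1
    return counts

def consensus(motifs):
    counts = count_matrix(motifs)
    return "".join(max("ACGT", key=lambda b: counts[b][i])
                   for i in range(len(motifs[0])))

def score(motifs):
    """Total mismatches against the consensus; lower is better."""
    cons = consensus(motifs)
    return sum(b != cons[i] for m in motifs for i, b in enumerate(m))

motifs = ["ATGCA", "ATGGA", "TTGCA", "ATCCA"]   # invented 5-mers
print(consensus(motifs), score(motifs))          # -> ATGCA 3
```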

Author 1: Pushpa Susant Mahapatro
Author 2: Jatinderkumar R. Saini

Keywords: Circadian behaviour; consensus string; genome study; greedy search technique; motif search; regulatory proteins

PDF

Paper 68: Enhanced Graphical Representation of Data in Web Application (Case Study: Covid-19 in the UK)

Abstract: This paper describes the analysis, design, and implementation of responsive data representation in a web application that can render data to users asynchronously by making Application Programming Interface (API) requests to a web server, while at the same time providing high-quality downloadable Scalable Vector Graphics (SVG) images for journals, magazines, and other printed media. Large-scale open-source Covid-19 data was used to improve Covid-19 data visualization and to identify other improvements that can be made to properly present such vital data to the general public. During the development process, qualitative research comparing data representation with responsive charts against Scalable Vector Graphics image files was conducted to answer questions such as which tools and technologies are often used, what the alternative tools and technologies are, and when, where, and why developers make use of a certain approach to data representation.

Author 1: Rockson Adomah
Author 2: Tariq Alwada’n
Author 3: Mohammed Al Masarweh

Keywords: Data representation; data visualization; accessibility standards; scalable vector graphics; Covid-19

PDF

Paper 69: Internet of Things Multi-protocol Interoperability with Syntactic Translation Capability

Abstract: Because Internet of Things (IoT) systems contain different devices, infrastructures, and data formats, their success depends on the realization of full interoperability among these systems. Interoperability is a communication challenge that affects all the layers of a system. In this paper, a transparent translator that solves interoperability issues in two layers of an IoT system is proposed. The first is the communication protocol layer, where it is necessary to overcome the differences between interaction patterns such as request/response and publish/subscribe. The second is the syntactic layer, which refers to data encoding; this type of interoperability is achieved through the Semantic Sensor Network (SSN) ontology. Tests and evaluations of the proposed translator, in comparison with a similar translator, were performed using the Constrained Application Protocol (CoAP), the Message Queuing Telemetry Transport (MQTT) protocol, and the Hypertext Transfer Protocol (HTTP), as well as different data formats such as JSON, CSV, and XML. The results reveal the efficiency of the proposed method in terms of application protocol interoperability. In addition, the suggested translator has the added feature of supporting different data encoding standards compared to the other translator.
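
A small sketch of the syntactic (payload-encoding) side of such a translator: converting one sensor reading among JSON, CSV, and XML using only the Python standard library. A full translator would also bridge the protocol layer, i.e. the CoAP/MQTT/HTTP interaction patterns; the field names here are invented.

```python
# Sketch: one sensor reading re-encoded as JSON, CSV, and XML.
import csv, io, json
import xml.etree.ElementTree as ET

reading = {"sensor": "temp-01", "value": 23.5, "unit": "C"}   # invented payload

as_json = json.dumps(reading)

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=reading.keys())
writer.writeheader()
writer.writerow(reading)
as_csv = buf.getvalue()

root = ET.Element("reading")
for key, val in reading.items():
    ET.SubElement(root, key).text = str(val)
as_xml = ET.tostring(root, encoding="unicode")

print(as_json, as_csv, as_xml, sep="\n")
```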

Author 1: Nedaa H. Ahmed
Author 2: Ahmed M. Sadek
Author 3: Haytham Al-Feel
Author 4: Rania A. AbulSeoud

Keywords: Internet of things (IoT); interoperability; multiprotocol translation; message payload translation; SSN ontology

PDF

Paper 70: Comparing SMOTE Family Techniques in Predicting Insurance Premium Defaulting using Machine Learning Models

Abstract: Default in premium payments significantly impacts the profitability of an insurance company; therefore, predicting defaults in advance is very important for insurers. Thanks to technological advancements, prediction in the insurance sector is one of the most beneficial and important study areas in today's world, but because of the imbalanced datasets in this industry, predicting insurance premium defaulting is a difficult task. Moreover, no existing study applies and compares different SMOTE family approaches to address the issue of imbalanced data. This study therefore compares different SMOTE family approaches, namely the Synthetic Minority Oversampling Technique (SMOTE), Safe-level SMOTE (SLS), Relocating Safe-level SMOTE (RSLS), Density-based SMOTE (DBSMOTE), Borderline-SMOTE (BLSMOTE), Adaptive Synthetic Sampling (ADASYN), Adaptive Neighbor Synthetic (ASN), SMOTE-Tomek, and SMOTE-ENN, to solve the problem of unbalanced data. The study applied a variety of machine learning (ML) classifiers to assess the performance of the SMOTE family in addressing the imbalanced problem. These classifiers include Logistic Regression (LR), CART, C4.5, C5.0, Support Vector Machine (SVM), Random Forest (RF), Bagged CART (BC), AdaBoost (ADA), Stochastic Gradient Boosting (SGB), XGBoost (XGB), Naïve Bayes (NB), k-Nearest Neighbors (K-NN), and Neural Networks (NN); random hold-out is used for model validation. The findings obtained using various assessment measures show that ML algorithms do not perform well with imbalanced data, indicating that the problem of imbalanced data must be addressed. On the other hand, using balanced datasets created by SMOTE family techniques improves the performance of the classifiers. Moreover, the Friedman test, a statistical significance test, further confirms that the hybrid SMOTE family methods are better than the others; in particular, SMOTE-Tomek performs better than the other resampling approaches. Among the ML algorithms, the SVM model produced the best results with SMOTE-Tomek.
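
As a sketch of the kind of comparison described, the imbalanced-learn package provides SMOTE and SMOTE-Tomek directly; the synthetic toy data below stands in for the paper's insurance dataset.

    # Compare two SMOTE-family resamplers on synthetic imbalanced data.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import f1_score
    from imblearn.over_sampling import SMOTE
    from imblearn.combine import SMOTETomek

    X, y = make_classification(n_samples=2000, weights=[0.95, 0.05],
                               random_state=0)          # imbalanced toy data
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    for name, sampler in [("SMOTE", SMOTE(random_state=0)),
                          ("SMOTE-Tomek", SMOTETomek(random_state=0))]:
        X_res, y_res = sampler.fit_resample(X_tr, y_tr)  # balance training set only
        clf = SVC().fit(X_res, y_res)
        print(name, "F1:", round(f1_score(y_te, clf.predict(X_te)), 3))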

Author 1: Mohamed Hanafy Kotb
Author 2: Ruixing Ming

Keywords: Machine learning; classification; insurance; imbalanced data; SMOTE family; statistical analysis

PDF

Paper 71: A Gray Box-based Approach to Automatic Requirements Specification for a Robot Patrol System

Abstract: Black box-based requirements specification models, represented by the use case model, focus on specifying the system behaviors exposed to the outside. While these models are sufficiently effective for specifying the behavioral requirements of business applications, they are limited when specifying requirements for embedded systems, which have relatively short interaction sequences with users. To solve this problem, in our previous work we proposed a gray box-based requirements specification method that specifies the inner logic of an embedded system, together with a tool for automatically generating the requirements specification from a set of analysis models. This study demonstrates the benefits of the proposed specification method by applying it to a robot patrol system and showing the possibility of general use of the method in the embedded system domain. Compared with our previous work, we enhance the tool for automatic generation of requirements specifications, called SpecGen, and establish the benefits of the proposed method from multiple aspects. The application results on the robot patrol system case quantitatively demonstrate that the proposed requirements specification method improves development productivity and enhances overall software product quality, including code quality.

Author 1: Soojin Park

Keywords: Embedded system; automatic requirement specifications generation; mobile robots; use case specification

PDF

Paper 72: A Comparison of BAT and Firefly Algorithm in Neighborhood based Collaborative Filtering

Abstract: A recommender system is a knowledge-based filtering system that predicts users' ratings and preferences for items they might desire. The neighborhood method is a promising approach for making such predictions, achieving high accuracy based on common items. However, when each user provides only limited and sparse data, the resulting accuracy can narrow as a consequence. In this research, we use Swarm Intelligence (SI) techniques in the recommender system to overcome this problem, whereby SI trains each feature to an optimal weight. The main objective of this technique is to form better groups of similar users and improve the accuracy of recommendations. The SI techniques whose accuracy we compare for providing recommendations are the Firefly and Bat algorithms. The results show that the Firefly Algorithm performs slightly better than the Bat Algorithm, with a difference in mean absolute error of 0.02013333. A significance test using the independent t-test method indicates no statistically significant difference between the Bat and Firefly algorithms.
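
The reported significance test can be reproduced in outline with scipy; the per-run MAE values below are hypothetical placeholders, not the paper's measurements.

    # Independent t-test on per-run MAE values (made-up numbers).
    from scipy import stats

    mae_firefly = [0.71, 0.69, 0.70, 0.72, 0.68]   # hypothetical per-run MAEs
    mae_bat     = [0.73, 0.70, 0.72, 0.74, 0.71]

    t_stat, p_value = stats.ttest_ind(mae_firefly, mae_bat)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
    # p >= 0.05 would indicate no statistically significant difference.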

Author 1: Hartatik
Author 2: Bayu Permana Sejati
Author 3: Hamdani Hamdani
Author 4: Andri Syafrianto

Keywords: Bat algorithm; firefly algorithm; collaborative filtering; recommender system; swarm intelligence

PDF

Paper 73: Modeling a Fault Detection Predictor in Compressor using Machine Learning Approach based on Acoustic Sensor Data

Abstract: Proper functioning of the air compressor ensures stability for many critical systems. The ill effects of breakdowns caused by wear and tear in the system can be mitigated by an effective automated fault classification system. Traditionally, simulation-based methods help identify faults; however, they are not effective enough to support real-time, adaptive detection of faults and their types. This paper proposes an effective model for fault classification in the air compressor based on real-time empirical acoustic sensor time-series data sampled at 50 kHz. In the proposed work, the time-series data is transformed into the frequency domain using the fast Fourier transform, where only half of the spectrum is considered due to its symmetric representation. Afterward, a masking operation is carried out to extract significant feature vectors, which are fed to a multilayer perceptron neural network. The uniqueness of the proposed system is that it requires fewer trainable parameters, thus reducing training time and memory overhead. The model is benchmarked on accuracy, and the proposed masked-feature-set-based MLP-ANN exhibits an accuracy of 91.32%. In contrast, the LSTM-based fault classification model gives only 83.12% accuracy, takes more training time, and consumes more memory. The proposed model is therefore realistic enough to be considered for real-time fault monitoring and control, although other performance metrics such as precision, recall, and F1-score are also promising for the LSTM-based fault classifier.
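
The described pipeline can be sketched with numpy and scikit-learn; the synthetic frames and the simple thresholding used here as a stand-in for the paper's masking operation are assumptions for illustration.

    # FFT magnitude of acoustic frames, keep the half spectrum, feed an MLP.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    fs, n = 50_000, 1024                       # 50 kHz sampling, frame length
    frames = rng.normal(size=(200, n))         # stand-in acoustic frames
    labels = rng.integers(0, 2, size=200)      # stand-in fault labels

    spectra = np.abs(np.fft.rfft(frames, axis=1))     # rfft keeps half the symmetric spectrum
    mask = spectra.mean(axis=0) > np.median(spectra)  # crude stand-in for the masking step
    features = spectra[:, mask]

    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    clf.fit(features, labels)
    print("train accuracy:", clf.score(features, labels))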

Author 1: Divya M. N
Author 2: Narayanappa C. K
Author 3: Gangadharaiah S. L

Keywords: Air-compressor; fault detection; LSTM; multi-layer perceptron; ANN; acoustic sensor data

PDF

Paper 74: Organisational Information Security Management Maturity Model

Abstract: Information Security Management (ISM) is a systematic initiative for managing an organisation's information security. ISM can also be defined as a strategic approach to addressing information security (IS) risks, breaches, and incidents that could threaten the confidentiality, integrity, and availability of information. Although organisations have complied with ISM requirements, security incidents still afflict numerous organisations, showing that the current implementation of ISM remains ineffective; this ineffectiveness reflects a low maturity level. To achieve a higher level of maturity, organisations should continually evaluate their ISM practices. Several maturity models have been developed by international organisations, consultants, and researchers to assist organisations in assessing their ISM practices. However, the current models do not evaluate ISM practices holistically: their measurement dimensions focus on assessing certain factors only, so the maturity assessment is not executed comprehensively. This study aims to address this shortcoming by proposing a comprehensive maturity assessment model that takes ISM success factors into account to evaluate the effectiveness of the implementation. The study adopted a mixed-method approach comprising qualitative and quantitative studies to strengthen the research findings. The qualitative study analyses the existing literature and includes interviews with nine industry practitioners and six experts, while the quantitative study involves a questionnaire survey. The data obtained from the qualitative study were analysed using content analysis, while the quantitative data were analysed statistically. The study identified fourteen success factors and fifty-seven maturity dimensions, each of which contains five maturity levels. The proposed model was evaluated through expert reviews to ensure its accuracy and suitability. The evaluation shows that the model can identify the ISM maturity level systematically and comprehensively, ultimately helping organisations to remedy weaknesses in their implementations and thus diminish security incidents.

Author 1: Mazlina Zammani
Author 2: Rozilawati Razali
Author 3: Dalbir Singh

Keywords: Information security; information security management; maturity models; information security management maturity model

PDF

Paper 75: Applying Grey Clustering and Shannon’s Entropy to Assess Sediment Quality from a Watershed

Abstract: The evaluation of sediment quality is a complex issue in the Peruvian context, mainly because there is no sampling protocol or norm for comparison, which leads to sediments being assessed without a comprehensive analysis of their quality. In the present study, the quality of the sediments in the upper basin of the Huarmey river was evaluated at 30 monitoring points for 7 parameters (arsenic, cadmium, copper, chromium, mercury, lead, and zinc), which were compared against the Canadian Sediment Quality Guidelines for the Protection of Aquatic Life in fresh water (Canadian Environmental Quality Guidelines, CEQG 2002, of the Canadian Council of Ministers of the Environment, CCME). The results of the evaluation, obtained by the grey clustering method and Shannon entropy, showed that 13 monitoring points had good sediment quality, 1 had moderate quality, and 16 had poor quality; it can therefore be concluded that the effluents and discharges of the mining activities taking place in this location have a negative impact on environmental quality. Finally, the results can be of great help to OEFA, the regional government, the municipalities, and any other body with oversight functions, since they will allow these bodies to make more objective and precise decisions.
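
The Shannon entropy part of the method can be sketched as the standard entropy-weighting computation; the matrix below is illustrative, not the Huarmey monitoring data.

    # Shannon-entropy weights for assessment criteria (illustrative values).
    import numpy as np

    X = np.array([[0.8, 1.2, 30.0],           # rows: monitoring points
                  [0.5, 2.0, 45.0],           # cols: parameters (e.g. As, Cd, Cu)
                  [0.9, 1.5, 25.0]])

    P = X / X.sum(axis=0)                     # normalize each parameter column
    k = 1.0 / np.log(len(X))                  # entropy scaling constant
    E = -k * np.sum(P * np.log(P), axis=0)    # Shannon entropy per parameter
    w = (1 - E) / (1 - E).sum()               # higher weight = more informative
    print("entropy weights:", np.round(w, 3))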

Author 1: Alexi Delgado
Author 2: Betsy Vilchez
Author 3: Fabian Chipana
Author 4: Gerson Trejo
Author 5: Renato Acari
Author 6: Rony Camarena
Author 7: Víctor Galicia
Author 8: Chiara Carbajal

Keywords: Grey clustering; sediment quality; Shannon entropy

PDF

Paper 76: A Hybrid Intrusion Detection Model for Identification of Threats in Internet of Things Environment

Abstract: The Internet of Things (IoT) has transcended its application in traditional sensing networks, such as wireless sensing and radio frequency identification, to reach life-changing and critical applications. However, IoT networks are still vulnerable to threats, attacks, intrusions, and other malicious activities. Intrusion Detection Systems (IDS) that employ unsupervised learning techniques are used to secure sensitive data transmitted on IoT networks and preserve privacy. This paper proposes a hybrid model for intrusion detection that relies on a dimension reduction algorithm, an unsupervised learning algorithm, and a classifier. The proposed model employs Principal Component Analysis (PCA) to reduce the number of features in a dataset; the K-means algorithm then generates clusters that serve as class labels for the Support Vector Machine (SVM) classifier. Experimental results using the NSL-KDD and UNSW-NB15 datasets demonstrate the effectiveness of the proposed model in detecting malicious activities in IoT networks. Once trained, the proposed model identifies benign and malicious behaviours using an unlabelled dataset.
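
The hybrid pipeline can be sketched with scikit-learn on toy data; the component and cluster counts below are arbitrary choices for illustration, not the paper's settings.

    # PCA for dimension reduction, K-means clusters as pseudo-labels, then SVM.
    from sklearn.datasets import make_blobs
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    X, _ = make_blobs(n_samples=500, centers=2, n_features=20, random_state=0)

    X_red = PCA(n_components=5).fit_transform(X)       # reduce feature count
    pseudo = KMeans(n_clusters=2, n_init=10,
                    random_state=0).fit_predict(X_red) # unsupervised labels
    clf = SVC().fit(X_red, pseudo)                     # train on cluster labels

    print("training accuracy vs. pseudo-labels:", clf.score(X_red, pseudo))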

Author 1: Nsikak Pius Owoh
Author 2: Manmeet Mahinderjit Singh
Author 3: Zarul Fitril Zaaba

Keywords: Internet of things; intrusion detection system; k-means; principal component analysis; support vector machine

PDF

Paper 77: Data Dissemination for Bioinformatics Application using Agent Migration

Abstract: Bioinformatics is a research-intensive field in which agents operate in a highly dynamic environment, and the extensive research in this domain leads to basic but important problems for researchers: (1) bandwidth, (2) storage, and (3) computation. We use an agent migration approach to reduce the network load and resolve the client's resource problem by using server-side resources for computations on large data. The proposed approach does not demand extra storage or extensive computational resources on the client side, thereby addressing the problems of bandwidth, storage, and computation. Our results show that this approach saves up to approximately 12.5% of the user's time, depending on the size of the data. Similarly, the agent can work like a mashup, gathering heterogeneous data from different service providers and presenting it in a homogeneous form to its owner.

Author 1: Shakir Ullah Shah
Author 2: Abdul Hameed
Author 3: Jamil Ahmad
Author 4: Hafeez Ur Rehman Safia Fatima
Author 5: Muhammad Amin

Keywords: Data dissemination; protein-protein interactions; agent migration; inter-platform mobility; multi-agent systems

PDF

Paper 78: Hybrid Metaheuristic Aided Energy Efficient Cluster Head Selection in Wireless Sensor Network

Abstract: Clustering is one of the significant techniques for extending the lifetime of wireless sensor networks (WSNs). It entails grouping sensor nodes (SNs) into clusters and electing a cluster head (CH) for each cluster; the CH collects information from the nodes of its cluster and passes the aggregated data to the base station (BS). The most important requirement in a WSN, however, is to choose a suitable CH so as to increase the network's lifespan. This work introduces a new CH selection (CHS) model for WSNs. The optimal CH is elected by a new hybrid model termed the "Lion Updated Dragonfly Algorithm" (LU-DA), which hybridizes the concepts of the Dragonfly Algorithm (DA) and the Lion Algorithm (LA). Moreover, the optimal selection of the CH depends on constraints such as energy, delay, distance, security (risk), and trust (direct and indirect trust); this optimal CH ensures enhancement of the network lifetime. Finally, the superiority of the developed approach is demonstrated on various measures such as energy and alive-node analysis: the proposed model achieved a higher normalized energy of 0.55 in the first round, dropping to 0.1 by the 2000th round.
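
The constraint-driven selection can be illustrated with a toy fitness function; the weights and attribute values below are hypothetical and are not LU-DA's actual formulation.

    # Illustrative CH fitness combining the constraints named in the abstract.
    def ch_fitness(node, w_energy=0.35, w_delay=0.15, w_dist=0.15,
                   w_risk=0.15, w_trust=0.20):
        # Attributes are assumed normalized to [0, 1]; higher fitness = better.
        return (w_energy * node["energy"]
                + w_trust * node["trust"]
                - w_delay * node["delay"]
                - w_dist * node["distance"]
                - w_risk * node["risk"])

    candidates = [
        {"id": 1, "energy": 0.9, "delay": 0.2, "distance": 0.3, "risk": 0.1, "trust": 0.8},
        {"id": 2, "energy": 0.6, "delay": 0.1, "distance": 0.2, "risk": 0.3, "trust": 0.9},
    ]
    best = max(candidates, key=ch_fitness)   # a metaheuristic would search this space
    print("elected CH:", best["id"])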

Author 1: Turki Ali Alghamdi

Keywords: Cluster head; security; trust; dragonfly algorithm; LU-DA model

PDF

Paper 79: Risk Assessment Methods for Cybersecurity in Nuclear Facilities: Compliance to Regulatory Requirements

Abstract: As strategic infrastructures, nuclear facilities are considered attractive targets for attackers with malicious intentions. At the same time, for efficiency, these infrastructures are increasingly implemented, equipped, and managed by digitally computerized systems, so attackers try to realign their attack scenarios through such cyber systems. To prevent such attacks, it is crucial to understand the various existing risk assessment methods for cybersecurity in nuclear facilities. Risk assessment is designed to study the nature of the originating attack threats and their implied consequences. This paper studies a series of risk assessment methods implemented for the cybersecurity of strategic infrastructures, including nuclear facilities. Extending from cybersecurity, the required concepts in nuclear security cover defense-in-depth, the synergy of safety and security, and probabilistic safety/risk assessment; the selection of cybersecurity risk assessment methods should integrate these three essential concepts. This paper highlights the suitable and appropriate risk assessment methods that meet the security requirements of the nuclear industry as specified in national and international regulations.

Author 1: Lilis Susanti Setianingsih
Author 2: Reza Pulungan
Author 3: Agfianto Eko Putra
Author 4: Moh Edi Wibowo
Author 5: Syarip

Keywords: Risk assessment; cybersecurity; nuclear facilities; security requirements; regulatory requirements

PDF

Paper 80: Comparative Analysis of Spark and Ignite for Big Spatial Data Processing

Abstract: Recently, spatial data has become one of the most interesting fields in big data studies, with spatial data being generated and consumed from many different sources. The increasing number of location-based services and applications, such as Google Maps, vehicle navigation, and recommendation systems, is the main driver behind spatial data. Several researchers have started to explore and compare spatial frameworks to understand the requirements for spatial database processing, manipulation, and analysis systems. Apache Spark, Apache Ignite, and Hadoop are the most widely known frameworks for large-scale data processing. Apache Spark and Apache Ignite have both integrated spatial data operations and analysis queries, but each system has its advantages and disadvantages when dealing with spatial data, and adopting a new framework that requires integrating new functionality can be a risky decision if it has not been examined well. The main aim of this research is to conduct a comprehensive evaluation of big spatial data computing on two well-known data management systems, Apache Ignite and Apache Spark. The comparison covers four domains: experimental environment setup, supported features, supported functions and queries, and performance and execution time. The results show that GeoSpark is more flexible to use than SpatialIgnite. We thoroughly investigated both frameworks and found that multiple factors affect their performance, such as CPU, main memory, dataset size, the complexity of the data types, and the programming environment. Spark is more advanced and is equipped with several functionalities, such as kNN queries, that make it well suited to spatial data queries and indexing; these functionalities are not supported in SpatialIgnite.

Author 1: Samah Abuayeid
Author 2: Louai Alarabi

Keywords: Big spatial data; GeoSpark; SpatialIgnite; Apache Ignite; Apache Spark

PDF

Paper 81: Carrot Disease Recognition using Deep Learning Approach for Sustainable Agriculture

Abstract: Carrot is a fast-growing and nutritious vegetable cultivated throughout the world for its edible roots. Farmers worldwide are still learning the scientific methods of carrot production, and modern technology is not being used to its fullest to detect carrot diseases on farms and ensure the production of good-quality carrots. As a result, farmers regularly face difficulties in continuously monitoring and detecting defects in carrot crops. Hence, this paper proposes an efficient carrot disease identification and classification method using a deep learning approach, specifically a Convolutional Neural Network (CNN). In this research, five different classes of carrots, including healthy carrots, were examined and experimented on with four pretrained CNN architectures, i.e., VGG16, VGG19, MobileNet, and Inception v3. Among the four models, Inception v3 was selected as the most efficient pretrained CNN architecture for building an effective and robust system. The proposed Inception v3-based system takes carrot images as input, examines whether they are healthy or infected, and provides output accordingly. To train and evaluate the system, a robust dataset consisting of original and synthetic data is used. In the fully connected neural network (FCNN), dropout is used to mitigate overfitting and improve the accuracy of the system. The accuracy achieved with Inception v3 is 97.4%, which is undoubtedly helpful for farmers seeking to identify carrot diseases and maximize their benefits toward sustainable agriculture.
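
A minimal transfer-learning sketch in the spirit of the described system follows, assuming TensorFlow/Keras; the head layers, dropout rate, and class count are assumptions, not the authors' exact architecture.

    # Inception v3 as a frozen feature extractor with a dropout-regularized head.
    import tensorflow as tf

    n_classes = 5   # assumed; adjust to the dataset's actual class count

    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, input_shape=(299, 299, 3))
    base.trainable = False                       # keep pretrained features frozen

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),            # dropout against overfitting
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()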

Author 1: Naimur Rashid Methun
Author 2: Rumana Yasmin
Author 3: Nasima Begum
Author 4: Aditya Rajbongshi
Author 5: Md. Ezharul Islam

Keywords: Deep learning; convolutional neural network; Inception v3; carrot disease recognition

PDF

Paper 82: Data Security: A New Symmetric Cryptosystem based on Graph Theory

Abstract: Sharing private data over an unsecured channel is extremely critical, as unauthorized entities can intercept it and break its privacy. The design of a cryptosystem that fulfills the security requirements of confidentiality, integrity, and authenticity of transmitted data has therefore become an unavoidable imperative, and a lot of work has been carried out in this regard. Although many cryptosystems have been proposed in the published literature, their robustness and performance vary considerably from one to another. Following this reflection, we address in this paper the concept of the block cipher, a major cryptographic solution for guaranteeing confidentiality, using properties from graph theory to represent the plaintext message. Our proposal is a new symmetric block cipher that represents plaintext messages as disjoint Hamiltonian circuits and then treats them as an adjacency matrix in a pre-encryption phase. The proposed system relies on a particular sub-key generator carefully designed to produce the encryption keys according to the specifications of the system. The experimental results demonstrate that the proposed cryptosystem is robust against statistical attacks, particularly the DIEHARD test battery, and exhibits both good confusion and good diffusion.
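
To illustrate only the representation step, not the cipher itself, a Hamiltonian circuit can be encoded as an adjacency matrix; the vertex ordering below is a made-up example.

    # Encode a Hamiltonian circuit over n vertices as an n x n adjacency matrix.
    import numpy as np

    def circuit_to_adjacency(order):
        # `order` visits every vertex exactly once; close the cycle at the end.
        n = len(order)
        A = np.zeros((n, n), dtype=int)
        for i in range(n):
            u, v = order[i], order[(i + 1) % n]
            A[u, v] = A[v, u] = 1
        return A

    order = [0, 3, 1, 4, 2]            # hypothetical Hamiltonian circuit
    print(circuit_to_adjacency(order))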

Author 1: Khalid Bekkaoui
Author 2: Soumia Ziti
Author 3: Fouzia Omary

Keywords: Cryptosystem; graph theory; hamiltonian circuits; adjacency matrix; block cipher; encryption

PDF

Paper 83: A New Algorithm to Reduce Peak to Average Power Ratio in OFDM Systems based on BCH Codes

Abstract: Orthogonal Frequency Division Multiplexing (OFDM) suffers from a high peak-to-average power ratio (PAPR), which reduces the performance of the power amplifier (PA) and therefore deteriorates the overall energy efficiency of an OFDM system. Peak Insertion (PI) is one of the most commonly used methods to reduce PAPR and gives the best PAPR reduction; however, it causes a strong degradation in Bit Error Rate (BER). To solve this problem, we propose a new algorithm called BCB-OFDM, based on Bose-Chaudhuri-Hocquenghem (BCH) codes and PI. BCB is implemented in an OFDM system with Quadrature Amplitude Modulation (QAM) and two coding rates, 1/2 and 1/4, over an Additive White Gaussian Noise (AWGN) channel. Simulation results show that BCB is very promising, achieving good PAPR reduction while keeping good performance compared with PI and plain OFDM. In addition, the BCB algorithm is simple and robust, requires no side information, and offers flexibility in the trade-off between PAPR reduction and BER performance.
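
For reference, the PAPR targeted by the paper is the ratio of peak to mean instantaneous power of the time-domain OFDM symbol; a minimal numpy sketch follows (plain 4-QAM, no BCB coding).

    # PAPR = max |x|^2 / mean |x|^2 over the time-domain samples.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sub = 64                                     # number of subcarriers
    # Random 4-QAM symbols on each subcarrier (illustrative, uncoded).
    symbols = rng.choice([-1, 1], n_sub) + 1j * rng.choice([-1, 1], n_sub)
    x = np.fft.ifft(symbols)                       # time-domain OFDM symbol

    papr_db = 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))
    print(f"PAPR = {papr_db:.2f} dB")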

Author 1: Brahim BAKKAS
Author 2: Reda Benkhouya
Author 3: Toufik Chaayra
Author 4: Chana Idriss
Author 5: Hussain Ben-Azza

Keywords: Orthogonal Frequency Division Multiplexing (OFDM); Peak to Average Power Ratio (PAPR); Bit Error Rate (BER); Peak Insertion (PI); Coding; Bose Chaudhuri Hocquenghem (BCH)

PDF

Paper 84: Collision Resolution Techniques in Hash Table: A Review

Abstract: One of the major challenges of hashing is achieving constant access time O(1) with efficient memory use in a high-load-factor environment, where various keys generate the same hash value or address. This causes collisions in the hash table. To resolve collisions and achieve O(1) access time, researchers have proposed several collision-handling methods, most of which introduce non-constant access time in the worst case. In this study, the worst cases of several proposed collision resolution techniques are analyzed based on their time complexity in a high-load-factor environment; it was found that almost all existing techniques have non-constant worst-case access time. They all require additional computation for rehashing keys in the hash table, some of it the result of deadlock while iterating to insert a key. It was also found that all the reviewed techniques leave wasted slots in the hash table. This work therefore provides an in-depth understanding of collision resolution techniques, which can serve as an avenue for further research in the field.
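
As background for the techniques surveyed, here is a minimal sketch of one classic approach, open addressing with linear probing (illustrative, not taken from the paper); note how a full table forces the rehash/resize work the abstract mentions.

    # Open addressing with linear probing: probe the next slot on collision.
    class LinearProbingTable:
        def __init__(self, capacity=8):
            self.slots = [None] * capacity

        def insert(self, key, value):
            h = hash(key) % len(self.slots)
            for step in range(len(self.slots)):
                i = (h + step) % len(self.slots)
                if self.slots[i] is None or self.slots[i][0] == key:
                    self.slots[i] = (key, value)
                    return
            raise RuntimeError("table full; a real table would resize and rehash")

        def get(self, key):
            h = hash(key) % len(self.slots)
            for step in range(len(self.slots)):
                i = (h + step) % len(self.slots)
                if self.slots[i] is None:
                    raise KeyError(key)
                if self.slots[i][0] == key:
                    return self.slots[i][1]
            raise KeyError(key)

    t = LinearProbingTable()
    t.insert("a", 1); t.insert("b", 2)
    print(t.get("a"), t.get("b"))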

Author 1: Ahmed Dalhatu Yusuf
Author 2: Saleh Abdullahi
Author 3: Moussa Mahamat Boukar
Author 4: Salisu Ibrahim Yusuf

Keywords: Hashing; collision resolution; hash table; hash function; slot

PDF

Paper 85: Future Friend Recommendation System based on User Similarities in Large-Scale on Social Network

Abstract: Friendship is one of the most important issues in online social networks (OSNs). Researchers analyze OSNs to determine how people are connected in a network and how new connections develop. Most existing methods cannot efficiently evaluate a friendship graph's internal connectivity and fail to render proper recommendations. This paper presents three proposed algorithms that can be applied in an OSN to predict future friend recommendations for users. Using network and profile similarity, the proposed approach can measure the similarity among users. To predict user similarity, we calculate an average weight that indicates the probability of two users being similar by considering every precise subset of profile attributes, such as age, profession, location, and interest, rather than taking only the average over the superset of profile attributes. The suggested algorithms achieve a significant enhancement in prediction accuracy (97%) and precision (96.566%). Furthermore, the proposed recommendation frameworks can handle missing values in any profile attribute by inferring the value from friends' profile attributes.
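
The subset-based similarity idea can be sketched as follows; the attribute names, values, and equal-weight averaging are hypothetical simplifications, not the paper's exact weighting scheme.

    # Average the match ratio over every non-empty subset of profile attributes.
    from itertools import combinations

    def subset_similarity(u, v, attrs):
        scores = []
        for r in range(1, len(attrs) + 1):
            for subset in combinations(attrs, r):
                matches = sum(u[a] == v[a] for a in subset)
                scores.append(matches / len(subset))
        return sum(scores) / len(scores)

    alice = {"age": 25, "profession": "engineer", "location": "Dhaka", "interest": "ml"}
    bob   = {"age": 25, "profession": "teacher",  "location": "Dhaka", "interest": "ml"}
    print(round(subset_similarity(alice, bob, list(alice)), 3))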

Author 1: Md. Amirul Islam
Author 2: Linta Islam
Author 3: Md. Mahmudul Hasan
Author 4: Partho Ghose
Author 5: Uzzal Kumar Acharjee
Author 6: Md. Ashraf Kamal

Keywords: Social networks; recommendation framework; profile similarity; network similarity

PDF

Paper 86: An Open-source Wireless Platform for Real-time Water Quality Monitoring with Precise Global Positioning

Abstract: Sustainable development associated with the agricultural sector of Arequipa, a region in economic growth, is vulnerable to contamination of water resources, putting production systems and food security at risk. It is therefore necessary to implement an automated system to control, manage, and monitor this vital resource. This work proposes a system for water quality monitoring in reservoirs and lakes with highly accurate global positioning. Its hardware architecture includes an embedded computer, a multiparameter sonde, and an additional dual GNSS/INS. The software architecture is fully open-source, with compatibility, modularity, and interoperability between Python and MySQL, allowing real-time data management in a visual interface on a platform that provides unlimited data logging, monitoring, and analysis. The proposed system is validated in an experimental test measuring the water quality of a large agricultural reservoir, where certified instrumentation is mandatory, and is compared against other methods used locally for this purpose.
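
A minimal sketch of the Python-to-MySQL data path follows; the table schema, credentials, coordinates, and the mysql-connector-python dependency are placeholders for illustration, not the platform's actual code.

    # Log one sonde reading with its position into MySQL.
    import mysql.connector  # assumes the mysql-connector-python package

    conn = mysql.connector.connect(host="localhost", user="monitor",
                                   password="secret", database="water_quality")
    cur = conn.cursor()
    cur.execute("""CREATE TABLE IF NOT EXISTS readings (
                     ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                     lat DOUBLE, lon DOUBLE, ph DOUBLE, turbidity DOUBLE)""")
    cur.execute("INSERT INTO readings (lat, lon, ph, turbidity) VALUES (%s, %s, %s, %s)",
                (-16.409, -71.537, 7.4, 3.2))   # made-up sample values
    conn.commit()
    conn.close()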

Author 1: Niel F. Salas-Cueva
Author 2: Jorch Mendoza
Author 3: Juan Carlos Cutipa-Luque
Author 4: Pablo Raul Yanyachi

Keywords: Open-source; water quality monitoring; real-time; python; visual interface; MySQL; dual Global Navigation Satellite System (GNSS); Inertial Navigation System (INS); multiparameter sonde

PDF

Paper 87: Structured and Unstructured Robust Control for an Induction Motor

Abstract: Indirect field-oriented control approaches for induction motors have recently gained attention due to their use in trending areas such as electromobility, electric vehicles, electric ships, and unmanned vehicles. This work studies the performance of two advanced controllers synthesized via the H∞ norm as alternatives to the classical Proportional-Integral-Derivative (PID) controller. They are assessed in terms of performance against disturbances and variations in the reference speed under nominal conditions. The tuning of the controllers' parameters must be defined with respect to the stability and performance of the system and so as to increase their operating frequency range; an algorithm is proposed to reach a better shape for the weighting functions. A numerical simulation shows that, despite advances in structured controller synthesis, the unstructured H∞ controller remains the better choice for induction motor control: the unstructured approach still shows good robustness in performance and stability compared with the structured controller. The constraints imposed on the structured controller are the main disadvantage limiting its robustness properties. Nevertheless, compared with a conventional PID approach, the structured controller shows quite good performance and can become one of the most attractive approaches for practitioners.
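
For flavor, a mixed-sensitivity H∞ synthesis can be sketched with the python-control package (which needs the slycot backend installed); the plant and weighting functions below are illustrative stand-ins, not the paper's induction motor model.

    # Mixed-sensitivity H-infinity design: weight sensitivity S and effort KS.
    import control

    g = control.tf([1], [0.5, 1, 1])          # toy plant, not an induction motor
    w1 = control.tf([0.5, 1], [1.0, 0.01])    # performance weight on S
    w2 = control.tf([1], [1])                 # weight on control effort KS

    k, cl, info = control.mixsyn(g, w1, w2, None)   # unstructured H-inf design
    print("synthesis info:", info)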

Author 1: Jhoel F. Espinoza-Quispe
Author 2: Juan C. Cutipa-Luque
Author 3: German A. Echaiz Espinoza
Author 4: Andres O. Salazar

Keywords: H∞ robust control; induction motor; indirect field oriented control

PDF

Paper 88: Employing DDR to Design and Develop a Flipped Classroom and Project based Learning Module to Applying Design Thinking in Design and Technology

Abstract: The purpose of this study is to discuss the Design and Development Research (DDR) approach used to develop flipped classroom and project-based learning modules for students of Design and Technology (D&T). The module's underlying theory is based on 21st-century teaching and learning models as well as design thinking. The DDR process is divided into three phases: needs analysis, design and development, and evaluation. The needs analysis phase is used to ascertain the necessity of module development and the application of design thinking; three distinct data collection methods were used in this phase: semi-structured interviews, survey studies, and document analysis. The findings from this phase underpin the next phase, in which the Isman Instructional Design Model (2011) is adapted as a guide for module design and development. Additionally, the Fuzzy Delphi Method is used to obtain expert consensus on module material design, teaching and learning strategies, software and hardware development requirements, and module prototype evaluation. The final phase is implementation and evaluation, which focuses on determining the module's effectiveness in the actual teaching and learning process. Each finding is organised and documented systematically in accordance with the DDR phases in order to produce more meaningful research results. The article concludes by proposing a conceptual framework for the research.

Author 1: Mohd Ridzuan Padzil
Author 2: Aidah Abd Karim
Author 3: Hazrati Husnin

Keywords: Flipped classroom; project-based learning; design and development research (DDR); Isman instructional design model

PDF

Paper 89: Effective Service Discovery based on Pertinence Probabilities Learning

Abstract: Web service discovery is one of the most motivating issues in the service-oriented computing field. Several approaches have been proposed to tackle this problem; in general, they leverage similarity measures or logic-based reasoning, but they still present some limitations in terms of effectiveness. In this paper, we propose a probabilistic approach that merges a set of matching algorithms to boost the global performance. The key idea consists of learning a set of relevance probabilities and then using them to produce a combined ranking. The experiments conducted on the real-world dataset "OWL-S TC 2" demonstrate the effectiveness of our model in terms of mean average precision (MAP); more specifically, our solution, termed "probabilistic fusion", outperforms all the state-of-the-art matchmakers as well as the most prominent similarity measures.
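
The combination step can be sketched as probability-weighted score fusion; the matcher scores and probabilities below are made up for illustration and are not the paper's learned values.

    # Combine per-matcher scores, weighted by learned relevance probabilities.
    def probabilistic_fusion(score_lists, probs):
        # score_lists: one {service: score} dict per matching algorithm;
        # probs: learned probability that each algorithm ranks relevantly.
        combined = {}
        for scores, p in zip(score_lists, probs):
            for service, s in scores.items():
                combined[service] = combined.get(service, 0.0) + p * s
        return sorted(combined, key=combined.get, reverse=True)

    matcher_a = {"s1": 0.9, "s2": 0.4, "s3": 0.7}   # hypothetical match scores
    matcher_b = {"s1": 0.5, "s2": 0.8, "s3": 0.6}
    print(probabilistic_fusion([matcher_a, matcher_b], probs=[0.7, 0.3]))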

Author 1: Mohammed Merzoug
Author 2: Abdelhak Etchiali
Author 3: Fethallah Hadjila
Author 4: Amina Bekkouche

Keywords: Service-oriented computing; web service discovery; rank aggregation; probabilistic fusion

PDF
