IJACSA Volume 10 Issue 10

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially as long as the original work is properly cited.


Paper 1: Detecting Public Sentiment of Medicine by Mining Twitter Data

Abstract: The paper presents a computational method that mines, processes and analyzes Twitter data for detecting public sentiment of medicine. Self-reported patient data are collected over a period of three months by mining the Twitter feed, resulting in more than 10,000 tweets used in the study. Machine learning algorithms are used for automatic classification of public sentiment on selected drugs, and various learning models are compared in the study. This work demonstrates a practical case of utilizing social media to identify customer opinions and build a drug effectiveness detection system. Our model has been validated on a tweet dataset with a precision of 70.7%. In addition, the study examines the correlation between patient symptoms and their choice of medication.

Author 1: Daisuke Kuroshima
Author 2: Tina Tian

Keywords: Twitter; social media; data mining; public health

PDF
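
The classification step summarized in Paper 1 can be illustrated with a minimal scikit-learn sketch. The tweets, labels, and model choice (TF-IDF features with logistic regression) are illustrative assumptions, not the authors' actual dataset or pipeline.

```python
# Minimal sketch of tweet sentiment classification (illustrative data and model,
# not the authors' exact setup).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

tweets = [
    "this medicine relieved my headache quickly",
    "terrible side effects, felt dizzy all day",
    "works great, no more back pain",
    "made my nausea worse, would not recommend",
]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative sentiment

X_train, X_test, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.5, random_state=42, stratify=labels)

vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

clf = LogisticRegression().fit(X_train_vec, y_train)
pred = clf.predict(X_test_vec)
print("precision:", precision_score(y_test, pred, zero_division=0))
```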

Paper 2: Interpolation of Single Beam Echo Sounder Data for 3D Bathymetric Model

Abstract: By transmitting sound waves into water and measuring the time interval between the emission and return of a pulse, a single beam echo sounder determines the depth of the sea. To obtain a bathymetric model that represents the sea floor continuously, interpolation is necessary to process the irregularly spaced measured points resulting from echo sounder acquisition and to calculate depths in unsampled areas. Several interpolation methods are available in the literature, and the choice of the most suitable one cannot be made a priori but must be evaluated each time. This paper aims to compare different interpolation methods for processing single beam echo sounder data of the Gulf of Pozzuoli (Italy) to achieve a 3D model. The experiments are carried out in a GIS (Geographic Information System) environment (software: ArcGIS 10.3 and its Geostatistical Analyst extension by ESRI). The choice of the most accurate digital depth model is made using automatic cross validation. Radial basis function and kriging prove to be the best interpolation methods for the considered dataset.

Author 1: Claudio Parente
Author 2: Andrea Vallario

Keywords: Interpolation; bathymetric model; 3D model; digital depth model; kriging; radial basis function; Geographic Information System (GIS)

PDF
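
The interpolation-and-cross-validation workflow of Paper 2 can be sketched in a few lines with SciPy's radial basis function interpolator; the scattered depth points below are synthetic illustrations, not the Gulf of Pozzuoli survey, and the leave-one-out check stands in for the GIS cross-validation tool.

```python
# Minimal sketch: interpolate scattered single-beam depth points with an RBF
# and compare via leave-one-out cross-validation (illustrative data only).
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, size=(200, 2))                      # sounding positions (m)
depth = -30 - 0.01 * xy[:, 0] + rng.normal(0, 0.2, 200)       # measured depths (m)

# Fit the RBF on all points and predict on a regular grid (the bathymetric model).
rbf = RBFInterpolator(xy, depth, kernel="thin_plate_spline")
gx, gy = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
grid_depth = rbf(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

# Leave-one-out cross-validation error, the criterion used to compare methods.
errors = []
for i in range(len(xy)):
    mask = np.arange(len(xy)) != i
    model = RBFInterpolator(xy[mask], depth[mask], kernel="thin_plate_spline")
    errors.append(depth[i] - model(xy[i:i + 1])[0])
print("LOO RMSE (m):", np.sqrt(np.mean(np.square(errors))))
```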

Paper 3: Study of Cross-Platform Technologies for Data Delivery in Regional Web Surveys in the Education

Abstract: Web surveys are a popular form of collecting primary data for various studies. However, mass regional polls have their own characteristics, including the following: it is necessary to take into account different platforms and browsers, as well as network speeds, if rural areas remote from large centers are involved in the polls. Ensuring guaranteed data delivery under these conditions requires the right choice of technology for implementing the surveys. The paper presents the results of analyzing the resilience of the technologies to various regional conditions for a web survey conducted within one week at schools throughout the Russian Federation. The survey involved 20,000 real educators. The paper describes the technologies used and provides information about the browsers and operating systems used by respondents. The absence of failures in data delivery confirms the effectiveness of the solutions.

Author 1: Evgeny Nikulchev
Author 2: Dmitry Ilin
Author 3: Vladimir Belov
Author 4: Pavel Pushkin
Author 5: Pavel Kolyasnikov
Author 6: Sergey Malykh

Keywords: Web-surveys; mass regional polls; various platforms and browsers; cross-platform technologies

PDF

Paper 4: Method for Texture Mapping of In-Vehicle Camera Image in 3D Map Creation

Abstract: A method for texture mapping of in-vehicle camera images in 3D map creation is proposed. A top view of ground cover targets can be mapped easily; for instance, aerial photos and high-spatial-resolution satellite imagery allow the creation of top views of ground cover targets and thus map creation, which can be used for pedestrian navigation. On the other hand, side views of ground cover targets are not so easy to obtain. In this paper, two methods are proposed. One is to use photos acquired with cameras mounted on dedicated cars. The other is to use high-spatial-resolution satellite imagery data, such as IKONOS, OrbView, etc. Through experiments with the aforementioned two methods, it is found that texture mapping for ground cover targets can be done with the two proposed methods in an efficient manner.

Author 1: Kohei Arai

Keywords: Texture mapping; high spatial resolution of satellite imagery data; 3D map

PDF

Paper 5: Development Trends of Online-based Aural Rehabilitation Programs for Children with Cochlear Implant Coping with the Fourth Industrial Revolution and Implication in Speech-Language Pathology

Abstract: The Korea Research Foundation selected the miniaturization and development of home care devices as the future promising technologies in the biotechnology (BT) area along with the Fourth Industrial Revolution. Accordingly, it is believed that there will be innovative changes in the rehabilitation field, including the development of smart diagnostics and treatment devices. Moreover, rehabilitation equipped with individualization, precision, miniaturization, portability, and accessibility is expected to draw attention. It has been continuously reported in the past decade that hearing-impaired toddlers who became able to hear speech through cochlear implantation and hearing rehabilitation before the age of 3, which is a critical period of language development, show a language development pattern similar to that of healthy toddlers. As a result, the need for developing language rehabilitation programs customized for patients wearing artificial cochlea has emerged. In other words, since the improved hearing ability owing to cochlear implant does not guarantee to promote speech perception and language development, intensive rehabilitation and education are needed for patients to recognize the heard speech as a meaningful language for communication. Nevertheless, a literature search on domestic and foreign cases revealed that there are insufficient language rehabilitation programs for cochlear implant patients as well as customized programs for them in the clinical coalface. This study examined the trend and marketability of online-based aural rehabilitation programs for patients wearing artificial cochlea and described the implications for language rehabilitation. This study suggested the following implications for developing a customized aural rehabilitation program. It is needed to secure and develop contents that can implement “a hand-held hospital” by using medical devices and mobile devices owned by consumers that transcend time and space. Also, it is necessary to develop a cochlear implant hearing rehabilitation training program suitable for native Korean speakers.

Author 1: Haewon Byeon

Keywords: Cochlear implant; future promising technologies; smart diagnostics; Online-based aural rehabilitation

PDF

Paper 6: Effective Methods to Improve the Educational Process of Medicine in Bulgaria

Abstract: The introduction of modern technologies into the educational process of medical students is a challenge of the new era in education, one that can increase the success of students and give them confidence in their capabilities. The paper considers the use of physiological clinical record databases as an effective means for students to gain prior experience that will be of use to them in their professional work. The paper describes the introduction of serious educational games into the learning process of students in Bulgaria. The serious games and the pedagogical methods applied therein are an innovative technological means of developing the individual, social and cognitive qualities on which the individual's professional realization depends. The paper presents the results of a survey conducted at the universities of medical education in Bulgaria. Respondents' opinions about their desire to use serious games in their training, and about how the games affect them, have been studied and presented.

Author 1: Galya N Georgieva-Tsaneva

Keywords: Serious educational games; learning process; gaming training; pedagogical methods; innovative technological means; medical education

PDF

Paper 7: Region-wise Ranking for One-Day International (ODI) Cricket Teams

Abstract: In cricket, the region plays a significant role in ranking teams. The International Cricket Council (ICC) uses an ad-hoc points system to rank cricket teams, which is based entirely on the number of matches won and lost. The ICC ignores the strengths and weaknesses of teams across regions. Even though the relative accuracy of the ad-hoc ranking is high, it does not provide a clearly defined ranking method. We propose a Region-wise Team Rank (RTR) and a Region-wise Weighted Team Rank (RWTR) to rank cricket teams. The intuition is to award more points to a team that wins a match against a stronger team than to a team that wins against a weaker team, and vice versa. The proposed method considers not only the number of region-wise wins and losses but also incorporates the region-wise strengths and weaknesses of a team when assigning its ranking score. In conclusion, the resulting ranking list of the teams is compared with the official ICC ranking.

Author 1: Akbar Hussain
Author 2: Yan Qiang
Author 3: M. Abdul Qadoos Bilal
Author 4: Kun Wu
Author 5: Zijuan Zhao
Author 6: Bilal Ahmed

Keywords: Batting; bowling and fielding strength; PageRank; region strength; team’s strength

PDF

Paper 8: Land use Detection in Nusajaya using Higher-Order Modified Geodesic Active Contour Model

Abstract: Urban development is a global phenomenon. In Johor, Nusajaya in particular is one of the most rapidly developing cities, owing to increasing land demand and population growth. Moreover, land-use changes are considered one of the major components of current environmental monitoring strategies. In this context, image segmentation and mathematical models offer essential tools that can be used to analyze land use detection. The image segmentation process is known as the most important and difficult task in image analysis. Nonlinear fourth-order models have been shown to perform well in recovering smooth regions. This motivates us to propose a fourth-order modified geodesic active contour (GAC) model. In the proposed model, a modified signed pressure force (SPF) function has been defined to segment inhomogeneous satellite images. Simulations of the fourth-order modified GAC model using numerical methods based on the higher-order finite difference method (FDM) are illustrated. Matlab R2015a running on Windows 7 Ultimate on an Intel(R) Core(TM) i5-3230M @ 2.60 GHz CPU with 8 GB RAM was used as the computational platform for the simulation. Qualitative and quantitative differences between the modified SPF function and other SPF functions are shown as a comparison. Such land use detection is very useful for local governments and urban planners in enhancing future sustainable development plans for Nusajaya.

Author 1: N Alias
Author 2: M. N. Mustaffa
Author 3: F. Mustapha

Keywords: Higher-order geodesic active contour (GAC); segmentation; land use; finite difference method; numerical methods

PDF

Paper 9: Crypto-Steganographic LSB-based System for AES-Encrypted Data

Abstract: The purpose of this work is to increase the level of concealment of information from unauthorized access by pre-encrypting it and hiding it in multimedia files such as images. A crypto-steganographic information protection algorithm using the LSB method was implemented. The algorithm hides AES pre-encrypted confidential information, in the form of text or images, inside target container image files. The method uses the concept of concealing data in the least significant bits of the pixels of the target image files. The proposed method relies on the Diffie-Hellman public key exchange protocol for securely exchanging the stego-key used for LSB as well as the public key used for encrypting the secret information. The algorithm ensures that the visual quality of the image remains unchanged, with no distortions perceived by the human eye. The algorithm also complicates the detection of concealed information embedded in the target image through the use of a PRNG as an enhancement for LSB. The proposed system achieved competitive results: on average, a Peak Signal-to-Noise Ratio (PSNR) of 96.3 dB and a Mean Square Error (MSE) of 0.00408. The results obtained demonstrate that the proposed system offers high payload capabilities with immunity against visual degradation of the resultant stego images.

Author 1: Mwaffaq Abu-Alhaija

Keywords: Steganography; cryptography; cryptographic steganography; crypto-steganographic system; Least-Significant Bit Replacement (LSB-method); stego-key; public-key cryptography; Advanced Encryption System (AES); Diffie-Hellman protocol; key exchange; concealment of information; PRNG

PDF
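
The LSB substitution step described in Paper 9 can be sketched as follows. This is a simplified illustration, not the authors' system: the AES encryption and Diffie-Hellman key exchange are omitted, a plain byte string stands in for the ciphertext, and a keyed PRNG scatters the bits across a grayscale cover image.

```python
# Minimal sketch of keyed-PRNG LSB embedding and extraction (encryption omitted).
import numpy as np

def embed_lsb(cover: np.ndarray, payload: bytes, key: int) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten()                         # flatten() returns a copy
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    order = np.random.default_rng(key).permutation(flat.size)[: bits.size]
    flat[order] = (flat[order] & 0xFE) | bits      # overwrite the LSBs
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int, key: int) -> bytes:
    flat = stego.flatten()
    order = np.random.default_rng(key).permutation(flat.size)[: n_bytes * 8]
    return np.packbits(flat[order] & 1).tobytes()

cover = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
secret = b"pre-encrypted message bytes"            # would be AES ciphertext in the paper
stego = embed_lsb(cover, secret, key=1234)
assert extract_lsb(stego, len(secret), key=1234) == secret
print("max pixel change:", int(np.max(np.abs(stego.astype(int) - cover.astype(int)))))
```

Because only the least significant bit of selected pixels changes, the maximum per-pixel distortion is 1, which is why such schemes report very high PSNR values.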

Paper 10: Scalable Data Analytics Market Basket Model for Transactional Data Streams

Abstract: Transactional data streams (TDS) are incremental in nature; thus, the process of mining them is complicated. Such complications arise from challenges such as infinite length, feature evolution, concept evolution and concept drift. Tracking concept drift is very difficult, and thus very important, for Market Basket Analysis (MBA) applications. Hence the need for a strategy to accurately determine the suitability of item pairs within the available billions of pairs in order to solve the concept drift challenge of TDS in MBA. In this work, a Scalable Data Analytics Market Basket Model (SDAMBM) that handles concept drift issues in MBA was developed. Transactional data comprising 1,112,000 records were extracted from a grocery store using an Extraction, Transformation and Loading approach, and 556,000 instances of the data were simulated from a cloud database. The Calibev function was used to calibrate the data nodes. Lugui 7.2.9 and the Comprehensive R Archive Network were used for table pivoting between the simulated data and the data collected. The SDAMBM was developed using a combination of components from the elixir big data architecture, the research conceptual model and consumer behavior theories. Toad Modeler was then used to assemble the model. The SDAMBM was implemented using Monarch and Tableau to generate insights and data visualization of the transactions. Intelligent interpreters for the auto decision grid, the selectivity mechanism and customer insights were used as metrics to evaluate the model. The results showed that 79% of the customers in the customers' consumption pattern of the SDAMBM preferred buying snacks and drinks, as shown in the visualization report through the SDAMBM visualization dashboard. Finally, this study provided a data analytics approach for managing the concept drift challenge in customers' buying patterns, as well as a distinctive model for managing concept drift. It is therefore recommended that the SDAMBM be adopted by business ventures, organizations and retailers for the enhancement of customers' buying and consumption patterns.

Author 1: Aaron A Izang
Author 2: Nicolae Goga
Author 3: Shade O. Kuyoro
Author 4: Olujimi D. Alao
Author 5: Ayokunle A. Omotunde
Author 6: Adesina K. Adio

Keywords: Association rule mining; big data analytics; concept drift; market basket analysis; transactional data streams

PDF

Paper 11: Haze Effects on Satellite Remote Sensing Imagery and their Corrections

Abstract: Imagery recorded using satellite sensors operating at visible wavelengths can be contaminated by atmospheric haze that originates from large-scale biomass burning. Such contamination can reduce the reliability of the imagery, and therefore having an effective method for removing it is crucial. The principal aim of this study is to investigate the effects of haze on remote sensing imagery and to develop a method for removing them. In order to gain a better understanding of the behaviour of haze, the effects of haze on satellite imagery were initially studied. A methodology for removing haze based on haze subtraction and filtering was then developed. The developed haze removal method was then evaluated by means of signal-to-noise ratio (SNR) and classification accuracy. The results show that the haze removal method is able to improve the haze-affected imagery qualitatively and quantitatively.

Author 1: Asmala Ahmad
Author 2: Shaun Quegan
Author 3: Suliadi Firdaus Sufahani
Author 4: Hamzah Sakidin
Author 5: Mohd Mawardy Abdullah

Keywords: Haze effects; haze removal; remote sensing; accuracy; visibility

PDF

Paper 12: A Semantic Ontology for Disaster Trail Management System

Abstract: Disasters, whether natural or human-made, leave a lasting impact on human lives and require mitigation measures. In the past, millions of human beings have lost their lives and property in disasters. Information and Communication Technology provides many solutions. The issue with disaster management systems developed so far is their lack of semantics, which causes them to fail to produce dynamic inferences. Here comes the role of semantic web technology, which helps to retrieve useful information. A semantic web-based, intelligent and self-administered framework utilizes XML, RDF, and ontologies for a semantic presentation of data. The ontology establishes fundamental rules for searching data in the unstructured world, i.e., the World Wide Web. Afterward, these rules are utilized for data extraction and reasoning purposes. Many disaster-related ontologies have been studied; however, none conceptualizes the domain comprehensively. Some of the domain ontologies are intended for a precise end goal, such as disaster plans. Others have been developed for the emergency operation center or for the recognition and characterization of objects in a calamity scene. A few ontologies depend on upper ontologies that are excessively abstract and exceptionally difficult to grasp for individuals who are not conversant with the theories behind the upper ontologies. The presently developed semantic web-based disaster trail management ontology covers almost all vital facets of disasters, such as disaster type, disaster location, disaster time, losses including casualties and infrastructure loss, services, service providers, relief items, and so forth. The objectives of this research were to identify the requirements of a disaster ontology, to construct the ontology, and to evaluate the ontology developed for Disaster Trail Management. The ontology was assessed efficaciously via competency questions: externally by domain experts and internally with the help of SPARQL queries.

Author 1: Ashfaq Ahmad
Author 2: Roslina Othman
Author 3: Mohamad Fauzan
Author 4: Qazi Mudassar Ilyas

Keywords: Semantic web; ontology; information retrieval; disaster trail management

PDF

Paper 13: Design of Embedded Vision System based on FPGA-SoC

Abstract: Advances in microelectronics over the last decades provide new tools and devices each year, making it possible to design ever more efficient artificial vision systems capable of meeting the imposed constraints. All the elements are thus brought together to make artificial vision one of the most promising, even unifying, scientific "challenges" of our time. This is because the development of a vision system requires knowledge from several disciplines, from signal processing to computer architecture, through probability theory, linear algebra, computer science, artificial intelligence, and analog and digital electronics. The work proposed in this paper lies at the intersection of the embedded systems and image processing domains. The objective is to propose an embedded vision system for video acquisition and processing, adding hardware accelerators in order to extract certain image characteristics. With the introduction of reconfigurable platforms, such as the new All Programmable System on Chip (APSoC) platforms, and the advent of new high-level Electronic Design Automation (EDA) tools to configure them, FPGA-SoC based image processing has emerged as a practical solution for most computer vision problems. In this paper, we are interested in the design and implementation of an embedded vision system. The design provides video streaming from the camera to the monitor and real-time hardware processing on the FPGA-SoC.

Author 1: Ahmed Alsheikhy
Author 2: Yahia Fahem Said

Keywords: Embedded vision; video processing architecture; real-time; All Programmable System on Chip (APSoC)

PDF

Paper 14: Using Brain Imaging to Gauge Difficulties in Processing Ambiguous Text by Non-native Speakers

Abstract: Processing ambiguous text is an ever-challenging problem for humans. In this study, we investigate how native Arabic speakers face problems in processing text in their non-native English language when it involves ambiguity. As a case study, we focus on prepositional-phrase (PP) attachment ambiguity, whereby a PP can be attached to the preceding noun (low attachment) or the preceding verb (high attachment). We set up an experiment in which human participants read text on a computer screen while their brain activity is monitored using near-infrared spectroscopy. Participants read two types of text: one involving PP-attachment ambiguity and the other unambiguous text, which is used as a control for comparison purposes. The brain activity data for the ambiguous and control texts are clustered using the hierarchical clustering technique available in Weka. The data reveal that Arabic speakers face more difficulty in processing ambiguous text as compared to unambiguous text.

Author 1: Imtiaz Hussain Khan

Keywords: Prepositional-phrase attachment ambiguity; near-infrared spectroscopy; Arabic speakers; hierarchical clustering

PDF

Paper 15: Discovering Gaps in Saudi Education for Digital Health Transformation

Abstract: The growing complexity of healthcare systems worldwide and the medical profession’s increasing dependency on information technology for accurate practice and treatment call for specific standardized education in health informatics programming, and accreditation of health informatics programs based on core competencies is progressing at an international level. This study investigates the state of affairs in health informatics programs within the Kingdom of Saudi Arabia (KSA) to determine (1) how well international standards are being met and (2) what further development is needed in light of KSA’s recent eHealth initiatives. This descriptive study collected data from publicly available resources to investigate Health Informatics programs at 109 Saudi institutions. Information about coursework offered at each institution was compared with American Medical Informatics Association (AMIA) curriculum guidelines. Of 109 institutions surveyed, only a handful offered programs specifically in health informatics. Of these, most programs did not match the coursework recommended by AMIA, and the majority of programs mimicked existing curricula from other countries rather than addressing unique Saudi conditions. Education in health informatics in KSA appears to be scattered, non-standardized, and somewhat outdated. Based on these findings, there is a clear opportunity for greater focus on core competencies within health informatics programs. The Saudi digital transformation (eHealth) initiative, as part of Saudi Vision 2030, clearly calls for implementation of internationally accepted health informatics competencies in education programs and healthcare practice, which can only occur through greater collaboration between medical and technology educators and strategic partnerships with companies, medical centers, and governmental institutions.

Author 1: Adeeb Noor

Keywords: Health informatics; education; information technology; American Medical Informatics Association (AMIA); Saudi Arabia; vision 2030

PDF

Paper 16: Performance Comparison of Collaborative-Filtering Approach with Implicit and Explicit Data

Abstract: A challenge in developing a collaborative filtering (CF)-based recommendation system is the item cold-start problem, which causes the data to be sparse and reduces the accuracy of the recommendations. Therefore, to produce high accuracy, a match is needed between the type of data and the approach used. The two approaches in CF are user-based and item-based CF, both of which can process two types of data: implicit and explicit data. This work aims to find the combination of approach and data type that produces high accuracy. Cosine similarity is used to measure the similarity between users and also between items. Mean Absolute Error (MAE) is measured to assess the accuracy of the recommendations. Testing three groups of data based on sparseness shows that the best accuracy is obtained with an explicit data-based approach, which has the smallest MAE value. The average MAE values are 0.1032 for user-based (implicit data), 0.2320 for user-based (explicit data), 0.3495 for item-based (implicit data), and 0.0926 for item-based (explicit data). The best accuracy is therefore achieved by the item-based (explicit data) approach, which has the smallest average MAE value.

Author 1: Fitri Marisa
Author 2: Sharifah Sakinah Syed Ahmad
Author 3: Zeratul Izzah Mohd Yusoh
Author 4: Tubagus Mohammad Akhriza
Author 5: Wiwin Purnomowati
Author 6: Rakesh Kumar Pandey

Keywords: Recommender system; collaborative-filtering; user-based; item-based; implicit-data; explicit-data

PDF
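
The item-based cosine-similarity approach and MAE measure discussed in Paper 16 can be sketched on a tiny explicit-rating matrix. The ratings below are invented for illustration; zeros mean "not rated", and this is not the authors' dataset or code.

```python
# Minimal sketch of item-based CF with cosine similarity and MAE (toy data).
import numpy as np

ratings = np.array([            # rows: users, columns: items; 0 = not rated
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def item_cosine_similarity(matrix):
    norms = np.linalg.norm(matrix, axis=0, keepdims=True)   # per-item norms
    norms[norms == 0] = 1.0
    return (matrix.T @ matrix) / (norms.T @ norms)

item_sim = item_cosine_similarity(ratings)

def predict(user, item):
    """Similarity-weighted average of the user's ratings on the other items."""
    rated = np.where(ratings[user] > 0)[0]
    rated = rated[rated != item]
    weights = item_sim[item, rated]
    if weights.sum() == 0:
        return ratings[ratings > 0].mean()
    return float(weights @ ratings[user, rated] / weights.sum())

# MAE over the observed ratings (predicting each from the user's other ratings).
observed = np.argwhere(ratings > 0)
errors = [abs(predict(u, i) - ratings[u, i]) for u, i in observed]
print("MAE:", round(float(np.mean(errors)), 4))
```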

Paper 17: A Prediction-based Curriculum Analysis using the Modified Artificial Bee Colony Algorithm

Abstract: Due to the vast amount of student information and the need for quick retrieval, establishing databases is at the top of the list of IT infrastructure in learning institutions. However, most of these institutions do not utilize them for knowledge discovery, which can aid in informed decision-making, investigation of teaching and learning outcomes, and development of prediction models in particular. Prediction models have been utilized in almost all areas, and improving the accuracy of the model is what this study seeks. Thus, the study presents a Scoutless Rule-driven binary Artificial Bee Colony (SRABC) as a searching strategy to enhance the accuracy of a prediction model for curriculum analysis. Experimental verification revealed that SRABC paired with K-Nearest Neighbor (KNN) increases the prediction accuracy from 94.14% to 97.59%, outperforming pairings with Support Vector Machine (SVM) and Logistic Regression (LR). SRABC efficiently selects 14 out of 60 variables through a majority voting scheme using the data of the BSIT students of Davao Del Norte State College (DNSC), Davao del Norte, Philippines.

Author 1: Reir Erlinda E Cutad
Author 2: Bobby D. Gerardo

Keywords: Binary artificial bee colony; rule-driven mechanism; prediction model; curriculum analysis

PDF

Paper 18: Comparative Analysis between a Photovoltaic System with Two-Axis Solar Tracker and One with a Fixed Base

Abstract: In this article, a comparative analysis is made of the energy stored by a photovoltaic system with an Arduino-controlled two-axis solar tracker and the energy stored by a fixed-base photovoltaic system. This is done with the aim of using electrical energy efficiently, since the optimal installation of photovoltaic systems plays an important role in their efficiency. Once the comparative analysis was performed, the performance of the photovoltaic system with the solar tracker was determined to be 24.06% higher than that of the fixed-base photovoltaic system. A correlational analysis was also carried out on the data collected for stored energy with respect to time. It determined that the photovoltaic system with a solar tracker has a low correlation of 0.334: the energy stored shows little dependence on the time of day at which it is captured because, if the direction of the sun's rays varies during the day, the system always seeks to face the sun's rays as directly as possible, guaranteeing sustained and flexible energy storage. The fixed-base photovoltaic system, by contrast, has a moderate inverse correlation of -0.489; that is, as the hours of the day pass, the orientation of the sun's rays changes and, since the solar cells cannot reorient themselves (being fixed-base), the energy captured becomes limited as the hours of the day increase. Based on these reference results, it is expected that photovoltaic system projects with solar trackers will be implemented in rural areas of Peru that lack electrical service, since they are more efficient than fixed-base photovoltaic systems.

Author 1: Omar Freddy Chamorro Atalaya
Author 2: Dora Yvonne Arce Santillan
Author 3: Martin Diaz Choque

Keywords: Photovoltaic system; solar cells; displacement; two axes; performance; orientation; stored energy

PDF
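
The correlation figures quoted in Paper 18 are ordinary Pearson coefficients between hour of day and stored energy. The sketch below shows how such coefficients are computed with NumPy; the measurement window and energy values are invented for illustration and are not the authors' data.

```python
# Minimal sketch of the time-vs-stored-energy correlation analysis (toy data).
import numpy as np

hours = np.arange(10, 18)                                   # afternoon window (assumed)
rng = np.random.default_rng(0)
tracker_wh = 55 + rng.normal(0, 3, hours.size)              # tracker: roughly constant
fixed_wh = 70 - 5 * (hours - 10) + rng.normal(0, 3, hours.size)  # fixed: declines over time

r_tracker = np.corrcoef(hours, tracker_wh)[0, 1]
r_fixed = np.corrcoef(hours, fixed_wh)[0, 1]
print(f"tracker vs time:    r = {r_tracker:+.3f}  (weak dependence on time)")
print(f"fixed base vs time: r = {r_fixed:+.3f}  (inverse trend as the day advances)")
```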

Paper 19: Merge of X-ETL and XCube towards a Standard Hybrid Method for Designing Data Warehouses

Abstract: There is no doubt that the hybrid approach is the best paradigm for designing effective multidimensional schemas. Its strength lies in its ability to combine the top-down and bottom-up approaches, thus exploiting the advantages of both. In this paper, the authors identify and analyze the different hybrid methods developed for building data warehouses. The analysis revealed that the existing methods are too complicated and time-consuming in the deployment phase. In order to solve this problem, the authors introduce a new hybrid method that is easy to use and saves a considerable amount of deployment time. This new method consists of two main steps: the first, data-driven step analyzes the source models using the X-ETL method and gives rise to star models. The second, requirements-driven step performs a semantic analysis of the needs expressed in natural language using the XCube Assist method. This analysis improves the quality of the star models generated by the X-ETL method without the intervention of a designer.

Author 1: Nawfal El Moukhi
Author 2: Ikram El Azami
Author 3: Abdelaaziz Mouloudi
Author 4: Abdelali Elmounadi

Keywords: Data warehouse design; hybrid method; relational model; multidimensional model; star model; X-ETL; semantic analysis; XCube assist

PDF

Paper 20: Object Detection System to Help Navigating Visual Impairments

Abstract: In 2018, the number of people in the world with severe visual impairments was 216.6 million and the number of blind people was 38.5 million, and these numbers increase every year. Computer vision technology became popular after it was used in automatic driving systems, where an object detection system detects surrounding objects; this technology can also be a solution to help blind people. This can be done by implementing the Harris Corner Detection method, which detects the corners of objects in a captured image. The number and location of corners in the detection result can be used to predict position and distance. To predict the distance, a triangle rule is used. In this way, the location and distance of the objects in the captured picture can be predicted. The results of the implementation show that the accuracy of object detection using the Harris Corner Detection method is 88%. Therefore, this application can help detect objects, based on the number and location of detected corners, using a smartphone.

Author 1: Cahya Rahmad
Author 2: Kohei Arai
Author 3: Rawansyah
Author 4: Tanggon Kalbu

Keywords: Object detection; corner detection; computer vision; visual impairments; blind people

PDF
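
The two building blocks named in Paper 20, Harris corner detection and a triangle (similar-triangles) distance rule, can be sketched with OpenCV. The synthetic frame, the focal length, and the assumed real object height are illustrative placeholders, not the authors' smartphone setup or calibration.

```python
# Minimal sketch: Harris corners on a frame, then a similar-triangles distance estimate.
import cv2
import numpy as np

frame = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(frame, (120, 80), (200, 180), 255, -1)    # synthetic object in the scene

corners = cv2.cornerHarris(np.float32(frame), 2, 3, 0.04)
ys, xs = np.where(corners > 0.01 * corners.max())       # strong corner locations
print("corner pixels detected:", len(xs))

# Triangle rule: real_height / distance = pixel_height / focal_length.
FOCAL_LENGTH_PX = 500.0     # assumed camera focal length in pixels (hypothetical)
REAL_HEIGHT_M = 0.30        # assumed real-world object height in metres (hypothetical)
pixel_height = ys.max() - ys.min()                      # object extent from corner spread
distance_m = REAL_HEIGHT_M * FOCAL_LENGTH_PX / pixel_height
print(f"estimated distance: {distance_m:.2f} m")
```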

Paper 21: Faculty’s Social Media usage in Higher Education Embrace Change or Left Behind

Abstract: This paper addresses faculty members' (academic staff) viewpoints on the benefits, barriers and concerns of utilizing social media, and also investigates differences with respect to their social media experience in teaching, age and the purpose of using social media. The data were collected through an adopted questionnaire from 324 faculty members of two public and two private universities in the northern part of Cyprus and were analyzed through descriptive statistics, independent-samples t-tests and one-way ANOVA. Results revealed that, although faculty members appreciate the benefits of using social media, they do have concerns and are aware of barriers almost to the same degree as the benefits. Those who are familiar with social media and have used it before think more about concerns than those who have not used it. Older faculty members have fewer concerns about using social media than their younger and middle-aged colleagues. Furthermore, the purpose (personal, educational, professional) of using social media has no effect on faculty members' viewpoints on the benefits, concerns and barriers of using social media. The abundant literature on social media usage from the students' perspective and the relatively limited number of studies examining teachers'/instructors' points of view on social media use, particularly in developing countries, constitute the primary motivation behind this research. Faculty members should be encouraged to adopt social media for instructional and professional purposes, and misconceptions about using social media, as well as barriers, should be eliminated to enhance the conscious utilization of social media for teaching.

Author 1: Seren Basaran

Keywords: Academic staff; age; purpose; social media experience; social networking sites; university

PDF

Paper 22: Multi-Band and Multi-Parameter Reconfigurable Slotted Patch Antenna with Embedded Biasing Network

Abstract: RF PIN diodes are used to achieve reconfigurability in frequency, polarization, and radiation pattern. The antenna can be used in different bands by controlling the ON and OFF states of two PIN diodes using the embedded biasing network (EBN). The antenna can be used for ultra-wideband (UWB) applications (1.0 GHz to 15.2 GHz) with a resonant frequency of 9.2 GHz. Besides ultra-wideband, it can also be switched to other bands (C, X, and Ku) with different operating frequencies (5.75 GHz, 12.3 GHz, and 15.5 GHz) at other biasing combinations. With this type of antenna, linear and circular polarization are achievable. Radiation pattern reconfigurability in the vertical plane has also been achieved. A single design of the proposed antenna is optimized for multi-band and multi-parameter reconfigurability applications.

Author 1: Manoj Kumar Garg
Author 2: Jasmine Saini

Keywords: Multi-band; Multi-parameter reconfigurability; EBN; UWB; PIN diode

PDF

Paper 23: Bioinspired Immune System for Intrusions Detection System in Self Configurable Networks

Abstract: In the last couple of years, computer systems have become more vulnerable to external attacks, and computer security has become the prime concern for every organization. To achieve this objective, Intrusion Detection Systems (IDS) in self-configurable networks have played a vital role over the last few decades in guarding LANs. In this work, an IDS in self-configurable networks is deployed based on a bio-inspired immune system. IDS in self-configurable networks are used to monitor data and network activity and to alert security heads when any suspicious activity is observed. A vital and common application space for adaptive, swarm-based frameworks is that of computer security. A computer security framework ought to protect a machine or collection of machines from unauthorized intruders. The framework should be capable of counteracting external activity; it is comparable in function to the immune system shielding an organism from intrusion by external threats such as attacking microorganisms. An artificial immune system is a computer software system that mirrors some aspects of the behavior of the human immune system in order to shield computer systems from viruses and similar cyber attacks. This paper demonstrates the need for a novel substring search algorithm based on bio-inspired algorithms. Experiments are required to create a system for network intrusion detection that aids in securing a machine or cluster of machines from unauthorized intruders. In this paper, an IDS in self-configurable networks is implemented using a bio-inspired immune system and the KMP algorithm as a model IDS.

Author 1: Noor Mohd
Author 2: Annapurna Singh
Author 3: H.S. Bhadauria

Keywords: Networks security; intrusion detection system; AIS algorithm; KMP algorithm; self-configurable networks

PDF
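
The KMP algorithm mentioned in Paper 23 is a standard linear-time substring search; a minimal sketch of using it to match an attack signature inside a captured payload is shown below. The payload and signature are illustrative, and the immune-system components of the paper are not represented here.

```python
# Minimal sketch of Knuth-Morris-Pratt (KMP) substring search for signature matching.
def kmp_search(text: str, pattern: str) -> int:
    """Return the index of the first occurrence of pattern in text, or -1."""
    # Build the failure table: longest proper prefix of pattern that is also a suffix.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text, reusing the table to avoid re-examining matched characters.
    k = 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            return i - len(pattern) + 1
    return -1

payload = "GET /index.php?id=1 UNION SELECT password FROM users"
signature = "UNION SELECT"                       # illustrative attack signature
print("signature found at offset:", kmp_search(payload, signature))
```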

Paper 24: An Immunity-based Error Containment Algorithm for Database Intrusion Response Systems

Abstract: The immune system has received special attention as a potential source of inspiration for innovative approaches to solving database security issues and building artificial immune systems. Database security issues need to be correctly identified to ensure that suitable responses are taken. This paper proposes an immunity-based error containment algorithm for providing an optimum response to detected intrusions. The objective of the proposed algorithm is to reduce false positive alarms to a minimum, since not all incidents are malicious in nature. The proposed algorithm is based on apoptotic and necrotic signals, which are parts of the immunity structure of the human immune system. Apoptotic signals define low-level alerts that could result from legitimate users but could also be the prerequisites for an attack, while necrotic signals define high-level alerts that result from actual successful attacks.

Author 1: Nacim YANES
Author 2: Ayman M. MOSTAFA
Author 3: Nasser ALSHAMMARI
Author 4: Saad A. ALANAZI

Keywords: Database security; artificial immune system; error containment algorithm; database auditing; apoptotic signal; necrotic signal; secret sharing

PDF

Paper 25: The Respondent’s Haptic on Academic Universities Websites of Pakistan Measuring Usability

Abstract: This study is based on a survey in which four higher education (university) websites were selected for usability testing with the help of responses based on the experience of eighty students of the same age group, who were investigated through a pre-survey and a post-survey based on an eight-item questionnaire on website usability. Laptops running the Windows 8.1 operating system were used for the survey. The questionnaires depended on two factors: one factor contains gender, nationality and respondent information, and the second factor contains the responses Strongly Agree, Agree, Undecided, Disagree and Strongly Disagree. The factor structure was replicated across the study with the data collected during the usability tests and the survey, respectively. There was evidence of usability with the existing questionnaires, including website usability testing applying the guidelines of Webcredible. The overall results were acceptable and meaningful for future researchers and web developers. The questionnaire can be used to understand the quality of websites and how well websites work.

Author 1: Irum Naz Sodhar
Author 2: Baby Marina
Author 3: Azeem Ayaz Mirani

Keywords: Usability testing; survey; questionnaire; higher education websites; guidelines webcredible; operating system

PDF

Paper 26: A Built-in Criteria Analysis for Best IT Governance Framework

Abstract: The implementation of IT governance is important to lead and evolve the information system in agreement with stakeholders. This requirement is seriously amplified in the digital era, considering all the new technologies that have been launched recently (Big Data, Artificial Intelligence, Machine Learning, Deep Learning, etc.). Thus, without a good rudder, every company risks getting lost at sea, chasing an endless and unreachable goal. This paper aims to provide a decision-making system that allows professionals to choose the IT governance framework best suited to the desired criteria and their importance, based on a multi-criteria analysis method, the Weighted Sum Model (WSM); we implemented a case study based on a Moroccan company. Moreover, we present a better understanding of IT governance aspects such as standards and best practices. This paper is part of a global objective that aims to build an integrated, generated meta-model for a better approach to IT governance.

Author 1: HAMZANE Ibrahim
Author 2: Belangour Abdessamad

Keywords: IT Governance; COBIT; ISO 38500; CMMI; ITIL; TOGAF; PMBOK; PRINCE 2; SCRUM

PDF
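
The Weighted Sum Model used in Paper 26 is straightforward to illustrate: each candidate framework gets a score equal to the weighted sum of its criterion scores. The criteria, weights, and scores below are invented for illustration and are not those of the Moroccan case study.

```python
# Minimal sketch of the Weighted Sum Model (WSM) for ranking governance frameworks.
criteria_weights = {"coverage": 0.40, "ease_of_adoption": 0.35, "tool_support": 0.25}

# Scores of each framework on each criterion, on a 1-10 scale (assumed values).
scores = {
    "COBIT": {"coverage": 9, "ease_of_adoption": 5, "tool_support": 7},
    "ITIL":  {"coverage": 7, "ease_of_adoption": 7, "tool_support": 8},
    "TOGAF": {"coverage": 8, "ease_of_adoption": 4, "tool_support": 6},
}

def wsm_score(framework_scores, weights):
    """WSM score = sum over criteria of (criterion weight * criterion score)."""
    return sum(weights[c] * framework_scores[c] for c in weights)

ranking = sorted(scores, key=lambda f: wsm_score(scores[f], criteria_weights), reverse=True)
for framework in ranking:
    print(framework, round(wsm_score(scores[framework], criteria_weights), 2))
```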

Paper 27: Achieving High Privacy Protection in Location-based Recommendation Systems

Abstract: In recent years, privacy has received great attention in the research community. In Location-based Recommendation Systems (LbRSs), the user is constrained to build queries that depend on his or her actual position in order to search for the closest points of interest (POIs). An external attacker can analyze the sent queries or track the actual position of the LbRS user to reveal his or her personal information. Consequently, ensuring high privacy protection (which includes location privacy and query privacy) is fundamental. In this paper, we propose a model that guarantees high privacy protection for LbRS users. The model works through three components. The first component (the selector) uses a new location privacy protection approach, namely the smart dummy selection (SDS) approach; the SDS approach generates a strong dummy position that has high resistance against a semantic position attack. The second component (the encryptor) uses an encryption-based approach that guarantees a high level of query privacy against a sampling query attack. The last component (the constructor) constructs the protected query that is sent to the LbRS server. Our proposed model is supported by a checkpoint technique to ensure a high-availability quality attribute. It yields competitive results compared to similar models under various privacy and performance metrics.

Author 1: Tahani Alnazzawi
Author 2: Reem Alotaibi
Author 3: Nermin Hamza

Keywords: Recommender models; attacker; privacy protection; dummy; encryption; checkpoint

PDF

Paper 28: Investigating Social Media Utilization and Challenges in the Governmental Sector for Crisis Events

Abstract: The use and utilization of social media applications, tools, and services enables advanced services in daily routines, activities, and work environments. Nowadays, disconnection from social media services is a disadvantage due to their increasing use and functionality. The use of social media applications and services has provided different methods and routines for communication that range from posting, reposting, commenting, and interacting to live communication, and that can reach a mass population with minimum time, effort, and expense compared with traditional media systems and channels. The current benefits of using social media can assist in providing better services in terms of communication and guidance for civil protection services within governmental sectors, as reported by different research studies. The use of social media has been found to be critically important by governmental agencies in different situations for directing, educating, and engaging people during different events. This study investigates the use of social media services in governmental sectors in Saudi Arabia to outline the opportunities and challenges faced, given the challenging situations encountered annually during the Hajj and Ramadan rituals and sporadic flood crisis events. This research focuses on defining the current state and challenges of using social media services for providing mass communication and civil engagement during hazardous and challenging events in Saudi Arabia. The results of this study will be used as a roadmap for future investigation in this regard.

Author 1: Waleed Afandi

Keywords: Component; civil protection; hajj; social media; Saudi Arabia; governmental sector; flood crisis

PDF

Paper 29: Project Management Metamodel Construction Regarding IT Departments

Abstract: Given the fast pace of technological progress, the need for project management continues to grow in terms of methodology and new concepts. In this article, we build a framework for generating a metamodel, which we apply to project management to generate a generic project management metamodel. In this approach, we rely on two project management methodologies: a predictive method (e.g., PRINCE 2) and an Agile method (e.g., SCRUM). The goal of this research is to validate and apply this methodology to all the components of IT governance and then to aggregate the metamodels to produce a global metamodel for all IT governance domains.

Author 1: HAMZANE Ibrahim
Author 2: Belangour Abdessamad

Keywords: MDA; MDE; SCRUM; PRINCE 2; PMBOK; IT

PDF

Paper 30: JsonToOnto: Building Owl2 Ontologies from Json Documents

Abstract: The amount of data circulating through the web has grown rapidly recently. This data is available as semi-structured or unstructured documents, especially JSON documents. However, these documents lack semantic description. In this paper, we present a method to automatically extract an OWL2 ontology from a JSON document. We propose a set of transformation rules to transform JSON elements to ontology components. Our approach also allows analyzing the content of JSON documents to discover categorization in order to generate class hierarchy. Finally, we evaluate our approach by conducting experiments on several JSON documents. The results show that the obtained ontologies are rich in terms of taxonomic relationships.

Author 1: Sara Sbai
Author 2: Mohammed Reda Chbihi Louhdi
Author 3: Hicham Behja
Author 4: Rabab Chakhmoune

Keywords: JSON documents; OWL2 ontologies; ontology generation; transformation rules; information theory; classification; decision trees

PDF

Paper 31: Enhancement of Packet Delivery Ratio during Rain Attenuation for Long Range Technology

Abstract: Countries with tropical climates experience various weather changes throughout the year. The weather can change drastically from extremely hot and humid to a complete downpour within a twenty-four-hour cycle. Different atmospheric conditions such as atmospheric gas attenuation, cloud attenuation, and rain attenuation can interrupt electromagnetic signals and weaken radio signals. The amount of attenuation depends mostly on the raindrops: the rate of attenuation by rain depends on the composition, temperature, orientation, shape and fall velocity of the raindrops. In this paper, we measure the effect of different atmospheric attenuations, particularly those due to rain, in non-line-of-sight environments and propose a LoRa (Long Range) based wireless mesh network to enhance the packet delivery ratio (PDR). We experimented with the LoRa-based wireless network by measuring the packet delivery ratio at different times of the day when there was no rain, and performed further experiments while it was raining. The experiments show that PDR is affected by different volumes of rain: PDR decreases significantly from 100% when it is not raining to 89.5% when it rains. The results also show that the LoRa device can successfully transmit up to 1.7 km in a line-of-sight environment and around 1.3 km in a non-line-of-sight environment without rain. The results show the effect of atmospheric attenuation on a LoRa wireless network, which becomes a factor to consider when designing any LoRa application for outdoor deployment.

Author 1: MD Hossinuzzaman
Author 2: Dahlila Putri Dahnil

Keywords: LoRa; packet delivery ratio; wireless mesh network; atmospheric attenuation; rain attenuation

PDF

Paper 32: Design of an Efficient Steganography Model using Lifting based DWT and Modified-LSB Method on FPGA

Abstract: Data transmission with information hiding is a challenging task in today's world. To protect secret data or images from attackers, steganography techniques are essential. Steganography is the process of hiding information from one channel to another in data communication. In this research work, the design of an efficient steganography model using lifting-based DWT and a modified LSB method on FPGA is proposed. The stegano module includes a DWT (Discrete Wavelet Transformation) with a lifting scheme for the cover image, encryption with bit mapping for the secret image, an embedding module using the Modified Least Significant Bit (MLSB) method, and an inverse DWT to generate the stego image. The recovery module includes the DWT, a decoding module with pixel extraction and bit retrieval, and decryption to generate the recovered secret image. The steganography model is designed using Verilog-HDL on the Xilinx platform and implemented on an Artix-7 Field Programmable Gate Array (FPGA). The hardware resource utilization of the proposed model, in terms of area, time, and power, is tabulated. The performance of the work is evaluated using the Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE) for different cover and secret images with good quality. The proposed steganography model operates at high speed, which improves communication performance.

Author 1: Mahesh A A
Author 2: Raja K.B

Keywords: Discrete Wavelet Transformation (DWT); steganography; Modified Least Significant Bit (MLSB) Method; XOR Method; FPGA; cover image; secret image; PSNR; MSE

PDF
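
Paper 32 combines a lifting-scheme DWT with LSB embedding and reports PSNR/MSE quality metrics. The software sketch below shows only the one-level Haar lifting step and the two metrics; it is not the authors' Verilog/FPGA design, and the image data is random illustrative content.

```python
# Software sketch (not the FPGA design): one-level Haar lifting DWT and PSNR/MSE metrics.
import numpy as np

def haar_lifting_1d(signal):
    """One Haar lifting step: split into even/odd samples, then predict and update."""
    even, odd = signal[0::2].astype(float), signal[1::2].astype(float)
    detail = odd - even            # predict step: high-pass coefficients
    approx = even + detail / 2.0   # update step: low-pass coefficients
    return approx, detail

def mse_psnr(original, modified):
    err = np.mean((original.astype(float) - modified.astype(float)) ** 2)
    psnr = float("inf") if err == 0 else 10 * np.log10(255.0 ** 2 / err)
    return err, psnr

cover = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint8)
approx, detail = haar_lifting_1d(cover[0])        # lifting DWT applied to one image row
print("approximation coefficients:", approx)

stego = cover.copy()
stego[0, 0] ^= 1                                  # toy one-bit modification
print("MSE, PSNR:", mse_psnr(cover, stego))
```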

Paper 33: Developing Agriculture Land Mapping using Rapid Application Development (RAD): A Case Study from Indonesia

Abstract: The use of Information and Communication Technology (ICT) in agriculture has become one of the steps to improve agricultural efficiency, effectiveness, and productivity, and it is also expected to encourage the creation of precision agriculture. Precision agriculture affects the efficiency of operational costs, increasing margins in the production of agricultural products through the use of ICT. One of the problems that often arises in agriculture is the management of agricultural land in each farmer group area. This information is closely related to the needs for agricultural production facilities and infrastructure, such as fertilizers, seeds, and other resources. A Web Mapping System is one way to assist in land or area mapping. In this study, the Web Mapping System is expected to help map agricultural land owned by farmers who are members of farmer groups. The developed system stores spatial data on the farmland of members and farmer groups. The Web Mapping System was developed using the Rapid Application Development (RAD) method, which involves several iterative processes. The result of this study is a Web Mapping System for agricultural land. With this application, farmers can find out the status of the land being cultivated or owned. In addition, the Web Mapping System can record the status of the land within a farmer group and the need for agricultural production facilities and infrastructure. Further, the Web Mapping System also provides information in a dashboard that can help farmer groups manage the land owned by each farmer who is a member of the group.

Author 1: Antonius Rachmat Chrismanto
Author 2: Halim Budi Santoso
Author 3: Argo Wibowo
Author 4: Rosa Delima
Author 5: Reinald Ariel Kristiawan

Keywords: Farmland; precision agriculture; land mapping system; dashboard; software development

PDF

Paper 34: CREeLS: Crowdsourcing based Requirements Elicitation for eLearning Systems

Abstract: Crowdsourcing is the process of having a task performed by the crowd. Because of the evolution of the Web, crowdsourcing has recently been used in the field of Requirements Engineering to help simplify its activities. Among the information systems highly affected by the Web's evolution are eLearning Systems (eLS). eLS have special characteristics, such as the large number and diversity of users, who may be geographically dispersed. To the best of our knowledge, there is little evidence that a crowdsourcing-based requirements elicitation approach especially tailored for eLS, addressing their special characteristics, exists. In this paper we attempt to fill this gap. We present Crowdsourcing based Requirements Elicitation for eLS (CREeLS), which is made up of a framework of the necessary elements of crowdsourcing, suggesting specific tools for each element, and a phased approach to implement the framework. We evaluated our approach by analyzing real-life users' reviews and extracting keywords that represent users' requirements using topic modeling techniques. The results were then evaluated by manual text review, and the extracted features were found to be coherent. CREeLS achieves a precision of 0.66 and a recall of 0.79. Hence we contend that CREeLS can help requirements engineers of eLS analyze users' opinions and identify the most common users' requirements for better software evolution.

Author 1: Nancy M Rizk
Author 2: Mervat H. Gheith
Author 3: Ahmed M. Zaki
Author 4: Eman S. Nasr

Keywords: Requirements engineering; requirements elicitation; crowdsourcing; eLearning systems

PDF
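
The keyword-extraction step mentioned in Paper 34 (topic modeling over user reviews) can be sketched with scikit-learn's LDA implementation. The reviews below are invented examples of eLearning feedback; the CREeLS pipeline, its evaluation, and the specific topic-modeling technique the authors used are not reproduced here.

```python
# Minimal sketch of extracting requirement-like keywords from reviews via LDA topics.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "please add offline mode so lectures can be downloaded",
    "the quiz timer crashes on mobile devices",
    "would love downloadable lecture videos for offline study",
    "mobile app crashes when the quiz timer starts",
    "notifications about new assignments would be useful",
    "add reminders and notifications for assignment deadlines",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {topic_idx}: {', '.join(top_terms)}")   # candidate requirement keywords
```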

Paper 35: e-Learning Proposal Supported by Reasoning based on Instances of Learning Objects

Abstract: In recent years, new research has appeared in the area of education that focuses on the use of information technology and the Internet to promote online learning, breaking many barriers of traditional education such as space, time, quantity and coverage. However, we have found that these new proposals present problems such as linear access to content, standardized teaching structures, and methods that are not flexible with respect to the user's learning style. Therefore, we propose the use of an intelligent model of personalized learning management in a virtual simulation environment based on instances of learning objects, using a similarity function given by the weighted multidimensional Euclidean distance. The results obtained by the proposed model show an efficiency of 99.5%, which is superior to other models such as Simple Logistic with 98.99% efficiency, Naive Bayes with 97.98% efficiency, Tree J48 with 96.98% efficiency, and Neural Networks with 94.97% efficiency. For this, we have designed and implemented the experimental platform MIGAP (Intelligent Model of Personalized Learning Management), which focuses on the assembly of mastery courses in Newtonian Mechanics. Additionally, the application of this model in other areas of knowledge will allow better identification of each student's best learning style, with the objective of providing resources, activities and educational services that are flexible with respect to each student's learning style, improving the quality of current educational services.

Author 1: Benjamin Maraza-Quispe
Author 2: Olga Melina Alejandro-Oviedo
Author 3: Walter Choquehuanca-Quispe
Author 4: Alejandra Hurtado-Mazeyra
Author 5: Walter Fernandez-Gambarini

Keywords: Learning; management; intelligence; styles; instances; objects; reasoning; model; personalized

PDF

Paper 36: Distance based Sweep Nearest Algorithm to Solve Capacitated Vehicle Routing Problem

Abstract: The Capacitated Vehicle Routing Problem (CVRP) is an optimization problem that aims to find minimal travel distances to serve customers with a homogeneous fleet of vehicles. Clustering customers and then assigning individual vehicles is a widely studied approach, called the cluster first and route second (CFRS) method, for solving CVRP. Cluster formation, the first of the two CFRS phases, is important for a good CVRP solution. Sweep (SW) clustering is the pioneering CFRS method and depends solely on the customers’ polar angles: the customers are sorted by polar angle; a cluster starts with the customer having the smallest polar angle and is completed by considering the others in polar-angle order. The Sweep Nearest (SN) algorithm, an extension of Sweep, also initializes a cluster with the smallest-polar-angle customer but inserts the remaining customers based on the nearest-neighbor approach. This study investigates a different way of clustering based on the nearest-neighbor approach. The proposed Distance based Sweep Nearest (DSN) method starts clustering from the farthest customer point and grows each cluster using the nearest-neighbor concept; unlike SW and SN, it does not rely on the customers’ polar angles. To identify the effectiveness of the proposed approach, SW, SN and DSN have been implemented in this study for solving benchmark CVRPs. For route optimization within the clusters formed by SW, SN and DSN, Genetic Algorithm, Ant Colony Optimization and Particle Swarm Optimization are considered. The experimental results show that the proposed DSN outperformed SN and SW in most cases and that DSN with PSO was the best-suited method for CVRP.
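
The clustering idea described in the abstract (seed each cluster with the farthest unassigned customer from the depot, then fill it by nearest neighbor until the vehicle capacity is reached) can be sketched as follows; details such as tie-breaking and the data values are assumptions, not the authors' exact implementation.

```python
import math

def dsn_clusters(depot, customers, demands, capacity):
    """Sketch of the Distance based Sweep Nearest idea from the abstract:
    seed each cluster with the farthest unassigned customer from the depot,
    then add nearest neighbors while vehicle capacity allows."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    unassigned = set(range(len(customers)))
    clusters = []
    while unassigned:
        seed = max(unassigned, key=lambda i: dist(depot, customers[i]))
        cluster, load = [seed], demands[seed]
        unassigned.remove(seed)
        current = seed
        while True:
            feasible = [i for i in unassigned if load + demands[i] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda i: dist(customers[current], customers[i]))
            cluster.append(nxt)
            load += demands[nxt]
            unassigned.remove(nxt)
            current = nxt
        clusters.append(cluster)
    return clusters

# toy instance: depot at origin, four customers with demands, capacity 8
print(dsn_clusters((0, 0), [(2, 1), (5, 5), (6, 4), (-3, 2)], [3, 4, 4, 2], 8))
```

Route optimization inside each resulting cluster (GA, ACO or PSO in the paper) is a separate step not shown here.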

Author 1: Zahrul Jannat Peya
Author 2: M. A. H. Akhand
Author 3: Tanzima Sultana
Author 4: M. M. Hafizur Rahman

Keywords: Capacitated vehicle routing problem; sweep algorithm; sweep nearest algorithm; genetic algorithm; ant colony optimization; particle swarm optimization

PDF

Paper 37: Ontology Learning from Relational Databases: Transforming Recursive Relationships to OWL2 Components

Abstract: Relational databases (RDB) are widely used as a backend for information systems and contain interesting structured data (schema and data). In the case of ontology learning, an RDB can be used as a knowledge source. Multiple approaches exist for building ontologies from RDB; they mainly use schema mapping to transform RDB components into ontologies. Most existing approaches do not deal with recursive relationships, which can encapsulate rich semantics. In this paper, two techniques are proposed for transforming recursive relationships to OWL2 components: (1) a transitivity mechanism and (2) a concept hierarchy. The main objective of this work is to build richer ontologies with deep taxonomies from RDB.

Author 1: Mohammed Reda CHBIHI LOUHDI
Author 2: Hicham BEHJA

Keywords: Relational databases; ontologies; OWL2; recursive relationship; transitivity; concept hierarchy

PDF

Paper 38: Hyperspectral Image Classification using Support Vector Machine with Guided Image Filter

Abstract: Hyperspectral images are used to identify and detect objects on the earth’s surface. Classifying these hyperspectral images is a difficult task due to the large number of spectral bands. These high-dimensionality problems are addressed using feature reduction and extraction techniques. However, there are many challenges in classifying the data with good accuracy and acceptable computational time. Hence, in this paper, a method is proposed for hyperspectral image classification based on a support vector machine (SVM) along with a guided image filter and principal component analysis (PCA). In this work, PCA is used for the extraction and reduction of spectral features in hyperspectral data. These extracted spectral features are classified with SVM, using different kernels, into classes such as vegetation fields and buildings. The experimental results show that SVM with the Radial Basis Function (RBF) kernel gives better classification accuracy than the other kernels. Moreover, classification accuracy is further improved with a guided image filter by incorporating spatial features.
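
A minimal sketch of the spectral part of such a pipeline (PCA for band reduction feeding an RBF-kernel SVM) is shown below with random stand-in data; the guided-filter spatial step, the dataset, and the hyperparameters are not the paper's and are assumptions here.

```python
# Sketch: PCA band reduction followed by an RBF-kernel SVM on synthetic pixels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# hypothetical data: 1000 pixels x 200 spectral bands, 4 land-cover classes
X = np.random.rand(1000, 200)
y = np.random.randint(0, 4, size=1000)

clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy only, purely for illustration
```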

Author 1: Shambulinga M
Author 2: G. Sadashivappa

Keywords: Support Vector Machine (SVM); hyperspectral images; guided image filter; Principal Component Analysis (PCA)

PDF

Paper 39: Modeling Ant Colony Optimization for Multi-Agent based Intelligent Transportation System

Abstract: This paper focuses on Simulation of Urban Mobility (SUMO) and real-time Traffic Management System (TMS) simulation for the evaluation, management, and design of Intelligent Transportation Systems (ITS). Such simulations are expected to offer prediction and on-the-fly feedback for better decision-making. In this regard, a new Intelligent Traffic Management System (ITMS) was proposed and implemented, in which a path from source to destination is selected by the Dijkstra algorithm and the road segment weights are calculated using real-time analyses (a Deep-Neuro-Fuzzy framework) of data collected from infrastructure systems, mobile and distributed technologies, and socially built systems. We aim to simulate the ITMS in a pragmatic style with the microscopic, open-source traffic simulation model SUMO, and we discuss the challenges related to modeling and simulation for ITMS. We also introduce a new model, Ant Colony Optimization (ACO), in the SUMO tool to support a multi-agent-based collaborative decision-making environment for ITMS. Besides, we evaluate the ACO model's performance against the existing built-in optimum route-finding SUMO models (Contraction Hierarchies Wrapper (CHWrapper), A-star (A*), and Dijkstra) for optimum route choice. The results highlight that ACO performs better than the other algorithms.

Author 1: Shamim Akhter
Author 2: Md. Nurul Ahsan
Author 3: Shah Jafor Sadeek Quaderi

Keywords: Intelligent Traffic Management System (ITMS); Simulation of Urban Mobility (SUMO); traffic simulation; Contraction Hierarchies Wrapper (CHWrapper); Dijkstra; A-star (A*); Deep-Neuro-Fuzzy Classification

PDF

Paper 40: Palm Vein Verification System based on Nonsubsampled Contourlet Transform

Abstract: This document presents a new approach to a verification system that verifies a person's identity by an intrinsic characteristic, the palm vein, which is unique, universal and easy to capture. The first step in this system is to extract the region of interest (ROI), which represents the most informative region of the palm; then a coding step based on the nonsubsampled contourlet transform (NSCT) is applied to produce a binary vector for each ROI; next, the representative vectors are matched; and finally, a decision is made for both identification and verification modes. This approach is tested on the CASIA multispectral database; the experiments have proved the effectiveness of this coding system in verification mode, giving an Equal Error Rate (EER) of 0.19%.

Author 1: Amira Oueslati
Author 2: Kamel Hamrouni
Author 3: Nadia Feddaoui
Author 4: Safya Belghith

Keywords: Verification; palm-vein; nonsubsampled contourlet transform; region of interest; equal error rate

PDF

Paper 41: An Enhanced Weighted Associative Classification Algorithm without Preassigned Weight based on Ranking Hubs

Abstract: Heart disease is the preeminent cause of death worldwide; more than 17 million individuals have died from heart disease in past years, and the WHO reports that the mortality rate will increase in the coming years. It is very difficult to diagnose a heart problem just by observing the patient. There is a high demand for an efficient classifier model to help physicians predict such a threatening disease and save human lives. Nowadays, many researchers have focused on novel classifier models based on Associative Classification (AC). However, most AC algorithms do not consider the significance of the attributes in the database and treat every itemset equally. Moreover, weighted AC ignores the significance of the itemsets and suffers in rule evaluation due to the support measure. In the proposed method we introduce an attribute weight that does not require manual assignment; instead, the weight is calculated from a link-based model. Finally, the performance of the proposed algorithm is verified on different medical datasets from the UCI repository against classical associative classification.
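
The abstract describes deriving attribute weights from a link-based model of "hubs" rather than assigning them manually. The exact model is not specified here; the sketch below uses HITS hub scores over a hypothetical attribute co-occurrence graph purely to illustrate the idea of link-derived weights.

```python
# Sketch: derive attribute weights from hub scores of a link-based model
# (HITS is an assumption; the attribute graph below is hypothetical).
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("chest_pain", "cholesterol"),
    ("cholesterol", "blood_pressure"),
    ("blood_pressure", "chest_pain"),
    ("age", "blood_pressure"),
])

hubs, authorities = nx.hits(G, normalized=True)
print(hubs)  # hub scores usable as attribute weights instead of manual values
```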

Author 1: Siddique Ibrahim S P
Author 2: Sivabalakrishnan M

Keywords: Association rule mining; hub weight; classification; heart disease; attribute weight; associative classification

PDF

Paper 42: Security Issues in Software Defined Networking (SDN): Risks, Challenges and Potential Solutions

Abstract: SDN (Software Defined Networking) is an architecture that aims to improve network control and flexibility. It is mainly associated with the OpenFlow protocol and ODIN V2 for wireless communication. Its architecture is centralized, agile and programmatically configured. This paper presents a security analysis that enforces the protection of the GUI by requiring authentication, SSL/TLS integration and logging/security audit services. Role-based authorization through FortNOX and ciphers such as AES and DES will be used for encrypting data and improving the security of the SDN environment. These techniques are useful for enhancing the security framework of the controller.

Author 1: Maham Iqbal
Author 2: Farwa Iqbal
Author 3: Fatima Mohsin
Author 4: Muhammad Rizwan
Author 5: Fahad Ahmad

Keywords: SDN; Wireless SDN; Security Threats; AES; DES; FortNOX; TLS

PDF

Paper 43: MVC Frameworks Modernization Approach

Abstract: The use of web development frameworks has grown significantly, especially Model-View-Controller (MVC) based frameworks. The ability to migrate web applications between the different available frameworks becomes more and more relevant. Automating the migration through transformations avoids the need to rewrite the code entirely. Architecture Driven Modernization (ADM) is the most successful approach for standardizing and automating the reengineering process. In this paper, we define an ADM approach to generate MVC web application models at the highest level of abstraction from Struts 2 and CodeIgniter models. To do this, we add the MVC concepts to the KDM metamodel and then specify a set of transformations to generate MVC KDM models. This proposal is validated by using our approach to transform CRUD (Create, Read, Update and Delete) application models from MVC frameworks to MVC KDM.

Author 1: Amine Moutaouakkil
Author 2: Samir Mbarki

Keywords: Framework; Architecture-Driven Modernization (ADM); Knowledge Discovery Model (KDM); Model-View-Controller (MVC)

PDF

Paper 44: Validation Policy Statement on the Digital Evidence Storage using First Applicable Algorithm

Abstract: Digital evidence storage is where digital evidence files are kept. Digital evidence is very vulnerable to damage; therefore, digital evidence storage needs access control. Access control has several models, one of which is ABAC (Attribute-Based Access Control). ABAC is a relatively new access control model with flexible functionality that allows many attributes to intersect. This can become very complex and cause inconsistency and incompleteness. Testing access control is a must before it is implemented, because access control is the main key to the security of a system, especially for digital evidence storage, since the data in it is very vulnerable to damage, whether intentional or not. Because the ABAC model intersects many attributes, its policy statements need to be tested to avoid inconsistencies and incompleteness. An example tool for testing policy statements is ACPT (Access Control Policy Testing), which provides various algorithms for creating and testing policy statements. This study uses the first applicable algorithm to test policy statements in digital evidence storage. The research successfully tested the policy statement and found no inconsistencies or incompleteness.
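
For readers unfamiliar with the "first applicable" combining algorithm named in the abstract, a minimal sketch is given below: rules are evaluated in order and the effect of the first matching rule wins. The rule set, attributes, and effect labels are hypothetical, not those of the paper.

```python
# Sketch of a first-applicable combining algorithm over ABAC-style rules.
def first_applicable(rules, request):
    """Return the effect of the first rule whose condition matches the request."""
    for condition, effect in rules:
        if all(request.get(attr) == value for attr, value in condition.items()):
            return effect
    return "NotApplicable"

rules = [
    ({"role": "investigator", "action": "read"},  "Permit"),
    ({"role": "investigator", "action": "write"}, "Deny"),
    ({"role": "guest"},                           "Deny"),
]

print(first_applicable(rules, {"role": "investigator", "action": "read"}))  # Permit
print(first_applicable(rules, {"role": "admin", "action": "read"}))         # NotApplicable
```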

Author 1: Achmad Syauqi
Author 2: Imam Riadi
Author 3: Yudi Prayudi

Keywords: Testing; policy statement; rule; ABAC; digital evidence

PDF

Paper 45: Selection of Sensitive Buses using the Firefly Algorithm for Optimal Multiple Types of Distributed Generations Allocation

Abstract: Power loss is one indicator of electric power system performance. Power loss can lead to poor voltage performance at the receiving end. Integrating distributed generation (DG) into the network has become one of the most effective remedies. To get the maximum benefit from operating the system with DG, it is necessary to determine the size, location, and type of DG. This study aims to determine the capacity and location of DG connections for DG types I and II. To address this aim, a metaheuristic solution based on the firefly algorithm (FA) is used. FA compensates for the drawback of metaheuristic algorithms that require long computation times. To ensure that the selected load bus is the best DG connection location, the candidate load buses are first filtered based on stability sensitivity. The proposed method is tested on the IEEE 30-bus system. The optimization results show a decrease in power loss and an increase in bus voltage, which improves system stability, by integrating three DG units. Validation of FA against an evolution-based algorithm shows a significant reduction in computational time.
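
As background for the optimizer named in the abstract, the sketch below shows the standard firefly position update (attraction toward a brighter firefly decaying with distance, plus a small random walk). The parameter values and the two-dimensional decision vector are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """One firefly update step: xi is attracted toward a brighter firefly xj,
    with attractiveness decaying with squared distance, plus a random walk."""
    rng = rng or np.random.default_rng(0)
    r2 = np.sum((xi - xj) ** 2)
    beta = beta0 * np.exp(-gamma * r2)
    return xi + beta * (xj - xi) + alpha * (rng.random(xi.shape) - 0.5)

# hypothetical 2-D decision vector: [DG size in MW, candidate-bus score]
print(firefly_move(np.array([1.0, 0.3]), np.array([1.4, 0.6])))
```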

Author 1: Yuli Asmi Rahman
Author 2: Salama Manjang
Author 3: Yusran
Author 4: Amil Ahmad Ilham

Keywords: Firefly algorithm; time computation; real power loss index; voltage profile index; multi-type DG

PDF

Paper 46: An Evaluation of User Awareness for the Detection of Phishing Emails

Abstract: Phishing attacks are among the most serious Internet criminal activities. They aim to make Internet users believe that they are using a trusted entity, for the purpose of stealing sensitive information such as bank account or credit card details. Phishing costs Internet users millions of dollars each year. An effective method that can prevent such attacks is improving the security awareness of Internet users, especially in light of the significant growth of online services. This paper discusses a real-world experiment that aims to analyze and monitor the phishing awareness of an organization’s users in order to improve it. The experiment targeted 1,500 users in the education sector. The results reveal that phishing awareness has a significant positive effect on users’ ability to distinguish phishing emails and websites, thereby avoiding attacks.

Author 1: Mohammed I Alwanain

Keywords: Anti-phishing countermeasures; online fraud; evaluation experiments

PDF

Paper 47: Lexicon-based Bot-aware Public Emotion Mining and Sentiment Analysis of the Nigerian 2019 Presidential Election on Twitter

Abstract: Online social networks have been widely engaged as rich potential platforms for predicting election outcomes in several countries of the world. The vast amount of readily available data on such platforms, coupled with the emerging power of natural language processing algorithms and tools, has made it possible to mine and generate foresight into the possible direction of an election’s outcome. In this paper, lexicon-based public emotion mining and sentiment analysis were conducted to predict the winner of the 2019 presidential election in Nigeria. 224,500 tweets, associated with the two most prominent political parties in Nigeria, the People’s Democratic Party (PDP) and the All Progressives Congress (APC), and the two most prominent presidential candidates that represented these parties in the 2019 elections, Atiku Abubakar and Muhammadu Buhari, were collected between 9th October 2018 and 17th December 2018 via Twitter’s streaming API. The tm and NRC libraries, available in the ‘R’ integrated development environment, were used for data cleaning and preprocessing. Botometer was used to detect the presence of automated bots in the preprocessed data, while the NRC Word Emotion Association Lexicon (EmoLex) was used to generate distributions of subjective public sentiments and emotions surrounding the Nigerian 2019 presidential election. Emotions were grouped into eight categories (sadness, trust, anger, fear, joy, anticipation, disgust, surprise) while sentiments were grouped into two (negative and positive) based on Plutchik’s emotion wheel. The results indicate a higher positive and a lower negative sentiment for APC than was observed for PDP. Similarly, for the presidential aspirants, Atiku has a slightly higher positive and a slightly lower negative sentiment than was observed for Buhari. These results show APC as the predicted winning party and Atiku as the most preferred winner of the 2019 presidential election. These predictions were partly corroborated by the actual election results, as APC emerged as the winning party while Buhari and Atiku were separated by a very close vote margin. Hence, this research is an indication that Twitter data can be appropriately used to predict election outcomes and other offline future events. Future research could investigate spatiotemporal dimensions of the prediction.

Author 1: Temitayo Matthew Fagbola
Author 2: Surendra Colin Thakur

Keywords: Nigeria; 2019 presidential_election; bots-awareness; EmoLex; lexicon_analysis; public_opinion; emotion_mining; sentiment_analysis; twitter; APC; PDP; win_prediction; muhammadu_buhari; atiku_abubaka

PDF

Paper 48: Towards Understanding Internet of Things Security and its Empirical Vulnerabilities: A Survey

Abstract: The Internet of things is no longer a concept; it is a reality already changing our lives. It aims to interconnect almost all daily used devices to help them exchange contextualized data in order to offer services adequately. Based on the existing Internet, IoT suffers indisputably from security issues that could threaten its evolution and its users’ interests. Starting from this fact, we try to define the main security threats for the IoT perimeter and propose some pertinent solutions. To do so, we first establish a state of the art concerning the IoT definition, protocols, environment, architecture and security. Then, we expose a case study of a standard IoT platform to illustrate the impact of security on all IoT layers. Furthermore, the paper presents the results of a security audit on our implemented platform. Finally, based on our evaluation, we highlight many solutions as well as possible directions for future research.

Author 1: Salim El Bouanani
Author 2: Omar Achbarou
Author 3: My Ahmed Kiram
Author 4: Aissam Outchakoucht

Keywords: Internet of things; IoT security; security audit; IoT architecture; IoT protocols

PDF

Paper 49: Model for Time Series Imputation based on Average of Historical Vectors, Fitting and Smoothing

Abstract: This paper presents a novel model for univariate time series imputation of meteorological data based on three algorithms. The first, AHV (Average of Historical Vectors), estimates the set of NA values from historical vectors classified by seasonality; the second, iNN (Interpolation to Nearest Neighbors), adjusts the curve predicted by AHV so that it adequately fits the values before and after the gap of NAs; the third, LANNf, smooths the curve interpolated by iNN so that the accuracy of the predicted data can be improved. The results achieved by the model are very good, surpassing in several cases the different algorithms with which it was compared.
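
The AHV step described in the abstract can be illustrated with a toy example: a gap of NA values is filled with the average of the values at the same seasonal positions in earlier periods. The series below and the layout of "historical vectors" are assumptions for illustration only.

```python
# Sketch of the Average of Historical Vectors idea on a toy seasonal series.
import numpy as np

history = np.array([            # rows = previous seasons, columns = positions
    [12.1, 12.4, 13.0, 12.8],
    [11.9, 12.6, 13.2, 12.7],
])
current = np.array([12.0, np.nan, np.nan, 12.9])   # NA gap at positions 1 and 2

gap = np.isnan(current)
current[gap] = history[:, gap].mean(axis=0)        # average of historical vectors
print(current)
```

The fitting (iNN) and smoothing (LANNf) stages would then adjust these estimates to the values bordering the gap, which is not shown here.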

Author 1: Anibal Flores
Author 2: Hugo Tito
Author 3: Deymor Centty

Keywords: Univariate time series imputation; average of historical vectors; interpolation to nearest neighbors

PDF

Paper 50: Enhanced, Modified and Secured RSA Cryptosystem based on n Prime Numbers and Offline Storage for Medical Data Transmission via Mobile Phone

Abstract: The transmission of medical data by mobile telephony is an innovation that constitutes m-health or, more generally, e-health. This telemedicine handles personal patient data that deserve to be protected when transmitted over the operator or a private network, so that malicious people cannot access them. This is where cryptography comes in: to secure the transmitted medical data while preserving their confidentiality, integrity and authenticity. In this field of personal data security, public key (asymmetric) cryptography is becoming increasingly prevalent, as it provides a public key to encrypt the transmitted message and a second, private key, linked to the first by formal mathematics, that only the final recipient holds to decrypt the message. The RSA algorithm of Rivest, Shamir and Adleman provides this asymmetric cryptography with a public key and a private key based on two prime numbers. However, the factorization of the RSA modulus N into these two prime numbers can be discovered by a hacker, making the security of medical data vulnerable. In this article, we propose a more secure RSA algorithm with n primes and offline storage of the essential RSA parameters. We perform a triple encryption-decryption with these n prime numbers, which makes it more difficult to break the factorization of the modulus N. As a trade-off, the key generation time is longer than that of traditional RSA.
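
The underlying idea of using more than two primes can be shown with a toy multi-prime RSA key generation; the tiny hard-coded primes and exponent below are purely illustrative and bear no relation to the paper's parameters or to secure key sizes.

```python
# Toy sketch of RSA key generation with n > 2 primes (multi-prime RSA).
# Real deployments require cryptographically large random primes.
from math import prod

primes = [61, 53, 71]                      # n = 3 toy primes
N = prod(primes)                           # modulus
phi = prod(p - 1 for p in primes)          # Euler's totient for distinct primes
e = 17                                     # public exponent, coprime with phi
d = pow(e, -1, phi)                        # private exponent (modular inverse)

msg = 42
cipher = pow(msg, e, N)
plain = pow(cipher, d, N)
print(N, cipher, plain)                    # plain == 42
```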

Author 1: Achi Harrisson Thiziers
Author 2: Haba Cisse Théodore
Author 3: Jérémie T. Zoueu
Author 4: Babri Michel

Keywords: e-Health; medical data transmission; asymmetric cryptography; RSA algorithm; prime numbers

PDF

Paper 51: Developing an Algorithm for Securing the Biometric Data Template in the Database

Abstract: With current advances in technology, biometric templates provide a dependable solution to the problem of user verification in an identity control system. The template is saved in the database during enrollment and compared with the query information in the verification stage. Serious security and privacy concerns arise if a raw, unprotected template is saved in the database: an attacker can hack the template information in the database to gain illicit access. A novel encryption-decryption algorithm utilizing the Model View Template (MVT) design pattern is developed to secure the biometric data template. The model manages information logically, the view handles the visualization of the data, and the template addresses the migration of data into pattern objects. The algorithm is based on the cryptographic module of the Fernet key instance. The Fernet keys are combined into a MultiFernet key to produce two encrypted files (a byte file and a text file). These files are incorporated with a Twilio message and securely preserved in the database. In the event that an attacker tries to access the biometric data template in the database, the system alerts the user, stops the attacker from gaining unauthorized access, and cross-verifies the impersonator based on validation of ownership. This informs users and the authorities of how secure the individual biometric data template is and provides a high level of security for individual data privacy.
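
The Fernet/MultiFernet primitive mentioned in the abstract is available in the Python cryptography package; a minimal sketch of protecting a stored template with it is shown below. The template bytes are hypothetical, and the paper's Twilio alerting and MVT layers are omitted.

```python
# Minimal sketch: encrypting a biometric template with a MultiFernet built
# from several Fernet keys before it is stored in a database.
from cryptography.fernet import Fernet, MultiFernet

k1, k2 = Fernet(Fernet.generate_key()), Fernet(Fernet.generate_key())
cipher = MultiFernet([k1, k2])

template = b"\x01\x02\x03minutiae-bytes"      # hypothetical raw template bytes
token = cipher.encrypt(template)              # store this token, not the raw template
assert cipher.decrypt(token) == template
print(token[:16], b"...")
```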

Author 1: Taban Habibu
Author 2: Edith Talina Luhanga
Author 3: Anael Elikana Sam

Keywords: Biometric template; template-database; multiFernet; encryption-algorithm; decryption-algorithm; Twilio SMS

PDF

Paper 52: A Robust Method for Diagnostic Energetic System with Bond Graph

Abstract: Surveillance and supervision systems play a major role in ensuring the safety and availability of industrial equipment and installations. Fault detection and diagnosis is highly important to facilitate the planning and implementation of curative and preventive actions. Industrial systems are usually governed by different physical phenomena and diverse technological components. The bond graph, a powerful tool based on energetic and multi-physical analysis, is well adapted to fault detection. The resulting bond graph model allows model-based diagnosis methods to be applied to detect and eventually isolate faults. In this paper, energetic system diagnosis problems are discussed by detailing existing diagnosis methods. The proposed modeling tool is then introduced with an illustration of different use cases and application examples. Diagnosis methods based on the bond graph model are presented, as well as the extension of these methods to models with uncertain parameters. Finally, the studied diagnosis method is applied to fault detection and isolation using the case study of an asynchronous motor.

Author 1: Belgacem Hamdouni
Author 2: Dhafer Mezghani
Author 3: Jamel Riahi
Author 4: Abdelkader Mami

Keywords: Bond graph; diagnostic; fault detection; energy systems

PDF

Paper 53: Speculating on Speculative Execution

Abstract: Threat actors continue to design exploits that specifically target physical weaknesses in processor hardware rather than more traditional software vulnerabilities. The now infamous attacks, Spectre and Meltdown, ushered in a new era of hardware-based security vulnerabilities that have caused some experts to question whether the potential cybersecurity risks associated with simultaneous multithreading (SMT), also known as hyperthreading (HT), are potent enough to outweigh its computational advantages. A small pool of researchers now touts the need to disable SMT completely. However, this appears to be an extreme reaction; while a more security-focused environment might be inclined to disable SMT, environments with a greater level of risk tolerance, which may need the performance advantages offered by SMT to facilitate business operations, should not disable it by default and should instead evaluate software application-based patch mitigations. This paper provides insights that can help make informed decisions when determining the suitability of SMT by exploring key processes related to multithreading, reviewing the most common exploits, and describing why Spectre and Meltdown do not necessarily warrant disabling HT.

Author 1: Jefferson Dinerman

Keywords: Speculative execution; hyperthreading; Spectre; meltdown; simultaneous multithreading

PDF

Paper 54: Performance Analysis of Acceleration Sensor for Movement Detection in Vehicle Security System

Abstract: The vehicle security system is a critical part of the entire car system, intended to prevent unauthorized access to the car. Statistics show that the number of private cars being stolen is increasing while the recovery rate is decreasing sharply, indicating that car security systems fail to prevent unauthorized access. Most vehicle security systems simply consist of a few door-open detection switches, a siren, and a remote control to protect the car, which proves weak against experienced car thieves. Therefore, this project develops a vehicle security system that measures the dynamic acceleration inside the vehicle using the ADXL345 accelerometer and locates the coordinates of the vehicle using a U-Blox Neo-6M GPS receiver. In order to evaluate the performance of the proposed vehicle security system, an experiment was conducted to determine the most suitable of four positions inside a car to place the device. A performance analysis of the GPS receiver for accurate tracking was also carried out. The results showed that the most suitable position to place the device is inside the center of the car dashboard, and that the GPS receiver has a mean cold start-up time of 5 minutes 47 seconds and a hot start-up time of 11.72 seconds, with a standard deviation of 0.000003706° in latitude and 0.000002762° in longitude for position tracking.

Author 1: A M Kassim
Author 2: A. K. R. A. Jaya
Author 3: A. H. Azahar
Author 4: H. I. Jaafar
Author 5: S Sivarao
Author 6: F. A. Jafar
Author 7: M. S. M. Aras

Keywords: Security system; acceleration sensor; movement detection; ADXL345

PDF

Paper 55: A Framework for Hoax News Detection and Analyzer used Rule-based Methods

Abstract: Social media currently offers facilities that answer the community's need for information and that are exploited for socio-economic purposes. However, another impact of the presence of social media is that it opens ample space for hoax information or hoax news about events, which troubles the public. Hoaxes also provide cynical provocation, inciting hatred and anger in many people and directly influencing behavior so that people respond as the hoax makers desire. Fake news plays an increasingly dominant role in spreading misinformation by influencing people's perceptions or knowledge to distort their awareness and decision-making. A framework is developed in which a collection of hoaxes is gathered using web crawlers from several websites and processed with classification techniques. The hoax news is categorized using several detection parameters, including page URL, hoax news title, publication date, author, and content. Each hoax word is matched using a similarity algorithm, and the accuracy of the hoax news is produced with a rule-based detection method. Experiments were carried out on eleven thousand hoax news items used as training and testing datasets; this dataset is validated using similarity algorithms to produce the highest accuracy of hoax text similarity. In this study, each hoax news item is labeled into one of four categories: Fact, Hoax, Information, or Unknown. The contributions are automatic detection of hoax news, automatic multilanguage detection, and a self-gathered collection of datasets with validation that results in four categories of hoax news measured in terms of text similarity. Further research can add hate speech, black campaigns, and blockchain techniques to ward off hoaxes, or produce algorithms with better text accuracy.
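
The text-similarity step mentioned in the abstract can be sketched as comparing an incoming article against a corpus of known hoaxes with TF-IDF cosine similarity; the corpus, threshold and labels below are illustrative assumptions, not the paper's exact rules.

```python
# Sketch: flag an article by its TF-IDF cosine similarity to known hoaxes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_hoaxes = [
    "miracle fruit cures all diseases overnight",
    "government secretly replaces tap water with chemicals",
]
incoming = ["new miracle fruit said to cure diseases in one night"]

vec = TfidfVectorizer()
tfidf = vec.fit_transform(known_hoaxes + incoming)
query_vec = tfidf[len(known_hoaxes):]
scores = cosine_similarity(query_vec, tfidf[:len(known_hoaxes)]).ravel()

label = "Hoax" if scores.max() > 0.5 else "Unknown"   # hypothetical threshold
print(scores, label)
```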

Author 1: SY Yuliani
Author 2: Mohd Faizal Bin Abdollah
Author 3: Shahrin Sahib
Author 4: Yunus Supriadi Wijaya

Keywords: Component; hoax; news; framework; web crawling; detection; multilanguage; unsupervised algorithm; similarity algorithm

PDF

Paper 56: Classification of Arabic Writing Styles in Ancient Arabic Manuscripts

Abstract: This paper proposes a novel and effective approach to classify ancient Arabic manuscripts in the “Naskh” and “Reqaa” styles. This work applies the SIFT and SURF algorithms to extract features and then uses several machine learning algorithms: Gaussian Naïve Bayes (GNB), Decision Tree (DT), Random Forest (RF) and K-Nearest Neighbor (KNN) classifiers. The contribution of this work is the introduction of synthetic features that enhance the classification performance. The training phase encompasses four training models for each style. For testing purposes, two famous books from the Islamic literature are used: 1) Al-kouakeb Al-dorya fi Sharh Saheeh Al-Bokhary; and 2) Alfaiet Ebn Malek: Mosl Al-tolab Le Quaed Al-earab. The experimental results show that the proposed algorithm yields higher accuracy with SIFT than with SURF, which could be attributed to the nature of the dataset.
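
The feature-extraction step can be sketched with OpenCV's SIFT implementation; a synthetic image is drawn below so the snippet is self-contained, whereas the paper extracts descriptors from manuscript pages and feeds them to GNB/DT/RF/KNN classifiers (not shown).

```python
# Sketch: SIFT keypoints/descriptors from a synthetic "manuscript" image.
import cv2
import numpy as np

img = np.full((200, 400), 255, dtype=np.uint8)
cv2.putText(img, "naskh?", (20, 120), cv2.FONT_HERSHEY_SIMPLEX, 2, 0, 3)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
print(len(keypoints), None if descriptors is None else descriptors.shape)
# The 128-D descriptors would be aggregated per page and passed to a classifier.
```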

Author 1: Mohamed Ezz
Author 2: Mohamed A. Sharaf
Author 3: Al-Amira A. Hassan

Keywords: Arabic manuscripts; classification; feature extraction; machine learning; GNB; DT; RF; K-NN classifiers; SURF; SIFT

PDF

Paper 57: A Method for Segmentation of Vietnamese Identification Card Text Fields

Abstract: The development of deep learning in computer vision has motivated research in related fields, including Optical Character Recognition (OCR). Many proposed models and pre-trained models in the literature demonstrate their efficiency in optical text recognition. In this context, image processing techniques play an essential role in improving the accuracy of the recognition task, because, depending on the practical application, text images often suffer from several degradations such as blur, uneven illumination, complex backgrounds, perspective distortion and so on. In this paper, we propose a method for pre-processing, text area extraction and segmentation of the Vietnamese Identification Card, in order to improve the accuracy of Region of Interest detection. The proposed method was evaluated on a large dataset of varying practical quality. Experimental results demonstrate the efficiency of our method.

Author 1: Tan Nguyen Thi Thanh
Author 2: Khanh Nguyen Trong

Keywords: Optical Character Recognition (OCR); text identification; identification card detection and recognition

PDF

Paper 58: Static Analysis on Floating-Point Programs Dealing with Division Operations

Abstract: Numerical accuracy is a critical point in safe computation when it comes to floating-point programs. Given a certain accuracy for the inputs of a program, static analysis computes a safe approximation of the accuracy of the outputs. This accuracy depends on the propagation of errors in the data and on the round-off errors of the arithmetic operations performed during execution. Floating-point values offer a large dynamic range, but the main pitfall is the inaccuracy that arises in floating-point computations. Based on the theory of abstract interpretation, this paper demonstrates an upper bound on the precision of the results of such computations, with a focus on division operations.
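
The flavor of such an analysis can be illustrated with a toy interval computation for a division: given ranges for the operands, a safe enclosing range for the result is computed, provided the divisor range excludes zero. This sketch is only a simplified illustration of interval-style reasoning, not the abstract domain used in the paper.

```python
# Toy sketch: a safe enclosing interval for x / y from operand intervals.
def interval_div(x_lo, x_hi, y_lo, y_hi):
    assert not (y_lo <= 0.0 <= y_hi), "divisor interval must exclude zero"
    candidates = [x_lo / y_lo, x_lo / y_hi, x_hi / y_lo, x_hi / y_hi]
    return min(candidates), max(candidates)

# x in [1.0, 2.0], y in [0.5, 4.0]  ->  x / y lies within [0.25, 4.0]
print(interval_div(1.0, 2.0, 0.5, 4.0))
```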

Author 1: MG Thushara
Author 2: K. Somasundaram

Keywords: Abstract interpretation; static analysis; forward analysis; abstract domain

PDF

Paper 59: Using Project-based Learning in a Hybrid e-Learning System Model

Abstract: After conducting the historical review and establishing the state of the art, the authors of this paper focus on the incorporation of Project Based Learning (PBL) into an adaptive e-Learning environment, a novel and emerging perspective that allows the application of what today constitutes one of the most effective strategies for the teaching-learning process. In PBL, each project is defined as a complex task or real-world problem whose resolution requires the student to carry out research, planning, design, development, validation, testing, and other activities. For the proposed hybrid architecture of the e-Learning system model, the authors use artificial intelligence techniques that make it possible to identify Learning Styles (LS), with the purpose of automatically assigning projects according to the characteristics, interests, expectations and demands of the student, who will interact with an e-Learning environment with a high capacity for adaptation to each individual. Finally, the conclusions and recommendations of the research work are established.

Author 1: Luis Alfaro
Author 2: Claudia Rivera
Author 3: Jorge Luna-Urquizo

Keywords: Adaptative e-Learning; Project Based Learning (PBL); intelligent agents; back propagation neural networks; fuzzy logic; case base reasoning

PDF

Paper 60: Towards a Prototype of a Low-Priced Domestic Incubator with Telemetry System for Premature Babies in Lima, Peru

Abstract: Complications due to preterm birth are the main cause of death among children five years of age or younger. Hence, thorough care for these babies is needed, especially during the first weeks or months after birth. Because not many families in Peru can afford to rent or buy an incubator, this work puts forward the design and construction of a low-priced domestic incubator with a telemetry system. The most important parameters to monitor are considered to be the temperature and humidity inside the incubator and the heart pulse of the baby. To maintain the levels of temperature and humidity according to medical standards, software was developed on an Arduino Uno. So that the parents can monitor the aforementioned parameters without necessarily being in the same room as the incubator, a Bluetooth module was used with the Arduino Uno to transmit the data to an app installed on a mobile phone. The first tests have shown that the humidity and temperature levels within the incubator are maintained as desired, and the heart pulse readings are also as expected. However, there is still work to do regarding the upper limits of the humidity and temperature levels, which will be implemented as the next step of the project. It is expected that this incubator will serve Peruvian families, especially those living at the edge of poverty who cannot afford an expensive incubator at home or pay for these services at hospitals, for their premature babies.

Author 1: Jason Chicoma-Moreno

Keywords: Preterm babies; incubator; Arduino; telemetry

PDF

Paper 61: Sentiment Analysis and Classification of Photos for 2-Generation Conversation in China

Abstract: Appropriate photos can help Chinese empty-nest elderly people and young volunteers find common topics to promote communication. However, there is little research on such photos in China. This paper used 40 online photos in 160 conversation sessions between Chinese elderly and young people to analyze and classify these photos. Sentiment analysis of Chinese conversational texts was used to estimate the speakers' attitudes towards the photos. For each photo we collected the average sentiment value, the number of words uttered by the speakers, the pulse of the elderly, and the stress level of the youth. Principal Component Analysis (PCA) was carried out as a data preprocessing step to improve classification accuracy, and we selected four Principal Components (PCs) that account for 85.20% of the total variance in the data. Next, we normalized these four PC scores for Hierarchical Clustering Analysis (HCA) of the photos and obtained four clusters with different features. The results showed that photos in cluster 2 were only optimal for the youth; cluster 3 only made the elderly participants speak more; and clusters 1 and 4 were not suitable for either the elderly or the young people. This paper is the first to classify photos for 2-generation conversation in China and describe their features. Although we did not find any photos suitable for both the elderly and the youth, this empirical study takes a step forward in the investigation of photos for 2-generation conversation in China.
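
The analysis pipeline described in the abstract (PCA on per-photo measurements followed by hierarchical clustering of the PC scores) can be sketched as follows; the numbers, the two retained components, and the cluster count below are illustrative assumptions, not the study's data.

```python
# Sketch: PCA on per-photo measurements, then hierarchical clustering of scores.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# rows = photos; columns = mean sentiment, words (elderly), words (youth),
# elderly pulse, youth stress -- made-up values for illustration
X = np.array([
    [0.6, 120, 80, 72, 0.3],
    [0.2,  40, 90, 75, 0.6],
    [0.7, 150, 60, 70, 0.2],
    [0.1,  30, 30, 78, 0.7],
])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
labels = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
print(labels)   # cluster assignment per photo
```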

Author 1: Zhou Xiaochun
Author 2: Choi Dong-Eun
Author 3: Panote Siriaraya
Author 4: Noriaki Kuwahara

Keywords: Photo; 2-generation conversation; sentiment analysis; Principal Component Analysis (PCA); Hierarchical Clustering Analysis (HCA); China

PDF

Paper 62: LSSCW: A Lightweight Security Scheme for Cluster based Wireless Sensor Network

Abstract: In the last two decades, Wireless Sensor Networks (WSN) have been used for a large number of Internet of Things (IoT) applications, such as military surveillance, forest fire detection, healthcare, precision agriculture and smart homes. Because of the wireless nature of communication, Wireless Sensor Networks suffer from various attacks such as Denial of Service (DoS) and replay attacks. Dealing with scalability and security issues is a challenging task in WSN. In this paper, we present a Lightweight Security Scheme for Cluster based Wireless Sensor Networks (LSSCW). LSSCW has two phases: an initialization phase and a data transfer phase. The work focuses on secured data aggregation in wireless sensor networks with the help of a symmetric and session key generation technique. Data from sensor nodes are securely transferred to the base station. LSSCW is lightweight and satisfies security requirements including authenticity, confidentiality and integrity. The performance of LSSCW is verified using the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool. Results show that LSSCW is secure and efficient in terms of computation and communication overhead.

Author 1: Ganesh R Pathak
Author 2: M.S.Godwin Premi
Author 3: Suhas H. Patil

Keywords: Authentication; Automated Validation of Internet Security Protocols and Applications tool; Internet of Things (IoT); key management; security; Wireless Sensor Network (WSN)

PDF

Paper 63: A Method for Designing Domain-Specific Document Retrieval Systems using Semantic Indexing

Abstract: Using domain knowledge and semantics to conduct effective document retrieval has attracted great attention from researchers in many different communities. Following that approach, we present a method for designing domain-specific document retrieval systems that manages semantic information related to document content and supports semantic processing in search. The proposed method integrates components such as an ontology describing domain knowledge, a database of the document repository, semantic representations for documents, and advanced search techniques based on measuring semantic similarity. In this article, a model of domain knowledge for various information retrieval tasks, called the Classed Keyphrase based Ontology (CK-ONTO), is presented in detail. We also present graph-based models for representing documents, together with measures for evaluating semantic relevance for use in searching. The above methodology has been used in designing many real-world applications, such as a job-posting retrieval system. In evaluations on a real-world inspired dataset, our methods showed noticeable improvements over traditional retrieval solutions.

Author 1: ThanhThuong T. Huynh
Author 2: TruongAn PhamNguyen
Author 3: Nhon V. Do

Keywords: Document representation; document retrieval system; graph matching; semantic indexing; semantic search; domain ontology

PDF

Paper 64: Immersive Technologies in Marketing: State of the Art and a Software Architecture Proposal

Abstract: After conducting a historical review of marketing, and especially of experiential marketing, which considers various types of experiences such as sensations, feelings, thoughts, actions and relationships, seeking greater consumer satisfaction and therefore greater marketing effectiveness, and after establishing the state of the art of immersive technologies and their applications in marketing, the authors propose a software architecture model for hotel services that includes the description of the hardware and software elements for development and implementation. The model would make it possible to bring customers closer to experiences that are very close to reality, based on their profiles and characteristics, previously processed by a recommendation module included in the proposal, which supports the purchase decision with a high degree of adaptation to their needs and requirements. The proposal and development of the model, with attributes of originality, aims to contribute to the development and technological innovation of marketing in the hotel industry. Finally, conclusions and recommendations for future work are established.

Author 1: Luis Alfaro
Author 2: Claudia Rivera
Author 3: Jorge Luna-Urquizo
Author 4: Juan Carlos Zúniga
Author 5: Alonso Portocarrero
Author 6: Alberto Barbosa Raposo

Keywords: Marketing; experiential marketing; immersive technologies; immersive technologies in marketing

PDF

Paper 65: A Review of Blockchain based Educational Projects

Abstract: Blockchain is a decentralized, shared distributed ledger that records the history of transactions made by different nodes across the whole network. The technology is already used in the field of education for record-keeping, digital certification, etc. Several papers have been published on this topic, but none of them covers blockchain-based educational projects as a whole, so there is a gap regarding the latest trends in education. Blockchain-based educational projects resolve the issues of today's educators. On that basis, we conclude that there is a need for a systematic literature review. This study therefore reviews existing blockchain-based educational projects to fill this gap. For this purpose, the paper focuses on exploring some blockchain-based projects and the protocols used in these projects. It also analyses the blockchain features that are being used and the services offered by the existing educational projects using those features, in order to improve the implementation of this technology in education.

Author 1: Bushra Hameed
Author 2: Muhammad Murad Khan
Author 3: Abdul Noman
Author 4: M. Javed Ahmad
Author 5: M. Ramzan Talib
Author 6: Faiza Ashfaq
Author 7: Hafiz Usman
Author 8: M. Yousaf

Keywords: Blockchain; educational-project; education; digital-certification; record-keeping

PDF

Paper 66: Virtual Reality Full Immersion Techniques for Enhancing Workers Performance, 20 years Later: A Review and a Reformulation

Abstract: The principal aim of this article is to review and reformulate the work published by Alfaro-Casas, Bridi and Fialho [1] in 1997 on the use of virtual reality immersion techniques for enhancing workers' performance. The challenge addressed concerns the discussion of the advances that have occurred since the publication of the original work. The strength of the achievements lies in the open dialogue established with different theories of human cognition. We consider not only Humberto Maturana and Francisco Varela's autopoiesis (theories of the biological foundations of human cognition) but also other approaches derived from the Education Sciences and Knowledge Management. The focus is on Artificial Intelligence and the use of immersive technologies. The state of the art is established and its contributions towards the construction of knowledge are investigated, as a means for the development of training and capacity-building activities for the workforce. The methodology used is a bibliographical review of several databases and a search of theses in the main universities. The greatest weakness of the research lies in the fact that we limited the search to documents in English, Spanish, or Portuguese. Some of the open problems of virtual immersion are also treated.

Author 1: Luis Alfaro
Author 2: Claudia Rivera
Author 3: Jorge Luna-Urquizo
Author 4: Sofía Alfaro
Author 5: Francisco Fialho

Keywords: Autopoiesis; knowledge construction; knowledge management; knowledge construction by full immersion in virtual reality environments

PDF

Paper 67: Classification of People who Suffer Schizophrenia and Healthy People by EEG Signals using Deep Learning

Abstract: More than 21 million people worldwide suffer from schizophrenia. This serious mental disorder exposes people to stigmatization, discrimination, and violation of their human rights. Different works on the classification and diagnosis of mental illnesses use electroencephalogram (EEG) signals because they reflect brain functioning and how these diseases affect it. Given the information provided by EEG signals and the performance demonstrated by Deep Learning algorithms, the present work proposes a model for the classification of schizophrenic and healthy people through EEG signals using Deep Learning methods. Considering the properties of an EEG, high-dimensional and multichannel, we applied the Pearson Correlation Coefficient (PCC) to represent the relations between the channels; this way, instead of using the large amount of data that an EEG provides, we used a smaller matrix as the input of a Convolutional Neural Network (CNN). Finally, the results demonstrated that the proposed EEG-based classification model achieved Accuracy, Specificity, and Sensitivity of 90%, 90%, and 90%, respectively.
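
The channel-correlation representation described in the abstract can be sketched as follows: a channels-by-channels Pearson correlation matrix is computed from a raw EEG window and then shaped as CNN input. The montage size, window length and random data below are assumptions for illustration.

```python
# Sketch: Pearson correlation matrix of EEG channels as a compact CNN input.
import numpy as np

channels, samples = 19, 1024                   # assumed montage and window size
eeg = np.random.randn(channels, samples)       # stand-in for a real EEG window

pcc = np.corrcoef(eeg)                         # shape: (channels, channels)
cnn_input = pcc[np.newaxis, ..., np.newaxis]   # add batch and channel axes
print(pcc.shape, cnn_input.shape)
```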

Author 1: Carlos Alberto Torres Naira
Author 2: Cristian Jos´e L´opez Del Alamo

Keywords: Convolutional Neural Network (CNN); electroencephalography; Electroencephalogram Signals (EEG); deep learning; schizophrenia; classification; Pearson Correlation Coefficient (PCC); Universidad Nacional de San Agustín (UNSA)

PDF

Paper 68: Securing Informative Fuzzy Association Rules using Bayesian Network

Abstract: In business, association rules are considered important assets and play a vital role in productivity and growth. Different business partners share association rules in order to explore capabilities and make effective decisions for the enhancement of the business and its core capabilities. The fuzzy association rule mining approach emerged out of the necessity to mine the quantitative data regularly present in databases. An association rule is sensitive when it violates rules and regulations governing the sharing of particular kinds of information with third parties. As with classical association rules, privacy measures need to be taken to retain the standards and importance of fuzzy association rules. Privacy preservation is used for valuable information extraction while minimizing the risk of sensitive information disclosure. Our proposed model mainly focuses on securing the association rules that reveal sensitive information. In our model, sensitive fuzzy association rules are secured by identifying sensitive fuzzy items to perturb the fuzzified dataset. The resulting transformed FARs are analyzed to calculate the accuracy of our model in terms of newly generated fuzzy association rules, hidden rules and lost rules. Extensive experiments are carried out to demonstrate the results of our proposed model. Privacy preservation of the maximum number of sensitive FARs with minimum perturbation highlights the significance of our model.

Author 1: Muhammad Fahad
Author 2: Khalid Iqbal
Author 3: Somaiya Khatoon
Author 4: Khalid Mahmood Awan

Keywords: Fuzzy association rules; privacy preservation; fuzzification; sensitive rules; Bayesian network; perturbation

PDF

Paper 69: Evaluating a Cloud Service using Scheduling Security Model (SSM)

Abstract: Developments in technology have made cloud computing widely used in different sectors such as academia and business, as well as for private purposes. It can provide convenient services via the Internet, allowing stakeholders to obtain all the benefits that the cloud can facilitate. With all the benefits of cloud computing, there are still risks, such as security. This brings into consideration the need to improve the Quality of Service (QoS). A Scheduling Security Model (SSM) for Cloud Computing has been developed to address these issues. This paper discusses the evaluation of the SSM on examples with different scenarios to investigate the cost and the effect on the service requested by customers.

Author 1: Abdullah Sheikh
Author 2: Malcolm Munro
Author 3: David Budgen

Keywords: Cloud computing; security; scheduling; evaluating; cloud models

PDF

Paper 70: Statistical Analysis and Security Evaluation of Chaotic RC5-CBC Symmetric Key Block Cipher Algorithm

Abstract: In previous research works, it has been theoretically proven that the RC5-CBC encryption algorithm behaves as a Devaney topological chaotic dynamical system. This unpredictable behavior has been experimentally illustrated through sensitivity analyses encompassing evaluation of the avalanche effect phenomenon. In this paper, which is an extension of our previous work, we aim to prove that the RC5 algorithm can guarantee a much better level of security and randomness while behaving chaotically, namely when embedded in the CBC mode of encryption. To do this, we began by evaluating the quality of images encrypted under the chaotic RC5-CBC symmetric key encryption algorithm. Then, we present the synthesis results of a hardware architecture that implements this chaotic algorithm on FPGA circuits.

Author 1: Abdessalem Abidi
Author 2: Anissa Sghaier
Author 3: Mohammed Bakiri
Author 4: Christophe Guyeux
Author 5: Mohsen Machhout

Keywords: Cipher Block Chaining (CBC); Rivest Cipher 5 (RC5); chaotic dynamical system; sensibility; security; randomness

PDF

Paper 71: Hybrid Control of PV-FC Electric Vehicle using Lyapunov based Theory

Abstract: Lyapunov based control is used to test whether a dynamical system is asymptotically stable or not. The control strategy is based on linearization of the system, and a Lyapunov function is constructed to obtain a stabilizing feedback controller. This paper deals with Lyapunov based control of a multiple-input single-output system for hybrid electric vehicles (HEVs). Generally, an electric vehicle has an energy management system (EMS), an inverter, a DC-DC converter and a traction motor for the operation of its wheels. The control action is applied to the DC-DC converter, which works side-by-side with the EMS of the electric vehicle. The input sources considered in this study are a photovoltaic (PV) panel, a fuel cell and a high-voltage lithium-ion (Li-ion) battery. The PV panel and fuel cell are considered the primary sources of energy, and the battery is considered the secondary source. The converter used is a DC-DC boost converter connected to all three sources. The idea follows the basic HEV principle in which multiple sources are incorporated to satisfy the power demands of the vehicle, using a DC-DC converter and an inverter to operate its traction motor. The target is to achieve the necessary tracking of all input source currents and the output voltage, and to fulfill the power demand of the HEV under severe load transients. The operation of the DC-DC converter is divided into three stages, each representing a different combination of the input sources. The analysis and proof of the stability of the HEV system is done using the Lyapunov stability theory. The results are discussed in the conclusion.

Author 1: Saad Hayat
Author 2: Sheeraz Ahmed
Author 3: Tanveer-ul-Haq
Author 4: Sadeeq Jan
Author 5: Mehtab Qureshi
Author 6: Zeeshan Najam
Author 7: Zahid Wadud

Keywords: Energy Management System (EMS); Hybrid Electric Vehicle (HEV); DC-DC converter; Multiple Input-Multiple Output (MIMO) system

PDF

Paper 72: Automatic Classification of Academic and Vocational Guidance Questions using Multiclass Neural Network

Abstract: Educational and professional orientation is an essential phase for each student to succeed in life and in the curriculum. In this context, it is very important to take into account the interests, occupations, skills, and personality type of each student to make the right choice of training and to build a solid professional outline. This article deals with the problem of educational and vocational orientation, and we have developed a model for the automatic classification of orientation questions. “E-Orientation Data” is a machine learning method, based on John L. Holland’s theory of RIASEC typology, that uses a multiclass neural network algorithm. This model allows us to classify academic and professional orientation questions according to their four categories, thus allowing automatic generation of questions in this area. The model can serve E-Orientation practitioners and researchers for further research, as the algorithm gives good results.

Author 1: Omar Zahour
Author 2: El Habib Benlahmar
Author 3: Ahmed Eddaoui
Author 4: Oumaima Hourrane

Keywords: Academic and vocational guidance; multiclass neural network; e-orientation; machine learning; Holland’s theory

PDF

Paper 73: Software Architecture Solutions for the Internet of Things: A Taxonomy of Existing Solutions and Vision for the Emerging Research

Abstract: Recently, Internet of Things (IoT) systems have enabled an interconnection between systems, humans, and services to create an (autonomous) ecosystem of various computation-intensive things. Software architecture supports effective modeling, specification, implementation, deployment, and maintenance of software-intensive things to engineer and operationalize IoT systems. In order to conceptualize and optimize the role of software architectures for IoT, there is a dire need for research efforts that analyse the existing research and solutions to formulate a vision for future research and development. In this research, we propose to empirically analyse and taxonomically classify the impacts of research on designing, architecting, and developing IoT-driven software systems. We have conducted a survey-based study of the existing research – investigating challenges, solutions and required future efforts – on architecting IoT systems. The results of the survey highlight that software architecture solutions support various research themes for IoT systems, such as (i) cloud-based ecosystems, (ii) reference architectures, (iii) autonomous systems, and (iv) agent-based systems for IoT-based software. The results also indicate that any future vision for architecting IoT software should incorporate architectural processes, patterns, models and languages to support reusable, automated, and efficient development of IoT systems. The proposed research documents structured and systemised knowledge about software architecture for developing IoT systems. Such knowledge can help researchers and developers identify the key areas, understand the existing solutions and their limitations, and conceptualize and propose innovative solutions for existing and emerging challenges related to the development of IoT software.

Author 1: Aakash Ahmad
Author 2: Sultan Abdulaziz
Author 3: Adwan Alanazi
Author 4: Mohammed Nazel Alshammari
Author 5: Mohammad Alhumaid

Keywords: Software and system architecture; Internet of Things; software engineering; software engineering for IoT

PDF

Paper 74: Data Augmentation to Stabilize Image Caption Generation Models in Deep Learning

Abstract: Automatic image caption generation is a challenging AI problem, since it requires techniques from several computer science domains, such as computer vision and natural language processing. Deep learning techniques have demonstrated outstanding results in many applications. Data augmentation in deep learning, which increases the amount and variety of training data available to learning models without the burden of collecting new data, is a promising area of machine learning. Generating a textual description for a given image remains a difficult task for computers. Nowadays, deep learning plays a significant role in processing visual data with the help of Convolutional Neural Networks (CNN). In this study, CNNs are employed to train prediction models that support automatic image caption generation. The proposed method uses data augmentation to overcome the instability of well-known image caption generation models. The Flickr8k dataset is used in the experimental work, and the BLEU score is applied to evaluate the reliability of the proposed method. The results clearly show the stability of the outcomes generated by the proposed method compared to others.
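
To make the augmentation idea concrete, the sketch below applies a simple augmentation policy (small rotations, shifts and horizontal flips) to a training image and encodes each augmented variant with a pretrained CNN, as is commonly done before caption decoding. This is not the paper's code: the file name example.jpg, the choice of InceptionV3 as encoder, and the augmentation parameters are assumptions for illustration only.

    # Illustrative sketch (not the paper's implementation): augment a training
    # image and extract CNN features for each augmented copy. File name,
    # encoder choice and augmentation parameters are assumptions.
    import numpy as np
    from tensorflow.keras.applications import InceptionV3
    from tensorflow.keras.applications.inception_v3 import preprocess_input
    from tensorflow.keras.preprocessing.image import (
        ImageDataGenerator, load_img, img_to_array)

    # Simple augmentation policy: small rotations, shifts and horizontal flips.
    augmenter = ImageDataGenerator(
        rotation_range=15,
        width_shift_range=0.1,
        height_shift_range=0.1,
        horizontal_flip=True,
    )

    # Pretrained CNN used as a fixed feature extractor (global average pooling).
    encoder = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

    # Hypothetical training image; in practice this loops over the dataset.
    image = img_to_array(load_img("example.jpg", target_size=(299, 299)))
    batch = np.expand_dims(image, axis=0)

    # Encode a few augmented variants; during training each variant would keep
    # the original image's reference captions.
    features = []
    for i, augmented in enumerate(augmenter.flow(batch, batch_size=1)):
        features.append(encoder.predict(preprocess_input(augmented)))
        if i == 3:  # four augmented copies per image in this sketch
            break
    print(len(features), features[0].shape)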

Author 1: Hamza Aldabbas
Author 2: Muhammad Asad
Author 3: Mohammad Hashem Ryalat
Author 4: Kaleem Razzaq Malik
Author 5: Muhammad Zubair Akbar Qureshi

Keywords: Convolutional Neural Networks (CNN); image caption generation; data augmentation; deep learning

PDF

Paper 75: From Poster to Mobile Calendar: An Event Reminder using Mobile OCR

Abstract: Technological innovations are the foundation of new services today. Successful services address real-life issues and help people manage life more conveniently using relevant technologies. Images are now part of daily life: people often take pictures of posters for events such as exhibitions, workshops, and conferences with their mobile phones. Unfortunately, these pictures are sometimes forgotten and the events’ dates pass; as a consequence, people miss events they were interested in. Hence, with the vision of providing technology-powered, affordable, and turnkey applications, this paper presents Event-Reminder, a fully automated, lightweight reminder system built upon mobile offline OCR (Optical Character Recognition) with touch interaction, making some daily tasks easier. Event-Reminder is a mobile application that recognizes the text content of such images, extracts the event’s date and venue, and automatically adds this information to the mobile calendar in order to remind the user about the event at the proper time. A prototype system is introduced in this paper.
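
The sketch below illustrates the general idea behind such a pipeline, not the Event-Reminder implementation itself: OCR is run on a poster image, a date and a venue are extracted with simple heuristics, and a minimal iCalendar entry is written so a calendar app can import it. The file name poster.jpg, the dd/mm/yyyy date pattern, the "Venue:" heuristic, and the 9:00 start time are all assumptions.

    # Illustrative sketch (not the Event-Reminder implementation): OCR a poster,
    # extract a date and venue with simple heuristics, and write an .ics entry.
    import re
    from datetime import datetime
    from PIL import Image
    import pytesseract

    # Hypothetical poster image captured with a phone camera.
    text = pytesseract.image_to_string(Image.open("poster.jpg"))

    # Very simple heuristics: first dd/mm/yyyy-style date and a "Venue:" line.
    date_match = re.search(r"\b(\d{1,2})/(\d{1,2})/(\d{4})\b", text)
    venue_match = re.search(r"Venue:\s*(.+)", text)

    if date_match:
        day, month, year = (int(g) for g in date_match.groups())
        start = datetime(year, month, day, 9, 0)  # assumed 9:00 start time
        venue = venue_match.group(1).strip() if venue_match else "Unknown venue"

        # Minimal iCalendar event that a mobile calendar app can import.
        ics = "\n".join([
            "BEGIN:VCALENDAR",
            "VERSION:2.0",
            "BEGIN:VEVENT",
            f"DTSTART:{start.strftime('%Y%m%dT%H%M%S')}",
            "SUMMARY:Event from poster",
            f"LOCATION:{venue}",
            "END:VEVENT",
            "END:VCALENDAR",
        ])
        with open("event.ics", "w") as f:
            f.write(ics)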

Author 1: Fatiha Bousbahi

Keywords: OCR; API; mobile apps; reminder systems

PDF
