The Science and Information (SAI) Organization
IJACSA Volume 6 Issue 12

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.


Paper 1: Introducing a Method for Modeling Knowledge Bases in Expert Systems Using the Example of Large Software Development Projects

Abstract: The goal of this paper is to develop a meta-model that provides the basis for developing highly scalable artificial intelligence systems able to make decisions autonomously based on different dynamic and specific influences. An artificial neural network builds the entry point for developing a multi-layered, human-readable model that serves as a knowledge base and can be used for further investigations in deductive and inductive reasoning. A graph-theoretical consideration gives a detailed view into the model structure. In addition, the model is introduced using the example of large software development projects. The integration of Constraints and Deductive Reasoning Element Pruning, which are required for executing deductive reasoning efficiently, is illustrated.

Author 1: Franz Felix Füssl
Author 2: Detlef Streitferdt
Author 3: Weijia Shang
Author 4: Anne Triebel

Keywords: Knowledge Engineering; Ontology Engineering; Knowledge Modelling; Knowledge Base; Expert System; Artificial Intelligence; Deductive Reasoning Element Pruning

PDF

Paper 2: A Prediction Model for Mild Cognitive Impairment Using Random Forests

Abstract: Dementia is a geriatric disease which has emerged as a serious social and economic problem in an aging society, and early diagnosis is very important for it. In particular, early diagnosis and early intervention of Mild Cognitive Impairment (MCI), the preliminary stage of dementia, can reduce the onset rate of dementia. This study developed an MCI prediction model for the Korean elderly in local communities and provides basic material for the prevention of cognitive impairment. The subjects of this study were 3,240 elderly people (1,502 males, 1,738 females) in local communities over the age of 65 who participated in the Korean Longitudinal Survey of Aging (KLoSA) conducted in 2012. The outcome was defined as having MCI, and the explanatory variables were gender, age, level of education, level of income, marital status, smoking, drinking habits, regular exercise more than once a week, monthly average hours of participation in social activities, subjective health, diabetes and high blood pressure. The Random Forests algorithm was used to develop a prediction model and the result was compared with a logistic regression model and a decision tree model. Significant predictors of MCI were age, gender, level of education, level of income, subjective health, marital status, smoking, drinking, regular exercise and high blood pressure. In addition, the Random Forests model was more accurate than the logistic regression and decision tree models. Based on these results, it is necessary to build a monitoring system which can diagnose MCI at an early stage.

Author 1: Haewon Byeon

Keywords: random forests; data mining; dementia; mild cognitive impairment; risk factors

PDF
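A minimal sketch of the three-way model comparison described in the abstract above, using scikit-learn; the synthetic dataset is a placeholder for the 3,240-subject survey data, and all parameters are illustrative assumptions, not the author's settings:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the survey: 12 predictors, binary MCI outcome.
X, y = make_classification(n_samples=3240, n_features=12, random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()  # 5-fold accuracy
    print(f"{name}: mean accuracy {acc:.3f}")
```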

Paper 3: Spectrum Sensing Methodologies for Cognitive Radio Systems: A Review

Abstract: Spectrum sensing is an important functional unit of cognitive radio networks and one of the main challenges encountered by cognitive radio. This paper presents a survey of spectrum sensing techniques, studied from a cognitive radio perspective. The challenges that accompany spectrum sensing are reviewed. Two sensing schemes, namely cooperative sensing and eigenvalue-based sensing, are studied, and their various advantages and disadvantages are highlighted. Based on this study, cooperative spectrum sensing is proposed for employment in wideband-based cognitive radio systems.

Author 1: Ireyuwa E. Igbinosa
Author 2: Olutayo O. Oyerinde
Author 3: Viranjay M. Srivastava
Author 4: Stanley Mneney

Keywords: Cognitive radio; Cooperative sensing; Data Fusion; OFDM; Spectrum Sensing; wideband sensing

PDF

Paper 4: A Posteriori Pareto Front Diversification Using a Copula-Based Estimation of Distribution Algorithm

Abstract: We propose CEDA, a Copula-based Estimation of Distribution Algorithm, to increase the size, diversity and convergence of the set of optimal solutions for a multiobjective optimization problem. The algorithm exploits the statistical properties of Copulas to produce new solutions from existing ones through the estimation of their distribution. CEDA starts by taking initial solutions provided by any MOEA (Multi-Objective Evolutionary Algorithm), constructs Copulas to estimate their distribution, and uses the constructed Copulas to generate new solutions. This design spares CEDA the need to run an MOEA every time alternative solutions are requested by a Decision Maker because the solutions found are not satisfactory. CEDA was tested on a set of benchmark problems traditionally used by the community, namely UF1, UF2, ..., UF10 and CF1, CF2, ..., CF10. CEDA was used along with SPEA2 and NSGA2 as two examples of MOEAs, resulting in two variants, CEDA-SPEA2 and CEDA-NSGA2, which were compared with SPEA2 and NSGA2. The results of the experiments show that, with both variants of CEDA, new solutions can be generated at a significantly smaller computational cost, without compromising quality, compared with those found by SPEA2 and NSGA2.

Author 1: Abdelhakim Cheriet
Author 2: Foudil Cherif

Keywords: Multiobjective Optimization Problems; Evolutionary Algorithms; Estimation of Distribution Algorithms; Copulas

PDF

Paper 5: Vitality Aware Cluster Head Election to Alleviate the Wireless Sensor Network for Long Time

Abstract: Wireless Sensor Networks (WSNs) are attractive for their unique characteristics, such as the capability to endure harsh environmental circumstances and to grant better scalability. A wireless sensor network is composed of small sensors and a base station. Batteries supply the energy for the sensors; hence, the lifetime of the network degrades under heavy transmission load. Since WSNs are utilized for critical purposes, the lifespan of the network has to be extended. Clustering is one of the foremost mechanisms to maximize the network's lifespan. Cluster head selection plays an imperative role, given the fact that the cluster head is answerable for the transfer of data between cluster members and the base station. This article presents a novel scheme for cluster head selection entitled vitality aware cluster head election. In this scheme, the sensor nodes are clustered into an optimal number of groups. Subsequently, a cluster head is selected by ballot for each group based on its remaining energy. To weigh up the performance of the proposed method, the NS-2 network simulator has been employed.

Author 1: P. Thiruvannamalai Sivasankar
Author 2: Dr. M. RamaKrishnan

Keywords: Wireless Sensor Networks (WSNs); Residual energy; Clustering; Life span; Sensor

PDF
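A toy reading of the energy-based election rule described in the abstract above (a hypothetical sketch, not the authors' protocol; cluster formation and re-election rounds are omitted):

```python
def elect_cluster_heads(clusters):
    """Elect, per cluster, the node with the highest residual energy.

    `clusters` maps a cluster id to {node_id: residual_energy_joules}.
    """
    return {cid: max(nodes, key=nodes.get) for cid, nodes in clusters.items()}

# Two toy clusters; n2 and n4 hold the most remaining energy.
clusters = {0: {"n1": 1.8, "n2": 2.4}, 1: {"n3": 0.9, "n4": 1.1}}
print(elect_cluster_heads(clusters))  # {0: 'n2', 1: 'n4'}
```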

Paper 6: Designing an IMS-LD Model for Collaborative Learning

Abstract: The context of this work is the design of an IMS-LD model for collaborative learning. Our work specifically seeks to promote, by means of distance information technology, collective knowledge construction. Our approach is to first think about the conditions for creating real collective activities between learners, and then to design the IT environment that supports these activities. We chose to use project pedagogy as a basis for teaching these collective activities. This pedagogy has already proven itself, mostly in traditional learning situations in the classroom.

Author 1: Fauzi El Moudden
Author 2: Prof. Mohamed Khaldi
Author 3: Prof. Aammou Souhaib

Keywords: Collaborative Learning; Pedagogy Project; Socio-constructivist; IMS-LD

PDF

Paper 7: An Enhanced Steganographic Model Based on DWT Combined with Encryption and Error Correction Techniques

Abstract: The problems of protecting information from modification and of validating privacy and origin are very important issues and have become the concern of many researchers. Handling these problems is definitely a big challenge, which is probably why so much attention has been directed to the development of information protection schemes. In this paper, we propose a robust model that combines and integrates steganographic techniques with encryption and error detection and correction techniques in order to achieve secrecy, authentication and integrity. The idea is based on decomposing the image into three separate color planes, Red, Green and Blue, and then, depending on the encryption key, dividing the image into N blocks. By applying the DWT on each block independently, the model enables hiding the information in the image in an unpredictable manner. The part of the image where the information is embedded is key-dependent and unknown to the intruder, achieving a blind DWT effect. To enhance reliability, the proposed model uses a Hamming code, which helps to recover lost or modified information. The proposed model was implemented and tested successfully.

Author 1: Dr. Adwan Yasin
Author 2: Mr. Nizar Shehab
Author 3: Dr. Muath Sabha
Author 4: Mariam Yasin

Keywords: Steganography; DWT; LSB; hamming code; encryption and decryption

PDF
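To make block-wise DWT embedding concrete, here is a toy sketch using PyWavelets; the parity-of-quantized-coefficient rule and the parameters are assumptions for illustration, not the paper's exact embedding, and the Hamming-code protection of the payload bits is omitted:

```python
import numpy as np
import pywt

def embed_bits(block, bits, strength=8.0):
    """Hide bits in the LL sub-band of one image block: each bit forces
    the parity of a quantized approximation coefficient."""
    LL, (LH, HL, HH) = pywt.dwt2(block.astype(float), "haar")
    flat = LL.ravel()                      # view into LL
    for i, bit in enumerate(bits):
        q = int(np.round(flat[i] / strength))
        if q % 2 != bit:                   # adjust parity to encode the bit
            q += 1
        flat[i] = q * strength
    return pywt.idwt2((LL, (LH, HL, HH)), "haar")

stego = embed_bits(np.random.randint(0, 256, (8, 8)), [1, 0, 1, 1])
```

Extraction would re-run `pywt.dwt2` on the stego block and read the parity of each quantized LL coefficient.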

Paper 8: A Multimedia System for Breath Regulation and Relaxation

Abstract: In today's hectic life, detrimental stress has caused numerous illnesses. To adjust mental states, breath regulation plays a core role in multiple relaxation techniques. In this paper, we introduce a multimedia system supporting breath regulation and relaxation. Features of this system include non-contact respiration detection, bio-signal monitoring, and breath interaction. In addition to illustrating the system, we also propose a novel form of breath interaction. Through this form of breath interaction, the system effectively influenced users' breathing such that their breathing features turned into patterns that appear when people are relaxed. An experiment was conducted to compare the effects of three forms of regulation: the free breathing mode, the pure guiding mode, and the local-mapping mode. Experimental results show that multimedia-assisted breath interaction successfully deepened and slowed down users' breathing compared with the free breathing mode. Besides objective breathing feature changes, subjective feedback also showed that participants were satisfied and became relaxed after using the system.

Author 1: Wen-Ching Liao
Author 2: Han-Hong Lin
Author 3: He-Lin Ruo
Author 4: Po-Hsiang Hsu

Keywords: breathing; relaxation; biofeedback; interaction; multimedia

PDF

Paper 9: A Secure Network Communication Protocol Based on Text to Barcode Encryption Algorithm

Abstract: Nowadays, after the significant development of the Internet, communication and information exchange around the world have become easier and faster than before. One may send an e-mail or perform a money transaction (using a credit card) while being at home. Internet users can also share resources (storage, memory, etc.) or invoke a method on a remote machine. All these activities require securing data while the data are sent through the global network. There are various methods for securing data on the internet and ensuring its privacy; one of these methods is data encryption. This technique protects the data from hackers by scrambling them into a non-readable form. In this paper, we propose a novel method for data encryption based on the transformation of a text message into a barcode image. The proposed Bar Code Encryption Algorithm (BCEA) is tested and analyzed.

Author 1: Abusukhon Ahmad
Author 2: Bilal Hawashin

Keywords: Encryption; Decryption; Algorithm; Secured Communication; Private Key; Barcode Image

PDF

Paper 10: Comparison of Contour Extraction Based on Layered Structure and Fourier Descriptor in Image Retrieval

Abstract: In this paper, a new content-based image retrieval technique using shape features is proposed. Shape feature extraction based on a layered structure representation has been implemented. The approach extracts the shape feature by measuring the distances between the centroid (center) and the boundaries of the object, and can capture multiple boundaries at the same angle for object shapes that have several points at the same angle. Once an input is given, the method searches for the images most related to the input. The correlation between input and output is defined by a specific rule. First, the input image is converted from RGB to grayscale, followed by an edge detection process. After edge detection, the object boundary is obtained; then the distance between the center of the object and its boundary is calculated and put into the feature vector, and if there is another boundary at the same angle it is put into a different feature vector in a different layer. Experimental results on a plankton dataset show that the proposed method performs better than the conventional Fourier descriptor method.

Author 1: Cahya Rahmad
Author 2: Kohei Arai

Keywords: CBIR; MLCCD; feature extraction; RGB; Fourier descriptor; shape; retrieval

PDF
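A compact sketch of the layered centroid-distance signature described in the abstract above (one interpretation, not the authors' implementation): for every angular bin, up to `n_layers` boundary distances are recorded, so multiple boundaries at the same angle survive:

```python
import numpy as np

def layered_signature(edge_img, n_angles=360, n_layers=3):
    """edge_img: binary edge map. Returns an (n_layers, n_angles) array of
    centroid-to-boundary distances, largest first in each angular bin."""
    ys, xs = np.nonzero(edge_img)
    cy, cx = ys.mean(), xs.mean()                       # object centroid
    angles = (np.degrees(np.arctan2(ys - cy, xs - cx)) % 360).astype(int)
    dists = np.hypot(ys - cy, xs - cx)
    sig = np.zeros((n_layers, n_angles))
    for a in range(n_angles):
        d = np.sort(dists[angles == a])[::-1][:n_layers]  # outermost first
        sig[: d.size, a] = d
    return sig
```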

Paper 11: Arabic Sentiment Analysis: A Survey

Abstract: Most social media commentary in the Arabic language space is made using unstructured, non-grammatical slang Arabic, presenting complex challenges for sentiment analysis and opinion extraction of online commentary and microblogging data in this important domain. This paper provides a comprehensive analysis of the important research works in the field of Arabic sentiment analysis. An in-depth qualitative analysis of the various features of the research works is carried out and a summary of objective findings is presented. We used smoothness analysis to evaluate the percentage error of the performance scores reported in the studies from their linearly-projected values (smoothness), which is an estimate of the influence of the different approaches used by the authors on the performance scores obtained. To solve a bounding issue with the data as reported, we modified an existing logarithmic smoothing technique and applied it to pre-process the performance scores before the analysis. Our results from the analysis are reported and interpreted for the various performance parameters: accuracy, precision, recall and F-score.

Author 1: Adel Assiri
Author 2: Ahmed Emam
Author 3: Hmood Aldossari

Keywords: Arabic Sentiment Analysis; Qualitative Analysis; Quantitative Analysis; Smoothness Analysis

PDF

Paper 12: A Novel Ball on Beam Stabilizing Platform with Inertial Sensors

Abstract: This research paper presents a novel controller design for a one degree of freedom (1-DoF) stabilizing platform using inertial sensors. The plant is a ball on a pivoted beam. A multi-loop controller design technique has been used. The system dynamics is observable but uncontrollable, and the uncontrollable polynomial of the system is not Hurwitz, hence the system is not stabilizable. A hybrid compensator design strategy is implemented by partitioning the system dynamics into two parts: a controllable subsystem and an uncontrollable subsystem. The controllable part is compensated by partial pole assignment in the inner loop. A prediction observer is designed for the unmeasured states in the inner loop. A rapid control prototyping technique is used for compensator design for the outer loop containing the controlled inner loop and the uncontrollable part of the system. Real-time system responses are monitored using MATLAB/Simulink and show promising performance of the hybrid compensation technique for reference tracking and robustness against model inaccuracies.

Author 1: Ali Shahbaz Haider
Author 2: Muhammad Bilal
Author 3: Samter Ahmed
Author 4: Saqib Raza
Author 5: Imran Ahmed

Keywords: stabilizing platform; ball on beam; multi-loop controller; inertial sensors; rapid control prototyping; partial pole assignment

PDF

Paper 13: A Feature Analysis of Risk Factors for Stroke in the Middle-Aged Adults

Abstract: In order to maintain health during middle age and achieve successful aging, it is important to elucidate and prevent the risk factors of middle-age stroke. This study investigated high-risk groups for stroke in the middle-aged population of Korea and provides basic material for the establishment of stroke prevention policy by analyzing sudden perception of speech/language problems and clusters of multiple risk factors. This study analyzed 2,751 persons (1,191 males and 1,560 females) aged 40–59 who participated in the 2009 Korea National Health and Nutrition Examination Survey. The outcome was defined as the prevalence of stroke. The explanatory variables were age, gender, final education, income, marital status, at-risk drinking, smoking, occupation, subjective health status, moderate physical activity, hypertension, and sudden perception of speech and language problems. A prediction model was developed using the C4.5 algorithm, a data-mining approach. Sudden perception of speech and language problems, hypertension, and marital status were significantly associated with stroke in Korean middle-aged people. The most preferentially involved predictor was sudden perception of speech and language problems. In order to prevent middle-age stroke, it is necessary to systematically manage and develop tailored programs for high-risk groups based on this prediction model.

Author 1: Haewon Byeon
Author 2: Hyeung Woo Koh

Keywords: C4.5; stroke; decision tree; risk factor; speech problem

PDF
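scikit-learn does not ship C4.5, but an entropy-criterion CART tree is a close stand-in for experimenting with the kind of decision tree analysis described in the abstract above (synthetic placeholder data, illustrative parameters):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Placeholder for the 2,751-subject survey: 12 predictors, stroke yes/no.
X, y = make_classification(n_samples=2751, n_features=12, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=1)
tree.fit(X_tr, y_tr)
print("held-out accuracy:", tree.score(X_te, y_te))
print(export_text(tree, feature_names=[f"x{i}" for i in range(12)]))
```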

Paper 14: Analysis on Existing Basic SLAs and Green SLAs to Define New Sustainable Green SLA

Abstract: Nowadays, most of the IT (Information Technology) and ICT (Information and Communication Technology) industries are practicing sustainability under green computing hoods. Users/customers are also moving towards a new sustainable society. Therefore, while obtaining or providing different services from different ICT vendors, the Service Level Agreement (SLA) becomes very important for both the service providers/vendors and the users/customers. There are many ways to inform users/customers about various services, with their inherent execution functionalities and even non-functional/Quality of Service (QoS) aspects, through SLAs. However, these basic SLAs do not actually cover eco-efficient green issues or ethical issues for actual sustainable development. That is why the green SLA (GSLA) should come into play. A GSLA is a formal agreement incorporating all the traditional/basic commitments as well as respecting the ecological, economical and ethical aspects of sustainability. This research surveys different basic SLA parameters for various services in ICT industries. At the same time, the survey focuses on finding the gaps and incorporating basic SLA parameters with existing green computing issues and ethical issues for different services in various computing domains. This research defines the future GSLA in relationship with ICT product life and the three pillars of sustainability. The proposed definition and overall survey could help different service providers/vendors to define their future GSLA as well as business strategies for this new transitional sustainable society.

Author 1: Iqbal Ahmed
Author 2: Hiroshi Okumura
Author 3: Kohei Arai

Keywords: SLA; GSLA; Green ICT; Sustainability; IT ethics; ICT Product Life

PDF

Paper 15: EMCC: Enhancement of Motion Chain Code for Arabic Sign Language Recognition

Abstract: In this paper, an algorithm for Arabic sign language recognition is proposed. The proposed algorithm facilitates communication between deaf and non-deaf people. A possible way to achieve this goal is to enable computer systems to visually recognize hand gestures from images. In this context, a proposed criterion called Enhancement of Motion Chain Code (EMCC), which uses a Hidden Markov Model (HMM) at the word level for Arabic sign language recognition (ArSLR), is introduced. This paper focuses on recognizing Arabic sign language at the word level as used by the community of deaf people. Experiments on real-world datasets showed the reliability and suitability of the proposed algorithm for Arabic sign language recognition. The experimental results show a gesture recognition error rate of 1.2% for different signs, compared with that of the competitive method.

Author 1: Mahmoud Zaki Abdo
Author 2: Alaa Mahmoud Hamdy
Author 3: Sameh Abd El-Rahman Salem
Author 4: Elsayed Mostafa Saad

Keywords: image analysis; sign language recognition; hand gestures; HMM; hand geometry; MCC

PDF

Paper 16: A Novel Approach for Ranking Images Using User and Content Tags

Abstract: In this study, a tag- and content-based ranking algorithm is proposed for image retrieval that uses the metadata of images as well as their visual features, also known as “visual words”, to retrieve more relevant images, making the retrieval process more accurate than keyword-based retrieval approaches. Both tag- and content-based image retrieval techniques have their own advantages and disadvantages; by combining the two, their disadvantages are offset. The proposed system has been developed to bridge the gap between the existing techniques and the desired user requirements. Initially, the system extracts the metadata of images and stores it in a custom-designed dictionary dataset. Then, the system creates a visual vocabulary and trains a classifier on a dataset of images belonging to different categories. Next, for any given user query, the system decides which class of images best matches the query. These class images are processed so that a relevance score is computed for each image, and the results are displayed based on the scores.

Author 1: Arif Ur Rahman
Author 2: Muhammad Muzammal
Author 3: Humayun Zaheer Ahmad
Author 4: Awais Majeed
Author 5: Zahoor Jan

PDF

Paper 17: A Disaster Document Classification Technique Using Domain Specific Ontologies

Abstract: Manual data collection and entry is one of the bottlenecks in conventional disaster management information systems. Time is a critical factor in emergency situations, and timely data collection and processing may help save many lives. An effective disaster management system needs to collect data from the World Wide Web automatically. A prerequisite for the data collection process is a document classification mechanism to classify a particular document into different categories. Ontologies are formal bodies of knowledge used to capture machine-understandable semantics of a domain of interest and have been used successfully to support document classification in various domains. This paper presents an ontology-based document classification technique for automatic data collection in a disaster management system. A general ontology of disasters is used that contains descriptions of several natural and man-made disasters. The proposed technique augments conventional classification measures with ontological knowledge to improve the precision of classification. A preliminary implementation of the proposed technique shows promising results, with up to 10% overall improvement in precision when compared with conventional classification methods.

Author 1: Qazi Mudassar Ilyas

Keywords: Disaster Management; Document Classification; Ontology; Supervised Learning; Information Retrieval

PDF

Paper 18: Intrusion Detection System in Wireless Sensor Networks: A Review

Abstract: The security of wireless sensor networks is a topic that has been studied extensively in the literature. An intrusion detection system is used to detect various attacks occurring on sensor nodes of Wireless Sensor Networks placed in various hostile environments. As many innovative and efficient models have emerged in this area in the last decade, we mainly focus our work on intrusion detection systems. This paper reviews various intrusion detection systems, which can be broadly classified by certain traditional techniques, namely signature-based, anomaly-based and hybrid detection. The models proposed by various researchers are critically examined based on certain classification parameters, such as detection rate, false alarm rate, and the algorithms used. This work contains a summarization study of various intrusion detection systems used particularly in Wireless Sensor Networks and highlights their distinct features.

Author 1: Anush Ananthakumar
Author 2: Tanmay Ganediwal
Author 3: Dr. Ashwini Kunte

Keywords: Wireless sensor networks; Intrusion Detection System; Signature-based IDS; Anomaly-based IDS; Hybrid-based IDS; Algorithms

PDF

Paper 19: A Survey on the Internet of Things Software Architecture

Abstract: The Internet of Things (IoT) is a concept and a paradigm that considers the pervasive presence in the environment of a variety of things/objects that, through wired or wireless connections, are uniquely addressed and are able to interact with each other and cooperate with other things/objects in order to create new applications/services and to achieve common objectives. The IoT defines a new world where the real, the digital and the virtual converge to create an environment that makes energy, transport, cities, and many other areas more intelligent. The purpose of the IoT is to enable connections anytime, anywhere, for everything and everyone. The IoT may be considered a network of physical objects with embedded communication technologies that 'feel' or interact with the internal or external environment. This paper presents a survey of Internet of Things software architectures that meet the requirements listed above.

Author 1: Nicoleta-Cristina Gaitan
Author 2: Vasile Gheorghita Gaitan
Author 3: Ioan Ungurean

Keywords: middleware; Internet of Things; things; software architecture

PDF

Paper 20: A Carrier Signal Approach for Intermittent Fault Detection and Health Monitoring for Electronics Interconnections System

Abstract: Intermittent faults are completely missed by traditional monitoring and detection techniques due to the non-stationary nature of the signals. They are incipient events, precursors of permanent faults to come. Intermittent faults in electrical interconnections are short-duration transients which can be detected by some specific techniques, but these do not provide enough information to understand the root cause. Due to their random and unpredictable nature, intermittent faults are the most frustrating, elusive, and expensive faults to detect in an interconnection system. The authors' novel approach injects a fixed-frequency sinusoidal signal into the electronics interconnection system; an intermittent fault, if present, modulates this signal. Intermittent faults and other channel effects are computed from the received signal by demodulation and spectrum analysis. This paper describes the technology for intermittent fault detection, the classification of intermittent faults, and channel characterization. The paper also reports functional tests of the computational system of the proposed methods. The algorithm has been tested using an experimental setup that generates an intermittent signal by applying external vibration stress to a connector; intermittency is detected by acquiring and processing the propagating signal. The results demonstrate the ability to detect and classify intermittent interconnections and noise variations due to intermittency. Monitoring the channel in-situ with a low-amplitude, narrow-band signal over the electronics interconnection between a transmitter and a receiver provides an effective tool for continuously watching the wire system for the random, unpredictable intermittent faults that are the precursors of failure.

Author 1: Syed Wakil Ahmad
Author 2: Dr. Suresh Perinpanayagam
Author 3: Prof. Ian Jennions
Author 4: Dr. Mohammad Samie

Keywords: NFF; Intermittent; Intermittency; Fault detection; Health Monitoring

PDF
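The carrier-injection idea from the abstract above can be simulated in a few lines. This is a hypothetical toy model (a 2 ms open circuit, short-time RMS envelope detection instead of the paper's demodulation and spectrum analysis):

```python
import numpy as np

fs, f0 = 100_000, 5_000                       # sample rate and carrier (Hz)
t = np.arange(int(fs * 0.1)) / fs             # 100 ms record
carrier = np.sin(2 * np.pi * f0 * t)

gain = np.ones_like(t)                        # channel gain
gain[(t > 0.040) & (t < 0.042)] = 0.0         # simulated 2 ms intermittent open
received = gain * carrier + 0.01 * np.random.randn(t.size)

win = int(fs * 0.0005)                        # 0.5 ms RMS windows
rms = np.sqrt(np.convolve(received**2, np.ones(win) / win, mode="same"))
print(f"fault suspected near t = {t[rms.argmin()] * 1000:.1f} ms")
```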

Paper 21: A Synchronous Stream Cipher Generator Based on Quadratic Fields (SSCQF)

Abstract: In this paper, we propose a new synchronous stream cipher called SSCQF whose secret key is K_s = (z_1, ..., z_N), where each z_i is a positive integer. Let d_1, d_2, ..., d_N be N positive integers in {0, 1, ..., 2^m - 1} such that d_i = z_i mod 2^m, with m >= 8. Our purpose is to combine linear feedback shift registers (LFSRs), the arithmetic of quadratic fields (more precisely, the unit group of quadratic fields), and Boolean functions [14]. Encryption and decryption are done by XORing the output of the pseudorandom number generator with the plaintext and ciphertext, respectively. The basic ingredients of the proposed stream generator SSCQF rely on the three following processes. In process I, we construct the initial vectors IV = {X_1, ..., X_N} from the secret key K_s = (z_1, ..., z_N) by using the fundamental unit of Q(sqrt(d_i)) if d_i is a square-free integer, and otherwise by splitting d_i. In process II, we regenerate, from the vectors X_i, vectors Y_i having the same length L, divisible by 8 (equations (2) and (3)). In process III, to each Y_i we assign L/8 linear feedback shift registers, each of length eight. We thus obtain N x L/8 linear feedback shift registers, initialized by the binary sequences regenerated in process II and filtered by primitive polynomials, whose binary output sequences are then combined with L/8 Boolean functions. The keystream generator, denoted K, is a concatenation of the output binary sequences of all the Boolean functions.

Author 1: Younes ASIMI
Author 2: Ahmed ASIMI

Keywords: Synchronous stream cipher SSCQF; linear feedback shift registers LFSRs; arithmetic of quadratic fields; Boolean functions; pseudorandom number generator and keystream generator

PDF
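For intuition, the sketch below implements a single 8-bit LFSR keystream XOR, i.e. one cell of the N x L/8 register bank described above; the quadratic-field key expansion and the Boolean-function combiner of processes I-III are not reproduced. The tap set (8, 6, 5, 4) corresponds to a known primitive polynomial:

```python
def lfsr8(state, taps=(8, 6, 5, 4)):
    """Fibonacci LFSR of length 8; yields one keystream bit per step."""
    while True:
        fb = 0
        for tap in taps:
            fb ^= (state >> (tap - 1)) & 1
        yield state & 1
        state = (state >> 1) | (fb << 7)

def xor_crypt(data: bytes, seed: int) -> bytes:
    """XOR the data with the keystream; the same call decrypts."""
    bits, out = lfsr8(seed), bytearray()
    for byte in data:
        key = 0
        for _ in range(8):
            key = (key << 1) | next(bits)
        out.append(byte ^ key)
    return bytes(out)

c = xor_crypt(b"hello", 0b10101011)       # seed must be non-zero
assert xor_crypt(c, 0b10101011) == b"hello"
```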

Paper 22: Pneumatic Launcher Based Precise Placement Model for Large-Scale Deployment in Wireless Sensor Networks

Abstract: Sensor nodes (SNs) are small-sized, low-cost devices used to facilitate automation, remote control and monitoring. A wireless sensor network (WSN) is an environment-monitoring network formed by a number of SNs connected by a wireless medium. Deployment of SNs is an essential phase in the life of a WSN, as all other performance metrics, such as connectivity, lifetime and coverage, directly depend on it. Moreover, the task of deployment becomes challenging when the WSN is to be established in a large-scale candidate region within a limited time interval in order to deal with emergency conditions. In this paper a model for time-efficient and precise placement of SNs in a large-scale candidate region is proposed. It consists of two sets of pneumatic launchers (PLs), one on either side of a deployment helicopter. Each PL is governed by software which determines the launch time and velocity of an SN for its precise placement at a predetermined position. Simulation results show that the proposed scheme is more time-efficient, feasible and cost-effective in comparison to existing state-of-the-art deployment models and can be adopted as an effective alternative for dealing with emergency conditions.

Author 1: Vikrant Sharma
Author 2: R B Patel
Author 3: H S Bhadauria
Author 4: D Prasad

Keywords: WSN; deployment; placement; aerial; coverage

PDF
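In the simplest drag-free reading, the placement geometry above reduces to projectile motion: an SN launched horizontally from height h lands after t = sqrt(2h/g), so the required launch velocity for a ground distance x is v = x/t. A worked sketch with illustrative numbers (not taken from the paper):

```python
import math

def launch_velocity(distance_m, height_m, g=9.81):
    """Horizontal launch speed so the SN lands `distance_m` away when
    released from `height_m` (air drag ignored)."""
    t_fall = math.sqrt(2 * height_m / g)   # free-fall time to the ground
    return distance_m / t_fall, t_fall

v, t = launch_velocity(distance_m=120.0, height_m=80.0)
print(f"launch at {v:.1f} m/s; touchdown after {t:.1f} s")
```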

Paper 23: Tree-Combined Trie: A Compressed Data Structure for Fast IP Address Lookup

Abstract: To meet the requirements of the high-speed Internet and satisfy Internet users, building fast routers with high-speed IP address lookup engines is inevitable. Given the unpredictable variations that occur in the forwarding information over time and space, the IP lookup algorithm should be able to adapt itself to temporal and spatial conditions. This paper proposes a new dynamic data structure for fast IP address lookup. This novel data structure is a dynamic mixture of trees and tries called the Tree-Combined Trie, or simply TC-Trie. Binary sorted trees are more advantageous than tries for representing a sparse population, while multibit tries perform better than trees when the population is dense. TC-Trie combines the advantages of binary sorted trees and multibit tries to achieve maximum compression of the forwarding information. Dynamic reconfiguration makes TC-Trie capable of adapting over time and of scaling to support more prefixes or longer IPv6 prefixes. TC-Trie provides a smooth transition from today's large IPv4 databases to the large IPv6 databases of the future Internet.

Author 1: Muhammad Tahir
Author 2: Shakil Ahmed

Keywords: IP address lookup; compression; dynamic data structure; IPv6

PDF
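A plain binary trie with longest-prefix match illustrates the baseline that TC-Trie compresses; the tree/trie switching itself is not reproduced in this sketch:

```python
class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]
        self.next_hop = None

def insert(root, prefix_bits, next_hop):
    node = root
    for b in prefix_bits:
        if node.children[b] is None:
            node.children[b] = TrieNode()
        node = node.children[b]
    node.next_hop = next_hop

def lookup(root, addr_bits):
    """Longest-prefix match: remember the last next hop seen on the path."""
    node, best = root, None
    for b in addr_bits:
        if node.next_hop is not None:
            best = node.next_hop
        node = node.children[b]
        if node is None:
            return best
    return node.next_hop or best

root = TrieNode()
insert(root, [1, 0], "A")                 # prefix 10/2
insert(root, [1, 0, 1, 1], "B")           # prefix 1011/4
print(lookup(root, [1, 0, 1, 1, 0, 0]))   # B (longer match wins)
print(lookup(root, [1, 0, 0, 0, 0, 0]))   # A
```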

Paper 24: Performance Evaluation of K-Mean and Fuzzy C-Mean Image Segmentation Based Clustering Classifier

Abstract: This paper presents an evaluation of K-means and Fuzzy C-means image segmentation based clustering classifiers. Segmentation is followed by thresholding and level-set stages to provide accurate region segments. The proposed stage retains the benefits of K-means clustering. The performance of the image segmentation approaches was evaluated by comparing the K-means and Fuzzy C-means algorithms in terms of accuracy, processing time, clustering classifier, and features. The database consists of 40 images processed by the K-means and Fuzzy C-means image segmentation based clustering classifiers. The experimental results confirm the effectiveness of the proposed Fuzzy C-means image segmentation based clustering classifier. The mean values of the Peak Signal-to-Noise Ratio (PSNR), the Mean Square Error (MSE) and the discrepancy are used as statistical significance measures for the performance evaluation of the K-means and Fuzzy C-means image segmentation. Higher accuracy is obtained as the number of classified clusters increases and with Fuzzy C-means image segmentation.

Author 1: Hind R.M Shaaban
Author 2: Farah Abbas Obaid
Author 3: Ali Abdulkarem Habib

Keywords: Segmentation; image segmentation; Evaluation image Segmentation; K-means clustering; Fuzzy C-means

PDF
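A minimal sketch of the K-means half of the evaluation above, with the MSE/PSNR measures named in the abstract (the Fuzzy C-means counterpart, e.g. via the scikit-fuzzy package, follows the same pattern; parameters are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(img, k=4):
    """Cluster pixel intensities, rebuild the image from cluster centers,
    and score the segmentation with MSE/PSNR against the original."""
    h, w = img.shape
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(img.reshape(-1, 1))
    seg = km.cluster_centers_[km.labels_].reshape(h, w)
    mse = np.mean((img.astype(float) - seg) ** 2)
    psnr = 10 * np.log10(255.0**2 / mse) if mse > 0 else float("inf")
    return seg, mse, psnr

img = np.random.randint(0, 256, (64, 64))   # stand-in for a test image
seg, mse, psnr = kmeans_segment(img)
print(f"MSE={mse:.1f}  PSNR={psnr:.1f} dB")
```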

Paper 25: Identifying Cancer Biomarkers via Node Classification within a MapReduce Framework

Abstract: Big data pose new research challenges in the life sciences domain because of their variety, volume, veracity, velocity, and value. Predicting gene biomarkers is one of the vital research issues in bioinformatics, where microarray gene expression and network-based methods can be used. These datasets are extremely voluminous, causing main-memory problems. In this paper, a Random Committee Node Classifier algorithm (RCNC) is proposed for identifying cancer biomarkers, based on microarray gene expression data and Protein-Protein Interaction (PPI) data. Data are enriched from other public databases, such as IntAct, UniProt and Gene Ontology (GO). Cancer biomarkers are identified when applied to different datasets with an accuracy rate of 99.16%, 99.96% precision, 99.24% recall, 99.16% F1-measure and 99.6 ROC. To speed up performance, the algorithm is run within a MapReduce framework, where the RCNC MapReduce algorithm is much faster than the sequential RCNC algorithm on large datasets.

Author 1: Taysir Hassan A. Soliman

Keywords: Big data; cancer biomarkers; MapReduce; node classification

PDF

Paper 26: Intelligent Mobility Management Model for Heterogeneous Wireless Networks

Abstract: Growing consumer demand for access to communication services in a ubiquitous environment is a driving force behind the development of new technologies. The rapid development in communication technology permits end users to access heterogeneous wireless networks to utilize a diverse range of data rate services "anywhere, any time". This forces technology developers to integrate different wireless access technologies, an integration known as the fourth generation (4G). It has become possible to reduce the size of mobile nodes (MNs) with manifold network interfaces, alongside developments in IP-based applications. The 4G mobile/wireless computing and communication heterogeneous environment consists of various access technologies that differ in bandwidth, network conditions, service type, latency and cost. A major challenge of the 4G wireless network is seamless vertical handoff across the heterogeneous wireless access networks as users roam in the heterogeneous wireless network environment. Today's communication devices are portable, are equipped with manifold interfaces, and are capable of roaming seamlessly among the various access technology networks to maintain network connectivity, since no single-interface technology provides ubiquitous coverage and quality-of-service (QoS). This paper reports a mobile agent based heterogeneous wireless network management system in which the agent's decisions are based on a multi-parameter system (MPS). The system works on the parameters network delay, received signal strength, network latency, and information collected about adjoining network cells, viz. accessible channels. The system is simulated and a comparative study is made. From the results it is observed that the system improves the performance of the wireless network.

Author 1: Sanjeev Prakash
Author 2: R B Patel
Author 3: V. K. Jain

Keywords: FNS; MNS; MN; WLAN; Mobile Agent

PDF

Paper 27: Development of Adaptive Mobile Learning (AML) on Information System Courses

Abstract: In general, the learning process is conducted conventionally: face to face between teachers and learners in the classroom. Teachers have a very important role in determining the quantity and quality of instruction; therefore, teachers must think and plan carefully to improve learning opportunities for learners and to improve the quality of teaching. With the rapid development of mobile technology and communication, the learning process can take place not only in the classroom but anywhere and anytime. Based on the analysis of classroom observations conducted by the researcher, who also teaches the Information Systems courses, several obstacles were found during the learning process. This research therefore develops an Adaptive Mobile Learning (AML) system for Information Systems courses. The method used is research and development (R&D), with the design developed using the System Development Life Cycle model. The Adaptive Mobile Learning system was validated and tested through three phases: (1) technical tests of the product as software; (2) testing of the product as a learning medium, through expert review by a media expert; and (3) field tests to evaluate the response of the students who learned with Adaptive Mobile Learning. The results show that the Adaptive Mobile Learning software can present the material of the Information Systems courses, and that Adaptive Mobile Learning media can be used as an alternative (supplementary) medium for learning Information Systems courses. The response of students to the development and use of the Adaptive Mobile Learning software for Information Systems courses was very positive overall: 67.7% very positive and 32.3% positive.

Author 1: I Made Agus Wirawan
Author 2: Made Santo Gitakarna

Keywords: Mobile Learning; Information System Course; Learning Media; Adaptive Learning; Learners Response; Research and Development

PDF

Paper 28: Ontology-Based Clinical Decision Support System for Predicting High-Risk Pregnant Woman

Abstract: According to the Pakistan Medical and Dental Council (PMDC), Pakistan is facing a shortage of approximately 182,000 medical doctors. Due to this shortage, a large number of lives are in danger, especially those of pregnant women. A large number of pregnant women die every year due to pregnancy complications, and usually the reason behind their death is that the complications are not handled in time. In this paper, we propose an ontology-based clinical decision support system that diagnoses high-risk pregnant women and refers them to qualified medical doctors for timely treatment. The ontology of the proposed system is built automatically and enhanced afterward using doctors' feedback. The proposed framework has been tested on a large number of test cases; experimental results are satisfactory and support the implementation of the solution.

Author 1: Umar Manzoor
Author 2: Muhammad Usman
Author 3: Mohammed A. Balubaid
Author 4: Ahmed Mueen

Keywords: High-risk patient; Pregnant woman; Ontology-based CDSS; Clinical Decision Support System

PDF

Paper 29: Distributed Optimization Model of Wavelet Neuron for Human Iris Verification

Abstract: Automatic human iris verification is an active research area with numerous applications for security purposes. Unfortunately, most feature extraction methods in human iris verification systems are sensitive to noise, scale and rotation. This paper proposes an integrated hybrid model combining the Discrete Wavelet Transform, Wavelet Neural Networks and Genetic Algorithms to optimize the feature extraction and verification methods. For any iris image, wavelet features are extracted by the Discrete Wavelet Transform without any dependency on scale or pixel intensity. In addition, a Wavelet Neural Network classifier is integrated as a local optimization method to solve the orientation problem and increase the intrinsic features. To address the downsampling caused by the DWT, each human iris is characterized by a set of parameters of its optimal wavelet analysis function at a determined analysis level. Thus, distributed Genetic Algorithms, a meta-heuristic technique, are introduced as a global optimization search to discover the optimal parameter values. The details and limitations of the approach are discussed and a comparative study is presented. Moreover, conclusions and future work are described.

Author 1: Elsayed Radwan
Author 2: Mayada Tarek

Keywords: Discrete Wavelet Transform (DWT); Wavelet Features; Wavelet Neural Network (WNN); Distributed Genetic Algorithms (GA); Human Iris Verification

PDF

Paper 30: Composable Modeling Method for Generic Test Platform for CBTC System Based on the Port Object

Abstract: The Communications-Based Train Control (CBTC) system has gradually become the first choice for signal systems in urban mass transit, and how to guarantee its safety has become a research hotspot in the safety field. Efficient generic test systems have become the main means of verifying the function and performance of CBTC systems. This paper discusses a composable modeling method for a generic test platform for CBTC systems based on the port object. The method defines the port object (PO) model as the basic component for composable modeling, verifies its port behavior and generates its compositional properties. Based on the port description and the test environment description, it builds port sets and an environment port cluster, respectively. It then analyzes and extracts possible crosscutting concerns, and finally generates a variable PO component library. The modeling of block port objects in the line simulation of a generic test platform for CBTC systems is taken as an example to verify the feasibility of the method.

Author 1: WAN Yongbing
Author 2: WANG Daqing
Author 3: MEI Meng

Keywords: composable modeling; test platform; CBTC; port object; line simulation

PDF

Paper 31: JPI UML Software Modeling

Abstract: Aspect-Oriented Programming (AOP) extends object-oriented programming (OOP) with aspects to modularize crosscutting behavior on classes: aspects advise base code at the occurrence of join points, according to pointcut rule definitions. However, join points introduce dependencies between aspects and base code, a great issue for achieving effective independent development of software modules. Join Point Interfaces (JPI) represent join points using interfaces between classes and aspects, so these modules do not depend on each other. Nevertheless, like AOP, JPI is a programming methodology; thus, for a complete aspect-oriented software development process, it is necessary to define JPI requirements and JPI modeling phases. Towards this goal, this article proposes JPI UML class and sequence diagrams for modeling JPI software solutions. The purpose of these diagrams is to facilitate understanding of the structure and behavior of JPI programs. As an application example, this article applies the proposed JPI UML diagrams to a case study and analyzes the associated JPI code to demonstrate their consistency.

Author 1: Cristian Vidal Silva
Author 2: Leopoldo López
Author 3: Rodolfo Schmal
Author 4: Rodolfo Villarroel
Author 5: Miguel Bustamante
Author 6: Víctor Rea Sanchez

Keywords: JPI; UML; AOP; JPI UML Class Diagram; JPI UML Sequence Diagram

PDF

Paper 32: Association Rule Hiding Techniques for Privacy Preserving Data Mining: A Study

Abstract: Association rule mining is an efficient data mining technique that recognizes frequent items and association rules through market basket analysis of large transactional databases. The occurrence probabilities of the most frequent items in the transactional data are calculated to derive association rules that represent customers' buying habits for products in demand. However, identifying the association rules of a transactional database may expose the confidentiality and privacy of an organization and of individuals. Privacy Preserving Data Mining (PPDM) is a solution for privacy threats in data mining, and this issue is addressed using Association Rule Hiding (ARH) techniques. This research work on Association Rule Hiding studies the generation of sensitive association rules and their concealment based on the transactional data items. Hiding the rules rather than the data makes sensitive rule hiding a technique with minimal side effects and high data utility.

Author 1: Gayathiri P
Author 2: Dr. B Poorna

Keywords: Association rule mining; transactional data; privacy preservation; Association Rule Hiding (ARH); Privacy Preserving Data Mining (PPDM)

PDF
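Support and confidence, the two quantities behind both rule generation and rule hiding in the study above, fit in a few lines; the sanitization tactic noted in the comment is one classic ARH approach, not necessarily the authors':

```python
transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"bread", "milk", "butter"}, {"milk"}]

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    """Confidence of the rule lhs -> rhs."""
    return support(lhs | rhs) / support(lhs)

# A rule such as {bread} -> {milk} is sensitive once it clears the support
# and confidence thresholds; hiding it typically means deleting 'milk' from
# supporting transactions until its support drops below the threshold.
print(support({"bread", "milk"}))              # 0.5
print(confidence({"bread"}, {"milk"}))         # ~0.667
```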

Paper 33: Improving Video Streams Summarization Using Synthetic Noisy Video Data

Abstract: Surveillance camera systems are used to monitor public domains. Reviewing and processing subsequences from the large volume of raw video streams is time- and space-consuming. Many efficient video summarization approaches have been proposed to reduce the amount of irrelevant information, but most do not take into consideration the illumination or lighting changes that cause noise in video sequences. In this work, a video summarization algorithm for video streams is proposed using the Histogram of Oriented Gradients and correlation coefficient techniques. The algorithm is applied to the proposed multi-model dataset, which is created by combining the original data with dynamic synthetic data generated using a random number generator function. Experiments on this dataset show the effectiveness of the proposed algorithm compared with the traditional dataset.

Author 1: Nada Jasim Al-Musawi
Author 2: Saad Talib Hasson

Keywords: Video summarization; Histogram of Oriented Gradient (HOG); Correlation coefficients (R); key frames; illumination changes; noise; Random Numbers Generator function

PDF
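The key-frame rule can be sketched directly with scikit-image's HOG and NumPy's correlation coefficient; the threshold value and the keep-on-low-correlation policy are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np
from skimage.feature import hog

def key_frames(frames, threshold=0.9):
    """Keep a frame as a key frame when the correlation between its HOG
    descriptor and the last key frame's descriptor drops below threshold."""
    keys, last = [0], hog(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        h = hog(frame)
        if np.corrcoef(h, last)[0, 1] < threshold:
            keys.append(i)      # scene changed enough: new key frame
            last = h
    return keys

frames = [np.random.rand(64, 64) for _ in range(10)]  # stand-in grayscale frames
print(key_frames(frames))
```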

Paper 34: A New Algorithm for Post-Processing Covering Arrays

Abstract: Software testing is a critical component of modern software development. For this reason, it has been one of the most active research topics for several years, resulting in many different algorithms, methodologies and tools. Combinatorial testing is one of the most important testing strategies. The test generation problem for combinatorial testing can be modeled as constructing a matrix with certain properties; typically this matrix is a covering array. The construction of covering arrays with the fewest rows remains a challenging problem. This paper proposes a post-processing technique that repeatedly adjusts the covering array in an attempt to reduce its number of rows. In the experiments, 85 covering arrays created by a state-of-the-art algorithm were subjected to the reduction process. The results report a reduction in the size of 28 covering arrays (~33%).

Author 1: Carlos Lara-Alvarez
Author 2: Himer Avila-George

Keywords: Software testing; Combinatorial testing; Covering arrays; Post-Processing

PDF
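The coverage test at the heart of any such post-processing is simple to state. Below, a brute-force checker plus a greedy row-dropping pass (a much weaker reduction than the paper's repeated-adjustment technique, shown only to fix the vocabulary):

```python
from itertools import combinations

def is_covering_array(rows, t, v):
    """True if every t-way column combination covers all v**t value tuples."""
    k = len(rows[0])
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in rows}
        if len(seen) < v ** t:
            return False
    return True

def try_reduce(rows, t, v):
    """Greedy post-processing: drop any row whose removal keeps coverage."""
    rows = list(rows)
    for row in list(rows):
        rest = [r for r in rows if r is not row]
        if rest and is_covering_array(rest, t, v):
            rows = rest
    return rows

# CA(4; 2, 3, 2): 4 rows, 3 binary columns, strength 2.
ca = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(is_covering_array(ca, t=2, v=2))   # True
print(len(try_reduce(ca, t=2, v=2)))     # 4 -- already minimal
```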

Paper 35: Database Preservation: The DBPreserve Approach

Abstract: In many institutions relational databases are used as a tool for managing information related to day-to-day activities. Institutions may be required to keep the information stored in relational databases accessible for many reasons, including legal requirements and institutional policies. However, the evolution of technology and the change of users with the passage of time put the information stored in relational databases in danger. In the long term, the information may become inaccessible when the operating system, database management system or application software is no longer available, or when contextual information not stored in the database is lost, affecting the authenticity and understandability of the information. This paper presents an approach for preserving relational databases for the long term. The proposal involves migrating a relational database to a dimensional model, which is simple to understand and easy to write queries against. Practical transformation rules are developed by carrying out multiple case studies; one of the case studies is presented as a running example in the paper. Systematic application of the rules ensures no loss of information in the process, except for unwanted details. A database preserved using the approach is converted to an open format but may be reloaded into a database management system in the long term.

Author 1: Arif Ur Rahman
Author 2: Muhammad Muzammal
Author 3: Gabriel David
Author 4: Cristina Ribeiro

Keywords: Database Preservation; Transformation Rules

PDF

Paper 36: Detection of Denial of Service Attack in Wireless Network using Dominance based Rough Set

Abstract: A denial-of-service (DoS) attack aims to block the services of a victim system either temporarily or permanently by sending huge amounts of garbage traffic, in various protocols such as the Transmission Control Protocol, User Datagram Protocol, Internet Control Message Protocol, and Hypertext Transfer Protocol, from single or multiple attacker nodes. Maintaining an uninterrupted service system is technically difficult as well as economically costly. As new system vulnerabilities are discovered, new techniques for detecting them have been implemented. In general, probabilistic packet marking (PPM) and deterministic packet marking (DPM) are used to identify DoS attacks. Later, an intelligent decision prototype was proposed, whose main advantage is that it can be used with both PPM and DPM. However, it is observed that the data available in wireless network information systems contain uncertainties. Therefore, an effort has been made to detect DoS attacks using a dominance-based rough set. The accuracy of the proposed model on the KDD Cup dataset is 99.76%, higher than the accuracy achieved by the resilient back propagation (RBP) model.

Author 1: N. Syed Siraj Ahmed
Author 2: D. P. Acharjya

Keywords: Denial of service; Rough set; Lower and upper approximation; Dominance relation; Data analysis

PDF
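The dominance relation that replaces indiscernibility in this setting is easy to state. Below, a toy lower approximation in the dominance-based rough set style, with hypothetical criteria and labels (not the KDD Cup features):

```python
def dominates(x, y):
    """x dominates y when x is at least as good on every criterion."""
    return all(a >= b for a, b in zip(x, y))

def lower_approx(objects, labels, good):
    """Lower approximation of the 'at least good' union: objects whose
    set of dominating objects contains only objects labelled good."""
    return [i for i, x in enumerate(objects)
            if all(labels[j] in good
                   for j, y in enumerate(objects) if dominates(y, x))]

objs = [(3, 2), (2, 2), (1, 1)]        # e.g. (traffic score, anomaly score)
labels = ["attack", "attack", "normal"]
print(lower_approx(objs, labels, good={"attack"}))  # [0, 1]
```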

Paper 37: Enhanced Version of Multi-Algorithm Genetically Adaptive for Multiobjective Optimization

Abstract: Multi-objective evolutionary algorithms (MOEAs) are well-established population-based techniques for solving various search and optimization problems. MOEAs employ different evolutionary operators to evolve populations of solutions, approximating the set of optimal solutions of the problem at hand in a single simulation run. Different evolutionary operators suit different problems, and the use of multiple operators with a self-adaptive capability can further improve the performance of existing MOEAs. This paper suggests an enhanced version of a genetically adaptive multi-algorithm for multi-objective optimization (AMALGAM), which includes differential evolution (DE), particle swarm optimization (PSO), simulated binary crossover (SBX), Pareto archived evolution strategy (PAES) and simplex crossover (SPX) for population evolution during the course of optimization. We examine the performance of this enhanced version of AMALGAM experimentally over two different test suites: the ZDT test problems and the test instances designed for the special session on MOEA competition at the 2009 Congress on Evolutionary Computation (CEC'09). The suggested algorithm finds better approximate solutions on most test problems in terms of the inverted generational distance (IGD) metric indicator.

Author 1: Wali Khan Mashwani
Author 2: Abdellah Salhi
Author 3: Muhammad Asif Jan
Author 4: Rashida Adeeb Khanum
Author 5: Muhammad Sulaiman

Keywords: Multi-objective optimization; Multi-objective Evolutionary Algorithms (MOEAs); Pareto Optimality; Multi-objective Memetic Algorithms (MOMAs)

PDF

Paper 38: Extracting Topics from the Holy Quran Using Generative Models

Abstract: The Holy Quran is one of the Holy Books of God and is considered one of the main references for an estimated 1.6 billion Muslims around the world. The language of the Holy Quran is Arabic. Both specialized and non-specialized people in religion need to search and look up certain information from the Holy Quran. Most research projects concentrate on translations of the Holy Quran into different languages; nevertheless, few research projects pay attention to the original Arabic text. Keyword search is one of the Information Retrieval (IR) methods, but it retrieves only exact matches. Semantic search aims at finding the deeper meanings of a text, and it is a hot field of study in Natural Language Processing (NLP). In this paper, topic modeling techniques are explored to set up a framework for semantic search in the Holy Quran. As the Holy Quran is the word of God, its meanings are unlimited. As a case study, the words of the chapter of Joseph (Peace Be Upon Him (PBUH)) from the Holy Quran are analyzed based on topic modeling techniques. The Latent Dirichlet Allocation (LDA) topic modeling technique has been applied to two structures (Hizb Quarters and verses) of the Joseph chapter, as words, roots and stems, and the log-likelihood has been calculated for both structures. Results show that the best structure to use is verses, which gives the least energy for the data. Some of the resulting topics are shown. These results suggest that topic modeling techniques fail to capture in an accurate manner the coherent topics of the chapter.

Author 1: Mohammad Alhawarat

Keywords: Statistical models; Latent Dirichlet Analysis (LDA); Holy Quran; Unsupervised Learning

PDF
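A minimal LDA run in the spirit of the experiment above, using scikit-learn; the three English placeholder "verses" stand in for the Arabic words/roots/stems of the Joseph chapter, and `lda.score` returns the log-likelihood the abstract mentions:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

verses = ["joseph dream stars sun moon",
          "brothers plot well caravan",
          "egypt king prison dream interpretation"]

vec = CountVectorizer()
X = vec.fit_transform(verses)                 # verse-level term counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

vocab = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [vocab[i] for i in topic.argsort()[::-1][:4]]
    print(f"topic {k}: {top}")
print("log-likelihood:", lda.score(X))
```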

Paper 39: Localisation of Information and Communication Technologies in Cameroonian Languages and Cultures:Experience and Issues

Abstract: In this paper, we tackle the problem of adapting Information and Communication Technologies (ICTs) to the local languages of Cameroon. The objectives are to reduce the digital and language divides and to pave the way for the use of these technologies by local populations who do not understand the languages in which the technology is delivered. We first discuss and highlight several concerns about the localisation of ICTs. Afterwards, we address challenges and issues in computerizing the cultural and linguistic features and indigenous knowledge (IK) of national languages and cultures in Cameroon. As a case study, we describe our experience in localising an open-source editor for the Yemba language, within the Rural Electronic Schools in African Languages Project. Because Cameroonian languages are based on the same basic alphabet, this qualitative research is extensible to other languages.

Author 1: Mathurin Soh
Author 2: Jean Romain Kouesso
Author 3: Laure Pauline Fotso

Keywords: Culture; Digital divide; ICTs; Language divide; Localisation; National language

PDF

Paper 40: Real-Time Talking Avatar on the Internet Using Kinect and Voice Conversion

Abstract: We have more and more chances to communicate via the internet. We often use text/video chat, but there are some problems, such as a lack of communication cues and of anonymity. In this paper, we propose and implement a real-time talking avatar, with which we can communicate with each other by synchronizing a character's voice and motion with our own, while keeping anonymity by using a voice conversion technique. For the voice conversion, we improve its accuracy by specializing it to the target character's voice. Finally, we conduct subjective experiments and show the possibility of a new style of communication on the internet.

Author 1: Takashi Nose
Author 2: Yuki Igarashi

Keywords: Talking avatar; Voice conversion; Kinect; Internet; Real-time communication

PDF

Paper 41: Fine-Grained Quran Dataset

Abstract: Extracting knowledge from text documents has become one of the main hot topics in the field of Natural Language Processing (NLP) in the era of information explosion. Arabic NLP is considered immature for several reasons, including the scarcity of available resources. On the other hand, automatically extracting reliable knowledge from specialized sources such as holy books is an ultimately challenging task, but one of great benefit to all humans. In this context, this paper provides a comprehensive Quranic dataset as the first part (foundation) of ongoing research that attempts to lay the grounds for approaches and applications to explore the Holy Quran. The paper presents the algorithms and approaches designed to extract aggregated data from massive Arabic text sources, including the Holy Quran and closely associated books. The Holy Quran text is transformed into structured multi-dimensional data records, starting from the chapter level, then the word level, and then the character level. All of these are linked with interpretations and meanings, parsing, translations, intonation, and the roots and stems of words, all from authentic and reliable sources. The final dataset is represented in spreadsheet and database record formats, and the paper presents models of the dataset at all levels. The Quranic dataset presented in this paper was designed to be appropriate for database, data mining, text mining and Artificial Intelligence applications; it is also designed to serve as a comprehensive encyclopedia of the Holy Quran and the books of Quranic science.

Author 1: Mohamed Osman Hegazi
Author 2: Anwer Hilal
Author 3: Mohammad Alhawarat

Keywords: Arabic Language; Holy Quran; Quranic Dataset; Text Mining; NLP

PDF
