An Optimized Analogy-Based Project Effort Estimation

Despite the strong predictive performance of Analogy-Based Estimation (ABE) in generating effort estimates, there is no consensus on (1) how to predetermine the appropriate number of analogies, and (2) which adjustment technique produces better estimates. Moreover, no prior work has attempted to optimize both the number of analogies and the feature distance weights for each test project. Rather than using a fixed number, it may be better to optimize this value for each project individually and then adjust the retrieved analogies by ...


Introduction
Analogy-Based Estimation (ABE) has remained popular within the software engineering research community because of its outstanding predictive performance across different data types [1,15]. The idea behind the method is simple: the effort of a new project can be estimated by reusing the efforts of similar, already documented projects in a dataset, where the first step is to identify the similar projects that carry useful predictive information [15]. The predictive performance of ABE relies significantly on the choice of two interrelated parameters: the number of nearest analogies and the adjustment strategy [8]. The goal of adjustment in ABE is twofold: (1) to minimize the difference between a new project and its nearest analogies, and (2) to produce more successful estimates than original ABE [2]. The literature on ABE contains a large number of models that use a variety of adjustment strategies. These strategies suffer from common problems: they are unable to produce stable results when applied in different contexts, and they use a fixed number of analogies for the whole dataset [1]. Using a fixed number of analogies has proven unsuccessful in many situations because it depends heavily on expert opinion and requires extensive experimentation to identify the best k value, which might not be predictive for individual projects [2].
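The core ABE step described above can be sketched in a few lines; the feature values and efforts below are invented for illustration, not drawn from the paper's datasets:

```python
import math

def euclidean(p, q):
    """Distance between two scaled feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def abe_estimate(new_features, history, k=2):
    """Plain ABE: average the efforts of the k most similar past projects."""
    ranked = sorted(history, key=lambda h: euclidean(new_features, h[0]))
    return sum(effort for _, effort in ranked[:k]) / k

# Invented (features, effort) pairs, not taken from the paper's datasets
past = [([1.0, 2.0], 10.0), ([1.1, 2.1], 12.0), ([5.0, 6.0], 40.0)]
estimate = abe_estimate([1.05, 2.05], past, k=2)
```

Adjustment strategies, discussed next, refine the retrieved efforts before (or instead of) this plain averaging step.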
The aim of this work is therefore to propose a new method based on the artificial Bees Algorithm (BA) [14] that adjusts ABE by optimizing the feature similarity coefficients, minimizing the difference between the new project and its nearest projects, and predicting the best number k of nearest analogies. The paper is structured as follows: Section 2 gives an overview of ABE and adjustment methods. Section 3 presents the proposed adjustment method. Section 4 presents the research methodology. Section 5 shows the obtained results. Finally, the paper ends with our conclusions.

Related Works
The ABE method generates a new prediction based on the assumption that projects with similar feature descriptions have similar efforts [8,15]. Adjustment is the part of ABE that attempts to minimize the difference between the new observation (ê_i) and each nearest similar observation (e_i), and then reflects that difference on the derived solution in order to obtain a better solution (e_t). Consequently, all adjusted solutions are aggregated using simple statistical methods such as the mean. In a previous study [18] we investigated the performance of BA in adjusting ABE and finding the best k value for the whole dataset. That model showed some accuracy improvements, but it did not solve the problem of predicting the best k value for each individual project. In addition, the solution space of BA was a challenge because there was only one common set of weights for all nearest analogies. The optimization criterion used (i.e. MMRE) was also problematic because it has been proven to be biased towards underestimation. For all these reasons, and since we need to compare our proposed model against validated and replicated models, we excluded that model from the comparison later in this paper. The present paper thereby attempts to overcome the abovementioned limitations.
The literature contains a significant number of adjustment methods that have been documented and replicated in previous studies; we therefore selected and summarize only the most widely used strategies. Walkerden and Jeffery [16] proposed Linear Size Adjustment (LSE) based on size extrapolation. Mendes et al. [12] proposed Multiple Linear Feature Extrapolation (MLFE) to include all related size features. Jorgensen et al. [6] proposed Regression Towards the Mean (RTM) to adjust projects based on their productivity values. Chiu and Huang [4] proposed another adjustment based on a Genetic Algorithm (GA) that optimizes the coefficient αj of each feature distance by minimizing a performance measure. More recently, Li et al. [10] proposed the use of a Neural Network (NN) to learn the difference between projects and reflect it on the final estimate. Further details about these methods and their functions can be found in [1].
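Walkerden and Jeffery's size extrapolation can be sketched as follows; the exact aggregation of the original LSE may differ in detail, so treat this as a minimal illustration with hypothetical (size, effort) pairs:

```python
def lse_adjust(new_size, analogies):
    """Size extrapolation: scale each analogy's effort by the size ratio
    between the new project and that analogy, then average. A simplified
    sketch of linear size adjustment; (size, effort) pairs are hypothetical.
    """
    adjusted = [effort * (new_size / size) for size, effort in analogies]
    return sum(adjusted) / len(adjusted)

example = lse_adjust(120.0, [(100.0, 500.0), (150.0, 700.0)])
```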
Indeed, the most important question to consider when using such methods is how to predict the best number of nearest analogies (k). In recent years various approaches have been proposed to specify this number, such as: (1) fixed-number selection (i.e. k = 1, 2, 3, …) as in [7,11,12,16]; (2) dynamic selection based on clustering as in [2,18]; and (3) similarity-threshold-based selection as in [5,9]. Generally, these studies, except [2], use the same k value for all projects in the dataset, which does not necessarily produce the best performance for each individual project. On the other hand, the problem with [2] is that it does not include an adjustment method; it predicts the best k value based only on the structure of the dataset.

The Proposed Method (OABE)
The proposed adjustment method starts with the Bees Algorithm in order to find, for each project: (1) the feature weights (w), and (2) the best number k of nearest analogies that minimize the mean absolute error. The search space of BA can be seen as a set of n weight matrices, where the size of each matrix (i.e. solution) is k × m. That is, each candidate solution contains a weight matrix whose dimensions equal the number of analogies (k) and the number of features (m), as shown in Figure 1. The number of rows (i.e. k) and the weight values are initially generated at random; each row represents the weights for one selected analogy. The setting parameters for BA were found after performing a sensitivity analysis on the employed datasets to identify appropriate values. Table 1 shows the BA parameters, their abbreviations and the initial values used in this study. Below we briefly describe the process by which BA finds the best k value and the corresponding weights for each new project. The algorithm starts with an initial set of weight matrices generated after randomly initializing k for each matrix. The solutions are evaluated based on MR and sorted in ascending order. The best solutions, ranked 1 to b, are selected for neighborhood search and form new patches. Similarly, a number of bees (nsp) are recruited for each solution ranked from b+1 to u to search in its neighborhood. The best solution in each patch replaces the old best solution in that patch, and the remaining bees are replaced randomly with other solutions. The algorithm continues searching in the neighborhood of the selected sites, recruiting more bees to search near the best sites, which may hold promising solutions. These steps are repeated until the stopping criterion (minimum MR) is met or the iteration budget is exhausted.
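The BA loop described above can be sketched generically. The sketch below minimizes a toy objective rather than the paper's MR fitness over weight matrices, and the parameter values (n scout bees, b elite sites, u selected sites, nep/nsp recruited bees, ngh neighbourhood size) are illustrative defaults, not those of Table 1:

```python
import random

def bees_algorithm(objective, dim, n=20, b=3, u=8, nep=5, nsp=3,
                   ngh=0.1, iters=50, seed=0):
    """Generic Bees Algorithm loop minimizing `objective` over [0, 1]^dim."""
    rnd = random.Random(seed)
    scout = lambda: [rnd.random() for _ in range(dim)]
    pop = [scout() for _ in range(n)]
    for _ in range(iters):
        pop.sort(key=objective)              # best (lowest) solutions first
        next_pop = []
        for rank, site in enumerate(pop[:u]):
            bees = nep if rank < b else nsp  # more bees near the elite sites
            patch = [[min(1.0, max(0.0, x + rnd.uniform(-ngh, ngh)))
                      for x in site] for _ in range(bees)]
            # keep the best of the patch (elitist: the old site may survive)
            next_pop.append(min(patch + [site], key=objective))
        next_pop += [scout() for _ in range(n - u)]  # remaining bees scout
        pop = next_pop
    return min(pop, key=objective)

# Toy objective: squared distance from the point (0.3, 0.7)
best = bees_algorithm(lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2, dim=2)
```

In OABE, the candidate solutions would instead encode a k value together with a k × m weight matrix, evaluated by the MR of the resulting adjusted estimate.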
The proposed OABE model has been validated over 8 software effort estimation datasets coming from companies in different industrial sectors [3]. The dataset characteristics are provided in Table 2, which shows that the datasets are strongly positively skewed, indicating many small projects and a limited number of outliers. It is important to note that all continuous features have been scaled and all observations with missing values have been excluded.
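The preprocessing described above (scaling continuous features, excluding observations with missing values) can be sketched as min-max scaling with row filtering; the two-feature layout in the example is illustrative only:

```python
def preprocess(rows):
    """Drop observations with missing values (None), then min-max scale
    each remaining column to [0, 1]."""
    complete = [r for r in rows if None not in r]
    lo = [min(c) for c in zip(*complete)]
    hi = [max(c) for c in zip(*complete)]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(r, lo, hi)] for r in complete]

scaled = preprocess([[1.0, 10.0], [3.0, 20.0], [None, 30.0]])
```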

Performance Measures
A key question for any estimation model is whether its predictions are accurate: the difference between the actual effort (e_i) and the predicted effort (ê_i) should be as small as possible, because large deviations will have an adverse effect on the development progress of the new software project [13]. This section describes the performance measures used in this research, as shown in Table 3. Although measures such as MMRE and MMER have been criticized as biased towards under- and overestimation respectively, we still use them because they are widely used in commenting on the success of predictions [13]. Interpreting these error measures without any statistical test can lead to conclusion instability, so we used the win-tie-loss algorithm [8]. In addition, the Bonferroni-Dunn test [17] is used to perform multiple comparisons between models based on absolute error, checking whether there are differences in population rank means among more than two populations.
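The relative-error measures and the win-tie-loss tally can be sketched as below; note that the published win-tie-loss algorithm [8] declares ties via a statistical test, for which a simple numeric tolerance stands in here:

```python
def mre(actual, predicted):
    """Magnitude of relative error for a single project."""
    return abs(actual - predicted) / actual

def mmre(actuals, preds):
    """Mean MRE; criticized as biased towards underestimation."""
    return sum(mre(a, p) for a, p in zip(actuals, preds)) / len(actuals)

def mmer(actuals, preds):
    """Mean magnitude of error relative to the estimate; biased the other way."""
    return sum(abs(a - p) / p for a, p in zip(actuals, preds)) / len(actuals)

def win_tie_loss(errors_a, errors_b, tol=1e-6):
    """Count wins/ties/losses of model A against model B on paired errors."""
    win = sum(a < b - tol for a, b in zip(errors_a, errors_b))
    loss = sum(a > b + tol for a, b in zip(errors_a, errors_b))
    return win, len(errors_a) - win - loss, loss
```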

Results
This section presents the performance figures of OABE against various adjustment techniques used in constructing ABE models. Since the selection of the best k setting in OABE is dynamic, there was no need to pre-set the best k value. In contrast, for the other adjustment techniques it was necessary to find the k value that best fits each model; we therefore applied different k settings from 1 to 5 to each model, as suggested by Li et al. [9], and the setting that produced the best overall performance was selected for the comparison. These findings are indicative of the superiority of BA in optimizing the k analogies and adjusting the retrieved project efforts, and consequently in improving the overall predictive performance of ABE. The obtained results also provide evidence that adjustment techniques can work better for datasets with discontinuities (e.g. Maxwell, Kemerer and COCOMO). Note that this is exactly the "searching for the best k setting" result that the researchers mentioned in the related work might have predicted. We speculate that prior software engineering researchers who failed to find the best k setting did not attempt to optimize this k value together with the adjustment technique itself for each individual project before building the model. Figure 2 shows the sum of win, tie and loss values for all models over all datasets. Every model in Figure 2 is compared to the other five models, over six error measures and eight datasets. Notice in Figure 2 that, except for the lowest-performing model, the tie values lie in the 49-136 band; they are therefore not informative enough to differentiate the methods, so we consult the win and loss statistics to tell which model performs better over all datasets under different error measures. There is an apparent difference between the best and worst models in terms of win and loss values (in the extreme case it is close to 119).
The win-tie-loss results offer yet more evidence of the superiority of OABE over the other adjustment techniques. They also confirm that the predictions of the OABE model were statistically significantly more accurate than those of the other techniques. Two aspects of these results are worth commenting on: (1) NN was the big loser, with poor adjustment performance; (2) the LSE technique performs better than MLFE, which suggests that using the size measure alone is more predictive than using all size-related features. We use the Bonferroni-Dunn test to compare the OABE method against the other methods, as shown in Figure 3. The plots were obtained after applying an ANOVA test followed by the Bonferroni test. The ANOVA test resulted in a p-value close to zero, which implies that the differences between the methods are statistically significant based on the AR measure. The horizontal axis in these figures corresponds to the average rank of the methods based on AR, and the dotted vertical lines indicate the critical difference at the 95% confidence level. Evidently, OABE obtained a lower AR than the other methods over most datasets, except for the small datasets; for those, all models except NN generated relatively similar estimates, with OABE still having the smallest error. This indicates that the OABE adjustment method is far less prone to incorrect estimates.

Conclusions and Future Works
This paper presented a new adjustment technique that tunes ABE using the Bees optimization algorithm. BA was used to automatically find the appropriate k value and its feature weights in order to adjust the retrieved k closest analogies. The results obtained over 8 datasets showed significant improvements in the prediction accuracy of ABE. We note that, while the rankings of all models can change by some amount, OABE keeps a relatively stable ranking according to all error measures, as shown in Figure 2. Future work is planned to study the impact of using ensemble adjustment techniques.