Evaluating the impact of information systems on end user performance: A proposed model

In recent decades, information systems (IS) researchers have concentrated their efforts on developing and testing models that help with the investigation of IS and user performance in different environments. As a result, a number of models for studying end users' systems utilization and other related issues, including system usefulness, system success and user aspects in business organizations, have appeared. A synthesized model consolidating three well-known and widely used models in IS research is proposed.


I. INTRODUCTION
Since the mid-nineties, IS researchers have concentrated their research efforts on developing and testing models that help in investigating IS aspects in different environments. As a result, a number of models for studying the systems utilization of end users and other related issues, including system use, system success and user aspects in business organizations, have appeared.
The most commonly used models are the technology acceptance model (TAM), the task-technology fit model (TTF), and the DeLone and McLean (D&M) model. Each model focuses on different aspects and has a different perspective on the impacts of IS on users, or at least follows a specific researcher's goals and purposes. Overall, previous models provide a much-needed theoretical basis for exploring the factors that explain IS utilization and impacts on user performance [16].
This signifies the need for a model that can help understand the relationship between IS and users in different environments. Such a model should encompass different dimensions of IS, technology and users contemporaneously. This would help identify the most important aspects overall and shift the focus from less important factors to more important ones, bringing new and useful ideas to both practitioners and researchers.
This study thus starts with a common argument: the aforementioned models have been criticized for different reasons, as each one alone tells only a particular part of the story and none of them alone has achieved universal acceptance in terms of comprehensiveness and suitability to various IS environments. The study also discusses the weaknesses of these models and the overlap between them as a basic step toward understanding a suitable way to integrate them into one more comprehensive and powerful model. For example, we note that the development of new and complex IS, such as ERP systems, requires different investigative approaches.

II. LITERATURE REVIEW
The difficulty of measuring actual performance has led many previous studies to use multiple perspectives and theories to reach more accurate and rigorous results [39], [1]. Thus, we argue that current IS models individually are not broad enough to measure such a relationship, as they capture only a subset of the factors in the broad context of IS, reflecting a common agreement among many researchers [29], [16], [21]. For example, TAM and TTF overlap significantly and could provide a more coherent model if integrated; such a model could be even stronger than either standing alone. Recent research on the D&M model also showed that the model is incomplete and needs to be extended with other factors [29], such as usefulness and the importance of the systems [34].
In light of these facts, especially the difficulty of objectively measuring performance, IS researchers have used these models as surrogate measures to predict users' behaviours and IS success in various types of IS environments and business organizations [35]. For that reason, research on extending, integrating and replicating these models and constructs has been appearing in the IS literature [18], [32]. There are many examples of this: for instance, [32] developed a new model by integrating the user satisfaction model with other variables such as information accuracy, system adequacy and timeliness to investigate user satisfaction in small business. In another instance, [39] extended TAM in order to investigate the actual usage of the systems. [17] integrated TAM and TTF to investigate individual performance; the integrated model provided more explanatory power than either model alone. Later on, [18] also proposed a model extending TTF with a computer self-efficacy construct, explaining the link between the two models to help managers understand how PEOU can be increased.
In a similar vein, [27] extended TAM and TTF in an internet environment and found support for a model that includes TTF and TAM to predict actual use and intention to use; others have also extended TAM with other variables from the IS literature and found support for integrating new variables into new models in different environments [5], [2], [7].
Recently, researchers have started to expand these models with new factors, aiming at developing new models suited to advanced and complex IS projects in various industries [23], [26]. [38] used an extended model to study the relationship between TAM variables and actual usage. [41] demonstrated that TAM, extended with additional IT factors such as facilitating conditions and compatibility, possesses the ability to interpret individual behaviour and users' acceptance.
Prior IS models, including TAM and TTF, were used to carry out research in traditional and relatively simple but important environments, such as spreadsheet software and personal computing [2]. However, with the development and implementation of complex and costly IS that cut across organizational functions, it is clear that there is an increased need for research that examines these models and extends them to complex IS environments [24].
Despite the large body of existing research on the TAM, TTF and D&M models [21], [28], [20], none of these models has achieved universal acceptance, whether due to narrowness of focus or inadequately developed methodologies [10]. These largely independent streams of research sparked our interest in explicitly discussing the main weaknesses of previous models, with the goal of combining them into a new, powerful, validated model to further the understanding of the relationship between IS and users, including performance impacts and systems usefulness [22], [13], [14]. Previous models focused on user acceptance and satisfaction as surrogates to measure the impact of IS on individual user performance [15], [33], [29]. The argument in support of this approach stems from the difficulty of identifying a set of objective measures to evaluate the benefits of IS to users and organizations [3].
Specifically, a number of important shortcomings plague these models. For instance, TAM is widely employed in IS research, but it has been criticized for its lack of task focus [16], its inability to address how other variables affect core TAM variables such as usefulness [2], its over-reliance on assumptions of voluntary system utilization [22], and its limited recognition that frequent utilization of a system may not lead to higher user performance and that inadequate systems may be evaluated positively by users due to factors such as accessibility and personal characteristics [22].
Similarly, a major concern about studies conducted using TTF is the inadequate attention given to a very important element related to system quality and usefulness, especially since it is known that system usefulness must be evaluated before systems can deliver performance impacts [22].
The D&M model is one of the most widely applied in IS research. It identifies the complexity that surrounds the definition of IS success, offering valuable contributions to the understanding of IS performance impacts and providing a scheme for classifying the different measures of IS. However, researchers have claimed that the D&M model is incomplete, suggesting that further factors should be included in it [12], [40], [38], [29].
In view of that, this study developed and statistically validated a new model for examining the impact of IS on user performance. The model combines the core factors from the TAM, TTF and D&M models (see Figure 1 below), thereby achieving a more adequate and accurate measure of user performance.

III. METHODOLOGY
This section describes the methodology used in the study and gives an overview of the pilot study and pretest procedures applied in order to validate the study model.

A. PARTICIPANTS
The respondents numbered 387 ERP users in total, from various functional areas in different organizations. Data was collected from the ERP users by means of a written questionnaire, which was synthesized after an extensive review of the IS and ERP literature. The questionnaire consisted of two parts: the first part involved demographic questions about the respondents and the frequency of ERP usage, while the second part involved questions about the factors, including the fit between the system and task requirements and users' needs, System Quality (SQ), Information Quality (IQ), Perceived Usefulness (PU), Perceived Ease of Use (PEOU) and User Performance (UP). Both five- and seven-point Likert scales were used (see Appendix 1).

B. PILOT STUDY AND PRE-TEST
Although most of the factors used in the instrument were validated by prior research, the adopted questionnaire was evaluated through a focus group and tested in a pilot study to ensure content and construct validity and also to ensure appropriateness within the context of ERP environments.
The instrument was then distributed to 15 ERP users in three universities to evaluate ERP impacts on their performance. The data from those users was analyzed, and the results showed a high level of reliability. After ensuring the appropriateness of the instrument, the main study was conducted.

IV. RESULTS
This section provides the main findings of the study and explains the results of the reliability and validity tests.

A. MULTIVARIATE ASSUMPTION TESTING
A preliminary analysis was performed to check for violations of the assumptions. The assumptions tested included outliers, linearity [25], homoscedasticity [38] and independence of residuals [31].
The histogram plots showed some deviations from normality for some variables; however, these deviations were not significant and did not indicate any violations after being checked using correlation tests.
The results presented in Table 1 show that all values of the Durbin-Watson test came very close to 2, indicating no autocorrelation in the residuals. The results also showed that all Cook's distance values are less than one and all leverage values are close to zero, confirming the absence of influential outliers [9].
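As an illustration of the residual-independence check described above, the Durbin-Watson statistic can be computed directly from a model's residuals. The following is a minimal sketch in Python with NumPy; the function name and the sample residuals are our own illustration, not data from the study:

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic: the sum of squared successive
    differences of the residuals divided by the residual sum of
    squares. Values near 2 suggest uncorrelated residuals; values
    toward 0 or 4 suggest positive or negative autocorrelation."""
    e = np.asarray(residuals, dtype=float)
    return float(np.sum(np.diff(e) ** 2) / np.sum(e ** 2))

# Illustrative use on hypothetical regression residuals:
dw = durbin_watson([0.3, -0.1, 0.2, 0.05, -0.2])
```

The statistic is bounded between 0 and 4, which is why values "very close to 2", as reported in Table 1, are read as evidence of independent residuals.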

B. COLLINEARITY AND MULTICOLLINEARITY
In practice, the most common cut-off points used for determining the presence of multicollinearity are tolerance values of less than .10, or Variance Inflation Factor (VIF) values above 10 [31]. As illustrated in Table 2, the tolerance values for all variables were above .10 and the VIF values for each variable were less than 10; therefore, the study did not violate the multicollinearity assumption [8].
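The tolerance and VIF diagnostics follow directly from auxiliary regressions: each predictor is regressed on all the others, tolerance is 1 − R², and VIF is its reciprocal. A minimal NumPy sketch (function name and data matrix are ours, for illustration only):

```python
import numpy as np

def vif_and_tolerance(X):
    """For each column of the predictor matrix X (rows = cases,
    columns = predictors), regress that column on all the others
    and return a list of (tolerance, VIF) pairs, where
    tolerance = 1 - R^2 and VIF = 1 / tolerance."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    results = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])  # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        tolerance = 1.0 - r2  # near 0 signals multicollinearity
        results.append((tolerance, 1.0 / tolerance))
    return results
```

Under the cut-offs cited above, a predictor with tolerance below .10 (equivalently, VIF above 10) would be flagged as collinear with the others.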

C. RELIABILITY
Internal consistency reliability was assessed by calculating Cronbach's alpha values. An alpha of .70 or higher is normally considered satisfactory for most purposes [11], [30].
All individual factors, as well as the entire instrument, showed high levels of reliability. Cronbach's alpha for the study instrument ranges from 0.84 for usefulness to 0.97 for user performance, indicating high reliability, as summarized in Table 3 in the next section. (TTF: task-technology fit, IQ: information quality, SQ: system quality, PU: perceived usefulness, PEOU: perceived ease of use, UP: user performance.)
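The alpha values reported above come from the standard Cronbach formula, which compares the sum of the individual item variances to the variance of the total scale score. A minimal sketch in Python with NumPy (the function name and sample matrix are illustrative, not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (respondents x items) matrix of
    Likert responses: alpha = k/(k-1) * (1 - sum(item variances)
    / variance of the total score). Values of .70+ are commonly
    treated as satisfactory."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)          # per-item variance
    total_var = X.sum(axis=1).var(ddof=1)       # variance of scale total
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

When items move together across respondents, the total-score variance dominates the sum of item variances and alpha approaches 1, which is the pattern behind the 0.84-0.97 range reported for this instrument.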

D. VALIDITY
Validity is the extent to which a construct measures what it is supposed to measure, reflecting how truthful the research results are and determining whether the research measures what was intended [19].
Both convergent and discriminant validity were used to confirm the appropriateness of the measurements obtained for the factors used in the study. The cut-off point used in this analysis was .3, as recommended by [37] and [31]; all correlations below this point were considered low. The analysis was conducted for each variable as shown in Table 3 below, followed by a discussion of the results.
Discriminant validity was tested for each construct using Cronbach's alpha. According to [4], [6], for a construct to be valid its Cronbach's alpha should be greater than its correlations with the other constructs. As shown in Table 2, a comparison of the correlations with the Cronbach's alphas indicated that this holds for all constructs, and thus discriminant validity is satisfied [36].
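The criterion just described reduces to a simple element-wise comparison between each construct's alpha and its row of the construct correlation matrix. A minimal sketch (function name and example values are ours, for illustration):

```python
import numpy as np

def discriminant_validity_ok(alphas, corr):
    """Check the criterion that each construct's Cronbach's alpha
    exceeds the (absolute) correlation between that construct and
    every other construct.

    alphas: sequence of per-construct Cronbach's alpha values.
    corr:   square construct-level correlation matrix, same order."""
    corr = np.asarray(corr, dtype=float)
    for i, alpha in enumerate(alphas):
        others = np.delete(corr[i], i)  # drop the diagonal (self-correlation)
        if not np.all(alpha > np.abs(others)):
            return False
    return True
```

In the study's Table 2, every alpha exceeded the corresponding inter-construct correlations, so this check would return True for all constructs.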

CONVERGENT VALIDITY
All of the loadings of the constructs' items were higher than the cutoff criterion of 0.50, with most items above 0.70, demonstrating high construct validity, as shown in Table 3. However, two items of the TTF construct (Com1 and ITsub1) did not meet the cutoff criterion and were removed from further analysis.
Similarly, Access3 and Tim3, belonging to the SQ construct, were dropped from further analysis as they did not meet the cutoff criterion.
Accuracy and relevancy were not included in the factor analysis as they were measured by only two items each. However, these sub-constructs show high correlations with user performance, so they have been retained in the model.
In relation to UP, one item (Crea3) was removed from the analysis because it had high loadings on two other sub-constructs and therefore created ambiguity. To ensure that removing this item had no adverse effects, the reliability alpha was checked in both cases and showed no significant changes. Lastly, the PEOU and PU constructs were tested; all items had high loadings (> .60) on their respective constructs, suggesting high construct validity.

MULTIPLE REGRESSION ANALYSIS
A multiple regression analysis was performed to identify the significant contribution of each factor in explaining user performance with ERP systems. The results of the analysis, including significance levels, t-statistics and coefficients for each factor, are summarized in Table 4. Three factors, PU, SQ and PEOU, were found to be the best predictors of user performance, explaining 61% of the variance in user performance. Furthermore, since PU had the strongest impact on user performance, further analysis was conducted to identify the factors affecting PU. This analysis yielded a regression function with R² = 0.44 based on all independent variables, as summarized in Table 4.
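The variance-explained figures above (61% for UP, R² = 0.44 for PU) come from ordinary least squares fits of this kind. A minimal NumPy sketch of fitting a multiple regression and computing R² (function name and sample data are illustrative only, not the study's data):

```python
import numpy as np

def ols_r2(X, y):
    """Fit y = b0 + X @ b by ordinary least squares and return the
    coefficient vector (intercept first) and the coefficient of
    determination R^2 = 1 - SS_res / SS_tot."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([np.ones(len(y)), X])   # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    return beta, r2
```

An R² of 0.61 would mean the predictors jointly account for 61% of the variance in the user-performance scores, which is how the headline result for PU, SQ and PEOU is to be read.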

CONCLUSIONS
The study provides insights into a potentially valuable tool for IS researchers and practitioners. The new combined model, investigating the relationships between a set of factors including IQ, SQ, TTF and UP, shows promise in enhancing the understanding of IS impacts on user performance in business organizations.
The empirical findings demonstrated the significance of all of these factors, albeit with different relative importance. The findings showed that most factors in the proposed model have a direct and/or indirect significant influence on user-perceived performance, suggesting, therefore, that the model possesses the ability to explain the main impacts of these factors on ERP users.
The study shows that the most significant factor influencing user performance is perceived usefulness, closely followed by system quality. These two factors provide a wider understanding of what impacts users when utilizing IS. The study provides a new foundation for, and draws attention to, academic research on information systems impacts, and contributes to the improvement of user performance.

Figure 1. The study model
While the TAM, TTF and D&M models have been tested in traditional IS environments, the new combined model was tested within an ERP systems environment. This environment was deemed appropriate as ERP systems are large-scale systems.

Table 1. Independence and outliers analysis