5th Hellenic Conference on Informatics


Unlocking the Secrets of Information Systems Failures: the Key Role of Evaluation

Angeliki Poulymenakou and Vasilis Serafeimidis
London School of Economics
Department of Information Systems
Houghton Street
London WC2A 2AE
UK
Tel. +44-171-955-7649 Fax. +44-171-955-7385
e-mail: a.poulymenakou@lse.ac.uk
Abstract
This paper examines the potential of evaluation practices for the analysis and mitigation of information systems failure. The paper focuses on the organisational, human and social dimensions of systems failure and contrasts those to concerns of current evaluation practices. A conceptual framework is provided which elucidates these perspectives. The paper concludes by recommending changes to the current perception of information systems evaluation as an organisational activity which are required in order to meet the challenges created by systems failure.

Introduction

Although expenditure on information technology (IT) continues to rise, it has been estimated that as many as 50% of information systems (IS) projects are failures.28 Even more pessimistic statistics are provided by Hochstrasser and Griffiths21 and Willcocks and Lester57 who argue that IS success rates are as low as 30-40%.

Information systems failure has recently gained prominence in the concerns of both information systems professionals and the business community. The interest generated by publicity on failures such as the London Stock Exchange Taurus system and the London Ambulance Service dispatch system, coupled with mounting pressure to reduce the risks associated with the development of systems, has resulted in a plethora of writings. Most of these, however, are either reviews of specific cases of failure, or grey literature surrounding a failure incident.

Holmes and Poulymenakou22 argue that in order to meet effectively the challenges that systems failure presents for professionals and users alike, we need a framework of understanding which is sufficiently rich and expressive to make the most of our knowledge and experience concerning the relationship of information technology with its organisational context; in other words, we need an information systems perspective on information systems failure. In this paper, we explore how IS evaluation can contribute in this area.

The evaluation of information systems has moved a long way from being treated as a one-off process carried out either before systems analysis or immediately after delivery. The perception of evaluation as a 'hard' topic concerned predominantly with the assignment of monetary values to IT investments and the benchmarking of the technical component has changed. Serafeimidis and Smithson45, in their review of the evolution of evaluation practices, argue that the recognition that IT-related changes do not deliver any benefits to organisations and their participants without proper management has shifted the focus of evaluation towards a continuous process.

In this paper we review some of the important characteristics of information systems failure as an organisational topic. We look at the nature of failure phenomena from a socio-technical perspective, and we identify the factors that contribute to or prevent the failure of a system in a particular organisational context. We argue that the socio-technical nature of failure and its situational elements are important issues addressed by information systems evaluation practices. Therefore, we see an opportunity for evaluation to play a significant role in the analysis and mitigation of information systems failure. In particular, we argue that properly conceived and implemented evaluation practices could help in the early diagnosis of potential failures. Furthermore, we see evaluation as an opportunity for setting up appropriate support systems and for supplying meaningful feedback to both information systems and business professionals.

Widespread evaluation practices, however, tend to disregard the true nature of information systems. In contrast, the approach that we recommend is concerned with a deep understanding and interpretation of the way evaluation takes place in practice, viewing it as a dynamic socio-political process within multi-level social and organisational contexts and considering a variety of values. This conception of evaluation recognises the fundamental role of the contextual elements (e.g. stakeholders) within which the organisation operates, and attempts to guarantee a constant delivery of benefits through the continuous pursuit of tangible and, particularly, intangible benefits, supported by rigorous management of the associated risks.

A new understanding: Social and organisational considerations of systems failure present challenges for evaluation

Information systems can be perceived to fail in three different ways: during development, at the stage of introduction to the users' organisation (implementation), or at some point during their operation. In evaluation terms, failure is typically recognised either immediately after the commissioning of the project, or once the initial investment in development has been approved and funds have been absorbed to some extent, or once the project has reached the stage where implementation costs have started accruing, in which case project cost may already have overshot the budget. More perversely, failure may also occur once all expenditure has been incurred and still no monetary or other benefits have been realised, or when the system does not meet the desired technical specifications.

Treating the failure of an information system as a technology issue is narrowly conceived and in several cases does not fully explain failure phenomena. For example, in the London Ambulance Service case, the computer-based part of the system was functioning correctly, and yet the information system around it collapsed. We therefore need to address information systems failure as a socio-technical topic, which allows us to treat failure as a combination of events and factors that are organisational, social, as well as technical in nature.

Holmes and Poulymenakou22 have reviewed several areas of information systems that have been studied and developed under such a perspective, such as socio-technical issues in systems engineering, phenomena related to resistance to change, and the relationship of systems development practices to organisational change. Furthermore, they have developed a framework for studying failure in which they argue that the mitigation of failure is strongly linked to the promotion of organisational learning. They argue that such learning is promoted by three activities related to information systems: project management, the management of change during systems implementation, and information systems evaluation.

In this paper we adopt a rich interpretive framework23,54 to discuss how evaluation can be employed to assist in the mitigation of failure. The relation between failure and evaluation is reviewed from three perspectives: the context of systems practices, the processes underlying the techniques employed to improve these practices, and the content to which these techniques are applied (i.e. which aspects of information systems they target - human, technical, business impact, organisational, etc.).

The social and organisational dimension of failure

Information technology today is treated as the vehicle for implementing change within organisations.60 However, achieving change requires much more than technology. Certainly, information technology often acts as the main driving force,12 but it is the ability to accept and foster change which is the overriding factor, and this is often embedded in the cultural dimension of organisations.40,60 Organisations often experience difficulties in managing change through their systems projects because they fail to understand the role of intangible benefits, which are sometimes the only type of benefit to flow from an information system. Quoting a CBI survey, Merrill29 indicated that only a third of managers could identify gains in productivity because of information technology; 38% did not even know which criteria they used to quantify these benefits and half were less than satisfied with the performance of their systems. Moreover, Hochstrasser and Griffiths21 argue that 84% of the companies they studied invest in IT without using systematic methods to calculate either the true costs or the benefits of that investment. Similar difficulties in evaluating information technology investments have been highlighted by Farbey et al.15, Lang26 and Willcocks.56

With respect to the perception of information systems projects, Earl12 contrasts successful and unsuccessful change projects. In the successful ones, where information technology was seen as the key enabler, it was perceived as a means to an end, not the end itself. In the unsuccessful ones, the project was managed as a technology project, with the IT director having much to lose. Earl12 concludes that an IT project can often lead to failure because of the lack of human considerations. Similarly, Roos38 believes that it is the lack of consideration of the human resource elements that leads to failure. A study at two General Motors plants found that, despite adopting radically different strategies to automation, both plants were highly unsuccessful in implementing change.

It seems that although people are a major component of information systems development projects, they are often regarded as secondary to technology. The re-evaluation of information systems in socio-technical terms allows us to consider factors affecting the success or failure of such systems that would otherwise not be addressed. As with the socio-technical school in organisational theory and behaviour,52 we seek to combine elements of technical and social determinism in our perception of systems. Technical determinism may, in part, be embedded within the psyche of systems developers who, because of the assumptions they hold about technology, tend to resolve problems believing that the organisational and political dimensions do not exist.7 Most of the techniques employed for systems development precipitate this problem in so far as the rigidity and bureaucracy implicit within these techniques may become imposed upon the organisation's social system.1

The social nature of information systems evaluation

The practice of IS evaluation takes a formal-rational view of organisations, and sees evaluation as a largely quantitative process of calculating the preferred choice and evaluating the likely costs and benefits on the basis of clearly defined criteria. Most evaluation approaches focus on the technical components of the system and are based on variables such as lines of code, system throughput, or mean time between failures.16 Although Hirschheim and Smithson19 are critical of these approaches to evaluation based on quantification of the tangible elements and largely technical criteria, they note that the results of formal evaluation studies of this type often carry considerable legitimacy. This argument is also supported by Currie6 who argues that intangible and qualitative elements of IT are not acceptable as justification to those in authority. However, Powell37 argues that intangibles are increasingly being used in support of project proposals. Hirschheim and Smithson19 argue that placing emphasis on the technical characteristics of the system rather than its human or social aspects can have major negative consequences for the system developed, with respect to individual aspects such as user satisfaction, but also broader organisational consequences in terms of system value and success.
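To make the contrast concrete, the sketch below (in Python, using invented failure timestamps) illustrates the kind of narrowly technical measure, here mean time between failures, on which such formal-rational evaluations typically rest; as the remainder of this section argues, measures of this kind say nothing about the social and organisational dimensions of a system.

    # A minimal, hypothetical sketch of a narrowly technical evaluation metric:
    # mean time between failures (MTBF). The incident timestamps are invented
    # purely for illustration.
    from datetime import datetime

    failure_times = [                      # hypothetical incident log
        datetime(1995, 1, 3, 9, 30),
        datetime(1995, 2, 14, 16, 5),
        datetime(1995, 4, 2, 11, 45),
    ]

    def mean_time_between_failures(times):
        """Average elapsed time between consecutive recorded failures, in hours."""
        gaps = [(b - a).total_seconds() / 3600 for a, b in zip(times, times[1:])]
        return sum(gaps) / len(gaps)

    print(f"MTBF: {mean_time_between_failures(failure_times):.1f} hours")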

Evaluation is a highly subjective and politically sensitive decision-making process. This argument is supported by the findings of Hochstrasser and Griffiths21 who concluded that the greater the expense and strategic importance of an information system, the less relevant the formal evaluation methods used. Furthermore, Symons and Walsham49 argue that when a formal analysis is carried out it is more likely to be a symbolic expression of a belief in rational management than a trusted aid to decision-making. Grindley17 concludes that in 83% of cases the cost-benefit analyses used to support IT investment proposals are basically a fiction. It seems that systems proponents either try to avoid formal justification procedures or manipulate the proposal in such a way as to squeeze it through the least rigorous procedures.

In this paper we adopt a conception of information systems as social systems, which considers the interaction of people who carry out particular tasks and who are influenced by subtle individual, behavioural, social and organisational pressures. In order to gain a deeper understanding of the organisational situation surrounding the development and use of an information system, we need to introduce two major changes in the evaluation perspective: we need to target social and organisational characteristics of systems and review how these interact with technical characteristics of the software, and we need to interpret systems in context. Hirschheim and Smithson19 and Walsham54 propose an interpretive approach to evaluation research as a way of gaining a deeper understanding of the nature and process of evaluation itself, including the social groups who are affected by the computer-based IS. In our current conceptions of organisations as transformed by IT, we would include among those affected by the system social groups that may reside outside the boundaries of the traditional organisation.

Situational considerations of failure and their implications for evaluation practices

Holmes and Poulymenakou22 argue that although there are some generic attributes of information systems failure, a socio-technical approach implies that failure is highly situationally specific. The perspective here can be similar to the one developed for other systems development practices.35 Failure of a system is treated as a situationally sensitive issue, contingent upon the organisation, its employees and culture, as well as the external markets and environments in which the organisation must operate. Holmes and Poulymenakou22 identified a number of situational elements for failure (see Table 1). In the next section, we discuss the nature of each of these elements, some of their possible interconnections, and their relation to evaluation.

In our view, evaluation is highly situational as well. The nature and focus of evaluation vary considerably depending on the stage of the systems life cycle at which evaluation is carried out. Farbey et al.15 believe that such evaluations should map onto the traditional life cycle and, taking this further, we believe they should provide a key to understanding and learning, particularly where failure occurs.

Many authors15,20,53 have attempted to classify IT projects according to different characteristics in order to assist practitioners in anticipating outcomes and identifying the appropriate measures of performance associated with the IT components. In the absence of any widely agreed consensus, we argue that for evaluation purposes the role projects play in the business and the contribution they are expected to make should be the key parameters for such a classification. Therefore, organisations should develop their own classification based on their business strategies and missions.

A wide range of evaluation methods and techniques has been discussed in the literature.15,19,33,36,48,55 Alignment and matching of evaluation techniques to situations is therefore important, because of the range of techniques available and the consensus that no single technique is valid in every evaluation circumstance.15,21,42,50 Some authors, such as Farbey et al.14 and Willcocks,55 provide mechanisms to match evaluation methods with project classifications.

The contingency approach to failure and evaluation emphasises their strong situational component, extends the search for explanations and meaningful actions beyond the narrow realms of technology, and thus makes it possible to identify and analyse factors contributing to failure in their context. It follows from what we have discussed thus far that the contingency approach to treating failure through evaluation has to consider the following: the timing of evaluation along a project life cycle, an organisation-specific treatment of projects, and the configuration of evaluation methods appropriate for a particular situation. In the section below we look at the concepts that we need to use to guide decision making in these three directions.

Relating elements of failure to concepts in evaluation

In this section we propose a conceptual framework which allows us to link failure to evaluation and to discuss them together. This framework expands the traditional narrow approach of identifying and quantifying the tangible costs and benefits of an IT investment and introduces a multiple-perspective approach which takes into account the factors that we have analysed thus far, namely organisational values, social structures, potential outcomes and the associated risks. The work we present here (summarised in Table 1) builds on a broader conceptualisation of the linkages between the content, process, and context of evaluation and their interactions developed and applied by Serafeimidis and Smithson.43,44 We have made two important additions to the initial framework: the first is an elements-of-failure dimension alongside the evaluation dimension within context, process and content; the second is the concept of history as a distinct area covered by the framework.


Perspective: Context
  Evaluation issues:   Stakeholders' identification; organisational elements (strategy and planning); business process modelling; investment culture; environmental elements; competing and linked projects
  Elements of failure: Approaches to the conception of systems; IS development issues (e.g. user involvement); systems planning; organisational roles of IS professionals; organisational politics; organisational culture; skill resources

Perspective: Process
  Evaluation issues:   Selection and application of methods and techniques (e.g. CBA, ROI); benefits management; risk management
  Elements of failure: Development practices (e.g. participation); management of change through IT; project management

Perspective: Content
  Evaluation issues:   Evaluation goals; value tracking; metrics and measurement; organisational targets/objectives; projects' classification
  Elements of failure: Monetary impact of failure; 'soft' and 'hard' perceptions of technology; systems accountability; project risk

Perspective: History
  Evaluation issues:   Previous appraisal exercises; familiarity with the use of methods and techniques; prior experience in benefits management; prior experience in dealing with risks
  Elements of failure: Prior experience with IT; prior experience with developing methods; 'faith' in technology; skills, attitude to risk


Table 1 Failure and evaluation of information systems: a conceptual map

We see this framework as serving two goals: it supports the development of a deeper understanding of how issues addressed by evaluation relate to failure phenomena, and it is meant as a baseline for supporting organisational decision making and action in the area of evaluation, specifically for the mitigation of failure. In this section we explore the understanding function of this framework, while in the next section we make some recommendations for organisational action.

Context is concerned with the multi-level identification of the various systems and structures within which the organisation is located. The concept of context has a static flavour, whereas organisational affairs are in a constant state of flux and change. The context may be an external one (e.g. the social, political, economic or competitive environment in which an organisation operates) or internal (e.g. the structure, corporate culture, or political context within the organisation42). Various stakeholder groups, either internal or external, should be identified.

From a failure perspective, context embodies the perceptions and expectations of stakeholders (either individuals or interest groups) in a system. Context defines the organisational culture and the politics of power that develop around or affect IS projects. We will illustrate this argument by referring to the treatment of user participation and evaluation itself.

The involvement of users in systems development may be seen as an opportunity to empower users and enable them to develop a more meaningful organisational presence, but it can also be viewed as a process of coercing users into giving their approval to the proposed system. From an evaluation point of view, participation is important for uncovering the organisational and individual value systems that give rise to perceptions and expectations from an information system.


Organisational activity           Methods
Evaluation practices              e.g. Value Analysis, SESAME, Information Economics
Investment appraisal practices    Discounted cash flow (e.g. CBA, ROI)
Benefits management approaches    None available
IS development                    Participative (e.g. ETHICS); structured (e.g. SSADM); prototyping (e.g. RAD)
Project management                e.g. PRINCE
Risk management                   e.g. RISKMAN



Table 2 The process layer: Examples of methods available for different classes of organisational activities
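As an illustration of the investment appraisal methods listed in Table 2, the sketch below (in Python, with entirely hypothetical figures) shows the arithmetic behind a discounted cash flow appraisal: a net present value and a simple return on investment calculation. As argued throughout this paper, such figures are only one ingredient of a meaningful evaluation.

    # A hedged sketch of discounted cash flow appraisal (cf. CBA, ROI in Table 2).
    # The investment cost, yearly net benefits and discount rate are hypothetical.

    def net_present_value(initial_cost, annual_net_benefits, discount_rate):
        """Subtract the up-front cost and add each year's net benefit discounted to year 0."""
        npv = -initial_cost
        for year, benefit in enumerate(annual_net_benefits, start=1):
            npv += benefit / (1 + discount_rate) ** year
        return npv

    def simple_roi(initial_cost, annual_net_benefits):
        """Undiscounted return on investment over the whole period."""
        return (sum(annual_net_benefits) - initial_cost) / initial_cost

    cost = 250_000                                   # hypothetical up-front IT investment
    benefits = [60_000, 90_000, 110_000, 110_000]    # hypothetical yearly net benefits
    print(f"NPV at 10%: {net_present_value(cost, benefits, 0.10):,.0f}")
    print(f"Simple ROI: {simple_roi(cost, benefits):.0%}")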

Symons48, who makes a similar point, argues that it is important to differentiate between the concerns and influence of individuals vis-a-vis those of groups in the evaluation process. In addition, she argues that the evaluation process should be regarded as a means to encourage the involvement and commitment of stakeholders. We see this as a vehicle for promoting organisational learning. The promotion of organisational learning, however, requires the use of formative, multi-objective, multi-criteria evaluation methods15 that extend the search for appropriate measures and indicators far beyond the monetary value of the investment.

Politics, however, can also affect the way the process of evaluation itself is treated within an organisation. Walsham54 argues that evaluation can easily lose substance and become a ritual in the power play within an organisation. Ritualistic evaluation may be a way of supporting powerful interests and a device to suppress the less powerful in organisational terms. Ritual may prevent the identification of problems early in a project, or it may be used as a weapon for gaining legitimacy for particular decisions and actions. Finally, ritualistic evaluation in the context of IS can also be a major hindrance to innovative organisational change. We cannot make value judgements about such phenomena; we can only identify them and alert stakeholders to the potential for their occurrence. Each case needs to be considered on its own merits.

The process layer concerns the way in which evaluation is carried out (the techniques and methods used) and, furthermore, the relationship of failure to other methods and techniques, such as development methods, and the way the process plays itself out over time. It includes assessments by managers, IS professionals and users at all stages of IS development and operation. It is very important that a means of communication with every level of the organisation is established in order to achieve organisational and individual learning.

The framework we propose links practices with the reasons that lead to the success or failure of these practices in an organisational context. We can identify several areas of activity (see Table 2) that can contribute to the success or failure of an information system. These activities would be accommodated in the process layer. The reasons why such practices 'work' in some cases while they do not 'work' in others, however, need to be sought by looking at the context, process and history layers. We will illustrate this by looking at an example of a 'failed' project.


Issue                             Relevant approaches
Investigation of value systems    Formal technological methods16; business values2,12; organisational change25; socio-technical systems30
Measurement of outcomes           Metrics10,31,41; timing3; compliance with requirements45
Uncertainty                       Risk analysis4,59; risk management4



Table 3 The content layer: Issues and relevant approaches

A particular organisation invested in developing a suite of IT investment appraisal methods along sound academic principles and following ample consultation with its users. The implementation of these methods did not meet the expectations of the stakeholders, however, because of factors such as the way the approach was introduced in the workplace, the level of senior management support, constraints from previous organisational practices, current organisational practices in related areas, and so on.44

In more general terms, we see in the process layer an interdependency between the practices that it is possible to adopt in each of the areas identified above and the constraints imposed by the context of their application. Failure then manifests itself as the adoption of inappropriate methods, inflexibility in the way methods and tools are applied to the problem at hand, and a short-sighted review of the potential opportunities, benefits and risks of the particular application. We review below how a careful examination of the content allows decision makers to avoid such misconceptions.

The content refers to the values and criteria to be considered and to what should be measured. It is here that it is particularly important to look beyond the narrow quantification of costs and benefits to an analysis of the opportunities presented by IT, together with the potential constraints on its application. The content emphasises the values and risks of IS and their contribution to business objectives and organisational efficiency. These include the linkage to organisational goals and a consideration of the implementation process. We have identified three issues related to the content for failure and evaluation: the values of IT systems, the measurement of outcomes, and the associated uncertainty factors. The approaches in information systems that are relevant for each of these issues are summarised in Table 3.

A historical understanding of all the above conceptual elements is necessary because IT-related changes and their evaluation or failure evolve over time and, at any particular point, present a series of constraints and opportunities shaped by previous history. We propose two conceptions of history: either we treat it as a narrative succession of significant events, or as a proximate (immediate) history of social phenomena27 together with the effects that their long-term history has on organisational practices.34

Desiderata for action: An evaluation life cycle to mitigate information systems failure

The conceptual framework we presented in the previous section suggests that the culture surrounding evaluation practices needs to change if these are to contribute significantly to the analysis and mitigation of systems failure. In this section we propose a life cycle of evaluation practices which follows closely that of an information system. Information systems can be evaluated at various stages of this life cycle, including the feasibility study, ongoing evaluations to provide feedback during the design and development process, and post-implementation evaluation.

Our proposals take into consideration the social and organisational context of systems and recognise that IS are frequently used to enhance organisational performance without necessarily any reduction in costs, and produce benefits which are often intangible, uncertain, and extremely difficult to quantify in bottom-line terms. Sometimes it is more relevant to examine intermediate outputs based on time, quality, cost and flexibility, which are more directly linked to IT than profit or other financial measures of performance.2,41 Some costs are fairly clear but others are less obvious, and there may be insidious effects such as a deskilling of work or a decline in job satisfaction.

A timeline for evaluation activities

We view evaluation as a dynamic process whose nature, focus and purpose change according to the stage within the life cycle. It is important to stress that the conceptual framework discussed above is a conceptual platform for consideration during every stage; what changes is the focus and the relative importance of its different elements.

In ex-ante terms, or during the early stages of the systems life cycle, management should be most concerned with defining the broader context (organisational and environmental) from which opportunities and constraints for IT investments will derive. It is important to identify the relevant viewpoints (stakeholders' views) which will be adopted, and to identify the potential beneficiaries as well as potential alternatives and the risk factors associated with them.15,58

At this point the main concern is the determination of the desired, expected, and accepted values that the IT investment attempts to achieve, as well as the ways in which these are expected to be realised and 'measured'. The need for the explicit identification of clear corporate business goals and objectives is evident, as a starting point for measuring the contribution of the investment to the success of the organisation and for the formulation of an adequate business strategy and implementation plan. The business objectives should always be the source of the requirements that the investments try to meet and, at the same time, the appraisal or evaluation process and benefits management should facilitate measurement to determine the extent of success in achieving them. As Parker et al.32 argued, the use of IT should be directly linked to its impact on business performance in order to be a powerful tool with which management can improve economic performance and thus the overall strength and viability of the organisation.

At a more detailed level, such an approach should support the comprehensive search for benefits as well as identify and explain the links between particular causes and effects. The latter is especially useful in accounting for the achievement of unexpected benefits and the failure to achieve forecast benefits. An increased understanding of the technology-business relationship can be fed into any business process re-engineering exercise. In addition, throwing further light on the evaluation aspects of this relationship involves an appreciation of wider management processes (such as the investment life cycle) and the organisational investment culture.46

In ex-post evaluation the aim is to identify the costs incurred and the benefits achieved, and to determine the extent to which these were the outcome of the changes under consideration; in other words, whether the delivered system meets the initial requirements (usually technical) and objectives. The problem then becomes much more one of measurement, of determining the precise impact of the system on the business processes and objectives. As we have argued before, there is a time lag between the delivery of an IT system and the delivery of its benefits - see also Brynjolfsson3. Therefore, an evaluation at this stage cannot provide much information beyond the performance of the technical IT component.

The traditional notion that post-implementation evaluation is a one-off exercise was supported by two surveys. Kumar24 described the actual practice of post-implementation evaluation of computer-based IS in over 90 US companies. He concluded that, where it was conducted, the primary use of evaluation was as a disengagement device for the systems development department. A similar conclusion derives from Willcocks and Lester's58 survey of 50 UK organisations. According to Walsham54 this type of evaluation is ritualistic rather than substantive. The continuous evaluation approach we discuss below aims to turn ritual into substance.

Both ex-ante and ex-post evaluation offer only 'snapshots' of the organisational impact of the system. They are time-sensitive findings whose value decreases rapidly as we move away from the point and context at which they were produced. We believe that the techniques we have outlined thus far, however useful and widely practised, do not allow a close enough monitoring of the information system to prevent or minimise the effects of failure. We propose a different treatment of evaluation, one which turns it into a monitoring mechanism and which allows the provision of context- and time-sensitive feedback to the information system's stakeholders. We explain why such feedback is necessary for timely and meaningful action, for benefits realisation and for organisational learning.

Continuous evaluation

Willcocks and Lester58 found in their survey that 80% of organisations had abandoned projects at some time during the systems development stage because of negative evaluation results. The major reasons given related to changing organisational or user needs and/or the project going over budget. On a second analysis of the data they found that very few organisations had actually considered these objectives in the early stages. It follows that IS evaluation practices need to 'keep an eye' on the organisational situation and monitor closely changes that may affect the content, process and context of systems.

Turbulence in business and technological environments necessitates the treatment of IS evaluation as a continuous process, with reviews at regular intervals either during the implementation stage or after a system's delivery. Without regular re-evaluation, especially during the operational life of a system, potential benefits can evaporate as investments slip out of control, the context shifts and risks are realised as problems. We propose a three-tier strategy for continuous evaluation which we see as better suited to the prevention and mitigation of failure. The elements of this strategy are: supporting the realisation of benefits from an information system, adopting a more conscientious approach to risk management, and facilitating organisational learning through systems practice.

Benefits management is concerned with closing the life cycle loop to ensure that benefits are realised. This goes beyond a one-off post-implementation evaluation exercise, which is itself often overlooked in practice. The focus should be on performance measurement and continuous improvement throughout the operational life of the system. Usually the people involved in the original investment proposal and its implementation are not the people responsible for the realisation of the benefits which the system is expected to deliver. IT projects rarely achieve bottom-line profits in their own right; rather, they enable business areas to achieve them. Benefits management is needed to ensure that the successful completion of a project will actually provide gains to the stakeholders. A critical part of this process is defining a set of milestones and measures, as well as the stakeholders responsible for them, that enables a firm to monitor and control benefits delivery. These metrics should be clear and unambiguous, covering all the benefits (financial and non-financial). The need for benefits delivery planning is well supported by the observation that new technologies may not have an immediate impact on organisations: Brynjolfsson3 found lags of two to three years before the strongest organisational impacts of IT are felt.
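By way of illustration, the sketch below (in Python, with hypothetical benefits, owners and figures) shows one possible form of such a benefits register: each expected benefit carries a responsible stakeholder, a milestone date, and a target and actual value for its chosen measure, so that shortfalls in benefits delivery become visible during the operational life of the system.

    # A minimal, hypothetical sketch of a benefits register of the kind implied
    # in the text. All names, dates and figures are invented for illustration.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Benefit:
        description: str          # the expected benefit, financial or non-financial
        owner: str                # stakeholder responsible for its realisation
        milestone: date           # when delivery of the benefit is next reviewed
        target: float             # planned value of the chosen measure
        actual: float = 0.0       # measured value at the latest review

        def shortfall(self) -> float:
            """How far the measured value still falls short of the target."""
            return max(self.target - self.actual, 0.0)

    register = [
        Benefit("Hours saved per order processed", "Operations manager",
                date(1996, 6, 30), target=4.0, actual=2.5),
        Benefit("Annual savings in clerical effort (GBP)", "Finance director",
                date(1996, 12, 31), target=80_000, actual=30_000),
    ]

    for b in register:
        print(f"{b.description}: shortfall {b.shortfall():,.1f} (owner: {b.owner})")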

Most organisations are ready to acknowledge risk as a major factor in projects or systems that fail. Very few organisations, however, include the management of risk as an explicit activity in their systems practices, or demand that it is included in the practices of the service providers they call in to build their systems. The management of risk entails three main areas of action: risk identification, risk analysis and risk mitigation.51 The identification of risk should be an organisational practice open to every stakeholder in a particular system. Techniques similar to those used for structuring problems39 can be applicable at this stage. The analysis of risk is often ill-advisedly narrowed down to assigning a monetary value and a probability to the occurrence of a particular event. A broader conception of risk needs to be adopted to understand failure, as we have discussed in the previous section. For example, risks do not only embody the cost of fixing problems; they also include the obstacles causing the non-realisation of the expected benefits from a system. Therefore, qualitative approaches are also required in risk analysis to complement the quantitative methods usually applied. Qualitative methods are primarily focused on the creation of a wider awareness of the concept of 'risk' in systems practices, thereby encouraging stakeholders to make their own, individual preparations in view of project risks. For example, scenario planning8,9 may be applied to trace the chain of events that might follow the realisation of a particular configuration of risks in a project. Qualitative methods definitely have a role to play,59 but they should support rather than drive the risk analysis process. The mitigation of risk can only be conceived as a continuous process. In the early stages of a project, this appears as contingency planning, i.e. preparation for different project eventualities, while later efforts concentrate on the transfer of risks across project areas.4
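As an illustration, the sketch below (in Python, with invented risks and figures) shows the quantitative core of such an analysis, the expected monetary exposure of each risk (probability multiplied by impact), alongside the kind of qualitative annotation, such as a benefit that may fail to materialise, that we argue must complement it.

    # A minimal, hypothetical sketch of risk analysis combining the usual
    # quantitative core (probability x monetary impact) with qualitative notes.
    # All risks, probabilities and figures are invented for illustration.
    risks = [
        # (description, probability, monetary impact, qualitative note)
        ("Key supplier fails to deliver a module", 0.20, 120_000,
         "would also delay the expected customer-service benefits"),
        ("Users reject the new working practices", 0.35, 60_000,
         "benefit non-realisation; mitigation needs participation, not money"),
    ]

    for description, probability, impact, note in risks:
        exposure = probability * impact          # expected monetary loss
        print(f"{description}: exposure {exposure:,.0f}  [{note}]")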

Organisational learning

In the face of continuous and radical change, organisations have come to rely heavily on a very different set of resources from those of thirty years ago. As Zuboff61 argues, in the informated organisation of the 1990s, 'learning is the new form of labour and the ultimate source of value added.' The new organisational paradigm is characterised by changes in intellectual skills (knowledge resources), roles and authority, and by the development of structures and systems that foster learning. Thus, IS should be viewed holistically, in terms of business purpose and a vision of the organisation's mission and goals, and not in terms of individual projects. In terms of evaluation, this requires a better understanding of the interaction between the technology and the underlying business processes within a particular organisational context.19 Evaluation becomes an essential tool for organisational learning and development as it serves as a feedback process.11,13,15,18,54 This feedback helps trace and understand the underlying factors leading to failure, according to the framework we have discussed in the previous section. A strength of the process layer of the conceptual framework proposed above is that it draws attention to evaluation as a (group) learning process, mediating between content and context. It is there that an IS evaluator can consciously attempt to create and support an evaluation climate within which learning can flourish; the values to promote include the legitimacy of all assessments in the evaluation discourse, the recognition that everybody is a learner, and the acceptance that moral issues can be debated.

While Willcocks56 reports a strong correlation between IS control and measurement and IS success, Hochstrasser and Griffiths21 and Strassman47 note the lack of correlation between IT expenditure and company performance. Brynjolfsson3 argues that one of the reasons for the lags in IT benefits delivery lies in learning about IT: because of its unusual complexity and novelty, users require experience before becoming proficient, so learning curves are very important. Scott41 argues that organisational learning (and organisational commitment) could be used as alternative measures of the IT payoff. Furthermore, organisational learning can improve operational performance along four dimensions: time, cost, quality and flexibility. An evaluation exercise always provides an opportunity for personal appraisal and the sharing of ideas between individuals and interest groups, with the aim of generating consensus and thus commitment to the resulting proposals for action. This is further evidence of the value of evaluation as a learning process. The social context of an evaluation activity can sometimes be characterised as one of stakeholder conflict. Learning still takes place in an evaluation activity conducted under these conditions since, for example, the views of others are better understood when contrasted with one's own. However, unless the conflict arises from misunderstandings which can be resolved by the evaluation activity, a result may only be reached by non-consensual approaches such as majority vote.54

Conclusions

The main objective of this paper was to demonstrate the central role that information systems evaluation can play in the identification, mitigation and prevention of systems failure. We set off by explaining why information systems professionals and researchers need to view both failure and evaluation of information systems from a social and organisational perspective. Even when this need is understood, the practices of evaluation, as well as those related to systems development and implementation, seldom reflect such concerns. We have highlighted non-technical characteristics of failure that require us to expand the scope of evaluation practices. We have then identified the basic concepts and concerns that need to be addressed by evaluation in this context. We have taken a life cycle perspective of evaluation and have illustrated how these concerns fit within it. Our main finding is that evaluation should not be restricted to ex-ante and ex-post practices with respect to systems development, because most of the value added in this area is to be found in ongoing activities that follow closely developments in the construction, implementation and use of an information system. The philosophical undercurrent of the arguments developed in this paper is a systemic conception of organisations as embodying human activity systems.5 Such systems can only survive by promoting a continuous interplay between doing and learning.

References

  1. I.O. Angell, and S. Smithson, Information Systems Management - Opportunities and Risks, Macmillan, 1991.
  2. J.Y. Bakos, and C.F. Kemerer, "Recent applications of economic theory in Information Technology research," Decision Support Systems, Vol.8, 1992.
  3. E. Brynjolfsson, "The productivity paradox of information technology," Communications of the ACM, Vol.36, No.12, Dec., 1993, pp. 67-77.
  4. R.N. Charette, Software Engineering Risk Analysis and Management, McGraw Hill, New York, 1989.
  5. P. Checkland, Systems Thinking, Systems Practice, John Wiley, 1981.
  6. W.L. Currie, "The art of justifying new technology to top management," Omega, Vol.17, No.5, 1989, pp. 409-418.
  7. T.H. Davenport, R.G. Eccles, and L. Prusak, "Information politics," Sloan Management Review, Fall 1992.
  8. A.P. de Geus, "Planning as Learning," Harvard Business Review, March-April, 1988, pp. 70-74.
  9. A.P. de Geus, "Modelling to predict or to learn?," European Journal of Operational Research, Vol.59, 1992, pp. 1-5.
  10. W.H. DeLone, and E.R. McLean, "Information Systems Success: The Quest of the Dependent Variable," Information Systems Research, Vol.3, March, 1992, pp. 60-95.
  11. M.J. Earl, Management Strategies for Information Technology. Prentice Hall, 1989.
  12. M.J. Earl, "Putting IT in its place: a polemic for the nineties," Journal of Information Technology, Vol.7, 1992.
  13. P. Etzerodt, and K.H. Madsen, "Information Systems Assessment as a Learning Process," in Information Systems Assessment: Issues and Challenges, N. Bjorn-Andersen and G.B. Davis (eds) North Holland, Amsterdam, 1988, pp. 333-345.
  14. B. Farbey, F. Land, D. Targett, "Evaluating investments in IT," Journal of Information Technology, Vol.7, 1992.
  15. B. Farbey, F. Land, and D. Targett, How to Assess your IT Investment. A study of Methods and Practice, Butterworth Heinemann, Oxford, 1993.
  16. N. Fenton, "How effective are software engineering methods?," Journal of Systems and Software, Vol.22, No.2, August, 1993.
  17. K. Grindley, Managing IT at Board Level. The Hidden Agenda Exposed, Pitman, 1991.
  18. R. Hirschheim and S. Smithson, "Information Systems Evaluation: Myth and Reality," in Information Analysis Selecting Readings, R. Galliers (ed), Addison Wesley, 1987.
  19. R. Hirschheim and S. Smithson, "A critical analysis of information systems evaluation," in Information Systems Assessment: Issues and Challenges, N. Bjorn-Andersen and G.B. Davis (eds) North Holland, Amsterdam, 1988, pp. 17-37.
  20. B. Hochstrasser, "Evaluating IT investments. Matching Techniques to Project", Journal of Information Technology, Vol.5, No.4, Dec., 1990, pp. 215-221.
  21. B. Hochstrasser and C. Griffiths, Controlling IT Investments. Strategy and Management, Chapman & Hall, 1991.
  22. A. Holmes and A. Poulymenakou, "Towards a conceptual framework for investigating IS failure," in Proceedings of the 3rd European Conference on Information Systems, Athens, Greece, June, 1995.
  23. J. Iivari, "Assessing IS design methodologies as methods of IS assessment," in Information Systems Assessment: Issues and Challenges, N. Bjorn-Andersen and G.B. Davis (eds) North Holland, Amsterdam, 1988.
  24. K. Kumar, "Post implementation evaluation of computer-based IS: current practices," Communications of the ACM, Vol.33, No.2, 1990, pp. 203-212.
  25. F. Land, "Adapting to changing user requirements," Information and Management, Vol.5, 1982, pp. 59-75.
  26. M. Lang, "Evaluating IT investments," IBM System User, Vol.15, No.3, March 1994.
  27. D. Layder, New Strategies in Social Research. Polity Press, 1993.
  28. K. Lyytinen and R. Hirschheim, "Information systems failures - a survey and classification of the empirical literature," in Oxford Surveys in Information Technology, Vol.4, 1987, pp. 257-309.
  29. G. Merrill, "Uncertainty calls for brave decisions," Management Consultancy, June 1993.
  30. E. Mumford, Designing Human Systems: The ETHICS Method, Manchester Business School, 1983.
  31. M.E. Nissen, "Valuing IT through virtual process measurement," in Proceedings of the 15th International Conference on Information Systems, Vancouver, Canada, Dec, 1994, pp. 309-323.
  32. M.M. Parker, R.J. Benson, and H.E. Trainor, Information Economics: Linking Business Performance to Information Technology, Prentice-Hall, New Jersey, 1988.
  33. G. Peters, "Beyond strategy - benefits identification and management of specific IT investments," Journal of Information Technology, Vol.5, No.4, 1990, pp. 205-214.
  34. A. Pettigrew, The Awakening Giant: Continuity and Change in ICI. Blackwell, Oxford, 1985.
  35. A. Poulymenakou, "A contingency approach to knowledge acquisition: Critical factors for knowledge based systems development," in Proceedings of the third annual symposium of the international association of knowledge engineers, Washington DC, Nov., 1992.
  36. P. Powell, "Information Technology Evaluation: Is It Different?," Journal of Operational Research Society, Vol.43, No.1, 1992, pp. 29-42.
  37. P. Powell, "Information Technology and Business Strategy: A Synthesis of the Case for Reverse Causality," in Proceedings of the 13th International Conference on Information Systems, Dallas, Texas, ICIS, Dec. 1992, pp. 71-80.
  38. H.T. Roos, "Managing technological change," The Computer Conference Analysis Newsletter, No.325, Sept. 1993.
  39. J. Rosenhead (ed), Rational Analysis for a Problematic World. John Wiley, 1990.
  40. C. Sauer, Why Information Systems Fail: A Case Study Approach., Alfred Waller, UK, 1993.
  41. J. Scott, "The link between organisational learning and the business value of information technology," in Proceedings of the 15th ICIS Doctoral Consortium, Vancouver Island, Canada, Dec. 1994.
  42. M.S. Scott Morton (ed), The corporation of the 1990s. Information Technology and Organizational Transformation, Oxford University Press, 1991.
  43. V. Serafeimidis and S. Smithson, "Evaluation of IS/IT Investments: Understanding and Support," in Proceedings of The First European Conference on Information Technology Investment Evaluation, A. Brown and D. Remenyi (eds), Henley Management College, UK, Sept. 1994.
  44. V. Serafeimidis and S. Smithson, "The management of change for a rigorous appraisal of IT investment. The case of a UK insurance organisation," in Proceedings of The 3rd European Conference on Information Systems, Athens, Greece, June, 1995.
  45. V. Serafeimidis and S. Smithson, "Requirements for an IT Investment Appraisal Framework for the 1990s: Towards a More Rigorous Solution," in Proceedings of The Second European Conference on Information Technology Investment Evaluation, Henley Management College, UK, July, 1995.
  46. D.J. Silk, Planning IT. Creating an information management strategy, Butterworth Heinemann, 1991.
  47. P. Strassman, Information Payoff: The Transformation of Work in the Electronic Age, Free Press, New York, 1985.
  48. V.J. Symons, "A review of information systems evaluation: content, context and process," European Journal of Information Systems, Vol.1, No.3, Aug., 1991, pp. 205-212.
  49. S. Symons and G. Walsham, "The evaluation of IS: a critique," Journal of Applied Systems Analysis, Vol.15, 1988, pp. 119-132.
  50. V.J. Symons and G. Walsham, "The evaluation of Information Systems: a critique," in The Economics of Information Systems and Software, R. Veryard (ed) Butterworth-Heinemann Ltd, 1991, pp. 71-88.
  51. R.H. Thayer and B.W. Boehm, Tutorial: Software Engineering Project Management, Computer Society Press of the Institute of Electrical and Electronics Engineers, Washington, 1988.
  52. E.L. Trist and K.W. Bamforth, "Some social and psychological consequences of the Longwall method of coal getting," Human Relations, Vol.1, 1951.
  53. J.M. Ward, "A portfolio approach into evaluating information systems investments and setting priorities," Journal of Information Technology, Vol.5, No.4, 1990, pp. 222-231.
  54. G. Walsham, Interpreting Information Systems in Organisations, John Wiley & Sons, Series in Information Systems, 1993.
  55. L. Willcocks, "Evaluating Information Technology investments: research findings and reappraisal," Journal of Information Systems, Vol.2, No.4, 1992, pp. 243-268.
  56. L. Willcocks, "Introduction: of capital importance," in Information Management. The evaluation of information systems investments, L. Willcocks (ed) Chapman & Hall, 1994, pp. 1-27.
  57. L. Willcocks and S. Lester, Evaluating the feasibility of Information Technology Investments, Oxford Institute of Information Management, Research and Discussion Papers, RDP93/1, 1993.
  58. L. Willcocks and S. Lester, Evaluation and Control of IS Investments. Recent UK Survey Evidence, Oxford Institute of Information Management, Research and Discussion Papers, RDP93/3, 1993.
  59. L. Willcocks and H. Margetts, "Risk and information systems: developing the analysis," in Information Management. The evaluation of information systems investments, L. Willcocks (ed) Chapman & Hall, 1994, pp. 207-227.
  60. D. Wilcox, "Wringing out the changes," Computing, July 7th, 1994.
  61. S. Zuboff, "Informate the Enterprise: An Agenda for the 21st Century," National Forum, Summer, 1991, pp. 3-7.

