Monday, May 30, 2011

Assessment and Acceptance of ICT (May. 24)

Having covered some of the main aspects of the connection between ICT and society and the participatory approaches for designing socio-technical systems, we now consider one last topic: how do we assess or evaluate the resulting ICT artifacts or information systems in order to anticipate (predict) or measure their success and/or acceptance? Assessment has long been a critical issue for ICT and information systems. Among other things, this is a result of suggestions or claims that ICT systems do not always work as expected. These claims belong to statistical or managerial accounts of IT project developments and their frequent ensuing failure, as in the so-called "software crisis" which began in the sixties, the Standish Group's Chaos Reports since the early nineties, and the pendulum-style attention devoted to the "IT-productivity paradox", which we have already discussed. In essence, the idea is that the majority of IT projects fail (in cost, in time, in actual usage, in benefits reported). But, as we have mentioned, many professionals and academics (most, of course, from the IT sector) have reacted against these reports or trends, arguing that they do not measure what needs to be measured, that they are not transparent, that they are not verifiable, that they are simplistic, or that they actually contribute to a negative attitude towards ICT (especially in management circles) rather than suggesting improvements. As a result, several models for measuring ICT or information systems success or acceptance have been developed in recent years.

At the most general level, measurements of the digital divide or of ICT development have evolved from purely access- or infrastructure-oriented indices to more comprehensive and multi-topical models that go beyond technological determinism. We covered this to some extent when discussing the digital divide, digital inclusion and the information/knowledge/network society (cf. Spangenberg, 2005). In that same vein, Barzilai-Nahon (2006) discusses some existing digital divide measurements and proposes an improved contextual index that contains more useful information and better relations among factors, so as to enable better policy-making. In order to build this index, she calls for the recognition of a specific level of analysis (determining whether the measurement is aimed at the individual, community, sector, national or international level) and for coherence between the structures of indices created at different levels (across levels the factors remain the same, but factor weights are adjusted depending on the context). In addition, the relationships (causality or correlations) must be carefully considered. As a consequence, she proposes an index for the digital divide constructed from the following interrelated factors: social and government constraints or support, affordability, use, infrastructure access, accessibility and sociodemographic factors. Beyond her specific proposal for a comprehensive digital divide index, her contribution also offers guidance for building any such measurement index.
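The kind of contextual index Barzilai-Nahon argues for can be sketched in a few lines of code. The factor names below follow her proposal, but the weights, the example scores and the two levels of analysis shown are purely illustrative assumptions, not values from her paper:

```python
# Sketch of a contextual digital divide index in the spirit of
# Barzilai-Nahon (2006). Factor names follow her proposal; the
# weights and example scores below are purely illustrative.

FACTORS = [
    "social_and_government_support",
    "affordability",
    "use",
    "infrastructure_access",
    "accessibility",
    "sociodemographic",
]

# Hypothetical weights per level of analysis: across levels the
# factors stay the same, but their weights are adjusted to context.
WEIGHTS = {
    "community": {
        "social_and_government_support": 0.10,
        "affordability": 0.25,
        "use": 0.20,
        "infrastructure_access": 0.20,
        "accessibility": 0.15,
        "sociodemographic": 0.10,
    },
    "national": {
        "social_and_government_support": 0.25,
        "affordability": 0.15,
        "use": 0.15,
        "infrastructure_access": 0.25,
        "accessibility": 0.10,
        "sociodemographic": 0.10,
    },
}

def divide_index(scores, level):
    """Weighted composite of factor scores (each in [0, 1])."""
    weights = WEIGHTS[level]
    return sum(weights[f] * scores[f] for f in FACTORS)

# The same factor scores yield different indices at different
# levels of analysis, because the weights differ per context.
scores = {
    "social_and_government_support": 0.6,
    "affordability": 0.4,
    "use": 0.5,
    "infrastructure_access": 0.7,
    "accessibility": 0.5,
    "sociodemographic": 0.6,
}
print(round(divide_index(scores, "community"), 3))  # → 0.535
print(round(divide_index(scores, "national"), 3))   # → 0.57
```

The point of the sketch is the structural one she makes: the factor set is shared across levels, while the weighting (and hence the resulting index) is context-dependent.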

In terms of particular information systems, several measurements have been employed to determine their success. As expected, the initial focus was placed on financial measurements (ROI in particular), but as a consequence of the IT-productivity paradox and similar limitations, assessment has moved beyond these financial measures and embraced other models based, for example, on benchmarking or on the balanced scorecard. One such model that goes beyond financial aspects and has been widely used in the past couple of decades is the DeLone and McLean (D&M) model of information systems success, first proposed in 1992. As discussed in Petter, DeLone and McLean (2008), this model has gone through a series of challenges, revisions and additions from several scholars. The original model presented the influence that system quality, information quality, use and user satisfaction have on the individual impact (benefit), and subsequently on the organizational impact, of the information system under study. Many researchers applied, revised and suggested improvements to that model (and continue to do so). As a result, the authors revised the model in 2003: they added service quality, grouped use with intention to use, and merged individual and organizational impact into a single factor, net benefits. Through a comprehensive literature study (taking advantage of the popularity of the model), Petter et al. (ibid.) found that most of the hypotheses embedded in the model had been empirically supported throughout the years of use, albeit with weak support for a couple of factor dependencies and insufficient data for two more. Crucially, however, they found that the model had mostly been used at the individual level of analysis (typically because quality and use are determined by individual users via questionnaires), while very few studies had been carried out at the organizational level of analysis.
Besides reinforcing the point that a measurement index should be transparent about its intended level of analysis, as Barzilai-Nahon also suggests, these results mean that little has been done to empirically determine the organizational benefit of information systems. This goes far beyond an academic gap, since it basically suggests that we still cannot empirically prove that information systems are beneficial for organizations!
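The hypothesized dependencies of the updated (2003) D&M model can be written down as a small directed graph. The factor names come from the model itself; the representation and the helper function are only an illustrative sketch of mine (and the model's feedback loops from net benefits back to use and satisfaction are omitted):

```python
# The updated (2003) DeLone & McLean success model, written as a
# directed graph of hypothesized influences between factors.
# Feedback loops from net benefits back to use/satisfaction omitted.
DM_MODEL = {
    "system_quality":       ["use_intention_to_use", "user_satisfaction"],
    "information_quality":  ["use_intention_to_use", "user_satisfaction"],
    "service_quality":      ["use_intention_to_use", "user_satisfaction"],
    "use_intention_to_use": ["user_satisfaction", "net_benefits"],
    "user_satisfaction":    ["net_benefits"],
    "net_benefits":         [],
}

def downstream(factor, model=DM_MODEL):
    """All factors transitively influenced by `factor`."""
    seen, stack = set(), list(model[factor])
    while stack:
        f = stack.pop()
        if f not in seen:
            seen.add(f)
            stack.extend(model[f])
    return seen

# Every quality dimension ultimately feeds into net benefits:
print(sorted(downstream("system_quality")))
```

Writing the model out this way makes the empirical point above concrete: each edge is a testable hypothesis, and the literature study by Petter et al. amounts to checking, edge by edge, how much support each one has accumulated.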

Another widely used model (also mentioned in Petter et al.) is the Technology Acceptance Model (TAM), first proposed by Davis et al. (1989). Rather than focusing on the success of a particular system, TAM aims at assessing, ex ante, whether a specific technological artifact is potentially acceptable. Perceived usefulness and perceived ease of use are the key factors influencing a user's attitude towards using the artifact, which in turn influences his or her intention to use it, which subsequently influences his or her actual use of the system. Actual use is here the final outcome that the model aims to predict, whereas in the D&M model use is actually an input factor for finding out the impact or benefit. The simplicity of the model, the fact that it is built upon an existing, empirically supported model from psychology (the Theory of Reasoned Action) and the fact that it is very useful for evaluating an artifact that has yet to be implemented (for example, a prototype) have made TAM a very popular research model. This has resulted (as with the D&M model) in a plethora of variations of TAM, as reviewed in Legris (2003). Many of the alternatives have centered on the contribution of specific factors that affect an individual's perception of usefulness, including subjective norms, image, job relevance, output quality and result demonstrability, as in TAM2. In addition, abundant research uses TAM and adds moderator variables as additional factors that may influence the connection between perceived usefulness / ease of use and attitude or intention to use the system. For example, in Donaldson and Golding (2009) age and gender are added to the model to hypothesize about the influence those factors have on attitude and intention.
As such, TAM can be used for evaluating or validating an artifact that has yet to be implemented; this is a frequent use in research that produces an artifact or prototype requiring some form of validation before implementation (as in design science research). In addition, TAM can be expanded to account for specific factors that represent new hypotheses to test, thus becoming not just a research (evaluation or validation) instrument but a research model. However, this has probably led to misuse or overuse of TAM simply as a way to add a "check" to the validation or empirical research component of a specific contribution. Moreover, as with the D&M model, TAM is squarely aimed at the individual level of analysis and does not provide a way of assessing organizational acceptance beyond the average or generalization of individual responses.
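The causal chain TAM posits (perceived usefulness and ease of use shape attitude, attitude shapes intention, intention shapes actual use) can be sketched as a toy path model. In real TAM studies these path coefficients are estimated from questionnaire data, typically via regression or structural equation modeling; every number below is an invented placeholder, not an empirical result:

```python
# Toy sketch of TAM's causal chain (Davis et al., 1989).
# All path coefficients are invented placeholders; in practice
# they are estimated from questionnaire data (regression / SEM).

def predict_use(pu, peou, coef=None):
    """Propagate PU/PEOU scores (e.g. 1-7 Likert) through the chain."""
    c = coef or {
        "peou_to_pu": 0.2,          # PEOU also influences PU in TAM
        "pu_to_attitude": 0.5,
        "peou_to_attitude": 0.3,
        "attitude_to_intention": 0.8,
        "intention_to_use": 0.9,
    }
    pu_adj = pu + c["peou_to_pu"] * peou
    attitude = c["pu_to_attitude"] * pu_adj + c["peou_to_attitude"] * peou
    intention = c["attitude_to_intention"] * attitude
    return c["intention_to_use"] * intention

# Higher perceived usefulness should predict more actual use:
assert predict_use(pu=6, peou=5) > predict_use(pu=3, peou=5)
```

The sketch also makes the contrast with the D&M model visible in code: here actual use is the *output* being predicted, whereas in D&M use is an input on the way to net benefits.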

In sum, the assessment or measurement of ICT, information systems and the information/knowledge/network society in general is still very much a work in progress. Moreover, current understandings of innovation or stock market behavior still push many people to focus on either financial measures or market results as the real proof of the acceptance or success of ICT. But as we have seen throughout the course, it is not just a matter of assigning value to an artifact, but of being able to articulate it within a specific context of use. We may recall our early discussion of the inception of the socio-technical approach through the example of Lotus Notes, which indicated that use or success is completely dependent on context and, as such, cannot be generalized across organizations or even individuals. If, however, we still succumb to purely market-oriented technological determinism, we will once again face the fact that in a dynamic and accelerated technological environment, ICT artifacts carry little or no intrinsic value, and whatever value is attached to them is only ephemeral and context-dependent.

7 comments:

  1. Comments on Measurement and Metrics:

    I believe that measurement aims at determining the ratio between the size or occurrence of an object and a fixed unit of measurement. To carry out the measurement, the size of the object and the unit must be of the same magnitude.

    I think measurement and metrics are related concepts, since a metric belongs to a kind of methodology for planning, developing and maintaining all types of information systems; the key elements of these metrics are processes, interfaces, techniques, practices and roles or profiles.

    I think the DeLone and McLean model offers a general and comprehensive definition of IS success, covering several perspectives and evaluation topics of IS. The two authors reviewed the existing definitions of information systems success, as well as their respective measures, and classified them into six main categories. They arrived at a model based on multidimensional measurement with interdependencies between the different categories of success.

  2. Workshop 4.

    Members:
    Jhon Jairo Paez
    Sebastian Miranda
    Jose Manuel Burbano

    1. What is the purpose of the model?

    A: The model proposed by Hsiu-Fen Lin is aimed both at the designers of a virtual community and at the users who will use it, so that there must be an alignment between the design and the community for the project to end successfully.

    2. What is the unit of analysis?

    A: As observed, the unit of analysis can be considered the largest or most representative entity of what will be the specific object of study in the measurement; it refers to what or who is the object of interest in any kind of research. In this case, the model takes the users or members of the community as its unit of analysis, in order to carry out different measurements and obtain results for the research.

    3. Which hypotheses are supported?

    A: The hypotheses supported by the model involve: Information Quality, System Quality, Trust, Member Satisfaction, Sense of Belonging and Member Loyalty. Only one hypothesis was not supported, the one concerning Social Usefulness, since the values obtained in the survey show that they are not significant, leading to the conclusion that this hypothesis is not supported.

  3. Comments on Adoption and Acceptance Models.

    The Theory of Reasoned Action (TRA) can be viewed as holding that actions are essentially based on the attitudes of individual human beings, since actions essentially express those attitudes. The information that allows attitudes to form can be considered cognitive, affective and behavioral.

    Thus we can see that these theories posit the well-known relationship between attitudes and behavior. Where this relationship has been demonstrated, it reflects a fundamental premise: attitudes and behaviors must be considered at the same level of specificity. For example, the behavior of purchasing light bulbs has a level of specificity similar to that of the attitude towards domestic energy consumption, whereas the attitude towards the environment is more general; not being at the same level of specificity as that behavior, it would be a poor predictor of it.

    I think the Technology Acceptance Model (TAM) can be considered an IS theory that models how users come to accept and use a technology. The model suggests that when users are presented with a new technology, a number of factors may influence their decision about how and when to use it. TAM is an adaptation of TRA to the field of software engineering, since TAM states that perceived usefulness and perceived ease of use determine a person's (user's) intention to use a system prior to actually facing the real system.

  4. Workshop No. 4

    Members:
    Luisa Barrera
    Yolima Uribe
    Juan Carlos Guevara
    Gerardo Ospina

    1. What is the purpose of the model?

    A: The purpose of the model is to describe, for the creator(s) or leader(s) of a virtual community, the factors that determine its success.

    2. What is the unit of analysis?

    A: The unit of analysis is the members of the community (individual unit of analysis).

    3. Which hypotheses are supported?

    A: The supported hypotheses involve: Information Quality, System Quality, Trust, Member Satisfaction, Sense of Belonging and Member Loyalty.

  5. Most information systems projects do not meet the expectations for which they were created. They may meet them at first, but over time they fall short. This is because the conditions of the environment, and therefore the dynamics of the organizations where the systems are implemented, change continuously.

    In this context, the evaluation of information systems also falls short, because it depends heavily on the satisfaction of needs; if those needs change, the models must adapt.

    The existing measurement models present interesting alternatives for measuring an information system, but from my point of view they are limited, since they do not take into account factors such as complexity, changing environmental conditions, networking and the characteristics of the people who will use the systems, among others.

    Juan Carlos Guevara

  6. It is very important to generate indices for measuring the digital divide that go beyond infrastructure measurement. In this regard, the Barzilai-Nahon index is an improvement, as it provides a model for generating such indices and also a framework for reasoning about the factors that should be measured to obtain a significant index.

    Measuring the impact of ICT and information systems is also important. The measurement models proposed are an improvement, as they center their measurements on users' perceptions, trying to establish correlations between system factors and human factors. These models, limited as they are, are steps in the right direction.

  7. I think finding a suitable method for evaluating a project is not easy. Of course we can know the variables and the main characteristics of our project, but it is not easy to specify and use the best evaluation method, especially when we have no experience with such measurement processes. We can know the behavior of a project in a controlled context, or at least with some knowledge of the environment and the project's deployment, but without knowing how a technology will behave in practice, it is difficult for a measurement method to include all of the project's changes, variables and responses. Because of this, experience can play an important role in this kind of evaluation.
