Having covered some of the main aspects of the connection between ICT and society, and the participatory approaches for designing socio-technical systems, we now consider one last topic: how do we assess or evaluate the resulting ICT artifacts or information systems to anticipate (predict) or measure their success and/or acceptance? Assessment has long been a critical issue for ICT and information systems, partly as a result of suggestions or claims that ICT systems do not always work as expected. These claims belong to statistical or managerial accounts of IT project development and its frequent ensuing failure, as in the so-called "software crisis" that began in the sixties, the Standish Group's Chaos Reports since the early nineties, and the pendulum-style attention devoted to the "IT-productivity paradox", which we have already discussed. In essence, the idea is that the majority of IT projects fail (in cost, in time, in actual usage, in benefits reported). But, as we have mentioned, many professionals and academics (most, of course, from the IT sector) have reacted against these reports or trends, arguing that they do not measure what needs to be measured, that they are not transparent or verifiable, that they are simplistic, or that they actually contribute to a negative attitude towards ICT (especially in management circles) rather than suggesting improvements. As a result, several models for measuring ICT or information systems success or acceptance have been developed over the past years.
At the most general level, measurements of the digital divide or of ICT development have evolved from purely access- or infrastructure-oriented indices to more comprehensive, multi-topical models that go beyond technological determinism. We covered this somewhat when discussing the digital divide, digital inclusion and the information/knowledge/network society (cf. Spangenberg, 2005). In that same vein, Barzilai-Nahon (2006) discusses some existing digital divide measurements and proposes an improved contextual index that contains more useful information and better relations among factors, to enable better policy-making. In order to build this index, she calls for recognition of a specific level of analysis (determining whether the measurement is aimed at the individual, community, sector, national or international level) and for coherence between the structures of indices created at different levels (between levels the factors remain, but factor weights are adjusted depending on the context). In addition, the relationships (causality or correlations) must be carefully considered. As a consequence, she proposes an index for the digital divide constructed from the following interrelated factors: social and governmental constraints or support, affordability, use, infrastructure access, accessibility and sociodemographic factors. Beyond her specific proposal for a comprehensive digital divide index, her contribution also offers guidance for building any such measurement index.
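Schematically, such a contextual index can be pictured as a weighted composite in which the factors stay fixed across levels of analysis while the weights are re-tuned per context. The sketch below only illustrates that idea: the factor names follow Barzilai-Nahon's list, but the weight profiles and the simple weighted sum are hypothetical placeholders, not her actual formula.

```python
# Contextual digital divide index: same factors at every level of
# analysis, with level-dependent weights (a minimal sketch, assuming
# factor scores are already normalized to the 0..1 range).

FACTORS = [
    "social_government_support",  # societal/state constraints or support
    "affordability",
    "use",
    "infrastructure_access",
    "accessibility",
    "sociodemographic",
]

# Hypothetical weight profiles per level of analysis; in practice these
# would have to be derived empirically for each context.
WEIGHTS = {
    "community": dict(zip(FACTORS, [0.15, 0.25, 0.20, 0.20, 0.10, 0.10])),
    "national":  dict(zip(FACTORS, [0.20, 0.15, 0.15, 0.25, 0.15, 0.10])),
}

def divide_index(scores, level):
    """Weighted composite of normalized factor scores for a given level."""
    weights = WEIGHTS[level]
    return sum(weights[f] * scores[f] for f in FACTORS)
```

Because the weights in each profile sum to one, a population scoring 0.5 on every factor gets an index of 0.5 at either level; the profiles only change how much each factor can pull that value up or down.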
In terms of particular information systems, several measurements have been employed to determine their success. As expected, the initial focus was placed on financial measurements (ROI in particular), but as a consequence of the IT-productivity paradox and similar limitations, measurement has moved beyond these financial measures and embraced other models based, for example, on benchmarking or on the balanced scorecard. One such model that goes beyond financial aspects and has been widely used in the past couple of decades is the DeLone and McLean (D&M) model of information systems success, first proposed in 1992. As discussed in Petter, DeLone and McLean (2008), this model has gone through a series of challenges, revisions and additions from several scholars. The original model presented the influence that system quality, information quality, use and user satisfaction have on the individual impact (benefit), and subsequently on the organizational impact, of the information system under study. Many researchers applied, revised and suggested improvements on that model (and continue to do so). As a result, the authors revised the model in 2003, adding service quality, grouping use with intention to use, and grouping individual and organizational impact into a single factor: net benefits. Through a comprehensive literature study (taking advantage of the popularity of the model), Petter et al. (ibid.) found that most of the hypotheses embedded in the model had been empirically supported throughout the years of use, albeit with weak support for a couple of factor dependencies and insufficient data for two more. Crucially, however, they found that the model had mostly been used at the individual level of analysis (typically because quality and use are determined by individual users via questionnaires), while very few studies had been carried out at the organizational level of analysis.
Besides highlighting the recognition that a measurement index should be transparent about its intended level of analysis, as Barzilai-Nahon also suggests, these results also mean that little has been done to actually determine, empirically, the organizational benefit of information systems. This goes far beyond an academic gap, since it basically suggests that we still cannot empirically prove that information systems are beneficial for organizations!
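The structure of the revised (2003) D&M model can be made concrete by representing it as a directed graph of hypothesized influences, where each edge is one of the pairwise hypotheses of the kind Petter et al. surveyed. This is only a schematic encoding under my reading of the model (including its feedback loops from net benefits back to use and satisfaction), not a measurement instrument.

```python
# The 2003 D&M success model as a directed graph: an edge A -> B means
# "factor A is hypothesized to influence factor B". Illustrative only.

DM_2003 = {
    "system_quality":      ["use_intention", "user_satisfaction"],
    "information_quality": ["use_intention", "user_satisfaction"],
    "service_quality":     ["use_intention", "user_satisfaction"],
    "use_intention":       ["user_satisfaction", "net_benefits"],
    "user_satisfaction":   ["use_intention", "net_benefits"],
    # Feedback loop: realized benefits reinforce further use/satisfaction.
    "net_benefits":        ["use_intention", "user_satisfaction"],
}

def hypotheses(model):
    """Enumerate the pairwise hypotheses a study would have to test."""
    return [(src, dst) for src, targets in model.items() for dst in targets]
```

Enumerating the edges this way makes the scale of the empirical task visible: even this compact model embeds a dozen distinct relationships, each of which must be tested separately (and, per Petter et al., at a clearly stated level of analysis).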
Another widely used model (also mentioned in Petter et al.) is the Technology Acceptance Model (TAM), first proposed by Davis et al. (1989). Rather than focusing on the success of a particular system, TAM aims at assessing, ex ante, whether a specific technological artifact is potentially acceptable. Perceived usefulness and perceived ease of use are the key factors influencing a user's attitude towards using the artifact, which in turn influences his or her intention to use it, which subsequently influences his or her actual use of the system. Actual use is here the final outcome the model aims to predict, whereas in the D&M model, use is actually an input factor for finding out the impact or benefit. The simplicity of the model, the fact that it builds on an existing, empirically supported model from psychology (the Theory of Reasoned Action), and the fact that it is very useful for evaluating an artifact that has yet to be implemented (for example, a prototype) have made TAM a very popular research model. This has resulted (as with the D&M model) in a plethora of variations of TAM, as reviewed in Legris (2003). Many of the alternatives center on additional factors that affect an individual's perception of usefulness, including subjective norms, image, job relevance, output quality and result demonstrability, as in TAM2. In addition, abundant research uses TAM and adds moderator variables as additional factors that may influence the connection between perceived usefulness / ease of use and attitude or intention to use the system. For example, in Donaldson and Golding (2009) age and gender are added to the model to hypothesize about the influencing effect that those factors have on attitude and intention.
As such, TAM can be used for evaluating or validating an artifact that is yet to be implemented; this is a frequent use in research that produces an artifact or prototype requiring some form of validation before implementation (as in design science research). In addition, TAM can be expanded to account for specific factors representing new hypotheses to test, and thus becomes not just a research (evaluation or validation) instrument but a research model in itself. However, this has probably also resulted in misuse or overuse of TAM simply as a way to add a "check" to the validation or empirical component of a specific contribution. Finally, as with the D&M model, TAM is squarely aimed at the individual level of analysis and does not provide a way of assessing organizational acceptance beyond the average or generalization of individual responses.
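TAM's causal chain lends itself to a toy illustration: perceived usefulness and ease of use feed attitude, attitude (plus a direct usefulness link) feeds intention, and intention drives actual use. The coefficients below are purely illustrative placeholders; real TAM studies estimate them from Likert-scale questionnaire data via regression or structural equation modeling, and this sketch is in no way Davis's procedure.

```python
# TAM's causal chain as a toy linear path model. Coefficients (b_*, c_*)
# are hypothetical; in practice they are estimated from survey data.

def tam_predict(perceived_usefulness, perceived_ease_of_use,
                b_pu=0.5, b_peou=0.3, c_att=0.6, c_pu=0.3):
    """Follow TAM's chain: PU/PEOU -> attitude -> intention -> use."""
    attitude = b_pu * perceived_usefulness + b_peou * perceived_ease_of_use
    # TAM also posits a direct perceived-usefulness -> intention link.
    intention = c_att * attitude + c_pu * perceived_usefulness
    predicted_use = intention  # actual use taken as proportional to intention
    return {"attitude": attitude, "intention": intention, "use": predicted_use}
```

Even in this crude form the model's ex ante character is visible: all inputs are perceptions that can be collected from users of a prototype, before any system is deployed, which is exactly why TAM is so attractive for validating unimplemented artifacts.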
In sum, assessment or measurement of ICT, information systems and the information/knowledge/network society in general is still very much a work in progress. Moreover, current understandings of innovation or stock market behavior still push many people to focus either on financial measures or on market results as the real proof of the acceptance or success of ICT; but as we have seen throughout the course, it is not just a matter of assigning value to an artifact, but of being able to articulate it within a specific context of use. We may recall our early discussion of the inception of the socio-technical approach through the example of Lotus Notes, which showed that use or success is completely dependent on context and, as such, cannot be generalized across organizations or even individuals. If, however, we still succumb to purely market-oriented technological determinism, we will once again face the fact that in a dynamic and accelerated technological environment, ICT artifacts carry little or no intrinsic value, and that whatever value is attached to them is only ephemeral and context-dependent.
Blog of the ICT and Society course of the Master's in Systems and Computing Engineering, Universidad Javeriana.
Monday, May 30, 2011
Friday, May 13, 2011
User-centered design and end-user development (May 10)
Participatory design has been a part of ICT development for many decades, as we have seen earlier. But there are two specific strands that have become increasingly important and may well redefine the whole information systems or informatics discipline: user-centered design and end-user development. User-centered design (UCD), according to Mao et al. (2005), is “a multi-disciplinary design approach based on the active involvement of users to improve the understanding of user and task requirements, and the iteration of design and evaluation”. By now, this shouldn’t sound surprising or revolutionary: most modern (agile) software development methods emphasize the active involvement of customers or users in an iterative fashion. However, the clear emphasis here is on user-centeredness and on multidisciplinarity. This means that beyond being participatory, the design effort is also focused completely on user needs, rather than the traditional system-centered design, which may devote more attention to non-functional requirements, management input or algorithms. Multidisciplinary participation is also brought forth as a natural element in UCD, partly because users come from different disciplines but, more importantly, because different disciplines are required to capture their needs, coordinate their participation, and translate requirements into technical specifications together with usage guidelines, manuals, training, change management, and economic and cultural viability assessments and transitions, with an eye on sustainable usage and benefit.
Mao and others carried out a survey in which they found that the application of UCD means an improvement in usefulness and usability, as well as time savings in the long run (mostly by reducing the reworking of deployed applications, which is significant given that between 70% and 90% of all software costs occur during “maintenance”). These arguments map easily to the pragmatic proposition behind participatory design (discussed in the previous entry), but it is also clear that the moral proposition is a key issue and that participatory, qualitative methods are required to carry it out in practice, including field studies, iterative design, focus groups, interviews, and task analysis. In an era where consumers are prosumers (well aware of technical trends and capabilities) and where alternative ICT tools may be just a click away, catering responsibly and effectively to users is no longer just desirable but mandatory. If we fail to do so, someone else will do it, including the users themselves…
End-user development (EUD) expresses precisely this behavior, when the users themselves end up customizing (personalizing), configuring or even downright programming the software, as discussed in Fischer et al. (2004). As programming languages become lighter, higher level, and easier to use, access, install and get training for, this is increasingly the case. For a user to develop his or her own webpage, not much more than a simple text editor is required. A few hours browsing through tutorials on CSS, JavaScript, PHP, XHTML or similar tools and languages may be all that is required for more advanced functionality, not to mention that actually getting it online can be completely free. More generally, data processing tools, statistical software, office suites and open source software have all become easier to use and full of customization possibilities, wizards, or drag-and-drop features that make development at least seem less complicated, which has resulted in large numbers of amateurs developing their own systems. This is not new; there has always been the clerk or secretary who manages to create a very functional tool simply by tailoring a spreadsheet, but the scope and accessibility of languages and development tools is much wider now, enabling users to create ever more sophisticated applications by themselves. Obviously, this is not without risks (heterogeneity, security, integration, quality, scalability and transferability are all potential issues), but it is also to be expected that the trend will continue. On a strategic level, IT professionals really need to look into this, as it may reshape their role and even their very livelihood. Today, big software companies may end up competing with basement teenagers who develop mobile applications in a matter of hours, generating mass-consumption software with much higher returns on investment.
The gaming and entertainment industry is also a rich setting for end-user development, which is no small feat considering that games have for some time been the highest-grossing software applications on the market. But from a more operative point of view, the key is for IT professionals to rethink their role not as expert designers, but rather as facilitators of a collaborative process where technology is not the end but the means.
For example, for EUD to be successful in a business organization, Fischer et al. argue that it should include sustained motivation from the developer (the user), that it should be supported by the right tools, and that it should have management support. Furthermore, they argue for “meta-design” as a way to (under)develop systems such that the result is a socio-technical environment that empowers users to become active and continuously engaged in the development and evolution of the systems they use. This obviously goes beyond the technical and, as such, is connected with the socio-technical tradition that has guided all our discussions throughout the course. Indeed, meta-design can also be linked to the work of Maturana and Varela (who, you may recall, were a deep inspiration for the Winograd and Flores book with which we started the course). For them, technology is not the issue; the issue is rather what kind of humans we wish to become and what kind of culture we should strive for. Rationality, for them, is not the answer; the answer is emotional: there is a whole cognitive, systemic, autopoietic theory to back this up, but it goes beyond this entry. We should aim at becoming homo sapiens amans. As such, we should start by empathizing with, indeed loving, the users for whom we are designing solutions, and getting off our high horses and our isolationist, detached expert positions (all too common among IT professionals). It will be a complex, challenging and evolutionary path, but certainly an exciting one.
Thursday, May 5, 2011
Participatory design of community informatics (May 3)
By now, the case for participative design of ICT should be clear, not only in the context of this course but for the information systems discipline in general. In fact, participative design and participative approaches have been around since the very outset of the discipline. It has been one of the main areas of work under the socio-technical approach; indeed, the social aspect of socio-technical systems implies precisely the involvement of users, customers, affected people, managers, designers, experts, etc. in the design of information systems. Methodologies or approaches such as Action Research, Checkland's Soft Systems Methodology and Enid Mumford's ETHICS approach are all testament to this development, which has continued through the work we have already discussed (Orlikowski, Walsham, Avgerou, among others). However, these approaches have yet to gain the recognition or widespread use they deserve, especially outside academia. This is the same conclusion that Mumford herself reached, as expressed in her last published interview before her death a few years ago (Porra & Hirschheim, 2007). Mumford's career developed in parallel with the development of informatics. She in fact started out working in docks and mines, using participative approaches focused on industrial relations and job satisfaction. Having learnt anthropologically influenced methods from the Tavistock Institute, Mumford and her colleagues would do ethnographic-style research, becoming deeply embedded in and engaged with the actual work of clerks or miners at the operational level. You can picture her wearing a helmet and lying on the floor while interviewing a miner about his perception of his job, its conditions, management, etc.
Her being a woman in a mostly male environment was difficult (not to mention risky) but it was also an asset in terms of gaining trust and appearing less threatening or less representative of management interests (which can of course hamper or downright impede interaction with workers).
Nonetheless, and despite having proposed ETHICS as one of the earliest and most influential contributions to participative design of ICT systems, Mumford still felt at the end of her career and life that the basic goal had not been attained: an ethical approach to IS development had yet to be realized or popularized. Early proposals (such as ETHICS or SSM) were still seen as utopian, costly, risky, or as something to be dealt with by "soft" professionals but not by real hardcore engineers or designers. So despite the many advances in theoretical and methodological terms, the situation remains mostly unchanged, and there are very few companies (whether big corporations, consulting firms or IT industry leaders) that have adopted participative, ethical design as a core of their business or as the standard way to go about designing ICT. Mumford's recommendations are as valid today as they were more than forty years ago: (1) information systems should be designed to improve the quality of life for all; (2) individuals should participate in the design; (3) solutions to local problems have global consequences; and (4) all research should lead to problem-solving action. The connection to design science research is also evident and potentially a good sign that there might be a resurgence or strengthening of participative approaches. This might also be pushed forward by the popularization of agile development methods and the widespread use of IT, which turns users into more skilled and knowledgeable co-designers.
Because community informatics is aimed at building systems for a specific group (NGOs, townships, etc.) and typically differentiates itself from business informatics, it is an especially rich setting for studying and applying participative design, as expressed in Carroll and Rosson (2007). For them, participatory design is supported by moral and pragmatic principles or propositions. The moral principle expresses the ethical commitment or responsibility that, within a democratic society, people affected by a system should participate in its conception and design; that it is in fact their right to do so. The pragmatic principle simply states that participation is required for systems to actually be effective, because there is no other way to design them in a situated, context-dependent manner, under the understanding that there is no generic ICT for community informatics. Community informatics implies a diverse population, a diverse technological infrastructure, unpredictability in terms of future users, and a multiplicity of roles. On the other hand, it implies that whatever the intervention amounts to (a new ICT tool, a new system, new process support), it will impact the lives of ordinary citizens. So it is evident that both the moral and the pragmatic principle hold for these types of interventions. While participatory design is aimed at inclusion (which we have already discussed), community informatics is aimed at self-actualisation (which we have also discussed as empowerment of individuals, in line with Amartya Sen's ideas about capacities rather than resources). When combined, the result should be collective actualisation in the target community.
Now, in order for this to happen, there are a number of guidelines, issues and instruments available. First of all, motivation is a key ingredient for participation. If there are no community leaders to begin with, then the designer (as facilitator, not as expert) should contribute to generating the conditions for participation. On the one hand, the fear of technology should be reduced. This can be achieved either by showing simple examples of easily applicable ICT tools (e.g. using Twitter or Doodle to quickly organize a meeting) or by previously training participants with an emphasis on appropriation and autonomous deployment of (web-enabled) tools. On the other hand, the participants should take control of the process through learning: not just learning how to use or deploy ICT, but learning about participation itself through discussion of these issues. More importantly, the community can share its learning experiences with other communities, helping them structure their own participative community informatics initiatives; this generates both recognition and reputation, which are fertile grounds for motivation and empowerment. Specifically, each intervention may be supported by observation, interviews, workshops, scenario-based design, case studies, forum- or wiki-based technology assessments, and learning about simple open source or web authoring tools, moving then to cross-community workshops and training, which may result in the emergence of sustainable steering committees through which the community adopts and takes charge of the design effort, towards the future and in an extended network beyond the local context.