Working around over time

Workarounds are:
1. Any way of tricking a system by using it in a way it was not intended to be used, but that still yields the desired outcome. This was first (according to my research) raised by Gasser in 1986. The idea is that in some systems you could enter, for example, incorrect data in order to arrive at the desired outcome. The need for odd initial data stems from infelicities in the system, be it software or mechanical.
2. “Jury rigging” the system, wherein you haphazardly put something together but don’t expect it to work well forever. Sometimes referred to as “make-shift,” it works well enough for now — and this happens in computing all the time: you make a quick, often small, but necessary change to the system. Sometimes called a “kludge,” this is where the “permanence” issue is raised in research — how more or less permanent is a workaround, which is typically assumed to be of limited longevity? Of course, no matter what we make, nothing is permanent. Still, some things last longer than others, and with packaged software the “slightly-more-permanent” workarounds (in the form of system customization) appear more common than the frequent but short-lived workarounds used in legacy systems [note: this may be a generalization too broad to bear without evidence]. Still, this helps us better understand the longevity of workarounds.
3. Literature on workarounds is now split on the idea that they “free” employees from the confines of the system, and increasingly scholars ask whether all this “freeing” (in research on ERP) creates its own subset of confines (large numbers of expensive customizations to systems require some administrative oversight, which effectively balances the freedom from the previous system against a new need to control those freedoms). This helps us understand the autonomy-producing or -restricting quality of workarounds.
This seems to be the cost of customization: it at once frees you from the confines of the system, but also hurtles the system toward eventual decay (as we have observed with legacy systems), a phenomenon sometimes referred to as “drift.” The more control you exert on the system — in this case, in the form of workarounds — the more brittle it gets and the more it drifts, in principle, from the control of those charged with maintaining it. In this way, workarounds are like mulligans in golf: each one improves your chances in the short term, but each keeps adding +1 to your score until, in the end, you’ve lost the game entirely.
However, if one could follow a set of workarounds through the years (and I’ve never seen research like this), explicitly watching them “decay” or accrue “cost,” then the golf analogy might be tested. When, in the short run, did the workaround get the organization out of a jam? Conversely, when, in the long run, did the workaround cost the organization more than it was worth?
If one could understand the process deeply enough, one could explicitly estimate when workarounds “beat the system,” meaning you might be able to identify mulligans (i.e., workarounds) worth taking (i.e., making) and others that ought to be avoided.
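The mulligan intuition above can be sketched as a toy cost model: the workaround avoids a large one-time fix now but adds a small recurring “+1” of upkeep each period thereafter. Everything here — the function names, the numbers, the linear upkeep assumption — is an illustrative assumption of mine, not a model drawn from any study of real systems.

```python
# Toy model of the "mulligan" intuition: a workaround is cheap now but
# accrues a recurring upkeep cost ("+1") every period, while a proper
# fix is expensive once. All numbers are made up for illustration.

def cumulative_cost(periods, fix_cost, workaround_cost, upkeep_per_period):
    """Return (cost of fixing now, cost of working around) after `periods`."""
    fix_total = fix_cost  # pay once, done
    workaround_total = workaround_cost + upkeep_per_period * periods
    return fix_total, workaround_total

def breakeven_period(fix_cost, workaround_cost, upkeep_per_period):
    """First period at which the workaround becomes the costlier choice."""
    t = 0
    while workaround_cost + upkeep_per_period * t <= fix_cost:
        t += 1
    return t

# After a year, the workaround is still "winning" (70 vs. 100)...
print(cumulative_cost(periods=12, fix_cost=100,
                      workaround_cost=10, upkeep_per_period=5))
# ...but by period 19 it has cost more than the fix would have.
print(breakeven_period(100, 10, 5))
```

Under these made-up numbers, the workaround “beats the system” for eighteen periods and loses thereafter — which is exactly the kind of crossover point that longitudinal research on workaround decay could, in principle, estimate empirically.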

rethinking workarounds

At least since the work of Les Gasser (1986:216), the act of working around (or jury rigging) and the resulting workaround (or kludge) have been good fodder for Science and Technology Studies (STS). In the transition from building administrative software in-house to purchasing packaged software solutions from private market vendors, the workaround has received renewed attention from scholars. And rightly so. These are pressing matters given the widespread use of packaged software, the near irreversibility of implementation projects once initiated, and the reported high probability of dissatisfaction following installation.

Workarounds are commonly employed to localize, maintain, and extend software programs, especially when it is necessary to coax occasionally suboptimal implementations into functioning properly as the systems age. Still, workarounds have their limitations: they grow brittle over time, even as they remain crucial for freeing users from restrictive or incomplete systems. Research on packaged software, however, challenges the notion that systems are still worked around. Designers of packaged software anticipate user modifications, and embedded, increasingly inter-organizational actors now determine when a work endeavor is or is not defined as a workaround. Pollock (2005), examining a case of packaged software being implemented in a university setting, shows how the boundary between users and designers is relationally dynamic rather than static. This is clear when, for instance, local users share code with software designers hired by vendors, but also when designers distance themselves from responsibility for specific user problems by categorizing some problems as local (rather than the general concern of numerous implementing organizations) (505, 503). Because packaged software appears to presuppose that adopting organizations will participate in the design process by modifying the software for local use, Pollock effectively updates Gasser’s (1986) notion of working around. Pollock calls into question Gasser’s (1986:216) original formulation, specifically that working around implies “intentionally using computing in ways for which it was not designed,” given that the tools to work around the system are already embedded within packaged software.

Research also suggests that workarounds are not as freeing as previous literature indicates. Modifications do not only free local users from the constraints of technology; they also create tensions within and between organizations. For example, modifications that are difficult and therefore slow to establish create tension between employees and their supervisors (Pollock 2005:507). Likewise, some modifications generate conflict between support desk operators and local programmers over who is responsible for coaxing the packaged software into operation (506). Also examining packaged software in higher education, Kitto and Higgins (2010), through the lens of governmentality, show how a university wrested control of its newly adopted ERP by modifying it. Surprisingly, once modified, the resulting system did not appear to create renewed autonomy for employees. Instead, control over the system simply shifted from the monolithic vendor to a more local supervisor charged with maintaining jurisdiction over the host of new modifications.

In the move from homegrown to packaged software in higher education, traditional interpretations of the workaround seem to be transforming, and with them, I imagine, the precursors of workarounds – the “gaps” in system operations that workarounds necessarily bridge … although there is scant research on where these workarounds come from.