Workarounds are:
1. Any way of tricking a system by using it in a way it was not intended, but that still yields the desired outcome. This was first (as far as my research shows) raised by Gasser in 1986. The idea is that in some systems you could enter, for example, incorrect data in order to arrive at the desired outcome. The need for odd initial data stems from infelicities in the system, be it software or mechanical.
2. By “jury rigging” the system, wherein you haphazardly put something together but don’t expect it to work well forever. Sometimes referred to as “make-shift,” it works well enough for now — and this happens in computing all the time: you make a quick, often small, but necessary change to the system. Sometimes called a “kludge,” this is where the question of “permanence” is raised in research — how permanent is a workaround, which is typically assumed to be of limited longevity? Of course, no matter what we make, nothing is permanent. Still, some things last longer than others, and more often than not with packaged software the “slightly-more-permanent” workarounds (in the form of system customization) are more common than the frequent but short-lived workarounds used in legacy systems [note: this may be a generalization too broad to bear evidence]. Still, this helps us better understand the longevity of workarounds.
3. The literature on workarounds is now split on the idea that they “free” employees from the confines of the system; increasingly, scholars ask whether all this “freeing” (in research on ERP) creates its own subset of confines (suggesting that large numbers of expensive customizations to a system require some administrative oversight, which effectively balances the freedom from the previous system against the new need to control those freedoms). This helps us understand the autonomy-producing or -restricting quality of workarounds.
This seems to be the cost of customization: it at once frees you from the confines of the system, but it also hurtles the system toward eventual decay (as we have observed with legacy systems), something sometimes referred to as “drift.” The more control you exert on the system — in this case, in the form of workarounds — the more brittle it gets, and it drifts, in principle, from the control of those charged with maintaining it. In this way, workarounds are a bit like taking a mulligan in golf: each one improves your chances in the short term, but each keeps adding +1 to your score until you’ve lost the game completely.
However, if one could follow a set of workarounds through the years (and I’ve never seen research like this), explicitly watching them “decay” or accrue “cost,” then the golf analogy might be observed. When, in the short run, did a workaround get the organization out of a jam? Conversely, when, in the long run, did a workaround cost the organization more than it was worth?
If one could understand the process deeply enough, one could explicitly estimate the points at which workarounds “beat the system”: you might be able to identify mulligans (i.e., workarounds) worth taking (i.e., making) and others that ought to be avoided.
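To make that mulligan arithmetic concrete, here is a minimal sketch of what such an estimate could look like. Everything in it is a hypothetical assumption for illustration — the function name `cumulative_value`, the figures, and the linear per-period “drift cost” are mine, not drawn from any study:

```python
# Toy model of the "mulligan" arithmetic: a workaround pays off once up
# front, then accrues a recurring maintenance/drift cost each period.
# All figures are hypothetical illustrations, not empirical estimates.

def cumulative_value(upfront_benefit: float,
                     drift_cost_per_period: float,
                     periods: int) -> list[float]:
    """Net value of the workaround at the end of each period."""
    return [upfront_benefit - drift_cost_per_period * t
            for t in range(1, periods + 1)]

if __name__ == "__main__":
    # A workaround that saves 10 units now but costs 2 units per period
    # to live with (extra support, brittleness, upgrade friction).
    values = cumulative_value(upfront_benefit=10.0,
                              drift_cost_per_period=2.0,
                              periods=8)
    for t, v in enumerate(values, start=1):
        flag = "still worth it" if v > 0 else "now a net loss"
        print(f"period {t}: net value {v:+.1f} ({flag})")
```

On this toy model, a mulligan is “worth taking” only if the workaround can be retired before the break-even period (upfront benefit divided by per-period cost) — which is exactly the kind of threshold that longitudinal research on workarounds could, in principle, try to estimate.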
There is a whole debate in the Sociology of Time (see, for example, the journal Time & Society) around re-specifying time; a common set of terms is clock time, task time, and life time. This does not really capture the temporal order of workarounds. Here we could at least distinguish between the time it takes for a workaround to sink into common sense (1), the time that has elapsed since it was first tried (2), and the temporal orderings that the workaround itself induces (3). Although they sound alike, (1) and (2) are not the same: a workaround can be quite old (in the sense that users first tried it long ago) but still be unstable (in the sense that it causes trouble all the time and therefore never sinks into the “sociotechnical unconsciousness”). We could really try to learn from the debates in the Sociology of Time, I guess…