US: D+


We have written previously about the state of American infrastructure and the problem of making it appealing to the masses, and we can report that not a lot has changed. The ASCE’s 2017 report card for US infrastructure delivers a pretty static outcome … D+ (the same as it has been for the past half-decade or more). Part of that story has to do with the grading system in the first place, but most (nearly all) has to do with the dwindling state of infrastructure after a decade of austerity policy that effectively kicks the proverbial can down the road, so that the next generation inherits suboptimal infrastructure in the US.

Complexity management and the information omnivores-versus-univores dilemma


I recently had the opportunity to see the film Sully (2016), which recounts the 2009 emergency landing of a jetliner on New York’s Hudson River. Despite some critical flaws, the film is not only a thrill to watch but also provides much food for thought for those studying infrastructure. Even the flaws are instructive. One of them – certainly the most discussed – concerns the portrayal of the National Transportation Safety Board (NTSB) that, as per protocol, investigated the accident. Whether due to Hollywood convention or directorial choice, the NTSB team is neatly cast as the villains, out to get the story’s hero by discrediting his decision-making process.


Post 1 of 3: The Architecture and Infrastructure of Memory (MAI)

Leviathan Monument

Hobbes’ Leviathan frontispiece revisited: Dingpolitik and object-oriented governance.


What is the connection between the Chinese National Offshore Oil Corporation (CNOOC) and the National Holocaust Monument currently being built in Ottawa, Canada? (Chalmers:105) Though this question seems rather peculiar at first, the answer is far less obscure when considered within the context of memory architecture and infrastructure (MAI). This is because MAI is intricately bound up in both remembrance and sovereignty.

The connection between memory and the authority or power to govern is nothing new: the correspondence between the two was established in early Greek mythology. According to Hesiod’s Theogony, the ability to rule over others was granted to certain favoured individuals by the Muses through their unique bond with their mother Mnemosyne, the goddess of memory and guardian over what should be remembered. As history would have it, memory would be stolen from Mnemosyne along with Hephaistos’ fire (thanks to our friend Prometheus) and humanity (led by the privileged few) became able to record their own past via material culture and technology. Mnemonic technologies (texts, film, photographs, commemorations, digital memory, the internet, etc.) have become increasingly complex, varied, and augmented as those responsible for filling the void left by Mnemosyne go about constructing our past(s).

However, though the figuration of memory has changed over time, the relationship has remained very similar: those who possess the ability to shape what is remembered and how it is re-collected are in an auspicious position to exercise sovereign rule, and inversely, those who wish to maintain such authority take a special interest in doing so. This is in part why memory studies scholars have written so extensively on both the more recent proliferation of commemorations (memorials, monuments, etc.) and their role as part of modern state attempts to reconstruct the past. The salience of state-sponsored memorials and monuments is particularly distinguishable in national capitals, where commemorative landscapes are often extremely composite and interconnected.

As a specific example of mnemonic technology, memorials and monuments are durable structures that have become delegates or heads of populations that are the punctualized result of previously formed assemblages composed of a multiplicity of actors (politicians, special interest groups, community organizations, artists, architects, city planners, academics, government organizations/departments, etc.). To say that these sites and their structures are delegates is to say that they ‘speak’ on behalf of the array of different actors who had gathered to establish them (and have since become ‘silent’ – an effect of punctualization), but it is also to say that they represent histories, specific events, ideologies and ideals, among other knowledges. Additionally, they participate in a discussion with a host of other such memorial delegates that exist within local, national, and international commemorative networks: with other delegates representing punctualized networks that then come together to form even larger commemorative networks.

It is these networks that form what is referred to here as memory infrastructure, or the organization of various punctualized assemblages that have been made durable (and to an extent more stable) through practices of art, design, and architecture.

Why is it important that we recognize MAI? Just like roads, sidewalks, trails, electricity, the internet, power plants, etc., MAI facilitates (and limits) possibilities and creates complex connections between these possibilities for both individuals and governments. This is how Canadian economic or foreign policy can be connected to a mass genocide in Europe during the 1940s (as well as a myriad of other seemingly unrelated issues). Memory infrastructure and architecture establish thoroughfares that align a variety of translated interests in order to guarantee (as much as possible) a certain range of agencies: in this case, the government’s ability to successfully deploy policy decisions.

Infrastructural Lives, Reviewed

Add this one to your reading list: Steve Graham and Colin McFarlane have edited a book that has just come out, Infrastructural Lives.

Contributors include AbdouMaliq Simone, Maria Kaika, Vyjayanthi Rao, Mariana Cavalcanti, Stephanie Terrani-Brown, Omar Jabary Salamanca, Rob Shaw, Harriet Bulkeley, Vanesa Castán Broto, Simon Marvin, Mike Hodson, Renu Desai, Steve Graham, and myself. Arjun Appadurai kindly provided a thoughtful foreword for the book.


Wildlife’s infrastructure nightmare

Dear Friends,

I’m very honored and excited to be invited as a guest blogger on Installing (Social) Order and am hoping I can make some interesting and valuable contributions. I start here on the issue of infrastructure, but from a completely different vantage point – that of wilderness and wildlife. One of my key interests, going back more than two decades, is wildlife conservation in South Asia, the part of the planet I belong to and have lived in since I was born. As part of my work with the environmental NGO Kalpavriksh, I have been editing a newsletter on wildlife for over 18 years now. It is called the Protected Area Update, and one issue that comes up repeatedly is the impact on wildlife, wilderness, and the environment of our relentless drive to create more and more ‘infrastructure’. It is something I keep reporting and commenting on, and here is one small editorial I wrote for the February 2012 issue of the newsletter. It’s a little old if one goes by the date, but the concerns on the ground today are just as real, if not more acute. The piece is also a little India-specific, but I think it captures the challenges, and that is why I’m sharing it here.

And in case you are interested in reading this specific issue – or the entire set – here is where you can access the full 24-page newsletter: Protected Area Update – 2012


(Editorial, Protected Area Update Vol. XVIII, No. 1, February 2012)

More roads that penetrate deeper, railway lines that connect better and faster, dam projects for power and irrigation, coal mining for more electricity, high-tension power lines to evacuate that electricity…. This is one side of India’s infrastructure and constantly lauded growth story.

There is another side to that very story, which reads something like the following: roads that cut through rich forests, railway lines that regularly kill elephants, dam projects that drown pristine forests and wildlife habitats, coal mining that rips apart tiger corridors, high-tension lines that kill elephants in Orissa and flamingoes in Gujarat…

From the Nallamalla forests of Andhra Pradesh to the valley of the Alaknanda in Uttarakhand; from the elephant forests of Orissa to the Great Rann of Kutch in Gujarat – the story is the same – what is unfolding is nothing short of a nightmare for India’s wildlife. The infrastructure for our automobiles, power and lifestyles is leaving nothing of the natural infrastructure that the wild denizens need. As we travel faster, longer, and deeper and as the GDP becomes the only mantra, the elephants, the tigers, the leopards and even the flamingoes are getting hemmed in more and more with every passing day.

The fate of the flamingoes in Gujarat highlights this starkly. Their only option on being disturbed at night by vehicular noise in the Great Rann was to fly into high-tension wires hanging above and get charred instantly. Between the vehicle and the wire, India’s beleaguered wildlife is getting sandwiched and slaughtered like never before.

One ‘eco’ – the economic – is soaring as everything ecological is being torn to shreds. The tragic irony is that the same system sells to us and to the world the prowling tiger, the gamboling elephant, the soaring birds and, yes, the dancing tribal as ‘Incredible India’. We at the PA Update are part of a small crowd that’s watching on with incredulity. And with despair.

Limn (4) on Food Infrastructures

Limn (“Limn is somewhere between a scholarly journal and an art magazine”), edited by Stephen J. Collier, Christoffer M. Kelty and Andrew Lakoff, just published its fourth issue on food infrastructures. Here is the opener, check it out:

Issue Number Four: Food Infrastructures

edited by Mikko Jauho, David Schleifer, Bart Penders and Xaq Frohlich
This issue of Limn analyzes food infrastructures and addresses scale in food production, provision, and consumption. We go beyond the tendency towards simple producer “push” or consumer “pull” accounts of the food system, focusing instead on the work that connects producers to consumers. By describing and analyzing food infrastructures, our contributors examine the reciprocal relationships among consumer choice, personal use, and the socio-material arrangements that enable, channel, and constrain our everyday food options.

With articles by Christopher Otter, Franck Cochoy, Sophie Dubuisson-Quellier, Susanne Freidberg, Heather Paxson, Emily Yates-Doerr, Mikko Jauho, Kim Hendrickx, Bart Penders and Steven Flipse, Xaq Frohlich, David Schleifer and Alison Fairbrother, Javier Lezaun, Michael G. Powell, Makalé Faber-Cullen and Anna Lappé!

via Issue Number Four: Food Infrastructures | Limn.

Infrastructuring the City (and its Leftovers)

A few months ago we had a discussion (here and here) about Olympic stadiums and the fact that they are the products of large infrastructuring projects that remain long after the project is over. That was an eye opener – at least for me: it seems as if our (STS) focus on stability and material durability is biased; we tend to think that by building buildings we build a world of things that stand for us, our wishes, dreams, prejudices or our moral classifications. The whole “politics by other means” idea goes in that direction. And the ruins of the Olympic stadium in Athens (the 2004 one, not the antique one turned into a soccer stadium) reminded me that durability sometimes is a burden: what is built in steel and concrete is going to stay unless we “deconstruct” it. And even then the marks of it remain; leftovers are hard to avoid. Two days ago I saw this:

A city divided by light (Photo by Chris Hadfield)

After 23 years, the city of Berlin is still divided – infrastructurally. On the one hand, a lot of the western part of the city still has gas lights: a relic of the Cold War era, when gas was easier to manage because it can be made from coal, and storing or even delivering coal was easier than providing electricity in times of a lock-down of the city surrounded by the GDR. But that is not what produces the divide in the photo: to increase efficiency (and officially to avoid “capitalist/imperialist wastefulness”, I suppose), the GDR switched its preferred system of electric lighting to sodium-vapor lamps (with a warmer and darker light), while the FRG continued to use mercury-vapor lamps (with that bluish lucid light). So: leftovers of projects of infrastructural politics – but not disturbing ones like politically incorrect street names, not commemorating ones like memorials, not problematic ones like the Athens Olympic stadium. Mundane ones, there on every corner, unnoticed. What do they tell us?

New paper on crowdsourcing

An interesting paper on crowdsourcing just came out in the “Computational & Mathematical Organization Theory” journal. “Maximizing benefits from crowdsourced data” by Geoffrey Barbier et al. explores how crowdsourcing can be used for purposes of collective action and problem-solving, for example, in disaster response and by relief organizations.
Here’s the abstract:

Crowds of people can solve some problems faster than individuals or small groups. A crowd can also rapidly generate data about circumstances affecting the crowd itself. This crowdsourced data can be leveraged to benefit the crowd by providing information or solutions faster than traditional means. However, the crowdsourced data can hardly be used directly to yield usable information. Intelligently analyzing and processing crowdsourced information can help prepare data to maximize the usable information, thus returning the benefit to the crowd. This article highlights challenges and investigates opportunities associated with mining crowdsourced data to yield useful information, as well as details how crowdsource information and technologies can be used for response-coordination when needed, and finally suggests related areas for future research.

Besides being a very useful reference piece, providing a state of coverage with respect to crowdsourced data – like where to find it and what to make of it – the paper is also a nice illustration of how social scientists become more and more involved in leveraging “big data” from informational infrastructures and from web activity in general. Crowdsourced data, but also initially far less directed, if not accidental, information flows appear increasingly to be data-mined for a variety of purposes, not least by – oops – us.
Check out the paper here.

Infrastructural relics and ruins, or: is durability a good thing?

Since the old times of “inscription research”, or maybe even longer, one of the main frameworks for analyzing the social and cultural shaping of technology, infrastructure and socio-technical arrangements is built on the idea that material enactments of ideological or normative patterns add at least one specific (mostly valuable, sometimes problematic) feature to these otherwise quite unstable phenomena: technology is society made durable (Latour 1991). This “durability bias” has made its way straight from Winner’s “Moses’ Bridges” to Latour’s “Sleeping Policeman”.


When I walked the streets of Athens last summer, and especially the modern ruins of the 2004 Olympic Games stadium complex, I started thinking about an interesting issue with that durability bias that emerges once you turn the problem upside down. All these massive and nearly unused buildings, the immense work of finding (valuable?) ways of reusing this wasteland of steel and concrete – it appeared to me that this is not a case of creative appropriation, but that the sheer stability of this infrastructural setting, localized in a Greek suburb, is creating the need to keep it maintained and used (if only in trivial ways). The backside of infrastructural stability seems to be that the relics and ruins of abandoned infrastructure are just not going away; their stability is a problem, not a solution.


What if that is a far more common issue? We all know about some similar effects: technological pathways, for example, or technological and institutional lock-ins. But the issues we describe with those concepts have one thing in common: they are still with us (like the QWERTY keyboard), and we want to explain why other arrangements are not accepted. But if we start searching for leftovers, ruins and abandoned technologies and infrastructures … what could we learn from them? Are we living in a world littered with institutional waste?

Where’s the fun in infrastructure studies?


While this edited book came out in 2003, I only just learned about it today. The Infrastructure of Play is a book about building tourist locations in cities, or “tourist friendly cities”. The editor is Judd, who also co-edited The Tourist City in 1999.

Increasingly, city tourism plays an important role in urban economics, and thus downtown areas and, in particular, waterfronts have been transformed from purely (if that was ever true) commerce/business-oriented operations into pedestrian-friendly spaces for “hanging out” and places to “take in”. The authors – and this is a strained metaphor – look at what it takes to turn a city into a tourist Mecca (of sorts). In the earlier book, an interesting but not entirely explored idea was hidden in there; a kernel, really, and it goes like this:

As cities become places to play, the authors show, tourism recasts their spatial form. In some cities, separate spaces devoted to tourism and leisure are carved out. Other cities more readily absorb tourists into daily urban life, though even these cities undergo transformation of their character.

You see the tension!? As the city recasts its form, planners must balance changing the city system enough to attract tourists but not so much that that which attracts tourists to the city (especially historical elements/places) is marred. Still, the newer book is all about North American cities, so this tension (given that US cities are just not that old) cannot be fully developed (in my opinion).

As a final comment, another issue struck me while reviewing these titles: they were about people having fun … and upon a little reflection, why is it that so little research on infrastructure is about fun?

4S/EASST Open Panels: Program Practice

Copenhagen is coming closer every day – only 263 days to go until the next 4S/EASST joint meeting takes place this October. Nicholas already announced that we are going to organize a so-called Open Panel on “On states, stateness and STS: Government(ality) with a small ‘g’?”, which can hold up to 15 papers. I repeat the call here: we are still looking for good contributions – please feel free to contact us!


(c) Photo: Tobias Sieben

As there are 106 panels in 10 thematic fields, many of you might have noticed the flood of emails with CfPs on the various STS lists. Although the list of Open Panels is included in the overall CfP, many (we are no exception) felt the necessity to post their calls individually. You might have guessed that this has to do with promoting our own work, but well … no, that is not the reason. The reason is: software.

Here is why: to submit a paper to an Open Panel, the submitter has to tick a checkbox – that is how a paper proposal gets linked to a panel. The problem for us organizers is the following: we do not know what has been submitted yet – there might already be 15 papers, there might be none. We simply cannot see that until the call is closed. So the only way to stay informed is to get individual notifications from those who submit. I guess that is the “secret” reason for individual calls: they are one way to remind potential submitters of who is organizing which panel. We still will not know the exact number of submissions until March 18th, but we can try to get an approximation. So: you would do us a real favor if you could send us a notice when you submit a paper to our panel. Thanks a lot!

Specifying infrastructures cont.

I was just revisiting the earlier post about the Wikipedia page on infrastructures, and the sentiments expressed in the comments about the missing social science and STS references on that page, impressive and elaborate as it is. As far as this blog is concerned, the issue of specifying a common understanding of infrastructures has so far turned out to be, I think, one of its implicit continuous commitments, and one that perhaps merits re-addressing explicitly from time to time. So, very briefly, and slowly gearing up for the 4S meeting, some thoughts on where we are at this point.
On the one hand, there are lots of resources and discourses about infrastructures, drawing in participants from all types of sources and disciplines. On the other hand, there is STS as a field in social science with some maturity, and with various kinds of theory able to bring infrastructures under the auspices of their concepts and terminologies. From time to time, STS scholars, like other social and political scientists, feel like intervening in public discourse by offering their own types of expertise about particular cases and problems of infrastructures. So far, we have not been satisfied that the conceptual work required for an appropriate understanding of infrastructures has already been done, and that we would merely need to extend the application of otherwise well-known concepts to the exploration of infrastructures. Infrastructures can clearly become “normal” cases of networks, assemblages, socio-technical orders, etc., and there is nothing wrong with analyzing them as such. It may, however, also present a danger of locking analyses of infrastructures into foregone conclusions.
Here are a couple of possible lines for discussing specifications of the concept of infrastructures after taking another look at the wiki entry:
– Infrastructures as supporting something (“a society or enterprise”, “an economy”, etc.). Clearly, the idea of an assemblage (network, etc.) supporting something other than itself is worth noticing. General references to use or purpose are, of course, common when talking about all kinds of artefacts, but to speak of such heterogeneous sets of entities in terms of a general, externally given purpose must be puzzling.
– References to a general public. Political issues and the state are very salient on the wiki page despite its focus on economics and engineering, and despite the fact that the definition of infrastructure is given in a way that takes great care to exclude political questions, e.g. speaking of “facilities necessary”, or “services essential” as if these qualifications were unproblematic.
– The differentiation of hard vs. soft infrastructure – can we utilize this differentiation at all? It rings like hard vs. soft facts/science/knowledge, though the implied reference to deconstruction (or rather, the potential ease of it) may be more material, less epistemic in this case – if the connotation is not a straightforward military one. The hard vs. soft differentiation clearly expresses a concern about stability and vulnerability, but is this concern somehow specific when worrying about infrastructure (rather than about truth)?
– Topographical references abound. Is infrastructure always about some association of artefacts and territories, or perhaps, more generally, about technology and place? Like the references to politics, the references to geography are ubiquitous in the wiki entry although they are not explicitly part of the definition at the top.
Would any of these aspects warrant a respecification of infrastructures in a way that would constitute them as a generic class of research objects? Would we even want to have such a class?

A Thought on Data and an Obituary

This NYT article has been on my reading list for a while (some might have noticed that I accidentally posted it twice before). I wanted to share it first (of course) as an obituary, a bow before one of the last century’s most inspiring teachers of programming and computing. But I also wanted to share it because it points those of us interested in the assemblage of contemporary infrastructure to a figure that STS seems to like to forget after getting rid of the myth of the genius inventor: the programmer.

For years, Mr. McCracken was the Stephen King of how-to programming books. His series on Fortran and Cobol, a computer language designed for use in business, were standards in the field. Mr. McCracken was the author or co-author of 25 books that sold more than 1.6 million copies and were translated into 15 languages.

Well, of course not the individual, creative and inventive programmer – I am sure we would step into the same explanatory traps that were connected with the inventor myth. But programming – the core activity of building, connecting and maintaining IT infrastructure – is a cultural practice of its own, a mixture of play, craft and learned or trained skill. And like any practice, it gains stability and cultural significance through the network of activities and things surrounding it: trainings, courses, guidelines, how-to books, textbooks, journals and so on. Maybe it is time that we spend some thought on how this particular practice was shaped – an idea that struck me after reading this:

In the early days, computer professionals typically fell into one of two camps — scientists or craftsmen. The scientists sought breakthroughs in hardware and software research, and pursued ambitious long-range goals, like artificial intelligence. The craftsmen wanted to use computers to work more efficiently in corporations and government agencies. (…) But his books are not like the how-to computer books of more recent years written for consumers. His have been used as programming textbooks in universities around the world and as reference bibles by practicing professionals.

Greatest thing to happen to STS since the Bijker/Pinch paper

New interest in the micro-foundations of institutions has got to be one of the best things to happen to STS since the Bijker/Pinch paper…

The new institutionalism in organizational analysis has been a wellspring for research. A quick summary of neo-I that Fabio Rojas and I wrote (in a paper on museums):

The hallmark of the ‘new institutional’ school is the relentless focus on how life inside organizations is regulated by stable social practices that define what is considered legitimate in the broader external environment in which an organization operates (DiMaggio 1987, 1991, DiMaggio and Powell 1991b, Meyer and Rowan 1991, Scott 2000). The influence of institutions on organizational behaviour is supposedly most obvious in organizations like museums – organizations that new institutional scholars label as ‘highly institutional and weakly technical’ (Scott and Meyer 1991: 124). By this, scholars usually mean the following: that the organization’s leadership is highly sensitive to the expectations and standards of its industry; that the organization of work within the bureaucracy depends on broader ideologies and cultural scripts found in modern societies; that managers are likely to copy the practices of other organizations, especially high-status organizations; that professional groups are the arbiters of organizational legitimacy; that rational organizational myths and rules structure work practices; and that the ultimate performance of an organization’s set of tasks does not depend much on tools like assembly lines, computers, and the like (see also DiMaggio and Powell 1991a, DiMaggio and Powell 1991b).

The new approach/point of emphasis for neo-I folks is laid out by Walter Powell and Jeannette Colyvas in their 2008 chapter in “the big green book” of organizations and institutions – a copy of the paper is available in draft form right here.

And so the story goes:

1. Older research is cast as calling for “the need to make the microfoundations of institutional theory more explicit” (p. 276). This is something that institutional theorists have had much success with – positioning papers to create the feeling that this idea is both something new and exciting and that the call for microfoundations is an old one (that we need now to make good on). The opening lines of D&P’s 1983 paper do a good job of saying “that was then” and “this is now.”

2. The upshot: “much analytical purchase can be gained by developing a micro-level component of institutional analysis” (p. 276), which would link “micro-concepts, e.g. identity, sense making, typifications, frames, and categories with macro-processes of institutionalization, and show how these processes ratchet upwards” (p. 278). The invocation of “hierarchy” or “upward” levels is somewhat disconcerting for those of us set on flatter analysis, but there is likely room to show (and convince) that even the tallest, most stable actors and actions occur locally and laterally on a flat surface of interactions.

3. How can we, in STS, get some purchase on this?

A. Emphasize the interpretations of contextual factors (p. 277) rather than assuming them (as has happened now and again in organizational theory devoted to field-level analysis – these are assumptions that occasionally must be made in order to do the diffusion studies so common in neo-I).

B. Display the ongoing micro-maintenance of apparently stable institutional forms in daily practice AND/OR discover how stable institutional forms in daily practice result in change over time, such that they transform the very forms they are intended (in the behavioralist sense) to prolong.

C. Enliven analysis of actors – old new institutionalism (let’s say) emphasized two types of actors, “cultural dopes” or “heroic ‘change agents’”. The reason is that action was essentially assumed to operate at a level unnecessary to fully capture during large-scale field studies (i.e., managers simply sought legitimacy at all costs, we assumed, and mimicked their peers), OR, in the move to capture the actions of real actors (instead of assuming organizational entitivity), the studies overwhelmingly involved entrepreneurs and celebrated/worshipped their field-altering accomplishments, respectively. The new emphasis (of, let’s say, new new institutionalism) sort of smacks of STS lab studies, where we saw how the mundane facets of scientists’ behaviors in labs resulted in field-altering science. Now, neo-I wants to avoid momentous events or, at minimum, show how seemingly huge events were a long time in the making and, like all experiments, involved loads of failure – which demands of writers the ability to show how local affairs prompt shifts in conventions (locally or broadly) (p. 277).

Why is this so good for STS? We have already done much of this type of work, and have oodles of folks committed to these axioms for analysis. The only thing we really need now is a bridge between these two camps – while STS could not break into neo-I on the topic of technology, Powell and Colyvas might have just opened the door to a new institutionalism in STS…


The role of reviews in the social sciences

All this discussion of new infrastructuralism and the infrastructural relation between “black-boxing” and “institutionalization” makes me think about the role of reviews in social science.

The question is: is a concept what it is, or does how it has been used constitute what it (now) is?

I think this for the following reason: Latour, in Actor Network Theory and After, goes into something of a litany regarding the ways that ANT has been used and abused over the years since he and Callon (and Woolgar, honestly) thought of it. Of his many points, a meta-point matters for this post: he basically states that as his ideas spread, they increasingly got used in ways he did not expect, and then Latour makes something of a value judgment in suggesting that some research, which appears to be relatively more recent compared to his original works, does not do ANT right. Of course, while Latour takes some blame in saying that perhaps the entire moniker – the A, the hyphen, the N, and the T – was not perfect, it still seems like an odd point to hear from him. About 120-ish pages into Science in Action, as part of the translation model of how things spread (i.e., diffuse, though he considers this a dirty word), Latour demands that spread requires change – that a technology, for example, must change as it enters new hands. This was a counterpoint to the diffusion-of-innovations literature (which he hardly cites) and its supposed assumption that diffusion, as an idea and model, only works so long as we assume the innovation is “constant” over time (meaning that it does not and will not change). Getting to the point: ANT was going to have to change to spread so widely, and the ideas would necessarily be used in ways unintended and perhaps unacceptable to its originators.

Again, then, the question is: is a concept what it is, or does how it has been used constitute what it (now) is?

Latour contributed to the notion of "black-boxing" as much as perhaps any scholar of the last 30-ish years. Given that, does his disappointment with how some of us have used his concepts (i.e., this value judgment) really matter? Or does it matter more for science not to judge how concepts have been used and instead to document how they have been used, because the way they have been used is effectively what they are?

Returning full circle, in all this discussion of new infrastructuralism and the infrastructural relation between "black-boxing" and "institutionalization", what would make the best review paper? Review the terms as if they were not artifacts changing hands, in order to conceivably arrive at some core meaning of these concepts? Or review how the terms have been used, on the grounds that this will tell us more about the operational meaning of the terms?

Institutionalism and Infrastructuralism – some first thoughts on differences

In a recent post following Nicholas's thoughts about black-boxing and taken-for-grantedness, and about what that could mean for discussing the benefits of STS and neo-institutional theory, I asked: what is the difference between institutions and infrastructure? Nicholas and I discussed this today for the first time in detail, and we thought it might be worth posting to see if it makes sense.

Neo-institutional theory is — to make a very long story short — based on the question of how many different things (organizations, models, cultural forms) become similar over time. This is the basic problem in DiMaggio/Powell (1983): understanding institutional isomorphism when the impetus of rationalization is taken away. It is the problem that Strang and Meyer worked on when studying the institutional conditions of diffusion (1993). Its central focus was, as Powell argued in 2007, on "the field level, based on the insight that organizations operate amidst both competitive and cooperative exchanges with other organizations." DiMaggio (1988) and Powell (1991) both noted that this was a bit too smooth and that institutional arguments would need a more detailed perspective on contestation, fragility and struggles. Nevertheless, the framework provided a fresh and new way to understand institutions — so productive that it framed a discipline or two.

Infrastructure studies, on the contrary, focused on how things can appear systematic and highly integrated but are actually implemented in many heterogeneous, historically contingent local processes (Bowker/Star 1996; Star/Ruhleder 1996). In some ways, diffusion becomes less important as implementation takes a more central role. Infrastructures are not built by system makers but screwed together loosely from complex arrangements of interfaces, gateways and work-arounds, as Edwards has shown in 2003 and in his fabulous book on climate models (2010). However, there seems to be a tendency to focus on the normalizing and standardizing effects of classification systems implemented in large infrastructural settings — something like the Weberian "iron cage" of infrastructure studies, visible already in "Sorting Things Out" and very strong in the works of Hanseth and Monteiro (1997; Monteiro 1998).

The link seems obvious, doesn't it? Neo-institutionalism starts by looking at heterogeneous stuff and finds it similar — too similar perhaps, so that it sometimes misses the complexity of the social world. But it is a great framework for strong explanations. Infrastructure studies look at systems and find them fragile and fragmented inside. But they seem to lack the "big explanatory" power, which leads to giving up the focus on local multiplicity and emphasizing standardization/normalization instead. Could the strengths of both be combined to get a good grasp of the installation of social order under (high) modern conditions?

Technologies/Black-Boxing and Institutions/Taken-for-granted – A question of levels?

This post started as a comment on Nicholas's post on the museum but became so long that I decided to make it a new post. The debate on black-boxing and "taken-for-grantedness" (or STS & new-institutional theory) tackles some very important points. It reminds me of T. Pinch's (2008) paper on technologies and institutions. Pinch's focus is on the problem of "skill", and he argues that (new) institutional theory — for example, in its micro form as in the works of Fligstein — focuses on only a very small aspect of the ways to make an institution materially stable. Technology, he argues, adds at least a second way because it is black-boxed, not just taken for granted.

The reason why I think this is only partially true is that proposing such an argument is only possible by conflating levels of "taken-for-grantedness". Sociology knows a whole spectrum of ways to make social order become taken for granted: from the taken-for-granted stream of everyday routines and interactions that make up Schütz's lifeworld to Mauss's and Bourdieu's techniques of the body that constitute the habitus, from Berger and Luckmann's (or also Gehlen's) processes of institutionalization to Foucault's episteme, Polanyi's tacit knowledge and Ryle's "knowing how". Technology, if we follow this route, could be added to the book of tactics for making practice become taken for granted — through a very distinct process that has been described as "black-boxing", in which some aspects are packaged and sealed away and others delegated to specialists (for example, for maintenance and repair). It is distinct from at least two other tactics precisely because of the form of this process. Embodying habits and skills, for example, is a process of becoming taken for granted through routine and repetition. Discursive closure is a matter of rhetoric, persuasion and concealment.

Institutions and infrastructures, I suppose, are strategies of "taken-for-grantedness" on a different level: they are hardly stabilized by just one of the discursive, habitual or technological tactics just described. An institution can be based neither on skills alone, nor on legitimizing and regulating discourse alone, nor on technology alone. Hey, we know from a long history of STS research that not even technology can rely on technology alone. Institutions and infrastructures are complex installations — hybrids or monsters, if you will. They both rely on a fragile architecture of "taken-for-grantedness" — of plug-ins. What is the difference, then?

Social significance of gap analysis

Although I’m not entirely sure of its implications for infrastructure, gap analysis seems promising as a research site: despite its widespread use in the management and implementation of software, it remains an untapped and underappreciated workflow-analysis technique in research.

In general, gap analysis takes three forms, each documenting the gap between two states: current versus future, expected versus actual, and perceived versus delivered. The difference between the two states defines the gap, and from such assessments other analyses become possible, such as benchmarking (Boxwell 1994).

The first form is a map. Cartographic representations are mainly utilized in lean management to chart the flows of raw materials — including information — currently necessary to make a product or service available to consumers, so that those flows can be assessed for efficiency and waste. Once areas for improved flow and reduced waste are identified, analysts draw them into a future-state value stream map. The differences between the two states define the gaps, which orient work toward that future condition. This map form of gap analysis was developed at Toyota (Rother and Shook 1999).

The second form is a step chart. Temporality is built into the step chart, which also identifies and compares current practice with a desired future state for the performance of a service or product. Brown and Plenert (2006:319) provide a good example of where a step chart might address the gap between expected and actual states: “customers may expect to wait only 20 minutes to see their doctor but, in fact, have to wait more than thirty minutes.” Step charts lay out the steps necessary to move from current practice to future practice (Chakrapani 1999).

The third form, which is most appropriate for working around packaged software, is a cross-list. Such analyses are most routinely undertaken in consumer research, wherein gap analysis refers to the:

methodological tabulation of all known requirements of consumers in a particular category of products, together with a cross-listing of all features provided by existing products to satisfy existing requirements. Such a chart shows up any gaps that exist (n.a. 2006).

Once cross-listed in table format, the gaps make themselves obvious, and their analysis points to unmet consumer demand which new or poorly marketed products might fulfill. However, prior to the establishment of a cross-list, consumer expectations and experiences must be gathered, for example, through focus-group interviews. Once collected and made to populate a cross-listed table, according to Brown and Plenert (2006:320), “gaps can be simply calculated as the arithmetic difference between the two measurements for each attribute.”
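That arithmetic is simple enough to sketch. The following is a minimal, hypothetical illustration of the cross-list form — the attribute names and scores are invented for the example, not drawn from Brown and Plenert — showing a per-attribute gap computed as the difference between what consumers expect and what they report experiencing:

```python
# Hypothetical cross-list gap analysis: for each product attribute,
# the gap is the arithmetic difference between the expectation score
# and the experience score (invented 1-10 survey-style numbers).

expected = {"wait_time": 8.5, "friendliness": 9.0, "price": 6.0}
actual = {"wait_time": 5.0, "friendliness": 8.5, "price": 6.5}

def gap_table(expected, actual):
    """Return {attribute: expected - actual}, largest gap first.

    A large positive gap flags unmet demand; a negative gap means
    delivery exceeds expectation for that attribute.
    """
    gaps = {attr: expected[attr] - actual[attr] for attr in expected}
    return dict(sorted(gaps.items(), key=lambda kv: kv[1], reverse=True))

print(gap_table(expected, actual))
```

In a table, the same result would simply appear as a "gap" column alongside the two measurement columns, which is why the gaps "make themselves obvious" once the cross-list is assembled.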

Games with a purpose – a new role for human web users?

Just coming back from a few days of fieldwork (preparing ethnographic research in the field of semantic software), I could not help but share something I just learned. It fits quite nicely with what I have written before on the masses of non-human actors that populate the web today (crawlers, spiders, bots) and how the interdependencies between “them” and others (like us) change with the implementation of new web technologies.


Semantic technologies are built to process large numbers of unstructured documents and to automatically find (and tag) meaningful entities. And while these frameworks of crawlers, transforming tools and mining algorithms are actually quite good at finding structure in data, they are still (at least initially; they learn quickly) quite bad at assigning meaningful labels to it. They are quick and good at recognizing that a text is about something, but they are bad and slow at judging ambiguous terms — they fail at understanding. But a recent trend called “gamification” (which has been around for a while, though until recently it was used mainly to encourage users to fill out boring forms) is now a good example of how the configuration of agency is changing on the web today. Human users are asked to play games that help annotate and match ambiguous patterns — tagging pictures, texts, music, etc. So it is not machines doing tasks for humans; it is humans working for machines.

For those who want to try working for them, check out the “Games with a Purpose” website. A paper describing exactly what they do can be found here.