Infrastructure — before and under?

Do infrastructure studies suffer from a before-and-under bias?

The term infrastructure, per my understanding of it, but also as reflected on the wiki site, takes on two basic meanings.

First, infrastructure gets one of its meanings, which doubles as a conceptualization, by being what precedes whatever system it supports; hence, infrastructure comes before the system it facilitates. This might be called “infrastructure as antecedent” (or pre-structure).

Second, infrastructure, perhaps this time in a more etymological sense, is also meant to denote the undergirding that holds a system in place, for example, the layers of earth, sand, and rocks that create the bed on which roadways can be built and, hence, the interlinking arteries of transit can be laid. This might be called “infrastructure as support” (like a cradle, or sub-structure).

Of course, the notion of “sub-structure” is not foreign territory for sociologists. In fact, in numerous lines of scholarship far outside of sociology, there is a belief that something “real” or “raw,” maybe even primordial, can be discovered in the depths or beneath. Likewise, there is precedent in nearly any historical analysis, some executed better than others on this measure, for seeing that which occurs before a given event in need of explanation as powerfully influencing its later form. In this sense, infrastructure almost becomes akin to “pre-structure.”

These two meanings are sometimes used in infrastructure studies to justify the research in the first place: some version of “it came before and powerfully shaped X” or “it is beneath and powerfully supports X.” Sometimes these claims are explicit, but often they are more implicit, such that were the author to receive a review asking “why study this at all?”, one imagines the author would be almost dumbfounded, as if to say “duh.”

Now, infrastructure that comes before the system/material it supports, I suspect, operates according to different dynamics than (a) infrastructure that comes after the system/material it supports (i.e., a system that is imposed on pre-existing materials, such as security infrastructure developed in response to, and not before, serial crimes) or (b) infrastructure that supports from above rather than below (i.e., a system that is not under us, but on us, although an obvious infrastructural equivalent of this eludes me now — imagine the infrastructural equivalent of glasses on a human face, which do not support from below, and instead support on top of or above the person).

Do infrastructure studies suffer from a before-and-under bias? This might be a nice empirical question.

Endre’s first post

Once again, many thanks to the Installing (Social) Order team for inviting me as a guest blogger! Let me start this first post with a short introduction that hopefully helps to situate my research within science and technology studies.

Sociologists and anthropologists of science know a lot about laboratories, innovation centres, museums, design studios, hospitals, and the politics of related material practices, but curiously there’s hardly any STS work that focuses on explicitly political institutions. Perhaps the most notable exception is the thousand-page-long Making Things Public: Atmospheres of Democracy catalogue, edited by Bruno Latour and Peter Weibel. As many readers of this blog probably know, the catalogue was published in 2005 as a companion to a fascinating exhibition held at the Karlsruhe-based Center for Art and Media (http://on1.zkm.de/zkm/stories/storyReader$4581#), but I stumbled upon it only a year later, in the library of Lancaster University. It was the very beginning of my PhD at the Department of Sociology (http://www.lancs.ac.uk/fass/sociology/), and I was looking for studies on political technologies when I discovered the massive blue book on one of the shelves. My idea at the time was to compare three or four distinct political configurations or arrangements (street demonstrations, public debates, election campaigns), but flipping through the essays in the catalogue made me realise that it would be much more interesting to focus on the entity that in one way or another coordinates these arrangements: the parliament. (The term ‘arrangement’ comes from Andrew Barry’s Political Machines.)

I can’t say I immediately had a clear idea about what an STS-informed research of a parliament would look like, but I knew where it could take place. As someone who grew up in Hungary, I remembered that the parliament building in the centre of Budapest was once the largest (and arguably the most impressive) of its kind – quite bizarre for a country that is not only small, but in most political scientists’ view also counts as a ‘new democracy’. Either they are right, I thought, and then props really don’t matter in politics, or the idea that liberal democracy in Central and Eastern Europe fell from the sky in 1989 – like in Peter Sloterdijk’s thought experiment (http://www.g-i-o.com/pp1.htm) – needs to be rethought.

So there was a problem, there was a site, and thanks to a friend from undergraduate times, who started his second term as a Member of Parliament in 2006, soon there was access. The fourth component, funding, came from The Leverhulme Trust, which generously supported a larger research project entitled Relocating Innovation: Places and Material Practices of Future Making (http://www.sand14.com/relocatinginnovation/). I’ll write more about my MP friend and the research project that involved Lucy Suchman, Laura Watts and myself in subsequent posts. For now, let me just say that my fieldwork began in Budapest in 2008 and – somewhat surprisingly – ended in Berlin in 2011. The main idea was very simple: instead of treating the Hungarian Parliament as a local manifestation of liberal democracy as a universal concept, I wanted to understand what liberal democracy was by focusing on the Hungarian Parliament. In practice, however, the research very quickly became very complex. As a sociologist, all of a sudden I had to find ways of relating to architecture, Hungarian history, constitutional theory, political science and political philosophy, while constantly keeping an eye on STS. It was overwhelming.

Between 2008 and 2011 I spent four extended periods doing ethnographic and historical research in and around the Hungarian parliament building, and a longer period as a visiting researcher at the Institute for European Ethnology at Humboldt University (http://www.euroethno.hu-berlin.de/), trying to make sense of my empirical material. Finally, less than two weeks ago I submitted my dissertation, which is entitled Parliament Politics: A material semiotic analysis of liberal democracy. My plan in this space within the Installing (Social) Order blog is not to provide a summary of the dissertation, but to offer some sort of problem map. First I will focus on architecture, and discuss what we can learn about liberal democracy if we concentrate on the construction of the Hungarian parliament building at the end of the 19th century. Then I will briefly recount what happened to this building (and the political reality it was supposed to hold together) in the 20th century in order to highlight some tensions related to the definition of a political community. I’ll then concentrate on the parliament’s role in the current political regime – the Republic of Hungary – and examine some of the most important aspects of the legislative process. After this, I’ll (re-)introduce my MP friend and summarise what I’ve learned from him about political representation, which sometimes takes place in the parliament building, but at other times in TV studios, party congresses, street demonstrations, and various other places. All of my stories will be full of political objects, but the picture wouldn’t be complete if I remained silent about political subjects. This part will be a little complicated, because I don’t think STS is very well equipped to deal with questions related to citizenship, but I might be wrong. In the end, I’ll say something about the implications of my research, and the Relocating Innovation project in general, which will probably coincide with a workshop I’m going to attend at MEDEA at Malmö University at the end of October (related to this event: http://medea.mah.se/2011/09/medea-talks-presents-lucy-suchman/).

I hope these posts will be useful and entertaining, and generate some interesting discussions! If you have any questions and/or suggestions, please don’t hesitate to post them as comments or send them in an email to edanyi -at- gmail.com.

Guest Blogger: Endre Danyi

Endre Dányi is going to join the blog for the month of October. He is a student of Lucy Suchman and John Law at Lancaster University’s Department of Sociology, and he writes on what I shall dare call “the parliament multiple.” Like Annemarie Mol’s work on the “body multiple,” Endre’s work aims to capture the ontologies of parliament, and he does this with some attention to the interface of the future and the past.

Join me in welcoming Endre Dányi to Installing (Social) Order!

NOTE: A short snippet about his Ph.D. work:

What is a parliament? And how does it work? In order to answer these questions I suggest that we consider ‘the parliament’ not as a general metaphor for democratic politics, but as a specific site that lies at the intersection of distinct political imaginaries. Following a material semiotic approach, my research focuses on the Hungarian Parliament – a hundred-year-old socio-technical assemblage that at the time of its opening was the largest parliament in the world. Building upon recent works in science and technology studies (STS) and cultural anthropology that conceive of politics as a set of located material practices, I argue that this seemingly singular iconic site in Budapest sometimes functions as an historical monument, sometimes as a professional organisation, and sometimes as an elaborate set for politicians. Based mainly on ethnographic and archival research, I examine the ways in which versions of a national past, the workings of a political regime, and acts of decision-making get materialised in the Hungarian Parliament, and the political futures that these narratives render real(istic) while keeping others invisible.

Kathryn Furlong on infrastructure

A new paper by Kathryn Furlong is out in Sage’s “Progress in Human Geography,” titled “Small technologies, big change: Rethinking infrastructure through STS and geography.”

The abstract reads:

Infrastructure tends to be conceived as stabilized and ‘black-boxed’ with little interaction from users. This fixity is in flux in ways not yet fully considered in either geography or science and technology studies (STS). Driven by environmental and economic concerns, water utilities are increasingly introducing efficiency technologies into infrastructure networks. These, I argue, serve as ‘mediating technologies’ shifting long-accepted socio-technical and environmental relationships in cities. The essay argues for a new approach to infrastructure that, by integrating insights from STS and geography, highlights its malleability and offers conceptual tools to consider how this malleability might be fostered.

While the author might be a little hard on STS, stating:

STS tends to privilege the technical and thus often exhibits less refined approaches to social, political, and economic processes, has little to say on the production of nature, and exhibits ‘a rather generic notion of space’ and place (Truffer, 2008: 978)

It is still well worth the read, especially given the necessity of considering geographic issues, which might be a way into the matters of scale we so recently discussed here.

States in the news

This morning’s New York Times on-line features an article titled “New State Laws Are Limiting Access for Voters,” and it presents or conceptualizes the infrastructural entity of “the state” in a couple of interesting ways.

On the one hand, states are presented in the journalist’s portrayal as active agents, in this case, passing laws.

Five states passed laws this year scaling back programs allowing voters to cast their ballots before Election Day, the Brennan Center found.

On the other hand, this hard work was the networked outcome of competing representatives with diverse interests.

Republicans, who have passed almost all of the new election laws, say they are necessary to prevent voter fraud, and question why photo identification should be routinely required at airports but not at polling sites. Democrats counter that the new laws are a solution in search of a problem, since voter fraud is rare. They worry that the laws will discourage, or even block, eligible voters — especially poor voters, young voters and African-American voters, who tend to vote for Democrats.

Somewhere in the middle, we see “the state” as both an actor, capable of passing laws, and an effect of networked practices and representations, as evidenced by the now law-enforced presentation of government-issued identification cards at voting booths (the state being more like the effect, rather than the cause).

The biggest impact, the Brennan Center said, will be from laws requiring people to show government-issued photo identification to vote. This year, 34 states introduced legislation to require it — a flurry of activity that Jennie Bowser, a senior fellow at the National Conference of State Legislatures, called “pretty unusual.”

In reflecting on these issues, I am reminded of a dichotomy in the literature about states and statehood. Sometimes the state is presented as an empty signifier capable of action, as evidenced in its ability to “pass laws this year.” In contrast, sometimes states are defined by their effects, as evidenced by the now law-enforced presentation of government-issued identification cards at voting booths. In our final example, we see one of two things: either an actor-network (where the state is conceptualized as an actor because it is a network) or a register-shift, meaning that the state registers as an actor during certain actions as a shortcut in presenting ideas, but also as a networked entity composed of competing actors incapable of the concerted effort that might otherwise be called “state action.”

There will be an infrastructure.

Check out this recent video from the FuturICT group in which Paul Lukowicz presents his take on the project from the perspective of a computer scientist.
A lot of the issues we have been discussing come up very explicitly in this bit. Particularly interesting, I think, is how emergent structures on the one hand and purposefully built infrastructure on the other are being renegotiated conceptually, and how both are finally associated with the use of a platform by a potentially infinite and yet unknown set of users: “there will be an infrastructure” which all kinds of people may contribute to, may use to run their own projects on, may build their own apps on, and so on. This particular aspect of infrastructure as platform may be worth exploring further, as it has all kinds of conceptual implications, and an interesting political undertone.

A couple more videos are linked at the site, for example a ten-minute promo of the project.

Coordination as an enduring infrastructure problem

Many thanks to my mentor, Alice Robbin of Indiana University’s School of Library Sciences, for turning me onto this interesting new paper by Nancy C. Roberts.

Beyond Smokestacks and Silos: Open-Source, Web-Enabled Coordination in Organizations and Networks

What accounts for coordination problems? Many mechanisms of coordination exist in both organizations and networks, yet despite their widespread use, coordination challenges persist. Some believe the challenges are growing even more serious. One answer lies in understanding that coordination is not a free good; it is expensive in terms of time, effort, and attention, or what economists call transaction and administrative costs. An alternative to improving coordination is to reduce its costs, yet there is little guidance in the literature to help managers and researchers calculate coordination costs or make design decisions based on cost reductions. This article explores two cases — the U.S. Patent and Trademark Office’s Peer-to-Patent pilot program and the online relief effort in Haiti following the devastating earthquake there in 2010 — to illustrate the advantages and constraints of using Web 2.0 technology as a mechanism of coordination and a tool for cost reduction. The lessons learned from these cases may offer practitioners and researchers a way out of our “silos” and “smokestacks.”

Now, I’m not totally convinced that the author is suggesting that infrastructure is the answer to reducing the transaction costs associated with coordination efforts. However, the claim, which seems well substantiated to me, that the challenges facing those attempting to coordinate are growing “even more serious” amid ever more complex webs of people, places, and things seems like a valuable position to take for those of us writing on infrastructure.

The position can be used to justify infrastructure research. Why is this needed? All too often, I see papers on infrastructure that must justify their raison d’être, and their justification is little better than “duh, it’s infrastructure,” “it’s the reason other stuff can work,” or “it’s the stuff that civilization is made of.” However, I am dissatisfied with all of these reasons, even though I share the personal sentiment, especially “duh, it’s infrastructure.”

Of course, there are a variety of reasons that we might want to investigate/examine infrastructure, especially for theoretical purposes. However, scholars tend to fail to justify their research on a more general or social level, and this position on the transaction costs associated with coordination is probably a decent one to start from.

Write tilt-shift

Seeing this interesting tilt-shift video of a city, I was reminded that scale is a salient issue regarding infrastructure, as infrastructural entities often exist at such massive scale that it is difficult to “humanize” them or make them “knowable” to people.


Similarly:

Regarding numbers, sometimes numbers are so large that their meaning (or reality) is compromised by their sheer size/scale and they become somehow unknowable.

Regarding art, sometimes a painting or sculpture can be so large that it fails to relate to human viewers.

All of this reminds me of the first time I read de Certeau’s “The Practice of Everyday Life” and, in particular, the section on walking in cities. The experience of walking in a city is quite different from that of viewing maps of the city or aerial views. Unlocking the scale issue for infrastructure is quite important, especially the emphasis on massiveness, but, as this video indicates (although only through implication and extension), as scholars we may need to find a way to write tilt-shift about infrastructure.

How brains work in imaginary worlds

A colleague of mine, Eric Charles (psychology), recently posted some thoughts about how brains work in the world of Marvel’s X-Men. It occurs to me that the modest dose of expertise and cognitive psychology that oftentimes makes it into our classes on STS might be meaningfully enhanced if we teach students a lesson like this about the X-Men.

How memory works might be a fruitful avenue: the role of memory and what it means to “know” something, what expertise might mean if we consider the brain/body relationship, and, of course, one gets to talk about Wolverine during class in the process.

Specifying infrastructures cont.

I was just revisiting the earlier post on the Wikipedia page about infrastructures, and the sentiments expressed in the comments about the missing social science and STS references on that page, impressive and elaborate as it otherwise is. As far as this blog is concerned, the issue of specifying a common understanding of infrastructures has so far turned out to be, I think, one of its implicit continuous commitments, and one that perhaps merits re-addressing explicitly from time to time. So, very briefly, and slowly gearing up for the 4S meeting, some thoughts on where we are at this point.
On the one hand, there are lots of resources and discourses about infrastructures, drawing in participants from all types of sources and disciplines. On the other hand, there is STS as a field in social science with some maturity, and with various kinds of theory able to bring infrastructures under the auspices of their concepts and terminologies. From time to time, STS scholars, like other social and political scientists, feel like intervening in public discourse by offering their own types of expertise about particular cases and problems of infrastructures. So far, we have not been satisfied that the conceptual work required for an appropriate understanding of infrastructures has already been done, and that we would merely need to extend the application of otherwise well-known concepts to the exploration of infrastructures. Infrastructures can clearly become “normal” cases of networks, assemblages, socio-technical orders etc., and there is nothing wrong with analyzing them as such. It may, however, also present a danger of locking analyses of infrastructures into foregone conclusions.
Here are a couple of possible lines for discussing specifications of the concept of infrastructures after taking another look at the wiki entry:
– Infrastructures as supporting something (“a society or enterprise”, “an economy” etc.). Clearly, the idea of an assemblage (network etc.) supporting something other than itself is worth noticing. General references to use or purpose are, of course, common when talking about all kinds of artefacts, but to speak of such heterogeneous sets of entities in terms of a general, externally given purpose must be puzzling.
– References to a general public. Political issues and the state are very salient on the wiki page despite its focus on economics and engineering, and despite the fact that the definition of infrastructure is given in a way that takes great care to exclude political questions, e.g. speaking of “facilities necessary”, or “services essential” as if these qualifications were unproblematic.
– The differentiation of hard vs. soft infrastructure – can we utilize this differentiation at all? It echoes hard vs. soft facts/science/knowledge, though the implied reference to deconstruction (or rather, the potential ease of it) may be more material, less epistemic in this case – if the connotation is not a straightforward military one. The hard vs. soft differentiation clearly expresses a concern about stability and vulnerability, but is this concern somewhat specific when worrying about infrastructure (rather than about truth)?
– Topographical references abound. Is infrastructure always about some association of artefacts and territories, or perhaps, more generally, about technology and place? Like the references to politics, the references to geography are ubiquitous in the wiki entry although they are not explicitly part of the definition at the top.
Would any of these aspects warrant a respecification of infrastructures in a way that would constitute them as a generic class of research objects? Would we even want to have such a class?

Job offer: Amherst: Science and Technology Policy

And a job offer that sounds interesting:

The Department of Political Science at the University of Massachusetts Amherst (http://polsci.umass.edu/) seeks to fill a full-time tenure-track position at the rank of Assistant Professor in science and technology politics, to start in September 2012. The Department welcomes applications from political science, public policy, and public administration, as well as from related disciplines. Geographic, methodological, and science and technology specializations are open.

In recent years, the Department has nearly doubled in size, largely through a Faculty Hiring Initiative. This search continues the department’s efforts to add to the strength of its diverse and growing faculty with scholars whose work addresses broad political questions arising in one or more of the department’s thematic emphases on a) global forces; b) governance and institutions; and c) democracy, participation, and citizenship.

The successful candidate will contribute to this trajectory, adding to our current strengths while broadening our reach into new areas. The faculty hire will teach four courses in the Department’s graduate and undergraduate programs. Successful candidates must have the Ph.D. in hand by September 2012. Salary and credit toward tenure will be commensurate with qualifications and experience.

The deadline for applications is October 15, 2011, but applications will be accepted until the position is filled. The department strongly prefers that applicants submit their cover letter, curriculum vitae, and writing samples in electronic form through the Academic Jobs Online website at https://academicjobsonline.org/ajo/jobs/917 and arrange for electronic transmission of three letters of recommendation to the same site. Alternatively, printed versions of the application materials can be sent to Stephen Marvell, Office Manager, Department of Political Science/UMass, 322 Thompson Tower, Amherst, MA 01003-9277. Those who apply online should not also submit paper materials. Inquiries about the position may be directed to technology@polsci.umass.edu.

The University of Massachusetts Amherst is an Affirmative Action/Equal Opportunity employer. It and the Department are strongly committed to increasing the diversity of faculty, students, and curriculum, and encourage applications from women and minorities.

FuturICT – an epistemic infrastructure in the making

An interesting endeavour that, I think, is well worth checking out was recently brought to my attention: the FuturICT project. I will just give you a sample of quotes from the website, and you will immediately see that this project relates in more than one way to the topic of this blog:
“FuturICT wants science to catch up with the speed at which new problems and opportunities are arising in our changing world as consequences of globalization, technological, demographic and environmental change, and make a contribution to strengthening our societies’ adaptiveness, resilience, and sustainability.  It will do so by developing new scientific approaches and combining these with the best established methods in areas like multi-scale computer modeling, social supercomputing, large-scale data mining and participatory platforms. (…) The FuturICT Knowledge Accelerator is a previously unseen multidisciplinary international scientific endeavour with focus on techno-socio-economic-environmental systems. (…) Revealing the hidden laws and processes underlying societies probably constitutes the most pressing scientific grand challenge of our century and is equally important for the development of novel robust, trustworthy and adaptive information and communication technologies (ICT), based on socially inspired paradigms. We think that integrating ICT, Complexity Science and the Social Sciences will create a paradigm shift, facilitating a symbiotic co-evolution of ICT and society. Data from our complex globe-spanning ICT system will be leveraged to develop models of techno-socio-economic systems. In turn, insights from these models will inform the development of a new generation of socially adaptive, self-organized ICT systems. (…) The FuturICT flagship proposal intends to unify hundreds of the best scientists in Europe in a 10 year 1 billion EUR program to explore social life on earth and everything it relates to.”
Basically, as it appears to me, FuturICT is a call to arms of sorts for social scientists of all persuasions to do something with the myriad of data our current ICT systems are producing. The aim is to build an epistemic infrastructure, or rather a range of infrastructures, that would put all these data to use. One of the interesting things is that everybody is at this point invited to join in, though the emphasis is clearly on building a large network of institutions, the current state of which you can see here. It very probably will not hurt, though, to leave your name, affiliation, and expertise, if only to be updated as things progress. Some of the information provided at the website does sound kind of sci-fi, some of it kind of eerie, but believe me, as I happen to know some of the people involved: these people are very serious – and they are very capable. So, I am very curious what this will grow into.
One thing this made me wonder about with respect to the exploration of infrastructures in general was whether we have been giving quantity enough thought. The impetus for the FuturICT initiative is the mass of data already available, and the rationale is that the very fact of having these data not only will support an epistemic infrastructure but also constitutes an outright demand for it. Is this not something which distinguishes infrastructures (e.g., infrastructures for traffic, services, or electric power) from other types of networks and socio-technical assemblages: that there is some input or throughput, that it comes in high numbers, and that developer-entrepreneurs try to establish infrastructures as complements or as purpose-giving or profit-generating tools with respect to the throughput?

Innovation in book reviews

A few months ago, Jan-Hendrik and I were discussing the utility of writing book reviews. One concern we had was that book reviews basically do nothing for one’s academic standing; but more than that, in thinking about book reviews themselves, we were frustrated because, unlike journal articles, they rarely reference other reviews of the same book.

So, we wrote a book review that did, to test if there was any value to this. We enlisted a student of mine, Alexander Kinney, and we set to work writing a book review that included other reviews of the same book.

We wrote our editors:

To the editors,

Please see a book review of Latour’s “Reassembling the Social.” While my co-authors realize that the length is somewhat past the desired 1000 words, we hope that you find the document satisfactory. It employs a somewhat unorthodox approach where other book reviews are cited where appropriate so that we can essentially “review what has not yet been sufficiently reviewed by other reviewers.” Additionally, we ask for a small editing consideration for adding a small “box” around a subset of identified text (this mirrors what was done in Latour’s book). I know that this is an unorthodox review, and hope that the innovation is tolerated. Still, we are prepared to make amends if this document does not meet the standards of the journal.

best,
Nicholas J. Rowland

So, it has gone through a couple rounds of editing and is now in the proofs stage (please note that we realize that Latour’s book was written in 2005 [not 2007, as the title currently states]). This new approach to book reviews also requires that one review a book a few years after publication rather than soon after publication, so that the other reviews can be written and, to some extent, responded to.

Here is the document below; feel free to comment on the approach, style, or content:

Teaching STS: Reinvention and Modification

I saw this in a student presentation yesterday about the role of adaptation in the process of diffusion, where we were discussing matters of re-invention and post-hoc modification/workarounds. I was somewhat stunned, and the students in the class were mesmerized:

[Image: a C5 missile launcher adapted for mounting on the rollbar of a truck]

What you see in the image above is a C5 Russian missile launcher removed from its “aircraft source” and then adapted/modified for use on the rollbar of a jeep/truck. There is also a video, below the image, available at reposter here.

Another of these “DIY” wartime inventions is a hand-held grenade launcher modified for individual use (the source being a slew of them mounted on the bed of a truck).

[Image: a hand-held grenade launcher modified for individual use]

All of these examples, along with the videos, could be used in lessons about diffusion and re-invention, of course. My approach, however, would be to ask the students: how does this make you rethink some of the ideas scholars have about diffusion and re-invention? Certainly, the old, fun STS idea about “using technologies in ways not originally intended by designers” is a good one here, but beyond that one could begin to rethink what one might call the “quick and easy” story of diffusion that seems to dominate the basic literature. I’m speaking here about the binary “1 for adoption, 0 for non-adoption” interpretation of spread. It becomes useless to think about C5 missile launchers in this way. Bringing up the old work of Akrich (1995, solar cells) and the newer work of De Laet and Mol (2002, hand-pump) leads to a much more nuanced vision of re-invention, modification, and localization, but is even that enough? The role of “necessity” seems obviously right, but analytically weak, as distinguishing “moments of necessity” from conditions of non-necessity is a dead end for research. Taking a Weberian approach and forcing a claim like “because of their geopolitical circumstances and cultural approach to the world around them, common Libyans are relatively more ‘resourceful’ than their governmental/military counterparts” also seems analytically weak. Is this a classic case of the “drifting edges of global networks coming together unintentionally and unexpectedly” to produce this outcome, as in: Soviet degeneration leads to the global sale of ersatz military resources that almost nobody can maintain and that are therefore cheap, creating the conditions under which the only way to get additional utility out of these machines is to remake their uses? I’m not even sure what one would call that sort of analysis … “luck theory”? The motivation behind any modification, reinvention, or workaround appears to be some combination of the need to localize and/or extend the utility of something (or a portion of something). Trying to determine the motivation beyond mere “necessity” or “resourcefulness” is difficult to do. In this case, survival is an obvious motivating factor; however, extending that to a broader framework seems foolhardy too. So, “where does reinvention come from?” ought to be an enduring question for our students and ourselves in STS…

Please note: reposter.net is a reposting site, so the original material always comes from somewhere else:

Here are the videos, in order and linked to the original posts on alive.in/libya

Teaching STS: Challenging Technological Determinism in Caliente, NV

If you were raised on STS in America, then it is likely that you read about the death of a train town named Caliente, NV. This is:

Death by Dieselization: A Case Study in the Reaction to Technological Change
W. F. Cottrell
American Sociological Review, Vol. 16, No. 3 (Jun., 1951), pp. 358-365
Published by: American Sociological Association
This is not a bad read, and it is easy for instructors to challenge on the grounds of “technological determinism” on two counts:
1. the town did not die because of the out-of-control technological advance of locomotives; rather, the government-military complex invested heavily in diesel locomotives as part of mid-century wartime efforts (potentially even linking technological advance with patriotism, such that any resistance to the technology was seen as anti-American).


2. the town did not die because of the out-of-control technological advance of locomotives; rather, like so many towns of this age and sort, it had a uni-dimensional economy, such that the town was susceptible to any new technology that challenged the source of its economic security.
I like to emphasize the account of advances during the steam-train era, however, as they are even more telling about this “technological determinism” that seems so easy for students to swallow. Sure, Cottrell shows how Caliente, NV, was undone by the advent and subsequently quickened spread of diesel trains on the American landscape.


However, during advances to the steam train, by which I mean the shift from low-tensile to high-tensile boilers (and this is somewhat simplistic for train buffs, so please forgive me), it was towns like Caliente, NV, that gained the most! A student and I created a set of PowerPoint slides to explain this: check it out here (note: you’ll have to download it to see the animation — the small white dots are “towns” set every 100 miles from the port town).
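For those without the slides handy, here is a minimal sketch, in Python, of the logic behind the animation. The 100-mile town spacing comes from the slides; the line length and the two engine ranges are illustrative assumptions of mine, not historical figures.

```python
# Minimal sketch of the slides' logic: "towns" sit every 100 miles from
# the port, and a train must stop for service before exhausting its range.
# Towns that host service stops thrive; towns that get skipped decline.
# The line length and engine ranges below are illustrative assumptions.

TOWN_SPACING = 100   # miles between towns, per the slides
LINE_LENGTH = 1000   # hypothetical length of the rail line, in miles

def service_stops(engine_range):
    """Return the mile markers of towns where a train must stop for service."""
    stops = []
    position, remaining = 0, engine_range
    while position < LINE_LENGTH:
        position += TOWN_SPACING
        remaining -= TOWN_SPACING
        if remaining < TOWN_SPACING:  # cannot reach the next town: stop here
            stops.append(position)
            remaining = engine_range
    return stops

# Short-range engines (early, low-tensile boilers): many towns get traffic.
print(service_stops(200))  # [200, 400, 600, 800, 1000]
# Long-range engines (high-tensile boilers, then diesel): most towns are skipped.
print(service_stops(500))  # [500, 1000]
```

The point of the exercise is simply that as engine range grows, service stops thin out, and with them the economic base of towns like Caliente.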


As some of you know, I work in Altoona, PA, which was once a heart of the Pennsylvania Railroad. Altoona, to some extent, suffered a death similar to Caliente’s, to use Cottrell’s words.

Call for Papers: Performing ANT – Socio-Material Practices of Organizing, 17-18 February 2012, St. Gallen

Just got this a few days ago and forgot to post it here – now, as I am preparing for three weeks of “off-time” (meaning: a bit of traveling and weeks of being online only once every few days), I had to post it.

Reading it, I thought: what does it mean that workshops that specifically use “ANT” in their title are mostly workshops for younger scholars? Just wondering…

Teaching STS with "A Fistful of Quarters"

One way I teach students the philosophy of science is by using the documentary “The King of Kong: A Fistful of Quarters.”

[Image: The King of Kong: A Fistful of Quarters film poster]

Storyline

In the early 1980s, legendary Billy Mitchell set a Donkey Kong record that stood for almost 25 years. This documentary follows the assault on the record by Steve Wiebe, an earnest teacher from Washington who took up the game while unemployed. The top scores are monitored by a cadre of players and fans associated with Walter Day, an Iowan who runs Funspot, an annual tournament. Wiebe breaks Mitchell’s record in public at Funspot, and Mitchell promptly mails a controversial video tape of himself setting a new record. So Wiebe travels to Florida hoping Mitchell will face him for the 2007 Guinness World Records. Will the mind-game-playing Mitchell engage; who will end up holding the record?

The film is full of ideas from the philosophy of science. For example, logical positivists were obsessed with (1) establishing theories only from data and (2) considering what evidence either falsifies or verifies a theory. In the film, Steve Wiebe, the up-and-comer in the world of competitive gaming, sends a score in to Walter Day, the guy who runs the world record center, but the score is ultimately rejected because, while the videotape recording appeared legitimate, the machine he was playing on was questionable. This one is good for the falsificationists too: the score he had could not be verified because of questions concerning the video game machine he used; however, because there was no concrete evidence — merely a hunch — of tampering, the score could not be entirely falsified either. Consensus among a group of experts emerged upon reviewing the evidence of Steve’s claim to have the new highest score on Donkey Kong. This nicely emphasizes the role of experts and how consensus over reality is as important as “reality” itself.

Now, thinking all the way back to Shapin’s work on early laboratories and experiments, Steve is invited to attend an annual competition where he can achieve his highest score “live” so that all the other experts can witness firsthand his skill at Donkey Kong. He does, and the entire community of competitive gamers more or less warms to the newcomer. This is not a bad lesson in the role of social connections and the acceptance of newcomers in science. This is a place to begin discussions of Merton’s norms of science and, in particular, disinterestedness. However, there is much more to say about functionalism. His competitor, Billy Mitchell, the previous record holder and longstanding insider, sends in, at the last possible moment, a videotape of a score that beats the score Steve just accomplished in person. Merton reminds us that what is good for science tends to advance it. In this case, what’s good for Walter Day and competitive gaming also happens to be what’s good for Billy Mitchell. Billy’s sketchy video score is accepted and immediately posted on-line for the world of competitive gamers to see. Additionally, and in violation of the norm of communism, Billy’s tape is not shared with Steve, even though Steve’s original tape, which was rejected, was shared with Billy.

The documentary is also funny in places, and it does a nice job showing how a group of gaming experts arrive at conclusions about the nature of reality through norm following, norm violation, and, importantly, consensus. If you teach STS, check it out; I’ve even got a sheet prepared for students to follow along (write me at njr12@psu.edu if you’d like to see it). And if you’re just curious, check it out anyway.

One closing remark: those old games like Donkey Kong required a very different skill set compared to contemporary games like Halo or Neverwinter Nights. It is nice to remind new students that games used to be hard in a much different way.

Personal Health Records and patient-oriented infrastructures

International workshop on Personal Health Record 

Personal Health Records and patient-oriented infrastructures

Empowering, involving, and enrolling patients through information systems

Trento, Faculty of Sociology 

via Verdi, 26 

12-13 December 2011

Deadline for abstract submission: September 30th 2011

Notification to authors: October 15th 2011

Personal Health Record (PHR) has become a popular label referring to a wide range of patient-controlled information systems aimed at allowing laypeople to access, manage, share, and supplement their medical information. Launched in the US at the beginning of the new millennium, PHRs are spreading in Europe (especially in the UK and Scandinavia), where one witnesses an increasing number of experimental systems that vary to suit the local healthcare context. Nevertheless, these technologies appear to be in their infancy, as clearly demonstrated by the low rate of PHRs actually implemented in real-life settings compared with the (relatively) high number of trials.

Whilst there is still little evidence that PHRs may affect healthcare, they are regarded by different actors (policymakers, healthcare managers, patients’ associations, doctors) as “holding out great promise” to revolutionize it by reducing medical errors, cutting costs, increasing patient awareness and control over their health, and providing physicians with information in emergency situations – to mention only some of the potential benefits. This new ‘patient role’, proactive and characterized by greater control and responsibility over one’s health, is reinforced by the very existence of an electronic tool, suggesting that these new activities require an information system somehow similar to those used by doctors. The name itself, PHR, recalls the acronyms for the standard healthcare systems – EHR (Electronic Health Record) and EPR (Electronic Patient Record) – and thus affirms that it belongs within the semantic space of professional tools.

PHR systems are becoming the point of convergence among different visions concerning the future of healthcare systems characterized by the (desired) emergence of ‘new patients’ willing to share the burden of care and to reshape their relationships with doctors and institutions. Accordingly, PHR can be considered an interesting lens through which social informatics researchers can examine the tentative transformation of different dimensions of the healthcare sector.

We believe that the time has come to engage in debate on these technologies, which are increasingly presented by policymakers and healthcare systems managers as the “next big thing” in healthcare. It is necessary to move away from a mere technocentric perspective (like the one sometimes provided by medical informatics) in order to bring the actors, their work/daily practices, and the meanings attached to them, back into play.

The purpose of this workshop is to gather together scholars, practitioners, and professionals who reflect and work on PHR from different perspectives in different countries. Whilst some interesting socially-informed studies have already been presented and published, to our knowledge no attempt has yet been made to create an opportunity for dialogue among them.

We welcome contributions about, but not limited to, the following themes:

· the design of patient-centered IS and their integration with professional ones;

· new forms of computer-mediated doctor-patient or patient-to-patient communication;

· the evolution of healthcare infrastructures and organizations, and the creation of new representations of health/illness;

· new forms of alignments and conflicts between self-care practices and institutional treatment;

· the redefinition of responsibilities and roles within the network of patients, doctors, institutions, and caregivers;

· the extent to which patients use PHRs to generate data for use in patient-doctor and patient-patient communication;

· the extent to which health professionals make use of patient-generated data from PHRs.

Abstracts (max. 1500 words) should be sent to phr@unitn.it

More information is available at http://events.unitn.it/en/phr2011 or can be obtained by contacting the organizers at phr@unitn.it.

We plan to select the best abstracts and presentations and invite their development into full papers to be submitted for a special issue on the topic. Further information will be given during the workshop or before it on the website.

Organizers:

Silvia Gherardi, Faculty of Sociology silvia.gherardi@unitn.it

Enrico Maria Piras, Fondazione Bruno Kessler piras@fbk.eu

Alberto Zanutto, Faculty of Sociology alberto.zanutto@unitn.it


Teaching STS: Controversies

Teaching controversies is a mainstay of STS; if you need a good film to show, check out “Judgment Day: Intelligent Design on Trial,” replete with Steve Fuller weighing in on intelligent design…

Also, I have a handout already made to help students navigate the documentary. Write me if you’d like a copy or if you’ve used this clip in your own courses (send to: njr12 at psu.edu).

Science and Technology Studies: Opening the Black Box

Somatosphere just posted a link to a set of video recordings from the STS – The Next Twenty Years conference at Harvard last April. I would have loved to go, but unfortunately poor European scholars only have money to travel abroad when they are participating actively. Luckily, though, the whole conference was live-streamed back then. I was not able to watch all of it, so I am very happy to be able to watch the recordings now. Trevor Pinch’s “provocations” are STS at its rhetorical best – so watch, laugh, and think.

Should STS articles have methods sections?

It has come to my attention that a good number of STS case studies contain no methods section, and some no mention of method at all (though they typically utilize a case study approach). So, I asked today:

Should STS adopt the traditional social scientific methods/data/analysis sections, or is the implied case-study method acceptable, or perhaps even a critique of science “as usual”?

So, should, for example, SSS or STHV require a methods section?

Sergio Sismondo on black-boxing and taken-for-grantedness

Concern over the relationship between processes of black-boxing and gradual taken-for-grantedness has been expressed a bunch of times on this blog — here, here, and here.

Gearing up to teach STS to mainly engineering students today has me reading Sismondo’s intro text — and in Chapter 11, on the topic of “controversies,” he lays out the terms as follows:

Science and technology produce black-boxes, or facts and artifacts that are taken for granted; in particular, their histories are usually seen as irrelevant after good facts and successful artifacts are established (2010:120).

It is nice that the world of ideas in science is not automatically labeled “taken for granted” (when facts are momentarily settled) and the world of things in engineering is not automatically labeled “black boxed” (when artifacts are momentarily settled), so that the distinction is not reified (i.e., that facts are only taken for granted and that machines are only black boxed).

However, the two terms seem to be synonyms to Sismondo — do you agree with Sergio?

A Thought on Data and an Obituary

This NYT article has been on my reading list for a while (some might have noticed that I accidentally posted it twice before). I wanted to share it first (of course) as an obituary, as a bow before one of the last century’s most inspiring teachers of programming and computing. But I also wanted to share it because it points those of us who are interested in the assemblage of contemporary infrastructure to a figure that STS seems to like to forget after getting rid of the myth of the genius inventor: the programmer.

For years, Mr. McCracken was the Stephen King of how-to programming books. His series on Fortran and Cobol, a computer language designed for use in business, were standards in the field. Mr. McCracken was the author or co-author of 25 books that sold more than 1.6 million copies and were translated into 15 languages.

Well, of course not the individual, creative, and inventive programmer – I am sure we would step into the same explanatory traps that were connected with the inventor myth. But programming – the core activity of building, connecting, and maintaining IT infrastructure – is a cultural practice in its own right, a mixture of play, craft, and learned or trained skill. And like any practice, it gains stability and cultural significance through the network of activities and things surrounding it: trainings, courses, guidelines, how-to books, textbooks, journals, and so on. Maybe it is time that we spend some thought on how this particular practice was shaped – an idea that struck me after reading this:

In the early days, computer professionals typically fell into one of two camps — scientists or craftsmen. The scientists sought breakthroughs in hardware and software research, and pursued ambitious long-range goals, like artificial intelligence. The craftsmen wanted to use computers to work more efficiently in corporations and government agencies. (…) But his books are not like the how-to computer books of more recent years written for consumers. His have been used as programming textbooks in universities around the world and as reference bibles by practicing professionals.

Greatest thing to happen to STS since the Bijker/Pinch paper

New interest in the micro-foundations of institutions has got to be one of the best things to happen to STS since the Bijker/Pinch paper…

The new institutionalism in organizational analysis has been a wellspring for research. A quick summary of neo-I that Fabio Rojas and I wrote (in a paper on museums):

The hallmark of the ‘new institutional’ school is the relentless focus on how life inside organizations is regulated by stable social practices that define what is considered legitimate in the broader external environment in which an organization operates (DiMaggio 1987, 1991, DiMaggio and Powell 1991b, Meyer and Rowan 1991, Scott 2000). The influence of institutions on organizational behaviour is supposedly most obvious in organizations like museums – organizations that new institutional scholars label as ‘highly institutional and weakly technical’ (Scott and Meyer 1991: 124). By this, scholars usually mean the following: that the organization’s leadership is highly sensitive to the expectations and standards of its industry; that the organization of work within the bureaucracy depends on broader ideologies and cultural scripts found in modern societies; that managers are likely to copy the practices of other organizations, especially high-status organizations; that professional groups are the arbiters of organizational legitimacy; that rational organizational myths and rules structure work practices; and that the ultimate performance of an organization’s set of tasks does not depend much on tools like assembly lines, computers, and the like (see also DiMaggio and Powell (1991a, DiMaggio and Powell 1991b).

The new approach/point of emphasis for neo-I folks is laid out by Walter Powell and Jeannette Colyvas in their 2008 chapter in “the big green book” of organizations and institutions — a copy of the paper is available in draft form at www.orgtheory.net right here.

And so the story goes:

1. Older research is cast as calling for “the need to make the microfoundations of institutional theory more explicit” (p.276). This is something that institutional theorists have had much success with — positioning papers to create the feeling that this idea is both something new and exciting and that the call for microfoundations is an old one (that we need to now make good on). The opening lines of D&P’s 1983 paper do a good job of saying “that was then” and “this is now.”

2. The upshot: “much analytical purchase can be gained by developing a micro-level component of institutional analysis” (p.276), which would link “micro-concepts, e.g. identity, sense making, typifications, frames, and categories with macro-processes of institutionalization, and show how these processes ratchet upwards” (p.278). The invocation of “hierarchy” or “upward” levels is somewhat disconcerting for those of us set on flatter analysis, but there is likely room to show (and convince) that even the tallest, most stable actors and actions occur locally and laterally on a flat surface of interactions.

3. How can we, in STS, get some purchase on this?

A. Emphasize the interpretations of contextual factors (p.277) rather than assuming them (as has happened now and again in organizational theory devoted to field-level analysis — these are assumptions that occasionally must be made in order to do the diffusion studies so common in neo-I).

B. Display the ongoing micro-maintenance of apparently stable institutional forms in daily practice AND/OR discover how stable institutional forms in daily practice result in change over time, such that they transform the forms they are intended (in the behavioralist sense) to prolong.

C. Enliven the analysis of actors — old new institutionalism (let’s say) emphasized two types of actors, “cultural dopes” or “heroic ‘change agents,’” the reason being that action was essentially assumed to operate at a level unnecessary to fully capture during large-scale field studies (i.e., we assumed managers simply sought legitimacy at all costs and mimicked their peers), OR, in the move to capture the actions of real actors (instead of assuming organizational entitivity), the studies overwhelmingly involved entrepreneurs and celebrated/worshipped their field-altering accomplishments, respectively. The new emphasis (of, let’s say, new new institutionalism) sort of smacks of STS lab studies, where we saw how the mundane facets of scientists’ behaviors in labs resulted in field-altering science. Now, neo-I wants to avoid momentous events, or, at minimum, show how seemingly huge events were a long time in the making and, like all experiments, involved loads of failure, which demands of writers the ability to show how local affairs prompt shifts in conventions (locally or broadly) (p.277).

Why is this so good for STS? We have already done much of this type of work, and we have oodles of folks committed to these axioms for analysis. The only thing we really need now is a bridge between these two camps — while STS could not break into neo-I on the topic of technology, Powell and Colyvas might have just opened the door to a new institutionalism in STS…


One Plug to Charge Them All

A friendly fight over standards in the plug market for electric cars appears to be brewing, according to a NYT article this morning.

WITH electric cars and plug-in hybrids at last trickling into the showrooms of mainstream automakers, the dream of going gasoline-free is becoming a reality for many drivers. Cars like the Nissan Leaf and the Chevrolet Volt can cover considerable distances under electric power alone — certainly enough for local errands and even most daily commutes — while enabling their owners to shun gas stations.

The multimedia portion of the article is good stuff — I suspect similar pictures will be featured in an STS article sometime soon…

What does the "knowledge myth" mean for SKAT/STS?

A colleague of mine wrote recently about the “myth of knowledge” in a nice blog post. Perhaps one of the most interesting and controversial (and most [overly] generalized) points was about Aikido:

Because I am a behaviorist-leaning kind of guy, I would additionally point out that when behavior, talking, and thinking come into conflict, behavior wins. In my article trying to connect ecological and social psychology, I used an example out of Aikido, the martial art that prefers not to hurt people unnecessarily. Indulging in horrible generalizations: In the Western cultures – steeped in dualism and the myth of knowledge – we think that ‘knowing’ is about ‘thinking’, but in Eastern cultures this is not so. In Aikido, one of your goals is to blend with your opponent’s movements so you inflict minimal harm. Your goal is not to think about blending, not to be able to explain how to blend, nor to be able to accurately imagine blending; rather, your goal is to actually blend when the time comes. A person ‘knows’ how to blend when they do it without thinking, and regardless of whether they can teach how to blend or explain what they did after the fact. (By the way, that article is part of a 7 article discussion, including my latest addition now available online.)

One of the main points was the link between “knowing” and “doing,” and from a behaviorist perspective in psychology this is an interesting position to take on such matters. He provides a number of examples, such as “how can a legless football coach know how to kick a football?”

Knowledge — be it tacit or explicit, in fact-searching or in its role in training scientists and engineers — plays a central role in SKAT and STS; however, I’m not entirely sure we’ve jumped on the behaviorist bandwagon just yet.

The ending question: what would STS look like without “knowledge” as a crutch during analysis?

Game theory and society, and infrastructures

I recently attended a conference on “Game theory and society” at the ETH in Zürich. It was a very productive conference, with a good mixture of plenary sessions featuring people like Brian Skyrms and Herbert Gintis and the usual host of more work-in-progress-oriented panel sessions. Speakers and attendees had backgrounds in sociology, philosophy, economics, biology, even physics. If there was a common and unifying interest, it was in modeling elementary forms of cooperation. All the more striking was the nearly complete absence of people from sociological theory. Game theory, it appears, has been largely abandoned by sociological theory, left to colleagues specializing in formal modeling or generally versed in quantitative methods. As it happens, I found them quite a pleasant bunch of people to be around.
A couple of questions with respect to our interest in infrastructures have been bugging me since:
– I might start with the issue brought up by Nicholas a couple of posts ago: whether sociological theory has a problem addressing questions of efficiency. After working through some contemporary game-theoretical research and comparing it to the state of the art in sociological theory, how could I not agree? Game theory could be one, if not THE, weapon of choice for sociologists discussing questions of efficiency in an analytical manner, and evolutionary approaches have demonstrated that the use of game theory need not be congenial to either rationalistic or economistic reconstructions of efficiency. Evolutionary game theory is particularly good at showing how inefficient equilibria come about and turn out to be stable (see the sketch at the end of this post).
– Closely related are questions of utility, which tend to be treated with a similar kind of disregard by many sociologists. One does not need to adopt a utilitarian perspective to see that analyses of how relationships and structures develop, how artefacts evolve and diffuse, etc., are correlated with (mostly implicit) ideas about utility. We may of course treat such ideas about the utility of contacts, associations, or tools as mere background assumptions of our observations of infrastructures, or we may broadly consider them taken care of by looking at practice pragmatically. Seeing what can be accomplished by taking a more analytic approach to utility, I suspect we can do better than just telling utility stories (either with respect to particular cases or in the exposition of theory).
– Which brings me to the more general question of research orientation. Why is there so little modeling in STS and in the emergent field of studies of infrastructures? Researchers have investigated broadly and written quite generously about how complex forms of modeling are utilized in the construction of truths and technological artefacts, but they have made little use of these methods and tools themselves. It is surely great to have so many sound STS case studies and ethnographies at our disposal in discussing our theoretical concepts and ideas about infrastructures, but again I suppose we could do much better with a less restrictive choice of methods and approaches. If there is a unilateral bias in favor of qualitative methods, storytelling, and small-n studies, systemic problems in aggregating empirical data are likely to result (if not, in the end, a constant recycling and re-invention of theoretical concepts with little progress in accumulating empirical intelligence).
Should we therefore not try to engage more with formal models of cooperation, social order, and infrastructures? In Zürich, I found the doors to be generally open and a lot to learn in terms of concepts and methods. And I find myself encouraged to look into this in a more sustained manner.
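To make the point about stable-but-inefficient equilibria concrete, here is a toy replicator-dynamics model in Python — my own illustrative sketch, not anything presented in Zürich. The payoffs are an invented Stag Hunt: mutual stag hunting pays 4, hunting stag alone pays 0, and hunting hare pays 3 regardless of what the other player does. Any population starting below 75% stag hunters slides to the all-hare equilibrium, which is evolutionarily stable yet leaves everyone worse off than universal stag hunting would:

```python
# Toy replicator dynamics for a Stag Hunt (illustrative payoffs, my own choice):
# Stag vs. Stag pays 4, Stag vs. Hare pays 0, Hare always pays 3.

def replicator(x, steps=20000, dt=0.01):
    """Evolve the population share x of Stag players under replicator dynamics."""
    for _ in range(steps):
        f_stag = 4 * x                       # expected payoff of playing Stag
        f_hare = 3                           # Hare pays 3 against anyone
        f_mean = x * f_stag + (1 - x) * f_hare
        x += dt * x * (f_stag - f_mean)      # the replicator equation
    return x

# Below the 75% threshold the population slides into the inefficient
# all-Hare equilibrium; above it, toward the efficient all-Stag one.
print(replicator(0.70))  # ~0.0: stable but inefficient
print(replicator(0.80))  # ~1.0: efficient, but it needs a large head start
```

Nothing about this little model is rationalistic or economistic; the “players” merely reproduce in proportion to payoffs, and inefficiency persists all the same.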

Working around over time

Workarounds are:
1. Any way of tricking a system by using it in a way it was not intended to be used, but that still gives you the desired outcome. This was first raised (according to my research) by Gasser in 1986. The idea is that in some systems you could enter, for example, incorrect data in order to arrive at the desired outcome; the need for odd initial data stems from infelicities in the system, be it software or mechanical (a minimal sketch follows this list).
2. “Jury-rigging” the system, wherein you haphazardly put something together but don’t expect it to work well forever. Sometimes referred to as “make-shift,” it works well enough for now — and this happens in computing all the time: you make a quick, often small, but necessary change in the system. Sometimes called a “kludge,” this is where the “permanence” issue is raised in research — how more or less permanent is a workaround, typically assumed to be of limited longevity? Of course, no matter what we make, nothing is permanent. Still, some things last longer than others, and more often than not with packaged software the “slightly-more-permanent” workarounds (in the form of system customization) are more common than the frequent but short-lived workarounds used in legacy systems [note: this may be a generalization too broad to bear without evidence]. Still, this helps us better understand the longevity of workarounds.
3. The literature on workarounds is now split on the idea that they are “freeing” employees from the confines of the system, and increasingly scholars ask whether all this “freeing” (in research on ERP) creates its own subset of confines (suggesting that large numbers of expensive customizations to systems require some administrative oversight, which effectively balances the freedom from the previous system with the new need to control those freedoms). This helps us understand the autonomy-producing or -restricting quality of workarounds.
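To make the first sense concrete (as flagged in the list above), here is a minimal, entirely hypothetical sketch — invented system, invented rule — in the spirit of Gasser’s data-entry workarounds: the system insists on a ship date, none is known at entry time, so a recognizable “incorrect” value is keyed in to obtain the desired outcome.

```python
# Hypothetical Gasser-style workaround: the system refuses orders without a
# ship date, so clerks enter a recognizable placeholder to push orders through.

PLACEHOLDER_SHIP_DATE = "2099-12-31"  # not true data; a trick the system accepts

def enter_order(item, ship_date=None):
    """Record an order; the system's rule is 'no order without a ship date'."""
    if ship_date is None:
        ship_date = PLACEHOLDER_SHIP_DATE  # the workaround, not the designed use
    return {"item": item, "ship_date": ship_date}

print(enter_order("widget"))  # recorded despite the system's infelicity
```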
Herein lies the cost of customization: it at once frees you from the confines of the system but also hurtles the system toward eventual decay (as we have observed with legacy systems) — sometimes referred to as “drift.” The more control you exert on the system — in this case, in the form of workarounds — the more brittle it gets, drifting, in principle, from the control of those charged with maintaining it. In this way, workarounds are kind of like using a mulligan in golf: it gives you a better chance in the short term, but in the end it keeps adding +1 to your score until you’ve lost the game completely.
However, if one could follow a set of workarounds through the years (and I’ve never seen research like this), explicitly watching them “decay” or “cost,” then the analogy to golf might be observed. When, in the short run, did the workaround get the organization out of a jam? Conversely, when, in the long run, did the workaround cost the organization more than it was worth?
If one could understand the process deeply enough, one could explicitly estimate at which times workarounds “beat the system” — meaning you might be able to identify mulligans (i.e., workarounds) worth taking (i.e., making) and others that ought to be avoided.
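And here is a toy formalization of that mulligan logic — my own back-of-the-envelope sketch, not drawn from the workaround literature: suppose a workaround yields a one-time benefit but adds a recurring upkeep cost every release as the customized system drifts. The break-even horizon is then simply the benefit divided by the per-period cost, and a mulligan is “worth taking” only if the workaround is retired before that horizon.

```python
# Toy break-even model for the golf-mulligan analogy (invented numbers):
# a workaround saves `benefit` hours now but adds `cost_per_period` hours
# of upkeep every release as the customized system drifts.

def breakeven_periods(benefit, cost_per_period):
    """Releases after which a workaround has cost more than it saved."""
    return benefit / cost_per_period

def worth_taking(benefit, cost_per_period, expected_lifetime):
    """A 'mulligan worth taking': one retired before it breaks even."""
    return expected_lifetime < breakeven_periods(benefit, cost_per_period)

print(breakeven_periods(40, 5))   # 8.0 releases until the gain is gone
print(worth_taking(40, 5, 4))     # True: a short-lived kludge pays off
print(worth_taking(40, 5, 12))    # False: the +1s have eaten the gain
```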

Some shameless self-promotion: On Technology and Society

Nicholas’s public question about whether there is a book on what the old theorists thought about technology offers a tempting opportunity for some “shameless self-promotion” that I was nearly too modest to seize. But in the pre-ASA mode that nearly every sociology blog I read is in at the moment… well, I’ll jump at the chance: I wrote a book similar to the one that Nick requested — only (sorry) in German, and outlined not as a list of old scholars’ thoughts but as a sociologized conceptual history of explaining the relationship between technology and society.

The usual story is that first there was technological determinism, then social constructivism — a story of a big STS success. But a closer look reveals that the two underlying modes of explanation — technicism and culturalism — have been with us for at least 150 years. This conceptual dichotomy, already established in philosophy and early social theory (Kapp, Marx, Durkheim, Weber), hardened during a first crisis of modernity in the opening decades of the 20th century into a first explicit version of technicism (Veblen, Dessauer), and, from the 1930s on, a first version of culturalism in reaction to it (Spengler, Gilfillan, Mumford). Once stabilized as theoretical artifacts, these modes of explanation deal with the social and technical transformations of modernity by attributing them either to an inherent logic of technological development or to major and minor changes of modern society. This leads to pessimistic versions of technicism (Ellul, Jünger) and a critical version of culturalism (Adorno, Horkheimer, Heidegger) after World War II, then an anthropological version of technicism (Freyer, Gehlen, Schelsky) and a rationalist culturalism (Marcuse, Habermas) that accompany the stabilization of organized modernities until the 1960s. As a reaction to a second crisis of modernity, from the 1970s up to today, two versions of technicism and a radical relativist culturalism emerged: while new media technology and digital computing prompted a revival of deterministic thought (McLuhan, Postman, Flusser), a large body of empirical work focused on technology assessment rested on more modest versions of technicism (Ogburn, Heilbroner, Rapp). The sociology of scientific knowledge (Barnes, Bloor) fostered first a moderate empirical micro-constructivist culturalism (Latour/Woolgar, Knorr-Cetina), then a historical macro-culturalism (Hughes, Constant, Dosi), and finally a radical social constructivist culturalism (Bijker, Pinch, Law).

From the 1960s on, these theoretical and conceptual differences were further stabilized by placing them in theory-political as well as real political opposition. In this way the basic conceptual distinction between technology and society has been virtually naturalized; it has not been seriously called into question since the 1930s. But from the 1980s on, a number of attempts have been made to wipe the slate clean in social science theories of technology. These new approaches understand both the dynamics and the stability of society and technology as entangled and interrelated phenomena in need of explanation. Actor-Network Theory (Latour, Callon, Law), neo-pragmatist technology studies (Star, Fujimura), and systems theory (Luhmann) are just three of these new approaches. Despite their differences, they teach us to ask and answer questions about the relevance of materiality for the emergence and transformation of the social, about the material and technical mediation of agency and communication, about the importance of artifacts for the formation and change of social institutions and ideas, and about the role of technological developments in transforming modernity. To ask and maybe answer them, the discourse on social science theories of technology will have to be connected to the general discourse on social theory, on theories of society and modernity.


Public Question: What did the old theorists think about technology?

A while back I asked, “does anyone know if there is a good paper or book about what Weber thought about technology?” — an interesting question in light of new STS work. Marx has received some attention from scholars, but here comes the public question:

Is there a book that tackles, one chapter at a time, what the old theorists thought about technology?

This seems like a great edited book, or a mini-conference, or a mini-conference that turns into a great edited book.

So, next question:

Is there any interest in a book that tackles, one chapter at a time, what the old theorists thought about technology?

The role of reviews in the social sciences

All this discussion of new infrastructuralism and the infrastructural relation between “black-boxing” and “institutionalization” makes me think about the role of reviews in social science.

The question is: is a concept what it is, or does how it has been used constitute what it (now) is?

I think this for the following reason: Latour, in Actor Network Theory and After, goes into something of a litany regarding the ways that ANT has been used and abused over the years since he and Callon (and Woolgar, honestly) thought of it. Of his many points, a meta-point matters for this post: he basically states that as his ideas spread, they increasingly got used in ways he did not expect, and then Latour makes something of a value judgment in suggesting that some research, which appears to be relatively more current as compared to his original works, doesn’t do ANT right. Although Latour takes some blame in saying that perhaps the entire moniker — including A, -, N, and T — was not perfect, it still seems like an odd point to hear from Latour. About 120-ish pages into Science in Action, as part of the translation model of how things spread (i.e., diffuse, though he considers that a dirty word), Latour demands that spread requires change — that a technology, for example, must change as it enters new hands. This was a counterpoint to the diffusion-of-innovation literature (which he hardly cites) and its supposed assumption that diffusion, as an idea and model, only works so long as we assume the innovation is “constant” over time (meaning that it does not and will not change). Getting to the point: ANT was going to have to change to spread so widely, and the ideas would necessarily be used in ways unintended and perhaps unacceptable to its originators.

Again, then, the question is: is a concept what it is, or does how it has been used constitute what it (now) is?

Latour contributed to the notion of “black-boxing” as much as perhaps any scholar of the last 30-ish years, and given his disappointment with how some of us have used his concepts, does this value judgment really matter? Or does it matter more for science not to judge how concepts have been used and instead to document how they have been used — because the way they have been used is effectively what they are?

Returning full circle, in all this discussion of new infrastructuralism and the infrastructural relation between “black-boxing” and “institutionalization,” what would make the best review paper? Review the terms as if they were not artifacts changing hands, in hopes of arriving at some core meaning of these concepts? Or review how the terms have been used, on the premise that usage will tell us more about their operational meaning?

Institutionalism and Infrastructuralism – some first thoughts on differences

In a recent post following Nicholas’s thoughts about black-boxing and taken-for-grantedness and what they could mean for discussing the benefits of STS and neo-institutional theory, I asked: what is the difference between institutions and infrastructure? Nicholas and I discussed this today for the first time in detail, and we thought it might be worth posting to see if it makes sense.

Neo-institutional theory is — to tell a very long story short — based on the question of how different things (organizations, models, cultural forms) become similar over time. This is the basic problem in DiMaggio/Powell (1983): understanding institutional isomorphism once the impetus of rationalization is taken away. It is the problem Strang and Meyer work on when studying the institutional conditions of diffusion (1993). Its central focus was — as Powell argued in 2007 — on “the field level, based on the insight that organizations operate amidst both competitive and cooperative exchanges with other organizations.” DiMaggio (1988) and Powell (1991) both noted that this was a bit too smooth and that institutional arguments would need a more detailed perspective on contestation, fragility, and struggles. Nevertheless, the framework provided a fresh and new way to understand institutions — so productive that it framed a discipline or two.

Infrastructure studies, on the contrary, focus on how things can appear systematic and highly integrated while actually being implemented in many heterogeneous, historically contingent local processes (Bowker/Star 1996; Star/Ruhleder 1996). In some ways diffusion becomes less important as implementation takes a more central role. Infrastructures are not built by system makers but screwed together loosely from complex arrangements of interfaces, gateways, and workarounds, as Edwards showed in 2003 and in his fabulous book on climate models (2010). However, there seems to be a tendency to focus on the normalizing and standardizing effects of classification systems implemented in large infrastructural settings — something like the Weberian “iron cage” of infrastructure studies, visible already in Sorting Things Out and very strong in the works of Hanseth and Monteiro (1997; Monteiro 1998).

The link seems obvious, doesn’t it? Neo-institutionalism starts by looking at heterogeneous stuff and finds it similar — too similar perhaps, so that it sometimes misses the complexity of the social world. But it is a great framework for strong explanations. Infrastructure studies look at systems and find them fragile and fragmented inside. But they seem to lack “big explanatory” power, which leads to giving up the focus on local multiplicity and emphasizing standardization/normalization instead. Could the strengths of both be combined to get a good grasp on the installation of social order under (high) modern conditions?

The new infrastructuralism?

Jan-Hendrik and I were discussing this yesterday: “what would it take to create the new infrastructuralism?” In a way, it would be analogous to the new institutionalism, but with a different (although overlapping) set of topics, etc.

We’ll try to post the beginnings of our ideas soon wherein we will see if such a theory could be meaningfully erected.

Technologies/Black-Boxing and Institutions/Taken-for-granted – A question of levels?

This post started as a comment on Nicholas’s post on the museum but became so long that I decided to make it a new post. The debate on black-boxing and “taken-for-grantedness” (or STS & new institutional theory) tackles some very important points. It reminds me of T. Pinch’s (2008) paper on technologies and institutions. Pinch’s focus is on the problem of “skill,” and he argues that (new) institutional theory — for example in its micro form, as in the works of Fligstein — focuses on only a very small aspect of the ways to make an institution materially stable. Technology, he argues, adds at least a second way, because it is black-boxed, not just taken for granted.

The reason I think this is only partially true is that proposing such an argument is only possible by conflating levels of “taken-for-grantedness.” Sociology knows a whole spectrum of ways in which social order becomes taken for granted: from the taken-for-granted stream of everyday routines and interactions that make up Schütz’s life-world, to Mauss’s and Bourdieu’s techniques of the body that constitute the habitus, from B&L’s (or also Gehlen’s) processes of institutionalization to Foucault’s epistemé, Polanyi’s tacit knowledge, and Ryle’s “knowing how.” Technology, if we follow this route, could be added to the catalogue of tactics for making practice taken-for-granted — through a very distinct process that has been described as “black-boxing,” in which some aspects are packaged and sealed away and others delegated to specialists (for example, for maintenance and repair). It is distinct from at least two other tactics precisely because of the form of this process. Embodying habits and skills, for example, is a process of becoming taken for granted through routine and repetition. Discursive closure is a matter of rhetoric, persuasion, and concealment.

Institutions and infrastructures, I suppose, are strategies of “taken-for-grantedness” on a different level: they are hardly stabilized by just one of the discursive, habitual, or technological tactics just described. An institution can be based neither on skills alone, nor on legitimizing and regulating discourse alone, nor on technology alone. Hey, we know from a long run of STS research that not even technology can rely on technology alone. Institutions and infrastructures are complex installations — hybrids or monsters, if you will. They both rely on a fragile architecture of “taken-for-grantedness” — plug-ins. What is the difference, then?

ASA blogger party and other ways to meet Nicholas Rowland and Jan Passoth

I want to note the announcement of the annual ASA ScatterPlot Blogger Party! Details here. Short story: Sunday, August 21, 4:30pm at the Seahorse Lounge at Caesar’s Palace. I hope to see many of you there!

Otherwise, I am presenting a paper with Jan-Hendrik Passoth on state theory and sitting on another roundtable about state power (power being a dirty little word). Come say “hi” — I’ll be wearing the functionalism t-shirt and have even made one for Jan!


NOTE: *This message was playfully plagiarized from my mentor and friend, Fabio.*

The "lighthouse" (re: Coase) in new institutionalism is the museum

Per recent discussions of black-boxing and institutionalization, a paper that Fabio and I wrote seems useful to remember. It was a piece we wrote to interrogate the use of “the museum” by new institutionalists of the organizational-analysis bent.

Sociologists that study organizations often analyze the museum from a cultural perspective that emphasizes the norms of the museum industry and the larger society. We review this literature and suggest that sociologists should take into account the technical demands of museums. Drawing on insights from social studies of technology, we argue that museums are better understood as organizations that must accomplish legitimate goals with specific technologies. These technologies impact museums and the broader museum field in at least three ways: they make specific types of art possible and permit individuals and organizations to participate in the art world; they allow actors to insert new practices in museums; and they can stabilize or destabilize museum practices. We illustrate our arguments with examples drawn from the world of contemporary art.

The black-boxing of a technology is different than the incremental emergence of “taken-for-grantedness”

Below is an excerpt about black boxing and taken-for-grantedness, which I wrote with Fabio Rojas years ago:

The black-boxing of a technology is different than the incremental emergence of “taken-for-grantedness,” which comes from writers such as Schutz (1967) and Berger and Luckmann (1966). They argue that knowledge in everyday life is taken for granted by individuals as reality, “but [that] not all aspects of reality are equally unproblematic” (p. 24).  They provide an example germane to this discussion:

  • …suppose that I am an automobile mechanic who is highly knowledgeable about all American-made cars. Everything that pertains to the latter is a routine, unproblematic facet of my everyday life. But one day someone appears in the garage and asks me to repair his Volkswagen. I am now compelled to enter the problematic world of foreign-made cars (p. 24).

Technologies, they contend, like those related to car repair, get taken for granted over time and through expertise. But by looking at technologies as black-boxes, scholars can gain a fresh perspective on the institutionalization of technology by emphasizing how stable technologies stabilize human networks, rather than how routinization results in a technology’s disappearance for organizational actors. Returning to Berger and Luckmann’s example, as a black-box, automobiles and the networks of dependence and exchange built up around them are concealed (or ignored). Further, the patterns of human behavior that make a mechanic’s garage the one place to fix broken cars are missed because of the emphasis on how technologies get “taken-for-granted.”

It seems Berger and Luckmann’s (1966) old work on the social construction of reality might find new use in distinguishing black-boxing from institutionalization. Also, please note: if there was one thing that early Latourian thinking and the new institutionalism in organizational analysis were both looking to unlock, it was how something gets sealed up and stabilized over time to the point of being taken for granted as real, true, or rational. Look back at the early pages of Latour’s book on Pasteur — it opens with the image of Rue Pasteur and asks how we got this … a good question, no?

“Black box” and “taken-for-granted”

I recently asked:

When are the processes that bring about a black box the same as those that bring about — in the institutionalist frame — the notion of taken-for-grantedness … and, when are the processes that bring about either of these notions incapable of producing the other?

Seems, upon further reflection, to be an obvious paper, which might bridge some of the thinking about technology and institutional arrangements. Restated as a couple of thesis statements, it would go:

Q1. What circumstances/processes do the concepts of “black box” and “taken-for-granted” both meaningfully capture?

Q2. What circumstances/processes does the concept of “black box” meaningfully capture that the concept “taken-for-granted” cannot?

Q3. What circumstances/processes does the concept of “taken-for-granted” meaningfully capture that the concept “black box” cannot?

Seems like an interesting review piece to see where organizational theorists and STSers have historically overlapped and where they have diverged, with the caveat that each might learn something if orthogonal points of divergence were reconsidered in the respective lines of research.

Timmermans has done it again — this time about failures!

I have always enjoyed reading Stefan Timmermans’s research, and his new piece in STHV is no exception.

The abstract, which is below, is not only a good reversal of an old idea but also solid prose — worth the read.

Abstract

Sociologists of science have argued that due to the institutional reward system negative research results, such as failed experiments, may harm scientific careers. We know little, however, of how scientists themselves make sense of negative research findings. Drawing from the sociology of work, the author discusses how researchers involved in a double-blind, placebo, controlled randomized clinical trial for methamphetamine dependency informally and formally interpret the emerging research results. Because the drug tested in the trial was not an effective treatment, the staff considered the trial a failure. In spite of the disappointing results, the staff involved in the daily work with research subjects still reframed the trial as meaningful because they were able to treat people for their drug dependency. The authors of the major publication also framed the results as worthwhile by linking their study to a previously published study in a post hoc analysis. The author concludes that negative research findings offer a collective opportunity to define what scientific work is about and that the effects of failed experiments depend on individual biography and institutional context.

Belgian STS network kick-off event * Sept 30th, 2011

For those of you in Europe, this might be an interesting opportunity to travel, meet great people, and strengthen the international network of STS: scholars in Belgium are gathering for a first meeting of the Belgian Science, Technology and Society (BSTS) network – a network that started

“… in 2008 as an ad-hoc academic platform, the BSTS network enables STS researchers in Belgium to share with one another their research interests and disciplinary perspectives and to foster collaboration across different fields and locales. The network now extends its hand beyond academia and beyond Belgium to engage an international community consisting of people from research centres, industry, policy making and other professionals with an interest in cross-disciplinary learning and knowledge sharing.”

Here is some more information on the Belgian STS Network.


Evading efficiency arguments is what sociology is good at

Why is sociology so afraid of efficiency arguments?

After re-reading this great old piece …

Oberschall, Anthony, and Eric M. Leifer. 1986. “Efficiency and Social Institutions: Uses and Misuses of Economic Reasoning in Sociology.” Annual Review of Sociology 12:233-253.

… I was reminded that sociology has made something of a history of explicitly avoiding extant arguments regarding efficiency.

Marx, for example, rejected efficiency and emphasized the exploitation of labor by the bourgeoisie. Given Marx’s economic theory of value and labor, exploitation was the only way to get more value than was invested by fairly paid labor (e.g., the wage from six hours a day is enough to feed and clothe a family of four for a day; however, lacking the means of production and hence bargaining power, workers might work eight hours per day rather than six for the same wage). Thus, the creation of surplus (i.e., profit). However, a falling rate of profit was expected as capitalists competed with each other in hopes of attracting more and more laborers, which ultimately cut into profit margins. Enter machines. The primary problem for Marx, however, was that machines could bring no real efficiency or profit; machines are incapable of producing profit (or do so only for a short time) because all competitors will soon have them. At this point, each capitalist is back to “square one.” Simultaneously, the price of machines goes up and the price of products goes down. Thus, profit has to fall and efficiency is lost (though according to contemporary economics, profits fall within the business cycle but not across cycles, showing some flaw in Marx’s thinking). Still, as it happens, “Machinery and improved organization provide … [enhanced efficiency] too, because they increase the productivity of labor” (p. 42, Collins and Makowsky 1998).
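To make the arithmetic of the wage example explicit, here is a minimal formalization — my own notation, using the standard Marxian vocabulary of variable capital and surplus value, not anything from the Oberschall and Leifer piece:

```latex
% h = value created per hour of labor; necessary labor = 6 hours
% (enough to reproduce the wage); the working day runs 8 hours.
\[
  v = 6h \qquad \text{(variable capital, i.e., the wage)}
\]
\[
  s = (8 - 6)\,h = 2h \qquad \text{(surplus value)}
\]
\[
  \frac{s}{v} = \frac{2h}{6h} = \tfrac{1}{3} \qquad \text{(rate of exploitation)}
\]
```

Profit exists, on this account, only because the working day runs two hours past the labor needed to reproduce the wage — which is why, for Marx, a machine every competitor soon owns can add nothing lasting to s.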

Also writing at a time of great scientific and industrial progress, Durkheim, in contradiction to the rationalists, finds “society … a ritual order, a collective conscience founded on the emotional rhythms of human interactions” (p. 102, Collins and Makowsky 1998). Even though specialization (in the form of organic solidarity) holds society together (despite the loss of mechanical solidarity), efficiency seems to play a lowly role in Durkheim’s models of integration.

Weber comes closest to allowing efficiency some room to breathe. Still, above efficiency stood his deep-seated concern with organizational stability. Groups were organized and stabilized through strong personal ties (patrimonialism) or by setting rules (bureaucracy), which follows broadly from Tönnies (Gemeinschaft and Gesellschaft, respectively). Domestic or personalistic organizations, like the family estate, were those wherein close friends and family members made up the bulk of enterprise employees and related services (be they war, trading, tax collecting, etc.). Of course, personalistic forms of organization are not easy to control and are seemingly inefficient (as compared to, for instance, a bureaucracy). The organization of communications is poor — what starts as a direct order at the top of the chain of command ends up a rumor, a whisper, or nothing at the bottom rungs. Under certain circumstances, innovation is ignored or resisted in favor of tradition — doing as was done last time, or as far back as can be remembered, for the sake of personal ease and safety from criticism from above. Authority from the top dissipated over time as top assistants grew in power and potentially seceded. The bureaucracy would fix all that by establishing rules and regulations to guide individual behavior even in the absence of authoritative oversight. While bureaucracy can be interpreted as an efficiency argument, Weber’s focus on the cultural underpinnings of groups — the Protestants shaping historical achievements, along with his works on Judaism, China, India, etc. — leads me to believe that culture, rather than efficiency, was at the root of his arguments.

There are no doubt many more — certainly the old functionalists like Selznick and Merton (who showed the dysfunctions of bureaucracy) would fit right in…

Social significance of gap analysis

Although I’m not entirely sure of its implications for infrastructure, gap analysis is commonly used and seems promising as a research site — and yet, despite widespread use in the management and implementation of software, gap analysis remains an untapped and unappreciated workflow-analysis technique in research.

In general, gap analysis takes three forms, each documenting the gap between two states: current versus future, expected versus actual, perceived versus delivered. The difference between the two states defines the gap, and from such assessments others become possible, such as benchmarking (Boxwell 1994).

The first form is a map. Cartographic representations are mainly utilized in lean management to chart the flows of raw materials – including information – currently necessary to make a product or service available to consumers, so that those flows can be assessed for waste. Once areas for improved flow and reduced waste are identified, analysts draw them into a future-state value stream map. The differences between the two states define the gaps, which orient work toward that future condition. This map form was developed at Toyota (Rother and Shook 1999).

The second form is a step chart. Temporality is built into the step chart, which also identifies and compares current practice and a desired future state for the performance of a service or product. Brown and Plenert (2006: 319) provide a good example of where a step chart might address the gap between expected and actual states: “customers may expect to wait only 20 minutes to see their doctor but, in fact, have to wait more than thirty minutes.” Step charts chart the steps necessary to move from current practice to future practice (Chakrapani 1999).

The third form, which is most appropriate for working around packaged software, is a cross-list. Such analyses are most routinely undertaken in consumer research, wherein gap analysis refers to the:

methodological tabulation of all known requirements of consumers in a particular category of products, together with a cross-listing of all features provided by existing products to satisfy existing requirements. Such a chart shows up any gaps that exist (n.a. 2006).

Once cross-listed in table format, gaps make themselves obvious, and their analysis points to unmet consumer demand which new or poorly marketed products might fulfill. However, prior to the establishment of a cross-list, consumer expectations and experiences must be gathered, for example by focus-group interviews. Once collected and made to populate a cross-listed table, according to Brown and Plenert (2006: 320), “gaps can be simply calculated as the arithmetic difference between the two measurements for each attribute.”
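As a minimal sketch of that cross-list arithmetic — invented attributes and scores, purely for illustration:

```python
# Hypothetical cross-list gap analysis: expected vs. experienced ratings per
# attribute, with gaps as simple arithmetic differences (per Brown and
# Plenert 2006: 320). All numbers are made up for illustration.

expected    = {"wait_time": 8, "friendliness": 9, "clarity_of_bill": 7}
experienced = {"wait_time": 5, "friendliness": 8, "clarity_of_bill": 4}

gaps = {attr: expected[attr] - experienced[attr] for attr in expected}

# The widest gaps surface first: the unmet demand that a new or
# better-marketed product (or redesigned service) might fulfill.
for attr, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{attr}: gap = {gap}")
```

The tabular form does the analytical work here: once both measurements populate the same table, the gaps are just subtraction, and ranking them is the whole analysis.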