I once asked what the “knowledge myth” might mean for STS, but this was really just a ploy to raise the issue of what it would take to generate a post-humanist form of behaviorism in STS.
How might we talk about the behavior of humans and non-humans parsimoniously without getting too bogged down in endless debates over “who/what is cognizant?” and “what role do intentions play?” This well-trodden issue was raised once again during our 4S sessions on the state.
One of the primary issues in behaviorism regarding the knowledge myth is “do you need to know how to do something in order to do it?” with the important follow-up that if you do it, then either (a) you have shown that you know how to do it, by virtue of having done it, or (b) knowing and doing are not nearly as related as we might otherwise demand in our social science accounts.
Gary Marx, who wrote a great paper in Surveillance & Society, insists that surveillance in our high-tech age differs in non-trivial ways from the traditional Foucauldian imagery of the somewhat distant past, whether the “white hot pincers” of torture or the grand Panopticon. In particular, Marx’s analysis focuses on “unintended” data collection amassed by automated machines; think data about data, for example, the location of a purchase or the timestamp of a Facebook post (Marx 2002:15). He offers an example that is, at first glance, quite cool: a university building was suspected to be the target of arson after a Gatorade bottle full of explosive material was found on site. By cross-referencing the keycard entry registry with the shipping code on the Gatorade bottle, investigators found the culprit, who, upon being found, confessed. This seems like a straightforward case in which data were collected and then used to capture a miscreant, but these data were never collected with the direct and explicit intention of catching criminals or criminal acts. These data were not intended to result in this end (I’d prefer a different term than “unintended,” but that will be another post).
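The logic of Marx’s example is, at bottom, a set intersection between two datasets that were never designed to work together. A minimal sketch, with entirely invented names and records (nothing here comes from the actual case), might look like this:

```python
# Hypothetical reconstruction of the cross-referencing move Marx
# describes: neither dataset was collected to catch an arsonist,
# but intersecting them narrows the suspect pool. All names and
# records below are invented for illustration.

# Keycard log: who badged into the building, and on which day.
keycard_entries = {
    ("alice", "2002-03-01"),
    ("bob", "2002-03-01"),
    ("carol", "2002-03-02"),
}

# Purchasers traced via the shipping code on the bottle: who bought
# from the batch that particular bottle belonged to.
batch_purchasers = {"bob", "dana"}

# The "unintended" surveillance move: triangulate the two datasets.
people_in_building = {person for person, _ in keycard_entries}
suspects = people_in_building & batch_purchasers
print(suspects)  # -> {'bob'}
```

The point of the sketch is that the control effect lives in the join, not in either dataset: each collection had its own mundane purpose, and only their intersection produces a suspect.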
Drawing liberally on these materials, Michalis Lianos (2003:412), in another paper in the same journal on Post-Foucauldian studies, adopts and develops the idea that diverse technologies at “points of use” (let’s call them) produce data, which then contribute to what he calls “unintended control”: control that is not really intended to promote any values in particular, but that can be exercised after the data are collected.
The technologies so often utilized in Post-Foucauldian analyses are many and diffuse, but they rarely have any intentional politics, according to Lianos. This was quite a surprise, as I had always read the Post-Foucauldians as holding an almost unanimous position that armies of little technologies were “out there” doing the dirty work of making neoliberalism a reality.
Back to Lianos: the data these automated machines collect might be used in political ways, big and small, but in a shrewd move, Lianos demands that in studies of control and surveillance we must “break this correspondence between motive and outcome” or, put more exactly, recognize that “the intention to control is not a necessary precondition for effectively producing serious consequences for the sphere of control” (424). And thus we are left with an odd behavioral, post-humanist vision of technology in which the “intentions” of the designer or user drift from primacy in analysis; instead, we observe what is collected or made, what is done with it, and what this contributes to at local levels and beyond.
For us in STS, I have always been concerned that Foucauldians place so much emphasis on the dispositif and governmentality when their analyses so often hinge on diffuse, micro-level technological use for the purpose of voluntary self-regulation. I am not referring to micro-physics or the art of government, either. Instead, we get a nuanced view from Lianos of how, to borrow a beloved phrase from Bruno, the “missing masses” do all the hard work in Post-Foucauldian governmentality studies … although not intentionally.
Indeed, behaviorism is also just a ploy, a way to engage these issues. I don’t think behaviorism is the answer, because intentions get us into a mess, especially on the issue of data collected about data. At some point, after all, the data were “intended” to be collected; we are just talking about different uses. That does not get us very far. What might, however, is thinking not of “unintended control” but of what I will call, here and now, “orthogonal control,” wherein data that are only orthogonally related can be used or triangulated in order to arrive at a “control situation” or “environment.” I much prefer orthogonal to unintended.