Teaching STS: Where iPhones come from…


Anyone teaching STS or related areas knows that a good reading is sometimes hard to find, especially if you’re not teaching graduate classes. Let’s face it, while you or I might love to discuss Law’s Portuguese ships or Akrich’s photovoltaic cells, students probably would rather hear about cell phones or electric cars (although, probably not Callon’s 1987 paper about them).

One solution I’ve come to is the “listening assignment,” and here is why: readings in STS are geared almost entirely to advanced students. We need more introductory materials that are direct, dynamic, and, for my preference, not second-hand regurgitations of more complex materials (even though those are valuable for other reasons). So I’ve started to incorporate “listening assignments” in place of a few reading assignments. Of course, students still do plenty of reading in my courses, but I’ve been trying this out for the last few semesters, and it has been sort of neat.

In a listening assignment, students listen to a radio show or a podcast, and that becomes the “baseline” for the day’s lesson and discussion.

I just found this piece about where iPhones come from, which will make a great listening assignment: it’s well done, and students have an almost unending curiosity about (and attention span for) phones. It opens the door to discussions about the ethics of consumption, multinational corporations, conflict minerals, etc.

If you use it or try out a listening assignment of your own, let me know – I’d love to discuss it over e-mail: njr12@psu.edu


Games with a purpose – a new role for human web users?

Just back from a few days of fieldwork (preparing ethnographic research in the field of semantic software), I could not help but share something I just learned. It fits quite nicely with what I have written before on the masses of non-human actors that populate the web today (crawlers, spiders, bots) and how the interdependencies between “them” and others (like us) change with the implementation of new web technologies.


Semantic technologies are built to process large numbers of unstructured documents and to automatically find (and tag) meaningful entities. And while these frameworks of crawlers, transformation tools, and mining algorithms are actually quite good at finding structure in data, they are still (at least initially – they learn quickly) quite bad at assigning meaningful labels to it. They are quick and good at recognizing that a text is about something, but they are bad and slow at judging ambiguous terms – they fail at understanding. A recent trend called “gamification” (which has been around for a while but was until recently used mainly to encourage users to fill out boring forms) is now a good example of how the configuration of agency is changing on the web. Human users are asked to play games that help annotate and match ambiguous patterns – tagging pictures, texts, music, etc. So it is no longer machines doing tasks for humans – humans are working for machines.
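For readers curious about the mechanics: the “Games with a Purpose” projects typically rely on an output-agreement design, in which two players tag the same item independently and a label only counts once both suggest it. The sketch below is my own illustration of that idea, not code from any actual GWAP implementation; the function and data are hypothetical.

```python
# Toy sketch of "output agreement": two players label the same image
# independently; a label is accepted only when both players suggest it.
# "Taboo" words (labels already collected in earlier rounds) are excluded,
# which pushes players toward ever more specific labels.

def agreed_labels(player_a_labels, player_b_labels, taboo=()):
    """Return the labels both players suggested, minus taboo words."""
    a = {label.lower() for label in player_a_labels}
    b = {label.lower() for label in player_b_labels}
    return (a & b) - {word.lower() for word in taboo}

# Two players tag the same photo; only the shared labels survive:
print(agreed_labels(["dog", "park", "ball"], ["Dog", "grass", "ball"]))
```

The interesting design choice is that agreement between strangers stands in for machine understanding: neither player can cheat the system alone, so the matched labels are (probably) meaningful.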

For those who want to try working for them, check out the “Games with a Purpose” website. A paper that describes exactly what they do can be found here.

Humans TXT: We Are People, Not Machines.


Do you know who your readers are? I just recently met a reader of our blog from Lancaster at a conference in Berlin, and I was very happy to finally have a face to remember when posting (ok, I of course know Nick’s, Hendrik’s and Antonia’s faces). But guess who the most frequent readers of this site are? Machines! The Google-Bot, Posterous-Indexer, Feedburner and their pals harvest websites, and it seems they are the most faithful readers of what we write.

As I tried to argue in a German paper on media change and interobjectivity, the specific division of labour between humans and machines is what is at stake in some of the most interesting innovation processes in the field of web technologies. Who should have to do most of the work? A few of you might remember the hard days of the ongoing browser wars: a web designer in those days had to build three or more versions of her site just to please the different web browsers. Or look at the struggles over RSS or, more recently, semantic technologies: who should add all the metadata, who should try to make sense of this mess of interconnected data? Us? Or them?

And now I just stumbled upon a strange idea. It goes like this: if there are files on a website that are for bots only (the “robots.txt” file that asks search engines to please not index a site – a funny example is the one on youtube.com), why not create an equivalent just for human readers? That is the basic idea behind “humans.txt”. And there are big players involved. Google has already jumped on board; this is their file:

Google is built by a large team of engineers, designers, researchers, robots, and others in many different sites across the globe. It is updated continuously, and built with more tools and technologies than we can shake a stick at. If you’d like to help us out, see google.com/jobs.

Wait, what? Google? After wondering for a while what sense it could make to duplicate the stuff that is already on your “about” page in a text file without layout and eye candy, I suddenly realized: guess who likes plain text files? Guess who would like to find metadata about a website always in the same place? Yes. Bots. They will be the most likely readers. So: who do we write for?
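For readers who have never peeked at these files, here is roughly what the two conventions look like side by side. The paths and entries are illustrative, not copied from any real site; humans.txt has no fixed format, but the `/* TEAM */` sections below follow the style promoted by the humans.txt initiative.

```
# robots.txt – lives at example.com/robots.txt, addressed to crawlers:
# "please do not index anything under /private/".
User-agent: *
Disallow: /private/

# humans.txt – the proposed counterpart at example.com/humans.txt,
# addressed (nominally) to human readers.
/* TEAM */
Editor: Jane Doe
Location: Lancaster, UK
```

Note the symmetry: both are plain text files at a well-known location – which is exactly the convention that makes them easy for machines, not humans, to find.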


Ethics: IRB approval and article reviews

Data collection in the social sciences must typically pass through an institutional review board’s (IRB) human subjects committee (HSC) before being conducted, in order to ensure conformity with IRB regulations. This we all know.

I once reviewed an article and, while doing so, got the sneaking suspicion that the author (whoever s/he was) had not gotten the study passed through an IRB. The reason: the data in the article came in the form of casual conversations and e-mail correspondence. Quotations from data sources identified the speaker or e-mail writer by name, not pseudonym. Little was written about the length of these conversations, how many there were, where they were conducted, etc., and there was no mention of the methodological strategy employed or the method of analysis. This sort of methodological sloppiness is reprehensible in its own right, but the idea that the study might not have passed through the proper research channels started to bother me.

And then it bothered me a little more.

As I read, article in hand, I increasingly felt like I was holding a soiled garment.

And so, I wrote the editor and, without making any accusations, expressed my “sneaking suspicion” and provided the evidence that encouraged me to think so.


1. Has anyone been asked for proof of IRB approval for an article in which social/human subject data were used?

2. Has anyone read a paper and wondered if it had passed IRB?

3. What would you do if you read a paper like this during review?