Humans TXT: We Are People, Not Machines.


Do you know who your readers are? I recently met a reader of our blog from Lancaster at a conference in Berlin, and I was very happy to finally have a face to remember when posting (ok, I of course know Nick's, Hendrik's and Antonia's faces). But guess who the most frequent readers of this site are? Machines! The Google-Bot, Posterous-Indexer, Feedburner and their pals harvest websites, and it seems they are the most faithful readers of what we write.

As I tried to argue in a German paper on media change and interobjectivity, the specific division of labour between humans and machines is what is at stake in some of the most interesting innovation processes in the field of web technologies. Who should have to do most of the work? A few of you might remember the hard days of the ongoing browser wars: a web designer in those days had to build three or more versions of her site just to please the different web browsers. Or look at the struggle over RSS or, more recently, semantic technologies: who should add all the metadata, who should try to make sense of this mess of interconnected data? Us? Or them?

And now I just stumbled upon a strange idea. It goes like this: if there are files on a website that are for bots only (the “robots.txt” file that asks search engines to please not index a site; a funny example of this is the one on youtube.com), why not create an equivalent just for human readers? That is the basic idea behind “humans.txt”. And there are huge stakeholders involved. Google has already jumped on board; this is their file:

Google is built by a large team of engineers, designers, researchers, robots, and others in many different sites across the globe. It is updated continuously, and built with more tools and technologies than we can shake a stick at. If you’d like to help us out, see google.com/jobs.

Wait, what? Google? After wondering for a while what sense it could make to duplicate the stuff that is already on your “about page” in a text file without layout and eye candy, I suddenly realized. Guess who likes plain text files? Guess who would like to find metadata about a website always at the same place? Yes. Bots. They will be the most likely readers. So: who do we write for?
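For readers who have never looked at either file: both conventions are just plain text files served from a site's root. A minimal sketch of the pair (the domain, names and field values here are illustrative, and the /* TEAM */ and /* SITE */ sections follow the template suggested on humanstxt.org):

```
# robots.txt — read by crawlers, by convention at https://example.com/robots.txt
User-agent: *
Disallow: /private/

# humans.txt — meant for people, by convention at https://example.com/humans.txt
/* TEAM */
Editor: Jane Doe
Location: Berlin, Germany

/* SITE */
Language: English / German
Doctype: HTML5
```

Note the irony the post is pointing at: the “human” file uses exactly the kind of fixed location and regular key-value structure that machines parse most easily.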

This entry was posted in STS and Uncategorized by Nicholas. Bookmark the permalink.

About Nicholas

Associate Professor of Sociology, Environmental Studies, and Science and Technology Studies at Penn State, Nicholas writes mainly about the scientific study of states, that is, about state theory. Given his training in sociology and STS, he takes a decidedly STS-oriented approach to state theory and issues of governance.

One thought on “Humans TXT: We Are People, Not Machines.”

  1. Yes indeed, they do. To turn one of Latour's points upside down: because it looks like a network (to some), try treating it as one 🙂 Latour once argued that we should never assume that what looks like a network at first glance (like: the web) should be analyzed as a network. That still holds true. But try to treat parts of it as heterogeneous arrangements and you will soon notice that its networks look very different from what you first thought they would… By the way – I am preparing an English version of it…

