Scholars from “MIT’s Media Lab, [in a group] called the Moral Machine,” are testing “a thought experiment that seeks answers from humans on how a driverless car with malfunctioning brakes should act in emergency situations.” Here is the piece.
These situations are bound to happen with self-driving cars. In this case, “The situations all involve the same scenario, where a self-driving car is traveling toward a crosswalk, and it needs to choose whether to swerve and crash into a barrier or plow through whoever’s at the crosswalk. The test is basically to determine what humans would do in these rare, life-or-death situations.”
hey isn’t that what this is?
he was going on about polling and reading all sorts of things back into the studies that weren't there, which is part of why I can't bear to read much in academic soc-sci these days. but what aggravated me in particular was that instead of presenting his findings as what people's attitudes/responses were to questions like his, he kept acting as if they had studied what people actually do in their day-to-day lives. that sort of leap away from reality drives me loco. who knew it would be radical to insist that folks should go and see what's going on.
Maybe you and I should start a support group!!! ahahahaha
ugh don’t get me started.
fell back into old bad habits the other day at a lecture I attended, and was in effect berating the head of our local pol-sci dept during the Q&A for peddling such ridiculous leaps of interpretation and shoddy methods. someday I'll get a grip on it…
wonder if one of yer local high-schools has one of those trailers full of cheapo simulators for driver’s ed that could serve as a test site?
in an ideal world you could work with depts in engineering, psychology, education, pol-sci, comp-sci, etc. to come up with a seminar that would use all the disciplines together…
That is crazy, right!? The assumption that micro-decisions scale is itself also amazing.
I might try to use that in an assignment — good idea (Haraway in action, not just as an idea)
someone should look into the cyborgs: http://www.wbur.org/onpoint/2016/08/24/distracted-driving-self-driving-cars
indeed, could start by seeing if they are using highly suspect psychology methods/results as a basis for any of their engineering. the big deal these days in algorithm crit is the all-too-human biases getting amped up by machines; can't tell you how many big data types i've run into who are just applying all their computing power to act on their unquestioned biases…
https://twitter.com/FrankPasquale
The idea, however, is that the algorithms that will “man” these autonomous vehicles will have to be generated by humans and, what’s more, based on (thanks to MIT, at least in this example) human responses to similar situations. That is where scholars of STS might hit some pay dirt, I would guess, in teasing out the assumptions and, with luck, presenting them back to the lab.
Also, this is getting some pretty interesting play on Facebook: the discussion is about “safety” (everybody saw that this is just a recreation of the old “trolley problems”) and the claim that autonomous vehicles are going to be ultimately safer (no matter who dies, I guess): http://cargroup.org/?module=Publications&event=View&pubID=87 and https://www.enotrans.org/wp-content/uploads/2015/09/AV-paper.pdf (pretty good reading, really)
I know! The applications are terrific for STS and ethics, values, etc.
“MIT’s Media Lab, [in a group] called the Moral Machine”
say no more…