Complexity management and the information omnivores-versus-univores dilemma

[Image: plane in the Hudson River]

I recently had the opportunity to see the film Sully (2016), which recounts the 2009 emergency landing of a jetliner on New York’s Hudson River. Despite some critical flaws, the film is not only a thrill to watch but also provides much food for thought to those studying infrastructure. Even the flaws are instructive. One of them – certainly the most discussed – regards the portrayal of the National Transportation Safety Board (NTSB) that, as per protocol, investigated the accident. Whether due to Hollywood convention or directorial choice, the NTSB team are neatly cast as the villains, out to get the story’s hero by discrediting his decision-making process.

In fact, in recent years the NTSB has generally avoided single causes in favor of a complex systems approach to accident analysis. Not so this film. Instead of the title “Miracle on the Hudson”, as the incident is colloquially known, the film is named after the flight’s captain, Chesley “Sully” Sullenberger. Instead of delving into this stranger-than-fiction, real-life story, the film tells a (contrived) great-man story. And therein lies its other major flaw: it misses the opportunity to conjure up the full cast of characters – human and machine – that had to work together for this improbable triumph over adversity to come to pass. After all, airplane cockpits are prime settings for capturing distributed cognition in action. Clive Irving drives the point home:

In the final seconds before the airplane hits the water you’ll see Sully’s left hand (or rather Tom Hanks’s hand playing Sully’s gifted hand) on the sidestick controlling the airplane. He appears to be pulling hard back to keep the nose up. In fact, Sully’s command was being overridden by the Airbus’s own brain. It reduced the nose-up angle by two-and-a-half degrees. Sully wasn’t pulling back too hard, he wanted all the angle he could get to soften the impact on the water. But he knew that the airplane itself was computing how to preserve control when at the limits of its ability to keep flying, and that it would know how to do that better than he did. This turned out to be an extraordinary, exquisite moment when a machine and a man, together, got it exactly right.

Although frustratingly oblivious to human-machine infrastructure, the film excels at fleshing out the human side of risk management. We get a front-and-center view of the dizzying intensity and variety of (cognitive, emotional, material, organizational, social) cues through which decision makers must navigate to successfully carry out their job. Watching Sully at work, I was very much reminded of the kind of “disciplined improvisation” I observed with operational forecasters at the Weather Service. Once again, it is not strict adherence to protocol but drawing on (and trusting!) one’s lived, embodied experience that saves the day. Importantly, however, whereas operational forecasters are culturally primed to be what I have called “weather observation omnivores,” airplane pilots are primed to be information univores. Forecasters have developed an appetite for a veritable smorgasbord of cues about the weather, while pilots (echoed by Sully in the film) regard any cue external to the predefined task as a (non-essential or essential) “distraction”. Such is the magnitude of risk and error-proneness associated with operating an aircraft that the aviation industry has instituted a “sterile cockpit rule” and incorporated increasing levels of automation into aircraft design and use. Yet, while highly effective, this approach to managing complexity and installing order is no less fraught with pitfalls. The tendency, as Sully laments in the film, to take “the humanity out of the cockpit” has also translated into inadequate and unrealistic air crew training. The majority of recent accident and incident reports identify pilot complacency and lack of situational awareness as primary culprits.

The ever-increasing task and workflow automation of decision-making infrastructures has forced complex adaptive systems to constantly reinvent the role of their human operators – indeed, to question the need for any human operators at all. Both the weather forecasting and the aviation industries struggle with this dilemma, albeit currently from opposite sides of the information omnivores-univores spectrum. It bears keeping in mind, however, that it took a skilled human as well as a skilled machine for the Miracle on the Hudson to happen. If we cannot afford to remove humans from the hot seat, then it is time we designed infrastructures that treat human judgment and decision making as an asset rather than a liability, as a distinct skill set to be nurtured and empowered rather than subordinated to the powers of the machine.

15 thoughts on “Complexity management and the information omnivores-versus-univores dilemma”

  1. that’s one of my favorite talks, but I haven’t looked him up beyond that; any recommendations?

  2. it’s more of a general overview/call-to-arms than a thickish STS book of case studies, but an effective one, judging by the public reception so far. The themes that resonate with my own work with big data folks are the application of big computing power in the service of unscientific/unreflected biases (with no real sense of complexity/feedback/etc.), the junk-data-in, junk-out problem (like personality tests), the putting of numbers to qualities that aren’t easily quantifiable, the lack of follow-up testing, and, as Phaedra notes, the overestimation of machines (and by extension math, quants, etc.) on the one hand, while the people in management don’t really change their missions/outlooks; no reflexivity to speak of.
    http://backdoorbroadcasting.net/2011/06/peter-miller-the-calculating-self/

  3. my only additional thought (and it might be too large a line of inquiry for an essay) is how, in your call for a new ethos, we can make links to work by Haraway and others on the need to think through cyborg-ology as environmentality, from Gibsonian affordances/resistances to:

    Andy Clark: Trusting the New Cyborg You


    also seems to call for some new recognition/foregrounding of not-knowing, a vital aspect of cybernetics that is all too often left out of the loop.

  4. FYI, I have been asked to write an expanded piece on “Sully” for the 4S blog. If you have already seen the film and/or if you have any reactions to my post above, please feel free to share! I would especially appreciate thoughts regarding how “Sully” relates/speaks to STS research on infrastructure, ANT, or expertise. Thanks!

  5. indeed, as we see with “self”-driving cars, war drones, algorithms (see Weapons of Math Destruction), etc., we need better understandings of why the machines are seen as desirable replacements in particular situations. You are on the radar of Frank Pasquale (@FrankPasquale, Oct 25):
    “On the paradox of relatable expertise: how to communicate severe weather forecasts when 1 in 12 will never act on them?”
