I recently had the opportunity to see the film Sully (2016), which recounts the 2009 emergency landing of a jetliner on New York’s Hudson River. Despite some critical flaws, the film is not only a thrill to watch but also provides much food for thought to those studying infrastructure. Even the flaws are instructive. One of them – certainly the most discussed – regards the portrayal of the National Transportation Safety Board (NTSB) that, as per protocol, investigated the accident. Whether due to Hollywood convention or directorial choice, the NTSB team are neatly cast as the villains, out to get the story’s hero by discrediting his decision-making process.
In fact, in recent years the NTSB has generally avoided single causes in favor of a complex systems approach to accident analysis. Not so this film. Instead of the title “Miracle on the Hudson”, as the incident is colloquially known, the film is named after the flight’s captain, Chesley “Sully” Sullenberger. Instead of delving into this stranger-than-fiction, real-life story, the film tells a (contrived) great-man story. And therein lies its other major flaw: it misses the opportunity to conjure up the full cast of characters, human and machine, that had to work together for this improbable triumph over adversity to come to pass. After all, airplane cockpits are prime settings for capturing distributed cognition in action. Clive Irving drives the point home:
In the final seconds before the airplane hits the water you’ll see Sully’s left hand (or rather Tom Hanks’s hand playing Sully’s gifted hand) on the sidestick controlling the airplane. He appears to be pulling hard back to keep the nose up. In fact, Sully’s command was being overridden by the Airbus’s own brain. It reduced the nose-up angle by two-and-a-half degrees. Sully wasn’t pulling back too hard, he wanted all the angle he could get to soften the impact on the water. But he knew that the airplane itself was computing how to preserve control when at the limits of its ability to keep flying, and that it would know how to do that better than he did. This turned out to be an extraordinary, exquisite moment when a machine and a man, together, got it exactly right.
Although frustratingly oblivious to human-machine infrastructure, the film excels at fleshing out the human side of risk management. We get a front-and-center view of the dizzying intensity and variety of (cognitive, emotional, material, organizational, social) cues through which decision makers must navigate to successfully carry out their job. Watching Sully at work, I was very much reminded of the kind of “disciplined improvisation” I observed with operational forecasters at the Weather Service. Once again, it is not strict adherence to protocol but drawing on (and trusting!) one’s lived, embodied experience that saves the day. Importantly, however, whereas operational forecasters are culturally primed to be what I have called “weather observation omnivores,” airplane pilots are primed to be information univores. Forecasters have developed an appetite for a veritable smorgasbord of cues about the weather, while pilots (echoed by Sully in the film) regard any cue external to the predefined task as a (non-essential or essential) “distraction”. Such is the magnitude of risk and error proneness associated with operating an aircraft that the aviation industry has instituted a “sterile cockpit rule” and incorporated increasing levels of automation in aircraft design and use. Yet, while highly effective, this approach to managing complexity and instilling order is no less fraught with pitfalls. The tendency, as Sully laments in the film, to take “the humanity out of the cockpit” has also translated into inadequate and unrealistic air crew training. The majority of recent accident and incident reports identify pilot complacency and lack of situational awareness as primary culprits.
The ever-increasing task and workflow automation of decision-making infrastructures has forced adaptive complex systems to constantly reinvent the role of their human operators – indeed, to question the need for any human operators at all. Both the weather forecasting and the aviation industries struggle with this dilemma, albeit currently from opposite ends of the information omnivore-univore spectrum. It bears keeping in mind, however, that it took a skilled human as well as a skilled machine for the Miracle on the Hudson to happen. If we cannot afford to remove humans from the hot seat, then it is time we designed infrastructures that treat human judgment and decision making as an asset rather than a liability, as a distinct skill set to be nurtured and empowered rather than subordinated to the powers of the machine.