
I’m sorry Dave, I’m afraid I can’t do that

In Stanley Kubrick’s 1968 epic 2001: A Space Odyssey, the mission’s unnervingly sentient ‘Heuristically programmed ALgorithmic’ computer, known as HAL, self-assuredly proclaims himself “by any practical definition of the word, foolproof and incapable of error”.


HAL’s declaration is of course flawed, and in his eventual downfall Kubrick and collaborator Arthur C Clarke were almost certainly exploring what we might today call algorithmic or machine learning bias: the phenomenon in which an algorithm produces systematically prejudiced results because of erroneous assumptions in the machine learning process. Fifty-one years later, the balance between science fact and fiction is no less blurred. Even an innocuous-seeming software or systems upgrade can have irritating, disruptive or, at worst, devastating effects. Take, for example, Leesman’s recent experience:

“The balance between science fact and fiction is still blurred... even an innocuous-seeming software or systems upgrade can have irritating, disruptive, or at worst, devastating effects.”

Friday April 5th: an internal email from our Dr Peggie Rothe warned all client-facing team members that our client relationship management (CRM) and workflow application had developed an anomaly. Team members create a project ‘card’ on receipt of a client enquiry, and we track its development and velocity towards becoming a live project. As a project’s go-live date becomes more certain, a ‘close date’ is added to the card, directly informing our workflow and cashflow analysis.

But the Doc, feared and famed for her attention to the finest numeric detail, had spotted that all new cards were suddenly acquiring close dates seemingly of their own accord. Her hypothesis was that the application’s developers had added new functionality that predicts a close date from the characteristics of the project card: client name, card value, project complexity, the person creating the card and so on, all presumably based on historic patterns.
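We can only guess at how the developers actually wired this up, but in crude outline a close-date predictor of this kind might look something like the sketch below. Every name, field and figure here is hypothetical; it simply illustrates the idea of inferring a close date from historic card data.

```python
# Hypothetical sketch only: a crude illustration of how a CRM vendor might
# predict a project card's close date from historic card data.
from dataclasses import dataclass
from datetime import date, timedelta
from statistics import mean


@dataclass
class HistoricCard:
    client: str
    value: float
    complexity: int      # e.g. 1 (simple) to 5 (complex)
    days_to_close: int   # observed days from card creation to close


def predict_close_date(created: date, complexity: int,
                       history: list[HistoricCard]) -> date:
    """Guess a close date from the average of similar historic cards."""
    similar = [c.days_to_close for c in history if c.complexity == complexity]
    # Fall back to the overall average if no comparable cards exist
    pool = similar or [c.days_to_close for c in history]
    return created + timedelta(days=round(mean(pool)))


history = [
    HistoricCard("Client A", 25_000, 2, 45),
    HistoricCard("Client B", 80_000, 4, 120),
    HistoricCard("Client C", 30_000, 2, 60),
]
print(predict_close_date(date(2019, 4, 5), 2, history))  # 2019-05-27
```

Even in this toy version the design choice that caught us out is visible: the guess is written into the very field a human would otherwise fill in deliberately.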

In an application features list, this might look like a cool new bit of predictive functionality. But it left us having to work out how to manually override or cancel something others assumed we’d appreciate, because it suddenly threw our management information dashboards dramatically off course.

The email came at the end of a bad week in the news for the designers of similar, though admittedly hugely more complex, systems. Ethical hackers at Tencent Keen Security Lab had shown that software flaws in Elon Musk’s Tesla Model S, one of the most advanced cars on the road, allowed them to confuse the vehicle’s lane recognition system into thinking the straight road ahead actually curved and, in doing so, steer the vehicle across lanes into the path of oncoming traffic.

Most interestingly, the ‘fake lane attack’ was an example of a new type of incredibly low-tech hack, consisting of nothing more than a series of strategically placed ‘interference stickers’ on the road surface, so that, perversely, the very image recognition system designed to keep the vehicle in the centre of its lane did the opposite.

Musk’s typically tactical response complimented the discovery as “solid work” that would help accelerate the advancement of such systems. He then dismissed it as somewhat irrelevant, since the ‘autopilot’ can be overridden at any moment and the system was never intended to offer an autonomous experience that would replace the need for an adult behind the controls of the vehicle.

But this came in the same week that Ethiopian air accident investigators announced that the captain and first officer of the Ethiopian Airlines Boeing 737 Max that had crashed three weeks earlier had indeed correctly taken manual control of their aircraft and followed all of the emergency procedures specified by Boeing. Despite their efforts, all 157 lives on board were lost.

Initial investigations centre on the role of the aircraft’s Manoeuvring Characteristics Augmentation System (MCAS). The system was already under industry scrutiny following the loss of Indonesia’s Lion Air 737 Max flight JT610, which crashed 13 minutes after taking off from Jakarta in October 2018, killing all 189 people on board. The last moments of both flights exhibited strong similarities.

Boeing’s MCAS is a software protocol unique to the 737’s Max variant, introduced principally in response to the different in-flight handling characteristics of the Max that resulted from the specification of new, 10% more-efficient engines. These new engines display ‘non-linear lift’ characteristics, which will mean very little to most. At the risk of grossly oversimplifying the issue: the aircraft has a different centre of gravity, and the relationship between the physical instructions the pilot inputs through the yoke and the feedback they get back from it can put the pilot at risk of pitching the aircraft at too steep an angle during take-off.

MCAS kicks in to prevent a stall if sensors detect that the aircraft’s Angle of Attack (AOA), or climb, is too aggressive. It automatically trims the horizontal stabiliser at the tail of the aircraft to point the nose downward. MCAS should only engage when the autopilot is disengaged, which is predominantly when the pilot is ‘hand flying’ at take-off or landing.
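For illustration only, and emphatically not Boeing’s implementation, the engagement logic described above reduces to something like the following sketch; the threshold and trim values are invented.

```python
# Purely illustrative sketch of the engagement logic described above.
# This is NOT Boeing's implementation; threshold and trim values are invented.

AOA_STALL_THRESHOLD_DEG = 14.0   # hypothetical angle-of-attack limit
NOSE_DOWN_TRIM_STEP = -0.6       # hypothetical stabiliser trim increment (degrees)


def mcas_command(aoa_sensor_deg: float, autopilot_engaged: bool) -> float:
    """Return a nose-down stabiliser trim command, or 0.0 for no action."""
    if autopilot_engaged:
        return 0.0                    # only acts when the pilot is hand flying
    if aoa_sensor_deg > AOA_STALL_THRESHOLD_DEG:
        return NOSE_DOWN_TRIM_STEP    # push the nose down to head off a stall
    return 0.0


# The failure mode under investigation: a faulty sensor reporting an impossibly
# high angle of attack keeps triggering nose-down trim on a healthy aircraft.
print(mcas_command(aoa_sensor_deg=22.0, autopilot_engaged=False))  # -0.6
```

Written this way, the weakness is obvious: the logic trusts whatever the sensor reports.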

The yoke is designed to shake violently if a stall is detected. Evidence from the Lion Air flight, however, shows this happening even though the aircraft was not at that point at risk of stalling. The flight crew were also contending with incorrect altitude and airspeed readings. Flight data shows the Lion Air aircraft dipped moments later, dropping 700ft of altitude before the pilots halted the descent. This pattern continued.

Details from the initial Ethiopian Airlines investigation suggest that, faced with immediate difficulty controlling the aircraft’s initial climb, perhaps as a result of the non-linear lift characteristics, the pilot sought the support of the autopilot and engaged it almost immediately. Cockpit voice recordings are reported to confirm that the captain called out three times to “pull up”, and seconds later instructed the first officer to tell Air Traffic Control that they had a flight control problem.

With the autopilot failing to help the situation, the flight crew disengaged it. But each time the subsidiary MCAS automation appears to have kicked in, it appears to have worsened the problem. Reuters reported that cockpit data confirmed the captain and first officer had then both correctly followed emergency checklist protocols and manually disabled the MCAS, taking direct mechanical control of the tail stabilisers. This should have immediately brought the nose of the aircraft back level. Yet the MCAS system may have repeatedly reactivated itself without a direct command from either crew member. Reuters: “Investigators are studying whether there are any conditions under which MCAS could reactivate itself automatically”. Perhaps, like Kubrick’s HAL, MCAS believed it knew better.

Further information also points to erroneous data from airflow and angle of attack sensors on the outside of the aircraft contributing to the systems’ confusion. Based on the preliminary evidence, the battle in the cockpit seems to have been between the aircraft’s handling, the flight crew’s ability to decipher those anomalies, the various computerised systems’ ability to correct or take over situation management, and those systems’ capacity to recognise they were acting on flawed data.

When the aircraft eventually hit the ground, it did so well in excess of its designed maximum airspeed and with engines still at 90% thrust, leading some commentators to question whether ‘mode confusion’ (see Leesman Review issue 26) also played a part, or whether computerised control and sensor systems were fighting it out in a digital battle despite the flight crew’s best actions.

As the sophistication and complexity of the systems designed to make our lives simpler and safer grow, so too does the potential impact when they fail. Our CRM system’s ‘functionality enhancement’ wasn’t the result of a sentient computer’s insistence that it knew better, but of a developer, engineer or data scientist somewhere insisting that they knew better.

“AI is designed by humans with their own limitations.”

It is all put properly into perspective when you consider the last moments of the 346 passengers and crew on board flights JT610 and ET302. Both scenarios should serve to remind us that while AI is the broad science of machines mimicking human abilities, machine learning is a specific subset of AI that trains a machine how to learn. And both are designed by humans, with their own limitations.

As the sophistication of those systems increases, so too should the thoroughness of their testing and licensing, and our understanding of the unintentional bias built into them by virtue of the individuals and/or the design and engineering systems and steps involved in creating them.

So whilst other journals revel in how these systems smooth our daily lives and fuel our business futures, this issue of the Leesman Review takes an alternative look at their advancement. As we put this issue together, one thing became very clear: it could well be that the organisations that rise to the top will not be those that invest most heavily in AI, but those that invest most heavily in the human employees managing, implementing and controlling it.
