When Automation Bites Back

The business of dishonest automation and how the engineers, data scientists and designers behind it can fix it

“The pilots fought continuously until the end of the flight,” said Capt. Nurcahyo Utomo, the head of the investigation into Lion Air Flight 610, which crashed on October 29, 2018, killing the 189 people aboard. The analysis of the black boxes revealed that the Boeing 737’s nose was repeatedly forced down, apparently by an automatic system receiving incorrect sensor readings. During the 10 minutes preceding the tragedy, the pilots tried 24 times to manually pull up the nose of the plane. They struggled against a malfunctioning anti-stall system that they did not know how to disengage on that specific version of the plane.

That kind of dramatic scene, with humans struggling against a stubborn automated system, is familiar from pop culture. In the famous scene of the 1968 science-fiction film “2001: A Space Odyssey”, the astronaut Dave asks HAL (Heuristically programmed ALgorithmic computer) to open a pod bay door on the spacecraft, to which HAL responds repeatedly, “I’m sorry, Dave, I’m afraid I can’t do that.”

1. The commodification of automation

Thankfully, the contemporary applications of digital automation are partial and do not take the shape of an “artificial general intelligence” like HAL. However, the computational tasks that were once exclusively applied to automate human jobs in critical environments like a cockpit have reached people’s everyday lives (e.g. automated way-finding, smart thermostats), and the techniques are often deployed for more frivolous yet very lucrative objectives (e.g. targeted advertisements, prioritizing the next video to watch on YouTube).

“What concerns me is that many engineers, data scientists, designers and decision-makers bring digital frictions into people’s everyday life because they do not employ approaches to foresee the limits and implications of their work”

The automated systems that once relied on programmed instructions based on their authors’ understanding of the world now also model their behavior on patterns found in datasets of sensor readings and human activities. As the application of these Machine Learning techniques becomes widespread, digital automation is becoming a commodity, with systems that perform one task at Internet scale with no deep understanding of human context. These systems are trained to complete that “one” job, but there is evidence that their behavior, like HAL or a Boeing 737 anti-stall system, can turn against their users’ intentions when things do not go as expected.

2. The clumsy edges

Recent visual ethnographies at Near Future Laboratory like #TUXSAX and Curious Rituals uncovered some implications of that commodification of automation. On a completely different scale from the dramatic consequences that brought down Lion Air Flight 610, these observations highlight how some digital solutions leave people with a feeling of being “locked in” with no “escape” key to disengage from a stubborn behavior. The vast majority of these digital frictions provoke harmless micro-frustrations in people’s everyday lives. They manifest themselves through poorly calibrated systems and designs that disregard edge cases. For instance, it is common to experience a voice assistant unable to understand a certain accent or pronunciation, or a navigation system that misleads a driver due to location inaccuracies, obsolete road data or incorrect traffic information.

Curious Rituals is a fiction that showcases the gaps and junctures that glossy corporate videos on the “future of technology” do not reveal. Source: Curious Rituals.

These clumsy automations can be mitigated but will not disappear, because it is impossible to design contingency plans for all unexpected limitations or consequences. However, other types of stubborn autonomous behaviors are intentionally designed as the core of business models that trade human control for convenience.

3. The business of dishonest automation

Many techniques to automate everyday tasks allow organizations to reduce costs and increase revenues. Some members of the tech industry employ these new technological capabilities to lock customers or workers into behaviors for which they have no legitimate need or desire. Those systems are typically designed to resist their users’ demands AND are hard to disengage. Let me give you a couple of examples of what I call “dishonest automations”:

3.1. Data obesity

Automatic cloud backup systems have become a default feature of operating systems. They externalize the storage of personal photos, emails, contacts and other bits of digital life. Their business model encourages customers to endlessly accumulate more content, without a clear alternative that promotes proper data hygiene (i.e. nobody has yet come up with a “Marie Kondo for Dropbox™”). Regardless of the promises of the providers, it becomes harder for people to declutter the digital lives they keep in a cloud storage service.
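To make the idea of data hygiene concrete, here is a minimal Python sketch of what a “Marie Kondo for Dropbox”-style helper could look like, assuming a locally synced backup folder. The thresholds, the folder path and the very existence of such a feature are assumptions for illustration, not any provider’s actual product. The stance is the point: surface candidates and let the human decide, never auto-delete.

```python
import os
import time
from pathlib import Path

YEARS_UNTOUCHED = 3   # assumption: "old" means not opened in 3 years
MIN_SIZE_MB = 100     # assumption: only bother the user with big files

def declutter_candidates(root: str):
    """Yield (path, size_mb, years_idle) for large, long-untouched files."""
    now = time.time()
    cutoff = now - YEARS_UNTOUCHED * 365 * 24 * 3600
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        stat = path.stat()
        # st_atime is a rough proxy for "last time this mattered to you".
        if stat.st_size >= MIN_SIZE_MB * 2**20 and stat.st_atime < cutoff:
            years_idle = (now - stat.st_atime) / (365 * 24 * 3600)
            yield path, stat.st_size / 2**20, years_idle

# Suggest, never auto-delete: the human keeps control.
for path, size_mb, idle in declutter_candidates(os.path.expanduser("~/CloudBackup")):
    print(f"{path} ({size_mb:.0f} MB, untouched {idle:.1f} years) - keep or let go?")
```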

Upgrade your storage to continue backing up: an automatic cloud backup system that locks in its user, leaving no alternative to the accumulation of content.

3.2. Systemic obsolescence

Today’s automatic app updates often increase the demands on resources and processing power for cosmetic improvements, almost in a deliberate attempt to make hardware obsolete and software harder to operate. After years of impunity, there is now a growing consciousness against systemic obsolescence, because it is wasteful and exploits customers.

3.3. Digital attention

As content grows exponentially on the Internet, (social) media companies rely increasingly on automation to filter and direct information to each one of their users. For instance, YouTube automatically selects, among billions of videos, the one to play next for each of its 1.5 billion users. These algorithms aim to promote content that maximizes engagement and tend to guide people against their own interests.
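To make that incentive tangible, here is a deliberately naive Python sketch of engagement-driven ranking. It is not YouTube’s actual system: the `Video` fields and the scoring formula are invented for illustration. Notice what the objective optimizes, time-on-site, and what appears nowhere in it: whether the session serves the viewer’s interest.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # model's guess at how long this viewer stays
    predicted_click_prob: float     # model's guess at the click-through rate

def engagement_score(v: Video) -> float:
    # The objective is expected time-on-site, a proxy for ad revenue.
    # "Is this good for the viewer?" is not part of the formula.
    return v.predicted_click_prob * v.predicted_watch_minutes

def pick_next(candidates: list[Video]) -> Video:
    # Greedy choice: always play the video expected to keep the viewer longest.
    return max(candidates, key=engagement_score)

candidates = [
    Video("Calm documentary", predicted_watch_minutes=12.0, predicted_click_prob=0.10),
    Video("Outrage compilation", predicted_watch_minutes=9.0, predicted_click_prob=0.35),
]
print(pick_next(candidates).title)  # -> "Outrage compilation"
```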


In light of these examples of clumsy and dishonest automation, what concerns me is that many engineers, data scientists, designers and decision-makers bring these frictions into people’s everyday lives because they do not employ approaches to foresee the limits and implications of their work. Beyond the engineering of efficient solutions, automation requires professionals to think about foundations and consequences of their practice that transcend any Key Performance Indicator of their organization.

4. The design for humane automation

The design of automation is not about removing the presence of humans. It is about the design of humane, respectful and trustworthy systems that automate some aspects of human activities. When working with data scientists, designers and engineers in that domain, we envision systems beyond the scope of the “user” and the “task” to automate. I encourage teams to a) learn from the past, b) critique the present and c) debate the future. Let me explain:

4.1. Learn from the past

When it comes to automation, the acquisition of knowledge in academia and in industry are not separate pursuits. Over the last 50 years, research institutions have produced an extensive body of work on the implications of automating manual tasks and decision-making. The key findings have helped save money in critical environments and prevent numerous deadly errors (e.g. in cockpits).

Today, that knowledge has not been translated to everyday tasks. For instance, many engineers or data scientists do not master concepts like automation bias (i.e. the propensity for humans to favor suggestions from automated decision-making systems) or automation complacency (i.e. decreased human attention to monitoring automated results) theorized by research communities in Science and Technology Studies or Human-Computer Interaction. Sadly, only a few organizations promote platforms that gather academics, artists, engineers, data scientists and designers. Industries in the process of digitization would greatly profit from this type of cross-pollination of professionals who learn from considerations that already emerged outside of their discipline.

4.2. Critique the present

I believe that the professionals involved in the business of automating human activities should be persistent critical reviewers of the solutions deployed by their peers. They should become stalkers of how people deal today with the clumsy, the dishonest, the annoying, the absurd and any other awkward manifestations of digital technologies in their modern lives.

#TUXSAX is an invitation to engage with these knotty, gnarled edges of technology. It provides some raw food for thought to consider the mundane frictions between people and technologies. Do we want to mitigate, or even eliminate, these frictions? Source: Documenting the State of Contemporary Technology.

When properly documented, these observations offer a form of inspiration complementary to the multitude of “naive optimism” and glamorous utopian visions of the tech industry. They provide material for professionals to question arguably biased goals of automation. Moreover, they set the stage to define attainable objectives in their organization (e.g. what does smart or intelligent mean? how should efficiency be measured? what must become legible?).

4.3. Debate the future

On today’s Internet, the design of even the simplest application or connected object has become a complex endeavor. They are built on balkanized operating systems, stacks of numerous protocols, versions, frameworks and other packages of reusable code. The mitigation of digital frictions goes beyond the scope of a “Quality Assurance” team that guarantees the sanity of an application. It is also about documenting implications for the contexts in which the technologies live, unintended consequences and “what if” scenarios.

It’s easy to get all Silicon Valley when drooling over the possibility of a world chock-full of self-driving cars. However, when an idea moves from speculation to designed product, it is necessary to consider the many facets of its existence: the who, what, how, when, why of the self-driving car. To address these questions, we took a sideways glance at it by forcing ourselves to write the quick-start guide for a typical self-driving car. Source: The World of Self-Driving Cars.

Design Fiction is typically an approach to spark a conversation and anticipate the larger questions regarding the automation of human activities. For instance, we produced the Quick Start Guide of Amazon Helios: Pilot, a fictional autonomous vehicle. In that project, we identified the key systems that implicate the human aspects of a self-driving car and brought those experiences to life in a very tangible, compelling fashion for designers, engineers and anyone else involved in the development of automated systems. Through its collective production, the Quick Start Guide became a totem through which anybody could discuss the consequences, raise design considerations and shape decision-making.

5. The business of trust

Like many technological evolutions, the automation of everyday life does not come without the frictions of trading control for convenience. However, the consequences are bigger than mitigating edge cases. They reflect human, organizational or societal choices: the choice of deploying systems that mislead about their intentions, in conflict with the interests of people and society.

In his seminal work on Ubiquitous Computing in the 90s, Mark Weiser strongly influenced the current “third wave” in computing, in which technology recedes into the background of people’s lives. Many professionals in the tech industry (including me) embraced his description of Calm Technology that “informs but doesn’t demand our focus or attention.” However, what Weiser and many others (including me) did not anticipate is an industry of dishonest automation, of solutions that turn against their users’ intentions when things do not go as planned. Nor did we truly anticipate the scale at which automation can bite back at the organizations that deploy it, with backlashes from their customers, society, as well as policymakers.

Instagram post by Nicolas Nova (@nicolasnova): #curiousrituals #classic #vendingmachine

These implications suggest an alternative paradigm that transcends the purely technological and commercial for any organization involved in the business of digital automation. For instance, a paradigm that promotes respectful (over efficient), legible (over calm) and honest (over smart) technologies. Those are the types of values that emerge when professionals (e.g. engineers, data scientists, designers, decision-makers, executives) wander outside their practice, apply critical thinking to uncover dishonest behaviors, and use fictions to make decisions that consider implications beyond the scope of the “user” and the “task” to automate.

I believe that the organizations in the business of automation that maintain the status quo and do not evolve into a business of trust might eventually need to deal with a corroded reputation and its effects on their internal values, the morale of their employees, their revenues and ultimately their stakeholders’ trust.


Design Fiction Workshop: Failures

Saturday October 30 05:04

I’ve been away for a while, so obviously I’m just now catching up with some notes on the events and activities of the last few weeks. One thing I want to make a note about is the fun workshop that Nicolas and I facilitated at the Swiss Design Network conference in Basel, Switzerland late last month. The workshop was largely Nicolas’ organization, and we took advantage of the conference theme of “Design Fiction” to consider the topic of failure in design: failure as a guide, approach and provocation, together with the considerations that design fiction can offer.

Saturday October 30 02:02

Nicolas has posted the notes from the workshop

It was a relatively short workshop, a couple of hours in total. Initially I was nervous that there would not be enough guidance to allow the participants to grab onto the material enthusiastically. That proved to be wrong. After an initial presentation that Nicolas had prepared on the topic of design fiction and failures, we broke the 30 or so participants into groups of four or five individuals. We had prepared three assignments that each group was meant to conduct. After completing each assignment, each of which lasted 20-25 minutes, the groups turned inward and shared some summary insights, results and conclusions. They didn’t know all the assignments ahead of time.

Saturday October 30 02:24

The first assignment was to consider where and when failure happens in design. Without a specific definition of what constitutes failure, the assignment was meant to warm things up by creating a debate and a set of examples as to what failure is and when and how it occurs. From Nicolas’ notes (my notebook has escaped me temporarily):

  • #Wrong hair color, not the one that was expected
  • #Help-desk calls in which you end up being re-routed from one person to another (and getting back to the first person you called)
  • #Nice but noisy conference bags
  • #Toilet configurations (doors, sensors, buttons, soap dispensers, hand-dryers…) in which you have to constantly re-learn everything
  • #Super loud and difficult-to-configure fire alarms that people disable
  • #Electronic keys
  • #Garlic presses which are impossible to clean
  • #Online flight-booking platforms on which you bought two tickets under the same name even though it’s “not possible” from the company’s perspective (but it was technically feasible)
  • #Cheap lighters that burn your nose
  • #GPS systems in the woods
  • #Error messages that say “Please refer to the manual” when there is no manual
  • #Hotel WLAN no longer offered because the hotel had to pay too many fines for illegal downloads
  • #Refrigerators that beep anxiously to indicate the door is open, but do so even when you’re busily loading groceries

This assignment was useful to begin thinking about failure. The goal was less about creating a definitive or definitional list and more about thinking beyond it, using examples as motivators and things to think with.

Saturday October 30 02:39

The next assignment was essentially the first, but projected forward: create examples that one might anticipate as typical failures in the future, the design fiction failures. Things that could occur given that everything fails to meet our highest expectations or (as I’m particularly interested in) the highest of the hype that surrounds newly designed stuff. Epic failures or just routine annoyances were all open for consideration. How might the promise of cloud computing fail, both in the major disaster ways and in the small, wtf!? sort of ways?

Again, from Nicolas’ notes:

  • #Identity change through facial surgery, potentially leading to discrepancies in face/fingerprint recognition,
  • #Wireless data leaking everywhere except in “cold spots” for certain kinds of people (very rich, very poor),
  • #Problems with space travel
  • #The need to “subscribe” to a service as a new person because of some database problem
  • #People who lived prior to the Cloud Computing era, who have no electronic footprint (VISA, digital identity) and have trouble moving from one country to another,
  • #3D printer accidents: way too many objects in people’s homes, the size of a printed object badly tuned so it comes out way too big, a monster printed after a kid connected a 3D printer to his dreams, …
  • #Textiles which suppress bad smells but also remove pheromones, affecting sexual desire (no more laundry but no baby either)…
  • #A shared electrical infrastructure in which people can download/upload energy but no one ever agreed on the terms and conditions… which leads to a collapse of this infrastructure
  • #Clothes and wearable computing that can be hacked, so you must now fly naked (and your luggage takes a different flight)

I was particularly taken by the 3D printer example. There’s of course lots of excitement about the possibilities of 3D printers in the home, so that everyone makes the stuff that they need. But making stuff is hard and inevitably open to all kinds of crazy failures such as those described here. Also, what do people do with the materials when they mess something up? How does the plastic (or whatever it ends up becoming, maybe noxious nasty stuff) get recycled? Will an entire system of rematerializing the goop have to evolve? What about the equivalent of the print failures we often experience, where one document ends up printing one letter per page, page after page, and we don’t notice until fifty sheets of paper have been used? Or when we scale something wrongly and the machine blindly goes ahead and prints something at 3 meters when we meant 3 millimeters? All these sorts of things will happen. Can we use these insights to help make decisions about what and how to design? Can we start to communicate these failures as a way to design not with the expectation that the world is perfect, but with the knowledge that the results of designs have chinks and kinks in them?
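One way to design with that knowledge is to encode the anticipated failure as a guard. Here is a hypothetical pre-print “preflight” check in Python; the build volume, the minimum feature size and the `PrintJob` structure are all invented for illustration, not taken from any real printer’s firmware. It catches the 3-meters-instead-of-3-millimeters job before the machine blindly goes ahead:

```python
from dataclasses import dataclass

# Assumed build volume of a hypothetical desktop printer, in millimeters.
BUILD_VOLUME_MM = (220.0, 220.0, 250.0)
MIN_FEATURE_MM = 0.4  # below roughly one nozzle width, details simply vanish

@dataclass
class PrintJob:
    name: str
    size_mm: tuple[float, float, float]  # bounding box of the model

def preflight(job: PrintJob) -> list[str]:
    """Return human-readable objections instead of blindly printing."""
    problems = []
    for axis, (size, limit) in enumerate(zip(job.size_mm, BUILD_VOLUME_MM)):
        if size > limit:
            # Heuristic hint for a likely meters/millimeters mix-up.
            problems.append(
                f"{job.name}: axis {axis} is {size:.0f} mm, printer max is "
                f"{limit:.0f} mm. Did you mean {size / 1000:.1f} mm?"
            )
    if max(job.size_mm) < MIN_FEATURE_MM:
        problems.append(f"{job.name}: model is smaller than one nozzle width.")
    return problems

# 3 meters entered where 3 millimeters was meant: caught before printing.
print(preflight(PrintJob("vase", size_mm=(3000.0, 3000.0, 3000.0))))
```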

Saturday October 30 02:39

The final activity was to think about possible taxonomies for designed failures — what are the types and kinds of failures as we’ve discussed them in the previous two assignments?

  1. #Short-sightedness/not seeing the big picture
  2. #Failures and problems that we only realize ex-post/unexpected side-effects
  3. #Excluding design
  4. #Bad optimization
  5. #Unnoticed failures
  6. #Miniaturization that doesn’t serve its purpose
  7. #Cultural failures: what can be a success in one country/culture can be a failure in another
  8. #Delayed failures (feedback is too slow)
  9. #When machines do not understand users’ intentions/technology versus human perception/bad assumptions about people (“Life has more loops than the system is able to understand”)
  10. #Individual/Group failure (system that does not respond to individuals, only to the group)
  11. #System-based failures versus failures caused by humans/context
  12. #Natural failures: leaves falling from trees considered as a problem (although it’s definitely the standard course of action for trees)
  13. #Good failures: failures need interpretation, perhaps there’s no failure… alternative uses, misuses
  14. #Inspiring failures
  15. #Harmless failures

Why do I blog this? Well, mostly just to get some notes from the workshop up to share. I’m learning quite a bit from Nicolas on the failures theme, and perhaps it’s a way to answer a question that Chairman Bruce has lofted: now that we “get” the idea of design fiction and it seems to be inspirational for folks and useful in that regard (witness the theme of the Swiss Design Network conference this year), okay. We get it. Now what? How does the idea of design fiction either operationalize or become part of specific sorts of design practices in some informal or formal ways? It’s happening of course, all over the place, and not because of this idea of design fiction itself, either how I’ve discussed it over the last 18 months or so, or how it’s been described and enacted by many people and agents. It’s not just about science fiction, of course, and this was the topic of a paper at the conference that I may have the energy to describe in an upcoming post. But it is useful in very direct ways with the activities and goals of design generally speaking, that much is clear.

Design for Failure


With regrets to Aaron for the blurry, noisy photo of himself. Taken in Montreal, Canada at Design Engaged 2008.

For no particular reason — perhaps a salute to Nicolas who will be presenting his work on design for failure at IxDA this week — I bring you this image taken during DE2008 in which Aaron Straup Cope discusses designing engineering systems with failure contingency as the critical path.

Why do I blog this? I find this perspective intriguing: it assumes system meltdown, anticipates it, and delivers the appropriate data to indicate when it might happen. If I remember correctly, there is no specific interest in being exact about failure, just the acknowledgement that it will happen and that you might be told roughly how long until it does. So the effort is to help stave it off by various means: adding more servers to spread activity loads around, optimizing queries, increasing caching, whatever you need to do. This makes me think of the intractability of designing for deletion. If someone wants to extricate themselves from the databases of a service or system, there is almost certainly no quick and easy way; in fact, I doubt there is any way at all, and most services are not obligated to handle these situations. If I told Google that I wanted to check out fully and completely, even if they wanted to do this, it is doubtful they could. Would someone have to run through all the backup *whatever (tapes?) wherever they may be? It’s not just the live systems, and it’s not just purging caches and so on. All of our data is on its own, like orphaned snapshots of moments in our lives, somewhere. I don’t necessarily find this chilling or anything like that. I’m just curious about this notion: designing for intractable, ugly, messy circumstances, like failure or deletion. Things that run counter to the intuition; we usually design for the beautiful, full, glorious 32-bit conditions.
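Here is a minimal sketch of that “roughly how long until it happens” idea, assuming the system logs periodic disk-usage samples (the capacity and the numbers below are made up for illustration): fit a line through recent usage and extrapolate to the capacity ceiling. Precision is beside the point; the rough horizon is what buys time to add servers, optimize queries or increase caching.

```python
import statistics

CAPACITY_GB = 500.0  # assumed total disk capacity

def hours_until_full(samples: list[tuple[float, float]]) -> float | None:
    """Extrapolate (hour, used_gb) samples linearly to the capacity line.

    Returns a rough horizon in hours from the latest sample,
    or None if usage is not growing.
    """
    xs = [t for t, _ in samples]
    ys = [gb for _, gb in samples]
    # Least-squares slope: covariance over variance.
    slope = statistics.covariance(xs, ys) / statistics.variance(xs)
    if slope <= 0:
        return None
    intercept = statistics.mean(ys) - slope * statistics.mean(xs)
    return (CAPACITY_GB - intercept) / slope - xs[-1]

usage = [(0, 300.0), (24, 310.0), (48, 321.0), (72, 330.0)]  # (hour, GB used)
eta = hours_until_full(usage)
if eta is not None:
    print(f"Roughly {eta:.0f} hours until the disk fills up.")
```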

Anticipating Failure


I swear to GOD this is a friend’s “sig” line in her emails (she’s the hardest-working IT person at a university department with lots of whining, illegal-software-downloading, computer-breaking-and-never-fixing, softdrink-drinking-right-by-fancy-computer-equipment students, so I have complete sympathy).

Various Disclaimers:
MAD AT ME? My email load is heavy and some things end up in spam folders. If you think I have forgotten about you, re-send your email or send me an SMS.
ERRORS? I use a TabletPC. (Handwriting recognition is almost perfect.)

Why do I blog this? It occurs to me that these are all ways of anticipating failures of various sorts. Failures in the handwriting recognition software (inevitable…); failures to respond, and the anticipation of people getting upset because there’s no response, inevitably resulting in a follow-up email with thinly veiled expressions of piss-offedness, etc. What are the ways that our technology forces us to anticipate failure? Does anticipating failure lessen the consequences? Can anticipated failure become part of specifications so we get out of the land of fantasy-advertised-feature-richness and get back to the pragmatics of how things actually work out in the wilds of normal, real human social practices?