When Automation Bites Back

The business of dishonest automation and how the engineers, data scientists and designers behind it can fix it

“The pilots fought continuously until the end of the flight,” said Capt. Nurcahyo Utomo, the head of the investigation into the crash of Lion Air Flight 610, which went down on October 29, 2018, killing the 189 people aboard. The analysis of the black boxes revealed that the Boeing 737’s nose was repeatedly forced down, apparently by an automatic system receiving incorrect sensor readings. During the 10 minutes preceding the tragedy, the pilots tried 24 times to manually pull up the nose of the plane. They struggled against a malfunctioning anti-stall system that they did not know how to disengage on that specific version of the plane.

That type of dramatic scene of humans struggling with a stubborn automated system also belongs to pop culture. In a famous scene of the 1968 science-fiction film “2001: A Space Odyssey”, the astronaut Dave asks HAL (Heuristically programmed ALgorithmic computer) to open a pod bay door on the spacecraft, to which HAL responds repeatedly, “I’m sorry, Dave, I’m afraid I can’t do that.”

1. The commodification of automation

Thankfully, the contemporary applications of digital automation are partial and do not take the shape of an “artificial general intelligence” like HAL. However, the computational tasks that were once exclusively applied to automating human jobs in critical environments like a cockpit have reached people’s everyday lives (e.g. automated way-finding, smart thermostats), and the techniques are often deployed for more frivolous yet very lucrative objectives (e.g. targeted advertisements, prioritizing the next video to watch on YouTube).

“What concerns me is that many engineers, data scientists, designers and decision-makers bring digital frictions into people’s everyday life because they do not employ approaches to foresee the limits and implications of their work”

The automated systems that once relied on programmed instructions based on their authors’ understanding of the world now also model their behavior on patterns found in datasets of sensor readings and human activities. As the application of these Machine Learning techniques becomes widespread, digital automation is becoming a commodity, with systems that perform one task at Internet scale with no deep understanding of human context. These systems are trained to complete that “one” job, but there is evidence that their behavior, like HAL’s or a Boeing 737 anti-stall system’s, can turn against their users’ intentions when things do not go as expected.

2. The clumsy edges

Recent visual ethnographies at Near Future Laboratory like #TUXSAX and Curious Rituals uncovered some implications of that commodification of automation. On a completely different scale from the dramatic consequences that brought down Lion Air Flight 610, these observations highlight how some digital solutions leave people feeling “locked in”, with no “escape” key to disengage from a stubborn behavior. The vast majority of these digital frictions provoke harmless micro-frustrations in people’s everyday lives. They manifest themselves through poorly calibrated systems and design that disregards edge cases. For instance, it is common to experience a voice assistant unable to understand a certain accent or pronunciation, or a navigation system that misleads a driver due to location inaccuracies, obsolete road data or incorrect traffic information.

Curious Rituals is a fiction that showcases the gaps and junctures that glossy corporate videos on the “future of technology” do not reveal. Source: Curious Rituals.

These clumsy automations can be mitigated but will not disappear, because it is impossible to design contingency plans for every unexpected limitation or consequence. However, other types of stubborn autonomous behaviors are intentionally designed as the core of business models that trade human control for convenience.
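The missing “escape” key can also be framed as a design requirement. As a minimal sketch (the class, parameter names and threshold below are invented for illustration and bear no relation to real avionics or product code), an automated controller could treat repeated manual overrides as a signal to disengage rather than keep fighting its operator:

```python
class AssistiveController:
    """Toy controller with an explicit path to disengagement.

    Hypothetical illustration only. The design principle: repeated
    manual overrides should disengage the automation instead of
    letting it keep fighting the operator.
    """

    def __init__(self, override_threshold=3):
        self.override_threshold = override_threshold
        self.override_count = 0
        self.engaged = True

    def command(self, automated_action, manual_action):
        """Return the action applied during this control cycle."""
        if not self.engaged:
            return manual_action
        if manual_action is not None and manual_action != automated_action:
            # The operator is contradicting the automation.
            self.override_count += 1
            if self.override_count >= self.override_threshold:
                # The "escape key": hand control back to the human.
                self.engaged = False
                return manual_action
            return automated_action  # a stubborn system would always do this
        self.override_count = 0
        return automated_action


# A stubborn episode: the operator contradicts the automation three times.
ctrl = AssistiveController(override_threshold=3)
applied = [ctrl.command("nose_down", "nose_up") for _ in range(3)]
# applied == ["nose_down", "nose_down", "nose_up"]; ctrl.engaged is now False
```

A real system would weigh many more signals before yielding, but the point stands: stubbornness should be a deliberate, legible choice, not the default.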

3. The business of dishonest automation

Many techniques for automating everyday tasks allow organizations to reduce costs and increase revenues. Some members of the tech industry employ these new technological capabilities to lock customers or workers into behaviors for which they have no legitimate need or desire. Those systems are typically designed to resist their users’ demands AND are hard to disengage. Let me give you a couple of examples of what I call “dishonest automations”:

3.1. Data obesity

Automatic cloud backup systems have become a default feature of operating systems. They externalize the storage of personal photos, emails, contacts and other bits of digital life. Their business model encourages customers to endlessly accumulate more content, without a clear alternative that promotes proper data hygiene (i.e. nobody has yet come up with a “Marie Kondo for Dropbox™”). Regardless of the promises of the providers, it becomes harder for people to extricate their digital lives from a cloud storage service.

Upgrade your storage to continue backing up: an automatic cloud backup system that locks in its user, leaving no alternative to the accumulation of content.

3.2. Systemic obsolescence

Today’s automatic app updates often increase the demands on resources and processing power for cosmetic improvements, almost in a deliberate attempt to make hardware obsolete and software harder to operate. After years of impunity, there is now greater awareness of systemic obsolescence, because it is wasteful and exploits customers.

3.3. Digital attention

As content grows exponentially on the Internet, (social) media companies rely increasingly on automation to filter and direct information to each of their users. For instance, YouTube automatically selects the next video to play for more than 1.5 billion users. These algorithms aim to promote content that maximizes engagement, and tend to guide people against their own interests.
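That divergence between engagement and intent is easy to demonstrate with a toy model. The sketch below is purely illustrative: the videos, watch-time figures and single-metric objective are invented and describe no real recommender, but they show how ranking solely on predicted engagement buries the item a user actually came for:

```python
# Invented example data: (title, predicted_watch_minutes, matches_stated_interest)
videos = [
    ("Outrage compilation",      18.0, False),
    ("Lecture you searched for",  9.0, True),
    ("Autoplay-bait countdown",  14.0, False),
]

def engagement_score(video):
    # An engagement-only objective: rank purely by predicted watch time,
    # ignoring whether the video matches the user's stated interest.
    _title, watch_minutes, _matches_interest = video
    return watch_minutes

ranked = sorted(videos, key=engagement_score, reverse=True)
# The video the user searched for ends up ranked last, even though it is
# the only one that matches their stated interest.
```

A more honest objective would blend engagement with explicit signals of user intent, and make that trade-off legible.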


In light of these examples of clumsy and dishonest automation, what concerns me is that many engineers, data scientists, designers and decision-makers bring these frictions into people’s everyday life because they do not employ approaches to foresee the limits and implications of their work. Beyond the engineering of efficient solutions, automation requires professionals to think about the foundations and consequences of their practice, which transcend any Key Performance Indicator of their organization.

4. The design for humane automation

The design of automation is not about removing the presence of humans. It is about the design of humane, respectful and trustworthy systems that automate some aspects of human activities. When working with data scientists, designers and engineers in that domain, we envision systems beyond the scope of the “user” and the “task” to automate. I encourage teams to a) learn from the past, b) critique the present and c) debate the future. Let me explain:

4.1. Learn from the past

When it comes to automation, knowledge acquired in academia and knowledge acquired in industry are not separate pursuits. Over the last 50 years, research institutions have produced an extensive body of work on the implications of automating manual tasks and decision-making. The key findings have helped save money in critical environments and prevent numerous deadly errors (e.g. in cockpits).

Today, that knowledge is not translated into everyday tasks. For instance, many engineers and data scientists do not master concepts like automation bias (i.e. the propensity for humans to favor suggestions from automated decision-making systems) or automation complacency (i.e. decreased human attention to monitoring automated results), theorized by research communities in Science and Technology Studies or Human-Computer Interaction. Sadly, only a few organizations promote platforms that gather academics, artists, engineers, data scientists and designers. Industries in the process of digitization would greatly profit from this type of cross-pollination among professionals who learn from considerations that have already emerged outside their discipline.

4.2. Critique the present

I believe that the professionals involved in the business of automating human activities should be persistent critical reviewers of the solutions deployed by their peers. They should become stalkers of how people deal today with the clumsy, the dishonest, the annoying, the absurd and any other awkward manifestation of digital technologies in their modern lives.

#TUXSAX is an invitation to engage with these knotty, gnarled edges of technology. It provides some raw food for thought to consider the mundane frictions between people and technologies. Do we want to mitigate, or even eliminate, these frictions? Source: Documenting the State of Contemporary Technology.

When properly documented, these observations offer a form of inspiration complementary to the multitude of naively optimistic and glamorous utopian visions of the tech industry. They provide material for professionals to question arguably biased goals of automation. Moreover, they set the stage to define attainable objectives in their organization (e.g. what does smart/intelligent mean? how do we measure efficiency? what must become legible?).

4.3. Debate the future

On today’s Internet, the design of even the simplest application or connected object has become a complex endeavour. These products are built on balkanized Operating Systems and stacks of numerous protocols, versions, frameworks and other packages of reusable code. The mitigation of digital frictions goes beyond the scope of a “Quality Assurance” team that guarantees the sanity of an application. It is also about documenting implications for the context the technologies live in, unintended consequences and ‘what if’ scenarios.

It’s easy to get all Silicon Valley when drooling over the possibility of a world chock-full of self-driving cars. However, when an idea moves from speculation to designed product, it is necessary to consider the many facets of its existence - the who, what, how, when, why of the self-driving car. To address these questions, we took a sideways glance at it by forcing ourselves to write the quick-start guide for a typical self-driving car. Source: The World of Self-Driving Cars.

Design Fiction is an approach to sparking a conversation and anticipating the larger questions regarding the automation of human activities. For instance, we produced the Quick Start Guide of Amazon Helios: Pilot, a fictional autonomous vehicle. In that project, we identified the key systems that implicate the human aspects of a self-driving car and brought such experiences to life in a very tangible, compelling fashion for designers, engineers, and anyone else involved in the development of automated systems. Through its collective production, the Quick Start Guide became a totem through which anybody could discuss consequences, raise design considerations and shape decision-making.

5. The business of trust

Like many technological evolutions, the automation of everyday life does not come without the frictions of trading control for convenience. However, the consequences are bigger than mitigating edge cases. They reflect human, organizational or societal choices: the choice of deploying systems that mislead about their intentions, in conflict with the interests of people and society.

In his seminal work on Ubiquitous Computing in the 90s, Mark Weiser strongly influenced the current “third wave” in computing, in which technology recedes into the background of people’s lives. Many professionals in the tech industry (including me) embraced his description of Calm Technology that “informs but doesn’t demand our focus or attention.” However, what Weiser and many others (including me) did not anticipate is an industry of dishonest automation, of solutions that turn against their users’ intentions when things do not go as planned. Nor did we truly anticipate the scale at which automation can bite back at the organizations that deploy it, with backlashes from their customers, society as well as policymakers.

An Instagram post shared by Nicolas Nova (@nicolasnova): #curiousrituals #classic #vendingmachine.

These implications suggest an alternative paradigm, one that transcends the purely technological and commercial, for any organization involved in the business of digital automation. For instance, a paradigm that promotes respectful (over efficient), legible (over calm) and honest (over smart) technologies. Those are the types of values that emerge when professionals (e.g. engineers, data scientists, designers, decision-makers, executives) wander outside their practice, apply critical thinking to uncover dishonest behaviors, and use fictions to make decisions that consider implications beyond the scope of the “user” and the “task” to automate.

I believe that the organizations in the business of automation that maintain the status quo and do not evolve into a business of trust might eventually need to deal with a corroded reputation and its effects on their internal values, the morale of their employees, their revenues and, ultimately, their stakeholders’ trust.


The World Of Self-Driving Cars

It’s easy to get all Silicon Valley when drooling over the possibility of a world chock-full of self-driving cars.

The fact of the matter is that a world where self-driving cars are a reality will be as prickly as the world today, only Algorithms will be the source of our frustration rather than other drivers, at least until the underground of self-driving car retrofits, mods and hacks comes along and everything goes amuck despite Google-Apple-Facebook-Amazon’s best efforts to convince us things are better.


It’s easy to speculate breathlessly about the world of the future when the self-driving car is normal, ordinary and everyday. However, when an idea moves from speculation to designed product, the work necessary to bring it into the world means that it is necessary to consider the many facets of its existence — the who, what, how, when, whys of the self-driving car. To address these questions, we took a sideways glance at it by forcing ourselves to write the quick-start guide for a typical self-driving car.

A Quick Start Guide as Design Fiction Archetype

To spark a conversation around the larger questions regarding a technology that could substantially change mobility in the future we followed a Design Fiction approach to produce this Quick Start Guide.

Our Quick Start Guide is a 14-page z-fold document from the near future. It’s a reminder that every great technology needs instructions for the uninitiated. That instruction may be a document, a tutor from a friend, ‘rider’ instructions — something that gives a feel of the things car owners might do first and do often with their first self-driving vehicle. Get your copy.


This Design Fiction archetype is a natural way to focus on the human experiences around complicated systems. It implies a larger ecosystem that may indeed be quite complex. It also allows one to raise a topic of concern without resolving it completely — often a necessary approach in order not to get bogged down in details before it’s necessary. For example, mentioning that it costs more to park your car than to send it back on the roadway as a taxi is a way to open a conversation about such a possibility and its implications for reclaiming space used by parking garages. In the Quick Start Guide, you will find:

  • What do you do if you forget a bag of groceries — or your sleeping child — after sending your self-driving car off for the evening to earn a few shekels in ‘Uber’ mode?
  • Is there a geo-fencing mechanism to control where the car goes — and how fast it goes?
  • How do you activate and lock the “Child Safe Mode” for your teenage son to take to football practice?
  • Does it conform to ISO1851 Codified Child Rearing Mandates and Findings?
  • How does the car pick up groceries — and how do you upload the list — when you send it on errands?
  • Will you pick a car based on the size and features of its Cold-n-Hot Grocery Trunk?
  • What do you do when the display shows “Unknown Profile”?
  • Does your self-driving car obey the most recent DoT Emergency Maneuvers requirements?
  • How do you activate ValetPark®, Amazon PrimeValet®, EverDrive™, RE-FRESH™ and of course the agnostic Interior Ambience by Amazon®?
  • Which countries/protectorates/jurisdictions allow a total car history reset?
  • How do you install the Dynamic Insurance plug-in from your insurance provider’s download site?
  • What supplementary fees does the car charge for using Apple Roadways?
  • And more…


The Design Fiction Workshop

The Quick Start Guide was produced as part of a workshop at IxDA 2015 in collaboration with students from the California College of the Arts and conference participants. In a short amount of time, we identified the key systems that implicate the human aspects of a self-driving car and brought such experiences to life in a very tangible, compelling fashion for designers, engineers, gurus, and anyone else involved in the development of a technology. Through its collective production, the Quick Start Guide became a totem through which we could discuss consequences, raise design considerations and, hopefully, shape decision-making.

The Design Fiction approach led to:

  • Better thinking around new products, a richer story and good, positive, creative work.
  • Identifying topics that may not come up when discussing the larger system.
  • Creating rather than just debating, and representing topics concisely to focus the work and challenge us to describe features succinctly.
  • Experiencing the consequences and implications of a world with self-driving vehicles.


The Assumptions

Visions of exciting future things rarely look at the normal, ordinary, everyday aspects of what life with them will be like: turning the thing on, fixing a data leak, setting a preference, managing subscriber settings, addressing a bandwidth problem, initiating a warranty request for a chipped screen or increasing storage. It’s those everyday experiences — after the gloss of the new purchase has worn off — that tell a rich story about life with a self-driving vehicle. In this project, we made a few category assumptions.

  1. The self-driving vehicle is all about the data. When Amazon, Facebook, Microsoft, Google and/or Apple become part of the vehicle “ecosystem” — either by making cars, having their operating system integral to the car, owning roadways or whatever their strategy teams are dreaming up — they will do it because #data. To them it will be about knowing who is going where, when they’re going there, to get what; it will be about knowing when your tires are wearing down; it will be about giving you discounts when you send your car to get Pizza Hut for dinner; it will be about having your car go through the Amazon Pantry Pickup Warehouse to do your grocery shopping. The data.
  2. As fundamental as mobility is to humans, owning a network of millions of interoperable vehicles is big business. Apple’s vehicle ecosystem will be always moving, as will Google-Uber’s. It will cost you more to park your self-driving car because it can earn money for them (and maybe you) by putting it into “Taxi” or “Uber” mode when it’s dropped you off at work. This led us to consider what one might need to do if one’s car has strangers in it while you’re at work, or the movies, or asleep. And what will happen to all those parking garages?
  3. Roadways will be the new platform play. How will (or will not) roadways that are owned by Amazon, Facebook, Microsoft, Google and/or Apple interoperate? Will Google have the best, fastest, least congested roadways in Los Angeles? What are the consequences of switching to semi-partial manual drive mode? What happens when those Fast and Furious guys figure out how to jailbreak their vehicle’s OS and supercode the engine?

Get Your Quick Start Guide

To experience those assumptions, get your copy of the Quick Start Guide for your first self-driving car.


Acknowledgments

The Quick Start Guide is a Near Future Laboratory project produced during a workshop at IxDA 2015 in collaboration with students from the California College of the Arts and conference participants:

  • Rafi Ajl (CCA)
  • Phil Balagtas (GE Global Research)
  • Sankalp Bhatnagar (Carnegie Mellon University)
  • Julian Bleecker (Near Future Laboratory)
  • Maru Carrion-Lopez (CCA)
  • Wendy Cown (Charles Schwab & Co.)
  • Bill DeRouchey (Aviation GE)
  • Blake Engel (Nextbit)
  • Nick Foster (Near Future Laboratory)
  • Cristina Gaitan (CCA)
  • Susan Hosking (GE Global Research)
  • Shani Jayant (Intel)
  • Flemming Jessen (Designit)
  • Zhouxing Lu (Indiana University Bloomington)
  • Chris Noessel (Cooper)
  • Anna Mansour (Intel)
  • Nicolas Nova (Near Future Laboratory)
  • Angelica Rosenzweigcastillo (GE Global Research)
  • Margaret Shear (Margaret Shear | Experience Design)
  • Liam Woods (CCA)
  • Aijia Yan (Google).

Special thanks to John Sueda, Ben Fullerton and Raphael Grignani.