Fostering Smart Software: Deep Design for Deep Learning

How exposing mistakes can improve machine learning.

As software programs become more like the human mind, they acquire the human mind’s fallibility. Like us, they must learn from mistakes, which they can do only if we teach them. This interdependence is changing our relationship with software and prompting a conversation with these systems that will develop its own rules and etiquette. The way we collectively decide to communicate with our software will define the tone of our daily lives as synthetic intelligence permeates our world.

A common feature of the recent breakthroughs in deep learning is that they still come with a rate of error. The errors are benign in some domains (speech recognition, captioning a photo) but potentially catastrophic in others (self-driving cars, medicine). What is common about these errors, though, is that they invite us to ask why they happened. We expect an explanation from the underlying system, not so we can fix the problem like a programmer, but so we can reassure ourselves that the mistakes are somehow reasonable. If the mistakes are reasonable, they can be corrected with additional inputs.

This mindset is rare when people approach software today. When a desktop application crashes, we do not ask why. We assume that somewhere in a dull, poorly lit, meager little cubbyhole, some careless coder did something wrong. We do not care what it was, because it was arcane and—teleologically speaking—meaningless. The most we feel is a surge of anger at the company that let one of its errant employees ruin our afternoon. The flaw is in our fellow human.

In contrast, users of speech recognition on desktops are accustomed to inspecting errors via the process of training. You say, “correct ‘the cart was right,’” and a list of other possible matches is displayed. The user can see that the system was considering other possibilities. Often, the user sees the right phrase in the list and simply says “choose three” to change the phrase to “Descartes was right.” The user has both gained confidence in the system’s intelligence and offered input to make the system better.
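
A minimal sketch of that correction loop, assuming a hypothetical recognizer that exposes an n-best list of hypotheses; the function, scores, and phrases below are illustrative, not any particular speech API:

```python
# Illustrative n-best correction loop: the system reveals its alternative
# hypotheses, the user picks the right one, and the choice is kept as a
# training signal. The recognizer output below is a hypothetical stand-in.

def correct_phrase(nbest, heard):
    """Show the recognizer's alternatives and let the user choose one."""
    print(f'I thought you said: "{heard}"')
    for i, (phrase, score) in enumerate(nbest, start=1):
        print(f"  {i}. {phrase}  (confidence {score:.2f})")
    choice = int(input("choose: "))        # the spoken "choose three" becomes 3
    return nbest[choice - 1][0]

# Hypothetical recognizer output for the utterance in the example above.
nbest = [
    ("the cart was right", 0.41),
    ("the card was right", 0.32),
    ("Descartes was right", 0.27),
]

corrected = correct_phrase(nbest, heard=nbest[0][0])
feedback = {"heard": nbest[0][0], "corrected": corrected}   # goes back into training
```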

This mode of interaction is best understood as a conversation. The system says, “I thought you said . . .” and the user replies, “No, actually . . .” This is the aspect missing from many of the new systems making their way into our lives today. How many of us still see items we purchased months ago advertised back to us from our browsers? For me, these are bird-watching binoculars I bought for my wife. How often does our newly sentient economy expect us to buy binoculars? Imagine the accuracy boost that would occur—in both deep and shallow channels—if ads had “like” and “dislike” buttons. By opening a conversation with the recommendation engine, humans would graduate from victim to participant in advertising, while the deep-learning backend would learn more deeply. (Google appears to be experimenting with a similar feedback mechanism, but it is not yet pervasive.)
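
To make that feedback loop concrete, here is a hedged sketch of what a "like"/"dislike" channel might record; the class and field names are invented for illustration and do not correspond to any real ad platform's API:

```python
# Illustrative sketch: explicit "like"/"dislike" clicks become labeled examples
# that a recommendation model can learn from, rather than leaving the model to
# guess from purchases alone. All names here are invented for illustration.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FeedbackEvent:
    user_id: str
    item_id: str
    label: int                  # +1 for "like", -1 for "dislike"

@dataclass
class FeedbackLog:
    events: List[FeedbackEvent] = field(default_factory=list)

    def record(self, user_id: str, item_id: str, liked: bool) -> None:
        self.events.append(FeedbackEvent(user_id, item_id, 1 if liked else -1))

    def training_examples(self) -> List[Tuple[str, str, int]]:
        # Each explicit signal becomes a (user, item, label) training triple.
        return [(e.user_id, e.item_id, e.label) for e in self.events]

log = FeedbackLog()
log.record("user-42", "binoculars-sku-123", liked=False)   # "stop showing me these"
print(log.training_examples())
```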

The more appealing vessel for such functionality is a personal “agent,” whose interactions and knowledge we control. But the first step—simply opening up a conversation between user and system—is what we need right now to make the experience better. It is the model we should adopt as the sophistication of software grows.

Our current habit is to design only the surface of an experience, striving to make it as simple as possible, which can shut the user out. I am reminded of an experience with a smartwatch that presented a “service not available” message when the watch could not communicate with the phone. It turned out that the web service was not actually down, but since the device was not working or giving me any useful feedback, I was forced to discover by trial and error how close the devices needed to be, whether to switch from Bluetooth to cellular, and so on, until the deep learning in my brain modified my behavior to accommodate the watch’s limitations. How much easier this would have been if my watch had simply told me what was wrong—like a mature companion—rather than making me guess.

“Deep design” encompasses the full product, from its surface down into its core capabilities. Rather than hiding errors, we should design learning systems that reveal details and ask the user for guidance. “I thought you liked scotch,” the personal assistant of tomorrow might say. “Yes, but not for breakfast!” we will answer.
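
One rough sketch of that exchange, assuming a hypothetical assistant that keeps a confidence score for each preference it has learned: when a score falls below a threshold, the system states the inference and asks before acting on it.

```python
# Illustrative confirmation loop: instead of silently acting on a shaky
# inference, the system reveals it and asks. The belief store, scores, and
# threshold are hypothetical.

beliefs = {("prefers", "scotch"): 0.62}    # learned preference with a confidence score
CONFIRM_BELOW = 0.80                       # below this, ask the user instead of assuming

def suggest(item: str) -> str:
    confidence = beliefs.get(("prefers", item), 0.0)
    if confidence < CONFIRM_BELOW:
        answer = input(f"I thought you liked {item}. Shall I include it? (yes/no) ")
        beliefs[("prefers", item)] = 0.95 if answer.strip().lower() == "yes" else 0.10
    return "included" if beliefs[("prefers", item)] > 0.5 else "left out"

print(suggest("scotch"))
```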

Of course, training a neural network is not the only form deep design can take. A recent project at frog considered how to show users the possible futures of a complex dataset. Rather than show the user a best guess or a set of likely futures, we decided that the more effective approach—both for visualization and for calculation—was to show all possibilities as probability distributions. This greatly simplified the visual elements that needed to be displayed, while also increasing the information conveyed. We invited the user to view all of the bad guesses alongside the good ones. We were revealing the errors, too, and inviting the user to interpret them, enabling many new interactions, such as checking that a scenario was possible and refining the probabilities via human input.
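
A toy illustration of that approach, with an invented random-walk model standing in for the project's actual data: simulate many possible futures and report the spread of outcomes rather than a single best guess.

```python
# Illustrative sketch: run many simulations of a toy process and present the
# distribution of outcomes (percentile bands), bad guesses and good ones alike,
# instead of one "best guess" trajectory. The model and numbers are stand-ins.

import random
import statistics

def simulate_future(start: float, steps: int) -> float:
    value = start
    for _ in range(steps):
        value *= 1.0 + random.gauss(0.0, 0.02)   # small random change each step
    return value

outcomes = sorted(simulate_future(100.0, steps=30) for _ in range(5000))

def percentile(sorted_values, p):
    return sorted_values[int(p / 100 * (len(sorted_values) - 1))]

# The user sees the whole distribution, including the tails.
print("5th percentile: ", round(percentile(outcomes, 5), 1))
print("median:         ", round(percentile(outcomes, 50), 1))
print("95th percentile:", round(percentile(outcomes, 95), 1))
print("spread (stdev): ", round(statistics.stdev(outcomes), 1))
```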

As research into the human brain continues to reveal the fallibility of our own thinking, leading in some circles to a belief that knowledge is only a momentary viewpoint within “a state space of possible world views” and even to suggestions that human choice should join the four humors as an antiquated notion, we may find ourselves much more tolerant of fallible machines. In the 1920s, automobile fatalities were anything but tolerated; 60% of fatalities were children, often playing outside their homes. Drivers in all kinds of accidents were charged with manslaughter and paraded through the streets in “safety parades.” We will no doubt face a similarly strident response to self-driving cars in the near future, but just as traffic lights, crosswalks, and the notion of the “accident” helped society balance risk and reward in the automobile era, so will we balance risk and reward as we integrate machine intelligence into all aspects of our lives.

We will succeed by understanding that our machines’ errors make sense in some way, which we can do only by entering into a conversation with them. Via deep learning, some systems have already begun the conversation. They are the first generation of a new order of things being born everywhere from the walls of our houses to the clothes on our bodies. For our own sake, let’s raise them well.

Author
Sheldon Pacotti
Senior Solutions Architect

Sheldon is a Senior Solutions Architect at frog in Austin. Having studied math and English at MIT and Harvard, he enjoys cross-disciplinary creative projects. He builds award-winning software, writes futurist fiction, creates software architectures for businesses, and writes about technology.
