AI Has Become a Design Problem

AI is facing new calls for regulation now that it has emerged from the laboratory and is being deployed more widely across our daily lives. The public does not trust the technology. Nor should they. The problems are numerous: understanding how the models actually work, controlling the data that feeds them, and addressing growing distrust of those who wield this technology. Even companies themselves are unsure how to safely and effectively use AI as part of their business.

The bulk of the discussion is currently centered on engineering. That's understandable: artificial intelligence is a mysterious black box to many people, one assumed to be fixable only through lines of code and better data. But I want to argue that to truly understand and begin to master this technology, we must get better at seeing AI as a component of larger systems and design accordingly. Fundamentally, AI is not just a technology problem; it has become a design problem.

Human-centered design has a vital role across three key areas: design thinking can help companies map their systems to understand how and where AI fits; design is needed to devise better tools to create, monitor, and manage AI; and design must create new interfaces centered around the kind of information that AI delivers to users.

Design has a long-established practice of research and discovery (aka design thinking) that helps teams frame problems effectively, and it can help companies understand what it takes to ensure their AI works as desired. Design teams now routinely create journey maps to show how a customer flows through all of a company's touchpoints, as well as the external ones, and how those collectively shape the customer's experience. Similar methods can be used to map the flow of data, software, and decision-making within a company, covering not only the AI itself but, more holistically, the larger systems that influence it. This exercise can help companies begin to understand what drives an AI to perform well, or poorly. It's complex, to be sure, but in short: AI is not an island. Any first step toward ensuring ethical AI requires understanding all of the systems impacting it.
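To make that concrete, here is a minimal sketch, in TypeScript, of what encoding such a system map might look like. The node and flow shapes are hypothetical illustrations, not a prescribed schema:

```typescript
// Hypothetical schema for mapping the systems that feed and surround an AI.
// Names here (SystemNode, DataFlow, etc.) are illustrative, not a standard.

type NodeKind = "data-source" | "service" | "model" | "human-decision";

interface SystemNode {
  id: string;
  kind: NodeKind;
  owner: string; // the team accountable for this component
}

interface DataFlow {
  from: string; // id of the upstream node
  to: string;   // id of the downstream node
  description: string;
}

const nodes: SystemNode[] = [
  { id: "crm", kind: "data-source", owner: "Sales Ops" },
  { id: "churn-model", kind: "model", owner: "Data Science" },
  { id: "retention-review", kind: "human-decision", owner: "Customer Success" },
];

const flows: DataFlow[] = [
  { from: "crm", to: "churn-model", description: "Customer history used for training and scoring" },
  { from: "churn-model", to: "retention-review", description: "Risk scores reviewed before outreach" },
];

// Walking the map upstream from the model surfaces every system that can
// influence its behavior: "AI is not an island" in practice.
const upstreamOf = (id: string): SystemNode[] =>
  flows
    .filter((f) => f.to === id)
    .map((f) => nodes.find((n) => n.id === f.from))
    .filter((n): n is SystemNode => n !== undefined);

console.log(upstreamOf("churn-model")); // -> [{ id: "crm", ... }]
```

Even a toy map like this makes ownership and influence explicit, which is precisely what the journey-mapping exercise is meant to surface.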

Right now, creating an AI system that contributes usefully to any given business is still the primary struggle. The data may be too raw, suspect, or shallow. The models may be unproven. And integrating the AI into the rest of the business engine is difficult. Because of this, there is often not enough attention placed on higher-order goals: efficiency, accuracy, and business value. It's a very Wild West attitude: move fast and break things, and figure it out as we go.

Much of this attitude can be attributed to the early stage at which AI systems are created. The process is still very engineering-driven and, even then, requires a highly customized approach to each problem. In such situations, engineering teams tend to measure success by whether they can get the system to work, not by how well it fits its purpose.

Because of this, it is imperative to move the act of making things “up the stack.” That means creating tools that make the development of AI systems less of a raw engineering chore and more of a creative and operational task for the business itself. This is where design is critical. Tools must be designed to demystify the data, objects, and processes that make up AI so that subject-matter experts focused on business outcomes can participate in authoring these systems.
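As one illustration of what moving authorship up the stack might look like, imagine a declarative specification that a subject-matter expert could edit without touching model code. The field names below are hypothetical assumptions, sketched only to show the idea:

```typescript
// Hypothetical declarative spec a subject-matter expert might author,
// leaving the underlying engineering (training, serving) to tooling.

interface ReviewPolicy {
  escalateBelowConfidence: number; // route low-confidence outputs to a human
  blockedTopics: string[];         // domain knowledge the expert owns
}

interface AssistantSpec {
  name: string;
  objective: string;    // stated in business terms, not model metrics
  dataSources: string[];
  review: ReviewPolicy;
}

const supportAssistant: AssistantSpec = {
  name: "billing-support-assistant",
  objective: "Resolve routine billing questions; escalate disputes",
  dataSources: ["billing-faq", "plan-catalog"],
  review: {
    escalateBelowConfidence: 0.7,
    blockedTopics: ["legal advice", "refund promises"],
  },
};
```

The design choice here is that everything in the spec is expressed in the business's own vocabulary; the engineering complexity lives behind it, in the tooling.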

There are many analogies to draw from. Desktop publishing moved graphic design from a draftsman-and-camera-room specialty to a simple desktop tool anyone could use. The result was an explosion of contributors and a dramatic improvement in the quality of design overall. In software, approachable languages like HTML and JavaScript have moved application and website development into the hands of people with intent and ideas rather than solely engineering skills, freeing their time and attention to focus on the quality of the work.

All the best data, model, and development practices in the world cannot fully guarantee perfectly behaved AI. In the end, good user interface design has to present AI appropriately to end users. An effective user interface can, for instance, tell the user the provenance of the AI's insights, recommendations, and decisions. This gives the user agency in making sense of what the AI has to offer.
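One way to carry that provenance, sketched here under assumed names rather than any established standard, is to attach it to every AI-produced result so the interface always has something to show the user:

```typescript
// Hypothetical shape for an AI result that carries its own provenance,
// so the interface can show the user where an answer came from.

interface Provenance {
  sources: string[];      // documents or records the answer drew on
  modelVersion: string;   // which model produced it
  generatedAt: string;    // ISO timestamp
}

interface AIResult<T> {
  value: T;
  confidence: number;     // 0..1, as reported by the system
  provenance: Provenance;
}

const answer: AIResult<string> = {
  value: "Your plan renews on the 1st of each month.",
  confidence: 0.92,
  provenance: {
    sources: ["billing-faq#renewals"],
    modelVersion: "support-model-v3",
    generatedAt: "2024-03-20T10:15:00Z",
  },
};
```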

UI design also needs to evolve its art of presenting information. Historically, UIs presented data as matter of fact: common lists of data were not suspect; they simply reflected what was stored. But increasingly, presentations of data are sourced, culled, and shaped by AI, and therefore carry with them the suspect nature of the AI's curation. UI design must introduce new mechanisms that allow users to inspect data provenance and reasoning, and visual cues that convey data confidence and bias to the user.
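In the same illustrative spirit, a rendering layer might translate a reported confidence score into an explicit visual cue rather than presenting AI output as plain fact. The thresholds and labels below are arbitrary assumptions, not recommended values:

```typescript
// Map a confidence score to a visual cue instead of presenting AI output
// as plain fact. Thresholds and labels are illustrative only.

type Cue = "solid" | "tentative" | "flagged";

function confidenceCue(confidence: number): Cue {
  if (confidence >= 0.9) return "solid";     // shown normally
  if (confidence >= 0.6) return "tentative"; // shown with a caution badge
  return "flagged";                          // shown with an explicit warning
}

function renderWithCue(text: string, confidence: number): string {
  const labels: Record<Cue, string> = {
    solid: "",
    tentative: " ⚠ low confidence",
    flagged: " ⚠⚠ verify before use",
  };
  return text + labels[confidenceCue(confidence)];
}

console.log(renderWithCue("Your plan renews on the 1st.", 0.72));
// -> "Your plan renews on the 1st. ⚠ low confidence"
```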

As we navigate the intricacies of a technology already integrated into many of our systems, we must design these systems responsibly, mindful of transparency, privacy, and fairness. Design can frame AI-driven experiences for end users in a manner that engenders trust and helps them understand the scope, strengths, and weaknesses of a given system. In turn, the fear and mistrust that surround these mysterious black boxes are alleviated.

Trust is where the story ends — or begins. Better systems, tools, and interfaces will lead to AI that performs as designed and can be trusted. Because trust will be the final measure of effective and responsible AI systems.

Mark Rolston is Founder and Chief Creative Officer of argodesign, a global product design consultancy. He was previously Chief Creative Officer of frogdesign and has worked with such companies as Disney, Magic Leap, Dreamworks, Salesforce, GE, Microsoft, and AT&T. He currently serves as advisor to the Responsible AI Institute (RAI), working to define responsible AI with practical tools and expert guidance.
