
How To Earn And Keep The Trust Of Business Stakeholders In AI System Efforts

January 26, 2021 / Suzanne Taylor

Stories in the press discuss the need to ensure that AI does no harm by introducing or perpetuating bias, creating security or privacy risk, or otherwise adversely affecting people, property, or the environment. Harvard Business Review recently published “A Practical Guide to Building Ethical AI,” noting the Microsoft Office of Responsible AI guidelines.

But there is a different, fundamental trust you must establish when rolling out AI systems: confidence among business stakeholders that you will deliver the promised results of your AI projects. You need to prove that an AI-based solution is the right approach to the problem, and you need to set realistic expectations about what it takes to deliver an AI project. This helps you avoid overpromising, which matters because public examples of AI system failure are plentiful.

In one recent example, an AI-controlled camera at a soccer match mistook a linesman's bald head for the ball. These kinds of public failures, not all of them equally harmless, contribute to AI skepticism.

Most software applications are predictable and well understood. But AI-based systems are different — they're not deterministic. AI systems — particularly data-driven algorithms in the categories of machine learning, deep learning and natural language processing — are highly dependent on the quality and quantity of their data. Because these systems learn, they need time to improve and may take months to reach acceptable performance. That's why you need to set clear expectations and build confidence with your business stakeholders.

Pick The Right Business Problem

Understand your customer's pain points. Then, be realistic about whether the problem is well suited to a data-driven approach. If it can be solved with traditional programming or a few heuristics, AI may not be the answer.

AI doesn’t mean complete automation or removal of the human from the process. Although we strive for hyper-automation, it makes sense for a human to be in the loop in some cases.

The current environment has stretched many healthcare workers thin. Front-line healthcare workers often have to make fast decisions about hospital admissions and bed allocation. A hospital would never leave these tasks exclusively to a robot. But if an AI or machine learning system could assess available data and make recommendations, a clinician could look at those suggestions and decide what makes sense.

Don’t Oversell It

Be upfront with business stakeholders about what it’s going to take to build your AI system. Let them know what your AI model can do and what it cannot do.

Understand the cost of an incorrect prediction or recommendation. You may need to weight your system's decisions in one direction if a wrong recommendation costs far more than a right one gains.

For retail recommendation systems, the cost of making a wrong recommendation is not high. The system might recommend shoes that the customer doesn’t like, which does not create major problems. But in other scenarios, the cost of a wrong recommendation can be significant. For example, it could cause a help desk to send an inquiry to the wrong person, lengthening time to resolution due to rerouting and increasing company costs in the process. The repercussions of incorrect recommendations in medical diagnosis could be even worse.
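One way to make this cost asymmetry concrete is to shift the decision threshold on the model's predicted probability. The sketch below uses the standard expected-cost rule for a binary decision; the specific cost values are hypothetical illustrations, not figures from any real retail or help-desk system.

```python
# Illustrative sketch: choosing a decision threshold when the two
# kinds of error carry different business costs. Acting is optimal
# when p(correct) > C_fp / (C_fp + C_fn), where C_fp is the cost of
# acting wrongly and C_fn the cost of failing to act.

def pick_threshold(cost_false_positive: float, cost_false_negative: float) -> float:
    """Expected-cost-minimizing probability threshold for acting."""
    return cost_false_positive / (cost_false_positive + cost_false_negative)

# Retail recommendation: a bad suggestion and a missed one cost
# about the same, so act on any better-than-even prediction.
print(pick_threshold(1, 1))    # 0.5

# Help-desk routing (hypothetical costs): misrouting a ticket is 4x
# worse than escalating it to a human, so demand higher confidence.
print(pick_threshold(4, 1))    # 0.8
```

The same rule explains the article's examples: when errors are cheap, a permissive threshold maximizes engagement; when rerouting or misdiagnosis is expensive, the system should act only on high-confidence predictions.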

The AI doesn’t have to know everything. You can build in an “I don’t know” response with an appropriate action.
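A minimal sketch of that "I don't know" fallback: abstain and hand off to a human whenever the model's top confidence falls below a threshold. The label names, the escalation action and the 0.75 cutoff are illustrative assumptions.

```python
# Abstention sketch: act on the model's top prediction only when its
# confidence clears a threshold; otherwise return an explicit
# "I don't know" action (here, escalation to a human).

def decide(probabilities: dict[str, float], threshold: float = 0.75) -> str:
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return "escalate_to_human"   # the built-in "I don't know" response
    return label

print(decide({"network": 0.91, "hardware": 0.09}))  # network
print(decide({"network": 0.55, "hardware": 0.45}))  # escalate_to_human
```

Pairing an abstention path with a defined human workflow keeps low-confidence cases from silently becoming wrong recommendations.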

Anticipate Questions

Your clients and users may have questions about how the underlying algorithms work. Be prepared to answer them.

Explain why the AI is a good fit for the problem, how your models are trained and validated, and the models’ strengths and weaknesses. Teams that can articulate these technical topics with empathy and respect, and build shared understanding with their users and decision-makers, are much more successful in creating trust.

Start Small

To gain trust with your business stakeholders, use your AI system to do something small at first.

Fail quickly, and then adjust and prove how your AI system can deliver the desired business outcome.

Prove out your AI system with test data. Or introduce your AI solution into production in a quiet period. This provides an opportunity to see how your AI system behaves. Build in checks and balances to watch for changes, and make needed adjustments.

Build Trust One Day At A Time

The types of automation enterprises have implemented in the past are generally well understood. They involve clear processes. They don’t go out of control unless there is human error — such as a programming issue. And you can test them in a very reliable way.

But when you introduce AI, that all changes. Now you’re automating decision-making, pattern recognition, predictions and recommendations. If you’re going to act on those things, you need to have confidence that these predictions and recommendations are accurate and reliable.

In my experience, that confidence does not happen on day one.

Have experts look at the system’s outputs to decide whether they make sense. Compare the results to how humans handled the problem by, for example, using information from historic incident tickets. Involve both developers and downstream users in your validation.

Be prepared to accept some level of error. After all, humans make errors, too; people just tend to be more tolerant of those mistakes. However, be sure to prevent anything catastrophic. And have a continuous learning approach so that the system gets better and better.

Stay Vigilant To Retain Trust

Continue to watch the behavior of AI systems, which can change over time. Monitor the business outcomes expected from the AI systems, such as a reduction in the number of incident tickets or in infrastructure costs, to see whether they are trending in the expected direction.

Design self-monitoring capabilities to detect if you need to reexamine the model or data. Know that model drift is not uncommon because data and the environment change over time.
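One common self-monitoring check compares the distribution of live inputs against the training baseline. The sketch below uses the Population Stability Index (PSI); the four-bin histograms and the 0.2 alert level are widely used rules of thumb, offered here as assumptions rather than a prescription.

```python
# Drift-detection sketch: Population Stability Index (PSI) over
# matching histogram bins. Inputs are bin fractions summing to 1.
# PSI above ~0.2 is a common signal of significant distribution shift.
import math

def psi(baseline: list[float], live: list[float]) -> float:
    eps = 1e-6  # guard against log(0) on empty bins
    return sum(
        (l - b) * math.log((l + eps) / (b + eps))
        for b, l in zip(baseline, live)
    )

baseline = [0.25, 0.25, 0.25, 0.25]   # training-time feature histogram
live     = [0.10, 0.20, 0.30, 0.40]   # same feature, observed in production

if psi(baseline, live) > 0.2:
    print("drift alert: re-examine the model and its input data")
```

Wiring a check like this into scheduled monitoring gives the "reexamine the model or data" trigger a concrete, automatable definition.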

With this earnest and deliberate approach to AI, you will win the trust of your business stakeholders and use intelligent technology to deliver valuable business outcomes.
