
Readying your digital workforce for AI? Five pitfalls to avoid

January 1, 2020 / Weston Morris

Short on time? Read the key takeaways:

  • Start with end-users, not data: Start with end-users to understand their needs and wants before working with data. The use of data can lead to hidden biases and privacy concerns among end-users.
  • Select the right "intents": Define the end-user intent for each automation and prioritize based on the frequency of usage and complexity of automation. Start with low-complexity, high-frequency tasks for the biggest impact.
  • Avoid the "creepiness factor": End-users should have confidence in the privacy of their data and the AI system should not appear creepy by accessing and utilizing data in an inappropriate manner.
  • Implement AI with the end in mind: AI should serve as a personal assistant to end-users by anticipating their needs, providing answers in their preferred method, and automating routine tasks to save time and gather intelligence. Avoid typical pitfalls by considering the end-users first.

You’ve been handed the project of your dreams: developing an artificial intelligence (AI) service management solution for your digital workers.

It will be a high-profile project, visible to every employee, from the CEO to the salespeople demoing for big customers to the colleagues you work with daily. They will turn to your system with high expectations when they are held back, uncertain, and in a hurry.

Will they be among the forward-moving fortunate who enjoy the benefits of AI in helping them solve their problems, increase their knowledge and perform at their best? Or will they get left behind because of the pitfalls that plague AI implementations?

Let’s start with the end in mind: What are AI expectations in the digital workplace?

In the fast-changing, high-demand digital world, workers will have questions as they encounter complex technology. AI should be a “personal assistant” for many of those questions. Like a good hotel concierge, this artificially intelligent personal assistant should anticipate their needs, have ready answers and provide them in the questioner’s language and their preferred method.

AI can also take over low-level tasks, allowing knowledge workers to focus their time and energy on higher cognitive pursuits. AI can accelerate and simplify routine tasks for the digital worker looking to rise in the organization and take on more significant responsibilities with more visibility. It can automate support to save time and be delivered 24/7 while gathering intelligence on worker needs and responses.

And there’s another crucial expectation that cannot be overlooked in today’s tight job market for skilled digital workers. Management recognizes that today’s workers demand up-to-date technology and will leave companies that fall behind. But AI implementations have a rocky history, often falling short of expectations, disappointing users and squandering time, budget and competitiveness. To avoid that fate, we have identified five keys to avoid typical pitfalls.

1. Start with your end user, not your data

Ahh, the data — so much, so rich, so revealing. You have probably heard that you need a lot of data to train your AI system properly. True, but just because you have mountains of data at your disposal doesn't mean you are ready to start.

Data can contain hidden biases. It might be outdated or inaccurate. If you let data lead you down its path, you’ll exacerbate those shortcomings.

Instead, start with your end user. Let them tell you what they need and want, and what will best serve their workplace needs and the organization. It's not unusual for AI implementations to blindly follow the data and disappoint the user. Instead of saving time, you end up reworking your AI solution at great cost in time and money.

There’s another reason to set the data aside until you’re confident of the user’s needs. We call it “the creepiness factor.” You’re familiar with it if you’ve ever placed a call to a company and the person you reach knows much more about you than you thought. Or if you ask Alexa a question and suddenly find your newsfeed flooded with ads related to your question. That’s creepy.

In the workplace, workers must be confident that data pertaining to them is treated with an appropriate level of privacy. Careless use of that data can worry them. "If they know that," they might think, "do they have access to my HR info? Or maybe my health insurance records? My salary? Too creepy. I'm not touching that system."

2. Select the right “intents”

The user analysis above will reveal several possible automations to implement. For each automation, you must define the end-user intent that leads to it. And for each intent, you must identify the end-user utterances — all the ways the end user might express that intent.
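As a rough illustration, each intent-to-automation mapping and its utterances can be captured in a simple structure. The Python sketch below uses hypothetical names and is not tied to any particular bot framework:

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """One end-user intent, the automation it triggers, and its sample utterances."""
    name: str
    automation: str                        # identifier of the automation this intent maps to
    utterances: list[str] = field(default_factory=list)

# Hypothetical example: a password-reset intent and a few ways users might phrase it.
password_reset = Intent(
    name="reset_password",
    automation="run_password_reset_flow",
    utterances=[
        "I forgot my password",
        "reset my password please",
        "I'm locked out of my laptop",
    ],
)
```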

What criteria can you use when narrowing down your automations and intents? AI implementations fail when the first intents selected are too complex, and they are judged unsuccessful when the intents they do handle are rarely used.

Before choosing your intents, plot the scope of your possibilities on a simple quadrant like the one below. Assess each intent first on how frequently your users will request it and second on the complexity involved in automating it.

Complexity      | Low Frequency        | High Frequency
High Complexity | App Installation     | Employee Onboarding
Low Complexity  | Business Process FAQ | Password Reset

That will make it clear which automations and intents to develop first.

  • High-complexity/low-frequency tasks? Obviously not. Even if you successfully create the automation/intent, so few people will use it that you will never recoup your development costs.
  • Low-complexity/low-frequency tasks? You may get your automation and intent working quickly, but so few people will use it that it is not worth the effort.
  • Low-complexity/high-frequency tasks? Starting here will have the biggest impact on your end users with the least risk. Once you’ve proven your processes and technology with this group of tasks, you can move on to…
  • High-complexity/high-frequency tasks? The experience you get from successfully building out the low-complexity/high-frequency tasks is vital to overcoming the complexity of these automations.

Choose a limited number of intents that will ease or accelerate the daily tasks of most workers. Score your wins there, show management the ROI, get your workers to trust your system (more about that vital aspect later), capture utterances for intents that your AI doesn’t yet understand and then take your fresh learning with you as you move on up the value and complexity scale.
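If it helps to make that prioritization repeatable, each candidate can be scored on the two axes and sorted into the order of attack described above. A minimal Python sketch, with hypothetical intent names taken from the quadrant:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    frequency: str   # "high" or "low": how often users will ask for it
    complexity: str  # "high" or "low": how hard it is to automate

candidates = [
    Candidate("app_installation",     frequency="low",  complexity="high"),
    Candidate("employee_onboarding",  frequency="high", complexity="high"),
    Candidate("business_process_faq", frequency="low",  complexity="low"),
    Candidate("password_reset",       frequency="high", complexity="low"),
]

# Order of attack: low-complexity/high-frequency first, then high/high once
# your processes are proven; low-frequency work comes last, if at all.
priority = {
    ("low", "high"): 0,
    ("high", "high"): 1,
    ("low", "low"): 2,
    ("high", "low"): 3,
}

for c in sorted(candidates, key=lambda c: priority[(c.complexity, c.frequency)]):
    print(c.name)
# -> password_reset, employee_onboarding, business_process_faq, app_installation
```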

3. Expect complexity and surprises

Above, we mentioned relatively low complexity as a good starting place. But in truth, all AI implementations are complex to start with and usually reveal unexpected complications as they unfold. Expect complexity in these aspects:

  • The intent might involve back-and-forth interactions that demand greater natural language understanding than you originally supposed. You should consider how users might introduce a problem and be ready to respond accordingly.
  • How many native languages do you need to accommodate in your system? In today’s hyper-connected global economy, requiring 12 or more languages is not unusual.
  • And don’t forget compliance issues, which can vary considerably across geographies and demand scrupulous attention.
  • Do you have multi-tenancy requirements that call for segregating domain knowledge across parts of your user base?
  • Every implementation encounters the complexity of integration. What you develop must be integrated into various systems — voice recognition, service management, downstream automation, and security and identity management systems.
  • And finally, channels. Omni-channel is essential to efficiency. Your end users will choose different channels for different purposes and switch back and forth at will. Intelligence needs to follow the user across channels and even anticipate channel transitions.
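To picture that last point, one common approach is to key conversation state by the user rather than by the channel, so a dialogue started in chat can continue by phone without losing its place. A minimal Python sketch with hypothetical names (real virtual-agent platforms each handle this in their own way):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Conversation:
    """Dialogue state kept per user, not per channel."""
    user_id: str
    active_intent: Optional[str] = None
    slots: dict = field(default_factory=dict)   # answers collected so far
    last_channel: Optional[str] = None

conversations: dict[str, Conversation] = {}

def handle_message(user_id: str, channel: str, text: str) -> Conversation:
    """Look up state by user_id, so a switch from chat to voice keeps context."""
    convo = conversations.setdefault(user_id, Conversation(user_id=user_id))
    convo.last_channel = channel
    # ... intent detection and slot filling would update convo here ...
    return convo
```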

4. Bring objectivity to user acceptance testing

“Confirmation bias” is a term most often heard in discussions of politics, but it is equally prevalent in technology testing, where it can create unfortunate surprises when the final version is rolled out. If you use the people who helped write the use cases as your testers, you test yourself into a corner: they find what they predicted.

A writer friend likens this pitfall to proofreading. “People think a writer should be good at catching errors, and I am — if it’s others’ errors. But when I proofread my work, I read what I meant to write.”

For a reliable user acceptance testing (UAT) process for your AI solution, engage testers who had nothing to do with writing the use cases. It may seem counterintuitive, but the organization's naysayers make some of the best testers. Find those who are skeptical about AI and involve them in the testing. They are highly motivated to prove that it doesn't work.

As they find problems and you resolve them, you win twice: once by finding bugs that would otherwise have been missed and again by winning over your skeptics. When the naysayers complete their testing with you, they can become some of your biggest advocates.

5. Expect users to test before they trust

The real test, of course, comes when you release. Most users will first run their own tests with their own data, and many won't be testing your use cases at all.

Even after you are confident of your AI solution’s utility, and even after you’ve perfected its use of natural language, multiple languages, omni-channels and automation, many users will want to see if it’s intelligent in their view. They will query it with their questions. Will it rain tomorrow? Who won the World Series? What’s your favorite pizza topping? How old are you?

If they don’t get good responses, they may conclude that your solution is “dumb,” and they won’t trust it for its intended uses. Keep that in mind if you are inclined to exclude social chat interactions in favor of just those intents that deliver business value.

Over time, you will keep enriching your AI solution, adding more complex intents and expecting users to shift more and more of their support needs accordingly, perhaps ultimately using it as their primary “personal assistant.” That will only happen if they trust it, so don’t jeopardize that opportunity. Include enough general conversation to launch the relationships you hope to build over time.

And be sure to address this in your training. From the outset, set reasonable expectations with users about what your AI solution is supposed to do for them (and what it is not), as well as your plans for future capabilities.

AI has entered an exciting phase

It is already delivering high value for digital workers, and new use cases are emerging as fast as they can be developed. But pitfalls abound. Avoid them, and you’ll find faster success, greater utilization and an ongoing opportunity to expand on your accomplishments.

To explore how Unisys can help ready your digital workforce for AI, please contact us.
