Short on time? Explore the key takeaways:
- Increasing diversity, equity and inclusion (DEI) in the workplace is a priority for many organizations, and technology can help achieve this goal.
- Technology has already had a positive impact on DEI by facilitating remote work, breaking down barriers and leveling the playing field through assistive technology.
- While technology can help remove human bias, organizations must be cautious as it can also perpetuate bias based on the data fed to the algorithm.
- AI can surface an organization's biases, but relying on it entirely removes human engagement and judgment from decisions. Organizations must therefore review AI data sets for bias and monitor outcomes.
Creating more inclusive and equitable workplaces is a top priority for many organizations. But beyond public statements and HR initiatives, organizations are backing up their policies with technology to achieve diversity, equity and inclusion (DEI) goals.
A key aspect of increasing DEI is understanding, recognizing and neutralizing biases. Whether unconscious or conscious, biases in the workplace can impact DEI efforts related to recruiting, resume screening, employee engagement, retention, upskilling and building a supportive work culture in a hybrid world. Organizations, therefore, need to expand how they’re deploying technology to support inclusion efforts while acknowledging potential biases within and outside of the technology, as explored in this article.
How technology has boosted DEI in the workplace
Technology has already had a positive impact on DEI in the workplace. The pandemic propelled the use of technology to facilitate remote work, allowing individuals to access opportunities that may previously have been out of reach. Team members living in different locations and time zones can now collaborate more easily, breaking down barriers and fostering inclusivity. The technology that enables remote and hybrid work also gives organizations access to talent in geographies they could not previously reach, and to talent that was harder to engage, such as working parents.
Assistive technology has also leveled the playing field, giving differently abled people, such as the visually impaired, equal access through screen readers, spoken language interfaces, specialized keyboards and hand-held navigation software.
The rise of remote work and assistive technology has helped create equitable work experiences, especially for people who have a harder time getting to an office; these team members can now work from wherever they need to. Some software systems are designed with users with hearing or visual impairments in mind, offering adjustable font sizes, color selections and text-to-speech narration. Enterprise collaboration tools such as Microsoft Teams and Zoom have also incorporated speech-to-text translation and transcription capabilities, which can enhance communication for people with auditory disabilities and among colleagues who do not share a native language.
These technologies have made it easier to build diverse teams that include differently abled people and people with different native languages. For some, these capabilities are simply convenient. For others, they are what enables their work, creating a more equitable work environment.
Can technology compensate for human bias?
While some technology has been beneficial, organizations must be mindful of how technology is deployed. People are prone to biases, and technology has been touted as an antidote to human bias. Biases in the workplace can lead to unequal treatment of employees, fueling a toxic work environment with decreased morale and productivity.
For example, biases can affect hiring and promotion decisions, leading to a lack of diversity in the workplace. While technology can help remove human bias, we must be cautious. At a minimum, bias in technology can create undesired outcomes. At the extreme, outcomes could be deemed unethical or even illegal as new regulations emerge, such as the European Union's Artificial Intelligence Act, which will impose transparency requirements on AI decision-making.
So, what technology use cases specifically target DEI and combat bias? In the application space, some platforms use software algorithms and machine learning to help organizations identify, measure, track and address critical issues that affect employee populations, such as pay equity and recruiting practices that uncover and retain diverse talent. Some can detect gender and racial pay inequities across a workforce, while others can track, diagnose and recommend specific plans for improving a company's DEI objectives. AI underpins many solutions that claim to help remove bias from pay and other crucial decisions.
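As an illustrative sketch, not any specific vendor's product, a simple pay equity check might compare median pay across demographic groups within the same role. The employee records, group labels and function names below are all hypothetical:

```python
from statistics import median

# Hypothetical sample of employee records (role, group, salary).
employees = [
    {"role": "analyst", "group": "A", "salary": 70000},
    {"role": "analyst", "group": "A", "salary": 72000},
    {"role": "analyst", "group": "B", "salary": 64000},
    {"role": "analyst", "group": "B", "salary": 66000},
    {"role": "engineer", "group": "A", "salary": 95000},
    {"role": "engineer", "group": "B", "salary": 94000},
]

def pay_gap_by_role(records):
    """For each role, return the ratio of each group's median salary
    to the highest group median for that role (1.0 means parity)."""
    gaps = {}
    for role in {r["role"] for r in records}:
        by_group = {}
        for r in records:
            if r["role"] == role:
                by_group.setdefault(r["group"], []).append(r["salary"])
        medians = {g: median(s) for g, s in by_group.items()}
        top = max(medians.values())
        gaps[role] = {g: m / top for g, m in medians.items()}
    return gaps

print(pay_gap_by_role(employees))
```

Comparing within roles matters: a raw company-wide average can mask or exaggerate gaps that only appear once job level is held constant. A production tool would also control for tenure, location and other legitimate pay factors.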
Acknowledging bias in artificial intelligence
AI has helped reduce the threat of unconscious bias when applied to HR and recruitment use cases. For example, companies have been deploying software to evaluate and write job descriptions, ensuring they use inclusive language that appeals to a diverse candidate pool. Once applicants submit resumes, AI can scan them to review candidates, speeding up the interview process and helping the strongest candidates get seen.
From a DEI perspective, this technology can help keep unconscious bias from influencing the early stages of the hiring process. Data-informed decision-making enables DEI professionals to analyze representation and pay equity data and to ensure succession planning considers all talent without bias.
But AI data sets can learn human biases and have inherent faults. If the data fed to the algorithm is biased, the output will also be biased. Organizations must carefully review the algorithm's input data and monitor outcomes. In a well-known example, Amazon learned that its recruiting algorithm for software developers was heavily biased against women, which was unsurprising since the training data reflected the composition of its predominantly male workforce. AI data sets should be reviewed for bias at the outset and frequently throughout the process.
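One common statistical screen for this kind of review is to compare selection rates across groups. Under the "four-fifths rule" heuristic used in U.S. employment-selection guidelines, a group whose selection rate falls below 80% of the highest group's rate is commonly treated as evidence of adverse impact. A minimal sketch, using hypothetical screening outcomes and function names:

```python
def selection_rates(outcomes):
    """Compute the selection rate per group from (group, selected) pairs."""
    counts = {}
    for group, selected in outcomes:
        total, chosen = counts.get(group, (0, 0))
        counts[group] = (total + 1, chosen + (1 if selected else 0))
    return {g: chosen / total for g, (total, chosen) in counts.items()}

def adverse_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate.
    Ratios below 0.8 are commonly flagged under the four-fifths rule."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical resume-screening outcomes: (group, was_selected).
outcomes = [("A", True)] * 40 + [("A", False)] * 60 \
         + [("B", True)] * 20 + [("B", False)] * 80

ratios = adverse_impact_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, flagged)
```

Here group B is selected at half the rate of group A, so it falls well below the 0.8 threshold. A check like this applies equally to the training data going in and the model's decisions coming out.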
Careful, your bias is showing
The inherent bias in AI data sets can cut both ways. One benefit of AI in HR is that it can highlight an organization's bias before poor decisions are made, if the organization is open to seeing these patterns. If a data set unintentionally promotes one gender within a specific ethnic group as the best candidate for a position, AI can reveal a tendency the organization didn't recognize existed, allowing it to be addressed and corrected.
Conversely, if an organization has biases that lead to discriminatory practices or processes, intentional or not, AI will simply reinforce what is already in the system. Another downside of relying on technology entirely is that it removes the human engagement and judgment that sound decisions require.
Having a diverse group in the room when these discussions occur is also important, as different people’s experiences will help them notice and alert the rest of the team to bias in data sets.
How generative AI fits in
With the emergence of generative AI, a class of machine learning models that generates new content based on input data, more opportunities open for AI to influence the experience of employees and prospective employees. In the future, generative AI could be used to create tailored interview questions, conduct an interview or help with performance reviews.
Unfortunately, these systems are far from trustworthy at this point. Numerous examples show bias, intentional or not, creeping into the output of generative AI applications, whether because of the nature of the prompt, the corpus of large open-source information used for training, or both. For example, demonstrations of ChatGPT producing false and unethical output, including fabricated resumes and fake news, trended on social media in early 2023.
How to get started applying technology to DEI
Technology can help organizations advance DEI efforts, creating more inclusive teams while increasing equity in the market. When deploying technology involving information about people, attributes and characteristics that define specific populations, keep these tips in mind to ensure your use of technology will have positive benefits — not unintended negative results:
- Be aware of your own inherent biases
- Use technology as a tool to help diversify your team
- When evaluating, configuring and using technology, look for biases
- Have a diverse group of people in the room to gather different perspectives when reviewing technology
- Be intentional, test and monitor, especially after deployment, because things can drift and change
- Consider the impact on DEI when building technology, including the composition of the teams building and testing it, your user base and the outcomes you want to achieve
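The "test and monitor after deployment" tip can be made concrete with a simple drift check: record a baseline for a monitored metric at launch, then flag any window where it strays too far. The baseline value, window data and tolerance below are all hypothetical:

```python
def detect_drift(baseline_rate, recent_outcomes, tolerance=0.1):
    """Flag drift when a monitored rate (e.g., a group's selection
    rate) moves more than `tolerance` away from its baseline."""
    if not recent_outcomes:
        return False, 0.0
    recent_rate = sum(recent_outcomes) / len(recent_outcomes)
    return abs(recent_rate - baseline_rate) > tolerance, recent_rate

# Hypothetical: a selection rate of 0.35 was established at launch,
# but a recent window of 1/0 outcomes shows the rate falling sharply.
drifted, rate = detect_drift(0.35, [1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
print(drifted, rate)
```

Running a check like this on a schedule, rather than only at deployment, is what catches the gradual drift the list above warns about; real monitoring would also use larger windows and a statistically grounded threshold.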
As technology changes, we must discuss how these emerging capabilities and powers will affect people. This ongoing conversation can increase awareness and ethical use of these technologies.
For more information on how to use technology to increase diversity and support your team members, contact Unisys today.