In early 2023, Microsoft faced public relations challenges. The tech giant, which had recently invested billions of dollars in OpenAI, the company behind ChatGPT, was eager to showcase its progress in artificial intelligence (AI). The launch of the AI-powered Bing chatbot was intended to establish Microsoft as a leader in integrating AI into its core products. However, what started as a bold move soon spiraled into controversy.
A New York Times journalist’s troubling experience with Bing exposed significant issues, revealing the AI’s tendency to use inappropriate language and make dangerous statements. The backlash was swift and severe, prompting Microsoft to implement immediate changes to limit the chatbot’s capabilities. Later that year, the company rebranded Bing’s AI as Copilot, a tool integrated into Microsoft 365 and Windows that aims to provide a more controlled and sophisticated AI experience.
This incident was not an isolated one; AI tools have come under scrutiny across the industry. Google’s Bard, now known as Gemini, stumbled during a public demo, answering a question incorrectly and triggering a dramatic drop in the company’s market value. Later, Gemini was criticized for alleged bias, particularly for its reluctance to produce images of white people in response to certain prompts.
Despite these setbacks, Microsoft is committed to leveraging AI for positive change. The company believes that, with the right approach, AI can significantly advance diversity and inclusion (D&I) efforts. A key strategy for addressing AI’s inherent biases is to increase the diversity of the teams developing the technology.
Microsoft’s Chief Diversity Officer, Lindsay-Rae McIntyre, is at the forefront of this initiative. With a rich background in human resources and a passion for inclusive technology, McIntyre is focused on integrating D&I principles into Microsoft’s AI development process. Her goal is to ensure that AI systems are built by diverse teams that better represent and serve the global user base.
The imperative for diverse AI teams
Microsoft’s approach to AI is shaped by the recognition that diverse development teams are critical to building technology that is equitable and representative. The company’s efforts in this area are driven by the understanding that biases in AI systems often reflect biases present in their training data. By diversifying the teams that build these systems, Microsoft aims to reduce these biases and increase the inclusiveness of its AI products.
McIntyre emphasizes that including diverse perspectives is not just a checkbox but a fundamental aspect of responsible AI development. “This is more important than ever as we think about building inclusive AI and inclusive technology for the future,” she says. That means incorporating inclusive practices at every level of the company, from research and development to deployment and feedback.
Microsoft’s commitment to inclusive AI
Microsoft’s commitment to diversity is not new. The company has a history of incorporating inclusivity into its technology, from the Xbox Adaptive Controller designed for users with disabilities to accessibility features in Microsoft 365. However, the rapid evolution of AI technology presents new challenges and opportunities.
In its 2023 Diversity Report, Microsoft reported that approximately 54.8% of its core workforce is made up of racial and ethnic minorities, in line with industry standards. However, the company recognizes that continued effort is needed to improve diversity in tech. As AI becomes an integral part of Microsoft’s business, maintaining a diverse and inclusive workforce is essential to staying at the forefront of innovation and ethical technology use.
Addressing bias in AI systems
One of the key challenges in AI development is bias in training data. Large language models like ChatGPT and Copilot are trained on vast datasets scraped from the internet, which may contain biased or prejudicial content. If not carefully managed, AI systems can inadvertently perpetuate these biases.
Microsoft is tackling this issue through a combination of rigorous research and practical interventions. The company invests in studying and mitigating fairness-related harms in AI systems and seeks feedback from a range of experts, including anthropologists, linguists, and social scientists. These efforts are part of Microsoft’s Responsible AI Standard, which guides the development and deployment of its AI technologies.
Cultural context and global inclusivity
Microsoft’s approach to AI inclusivity goes beyond bias mitigation to include cultural relevance and accessibility. AI systems must meet the needs of a global audience with diverse languages and cultural contexts. McIntyre explains that making AI accessible in multiple languages is a priority, with language productivity and authentic self-expression among the most important areas of focus.