Artificial Intelligence (AI) and machine learning have generated a lot of buzz among vendors. Being cast as the superheroes of technology is great for getting attention. But even Superman and Supergirl had their kryptonite. Could a lack of diversity and inclusiveness in design teams and datasets weaken these two superhero technologies, just as kryptonite weakened our friends from Krypton? Now is the time to shine a spotlight on the problems that arise from this lack of inclusiveness and diversity, to make sure we are not automating existing biases in data or design.
Discrimination and non-inclusiveness in product development can be harmful. When workforces are not diverse and inclusive, problems stemming from various types of bias may occur. For example, women might not get a fair shot at a position because hiring standards have been set to match the pool of traits exhibited by current employees.
Datasets can be at fault as well, especially when populations are skewed by social issues or by the biases of system designers. Take the case of raw data used to predict criminality: because the current justice system is biased against African Americans, who are incarcerated at five times the rate of Caucasians, the dataset will be biased, too.
AI and machine learning require a collaborative, inclusive approach that is ethical and respectful of the values each employee brings to the table. But diversity and inclusiveness are not only about ethnicity, gender, and gender orientation. They are also about a diversity of viewpoints and of ways of examining issues and solving problems.
Lack of team diversity can also hurt productivity. Homogeneous teams may outperform diverse teams initially, but over time the productivity of diverse teams increases, thanks in part to the strength gained from the variety of perspectives brought to the problem-solving process.
For example, a lawyer brings a unique awareness and mindset to problem solving that differs from those of privacy experts, mathematicians, data scientists, and ethicists. These different viewpoints and skillsets create stronger solutions and practices. Furthermore, diverse viewpoints help ensure that the values of fairness, reliability, safety, security, privacy, inclusiveness, transparency, and accountability are built into any data model.
Be aware that just as diversity comes in many forms, so does bias. Companies should work hard to remove biases based on culture, geography, income bracket, educational background, and age, in addition to those already mentioned.
When creating models that detect and respond to cybersecurity issues, the greater the team's diversity, the more resilient to attack and perturbation the models may be. Potentially, these more diverse models will also provide us with a greater variety of insights and tools.
We’re already seeing that diversity in teams creates diversity in AI and machine learning models, which in turn increases the speed and precision of detection. For example, machine learning models in the Microsoft Threat Protection solution detected and blocked Emotet in milliseconds.
Since cybercriminals are varied in background and skillset, there is no one type of cyberattack we can defend against and no single machine learning model to find and stop all cyberattacks. But by working with diverse and inclusive design teams and using diverse, layered machine learning models, we’re increasing our ability to find and stop attacks quickly.
If you want more resilient cybersecurity, looking to a superhero isn’t really an option. Instead, rely on the diversity of the cyberheroes you hire and put the power of inclusivity to work for you.
I encourage you to read the report in this companion book, Microsoft: The Future Computed.