Minimizing Automation Bias in Machine Learning
Microsoft's Diana Kelley Says Diversity Is a Key Component for Resilient ML Models

Developing robust and resilient machine learning models requires diversity in the teams working on the models as well as in the datasets used to train the models, says Diana Kelley of Microsoft.
“If you don’t understand the datasets that you are using properly, there’s a potential to automate bias,” she says.
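As a rough illustration of the kind of dataset scrutiny Kelley describes, the sketch below checks how training labels are distributed across a sensitive attribute before any model is trained. The column names, data and threshold are hypothetical, not drawn from the interview or from any Microsoft tooling.

```python
# Illustrative sketch only: the columns ("gender", "hired"), the toy data and
# the 20% threshold are assumptions for demonstration, not Kelley's method.
import pandas as pd

# A toy training set for a hiring model.
df = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "hired":  [0,   0,   1,   1,   1,   0,   1,   1],
})

# Positive-label rate per group: a large gap here means a model trained on
# these labels can learn, and then automate, the historical bias they encode.
rates = df.groupby("gender")["hired"].mean()
print(rates)

gap = rates.max() - rates.min()
if gap > 0.2:  # arbitrary threshold, for illustration only
    print(f"Warning: positive-label rate differs by {gap:.0%} across groups")
```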
In a video interview with Information Security Media Group at the recent RSA Conference 2019 in Singapore, Kelley discusses:
- The APAC security landscape;
- Automation bias in AI & ML;
- The need for more diversity in ML teams and datasets.
Kelley is the cybersecurity field chief technology officer for Microsoft and a cybersecurity architect, executive adviser and author. She leverages her more than 25 years of cyber risk and security experience to provide advice and guidance to CSOs, CIOs and CISOs at some of the world’s largest companies. Previously, she was the global executive security adviser at IBM.