Gregory Falco, an assistant research professor in the Department of Civil and Systems Engineering and a member of the Institute for Assured Autonomy, is part of a team recently awarded a $250,000 grant from the National Institute of Standards and Technology (NIST) to develop a taxonomy of artificial intelligence (AI) risks that can be used to classify what can go wrong with AI-enabled systems.

Along with colleagues at Stanford University, Falco will build a public dashboard, called Accidents.AI, to capture publicly documented AI accidents, whether autonomous vehicle mishaps or algorithmic biases with harmful consequences, in order to foster transparency about AI risks and their implications.

“As engineers build and society engages with AI-enabled systems, accidents are inevitable,” said Falco, a cyber civil engineer who designs, builds, and investigates critical infrastructure’s digital layer. He will become an assistant professor in 2021. “As with any emerging technology, AI won’t always work as intended. This could be a function of an adversary compromising the system, an unfortunate bug in the program, or a misfire stemming from the data set the AI was trained on.”

He notes that though society has a vested interest in understanding the risks of engaging with AI systems, the absence of a shared vernacular for describing those risks limits public awareness and visibility around them.

“So as part of our project, our team will develop educational material to have constructive conversations about AI risks with various stakeholder groups, especially with marginalized communities,” Falco said.