Author: Mike Field
Greyscale headshot of Cathy Petrozzino

How fast is too fast? Cathy Petrozzino, ’80, (Mathematical Sciences) believes that in the realm of AI and large language model chatbots, this is a question we need to be asking. As a principal in cybersecurity, privacy, and AI ethics at the MITRE Corporation, a public interest nonprofit in Bedford, Massachusetts, that operates federally funded R&D centers on behalf of the government, she is paying close attention to the speed and ubiquity with which these tools are being adopted and employed. In particular, Petrozzino doesn’t believe an organization’s good intentions alone are sufficient to identify and manage ethical risk in AI.

Petrozzino sees the integrity and reliability of the data used in AI as a defining limitation, describing the challenge as a form of “ethical debt.” She borrows from the concept of “technical debt,” which likens releasing code that has not been fully tested and corrected to taking on debt. Shipping code with technical debt speeds development and can be a good thing—so long as the debt is “paid back” promptly to ensure the integrity of the code base.

“Ethical debt is the same kind of concept,” she explains. “It’s when you’re more interested in putting out a solution quickly, which is often the business driver—to be first to market. But you haven’t yet looked at your solution from an ethical perspective and you end up having a situation where there are biases or other problems within the data or AI solution that haven’t been addressed. And so, there’s ethical debt.”

But there’s a crucial difference. Technical debt is owed by developers to the customers who bought the product. Ethical debt falls not on customers but on individuals who have no idea they are being treated unfairly, because an algorithm invisible to them was used and the humans accountable for developing, deploying, and sustaining that algorithm did not adequately address ethical risk. “Often, the unfortunate truth is that it’s the people—particularly individuals or groups who have been historically marginalized—who pay the price as a result.”

Petrozzino describes herself as optimistic about finding ways to overcome ethical debt. “I look at this as a classic risk management problem. There are much more risk identification and management activities, and more thought about how we manage the risk today than in the past, thanks partially to ChatGPT. So I am encouraged by that.”