AI Research Tackles Decision-Making Transparency

As artificial intelligence (AI) becomes increasingly commonplace, demand for AI tools that make fewer mistakes and explain their reasoning more transparently has created a critical need for new research.

Bolstered by a $7.55 million grant from the Defense Advanced Research Projects Agency (DARPA), Kate Saenko, assistant professor of computer science and core faculty member of the AI Research (AIR) Initiative at BU, and UC Berkeley faculty member Trevor Darrell are working to uncover new ways to understand the decision-making processes of AI tools. As part of the grant, Saenko’s BU lab has received $800,000 to develop a translation tool for AI decision-making.

Saenko is working to address two issues that currently create a trust gap between AI tools and their human users. “If an AI tool makes mistakes, human users quickly learn to discount it, and eventually stop using it altogether,” she says. “I think that humans by nature are not likely to just accept things that a machine tells them.”

Additionally, as AI tools become more sophisticated and powerful, the algorithms they rely on become extremely complex and often incomprehensible to human users. Beyond improving human trust in AI tools, this research will also provide critical feedback on AI reasoning, improving accuracy and allowing human users to make small adjustments during complex reasoning.

Saenko’s research has largely focused on using deep neural networks to develop AI systems that learn through an iterative, data-driven approach. “It’s very hard for us to anticipate all possible ways a dog might look in any image anywhere in the world, for example,” says Saenko. “If you have enough processing power and data, a better approach would be to show a computer a million pictures of dogs and let it define them itself.”
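To make that contrast concrete, here is a minimal sketch of the data-driven approach in PyTorch: instead of hand-coding rules for what a dog looks like, the code shows a network labeled example images and lets it adjust its own parameters. The dataset folder, model choice, and training settings are illustrative assumptions, not details of Saenko’s systems.

```python
# A minimal sketch of data-driven image classification: show the network
# many labeled examples and let it learn the concept itself. The folder
# "dog_photos/" (with subfolders dog/ and not_dog/) is a hypothetical
# dataset layout used only for illustration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard preprocessing: resize images and convert them to tensors.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("dog_photos", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# An off-the-shelf convolutional network with a two-class output head.
model = models.resnet18(num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# The iterative part: repeatedly show the network batches of examples
# and nudge its weights toward fewer mistakes.
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point of the sketch is that nowhere does a programmer describe a dog; the definition emerges from the examples, which is also why the resulting model’s reasoning is hard for a human to inspect.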

Since requiring a deep neural network to explain itself would almost certainly reduce its performance, Saenko and Darrell are working with Zeynep Akata, a colleague at the University of Amsterdam in the Netherlands, and Kitware, an open-source software company, to address the issue. Their goal is to create a translation tool that operates alongside the AI tool, interprets its decisions, and explains them to a human user in real time.
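As a rough illustration of what such a companion tool might do, the sketch below wraps a trained image classifier and renders one decision in human terms: the predicted label, the model’s confidence, and the image region that most influenced the result, via a simple gradient-saliency heuristic. This is an assumed stand-in for the idea, not the team’s actual method or Kitware’s software.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# A pretrained classifier standing in for the "AI tool" being explained.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
class_names = weights.meta["categories"]

def explain(image: torch.Tensor) -> str:
    """Describe the model's decision on one preprocessed (3, 224, 224) image."""
    x = image.unsqueeze(0).requires_grad_(True)
    probs = F.softmax(model(x), dim=1)
    conf, idx = probs.max(dim=1)
    # Gradient saliency: how strongly does each pixel affect the winning score?
    conf.backward()
    saliency = x.grad.abs().max(dim=1).values[0]  # shape (224, 224)
    h, w = saliency.shape
    quadrants = {
        "top-left": saliency[: h // 2, : w // 2].mean(),
        "top-right": saliency[: h // 2, w // 2 :].mean(),
        "bottom-left": saliency[h // 2 :, : w // 2].mean(),
        "bottom-right": saliency[h // 2 :, w // 2 :].mean(),
    }
    region = max(quadrants, key=lambda k: quadrants[k].item())
    return (f"Predicted '{class_names[idx.item()]}' with {conf.item():.0%} "
            f"confidence, relying most on the {region} of the image.")
```

In use, a routine like this hypothetical explain() would run after every decision, giving the human collaborator a running account of what the system concluded and which part of the input drove it.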

“In the future, we’re going to be using AI as a collaboration between humans and computers. We need to be able to communicate with it, understand its strengths, and know what it’s good at, so it can help us with things we’re not so good at—like sorting through a petabyte of video to identify content,” Saenko says. “I see this as creating superhumans. It’s a collaboration between humans and AI.”

[Read the full BU Research article]
