Teaching AI human uncertainty

Tuesday, 29 August, 2023


Researchers are attempting to bridge the gap between human behaviour and machine learning by incorporating uncertainty into these systems.

‘Human-in-the-loop’ machine learning systems — a type of AI system that enables human feedback — are often framed as a promising way to reduce risks in settings where automated models cannot be relied upon to make decisions alone. But what if the humans are unsure?

Human error and uncertainty are concepts that many artificial intelligence systems fail to grasp, particularly in systems where a human provides feedback to a machine learning model. Many of these systems are programmed to assume that humans are always certain and correct, but real-world decision-making includes occasional mistakes and uncertainty.

Researchers from the University of Cambridge, along with The Alan Turing Institute, Princeton and Google DeepMind, are examining ways that uncertainty can be more fully accounted for in AI applications where humans and machines are working together. This could help reduce risk and improve trust and reliability of these applications, especially where safety is critical, such as medical diagnosis.

How was the study conducted?

The team adapted a well-known image classification dataset so that humans could provide feedback and indicate their level of uncertainty when labelling a particular image. The researchers found that training with uncertain labels can improve how these systems handle uncertain feedback, although introducing human feedback also lowered the overall performance of these hybrid systems.
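The idea of training with uncertain labels can be illustrated with 'soft' targets: instead of a one-hot label that treats the annotator as fully certain, their confidence is spread across classes and the model is scored against that distribution. A minimal sketch (the class names, probabilities and loss here are illustrative assumptions, not details from the study):

```python
import numpy as np

def cross_entropy(target, predicted, eps=1e-12):
    """Cross-entropy between a target label distribution and model probabilities."""
    predicted = np.clip(predicted, eps, 1.0)
    return float(-np.sum(target * np.log(predicted)))

hard_label = np.array([1.0, 0.0])   # annotator treated as 100% certain of class 0
soft_label = np.array([0.7, 0.3])   # annotator only 70% sure (e.g. red vs orange)

confident_pred = np.array([0.99, 0.01])
hedged_pred = np.array([0.7, 0.3])

# Against the hard label, the over-confident prediction is rewarded...
assert cross_entropy(hard_label, confident_pred) < cross_entropy(hard_label, hedged_pred)

# ...but against the soft label, the hedged prediction scores better,
# so training no longer pushes the model toward spurious certainty.
assert cross_entropy(soft_label, hedged_pred) < cross_entropy(soft_label, confident_pred)
```

The switch from hard to soft targets changes which predictions the loss rewards: a model that mirrors the annotator's hedging is now preferred over one that feigns certainty.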

“Uncertainty is central in how humans reason about the world, but many AI models fail to take this into account,” said Katherine Collins, first author from Cambridge’s Department of Engineering.

“A lot of developers are working to address model uncertainty, but less work has been done on addressing uncertainty from the person’s point of view.

“Many human-AI systems assume that humans are always certain of their decisions, which isn’t how humans work — we all make mistakes,” Collins said.

“We wanted to look at what happens when people express uncertainty, which is especially important in safety-critical settings, like a clinician working with a medical AI system.”

“We need better tools to recalibrate these models, so that the people working with them are empowered to say when they’re uncertain,” co-author Matthew Barker said.

“Although machines can be trained with complete confidence, humans often can’t provide this, and machine learning models struggle with that uncertainty.”

Machine learning datasets

For their study, the researchers used three benchmark machine learning datasets: one for digit classification, another for classifying chest X-rays and one for classifying images of birds. For the first two datasets, the researchers simulated uncertainty, but for the bird dataset, they had human participants indicate how certain they were about the labels they assigned: whether a bird was red or orange, for example. These annotated ‘soft labels’ provided by the human participants allowed the researchers to determine how the final output changed. However, they found that performance degraded rapidly when machine-generated labels were replaced with human ones.
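One simple way to turn an annotator's confidence rating into a soft label is to give their chosen class the stated confidence and spread the remainder uniformly over the other options. This is a hypothetical scheme for illustration; the study's actual elicitation and weighting may differ:

```python
def to_soft_label(choice, confidence, classes):
    """Build a label distribution from a single choice plus a confidence in [0, 1].
    The residual (1 - confidence) is split evenly among the remaining classes --
    one simple assumption, not necessarily the scheme used in the study."""
    residual = (1.0 - confidence) / (len(classes) - 1)
    return {c: (confidence if c == choice else residual) for c in classes}

# An annotator who is 70% sure the bird is red:
label = to_soft_label("red", 0.7, ["red", "orange", "yellow"])
# "red" keeps 0.7; "orange" and "yellow" share the remaining 0.3
```

The resulting dictionary sums to 1 and can be fed to a loss such as cross-entropy in place of a one-hot label.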

“We know from decades of behavioural research that humans are almost never 100% certain, but it’s a challenge to incorporate this into machine learning,” Barker said.

“We’re trying to bridge the two fields, so that machine learning can start to deal with human uncertainty where humans are part of the system.”

The researchers said their results identified several open challenges in incorporating humans into machine learning models. They are releasing their datasets so that further research can be carried out and uncertainty can be built into machine learning systems.

“As some of our colleagues so brilliantly put it, uncertainty is a form of transparency, and that’s hugely important,” Collins said.

“We need to figure out when we can trust a model and when to trust a human, and why. In certain applications, we’re looking at a probability over possibilities. Especially with the rise of chatbots for example, we need models that better incorporate the language of possibility, which may lead to a more natural, safe experience.”

“In some ways, this work raised more questions than it answered,” Barker said.

“But even though humans may be miscalibrated in their uncertainty, we can improve the trustworthiness and reliability of these human-in-the-loop systems by accounting for human behaviour.”


