Blame it on HAL 9000, Clippy’s constant cheerful interruptions, or any navigation system that has led delivery drivers to dead-end destinations. In the workplace, people and robots don’t always get along.
But as more artificial intelligence systems and robots aid human workers, building trust between them is key to getting the job done. One University of Georgia professor is seeking to bridge that gap with assistance from the U.S. military.
Aaron Schecter, an assistant professor in the Terry College’s department of management information systems, received two grants – worth nearly $2 million – from the U.S. Army to study the interplay between human and robot teams. While AI in the home can help order groceries, AI on the battlefield offers a much riskier set of circumstances — team cohesion and trust can be a matter of life and death.
“In the field for the Army, they want to have a robot or AI not controlled by a human that is performing a function that will offload some burden from humans,” Schecter said. “There’s obviously a desire to have people not react poorly to that.”
While visions of military robots can dive into “Terminator” territory, Schecter explained that most bots and systems in development are meant to carry heavy loads or provide advanced scouting: a walking platform hauling ammunition and water, for example, so soldiers aren’t burdened with 80 pounds of gear.
“Or imagine a drone that isn’t remote-controlled,” he said. “It’s flying above you like a pet bird, surveilling in front of you and providing voice feedback like, ‘I recommend taking this route.’”
But those bots are trustworthy only if they don’t get soldiers shot or lead them into danger.
“We don’t want people to hate the robot, resent it, or ignore it,” Schecter said. “You have to be willing to trust it in life and death situations for them to be effective. So, how do we make people trust robots? How do we get people to trust AI?”
Rick Watson, Regents Professor and J. Rex Fuqua Distinguished Chair for Internet Strategy, is a co-author on some of Schecter’s research on AI teams. He thinks studying how machines and humans work together will become more important as AI develops more fully.
Understanding limitations
“I think we’re going to see a lot of new applications for AI, and we’re going to need to know when it works well,” Watson said. “We can avoid the situations where it poses a danger to humans, or where it’s difficult to justify a decision because the AI system is a black box and we don’t know how it arrived at its suggestion. We have to understand its limitations.”
Understanding when AI systems and robots work well has driven Schecter to take what he knows about human teams and apply it to human-robot team dynamics.
“My research is less concerned with the design and the elements of how the robot works; it’s more the psychological side of it,” Schecter said. “When are we likely to trust something? What are the mechanisms that induce trust? How do we make them cooperate? If the robot screws up, can you forgive it?”
Schecter first gathered information about when people are more likely to take a robot’s advice. Then, in a set of projects funded by the Army Research Office, he analyzed how humans took advice from machines and compared that with how they took advice from other people.
Relying on algorithms
In one project, Schecter’s team presented test subjects with a planning task, like drawing the shortest route between two points on a map. He found people were more likely to trust advice from an algorithm than from another human. In another, his team found evidence that humans might rely on algorithms for other tasks, like word association or brainstorming.
“We’re looking at the ways an algorithm or AI can influence a human’s decision making,” he said. “We’re testing a bunch of different types of tasks and finding out when people rely most on algorithms. … We haven’t found anything too surprising. When people are doing something more analytical, they trust a computer more. Interestingly, that pattern might extend to other activities.”
In a different study focused on how robots and humans interact, Schecter’s team introduced more than 300 subjects to VERO, a fake AI assistant in the shape of an anthropomorphic spring. “If you remember Clippy (Microsoft’s animated help bot), this is like Clippy on steroids,” he said.
During the experiments on Zoom, three-person teams performed team-building tasks such as finding the maximum number of uses for a paper clip or listing items needed for survival on a desert island. Then VERO showed up.
Looking for a good collaboration
“It’s this avatar floating up and down — it had coils that looked like a spring and would stretch out and contract when it wanted to talk,” Schecter said. “It says, ‘Hi, my name is VERO. I can help you with a variety of different things. I have natural voice processing capabilities.’”
But VERO was actually operated by a research assistant using a voice modulator. Sometimes VERO offered helpful suggestions, like different uses for the paper clip; other times it acted as a moderator, chiming in with a ‘nice job, guys!’ or encouraging quieter teammates to contribute ideas.
“People really hated that condition,” Schecter said, noting that less than 10% of participants caught on to the ruse. “They were like, ‘Stupid VERO!’ They were so mean to it.”
Schecter’s goal wasn’t just to torment subjects. Researchers recorded every conversation, facial expression, gesture, and survey answer about the experience to look for “patterns that tell us how to make a good collaboration,” he said.
An initial paper on AI-human and human teams was published in Nature’s Scientific Reports in April, but Schecter has several more under consideration and in the works for the coming year.
Reference: “Humans rely more on algorithms than social influence as a task becomes more difficult” by Eric Bogert, Aaron Schecter and Richard T. Watson, 13 April 2021, Scientific Reports. DOI: 10.1038/s41598-021-87480-9