Researchers work to better understand human-machine relationships

As technology becomes more advanced, concerns about artificial intelligence creep into people's minds regardless of their depth of technical understanding. Some worry that AI might one day replace humans, while others argue that certain human skills can never be automated.

In a paper published in the journal IEEE Access, Army researchers suggest that both views stem from the misguided perception of AI as mere tools.

Researchers from the U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory, developed a theoretical construct for human relationships with artificial intelligence and other smart technologies that proposes a new vision for more compatible team-like partnerships.

“We believe that there’s a systematic way of thinking about this class of technology that’s way oversimplified,” said Dr. Jason Metcalfe, Army research kinesiologist. “The prevailing mindset essentially boils down to an ‘either-or’ mentality: Either a human will do the job or an AI will do the job. We believe that this way of thinking is fundamentally flawed.”

According to Metcalfe, artificial intelligence possesses capabilities that exceed the limitations of humans, but taking humans out of the equation leaves AI systems vulnerable to unpredictable events that only humans can deftly navigate.

In other words, biological intelligence and computational intelligence work in fundamentally different ways that counter each other’s flaws, which makes them compatible not in a user-tool dynamic but in a more advanced and dynamic teammate-to-teammate relationship.

When humans and machines work together in this way, their combined efforts can lead to emergent intelligence, a unique source of creativity that cannot exist without the active participation of both parties.

“When the chess-playing computer, Deep Blue, beat World Chess Champion Garry Kasparov in 1997, everybody looked at that as the peak of progress,” said Dr. Brandon Perelman, Army research psychologist. “So when Kasparov came back with his own computer that he trained to beat Deep Blue, it gave birth to a whole line of competition called Centaur Chess. This kind of partnership opens potential performance gains that you could only get through synergistic work between humans and AI.”

Army researchers said they believe the treatment of AI as teammates in this manner will help accelerate the development of smart technologies that go beyond the service of a singular purpose and instead work as part of a constantly changing ecosystem of humans and machines.

To describe the different types of human-AI relationships within this ecosystem, the research team created a three-dimensional construct that maps out the relative capabilities of biological and computational intelligence, which in turn can specify the dynamic roles and functional responsibilities of members in a complex team.

Known as the landscape of human-AI partnership, the model charts capabilities along two axes, available time and information certainty, as well as a novel third axis that denotes the complexity of the problem.

“The idea of capability as a key variable to compare humans and AI is a pretty unifying notion,” Metcalfe said. “Two critical factors that show up broadly in the science on this are the time available to execute a response and the level of certainty in the information about the task. With all that, a key element of our argument is that these discussions almost always neglect task complexity as an important factor.”

In the context of the landscape, simple problems have easily identifiable and well-defined solutions, whereas complex problems feature much more ambiguous task requirements that neither humans nor AI can address robustly on their own.

Metcalfe explained that the inclusion of complexity as the third dimension in this model allows the construct to more accurately represent the broad spectrum of human and AI capabilities that scientists and engineers should consider.
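The three-axis landscape described above can be pictured as a mapping from a scenario's position (available time, information certainty, task complexity) to a suggested division of labor. The sketch below is purely illustrative; the class and function names, the [0, 1] axis scaling, and the threshold values are assumptions for demonstration, not values or logic taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """A point in the landscape; each axis is scaled to [0, 1] here for illustration."""
    available_time: float         # 0 = split-second response needed, 1 = ample time
    information_certainty: float  # 0 = highly ambiguous, 1 = well defined
    task_complexity: float        # 0 = simple problem, 1 = complex problem

def suggest_allocation(s: Scenario) -> str:
    """Map a point in the landscape to a hypothetical role split.

    The 0.3 and 0.7 thresholds are arbitrary placeholders chosen to show the idea.
    """
    if s.task_complexity < 0.3 and (s.available_time < 0.3 or s.information_certainty < 0.3):
        # Simple but time-pressured or uncertain problems: machine speed dominates.
        return "machine-led"
    if s.task_complexity > 0.7:
        # Complex, ambiguous problems favor human general intelligence.
        return "human-led"
    # Elsewhere in the landscape, the argument favors synergistic teamwork.
    return "human-AI team"

# A fast, simple, low-certainty scenario lands in machine-led territory.
print(suggest_allocation(Scenario(available_time=0.1, information_certainty=0.2, task_complexity=0.1)))
```

The point of the third axis is visible in the branching: with only time and certainty, the first rule would claim the whole space; complexity is what carves out the region where humans, or joint teams, should lead.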

“The landscape is an abstract space that contains all of these possible types of relationships between humans and intelligent technologies,” Metcalfe said. “It’s meant to begin a discussion about how to formalize the phrase, ‘The whole is always greater than the sum of its individual parts,’ in a way that’s specific to the capabilities brought about by the complex group of humans and intelligent systems that are available at that moment and in that context.”

With this theoretical construct as a reference, researchers can track how different smart technologies cover different regions in the landscape of human-AI partnership depending on the scenario.

This knowledge can then guide the design of control frameworks and learning algorithms that drive future devices to better secure their human teammates’ blind spots, as well as to facilitate human support for the devices in a complementary manner.

“For problems that aren’t that hard to solve, especially in cases where you don’t have enough certainty or time, a machine can perform the work way faster than a human, hands down,” Perelman said. “We recognize that there’s a whole class of problems that require more general intelligence than machines typically have. By having synergy and breaking from that ‘either-or’ mentality, we can pursue a sort of robustness that we’re not necessarily going to get by allocating our functions solely to a human or to an AI.”