Research Suggests Soldiers, AI Are Trusting One Another

Army researchers recently completed a simulation study in which crew members and artificially intelligent agents demonstrated trust and cohesion while working together.

Researchers from the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory and U.S. Military Academy cadets conducted the study as part of an academic capstone project. It also supports the Army Wingman Joint Capabilities Technology Demonstration and the Army’s Next Generation Combat Vehicle modernization priority.

“The project was a simulation study of a manned-unmanned vehicle gunnery team that was conducted to assess potential metrics of team trust and cohesion for evaluating future human-autonomy teams,” said Dr. Kristin Schaefer-Lay, an Army researcher. “Subjective, behavioral, performance, communication and physiological data were collected to identify possible team trust and team cohesion metrics.”

Researchers from the Human-Autonomy Teaming Essential Research Program collaborated with Drs. Ericka Rovira and Robert Thomson and two cadets to define and develop metrics for assessing trust and cohesion in human-autonomy teams.

“An experiment was developed with cadet support to use a simulation for training and assessing these human-autonomy teams,” said Ralph Brewer, Army computer scientist and retired Army master sergeant. “The cadets used the Wingman simulation testbed, which allows a human crew to interact with the actual robotic vehicle autonomy on a realistic gunnery task. This software-in-the-loop simulation environment integrates all the real-world vehicle autonomous mobility and lethality into a lab-based virtual setting.”

The cadets supported study design and ran the trials with participants from the academy’s behavioral sciences and leadership courses. They collected informed consent, briefed the participants, collected questionnaire data and timed the events.

“The simulation allowed the three-person manned crew to couple with a weaponized unmanned ground vehicle to conduct a live-fire gunnery evaluation similar to fully manned crews using remote weapon stations,” Brewer said. “The cadets filled the roles of mobility and lethality operator with me as vehicle commander. The cadets were trained on the user interface, vehicle autonomy and the conduct of fire for this vehicle and weapon station. They picked up the basics of maneuver and execution of gunnery skills with minimal training time.”

Thomson, a cyber and cognitive science fellow at the Army Cyber Institute and associate professor of engineering psychology in the USMA Department of Behavioral Sciences, highlighted that the project involved a real simulator and an actual live-fire exercise, well beyond the computer-screen tasks traditionally seen in cognitive psychology and human factors settings.

“This collaboration with the laboratory provides cadets unparalleled experience with real Army research priorities while honing their skills running behavioral studies beyond the classroom setting,” Thomson said. “The cadets will take this experience with them through their careers.”

The performance scores were based on how effectively the crews engaged and destroyed targets: they had to match speed with precision when engaging enemy targets, researchers said.
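The article does not specify how the scores were computed, but the idea of trading speed against precision can be illustrated with a toy scoring function. Everything here (the `Engagement` record, the time-decay credit, the 60-second window) is a hypothetical sketch, not the study's actual scoring rubric:

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    hit: bool               # did the crew destroy the target?
    time_to_kill_s: float   # seconds from target exposure to destruction

def gunnery_score(engagements, max_time_s=60.0):
    """Toy speed-plus-precision score: each hit earns credit that
    decays linearly with time to kill; misses earn nothing.
    Returns a 0-100 score averaged over all engagements."""
    total = 0.0
    for e in engagements:
        if e.hit:
            total += max(0.0, 1.0 - e.time_to_kill_s / max_time_s)
    return 100.0 * total / len(engagements)
```

Under a scheme like this, a fast hit scores near full credit, a slow hit scores little, and a miss scores nothing, which captures the "speed with precision" trade-off described above.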

“When understanding team trust and cohesion among human-autonomy teams, performance scores are only one part of the equation,” Brewer said. “Cadets were recorded visually, audibly and physiologically. Physiological monitoring used a wrist-worn monitor to gather heart rate and electrodermal activity.”
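Wrist-sensor streams like those described above are typically reduced to per-trial summary features before analysis. A minimal sketch, assuming simple lists of samples; the feature names and the fixed 0.05-microsiemens threshold are illustrative choices, not the study's actual pipeline:

```python
def summarize_physiology(heart_rate_bpm, eda_microsiemens):
    """Reduce wrist-sensor streams to per-trial features: mean and max
    heart rate, plus a crude count of electrodermal responses, here
    defined as sample-to-sample rises above a fixed threshold."""
    n = len(heart_rate_bpm)
    mean_hr = sum(heart_rate_bpm) / n
    max_hr = max(heart_rate_bpm)
    # Count upward EDA excursions of more than 0.05 microsiemens between
    # consecutive samples (a stand-in for proper skin-conductance-response
    # detection, which would use filtering and peak analysis).
    scr_count = sum(
        1 for a, b in zip(eda_microsiemens, eda_microsiemens[1:])
        if b - a > 0.05
    )
    return {"mean_hr": mean_hr, "max_hr": max_hr, "scr_count": scr_count}
```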

A critical achievement of this approach is its multi-method outcome, Schaefer-Lay said. It advances the science of how behavioral and physiological data provide useful windows into the trust and cohesion exhibited by crew members in human-autonomy teams, above and beyond traditional self-report methods, and it yields valuable understanding of team effectiveness.

Trust in human-autonomy teams is primarily measured with subjective scales, researchers said, ranging from a single item (how much do you trust this system or technology?) to scales validated across different constructs.

“While there is still a lot of value to this method, it does not explain the whole trust picture, especially when you extend beyond a dyadic partnership to a larger team that includes intelligent agents or technologies,” Schaefer-Lay said. “Therefore, this work builds on the developing science related to trust and cohesion in human-autonomy teams by taking a multi-method approach to trust-based assessment.”
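One way to picture the multi-method approach described above is as a weighted combination of the separate measurement channels. This sketch is purely illustrative: the component names, the assumption that each channel is pre-normalized to 0..1, and the weights are all hypothetical, not the researchers' actual model:

```python
def composite_trust(subjective, behavioral, physiological,
                    weights=(0.5, 0.3, 0.2)):
    """Toy multi-method trust estimate: a weighted average of three
    measurement channels, each assumed already normalized to 0..1.
    The weights here are arbitrary placeholders."""
    components = (subjective, behavioral, physiological)
    return sum(w * c for w, c in zip(weights, components))
```

In practice the channels would first be validated against one another; the point of the sketch is only that no single channel (e.g., a self-report questionnaire) tells the whole trust story.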

After the experiment concluded, the cadets analyzed the data and wrote their final report. They collaborated with laboratory researchers on a conference paper that recently won best paper in the Human Factors and Simulation category at the International Conference on Applied Human Factors and Ergonomics.

“The recent data will benefit future Soldiers by helping form critical trust-based interventions, such as refining user interface design, adapting intelligent agent behaviors and adapting communication (modality, timing, amount and frequency) to calibrate team trust and cohesion in order to ultimately improve team effectiveness,” Schaefer-Lay said.