Scalable, Adaptive, and Resilient Autonomy (SARA) Collaborative Research Alliance Overview

Purpose

  • Enable Autonomous Maneuver in complex and contested environments.
  • Develop fundamental understanding and inform the art-of-the-possible for Scalable, Adaptive, and Resilient Autonomy.
  • Improve air and ground based autonomous vehicle perception, learning, reasoning, communication, navigation, and physical capabilities.
  • Realize adaptive and resilient Intelligent Systems that can reason about the environment, work in distributed and collaborative heterogeneous teams, and make decisions at operational tempo.

Approach

  • A series of technology sprint topics executed in annual program cycles integrated with ARL Essential Research Programs (ERPs).
  • Integrate state-of-the-art solutions from leading institutions and PIs into government-owned autonomous testbeds and the autonomy stack.
  • Build a comprehensive and cumulative capability that explores and experimentally demonstrates the art-of-the-possible in scalable, adaptive, and resilient autonomy.

Cycle 1 Technology Sprint Topic: Off-Road Autonomous Maneuver

  • Increase the operational tempo and mobility of autonomous systems to traverse increasingly complex off-road environments.
  • Aligned with the Artificial Intelligence for Maneuver and Mobility (AIMM) ERP focus on and experimentation with high-speed unmanned ground vehicles (UGVs).
  • Experimentation in simulation and in relevant environments.
  • Operationalizing autonomy science.

Products

  • Novel methods for all-terrain ground and aerial maneuver to interact with and move through complex environments.
  • Methods for scalable and heterogeneous behaviors in support of collaborative air and ground manned-unmanned teaming (MUM-T) operations.
  • Techniques for improved perception, decision making, and adaptive behaviors for fully-autonomous maneuver in contested environments.
  • Methods, metrics, and tools to facilitate, simulate, and enable evaluation, verification, and validation of emerging approaches for intelligent and autonomous systems under Army relevant constraints and environments.
  • Experimental testbeds to develop and refine knowledge products to inform and transition technology to Army stakeholders.
Various testbeds

  • AI for Maneuver and Mobility (AIMM) 2025 Integrated Demonstration.
  • ARL Autonomy Stack implemented on the Husky platform, an outcome of the Robotics Collaborative Technology Alliance (RCTA).
  • ARL Autonomy Stack implemented on the Warthog platform (Warthog UGV at the Texas A&M University RELLIS Campus).
  • ARL air Autonomy Stack implemented on mission-driven small UAS (UAS auto-landing on a moving Warthog UGV at APG).

Army awards nearly $3M to push research boundaries in off-road autonomy

“The SARA program will help us accelerate our research in autonomous vehicles by including best of breed performers who will augment the capabilities of our core software, enabling future combat vehicles to operate in complex environments,” said Dr. John Fossaceca, acting AIMM ERP program manager at the lab.

“Robotic and autonomous systems need the ability to enter into an unfamiliar area, without the ability to communicate and for which there are no maps showing terrain or structures, make sense of the environment, and perform safely and effectively at the Army’s operational tempo,” said Eric Spero, SARA program manager.

Partners will work in close collaboration with each other and the lab to further develop and then integrate their solutions onto representative testbed platforms, and into the lab’s autonomous systems software repository, for collaboration across the Army Futures Command and Army autonomy enterprise. Disciplined research experimentation will then verify and validate both expected and new behaviors, Spero said.

U.S. Army CCDC Research Laboratory Public Affairs

Awards to academia and industry

(7 Ground topics, 1 Air topic)

  • Safe, Fluent, and Generalizable Outdoor Autonomy – University of Washington
  • Uncertainty-Aware Autonomous Navigation – General Electric Company, GE Research
  • Resilient, Resource-adaptive Multi-sensor Fusion for Persistent Navigation – University of Delaware
  • Efficiently Adaptive State Lattices for Robust Guidance, Navigation, and Control of Unmanned Ground Vehicles – University of Rochester
  • Self-reflective Robot Adaptation to Unstructured Terrain for Off-road Ground Maneuver – Colorado School of Mines
  • Enhancing Unmanned Maneuver through Mission-based Traversability Assessment (MeTA) Tool – Florida Institute for Human and Machine Cognition (IHMC)
  • 2.5D Terrain Adaptive Decision-Theoretic Planning for Autonomous Vehicles – Indiana University
  • High-speed and reactive planning, localization, perception and navigation for aerial robots in dynamic off-road environments – University of California Berkeley

ARL Autonomy Stack – Architecture

[Architecture diagram]

ARL Autonomy Stack – Description

The ARL Autonomy Stack provides an implementation of the architecture and consists of four major capabilities:

  1. Perception pipeline: Takes sensor data (e.g., RGB images and point clouds) and processes it into symbolic observations. Components include object detection, per-pixel image classification, LIDAR-based object position/pose estimation, etc.
  2. Simultaneous Localization and Mapping (SLAM): Uses sensor data and perception pipeline products to formulate the SLAM problem as a pose-graph optimization and solve it. Includes components for point cloud alignment (ICP), pose-graph optimization (GTSAM), caching, data association, and fusion of symbolic object measurements, and renderers for terrain classes, occupancy grids, and point clouds.
  3. Metric Planning and Execution: Uses a metric model of the world to achieve metric goals (e.g., waypoint navigation). Includes components for global planning (e.g., lattice-based motion planning), local planning (e.g., trajectory optimization), and an executor to sequence planning and control.
  4. Symbolic Planning and Execution: Uses a symbolic model of the world to achieve symbolic goals (e.g., going near a particular object). The underlying symbolic planning architecture is based on behavior trees. Includes components for mission planning (e.g., Planning and Acting using Behavior Trees), mission execution, and sample behaviors that interface with the mission planning/execution and metric planning/execution layers (e.g., going to an object).
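To make the behavior-tree foundation of the symbolic layer concrete, here is a minimal sketch of behavior-tree semantics: composite Sequence and Fallback nodes that tick Condition and Action leaves. All names (`go_to_object`, the world dictionary, the distance threshold) are illustrative assumptions for this sketch, not the actual ARL Autonomy Stack API.

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Sequence:
    """Ticks children in order; fails or yields at the first non-SUCCESS child."""
    def __init__(self, *children):
        self.children = children
    def tick(self, world):
        for child in self.children:
            status = child.tick(world)
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Fallback:
    """Ticks children in order; returns the first non-FAILURE status."""
    def __init__(self, *children):
        self.children = children
    def tick(self, world):
        for child in self.children:
            status = child.tick(world)
            if status != Status.FAILURE:
                return status
        return Status.FAILURE

class Condition:
    """Leaf that checks a predicate against the (symbolic) world model."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, world):
        return Status.SUCCESS if self.predicate(world) else Status.FAILURE

class Action:
    """Leaf that applies an effect, e.g., handing a waypoint to the metric layer."""
    def __init__(self, effect):
        self.effect = effect
    def tick(self, world):
        self.effect(world)
        return Status.SUCCESS

# "Go near a particular object": if not already near it, drive toward it.
near_target = Condition(lambda w: w["dist_to_target"] < 1.0)
drive = Action(lambda w: w.__setitem__("dist_to_target", 0.5))  # stand-in for a metric-planner call
go_to_object = Fallback(near_target, Sequence(drive, near_target))

world = {"dist_to_target": 10.0}
result = go_to_object.tick(world)
```

In a real stack the `Action` leaf would dispatch a metric goal (e.g., a waypoint) to the metric planning and execution layer and report RUNNING until it completes; the stand-in here succeeds immediately to keep the sketch self-contained.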

ARL Autonomy Stack – Contribution

Contributions to the existing architecture can be made in three ways:

  • Replace an existing algorithm (i.e., node) with one that maintains the same input/output specifications. Experiments should then be conducted to show improved performance.
  • Add an existing algorithm or capability to the existing system. Experiments should then be conducted to show augmented capability.
  • Replace an existing capability (i.e., cluster of nodes) such that aggregate input/output specifications are maintained. Experiments should then be conducted to show maintained end-to-end performance and augmented capabilities.
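The first contribution pattern, swapping a node while preserving its input/output specification, can be sketched as follows. `LocalPlanner`, `StraightLinePlanner`, and `DetourPlanner` are hypothetical names for illustration; the real stack's interfaces and message types differ.

```python
from abc import ABC, abstractmethod
from typing import List, Tuple

Pose = Tuple[float, float]  # (x, y), a stand-in for a richer pose message

class LocalPlanner(ABC):
    """The I/O contract every local-planner node must satisfy: pose + goal -> path."""
    @abstractmethod
    def plan(self, start: Pose, goal: Pose) -> List[Pose]: ...

class StraightLinePlanner(LocalPlanner):
    """Baseline node: interpolates straight toward the goal."""
    def plan(self, start, goal):
        steps = 4
        return [
            (start[0] + (goal[0] - start[0]) * i / steps,
             start[1] + (goal[1] - start[1]) * i / steps)
            for i in range(steps + 1)
        ]

class DetourPlanner(LocalPlanner):
    """Replacement node: different internals, same signature and output type."""
    def plan(self, start, goal):
        midpoint = ((start[0] + goal[0]) / 2, (start[1] + goal[1]) / 2 + 1.0)
        return [start, midpoint, goal]

def execute(planner: LocalPlanner, start: Pose, goal: Pose) -> Pose:
    # The executor depends only on the contract, so nodes are swappable
    # without touching the rest of the pipeline.
    path = planner.plan(start, goal)
    return path[-1]
```

Because the executor is written against the contract rather than a concrete planner, either node can be dropped in, and the experiments described above then compare their performance end to end.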

To support this collaborative and cumulative engagement, software code developed under the SARA CRA will be added to the ARL Autonomy Stack Repository for use by current and future ARL and SARA sprint performers.