SARA CRA Overview
- Enable Autonomous Maneuver in complex and contested environments.
- Develop fundamental understanding and inform the art-of-the-possible for Scalable, Adaptive, and Resilient Autonomy.
- Improve air and ground based autonomous vehicle perception, learning, reasoning, communication, navigation, and physical capabilities.
- Realize adaptive and resilient Intelligent Systems that can reason about the environment, work in distributed and collaborative heterogeneous teams, and make decisions at operational tempo.
- A series of technology sprint topics executed in annual program cycles integrated with ARL Essential Research Programs (ERPs).
- Integrate state-of-the-art solutions from leading institutions and principal investigators (PIs) into government-owned autonomous testbeds and the autonomy stack.
- Comprehensive and cumulative capability exploring and experimentally demonstrating the art-of-the-possible in scalable, adaptive, and resilient autonomy.
Cycle 1 Technology Sprint Topic: Off-Road Autonomous Maneuver
- Increase the operational tempo and mobility of autonomous systems traversing increasingly complex off-road environments.
- Aligned with the Artificial Intelligence for Maneuver and Mobility (AIMM) ERP focus and experimentation for high-speed unmanned ground vehicle (UGV).
Cycle 2 Technology Sprint Topic: Autonomous Complex Terrain Maneuver
- Extend beyond the Cycle 1 topics to explore novel methods for increasing the complexity of the planning, decision-making, and traversable physical environments for off-road maneuver of Army autonomous systems.
- Intelligence and path-planning capability over long distances, long time scales, and varying terrains and surfaces.
- Novel methods for all-terrain ground and aerial maneuver to interact with and move through complex environments.
- Methods for scalable and heterogeneous behaviors in support of collaborative air and ground manned-unmanned teaming (MUM-T) operations.
- Techniques for improved perception, decision making, and adaptive behaviors for fully-autonomous maneuver in contested environments.
- Methods, metrics, and tools to facilitate, simulate, and enable evaluation, verification, and validation of emerging approaches for intelligent and autonomous systems under Army relevant constraints and environments.
- Experimental testbeds to develop and refine knowledge products to inform and transition technology to Army stakeholders.
AI for Maneuver and Mobility (AIMM) 2025 Integrated Demonstration
ARL Autonomy Stack implemented on Husky platform
An outcome of the Robotics Collaborative Technology Alliance (RCTA)
ARL Autonomy Stack implemented on Warthog platform
Warthog UGV @ Texas A&M University RELLIS Campus
ARL air Autonomy Stack implemented on mission-driven Small UAS
UAS auto-landing on moving Warthog UGV @ APG
Army awards nearly $3M to push research boundaries in off-road autonomy
“The SARA program will help us accelerate our research in autonomous vehicles by including best of breed performers who will augment the capabilities of our core software, enabling future combat vehicles to operate in complex environments,” said Dr. John Fossaceca, acting AIMM ERP program manager at the lab.
“Robotic and autonomous systems need the ability to enter into an unfamiliar area, without the ability to communicate and for which there are no maps showing terrain or structures, make sense of the environment, and perform safely and effectively at the Army’s operational tempo,” said Eric Spero, SARA program manager.
Partners will work in close collaboration with each other and the lab to further develop and then integrate their solutions onto representative testbed platforms, and into the lab's autonomous systems software repository, for collaboration across the Army Futures Command and Army autonomy enterprise. Disciplined research experimentation will then verify and validate both expected and new behaviors, Spero said.
DEVCOM Army Research Laboratory Office of Strategic Communications
ARL Autonomy Stack
The ARL Autonomy Stack provides an implementation of the architecture and consists of four major capabilities:
- Perception pipeline: Take sensor data (e.g., RGB images and point clouds) and process it into symbolic observations. Components include object detection, per-pixel image classification, LIDAR-based object position/pose estimation, etc.
- Simultaneous Localization and Mapping (SLAM): Using sensor data and perception pipeline products, formulate SLAM problem as a pose-graph optimization and solve. Includes components for point cloud alignment (ICP), pose-graph optimization (GTSAM), caching/data-association/fusion of symbolic object measurements, renderers of terrain classes/occupancy grids/point clouds.
- Metric Planning and Execution: Use metric model of the world to achieve metric goals (e.g., waypoint navigation). Includes components for global planning (e.g., lattice-based motion planning), local planning (e.g., trajectory optimization), and an executor to sequence planning and control.
- Symbolic Planning and Execution: Use a symbolic model of the world to achieve symbolic goals (e.g., going near a particular object). The underlying symbolic planning architecture is based on behavior trees. Includes components for mission planning (e.g., Planning and Acting using Behavior Trees), mission execution, and sample behaviors that interface with the mission planning/execution and metric planning/execution layers (e.g., going to an object).
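The behavior-tree approach underlying the symbolic planning layer can be sketched in miniature as follows. This is an illustrative toy, not the ARL Autonomy Stack's actual API: the `Status` enum, node classes, and blackboard dictionary are assumptions, and the "drive to object" action is a stand-in for a call into the metric planning/execution layer.

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Node:
    """Base class: every behavior-tree node is 'ticked' and returns a Status."""
    def tick(self, blackboard):
        raise NotImplementedError

class Sequence(Node):
    """Ticks children in order; fails or keeps running as soon as a child does."""
    def __init__(self, children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            status = child.tick(blackboard)
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Condition(Node):
    """Leaf that checks a predicate over the shared blackboard."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, blackboard):
        return Status.SUCCESS if self.predicate(blackboard) else Status.FAILURE

class Action(Node):
    """Leaf that performs work and reports its outcome."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, blackboard):
        return self.fn(blackboard)

def drive_to_object(blackboard):
    # Stand-in for handing a waypoint goal to the metric planning layer.
    blackboard["robot_pos"] = blackboard["object_pos"]
    return Status.SUCCESS

# Hypothetical "go near object" mission: require a detection, then drive to it.
mission = Sequence([
    Condition(lambda bb: "object_pos" in bb),
    Action(drive_to_object),
])

blackboard = {"object_pos": (4.0, 2.0)}
result = mission.tick(blackboard)
```

If the perception pipeline has not yet produced an `object_pos` observation, the condition leaf fails and the sequence aborts without issuing a motion command, which is the usual way behavior trees gate actions on world state.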
Contributions to the existing architecture come in three possible ways:
- Replace an existing algorithm (i.e., node) with one that maintains the same input/output specifications. Experiments should then be conducted to show improved performance.
- Add an existing algorithm or capability to the existing system. Experiments should then be conducted to show augmented capability.
- Replace an existing capability (i.e., cluster of nodes) such that aggregate input/output specifications are maintained. Experiments should then be conducted to show maintained end-to-end performance and augmented capabilities.
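The first and third contribution patterns both hinge on holding input/output specifications fixed so components swap cleanly. A minimal sketch of that idea, assuming a hypothetical `GlobalPlanner` interface (the names and types here are illustrative, not the actual stack's node contracts):

```python
from abc import ABC, abstractmethod
from typing import List, Tuple

Point = Tuple[float, float]

class GlobalPlanner(ABC):
    """Fixed I/O specification: (start, goal) in, waypoint list out."""
    @abstractmethod
    def plan(self, start: Point, goal: Point) -> List[Point]:
        ...

class StraightLinePlanner(GlobalPlanner):
    """Baseline algorithm: connect start and goal directly."""
    def plan(self, start: Point, goal: Point) -> List[Point]:
        return [start, goal]

class MidpointPlanner(GlobalPlanner):
    """Drop-in replacement: same signature, adds an intermediate waypoint."""
    def plan(self, start: Point, goal: Point) -> List[Point]:
        mid = ((start[0] + goal[0]) / 2, (start[1] + goal[1]) / 2)
        return [start, mid, goal]

def execute_mission(planner: GlobalPlanner, start: Point, goal: Point) -> List[Point]:
    # The executor depends only on the interface, so either planner can be
    # substituted without touching downstream code.
    return planner.plan(start, goal)
```

Because the executor never names a concrete planner, comparative experiments (baseline vs. replacement, as the contribution patterns require) reduce to swapping the object passed in and measuring end-to-end performance.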
To support this collaborative and cumulative engagement, software code developed under the SARA CRA will be added to the ARL Autonomy Stack Repository for use by current and future ARL and SARA sprint performers.