Below is an archive of general questions and answers, listed in order received.
Q1: Are these robots fully autonomous or do they have to collaborate with a human teammate?
In this sprint, no collaboration with a human teammate is expected during mission execution. We anticipate the human role in the system to be limited to activities such as providing initial tasking and setting operating parameters.
Q2: ROS 1 is good for single-vehicle and few-vehicle operations. As the number of collaborating agents increases and the interconnected system becomes more complex, does ARL think this will continue to be a reasonable solution, or will a more “big data” tech stack approach need to be pursued?
It is our current expectation that ROS 1 will be able to handle the “squad scale” operations in this program (4-10 UGVs). One technical note relating to this question is that the ARL Autonomy Stack contains components designed to support communication between platforms in a “multi-master” configuration over a UDP-based link. Future sprints will potentially transition to ROS 2. The use of ROS 2 “bridge nodes” would also be a potential approach to improve scaling.
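As a purely illustrative sketch of what platform-to-platform messaging over a UDP-based link can look like (the message layout, field names, and port handling are assumptions invented for this example, not the ARL Autonomy Stack's actual wire format):

```python
import socket
import struct

# Hypothetical fixed-size pose message: robot id plus (x, y, heading).
POSE_FMT = "!Ifff"  # network byte order: uint32, then 3x float32

def encode_pose(robot_id, x, y, heading):
    return struct.pack(POSE_FMT, robot_id, x, y, heading)

def decode_pose(data):
    return struct.unpack(POSE_FMT, data)

# Loopback demo: one socket receives, another sends a teammate's pose.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(encode_pose(3, 1.5, -2.0, 0.5), ("127.0.0.1", port))

msg, _ = rx.recvfrom(64)
robot_id, x, y, heading = decode_pose(msg)
tx.close()
rx.close()
```

In a real multi-master deployment each platform would run its own ROS master and exchange such datagrams over the radio link; the sketch only shows the serialization round trip.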
Q3: What is the expected order of the proposed budget for each project?
Proposed budgets should be appropriate for the work described. ARL anticipates funding between 3 and 5 efforts in this sprint.
Q4: Do the ground robots have a camera that transmits video back to a human operator in real-time?
Current robot models in the simulator feature a variety of sensors including several different cameras and lidars. Additional sensor models of existing hardware can be added to the simulation environment as needed for the proposed effort.
Q5: Is observation from aerial assets in scope for this program, or is the current call limited to observations from ground assets?
This sprint is looking at a team of ground robots at “squad” scale (4-10 units). It does not include an aerial component.
Q6: Can there be a scenario where robot/communication failure(s) happen?
A scenario that includes robot and/or communications failures could be specified in a proposal but is not generally called for in this sprint topic. Robot attrition and communications disruption are not planned for this cycle but could be part of the performance metrics used by specific proposers to demonstrate robustness if desired.
Q7: What does ARL mean by doctrine driven maneuver?
Doctrine for the Army is a standard of operation which is taught to Soldiers. This includes formations (such as wedge, echelon, column, line), and maneuver tactics such as traveling or bounding overwatch.
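To make the formation concept concrete, here is a minimal sketch of formation offsets for follower vehicles relative to a lead vehicle (the spacing value and axis conventions are illustrative assumptions, not doctrinal parameters):

```python
def wedge_offsets(n_followers, spacing=10.0):
    """(x, y) offsets relative to a lead vehicle moving along +y:
    followers alternate right/left and drop back one rank per pair,
    forming a wedge behind the lead."""
    offsets = []
    for i in range(1, n_followers + 1):
        side = 1 if i % 2 else -1   # alternate right/left of the lead
        rank = (i + 1) // 2         # how far back this pair sits
        offsets.append((side * rank * spacing, -rank * spacing))
    return offsets

def column_offsets(n_followers, spacing=10.0):
    """Followers strung out directly behind the lead vehicle."""
    return [(0.0, -i * spacing) for i in range(1, n_followers + 1)]
```

A maneuver behavior could hold such offsets as goal positions while the formation travels, or switch between offset tables (wedge, column, line, echelon) as the tactic demands.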
Q8: For the related ARL collaborative research programs (like DCIST, IoBT, SARA, etc.), what is the rough proportion of companies v. universities/UARCs among the performers?
Prior ARL programs contain a mix of industry, UARCs, and university performers. There is no target distribution for the performers in this program.
Q9: Is the call for building a virtual simulator with synthetic scenes (only), or is there an expectation of a real POC demo (at some ARL site) as well?
The call is for development of multi-agent tactical maneuver behaviors in a simulator environment. Each funded proposal will work with an ARL technical POC (identified by ARL during the proposal review process). This POC will facilitate follow-on transitions where appropriate.
Q10: Does generalizability of machine learning mean adaptability to unforeseen situations for same scenario?
Generalizable machine learning here means the ability to operate with different initial conditions in the same simulation environment, as well as to operate (potentially with some, but minimal, degraded performance) in similar but unseen simulation environments (e.g., a learned controller trained in several “forest” simulations could be demonstrated in an unseen test “forest” simulation, but would not be expected to operate well in a “village” simulation).
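One simple way to realize the train/held-out split described above (the scenario names, hold-out fraction, and seed are illustrative assumptions):

```python
import random

def split_scenarios(scenarios, holdout_frac=0.25, seed=0):
    """Reserve a fraction of simulation environments as unseen test
    scenarios; train only on the remainder."""
    rng = random.Random(seed)           # fixed seed for a reproducible split
    pool = list(scenarios)
    rng.shuffle(pool)
    k = max(1, int(len(pool) * holdout_frac))
    return pool[k:], pool[:k]           # (train, held-out test)

forests = [f"forest_{i}" for i in range(8)]
train, test = split_scenarios(forests)
```

Evaluating only on the held-out environments is what distinguishes generalization from memorizing a single training map.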
Q11: What limitations regarding SWAP, cost, covertness, etc. should be considered for communication and sensory systems?
No limitations are specified here with respect to SWaP (size, weight, and power) aside from what could conceivably fit within a vehicle platform.
Q12: What is the rough scale for the mission (both in terms of area/distance and mission duration)?
Multiple scales of operation could be considered, from a highly complex 50-100-meter maneuver up to a multi-kilometer operation with mixed complexity.
Q13: Should we assume that all scenarios are available and known when the system is trained, or should we expect some testing to occur on novel scenarios?
ARL plans to develop reserved “test scenarios” which will be used to support the evaluation of proposed solutions.
Q14: Is the objective of this call to propose new machine learning algorithms on top of what’s available in the current ARL multi-agent reinforcement learning module?
Yes, the tools provided by ARL are a starting point meant to support further novel research in AI and machine learning techniques that are relevant to the Call for Proposals.
Q15: Is a machine learning component necessary or expected in proposals?
Based on the described research problem, ARL anticipates many potential solutions will have a machine learning component. However, a machine learning component is not a requirement if a proposed approach can address the described problems without use of such a component.
Q16: Is there an expectation that ARL will fund a blend of proposers from academia, industry, and UARCs?
There are no preconceived expectations on the proportion of funded proposals from academia, industry, and UARCs.
Q17: Other than the cited ATP documents, are there other resources about military tactics/doctrine?
There are many documents on this subject provided at https://armypubs.army.mil/ProductMaps/PubForm/ATP.aspx which can be referenced in proposals. ARL has provided examples from the Combined Arms Battalion (ATP_3-90.5) and Scout Platoon (ATP_3-20.98) as a starting point, but other sources can be cited.
Q18: For sprint 1, is there any consideration of interaction, communication, or collaboration with humans? Or is sprint 1 purely focused on robot behavior?
Apart from human input potentially defining initial conditions, goals, and other parameters, this sprint is focused on the autonomous operation of ground robot teams without further human involvement.
Q19: Can current ARL CRA program participants (DCIST, IoBT, SARA, etc) build upon and extend their work from these programs for a proposal to TBAM?
Yes, proposals can build from research outcomes from other ARL and non-ARL programs.
Q20: Does in-kind or leveraged funding factor into the review process?
No, leveraged or in-kind funding is not a technical review criterion. Additional leveraged or in-kind funds can be disclosed to justify lower cost bids.
Q21: On page 19 of the FOA, under the section “Other Attachments”, what is the 1st point (“Attached the complete certifications”) referring to? What type of completed certifications are we supposed to attach?
Refer to Section F, Articles c, and d of the FOA for the required certifications.
Q22: The FOA wants us to attach the “SF424 R&R Senior/Key Person Profile” and “SF424 Personal Data” in field 12 of the form. However, these are already part of the package in Grants.gov, so why do we need to attach them again in field 12?
Please follow the instructions listed in the FOA to ensure all completed forms are included in the package.
Q23: The 5th point (page 20) refers to completing Representations under DoD Assistance Agreements. Is there a link for that?
Q24: Are references / bibliography included in the page limit for Chapter 1: Technical Component?
References are not included in the page limits.
Q25: Is it reasonable to assume that there is a global map of terrain? Or only some incomplete information of the terrain is available?
Prior information can be used to a degree, but any proposed solution should be able to handle incomplete or absent prior information (such as in a potential “hold-out” test scenario).
Q26: What about the adversaries? Will their locations be available (up to a certain accuracy)?
Prior information about the location of adversaries should be limited to high-level operating conditions, e.g., “Beyond this phase line, contact with adversary scout units is (not/somewhat/highly) probable” or “Adversaries have been seen in this named area.” Prior knowledge of the adversary should not extend to precise position information, and maneuvering ground assets should not assume that all adversaries are known in advance.
Q27: Is it expected to conduct hardware demonstrations at the end of the 2nd year?
Hardware experimentation is not required in the second year of this cycle. If hardware demonstrations are feasible and appropriate for a specific project, the ARL technical POC(s) can help facilitate the transition of research outcomes to a technical maturity suitable for such experimentation.
Q28: It was mentioned during the seminar that incorporating IoBT scenarios could be of interest. Could you please confirm if this is the case? and if so, if there are specific directions of interest?
Proposers can cite current capabilities in IoBT systems to justify their choices / configurations in terms of capabilities; however, as there is an existing IoBT collaborative research alliance, proposed research should not have significant overlap with that program.
Q29: Will the webinar be recorded for those who are unable to attend it live due to schedule conflicts?
The slides and recordings from the opportunity webinar are posted to the TBAM webpage. Additionally, new recordings are available there for the MITRE Multi-Agent Environment (MMAE) as well as current research overviews.
Q30: Please provide feedback on this white paper to let us know if it is of interest to this program.
Please review the webinar recordings and slides, along with all information provided in the FOA, for further guidance on relevant research topics and directions.
Q31: Are only distributed algorithms applicable here or can we consider centralized algorithms?
In general, there are no specific restrictions on the types of approaches which can be proposed. Each proposal will be evaluated on the criteria enumerated in the FOA.
Q32: Is the April 29 deadline listed on page 20 a typo?
This is a typo – the correct due date is 27 May at 1700 EDT.
Q33: Would the inclusion/collaborations of UAVs with UGVs be permitted?
The primary focus of this program is on the coordinated maneuver of ground assets in this sprint cycle. The availability of additional “prior” information (attributable to aerial assets) could be included. The coordination between UAVs and UGVs is the subject of other programs (DCIST).
Q34: Is it necessary to use RLlib or Gym for the implementation of RL algorithms if we need to integrate them into MITRE Multi-Agent Environment (MMAE) framework?
No, there is no requirement of this kind.
Q35: Is there any limitation on the number of agents that need to be navigated?
“Squad scale” – 4 to 10 agents
Q36: Do we need to include ‘adversary contact’ while training/testing the algorithms in the simulation? Or should it be enough to reach a fixed goal (position/location) by choosing routes that maximize cover and concealment?
It should be enough to reach goal location(s)/areas by choosing routes that maximize cover and concealment.
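As a toy illustration of route selection that favors cover and concealment (the grid, exposure costs, and neighborhood are invented for the example), a Dijkstra search that minimizes accumulated per-cell exposure:

```python
import heapq

def covered_route(grid, start, goal):
    """Minimum-exposure path on a grid, where grid[r][c] is the
    exposure cost of entering that cell (high = open ground)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue                     # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal            # walk back from goal to start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

grid = [
    [1, 9, 1],   # 9 = exposed open ground, 1 = covered terrain
    [1, 9, 1],
    [1, 1, 1],
]
route = covered_route(grid, (0, 0), (0, 2))
```

On this grid the planner detours through the covered bottom row rather than crossing the exposed middle column, which is the kind of trade-off a cover-and-concealment cost map induces.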
Q37: If we need to consider desert, what could be the cover here compared with forest/jungle?
Desert environments could still exhibit cover in the form of sparse foliage (brush) and elevation changes. This type of operating environment is likely to be more challenging with respect to finding cover and concealment.
Q38: Can computers be considered materials or equipment? On page 18 of the solicitation, I am unclear where to list computers in the budget.
Durable equipment such as computers should be listed as “equipment.”
Q39: Under Proposal Intent on Page 8, the subparagraph discussing IP states, “The success of this multidisciplinary effort will require meaningful collaborative partnerships between government, academia, and industry to advance the science. Proposals must address the intellectual property (IP) approach….” Does ARL intend for all IP to remain open to transition to government ownership or would ARL be amenable to the proposer retaining its commercial rights for its IP?
Awards will require compliance with the Bayh-Dole Act and the appropriate implementing regulations, which provide the Government with a nonexclusive, irrevocable, paid-up license to use a Subject Invention throughout the world.