Human Capability Enhancement

Human Capability Enhancement is a basic research, applied research, and advanced technology development effort that aims to discover, innovate, and develop technologies that directly and indirectly enhance human perceptual, cognitive, physical, and social capabilities for the Army, ranging from individuals and teams to organizations and societies. Innovations in this area are expected to generate equipment and training technologies that will provide unprecedented capabilities for future warfighters and enable future leaders to make sound decisions effectively in complex socio-cultural contexts.


Characterization of human gait states and transitions

This research aims to identify the variables that strongly indicate particular gait states and transitions between gait states in humans. This information can be used to develop effective control algorithms for exoskeletons and other personal augmentation systems for Soldiers. Efforts focus on examining human gait under the conditions and in the environments in which dismounted Soldiers operate (e.g., carrying a heavy rucksack and walking up and down hills or tactical movements on uneven terrain). Collaborations are sought in the following areas: theoretical and experimental motor control research, pattern recognition, machine learning, and statistical methods to quantitatively identify states of gait and transitions from one type of gait to another.
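As a rough illustration of the kind of quantitative gait-state identification described above, the sketch below classifies a stride as walking or running with a nearest-centroid rule over two common gait features, stride frequency and duty factor (stance time divided by stride time). All feature values, states, and thresholds here are hypothetical, chosen only to show the shape of such a classifier, not any specific ARL method.

```python
# Illustrative sketch: classifying gait states from stride features with a
# nearest-centroid rule. Training values below are hypothetical.
import math

# Hypothetical training samples: (stride frequency in Hz, duty factor).
# Duty factor typically drops below ~0.5 when walking becomes running.
samples = {
    "walk": [(0.9, 0.62), (1.0, 0.60), (1.1, 0.58)],
    "run":  [(1.3, 0.42), (1.4, 0.40), (1.5, 0.38)],
}

def centroid(points):
    """Mean of a list of 2-D feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

centroids = {state: centroid(pts) for state, pts in samples.items()}

def classify(features):
    """Assign the gait state whose centroid is nearest in feature space."""
    return min(centroids, key=lambda s: math.dist(features, centroids[s]))

def transition(prev_state, features):
    """Flag a gait transition when the classified state changes."""
    state = classify(features)
    return state, state != prev_state

state, changed = transition("walk", (1.35, 0.41))
print(state, changed)  # prints: run True
```

A real control algorithm would draw on richer features (joint angles, ground reaction forces, load) and a sequence model that respects the temporal structure of gait, but the state-and-transition logic has this same shape.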

Principal Investigator:

Dr. Philip Crowell, (410) 278-5986

Improving Neural Enhancement by Combining Stimulation and Network Analysis

Recent research has indicated the promise of utilizing real-time measurement of physiological activity in feedback studies to improve performance. Participants are shown their ongoing physiological activity, and they develop internal strategies to modulate their physiology based on volitional control. In our research, we investigate alternative approaches for the physiologic target of the feedback process.

In one component of our research, we capitalize on our recent results identifying control points of networks (Gu et al., 2015, Nature Communications, 6, 8414): instead of targeting the most predictive physiological connection directly, we can target the boundary control nodes responsible for mediating activity within the most predictive node of the network. In this way, we gain a degree of control and precision over the network process that may give rise to a complex behavioral response. The activity level of boundary control nodes can serve as feedback, and subjects learn to vary the regional activity level of the boundary control nodes and, as a result, modulate their functional connectivity.
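To illustrate the network-control idea, the sketch below ranks nodes of a toy linear network by a finite-horizon average-controllability score: the trace of the controllability Gramian when input enters at a single node, in the spirit of the framework in Gu et al. (2015). The adjacency matrix, dynamics, and horizon are illustrative assumptions, not measured brain data or the authors' actual analysis pipeline.

```python
# Minimal sketch: ranking candidate control nodes of a linear network model
# x(t+1) = A x(t) + B u(t), with input B = e_node (one node driven).
# The toy adjacency matrix below is fabricated for illustration.

def average_controllability(A, node, horizon=10):
    """Trace of the finite-horizon controllability Gramian for input at
    `node`: the sum over t of ||A^t e_node||^2."""
    n = len(A)
    x = [1.0 if j == node else 0.0 for j in range(n)]  # A^0 e_node
    total = 0.0
    for _ in range(horizon + 1):
        total += sum(v * v for v in x)
        x = [sum(A[i][k] * x[k] for k in range(n)) for i in range(n)]  # x <- A x
    return total

# Toy 4-node network, scaled so the dynamics are stable; node 2 is the hub.
A = [[0.0, 0.3, 0.3, 0.0],
     [0.3, 0.0, 0.3, 0.0],
     [0.3, 0.3, 0.0, 0.3],
     [0.0, 0.0, 0.3, 0.0]]

scores = {i: average_controllability(A, i) for i in range(len(A))}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking[0])  # prints: 2 (the hub node ranks highest)
```

In a feedback experiment, a ranking like this could suggest which regions' activity to present to the subject; with real data one would estimate connectivity from recordings and use validated controllability metrics.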

In another component, we extend our previous research that has demonstrated how stimulation can be used to identify differential global brain network changes if the stimulated local area is highly interconnected across different brain networks (Garcia et al., 2011, Journal of Neurophysiology, 106(4), 1734-46). Here, we stimulate a variety of different brain areas in order to map the networks that are most flexible or inflexible to external perturbation. By mapping the cascade of neural activity following stimulation, we may also gain access to the naturally occurring networks that may be associated with particular functions (e.g., task performance).

Collaborators are sought for either of these ongoing approaches, or to brainstorm additional methodological innovations that can be used with stimulation to augment human performance. This research supports ARL’s ERA in Accelerated Learning for a Ready and Responsive Force.

Principal Investigator:

Dr. Javi Garcia, (410) 278-8949

Multisensory Interface for Information Foraging

Intelligence analysts are few in number, scan imagery using slow, methodical techniques, and are costly to train. Nevertheless, demand for analysts is high and increasing due to the large volumes of imagery from intelligence, surveillance, and reconnaissance (ISR) sensors and platforms. This trend makes it nearly impossible for the Department of Defense to keep pace with the analysis of the images and data from its many and varied data collection sources. Consequently, there is a critical need for novel approaches and techniques that enhance analysts’ ability to perform at high levels for extended periods of time. This research explores touch and 3D interfaces, with an emphasis on integrating speech, eye-gaze, and gesture interactions with computer vision object detection. The resulting prototype system will attempt to enhance image analysis efficiency and target detection performance while reducing fatigue.

Collaborative opportunities exist in computer science, human-computer interaction (HCI), and neurostimulation. Computer science opportunities involve machine learning techniques for hand gestures and computer vision algorithms for automatic target detection. HCI collaboration on data visualization design for 2D and 3D information presentation and on the integration of voice, gesture, and eye-gaze input is possible. Finally, the application of neurostimulation to enhance cognitive performance is also an area for collaborative efforts.

Principal Investigator:

Dr. Jeff Hansberger, (256) 273-9895

Neurotechnology for Rapid Capability Enhancement

Non-invasive neurotechnologies and cognitive training approaches show great promise for augmenting cognitive capabilities. ARL is investigating the use of transcranial direct current stimulation (tDCS) and mindfulness meditation to augment Soldier capabilities through enhanced training as well as direct enhancement of task performance.  Stimulation is also a useful tool to perturb, characterize, and actively produce state changes within large dynamical systems such as the human nervous system. ARL is seeking postdocs and collaborators with backgrounds in neurostimulation, control theoretic and dynamical systems approaches, and statistical modeling of training effects.

Principal Investigator:

Dr. Alfred Yu, (410) 278-1037


Adaptive Training and Education Research

Goals for this research program include the discovery of tools and methods to support automated instruction, to reduce the skill and time required to author adaptive instruction, and to evaluate adaptive training and educational methods.  The Learning in Intelligent Tutoring Environments (LITE) Laboratory investigates methods to enable computer-based tutoring systems (also known as Intelligent Tutoring Systems, or ITSs) to automatically adapt instruction to optimize the learning, performance, retention, and transfer of skills from training to the operational environment.  Instruction is tailored for individual learners or teams based on their states (e.g., cognitive, social), which in turn are inferred from historical, self-reported, observed, physiological, and behavioral data.  In concert with our major goals, we are capturing best practices in the Generalized Intelligent Framework for Tutoring (GIFT), a free, open architecture for authoring, automating instruction, and evaluating the effectiveness of adaptive instructional tools and methods.  At this time, GIFT has nearly 1100 users in 70 countries.  As part of the Open Campus initiative, ARL has created the GIFT Virtual Open Campus (VOC), a cloud-based version of GIFT accessible from anywhere on a myriad of host platforms.
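As a toy example of the adaptive tailoring described above, the sketch below steps item difficulty up or down based on a learner's recent performance. The thresholds, step size, and difficulty scale are illustrative assumptions and do not represent the actual policies implemented in GIFT.

```python
# Illustrative staircase rule for adaptive instruction: raise difficulty
# when recent mastery is high, lower it when mastery is low, hold otherwise.
# All thresholds and the 1-5 difficulty scale are hypothetical.

def next_difficulty(current, recent_scores, low=0.5, high=0.8,
                    step=1, floor=1, ceiling=5):
    """Choose the next item difficulty from recent fraction-correct scores.

    recent_scores: values in [0, 1] for the learner's last few items.
    """
    mastery = sum(recent_scores) / len(recent_scores)
    if mastery >= high:
        current += step      # learner is ready for harder material
    elif mastery < low:
        current -= step      # remediate with easier material
    return max(floor, min(ceiling, current))

print(next_difficulty(3, [1.0, 0.8, 1.0]))  # prints: 4 (step up)
print(next_difficulty(3, [0.2, 0.4, 0.3]))  # prints: 2 (step down)
```

A deployed ITS would condition this choice on a richer learner-state estimate (cognitive and social states inferred from behavioral and physiological data, as described above), but the select-then-adapt loop has this basic shape.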

Principal Investigator:

Dr. Robert Sottilare, (407) 384-3007

Synthetic Environment research for training devices to reduce terrain production time

The TRADOC Combined Arms Center identified the ARL One World Terrain (OWT) research project as a key enabler for the success of the PEO STRI Synthetic Training Environment (STE).  OWT research provides a single-source terrain database solution for training and operational systems.  The research goal is to give Soldiers the capability to rapidly update OWT, thereby reducing terrain development and maintenance costs.  OWT will leverage real-world and operational data to enhance STE training realism, and will acquire information from advanced data support analytics, algorithms, attributions, visualizations, and accessibility programs.  Collaborative research is needed to achieve the goal of reducing development and maintenance costs.  Collaborative research is also needed to make terrain generation easier for Soldiers to execute by reducing their learning curve to two days and providing OWT on mobile devices with cloud computing capabilities.

Principal Investigator:

Mr. Julio De la Cruz, (407) 208-3022

Mixed Reality Training research to improve Soldier IED detection

A collaborative training environment that optimizes human and team performance has been identified as a need for future training. To achieve this environment, real and virtual worlds must be merged to produce new environments and visualizations where physical and digital objects co-exist and interact in real time. Mixed Reality is the umbrella term for the spectrum of immersive graphic presentation technologies spanning Virtual Reality, Augmented Virtuality, and Augmented Reality.  Through the use of existing mixed reality technologies, the Army intends to develop training environments that are adaptive and allow cross-domain training, specifically between the live training domain and other simulation training domains. Research collaboration is needed to determine whether mixed reality can be used on existing improvised explosive device virtual training devices to enhance training.

Principal Investigator:

Ms. Latika Eifert, (407) 384-5338

Biotechnology and concepts for the manipulation of cellular processes to achieve training endpoints

The effects of training, both physical and cognitive, are the result of changes in how individual cells in the body respond. Through repetition and experience, biological processes adapt to increase efficiency and optimize performance. Advances in basic biotechnology research are required to enable direct interaction with the biology of healthy humans to achieve training outcomes faster. Biotechnology approaches to the direct manipulation of human biology may also serve as a complementary approach to the use of simulation by adding elements of the biological components of real-world experience. Potential collaborations include bioengineering extracellular vesicles, with specific interest in optimizing endosomal escape efficiency, cell-type-specific targeting, and/or purification of loaded vesicles from a mixed population.

Principal Investigator: 

Dr. Keith W. Whitaker, (410) 278-5599

Mixed Reality Battlespace Visualization (ARES & more)

Complete situational awareness of the battlefield is essential to making informed operational decisions. Modern military tactics have led to further delegation of decision authorities to lower echelons to decrease the time to decision. In this environment, providing a seamless understanding of the battlespace across echelons is critical to mission success. ARL is exploring means to deliver a user-defined common operating picture at the point of need by using low-cost and readily available technologies. One example of a prototype being used to research the benefits of a tangible, 3-dimensional, shared terrain model is the ARES (Augmented REality Sandtable) project. Collaborations are sought with relevant organizations to investigate innovative techniques and technologies to aid users and teams in their ability to understand and act on complex, multi-dimensional data.  Collaborations are also sought to develop open, extensible architectures that can support the shared battlespace in all of its forms (including cyber) through the use of augmented-, mixed-, and virtual-reality devices. Collaborators capable of conducting pertinent human factors research in perception, interaction, collaboration, and cognition are also sought.

Principal Investigator:

Mr. Charles Amburn, (407) 384-3901

Individual Traits and Training Effectiveness: How Individual Differences Affect Response to Immersion in Virtual Training Environments

Effective training is a universal need, not just for the Warfighter, but across military, industrial, and academic contexts. In pursuit of effective training, a vast array of virtual training environments (VTEs) have been developed. These may be as simple as text on a computer screen or as elaborate as head-mounted virtual reality systems. VTE designers often operate under the mantra of "more is better": the idea that higher-fidelity, more immersive simulations will yield better learning outcomes. However, the reality is much more complex. The relationships among fidelity, immersive characteristics, the psychological experience of presence, and training outcomes are far from clear-cut. The literature shows inconsistent training benefits, and sometimes even detriments, for VTEs with more immersive characteristics as compared to VTEs with less immersive characteristics. We believe this may be due to three factors: 1) detrimental effects (e.g., simulator sickness) of immersive characteristics lead to diminishing training returns; 2) individual trait differences affect a person's experience of presence and how trainees respond to varying levels of immersive characteristics; and 3) task relevance affects how immersive characteristics influence learning and presence.

We have yet to see a systematic investigation of all three of these factors together in the published literature. Such an investigation is needed to address likely complex interactions, to explore why the optimal training outcomes expected from immersive VTEs are often not achieved relative to VTEs with less immersive characteristics, and to examine how individual traits affect a trainee’s experience in a VTE and training outcomes. Our goal is to close this gap by addressing the following questions: 1) What is the relationship between immersive characteristics and training effectiveness? Are there diminishing returns? 2) Do individual trait differences and simulator sickness mediate the relationship between immersive characteristics and training outcomes? 3) Does the relationship vary depending on task relevance? In executing this research, we seek discussions with researchers in relevant fields.
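The second question above is, statistically, a mediation question. The sketch below estimates a simple indirect effect on synthetic data (immersion level X, simulator sickness M, training outcome Y) by combining the X-to-M slope with the M-to-Y slope after partialling X out of both variables. All variable names and data are fabricated for illustration; a real analysis would use a dedicated statistics package and a bootstrap test of the indirect effect.

```python
# Illustrative mediation sketch: does simulator sickness (M) carry part of
# the effect of immersion (X) on training outcome (Y)? Data are synthetic.
import random

def slope(x, y):
    """OLS slope of y on x (simple regression with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def residuals(x, y):
    """Residuals of y after removing its linear dependence on x."""
    b = slope(x, y)
    n = len(x)
    a = sum(y) / n - b * sum(x) / n
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

random.seed(0)
X = [random.uniform(0, 1) for _ in range(200)]       # immersion level
M = [0.8 * x + random.gauss(0, 0.1) for x in X]      # simulator sickness
Y = [-0.6 * m + random.gauss(0, 0.1) for m in M]     # training outcome

a = slope(X, M)                                # path X -> M
b = slope(residuals(X, M), residuals(X, Y))    # path M -> Y, controlling X
indirect = a * b                               # estimated mediated effect
print(round(indirect, 2))                      # near -0.48 by construction
```

Residualizing both M and Y on X before taking the slope recovers the coefficient M would receive in a multiple regression of Y on X and M, which is the quantity a mediation test needs.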

Principal Investigator:

Dr. Kimberly Pollard, (310) 574-5709
Dr. Jason Moss, (407) 384-3921