Efficient Multi-Concept Visual Classifier Adaptation in Changing Environments

Report No. ARL-TR-7827
Authors: Maggie Wigness, John G. Rogers, Luis Navarro-Serment, and Bruce A. Draper
Date/Pages: September 2016; 50 pages
Abstract: Multi-concept visual classification is emerging as a common environment perception technique, with applications in autonomous mobile robot navigation. Supervised visual classifiers are typically trained with large sets of images, hand annotated by humans with region boundary outlines followed by label assignment. This annotation is time consuming, and unfortunately, a change in the environment requires new or additional labeling to adapt visual perception. The time it takes for a human to label new data is called adaptation latency. High adaptation latency is not simply undesirable, but may be infeasible for scenarios with limited labeling time and resources. We introduce a labeling framework that significantly reduces adaptation latency using unsupervised segmentation and clustering in exchange for a small amount of label noise. We demonstrate the framework's speed and ability to collect environment labels that train high-performing, multi-concept classifiers in several outdoor urban environments. Finally, we show the relevance of this label collection process for visual perception as it applies to navigation in outdoor environments.
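The framework summarized above reduces human effort by clustering unlabeled image segments so that a human assigns one label per cluster rather than one per segment. The toy sketch below illustrates that idea only; the segment features, the minimal k-means routine, and the `cluster_labels` mapping are all hypothetical stand-ins, not the report's actual method or data.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: returns a cluster assignment for each row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster empties.
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return assign

# Fake "segment features": two well-separated concepts (e.g., grass vs. road).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 3)), rng.normal(5, 0.1, (50, 3))])

assign = kmeans(X, k=2)

# One human labeling action per cluster, propagated to every member segment.
# Mislabeled members of an impure cluster are the "label noise" the abstract
# accepts in exchange for speed.
cluster_labels = {0: "grass", 1: "road"}  # stand-in for the human's answers
labels = [cluster_labels[c] for c in assign]

print(len(labels), "segments labeled with", len(cluster_labels), "human actions")
```

Here 100 segments receive labels from only 2 human decisions; the adaptation-latency savings grow with the number of segments per cluster.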
Distribution: Approved for public release

Last Update / Reviewed: September 1, 2016