Gaussian switch sampling (GauSS) is an active learning approach for training deep neural networks, such as those used by autonomous vehicles (AVs) for object detection and tracking. Developed by researchers at Georgia Tech, the GauSS strategy reduces costs and improves performance by combining prediction switches with both diverse and uncertain sampling components. The technology creates an active learning framework that automatically identifies the samples that need human annotation, reducing the amount of data that must be manually labeled while retaining all important data samples for training.
The strategy is anchored by a unifying definition of “uncertainty” and “diversity” for active learning based on the idea of neural network forgetting. It uses the concept of “forgetting events” to differentiate and analyze model interactions with uncertain and diverse samples separately. GauSS outperforms existing strategies on various in-distribution metrics while maintaining valuable robustness characteristics for out-of-distribution data.
Benefits
- Better analysis: It provides a method to analyze models and their interactions with data.
- Improved accuracy: GauSS matches or outperforms existing technologies in test accuracy on in-distribution as well as out-of-distribution applications because it is more selective, finding samples that help the network both learn new information and remember what it previously learned.
- Versatility: The strategy is based exclusively on learning dynamics and can therefore generalize across networks, datasets, and training methods.
- Affordability: It requires annotating less data and therefore costs less to operate.
Applications
- Autonomous vehicle companies and other companies with safety-critical applications, where understanding a model’s limitations on uncertain or diverse samples is a necessity
- Defense and security applications
- Any organization that works with real-world data, where knowledge of how a model interacts with uncertain or diverse samples is important
- Medical applications
- Subsurface imaging applications
Background
To enhance deep learning algorithms, data selection methods such as active learning must be deployed cautiously to avoid risks. Current active learning strategies use inconsistent definitions of uncertainty and diversity and lack a unified definition against which algorithms can be evaluated. The absence of tools for analyzing how models interact with uncertain and diverse samples makes real-world deployment of active learning protocols challenging for safety-critical applications (e.g., autonomous vehicles) where understanding model limitations is critical.
Autonomous vehicles rely primarily on deep learning-based methods for object detection and tracking. These deep neural networks require a significant amount of annotated training data to cover all possible real-world cases and to ensure the accuracy and robustness of the algorithms. Finding data samples (e.g., unforeseen new events) that help the network perform better, however, is very costly and is usually done manually, resulting in long development cycles and expensive AVs.
How It Works
The GauSS strategy creates an active learning framework that automatically finds the samples that need human annotation, which reduces the cost of training the algorithm and improves its performance. To reduce the amount of data that must be manually annotated while keeping all important data samples for training, collected data is run through the neural network to find samples that are difficult for it to predict. The approach determines which samples the network is uncertain about and which samples were not represented in the prior training dataset.
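The select-annotate-retrain cycle described above is a standard pool-based active learning loop. The sketch below is a minimal illustration of that cycle, not the authors' implementation; the `train`, `acquire`, and `oracle_label` callables are hypothetical placeholders for model training, a GauSS-style sample scorer, and the human annotator, respectively.

```python
def active_learning_loop(model, train, acquire, oracle_label,
                         labeled, unlabeled, rounds, budget):
    """Generic pool-based active learning loop (illustrative sketch).

    train(model, labeled, unlabeled) fits the model on the labeled set and
    returns its per-epoch predictions on the unlabeled pool; acquire decides
    which unlabeled samples are worth sending to a human annotator.
    """
    for _ in range(rounds):
        # Train on the currently labeled data, recording the model's
        # predictions on the unlabeled pool after every epoch (these
        # checkpointed predictions are what switch counting needs).
        epoch_predictions = train(model, labeled, unlabeled)

        # Score the unlabeled pool and select the samples to annotate.
        chosen = set(acquire(epoch_predictions, budget))

        # Only the chosen samples receive costly human annotations.
        labeled += [(unlabeled[i], oracle_label(unlabeled[i])) for i in chosen]
        unlabeled = [x for i, x in enumerate(unlabeled) if i not in chosen]
    return model, labeled
```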
These samples can be identified using the concept of “forgetting events”: a forgetting event occurs when a sample that is learned (correctly predicted) in one training epoch is forgotten (incorrectly predicted) in a subsequent epoch. Forgetting events are approximated by prediction switches, i.e., changes in a trained model’s prediction for a sample between checkpoints from different epochs. Prediction switches then stand in for forgetting events in the active learning framework to obtain the uncertain and unforeseen samples: “uncertain” samples are those that are frequently forgotten, while “diverse” samples are those that are least forgotten.
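Below is a minimal sketch, under stated assumptions, of how prediction switches could be counted and used to split an annotation budget between uncertain and diverse samples. The even 50/50 split between the most-switched and least-switched samples is an illustrative assumption rather than the published GauSS sampling rule; this `acquire` function could plug into the loop sketched above.

```python
import numpy as np

def count_prediction_switches(epoch_predictions: np.ndarray) -> np.ndarray:
    """Count per-sample prediction switches across training epochs.

    epoch_predictions has shape (num_epochs, num_samples) and holds the class
    label predicted for each unlabeled sample at each saved checkpoint. A
    switch, the proxy for a forgetting event, is counted whenever the label
    changes between consecutive epochs.
    """
    changed = epoch_predictions[1:] != epoch_predictions[:-1]
    return changed.sum(axis=0)

def acquire(epoch_predictions, budget: int) -> np.ndarray:
    """Pick sample indices to annotate: half uncertain, half diverse
    (the even split is an assumption made for illustration)."""
    switches = count_prediction_switches(np.asarray(epoch_predictions))
    order = np.argsort(switches)                 # ascending switch counts
    diverse = order[: budget // 2]               # least forgotten samples
    uncertain = order[-(budget - budget // 2):]  # most forgotten samples
    return np.concatenate([uncertain, diverse])
```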