Prediction stability as a criterion in active learning
2416-Prediction-Stability-as-a-Criterion-in-Active-Learning-1.pdf (2,012Mb) (Restricted access)
Document type: Conference report
Rights access: Restricted access - publisher's policy
Recent breakthroughs in deep learning rely heavily on large numbers of annotated samples, and active learning is one way to overcome this limitation. Whereas previous active learning algorithms use only information obtained after training, we propose a new class of methods, called sequential-based methods, that draw on information gathered during training. We introduce a specific active learning criterion, prediction stability, to demonstrate the feasibility of sequential-based methods. We design a toy model to explain the principle of the proposed method and point out a possible defect of earlier uncertainty-based methods. Experiments on CIFAR-10 and CIFAR-100 show that prediction stability is effective and works well on datasets with fewer labels: it matches the accuracy of traditional acquisition functions such as entropy on CIFAR-10 and notably outperforms them on CIFAR-100.
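The abstract describes scoring unlabeled samples by how stable their predictions are across training, then labeling the least stable ones. The sketch below is a minimal illustration of that idea, not the paper's exact formulation: it assumes softmax outputs are recorded at several epochs and measures instability as the mean L2 distance between predictions at consecutive epochs (the paper's precise stability measure may differ).

```python
import numpy as np

def instability_scores(preds_per_epoch):
    """Score each unlabeled sample by how much its predicted class
    probabilities fluctuate across training epochs.

    preds_per_epoch: array of shape (E, N, C) -- softmax outputs for
    N samples, recorded at E epochs, over C classes.
    Returns an (N,) array: mean L2 distance between the predictions
    at consecutive epochs (higher = less stable).
    """
    preds = np.asarray(preds_per_epoch, dtype=float)
    diffs = np.diff(preds, axis=0)                      # (E-1, N, C)
    return np.linalg.norm(diffs, axis=2).mean(axis=0)   # (N,)

def select_for_labeling(preds_per_epoch, k):
    """Pick the k samples whose predictions were least stable,
    i.e. the candidates an active learner would send for annotation."""
    scores = instability_scores(preds_per_epoch)
    return np.argsort(-scores)[:k]
```

A sample whose prediction flips between classes across epochs receives a high score and is selected before one whose prediction never changes; entropy-style acquisition functions, by contrast, look only at the final model's output.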
CitationLiu, J. [et al.]. Prediction stability as a criterion in active learning. A: International Conference on Artificial Neural Networks. "Part of the Lecture Notes in Computer Science book series (LNCS, volume 12397)". Springer Nature, 2020, p. 157-167. DOI 10.1007/978-3-030-61616-8_13.