ABSTRACT
Noisy pairwise comparison feedback has been incorporated to improve the overall query complexity of interactively learning binary classifiers. The positivity comparison oracle, which answers which of two data points is more likely to be positive, has been used extensively to provide such feedback. Because accurate labels cannot be determined with this oracle alone without knowing the classification threshold, existing methods still rely on the traditional explicit labeling oracle, which directly returns the label of a given data point. The existing method sorts all data points and then uses the explicit labeling oracle to locate the classification threshold. However, it has two drawbacks: (1) it performs sorting that is unnecessary for label inference, and (2) it naively adapts quicksort to noisy feedback. To avoid these inefficiencies while simultaneously acquiring information about the classification threshold, we propose a new pairwise comparison oracle on uncertainty: given a pair of data points, it answers which one has higher uncertainty. We then propose an efficient adaptive labeling algorithm that takes advantage of the proposed oracle. In addition, we address the situation where the labeling budget is insufficient relative to the data set size. Finally, we confirm the feasibility of the proposed oracle and the performance of the proposed algorithm both theoretically and empirically.
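To make the two kinds of oracles concrete, here is a minimal simulation sketch. The one-dimensional scoring function `score`, the threshold parameter, and the symmetric noise model are illustrative assumptions, not details given in the abstract:

```python
import random

def positivity_oracle(x, y, score, noise=0.0):
    # Noisy positivity comparison oracle: answers which of x and y
    # is more likely to be positive (here, the one with higher score).
    answer, other = (x, y) if score(x) > score(y) else (y, x)
    return other if random.random() < noise else answer

def uncertainty_oracle(x, y, score, threshold, noise=0.0):
    # Uncertainty comparison oracle (sketch of the proposed idea):
    # answers which point has higher uncertainty, modeled here as
    # having a score closer to the classification threshold.
    answer, other = (
        (x, y)
        if abs(score(x) - threshold) < abs(score(y) - threshold)
        else (y, x)
    )
    return other if random.random() < noise else answer
```

With `noise=0.0` both oracles are deterministic; for example, with `score = lambda v: v` and `threshold = 0.5`, the uncertainty oracle prefers whichever point lies closer to 0.5.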
ABSTRACT
Learning from triplet comparison data has been extensively studied in the contexts of metric learning, where we want to learn a distance metric between two instances, and ordinal embedding, where we want to learn an embedding of the given instances in a Euclidean space that preserves the comparison order as much as possible. Unlike fully labeled data, triplet comparison data can be collected in a more accurate and human-friendly way. Although learning from triplet comparison data has been considered in many applications, the fundamental question of whether a classifier can be learned only from triplet comparison data, without any label information, has remained unanswered. In this letter, we give a positive answer to this important question by proposing an unbiased estimator of the classification risk under the empirical risk minimization framework. Because the proposed method is based on empirical risk minimization, it inherits the advantage that any surrogate loss function and any model, including neural networks, can easily be applied. Furthermore, we theoretically establish an estimation error bound for the proposed empirical risk minimizer. Finally, we provide experimental results showing that our method works well empirically and outperforms various baseline methods.
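A triplet comparison records, for three instances, that an anchor is more similar to one than to the other. The following minimal sketch shows how such a triplet could be simulated from hidden class labels for benchmarking; the function name and the shared-label similarity rule are illustrative assumptions, not the paper's data-collection protocol, and the learner would see only the instances, never the labels:

```python
def triplet_order(a, b, c):
    # Each argument is an (instance, hidden_label) pair. Returns the
    # triplet (x_anchor, x_pos, x_neg), meaning the anchor is more
    # similar to x_pos than to x_neg, with similarity simulated here
    # by shared hidden labels. Returns None for uninformative triples.
    (xa, ya), (xb, yb), (xc, yc) = a, b, c
    if ya == yb != yc:
        return (xa, xb, xc)
    if ya == yc != yb:
        return (xa, xc, xb)
    return None  # all labels agree, or the anchor matches neither
```

In a simulation, uninformative draws (`None`) would simply be resampled until enough informative triplets are collected.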