Academic Paper
A Probabilistic Approach for Model Alignment with Human Comparisons
Document Type
Working Paper
Abstract
A growing trend involves integrating human knowledge into learning frameworks, leveraging nuanced human feedback to refine AI models. While these approaches have shown promising results in practice, the theoretical understanding of when and why they are effective remains limited. This work takes steps toward developing a theoretical framework for analyzing the conditions under which human comparisons can enhance the traditional supervised learning process. Specifically, this paper studies the effective use of noisy-labeled data and human comparison data to address challenges arising from noisy environments and high-dimensional models. We propose a two-stage "Supervised Learning + Learning from Human Feedback" (SL+LHF) framework that connects machine learning with human feedback through a probabilistic bisection approach. The two-stage framework first learns low-dimensional representations from noisy-labeled data via an SL procedure and then uses human comparisons to improve model alignment. To examine the efficacy of the alignment phase, we introduce a concept termed the "label-noise-to-comparison-accuracy" (LNCA) ratio. This paper identifies, from a theoretical perspective, the conditions under which the SL+LHF framework outperforms the pure SL approach; we then leverage the LNCA ratio to highlight the advantage of incorporating human evaluators in reducing sample complexity. Through a case study conducted via Amazon Mechanical Turk (MTurk), we validate that the LNCA ratio satisfies the proposed conditions for its use.
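The alignment phase described above rests on a probabilistic bisection approach, in which noisy yes/no comparisons narrow down an unknown quantity. As a minimal illustrative sketch (not the paper's implementation), the classic probabilistic bisection procedure maintains a posterior density over a grid, queries its median, and reweights each side by the comparator's accuracy p; all function names, grid sizes, and the example oracle below are assumptions for illustration.

```python
import random

def probabilistic_bisection(oracle, p=0.8, n_grid=2000, n_queries=120):
    """Estimate an unknown point theta* in [0, 1] from noisy comparisons.

    oracle(x) answers the question "is theta* > x?" and is assumed
    correct with probability p > 1/2 (an imperfect human comparator).
    The posterior density over a uniform grid is updated after each
    answer, and the posterior median is both the next query and the
    final estimate.
    """
    grid = [i / (n_grid - 1) for i in range(n_grid)]
    density = [1.0 / n_grid] * n_grid  # uniform prior over the grid

    def median():
        cum = 0.0
        for g, w in zip(grid, density):
            cum += w
            if cum >= 0.5:
                return g
        return grid[-1]

    for _ in range(n_queries):
        x = median()               # query the posterior median
        answer = oracle(x)         # noisy answer to "theta* > x?"
        # Bayes update: weight the side the answer favors by p,
        # the other side by (1 - p), then renormalize.
        density = [w * (p if (g > x) == answer else 1.0 - p)
                   for g, w in zip(grid, density)]
        total = sum(density)
        density = [w / total for w in density]

    return median()

# Illustrative use: a comparator that answers correctly 80% of the time.
random.seed(0)
theta_star = 0.37
noisy_oracle = lambda x: (theta_star > x) == (random.random() < 0.8)
estimate = probabilistic_bisection(noisy_oracle, p=0.8)
```

Because each query carries a fixed amount of information when p > 1/2, the posterior concentrates geometrically around the true point, which is the mechanism the framework exploits to trade noisy labels for a modest number of human comparisons.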