Academic Paper

Fine-Tuning the Retrieval Mechanism for Tabular Deep Learning
Document Type
Working Paper
Source
Subject
Computer Science - Machine Learning
Language
English
Abstract
While interest in tabular deep learning has grown significantly, conventional tree-based models still outperform deep learning methods. To narrow this performance gap, we explore the retrieval mechanism, a methodology that allows neural networks to refer to other data points while making predictions. Our experiments reveal that retrieval-based training, especially when fine-tuning the pretrained TabPFN model, notably surpasses existing methods. Moreover, extensive pretraining plays a crucial role in enhancing model performance. These insights imply that combining the retrieval mechanism with pretraining and transfer-learning schemes offers considerable potential for advancing the field of tabular deep learning.
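The retrieval idea described in the abstract can be illustrated with a minimal sketch: at prediction time, the model is conditioned on training rows retrieved for the query. This is not the authors' implementation; the k-NN retriever, the function names `retrieve_context` and `predict_with_retrieval`, and the assumption of a TabPFN-style fit/predict interface are all illustrative choices.

```python
# Minimal sketch of retrieval-based prediction for tabular data (illustrative only).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def retrieve_context(X_train, y_train, x_query, k=32):
    """Return the k training rows (and their labels) closest to the query row."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(x_query.reshape(1, -1))
    return X_train[idx[0]], y_train[idx[0]]

def predict_with_retrieval(model, X_train, y_train, x_query, k=32):
    """Condition the model on retrieved neighbors, then predict for the query.

    `model` is assumed to expose a TabPFN-style fit/predict interface that
    accepts an in-context training set at prediction time (an assumption,
    not a claim about the paper's training procedure).
    """
    X_ctx, y_ctx = retrieve_context(X_train, y_train, x_query, k=k)
    model.fit(X_ctx, y_ctx)                       # in-context conditioning on neighbors
    return model.predict(x_query.reshape(1, -1))  # prediction for the single query row
```

In the fine-tuning setting the paper studies, the retrieved context would also be used during training updates rather than only at inference; the sketch above shows only the inference-time retrieval step.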
Comment: Table Representation Learning Workshop at NeurIPS 2023