Academic Journal

Bias learning, knowledge sharing
Document Type
Author Abstract
Source
IEEE Transactions on Neural Networks, July 2003, Vol. 14, Issue 4, p. 748, 18 p.
Subject
Neural networks -- Research
Neural network
Business
Computers
Electronics
Electronics and electrical industries
Language
English
ISSN
1045-9227
Abstract
Properly biasing the hypothesis space of a learner has been shown to improve generalization performance. Methods for achieving this goal have been proposed, ranging from designing and introducing a bias into a learner to automatically learning the bias. Multitask learning methods fall into the latter category. When several related tasks derived from the same domain are available, these methods use the domain-related knowledge coded in the training examples of all the tasks as a source of bias. We extend some of the ideas presented in this field and describe a new approach that identifies a family of hypotheses, represented by a manifold in hypothesis space, that embodies domain-related knowledge. This family is learned using training examples sampled from a group of related tasks. Learning models trained on these tasks are only allowed to select hypotheses that belong to the family. We show that the new approach encompasses a large variety of families that can be learned. A statistical analysis on a class of related tasks shows significantly improved performance when this approach is used.
Index Terms
Bias learning, knowledge sharing, knowledge transfer, learning to learn, multitask learning
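For illustration only, the sketch below shows the general idea summarized in the abstract in a deliberately simple setting that the abstract does not specify: several related linear-regression tasks whose hypotheses are constrained to a shared, learned affine subspace of weight space, a basic stand-in for a manifold of hypotheses. It is not the authors' algorithm; the dimensions, learning rate, and all variable names are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact method): several related tasks share a
# learned family of hypotheses. The family is an affine subspace of weight
# space, parameterized by a shared offset b0 and basis B; each task t may only
# select a hypothesis w_t = b0 + B @ z_t lying on that subspace.
# All sizes and the learning rate below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, k, n_tasks, n_samples = 10, 2, 5, 50   # input dim, manifold dim, tasks, samples per task

# Generate related tasks: their true weights lie on a hidden 2-D affine subspace.
true_b0 = rng.normal(size=d)
true_B = rng.normal(size=(d, k))
tasks = []
for _ in range(n_tasks):
    w_true = true_b0 + true_B @ rng.normal(size=k)
    X = rng.normal(size=(n_samples, d))
    y = X @ w_true + 0.1 * rng.normal(size=n_samples)
    tasks.append((X, y))

# Learnable shared family (b0, B) and per-task coordinates Z on the manifold.
b0 = np.zeros(d)
B = rng.normal(scale=0.1, size=(d, k))
Z = np.zeros((n_tasks, k))

lr = 0.01
for step in range(2000):
    grad_b0 = np.zeros_like(b0)
    grad_B = np.zeros_like(B)
    for t, (X, y) in enumerate(tasks):
        w_t = b0 + B @ Z[t]              # task hypothesis, constrained to the family
        err = X @ w_t - y                # residuals for task t
        grad_w = X.T @ err / len(y)      # gradient of the task loss w.r.t. its weights
        grad_b0 += grad_w                # shared parameters accumulate gradients over tasks
        grad_B += np.outer(grad_w, Z[t])
        Z[t] -= lr * (B.T @ grad_w)      # task-specific coordinate update
    b0 -= lr * grad_b0 / n_tasks
    B -= lr * grad_B / n_tasks

mse = np.mean([np.mean((X @ (b0 + B @ Z[t]) - y) ** 2) for t, (X, y) in enumerate(tasks)])
print(f"average training MSE across tasks: {mse:.4f}")
```

The shared parameters (b0, B) play the role of the learned bias: they are fit jointly on all tasks, while each task only adjusts its low-dimensional coordinate within the family.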