Academic Paper

Learning Nonsparse Kernels by Self-Organizing Maps for Structured Data
Document Type
Periodical
Source
IEEE Transactions on Neural Networks, 20(12):1938-1949, Dec. 2009
Subject
Computing and Processing
Communication, Networking and Broadcast Technologies
Kernel
Self organizing feature maps
Machine learning
Neural networks
Data mining
Chemistry
Chemical elements
Web sites
XML
Kernel methods
self-organizing maps (SOMs)
structured data
tree kernels
Language
English
ISSN
1045-9227
1941-0093
Abstract
The development of neural network (NN) models able to encode structured input, and the more recent definition of kernels for structures, make it possible to directly apply machine learning approaches to generic structured data. However, the effectiveness of a kernel can depend on its sparsity with respect to a specific data set. In fact, the accuracy of a kernel method typically decreases as the kernel sparsity increases. The sparsity problem is particularly common in structured domains involving discrete variables that may take on many different values. In this paper, we explore this issue on two well-known kernels for trees, and propose to address it by resorting to self-organizing maps (SOMs) for structures. Specifically, we show that a suitable combination of the two approaches, obtained by defining a new class of kernels based on the activation map of a SOM for structures, can be effective in avoiding the sparsity problem and results in a system that can be significantly more accurate for categorization tasks on structured data. The effectiveness of the proposed approach is demonstrated experimentally on two relatively large corpora of XML-formatted data and a data set of user sessions extracted from website logs.
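The core idea of the abstract, a kernel induced by the activation map of a SOM, can be illustrated with a minimal sketch. The sketch below is a hypothetical toy construction, not the paper's exact formulation: each input (a plain feature vector standing in for an encoded tree) is mapped to a vector of activations over a fixed set of SOM neuron prototypes, and the kernel is the inner product of two such activation vectors. The Gaussian activation, the `sigma` width, and the 2x2 prototype grid are all illustrative assumptions.

```python
import math

# Toy 2x2 SOM: each neuron has a prototype in input space.
# In the paper's setting these would come from a SOM trained on
# structured data; here they are fixed for illustration.
PROTOTYPES = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def activation_map(x, sigma=0.5):
    """Gaussian activation of every SOM neuron for input x (illustrative choice)."""
    acts = []
    for w in PROTOTYPES:
        d2 = sum((xi - wi) ** 2 for xi, wi in zip(x, w))
        acts.append(math.exp(-d2 / (2 * sigma ** 2)))
    return acts

def som_kernel(x, y):
    """Kernel value: inner product of the two activation maps."""
    ax, ay = activation_map(x), activation_map(y)
    return sum(a * b for a, b in zip(ax, ay))

# Because every neuron responds (smoothly) to every input, any two
# inputs share some activation mass, so the kernel is nonsparse:
k_near = som_kernel((0.1, 0.1), (0.2, 0.0))
k_far = som_kernel((0.1, 0.1), (0.9, 1.0))
assert k_near > k_far > 0.0
```

The key property, as opposed to a sparse tree kernel where unrelated structures get a similarity of exactly zero, is that nearby inputs excite overlapping neurons and even distant inputs retain a small positive similarity.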