Learning in networks of similarity processing neurons
Document type: Conference report
Rights access: Open Access
Similarity functions are a very flexible framework in which to express knowledge about a problem and to capture the meaningful relations in input space. In this paper we describe ongoing research that uses similarity functions to find more convenient representations for a problem (a crucial factor for successful learning), so that subsequent processing can be delegated to linear or non-linear modeling methods. The idea is tested on a set of challenging problems, characterized by a mixture of data types and different amounts of missing values. We report a series of experiments comparing the idea against two more traditional approaches: one that ignores the knowledge about the dataset and one that uses this knowledge to pre-process it. The preliminary results show generalization performance competitive with or better than that reported in the literature, together with a considerable improvement in the interpretability of the obtained models.
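As a rough illustration of the approach the abstract describes, the sketch below re-represents each example by its similarities to a set of prototypes, using a Gower-style similarity that handles mixed data types and missing values. All names, the choice of Gower similarity, and the toy data are illustrative assumptions, not the paper's actual method.

```python
# Hedged sketch: a Gower-style similarity for mixed-type data with
# missing values (here encoded as None). Each example is mapped to its
# vector of similarities to a few prototypes; that vector can then be
# handed to any linear or non-linear model. Names are hypothetical.

def gower_similarity(x, y, kinds, ranges):
    """Average of per-feature similarities, skipping features that are
    missing (None) in either example. `kinds` holds 'num' or 'cat' per
    feature; `ranges` gives the numeric range used to scale numeric
    differences (ignored for categorical features)."""
    total, used = 0.0, 0
    for xi, yi, kind, rng in zip(x, y, kinds, ranges):
        if xi is None or yi is None:
            continue  # missing value: this feature contributes nothing
        if kind == 'num':
            total += 1.0 - abs(xi - yi) / rng  # scaled numeric similarity
        else:
            total += 1.0 if xi == yi else 0.0  # categorical: exact match
        used += 1
    return total / used if used else 0.0

def similarity_representation(data, prototypes, kinds, ranges):
    """Re-represent each example as its similarities to the prototypes."""
    return [[gower_similarity(x, p, kinds, ranges) for p in prototypes]
            for x in data]

# Toy mixed-type data: (age, colour), with one missing age.
kinds, ranges = ['num', 'cat'], [50.0, None]
data = [(25, 'red'), (None, 'blue'), (40, 'red')]
protos = [(20, 'red'), (45, 'blue')]
rep = similarity_representation(data, protos, kinds, ranges)
# rep[0] is [0.95, 0.3]: close in age and matching colour to the first
# prototype, less similar to the second.
```

The re-represented `rep` vectors live in a homogeneous numeric space, so the downstream model never has to deal with mixed types or missing values directly; that concern is absorbed by the similarity function.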
Citation: Belanche, Ll. Learning in networks of similarity processing neurons. A: Workshop New Challenges in Neural Computation. "Workshop New Challenges in Neural Computation 2013". Saarbrücken: 2013, p. 97-105.