Learning in networks of similarity processing neurons
Cite as: hdl:2117/23349
Document type: Text in conference proceedings
Publication date: 2013
Access conditions: Open access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, its reproduction, distribution, public communication or transformation without the authorization of the rights holder is prohibited.
Abstract
Similarity functions are a very flexible container in which to express knowledge about a problem, as well as to capture the meaningful relations in input space. In this paper we describe ongoing research that uses similarity functions to find more convenient representations for a problem (a crucial factor for successful learning), such that subsequent processing can be delivered to linear or non-linear modeling methods. The idea is tested on a set of challenging problems characterized by a mixture of data types and different amounts of missing values. We report a series of experiments comparing the idea against two more traditional approaches: one that ignores the knowledge about the dataset, and one that uses this knowledge to pre-process it. The preliminary results show generalization performance that is competitive with, or better than, that reported in the literature. In addition, there is a considerable gain in the interpretability of the obtained models.
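The core idea described in the abstract, re-representing each sample by its similarities to a set of reference samples so that a standard (e.g. linear) model can operate on the new features, can be sketched as follows. This is a minimal illustration assuming a Gower-style similarity for mixed data types with missing values; the function, prototypes, and data are hypothetical and do not reproduce the paper's exact formulation:

```python
# Sketch: similarity-based re-representation for mixed-type data.
# Each sample becomes the vector of its similarities to a set of
# prototype samples; missing values (None) are simply skipped, so
# no imputation step is needed before modeling.

def gower_similarity(x, y, is_categorical, ranges):
    """Mean per-feature similarity, ignoring features missing in either sample."""
    total, used = 0.0, 0
    for j, (a, b) in enumerate(zip(x, y)):
        if a is None or b is None:
            continue  # missing in either sample: feature contributes nothing
        if is_categorical[j]:
            total += 1.0 if a == b else 0.0          # exact-match overlap
        else:
            total += 1.0 - abs(a - b) / ranges[j]    # range-normalized closeness
        used += 1
    return total / used if used else 0.0

# Toy dataset: one numeric feature (observed range 10) and one categorical one.
is_cat = [False, True]
ranges = [10.0, None]                    # range only needed for numeric features
prototypes = [(0.0, "red"), (10.0, "blue")]
sample = (5.0, "red")                    # complete mixed-type sample
incomplete = (None, "blue")              # numeric value missing

# New representation: similarity of the sample to each prototype.
rep = [gower_similarity(sample, p, is_cat, ranges) for p in prototypes]
print(rep)  # -> [0.75, 0.25]
```

A linear or non-linear learner can then be fit on vectors like `rep` instead of the raw heterogeneous attributes, which is what makes the resulting models easier to interpret: each coefficient weights the similarity to one concrete prototype.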
Citation: Belanche, Ll. Learning in networks of similarity processing neurons. In: "Workshop New Challenges in Neural Computation 2013". Saarbrücken: 2013, p. 97-105.
ISSN: 1865-3960
Files | Description | Size | Format
---|---|---|---
NC2-2013.pdf | | 140.6 KB | PDF