Learning with heterogeneous neural networks
Document type: Part of book or chapter of book
Publisher: Nova Science Publishers, Inc., New York
Rights access: Restricted access - publisher's policy
This chapter studies a class of neuron models that compute a user-defined similarity function between inputs and weights. The neuron transfer function is formed by composing an adapted logistic function with the quasi-linear mean of the partial input-weight similarities. The neuron model can deal directly with mixtures of continuous and discrete quantities, among other data types, and makes provision for missing values. An artificial neural network built from these neuron models is trained with a breeder genetic algorithm until convergence. A number of experiments are carried out on several real-world problems from very different application domains, described by mixtures of variables of distinct types and possibly containing missing values. This heterogeneous network is compared to a standard radial basis function network and to a multi-layer perceptron, and is shown to learn with superior generalization ability at a comparable computational cost. A further important advantage of the resulting neural solutions is the enhanced interpretability of the learned weights, which can be read as weighted similarities to prototypes.
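As a rough illustration of the mechanism the abstract describes, the Python sketch below implements one such heterogeneous neuron. It is a minimal sketch under stated assumptions: Gower-style partial similarities (range-normalised difference for real inputs, overlap for categorical ones), a power mean with exponent p as the quasi-linear aggregation, and a logistic squashing centred at 0.5. The names (gower_partial, heterogeneous_neuron) and all parameter choices are illustrative, not taken from the chapter.

import math

def gower_partial(x, w, kind, lo=0.0, hi=1.0):
    """Hypothetical partial similarity in [0, 1] between one input
    and one weight; None marks a missing value (Gower-style)."""
    if x is None or w is None:
        return None                            # missing pair: skip it later
    if kind == "real":
        return 1.0 - abs(x - w) / (hi - lo)    # range-normalised difference
    if kind == "categorical":
        return 1.0 if x == w else 0.0          # simple overlap measure
    raise ValueError(f"unknown variable kind: {kind}")

def heterogeneous_neuron(x, w, kinds, ranges, p=1.0, k=4.0):
    """Aggregate the partial similarities with a quasi-linear (power)
    mean, then squash with an adapted logistic centred at 0.5."""
    sims = [gower_partial(xi, wi, kind, *rng)
            for xi, wi, kind, rng in zip(x, w, kinds, ranges)]
    sims = [s for s in sims if s is not None]  # provision for missing values
    if not sims:
        return 0.5                             # no evidence: neutral output
    mean = (sum(s ** p for s in sims) / len(sims)) ** (1.0 / p)  # p > 0 assumed
    return 1.0 / (1.0 + math.exp(-k * (mean - 0.5)))  # k sets the steepness

# One heterogeneous input (real + categorical, one value missing),
# compared against the neuron's prototype weights:
x = [0.7, "red", None]
w = [0.5, "red", "blue"]
out = heterogeneous_neuron(x, w,
                           kinds=["real", "categorical", "categorical"],
                           ranges=[(0.0, 1.0), (None, None), (None, None)])
print(f"activation = {out:.3f}")   # close to 1: input resembles the prototype

With p = 1 the aggregation reduces to a plain arithmetic mean; other exponents interpolate between min-like and max-like behaviour, which is one common reading of "quasi-linear mean". The interpretability claimed in the abstract follows directly: the weight vector w acts as a prototype, and the activation is a graded similarity to it.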
Citation: Belanche, Ll. "Learning with heterogeneous neural networks". In: New developments in artificial neural networks research. New York: Nova Science Publishers, Inc., 2011, p. 257-276.