Show simple item record

dc.contributor.author: Romero Merino, Enrique
dc.contributor.author: Alquézar Mancho, René
dc.contributor.other: Universitat Politècnica de Catalunya. Departament de Llenguatges i Sistemes Informàtics
dc.date.accessioned: 2012-11-09T10:32:51Z
dc.date.created: 2012-01
dc.date.issued: 2012-01
dc.identifier.citation: Romero, E.; Alquezar, R. Comparing error minimized extreme learning machines and support vector sequential feed-forward neural networks. "Neural networks", January 2012, vol. 25, no. 1, p. 122-129.
dc.identifier.issn: 0893-6080
dc.identifier.uri: http://hdl.handle.net/2117/16872
dc.description.abstract: Recently, error minimized extreme learning machines (EM-ELMs) have been proposed as a simple and efficient approach to build single-hidden-layer feed-forward networks (SLFNs) sequentially. They add random hidden nodes one by one (or group by group) and update the output weights incrementally to minimize the sum-of-squares error on the training set. Other very similar methods that also construct SLFNs sequentially had been reported earlier, with the main difference that their hidden-layer weights are a subset of the data instead of being random. These approaches are referred to as support vector sequential feed-forward neural networks (SV-SFNNs), and they are a particular case of the sequential approximation with optimal coefficients and interacting frequencies (SAOCIF) method. In this paper, it is first shown that EM-ELMs can also be cast as a particular case of SAOCIF. In particular, EM-ELMs can easily be extended to test some number of random candidates at each step and select the best of them, as SAOCIF does. Moreover, it is demonstrated that the cost of computing the optimal output-layer weights in the originally proposed EM-ELMs can be improved if it is replaced by the computation used in SAOCIF. Second, we present the results of an experimental study on 10 benchmark classification and 10 benchmark regression data sets, comparing EM-ELMs and SV-SFNNs, carried out under the same conditions for the two models. Although both models have the same (efficient) computational cost, a statistically significant improvement in the generalization performance of SV-SFNNs over EM-ELMs was found in 12 out of the 20 benchmark problems. (An illustrative sketch of the sequential construction described here follows the metadata record below.)
dc.format.extent: 8 p.
dc.language.iso: eng
dc.subject: Àrees temàtiques de la UPC::Informàtica::Informàtica teòrica
dc.subject.lcsh: Machine learning
dc.subject.lcsh: Approximation theory
dc.subject.lcsh: Regression analysis
dc.subject.lcsh: Support vector machines
dc.subject.other: Error minimized extreme learning machines
dc.subject.other: Sequential approximations
dc.subject.other: Support vector sequential feed-forward neural networks
dc.title: Comparing error minimized extreme learning machines and support vector sequential feed-forward neural networks
dc.type: Article
dc.subject.lemac: Aprenentatge automàtic
dc.subject.lemac: Aproximació, Teoria de l'
dc.subject.lemac: Anàlisi de regressió
dc.contributor.group: Universitat Politècnica de Catalunya. SOCO - Soft Computing
dc.identifier.doi: 10.1016/j.neunet.2011.08.005
dc.description.peerreviewed: Peer Reviewed
dc.relation.publisherversion: http://www.ncbi.nlm.nih.gov/pubmed/21959130
dc.rights.access: Restricted access - publisher's policy
local.identifier.drac: 8834868
dc.description.version: Postprint (published version)
dc.date.lift: 10000-01-01
local.citation.author: Romero, E.; Alquezar, R.
local.citation.publicationName: Neural networks
local.citation.volume: 25
local.citation.number: 1
local.citation.startingPage: 122
local.citation.endingPage: 129
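
The abstract above describes the core procedure shared by EM-ELMs and SV-SFNNs: hidden nodes are added to an SLFN one at a time, the output-layer weights are recomputed to minimize the sum-of-squares training error, and several candidate nodes may be tested at each step with only the best one kept, as SAOCIF does. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the function build_slfn_sequentially and its parameter names are hypothetical, candidate nodes use random input weights (as in EM-ELM, rather than the data-based nodes of SV-SFNNs), and the output weights are re-solved from scratch by ordinary least squares at every step instead of the incremental pseudoinverse update of the original EM-ELM.

# Illustrative sketch only (assumed names and simplified weight update; see note above).
import numpy as np

def build_slfn_sequentially(X, y, max_hidden=50, n_candidates=10, rng=None):
    """Add random hidden nodes one by one, keeping the candidate that most
    reduces the sum-of-squares error on the training set."""
    rng = np.random.default_rng(rng)
    n_samples, n_features = X.shape
    H = np.empty((n_samples, 0))          # hidden-layer output matrix
    hidden_weights, hidden_biases = [], []
    beta, best_err = None, np.inf

    for _ in range(max_hidden):
        best = None
        for _ in range(n_candidates):
            w = rng.standard_normal(n_features)   # random input weights
            b = rng.standard_normal()             # random bias
            h = np.tanh(X @ w + b)                # candidate node activations
            H_cand = np.column_stack([H, h])
            # Optimal output-layer weights for this candidate (least squares).
            beta_cand, *_ = np.linalg.lstsq(H_cand, y, rcond=None)
            err = np.sum((H_cand @ beta_cand - y) ** 2)
            if err < best_err:
                best, best_err = (w, b, h, beta_cand), err
        if best is None:                          # no candidate improved the error
            break
        w, b, h, beta = best
        hidden_weights.append(w)
        hidden_biases.append(b)
        H = np.column_stack([H, h])

    return np.array(hidden_weights), np.array(hidden_biases), beta

# Toy usage: fit a noisy 1-D regression problem.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(0).standard_normal(200)
W, b, beta = build_slfn_sequentially(X, y, max_hidden=20, n_candidates=5, rng=0)
y_hat = np.tanh(X @ W.T + b) @ beta
print("training MSE:", np.mean((y_hat - y) ** 2))

Replacing the random candidate generator with one that draws hidden-layer weights from the training samples would turn the same loop into an SV-SFNN-style construction, which is the comparison studied in the paper.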

