Show simple item record

dc.contributor.author	Pan, Junting
dc.contributor.author	Sayrol Clols, Elisa
dc.contributor.author	Giró Nieto, Xavier
dc.contributor.author	McGuinness, Kevin
dc.contributor.author	O'Connor, Noel
dc.contributor.other	Universitat Politècnica de Catalunya. Departament de Teoria del Senyal i Comunicacions
dc.date.accessioned	2016-12-14T15:11:49Z
dc.date.issued	2016
dc.identifier.citation	Pan, J., Sayrol, E., Giro, X., McGuinness, K., O'Connor, N. Shallow and deep convolutional networks for saliency prediction. In: IEEE Conference on Computer Vision and Pattern Recognition. "29th IEEE Conference on Computer Vision and Pattern Recognition: 26 June-1 July 2016: Las Vegas, Nevada". Las Vegas, Nevada: Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 598-606.
dc.identifier.isbn	978-1-4673-8852-8
dc.identifier.uri	http://hdl.handle.net/2117/98248
dc.description.abstract	The prediction of salient areas in images has traditionally been addressed with hand-crafted features based on neuroscience principles. This paper, however, addresses the problem with a completely data-driven approach by training a convolutional neural network (convnet). The learning process is formulated as the minimization of a loss function that measures the Euclidean distance between the predicted saliency map and the provided ground truth. The recent publication of large saliency prediction datasets has provided enough data to train end-to-end architectures that are both fast and accurate. Two designs are proposed: a shallow convnet trained from scratch, and another, deeper solution whose first three layers are adapted from another network trained for classification. To the authors' knowledge, these are the first end-to-end CNNs trained and tested for the purpose of saliency prediction. (An illustrative training sketch follows the metadata fields below.)
dc.format.extent	9 p.
dc.language.iso	eng
dc.publisher	Institute of Electrical and Electronics Engineers (IEEE)
dc.subject	Àrees temàtiques de la UPC::So, imatge i multimèdia::Creació multimèdia::Imatge digital
dc.subject	Àrees temàtiques de la UPC::Enginyeria de la telecomunicació::Processament del senyal::Reconeixement de formes
dc.subject.lcsh	Computer vision
dc.subject.lcsh	Pattern recognition systems
dc.subject.other	Computer vision
dc.subject.other	Convolution
dc.subject.other	Forecasting
dc.subject.other	Neural networks
dc.title	Shallow and deep convolutional networks for saliency prediction
dc.type	Conference lecture
dc.subject.lemac	Visió per ordinador
dc.subject.lemac	Reconeixement de formes (Informàtica)
dc.contributor.group	Universitat Politècnica de Catalunya. GPI - Grup de Processament d'Imatge i Vídeo
dc.identifier.doi	10.1109/CVPR.2016.71
dc.description.peerreviewed	Peer Reviewed
dc.relation.publisherversion	http://ieeexplore.ieee.org/document/7780440/
dc.rights.access	Restricted access - publisher's policy
local.identifier.drac	18719964
dc.description.version	Postprint (published version)
dc.relation.projectid	info:eu-repo/grantAgreement/MINECO//TEC2013-43935-R/ES/PROCESADO DE INFORMACION HETEROGENEA Y SEÑALES EN GRAFOS PARA BIG DATA. APLICACION EN CRIBADO DE ALTO RENDIMIENTO, TELEDETECCION, MULTIMEDIA Y HCI./
dc.date.lift	10000-01-01
local.citation.author	Pan, J.; Sayrol, E.; Giro, X.; McGuinness, K.; O'Connor, N.
local.citation.contributor	IEEE Conference on Computer Vision and Pattern Recognition
local.citation.pubplace	Las Vegas, Nevada
local.citation.publicationName	29th IEEE Conference on Computer Vision and Pattern Recognition: 26 June-1 July 2016: Las Vegas, Nevada
local.citation.startingPage	598
local.citation.endingPage	606
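
Illustrative training sketch. The abstract describes training a convnet end to end by minimizing a Euclidean loss between the predicted saliency map and the ground-truth map. The following Python/PyTorch snippet is a minimal sketch of that idea only, not the paper's actual architecture: the layer sizes, the 96x96 resolution, and the use of nn.MSELoss as the Euclidean-style objective are assumptions made here for demonstration.

# Minimal sketch (assumed architecture, not the paper's): a small convnet maps an
# RGB image to a single-channel saliency map and is trained with a Euclidean (MSE)
# loss against the ground-truth map, as the abstract describes.
import torch
import torch.nn as nn

class ShallowSaliencyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),   # 1-channel saliency map
            nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False),
        )

    def forward(self, x):
        return self.features(x)

model = ShallowSaliencyNet()
criterion = nn.MSELoss()   # Euclidean-style distance between prediction and ground truth
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# One illustrative training step on random tensors (stand-ins for a saliency dataset).
images = torch.rand(4, 3, 96, 96)    # input images
gt_maps = torch.rand(4, 1, 96, 96)   # ground-truth saliency maps
optimizer.zero_grad()
loss = criterion(model(images), gt_maps)
loss.backward()
optimizer.step()

The deeper variant described in the abstract would instead initialize its first convolutional layers from a network pre-trained for image classification before fine-tuning on saliency data.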

