Show simple item record

dc.contributor: Ayguadé Parra, Eduard
dc.contributor: Llosa Espuny, José Francisco
dc.contributor.author: Noguera Vall, Ferran
dc.contributor.other: Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors
dc.date.accessioned: 2021-04-30T09:09:23Z
dc.date.available: 2021-04-30T09:09:23Z
dc.date.issued: 2021-01
dc.identifier.uri: http://hdl.handle.net/2117/344886
dc.description.abstract: In recent years, neural networks have grown in popularity, largely thanks to advances in the field of high-performance computing. Nevertheless, some factors still limit the use of neural networks; two of them are storage requirements and computational cost. The aim of this project is to radically reduce storage demand and to provide direction for accelerating the execution of neural networks. Within the scope of this thesis, two compression algorithms have been developed. They share a common basis: both exploit the error tolerance of neural networks, a property that allows the weight matrix to be divided into blocks, simplifying the problem while barely affecting accuracy. The first algorithm groups the weights inside every block using different clustering techniques: the arithmetic mean and K-Means. The standard deviation of each block, among other criteria, is used to decide which clustering method to apply, and the user can specify a trade-off between accuracy and compression. This method underperformed, obtaining a compression rate of 10.57 for AlexNet, which is far from state of the art; the main issue is that insignificant weights are merged with significant ones, causing a considerable drop in accuracy. The second algorithm addresses this accuracy loss by first pruning all unimportant weights and then applying quantization. For both steps, pruning and quantization, two options have been explored that are effective for different kinds of neural networks, and one of the possible combinations is selected by trial and error. The first pruning technique focuses on removing as many weights as possible, while the second takes blocks into account to a greater extent. The two types of quantization allow three and five values per block, respectively. This algorithm performed very well, obtaining a compression rate of 57.15 for AlexNet with minimal accuracy loss. (An illustrative sketch of the block-wise clustering idea follows the record below.)
dc.language.iso: eng
dc.publisher: Universitat Politècnica de Catalunya
dc.subject.lcsh: Neural networks (Computer science)
dc.subject.lcsh: Machine learning
dc.subject.lcsh: Artificial intelligence
dc.subject.other: aprenentatge profund
dc.subject.other: compressió de la matriu de pesos
dc.subject.other: compressió de xarxes neuronals
dc.subject.other: algoritmes de clustering
dc.subject.other: quantització
dc.subject.other: K-Means
dc.subject.other: mitjana aritmètica
dc.subject.other: acceleració de xarxes neuronals
dc.subject.other: consum d'energia
dc.subject.other: xarxes neuronals convolucionals
dc.subject.other: capa densament connectada
dc.subject.other: deep learning
dc.subject.other: neural networks
dc.subject.other: weight matrix compression
dc.subject.other: neural network compression
dc.subject.other: clustering algorithms
dc.subject.other: quantization
dc.subject.other: arithmetic mean
dc.subject.other: matrix compression
dc.subject.other: AlexNet
dc.subject.other: LeNet
dc.subject.other: CIFAR-10
dc.subject.other: MNIST
dc.subject.other: ImageNet
dc.subject.other: artificial intelligence
dc.subject.other: convolutional neural networks
dc.subject.other: fully-connected layer
dc.title: Neural network compression
dc.type: Master thesis
dc.subject.lemac: Xarxes neuronals (Informàtica)
dc.subject.lemac: Aprenentatge automàtic
dc.subject.lemac: Intel·ligència artificial
dc.identifier.slug: 156456
dc.rights.access: Open Access
dc.date.updated: 2021-02-05T07:29:28Z
dc.audience.educationlevel: Màster
dc.audience.mediator: Facultat d'Informàtica de Barcelona
dc.audience.degree: MÀSTER UNIVERSITARI EN INNOVACIÓ I RECERCA EN INFORMÀTICA (Pla 2012)
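
The abstract describes a block-wise clustering approach (the first algorithm): the weight matrix is split into blocks, and each block is compressed either with its arithmetic mean or with K-Means, depending on the block's standard deviation. The following is a minimal illustrative sketch of that idea only; the block size, threshold, cluster count, and function names are assumptions made for illustration, not the thesis implementation.

import numpy as np
from sklearn.cluster import KMeans

def compress_block(block, std_threshold=0.05, n_clusters=4):
    """Replace a block's weights with a small set of shared values."""
    if block.std() < std_threshold:
        # Low-variance block: the arithmetic mean of the block is enough.
        return np.full_like(block, block.mean())
    # Higher-variance block: cluster the weights and keep only the centroids.
    flat = block.reshape(-1, 1)
    km = KMeans(n_clusters=min(n_clusters, len(flat)), n_init=10).fit(flat)
    return km.cluster_centers_[km.labels_].reshape(block.shape)

def compress_weight_matrix(W, block_size=16, **kwargs):
    """Apply block-wise compression to a 2-D weight matrix."""
    W = W.copy()
    rows, cols = W.shape
    for i in range(0, rows, block_size):
        for j in range(0, cols, block_size):
            W[i:i + block_size, j:j + block_size] = compress_block(
                W[i:i + block_size, j:j + block_size], **kwargs)
    return W

# Usage: compress a random 64x64 matrix standing in for a layer's weights.
W = np.random.randn(64, 64).astype(np.float32)
W_c = compress_weight_matrix(W)
print("distinct values before:", np.unique(W).size, "after:", np.unique(W_c).size)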

