VEU - Grup de Tractament de la Parla
http://hdl.handle.net/2117/3746
2016-05-24T23:38:47Z
http://hdl.handle.net/2117/87288
Parametric envelope in LPC speech coders
Moreno Bilbao, M. Asunción; Lagunas Hernandez, Miguel A.; Vallverdú Bayés, Francesc
During the last decade, many efforts have been devoted to the relative importance of associated functions, such as the magnitude and phase of Fourier transforms, in image and signal bandwidth reduction. The reported work deals with the importance of the real envelope and the instantaneous frequency in signal analysis/synthesis problems. In this paper the authors show a method to parametrize the envelope and instantaneous frequency of a real signal. This method is very close to spectral analysis methods in the sense that, with an appropriate study, the time domain and the frequency domain can be analyzed in a similar way.
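The abstract's two central quantities, the real envelope and the instantaneous frequency, can be computed for any real signal via the standard analytic-signal (Hilbert transform) construction. The sketch below only illustrates those quantities, not the parametric method of the paper; the function names and the AM test signal are our own:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal of a real sequence via the FFT (one-sided) method."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def envelope_and_inst_freq(x, fs):
    """Real envelope and instantaneous frequency (Hz) of a real signal."""
    z = analytic_signal(x)
    env = np.abs(z)                                  # real envelope
    phase = np.unwrap(np.angle(z))
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)  # instantaneous frequency
    return env, inst_freq

# An amplitude-modulated tone: the envelope and carrier are recovered.
fs = 8000.0
t = np.arange(2048) / fs
x = (1.0 + 0.5 * np.sin(2 * np.pi * 5.0 * t)) * np.cos(2 * np.pi * 440.0 * t)
env, f_inst = envelope_and_inst_freq(x, fs)
```

For the AM tone above, `env` tracks 1 + 0.5 sin(2π·5t) and `f_inst` stays near the 440 Hz carrier, away from the window edges.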
2016-05-24T15:20:00Z
http://hdl.handle.net/2117/87286
The normalized backpropagation and some experiments on speech recognition
Monte Moreno, Enrique; Mariño Acebal, José Bernardo
In this paper we present the theoretical development of the normalized backpropagation, and we compare it with other algorithms that have been presented in the literature.
The algorithm that we propose is based on the idea of normalizing the adaptation step in the gradient search by the variance of the input. This algorithm is simple and gives good results in comparison with other algorithms that accelerate learning, and it has the additional advantage that the parameters are calculated by the algorithm itself, so the user does not have to make several trials to tune the adaptation step and the momentum until the best combination is found.
The task that we designed in order to compare the algorithms is the recognition of digits in the Catalan language, with a database of 1000 items spoken by 10 speakers. The algorithms that we have compared with the normalized backpropagation are those of D. E. Rumelhart and J. L. McClelland, Franzini, Suddhard, Fahlman, and Monte.
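The abstract does not give the update rule, so the following is only a sketch of the core idea it describes, normalizing the adaptation step by the input power, in the form of a single-layer normalized delta rule (essentially the NLMS update); the toy regression data and hyperparameters are hypothetical:

```python
import numpy as np

def normalized_delta_rule(X, y, mu=0.5, eps=1e-8, epochs=50):
    """Single-layer delta rule whose step size is divided by the
    instantaneous input power, making the effective learning rate
    invariant to the input scale (the core idea of step normalization)."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, y):
            err = t - np.dot(w, x)
            # Normalize the gradient step by the input power x.x
            w += mu * err * x / (np.dot(x, x) + eps)
    return w

# Noiseless linear toy problem: the rule recovers the true weights
# without any hand tuning of the learning rate.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = normalized_delta_rule(X, y)
```

Because the step is normalized, the same `mu` works regardless of how the inputs are scaled, which is exactly the tuning burden the abstract says the method removes.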
2016-05-24T15:13:08Z
http://hdl.handle.net/2117/86752
Networked learning and virtual assistance: the pilot test. Cátedra Telefónica UPC: Analysis of the evolution and future trends of the information society
Fuentes Fort, Maria; González Bermúdez, Meritxell; Guardiola Garcia, Marta; Jofre Roca, Lluís; Romeu Robert, Jordi; Vallverdú Bayés, Francesc
We have carried out a pilot test demonstrating the use of language and speech technologies applied to learning English. In general, the activity is expected to bring the student as close as possible to reality (simulation of real environments), integrating different existing tools into a single one in order to offer teachers and students the added value of interaction, feedback (necessary for assessment), and the reuse of existing resources and materials.
2016-05-09T09:28:23Z
http://hdl.handle.net/2117/86453
Looking for efficient and accurate ways of computing the global ionospheric electron density distribution from huge amounts of GNSS observations
Hernández Pajares, Manuel; Juan Zornoza, José Miguel; Sanz Subirana, Jaume; Monte Moreno, Enrique; Aragón Ángel, María Ángeles
In this work the authors explore different potential ways of efficiently and accurately estimating the global number density of ionospheric free electrons from most of the currently available GNSS measurements, taken from ground-based GPS receivers (the IGS network) and LEO on-board GPS receivers (such as the FORMOSAT-3/COSMIC constellation). The approach is basically designed as a bootstrapping procedure: it starts from a first determination of global VTEC maps based on the ground data, passes through an optimal error-decorrelation treatment in the VTEC interpolation and its application to improve the inversion of the GPS occultation measurements, and ends with a final electron-density extrapolation process aided by simple first-principle conditions. The performance against external reference data, including dual-frequency altimeter and ionosonde measurements, is also shown to support the conclusions under different Solar Cycle conditions.
2016-05-02T08:39:12Z
http://hdl.handle.net/2117/86212
Medium Rate Speech Coding with Vector Quantization
Masgrau Gómez, Enrique José; Mariño Acebal, José Bernardo; Moreno Bilbao, M. Asunción
2016-04-26T15:40:05Z
http://hdl.handle.net/2117/86201
Adaptive spectrum estimation with linear constraints
Vázquez Grau, Gregorio; Vallverdú Bayés, Francesc
A general constrained adaptive method is developed and applied to the spectral estimation problem. The method can be used in a wide range of situations; that is, different estimators can be obtained with it. The algorithm is formulated in a variational-approach context, and the resulting nonlinear system is solved with a constrained adaptive method applied to a digitized version of the spectrum. The set of constraints is taken to be a set of known correlation values, which can be located at non-consecutive lags. A generalization of the method is given so that it can be used in a multidimensional framework. As an example, a bidimensional maximum entropy spectrum is presented.
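The variational formulation is not reproduced in this abstract. As an assumption-laden illustration of the constraint structure it describes, fitting a nonnegative digitized spectrum to known correlation values at possibly non-consecutive lags, here is an alternating-projection sketch (not the authors' algorithm):

```python
import numpy as np

def constrained_spectrum(known_lags, known_r, n_freq=256, n_iter=200):
    """Fit a nonnegative discretized spectrum whose inverse DFT matches
    a set of known correlation values at (possibly non-consecutive)
    lags, by alternating projections: (1) an additive correction per
    correlation constraint, (2) clipping to keep the spectrum >= 0."""
    S = np.ones(n_freq)                          # flat initial spectrum
    k = np.arange(n_freq)
    for _ in range(n_iter):
        for lag, r in zip(known_lags, known_r):
            basis = np.cos(2.0 * np.pi * k * lag / n_freq)
            r_hat = np.dot(S, basis) / n_freq    # current correlation at lag
            # project S onto the hyperplane satisfying this constraint
            S += (r - r_hat) * basis * n_freq / np.dot(basis, basis)
        S = np.maximum(S, 0.0)                   # spectra are nonnegative
    return S

# Known correlation values taken from a smooth positive reference spectrum.
n = 256
k = np.arange(n)
S_true = 1.0 + 4.0 * np.exp(-0.5 * ((k - 40.0) / 3.0) ** 2)
lags = [0, 1, 3, 7]
r = [np.dot(S_true, np.cos(2.0 * np.pi * k * l / n)) / n for l in lags]
S = constrained_spectrum(lags, r, n_freq=n)
```

The returned `S` reproduces the known correlations at the given lags while remaining a valid (nonnegative) spectrum; the lags need not be consecutive, matching the situation described in the abstract.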
2016-04-26T13:05:44Z
http://hdl.handle.net/2117/86200
Leveraging online user feedback to improve statistical machine translation
Formiga, Lluís; Barrón-Cedeño, Alberto; Màrquez, Lluís; Henríquez, Carlos A.; Mariño Acebal, José Bernardo
In this article we present a three-step methodology for dynamically improving a statistical machine translation (SMT) system by incorporating human feedback in the form of free edits on the system translations. We target feedback provided by casual users, which is typically error-prone. Thus, we first propose a filtering step to automatically identify the better user-edited translations and discard the useless ones. A second step produces a pivot-based alignment between source and user-edited sentences, focusing on the errors made by the system. Finally, a third step produces a new translation model and combines it linearly with the one from the original system. We perform a thorough evaluation on a real-world dataset collected from the Reverso.net translation service and show that every step in our methodology contributes significantly to improving a general-purpose SMT system. Interestingly, the quality improvement is due not only to the increase in lexical coverage, but also to better lexical selection, reordering, and morphology. Finally, we show the robustness of the methodology by applying it to a different scenario, in which the new examples come from an automatically Web-crawled parallel corpus. Using exactly the same architecture and models again provides a significant improvement in the translation quality of a general-purpose baseline SMT system.
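The third step combines a feedback-derived translation model linearly with the original one. As a toy sketch of that linear combination only (the real models are full SMT phrase tables; the phrase pairs, probabilities, and weight `lam` below are hypothetical):

```python
def interpolate_models(p_orig, p_feedback, lam=0.7):
    """Linear interpolation of two translation models, represented here
    as dictionaries mapping (source, target) phrase pairs to
    probabilities: p = lam * p_orig + (1 - lam) * p_feedback."""
    phrases = set(p_orig) | set(p_feedback)
    return {ph: lam * p_orig.get(ph, 0.0) + (1.0 - lam) * p_feedback.get(ph, 0.0)
            for ph in phrases}

# Hypothetical translation distributions for the source word "bank":
# the user-feedback model shifts mass toward a different sense.
p_orig = {("bank", "banco"): 0.9, ("bank", "orilla"): 0.1}
p_fb = {("bank", "orilla"): 0.6, ("bank", "banco"): 0.4}
p = interpolate_models(p_orig, p_fb, lam=0.7)
```

With `lam = 0.7` the combined probability of `("bank", "banco")` is 0.7·0.9 + 0.3·0.4 = 0.75, and the result is still a distribution over the union of phrase pairs, so phrases seen only in the feedback model enter the combined model with scaled-down mass.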
2016-04-26T12:55:39Z
http://hdl.handle.net/2117/86199
On the Use of Higher Order Information in SVD Based Methods
Vázquez Grau, Gregorio; Vallverdú Bayés, Francesc
2016-04-26T12:50:19Z
http://hdl.handle.net/2117/86195
Cross spectrum ML estimate
Lagunas Hernandez, Miguel A.; Santamaría Pérez, María Eugenia; Gasull Llampallas, Antoni; Moreno Bilbao, M. Asunción
This work reports how to include general concepts of the one-dimensional MLM procedure in a two-channel cross-spectrum estimation problem. It is shown that there is no difficulty in extrapolating the well-known procedures for auto-spectrum estimation to the cross-spectrum, provided the original procedure can be explained as a filter-bank analysis procedure. The resulting cross-spectrum estimate formally satisfies, in terms of resolution and low side-lobe behavior, the excellent features that the normalized maximum likelihood procedure, reported previously by the authors, exhibits in the auto-spectrum problem.
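One plausible reading of the filter-bank extension described above (a sketch of the idea, not necessarily the authors' exact estimator): at each analysis frequency, design the MLM (Capon) filter of each channel from its own autocorrelation matrix, and take the cross-correlation of the two filter outputs. The window length, normalization, and test signal are assumptions:

```python
import numpy as np

def window_matrix(u, m):
    """Stack length-m sliding windows of u as rows (one snapshot per row)."""
    n = len(u) - m + 1
    return np.stack([u[i:i + m] for i in range(n)])

def mlm_cross_spectrum(x, y, m, freqs):
    """Filter-bank reading of the MLM idea for two channels: at each
    frequency, build the MLM (Capon) filter of each channel and take
    the cross-correlation of the two filter outputs."""
    Ux, Uy = window_matrix(x, m), window_matrix(y, m)
    n = Ux.shape[0]
    Rx = Ux.T @ Ux.conj() / n          # auto-correlation matrix, channel x
    Ry = Uy.T @ Uy.conj() / n          # auto-correlation matrix, channel y
    Rxy = Ux.T @ Uy.conj() / n         # cross-correlation matrix
    Rx_inv, Ry_inv = np.linalg.inv(Rx), np.linalg.inv(Ry)
    S = []
    for f in freqs:
        e = np.exp(2j * np.pi * f * np.arange(m))      # steering vector
        wx = Rx_inv @ e / (e.conj() @ Rx_inv @ e)      # MLM filter, channel x
        wy = Ry_inv @ e / (e.conj() @ Ry_inv @ e)      # MLM filter, channel y
        S.append(wx.conj() @ Rxy @ wy)                 # cross of filter outputs
    return np.array(S)

# Two channels sharing a complex tone at normalized frequency 0.2.
rng = np.random.default_rng(0)
t = np.arange(1024)
tone = np.exp(2j * np.pi * 0.2 * t)
x = tone + 0.1 * (rng.normal(size=1024) + 1j * rng.normal(size=1024))
y = tone + 0.1 * (rng.normal(size=1024) + 1j * rng.normal(size=1024))
freqs = np.linspace(0.0, 0.5, 101)
S_xy = mlm_cross_spectrum(x, y, 16, freqs)
```

Each filter is distortionless at its design frequency and suppresses everything else, so `|S_xy|` peaks sharply at the common tone, illustrating the resolution behavior the abstract claims carries over from the auto-spectrum case.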
2016-04-26T12:35:55Z
http://hdl.handle.net/2117/86149
MLNq method for high-resolution arrays
Gasull Llampallas, Antoni; Lagunas Hernandez, Miguel A.; Fernández Rubio, Juan Antonio; Moreno Bilbao, M. Asunción
Spectral analysis techniques are applied to the bearing-estimation problem; each of them yields a different array beamformer. We show here a generalized normalized Maximum Likelihood method which presents high resolution, comparable to that of the singular value decomposition methods, but with a smaller computational load.
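The generalized MLNq beamformer itself is not given in this abstract. For orientation, here is a sketch of the classical MLM (Capon) bearing spectrum that such methods build on, assuming a uniform linear array with half-wavelength spacing and simulated snapshots:

```python
import numpy as np

def capon_spectrum(R, n_sensors, angles_deg, d=0.5):
    """Classical MLM (Capon) bearing spectrum, P = 1 / (a^H R^-1 a),
    for a uniform linear array with spacing d (in wavelengths)."""
    R_inv = np.linalg.inv(R)
    p = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d * np.arange(n_sensors) * np.sin(th))
        p.append(1.0 / np.real(a.conj() @ R_inv @ a))
    return np.array(p)

# Two uncorrelated sources at -20 and 30 degrees on an 8-sensor array.
rng = np.random.default_rng(0)
m, n_snap, d = 8, 400, 0.5
doas = np.deg2rad([-20.0, 30.0])
A = np.exp(-2j * np.pi * d * np.outer(np.arange(m), np.sin(doas)))
s = rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))
noise = 0.1 * (rng.normal(size=(m, n_snap)) + 1j * rng.normal(size=(m, n_snap)))
X = A @ s + noise
R = X @ X.conj().T / n_snap                     # sample covariance matrix
angles = np.arange(-90.0, 90.5, 0.5)
P = capon_spectrum(R, m, angles)
```

`P` shows two sharp peaks near the true bearings; methods like the one in the abstract aim at comparable or better resolution than eigendecomposition-based approaches while avoiding their computational cost.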
2016-04-25T13:36:49Z