Journal articles
http://hdl.handle.net/2117/3881
http://hdl.handle.net/2117/100186
Un modelo para diseñar actividades de aprendizaje en la enseñanza de ingenierías
Otero Calviño, Beatriz; Rodríguez Luna, Eva
At present, our students are quite unmotivated when it comes to attending class and working. This makes it necessary for teachers to introduce changes in their classes, leading them to design learning activities as a fundamental basis of their teaching. This paper proposes a model for designing learning activities in basic engineering courses. The proposed activities aim to motivate students, promote their learning, and strengthen the generic competences of oral communication, teamwork and autonomous learning.
http://hdl.handle.net/2117/99596
Criterios para practicas de evaluación de calidad
Cadenato, Ana; Martínez Martínez, María del Rosario; Gallego Fernández, María Isabel; Amante García, Beatriz; Jordana Barnils, José; Farrerons Vidal, Óscar; Isalgue Buxeda, Antonio; Fabregat Fillet, Jaume
The GRAPA innovation group (GRupo de la evaluación de la Práctica Académica) of the RIMA project at the Universitat Politècnica de Catalunya has drawn up a set of criteria, in the form of a rubric, for good assessment practices, consistent with the integration and assessment of competences required by the implementation of the new degrees taught in Spanish higher education. This paper shows that, moreover, any assessment activity meeting these criteria at the highest level of demand constitutes a quality activity, demonstrating that quality courses are achievable, a goal shared by institutions, teaching staff and students alike. The paper also presents some examples of these criteria.
http://hdl.handle.net/2117/98253
Efficient coding of homogeneous textures using stochastic vector quantization and linear prediction
Gimeno, D; Torres Urgell, Lluís
Vector quantisation (VQ) has been extensively used as an effective image coding technique. One of the most important steps in the whole process is the design of the codebook. The codebook is generally designed using the LBG algorithm, which uses a large training set of empirical data that is statistically representative of the images to be encoded. The LBG algorithm, although quite effective for practical applications, is computationally very expensive, and the resulting codebook has to be recalculated each time the type of image to be encoded changes. Stochastic vector quantisation (SVQ) provides an alternative way to generate the codebook. In SVQ, a model for the image is computed first, and the codewords are then generated according to this model rather than according to some specific training sequence. The SVQ approach gives good coding performance for moderate compression ratios and different types of images. On the other hand, in the context of synthetic and natural hybrid coding (SNHC), there is always a need for techniques that can provide very high compression and high quality for homogeneous textures. A new stochastic vector quantisation approach using linear prediction is presented, able to provide very high compression ratios with graceful degradation for homogeneous textures. Owing to the specific construction of the method, there is no block effect in the synthesised image. Results, implementation details, generation of the bit stream and comparisons with the verification model of MPEG-4 are presented, which prove the validity of the approach. The technique has been proposed as a still image coding technique in the SNHC standardisation group of MPEG.
http://hdl.handle.net/2117/97955
Analysis and synthesis of textures through the inference of Boolean functions
Domínguez Pumar, Manuel; Torres Urgell, Lluís
This work deals with Boolean functions of non-linear and linear basis. The Boolean random functions of non-linear basis were proposed by Serra (1988, 1989). These functions are generated through a Poisson point process upon which a family of independent functions, called germ functions, are installed. This installation process consists in taking the Sup (supremum), point by point, of the result of placing the germ functions upon the points of the Poisson process. Boolean functions of linear basis, which are defined and proposed in this paper, are generated in the same manner as the non-linear functions but with a modified installation process: instead of the pointwise Sup, the pointwise sum is taken. The process is then equivalent to the convolution of a Poisson train of deltas with a random pulse. The aim of this paper is to analyse textures through these two models, in order to infer their genetics from a given realisation of the process, i.e., to analyse the complete statistics of the germ functions and the density of the associated Poisson process in order to characterise a given texture. Experiments and results are provided which show that real textures can be understood as realisations of Boolean random functions (of linear and non-linear basis), and that it has been possible to infer the genetics of unidimensional Boolean random functions of linear basis with the algorithm proposed here. It has also been possible with non-linear Boolean functions, but only by imposing two restrictive conditions on the genetics of the realisation.
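The two installation rules described in the abstract (pointwise supremum for the non-linear basis, pointwise sum for the linear basis) can be illustrated with a minimal 1-D simulation. This is a sketch under illustrative assumptions (triangular germ functions, exponentially distributed heights, a Bernoulli approximation of the Poisson point process), not the inference algorithm of the paper:

```python
import numpy as np

def boolean_function_1d(length=256, density=0.05, germ_width=9,
                        linear=False, seed=0):
    """Sketch of a 1-D Boolean random function: germs (here triangular
    pulses of random height) are installed on Poisson-like points either
    by pointwise supremum (non-linear basis) or by pointwise sum (linear
    basis, equivalent to convolving a Poisson delta train with a random
    pulse). Density, germ shape and sizes are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    points = np.flatnonzero(rng.random(length) < density)  # Poisson-like
    germ = 1.0 - np.abs(np.linspace(-1, 1, germ_width))    # triangle pulse
    half = germ_width // 2
    out = np.zeros(length)
    for p in points:
        h = rng.exponential(1.0)                 # random germ height
        lo, hi = max(0, p - half), min(length, p + half + 1)
        seg = h * germ[lo - (p - half): hi - (p - half)]
        if linear:
            out[lo:hi] += seg                              # sum: linear basis
        else:
            out[lo:hi] = np.maximum(out[lo:hi], seg)       # Sup: non-linear
    return out
```

With the same seed, the linear-basis realisation dominates the non-linear one pointwise, since a sum of non-negative overlapping germs is at least their supremum.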
http://hdl.handle.net/2117/97938
A region-based subband coding scheme
Casas Pla, Josep Ramon; Torres Urgell, Lluís
This paper describes a region-based subband coding scheme intended for efficient representation of the visual information contained in image regions of arbitrary shape. QMF filters are separately applied inside each region for the analysis and synthesis stages, using a signal-adaptive symmetric extension technique at region borders. The frequency coefficients corresponding to each region are identified over the various subbands of the decomposition, so that the coding steps — namely, bit-allocation, quantization and entropy coding — can be performed independently for each region.
Region-based subband coding exploits the possible homogeneity of the region contents by distributing the available bitrate not only in the frequency domain but also in the spatial domain, i.e. among the considered regions. The number of bits assigned to the subbands is optimized region by region for the whole image, by means of a rate-distortion optimization algorithm. Improved compression efficiency is obtained thanks to the local adaptivity of the bit allocation to the spectral contents of the different regions. This compensates for the overhead data spent in the coding of contour information.
As the subband coefficients obtained for each region are coded as separate data units, the content-based functionalities required for the future MPEG-4 video coding standard can be readily handled. For instance, content-based scalability is possible by simply imposing user-defined constraints on the bit assignment in some regions.
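The region-by-region rate-distortion bit allocation can be sketched with a greedy marginal-return loop. This is an illustrative assumption, not the paper's exact optimization algorithm; it uses the standard high-rate rule that one extra bit quarters a Gaussian subband's distortion:

```python
def allocate_bits(subband_vars, total_bits):
    """Greedy rate-distortion bit allocation sketch: repeatedly grant one
    bit to the (region, subband) pair where it buys the largest distortion
    drop. subband_vars is a list of per-region subband variance lists.
    Assumes the high-rate model D(b) = variance / 4**b, so adding a bit
    reduces distortion by 0.75 * current distortion."""
    bits = [[0] * len(v) for v in subband_vars]
    dist = [list(v) for v in subband_vars]   # current per-subband distortion
    for _ in range(total_bits):
        # pick the subband, in whichever region, with the largest gain
        _, r, s = max((dist[r][s] * 0.75, r, s)
                      for r in range(len(dist))
                      for s in range(len(dist[r])))
        bits[r][s] += 1
        dist[r][s] /= 4.0
    return bits
```

Because the budget is shared across regions, a high-variance subband of one region can outbid every subband of a flatter region, which is exactly the spatial-domain distribution of bitrate the abstract describes.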
http://hdl.handle.net/2117/97894
Stochastic vector quantization of images
Torres Urgell, Lluís; Casas Pla, Josep Ramon; Arias, E
One of the most important steps in the vector quantization of images is the design of the codebook. The codebook is generally designed using the LBG algorithm, that is in essence a clustering algorithm which uses a large training set of empirical data that is statistically representative of the image to be quantized. The LBG algorithm, although quite effective for practical applications, is computationally very expensive and the resulting codebook has to be recalculated each time the type of image to be encoded changes. One alternative to the generation of the codebook, called stochastic vector quantization, is presented in this paper. Stochastic vector quantization (SVQ) is based on the generation of the codebook according to some previous model defined for the image to be encoded. The well-known AR model has been used to model the image in the current implementations of the technique, and has shown good performance in the overall scheme. To show the merit of the technique in different contexts, stochastic vector quantization is discussed and applied to both pixel-based and segmentation-based image coding schemes.
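The contrast with LBG can be sketched as follows: instead of clustering a training set, draw codewords from a model fitted to the image. A minimal sketch assuming a first-order AR model (the paper uses an AR model, but its exact fitting and sampling details are assumptions here):

```python
import numpy as np

def svq_codebook(image, codebook_size=64, block=4, seed=0):
    """Sketch of stochastic VQ codebook generation: estimate a simple
    AR(1) model from the image, then draw codewords from that model
    instead of clustering a training set as LBG would. The AR(1) choice
    and raster-scan sampling are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    x = image.astype(float).ravel()
    x = x - x.mean()
    # AR(1) coefficient from the lag-1 autocorrelation
    a = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
    sigma = np.sqrt(np.var(x) * (1.0 - a ** 2))   # innovation std
    dim = block * block
    codebook = np.empty((codebook_size, dim))
    for k in range(codebook_size):
        w = np.empty(dim)
        w[0] = rng.normal(0.0, np.sqrt(np.var(x)))
        for n in range(1, dim):                    # sample the AR(1) process
            w[n] = a * w[n - 1] + rng.normal(0.0, sigma)
        codebook[k] = w
    return codebook
```

The practical appeal matches the abstract: the codebook is regenerated from a handful of model parameters whenever the image type changes, rather than re-running LBG on a new training set.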
http://hdl.handle.net/2117/97723
Region-based video coding using mathematical morphology
Salembier Clairon, Philippe Jean; Torres Urgell, Lluís; Meyer, F; Gu, C
This paper presents a region-based coding algorithm for video sequences. The coding approach involves a time-recursive segmentation relying on pixel homogeneity, a region-based motion estimation, and motion-compensated contour and texture coding. This algorithm is mainly devoted to very low bit rate video coding applications. One of the important features of the approach is that no assumption is made about the sequence content. Moreover, the algorithm structure leads to a scalable coding process giving various levels of quality and bit rates. The coding as well as the segmentation are controlled to regulate the bit stream. Finally, the interest of morphological tools in the context of region-based coding is extensively reviewed.
http://hdl.handle.net/2117/97655
An improvement on codebook search for vector quantization
Torres Urgell, Lluís; Huguet, J
This paper presents a simple but effective algorithm to speed up the codebook search in a vector quantization scheme when an MSE criterion is used. A considerable reduction in the number of operations is achieved. The algorithm was originally designed for image vector quantization, in which the samples of the image signal (pixels) are positive, although it can be used with any positive-negative signal with only minor modifications.
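The abstract does not spell out the algorithm, but a classic acceleration in the same spirit is partial-distance search, which abandons the MSE accumulation for a codeword as soon as it exceeds the best distortion found so far. This is a generic sketch, not necessarily the paper's positivity-exploiting method:

```python
def pds_search(codebook, vector):
    """Partial-distance search for the minimum-MSE codeword: accumulate
    the squared error term by term and break out early once the running
    sum can no longer beat the best distortion seen so far. Returns the
    index of the nearest codeword."""
    best_idx, best_d = 0, float("inf")
    for i, cw in enumerate(codebook):
        d = 0.0
        for c, v in zip(cw, vector):
            d += (c - v) ** 2
            if d >= best_d:        # early exit: cannot beat current best
                break
        else:                      # loop completed: new best codeword
            best_d, best_idx = d, i
    return best_idx
```

The result is identical to an exhaustive search; only the number of multiply-accumulate operations drops, most sharply when the codebook is ordered so a good match is found early.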
http://hdl.handle.net/2117/97643
Coding of details in very low bit-rate video systems
Casas Pla, Josep Ramon; Torres Urgell, Lluís
In this paper, the importance of including small image features at the initial levels of a progressive second generation video coding scheme is presented. It is shown that a number of meaningful small features, called details, should be coded even at very low bit-rates, in order to match their perceptual significance to the human visual system. We propose a method for extracting, perceptually selecting and coding visual details in a video sequence using morphological techniques. Its application in the framework of a multiresolution segmentation-based coding algorithm yields better results than pure segmentation techniques at higher compression ratios, provided the selection step meets some main subjective requirements. Details are extracted and coded separately from the region structure and included in the reconstructed images at a later stage. The idea of considering the local background of a given detail for its perceptual selection breaks the concept of […]
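One plausible way to isolate small details from the region structure, in line with the morphological techniques the abstract mentions, is a white top-hat (signal minus its morphological opening), which keeps only features narrower than the structuring element. A 1-D sketch with illustrative sizes; this is an assumption, not the paper's exact operator:

```python
import numpy as np

def top_hat_1d(signal, width=5):
    """White top-hat of a 1-D signal with a flat structuring element:
    opening = erosion (sliding min) followed by dilation (sliding max),
    then top-hat = signal - opening. Bright features narrower than
    `width` survive; broad structure is removed. Width is illustrative."""
    n = len(signal)
    half = width // 2
    pad = np.pad(signal, half, mode="edge")
    eroded = np.array([pad[i:i + width].min() for i in range(n)])
    pad2 = np.pad(eroded, half, mode="edge")
    opened = np.array([pad2[i:i + width].max() for i in range(n)])
    return signal - opened
```

A narrow spike on a flat background passes through unchanged, while any plateau wider than the element is cancelled, which is the separation of "details" from regions that the scheme relies on.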
http://hdl.handle.net/2117/91379
SURF-based mammalian species identification system
Otero Calviño, Beatriz; Rodríguez Luna, Eva; Ventura Queija, Jacint
The development of tools for the automated identification of species will reduce the burden of routine identifications conducted by many biologists. The design of these tools is difficult because it depends on the proper extraction of the most relevant characteristics of the image, namely those that unequivocally identify its species. The appropriate software for such extraction does not exist in all cases. This work proposes an architecture for the automated identification of the skulls of different mammalian species belonging to the order Eulipotyphla, which includes shrews, moles and hedgehogs, among others. Our system determines nine species of this mammalian group using existing object recognition techniques, identifying them based on a set of images of the skulls of these species in a digital image database. To validate the proposed architecture, mobile and web applications have been developed. These applications use the image recognition technology provided by the OpenCV library for the detection of the keypoints and matching of the images. The application extracts the descriptor of the input image using the Speeded-Up Robust Features (SURF) method and compares this descriptor against the image database for matching using a Euclidean distance based on the nearest-neighbor approach. The initial tests have achieved a reliability of 98%.
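The matching step described above can be sketched without OpenCV: given SURF-like descriptor arrays, find the two nearest database descriptors by Euclidean distance and keep the match only if it passes a ratio test. The ratio test and its 0.75 threshold are illustrative assumptions; the abstract states only Euclidean nearest-neighbor matching:

```python
import numpy as np

def match_descriptors(query, database, ratio=0.75):
    """Sketch of nearest-neighbor descriptor matching (the actual system
    extracts SURF descriptors with OpenCV; here descriptors are plain
    arrays). For each query descriptor, find the two closest database
    descriptors by Euclidean distance and accept the match only if the
    nearest is clearly closer than the runner-up (ratio test, an assumed
    refinement). Returns (query_index, database_index) pairs."""
    matches = []
    for qi, q in enumerate(query):
        d = np.linalg.norm(database - q, axis=1)   # distances to all entries
        i1, i2 = np.argsort(d)[:2]                 # two nearest neighbors
        if d[i1] < ratio * d[i2]:
            matches.append((qi, int(i1)))
    return matches
```

A species decision could then be taken, for instance, by majority vote over the database images that accumulate the most accepted matches.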