Erasmus Mundus Master's Degree in Research on Information and Communication Technologies (MERIT) (2009 curriculum)
http://hdl.handle.net/2099.1/7029

3D Face and Object Reconstruction for Mobile Applications
http://hdl.handle.net/2117/384642
Irurueta Carro, Alberto
The purpose of this document is to develop a 3D reconstruction algorithm for fixed scenes containing opaque objects with Lambertian surfaces.
Master's thesis entered manually and set to restricted access because it was confidential until 18/03/2015. It was not deposited at the time.
Characterization of errors and noises in MEMS inertial sensors using Allan variance method
http://hdl.handle.net/2117/103849
Barreda Pupo, Leslie
This thesis addresses the problem of characterizing and identifying the noise sources inherent to inertial sensors such as gyroscopes and accelerometers, which are embedded in inertial navigation systems, with the purpose of estimating the errors in the obtained position. The Allan Variance (AVAR) method for characterizing and identifying the noise sources of these sensors has been analysed. The practical implementation of the AVAR method for noise characterization has been carried out on an experimental setup using data from the IMU 3DM-GX3-25 and the Matlab environment. From the AVAR plots it was possible to identify the Angle Random Walk (ARW) and the Bias Instability in the gyroscopes, and the Velocity Random Walk and the Bias Instability in the accelerometers. A denoising process was also performed using the Discrete Wavelet Transform and the Median Filter. After filtering, the AVAR plots showed that the ARW was almost removed or attenuated using wavelets, whereas the Median Filter did not yield good results.
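The Allan-variance analysis described above can be sketched generically as follows. This is a minimal Python/NumPy implementation of the standard overlapping estimator, not the thesis's Matlab code; the function name and interface are illustrative.

```python
import numpy as np

def allan_variance(omega, dt, m_list):
    """Overlapping Allan variance of a rate signal (e.g. gyro output in rad/s).

    omega  : 1-D array of rate samples
    dt     : sample period in seconds
    m_list : list of cluster sizes (number of samples per averaging window)
    Returns (taus, avars) for each usable cluster size.
    """
    theta = np.cumsum(omega) * dt  # integrate rate to angle
    n = len(theta)
    taus, avars = [], []
    for m in m_list:
        if 2 * m >= n:
            break  # not enough data for this cluster size
        tau = m * dt
        # second difference of the integrated signal over clusters of m samples
        d = theta[2 * m:] - 2 * theta[m:n - m] + theta[:n - 2 * m]
        avar = np.sum(d ** 2) / (2 * tau ** 2 * (n - 2 * m))
        taus.append(tau)
        avars.append(avar)
    return np.array(taus), np.array(avars)
```

For pure white rate noise (the Angle Random Walk case identified in the thesis), the Allan variance falls off as 1/tau, which shows up as the characteristic -1/2 slope of the Allan deviation on a log-log plot.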
Implementation of a portfolio construction method
http://hdl.handle.net/2117/99808
Néba, Dieket Marcellin
Portfolio management is a theory that establishes a set of mathematical tools and concepts with the goal of maximizing investor wealth. Portfolio theory has evolved and seen many competing theories. In this thesis, we review several techniques of portfolio management and trading. We then implement two novel money management techniques that use Google Trends to forecast market behaviour. The first method builds a stock portfolio based on the volume of searches for a given term in the Google search engine; it determines the weights of the assets in the portfolio with a power-law formula applied to Google Trends outputs for company-related search terms. The second method builds trading decisions upon the differential of the Google Trends output for finance-related terms over a given time frame. We then compare the Google Trends portfolio method against the Dow Jones Industrial Average (DJIA) benchmark on metrics such as standard deviation, Sharpe ratio, and the evolution of the return on investment. We show that performance strongly depends on the value of the shape parameter used in the power-law formula. We then develop a new financial product derived from the second technique. This model differs from its original form in the way the assets are managed: instead of executing a pure hedge, we hold on to some assets of the portfolio in order to take advantage of the state of the economy suggested by the Google Trends output, and in turn sell part of the assets when the horizon is cloudy. This technique beats both the Dow Jones Industrial Average and the original technique in terms of return on investment. It also outperforms the Google Trends portfolio based on Sharpe-ratio maximization.
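The abstract does not give the exact power-law formula, but a weighting of the described kind can be sketched as follows; the function name, interface, and normalization are assumptions for illustration only.

```python
import numpy as np

def power_law_weights(search_volumes, alpha):
    """Portfolio weights proportional to search volume raised to a shape
    parameter alpha, normalized to sum to one (illustrative sketch)."""
    v = np.asarray(search_volumes, dtype=float)
    raw = v ** alpha
    return raw / raw.sum()
```

With alpha = 0 this degenerates to an equal-weight portfolio, and larger alpha concentrates weight on the most-searched companies, which is consistent with the reported sensitivity of performance to the shape parameter.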
Speaker tracking system using speaker boundary detection
http://hdl.handle.net/2117/99799
Khan, Umair
This thesis presents research conducted in the area of Speaker Recognition, concerned with the automatic detection and tracking of target speakers in meetings, conferences, telephone conversations, and radio and television broadcasts. A Speaker Tracking system is developed here in collaboration with the Center for Language and Speech Technologies and Applications (TALP) at UPC. The main objective of this Speaker Tracking system is to answer the question: when does the target speaker speak? The system uses training speech data for the target speaker in the pre-enrollment stage. Three main modules have been designed for this Speaker Tracking system. In the first module, an energy-based Speech Activity Detection is applied to select the speech parts of the audio. In the second module, the audio is segmented according to the speaker turning points. In the last module, a Speaker Verification is implemented in which the target speakers are verified and tracked. Two different approaches are applied in this last module. In the first approach to Speaker Verification, the target speakers and the segments are modeled using the state-of-the-art Gaussian Mixture Models (GMM). In the second approach, the identity vector (i-vector) representation is applied to the target speakers and the segments. Finally, the performance of these two approaches is compared in the results evaluation.
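The first module, an energy-based speech activity detector, can be sketched minimally as below. Frame length and threshold are illustrative assumptions; the thesis's actual detector may use different framing and adaptive thresholds.

```python
import numpy as np

def energy_vad(signal, frame_len, threshold_db):
    """Energy-based speech activity detection: mark frames whose log energy
    exceeds a fixed threshold in dB (illustrative sketch)."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)  # avoid log(0)
    return energy > threshold_db  # boolean speech/non-speech mask per frame
```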
Detection of the Weaning Instant by Means of Recurrence Quantification Analysis
http://hdl.handle.net/2117/89512
Garriga Rodríguez, Pau
Weaning from mechanical ventilation is still one of the most challenging problems in intensive care. Unnecessary delays in discontinuation and weaning trials that are undertaken too early are both undesirable. This study investigates a method based on the analysis of heart rate and respiration rate by means of Recurrence Quantification Analysis (RQA) to detect whether a patient on a weaning trial is ready for removal of the endotracheal tube. The analysis has been performed on a database of patients, WEANDB, in which patients are classified as those who succeeded in the weaning process and those who did not. The proposed method takes as reference for classification the time evolution of the RQA variables obtained from the physiological signals divided into time windows. Two types of binary classifiers, a Support Vector Machine and an intuitive method based on observation of the histogram of the RQA signals, are used to differentiate successful weaning trials from failed ones. The results show that the use of the time evolution of the RQA variables gives promising classification results: in both cases success rates around 80% were obtained without an exhaustive parameter optimization.
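RQA derives its measures from the recurrence matrix of a signal. As a minimal illustration only (the thesis works with windowed physiological signals and several RQA variables, not this toy function), the simplest RQA measure, the recurrence rate, can be computed as:

```python
import numpy as np

def recurrence_rate(series, eps):
    """Recurrence rate: fraction of sample pairs closer than eps,
    i.e. the density of the recurrence matrix (illustrative sketch)."""
    x = np.asarray(series, dtype=float)
    d = np.abs(x[:, None] - x[None, :])  # pairwise distance matrix
    return (d < eps).mean()
```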
Automatic human detection and tracking for robust video sequence annotation
http://hdl.handle.net/2117/87911
Llorca Queralt, Ramón
This thesis addresses a novel and robust approach for automatic human annotation in long video sequences. This work defines a fully automatic pipeline that is able to deal with different types of sequences. The proposed system has been both designed and implemented following a divide-and-conquer approach. First, a shot detector is used to divide the sequences into smaller ones. Then, humans are detected using a face detector based on the Viola & Jones algorithm. Once humans are detected, their faces are tracked using color-based particle filters and Local Binary Patterns (LBP). Several techniques and refinements have been implemented to improve the overall robustness of the system. Moreover, a track-by-detection technique is used to enhance the tracking accuracy. Finally, each human's track is annotated throughout every shot of the sequence. The performance of the global system is assessed in experiments with real sequences and compared against human-made annotations. Furthermore, these annotated tracks set the groundwork for a future recognition system that will complete the task of automatically annotating identities throughout sequences.
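The Local Binary Patterns mentioned above encode each pixel by thresholding its 3x3 neighbourhood against the centre pixel. A minimal NumPy sketch of the basic (non-uniform, non-circular) variant follows; the bit ordering of the neighbours is a convention chosen arbitrarily here.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 Local Binary Patterns: each interior pixel gets an 8-bit code,
    one bit per neighbour, set when the neighbour is >= the centre."""
    g = np.asarray(gray, dtype=float)
    c = g[1:-1, 1:-1]  # centre pixels (interior only)
    codes = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes
```

Histograms of these codes over a face region give the texture descriptor that the tracker can compare between frames, complementing the colour cue of the particle filter.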
3D Face and object reconstruction for mobile applications
http://hdl.handle.net/2117/86749
Irurueta Carro, Alberto
This work is part of the project “Mobbio”, developed within the R&D labs at Visual Engineering. As will be seen throughout the document, the results obtained with the 3D reconstruction technologies that have been developed will not only allow building robust face detection technologies, but will also allow Visual Engineering to use this technology in other fields such as the media industry (e.g. video games or movie effects) or augmented reality (e.g. maps and guides, showing additional information in industrial applications, etc.). Indeed, in later stages, the 3D reconstruction technology that has already been developed, along with face modeling, is expected to be used immediately for avatar creation or accurate face detection, ready to be used in video games or for security purposes (i.e. access control, video surveillance, etc.). Because 3D reconstruction can be used on any general-purpose scene, in later stages this technology is also expected to be used for effects in the media industry, for collecting 3D data for industrial purposes, or for scanning sculptures and archaeological sites. Some medical images might also be eligible for 3D reconstruction, such as radiographs or images captured by confocal microscopes, although in this latter case additional research might be needed, as such images are prone to contain transparencies, which makes it harder to obtain reliable image point correspondences. The purpose of this document is to develop a 3D reconstruction algorithm for fixed scenes containing opaque objects with Lambertian surfaces (i.e. without transparencies like glass, and without glare due to highly reflective surfaces like mirrors or metallic surfaces).
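To give a flavour of the geometry such pipelines rest on, here is a standard linear (DLT) two-view triangulation, a generic building block of multi-view reconstruction rather than code from this project:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its projections
    x1, x2 (as (u, v) pairs) under 3x4 camera matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)       # null vector of A is the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]               # dehomogenize
```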
Study over a Rail-track detector for a low-cost Rail Monitor System
http://hdl.handle.net/2117/84780
Felip Falgas, Damià
The development of a track occupancy detector based on a combination of inertial and GNSS measurements.
Blurred Image Detection from a Humanoid Generated Video Sequence
http://hdl.handle.net/2117/80269
Camacho Clavijo, Margarita
The 3D Shape Reconstruction from a Humanoid Generated Video Sequence project deals with the development of a strategy to estimate the geometry of an object of interest from a monocular video sequence acquired by a walking humanoid robot. In order to generate the 3D model of the object, blurred images must first be eliminated. I have collaborated in this preprocessing step, whose final result is a set of images that contain the object without blur. First, the presence of blur is detected by computing the gradient magnitude. Second, a gradient histogram is built with twenty bins and the last ten bins are summed to classify the images. Finally, an image is considered clear when this sum is bigger than 0.5% of the contour pixels.
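The decision rule just described can be sketched as follows. The histogram range and the definition of "contour pixels" (here, pixels with non-zero gradient) are assumptions, since the abstract does not fully specify them.

```python
import numpy as np

def is_clear(gray, ratio=0.005):
    """Classify an 8-bit grayscale image as clear (not blurred) by summing the
    upper ten of twenty gradient-magnitude bins and comparing against a
    fraction of the contour pixels (illustrative sketch of the described rule)."""
    g = np.asarray(gray, dtype=float)
    gx = np.diff(g, axis=1)[:-1, :]   # horizontal gradient, cropped to match gy
    gy = np.diff(g, axis=0)[:, :-1]   # vertical gradient
    mag = np.hypot(gx, gy)
    # fixed range: max possible magnitude for 8-bit steps is 255*sqrt(2) ~ 361
    hist, _ = np.histogram(mag, bins=20, range=(0.0, 361.0))
    contour_pixels = np.count_nonzero(mag > 0)
    return hist[10:].sum() > ratio * contour_pixels
```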
This thesis presents a method for classifying the images acquired by a humanoid robot. The blurred images are detected and eliminated. The clear images are used to create a 3D model of the object shown in the images.
Deployment of Indoor LTE Small-Cells in TV White Spaces
http://hdl.handle.net/2117/80048
Abdelkader, Abdelrahman Farouk Mahmoud
This work focuses on the deployment of indoor LTE small cells acting as secondary transmitters in TV White Spaces (TVWS). The proposed methods make use of measurements stored in a Radio Environment Map (REM) that characterizes the DVB-T reception inside the building under consideration. Under this framework, this work analyses two different approaches for the deployment of small cells. The first approach is based on maximizing the total secondary transmit power inside the building, while the second is based on maximizing the percentage of positions having a Signal to Interference and Noise Ratio (SINR) above a desired threshold. The approaches are validated by means of rigorous simulations supported by real measurements of DVB-T signal reception. The results include the optimum secondary transmitter placement and the transmit power values for providing indoor LTE coverage when operating in a channel adjacent to the one used by DVB-T. These results are compared against exhaustive enumeration techniques and proven to be very accurate. The results reveal that, when considering system capacity or network throughput, the second approach is more efficient and provides better results than the first. To the author's best knowledge, this is the only model that provides an actual deployment strategy for indoor LTE secondary transmitters while considering interference constraints from adjacent-channel DVB-T transmission. While the approaches are only tested in the considered building, the methods used are generic and can be applied in the same manner in any indoor environment, provided that the REM for that environment is established.
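The second approach, maximizing the share of positions whose SINR exceeds a threshold, reduces to a simple selection once the REM provides per-position signal and interference levels. The sketch below is a generic illustration of that criterion, not the thesis's optimization code; array shapes and names are assumptions.

```python
import numpy as np

def best_placement(signal_dbm, interference_dbm, noise_dbm, sinr_threshold_db):
    """Among candidate transmitter placements (rows of signal_dbm), pick the one
    maximizing the fraction of positions (columns) with SINR above a threshold.
    interference_dbm holds the DVB-T interference seen at each position."""
    s = 10 ** (signal_dbm / 10)            # dBm -> mW
    i = 10 ** (interference_dbm / 10)
    n = 10 ** (noise_dbm / 10)
    sinr_db = 10 * np.log10(s / (i + n))   # linear SINR back to dB
    coverage = (sinr_db >= sinr_threshold_db).mean(axis=1)
    return int(np.argmax(coverage)), coverage
```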
In this thesis, we present a systematic computer-based approach to solve the problem of optimum transmitter placement for indoor LTE coverage systems operating in the TVWS. This approach is supported with rigorous simulations that reflect very promising results.