Departament d'Estadística i Investigació Operativa
http://hdl.handle.net/2117/3941
Mon, 24 Jul 2017 08:49:05 GMT

The probabilistic p-center problem: Planning service for potential customers
http://hdl.handle.net/2117/106526
Martínez Merino, Luisa I.; Albareda Sambola, Maria; Rodríguez Chía, Antonio Manuel
This work deals with the probabilistic p-center problem, which aims at minimizing the expected maximum distance between any site with demand and its center, considering that each site has demand with a specific probability. The problem is of interest when emergencies may occur at predefined sites with known probabilities. For this problem we propose and analyze different formulations as well as a Variable Neighborhood Search heuristic. Computational tests are reported, showing the potentials and limits of each formulation, the impact of their enhancements, and the effectiveness of the heuristic.
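For a fixed assignment of demand sites to centers, the expected maximum distance under independent Bernoulli demands can be evaluated exactly by a sorting argument: a site attains the maximum iff it has demand and no farther site does. A minimal sketch under that assumption (illustrative only; not one of the paper's formulations, and the names are made up):

```python
def expected_max_distance(dists, probs):
    """Expected value of the maximum distance over sites that realize demand.

    dists[i]: distance from site i to its assigned center.
    probs[i]: probability that site i has demand.
    If no site has demand, the maximum is taken to be 0.
    """
    # Sort sites by distance, largest first: site i attains the maximum
    # iff it has demand and no site with a larger distance does.
    pairs = sorted(zip(dists, probs), reverse=True)
    none_larger = 1.0   # probability that no farther site has demand
    expectation = 0.0
    for d, p in pairs:
        expectation += d * p * none_larger
        none_larger *= (1.0 - p)
    return expectation
```

With all probabilities equal to one this reduces to the deterministic p-center objective for the given assignment.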
Mon, 17 Jul 2017 10:35:32 GMT

Dimensionality reduction for samples of bivariate density level sets: an application to electoral results
http://hdl.handle.net/2117/106516
Delicado Useros, Pedro Francisco
A bivariate density can be represented by a density level set containing a fixed amount of probability (0.75, for instance). A functional dataset whose observations are bivariate density functions can then be analyzed as if the functional data were density level sets. We compute distances between sets and perform standard Multidimensional Scaling. This methodology is applied to analyze electoral results.
The final publication is available at link.springer.com
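A minimal sketch of the pipeline described here: a distance between two level sets (taken below as the area of their symmetric difference on a common grid, one plausible choice; the paper's actual distance may differ) followed by classical (Torgerson) Multidimensional Scaling:

```python
import numpy as np

def set_distance(a, b, cell_area=1.0):
    """Distance between two sets given as boolean masks on a common grid:
    area of their symmetric difference."""
    return np.logical_xor(a, b).sum() * cell_area

def classical_mds(d, k=2):
    """Classical (Torgerson) multidimensional scaling of a distance matrix d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ (d ** 2) @ j              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]         # keep the k largest
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

Each observation (a level set) becomes a low-dimensional point whose pairwise distances approximate the original set distances.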
Mon, 17 Jul 2017 08:45:54 GMT

Another look at principal curves and surfaces
http://hdl.handle.net/2117/106496
Delicado Useros, Pedro Francisco
Principal curves have been defined as smooth curves passing through the “middle” of a multidimensional data set. They are nonlinear generalizations of the first principal component, a characterization of which is the basis of the definition of principal curves. We establish a new characterization of the first principal component and base our new definition of a principal curve on this property. We introduce the notion of principal oriented points and we prove the existence of principal curves passing through these points. We extend the definition of principal curves to multivariate data sets and propose an algorithm to find them. The new notions lead us to generalize the definition of total variance. Successive principal curves are recursively defined from this generalization. The new methods are illustrated on simulated and real data sets.
© <year>. This manuscript version is made available under the CC BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
Mon, 17 Jul 2017 06:46:25 GMT

Influence of data resolution in nonlinear loads model for harmonics prediction
http://hdl.handle.net/2117/106333
Balcells Sendra, Josep; Lamich Arocas, Manuel; Griful Ponsati, Eulàlia; Corbalán Fuertes, Montserrat
This paper describes the influence of data resolution on the agreement of models used to predict harmonics generated by nonlinear loads (NLL), basically formed by single-phase and three-phase rectifiers, possibly combined with linear loads. We assume that the network supplying the NLL has significant impedances and that it is disturbed by other parallel, random and unknown neighbor loads sharing part of the supply system. The aim of building NLL models is to predict the amount and flow paths of harmonic currents generated by such NLL when parallel filters are used. In this paper, the models are obtained from sets of (V, I) data taken at a certain point, called the measuring point (MP), and are valid for predicting the NLL behavior when random known or unknown parallel loads are connected upstream of this point. The technique used to obtain the models studied here is based on Multivariate Multiple-Output Regression (MMOR) and is not described in detail in this paper. This method yields a set of equations giving the current harmonics as a function of the voltage harmonics observed at the MP. The agreement between the model and the experimental results depends strongly on the resolution and accuracy of the V and I measurements at the MP, and is the core matter of this paper.
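Multiple-output regression in its plainest form fits one coefficient matrix mapping all inputs to all outputs at once by least squares. A hedged sketch (the paper deliberately does not detail its MMOR technique, so this is only the generic idea; the variable roles in the comments are illustrative):

```python
import numpy as np

def mmor_fit(x, y):
    """Fit Y ≈ X·B for several outputs at once (one column of B per output).

    x: (n_samples, n_inputs),  e.g. voltage harmonic magnitudes at the MP.
    y: (n_samples, n_outputs), e.g. current harmonic magnitudes.
    """
    x1 = np.column_stack([np.ones(len(x)), x])   # prepend an intercept column
    b, *_ = np.linalg.lstsq(x1, y, rcond=None)
    return b

def mmor_predict(b, x):
    """Predict all outputs from inputs using the fitted coefficient matrix."""
    x1 = np.column_stack([np.ones(len(x)), x])
    return x1 @ b
```

Because all outputs share one design matrix, a single `lstsq` call solves every output column simultaneously.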
Tue, 11 Jul 2017 07:59:53 GMT

Load-sharing policies in parallel simulation of agent-based demographic models
http://hdl.handle.net/2117/106304
Pellegrini, Alessandro; Montañola Sales, Cristina; Quaglia, Francesco; Casanovas Garcia, Josep
Execution parallelism in Agent-Based Simulation (ABS) makes it possible to deal with complex/large-scale models. This raises the need for runtime environments able to fully exploit hardware parallelism while jointly offering ABS-suited programming abstractions. In this paper, we target last-generation Parallel Discrete Event Simulation (PDES) platforms for multicore systems. We discuss a programming model to support both implicit (in-place access) and explicit (message passing) interactions across concurrent Logical Processes (LPs). We discuss different load-sharing policies combining event rate and implicit/explicit LPs’ interactions. We present a performance study conducted on a synthetic test case, representative of a class of agent-based models.
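One simple event-rate-driven load-sharing policy is the longest-processing-time (LPT) greedy rule: assign LPs, heaviest observed event rate first, to the currently least-loaded worker. A sketch (illustrative; not necessarily one of the policies evaluated in the paper, which also weigh implicit/explicit interactions):

```python
import heapq

def share_load(lp_rates, n_workers):
    """Greedy load-sharing: place each Logical Process (LP) on the currently
    least-loaded worker, heaviest LPs first (LPT heuristic).

    lp_rates: {lp_id: observed event rate}; returns {worker_id: [lp_ids]}.
    """
    heap = [(0.0, w) for w in range(n_workers)]   # (total rate, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    for lp, rate in sorted(lp_rates.items(), key=lambda kv: -kv[1]):
        load, w = heapq.heappop(heap)             # least-loaded worker
        assignment[w].append(lp)
        heapq.heappush(heap, (load + rate, w))
    return assignment
```

A policy sensitive to LP interactions would additionally penalize splitting tightly coupled LPs across workers; the heap-based skeleton stays the same.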
Mon, 10 Jul 2017 10:00:25 GMT

Optimal level sets for representing a bivariate density function
http://hdl.handle.net/2117/106300
Delicado Useros, Pedro Francisco; Vieu, Philippe
We deal with the problem of representing a bivariate density function by level sets. The choice of which levels are used in this representation is commonly arbitrary (the most usual choices being those with probability contents .25, .5 and .75). Choosing which level is (or which levels are) of most interest is an important practical question which depends on the kind of problem one has to deal with, as well as the kind of feature one wishes to highlight in the density. The approach we develop is based on minimum distance ideas.
Mon, 10 Jul 2017 08:40:04 GMT

Contagion between United States and European markets during the recent crises
http://hdl.handle.net/2117/106121
Muñoz Gracia, María del Pilar; Márquez Cebrián, Dolores; Sánchez Espigares, Josep Anton
The main objective of this paper is to detect the existence of financial contagion between the North American and European markets during the recent crises. To accomplish this, the relationships between the US and Euro zone stock markets are considered, taking the daily equity prices of the Standard and Poor’s 500 as representative of the United States market and, for the European market, the five most representative indexes. The Time Series Factor Analysis (TSFA) procedure has allowed concentrating the information of the European indexes into a unique factor, which captures the underlying structure of the European return series. The relationship between the European factor and the US stock return series has been analyzed by means of the dynamic conditional correlation model (DCC). Once the DCC is estimated, the contagion between both markets is analyzed. Finally, in order to explain the sudden changes in the dynamic US-EU correlation, a Markov switching model is fitted, using as input variables macroeconomic variables associated with the monetary policies of the US as well as variables related to uncertainty in the markets. The results show that there was contagion between the United States and European markets in the Subprime and Global Financial crises. The two-regime Markov switching model has helped to explain the variability of the pairwise correlation. The first regime contains mostly the financially stable periods, and the dynamic correlations in this regime are explained by macroeconomic variables and others related to monetary policies in Europe and the US. The second regime is explained mainly by the Federal Funds rate and the evolution of the Euro/US exchange rate.
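The DCC(1,1) recursion underlying such an analysis can be sketched as follows (standard Engle form; the parameter values `a` and `b` below are illustrative defaults, not the values fitted in this study):

```python
import numpy as np

def dcc_correlations(eps, a=0.05, b=0.90):
    """DCC(1,1) recursion on standardized residuals eps (T x N):
    Q_t = (1-a-b)*Qbar + a*eps_{t-1} eps_{t-1}' + b*Q_{t-1},
    with R_t obtained by rescaling Q_t to unit diagonal.
    """
    t, n = eps.shape
    qbar = (eps.T @ eps) / t                 # unconditional covariance of eps
    q = qbar.copy()
    rs = np.empty((t, n, n))
    for i in range(t):
        s = 1.0 / np.sqrt(np.diag(q))
        rs[i] = q * np.outer(s, s)           # dynamic correlation matrix R_t
        q = (1 - a - b) * qbar + a * np.outer(eps[i], eps[i]) + b * q
    return rs
```

The off-diagonal entries of `rs` trace the time-varying US-EU correlation whose regime changes the Markov switching model then explains.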
Tue, 04 Jul 2017 06:17:08 GMT

Exploring link covering and node covering formulations of detection layout problem
http://hdl.handle.net/2117/106076
Barceló Bugeda, Jaime; Gilliéron, Fanny; Linares Herreros, Mª Paz; Serch Muni, Oriol; Montero Mercadé, Lídia
The primary data input used in principal traffic models comes from Origin-Destination (OD) trip matrices, which describe the patterns of traffic behavior across the network. In this way, OD matrices become a critical requirement in Advanced Traffic Management and/or Information Systems that are supported by Dynamic Traffic Assignment models. However, because OD matrices are not directly observable, the current practice consists of adjusting an initial or seed matrix from link flow counts, which are provided by an existing layout of traffic counting stations. The adequacy of the detection layout strongly determines the quality of the adjusted OD matrix. The usual approaches to the detection layout problem assume that detectors are located at network links. The first contribution of this paper is a modified set covering formulation of the link detection layout problem with side constraints. It also presents a new metaheuristic tabu search algorithm with high computational efficiency. Emerging Information and Communication Technologies, especially those based on the detection of the electronic signature of on-board devices (such as Bluetooth devices), allow sensors to be located at intersections. To explicitly take into account how these ICT sensors operate, this paper proposes a new formulation in terms of a node covering problem with side constraints that, for practical purposes, can be efficiently solved with standard professional solvers such as CPLEX.
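Both variants reduce to set covering over candidate sensor locations (links or nodes). A greedy covering sketch conveys the core idea (illustrative only; the paper uses a tabu search metaheuristic and exact solvers instead):

```python
def greedy_cover(universe, candidates):
    """Greedy set covering: repeatedly pick the candidate location that
    covers the most still-uncovered OD pairs.

    universe: set of OD pairs to cover.
    candidates: {location: set of OD pairs it covers}.
    Returns the chosen locations (possibly suboptimal: the classic
    greedy rule only guarantees a logarithmic approximation factor).
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:                  # remaining pairs cannot be covered
            break
        chosen.append(best)
        uncovered -= gain
    return chosen
```

Side constraints (budget, mandatory or forbidden locations) are what push the real problem toward the ILP and tabu search treatments described above.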
Mon, 03 Jul 2017 07:08:54 GMT

Marshall-Olkin extended Zipf distribution
http://hdl.handle.net/2117/105912
Pérez Casany, Marta; Duarte López, Ariel; Prat Pérez, Arnau
Being able to generate large synthetic graphs resembling those found in the real world is of high importance for the design of new graph algorithms and benchmarks. In this paper, we first compare several probability models in terms of goodness-of-fit when used to model the degree distribution of real graphs. Second, after confirming that the MOE-Zipf model is the one that gives the best fits, we present a method to generate MOE-Zipf distributions. The method is shown to work well in practice when implemented in a scalable synthetic graph generator.
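One way to generate MOE-Zipf variates is to apply the standard Marshall-Olkin survival transform, Gbar(k) = beta*Fbar(k) / (1 - (1-beta)*Fbar(k)), to a Zipf survival function and sample by inverse CDF. A sketch under those assumptions (the generator in the paper may proceed differently; truncation at `n_max` is an implementation convenience, not part of the model):

```python
import numpy as np

def moe_zipf_sampler(alpha, beta, n_max, size, seed=None):
    """Sample from a Marshall-Olkin extended Zipf, truncated at n_max.

    The Zipf survival Fbar is transformed as
    Gbar = beta*Fbar / (1 - (1-beta)*Fbar); beta = 1 recovers plain Zipf.
    """
    k = np.arange(1, n_max + 1)
    pmf = k.astype(float) ** (-alpha)
    pmf /= pmf.sum()                              # truncated Zipf pmf
    fbar = np.clip(1.0 - np.cumsum(pmf), 0.0, 1.0)  # survival at each k
    gbar = beta * fbar / (1.0 - (1.0 - beta) * fbar)
    cdf = 1.0 - gbar
    cdf[-1] = 1.0                                 # guard against round-off
    rng = np.random.default_rng(seed)
    u = rng.random(size)
    return k[np.searchsorted(cdf, u)]             # inverse-CDF sampling
```

Because beta only reshapes the head of the distribution, the same routine covers both the plain Zipf (beta = 1) and the extended fits compared in the paper.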
Wed, 28 Jun 2017 07:17:33 GMT

Use of results from honing test machines to determine roughness in industrial honing machines
http://hdl.handle.net/2117/105895
Buj Corral, Irene; Rodero de Lamo, Lourdes; Marco Almagro, Lluís
In the present work, a new methodology is presented for translating roughness results from a test machine to different industrial machines without the need to stop production for a long time. First, mathematical models for average roughness Ra in finish honing processes were sought, in both a test machine and an industrial machine. Regression analysis was employed to obtain quadratic models. The main factor influencing average roughness Ra was grain size, followed by pressure. Afterwards, several experiments were simulated in the common range of variables for the two machines using the models for average roughness Ra. A new variable, DifRa, corresponding to the difference between roughness values from the test machine and the industrial machine, was defined, and a quadratic model was obtained for it. Once DifRa is modeled, it is possible to predict roughness in a different industrial honing machine from the results of the test machine by performing a few experiments in the industrial machine and translating the curves. This reduces the number of tests to be performed on industrial machines. The suggested methodology has been tested with two more roughness parameters, maximum height of profile Rz and core roughness depth Rk, proving its validity.
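The quadratic models described here are ordinary full second-order response surfaces in the two main factors. A least-squares sketch (variable names are illustrative; the paper's actual factor set and fitted coefficients are not reproduced):

```python
import numpy as np

def fit_quadratic(x1, x2, y):
    """Least-squares fit of a full quadratic response surface,
    y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
    (e.g. x1 = grain size, x2 = pressure, y = roughness difference DifRa)."""
    d = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(d, y, rcond=None)
    return coef

def predict_quadratic(coef, x1, x2):
    """Evaluate the fitted quadratic surface at new factor settings."""
    d = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    return d @ coef
```

A fitted DifRa surface would then be subtracted from test-machine predictions to estimate roughness on the industrial machine.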
Tue, 27 Jun 2017 11:26:11 GMT