DSpace Collection:
http://hdl.handle.net/2117/3943
Universitat Politècnica de Catalunya. Servei de Biblioteques i Documentació (webmaster.bupc@upc.edu)
Last updated: Fri, 27 Feb 2015 22:54:39 GMT
http://hdl.handle.net/2117/26456
Title: Extension of the asymptotic relative efficiency method to select the primary endpoint in a randomized clinical trial
Authors: Plana-Ripoll, Oleguer; Gómez Melis, Guadalupe
Abstract: We extend the ARE method proposed in Gómez and Lagakos (2013), devised to decide which primary endpoint to choose when comparing two treatments in a randomized clinical trial. The ARE method is based on the Asymptotic Relative Efficiency (ARE) between two logrank tests for comparing two treatments: one based on a relevant endpoint E1, the other based on a composite endpoint E* = E1 ∪ E2, where E2 is an additional endpoint. Besides some intuitive parameters, the ARE depends on the joint law of the times T1 and T2 from randomization to E1 and E2, respectively. Gómez and Lagakos (2013) characterize this joint law by means of Frank's copula. In our work, several families of copulas can be chosen for the bivariate survival function of (T1, T2), so that different dependence structures between T1 and T2 are feasible. We motivate the problem and show how to apply the method through a real cardiovascular clinical trial. We explore the influence of the chosen copula on the ARE value by means of a simulation study. We conclude that the recommendation on whether or not to use the composite endpoint as the primary endpoint for the investigation is, almost always, independent of the copula chosen.
Keywords: Asymptotic relative efficiency, Composite endpoint, Copulas, Logrank, Randomized clinical trial
Date: Fri, 20 Feb 2015 18:40:25 GMT
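The copula-based simulation step described in this abstract can be sketched as follows. This is an illustrative example, not the authors' code: it samples a Frank copula by conditional inversion and attaches exponential margins with made-up rates for T1 and T2.

```python
import numpy as np

def sample_frank(n, theta, rng):
    """Draw n pairs (u, v) from a Frank copula via conditional inversion."""
    u = rng.uniform(size=n)
    t = rng.uniform(size=n)
    # Invert the conditional distribution C(v | u) = t in closed form.
    a = t * (np.exp(-theta) - 1.0) / (np.exp(-theta * u) * (1.0 - t) + t)
    v = -np.log1p(a) / theta
    return u, v

rng = np.random.default_rng(0)
u, v = sample_frank(10_000, theta=5.0, rng=rng)  # theta > 0: positive dependence
# Exponential margins for the times T1, T2 (rates are illustrative assumptions)
t1 = -np.log(1.0 - u) / 0.1
t2 = -np.log(1.0 - v) / 0.2
```

Swapping `sample_frank` for another family (Clayton, Gumbel, ...) is what varies the dependence structure between T1 and T2 in the study.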
http://hdl.handle.net/2117/26342
Title: Perspective reformulations of the CTA problem with L_2 distances
Authors: Castro Pérez, Jordi; Frangioni, Antonio; Gentile, Claudio
Abstract: Any institution that disseminates data in aggregated form has the duty to ensure that individual confidential information is not disclosed, either by not releasing data or by perturbing the released data, while maintaining data utility. Controlled tabular adjustment (CTA) is a promising technique of the second type, where a protected table that is close to the original one in some chosen distance is constructed. The choice of the specific distance shows a trade-off: while the Euclidean distance has been shown (and is confirmed here) to produce tables with greater "utility", it gives rise to Mixed Integer Quadratic Problems (MIQPs) with pairs of linked semi-continuous variables that are more difficult to solve than the Mixed Integer Linear Problems corresponding to linear norms. We provide a novel analysis of Perspective Reformulations (PRs) for this special structure; in particular, we devise a Projected PR (P2R) which is piecewise-conic but simplifies to a (nonseparable) MIQP when the instance is symmetric. We then compare different formulations of the CTA problem, showing that the ones based on P2R most often obtain better computational results.
Keywords: Mixed Integer Quadratic Programming, Perspective Reformulation, Data Privacy, Statistical Disclosure Control, Tabular Data Protection, Controlled Tabular Adjustment
Date: Fri, 13 Feb 2015 13:46:53 GMT
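The strength of a perspective reformulation over a big-M model can be seen on a one-variable toy instance (the numbers are illustrative assumptions, not from the paper): a quadratic cost x², a fixed cost of 1 when the variable is "on" (y = 1), a linking constraint x ≤ 10y, and a demand x ≥ 2.

```python
import numpy as np

# x is pinned at its lower bound 2; scan the relaxed y over [x/10, 1].
x = 2.0
y = np.linspace(0.2, 1.0, 80_001)

bigm = x * x + 1.0 * y        # big-M (standard) relaxation: x^2 + y
persp = x * x / y + 1.0 * y   # perspective relaxation: x^2/y + y

print(round(bigm.min(), 3), round(persp.min(), 3))  # 4.2 (weak) vs 5.0 (tight)
```

The perspective bound 5.0 equals the integer optimum (y = 1, x = 2), while the big-M relaxation only reaches 4.2; this gap is what the P2R formulation exploits at scale.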
http://hdl.handle.net/2117/26234
Title: Informe Final Fase 3 - Estimació Bossa Tipus de Catalunya
Authors: Aluja Banet, Tomàs; Montero Mercadé, Lídia
Abstract: Project of the Catalan Waste Agency (Agència de Residus de Catalunya, ARC) commissioned to the Universitat Politècnica de Catalunya under the agreement of 5 November 2012. The aim of this project is to provide the statistical methodology for estimating waste generation per inhabitant in Catalonia and its composition according to a typology of about 16 categories, allowing its territorial breakdown and providing measures of the variability of the estimators (standard deviations). This agreement continues the one launched during 2008 and 2009, which established the sampling design to be carried out and a preliminary quantification of its variability. The preceding agreement comprised two phases, Phase 1 and Phase 2, so the current agreement is presented as Phase 3, a continuation of the previous ones. Phase 3 therefore consists of updating the sampling plan of the preceding phases and defining the methodology for computing the estimators of total generation per inhabitant, its composition by type, and the corresponding standard errors. It comprises the following tasks:
Task 1: Update of the Primary Sampling Units according to the Phase 1 methodology and the new data available for 2010.
Task 2: First estimation of the 2010 sampling errors.
Task 3: Criteria for the definition of the reference units (parangons).
Task 4: Design of the sampling plan with specification of the computation of the estimators and their sampling error.
Date: Thu, 05 Feb 2015 13:39:36 GMT
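The kind of estimator-plus-standard-error computation described in this report can be sketched for a stratified design. All numbers below are hypothetical, not project data: three strata with population-share weights and small samples of per-inhabitant waste.

```python
import numpy as np

# Hypothetical strata: (population weight, sampled kg/inhabitant/day).
strata = {
    "urban":      (0.55, np.array([1.2, 1.4, 1.3, 1.5, 1.1])),
    "semi-urban": (0.30, np.array([1.0, 0.9, 1.2, 1.1])),
    "rural":      (0.15, np.array([0.8, 0.7, 0.9])),
}

# Stratified estimator and its variance (within-stratum sample variances).
estimate = sum(w * x.mean() for w, x in strata.values())
variance = sum(w**2 * x.var(ddof=1) / len(x) for w, x in strata.values())
print(f"{estimate:.3f} kg/day (s.e. {np.sqrt(variance):.3f})")
```

The real design additionally involves Primary Sampling Units and finite-population corrections, which this sketch omits.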
http://hdl.handle.net/2117/23684
Title: Stochastic optimal sale bid for a wind power producer
Authors: Sacripante, Simona; Heredia, F.-Javier (Francisco Javier); Corchero García, Cristina
Abstract: Wind power generation has a key role in the Spanish electricity system, since it is a native source of energy that could help Spain reduce its dependency on the exterior for the production of electricity. Apart from its great environmental benefits, wind energy considerably reduces the spot energy price, covering up to 16.6% of peninsular demand. However, wind farms have high investment costs and need an efficient incentive scheme to be financed. While Spain has been a leading country in Europe in developing a successful incentive scheme, the current tariff deficit and negative economic conjuncture call for substantial reductions in the support mechanism and require wind producers to compete in the market with more mature technologies. The objective of this work is to find an optimal commercial strategy in the production market that allows wind producers to maximize their daily profit. This can be achieved, on the one hand, by increasing income in the daily and intraday markets and, on the other, by reducing deviation costs due to errors in generation forecasts. We first analyze market features and common practices in use, and then develop our own sale strategy by solving a two-stage linear stochastic optimization problem. The first-stage variable is the sale bid in the day-ahead market, while the second-stage variables are the offers to the six sessions of the intraday market. The model is implemented using real data from a leading wind producer in Spain.
Description: Research Report
Keywords: electricity market, wind producer, stochastic programming
Date: Wed, 27 Aug 2014 10:46:49 GMT
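The two-stage structure described in the abstract can be sketched on a toy instance. This is not the paper's model: prices, penalties and the three wind scenarios below are illustrative assumptions, and the six intraday sessions are collapsed into a single imbalance settlement.

```python
import numpy as np
from scipy.optimize import linprog

# First stage: day-ahead bid x sold at 50 EUR/MWh.  Second stage, per scenario s:
# shortfall u_s bought back at 70 EUR/MWh, surplus v_s resold at 30 EUR/MWh.
w = np.array([30.0, 50.0, 80.0])   # wind scenarios (MWh)
pi = np.array([0.3, 0.5, 0.2])     # scenario probabilities
S = len(w)

# Variables z = [x, u_1..u_S, v_1..v_S]; linprog minimizes, so negate profit.
c = np.concatenate(([-50.0], 70.0 * pi, -30.0 * pi))
A_eq = np.zeros((S, 1 + 2 * S))
A_eq[:, 0] = 1.0
A_eq[:, 1:1 + S] = -np.eye(S)      # x - u_s + v_s = w_s
A_eq[:, 1 + S:] = np.eye(S)
res = linprog(c, A_eq=A_eq, b_eq=w,
              bounds=[(0, 100)] + [(0, None)] * (2 * S), method="highs")
print(res.x[0], -res.fun)  # optimal bid 50.0 MWh, expected profit 2260.0 EUR
```

The optimal bid lands on the 50 MWh scenario because the shortfall penalty outweighs the surplus discount, a newsvendor-style trade-off.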
http://hdl.handle.net/2117/22543
Title: Review of multivariate survival data
Authors: Gómez Melis, Guadalupe; Calle Rosingana, M. Luz; Serrat Piè, Carles; Espinal Berenguer, Anna
Abstract: This paper reviews some of the main contributions in the area of multivariate survival data and proposes some possible extensions. In particular, we have concentrated our search and study on those papers that are relevant to the situation where two (or more) consecutive variables are followed until a common day of analysis and subject to informative censoring.
The paper reviews bivariate nonparametric approaches and extends some of them to the case of two nonconsecutive times. We introduce the notation and construct the likelihood for the general problem of more than two consecutive survival times. We formulate the time dependencies and trends via a Bayesian approach. Finally, three regression models for multivariate survival times are discussed, together with the differences among them, which will be useful when the main interest is in the effect of covariates on the risk of failure.
Description: Research document published by the UPC Department of Statistics and Operations Research
Keywords: Bivariate distributions, Bivariate survival estimator, Multivariate regression survival models
MSC: 62N01 Statistics: Survival analysis and censored data: Censored data models; 62N02 Statistics: Survival analysis and censored data: Estimation
Date: Mon, 07 Apr 2014 14:05:21 GMT
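The univariate building block underneath the nonparametric approaches reviewed here is the Kaplan-Meier estimator for right-censored data. A minimal sketch with made-up data (the bivariate estimators in the paper generalize this idea):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve; events=1 observed, 0 right-censored."""
    order = np.argsort(times)
    t, e = times[order], events[order]
    n = len(t)
    surv, s = [], 1.0
    for i in range(n):
        at_risk = n - i                 # subjects still under observation
        if e[i] == 1:                   # survival drops only at observed events
            s *= 1.0 - 1.0 / at_risk
        surv.append((float(t[i]), s))
    return surv

# Illustrative sample (not from the paper): two censored observations.
times = np.array([2.0, 3.0, 4.0, 5.0, 8.0])
events = np.array([1, 0, 1, 1, 0])
est = kaplan_meier(times, events)
print(est)
```

This sketch assumes no tied event times; a production version would group ties.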
http://hdl.handle.net/2117/21996
Title: Bartering integer commodities with exogenous prices
Authors: Nasini, Stefano; Castro Pérez, Jordi; Fonseca Casas, Pau
Abstract: The analysis of markets with indivisible goods and fixed exogenous prices has played an important role in economic models, especially in relation to wage rigidity and unemployment. This paper provides a novel mathematical programming based approach to study pure exchange economies where discrete amounts of commodities are exchanged at fixed prices. Barter processes, consisting of sequences of elementary reallocations of couples of commodities among couples of agents, are formalized as local searches converging to equilibrium allocations. A direct application of the analyzed processes in the context of computational economics is provided, along with a Java implementation of the approaches described in this paper: http://www-eio.upc.edu/~nasini/SER/launch.html
Keywords: Microeconomic Theory, Combinatorial optimization, Multiobjective optimization, Multiagent systems
Date: Tue, 11 Mar 2014 14:40:07 GMT
http://hdl.handle.net/2117/21995
Title: Recommendations to choose the primary endpoint in cardiovascular clinical trials
Authors: Gómez Melis, Guadalupe; Gómez Mateu, Moisés; Dafni, Urania
Abstract: Background – A composite endpoint is often used as the primary endpoint to assess the efficacy of a new treatment in randomized clinical trials (RCT). In cardiovascular trials, the often rare event of the relevant primary endpoint
(individual or composite), such as cardiovascular death (CV death),
Myocardial Infarction (MI), or both, is combined with a more common
secondary endpoint, such as target lesion revascularization, with the aim to
increase the statistical power of the study.
Methods – Gómez and Lagakos developed statistical methodology to be used
at the design stage of a RCT for deciding whether to expand a study
relevant primary endpoint e1 to e*, the composite of e1 and a secondary
endpoint e2. The method uses the asymptotic relative efficiency of the
logrank test for comparing treatment groups based on e1 versus the logrank
test based on e*. The method is used to assess, in the cardiovascular
research area, the characteristics of the candidate individual endpoints that
should govern the choice of using a composite endpoint as the primary
endpoint in a clinical trial.
Results and conclusions – A set of recommendations is provided based on
the reported values of the frequencies of observing each candidate endpoint
as well as on the magnitude of the effect of treatment as expressed by the hazard ratio, supported by cardiovascular RCTs published in 2008.Tue, 11 Mar 2014 14:33:24 GMThttp://hdl.handle.net/2117/219952014-03-11T14:33:24ZGómez Melis, Guadalupe; Gómez Mateu, Moisés; Dafni, UranianoAsymptotic Relative Efficiency, Composite outcome, Logrank test, Cardiovascular, Randomized Clinical TrialBackground – A composite endpoint is often used as the primary endpoint to assess the efficacy of a new treatment in randomized clinical trials (RCT). In cardiovascular trials, the often rare event of the relevant primary endpoint
(individual or composite), such as cardiovascular death (CV death),
Myocardial Infarction (MI), or both, is combined with a more common
secondary endpoint, such as target lesion revascularization, with the aim to
increase the statistical power of the study.
Methods – Gómez and Lagakos developed statistical methodology to be used
at the design stage of a RCT for deciding whether to expand a study
relevant primary endpoint e1 to e*, the composite of e1 and a secondary
endpoint e2. The method uses the asymptotic relative efficiency of the
logrank test for comparing treatment groups based on e1 versus the logrank
test based on e*. The method is used to assess, in the cardiovascular
research area, the characteristics of the candidate individual endpoints that
should govern the choice of using a composite endpoint as the primary
endpoint in a clinical trial.
Results and conclusions – A set of recommendations is provided based on
the reported values of the frequencies of observing each candidate endpoint
as well as on the magnitude of the effect of treatment as expressed by the hazard ratio, supported by cardiovascular RCTs published in 2008.Exploiting total unimodularity for classes of random network problems
http://hdl.handle.net/2117/21031
Title: Exploiting total unimodularity for classes of random network problems
Authors: Castro Pérez, Jordi; Nasini, Stefano
Abstract: Network analysis is of great interest for the study of social
, biological and technolog-
ical networks, with applications, among others, in busines
s, marketing, epidemiology and
telecommunications. Researchers are often interested in a
ssessing whether an observed fea-
ture in some particular network is expected to be found withi
n families of networks under
some hypothesis (named conditional random networks, i.e.,
networks satisfying some linear
constraints). This work presents procedures to generate ne
tworks with specified structural
properties which rely on the solution of classes of integer o
ptimization problems. We show
that, for many of them, the constraints matrices are totally
unimodular, allowing the efficient
generation of conditional random networks by polynomial ti
me interior-point methods. The
computational results suggest that the proposed methods ca
n represent a general framework
for the efficient generation of random networks even beyond the
models analyzed in this pa-
per. This work also opens the possibility for other applicat
ions of mathematical programming
in the analysis of complex networks.Tue, 17 Dec 2013 12:38:37 GMThttp://hdl.handle.net/2117/210312013-12-17T12:38:37ZCastro Pérez, Jordi; Nasini, StefanonoNetwork analysis is of great interest for the study of social
, biological and technolog-
ical networks, with applications, among others, in busines
s, marketing, epidemiology and
telecommunications. Researchers are often interested in a
ssessing whether an observed fea-
ture in some particular network is expected to be found withi
n families of networks under
some hypothesis (named conditional random networks, i.e.,
networks satisfying some linear
constraints). This work presents procedures to generate ne
tworks with specified structural
properties which rely on the solution of classes of integer o
ptimization problems. We show
that, for many of them, the constraints matrices are totally
unimodular, allowing the efficient
generation of conditional random networks by polynomial ti
me interior-point methods. The
computational results suggest that the proposed methods ca
n represent a general framework
for the efficient generation of random networks even beyond the
models analyzed in this pa-
per. This work also opens the possibility for other applicat
ions of mathematical programming
in the analysis of complex networks.A fix-and-relax heuristic for controlled tabular adjustment
http://hdl.handle.net/2117/21030
Title: A fix-and-relax heuristic for controlled tabular adjustment
Authors: Baena, Daniel; Castro Pérez, Jordi
Abstract: Controlled tabular adjustment (CTA) is an emerging protect
ion technique for tabular data pro-
tection. CTA formulates a mixed integer linear programming
problem, which is tough for tables
of moderate size. Finding a feasible initial solution may ev
en be a challenging task for large
instances. On the other hand, end users of tabular data prote
ction techniques give priority to fast
executions and are thus satisfied in practice with suboptima
l solutions. In this work the fix-and-
relax strategy is applied to large CTA instances. Fix-and-r
elax is based on partitioning the set of
binary variables into clusters to selectively explore a sma
ller branch-and-cut tree. We report ex-
tensive computational results on a set of real and random CTA
instances. Fix-and-relax is shown
to be competitive compared to plain CPLEX branch-and-cut in
terms of quickly finding either a
feasible solution or a good upper bound in di
ffi
cult instances.Tue, 17 Dec 2013 12:34:38 GMThttp://hdl.handle.net/2117/210302013-12-17T12:34:38ZBaena, Daniel; Castro Pérez, JordinoControlled tabular adjustment (CTA) is an emerging protect
ion technique for tabular data pro-
tection. CTA formulates a mixed integer linear programming
problem, which is tough for tables
of moderate size. Finding a feasible initial solution may ev
en be a challenging task for large
instances. On the other hand, end users of tabular data prote
ction techniques give priority to fast
executions and are thus satisfied in practice with suboptima
l solutions. In this work the fix-and-
relax strategy is applied to large CTA instances. Fix-and-r
elax is based on partitioning the set of
binary variables into clusters to selectively explore a sma
ller branch-and-cut tree. We report ex-
tensive computational results on a set of real and random CTA
instances. Fix-and-relax is shown
to be competitive compared to plain CPLEX branch-and-cut in
terms of quickly finding either a
feasible solution or a good upper bound in di
ffi
cult instances.Optimal energy management for a residential microgrid including a vehicle-to-grid system
http://hdl.handle.net/2117/20642
Title: Optimal energy management for a residential microgrid including a vehicle-to-grid system
Authors: Igualada, Lucia; Corchero García, Cristina; Cruz Zambrano, Miguel; Heredia, F.-Javier (Francisco Javier)
Abstract: An optimization model is proposed to manage a
residential microgrid including a charging spot with a vehicle-togrid
system and renewable energy sources. In order to achieve a
realistic and convenient management, we take into account: (1)
the household load split into three different profiles depending
on the characteristics of the elements considered; (2) a realistic
approach to owner behavior by introducing the novel concept of
range anxiety; (3) the vehicle battery management considering
the mobility profile of the owner and (4) different domestic
renewable energy sources. We consider the microgrid operated
in grid-connected mode. The model is executed one-day-ahead
and generates a schedule for all components of the microgrid.
The results obtained show daily costs in the range of 2.82eto
3.33e; the proximity of these values to the actual energy costs
for Spanish households validate the modeling. The experimental
results of applying the designed managing strategies show daily
costs savings of nearly 10%.Mon, 18 Nov 2013 13:36:28 GMThttp://hdl.handle.net/2117/206422013-11-18T13:36:28ZIgualada, Lucia; Corchero García, Cristina; Cruz Zambrano, Miguel; Heredia, F.-Javier (Francisco Javier)noOptimal management, smart grids, vehicle-togrid (V2G), range anxiety, renewable generation, residential microgridsAn optimization model is proposed to manage a
residential microgrid including a charging spot with a vehicle-togrid
system and renewable energy sources. In order to achieve a
realistic and convenient management, we take into account: (1)
the household load split into three different profiles depending
on the characteristics of the elements considered; (2) a realistic
approach to owner behavior by introducing the novel concept of
range anxiety; (3) the vehicle battery management considering
the mobility profile of the owner and (4) different domestic
renewable energy sources. We consider the microgrid operated
in grid-connected mode. The model is executed one-day-ahead
and generates a schedule for all components of the microgrid.
The results obtained show daily costs in the range of 2.82eto
3.33e; the proximity of these values to the actual energy costs
for Spanish households validate the modeling. The experimental
results of applying the designed managing strategies show daily
costs savings of nearly 10%.Stochastic optimal generation bid to electricity markets with emission risk constraints
http://hdl.handle.net/2117/20640
Title: Stochastic optimal generation bid to electricity markets with emission risk constraints
Authors: Heredia, F.-Javier (Francisco Javier); Cifuentes Rubiano, Julián; Corchero García, Cristina
Abstract: There are many factors that influence the day-ahead market bidding strategies of a
generation company (GenCo) in the current energy market framework. Environmental
policy issues have become more and more important for fossil-fuelled power plants
and they have to be considered in their management, giving rise to emission limitations.
This work allows investigating the influence of the emission reduction plan, and
the incorporation of the derivatives medium-term commitments in the optimal generation
bidding strategy to the day-ahead electricity market. Two different technologies
have been considered: the coal thermal units, high-emission technology, and the combined
cycle gas turbine units, low-emission technology. The Iberian Electricity Market
(MIBEL) and the Spanish National Emission Reduction Plan (NERP) defines the environmental
framework to deal with by the day-ahead market bidding strategies. To
address emission limitations, some of the standard risk management methodologies
developed for financial markets, such as Value-at-Risk (VaR) and Conditional Valueat-
Risk (CVaR), have been extended giving rise to the new concept of Conditional
Emission at Risk (CEaR). This study offers to electricity generation utilities a mathematical
model to determinate the individual optimal generation bid to the wholesale
electricity market, for each one of their generation units that maximizes the long-run
profits of the utility abiding by the Iberian Electricity Market rules, as well as the environmental
restrictions set by the Spanish National Emissions Reduction Plan. The
economic implications for a GenCo of including the environmental restrictions of this
National Plan are analyzed, and the effect of the NERP in the expected profits and
optimal generation bid are analyzed.Mon, 18 Nov 2013 12:00:25 GMThttp://hdl.handle.net/2117/206402013-11-18T12:00:25ZHeredia, F.-Javier (Francisco Javier); Cifuentes Rubiano, Julián; Corchero García, CristinanoOR in Energy, Stochastic Programming, Risk Management, Electricity market, Emission reductionThere are many factors that influence the day-ahead market bidding strategies of a
On assessing the disclosure risk of controlled adjustment methods for statistical tabular data
http://hdl.handle.net/2117/17954
Title: On assessing the disclosure risk of controlled adjustment methods for statistical tabular data
Authors: Castro Pérez, Jordi
Abstract: Minimum distance controlled tabular adjustment is a recent perturbative approach for statistical disclosure control in tabular data. Given a table to be protected, it looks for the closest safe table, using some particular distance. Controlled adjustment is known to provide high data utility. However, the disclosure risk has only been partially analyzed using theoretical results from optimization. This work extends these previous results, providing both a more detailed theoretical analysis and an extensive empirical assessment of the disclosure risk of the method. A set of 25 instances from the literature and four different attacker scenarios are considered, with several random replications for each scenario, for both the L1 and L2 distances. This amounts to the solution of more than 2000 optimization problems. The analysis of the results shows that the approach has low disclosure risk when the attacker has no good information on the bounds of the optimization problem. On the other hand, when the attacker has good estimates of the bounds, and the only uncertainty is in the objective function (which is a very strong assumption), the disclosure risk of controlled adjustment is high and it should be avoided.
Fri, 22 Feb 2013 18:06:38 GMT · http://hdl.handle.net/2117/17954
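To make the "closest safe table" idea concrete, here is a toy sketch — not the paper's actual optimization model — of a minimum-distance adjustment on a single table row: shift a sensitive cell by a protection deviation `s` and compensate the other cells so the row total is preserved, minimizing the L2 change.

```python
# Toy sketch of the minimum-distance adjustment idea (not the paper's
# formulation): move a sensitive cell by +s while keeping the row total
# fixed, changing the remaining cells as little as possible in L2.
#
# With a single additivity constraint, the L2-optimal compensation spreads
# the deviation evenly over the other cells; under L1, any split of the
# compensation has the same cost.
def l2_adjust(row, sensitive_idx, s):
    """Return the adjusted row: sensitive cell shifted by +s, remaining
    cells compensated equally so the row total is unchanged."""
    n = len(row)
    adjusted = list(row)
    adjusted[sensitive_idx] += s
    for i in range(n):
        if i != sensitive_idx:
            adjusted[i] -= s / (n - 1)   # equal split minimizes sum of squares
    return adjusted

row = [100.0, 40.0, 60.0]                # made-up cell values, total 200
safe = l2_adjust(row, 0, 10.0)           # protect cell 0 by a deviation of 10
print(safe, sum(safe))                   # total stays at 200
```

Real controlled tabular adjustment solves this over a whole table with many linked additivity constraints and lower/upper bounds, which is exactly the optimization problem whose bounds the attacker scenarios above try to exploit.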
Improving an interior-point approach for large block-angular problems by hybrid preconditioners
http://hdl.handle.net/2117/17953
Title: Improving an interior-point approach for large block-angular problems by hybrid preconditioners
Authors: Bocanegra, Silvana; Castro Pérez, Jordi; Oliveira, Aurelio R.L.
Abstract: The computational time required by interior-point methods is often dominated by the solution of linear systems of equations. An efficient specialized interior-point algorithm for primal block-angular problems has been used to solve these systems by combining Cholesky factorizations for the block constraints and a conjugate gradient based on a power series preconditioner for the linking constraints. In some problems this power series preconditioner turned out to be inefficient in the last interior-point iterations, when the systems became ill-conditioned. In this work this approach is combined with a splitting preconditioner based on LU factorization, which is mainly appropriate for the last interior-point iterations. Computational results are provided for three classes of problems: multicommodity flows (oriented and nonoriented), minimum-distance controlled tabular adjustment for statistical data protection, and the minimum congestion problem. The results show that, in most cases, the hybrid preconditioner improves the performance and robustness of the interior-point solver. In particular, for some block-angular problems the solution time is reduced by a factor of 10.
Fri, 22 Feb 2013 17:36:16 GMT · http://hdl.handle.net/2117/17953
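The hybrid scheme relies on the fact that in preconditioned conjugate gradient the preconditioner is just a routine applied to the residual, so it can be swapped between outer interior-point iterations (a cheap one early on, a stronger one once the systems become ill-conditioned). A minimal pure-Python PCG illustrating that interface — the 3x3 system and the Jacobi preconditioner are illustrative stand-ins, not the paper's power series or splitting preconditioners:

```python
# Minimal preconditioned conjugate gradient. `precond(r)` approximates
# A^{-1} r, so switching preconditioners means passing a different callable.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pcg(A, b, precond, tol=1e-10, max_iter=50):
    """Solve A x = b for symmetric positive definite A by preconditioned CG."""
    x = [0.0] * len(b)
    r = list(b)                            # residual of the zero start point
    z = precond(r)
    p = list(z)
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            break
        z = precond(r)
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
jacobi = lambda r: [ri / A[i][i] for i, ri in enumerate(r)]  # diagonal preconditioner
x = pcg(A, b, jacobi)
residual = max(abs(ri - bi) for ri, bi in zip(matvec(A, x), b))
print(residual)  # essentially zero
```

Production codes apply the same pattern at scale: the abstract's hybrid approach keeps the outer CG fixed and changes only this `precond` routine as the interior-point iterations progress.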
An efficient hybrid iterated local search algorithm for the total tardiness blocking flow shop problem
http://hdl.handle.net/2117/17099
Title: An efficient hybrid iterated local search algorithm for the total tardiness blocking flow shop problem
Authors: Ribas Vila, Immaculada; Companys Pascual, Ramón; Tort-Martorell Llabrés, Xavier
Abstract: This paper deals with the blocking flow shop problem and proposes an Iterated Local Search (ILS) procedure combined with a Variable Neighbourhood Search (VNS) for total tardiness minimization. The proposed ILS uses a NEH-based procedure to generate the initial solution, a local search that combines the insertion and swap neighbourhoods to intensify the exploration, and a perturbation mechanism that applies, d times, three neighbourhood operators to the current solution to diversify the search. The computational evaluation has shown that the insertion neighbourhood is more effective than the swap one, but also that combining both is a good strategy to improve the obtained solutions. Finally, the comparison of the ILS with an iterated greedy algorithm and with a greedy randomized adaptive search procedure has revealed its good performance.
Tue, 11 Dec 2012 12:43:16 GMT · http://hdl.handle.net/2117/17099
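The ILS pattern the abstract describes (initial solution, local search, perturbation, acceptance) can be sketched on a toy problem. The jobs, the insertion/swap operators, and the parameter d below are invented for illustration on a single-machine total tardiness objective; they are not the paper's blocking flow shop algorithm:

```python
# Generic iterated-local-search skeleton on a toy single-machine total
# tardiness problem, illustrating the loop structure only.
import random

JOBS = [(4, 5), (7, 9), (2, 4), (5, 16), (3, 8)]  # (processing time, due date)

def total_tardiness(seq):
    t, tard = 0, 0
    for j in seq:
        p, d = JOBS[j]
        t += p
        tard += max(0, t - d)
    return tard

def local_search(seq):
    """First-improvement insertion moves until no move improves."""
    improved = True
    while improved:
        improved = False
        best_val = total_tardiness(seq)
        for i in range(len(seq)):
            for k in range(len(seq)):
                if i == k:
                    continue
                cand = seq[:i] + seq[i + 1:]
                cand.insert(k, seq[i])
                if total_tardiness(cand) < best_val:
                    seq, best_val, improved = cand, total_tardiness(cand), True
    return seq

def perturb(seq, d=3):
    """Diversification: apply d random swap moves to the current solution."""
    seq = list(seq)
    for _ in range(d):
        i, k = random.sample(range(len(seq)), 2)
        seq[i], seq[k] = seq[k], seq[i]
    return seq

random.seed(0)
current = local_search(list(range(len(JOBS))))
best = current
for _ in range(20):                        # ILS main loop
    candidate = local_search(perturb(current))
    if total_tardiness(candidate) <= total_tardiness(best):
        best = candidate
    if total_tardiness(candidate) <= total_tardiness(current):
        current = candidate
print(total_tardiness(best))
```

The paper's contribution lies in the problem-specific pieces this sketch leaves generic: the NEH-based start, the combined insertion/swap neighbourhood, and the three-operator perturbation.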
Hybrid metaheuristics for the tardiness blocking flow shop problem
http://hdl.handle.net/2117/17098
Title: Hybrid metaheuristics for the tardiness blocking flow shop problem
Authors: Ribas Vila, Immaculada; Companys Pascual, Ramón; Tort-Martorell Llabrés, Xavier
Abstract: This paper proposes an Iterated Local Search (ILS) procedure and an Iterated Greedy (IG) algorithm, both combined with a Variable Neighbourhood Search (VNS), for dealing with the flow shop problem with blocking, in order to minimize the total tardiness of jobs. The structure of both algorithms is very similar, but they differ in the way the search is diversified in the space of solutions. In the ILS algorithm, diversification is performed by a perturbation mechanism that takes into account some characteristics of the problem, whereas the perturbation in the IG is performed through a destruction and construction phase proposed in the literature that has also proven very effective for the makespan criterion. Moreover, the algorithms have been tested with three initial solution procedures. The comparison of these algorithms against an algorithm from the literature has shown their good performance.
Tue, 11 Dec 2012 12:29:09 GMT · http://hdl.handle.net/2117/17098
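The IG destruction and construction phase the abstract refers to can be sketched in isolation: remove d jobs at random, then greedily reinsert each one at its best position. The job data, the parameter d, and the toy single-machine tardiness objective below are invented for illustration; they are not the paper's blocking flow shop implementation:

```python
# Sketch of an iterated-greedy destruction/construction step: remove d
# random jobs, then reinsert each at its cheapest position.
import random

JOBS = [(4, 5), (7, 9), (2, 4), (5, 16), (3, 8), (6, 20)]  # (proc. time, due date)

def total_tardiness(seq):
    t, tard = 0, 0
    for j in seq:
        p, d = JOBS[j]
        t += p
        tard += max(0, t - d)
    return tard

def destruct_construct(seq, d=2):
    """Remove d random jobs, then greedily reinsert each at its best position."""
    seq = list(seq)
    removed = random.sample(seq, d)
    for j in removed:
        seq.remove(j)                      # destruction phase
    for j in removed:                      # construction phase
        best_seq, best_val = None, None
        for k in range(len(seq) + 1):      # try every insertion position
            cand = seq[:k] + [j] + seq[k:]
            val = total_tardiness(cand)
            if best_val is None or val < best_val:
                best_seq, best_val = cand, val
        seq = best_seq
    return seq

random.seed(1)
start = list(range(len(JOBS)))
rebuilt = destruct_construct(start)
print(total_tardiness(start), total_tardiness(rebuilt))
```

In a full IG algorithm this step plays the role the perturbation plays in ILS, which is why the abstract describes the two algorithms as structurally similar but differing in how the search is diversified.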