DSpace Collection:
http://hdl.handle.net/2117/3529
Sat, 25 Apr 2015 09:04:42 GMT
webmaster.bupc@upc.edu
Universitat Politècnica de Catalunya. Servei de Biblioteques i Documentació

On the representability of the biuniform matroid
http://hdl.handle.net/2117/24101
Title: On the representability of the biuniform matroid
Authors: Ball, Simeon Michael; Padró Laimon, Carles; Weiner, Zsuzsa; Xing, Chaoping
Abstract: Every biuniform matroid is representable over all sufficiently large fields. However, it is not known exactly over which finite fields they are representable, and the existence of efficient methods to find a representation for every given biuniform matroid has not been proved. The interest of these problems is due to their implications for secret sharing. The existence of efficient methods to find representations for all biuniform matroids is proved here for the first time. The previously known efficient constructions apply only to a particular class of biuniform matroids, while the known general constructions were not proved to be efficient. In addition, our constructions in many cases provide representations over smaller finite fields.
© 2013, Society for Industrial and Applied Mathematics
Thu, 18 Sep 2014 16:05:12 GMT
matroid theory, representable matroid, biuniform matroid, secret sharing
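The classical baseline that representability results like this one generalize is the Vandermonde representation of the uniform matroid U_{k,n}, which works over F_p whenever the n evaluation points are distinct. A minimal sketch (the function name and the brute-force subset check are ours, for illustration only):

```python
from itertools import combinations
from math import prod

def represents_uniform(k, points, p):
    # Columns (1, x, x^2, ..., x^(k-1)) for x in `points` form a
    # Vandermonde representation of the uniform matroid U_{k,n} over F_p
    # iff every k columns are independent, i.e. every k x k Vandermonde
    # determinant prod_{i<j} (x_j - x_i) is nonzero mod p.
    return all(prod(b - a for a, b in combinations(sub, 2)) % p != 0
               for sub in combinations(points, k))
```

For example, the points 0, 1, 2, 3 give a representation of U_{2,4} over F_5, while reusing a residue class (0 and 5) breaks independence.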

Cropping Euler factors of modular L-functions
http://hdl.handle.net/2117/20759
Title: Cropping Euler factors of modular L-functions
Authors: González Rovira, Josep; Jiménez Urroz, Jorge; Lario Loyo, Joan Carles
Abstract: According to the Birch and Swinnerton-Dyer conjectures, if A/Q is an abelian variety, then its L-function must capture a substantial part of the properties of A. The smallest number field L where A has all its endomorphisms defined must also play a role. This article deals with the relationship between these two objects in the specific case of modular abelian varieties A_f/Q associated to weight-2 newforms for the group Γ_1(N). Specifically, our goal is to relate ord_{s=1} L(A_f/Q, s) with the order at s = 1 of Euler products restricted to primes that split completely in L. This is attained when a power of A_f is isogenous over Q to the Weil restriction of the building block of A_f. We give separate formulae for the CM and non-CM cases.
Mon, 25 Nov 2013 17:05:43 GMT
Abelian varieties, Distribution of Frobenius elements, L-functions

More hybrid and secure protection of statistical data sets
http://hdl.handle.net/2117/17412
Title: More hybrid and secure protection of statistical data sets
Authors: Herranz Sotoca, Javier; Nin Guerrero, Jordi; Solé Simó, Marc
Abstract: Different methods and paradigms to protect data sets containing sensitive statistical information have been proposed and studied. The idea is to publish a perturbed version of the data set that does not leak confidential information, but that still allows users to obtain meaningful statistical values about the original data. The two main paradigms for data set protection are the classical one and the synthetic one. Recently, the possibility of combining the two paradigms, leading to a hybrid paradigm, has been considered. In this work, we first analyze the security of some synthetic and (partially) hybrid methods that have been proposed in recent years, and we conclude that they suffer from a high interval disclosure risk. We then propose the first fully hybrid SDC methods; unfortunately, they also suffer from quite a high interval disclosure risk. To mitigate this, we propose a post-processing technique that can be applied to any data set protected with a synthetic method, with the goal of reducing its interval disclosure risk. Throughout the paper we describe a set of experiments performed on reference data sets that support our claims.
Thu, 17 Jan 2013 18:24:07 GMT
Statistical data sets protection, synthetic methods, hybrid methods, interval disclosure risk
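The interval disclosure risk studied here can be illustrated with a toy rank-based check: an original value counts as disclosed when it falls inside a narrow rank window around its protected counterpart. This simplified measure and the function name are ours, not the paper's exact definition:

```python
def interval_disclosure_risk(original, protected, p=0.1):
    # For each record, build an interval of +/- p*n rank positions
    # around the protected value in the sorted protected column, and
    # count the original value as disclosed if it lies inside.
    n = len(original)
    w = max(1, int(p * n))
    ranked = sorted(protected)
    hits = 0
    for orig, prot in zip(original, protected):
        r = ranked.index(prot)
        lo, hi = ranked[max(0, r - w)], ranked[min(n - 1, r + w)]
        hits += lo <= orig <= hi
    return hits / n
```

An unperturbed release scores 1.0 (full disclosure), while values far from their originals score near 0.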

Kd-trees and the real disclosure risks of large statistical databases
http://hdl.handle.net/2117/16561
Title: Kd-trees and the real disclosure risks of large statistical databases
Authors: Herranz Sotoca, Javier; Nin Guerrero, Jordi; Solé Simó, Marc
Abstract: In data privacy, record linkage can be used as an estimator of the disclosure risk of protected data. To model the worst-case scenario, one normally attempts to link records from the original data to the protected data. In this paper we introduce a parametrization of record linkage in terms of a weighted mean and its weights, and provide a supervised learning method to determine the optimum weights for the linkage process, that is, the parameters yielding a maximal record linkage between the protected and original data. We compare our method to standard record linkage with data from several protection methods widely used in statistical disclosure control, and study the results taking into account the performance in the linkage process and its computational effort.
Tue, 25 Sep 2012 11:53:08 GMT
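The weighted-mean parametrization of distance-based record linkage can be sketched as follows: each original record is linked to the protected record minimizing a weighted mean of per-attribute distances, and the linkage rate is the fraction of correct links. The function name and the toy data are ours:

```python
def weighted_linkage_rate(original, protected, weights):
    # Link each original record to the nearest protected record under a
    # weighted mean of per-attribute absolute differences.  Record i in
    # both files is assumed to describe the same individual, so a link
    # is correct when the chosen index equals i.
    def dist(a, b):
        return sum(w * abs(x - y) for w, x, y in zip(weights, a, b))
    correct = sum(
        min(range(len(protected)), key=lambda j: dist(rec, protected[j])) == i
        for i, rec in enumerate(original))
    return correct / len(original)
```

The supervised step described in the abstract would then search for the weight vector maximizing this rate.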

Orders of CM elliptic curves modulo p with at most two primes
http://hdl.handle.net/2117/15793
Title: Orders of CM elliptic curves modulo p with at most two primes
Authors: Iwaniec, H.; Jiménez Urroz, Jorge
Abstract: Nowadays the generation of cryptosystems requires two main aspects: first the security, and then the size of the keys involved in the construction and communication process. For the former, one needs a difficult mathematical assumption which ensures the system will not be broken unless a well-known difficult problem is solved. In this context, one of the most famous assumptions underlying a wide variety of cryptosystems is the computation of logarithms in finite fields and the Diffie-Hellman assumption. However, it is also well known that elliptic curves provide good examples of representations of abelian groups, reducing the size of the keys needed to guarantee the same level of security as in the finite-field case. The first thing one needs in order to perform elliptic logarithms which are computationally secure is to fix a finite field, F_p, and a curve, E/F_p, defined over the field, such that |E(F_p)| has a prime factor as large as possible. In practice the problem of finding such a pair of curve and field seems simple: just take a curve with integer coefficients and a prime p of good reduction at random, and see whether |E(F_p)| has a big prime factor. However, the theory that makes the previous algorithm useful is by no means obvious, clear, or complete. For example, it is well known that supersingular elliptic curves have to be avoided in this process, since they reduce the security of any cryptosystem based on the Diffie-Hellman assumption on the elliptic logarithm. But more importantly, the process will be feasible whenever the probability of finding a pair (E, p) with a big prime factor q | |E(F_p)| is big enough. One problem arises naturally from the above.
Tue, 08 May 2012 11:42:08 GMT
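The random search the abstract describes (pick E and p, then test whether |E(F_p)| has a large prime factor) can be sketched naively for small p by counting points with the Legendre symbol; real implementations use Schoof-type point counting instead. The function names are ours:

```python
def legendre(a, p):
    # Legendre symbol (a/p) via Euler's criterion, for an odd prime p.
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def curve_order(a, b, p):
    # |E(F_p)| for E: y^2 = x^3 + a*x + b.  Each x contributes
    # 1 + (f(x)/p) affine points; add 1 for the point at infinity.
    return p + 1 + sum(legendre(x**3 + a*x + b, p) for x in range(p))

def largest_prime_factor(n):
    # Trial division; enough for the tiny orders this sketch produces.
    f, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            f, n = d, n // d
        d += 1
    return n if n > 1 else f
```

For instance, y^2 = x^3 + x + 1 over F_5 has 9 points, and by Hasse's bound any order lies within 2*sqrt(p) of p + 1.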

Classifying data from protected statistical datasets
http://hdl.handle.net/2117/14416
Title: Classifying data from protected statistical datasets
Authors: Herranz Sotoca, Javier; Matwin, Stan; Nin Guerrero, Jordi; Torra i Reventós, Vicenç
Abstract: Statistical Disclosure Control (SDC) has been an active research area in recent years. The goal is to transform an original dataset X into a protected one X′, such that X′ does not reveal any relation between confidential and (quasi-)identifier attributes and such that X′ can be used to compute reliable statistical information about X. Many specific protection methods have been proposed and analyzed with respect to the levels of privacy and utility that they offer. However, when measuring utility, only differences between the statistical values of X and X′ are considered. This would indicate that datasets protected by SDC methods can be used only for statistical purposes. We show in this paper that this is not the case, because a protected dataset X′ can be used to construct good classifiers for future data. To do so, we describe an extensive set of experiments that we have run with different SDC protection methods and different (real) datasets. In general, the resulting classifiers are very good, which is good news for both the SDC and the privacy-preserving data mining communities. In particular, our results question the necessity of some specific protection methods that have appeared in the privacy-preserving data mining (PPDM) literature with the clear goal of providing good classification.
Thu, 05 Jan 2012 13:01:13 GMT
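The experimental setup (train a classifier on the protected file, then classify new records) can be illustrated with the simplest possible learner, a 1-nearest-neighbour classifier. The paper evaluates standard classifiers; this toy version is only a sketch of the idea:

```python
def nn_classify(train_X, train_y, x):
    # Predict the label of the closest training record under squared
    # Euclidean distance.  Training data would be the protected file;
    # x is a future (unprotected) record.
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    best = min(range(len(train_X)), key=lambda i: dist(train_X[i], x))
    return train_y[best]
```

If the protection method preserves the class geometry, predictions on original-distribution records remain accurate, which is the phenomenon the experiments measure.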

On the disclosure risk of multivariate microaggregation
http://hdl.handle.net/2117/12852
Title: On the disclosure risk of multivariate microaggregation
Authors: Nin Guerrero, Jordi; Herranz Sotoca, Javier; Torra i Reventós, Vicenç
Abstract: The aim of data protection methods is to protect a microdata file while both minimizing the disclosure risk and preserving the data utility. Microaggregation is one of the most popular such methods among statistical agencies. Record linkage is the standard mechanism used to measure the disclosure risk of a microdata protection method. However, only standard, and quite generic, record linkage methods are usually considered, whereas more specific record linkage techniques can be more appropriate to evaluate the disclosure risk of some protection methods.
In this paper we present a new record linkage technique, specific for microaggregation, which obtains more correct links than standard techniques. We have tested the new technique with MDAV microaggregation and two other microaggregation methods, based on projections, that we propose here for the first time. The direct consequence is that these microaggregation methods have a higher disclosure risk than believed up to now.
Fri, 01 Jul 2011 11:20:50 GMT
Privacy in statistical databases, Disclosure risk, Record linkage, Microaggregation

How to group attributes in multivariate microaggregation
http://hdl.handle.net/2117/12851
Title: How to group attributes in multivariate microaggregation
Authors: Nin Guerrero, Jordi; Herranz Sotoca, Javier; Torra i Reventós, Vicenç
Abstract: Microaggregation is one of the most employed microdata protection methods. It builds clusters of at least k original records and then replaces these records with the centroid of the cluster. When the number of attributes of the dataset is large, one usually splits the dataset into smaller blocks of attributes and then applies microaggregation to each block, successively and independently. In this way, the effect of the noise introduced by microaggregation is reduced, at the cost of losing the k-anonymity property. In this work we show that, besides the specific microaggregation method, the value of the parameter k and the number of blocks in which the dataset is split, there exists another factor which influences the quality of the microaggregation: the way in which the attributes are grouped to form the blocks. When correlated attributes are grouped in the same block, the statistical utility of the protected dataset is higher. In contrast, when correlated attributes are dispersed into different blocks, the achieved anonymity is higher, and so the disclosure risk is lower. We present quantitative evaluations of such statements based on different experiments on real datasets.
Fri, 01 Jul 2011 10:03:47 GMT
Microaggregation, Attribute selection, Statistical disclosure control
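The core microaggregation step (clusters of at least k records replaced by their centroid) can be sketched in a simplified univariate, fixed-size form; this is not MDAV or the paper's method, just an illustration, and the function name is ours:

```python
def microaggregate(values, k):
    # Sort the records, partition them into consecutive groups of size k
    # (the final group absorbs any remainder so every group has at least
    # k records), and replace each value by its group centroid (mean).
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    i, n = 0, len(values)
    while i < n:
        j = n if n - i < 2 * k else i + k  # last group keeps >= k records
        group = order[i:j]
        mean = sum(values[g] for g in group) / len(group)
        for g in group:
            out[g] = mean
        i = j
    return out
```

Every released value is now shared by at least k records, which is exactly the k-anonymity-style guarantee that independent per-block application erodes.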

The Diameter of undirected graphs associated to plane tessellations
http://hdl.handle.net/2117/12662
Title: The Diameter of undirected graphs associated to plane tessellations
Authors: Andrés Yebra, José Luis; Fiol Mora, Miquel Àngel; Morillo Bosch, M. Paz; Alegre de Miguel, Ignacio
Abstract: This paper studies the diameter of some families of undirected graphs that can be associated to plane tessellations which fully represent them. More precisely, we concentrate upon maximizing the order of the graphs for given values of their diameter and degree, where the study always leads to the optimal solutions.
Fri, 27 May 2011 08:21:51 GMT
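The diameter being optimized here is the usual graph-theoretic one: the longest shortest path between any two vertices. For a concrete graph it can be computed with a breadth-first search from every vertex (a sketch; the function names are ours):

```python
from collections import deque

def diameter(adj):
    # Diameter of a connected undirected graph given as an adjacency
    # dict: the maximum BFS eccentricity over all vertices.
    def eccentricity(s):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return max(dist.values())
    return max(eccentricity(v) for v in adj)
```

A path on four vertices has diameter 3; a 5-cycle has diameter 2.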

On fields of definition of torsion points of elliptic curves with complex multiplication
http://hdl.handle.net/2117/12251
Title: On fields of definition of torsion points of elliptic curves with complex multiplication
Authors: Dieulefait, Luis Victor; Gonzalez Jimenez, Enrique; Jiménez Urroz, Jorge
Abstract: For any elliptic curve E defined over the rationals with complex multiplication (CM) and for every prime p, we describe the image of the mod p Galois representation attached to E. We deduce information about the field of definition of torsion points of these curves; in particular, we classify all cases
where there are torsion points over Galois number fields not containing the field of definition of the CM.
Tue, 05 Apr 2011 14:47:47 GMT

On the optimization of bipartite secret sharing schemes
http://hdl.handle.net/2117/12185
Title: On the optimization of bipartite secret sharing schemes
Authors: Farras Ventura, Oriol; Metcalf-Burton, Jessica Ruth; Padró Laimon, Carles; Vázquez González, Leonor
Abstract: Optimizing the ratio between the maximum length of the shares and the length of the secret value in secret sharing schemes for general access structures is an extremely difficult and long-standing open problem. In this paper, we study it for bipartite access structures, in which the set of participants is divided into two parts, and all participants in each part play an equivalent role. We focus on the search of lower bounds by using a special class of polymatroids that is introduced here, the bipartite ones. We present a method based on linear programming to compute, for every given bipartite access structure, the best lower bound that can be obtained by this combinatorial method. In addition, we obtain some general lower bounds that improve the previously known ones, and we construct optimal secret sharing schemes for a family of bipartite access structures.
Thu, 31 Mar 2011 10:21:07 GMT

On secret sharing schemes, matroids and polymatroids
http://hdl.handle.net/2117/11444
Title: On secret sharing schemes, matroids and polymatroids
Authors: Martí Farré, Jaume; Padró Laimon, Carles
Abstract: The complexity of a secret sharing scheme is defined as the ratio between the maximum length of the shares and the length of the secret. The optimization of this parameter for general access structures is an important and very difficult open problem
in secret sharing. We explore in this paper the connections of this open problem with
matroids and polymatroids.
Matroid ports were introduced by Lehman in 1964. A forbidden minor characterization
of matroid ports was given by Seymour in 1976. These results precede the invention of
secret sharing by Shamir in 1979. Important connections between ideal secret sharing
schemes and matroids were discovered by Brickell and Davenport in 1991. Their results
can be restated as follows: every ideal secret sharing scheme defines a matroid, and its access structure is a port of that matroid.
Our main result is a lower bound on the optimal complexity of access structures that
are not matroid ports. Namely, by using the aforementioned characterization of matroid
ports by Seymour, we generalize the result by Brickell and Davenport by proving that,
if the length of every share in a secret sharing scheme is less than 3/2 times the length of the secret, then its access structure is a matroid port. This generalizes and explains a phenomenon that was observed in several families of access structures.
In addition, we introduce a new parameter to represent the best lower bound on the
optimal complexity that can be obtained by taking into account that the joint Shannon
entropies of a set of random variables define a polymatroid. We prove that every bound that is obtained by this technique for an access structure applies to its dual as well.
Finally, we present a construction of linear secret sharing schemes for the ports of the Vámos and the non-Desargues matroids. In this way new upper bounds on their optimal complexity are obtained, a contribution to the search for access structures whose optimal complexity lies between 1 and 3/2.
Mon, 21 Feb 2011 11:19:18 GMT
Secret sharing
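The ideal schemes of Brickell and Davenport are those with complexity exactly 1, and Shamir's 1979 threshold scheme, cited in the abstract, is the canonical example: every share is a single field element, the same size as the secret. A minimal sketch (function names ours; requires Python 3.8+ for the modular inverse via `pow`):

```python
import random

def shamir_share(secret, t, n, p):
    # Shamir (1979): evaluate a random degree-(t-1) polynomial with
    # constant term `secret` at the points 1..n over the field F_p.
    # Each share is one field element, so the scheme has complexity 1.
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return [(x, f(x)) for x in range(1, n + 1)]

def shamir_reconstruct(shares, p):
    # Lagrange interpolation at x = 0 from any t shares.
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret
```

Any t of the n shares recover the secret; fewer than t reveal nothing, which is what makes threshold access structures matroid ports.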

On server trust in private proxy auctions
http://hdl.handle.net/2117/11383
Title: On server trust in private proxy auctions
Authors: Di Crescenzo, Giovanni; Herranz Sotoca, Javier; Sáez Moreno, Germán
Abstract: We investigate proxy auctions, an auction model which is proving very successful for online businesses (e.g. http://www.ebay.com), where a trusted server manages bids from clients by continuously updating the current price of the item and the currently winning bid, as well as keeping the winning client's maximum bid private.
We propose techniques for reducing the trust in the server by defining and achieving
a security property, called server integrity. Informally, this property protects
clients from a novel and large class of attacks from a corrupted server by allowing
them to verify the correctness of updates to the current price and the currently
winning bid. Our new auction scheme achieves server integrity and satisfies two important
properties that are not enjoyed by previous work in the literature: it has minimal
interaction, and only requires a single trusted server. The main ingredients of
our scheme are two minimal-round implementations of zero-knowledge proofs for
proving lower bounds on encrypted values: one based on discrete logarithms that is
more efficient but uses the random oracle assumption, and another based on quadratic
residuosity that only uses standard intractability assumptions but is less efficient.
Tue, 15 Feb 2011 12:52:38 GMT
Electronic auctions, zero-knowledge proofs, server trust, cryptography, communications security

Optimal symbol alignment distance: a new distance for sequences of symbols
http://hdl.handle.net/2117/11063
Title: Optimal symbol alignment distance: a new distance for sequences of symbols
Authors: Herranz Sotoca, Javier; Nin Guerrero, Jordi; Solé Simó, Marc
Abstract: Comparison functions for sequences (of symbols) are important components of many applications, for example clustering, data cleansing and integration. For years, many efforts have been made to improve the performance of such comparison functions. Improvements have been made either at the cost of reducing the accuracy of the comparison, or by compromising certain basic characteristics of the functions, such as the triangular inequality. In this paper, we propose a new distance for sequences of symbols (or strings) called the Optimal Symbol Alignment distance (OSA distance, for short). This distance has a very low cost in practice, which makes it a suitable candidate for computing distances in applications with large amounts of (very long) sequences. After providing a mathematical proof that the OSA distance is a true distance, we present experiments for different scenarios (DNA sequences, record linkage, ...), showing that the proposed distance outperforms, in terms of execution time and/or accuracy, other well-known comparison functions such as the Edit or Jaro-Winkler distances.
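For context, one of the "well-known comparison functions" the abstract benchmarks against can be sketched directly. This is the classic Levenshtein edit distance, shown here as an assumed baseline; it is not the paper's OSA distance.

```python
# Baseline for comparison (an assumption, not the paper's OSA distance):
# the classic Levenshtein edit distance, computed by dynamic programming
# in O(|a|*|b|) time and O(|b|) space.

def edit_distance(a: str, b: str) -> int:
    """Minimum number of insertions, deletions and substitutions turning a into b."""
    prev = list(range(len(b) + 1))   # distances from the empty prefix of a
    for i, ca in enumerate(a, start=1):
        curr = [i]                   # distance from a[:i] to the empty string
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return prev[-1]

# Edit distance satisfies the triangle inequality -- the property the
# abstract notes is sometimes sacrificed for speed.
assert edit_distance("kitten", "sitting") == 3
```

Its quadratic cost per pair is what makes faster alternatives attractive for large collections of long sequences.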
Mon, 17 Jan 2011 11:44:43 GMT
http://hdl.handle.net/2117/11063
2011-01-17T11:44:43Z
Herranz Sotoca, Javier; Nin Guerrero, Jordi; Solé Simó, Marc
no

Squarefree discriminants of Frobenius rings
http://hdl.handle.net/2117/10419
Title: Squarefree discriminants of Frobenius rings
Authors: David, Chantal; Jiménez Urroz, Jorge
Abstract: Let E be an elliptic curve over Q. It is well known that the ring of endomorphisms of $E_p$, the reduction of E modulo a prime p of ordinary reduction, is an order of the imaginary quadratic field $Q(\pi_p)$ generated by the Frobenius element $\pi_p$. When the curve has complex multiplication (CM), this is always a fixed field as the prime varies. However, when the curve has no CM, very little is known, not only about the order, but also about the fields that might appear as the algebra of endomorphisms as the prime varies. The ring of endomorphisms is clearly related to the arithmetic of $a_p^2 - 4p$, the discriminant of the characteristic polynomial of the Frobenius element. In this paper, we are interested in the function $\pi^{sf}_{E,r,h}(x)$ counting the number of primes p up to x such that $a_p^2 - 4p$ is squarefree and in the congruence class r modulo h.
We give the precise asymptotics for $\pi^{sf}_{E,r,h}(x)$ when averaging over elliptic curves defined over the rationals, and we discuss the relation of this result with the Lang-Trotter conjecture and with some other problems related to the curve modulo p.
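The link between the discriminant and the endomorphism ring mentioned in the abstract can be summarized by standard facts about ordinary reduction (background, not results specific to this paper):

```latex
% The Frobenius element satisfies its characteristic polynomial,
\[
  \pi_p^2 - a_p \pi_p + p = 0,
  \qquad \operatorname{disc} = a_p^2 - 4p < 0,
\]
% so $\mathbb{Z}[\pi_p]$ is an order of discriminant $a_p^2 - 4p$ in
% $K = \mathbb{Q}(\pi_p)$, sitting between the Frobenius order and the
% maximal order:
\[
  \mathbb{Z}[\pi_p] \;\subseteq\; \operatorname{End}(E_p) \;\subseteq\; \mathcal{O}_K .
\]
% When $a_p^2 - 4p$ is squarefree, $\mathbb{Z}[\pi_p]$ is already the
% maximal order, so $\operatorname{End}(E_p) = \mathcal{O}_K$ is pinned
% down by the discriminant alone.
```

This is why counting primes with squarefree $a_p^2 - 4p$ gives direct information about the ring of endomorphisms.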
Fri, 26 Nov 2010 13:03:25 GMT
http://hdl.handle.net/2117/10419
2010-11-26T13:03:25Z
David, Chantal; Jiménez Urroz, Jorge
no