DSpace Collection:
http://hdl.handle.net/2117/3529
Universitat Politècnica de Catalunya. Servei de Biblioteques i Documentació
Contact: webmaster.bupc@upc.edu
Feed date: Thu, 31 Jul 2014 13:45:17 GMT
http://hdl.handle.net/2117/20759
Title: Cropping Euler factors of modular L-functions
Authors: González Rovira, Josep; Jiménez Urroz, Jorge; Lario Loyo, Joan Carles
Abstract: According to the Birch and Swinnerton-Dyer conjectures, if A/Q is an abelian variety, then its L-function must capture a substantial part of the properties of A. The smallest number field L where A has all its endomorphisms defined must also play a role. This article deals with the relationship between these two objects in the specific case of modular abelian varieties Af/Q associated to weight 2 newforms for the group Γ1(N). Specifically, our goal is to relate ord_{s=1} L(Af/Q, s) with the order at s = 1 of Euler products restricted to primes that split completely in L. This is attained when a power of Af is isogenous over Q to the Weil restriction of the building block of Af. We give separate formulae for the CM and non-CM cases.
Keywords: Abelian varieties, Distribution of Frobenius elements, L-functions
http://hdl.handle.net/2117/17412
Title: More hybrid and secure protection of statistical data sets
Authors: Herranz Sotoca, Javier; Nin Guerrero, Jordi; Solé Simó, Marc
Abstract: Different methods and paradigms to protect data sets containing sensitive statistical information have been proposed and
studied. The idea is to publish a perturbed version of the data set that does not leak confidential information, but that still allows users
to obtain meaningful statistical values about the original data. The two main paradigms for data set protection are the classical one and
the synthetic one. Recently, the possibility of combining the two paradigms, leading to a hybrid paradigm, has been considered. In this
work, we first analyze the security of some synthetic and (partially) hybrid methods that have been proposed in the last years, and we
conclude that they suffer from a high interval disclosure risk. We then propose the first fully hybrid SDC methods; unfortunately, they
also suffer from a quite high interval disclosure risk. To mitigate this, we propose a postprocessing technique that can be applied to any
data set protected with a synthetic method, with the goal of reducing its interval disclosure risk. We describe through the paper a set of
experiments performed on reference data sets that support our claims.
Keywords: Statistical data sets protection, synthetic methods, hybrid methods, interval disclosure risk
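A toy version of the interval disclosure risk discussed in the abstract above can be sketched as follows. This is only a hedged illustration: the paper's exact definition may differ, and the `width` parameter and the sample data here are illustrative choices, not taken from the paper.

```python
def interval_disclosure_risk(original, protected, width=0.1):
    """Fraction of original values that fall inside an interval centred on
    the corresponding protected value (a toy leakage measure).

    `width` is the interval half-width as a fraction of the range of the
    protected values -- an illustrative parameter, not the paper's.
    """
    lo, hi = min(protected), max(protected)
    half = width * (hi - lo)
    hits = sum(1 for o, p in zip(original, protected) if p - half <= o <= p + half)
    return hits / len(original)

# A lightly perturbed attribute should score much higher (riskier)
# than a strongly perturbed one.
original = [10.0, 12.0, 15.0, 20.0, 25.0, 30.0]
weak     = [10.5, 11.8, 15.2, 19.7, 25.4, 29.9]   # small noise added
strong   = [25.0, 30.0, 10.0, 12.0, 15.0, 20.0]   # values swapped around
print(interval_disclosure_risk(original, weak))    # high risk
print(interval_disclosure_risk(original, strong))  # low risk
```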
http://hdl.handle.net/2117/16561
Title: Kd-trees and the real disclosure risks of large statistical databases
Authors: Herranz Sotoca, Javier; Nin Guerrero, Jordi; Solé Simó, Marc
Abstract: In data privacy, record linkage can be used as an estimator of the disclosure risk of protected data. To
model the worst case scenario one normally attempts to link records from the original data to the protected
data. In this paper we introduce a parametrization of record linkage in terms of a weighted mean
and its weights, and provide a supervised learning method to determine the optimum weights for the
linkage process. That is, the parameters yielding a maximal record linkage between the protected and original
data. We compare our method to standard record linkage with data from several protection methods
widely used in statistical disclosure control, and study the results taking into account the
performance in the linkage process, and its computational effort.
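The parametrization described above (record linkage driven by a weighted per-attribute distance) can be sketched in miniature: link each protected record to its nearest original record and count correct re-identifications. The weights here are supplied by hand for illustration; the paper's contribution is a supervised method for learning them.

```python
def weighted_linkage_rate(originals, protecteds, weights):
    """Link each protected record to the nearest original record under a
    weighted squared-distance, and return the fraction of records that are
    re-identified correctly (records are aligned by index).

    `weights` holds one non-negative weight per attribute; here they are
    chosen by the caller, whereas the paper learns the optimal ones.
    """
    def dist(a, b):
        return sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b))

    correct = 0
    for i, prot in enumerate(protecteds):
        best = min(range(len(originals)), key=lambda j: dist(prot, originals[j]))
        correct += (best == i)
    return correct / len(protecteds)

# The second attribute is heavily perturbed; down-weighting it improves linkage.
originals  = [(1.0, 100.0), (2.0, 200.0), (3.0, 300.0)]
protecteds = [(1.1, 290.0), (2.1, 110.0), (3.1, 190.0)]
print(weighted_linkage_rate(originals, protecteds, (1.0, 1.0)))  # noisy attribute dominates
print(weighted_linkage_rate(originals, protecteds, (1.0, 0.0)))  # noisy attribute ignored
```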
http://hdl.handle.net/2117/15793
Title: Orders of CM elliptic curves modulo p with at most two primes
Authors: Iwaniec, H.; Jiménez Urroz, Jorge
Abstract: Nowadays the generation of cryptosystems requires two main aspects: first the
security, and then the size of the keys involved in the construction and communication
process. For the former, one needs a difficult mathematical assumption which ensures the
system will not be broken unless a well-known difficult problem is solved. In this
context, one of the most famous assumptions underlying a wide variety of cryptosystems is
the computation of logarithms in finite fields and the Diffie-Hellman assumption.
However, it is also well known that elliptic curves provide good examples of
representations of abelian groups, reducing the size of the keys needed to guarantee the
same level of security as in the finite field case. The first thing one needs in order to
perform elliptic logarithms which are computationally secure is to fix a finite field,
Fp, and a curve, E/Fp, defined over the field, such that |E(Fp)| has a prime factor as
large as possible. In practice the problem of finding such a pair of curve and field
seems simple: just take a curve with integer coefficients and a prime p of good reduction
at random, and see if |E(Fp)| has a big prime factor. However, the theory that makes the
previous algorithm useful is by no means obvious, clear, or complete. For example, it is
well known that supersingular elliptic curves have to be avoided in the previous process,
since they reduce the security of any cryptosystem based on the Diffie-Hellman assumption
on the elliptic logarithm. But more importantly, the process will be feasible whenever
the probability of finding a pair (E, p) with a big prime factor q | |E(Fp)| is big
enough. One problem arises naturally from the above.
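The procedure the abstract describes (pick a curve and a prime, compute |E(Fp)|, check for a large prime factor) can be sketched with brute-force point counting. This is only viable for tiny primes; real implementations use Schoof-type point counting, and the curve and prime below are arbitrary illustrative choices.

```python
def curve_order(a, b, p):
    """|E(Fp)| for E: y^2 = x^3 + a*x + b, by brute force: the point at
    infinity, plus for each x the number of y with y^2 = x^3 + ax + b
    mod p.  Only feasible for tiny p."""
    squares = {}
    for y in range(p):
        r = y * y % p
        squares[r] = squares.get(r, 0) + 1
    count = 1  # point at infinity
    for x in range(p):
        count += squares.get((x * x * x + a * x + b) % p, 0)
    return count

def largest_prime_factor(n):
    """Largest prime factor of n >= 1 by trial division."""
    f, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            f, n = d, n // d
        d += 1
    return max(f, n) if n > 1 else f

# E: y^2 = x^3 + x + 1 over F_101 (arbitrary small example).
n = curve_order(1, 1, 101)
print(n, largest_prime_factor(n))
```

By Hasse's theorem the order must satisfy |n - (p + 1)| <= 2*sqrt(p), which is a quick sanity check on the count.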
http://hdl.handle.net/2117/14416
Title: Classifying data from protected statistical datasets
Authors: Herranz Sotoca, Javier; Matwin, Stan; Nin Guerrero, Jordi; Torra i Reventós, Vicenç
Abstract: Statistical Disclosure Control (SDC) has been an active research area in recent years. The goal is to transform an original dataset X into a protected one X′, such that X′ does not reveal any relation between confidential and (quasi-)identifier attributes and such that X′ can be
used to compute reliable statistical information about X. Many specific protection methods have been proposed and analyzed with respect to the
levels of privacy and utility that they offer. However, when measuring utility, only differences between the statistical values of X and X′ are considered. This would indicate that datasets protected by SDC methods can be used only for statistical purposes.
We show in this paper that this is not the case, because a protected dataset X′ can be used to construct good classifiers for future data. To do so, we describe an extensive set of experiments that we have run with different SDC protection methods and different (real) datasets. In general, the resulting classifiers are very good, which is good news for both the SDC and the Privacy-Preserving Data Mining communities. In particular, our results question the necessity of some specific protection methods that have appeared in the
privacy-preserving data mining (PPDM) literature with the clear goal of providing good classification.
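The claim above — a classifier trained on a protected dataset can still classify the original records well — can be illustrated with a minimal nearest-centroid classifier. The data, the noise level, and the classifier choice below are all illustrative assumptions; the paper's experiments use real datasets and standard SDC methods.

```python
import random

def nearest_centroid_fit(rows, labels):
    """Compute one centroid per class from the training rows."""
    sums, counts = {}, {}
    for row, lab in zip(rows, labels):
        s = sums.setdefault(lab, [0.0] * len(row))
        for i, v in enumerate(row):
            s[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def nearest_centroid_predict(centroids, row):
    """Assign a row to the class with the nearest centroid."""
    def d2(c):
        return sum((x - y) ** 2 for x, y in zip(c, row))
    return min(centroids, key=lambda lab: d2(centroids[lab]))

random.seed(0)
# Two well-separated classes; the "protected" copy adds noise to each value.
original = [(i / 10, i / 10) for i in range(10)] + \
           [(5 + i / 10, 5 + i / 10) for i in range(10)]
labels = ["a"] * 10 + ["b"] * 10
protected = [(x + random.gauss(0, 0.3), y + random.gauss(0, 0.3)) for x, y in original]

centroids = nearest_centroid_fit(protected, labels)      # train on protected data
accuracy = sum(nearest_centroid_predict(centroids, r) == lab
               for r, lab in zip(original, labels)) / len(original)  # test on originals
print(accuracy)
```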
http://hdl.handle.net/2117/12852
Title: On the disclosure risk of multivariate microaggregation
Authors: Nin Guerrero, Jordi; Herranz Sotoca, Javier; Torra i Reventós, Vicenç
Abstract: The aim of data protection methods is to protect a microdata file both minimizing the disclosure risk and preserving the data utility. Microaggregation is one of the most popular such methods among statistical agencies. Record linkage is the standard mechanism used to measure the disclosure risk of a microdata protection method. However, only standard, and quite generic, record linkage methods are usually considered, whereas more specific record linkage techniques can be more appropriate to evaluate the disclosure risk of some protection methods.
In this paper we present a new record linkage technique, specific to microaggregation, which obtains more correct links than standard techniques. We have tested the new technique with MDAV microaggregation and two other microaggregation methods, based on projections, that we propose here for the first time. The direct consequence is that these microaggregation methods have a higher disclosure risk than believed up to now.
Keywords: Privacy in statistical databases, Disclosure risk, Record linkage, Microaggregation
http://hdl.handle.net/2117/12851
Title: How to group attributes in multivariate microaggregation
Authors: Nin Guerrero, Jordi; Herranz Sotoca, Javier; Torra i Reventós, Vicenç
Abstract: Microaggregation is one of the most employed microdata protection methods. It builds clusters of at least k original records, and then replaces these records with the centroid
of the cluster. When the number of attributes of the dataset is large, one usually splits the dataset into smaller blocks of attributes, and then applies microaggregation to each block, successively and independently. In this way, the effect of the noise introduced by microaggregation is reduced, at the cost of losing the k-anonymity property. In this work we show that, besides the specific microaggregation method, the value of the parameter k and the number of blocks in which the dataset is split, there exists another factor which influences the quality of the microaggregation: the way in which the attributes are grouped to form the blocks. When correlated attributes are grouped
in the same block, the statistical utility of the protected dataset is higher. In contrast, when correlated attributes are dispersed into different blocks, the achieved anonymity is higher, and so the disclosure risk is lower. We present quantitative evaluations of these statements based on different experiments on real datasets.
Keywords: Microaggregation, Attribute selection, Statistical disclosure control
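The blockwise protection described above (split the attributes into blocks, microaggregate each block independently with parameter k) can be sketched with a deliberately naive microaggregation. The grouping strategy below (sort and cut into consecutive groups of k) is a simplification; real methods such as MDAV build the groups more carefully.

```python
def microaggregate(rows, k):
    """Naive microaggregation: sort records, form consecutive groups of k
    (merging a short tail group into its predecessor), and replace every
    record in a group by the group centroid."""
    order = sorted(range(len(rows)), key=lambda i: rows[i])
    groups = [order[i:i + k] for i in range(0, len(order), k)]
    if len(groups) > 1 and len(groups[-1]) < k:
        groups[-2].extend(groups.pop())
    out = [None] * len(rows)
    for g in groups:
        centroid = tuple(sum(rows[i][j] for i in g) / len(g)
                         for j in range(len(rows[0])))
        for i in g:
            out[i] = centroid
    return out

def protect_in_blocks(rows, blocks, k):
    """Split the attributes into `blocks` (lists of column indices) and
    microaggregate each block independently, as the abstract describes."""
    cols = {}
    for block in blocks:
        sub = [tuple(r[j] for j in block) for r in rows]
        prot = microaggregate(sub, k)
        for pos, j in enumerate(block):
            cols[j] = [p[pos] for p in prot]
    return [tuple(cols[j][i] for j in range(len(rows[0]))) for i in range(len(rows))]

rows = [(1.0, 10.0, 100.0), (2.0, 20.0, 200.0), (3.0, 30.0, 300.0), (4.0, 40.0, 400.0)]
# Group the two correlated attributes {0, 1} together and attribute {2} alone.
print(protect_in_blocks(rows, [[0, 1], [2]], k=2))
```

Within each block, every record shares its values with at least k - 1 others; across blocks that guarantee is lost, which is exactly the trade-off the abstract discusses.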
http://hdl.handle.net/2117/12662
Title: The Diameter of undirected graphs associated to plane tessellations
Authors: Andrés Yebra, José Luis; Fiol Mora, Miquel Àngel; Morillo Bosch, M. Paz; Alegre de Miguel, Ignacio
Abstract: This paper studies the diameter of some families of undirected graphs that can be associated to plane tessellations which fully represent them. More precisely, we concentrate upon maximizing the order of the graphs for given values of their diameter and degree, where the study always leads to the optimal solutions.
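The quantity being optimized above is the classical degree/diameter trade-off: how many vertices a graph can have for a given maximum degree and diameter. As a small companion sketch, the diameter of an undirected graph can be computed by running BFS from every vertex; the 3-cube below (degree 3, diameter 3) is an illustrative example, not one of the paper's tessellation families.

```python
from collections import deque

def diameter(adj):
    """Diameter of a connected undirected graph given as an adjacency
    dict {vertex: iterable of neighbours}, via BFS from every vertex."""
    best = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

# 3-cube: vertices are 3-bit strings, edges join strings differing in one bit.
verts = [format(i, "03b") for i in range(8)]
adj = {u: [v for v in verts if sum(a != b for a, b in zip(u, v)) == 1] for u in verts}
print(diameter(adj))
```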
http://hdl.handle.net/2117/12251
Title: On fields of definition of torsion points of elliptic curves with complex multiplication
Authors: Dieulefait, Luis Victor; Gonzalez Jimenez, Enrique; Jiménez Urroz, Jorge
Abstract: For any elliptic curve E defined over the rationals with complex multiplication (CM) and for every prime p, we describe the image of the mod p Galois representation attached to E. We deduce information about the field of definition of torsion points of these curves; in particular, we classify all cases
where there are torsion points over Galois number fields not containing the field of definition of the CM.
http://hdl.handle.net/2117/12185
Title: On the optimization of bipartite secret sharing schemes
Authors: Farras Ventura, Oriol; Metcalf-Burton, Jessica Ruth; Padró Laimon, Carles; Vázquez González, Leonor
Abstract: Optimizing the ratio between the maximum length of the shares and the length of the secret value in secret sharing schemes for general access structures is an extremely difficult and long-standing open problem. In this paper, we study it for bipartite access structures, in which the set of participants
is divided into two parts, and all participants in each part play an equivalent role. We focus on the search for lower bounds by using a special class of polymatroids that is introduced here, the bipartite ones. We present a method based on linear programming to compute, for every given bipartite access structure, the best lower bound that can be obtained by this combinatorial method. In addition, we obtain some general lower bounds that improve the previously known ones, and we construct optimal secret sharing schemes for a family of bipartite access structures.
http://hdl.handle.net/2117/11444
Title: On secret sharing schemes, matroids and polymatroids
Authors: Martí Farré, Jaume; Padró Laimon, Carles
Abstract: The complexity of a secret sharing scheme is defined as the ratio between the maximum length of the shares and the length of the secret. The optimization of this parameter for general access structures is an important and very difficult open problem
in secret sharing. We explore in this paper the connections of this open problem with
matroids and polymatroids.
Matroid ports were introduced by Lehman in 1964. A forbidden minor characterization
of matroid ports was given by Seymour in 1976. These results precede the invention of
secret sharing by Shamir in 1979. Important connections between ideal secret sharing
schemes and matroids were discovered by Brickell and Davenport in 1991. Their results
can be restated as follows: every ideal secret sharing scheme defines a matroid, and its access structure is a port of that matroid.
Our main result is a lower bound on the optimal complexity of access structures that
are not matroid ports. Namely, by using the aforementioned characterization of matroid
ports by Seymour, we generalize the result by Brickell and Davenport by proving that,
if the length of every share in a secret sharing scheme is less than 3/2 times the length of the secret, then its access structure is a matroid port. This generalizes and explains a phenomenon that was observed in several families of access structures.
In addition, we introduce a new parameter to represent the best lower bound on the
optimal complexity that can be obtained by taking into account that the joint Shannon
entropies of a set of random variables define a polymatroid. We prove that every bound that is obtained by this technique for an access structure applies to its dual as well.
Finally, we present a construction of linear secret sharing schemes for the ports of the
Vamos and the non-Desargues matroids. In this way new upper bounds on their optimal
complexity are obtained, which is a contribution to the search for access structures whose optimal complexity lies between 1 and 3/2.
Keywords: Secret sharing
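The complexity measure defined in the abstract (maximum share length over secret length) is 1 for ideal schemes, and Shamir's scheme, cited above, is the canonical ideal example: each share is a single field element, exactly as long as the secret. A minimal sketch over a prime field (the field choice and parameters are illustrative):

```python
import random

P = 2 ** 61 - 1  # a Mersenne prime; all arithmetic is modulo P

def share(secret, n, t, rng=random):
    """Split `secret` into n Shamir shares, any t of which reconstruct it.
    Share i is f(i) for a random degree-(t-1) polynomial f with
    f(0) = secret.  Each share is one field element, so the scheme is
    ideal: complexity ratio exactly 1."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of f at x = 0 from t (or more) shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

random.seed(1)
shares = share(123456789, n=5, t=3)
print(reconstruct(shares[:3]) == 123456789)
print(reconstruct(shares[2:5]) == 123456789)
```

The open problem the abstract studies is precisely which access structures force this ratio above 1, with the interval (1, 3/2) ruled out for matroid-port reasons.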
http://hdl.handle.net/2117/11383
Title: On server trust in private proxy auctions
Authors: Di Crescenzo, Giovanni; Herranz Sotoca, Javier; Sáez Moreno, Germán
Abstract: We investigate proxy auctions, an auction model which is proving very successful for on-line businesses (e.g. http://www.ebay.com), where a trusted server manages bids from clients by continuously updating the current price of the item and the currently winning bid, as well as keeping private the winning client’s maximum bid.
We propose techniques for reducing the trust in the server by defining and achieving
a security property, called server integrity. Informally, this property protects
clients from a novel and large class of attacks from a corrupted server by allowing
them to verify the correctness of updates to the current price and the currently
winning bid. Our new auction scheme achieves server integrity and satisfies two important
properties that are not enjoyed by previous work in the literature: it has minimal
interaction, and only requires a single trusted server. The main ingredients of
our scheme are two minimal-round implementations of zero-knowledge proofs for
proving lower bounds on encrypted values: one based on discrete logarithms that is
more efficient but uses the random oracle assumption, and another based on quadratic
residuosity that only uses standard intractability assumptions but is less efficient.
Keywords: Electronic auctions, Zero-knowledge proofs, Server trust, Cryptography, Communications security
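The price updates the abstract says clients should be able to verify follow the usual proxy-bidding rule: the winner is whoever holds the larger hidden maximum, and the public price rises to the loser's maximum plus one increment, capped by the winner's maximum. This rule is a common eBay-style convention assumed here for illustration, not a construction from the paper (which verifies such updates cryptographically rather than in the clear).

```python
def proxy_update(current_price, winning_max, new_max, increment=1):
    """One proxy-bidding step under an assumed eBay-style rule.

    Returns (new_price, new_winning_max, new_bid_won): the updated public
    price, the hidden maximum of the (possibly new) winner, and whether
    the challenger took the lead.
    """
    if new_max > winning_max:
        price = min(winning_max + increment, new_max)
        return max(price, current_price), new_max, True
    price = min(new_max + increment, winning_max)
    return max(price, current_price), winning_max, False

price, winning_max = 10, 50               # public price 10, hidden maximum 50
price, winning_max, won = proxy_update(price, winning_max, 30)
print(price, won)                          # challenger outbid; price rises
price, winning_max, won = proxy_update(price, winning_max, 80)
print(price, won)                          # challenger takes the lead
```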
http://hdl.handle.net/2117/11063
Title: Optimal symbol alignment distance: a new distance for sequences of symbols
Authors: Herranz Sotoca, Javier; Nin Guerrero, Jordi; Solé Simó, Marc
Abstract: Comparison functions for sequences (of symbols) are important components of many applications, such as clustering, data cleansing and data integration. For years, many efforts have been made to improve the performance of such comparison functions. These improvements have come either at the cost of reduced comparison accuracy, or by sacrificing basic properties of the functions, such as the triangle inequality. In this paper, we propose a new distance for sequences of symbols (or strings) called the Optimal Symbol Alignment distance (OSA distance, for short). This distance has a very low cost in practice, which makes it a suitable candidate for computing distances in applications with large amounts of (very long) sequences. After providing a mathematical proof that the OSA distance is a true distance, we present experiments for different scenarios (DNA sequences, record linkage, ...), showing that the proposed distance outperforms, in terms of execution time and/or accuracy, other well-known comparison functions such as the Edit or Jaro-Winkler distances.
Date: Mon, 17 Jan 2011 11:44:43 GMT
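The OSA distance itself is defined only in the paper, so it is not reproduced here. As a point of reference, the classical edit (Levenshtein) distance that the abstract uses as a baseline can be sketched as a short dynamic program; the function name `levenshtein` is an illustrative choice, not from the paper:

```python
def levenshtein(a: str, b: str) -> int:
    """Classical edit distance: minimum insertions, deletions and
    substitutions needed to turn a into b (row-by-row DP)."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # delete ca
                           cur[j - 1] + 1,       # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute / match
        prev = cur
    return prev[-1]
```

This baseline runs in O(|a|·|b|) time, which is exactly the cost the OSA distance is designed to beat in practice on long sequences.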
http://hdl.handle.net/2117/10419
Title: Square-free discriminants of Frobenius rings
Authors: David, Chantal; Jiménez Urroz, Jorge
Abstract: Let E be an elliptic curve over Q. It is well known that the ring of endomorphisms of $E_p$, the reduction of E modulo a prime p of ordinary reduction, is an order of the imaginary quadratic field $Q(\pi_p)$ generated by the Frobenius element $\pi_p$. When the curve has complex multiplication (CM), this field is fixed as the prime varies. However, when the curve has no CM, very little is known, not only about the order, but also about the fields that can appear as the endomorphism algebra as the prime varies. The ring of endomorphisms is closely related to the arithmetic of $a_p^2 - 4p$, the discriminant of the characteristic polynomial of the Frobenius element. In this paper, we are interested in the function $\pi^{sf}_{E,r,h}(x)$ counting the number of primes p up to x such that $a_p^2 - 4p$ is square-free and in the congruence class r modulo h. We give the precise asymptotics for $\pi^{sf}_{E,r,h}(x)$ when averaging over elliptic curves defined over the rationals, and we discuss the relation of this result to the Lang-Trotter conjecture and to other problems concerning the reduction of the curve modulo p.
Date: Fri, 26 Nov 2010 13:03:25 GMT
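As a small illustration (not from the paper) of the quantity studied, the sketch below brute-forces $\#E(\mathbb{F}_p)$ for a toy curve, computes the Frobenius trace $a_p = p + 1 - \#E(\mathbb{F}_p)$, and tests the discriminant $a_p^2 - 4p$ of the characteristic polynomial $t^2 - a_p t + p$ for square-freeness. The curve $y^2 = x^3 + x + 1$ and the function names are illustrative choices:

```python
def count_points(A: int, B: int, p: int) -> int:
    """Number of points on y^2 = x^3 + A*x + B over F_p, including infinity."""
    # Multiplicity of each square residue: how many y give y^2 = v (mod p).
    sq = {}
    for y in range(p):
        v = y * y % p
        sq[v] = sq.get(v, 0) + 1
    affine = sum(sq.get((x**3 + A * x + B) % p, 0) for x in range(p))
    return affine + 1  # point at infinity

def is_squarefree(n: int) -> bool:
    """True if no square of a prime divides n (trial division sketch)."""
    n = abs(n)
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

# Toy example: E: y^2 = x^3 + x + 1 reduced mod p = 5.
p = 5
a_p = p + 1 - count_points(1, 1, p)
disc = a_p * a_p - 4 * p  # discriminant of t^2 - a_p*t + p
print(a_p, disc, is_squarefree(disc))
```

The counting function $\pi^{sf}_{E,r,h}(x)$ of the abstract runs this kind of square-freeness test over all primes p up to x lying in a fixed congruence class.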
http://hdl.handle.net/2117/9851
Title: Ideal homogeneous access structures constructed from graphs
Authors: Herranz Sotoca, Javier
Abstract: Starting from a new relation between graphs and secret sharing schemes introduced by Xiao, Liu and Zhang, we show a method to construct more general ideal homogeneous access structures. The method has some advantages: it efficiently gives an ideal homogeneous access structure for the desired rank, and some conditions can be imposed (such as forbidden or necessary subsets of players), even if the exact composition of the resulting access structure cannot be fully controlled. The number of homogeneous access structures that can be constructed in this way is quite limited; for example, we show that (t, l)-threshold access structures can be constructed from a graph only when t = 1, t = l - 1 or t = l.
Date: Wed, 20 Oct 2010 11:01:41 GMT
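As a hedged illustration of the (t, l)-threshold access structures discussed in the abstract, here is a minimal sketch of Shamir's classical scheme, the standard ideal scheme realizing any (t, l)-threshold structure. It is not the graph-based construction of the paper; the field size and function names are illustrative choices:

```python
import random

P = 2**31 - 1  # a Mersenne prime; all arithmetic is in the field F_P

def share(secret: int, t: int, l: int):
    """Split `secret` into l shares so that any t of them reconstruct it:
    evaluate a random degree-(t-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, l + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over F_P from >= t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

In the terminology of the abstract, the access structure here is the (t, l)-threshold one: any t players' shares determine the secret, and the scheme is ideal because each share is a single field element, the same size as the secret.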