Journal articles
http://hdl.handle.net/2117/3529
Fri, 29 Apr 2016 18:22:09 GMT

Attribute-based versions of Schnorr and ElGamal
http://hdl.handle.net/2117/86060
Herranz Sotoca, Javier
We design in this paper the first attribute-based cryptosystems that work in the classical discrete logarithm, pairing-free, setting. The attribute-based signature scheme can be seen as an extension of Schnorr signatures, with adaptive security relying on the discrete logarithm assumption, in the random oracle model. The attribute-based encryption schemes can be seen as extensions of the ElGamal cryptosystem, with adaptive security relying on the decisional Diffie–Hellman assumption, in the standard model. The proposed schemes are secure only in a bounded model: the systems admit at most L secret keys, for a bound L that must be fixed in the setup of the systems. The efficiency of the cryptosystems then depends on this bound L. Although this is an important drawback that can limit the applicability of the proposed schemes in some real-life applications, it turns out that the bounded security of our key-policy attribute-based encryption scheme (in particular, with L=1) is enough to implement the generic transformation of Parno, Raykova and Vaikuntanathan at TCC’2012. As a direct result, we obtain a protocol for the verifiable delegation of computation of Boolean functions, which does not employ pairings or lattices, and whose adaptive security relies on the decisional Diffie–Hellman assumption.
The final publication is available at Springer via http://dx.doi.org/10.1007/s00200-015-0270-7
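The base scheme that the attribute-based signature extends can be sketched in a few lines. Below is textbook Schnorr over a toy Schnorr group (parameters far too small for real use, chosen only for illustration); this is the classical scheme, not the paper's attribute-based construction, and the function names are ours.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime, and g generating the
# subgroup of prime order q. Far too small for real use; illustrative only.
P, Q, G = 2039, 1019, 4

def H(r: int, msg: bytes) -> int:
    # Hash the commitment together with the message, reduced mod q.
    return int.from_bytes(hashlib.sha256(r.to_bytes(2, "big") + msg).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1      # secret key in [1, q-1]
    return x, pow(G, x, P)                # (sk, pk = g^x)

def sign(x: int, msg: bytes):
    k = secrets.randbelow(Q - 1) + 1      # fresh per-signature nonce
    r = pow(G, k, P)                      # commitment r = g^k
    e = H(r, msg)                         # challenge
    s = (k + x * e) % Q                   # response
    return e, s

def verify(y: int, msg: bytes, sig) -> bool:
    e, s = sig
    # Recompute r' = g^s * y^(-e) mod p; valid iff H(r', m) == e.
    r = (pow(G, s, P) * pow(y, -e, P)) % P
    return H(r, msg) == e
```

The attribute-based version in the paper builds on this verification equation; the sketch only shows the underlying mechanism.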
Thu, 21 Apr 2016 12:03:25 GMT

Secret sharing, rank inequalities, and information inequalities
http://hdl.handle.net/2117/86051
Martín Mollevi, Sebastià; Padró Laimon, Carles; Yang, An
Beimel and Orlov proved that all information inequalities on four or five variables, together with all information inequalities on more than five variables that are known to date, provide lower bounds on the size of the shares in secret sharing schemes that are at most linear in the number of participants. We present here two more negative results about the power of information inequalities in the search for lower bounds in secret sharing. First, we prove that all information inequalities on a bounded number of variables can only provide lower bounds that are polynomial in the number of participants. Second, we prove that the rank inequalities that are derived from the existence of two common informations can provide only lower bounds that are at most cubic in the number of participants.
© 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Thu, 21 Apr 2016 11:07:34 GMT

Revisiting distance-based record linkage for privacy-preserving release of statistical datasets
http://hdl.handle.net/2117/85339
Herranz Sotoca, Javier; Nin Guerrero, Jordi; Rodríguez, Pablo; Tassa, Tamir
Statistical Disclosure Control (SDC, for short) studies the problem of privacy-preserving data publishing in cases where the data is expected to be used for statistical analysis. An original dataset T containing sensitive information is transformed into a sanitized version T' which is released to the public. Both utility and privacy aspects are very important in this setting. For utility, T' must allow data miners or statisticians to obtain similar results to those which would have been obtained from the original dataset T. For privacy, T' must significantly reduce the ability of an adversary to infer sensitive information on the data subjects in T. One of the main a posteriori measures that the SDC community has considered up to now when analyzing the privacy offered by a given protection method is the Distance-Based Record Linkage (DBRL) risk measure. In this work, we argue that the classical DBRL risk measure is insufficient. For this reason, we introduce the novel Global Distance-Based Record Linkage (GDBRL) risk measure. We claim that this new measure must be evaluated alongside the classical DBRL measure in order to better assess the risk in publishing T' instead of T. After that, we describe how this new measure can be computed by the data owner and discuss the scalability of those computations. We conclude with extensive experimentation where we compare the risk assessments offered by our novel measure as well as by the classical one, using well-known SDC protection methods. Those experiments validate our hypothesis that the GDBRL risk measure issues, in many cases, higher risk assessments than the classical DBRL measure. In other words, relying solely on the classical DBRL measure for risk assessment might be misleading, as the true risk may in fact be higher. Hence, we strongly recommend that the SDC community considers the new GDBRL risk measure as an additional measure when analyzing the privacy offered by SDC protection algorithms.
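The contrast between per-record and global linkage can be illustrated with a small sketch: classical DBRL links each original record to its nearest protected record independently, while a global variant scores the best one-to-one assignment between the two datasets. This is only an illustration of the idea (brute-forcing the assignment, feasible just for tiny examples), not the paper's GDBRL algorithm; the function names are ours.

```python
from itertools import permutations

def dist2(a, b):
    # Squared Euclidean distance on numerical attributes.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classical_dbrl(original, protected):
    # Fraction of original records whose nearest protected record
    # is their own protected version.
    hits = sum(
        min(range(len(protected)), key=lambda j: dist2(rec, protected[j])) == i
        for i, rec in enumerate(original))
    return hits / len(original)

def global_dbrl(original, protected):
    # Pick the one-to-one assignment of minimum total distance,
    # then count how many records it links correctly.
    n = len(original)
    best = min(permutations(range(n)),
               key=lambda perm: sum(dist2(original[i], protected[perm[i]])
                                    for i in range(n)))
    return sum(best[i] == i for i in range(n)) / n
```

On a toy pair of datasets the global variant can already link records that per-record nearest-neighbour matching misses, which is the abstract's point that the global measure may reveal a higher risk.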
Thu, 07 Apr 2016 10:24:46 GMT

Vote validatability in MixNet-based e-voting
http://hdl.handle.net/2117/82926
Bibiloni, Pedro; Escala Ribas, Alex; Morillo Bosch, M. Paz
One way to build secure electronic voting systems is to use MixNets, which break any correlation between voters and their votes. One of the characteristics of MixNet-based e-voting is that ballots are usually decrypted individually and, as a consequence, invalid votes can be detected during the tallying of the election. In particular, this means that the ballot does not need to contain a proof of the vote being valid. However, allowing invalid votes to be detected only during the tallying of the election can have bad consequences for the reputation of the election. First, casting a ballot for an invalid vote might be considered an attack against the e-voting system by non-technical people, who might expect the system not to accept such ballots. Besides, it would be impossible to track the attacker due to the anonymity provided by the MixNet. Second, if a ballot for an invalid vote is produced by a software bug, it might only be detected after the election period has finished. In particular, voters would not be able to cast a valid vote again. In this work we formalize the concept of a system that detects invalid votes during the election period. In addition, we give a general construction of an e-voting system satisfying this property and an efficient concrete instantiation based on well-studied assumptions.
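The MixNet mechanism the abstract relies on, re-encrypting and permuting a batch of ElGamal ciphertexts so that votes can no longer be correlated with voters, can be sketched as follows. Toy parameters, no shuffle proofs, purely illustrative; this is the generic building block, not the paper's construction.

```python
import random

# Toy ElGamal re-encryption mix: each ciphertext is re-randomized and the
# batch is permuted, breaking the link between voters and their votes.
P, Q, G = 2039, 1019, 4            # p = 2q + 1, g of prime order q; insecure sizes

def keygen():
    x = random.randrange(1, Q)
    return x, pow(G, x, P)          # (sk, pk = g^x)

def enc(y, m):
    r = random.randrange(1, Q)
    return pow(G, r, P), (m * pow(y, r, P)) % P

def reencrypt(y, ct):
    # Fresh randomness, same plaintext: (c1*g^s, c2*y^s).
    c1, c2 = ct
    s = random.randrange(1, Q)
    return (c1 * pow(G, s, P)) % P, (c2 * pow(y, s, P)) % P

def mix(y, batch):
    out = [reencrypt(y, ct) for ct in batch]
    random.shuffle(out)             # permute the re-randomized ciphertexts
    return out

def dec(x, ct):
    c1, c2 = ct
    return (c2 * pow(c1, -x, P)) % P
```

After mixing, decrypting the batch yields the same multiset of encoded votes, but in an order unlinkable to the input positions.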
Mon, 15 Feb 2016 12:22:39 GMT

Secure and efficient anonymization of distributed confidential databases
http://hdl.handle.net/2117/76549
Herranz Sotoca, Javier; Nin Guerrero, Jordi
Let us consider the following situation: t entities (e.g., hospitals) hold different databases containing different records for the same type of confidential (e.g., medical) data. They want to deliver a protected version of this data to third parties (e.g., pharmaceutical researchers), preserving in some way both the utility and the privacy of the original data. This can be done by applying a statistical disclosure control (SDC) method. One possibility is that each entity protects its own database individually, but this strategy provides less utility and privacy than a collective strategy where the entities cooperate, by means of a distributed protocol, to produce a global protected dataset. In this paper, we investigate the problem of distributed protocols for SDC protection methods. We propose a simple, efficient and secure distributed protocol for the specific SDC method of rank shuffling. We run some experiments to evaluate the quality of this protocol and to compare the individual and collective strategies for solving the problem of protecting a distributed database. With respect to other distributed versions of SDC methods, the new protocol provides either more security or more efficiency, as we discuss throughout the paper.
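As a rough illustration of rank-based protection, one can shuffle each numerical attribute's values within small windows of consecutive ranks, so every value is replaced by one of similar rank. This is a simplified, centralized stand-in for rank shuffling, not the distributed protocol of the paper; the window parameter and function name are ours.

```python
import random

def rank_shuffle(values, window=3, seed=None):
    # Sort the values, shuffle them within windows of consecutive ranks,
    # then put the shuffled values back at the original record positions.
    rng = random.Random(seed)
    order = sorted(range(len(values)), key=lambda i: values[i])  # indices by rank
    ranked = [values[i] for i in order]                          # sorted values
    for start in range(0, len(ranked), window):
        chunk = ranked[start:start + window]
        rng.shuffle(chunk)
        ranked[start:start + window] = chunk
    out = [0.0] * len(values)
    for pos, i in enumerate(order):
        out[i] = ranked[pos]        # record i receives the value at its rank
    return out
```

The multiset of values (and hence univariate statistics) is preserved exactly, while the link between a record and its precise value is weakened.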
Wed, 02 Sep 2015 08:12:33 GMT

New results and applications for multi-secret sharing schemes
http://hdl.handle.net/2117/27633
Herranz Sotoca, Javier; Ruiz Rodríguez, Alexandre; Sáez Moreno, Germán
In a multi-secret sharing scheme (MSSS), different secrets are distributed among the players in some set, each one according to an access structure. The trivial solution to this problem is to run independent instances of a standard secret sharing scheme, one for each secret. In this solution, the length of the secret share to be stored by each player grows linearly with the number of secrets (when keeping all other parameters fixed). Multi-secret sharing schemes have been studied by the cryptographic community mostly from a theoretical perspective: different models and definitions have been proposed, for both unconditional (information-theoretic) and computational security. In the case of unconditional security, there are two different definitions. It has been proved that, for some particular cases of access structures that include the threshold case, an MSSS with the strongest level of unconditional security must have shares with length linear in the number of secrets. Therefore, the optimal solution in this case is equivalent to the trivial one. In this work we prove that, even for a more relaxed notion of unconditional security, and for some kinds of access structures (in particular, threshold ones), we have the same efficiency problem: the length of each secret share must grow linearly with the number of secrets. Since we want more efficient solutions, we move to the scenario of MSSSs with computational security. We propose a new MSSS in which each secret share has constant length (just one element), and we formally prove its computational security in the random oracle model. To the best of our knowledge, this is the first formal analysis of the computational security of an MSSS. We show the utility of the new MSSS by using it as a key ingredient in the design of two schemes for two new functionalities: multi-policy signatures and multi-policy decryption. We prove the security of these two new multi-policy cryptosystems in a formal security model.
The two new primitives provide similar functionalities to attribute-based cryptosystems, with some advantages and some drawbacks that we discuss at the end of this work.
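The trivial solution mentioned in the abstract, one independent Shamir instance per secret, can be sketched directly; note how each player's share holds one field element per secret, so its length grows linearly with the number of secrets. Toy field size, illustrative only; the function names are ours.

```python
import random

PRIME = 2087                        # field size; toy parameter

def shamir_share(secret, t, n, rng):
    # One (t, n)-threshold Shamir instance: random degree t-1 polynomial
    # with the secret as constant term, evaluated at x = 1..n.
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
    return {x: f(x) for x in range(1, n + 1)}

def shamir_reconstruct(points, t):
    # Lagrange interpolation at x = 0 from any t points {x: f(x)}.
    xs = list(points)[:t]
    secret = 0
    for i in xs:
        num = den = 1
        for j in xs:
            if j != i:
                num = (num * -j) % PRIME
                den = (den * (i - j)) % PRIME
        secret = (secret + points[i] * num * pow(den, -1, PRIME)) % PRIME
    return secret

def msss_share(secrets_to_share, t, n, seed=None):
    # Trivial MSSS: independent Shamir runs, one per secret.
    rng = random.Random(seed)
    runs = [shamir_share(s, t, n, rng) for s in secrets_to_share]
    # Player x's share is one field element per secret: length grows
    # linearly with the number of secrets.
    return {x: [run[x] for run in runs] for x in range(1, n + 1)}
```

The paper's computationally secure scheme avoids exactly this linear growth, compressing each player's share to a single element.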
Tue, 28 Apr 2015 17:42:09 GMT

On the representability of the biuniform matroid
http://hdl.handle.net/2117/24101
Ball, Simeon Michael; Padró Laimon, Carles; Weiner, Zsuzsa; Xing, Chaoping
Every biuniform matroid is representable over all sufficiently large fields. But it is not known exactly over which finite fields they are representable, and the existence of efficient methods to find a representation for every given biuniform matroid has not been proved. The interest of these problems is due to their implications to secret sharing. The existence of efficient methods to find representations for all biuniform matroids is proved here for the first time. The previously known efficient constructions apply only to a particular class of biuniform matroids, while the known general constructions were not proved to be efficient. In addition, our constructions provide in many cases representations over smaller finite fields.
© 2013, Society for Industrial and Applied Mathematics
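For the simpler uniform matroid U_{k,n}, representability over a finite field can be made concrete: the columns (1, a, a^2, ..., a^{k-1}) for n distinct values a in GF(p), with p >= n, represent it, since every k such columns form a nonsingular Vandermonde matrix. The sketch below checks this standard fact by brute force; it is not the paper's construction for biuniform matroids, and the function names are ours.

```python
from itertools import combinations

def det_mod(mat, p):
    # Determinant over GF(p) by Gaussian elimination with pivoting.
    m = [row[:] for row in mat]
    n, det = len(m), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] % p), None)
        if piv is None:
            return 0
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            det = -det                      # row swap flips the sign
        det = det * m[c][c] % p
        inv = pow(m[c][c], -1, p)
        for r in range(c + 1, n):
            f = m[r][c] * inv % p
            for cc in range(c, n):
                m[r][cc] = (m[r][cc] - f * m[c][cc]) % p
    return det % p

def vandermonde_represents_uniform(k, n, p):
    # U_{k,n} is represented by these columns iff every k of the n
    # Vandermonde columns are linearly independent over GF(p).
    cols = [[pow(a, i, p) for i in range(k)] for a in range(n)]
    return all(det_mod([list(col) for col in sub], p) != 0
               for sub in combinations(cols, k))
```

When p < n the values a repeat modulo p, columns collide, and the representation fails, which is why "sufficiently large fields" enter the statement above.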
Thu, 18 Sep 2014 16:05:12 GMT

Cropping Euler factors of modular L-functions
http://hdl.handle.net/2117/20759
González Rovira, Josep; Jiménez Urroz, Jorge; Lario Loyo, Joan Carles
According to the Birch and Swinnerton-Dyer conjectures, if A/Q is an abelian variety, then its L-function must capture a substantial part of the properties of A. The smallest number field L where A has all its endomorphisms defined must also play a role. This article deals with the relationship between these two objects in the specific case of modular abelian varieties Af/Q associated to weight 2 newforms for the group Γ1(N). Specifically, our goal is to relate ord_{s=1} L(Af/Q, s) with the order at s = 1 of Euler products restricted to primes that split completely in L. This is attained when a power of Af is isogenous over Q to the Weil restriction of the building block of Af. We give separate formulae for the CM and non-CM cases.
Mon, 25 Nov 2013 17:05:43 GMT

More hybrid and secure protection of statistical data sets
http://hdl.handle.net/2117/17412
Herranz Sotoca, Javier; Nin Guerrero, Jordi; Solé Simó, Marc
Different methods and paradigms to protect data sets containing sensitive statistical information have been proposed and studied. The idea is to publish a perturbed version of the data set that does not leak confidential information, but that still allows users to obtain meaningful statistical values about the original data. The two main paradigms for data set protection are the classical one and the synthetic one. Recently, the possibility of combining the two paradigms, leading to a hybrid paradigm, has been considered. In this work, we first analyze the security of some synthetic and (partially) hybrid methods that have been proposed in recent years, and we conclude that they suffer from a high interval disclosure risk. We then propose the first fully hybrid SDC methods; unfortunately, they also suffer from a quite high interval disclosure risk. To mitigate this, we propose a post-processing technique that can be applied to any data set protected with a synthetic method, with the goal of reducing its interval disclosure risk. We describe throughout the paper a set of experiments performed on reference data sets that support our claims.
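Interval disclosure risk, the measure this analysis revolves around, can be sketched in simplified form: for each protected value, build an interval of protected values of nearby rank and count how often the true original value falls inside it. This is a simplified version with illustrative parameters, not the exact measure used in the experiments; the function name and window parameter are ours.

```python
def interval_disclosure_risk(original, protected, w=2):
    # Rank-based intervals: for record i, take the protected values whose
    # rank lies within w of record i's own rank, and check whether the
    # original value falls inside that interval.
    order = sorted(range(len(protected)), key=lambda i: protected[i])
    rank = {i: r for r, i in enumerate(order)}
    hits = 0
    for i, orig in enumerate(original):
        lo_rank = max(0, rank[i] - w)
        hi_rank = min(len(order) - 1, rank[i] + w)
        lo = protected[order[lo_rank]]
        hi = protected[order[hi_rank]]
        hits += (lo <= orig <= hi)
    return hits / len(original)
```

A risk near 1 means an attacker can bound almost every original value within a narrow interval, which is the kind of leakage the post-processing technique aims to reduce.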
Thu, 17 Jan 2013 18:24:07 GMT

Kd-trees and the real disclosure risks of large statistical databases
http://hdl.handle.net/2117/16561
Herranz Sotoca, Javier; Nin Guerrero, Jordi; Solé Simó, Marc
In data privacy, record linkage can be used as an estimator of the disclosure risk of protected data. To model the worst-case scenario, one normally attempts to link records from the original data to the protected data. In this paper we introduce a parametrization of record linkage in terms of a weighted mean and its weights, and provide a supervised learning method to determine the optimal weights for the linkage process, that is, the parameters yielding a maximal record linkage between the protected and original data. We compare our method to standard record linkage with data from several protection methods widely used in statistical disclosure control, and study the results taking into account the performance in the linkage process and its computational effort.
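The parametrized linkage step can be sketched as follows: the distance between records is a weighted combination of per-attribute distances, and different weight vectors yield different linkage rates. The supervised learning of the optimal weights (and the kd-tree acceleration) is omitted; the function name is ours.

```python
def weighted_linkage_rate(original, protected, weights):
    # dist(a, b) = sum_i w_i * |a_i - b_i|; link each original record to
    # its nearest protected record and count correct links.
    def dist(a, b):
        return sum(w * abs(x - y) for w, x, y in zip(weights, a, b))
    hits = 0
    for i, rec in enumerate(original):
        nearest = min(range(len(protected)), key=lambda j: dist(rec, protected[j]))
        hits += (nearest == i)
    return hits / len(original)
```

Weighting a noisy attribute down and an identifying attribute up can raise the linkage rate substantially, which is why searching for the maximizing weights gives a more pessimistic (and more realistic) risk estimate.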
Tue, 25 Sep 2012 11:53:08 GMT