Journal articles
http://hdl.handle.net/2117/3529
2017-01-16T12:56:10Z

Soft and hard modelling methods for deconvolution of mixtures of Raman spectra for pigment analysis: a qualitative and quantitative approach
http://hdl.handle.net/2117/98513
Coma, L; Breitman Mansilla, Mónica Celia; Ruiz Moreno, Sergio
Raman spectroscopy provides a means for the detection and identification of pictorial materials on artworks. As a non-destructive, non-ambiguous technique that can be applied in situ, it is one of the most preferred for analysing the pigmentation of any kind of artwork: from paintings [1] and papyrus [2] to polychromes on wood [3]. A common problem, however, is a lack of spatial resolution in some systems due to large focal distances, which degrades the theoretically high resolution of the system and entails resolving mixtures of individual Raman spectra. In this work, we present the advantages and disadvantages of two methods for the separation of mixtures of Raman spectra [4, 5], and we present a new solution that overcomes their problems. To that end, we provide qualitative (identification of the species) and quantitative (determination of their concentration profiles) results for the methods. The experimental analyses have been carried out in two steps: first, we calibrate the methods with known mixtures of two compounds prepared in the laboratory; second, we test the methods on a real artwork supposed to be from 'El Greco'. Procedures to minimise problems that arise on real artworks, such as extreme fluorescence and noise, are also presented.
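The quantitative step the abstract describes — estimating concentration profiles of known species from a mixed spectrum — can be sketched as a linear unmixing problem. The sketch below is illustrative only (it is not the paper's method): all spectra, peak positions and concentrations are synthetic, and real data would first need fluorescence-baseline removal as the abstract notes.

```python
# Illustrative sketch (not the paper's method): recovering the concentrations
# of two known pigments from a mixture spectrum by least squares on reference
# spectra. All spectra and values here are synthetic.
import numpy as np

def gaussian_peak(x, center, width):
    """Synthetic Raman band modelled as a Gaussian line shape."""
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

x = np.linspace(0, 100, 500)  # wavenumber axis (arbitrary units)
ref_a = gaussian_peak(x, 30, 3) + gaussian_peak(x, 70, 2)  # reference: pigment A
ref_b = gaussian_peak(x, 50, 4)                            # reference: pigment B

true_conc = np.array([0.7, 0.3])
mixture = true_conc[0] * ref_a + true_conc[1] * ref_b      # noiseless synthetic mixture

# Solve mixture ~= S @ c in the least-squares sense, S holding the references.
S = np.column_stack([ref_a, ref_b])
conc, *_ = np.linalg.lstsq(S, mixture, rcond=None)
```

With noiseless data and linearly independent references, the recovered `conc` matches the true concentrations exactly; noisy spectra would only match approximately.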
2016-12-16T18:07:04Z

Raman spectroscopy applied to the identification of pictorial materials
http://hdl.handle.net/2117/97876
Ruiz Moreno, Sergio; Yúfera Gomez, José Manuel; Soneira Ferrando, M. José; Breitman Mansilla, Mónica Celia; Morillo Bosch, M. Paz; Gràcia Rivas, Ignacio
2016-12-07T14:46:16Z

An algebraic framework for Diffie–Hellman assumptions
http://hdl.handle.net/2117/91050
Escala Ribas, Alex; Herold, Gottfried; Kiltz, Eike; Ràfols Salvador, Carla; Villar Santos, Jorge Luis
We put forward a new algebraic framework to generalize and analyze Diffie–Hellman-like decisional assumptions which allows us to argue about security and applications by considering only algebraic properties. Our D_{ℓ,k}-MDDH assumption states that it is hard to decide whether a vector in G^ℓ is linearly dependent on the columns of some matrix in G^{ℓ×k} sampled according to distribution D_{ℓ,k}. It covers known assumptions such as DDH, 2-Lin (the linear assumption), and k-Lin (the k-linear assumption). Using our algebraic viewpoint, we can relate the generic hardness of our assumptions in m-linear groups to the irreducibility of certain polynomials which describe the output of D_{ℓ,k}. We use the hardness results to find new distributions for which the D_{ℓ,k}-MDDH assumption holds generically in m-linear groups. In particular, our new assumptions 2-SCasc and 2-ILin are generically hard in bilinear groups and, compared to 2-Lin, have shorter description size, which is a relevant parameter for efficiency in many applications. These results support using our new assumptions as natural replacements for the 2-Lin assumption, which was already used in a large number of applications. To illustrate the conceptual advantages of our algebraic framework, we construct several fundamental primitives based on any MDDH assumption. In particular, we can give many instantiations of a primitive in a compact way, including public-key encryption, hash proof systems, pseudorandom functions, and Groth–Sahai NIZK and NIWI proofs. As an independent contribution we give more efficient NIZK and NIWI proofs for membership in a subgroup of G^ℓ. The results imply very significant efficiency improvements for a large number of schemes.
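The decision problem underlying the MDDH assumption can be made concrete with a toy sketch. The assumption places the values in the exponent of a group, where the decision is conjectured to be hard; over Z_p itself it reduces to easy linear algebra, which is exactly what this made-up example shows (matrix, witness and modulus are all illustrative, with ℓ = 3 and k = 2).

```python
# Toy sketch of the linear-algebra question behind the MDDH assumption:
# decide whether u in Z_p^3 lies in the column span of A in Z_p^{3x2}.
# In the actual assumption these values sit in the exponent of a group,
# where the decision is conjectured hard; over Z_p it is easy.
p = 101

def det3(M):
    """3x3 determinant over the integers (reduced mod p by the caller)."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def in_column_span(A, u):
    """u is in span(A) iff the augmented matrix [A | u] is singular mod p
    (assuming the 3x2 matrix A has full rank 2, as D_{l,k} samples do)."""
    M = [[A[r][0], A[r][1], u[r]] for r in range(3)]
    return det3(M) % p == 0

A = [[1, 0], [0, 1], [1, 1]]  # a full-rank toy sample
w = [2, 3]                    # witness vector
u_real = [(A[r][0] * w[0] + A[r][1] * w[1]) % p for r in range(3)]  # = A.w
u_random = [2, 3, 6]          # deliberately outside the column span
```

A "real" vector A·w is always detected as dependent, while a uniformly random vector falls in the span only with probability 1/p.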
2016-10-25T09:11:30Z

Extending the Brickell–Davenport theorem to non-perfect secret sharing schemes
http://hdl.handle.net/2117/86923
Farràs Ventura, Oriol; Padró Laimon, Carles
One important result in secret sharing is the Brickell–Davenport theorem: every ideal perfect secret sharing scheme defines a matroid that is uniquely determined by the access structure. Even though a few attempts have been made, there is no satisfactory definition of an ideal secret sharing scheme for the general case, in which non-perfect schemes are considered as well. Without providing another unsatisfactory definition of an ideal non-perfect secret sharing scheme, we present a generalization of the Brickell–Davenport theorem to the general case. After analyzing that result from a new point of view and identifying its combinatorial nature, we present a characterization of the (not necessarily perfect) secret sharing schemes that are associated to matroids. Some optimality properties of such schemes are discussed.
2016-05-11T10:34:11Z

On secret sharing with nonlinear product reconstruction
http://hdl.handle.net/2117/86921
Cascudo, Ignacio; Cramer, Ronald; Mirandola, Diego; Padró Laimon, Carles; Xing, Chaoping
Multiplicative linear secret sharing is a fundamental notion in the area of secure multi-party computation (MPC) and, since recently, in the area of two-party cryptography as well. In a nutshell, this notion guarantees that "the product of two secrets is obtained as a linear function of the vector consisting of the coordinate-wise product of two respective share-vectors".
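The quoted multiplicativity property can be illustrated with the standard degree-1 Shamir scheme over a prime field: coordinate-wise products of two share vectors lie on a degree-2 polynomial whose value at 0 is the product of the secrets, so that product is a linear (Lagrange) function of the product shares. The sketch below is a generic toy, not tied to the paper's results; the prime, secrets and polynomial coefficients are all made up.

```python
# Toy multiplicative linear secret sharing with degree-1 Shamir shares over
# Z_p: the coordinate-wise product of two share vectors lies on a degree-2
# polynomial h with h(0) = a*b, so three shares linearly reconstruct a*b.
p = 2087  # a prime larger than any value we manipulate

def share(secret, coeff, xs):
    """Degree-1 Shamir shares of `secret` at evaluation points xs
    (coeff plays the role of the random polynomial coefficient)."""
    return [(secret + coeff * x) % p for x in xs]

def lagrange_at_zero(xs, ys):
    """Interpolate the polynomial through (xs, ys) and evaluate it at 0, mod p."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

xs = [1, 2, 3]               # three parties suffice for degree 2*1
a_shares = share(5, 17, xs)  # secret a = 5, fixed "random" coefficient
b_shares = share(7, 23, xs)  # secret b = 7
prod_shares = [(s * t) % p for s, t in zip(a_shares, b_shares)]
recovered = lagrange_at_zero(xs, prod_shares)  # equals a*b mod p
```

The reconstruction coefficients depend only on the evaluation points, which is what makes the product a *linear* function of the product-share vector.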
2016-05-11T10:22:41Z

Attribute-based versions of Schnorr and ElGamal
http://hdl.handle.net/2117/86060
Herranz Sotoca, Javier
We design in this paper the first attribute-based cryptosystems that work in the classical discrete logarithm, pairing-free setting. The attribute-based signature scheme can be seen as an extension of Schnorr signatures, with adaptive security relying on the discrete logarithm assumption, in the random oracle model. The attribute-based encryption schemes can be seen as extensions of the ElGamal cryptosystem, with adaptive security relying on the decisional Diffie–Hellman assumption, in the standard model. The proposed schemes are secure only in a bounded model: the systems admit at most L secret keys, for a bound L that must be fixed in the setup of the systems. The efficiency of the cryptosystems then depends on this bound L. Although this is an important drawback that can limit the applicability of the proposed schemes in some real-life applications, it turns out that the bounded security of our key-policy attribute-based encryption scheme (in particular, with L = 1) is enough to implement the generic transformation of Parno, Raykova and Vaikuntanathan at TCC 2012. As a direct result, we obtain a protocol for the verifiable delegation of computation of Boolean functions, which does not employ pairings or lattices, and whose adaptive security relies on the decisional Diffie–Hellman assumption.
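For readers unfamiliar with the baseline being extended, textbook Schnorr signatures over a prime-order subgroup look as follows. This is the standard scheme, not the paper's attribute-based construction; the tiny group parameters, keys and nonce are made-up toy values and are completely insecure.

```python
# Textbook Schnorr signatures over a toy prime-order subgroup: the baseline
# that the paper's attribute-based signatures extend. Parameters are
# illustrative only and insecure.
import hashlib

p, q, g = 2039, 1019, 4  # q divides p-1; g generates the order-q subgroup

def h(data: bytes) -> int:
    """Random-oracle-style challenge hash, reduced mod q."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen(x):
    """Secret key x in Z_q, public key y = g^x mod p."""
    return x, pow(g, x, p)

def sign(x, msg: bytes, k):
    """k is the per-signature nonce; it must be fresh and secret in practice."""
    r = pow(g, k, p)
    e = h(r.to_bytes(2, "big") + msg)
    s = (k + e * x) % q
    return e, s

def verify(y, msg: bytes, sig):
    e, s = sig
    # Recompute r = g^s * y^(-e) = g^(k + e*x) * g^(-e*x) = g^k.
    r = pow(g, s, p) * pow(y, (q - e) % q, p) % p
    return h(r.to_bytes(2, "big") + msg) == e

x, y = keygen(123)
sig = sign(x, b"hello", k=777)
```

The verification identity g^s · y^(−e) = g^k is what the attribute-based extension must preserve while tying signatures to attribute policies.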
The final publication is available at Springer via http://dx.doi.org/10.1007/s00200-015-0270-7
2016-04-21T12:03:25Z

Secret sharing, rank inequalities, and information inequalities
http://hdl.handle.net/2117/86051
Martín Mollevi, Sebastià; Padró Laimon, Carles; Yang, An
Beimel and Orlov proved that all information inequalities on four or five variables, together with all information inequalities on more than five variables that are known to date, provide lower bounds on the size of the shares in secret sharing schemes that are at most linear in the number of participants. We present here another two negative results about the power of information inequalities in the search for lower bounds in secret sharing. First, we prove that all information inequalities on a bounded number of variables can only provide lower bounds that are polynomial in the number of participants. Second, we prove that the rank inequalities that are derived from the existence of two common informations can provide only lower bounds that are at most cubic in the number of participants.
© 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
2016-04-21T11:07:34Z

Revisiting distance-based record linkage for privacy-preserving release of statistical datasets
http://hdl.handle.net/2117/85339
Herranz Sotoca, Javier; Nin Guerrero, Jordi; Rodríguez, Pablo; Tassa, Tamir
Statistical Disclosure Control (SDC, for short) studies the problem of privacy-preserving data publishing in cases where the data is expected to be used for statistical analysis. An original dataset T containing sensitive information is transformed into a sanitized version T' which is released to the public. Both utility and privacy aspects are very important in this setting. For utility, T' must allow data miners or statisticians to obtain similar results to those which would have been obtained from the original dataset T. For privacy, T' must significantly reduce the ability of an adversary to infer sensitive information on the data subjects in T. One of the main a posteriori measures that the SDC community has considered up to now when analyzing the privacy offered by a given protection method is the Distance-Based Record Linkage (DBRL) risk measure. In this work, we argue that the classical DBRL risk measure is insufficient. For this reason, we introduce the novel Global Distance-Based Record Linkage (GDBRL) risk measure. We claim that this new measure must be evaluated alongside the classical DBRL measure in order to better assess the risk in publishing T' instead of T. After that, we describe how this new measure can be computed by the data owner and discuss the scalability of those computations. We conclude by extensive experimentation where we compare the risk assessments offered by our novel measure as well as by the classical one, using well-known SDC protection methods. Those experiments validate our hypothesis that the GDBRL risk measure issues, in many cases, higher risk assessments than the classical DBRL measure. In other words, relying solely on the classical DBRL measure for risk assessment might be misleading, as the true risk may be in fact higher. Hence, we strongly recommend that the SDC community considers the new GDBRL risk measure as an additional measure when analyzing the privacy offered by SDC protection algorithms.
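The classical DBRL measure the abstract builds on can be sketched in a few lines: link each protected record to its nearest original record and report the fraction of correct links. This is a minimal generic sketch, not the paper's GDBRL measure, and the four-record dataset is invented for illustration.

```python
# Minimal sketch of the classical Distance-Based Record Linkage (DBRL) risk
# measure: link each protected record to its nearest original record and
# report the fraction of correct links. Data are made up.
import numpy as np

def dbrl_risk(original, protected):
    """Fraction of protected records whose nearest original record
    (Euclidean distance) is their true counterpart; row i of `protected`
    is assumed to be the sanitized version of row i of `original`."""
    hits = 0
    for i, rec in enumerate(protected):
        dists = np.linalg.norm(original - rec, axis=1)
        hits += int(np.argmin(dists) == i)
    return hits / len(protected)

original = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
light = original + 0.3          # weak perturbation: every link survives
heavy = original[::-1].copy()   # values fully mixed across individuals
```

A weak perturbation leaves every record re-identifiable (risk 1.0), while a masking that mixes values across individuals drives the measured risk to 0 — precisely the kind of per-record accounting the paper argues is insufficient on its own.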
2016-04-07T10:24:46Z

Vote validatability in mix-net-based eVoting
http://hdl.handle.net/2117/82926
Bibiloni, Pedro; Escala Ribas, Alex; Morillo Bosch, M. Paz
One way to build secure electronic voting systems is to use mix-nets, which break any correlation between voters and their votes. One of the characteristics of mix-net-based eVoting is that ballots are usually decrypted individually and, as a consequence, invalid votes can be detected during the tallying of the election. In particular, this means that the ballot does not need to contain a proof of the vote being valid. However, allowing invalid votes to be detected only during the tallying of the election can have bad consequences for the reputation of the election. First, casting a ballot for an invalid vote might be considered an attack against the eVoting system by non-technical people, who might expect that the system does not accept such ballots. Besides, it would be impossible to track the attacker due to the anonymity provided by the mix-net. Second, if a ballot for an invalid vote is produced by a software bug, it might be detected only after the election period has finished. In particular, voters would not be able to cast a valid vote again. In this work we formalize the concept of having a system that detects invalid votes during the election period. In addition, we give a general construction of an eVoting system satisfying such a property and an efficient concrete instantiation based on well-studied assumptions.
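The problematic situation the abstract describes can be made concrete with a toy mix-net: ElGamal ballots are shuffled and then decrypted individually, so an invalid vote only surfaces at tally time. This sketch is for illustration only (it is not the paper's construction); the group parameters, keys, vote encodings and fixed permutation are all made up and insecure.

```python
# Toy mix-net tally: ElGamal ballots are shuffled, then decrypted
# individually, so an invalid vote surfaces only at tally time -- the
# situation that the paper's vote-validatability property is meant to avoid.
p, q, g = 2039, 1019, 4  # toy group parameters; insecure
x = 57                    # election secret key
y = pow(g, x, p)          # election public key

def enc(m, r):
    """ElGamal encryption of message m with randomness r."""
    return pow(g, r, p), m * pow(y, r, p) % p

def dec(c):
    """Individual decryption: c2 * c1^(-x) mod p."""
    c1, c2 = c
    return c2 * pow(c1, (p - 1 - x) % (p - 1), p) % p

VALID_VOTES = {2, 3}                             # encodings of the candidates
ballots = [enc(2, 11), enc(3, 22), enc(99, 33)]  # third ballot is invalid

mixed = [ballots[2], ballots[0], ballots[1]]     # the mix permutes ballots
tally = [dec(c) for c in mixed]
invalid = [v for v in tally if v not in VALID_VOTES]
```

Because the mix breaks the link between ballots and voters, the invalid vote is found but can no longer be attributed to anyone, which is exactly the reputational problem described above.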
2016-02-15T12:22:39Z

Secure and efficient anonymization of distributed confidential databases
http://hdl.handle.net/2117/76549
Herranz Sotoca, Javier; Nin Guerrero, Jordi
Let us consider the following situation: t entities (e.g., hospitals) hold different databases containing different records for the same type of confidential (e.g., medical) data. They want to deliver a protected version of this data to third parties (e.g., pharmaceutical researchers), preserving in some way both the utility and the privacy of the original data. This can be done by applying a statistical disclosure control (SDC) method. One possibility is that each entity protects its own database individually, but this strategy provides less utility and privacy than a collective strategy where the entities cooperate, by means of a distributed protocol, to produce a global protected dataset. In this paper, we investigate the problem of distributed protocols for SDC protection methods. We propose a simple, efficient and secure distributed protocol for the specific SDC method of rank shuffling. We run some experiments to evaluate the quality of this protocol and to compare the individual and collective strategies for solving the problem of protecting a distributed database. Compared with other distributed versions of SDC methods, the new protocol provides either more security or more efficiency, as we discuss throughout the paper.
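Rank-based perturbation, the family of SDC methods to which the rank shuffling mentioned above belongs, can be sketched in its simplest single-database form. This is a hypothetical illustration of classical rank swapping, not the paper's distributed protocol; the window logic, function name and salary values are all invented for the example.

```python
# Hypothetical sketch of rank-based value swapping, a classical SDC
# perturbation in the same family as rank shuffling (NOT the paper's
# distributed protocol). Each value may be swapped with a value of nearby
# rank, so the multiset of published values is exactly preserved.
import random

def rank_swap(values, window, seed=0):
    rng = random.Random(seed)  # fixed seed keeps the sketch deterministic
    order = sorted(range(len(values)), key=lambda i: values[i])  # rank order
    ranked = [values[i] for i in order]
    for r in range(len(ranked) - 1):
        # swap the value at rank r with one at a rank inside the window
        s = rng.randint(r, min(r + window, len(ranked) - 1))
        ranked[r], ranked[s] = ranked[s], ranked[r]
    protected = [0] * len(values)
    for pos, i in enumerate(order):
        protected[i] = ranked[pos]
    return protected

salaries = [30, 55, 41, 70, 28, 63]
masked = rank_swap(salaries, window=1)
```

The key invariant — the published multiset of values equals the original one, so marginal statistics survive — is also what a distributed variant of such a method must maintain across the t entities.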
2015-09-02T08:12:33Z