ISG: Grup de Seguretat de la Informació (Information Security Group)
http://hdl.handle.net/2117/79755
2019-11-21T00:41:18Z

Criptografía y seguridad en comunicaciones (Cryptography and Security in Communications)
http://hdl.handle.net/2117/171922
Forné Muñoz, Jorge; Melus Moreno, José Luis; Soriano Ibáñez, Miguel
2019-11-07T14:17:52Z

Simplified probabilistic model for maximum traffic load from weigh-in-motion data
http://hdl.handle.net/2117/168283
Soriano Ibáñez, Miguel; Casas Rius, Joan Ramon; Ghosn, Michel
This paper reviews the simplified procedure proposed by Ghosn and Sivakumar to model the maximum expected traffic load effect on highway bridges and illustrates the methodology using a set of Weigh-In-Motion (WIM) data collected on one site in the U.S. The paper compares different approaches for implementing the procedure and explores the effects of limitations in the site-specific data on the projected maximum live load effect for different bridge service lives. A sensitivity analysis is carried out on the most representative variables involved in the WIM data collection and calculation of the maximum load effect. The procedure is implemented on a set of WIM data collected in Slovenia to study the maximum load effect on existing Slovenian highway bridges and how it compares with the values obtained from the Eurocode of actions.
2019-09-16T22:43:19Z
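The projection step this abstract describes can be illustrated with a toy extreme-value calculation. This is only a hedged sketch of the general idea (fit block maxima, extrapolate to a service life), not the actual Ghosn–Sivakumar procedure; the daily maxima below are synthetic and every number is hypothetical:

```python
import math
import random

def gumbel_fit(maxima):
    """Method-of-moments fit of a Gumbel distribution to block maxima."""
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi    # scale parameter
    mu = mean - 0.5772156649 * beta          # location (Euler-Mascheroni constant)
    return mu, beta

def projected_max(mu, beta, n_blocks):
    """Most probable maximum over n_blocks i.i.d. blocks: the Gumbel mode
    shifts by beta * ln(n_blocks)."""
    return mu + beta * math.log(n_blocks)

# Synthetic daily-maximum load effects (kN*m), standing in for WIM records.
random.seed(1)
daily_max = [1000 + random.gauss(0, 50) for _ in range(30)]

mu, beta = gumbel_fit(daily_max)
# Project the characteristic maximum for a 75-year service life (~27375 days).
print(round(projected_max(mu, beta, 75 * 365), 1))
```

The stability of the Gumbel family under maxima is what makes the one-line projection possible: the maximum of n i.i.d. Gumbel variables is again Gumbel with its mode shifted by beta · ln n.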

Efficient k-anonymous microaggregation of multivariate numerical data via principal component analysis
http://hdl.handle.net/2117/166168
RebolloMonedero, David; Mezher, Ahmad Mohamad; Casanova, Xavier; Forné Muñoz, Jorge; Soriano Ibáñez, Miguel
k-Anonymous microaggregation is a widespread technique to address the problem of protecting the privacy of the respondents involved beyond the mere suppression of their identifiers, in applications where preserving the utility of the disclosed information is critical. Unfortunately, microaggregation methods with high data utility may impose stringent computational demands when dealing with datasets containing a large number of records and attributes.
This work proposes and analyzes various anonymization methods which draw upon the algebraic-statistical technique of principal component analysis (PCA) in order to effectively reduce the number of attributes processed, that is, the dimension of the multivariate microaggregation problem at hand. By preserving to a high degree the energy of the numerical dataset and carefully choosing the number of dominant components to process, we manage to achieve remarkable reductions in running time and memory usage with negligible impact on information utility. Our methods are readily applicable to high-utility statistical disclosure control (SDC) of large-scale datasets with numerical demographic attributes.
© 2019. This manuscript version is made available under the CC BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
2019-07-15T07:16:08Z
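The pipeline the abstract describes (reduce dimension with PCA, then microaggregate) can be sketched in a few lines. The microaggregation step below is a deliberately naive fixed-size grouping along the first principal axis, not one of the high-utility algorithms the paper analyzes, and the 90% energy threshold is an arbitrary illustrative choice:

```python
import numpy as np

def pca_reduce(X, energy=0.9):
    """Project X onto the fewest principal components capturing `energy`
    of the total variance ("energy") of the centered dataset."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2
    m = int(np.searchsorted(np.cumsum(var) / var.sum(), energy)) + 1
    return Xc @ Vt[:m].T, Vt[:m]

def microaggregate(X, k):
    """Toy fixed-size microaggregation: sort records by the first coordinate
    and replace each group of k records by the group centroid."""
    order = np.argsort(X[:, 0])
    Y = X.copy()
    for start in range(0, len(X), k):
        idx = order[start:start + k]
        Y[idx] = X[idx].mean(axis=0)
    return Y

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))              # synthetic numerical microdata
Z, components = pca_reduce(X, energy=0.9)   # fewer attributes to process
anonymized = microaggregate(Z, k=5)         # each row shared by >= k records
print(Z.shape, anonymized.shape)
```

Since microaggregation cost grows with the number of attributes, running it on Z instead of X is where the claimed savings in time and memory come from.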

Knowledge sharing in the health scenario
http://hdl.handle.net/2117/126996
LLuch Ariet, Magi; Brugues de la Torre, Albert; Vallverdú Bayés, Sisco; Pegueroles Vallés, Josep R.
The understanding of certain data often requires the collection of similar data from different places to be analysed and interpreted. Interoperability standards and ontologies are facilitating data interchange around the world. However, beyond the existing networks and advances in data transfer, data-sharing protocols that support multilateral agreements are useful to exploit the knowledge of distributed Data Warehouses. Access to a certain data set in a federated Data Warehouse may be constrained by the requirement to deliver another specific data set. When bilateral agreements between two nodes of a network are not enough to resolve the constraints on accessing a certain data set, multilateral agreements for data exchange are needed.
We present the implementation of a Multi-Agent System for multilateral exchange agreements of clinical data, and evaluate how those multilateral agreements increase the percentage of data collected by a single node out of the total amount of data available in the network. Different strategies to reduce the number of messages needed to reach an agreement are also considered. The results show that, in this collaborative sharing scenario, the percentage of data collected improves dramatically when moving from bilateral to multilateral agreements, reaching almost all the data available in the network.
2019-01-16T17:55:11Z
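A minimal sketch of why multilateral agreements can collect data that bilateral ones cannot: when release constraints form a cycle among three or more nodes, no reciprocal pair can trade, but the full cycle can. The node names and constraints below are hypothetical, and this is not the Multi-Agent System protocol of the paper:

```python
# Each node holds one dataset and will release it only in exchange for one
# specific other dataset (hypothetical constraints, for illustration only).
wants = {"A": "b", "B": "c", "C": "a"}   # node -> dataset it requires
holds = {"A": "a", "B": "b", "C": "c"}   # node -> dataset it owns

owner = {d: n for n, d in holds.items()}

def bilateral_pairs(wants, owner):
    """Pairs that can trade directly: each node wants exactly what the
    other node holds."""
    pairs = []
    for n in wants:
        m = owner[wants[n]]
        if owner[wants[m]] == n and n < m:
            pairs.append((n, m))
    return pairs

def multilateral_cycle(wants, owner, start):
    """Follow the 'who owns what I want' chain until it closes into a cycle,
    yielding a multilateral exchange that satisfies every constraint in it."""
    chain, n = [start], owner[wants[start]]
    while n != start:
        chain.append(n)
        n = owner[wants[n]]
    return chain

print(bilateral_pairs(wants, owner))          # [] -- no reciprocal pair exists
print(multilateral_cycle(wants, owner, "A"))  # the three-way exchange A, B, C
```

With these constraints every bilateral negotiation fails, yet the single three-node cycle delivers all datasets to all nodes, which is the effect the paper measures at network scale.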

Incremental k-anonymous microaggregation in large-scale electronic surveys with optimized scheduling
http://hdl.handle.net/2117/123435
Rebollo Monedero, David; Forné Muñoz, Jorge; Soriano Ibáñez, Miguel; Hernández Baigorri, César
Improvements in technology have led to enormous volumes of detailed personal information made available for any number of statistical studies. This has stimulated the need for anonymization techniques striving to attain a difficult compromise between the usefulness of the data and the protection of our privacy. k-Anonymous microaggregation permits releasing a dataset where each person remains indistinguishable from k − 1 other individuals, through the aggregation of demographic attributes, which are otherwise a potential culprit for respondent reidentification. Although privacy guarantees are by no means absolute, the elegant simplicity of the k-anonymity criterion and the excellent preservation of information utility of microaggregation algorithms have turned them into widely popular approaches whenever data utility is critical. Unfortunately, high-utility algorithms on large datasets inherently require extensive computation. This work addresses the need to run k-anonymous microaggregation efficiently with mild distortion loss, exploiting the fact that the data may arrive over an extended period of time. Specifically, we propose to split the original dataset into two portions that are processed in sequence, allowing the first process to start before the entire dataset is received, while leveraging the superlinearity of the microaggregation algorithms involved. A detailed mathematical formulation enables us to calculate the optimal split time for the fastest anonymization, as well as for minimum distortion under a given deadline. Two incremental microaggregation algorithms are devised, for which extensive experimentation is reported. The theoretical methodology presented should prove invaluable in numerous data-collection applications, including large-scale electronic surveys in which computation is possible as the data comes in.
2018-10-31T19:32:27Z
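Why splitting helps can be sketched with a toy cost model. Assuming (purely for illustration) a quadratic running time c(m) = a·m² and a constant record arrival rate, anonymizing a first portion while the remainder is still arriving finishes earlier than a single pass over the complete dataset; the brute-force search below stands in for the paper's closed-form optimal split:

```python
def completion_time(n, n1, rate, a):
    """Finish time when the first n1 records are anonymized while the rest
    still arrive, under the assumed quadratic cost model c(m) = a * m**2."""
    t_arrive_all = n / rate                   # last record arrives here
    t_first_done = n1 / rate + a * n1 ** 2    # first chunk: wait, then process
    # The second chunk starts when both the data and the first pass are ready.
    return max(t_arrive_all, t_first_done) + a * (n - n1) ** 2

n, rate, a = 10000, 100.0, 1e-6               # hypothetical sizes and rates
single_pass = n / rate + a * n ** 2           # wait for all data, run once
best_n1 = min(range(1, n), key=lambda m: completion_time(n, m, rate, a))
split = completion_time(n, best_n1, rate, a)
print(round(single_pass, 1), round(split, 1), best_n1)
```

The superlinearity is doing the work: with a·(n₁² + n₂²) < a·n², the split saves processing time, and overlapping the first pass with data arrival saves waiting time on top of that.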

Constructions of almost secure frameproof codes with applications to fingerprinting schemes
http://hdl.handle.net/2117/122239
Moreira, Jose; Fernández Muñoz, Marcel; Kabatiansky, Grigory
This paper presents explicit constructions of fingerprinting codes. The proposed constructions use a class of codes called almost secure frameproof codes. An almost secure frameproof code is a relaxed version of a secure frameproof code, which in turn is the same as a separating code. This relaxed version is the object of our interest because it gives rise to fingerprinting codes of higher rate than those derived from separating codes. The construction of almost secure frameproof codes discussed here is based on weakly biased arrays, a class of combinatorial objects tightly related to weakly dependent random variables.
The final publication is available at Springer via http://dx.doi.org/10.1007/s10623-017-0359-z
2018-10-11T12:46:02Z

Transient analysis of idle time in VANETs using Markov-reward models
http://hdl.handle.net/2117/116842
Martín Faus, Isabel Victoria; Urquiza Aguiar, Luis; Aguilar Igartua, Mónica; Guérin-Lassous, Isabelle
The development of analytical models to analyze the behavior of vehicular ad hoc networks (VANETs) is a challenging aim. Adaptive methods are suitable for many algorithms (e.g., choice of forwarding paths, dynamic resource allocation, channel congestion control) and services (e.g., provision of multimedia services, message dissemination). These adaptive algorithms help the network maintain a desired performance level. However, this is a difficult goal to achieve, especially in VANETs, due to the fast position changes of the VANET nodes. Adaptive decisions should be taken according to the current conditions of the VANET. Therefore, evaluation of transient measures is required for the characterization of VANETs. In the literature, different works address the characterization and measurement of the idle (or busy) time, to be used in different proposals to attain a more efficient usage of the wireless network. This paper focuses on the idle time of the link between two VANET nodes, which we denote as T_idle. Specifically, we have developed an analytical model based on a straightforward Markov-reward chain to obtain transient measurements of T_idle. Numerical results from the analytical model fit well with simulation results.
2018-04-30T18:08:53Z
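Transient measures of a Markov-reward chain can be computed by uniformization. The sketch below uses a hypothetical two-state idle/busy link model with arbitrarily chosen rates, not the model of the paper, and integrates the transient idle probability (reward 1 in the idle state, 0 otherwise) to get the expected idle time over an interval:

```python
import numpy as np

def transient_probs(Q, p0, t, tol=1e-12):
    """Transient distribution p(t) = p0 . exp(Qt), via uniformization."""
    lam = max(-Q.diagonal())                  # uniformization rate
    P = np.eye(len(Q)) + Q / lam              # embedded DTMC kernel
    term = np.exp(-lam * t)                   # Poisson(lam*t) weight for n = 0
    p, v = term * p0, p0.copy()
    n = 0
    while term > tol or n < lam * t:          # run past the Poisson mode
        n += 1
        v = v @ P
        term *= lam * t / n
        p = p + term * v
    return p

# Hypothetical two-state link: state 0 = idle, state 1 = busy, with rates
# idle->busy 0.5/s and busy->idle 2.0/s (arbitrary illustrative values).
Q = np.array([[-0.5, 0.5],
              [2.0, -2.0]])
p0 = np.array([1.0, 0.0])                     # the link starts idle

# Expected idle time accumulated over [0, t]: trapezoid rule on p_idle(s).
t, steps = 4.0, 400
h = t / steps
p_idle = np.array([transient_probs(Q, p0, i * h)[0] for i in range(steps + 1)])
expected_idle = float(h * ((p_idle[0] + p_idle[-1]) / 2 + p_idle[1:-1].sum()))
print(round(expected_idle, 3))
```

For this two-state chain the closed form is p_idle(t) = 0.8 + 0.2·e^(−2.5t), so the expected idle time over [0, 4] is about 3.28 s, which the numerical result matches.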

Multimedia fingerprinting with noise via signature codes for weighted noisy adder channels and compressed sensing
http://hdl.handle.net/2117/114691
Egorova, Elena; Fernández Muñoz, Marcel; Lee, Moon Ho
We propose a new coding scheme for multimedia fingerprinting resistant to noise. Our scheme is based on signature codes for the weighted noisy adder channel. The proposed codes can trace the entire coalition of pirates and provide a significantly better rate than previously known fingerprinting schemes. We also establish a relationship between these two problems and the compressed sensing problem.
2018-03-01T14:06:53Z
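The stated connection to compressed sensing can be illustrated as follows: the coalition is a sparse 0/1 vector x, the forgery observed at the output of a noisy adder channel is approximately Ax plus noise, and tracing the coalition amounts to sparse recovery. The random ±1 code matrix and the use of orthogonal matching pursuit below are illustrative assumptions, not the signature-code construction of the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
n_users, code_len, coalition_size = 200, 128, 3

# Random +/-1 code matrix: column j is user j's fingerprint (an assumption
# for illustration, not the paper's signature codes).
A = rng.choice([-1.0, 1.0], size=(code_len, n_users))

coalition = np.sort(rng.choice(n_users, size=coalition_size, replace=False))
x = np.zeros(n_users)
x[coalition] = 1.0
y = A @ x + 0.1 * rng.normal(size=code_len)   # noisy adder-channel output

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then re-fit by least squares on the chosen support."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return sorted(support)

print(omp(A, y, coalition_size), coalition.tolist())
```

Because the coalition vector is sparse (few pirates among many users), far fewer channel observations than users suffice for recovery, which is exactly the compressed-sensing regime.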

Improved existence bounds on IPP codes using the Clique Lovász Local Lemma
http://hdl.handle.net/2117/114551
Aranda, Castor; Fernández Muñoz, Marcel
Nowadays, one of the biggest problems challenging distributors of digital content is protecting that content against redistribution. Those who trade in any of the multiple digital information formats (audio, video, software, text, etc.) face one danger: once a copy of the content is purchased by a client, this user may illegally redistribute it. Thus, a distributor of digital content with intellectual property rights must take steps to ensure the preservation of its business. Given that the use and benefit of a legally purchased copy of the content by a client implies reading the data (whether with a computer, a DVD player, or any other device), anti-copy protection of such copyrighted content is not viable. Here is where mechanisms such as fingerprinting (first presented in [18]) come into play: instead of distributing identical copies of the data, fingerprinting consists of embedding a series of marks in each copy, with each mark unique to each user, and keeping a record of which mark is received by which user, thus dissuading users from redistributing the files under the threat of being caught. Once the distributor intercepts an illegally distributed copy, it can read the marks to determine which user is guilty; that user is then labelled a traitor.
2018-02-27T14:34:23Z

Game-theoretical design of an adaptive distributed dissemination protocol for VANETs
http://hdl.handle.net/2117/114511
Iza Paredes, Cristhian; Mezher, Ahmad Mohamad; Aguilar Igartua, Mónica; Forné Muñoz, Jorge
Road safety applications envisaged for Vehicular Ad Hoc Networks (VANETs) depend largely on the dissemination of warning messages to deliver information to concerned vehicles. The intended applications, as well as some inherent VANET characteristics, make data dissemination an essential service and a challenging task in this kind of network. This work lays out a decentralized stochastic solution for the data dissemination problem through two game-theoretical mechanisms. Given the non-stationarity induced by a highly dynamic topology, diverse network densities, and intermittent connectivity, a solution for the formulated game requires an adaptive procedure able to exploit the environment changes. Extensive simulations reveal that our proposal excels in terms of number of transmissions, end-to-end delay, and overhead, while maintaining a high delivery ratio, compared to other proposals.
2018-02-26T16:00:23Z
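A hedged sketch of the kind of game-theoretical reasoning involved (not the paper's mechanisms): rebroadcast decisions can be modeled as a volunteer's dilemma, where each neighbor forwards with a mixed-strategy probability that balances the cost of a redundant transmission against the benefit of coverage. All numbers below are hypothetical:

```python
import random

def volunteer_equilibrium(n, cost, benefit):
    """Symmetric mixed equilibrium of a volunteer's-dilemma forwarding game:
    a node is indifferent between forwarding (paying `cost`) and staying
    silent when cost == benefit * P(no other node forwards), which gives
    p* = 1 - (cost / benefit) ** (1 / (n - 1))."""
    return 1.0 - (cost / benefit) ** (1.0 / (n - 1))

def delivery_ratio(n, p, trials=20000, seed=3):
    """Estimated fraction of broadcasts relayed by at least one of n neighbors."""
    rnd = random.Random(seed)
    hits = sum(any(rnd.random() < p for _ in range(n)) for _ in range(trials))
    return hits / trials

# Hypothetical scenario: 8 neighbors, forwarding cost 0.2, coverage benefit 1.
p = volunteer_equilibrium(n=8, cost=0.2, benefit=1.0)
print(round(p, 3), round(delivery_ratio(8, p), 3))
```

Each node transmitting with a small probability p* keeps the number of transmissions low while the delivery ratio stays high, the same trade-off the simulations in the paper evaluate; an adaptive protocol would re-estimate n (the local density) as the topology changes.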