CRAAX - Centre de Recerca d'Arquitectures Avançades de Xarxes
http://hdl.handle.net/2117/79721

Innovative predictive approach towards a personalized oxygen dosing system
http://hdl.handle.net/2117/405437
Pascual Saldaña, Heribert; Masip Bruin, Xavier; Asensio Garcia, Adrian; Alonso Beltran, Albert; Blanco Vich, Isabel
Despite the large impact that chronic obstructive pulmonary disease (COPD) has on the population, the implementation of new technologies for diagnosis and treatment remains limited. Current practices in ambulatory oxygen therapy for COPD rely on fixed doses, overlooking the diverse activities in which patients engage. To address this challenge, we propose a software architecture aimed at delivering patient-personalized, edge-based, artificial intelligence (AI)-assisted models that are built upon data collected from patients’ previous experiences along with an evaluation function. The main objective is to proactively administer precise oxygen dosages in real time to the patient (the edge), leveraging individual patient data, previous experiences, and actual activity levels, thereby representing a substantial advancement over conventional oxygen dosing. Through a pilot test using vital sign data from a cohort of five patients, the limitations of a one-size-fits-all approach are demonstrated, highlighting the need for personalized treatment strategies. This study underscores the importance of adopting advanced technological approaches for ambulatory oxygen therapy.
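The dosing idea described above can be sketched in a few lines. This is a hypothetical illustration only: the model shape, coefficients, activity encoding, and evaluation function are all assumptions for the sake of the sketch, not the authors' implementation.

```python
# Hypothetical per-patient model: predicts SpO2 (%) from oxygen flow (L/min)
# and an activity level (0 = rest, 1 = walking, 2 = stairs). In the paper's
# approach, such a model would be fitted from the patient's previous
# experiences; here the coefficients are made up.
def predict_spo2(flow_lpm, activity, baseline=89.0, gain=2.2, activity_drop=2.5):
    return min(99.0, baseline + gain * flow_lpm - activity_drop * activity)

# Illustrative evaluation function: penalize hypoxemia (SpO2 < 90%) heavily
# and excess oxygen mildly, so the best dose is the lowest flow that keeps
# the predicted SpO2 in a safe range.
def evaluate(flow_lpm, activity):
    spo2 = predict_spo2(flow_lpm, activity)
    hypoxemia_penalty = max(0.0, 90.0 - spo2) ** 2
    return hypoxemia_penalty + 0.1 * flow_lpm

def best_dose(activity, candidate_flows=(0.5, 1.0, 1.5, 2.0, 2.5, 3.0)):
    # Pick the candidate flow rate with the lowest evaluation cost.
    return min(candidate_flows, key=lambda f: evaluate(f, activity))
```

With these toy numbers, a resting patient gets the minimum flow while a patient climbing stairs gets a higher one, which is exactly the contrast with fixed-dose therapy that the abstract highlights.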

Validation, Verification, and Testing (VVT) of future RISC-V powered cloud infrastructures: the Vitamin-V Horizon Europe Project perspective
http://hdl.handle.net/2117/404503
Alonso García, Martí; Andreu Gerique, David; Canal Corretger, Ramon; Di Carlo, Stefano; Chenet, Cristiano; Costa Prats, Juan José; Gironès, Andreu; Gizopoulos, Dimitris; Karakostas, Vasileios; Otero Calviño, Beatriz; Papadimitriou, George; Rodríguez Luna, Eva; Savino, Alessandro
Vitamin-V is a project funded under the Horizon Europe program for the period 2023-2025. The project aims to create a complete open-source software stack for RISC-V that can be used for cloud services. This software stack is intended to have the same level of performance as the x86 architecture, which is currently dominant in the cloud computing industry. In addition, the project aims to create a powerful virtual execution environment that can be used for software development, validation, verification, and testing. The virtual environment will consider the relevant RISC-V ISA extensions required for cloud deployment. Commercial cloud systems use hardware features currently unavailable in RISC-V virtual environments, including virtualization, cryptography, and vectorization. To address this, Vitamin-V will support these features in three virtual environments: QEMU, gem5, and cloud-FPGA prototype platforms. The project will focus on providing support for EPI-based RISC-V designs, for both the main CPUs and accelerators important for cloud workloads, such as memory compression. The project will add compiler (LLVM-based) and toolchain support for the ISA extensions. Moreover, Vitamin-V will develop novel approaches for validating, verifying, and testing software trustworthiness. This paper focuses on the Vitamin-V project's plans and vision for supporting validation, verification, and testing of cloud applications, particularly emphasizing the hardware support that will be provided.

NEUROPULS: NEUROmorphic energy-efficient secure accelerators based on Phase change materials aUgmented siLicon photonicS
http://hdl.handle.net/2117/404502
Pavanello, Fabio; Marchand, Cedric; O’Connor, Ian; Orobtchouk, Regis; Mandorlo, Fabien; Letartre, Xavier; Cueff, Sebastien; Brando Guillaumes, Axel; Cazorla Almeida, Francisco Javier; Canal Corretger, Ramon
This special session paper introduces the Horizon Europe NEUROPULS project, which targets the development of secure and energy-efficient RISC-V interfaced neuromorphic accelerators using augmented silicon photonics technology. Our approach aims to develop an augmented silicon photonics platform, an FPGA-powered RISC-V-connected computing platform, and a complete simulation platform to demonstrate the neuromorphic accelerator capabilities. In particular, the main advantages and limitations of each platform will be addressed with respect to its underpinning technology. Then, we will discuss three targeted use cases for edge-computing applications: Global Navigation Satellite System (GNSS) anti-jamming, autonomous driving, and anomaly detection in edge devices. Finally, we will address the reliability and security aspects of the stand-alone accelerator implementation and the project use cases.

A 3D terrain generator: Enhancing robotics simulations with GANs
http://hdl.handle.net/2117/402799
Arellano García, Silvia; Otero Calviño, Beatriz; Kucner, Tomasz Piotr; Canal Corretger, Ramon
Simulation is essential in robotics to evaluate models and techniques in a controlled setting before conducting experiments on tangible agents. However, developing simulation environments can be a challenging and time-consuming task. To address this issue, a proposed solution involves building a functional pipeline that generates 3D realistic terrains using Generative Adversarial Networks (GANs). By using GANs to create terrain, the pipeline can quickly and efficiently generate detailed surfaces, saving researchers time and effort in developing simulation environments for their experiments. The proposed model utilizes a Deep Convolutional Generative Adversarial Network (DCGAN) to generate heightmaps, which are trained on a custom database consisting of real heightmaps. Furthermore, an Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) is used to improve the resolution of the resulting heightmaps, enhancing their visual quality and realism. To generate a texture according to the topography of the heightmap, chroma keying is used with previously selected textures. The heightmap and texture are then rendered and integrated, resulting in a realistic 3D terrain. Together, these techniques enable the model to generate high-quality, realistic 3D terrains for use in robotic simulators, allowing for more accurate and effective evaluations of robotics models and techniques.
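The texturing stage described above (matching textures to the heightmap's topography) can be approximated in a toy sketch. This is not the paper's pipeline: the DCGAN/ESRGAN stages are replaced by a smoothed random heightmap, and the chroma-keying step is replaced by simple elevation thresholding into texture classes.

```python
import numpy as np

def toy_heightmap(size=64, seed=0):
    # Crude stand-in for the GAN-generated heightmap: random field plus
    # a few neighbor-averaging passes to smooth it, then normalize to [0, 1].
    rng = np.random.default_rng(seed)
    h = rng.random((size, size))
    for _ in range(10):
        h = (h + np.roll(h, 1, 0) + np.roll(h, -1, 0)
               + np.roll(h, 1, 1) + np.roll(h, -1, 1)) / 5.0
    return (h - h.min()) / (h.max() - h.min())

def texture_classes(heightmap, thresholds=(0.3, 0.6, 0.85)):
    # Bucket elevations into texture classes: 0=water, 1=grass, 2=rock, 3=snow.
    return np.digitize(heightmap, thresholds)

hm = toy_heightmap()
tex = texture_classes(hm)
```

A renderer would then displace a mesh by `hm` and look up the texture for each cell in `tex`, giving a rough analog of the heightmap-plus-texture integration the abstract describes.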

Exploring image transformations with diffusion models: A survey of applications and implementation code
http://hdl.handle.net/2117/402630
Arellano García, Silvia; Otero Calviño, Beatriz; Tous Liesa, Rubén
Diffusion Models have become increasingly popular in recent years and their applications span a wide range of fields. This survey focuses on the use of diffusion models in computer vision, especially in the area of image transformations. The objective of this survey is to provide an overview of state-of-the-art applications of diffusion models in image transformations, including image inpainting, super-resolution, restoration, translation, and editing. This survey presents a selection of notable papers and repositories including practical applications of diffusion models for image transformations. The applications are presented in a practical and concise manner, facilitating the understanding of the concepts behind diffusion models and how they function. Additionally, it includes a curated collection of GitHub repositories featuring popular examples of these subjects.
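The mechanism underlying all of the transformations surveyed here is the diffusion process itself. As a minimal sketch (not from the survey), the closed-form forward (noising) step of a DDPM is x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps; image editing and inpainting run a learned reverse of this process.

```python
import numpy as np

def make_alpha_bar(T=1000, beta_start=1e-4, beta_end=0.02):
    # Standard linear variance schedule; alpha_bar_t = prod_{s<=t} (1 - beta_s).
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def noise_image(x0, t, alpha_bar, rng):
    # Closed-form forward step: blend the clean image with Gaussian noise.
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
alpha_bar = make_alpha_bar()
x0 = rng.standard_normal((8, 8))          # toy "image"
x_early = noise_image(x0, 10, alpha_bar, rng)   # almost the clean image
x_late = noise_image(x0, 999, alpha_bar, rng)   # almost pure noise
```

At small t the noised sample is still strongly correlated with the input, while at large t nearly all structure is destroyed; the reverse model is trained to undo exactly this.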

Leveraging network data analytics function and machine learning for data collection, resource optimization, security and privacy in 6G networks
http://hdl.handle.net/2117/402101
Gkonis, Panagiotis; Nomikos, Nikolaos; Trakadas, Panagiotis; Sarakis, Lambros; Xylouris, George; Masip Bruin, Xavier; Martrat, Josep
The full deployment of sixth-generation (6G) networks is inextricably connected with a holistic network redesign able to deal with various emerging challenges, such as the integration of heterogeneous technologies and devices, as well as support for latency- and bandwidth-demanding applications. In such a complex environment, resource optimization, and security and privacy enhancement, can be quite demanding, due to the vast and diverse data generation endpoints and associated hardware elements. Therefore, efficient data collection mechanisms are needed that can be deployed on any network infrastructure. In this context, the network data analytics function (NWDAF) has already been defined in the fifth-generation (5G) architecture from Release 15 of 3GPP, which can perform data collection from various network functions (NFs). When combined with advanced machine learning (ML) techniques, full-scale network optimization can be supported, according to traffic demands and service requirements. In addition, the data collected by the NWDAF can be used for anomaly detection and thus for security and privacy enhancement. Therefore, the main goal of this paper is to present the current state of the art on the role of the NWDAF in data collection, resource optimization and security enhancement in next-generation broadband networks. Furthermore, various key enabling technologies for data collection and threat mitigation in the 6G framework are identified and categorized, along with advanced ML approaches. Finally, a high-level architectural approach is presented and discussed, based on the NWDAF, for efficient data collection and ML model training in large-scale heterogeneous environments.
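The NWDAF-style loop of collecting per-NF metrics and flagging anomalies can be sketched as follows. This is a hypothetical illustration: the NF name, the load metric, and the z-score rule stand in for the ML-based detection the paper surveys, and are not defined by 3GPP or the paper.

```python
import statistics

def collect(history, nf_id, sample):
    # Append one metric sample (e.g. load) reported by a network function.
    history.setdefault(nf_id, []).append(sample)

def anomalies(history, nf_id, z_max=2.5):
    # Flag samples whose z-score exceeds z_max -- a simple stand-in for the
    # ML anomaly-detection models discussed in the paper.
    samples = history[nf_id]
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)
    if sigma == 0:
        return []
    return [s for s in samples if abs(s - mu) / sigma > z_max]

history = {}
for load in [40, 42, 41, 39, 43, 40, 41, 95]:   # 95 simulates a traffic spike
    collect(history, "AMF-1", load)
```

Flagged samples could then feed the resource-optimization or threat-mitigation stages the abstract mentions.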

Cybersecurity in supply chain systems: the farm-to-fork use case
http://hdl.handle.net/2117/400097
Leligou, Helen C.; Lakka, Alexandra; Karkazis, Panagiotis; Pita Costa, Joao; Marín Tordera, Eva; Dinis Santos, Henrique Manuel; Alvarez Romero, Antonio
Modern supply chains comprise an increasing number of actors that deploy different information technology systems, capturing information of a diverse nature from diverse sources (from sensors to order information). While the benefits of the automatic exchange of information between these systems have been recognized and have led to their interconnection, protecting the whole supply chain from potential attacks is a challenging issue given the attack proliferation reported in the literature. In this paper, we present the FISHY platform, which aims to protect the whole supply chain by (a) adopting novel technologies and approaches including machine learning-based tools to detect security threats and recommend mitigation policies and (b) employing blockchain-based tools to provide evidence of the captured events and suggested policies. This platform is also easily expandable to protect against additional attacks in the future. We experiment with this platform in the farm-to-fork supply chain to prove its operation and capabilities. The results show that the FISHY platform can effectively be used to protect the supply chain and offers high flexibility to its users.
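The evidence-keeping idea in point (b) can be illustrated with a toy hash-chained log. This is a simplified stand-in for the platform's blockchain-based tools, not its actual implementation: each captured event's hash covers the previous entry's hash, so later tampering breaks the chain.

```python
import hashlib
import json

def append_event(chain, event):
    # Link each event to the previous entry's hash (genesis uses all zeros).
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"event": event, "prev": prev, "hash": digest})

def verify(chain):
    # Recompute every hash; any edited event or broken link fails the check.
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev or \
           hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
append_event(chain, {"type": "alert", "detail": "suspicious login"})
append_event(chain, {"type": "policy", "detail": "block source IP"})
```

Editing any recorded event after the fact makes `verify` return `False`, which is the tamper-evidence property the abstract relies on.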

A survey on IoT-edge-cloud continuum systems: Status, challenges, use cases, and open issues
http://hdl.handle.net/2117/399089
Gkonis, Panagiotis; Giannopoulos, Anastasios; Trakadas, Panagiotis; Masip Bruin, Xavier; D'Andria, Francesco
The rapid growth in the number of interconnected devices on the Internet (referred to as the Internet of Things—IoT), along with the huge volume of data that are exchanged and processed, has created a new landscape in network design and operation. Due to the limited battery size and computational capabilities of IoT nodes, data processing usually takes place on external devices. Since latency minimization is a key concept in modern-era networks, edge servers that are in close proximity to IoT nodes gather and process related data, while in some cases data offloading to the cloud might have to take place. The interconnection of a vast number of heterogeneous IoT devices with the edge servers and the cloud, where the IoT, edge, and cloud converge to form a computing continuum, is also known as the IoT-edge-cloud (IEC) continuum. Several key challenges are associated with this new architectural approach to computing systems, including (i) the design of connection and programming protocols aimed at properly managing a huge number of heterogeneous devices over diverse infrastructures; (ii) the design of efficient task offloading algorithms aimed at optimizing service execution; (iii) the support for security and privacy enhancements during data transfer to deal with the existing and even unforeseen threat landscape; (iv) scalability, flexibility, and reliability guarantees to face the expected mobility of IoT systems; and (v) the design of optimal resource allocation mechanisms to make the most out of the available resources. These challenges will become even more significant towards the new era of sixth-generation (6G) networks, which will be based on the integration of various cutting-edge heterogeneous technologies. Therefore, the goal of this survey paper is to present all recent developments in the field of IEC continuum systems, with respect to the aforementioned deployment challenges.
In the same context, potential limitations and future challenges are highlighted as well. Finally, indicative use cases are also presented from an IEC continuum perspective.
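Challenge (ii), task offloading across the continuum, can be sketched as a cost comparison. The bandwidths and compute speeds below are illustrative assumptions, not figures from the survey: the target minimizing estimated completion time (transfer time plus compute time) wins.

```python
TARGETS = {
    # name: (uplink bandwidth in MB/s, compute speed in ops/s)
    "local": (None, 1e8),    # no transfer needed, but a weak on-device CPU
    "edge":  (10.0, 1e9),    # fast nearby link, moderate compute
    "cloud": (2.0,  1e10),   # slow long-haul link, abundant compute
}

def completion_time(target, data_mb, ops):
    # Estimated time = data transfer (skipped for local) + computation.
    bandwidth, speed = TARGETS[target]
    transfer = 0.0 if bandwidth is None else data_mb / bandwidth
    return transfer + ops / speed

def best_target(data_mb, ops):
    return min(TARGETS, key=lambda t: completion_time(t, data_mb, ops))
```

With these toy numbers, tiny tasks stay local, moderately heavy tasks go to the edge, and very compute-heavy tasks justify the trip to the cloud, mirroring the latency-driven trade-off the survey describes.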

A differential privacy protection-based federated deep learning framework to fog-embedded architectures
http://hdl.handle.net/2117/398453
Gutiérrez Escobar, Norma; Otero Calviño, Beatriz; Rodríguez Luna, Eva; Utrera Iglesias, Gladys Miriam; Mus León, Sergi; Canal Corretger, Ramon
Nowadays, companies collect massive quantities of data to enhance their operations, often at the expense of sharing sensitive user information. This data is widely used to train Deep Learning (DL) neural networks to model, classify, or recognize complex data. These activities enable companies to offer an array of services to users, such as precise advertising and optimal location services. This study explores potential solutions for preserving privacy while utilizing DL applications. To address the privacy issue, we develop a privacy-preserving framework specifically designed for fog computing environments. Unlike traditional cloud computing architectures, fog-embedded architectures only share a small portion of user data with a nearby fog node, ensuring that the majority of sensitive data remains secure. Within these fog nodes, we incorporate two additional algorithms, namely Generalization and Threshold, to enhance the privacy-preserving capabilities of the framework. The first algorithm, Generalization, introduces a validation dataset within the fog nodes which not only increases the accuracy of the fog-embedded framework but also ensures that user data is preserved. The second algorithm, Threshold, is responsible for protecting user data samples and reducing the amount of information sent to the server. By combining these two algorithms, we are able to provide an additional layer of protection for user privacy while still maintaining the accuracy of the model. We conduct an evaluation to test its effectiveness using two separate datasets. In addition, we analyze them through a Feed Forward Neural Network (FFNN) and compare the results with a traditional centralized architecture to validate the effectiveness of the proposed framework. The results of our evaluation demonstrate that the proposed privacy-preserving framework, when combined with the Generalization and Threshold algorithms, can preserve up to 38.44% of user data.
Additionally, we were able to extend the framework to multiple fog nodes without compromising the network’s accuracy, as we only observed a 0.1% decrease in accuracy when using the proposed architecture. This study emphasizes the importance of preserving user information while using DL applications and provides a solution that trains the desired network without violating user privacy, hence preserving their anonymity. Overall, the study highlights the potential of Federated Deep Learning to improve the accuracy and privacy of DL applications in fog computing environments.
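The aggregation idea behind such frameworks can be sketched with noisy federated averaging. This is a generic differential-privacy mechanism (clip each node's update, add Gaussian noise, then average), not a reproduction of the paper's Generalization and Threshold algorithms.

```python
import numpy as np

def privatize(update, clip=1.0, sigma=0.1, rng=None):
    # Clip the update norm to bound any single node's influence, then add
    # Gaussian noise -- the standard Gaussian mechanism for DP.
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / norm)
    return clipped + rng.normal(0.0, sigma * clip, size=update.shape)

def federated_average(updates, rng):
    # The server only ever sees privatized updates, never the raw ones.
    return np.mean([privatize(u, rng=rng) for u in updates], axis=0)

rng = np.random.default_rng(0)
true_direction = np.array([0.6, -0.8])   # signal shared by all fog nodes
updates = [true_direction + 0.05 * rng.standard_normal(2) for _ in range(20)]
avg = federated_average(updates, rng)
```

Averaging over many nodes cancels most of the injected noise, which is why accuracy degrades only slightly (the paper reports a 0.1% drop) even though each individual contribution is obscured.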
2023-12-21T10:48:09ZGutiérrez Escobar, NormaOtero Calviño, BeatrizRodríguez Luna, EvaUtrera Iglesias, Gladys MiriamMus León, SergiCanal Corretger, RamonNowadays, companies collect massive quantities of data to enhance their operations, often at the expense of sharing user sensible information. This data is widely used to train Deep Learning (DL) neural networks to model, classify, or recognize complex data. These activities enable companies to offer an array of services to users, such as precise advertising and optimal location services. This study explores potential solutions for preserving privacy while utilizing DL applications. To address the privacy issue, we develop a privacy-preserving framework specifically designed for fog computing environments. Unlike traditional cloud computing architectures, fog embedded architectures only share a small portion of user data with a nearby fog node, ensuring that the majority of sensitive data remains secure. Within these fog nodes, we incorporate two additional algorithms, namely Generalization and Threshold, to enhance the privacy-preserving capabilities of the framework. The first algorithm, Generalization, introduces a validation dataset within the fog nodes which not only increases the accuracy of the fog-embedded framework but also ensures that user data is preserved. The second algorithm, Threshold, is responsible for protecting user data samples and reducing the amount of information sent to the server. By combining these two algorithms, we are able to provide an additional layer of protection for user privacy while still maintaining the accuracy of the model. We conduct an evaluation to test its effectiveness using two separate datasets. In addition, we analyze them through a Feed Forward Neural Network (FFNN) and compare the results with a traditional centralized architecture to validate the effectiveness of the proposed framework. 
A dispersion analysis of uniformly high order, interior and boundaries, mimetic finite difference solutions of wave propagation problems
http://hdl.handle.net/2117/398014
2024-03-03T21:26:19Z 2023-12-14T13:42:46Z
Rojas, Otilio; Mendoza, Larry; Otero Calviño, Beatriz; Villamizar Morales, Jorge; Calderón, Giovanni; Castillo, José; Miranda, Guillermo
A preliminary stability and dispersion study is developed for mimetic finite difference discretizations of wave propagation problems. The discretization framework corresponds to the fourth-order staggered-grid Castillo-Grone operators, which offer a sextuple of free parameters. The parameter-dependent mimetic stencils allow problem discretization at domain boundaries and at neighboring grid cells. For arbitrary parameter sets, these boundary and near-boundary mimetic stencils are lateral, and here we take first steps toward characterizing how the stability and dispersion properties of such discretizations depend on these parameters. As a reference, our analyses also present results based on Castillo-Grone parameters leading to mimetic operators of minimum bandwidth, which have previously been applied to similar physical problems. The most interior parameter-dependent mimetic stencils exhibit a specific Toeplitz-like structure, which reduces to the standard central finite difference formula for staggered differentiation in the grid interior. Thus, our results apply to the whole discretization grid. The study done for the 1-D problem could be applied to the discretization of a free-surface boundary condition along a gridline orthogonal to this boundary.
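The interior stencil the abstract refers to — the standard fourth-order staggered-grid central difference, which the Castillo-Grone operators reduce to away from boundaries — is easy to verify numerically. A short sketch (function names are ours; the parameter-dependent boundary stencils are not reproduced here):

```python
import math

def staggered_d4(f, x, h):
    """Fourth-order staggered-grid central difference for f'(x).

    Uses samples at x +/- h/2 and x +/- 3h/2; Taylor expansion gives
    27*(f(x+h/2) - f(x-h/2)) - (f(x+3h/2) - f(x-3h/2)) = 24*h*f'(x) + O(h^5),
    so the quotient below approximates f'(x) with O(h^4) error.
    """
    return (27.0 * (f(x + 0.5 * h) - f(x - 0.5 * h))
            - (f(x + 1.5 * h) - f(x - 1.5 * h))) / (24.0 * h)

# Check fourth-order convergence on f = sin, f' = cos: halving h
# should shrink the error by roughly a factor of 16.
x0 = 0.7
for h in (0.1, 0.05, 0.025):
    err = abs(staggered_d4(math.sin, x0, h) - math.cos(x0))
    print(f"h={h:<6} error={err:.3e}")
```

The dispersion analysis in the paper studies exactly how stencils of this family distort propagating wave modes as a function of the free parameters.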