Articles de revista (Journal articles)
http://hdl.handle.net/2117/119980

Transfer-learning-based intrusion detection framework in IoT networks
http://hdl.handle.net/2117/375463 (published 2022-11-03)
Rodríguez Luna, Eva; Valls, Pol; Otero Calviño, Beatriz; Costa Prats, Juan José; Verdú Mulà, Javier; Pajuelo González, Manuel Alejandro; Canal Corretger, Ramon
Cyberattacks on the Internet of Things (IoT) are growing exponentially, especially zero-day attacks, mostly driven by security weaknesses in IoT networks. Traditional intrusion detection systems (IDSs) have adopted machine learning (ML), especially deep learning (DL), to improve the detection of cyberattacks. DL-based IDSs require balanced datasets with large amounts of labeled data; however, such large collections are lacking in IoT networks. This paper proposes an efficient intrusion detection framework based on transfer learning (TL), knowledge transfer, and model refinement for the effective detection of zero-day attacks. The framework is tailored to 5G IoT scenarios with unbalanced and scarce labeled datasets. The TL model is based on convolutional neural networks (CNNs). The framework was evaluated on a wide range of zero-day attacks; to this end, three specialized datasets were created. Experimental results show that the proposed TL-based framework achieves high accuracy and a low false prediction rate (FPR), with better detection rates for the different families of known and zero-day attacks than any previous DL-based IDS. These results demonstrate that TL is effective for detecting cyberattacks in IoT environments.
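As a loose illustration of the transfer-learning idea described in the abstract (not the authors' CNN framework), the sketch below freezes a hypothetical "pretrained" feature extractor and retrains only a small classification head on scarce labeled target data; all names, dimensions, and data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical "pretrained" feature extractor: a fixed random projection
# standing in for CNN layers trained on a large, balanced source dataset.
W_frozen = rng.normal(size=(8, 16))         # frozen: never updated below

def features(X):
    return relu(X @ W_frozen)               # shared representation

# Scarce labeled target data (e.g. traffic from a new attack family).
X = rng.normal(size=(40, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy binary label

# Fine-tune only the classification head (plain logistic regression).
F = features(X)
w = np.zeros(16)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid
    w -= 0.1 * (F.T @ (p - y)) / len(y)     # gradient step on the head only
    b -= 0.1 * np.mean(p - y)

p_final = 1.0 / (1.0 + np.exp(-(F @ w + b)))
acc = float(np.mean((p_final > 0.5) == y))  # training accuracy of the head
```

Only `w` and `b` are trained; `W_frozen` plays the role of the knowledge transferred from the source domain.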
Small-layered feed-forward and convolutional neural networks for efficient P wave earthquake detection
http://hdl.handle.net/2117/375345 (published 2022-11-02)
Mus León, Sergi; Otero Calviño, Beatriz; Alvarado Vivas, Leonardo; Canal Corretger, Ramon; Rojas Ulacio, Otilio
The number and efficiency of seismic networks have steadily increased over time, delivering large datasets to be analyzed for earthquake occurrence. Automatic tools for accurate earthquake detection are under intense development. This paper first proposes a new windowing procedure for seismic traces that greatly facilitates earthquake detection. This procedure applies regular trace filtering and normalization, but also performs a strict window alignment to the P wave onset. These event-aligned windows are the input data to our P wave detection networks, which have a relatively small or moderate number of layers. We then develop feed-forward (FFNN) and convolutional (CNN) neural networks and explore multiple architecture configurations to find relevant hyperparameter patterns for better detection. To assess network performance, we adopt the widely used metrics of accuracy (ACC) and the area under the curve (AUC) of the receiver operating characteristic. In terms of ACC, the best FFNN and CNN reach 91% and 98%, respectively; in terms of AUC, they achieve 96% and 99%, respectively. Thus, our novel trace windowing procedure allows networks with few hyperparameters to detect earthquakes correctly at low computational cost. Finally, we use the CNN with the best AUC as an effective trace filter for P wave arrival time estimation.
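The onset-aligned windowing step can be sketched as follows; this is an assumed simplification of the paper's procedure, and the `pre`/`length` values are hypothetical, not the ones used in the paper.

```python
import numpy as np

def onset_aligned_window(trace, onset, pre=20, length=100):
    """Cut a fixed-length window aligned to the picked P-wave onset,
    then demean and scale to unit peak amplitude (a simplified stand-in
    for the paper's filtering/normalization and alignment steps)."""
    start = onset - pre
    if start < 0 or start + length > len(trace):
        raise ValueError("window falls outside the trace")
    w = trace[start:start + length].astype(float)
    w -= w.mean()                    # remove DC offset
    peak = np.max(np.abs(w))
    return w / peak if peak > 0 else w

# Usage: synthetic trace that is quiet until an "onset" at sample 300.
trace = np.concatenate([np.zeros(300), np.sin(np.linspace(0, 20, 200))])
win = onset_aligned_window(trace, onset=300)
```

Because every window places the onset at the same sample offset, the downstream network does not have to learn translation invariance, which is what allows the small architectures the abstract describes.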
Fast and accurate SER estimation for large combinational blocks in early stages of the design
http://hdl.handle.net/2117/361837 (published 2022-02-07)
Anglada Sánchez, Martí; Canal Corretger, Ramon; Aragón Alcaraz, Juan Luis; González Colás, Antonio María
Soft Error Rate (SER) estimation is an important challenge for integrated circuits because of the increased vulnerability brought by technology scaling. This paper presents a methodology to estimate, in early stages of the design, the susceptibility of combinational circuits to particle strikes. At the core of the framework lies MASkIt, a novel approach that combines signal probabilities with technology characterization to swiftly compute the logical, electrical, and timing masking effects of the circuit under study, taking into account all input combinations and pulse widths at once. Signal probabilities are estimated with a new hybrid approach that integrates heuristics with selective simulation of reconvergent subnetworks. The experimental results validate the proposed technique, showing a speedup of two orders of magnitude over traditional fault injection estimation with an average estimation error of 5 percent. Finally, we analyze the vulnerability of the Decoder, Scheduler, ALU, and FPU of an out-of-order, superscalar processor design.
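Signal-probability propagation through basic gates, which underlies the logical-masking computation mentioned in the abstract, can be illustrated as below. The independence assumption between gate inputs is mine; MASkIt's heuristics and selective simulation exist precisely to handle reconvergent fanout, where that assumption breaks.

```python
# Probability that each net is logic '1', assuming independent inputs.
def p_not(a):
    return 1.0 - a

def p_and(a, b):
    return a * b

def p_or(a, b):
    return a + b - a * b

def p_xor(a, b):
    return a + b - 2 * a * b

# Example circuit: f = (x AND y) OR (NOT z), uniform inputs P(1) = 0.5.
px = py = pz = 0.5
p_f = p_or(p_and(px, py), p_not(pz))   # 0.25 + 0.5 - 0.125 = 0.625
```

A strike on the AND gate is logically masked whenever NOT z already forces the OR output to 1, so the strike only propagates with probability pz; chaining such probabilities over the whole netlist is what lets a tool weigh all input combinations at once instead of enumerating them.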
A survey of deep learning techniques for cybersecurity in mobile networks
http://hdl.handle.net/2117/355516 (published 2021-11-04)
Rodríguez Luna, Eva; Otero Calviño, Beatriz; Gutiérrez Escobar, Norma; Canal Corretger, Ramon
The widespread use of mobile devices and the increasing popularity of mobile services have raised serious cybersecurity challenges. In recent years, the number of cyberattacks has grown dramatically, as has their complexity. Traditional cybersecurity systems fail to detect complex attacks and unknown malware, and they do not guarantee the preservation of user privacy. Consequently, cybersecurity systems have embraced deep learning (DL) models, as they provide efficient detection of novel attacks and better accuracy. This paper presents a comprehensive survey of recent cybersecurity works that use DL in mobile and wireless networks. It covers all cybersecurity aspects: infrastructure threats and attacks, software attacks, and privacy preservation. First, we provide a detailed overview of DL techniques applied, or with potential applications, to cybersecurity. Then, we review cybersecurity works based on DL. For each cybersecurity threat or attack, we discuss the challenges of using DL methods; for each contribution, we review the implementation details and the performance of the solution. In a nutshell, this paper constitutes the first survey to provide a complete review of DL methods for cybersecurity. From this analysis, we identify the most effective DL methods for the different threats and attacks.
Deep neural networks for earthquake detection and source region estimation in north-central Venezuela
http://hdl.handle.net/2117/336778 (published 2021-02-03)
Tous Liesa, Rubén; Alvarado Bermúdez, Leonardo; Otero Calviño, Beatriz; Cruz de la Cruz, Stalin Leonel; Rojas Ulacio, Otilio
Reliable earthquake detection algorithms are necessary to properly analyze and catalog the continuously growing seismic records. We report the results of applying a deep convolutional neural network, called UPC-UCV (Universitat Politecnica de Catalunya - Universidad Central de Venezuela), to single-station three-channel signal windows for P-wave earthquake detection and source region estimation in north-central Venezuela. The analysis is performed on a new dataset of handpicked P-wave arrivals from local events, named CARABOBO, built and made public for reproducibility and benchmarking purposes. The CARABOBO dataset consists of three-channel continuous data recorded by the broadband stations of the Venezuelan Foundation for Seismological Research in the region 9.5°–11.5°N, 67.0°–69.0°W from April 2018 to April 2019. During this period, 949 earthquakes were recorded in that area, with magnitudes from Mw 1.1 to 5.2. To estimate the epicentral source region of a detected event, the proposed network relies on a geographical partitioning of the CARABOBO dataset into K clusters. This partitioning is performed automatically by the k-means algorithm, and the optimal K for our dataset has been assessed using the elbow (K=5) and silhouette (K=3) methods. For the target seismicity, the proposed network achieves 95.27% detection accuracy and 93.36% source region estimation accuracy when using K=5 geographic clusters; the location accuracy increases slightly to 95.68% with K=3 partitions. The detection capability of the network has also been tested on the OKLAHOMA dataset, which compiles more than 2000 local earthquakes that occurred in this U.S. state. Without any modification, the proposed network yields excellent detection results when trained and evaluated on that dataset, from a totally different geographical region (98.21% accuracy; ConvNetQuake, fine-tuned for the same dataset, achieves 97.32%).
Published in open access after six months of embargo with the permission of the publisher.
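The k-means partitioning and elbow heuristic mentioned in the abstract can be sketched with a plain Lloyd's-iteration implementation on synthetic epicenter coordinates; the three "source regions" below are made-up points inside the study area, not the paper's clusters.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(points, k, iters=50):
    """Plain Lloyd's k-means: a stand-in for the k-means partitioning of
    epicentral coordinates described in the abstract."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):               # skip empty clusters
                centers[j] = points[labels == j].mean(axis=0)
    inertia = ((points - centers[labels]) ** 2).sum()
    return labels, centers, inertia

# Three synthetic "source regions" in (lat, lon), roughly inside the
# 9.5-11.5N / 67-69W study area (coordinates are invented).
regions = np.array([(10.0, -68.5), (11.0, -67.5), (9.8, -67.2)])
pts = np.concatenate([c + 0.05 * rng.normal(size=(50, 2)) for c in regions])

# Elbow heuristic: inertia drops sharply up to the "right" K, then flattens.
inertias = [kmeans(pts, k)[2] for k in range(1, 6)]
```

Each cluster index then becomes a class label for the source-region head of the detection network, turning location estimation into a K-way classification problem.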
Securing RSA hardware accelerators through residue checking
http://hdl.handle.net/2117/334734 (published 2020-12-21)
Lasheras Mas, Ana; Canal Corretger, Ramon; Rodríguez Luna, Eva; Cassano, Luca
Circuits for the hardware acceleration of cryptographic algorithms are ubiquitously deployed in consumer and industrial products. Although secure from a mathematical point of view, such accelerators may expose several vulnerabilities strictly related to the hardware implementation. Differential fault analysis (DFA) and hardware Trojan horses (HWTs) may be exploited to steal secret information from the circuit or to interfere with its nominal functioning. It is therefore important to protect cryptographic hardware accelerators against such attacks efficiently. In this paper, we propose a lightweight technique for protecting circuits implementing the RSA algorithm against DFA and HWTs at runtime. The proposed solution relies on residue checking, a well-known technique from traditional fault tolerance. Residue checking is applied here to RSA circuits to detect any modification of the circuit's output, whether induced by a fault or by the activation of a HWT. When a modification is detected, the protection technique reacts by obfuscating the circuit's output, i.e., generating a random output. An experimental campaign (99% confidence, 1% error) demonstrated that, against DFA, the proposed solution detected 100% of the fault attacks that leaked information to the attacker. Moreover, we applied the proposed technique to all the HWT-infected implementations of the RSA algorithm available in the Trust-Hub benchmark suite, achieving 100% HWT detection. The overhead introduced by the proposed solution is a maximum area increase below 3% and a dynamic power consumption increase of about 18%, with no impact on the operating frequency.
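A software sketch of residue checking around modular exponentiation, including the randomize-on-mismatch reaction, is shown below. This is my own illustration in the spirit of classic residue-based countermeasures, not the paper's hardware scheme, and the check modulus 0xFFF1 is an arbitrary choice.

```python
import secrets

def protected_modexp(m, e, n, r=0xFFF1):
    """Modular exponentiation with a residue consistency check.
    r is a small check modulus (0xFFF1 is an arbitrary pick of mine)."""
    s = pow(m, e, n * r)       # main computation, carried out mod n*r
    check = pow(m, e, r)       # independent small residue channel
    if s % r != check:         # mismatch => fault or Trojan suspected
        return secrets.randbelow(n)   # obfuscate: emit a random output
    return s % n               # fault-free case: m^e mod n

# Usage with textbook-sized RSA numbers (far too small for real security):
n, e = 3233, 17                # n = 61 * 53
c = protected_modexp(65, e, n)
```

Because n divides n*r, reducing `s` mod n recovers the true result, while reducing it mod r must agree with the independently computed residue; a fault that corrupts the main datapath almost never preserves that congruence, so the output is replaced by random data instead of leaking exploitable faulty ciphertexts.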
The RECIPE approach to challenges in deeply heterogeneous high performance systems
http://hdl.handle.net/2117/331377 (published 2020-11-04)
Agosta, Giovanni; Fornaciari, William; Atienza, David; Canal Corretger, Ramon; Cilardo, Alessandro; Flich Cardo, José; Hernández Luz, Carles; Kulczewski, Michal; Massari, Giuseppe; Tornero Gavilá, Rafael; Zapater Sancho, Marina
RECIPE (REliable power and time-ConstraInts-aware Predictive management of heterogeneous Exascale systems) is a recently started project funded within the H2020 FETHPC programme, which is expressly targeted at exploring new High-Performance Computing (HPC) technologies. RECIPE aims to introduce a hierarchical runtime resource management infrastructure that optimizes energy efficiency and minimizes the occurrence of thermal hotspots, while enforcing the time constraints imposed by the applications and ensuring reliability for both time-critical and throughput-oriented computations that run on deeply heterogeneous accelerator-based systems. This paper presents a detailed overview of RECIPE, identifying the fundamental challenges as well as the key innovations addressed by the project. In particular, the need for predictive reliability approaches to maximize hardware lifetime and guarantee application performance is identified as the key concern for RECIPE. We address it through hierarchical resource management of the heterogeneous architectural components of the system, driven by estimates of application latency and hardware reliability obtained, respectively, through timing analysis and through modeling the thermal properties and mean time to failure of subsystems. We show the impact of prediction accuracy on the overheads imposed by the checkpointing policy, as well as a possible application to a weather forecasting use case.
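The link between failure-rate prediction accuracy and checkpointing overhead can be made concrete with Young's classical first-order approximation of the optimal checkpoint interval; this is the textbook baseline, not RECIPE's actual policy, and the numbers below are illustrative.

```python
import math

def young_interval(checkpoint_cost, mtbf):
    """Young's approximation of the optimal checkpoint interval:
    sqrt(2 * C * MTBF), with C the cost of taking one checkpoint."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

def waste_fraction(interval, checkpoint_cost, mtbf):
    """Approximate fraction of time lost: checkpointing overhead plus
    expected rework after a failure (interval/2 lost on average)."""
    return checkpoint_cost / interval + interval / (2.0 * mtbf)

# Illustrative numbers: 60 s checkpoints, 24 h mean time between failures.
opt = young_interval(60.0, 24 * 3600.0)   # ~= 3220 s
```

If the MTTF estimate fed into this formula is off by a factor of four, the chosen interval is off by a factor of two and the waste term grows accordingly, which is why the abstract singles out prediction accuracy as the driver of checkpointing overhead.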
Predictive reliability and fault management in exascale systems: State of the art and perspectives
http://hdl.handle.net/2117/330352 (published 2020-10-16)
Canal Corretger, Ramon; Hernández Luz, Carles; Tornero Gavilá, Rafael; Cilardo, Alessandro; Massari, Giuseppe; Reghenzani, Federico; Fornaciari, William; Zapater Sancho, Marina; Atienza, David; Oleksiak, Ariel; Wojciech Piatek, Poznan; Abella Ferrer, Jaume
Performance and power constraints come together with Complementary Metal Oxide Semiconductor (CMOS) technology scaling in future Exascale systems. Technology scaling makes each individual transistor more prone to faults and, due to the exponential increase in the number of devices per chip, leads to higher system fault rates. Consequently, High-Performance Computing (HPC) systems need to integrate prediction, detection, and recovery mechanisms to cope with faults efficiently. This article reviews fault detection, fault prediction, and recovery techniques in HPC systems, from the electronics to the system level. We analyze their strengths and limitations. Finally, we identify promising paths to meet the reliability levels of Exascale systems.
A cost-efficient QoS-aware analytical model of future software content delivery networks
http://hdl.handle.net/2117/328620 (published 2020-09-10)
Otero Calviño, Beatriz; Rodríguez Luna, Eva; Rojas, Otilio; Verdú Mulà, Javier; Costa Prats, Juan José; Pajuelo González, Manuel Alejandro; Canal Corretger, Ramon
Freelance, part-time, work-at-home, and other flexible jobs are changing the concept of the workplace and bringing information- and content-exchange problems to companies. Geographically spread corporations may use remote distribution of software and data to meet employees' demands by exploiting emerging delivery technologies. In this context, cost-efficient software distribution is crucial to allow business evolution and make IT infrastructures more agile. At the same time, container-based virtualization technology is shaping the new trends in software deployment and infrastructure design. We envision current and future enterprise IT management evolving towards container-based software delivery over hybrid CDNs. This paper presents a novel cost-efficient, QoS-aware analytical model and a hybrid CDN-P2P architecture for enterprise software distribution.
The model allows delivery cost minimization for a wide range of companies, from big multinationals to SMEs, using CDN-P2P distribution under various hypothetical industrial scenarios. Model constraints guarantee acceptable deployment times and keep the amount of interchanged content below the bandwidth and storage limits of the networks in our scenarios. Key model parameters account for network bandwidth, storage limits, and rental prices, determined empirically from the values offered by the commercial delivery networks KeyCDN, MaxCDN, CDN77, and BunnyCDN. This preliminary study indicates that MaxCDN offers the best cost-QoS trade-off. The model is implemented in the network simulation tool PeerSim and applied to diverse testing scenarios by varying company type, the number and profile (technical or administrative) of employees, and the number and size of content requests. Hybrid simulation results show overall economic savings between 5% and 20%, compared to just hiring resources from a commercial CDN, while guaranteeing satisfactory QoS levels in terms of deployment times and number of served requests.
Alternating direction implicit time integrations for finite difference acoustic wave propagation: parallelization and convergence
http://hdl.handle.net/2117/190495 (published 2020-06-11)
Otero Calviño, Beatriz; Rojas, Otilio; Moya, Ferrán; Castillo, José
This work studies the parallelization and empirical convergence of two finite difference acoustic wave propagation methods on 2-D rectangular grids that share the same alternating direction implicit (ADI) time integration. This ADI integration is based on a second-order implicit Crank-Nicolson temporal discretization, factored by a Peaceman-Rachford decomposition of the time and space equation terms. In space, the two methods differ substantially and apply different fourth-order accurate differentiation techniques. The first method uses compact finite differences (CFD) on nodal meshes, which requires solving tridiagonal linear systems along each grid line, while the second employs staggered-grid mimetic finite differences (MFD). For each method, we implement three parallel versions: (i) a multithreaded code in Octave, (ii) a C++ code that exploits OpenMP loop parallelization, and (iii) a CUDA kernel for an NVIDIA GTX 960 Maxwell card. In these implementations, the main source of parallelism is the simultaneous ADI updating of each wave field matrix, either column-wise or row-wise, according to the differentiation direction. In our numerical applications, the highest performances are delivered by the CFD and MFD CUDA codes, which achieve speedups of 7.21x and 15.81x, respectively, relative to their C++ sequential counterparts with optimal compilation flags. Our test cases also allow us to assess the numerical convergence and accuracy of both methods. In a problem with an exact harmonic solution, both methods exhibit convergence rates close to 4, and the MFD method is in practice more accurate. However, both convergence rates decay to second order on smooth problems with severe gradients at the boundaries, and the MFD rates degrade on highly resolved grids, leading to larger inaccuracies. This transition in empirical convergence agrees with the nominal truncation errors in space and time.
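The Peaceman-Rachford structure described above — alternating implicit sweeps that each reduce to one independent tridiagonal solve per grid line — can be illustrated with a minimal sketch. For brevity, this sketch (ours, not the authors' code) uses second-order differences on the 2-D diffusion equation rather than the paper's fourth-order CFD/MFD acoustic schemes, with zero Dirichlet boundaries on a square grid; all function names are ours.

```python
import numpy as np

def thomas(sub, main, sup, d):
    """Thomas algorithm for a tridiagonal system (sub/main/sup diagonals)."""
    n = len(main)
    cp = np.empty(n); dp = np.empty(n)
    cp[0] = sup[0] / main[0]; dp[0] = d[0] / main[0]
    for i in range(1, n):                      # forward elimination
        m = main[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (d[i] - sub[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def second_diff(u, axis):
    """Stencil u[i-1] - 2*u[i] + u[i+1] along `axis`, zero outside (Dirichlet)."""
    nb = np.zeros_like(u)
    if axis == 0:
        nb[:-1] += u[1:]; nb[1:] += u[:-1]
    else:
        nb[:, :-1] += u[:, 1:]; nb[:, 1:] += u[:, :-1]
    return nb - 2.0 * u

def pr_adi_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy on the interior
    of a square grid; r = dt / (2*h**2). Axis 0 is y, axis 1 is x."""
    n = u.shape[0]
    sub = np.full(n, -r); main = np.full(n, 1 + 2 * r); sup = np.full(n, -r)
    sub[0] = 0.0; sup[-1] = 0.0
    # Half-step 1: implicit in x, explicit in y -> one tridiagonal solve per row.
    rhs = u + r * second_diff(u, axis=0)
    half = np.empty_like(u)
    for j in range(n):
        half[j, :] = thomas(sub, main, sup, rhs[j, :])
    # Half-step 2: implicit in y, explicit in x -> one solve per column.
    rhs = half + r * second_diff(half, axis=1)
    out = np.empty_like(u)
    for i in range(n):
        out[:, i] = thomas(sub, main, sup, rhs[:, i])
    return out
```

Each row (or column) solve is independent of the others, which is exactly the line-level parallelism that the Octave, OpenMP, and CUDA versions in the paper exploit.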