Conference papers and presentations
http://hdl.handle.net/2117/7268
http://hdl.handle.net/2117/387212
Flexible radio access network optimization with cell coordination
Ruiz Boqué, Sílvia; García Lozano, Mario; Guerra Gómez, Rolando; Saeed, Umar
This paper focuses on Beyond fifth generation (B5G) non-linear data modeling and decision-making tools to optimize the trade-off between cost reduction and coverage-QoS. In particular, the distribution of active Remote Radio Heads or Units (RRHs) needed according to traffic demands is improved. The proposed optimization platform is based on a multi-objective optimization model designed to reduce the network cost while maintaining the coverage-QoS. Capacity constraints, User Equipments (UEs), and different slices are considered to test the results under realistic conditions. Results at 3.6 and 28 GHz are presented by analyzing and comparing several Cloud Radio Access Network (C-RAN) split options in a heterogeneous deployment with Macro-RRHs (MRRHs) and Small-RRHs (SRRHs). Results show cost reductions from 30% to 70% depending on the scenario. Moreover, the proposed algorithm adds the possibility of considering coordination between cells to further improve the cost reduction. The results considering cooperation have been presented at both frequency bands with a fully centralized C-RAN (split option 8).
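The cost-versus-coverage idea above can be sketched as a toy activation problem: pick the cheapest set of RRHs that still meets the traffic demand and a coverage target. This is a minimal greedy sketch, not the paper's multi-objective model; all site names, costs, capacities, and the additive-coverage simplification are illustrative assumptions.

```python
def select_rrhs(rrhs, demand, coverage_target):
    """Greedily activate the cheapest-per-capacity RRHs until the
    aggregate capacity meets the traffic demand and the coverage
    meets the target.

    rrhs: list of dicts with 'name', 'cost', 'capacity', 'coverage'
          (coverage treated as additive here purely for illustration).
    """
    active, capacity, coverage = [], 0.0, 0.0
    # Sort by cost per unit of capacity: cheapest useful sites first.
    for rrh in sorted(rrhs, key=lambda r: r["cost"] / r["capacity"]):
        if capacity >= demand and coverage >= coverage_target:
            break
        active.append(rrh["name"])
        capacity += rrh["capacity"]
        coverage = min(1.0, coverage + rrh["coverage"])
    return active, capacity, coverage


# Hypothetical heterogeneous deployment: one macro site, two small sites.
sites = [
    {"name": "MRRH-1", "cost": 10.0, "capacity": 100.0, "coverage": 0.6},
    {"name": "SRRH-1", "cost": 2.0, "capacity": 30.0, "coverage": 0.1},
    {"name": "SRRH-2", "cost": 2.0, "capacity": 30.0, "coverage": 0.1},
]
active, cap, cov = select_rrhs(sites, demand=120.0, coverage_target=0.7)
```

A real multi-objective solver would explore the Pareto front between cost and coverage-QoS instead of this single greedy pass.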
http://hdl.handle.net/2117/367516
SCHEMA: Service Chain Elastic Management with distributed reinforcement learning
Dalgkitsis, Anestis; Garrido Platero, Luis Ángel; Mekikis, Prodromos-Vasileios; Ramantas, Kostas; Alonso Zárate, Luis Gonzaga; Verikoukis, Christos
As the demand for Network Function Virtualization accelerates, service providers are expected to advance the way they manage and orchestrate their network services to offer lower latency services to their future users. Modern services require complex data flows between Virtual Network Functions, placed in separate network domains, risking an increase in latency that compromises the offered latency constraints. This shift requires high levels of automation to deal with the scale and load of future networks. In this paper, we formulate the Service Function Chaining (SFC) placement problem and then we tackle it by introducing SCHEMA, a Distributed Reinforcement Learning (RL) algorithm that performs complex SFC orchestration for low latency services. We combine multiple RL agents with a Bidding Mechanism to enable scalability on multi-domain networks. Finally, we use a simulation model to evaluate SCHEMA, and we demonstrate its ability to obtain a 60.54% reduction of average service latency when compared to a centralised RL solution.
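The bidding mechanism described above can be sketched as follows: each domain submits a bid for hosting the next VNF of a chain, and the orchestrator places the VNF in the highest-bidding domain. The bid formula and domain parameters below are illustrative assumptions; in SCHEMA the valuation would come from a trained RL agent per domain, not a fixed heuristic.

```python
def bid(domain):
    """Toy bid: domains with free capacity and low latency bid higher.
    A trained RL agent would learn this valuation instead."""
    if domain["free_cpu"] <= 0:
        return 0.0
    return domain["free_cpu"] / (1.0 + domain["latency_ms"])

def place_chain(vnf_chain, domains):
    """Place each VNF of the chain in the domain with the winning bid."""
    placement = {}
    for vnf in vnf_chain:
        winner = max(domains, key=bid)
        placement[vnf] = winner["name"]
        winner["free_cpu"] -= 1  # consume one CPU unit per placed VNF
    return placement

# Hypothetical two-domain network: a small edge and a large core.
domains = [
    {"name": "edge", "free_cpu": 2, "latency_ms": 1.0},
    {"name": "core", "free_cpu": 8, "latency_ms": 10.0},
]
plan = place_chain(["firewall", "nat", "dpi"], domains)
```

Note how the edge domain wins the first VNF but, as its capacity shrinks, later VNFs drift to the core; the bidding keeps the decision distributed across domains.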
http://hdl.handle.net/2117/367515
A collaborative statistical actor-critic learning approach for 6G network slicing control
Rezazadeh, Farhad; Chergui, Hatim; Blanco Botana, Luis; Alonso Zárate, Luis Gonzaga; Verikoukis, Christos
Artificial intelligence (AI)-driven zero-touch massive network slicing is envisioned to be a disruptive technology in beyond 5G (B5G)/6G, where tenancy would be extended to the final consumer in the form of advanced digital use-cases. In this paper, we propose a novel model-free deep reinforcement learning (DRL) framework, called collaborative statistical Actor-Critic (CS-AC), that enables scalable and farsighted slice performance management in a 6G-like RAN scenario built upon mobile edge computing (MEC) and massive multiple-input multiple-output (mMIMO). To this end, the proposed CS-AC targets the optimization of the latency cost under a long-term statistical service-level agreement (SLA). In particular, we consider the Q-th delay percentile SLA metric and enforce slice-specific preset constraints on it. Moreover, to implement distributed learners, we propose an enhanced variant of soft Actor-Critic (SAC) with lower hyperparameter sensitivity. Finally, we present numerical results to showcase the gain of the adopted approach on our OpenAI Gym-based network slicing environment and verify the performance in terms of latency, SLA Q-th percentile, and time efficiency. To the best of our knowledge, this is the first work that studies the feasibility of an AI-driven approach for massive network slicing under statistical SLA.
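The Q-th delay percentile constraint can be made concrete with a small sketch: compute the empirical percentile of observed slice delays and turn any SLA violation into a penalty a learner could add to its cost. The nearest-rank percentile and the thresholds below are plain illustrative choices, not the paper's exact statistical formulation.

```python
import math

def delay_percentile(delays, q):
    """Empirical Q-th percentile using the nearest-rank method."""
    ordered = sorted(delays)
    rank = min(len(ordered), max(1, math.ceil(q / 100.0 * len(ordered))))
    return ordered[rank - 1]

def sla_penalty(delays, q, threshold_ms):
    """Penalty a learner could add to its cost when the slice's Q-th
    percentile delay exceeds its preset SLA threshold."""
    excess = delay_percentile(delays, q) - threshold_ms
    return max(0.0, excess)

# Hypothetical per-slice delay samples (ms) with one tail-latency outlier.
samples = [3.0, 4.0, 5.0, 6.0, 30.0]
```

Constraining the tail percentile rather than the mean is what makes the SLA "statistical": a single outlier like the 30 ms sample dominates the 95th percentile while leaving the median untouched.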
http://hdl.handle.net/2117/350331
Evaluación de la implantación del aprendizaje basado en proyectos en la EPSC (2001-2003)
Alcober Segura, Jesús Ángel; Ruiz Boqué, Sílvia; Valero García, Miguel
Problem- or project-based learning (hereafter PBL) is the learning that takes place as a result of the effort a student makes to solve a problem or carry out a project.
When PBL is used, the starting point of the learning process is the statement of a project that the students must carry out, normally organized in groups (for example, of 5 students). Each group must:
1. Identify what the group already knows and what it should learn in order to tackle the project
2. Establish and carry out a learning plan
3. Review the project in light of the learning acquired and again identify further learning needs
http://hdl.handle.net/2117/345553
Machine-learning based traffic forecasting for resource management in C-RAN
Guerra Gómez, Rolando; Ruiz Boqué, Sílvia; García Lozano, Mario; Olmos Bonafé, Juan José
The assumption of a fixed computational capacity at the Baseband Unit (BBU) pools in a Cloud Radio Access Network (C-RAN) deployment results in underutilized resources or unsatisfied users depending on traffic requirements. In this paper a new strategy to predict the required resources based on Machine Learning techniques is proposed and analysed. Support Vector Machine (SVM), Time-Delay Neural Network (TDNN), and Long Short-Term Memory (LSTM) have been tested and compared to select the best predicting approach. Instead of using a regular synthetic scenario, a realistic dense cell deployment over Vienna city is used to validate the results. The authors show that the proposed solution reduces the average amount of unused resources by 96%.
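The sliding-window setup common to SVM, TDNN, and LSTM forecasters can be sketched as follows: the traffic time series is turned into (history, next-value) pairs, and a regressor is trained on them. Here a trivial moving-average predictor stands in for the trained models; the window size and traffic values are illustrative assumptions, not the paper's configuration.

```python
def make_windows(series, window):
    """Turn a traffic time series into (history, next-value) pairs,
    the supervised form that SVM/TDNN/LSTM regressors are trained on."""
    return [
        (series[i : i + window], series[i + window])
        for i in range(len(series) - window)
    ]

def predict_next(series, window):
    """Stand-in predictor: mean of the last `window` samples. A trained
    model would replace this with its learned regression."""
    return sum(series[-window:]) / window

# Hypothetical hourly BBU resource demand.
traffic = [10, 12, 14, 16, 18, 20]
pairs = make_windows(traffic, window=3)
forecast = predict_next(traffic, window=3)
```

Provisioning the BBU pool from such a forecast, instead of for the fixed worst case, is what cuts the idle capacity.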
http://hdl.handle.net/2117/345386
Unsupervised learning for detection of mobility related anomalies in commercial LTE networks
Moysen Cortes, Jessica; Ahmed, Furqan; García Lozano, Mario; Niëmela, Jarno
We propose an unsupervised learning based anomaly detection framework for identifying cells experiencing performance degradation due to mobility problems in LTE networks. Handover failure rate is used as a performance metric, whereas the mobility problems considered include too-early and too-late handovers. In order to enable unsupervised learning, the framework leverages existing datasets in commercial LTE networks (e.g. performance management counters, configuration management data, geographical locations, and inventory data). To this end, the first step is data pre-processing, followed by feature extraction based on principal component analysis and clustering. For implementation, we use real data from an operational commercial LTE network. Results show that clustering is highly effective in understanding and identifying mobility related anomalous behaviour, and provides actionable insights for automation and self-optimization, paving the way for efficient mobility robustness optimization, which is an important self-optimization use-case for contemporary 4G/5G networks.
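The unsupervised flagging of degraded cells can be illustrated in miniature: standardize the handover failure rate across cells and flag those that deviate strongly from the network-wide behaviour. The real framework applies PCA and clustering over many counters; this z-score on a single KPI is a stand-in, and all cell figures are illustrative assumptions.

```python
def zscores(values):
    """Standardize values to zero mean and unit variance."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [(v - mean) / var ** 0.5 for v in values]

def anomalous_cells(ho_failure_rates, threshold=2.0):
    """Return cell ids whose standardized handover failure rate
    exceeds the threshold (no labels needed: fully unsupervised)."""
    scores = zscores(list(ho_failure_rates.values()))
    return [
        cell
        for cell, score in zip(ho_failure_rates, scores)
        if score > threshold
    ]

# Hypothetical per-cell handover failure rates; cell-4 is degraded.
rates = {"cell-1": 0.01, "cell-2": 0.02, "cell-3": 0.01, "cell-4": 0.40}
flagged = anomalous_cells(rates, threshold=1.5)
```

Clustering generalizes this idea: instead of one threshold on one KPI, cells are grouped in a multi-dimensional feature space and the small, distant clusters are the anomalies.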
http://hdl.handle.net/2117/345306
Big data-driven automated anomaly detection and performance forecasting in mobile networks
Moysen Cortes, Jessica; Ahmed, Furqan; García Lozano, Mario; Niëmela, Jarno
The massive amount of data available in operational mobile networks offers an invaluable opportunity for operators to detect and analyze possible anomalies and predict network performance. In particular, application of advanced machine learning (ML) techniques on data aggregated from multiple sources can lead to important insights, not only for the detection of anomalous behavior but also for performance forecasting, thereby complementing classic network operation and maintenance solutions with intelligent monitoring tools. In this paper, we propose a novel framework that aggregates diverse data sets (e.g. configuration, performance, inventory, locations, user speeds) from an operational LTE network and applies ML algorithms to diagnose network issues and analyze their impact on key performance indicators. To this end, pattern identification and time-series forecasting algorithms are used on the ingested data. Results show that the proposed framework can indeed be leveraged to automate the identification of anomalous behaviors associated with spatial-temporal characteristics, and predict customer impact in an accurate manner.
http://hdl.handle.net/2117/338159
Continuous multi-objective zero-touch network slicing via twin delayed DDPG and OpenAI gym
Rezazadeh, Farhad; Chergui, Hatim; Alonso Zárate, Luis Gonzaga; Verikoukis, Christos
Artificial intelligence (AI)-driven zero-touch network slicing (NS) is a new paradigm enabling the automation of resource management and orchestration (MANO) in multi-tenant beyond 5G (B5G) networks. In this paper, we tackle the problem of cloud-RAN (C-RAN) joint slice admission control and resource allocation by first formulating it as a Markov decision process (MDP). We then invoke an advanced continuous deep reinforcement learning (DRL) method called twin delayed deep deterministic policy gradient (TD3) to solve it. In this intent, we introduce a multi-objective approach to make the central unit (CU) learn how to re-configure computing resources autonomously while minimizing latency, energy consumption and virtual network function (VNF) instantiation cost for each slice. Moreover, we build a complete 5G C-RAN network slicing environment using OpenAI Gym toolkit where, thanks to its standardized interface, it can be easily tested with different DRL schemes. Finally, we present extensive experimental results to showcase the gain of TD3 as well as the adopted multi-objective strategy in terms of achieved slice admission success rate, latency, energy saving and CPU utilization.
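The multi-objective aspect above can be sketched as a scalarized reward that trades off latency, energy, and VNF instantiation cost for a slice. The weights, outcomes, and exhaustive action comparison below are illustrative assumptions; TD3 learns a continuous policy from state instead of enumerating actions.

```python
def slice_reward(latency_ms, energy_w, vnf_cost, weights=(2.0, 0.1, 0.5)):
    """Scalarized multi-objective reward: lower latency, energy and
    VNF instantiation cost all increase the reward (hence the sign)."""
    w_lat, w_en, w_cost = weights
    return -(w_lat * latency_ms + w_en * energy_w + w_cost * vnf_cost)

def best_action(actions):
    """Pick the CPU re-configuration with the highest scalarized
    reward. A TD3 agent would learn this mapping from the slice state
    instead of enumerating candidate actions."""
    return max(actions, key=lambda a: slice_reward(*a["outcome"]))

# Hypothetical CPU re-configurations and their (latency, energy, cost).
actions = [
    {"cpu": 2, "outcome": (8.0, 40.0, 1.0)},   # few CPUs: high latency
    {"cpu": 4, "outcome": (4.0, 60.0, 2.0)},   # balanced
    {"cpu": 8, "outcome": (3.5, 110.0, 4.0)},  # many CPUs: high energy
]
choice = best_action(actions)
```

Shifting the weights moves the preferred operating point along the latency/energy/cost trade-off, which is exactly what a multi-objective strategy exposes to the operator.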
http://hdl.handle.net/2117/329319
Real-time dynamic network slicing for the 5G radio access network
Maule, Massimiliano; Mekikis, Prodromos Vasileios; Ramantas, Kostas; Vardakas, John; Verikoukis, Christos
5G networks are expected to satisfy diverse use cases and business models with significant advancements in terms of capacity, reliability, and latency. The allocation and provisioning of network resources pose a challenge for this novel architecture to guarantee higher flexibility and quality of service. As a potential enabler, network slicing was proposed as an innovative approach for the control of the network resources. Although a static slicing approach can be suitable for the transport and core network, the stochastic behavior of the wireless channel requires fast and secure slicing techniques for resource allocation. In this paper, we propose a dynamic slicing approach for the radio access network, where the network resources are carefully assigned to guarantee the service level agreements and increase the number of served users. To demonstrate the performance of our approach, we implemented a fronthaul testbed that highlights the strength of our method in terms of throughput and resource utilization, compared to static slicing.
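The dynamic allocation idea can be sketched as a two-step policy: each slice first receives the minimum resource blocks its SLA guarantees, then the spare capacity is shared in proportion to unmet demand. The slice names, guaranteed amounts, and demands below are illustrative assumptions, not the testbed's configuration.

```python
def allocate(slices, total_prbs):
    """Dynamic slicing sketch.

    slices: dict name -> {'guaranteed': PRBs, 'demand': PRBs}.
    Step 1: every slice gets its SLA-guaranteed PRBs.
    Step 2: spare PRBs are split in proportion to unmet demand.
    """
    alloc = {name: s["guaranteed"] for name, s in slices.items()}
    spare = total_prbs - sum(alloc.values())
    extra_demand = {
        name: max(0, s["demand"] - s["guaranteed"])
        for name, s in slices.items()
    }
    total_extra = sum(extra_demand.values())
    if spare > 0 and total_extra > 0:
        for name in alloc:
            # Integer share of the spare, proportional to unmet demand.
            alloc[name] += spare * extra_demand[name] // total_extra
    return alloc

# Hypothetical slices sharing 100 physical resource blocks.
slices = {
    "embb": {"guaranteed": 20, "demand": 60},
    "urllc": {"guaranteed": 30, "demand": 30},
    "mmtc": {"guaranteed": 10, "demand": 30},
}
plan = allocate(slices, total_prbs=100)
```

Re-running the allocation every scheduling period as demands change is what makes the slicing dynamic while the guaranteed floor keeps the SLAs intact.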
http://hdl.handle.net/2117/185462
On the use of existing 4G small cell deployments for 5G V2N communication
Saeed, Umar; Hämäläinen, Jyri; Mutafungwa, Edward; Wichman, Risto; González González, David; García Lozano, Mario
We study the feasibility of dense 4G and 5G cellular networks at sub-6 GHz and millimeter wave carriers for vehicle-to-network applications. For this purpose, road-side network coverage, signal-to-interference-plus-noise ratio (SINR), and handover rate are used as key performance indicators (KPIs). The KPIs are calculated over realistic vehicular user routes created with the Google Maps APIs. The channel pathloss is simulated using ray-tracing software, and it is shown that even for a dense 4G small cell deployment the coverage at the 28 GHz carrier frequency is very fragmented; thus, service continuity depends on the availability of sub-mmWave carriers.
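The handover-rate KPI can be computed directly from the serving-cell sequence along a route: count serving-cell changes between successive samples and normalize by the route length. The route samples below are illustrative assumptions, not output of the ray-tracing simulation.

```python
def handover_rate(serving_cells, route_km):
    """Handovers per kilometre: count serving-cell changes along the
    ordered route samples and normalize by the route length."""
    handovers = sum(
        1
        for prev, cur in zip(serving_cells, serving_cells[1:])
        if prev != cur
    )
    return handovers / route_km

# Hypothetical serving cell observed at successive samples of a 2 km drive.
route = ["A", "A", "B", "B", "B", "C", "C", "A"]
rate = handover_rate(route, route_km=2.0)
```

A fragmented mmWave coverage map shows up directly in this KPI: frequent serving-cell changes over short distances inflate the handover rate and threaten service continuity.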