Journal articles
http://hdl.handle.net/2117/1135
2024-03-28T18:51:02Z
-
A high-accuracy, scalable and affordable indoor positioning system using visible light positioning for automated guided vehicles
http://hdl.handle.net/2117/400925
Boixader Coma, Aleix; Labella, Carlos; Catalán, Marisa; Paradells Aspas, Josep
Indoor Positioning Systems (IPSs) have multiple applications. For example, they can be used to guide people, to locate items in a warehouse and to support the navigation of Automated Guided Vehicles (AGVs). Currently, most AGVs use local pre-defined navigation systems, but they lack a global localisation system. Integrating both systems is uncommon due to the inherent challenge of balancing accuracy with coverage. Visible Light Positioning (VLP) offers accurate and fast localisation, but it encounters scalability limitations. To overcome this, this paper presents a novel Image Sensor-based VLP (IS-VLP) identification method that harnesses existing Light Emitting Diode (LED) lighting infrastructure to effectively substitute both the navigation and localisation systems across the whole area. We developed an IPS that achieves six-axis positioning at a 90 Hz refresh rate using OpenCV's solvePnP algorithm and embedded computing. This IPS has been validated in a laboratory environment and successfully deployed in a real factory to position an operative AGV. The system has achieved accuracies better than 12 cm for 95% of the measurements. This work advances towards positioning VLP as an appealing choice for IPS in industrial environments, offering an inexpensive, scalable, accurate and robust solution.
2024-02-05T08:44:34Z
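The abstract above names OpenCV's solvePnP as the pose solver. The following minimal Python sketch illustrates how a six-degree-of-freedom camera pose can be recovered from identified LED luminaires with that function; the LED map, camera intrinsics and detected image points are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of camera pose estimation from detected LED luminaires using
# OpenCV's solvePnP. All numeric values below are illustrative assumptions.
import cv2
import numpy as np

# 3D positions of identified LED luminaires in the world frame (metres), assumed.
object_points = np.array([[0.0, 0.0, 3.0],
                          [1.2, 0.0, 3.0],
                          [1.2, 1.2, 3.0],
                          [0.0, 1.2, 3.0]], dtype=np.float64)

# Corresponding pixel coordinates of the LEDs detected in the image, assumed.
image_points = np.array([[410.0, 305.0],
                         [612.0, 298.0],
                         [618.0, 502.0],
                         [405.0, 509.0]], dtype=np.float64)

# Pinhole camera intrinsics from a prior calibration, assumed.
camera_matrix = np.array([[800.0, 0.0, 512.0],
                          [0.0, 800.0, 384.0],
                          [0.0, 0.0, 1.0]], dtype=np.float64)
dist_coeffs = np.zeros(5)  # assume lens distortion already corrected

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    # Camera (vehicle) position in world coordinates: -R^T * t
    R, _ = cv2.Rodrigues(rvec)
    camera_position = (-R.T @ tvec).ravel()
    print("Estimated camera position (m):", camera_position)
```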
-
Improving road safety and user experience by employing dynamic in-vehicle information systems
http://hdl.handle.net/2117/400831
Galarza Osio, Miguel Ángel; Paradells Aspas, Josep
Modern vehicular infotainment systems are becoming increasingly complex and embrace a wide range of functionalities. Interacting with some of these functionalities while driving may increase the driver's workload and affect the execution of primary driving tasks. This article addresses this issue and proposes a practical application that could help overcome the problem as well as improve the driving experience. The applied method consists of developing a system capable of estimating the driving complexity in real time, using variables already available in current vehicles, and of responding to the estimated complexity by applying countermeasures to the infotainment system. The intended purpose is to facilitate the interaction with the functionalities and to reduce the amount of information offered to the driver in complex scenarios. A baseline system was built and tested, demonstrating the feasibility of its implementation in current vehicles.
This is the peer reviewed version of the following article: Galarza, M.; Paradells, J. Improving road safety and user experience by employing dynamic in-vehicle information systems. "IET intelligent transport systems", 1 April 2019, vol. 13, núm. 4, p. 738-744, which has been published in final form at https://doi.org/10.1049/iet-its.2018.5022. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Use of Self-Archived Versions. This article may not be enhanced, enriched or otherwise transformed into a derivative work, without express permission from Wiley or by statutory rights under applicable legislation. Copyright notices must not be removed, obscured or modified. The article must be linked to Wiley’s version of record on Wiley Online Library and any embedding, framing or otherwise making available the article or pages thereof by third parties from platforms, services and websites other than Wiley Online Library must be prohibited.
2024-02-02T08:47:10Z
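As a purely illustrative sketch of the idea described above (estimating driving complexity from signals already available on the vehicle bus and applying countermeasures to the infotainment system), the following Python fragment uses hypothetical signals, weights and thresholds that are not taken from the article.

```python
# Hypothetical sketch: derive a driving-complexity score from generic vehicle-bus
# signals and throttle the infotainment system when the score is high.
# The chosen signals, weights and thresholds are illustrative only.

def complexity_score(speed_kmh, steering_rate_deg_s, brake_pressure, wiper_on):
    """Return a 0..1 complexity estimate from generic vehicle signals (assumed model)."""
    score = 0.0
    score += min(speed_kmh / 130.0, 1.0) * 0.4           # higher speed -> more demanding
    score += min(steering_rate_deg_s / 90.0, 1.0) * 0.3  # rapid steering -> manoeuvring
    score += min(brake_pressure, 1.0) * 0.2              # braking events
    score += 0.1 if wiper_on else 0.0                    # poor weather proxy
    return min(score, 1.0)

def infotainment_policy(score):
    """Map the complexity estimate to an illustrative countermeasure level."""
    if score > 0.7:
        return "lock touch input, suppress notifications"
    if score > 0.4:
        return "simplify menus, defer non-critical messages"
    return "full functionality"

print(infotainment_policy(complexity_score(95, 40, 0.2, wiper_on=True)))
```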
-
Understanding the impact of packet size on the energy efficiency of LoRaWAN
http://hdl.handle.net/2117/399255
Casals Ibáñez, Lluís; Gómez Montenegro, Carlos; Vidal Ferré, Rafael
LoRaWAN has become a flagship LPWAN technology, and one of the main connectivity alternatives for IoT devices. Since LoRaWAN was designed for low energy consumption, it is fundamental to understand its energy performance. In this paper, we study the impact of packet size on LoRaWAN device energy consumption per delivered data bit (EPB). By means of extensive simulations, we show that, when network performance is very high or very low, EPB decreases steadily with packet size; otherwise, EPB may show an “asymmetric U” shape as a function of packet size, with a minimum EPB value that is achieved for a medium packet size. We also provide detailed insights on the reasons that produce the observed behaviors.
2024-01-12T12:08:24Z
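The following back-of-the-envelope Python sketch illustrates the energy-per-delivered-bit (EPB) metric studied in the paper; the current draws, receive windows and airtime figures are rough assumptions rather than the simulation model used by the authors.

```python
# Minimal sketch of the energy-per-delivered-bit (EPB) metric: total device energy
# spent on a packet exchange divided by the application payload bits delivered.
# Current draws and timings are rough assumptions, not the paper's values.

def epb_joules_per_bit(payload_bytes, airtime_s, delivered,
                       tx_current_a=0.040, rx_window_s=0.2, rx_current_a=0.011,
                       voltage_v=3.3):
    energy_tx = voltage_v * tx_current_a * airtime_s          # uplink transmission
    energy_rx = voltage_v * rx_current_a * rx_window_s * 2    # RX1 + RX2 windows
    energy_total = energy_tx + energy_rx
    if not delivered:
        return float("inf")   # energy spent, no data delivered
    return energy_total / (payload_bytes * 8)

# Example: a 20-byte payload at SF7 (~60 ms airtime, assumed) vs. a 200-byte one (~370 ms).
print(epb_joules_per_bit(20, 0.060, True))
print(epb_joules_per_bit(200, 0.370, True))
```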
-
Design and validation of a dual-band circular polarization patch antenna and stripline combiner for the FSSCat mission
http://hdl.handle.net/2117/393641
Fernandez Capon, Lara Pilar; Muñoz Martín, Joan Francesc; Ruiz de Azua, Joan Adria; Calveras Augé, Anna M.; Camps Carmona, Adriano José
The FMPL-2 payload on board the 3Cat-5/A 6-unit CubeSat, part of the FSSCat CubeSat mission, combines a dual L-Band Microwave Radiometer and a Global Navigation Satellite System Reflectometer in one instrument, implemented in a Software Defined Radio. One of the design challenges of this payload was its nadir-looking antenna, which had to be directive (> 12 dB), dual-band at 1400–1427 MHz and 1575.42 MHz, left-hand circularly polarized, and subject to important envelope restrictions, notably a low profile. After a trade-off analysis, the best design solution proved to be an array of six elements, each of them a stacked dual-band patch antenna with a diagonal feed to create the circular polarization, plus a six-to-one stripline combiner. The design process of the elementary antennas first includes a theoretical analysis to obtain the approximate dimensions. Then, by means of numerical simulations, prototyping, and adjustment of the simulation results, the manufacturing errors and dielectric constant tolerances, to which patch antennas are very sensitive, can be characterized. A similar approach is taken with the combiner. This article includes the theoretical analysis, simulations, and prototype results, including the Flight Model assembly and characterization.
2023-09-19T07:48:34Z
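As context for the sizing step mentioned in the abstract, the sketch below evaluates the standard first-order (transmission-line) formulas for a rectangular microstrip patch at the centre of the 1400–1427 MHz band; the substrate permittivity and height are assumed values, and this single-layer estimate does not capture the stacked dual-band geometry of the flight design.

```python
# First-order rectangular microstrip patch sizing (standard textbook formulas),
# evaluated at the radiometer band centre. Substrate permittivity and height are
# assumed values; the flight design is a stacked dual-band patch, which this
# single-layer estimate does not capture.
from math import sqrt

c = 299_792_458.0        # speed of light, m/s
f_r = 1.4135e9           # design frequency, Hz (centre of 1400-1427 MHz band)
eps_r = 3.0              # assumed substrate relative permittivity
h = 1.6e-3               # assumed substrate height, m

W = c / (2 * f_r) * sqrt(2 / (eps_r + 1))                        # patch width
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / sqrt(1 + 12 * h / W)
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)
                  / ((eps_eff - 0.258) * (W / h + 0.8)))          # fringing extension
L = c / (2 * f_r * sqrt(eps_eff)) - 2 * dL                        # patch length

print(f"W = {W*1000:.1f} mm, L = {L*1000:.1f} mm (eps_eff = {eps_eff:.2f})")
```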
-
Deep learning TCP for mitigating NLoS impairments in 5G mmWave
http://hdl.handle.net/2117/391990
Poorzare, Reza; Calveras Augé, Anna M.
5G and beyond-5G are revolutionizing cellular and ubiquitous networks with new features and capabilities. The new millimeter-wave frequency band can provide high data rates for the new generations of mobile networks, but it suffers from non-line-of-sight (NLoS) conditions caused by obstacles, which produce packet drops that mislead TCP because the protocol interprets all drops as an indication of network congestion. The principal flaw of TCP in such networks is that the root cause of packet drops is not distinguishable to TCP, and the protocol takes it for granted that all losses are due to congestion. This paper presents a new TCP based on deep learning that can outperform other common TCPs in terms of throughput, RTT, and congestion window fluctuation. The primary contribution of deep learning is providing the ability to distinguish various conditions in the network. The simulation results revealed that the proposed protocol could outperform conventional TCPs such as Cubic, NewReno, HighSpeed, and BBR.
2023-07-24T10:32:40Z
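A minimal sketch of the general mechanism described above (consult a loss-cause classifier before reacting to a drop) is given below; the features, the stand-in classifier and the recovery policy are illustrative assumptions and not the protocol proposed in the paper.

```python
# Illustrative sketch: consult a (pre-trained) loss-cause classifier before reducing
# the congestion window, so losses attributed to NLoS blockage do not trigger the
# multiplicative decrease that congestion losses would. The features, the classifier
# and the recovery policy are assumptions for illustration only.

def on_packet_loss(cwnd, features, classifier):
    """Return the new congestion window after a loss event."""
    cause = classifier.predict([features])[0]   # e.g. 'congestion' or 'blockage'
    if cause == "congestion":
        return max(cwnd // 2, 1)                # conventional multiplicative decrease
    return cwnd                                 # NLoS/blockage loss: keep sending rate

class DummyClassifier:                          # stand-in for a trained deep model
    def predict(self, batch):
        rtt_gradient, queueing_delay, loss_burstiness = batch[0]
        return ["congestion" if queueing_delay > 0.7 else "blockage"]

print(on_packet_loss(80, (0.1, 0.2, 0.9), DummyClassifier()))   # -> 80 (blockage)
print(on_packet_loss(80, (0.8, 0.9, 0.1), DummyClassifier()))   # -> 40 (congestion)
```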
-
IPv6 over cross-technology communication with wake-up radio
http://hdl.handle.net/2117/391984
Aguilar Romero, Sergio; Vidal Ferré, Rafael; Gómez Montenegro, Carlos
A variety of wireless (and wired) technologies are being used to enable Internet of Things (IoT) device connectivity. Two examples of popular technologies in some crucial IoT domains (e.g., smart home, smart factories and smart cities, among others) are IEEE 802.15.4 and IEEE 802.11. However, IoT devices supporting different wireless technologies are not interoperable without a gateway. One solution to this problem is exploiting bidirectional Cross-Technology Communication with Wake-up Radio (WuR-CTC). Nevertheless, existing WuR-CTC approaches do not support IPv6, and therefore cannot offer full Internet protocol stack interoperability. For the first time to our knowledge, in this paper, we present the design, implementation and evaluation of an adaptation layer to provide IPv6 support over WuR-CTC, by leveraging the IETF Static Context Header Compression and fragmentation (SCHC) framework. Among other results, the experiments show that our solution allows transferring a 127-byte IPv6 packet from an IEEE 802.15.4 device to an IEEE 802.11 device, without a gateway, in 69 ms on average. Therefore, the designed solution supports latency-stringent applications in smart environments, where a human in the loop expects real-time interaction between devices.
© 2023 Elsevier. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
2023-07-24T09:59:12Z
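The following heavily simplified sketch illustrates the fragmentation idea in the spirit of SCHC: a 127-byte (compressed) IPv6 packet is split into tiles that fit a small link-layer payload. The 1-byte fragment header, tile size and frame capacity are assumptions for illustration and not the SCHC/WuR-CTC profile of the paper.

```python
# Heavily simplified fragmentation sketch in the spirit of SCHC: split a 127-byte
# (compressed) IPv6 packet into fixed-size tiles that fit a small link-layer payload.
# The 1-byte fragment header, tile size and frame capacity are illustrative
# assumptions, not the SCHC profile defined for WuR-CTC in the paper.

def fragment(packet: bytes, frame_payload: int = 16):
    tile = frame_payload - 1                 # reserve 1 assumed byte for a fragment header
    tiles = [packet[i:i + tile] for i in range(0, len(packet), tile)]
    frames = []
    for n, t in enumerate(tiles):
        last = (n == len(tiles) - 1)
        header = bytes([(0x80 if last else 0x00) | (n & 0x3F)])  # assumed: last-flag + counter
        frames.append(header + t)
    return frames

frames = fragment(bytes(127))
print(len(frames), "frames,", [len(f) for f in frames])
```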
-
Genetic algorithm-based grouping strategy for IEEE 802.11ah networks
http://hdl.handle.net/2117/387171
García Villegas, Eduard; López García, Alejandro; López Aguilera, M. Elena
The IEEE 802.11ah standard is intended to adapt the specifications of IEEE 802.11 to the Internet of Things (IoT) scenario. One of the main features of IEEE 802.11ah consists of the Restricted Access Window (RAW) mechanism, designed for scheduling transmissions of groups of stations within certain periods of time or windows. With an appropriate configuration, the RAW feature reduces contention and improves energy efficiency. However, the standard specification does not provide mechanisms for the optimal setting of the RAW parameters. Accordingly, this paper presents a grouping strategy based on a genetic algorithm (GA) for IEEE 802.11ah networks operating under the RAW mechanism and considering heterogeneous stations, that is, stations using different modulation and coding schemes (MCS). We define a fitness function from the combination of the predicted system throughput and fairness, and provide the tuning of the GA parameters to obtain the best result in a short time. The paper also includes a comparison of different alternatives with regard to the stages of the GA, i.e., parent selection, crossover, and mutation methods. As a proof of concept, the proposed GA-based RAW grouping is tested on a more constrained device, a Raspberry Pi 3B+, where the grouping method converges in around 5 s. The evaluation concludes with a comparison of the GA-based grouping strategy with other grouping approaches, showing that the proposed mechanism provides a good trade-off between throughput and fairness performance.
2023-05-08T12:21:54Z
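As a generic illustration of the approach, the sketch below evolves a station-to-group assignment with a simple genetic algorithm whose fitness combines a throughput proxy with Jain's fairness index; the chromosome encoding, throughput model and weighting are assumptions, not the fitness function defined in the paper.

```python
# Minimal, generic genetic-algorithm sketch for assigning stations to RAW groups.
# The chromosome encoding (one group index per station), the throughput proxy and the
# throughput/fairness weighting are illustrative assumptions, not the paper's model.
import random

N_STATIONS, N_GROUPS, GENERATIONS, POP = 32, 4, 200, 40
mcs_rate = [random.choice([1.0, 2.0, 4.0]) for _ in range(N_STATIONS)]  # assumed per-station rates

def fitness(chrom):
    group_load = [0.0] * N_GROUPS
    for sta, g in enumerate(chrom):
        group_load[g] += 1.0 / mcs_rate[sta]            # slower MCS consumes more airtime
    served = [1.0 / load if load else 0.0 for load in group_load]
    throughput = sum(served)
    jain = sum(served) ** 2 / (N_GROUPS * sum(s * s for s in served) or 1)
    return 0.7 * throughput + 0.3 * jain                # assumed weighting

def evolve():
    pop = [[random.randrange(N_GROUPS) for _ in range(N_STATIONS)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]                        # elitist parent selection
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_STATIONS)       # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                   # mutation: move one station
                child[random.randrange(N_STATIONS)] = random.randrange(N_GROUPS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best grouping:", best, "fitness:", round(fitness(best), 3))
```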
-
Preparing Wi-Fi 7 for healthcare Internet-of-Things
http://hdl.handle.net/2117/385437
Qadri, Yazdan Ahmad; Nain, Zulqar; Nauman, Ali; Musaddiq, Arslan; García Villegas, Eduard; Kim, Sung Won
The healthcare Internet of Things (H-IoT) is an interconnection of devices capable of sensing and transmitting information that conveys the status of an individual’s health. The continuous monitoring of an individual’s health for disease diagnosis and early detection is an important application of H-IoT. Ambient-assisted living (AAL) entails monitoring a patient’s health to ensure their well-being. However, ensuring a limit on transmission delays is an essential requirement of such monitoring systems. Uplink (UL) transmission using orthogonal frequency division multiple access (OFDMA) in wireless local area networks (WLANs) can, owing to its random nature, incur delays that may not be acceptable for delay-sensitive applications such as H-IoT. Therefore, we propose a UL OFDMA scheduler for the next Wireless Fidelity (Wi-Fi) standard, IEEE 802.11be, that complies with the latency requirements of healthcare applications. The scheduler allocates the channel resources for UL transmission taking into consideration the traffic class or access category. The results demonstrate that the proposed scheduler can achieve the required latency for H-IoT applications. Additionally, its performance in terms of fairness and throughput is superior to that of state-of-the-art schedulers.
2023-03-24T13:39:15Z
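A minimal sketch of access-category-aware uplink scheduling is shown below; the number of resource units, the station descriptors and the priority rule are illustrative assumptions rather than the scheduler specified in the paper or in IEEE 802.11be.

```python
# Minimal sketch of access-category-aware uplink resource-unit allocation: stations
# with the most urgent traffic class and the tightest remaining deadline are served
# first. The RU count, station descriptors and priority rule are illustrative
# assumptions, not the scheduler of the paper or of IEEE 802.11be.

AC_PRIORITY = {"VO": 0, "VI": 1, "BE": 2, "BK": 3}     # voice > video > best effort > background

def allocate_rus(stations, n_rus=9):
    """stations: list of dicts with 'id', 'ac' and 'deadline_ms'. Returns RU -> station id."""
    ordered = sorted(stations, key=lambda s: (AC_PRIORITY[s["ac"]], s["deadline_ms"]))
    return {ru: sta["id"] for ru, sta in enumerate(ordered[:n_rus])}

stations = [
    {"id": "ecg-sensor", "ac": "VO", "deadline_ms": 10},
    {"id": "fall-detector", "ac": "VI", "deadline_ms": 50},
    {"id": "room-thermostat", "ac": "BE", "deadline_ms": 5000},
]
print(allocate_rus(stations))
```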
-
Energy consumption model of SCHC packet fragmentation over Sigfox LPWAN
http://hdl.handle.net/2117/385436
Aguilar Romero, Sergio; Platis, Antonio; Vidal Ferré, Rafael; Gómez Montenegro, Carlos
The Internet Engineering Task Force (IETF) has standardized a new framework, called Static Context Header Compression and fragmentation (SCHC), which offers adaptation layer functionality designed to support IPv6 over Low Power Wide Area Networks (LPWANs). The IETF is currently profiling SCHC, and in particular its packet fragmentation and reassembly functionality, for its optimal use over certain LPWAN technologies. Considering the energy constraints of LPWAN devices, it is crucial to determine the energy performance of SCHC packet transfer. In this paper, we present a current and energy consumption model of SCHC packet transfer over Sigfox, a flagship LPWAN technology. The model, which is based on real hardware measurements, makes it possible to determine the impact of several parameters and fragment transmission strategies on the energy performance of SCHC packet transfer over Sigfox. Among other results, we have found that the lifetime of a device powered by a 2000 mAh battery, transmitting packets every 5 days, is 168 days for 2250-byte packets, while it increases to 1464 days for 77-byte packets.
2023-03-24T13:25:23Z
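The kind of lifetime estimate quoted in the abstract can be sketched as follows; the per-transfer charge and sleep current are assumed placeholders, not the measured values reported by the authors.

```python
# Back-of-the-envelope battery-lifetime estimate: average the per-transfer charge over
# the reporting interval, add the sleep current, and divide the battery capacity by the
# result. The per-transfer charge and sleep current below are assumed placeholders,
# not the measured values from the paper.

def lifetime_days(battery_mah=2000.0, charge_per_transfer_mah=5.0,
                  interval_days=5.0, sleep_current_ma=0.005):
    avg_current_ma = charge_per_transfer_mah / (interval_days * 24.0) + sleep_current_ma
    return battery_mah / avg_current_ma / 24.0

# A larger SCHC packet needs many Sigfox fragments, so its per-transfer charge is higher.
print(round(lifetime_days(charge_per_transfer_mah=60.0)))   # large packet, assumed charge
print(round(lifetime_days(charge_per_transfer_mah=2.0)))    # small packet, assumed charge
```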
-
Novel architecture for cellular IoT in future non-terrestrial networks: store and forward adaptations for enabling discontinuous feeder link operation
http://hdl.handle.net/2117/372704
Kellermann, Timo Nicolas; Pueyo Centelles, Roger; Camps-Mur, Daniel; Ferrús Ferré, Ramón Antonio; Guadalupi, Marco; Calveras Augé, Anna M.
The Internet of Things (IoT) paradigm has already progressed from an emerging technology to an incredibly fast-growing field. Defined as one of the three key services in the 5th Generation (5G), massive Machine Type Communications (mMTC) are intended to enable the widespread adoption of IoT services across the globe. Satellite-based Non-Terrestrial Networks (NTN) are crucial in providing connectivity with global coverage, including rural and offshore areas, which is fundamental for supporting important use cases in future networks. A rapidly growing market for IoT devices with mMTC applications using Narrowband IoT (NB-IoT) will represent a large share of user equipment (UE) in such areas. While standardization efforts for NTN are underway for forthcoming 3GPP releases, they focus on transparent payload architectures in which the satellite platform must be connected to a ground station gateway to provide satellite access services to IoT devices, thus requiring complex ground segment infrastructure in low Earth orbit (LEO) constellation deployments to achieve global coverage. In contrast, satellite network deployments targeting the delivery of delay-tolerant IoT applications using NB-IoT, which are a major mMTC use case, can benefit from architectures based on regenerative payloads in the satellite and support for Store and Forward (S&F) operation, where satellite access can remain operational even at times when the satellite is not connected to a ground station. In particular, such an approach would allow extending satellite service coverage to areas where satellites cannot be connected to ground stations (e.g., maritime or very remote areas lacking ground-station infrastructure), improving ground segment affordability by enabling operation with fewer ground stations, and allowing more robust operation of the satellite under intermittent feeder link operation. In this paper, we provide a high-level design of an extended 3GPP architecture featuring store and forward mechanisms for delay-tolerant IoT NTN applications that addresses the previous challenges, as well as a laboratory validation of said architecture for a specific use case.
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
2022-09-13T12:22:59Z
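As a toy illustration of the store-and-forward behaviour described above, the sketch below buffers uplink IoT data on the satellite while the feeder link is down and flushes it on the next ground-station contact; the data items and link events are placeholders, not part of the 3GPP architecture work in the paper.

```python
# Toy sketch of store-and-forward operation: the regenerative payload buffers NB-IoT
# uplink data while no feeder link is available and flushes the buffer on the next
# ground-station contact. Data items and link events are illustrative placeholders.
from collections import deque

class StoreAndForwardPayload:
    def __init__(self):
        self.buffer = deque()
        self.feeder_link_up = False

    def on_uplink(self, device_id, payload):
        if self.feeder_link_up:
            self.forward(device_id, payload)              # deliver immediately
        else:
            self.buffer.append((device_id, payload))      # store until next contact

    def on_ground_station_contact(self, up: bool):
        self.feeder_link_up = up
        while up and self.buffer:
            self.forward(*self.buffer.popleft())          # flush backlog on contact

    def forward(self, device_id, payload):
        print(f"forwarding {payload!r} from {device_id} to the ground segment")

sat = StoreAndForwardPayload()
sat.on_uplink("buoy-17", b"\x01\x02")                     # stored: no feeder link yet
sat.on_ground_station_contact(up=True)                    # backlog flushed on contact
```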