Journal articles
http://hdl.handle.net/2117/1256

Minimum error entropy estimation under contaminated Gaussian noise
http://hdl.handle.net/2117/402620
López Molina, Carlos Alejandro; Cabrera Estanyol, Ferran de; Riba Sagarra, Jaume
It is shown that Rényi's entropy of a Gaussian mixture with entropic index α∈(1,∞] is upper-bounded by the cluster with minimum variance. This basic idea leads to a clean worst-case formulation of the minimum error entropy principle in the context of linear multi-sensor fusion by using a largely contaminated Gaussian distribution to model sensor errors with outliers. The obtained entropic best linear unbiased estimator leads to an operational interpretation in terms of a precision/reliability trade-off, resonates closely with model-order selection methods, and provides a possible information-theoretic root to sparsity-promoting regularization.
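
For context, a standard closed-form result (not taken from the paper) that makes the role of the minimum-variance cluster plausible: the order-α Rényi differential entropy of a single Gaussian component is monotonically increasing in its variance, so the least-spread cluster carries the smallest component entropy.

```latex
% Order-alpha Renyi differential entropy of a Gaussian component N(mu, sigma^2), alpha > 1:
%   h_alpha(f) = (1/(1-alpha)) * ln( \int f^alpha(x) dx )
\[
  h_\alpha\!\left(\mathcal{N}(\mu,\sigma^2)\right)
  = \frac{1}{2}\ln\!\left(2\pi\sigma^2\right) + \frac{\ln\alpha}{2(\alpha-1)},
  \qquad \alpha \in (1,\infty),
\]
% which is monotonically increasing in sigma^2; as alpha -> infinity the second term
% vanishes and the min-entropy h_infty = (1/2) ln(2*pi*sigma^2) is recovered.
```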

Subspace leakage in conventional and dimensionally spread null-space communications
http://hdl.handle.net/2117/395091
Borràs Pino, Jordi; Vázquez Grau, Gregorio
This letter evaluates the impact of subspace leakage in conventional (single-dimension) and dimensionally spread null-space precoding schemes. This phenomenon arises when the null-space inference procedure lacks precision and suffers subspace detection errors. The analysis relies on the signal-to-interference-per-dimension ratio (SIDR) metric, which jointly measures the transmitted power efficiency and the interference-mitigation robustness of the adopted transmission scheme. Based on theoretical and numerical analyses of the SIDR, this letter quantifies the explicit SIDR performance gain of dimension-spreading-based null-space precoding schemes as an interference-mitigation strategy compared with conventional approaches.
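
A toy numpy sketch of the leakage effect, with an illustrative per-dimension power ratio that is an assumption of this note, not the letter's SIDR definition: a null-space precoder is built from an erroneous subspace estimate, and the power leaking into the truly occupied subspace is compared, per dimension, with the power radiated in the true null space.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 16          # signal-space dimension and occupied-subspace dimension (illustrative)

# True occupied subspace (columns of U) and an imperfect estimate of it.
U, _ = np.linalg.qr(rng.standard_normal((N, K)))
U_hat, _ = np.linalg.qr(U + 0.3 * rng.standard_normal((N, K)))   # subspace detection errors

# Null-space precoder built from the *estimated* occupied subspace.
P_null = np.eye(N) - U_hat @ U_hat.T          # projector onto the estimated null space
x = rng.standard_normal((N, 10000))           # unit-power signal before precoding
s = P_null @ x

useful_power = np.mean(np.sum((s - U @ (U.T @ s))**2, axis=0))   # power in the true null space
leaked_power = np.mean(np.sum((U.T @ s)**2, axis=0))             # power leaking into occupied dimensions
dims_used = N - K

ratio = (useful_power / dims_used) / (leaked_power / K)
print(f"useful {useful_power:.2f}, leaked {leaked_power:.2f}, "
      f"per-dimension ratio: {10*np.log10(ratio):.1f} dB")
```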

Prediction models using artificial intelligence and longitudinal data from electronic health records: a systematic methodological review
http://hdl.handle.net/2117/394222
Carrasco Ribelles, Lucía Amalia; Llanes Jurado, José; Gallego Moll, Carlos; Cabrera-Bean, Margarita; Monteagudo Zaragoza, Mònica; Violán Fors, Concepción; Zabaleta del Olmo, Edurne
Objective:
To describe and appraise the use of artificial intelligence (AI) techniques that can cope with longitudinal data from electronic health records (EHRs) to predict health-related outcomes.
Methods:
This review included studies in any language in which EHRs were at least one of the data sources, longitudinal data were collected, an AI technique capable of handling longitudinal data was used, and any health-related outcome was predicted. We searched MEDLINE, Scopus, Web of Science, and IEEE Xplore from inception to January 3, 2022. Information on the dataset, prediction task, data preprocessing, feature selection, method, validation, performance, and implementation was extracted and summarized using descriptive statistics. Risk of bias and completeness of reporting were assessed using short forms of PROBAST and TRIPOD, respectively.
Results:
Eighty-one studies were included. Follow-up time and the number of records per patient varied greatly, and most studies predicted disease development or the next event based on diagnoses and drug treatments. Architectures were generally based on recurrent neural network (RNN)-like layers, although combining different layers or using transformers has become more popular in recent years. About half of the included studies performed hyperparameter tuning and used attention mechanisms. Most performed a single train-test partition and therefore could not properly assess the variability of the model's performance. Reporting quality was poor, and a third of the studies were at high risk of bias.
Conclusions:
AI models are increasingly using longitudinal data. However, the heterogeneity in reporting methodology and results, and the lack of public EHR datasets and code sharing, complicate the possibility of replication.
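
As context for the Results point about single train-test partitions, a generic sketch (not tied to any reviewed study) of how repeated random splits expose the variability of a model's performance; the dataset, model, and metric are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)  # placeholder data

aucs = []
for seed in range(30):  # repeated random splits instead of a single partition
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed, stratify=y)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

print(f"AUC: mean {np.mean(aucs):.3f}, std {np.std(aucs):.3f} over {len(aucs)} splits")
```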

Optimizing access demand for mMTC traffic using neural networks
http://hdl.handle.net/2117/394068
Llobet Turró, Martí; Cabrera-Bean, Margarita; Vidal Manzano, José; Agustín de Dios, Adrián
Machine-type communications show unique spatial and temporal correlation properties that often lead to bursty access demand profiles. With the expected large-scale deployment of the Internet of Things (IoT), next-generation mobile networks should be redesigned to manage massive, highly synchronized arrivals of access requests by employing efficient access barring schemes. In this work, we first derive the analytical expression of the optimal Access Class Barring (ACB) parameter as standardized by the Third Generation Partnership Project (3GPP). Second, we predict the type and number of accessing devices from measurements acquired by the Base Station (BS) by employing Neural Networks (NNs). These estimates are used to effectively implement the optimal barring scheme, achieving performance results close to the theoretical bound.
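
A toy simulation of the access-barring idea (assumptions throughout, not the paper's derivation or neural-network predictor): it uses the widely cited rule of thumb that the barring factor should roughly equal the ratio of available preambles to the estimated number of contending devices, and counts the resulting collision-free accesses.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 54           # available random-access preambles (illustrative)
N_true = 300     # devices wanting to access in this slot
N_hat = 280      # estimate of N_true (the paper obtains it with a neural network; here it is just assumed)

p_acb = min(1.0, M / N_hat)   # heuristic barring factor; the paper derives the exact optimal expression

successes = []
for _ in range(2000):
    admitted = rng.random(N_true) < p_acb                # devices that pass the barring check
    preambles = rng.integers(0, M, size=admitted.sum())  # each admitted device picks a preamble
    counts = np.bincount(preambles, minlength=M)
    successes.append(np.sum(counts == 1))                # preambles chosen by exactly one device succeed

print(f"ACB factor {p_acb:.3f}; mean successful accesses per slot: {np.mean(successes):.1f} of {M}")
```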

Multiqubit time-varying quantum channels for NISQ-era superconducting quantum processors
http://hdl.handle.net/2117/393838
Etxezarreta Martínez, Josu; Fuentes Ugartemendia, Patricio; Martí i Oliu, Antonio de; García Frias, Javier; Rodríguez Fonollosa, Javier; Crespo Bofill, Pedro M.
Time-varying quantum channels (TVQCs) have been proposed as a model to include fluctuations of the relaxation (T1) and dephasing times (T2). In previous works, realizations of multiqubit TVQCs have been assumed to be equal for all the qubits of an error correction block, implying that the random variables that describe the fluctuations of T1 and T2 are block-to-block uncorrelated but qubit-wise perfectly correlated for the same block. In this article, we perform a correlation analysis of the fluctuations of the relaxation times of five multiqubit quantum processors. Our results show that it is reasonable to assume that the fluctuations of the relaxation and dephasing times of superconducting qubits are local to each of the qubits of the system. Based on these results, we discuss the multiqubit TVQCs when the fluctuations of the decoherence parameters for an error correction block are qubit-wise uncorrelated (as well as from block-to-block), a scenario we have named the fast time-varying quantum channel (FTVQC). Furthermore, we lower-bound the quantum capacity of general FTVQCs based on a quantity we refer to as the ergodic quantum capacity. Finally, we use numerical simulations to study the performance of quantum error correction codes when they operate over FTVQCs.
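
A minimal sketch of the kind of qubit-to-qubit correlation analysis described above, run on synthetic data (the study itself uses calibration records from five processors): per-qubit T1 time series are generated and the off-diagonal Pearson correlations are inspected.

```python
import numpy as np

rng = np.random.default_rng(2)
n_qubits, n_snapshots = 5, 200

# Synthetic T1 time series (microseconds): independent fluctuations around per-qubit means.
t1_means = rng.uniform(80, 120, size=n_qubits)
t1 = t1_means[:, None] + 10 * rng.standard_normal((n_qubits, n_snapshots))

corr = np.corrcoef(t1)                               # n_qubits x n_qubits Pearson correlation matrix
off_diag = corr[~np.eye(n_qubits, dtype=bool)]
print(np.round(corr, 2))
print(f"mean |off-diagonal correlation|: {np.mean(np.abs(off_diag)):.3f}  "
      "(small values support qubit-wise uncorrelated fluctuations)")
```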

Contribution of frailty to multimorbidity patterns and trajectories: Longitudinal dynamic cohort study of aging people
http://hdl.handle.net/2117/393823
Carrasco Ribelles, Lucía Amalia; Cabrera-Bean, Margarita; Danés Castells, Marc; Zabaleta del Olmo, Edurne; Roso Llorach, Albert; Violán Fors, Concepción
Background:
Multimorbidity and frailty are characteristics of aging that need individualized evaluation, and there is a 2-way causal relationship between them. Thus, considering frailty in analyses of multimorbidity is important for tailoring social and health care to the specific needs of older people.
Objective:
This study aimed to assess how the inclusion of frailty contributes to identifying and characterizing multimorbidity patterns in people aged 65 years or older.
Methods:
Longitudinal data were drawn from electronic health records through the SIDIAP (Sistema d’Informació pel Desenvolupament de la Investigació a l’Atenció Primària) primary care database for the population aged 65 years or older from 2010 to 2019 in Catalonia, Spain. Frailty and multimorbidity were measured annually using validated tools (eFRAGICAP, a cumulative deficit model; and Swedish National Study of Aging and Care in Kungsholmen [SNAC-K], respectively). Two sets of 11 multimorbidity patterns were obtained using fuzzy c-means. Both considered the chronic conditions of the participants. In addition, one set included age, and the other included frailty. Cox models were used to test their associations with death, nursing home admission, and home care need. Trajectories were defined as the evolution of the patterns over the follow-up period.
Results:
The study included 1,456,052 unique participants (mean follow-up of 7.0 years). Most patterns were similar in both sets in terms of the most prevalent conditions. However, the patterns that considered frailty were better for identifying the population whose main conditions imposed limitations on daily life, with a higher prevalence of frail individuals in patterns like chronic ulcers & peripheral vascular. This set also included a dementia-specific pattern and showed a better fit with the risk of nursing home admission and home care need. On the other hand, the risk of death had a better fit with the set of patterns that did not include frailty. The change in patterns when considering frailty also led to a change in trajectories. On average, participants were in 1.8 patterns during their follow-up, while 45.1% (656,778/1,456,052) remained in the same pattern.
Conclusions:
Our results suggest that frailty should be considered in addition to chronic diseases when studying multimorbidity patterns in older adults. Multimorbidity patterns and trajectories can help to identify patients with specific needs. The patterns that considered frailty were better for identifying the risk of certain age-related outcomes, such as nursing home admission or home care need, while those considering age were better for identifying the risk of death. Clinical and social intervention guidelines and resource planning can be tailored based on the prevalence of these patterns and trajectories.
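
For readers unfamiliar with the clustering step, a self-contained fuzzy c-means sketch on synthetic data (binary chronic-condition flags and a frailty score are simulated; this is not the SIDIAP pipeline, and the number of patterns is illustrative).

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: X is (n_samples, n_features); returns centers and memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                 # random fuzzy memberships summing to 1
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Synthetic cohort: 8 binary chronic-condition flags plus a frailty index in [0, 1].
rng = np.random.default_rng(3)
conditions = (rng.random((1000, 8)) < 0.2).astype(float)
frailty = rng.beta(2, 5, size=1000)[:, None]
X = np.hstack([conditions, frailty])

centers, U = fuzzy_cmeans(X, c=4)
print("pattern sizes (by highest membership):", np.bincount(U.argmax(axis=1), minlength=4))
print("mean frailty per pattern:", np.round(centers[:, -1], 2))
```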

On the estimation of Tsallis entropy and a novel information measure based on its properties
http://hdl.handle.net/2117/390767
Martí Espelt, Aniol; Cabrera Estanyol, Ferran de; Riba Sagarra, Jaume
This letter explores a plug-in estimator of second-order Tsallis entropy based on Kernel Density Estimation (KDE) and its implicit regularization process. First, it is shown that the expected value of the estimator corresponds to the entropy of an Additive White Gaussian Noise (AWGN) model. Then, we prove several relevant properties of the Tsallis entropy: it is monotonically non-decreasing under the addition of random variables, its derivative with respect to the Gaussian noise power is monotonically non-increasing, and it is concave in the additive noise power. From these properties, we derive an information metric that provides an alternative to the strategy of regularization.
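
A small sketch of a KDE plug-in estimate of second-order Tsallis entropy, using the standard identity that the squared Gaussian KDE integrates in closed form (bandwidth rule, sample size, and source are illustrative assumptions; this is not the letter's code).

```python
import numpy as np

def tsallis2_kde(x, h):
    """Plug-in estimate of T_2 = 1 - integral of f^2, with a Gaussian KDE of bandwidth h.
    The squared KDE integrates in closed form: a Gaussian of variance 2*h^2 at pairwise differences."""
    diffs = x[:, None] - x[None, :]
    pairwise = np.exp(-diffs**2 / (4 * h**2)) / np.sqrt(4 * np.pi * h**2)
    info_potential = pairwise.mean()          # estimate of the integral of f^2(x)
    return 1.0 - info_potential

rng = np.random.default_rng(4)
x = rng.standard_normal(2000)                  # illustrative source: unit-variance Gaussian
h = 1.06 * x.std() * x.size ** (-1 / 5)        # Silverman's rule of thumb (an assumption, not the paper's choice)

# For a Gaussian source, the estimate should be close to 1 - 1/(2*sqrt(pi*(sigma^2 + h^2))),
# i.e. the value under the AWGN-smoothed model mentioned in the abstract.
print(f"T2 estimate:      {tsallis2_kde(x, h):.4f}")
print(f"AWGN-model value: {1 - 1/(2*np.sqrt(np.pi*(1 + h**2))):.4f}")
```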

Context-aware lossless and lossy compression of radio frequency signals
http://hdl.handle.net/2117/387820
Martí Espelt, Aniol; Portell de Mora, Jordi; Riba Sagarra, Jaume; Mas Casals, Orestes Miquel
We propose an algorithm based on linear prediction that can perform both the lossless and near-lossless compression of RF signals. The proposed algorithm is coupled with two signal detection methods to determine the presence of relevant signals and apply varying levels of loss as needed. The first method uses spectrum sensing techniques, while the second one takes advantage of the error computed in each iteration of the Levinson–Durbin algorithm. These algorithms have been integrated as a new pre-processing stage into FAPEC, a data compressor first designed for space missions. We test the lossless algorithm using two different datasets. The first one was obtained from OPS-SAT, an ESA CubeSat, while the second one was obtained using an SDRplay RSPdx in Barcelona, Spain. The results show that our approach achieves compression ratios that are 23% better than gzip (on average) and very similar to those of FLAC, but at higher speeds. We also assess the performance of our signal detectors using the second dataset. We show that high ratios can be achieved thanks to the lossy compression of the segments without any relevant signal.
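
A compact sketch of the linear-prediction core described above: autocorrelation, the Levinson–Durbin recursion (whose per-order prediction error the second detector reuses), and the integer residual that a lossless entropy coder would then compress. The signal, order, and rounding are illustrative; this is not FAPEC's implementation.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations; return LPC coefficients and per-order prediction errors."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    errors = [r[0]]
    for m in range(1, order + 1):
        k = -np.dot(a[:m], r[m:0:-1]) / errors[-1]   # reflection coefficient
        a[:m + 1] += k * a[:m + 1][::-1]             # update prediction-error filter
        errors.append(errors[-1] * (1 - k * k))
    return a, errors

rng = np.random.default_rng(5)
n = 4096
t = np.arange(n)
x = np.round(1000 * np.sin(2 * np.pi * 0.01 * t) + 20 * rng.standard_normal(n))  # toy integer samples

order = 8
r = np.correlate(x, x, mode="full")[n - 1:n + order] / n   # biased autocorrelation estimates, lags 0..order
a, errors = levinson_durbin(r, order)

# Prediction residual (what the entropy coder would see); the predictor applies -a[1:] to past samples.
pred = np.zeros_like(x)
for k in range(1, order + 1):
    pred[k:] += -a[k] * x[:-k]
residual = np.round(x - pred).astype(int)

print("per-order prediction error:", np.round(errors, 1))
print(f"std of raw samples {x.std():.1f} vs std of residual {residual.std():.1f}")
```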

Regularized estimation of information via canonical correlation analysis on a finite-dimensional feature space
http://hdl.handle.net/2117/387226
Cabrera Estanyol, Ferran de; Riba Sagarra, Jaume
This paper aims to estimate the information between two random phenomena by using consolidated second-order statistics tools. The squared-loss mutual information, a surrogate of the Shannon mutual information, is chosen due to its property of being expressed as a second-order moment. We first review the rationale for i.i.d. discrete sources, which involves mapping the data onto the simplex space, and we highlight the links with other well-known related concepts in the literature based on local approximations of information-theoretic measures. Then, the problem is translated to analog sources by mapping the data onto the characteristic space, focusing on the adaptability between the discrete and the analog case and its limitations. The proposed approach gains interpretability and scalability for its use on large data sets, providing a unified rationale for the free regularization parameters. Moreover, the structure of the proposed mapping allows resorting to Szegö’s theorem to reduce the complexity for high dimensional mappings, exhibiting a strong duality with spectral analysis. The performance of the developed estimators is analyzed using Gaussian mixtures.
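
For the discrete i.i.d. case mentioned first, a tiny sketch of the squared-loss mutual information computed directly from a joint pmf (the convention without a 1/2 factor is assumed here; the paper's estimator instead works from samples via the feature mapping). The second form shows the second-order-moment view: the same quantity as a sum of squared canonical correlations between the one-hot (simplex) feature maps.

```python
import numpy as np

# Illustrative joint pmf of two discrete variables (rows: X, columns: Y).
P = np.array([[0.20, 0.05, 0.05],
              [0.05, 0.25, 0.05],
              [0.05, 0.05, 0.25]])
px = P.sum(axis=1)
py = P.sum(axis=0)

# Squared-loss mutual information as a chi-square-type second-order moment of the joint/product ratio.
smi_direct = np.sum((P - np.outer(px, py))**2 / np.outer(px, py))

# Equivalent spectral form: squared Frobenius norm of the normalized, centered joint matrix,
# i.e. the sum of squared canonical correlations between the one-hot encodings of X and Y.
B = np.diag(px**-0.5) @ (P - np.outer(px, py)) @ np.diag(py**-0.5)
smi_spectral = np.sum(np.linalg.svd(B, compute_uv=False)**2)

print(f"SMI (direct)  : {smi_direct:.6f}")
print(f"SMI (spectral): {smi_spectral:.6f}")
```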

Interference mitigation in feedforward opportunistic communications
http://hdl.handle.net/2117/386760
Borràs Pino, Jordi; Vázquez Grau, Gregorio
This paper deals with scenario-aware, uncoordinated, and distributed signaling techniques in the context of feedforward opportunistic communications, that is, when the opportunistic transmitting node does not cooperate with any other node in a heterogeneous communication context. In this signaling technique, each network node individually follows a transmission strategy based on the locally sensed occupied and unused physical-layer network resources to minimize the induced interference onto other coexisting networks, taking into account the impact of the sensing errors and the locality of the sensing information. The paper identifies and characterizes critical invariance properties of the transmitted pulse shaping waveforms that guarantee the detectability of the feedforward transmitted signal by the uncoordinated receiving nodes, irrespective of the sensing signal space basis. The paper also shows that, under mild operating conditions, the proposed transmission scheme asymptotically defines efficient alternatives in the frequency domain, such as the circulant-shaping TDMA (CS-TDMA) modulation, and all of them admit a direct adaptation to frequency-selective channels. Numerical evaluation of the proposed schemes validates the provided theoretical models.
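
A bare-bones numpy illustration of the null-space transmission idea discussed above (dimensions, sensing model, and signals are assumptions; the paper's pulse-shaping invariance analysis and CS-TDMA construction are not reproduced): the opportunistic node estimates the occupied subspace from sensed samples and precodes onto its orthogonal complement.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 32                      # physical-layer signal-space dimension (illustrative)
K = 6                       # dimensions occupied by coexisting users

# Sensed samples of the coexisting activity: columns live in a K-dimensional subspace plus noise.
H = rng.standard_normal((N, K))
Y = H @ rng.standard_normal((K, 500)) + 0.05 * rng.standard_normal((N, 500))

# Estimate the occupied subspace from the sample covariance and keep its dominant eigenvectors.
eigvals, eigvecs = np.linalg.eigh(Y @ Y.T / Y.shape[1])
null_basis = eigvecs[:, :-K]                # weakest N-K directions: the sensed null space (K assumed known)

# Opportunistic transmission confined to the sensed null space.
symbols = rng.standard_normal((N - K, 1000))
tx = null_basis @ symbols

U_true, _ = np.linalg.qr(H)                 # true occupied subspace, used here for evaluation only
leakage = np.linalg.norm(U_true.T @ tx) ** 2 / np.linalg.norm(tx) ** 2
print(f"fraction of transmit power leaking into the truly occupied subspace: {leakage:.2e}")
```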