DSpace Collection:
http://hdl.handle.net/2117/3647
Feed date: Fri, 18 Apr 2014 08:49:11 GMT
Contact: webmaster.bupc@upc.edu
Universitat Politècnica de Catalunya. Servei de Biblioteques i Documentació
http://hdl.handle.net/2117/21393
Title: Differential scan-path: A novel solution for secure design-for-testability
Authors: Manich Bou, Salvador; S. Wamser, Markus; M. Guillen, Oscar; Sigl, Georg
Abstract: In this paper, we present a new scan-path structure for improving the security of systems including scan paths, which normally introduce a security-critical information leak channel into a design. Our structure, named differential scan path (DiSP), divides the internal state of the scan path into two sections. During the shift-out operation, only the subtraction of the two sections is provided. Inferring the internal state from this subtraction requires guesswork that increases exponentially with scan-path length, while the resulting fault coverage is only marginally altered. Subtraction does not preserve parity, thus avoiding attacks that use parity information. The structure is simple, needs little area and does not require unlocking keys. By implementing the DiSP in an elliptic curve cryptographic coprocessor, we demonstrate how easily it can be integrated into existing design tools. Simulations show that test effectiveness is preserved and that the internal state is effectively hidden.
Keywords: security, testability, scan path, attack, BILBO
Date: Wed, 29 Jan 2014 10:44:33 GMT
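The hiding property described in the abstract can be illustrated with a small sketch (our own toy model, not the authors' implementation): if the scan state is split into two n-bit sections and only their difference modulo 2^n is observable, every output value has exactly 2^n preimages, so the attacker's guesswork grows exponentially with scan-path length, and the output's parity is unrelated to the state's parity.

```python
# Illustrative sketch of the differential scan-path (DiSP) idea -- a toy
# model, not the authors' implementation. The internal state is split into
# two n-bit sections (a, b); only (a - b) mod 2**n is shifted out.
from collections import Counter

def disp_output(state: int, n: int) -> int:
    """Return the observable shift-out value for a 2n-bit internal state."""
    a = state >> n                  # upper section
    b = state & ((1 << n) - 1)      # lower section
    return (a - b) % (1 << n)       # the subtraction is all an attacker sees

n = 4
# Count how many internal states map to each observable output value.
preimages = Counter(disp_output(s, n) for s in range(1 << (2 * n)))

# Every output is produced by exactly 2**n internal states, so guesswork
# grows exponentially with the scan-path length.
assert all(count == 1 << n for count in preimages.values())

# Subtraction does not preserve parity: two states of equal parity can give
# outputs of different parity, defeating parity-based attacks.
s1, s2 = 0b0001_0000, 0b0000_0001   # both states have odd parity
print(disp_output(s1, n), disp_output(s2, n))   # outputs 1 (odd), 15 (even)
```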
http://hdl.handle.net/2117/21366
Title: Improving the security of scan path test using differential chains
Authors: Manich Bou, Salvador; Wamser, Markus S.; Sigl, Georg
Abstract: In this paper we present a new scan-path structure for improving the security of systems including a scan path, which normally introduces a security-critical information channel into a design. The structure, named differential scan path (DiSP), divides the internal state of the scan path into two sections. During the shift-out operation, only the subtraction of the two sections is provided. Discovering the internal state from this subtraction requires guesswork that increases exponentially with scan-path length. Subtraction does not preserve parity, a property sometimes used during attacks. The output subtraction cannot be reversed, and hence it is not possible to restore the internal state of the chip from the output. The structure is simple, requires little area and no unlocking keys.
Keywords: security, testability, scan path, attack, smartcard, BILBO
Date: Mon, 27 Jan 2014 10:53:51 GMT
http://hdl.handle.net/2117/21136
Title: Information Leakage Reduction at the Scan-Path Output
Authors: Manich Bou, Salvador; S. Wamser, Markus; Sigl, Georg
Abstract: In this paper we present a new scan-path structure for improving the security of systems including a scan path, which normally introduces a security-critical information channel into a design. The structure, named differential scan path (DiSP), divides the internal state of the scan path into two sections. During the shift-out operation, only the subtraction of the two sections is provided. Discovering the internal state from this subtraction requires guesswork that increases exponentially with scan-path length. Subtraction does not preserve parity, a property sometimes used during attacks. The output subtraction cannot be reversed, and hence it is not possible to restore the internal state of the chip from the output. The structure is simple, requires little area and no unlocking keys.
Keywords: security, testability, scan path, attack, smart-card, BILBO
Date: Thu, 02 Jan 2014 16:38:55 GMT
http://hdl.handle.net/2117/20632
Title: Analog circuit test based on a digital signature
Authors: Gómez Pau, Álvaro; Sanahuja Moliner, Ricard; Balado Suárez, Luz María; Figueras Pàmies, Joan
Abstract: Production verification of analog circuit specifications is a challenging task requiring expensive test equipment and time-consuming procedures. This paper presents a method for low-cost on-chip parameter verification based on the analysis of a digital signature. A 65 nm CMOS on-chip monitor is proposed and validated in practice. The monitor composes two signals (x(t), y(t)) and divides the X-Y plane with nonlinear boundaries in order to generate a digital code for every analog (x, y) location. A digital signature is obtained using the digital code and its time duration. A metric defining a discrepancy factor is used to verify circuit parameters. The method is applied to detect possible deviations in the natural frequency of a Biquad filter. Simulated and experimental results show the possibilities of the proposal.
Date: Fri, 15 Nov 2013 15:47:18 GMT
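The signature scheme the abstract describes can be sketched as follows; the region boundaries, the run-length signature encoding and the discrepancy metric below are our own illustrative stand-ins, not the 65 nm monitor's actual design: each (x, y) sample of the composed signals is mapped to a digital code, and the signature pairs each code with its time duration in samples.

```python
# Sketch of the digital-signature idea: map each analog (x, y) location to
# a digital code via nonlinear boundaries, then run-length encode the code
# sequence into (code, duration) pairs. Boundaries and metric are
# illustrative assumptions, not the paper's.
import math

def region_code(x: float, y: float) -> int:
    """2-bit code from a circular (nonlinear) boundary and the sign of y."""
    inside = math.hypot(x, y) < 1.0      # nonlinear boundary: unit circle
    upper = y >= 0.0
    return (inside << 1) | upper

def signature(samples):
    """Run-length encode the code sequence: list of (code, duration)."""
    sig = []
    for x, y in samples:
        c = region_code(x, y)
        if sig and sig[-1][0] == c:
            sig[-1] = (c, sig[-1][1] + 1)
        else:
            sig.append((c, 1))
    return sig

def discrepancy(sig_a, sig_b):
    """Toy discrepancy factor: 0 if signatures match, larger otherwise."""
    if [c for c, _ in sig_a] != [c for c, _ in sig_b]:
        return float("inf")              # different code sequences
    return sum(abs(da - db) for (_, da), (_, db) in zip(sig_a, sig_b))

# A Lissajous-like composition of two signals, sampled over one period.
ts = [i / 100 for i in range(100)]
golden = signature((math.sin(2 * math.pi * t),
                    0.8 * math.cos(2 * math.pi * t)) for t in ts)
print(golden[:3], discrepancy(golden, golden))
```

A candidate circuit would be accepted when the discrepancy between its signature and the golden one stays below a trained threshold.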
http://hdl.handle.net/2117/20505
Title: BIST Architecture to Detect Defects in TSVs During Pre-Bond Testing
Authors: Arumi Delgado, Daniel; Rodríguez Montañés, Rosa; Figueras Pàmies, Joan
Abstract: Through-silicon vias (TSVs) are critical elements in three-dimensional integrated circuits (3-D ICs). The detection of defective TSVs at the earliest process step is of major concern. Hence, testing TSVs is usually done at different stages of the fabrication process. In this context, this work proposes a simple pre-bond BIST architecture to improve the detection of hard and weak defects.
Keywords: built-in self-test, integrated circuit testing, three-dimensional integrated circuits
Date: Thu, 31 Oct 2013 11:34:26 GMT
http://hdl.handle.net/2117/20485
Title: M-S test based on specification validation using octrees in the measure space
Authors: Gómez Pau, Álvaro; Balado Suárez, Luz María; Figueras Pàmies, Joan
Abstract: Testing mixed-signal (M-S) circuits is a difficult task demanding a high amount of resources. To overcome these drawbacks, indirect testing methods have been adopted as an efficient solution to perform specification-based tests using easy-to-measure metrics. In this work, a testing technique using octrees in the measure space is presented. Octrees have been used in computer graphics with successful results for rendering, image processing and space clustering applications. In this paper they are used to encode the test acceptance region with arbitrary precision after a statistical training phase. Such a representation allows an efficient way to test a candidate circuit in terms of test application time. The method is applied to test a Biquad filter with encouraging results. Test escapes and test yield loss caused by parametric variations have been estimated.
Keywords: Lissajous compositions, mixed-signal test, octrees, quadtrees, test escapes, test metrics, test yield loss
Date: Mon, 28 Oct 2013 17:23:27 GMT
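The encoding idea can be sketched with a 2-D quadtree standing in for the octree (the subdivision logic is the same one dimension down); the "trained" region below and all names are our illustrative assumptions, not the paper's: cells that are homogeneously pass or fail become leaves, mixed cells are subdivided, and testing a candidate circuit reduces to a cheap point lookup.

```python
# Illustrative quadtree encoding of a test acceptance region (2-D analogue
# of the paper's octrees; region, names and data are our assumptions).

def build(cell, depth, region, max_depth=4):
    """Recursively classify a square cell as 'pass'/'fail' or split it.
    `cell` is (x0, y0, size); `region(x, y)` is the trained classifier."""
    x0, y0, s = cell
    # Sample the cell's corners and centre with the trained classifier.
    pts = [(x0, y0), (x0 + s, y0), (x0, y0 + s),
           (x0 + s, y0 + s), (x0 + s / 2, y0 + s / 2)]
    labels = {region(x, y) for x, y in pts}
    if len(labels) == 1 or depth == max_depth:
        return labels.pop()              # homogeneous (or precision limit)
    h = s / 2                            # heterogeneous: split in four
    return [build((x0 + dx, y0 + dy, h), depth + 1, region, max_depth)
            for dx in (0, h) for dy in (0, h)]

def accept(tree, x, y, cell=(0.0, 0.0, 1.0)):
    """Look up a candidate circuit's measured point in the encoded region."""
    if not isinstance(tree, list):
        return tree == "pass"
    x0, y0, s = cell
    h = s / 2
    ix, iy = int(x >= x0 + h), int(y >= y0 + h)
    return accept(tree[ix * 2 + iy], x, y, (x0 + ix * h, y0 + iy * h, h))

# "Trained" acceptance region: a disc of radius 0.4 around (0.5, 0.5).
region = lambda x, y: "pass" if (x - .5) ** 2 + (y - .5) ** 2 < .16 else "fail"
tree = build((0.0, 0.0, 1.0), 0, region)
print(accept(tree, 0.5, 0.5), accept(tree, 0.05, 0.05))
```

Test application time is then bounded by the tree depth, regardless of how expensive the original specification measurements were during training.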
http://hdl.handle.net/2117/20065
Title: Two new algorithms to compute steady-state bounds for Markov models with slow forward and fast backward transitions
Authors: Carrasco López, Juan Antonio; Calderón, A; Escribà, J
Abstract: Two new algorithms are proposed for the computation of bounds for the steady-state reward rate of irreducible finite Markov models with slow forward and fast backward transitions. The algorithms use detailed knowledge of the model in a subset of generated states G and partial information about the model in the non-generated portion U of the state space. U is assumed partitioned into subsets U_k, 1 ≤ k ≤ N, with a "nearest neighbor" structure. The algorithms involve the solution of, respectively, |M| + 2 and 4 linear systems of size |G|, where M is the set of values of k corresponding to the subsets U_k through which the model can jump from G to U. Previously proposed algorithms for the same type of models required the solution of |S| linear systems of size |G| + N, where S is the subset of G through which the model can enter G from U, to achieve the same bounds as our algorithms, or gave looser bounds if state cloning techniques were used to reduce the number of solved linear systems. An availability model with system-state-dependent repair rates is used to illustrate the application and performance of the algorithms.
Date: Thu, 01 Aug 2013 10:38:01 GMT
http://hdl.handle.net/2117/20064
Title: Synthesis of IDDQ-Testable Circuits: Integrating Built-in Current Sensors
Authors: Wunderlich, H J; Herzog, M; Figueras Pàmies, Joan; Carrasco López, Juan Antonio; Calderón, A
Abstract: "On-chip" I_{DDQ} testing by the incorporation of built-in current (BIC) sensors has some advantages over "off-chip" techniques. However, the integration of sensors poses analog design problems which are hard for a digital designer to solve. The automatic incorporation of the sensors using parameterized BIC cells could be a promising alternative. The work reported here identifies partitioning criteria to guide the synthesis of I_{DDQ}-testable circuits. The circuit must be partitioned such that the defective I_{DDQ} is observable and the power supply voltage perturbation is within specified limits. In addition to these constraints, cost criteria are also considered: circuit extra delay, area overhead of the BIC sensors, connectivity costs of the test circuitry, and the test application time. The parameters are estimated based on logical as well as electrical-level information of the target cell library to be used in the technology mapping phase of the synthesis process. The resulting cost function is optimized by an evolution-based algorithm. When run over large benchmark circuits, our method gives significantly better results than those obtained using simpler and less comprehensive partitioning methods.
Date: Thu, 01 Aug 2013 10:33:44 GMT
http://hdl.handle.net/2117/20063
Title: Hierarchical object-oriented modeling of fault-tolerant computer systems
Authors: Carrasco López, Juan Antonio
Abstract: A hierarchical, object-oriented modeling language for the specification of dependability models for complex fault-tolerant computer systems is overviewed. The language incorporates the hierarchical notions of cluster, operational mode and configuration, and borrows from object-oriented programming the concepts of class, parameterization, and instantiation. These features together result in a highly expressive environment allowing the concise specification of sophisticated dependability models for complex systems. In addition, the language supports the declaration of symmetries that systems may exhibit at levels higher than the component level. These symmetries can be used to automatically generate lumped state-level models of significantly reduced size in relation to the state-level models which would be generated from a flat, component-level description of the system.
Date: Thu, 01 Aug 2013 10:26:18 GMT
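The payoff of declaring such symmetries can be sketched with a standard toy example (ours, with assumed rates, not taken from the language itself): N identical components, each up or down, give 2^N component-level states, but exchange symmetry lumps them into the N + 1 states that only count how many components are down.

```python
# Toy illustration of symmetry-based lumping: N interchangeable components
# reduce a 2**N flat state space to N + 1 lumped states. Rates are assumed.
from itertools import product
from math import comb

N = 10

# Flat, component-level state space: one bit per component.
flat_states = list(product((0, 1), repeat=N))      # 2**N states

# Lumped state space after declaring the components interchangeable:
# state k = "k components are down".
lumped_states = list(range(N + 1))                 # N + 1 states

# Each lumped state k aggregates C(N, k) symmetric flat states.
assert sum(comb(N, k) for k in lumped_states) == len(flat_states)

# Lumped CTMC generator: k -> k+1 at rate (N-k)*lam (one more failure),
# k -> k-1 at rate k*mu (one repair). lam and mu are made-up rates.
lam, mu = 1e-3, 0.5
Q = [[0.0] * (N + 1) for _ in range(N + 1)]
for k in range(N + 1):
    if k < N:
        Q[k][k + 1] = (N - k) * lam
    if k > 0:
        Q[k][k - 1] = k * mu
    Q[k][k] = -sum(Q[k])

print(len(flat_states), "->", len(lumped_states))  # 1024 -> 11
```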
http://hdl.handle.net/2117/20062
Title: Failure distance based bounds for steady-state availability without the knowledge of minimal cuts
Authors: Suñé Socías, Víctor Manuel; Carrasco López, Juan Antonio
Abstract: We propose an algorithm to compute bounds for the steady-state unavailability using continuous-time Markov chains, which is based on the failure distance concept. The algorithm incrementally generates a subset of the state space until the bounds reach the specified tightness. In contrast with a previous algorithm also based on the failure distance concept, the proposed algorithm uses lower bounds for failure distances which are computed on the fault tree of the system, and does not require knowledge of the minimal cuts. This is advantageous when the number of minimal cuts is large or their computation is time-consuming.
Date: Thu, 01 Aug 2013 10:20:27 GMT
http://hdl.handle.net/2117/20061
Title: Efficient transient simulation of failure/repair markovian models
Authors: Carrasco López, Juan Antonio
Abstract: Simulation methods have recently been developed for the solution of the extremely large Markovian dependability models which result from complex fault-tolerant computer systems. This paper presents efficient simulation methods for the estimation of transient reliability/availability metrics for repairable fault-tolerant computer systems, which combine estimator decomposition techniques with an efficient importance sampling technique recently developed. Comparison with simulation methods previously proposed for the same type of metrics and models shows that the methods proposed here are orders of magnitude faster.
Date: Thu, 01 Aug 2013 10:08:37 GMT
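The importance-sampling principle behind such methods can be illustrated with a deliberately simple stand-in (a discrete-time toy with made-up numbers, not the paper's estimators): failures are made common under a biased sampling distribution, and each path is reweighted by its likelihood ratio so the estimator stays unbiased.

```python
# Toy importance-sampling (failure biasing) illustration: estimate the
# probability that a chain fails within n steps when the per-step failure
# probability is rare. All numbers are illustrative assumptions.
import random

p_fail, p_fail_biased = 1e-3, 0.3      # true vs. biased failure probability
n_steps, n_paths = 10, 20_000
random.seed(1)

total = 0.0
for _ in range(n_paths):
    weight, failed = 1.0, False
    for _ in range(n_steps):
        if random.random() < p_fail_biased:       # sample under the bias
            weight *= p_fail / p_fail_biased      # likelihood-ratio factor
            failed = True
            break
        weight *= (1 - p_fail) / (1 - p_fail_biased)
    if failed:
        total += weight                 # unbiased contribution of this path

estimate = total / n_paths
exact = 1 - (1 - p_fail) ** n_steps    # closed form for this toy chain
print(estimate, exact)
```

Crude Monte Carlo would see almost no failures at this rarity; the biased sampler observes them on nearly every path while the weights keep the estimate centred on the true value.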
http://hdl.handle.net/2117/20060
Title: Computation of absorption probability distributions of continuous-time Markov chains using regenerative randomization
Authors: Carrasco López, Juan Antonio; Calderón, A
Abstract: Randomization is a popular method for the transient solution of continuous-time Markov models. Its primary advantages over other methods (i.e., ODE solvers) are robustness and ease of implementation. It is, however, well known that the performance of the method deteriorates with the "stiffness" of the model: the number of required steps to solve the model up to time t tends to Λt for Λt → ∞, where Λ is the maximum output rate. For measures like the unreliability, Λt can be very large for the t of interest, making the randomization method very inefficient. In this paper we consider such measures and propose a new solution method, called regenerative randomization, which exploits the regenerative structure of the model and can be far more efficient. Regarding the number of steps required in regenerative randomization, we prove that: 1) it is smaller than the number of steps required in standard randomization when the initial distribution is concentrated in a single state; 2) for Λt → ∞, it is upper bounded by a function O(log(Λt/ε)), where ε is the desired approximation error bound. Using a reliability example, we analyze the performance and stability of the method.
Date: Thu, 01 Aug 2013 10:02:34 GMT
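Standard randomization (uniformization), the baseline whose roughly Λt step count the paper improves on, can be sketched on a two-state failure/repair CTMC with assumed rates: the transient distribution at time t is a Poisson-weighted sum over steps of the uniformized DTMC, truncated once the accumulated Poisson mass reaches 1 − ε.

```python
# Sketch of standard randomization (uniformization) on a toy two-state
# CTMC. Rates are illustrative assumptions; the step count grows ~ Lam*t.
import math

lam, mu = 2.0, 5.0                      # failure / repair rates (assumed)
Q = [[-lam, lam], [mu, -mu]]            # CTMC generator
Lam = max(-Q[i][i] for i in range(2))   # uniformization rate Lambda
P = [[Q[i][j] / Lam + (i == j) for j in range(2)] for i in range(2)]

def transient(pi0, t, eps=1e-10):
    """pi(t) = sum_k e^{-Lam t} (Lam t)^k / k! * pi0 P^k, truncated at eps."""
    pi, result = list(pi0), [0.0, 0.0]
    w, acc, k = math.exp(-Lam * t), 0.0, 0   # w = Poisson(Lam t) pmf at k
    while acc < 1.0 - eps:
        result = [result[j] + w * pi[j] for j in range(2)]
        acc += w
        k += 1
        w *= Lam * t / k                     # Poisson recurrence
        pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]
    return result, k                         # k grows ~ Lam * t

pi_t, steps = transient([1.0, 0.0], t=10.0)
print(pi_t, steps)    # pi(t) is close to the stationary [mu, lam]/(lam+mu)
```

Here Λt = 50 already forces on the order of a hundred steps; for stiff reliability models Λt can be orders of magnitude larger, which is the inefficiency regenerative randomization targets.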
http://hdl.handle.net/2117/20057
Title: Automated construction of compound Markov chains from generalized stochastic high-level Petri nets
Authors: Carrasco López, Juan Antonio
Abstract: A new type of Petri net, Generalized Stochastic High-Level Petri Nets (GSHLPNs), collecting the qualities of GSPNs and SHLPNs, is presented. The automated construction of compound continuous-time Markov chains (CTMCs) from GSHLPNs is also considered. A formalism for the description of compound markings allowing a symbolic firing of the net to obtain a compound CTMC with correct state grouping is derived. The construction of the compound CTMC requires an algorithm to test the equivalence of compound markings. It is shown that, in the general case and for a bounded number of rotation groups, the problem is polynomially equivalent to GRAPH ISOMORPHISM, a problem whose classification in the NP world is currently open.
Date: Thu, 01 Aug 2013 09:53:38 GMT
http://hdl.handle.net/2117/20055
Title: A method for the computation of reliability bounds for non-repairable fault-tolerant systems
Authors: Suñé Socías, Víctor Manuel; Carrasco López, Juan Antonio
Abstract: Realistic modeling of fault-tolerant systems requires taking into account phenomena such as the dependence of component failure rates and coverage parameters on the operational configuration of the system, which cannot be properly captured using combinatorial techniques. Such dependencies can be modeled in detail using continuous-time Markov chains (CTMCs). However, the use of CTMC models is limited by the well-known state space explosion problem. In this paper we develop a method for the computation of bounds for the reliability of non-repairable fault-tolerant systems which requires the generation of only a subset of states. The tightness of the bounds increases as more detailed states are generated. The method uses the failure distance concept and is illustrated using an example of a quite complex fault-tolerant system whose failure behavior has the above-mentioned types of dependencies.
Date: Thu, 01 Aug 2013 08:08:02 GMT
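The bounding principle, generating only a subset of states while still obtaining guaranteed two-sided bounds, can be illustrated on a discrete-time toy chain with made-up numbers (not the paper's CTMC method or its failure-distance machinery): probability mass that leaves the generated subset G is counted as failed for the upper bound on unreliability and as surviving for the lower bound, so the bounds tighten as G grows.

```python
# Toy illustration of bounding via partial state-space generation. The
# chain, its probabilities and the state numbering are our assumptions.

# Full chain (for reference): states 0..4, state 4 = system failure.
P = {0: {0: 0.90, 1: 0.10},
     1: {0: 0.50, 1: 0.30, 2: 0.20},
     2: {1: 0.40, 2: 0.30, 3: 0.30},
     3: {2: 0.40, 3: 0.30, 4: 0.30},
     4: {4: 1.0}}

def unreliability_bounds(G, n_steps):
    """Bounds on P(reach state 4 within n_steps), exploring only states G."""
    mass = {s: 0.0 for s in G}
    mass[0] = 1.0
    failed, escaped = 0.0, 0.0          # mass absorbed at 4 / leaving G
    for _ in range(n_steps):
        nxt = {s: 0.0 for s in G}
        for s in G:
            for t, p in P[s].items():
                if t == 4:
                    failed += mass[s] * p      # definitely failed
                elif t in G:
                    nxt[t] += mass[s] * p      # still tracked exactly
                else:
                    escaped += mass[s] * p     # unknown: failed or not
        mass = nxt
    return failed, failed + escaped            # lower, upper bound

for G in ({0, 1}, {0, 1, 2}, {0, 1, 2, 3}):
    print(sorted(G), unreliability_bounds(G, 20))
```

With all non-failed states generated the two bounds coincide; with fewer states they bracket the exact unreliability more loosely.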
http://hdl.handle.net/2117/20054
Title: A combinatorial method for the evaluation of yield of fault-tolerant systems-on-chip
Authors: Suñé Socías, Víctor Manuel; Rodríguez Montañés, Rosa; Carrasco López, Juan Antonio; Munteanu, D-P
Abstract: In this paper we develop a combinatorial method for the evaluation of the yield of fault-tolerant systems-on-chip. The method assumes that defects are produced according to a model in which defects are lethal and affect given components of the system following a distribution common to all defects. The distribution of the number of defects is arbitrary. The method is based on the formulation of the yield as 1 minus the probability that a given boolean function with multiple-valued variables has value 1. That probability is computed by analyzing a ROMDD (reduced ordered multiple-valued decision diagram) representation of the function. For efficiency reasons, we first build a coded ROBDD (reduced ordered binary decision diagram) representation of the function and then transform that coded ROBDD into the ROMDD required by the method. We present numerical experiments showing that the method is able to cope with quite large systems in moderate CPU times.
Date: Thu, 01 Aug 2013 08:00:55 GMT
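The core quantity, the probability that a boolean function has value 1, can be sketched in the binary special case (the paper uses multiple-valued variables and a ROMDD; the brute-force Shannon expansion below does not share equal cofactors the way a decision diagram would, and the failure function and probabilities are our illustrative assumptions):

```python
# Sketch of computing yield = 1 - P(f = 1) by Shannon expansion over
# independent defect variables. Binary special case with assumed data; a
# ROMDD/ROBDD would share equal cofactors instead of enumerating paths.

def f(bits):
    """Illustrative failure function: the system fails if defect x0 hits a
    non-redundant component and either x1 or x2 hits its backup pair."""
    x0, x1, x2 = bits
    return x0 & (x1 | x2)

p = (0.1, 0.2, 0.3)      # P(x_i = 1): per-component defect probabilities

def prob_one(i=0, assigned=()):
    """P(f = 1) by recursive Shannon expansion on variable i."""
    if i == len(p):
        return float(f(assigned))
    return ((1 - p[i]) * prob_one(i + 1, assigned + (0,))
            + p[i] * prob_one(i + 1, assigned + (1,)))

yield_estimate = 1 - prob_one()      # yield = 1 - P(f = 1)
print(yield_estimate)
```

Here P(f = 1) = 0.1 × (1 − 0.8 × 0.7) = 0.044, so the yield is 0.956; the decision-diagram representation makes the same computation feasible for functions far too large to enumerate.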