Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
In this paper, we propose a method to model the relationship between degradation and failure time for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value. No assumptions are made about the distribution of the failure times. A simple step-stress test is used to induce failure experimentally and a tampered failure rate model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures. This case is known as masking. In the presence of masking, the maximum likelihood estimates of the model parameters are obtained through the expectation–maximization algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte Carlo simulation. Finally, a real-world example is analysed to illustrate the application of the proposed methods.
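A minimal simulation sketch of the tampered failure rate idea in a simple step-stress test, assuming (unlike the paper's degradation-driven intensity) a constant baseline hazard `lam` that is multiplied by a tampering factor `alpha` once the stress changes at time `tau`; the two competing causes, the masking probability `p_mask` and all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_step_stress(n, lam=0.01, alpha=3.0, tau=100.0, p_mask=0.3):
    """Toy tampered-failure-rate data for a simple step-stress test with a
    constant baseline hazard lam that becomes lam*alpha after time tau.
    Each unit gets one of two competing causes, masked with prob. p_mask."""
    t_low = rng.exponential(1.0 / lam, size=n)              # candidate time under low stress
    t_extra = rng.exponential(1.0 / (lam * alpha), size=n)  # residual time under high stress
    time = np.where(t_low <= tau, t_low, tau + t_extra)     # memoryless switch at tau
    cause = rng.integers(1, 3, size=n)                      # cause 1 or 2
    masked = rng.random(n) < p_mask                         # cause only known to lie in {1, 2}
    return time, cause, masked

times, causes, masked = simulate_step_stress(500)
print(f"mean failure time: {times.mean():.1f}, masked fraction: {masked.mean():.2f}")
```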

2.
In this paper we consider a binary, monotone system whose component states are dependent through the possible occurrence of independent common shocks, i.e. shocks that destroy several components at once. The individual failure of a component is also thought of as a shock. Such systems can be used to model common cause failures in reliability analysis. The system may be a technological one, or a human being. It is observed until it fails or dies. At this instant, the set of failed components and the failure time of the system are noted. The failure times of the components are not known. These are the so-called autopsy data of the system. For the case of independent components, i.e. no common shocks, Meilijson (1981), Nowik (1990), Antoine et al. (1993) and Gåsemyr (1998) discuss the corresponding identifiability problem, i.e. whether the component life distributions can be determined from the distribution of the observed data. Assuming a model where autopsy data are known to be enough for identifiability, Meilijson (1994) goes beyond the identifiability question and into maximum likelihood estimation of the parameters of the component lifetime distributions based on empirical autopsy data from a sample of several systems. He also considers life-monitoring of some components and conditional life-monitoring of some others. Here a corresponding Bayesian approach is presented for the shock model. One advantage of this approach is that, owing to the prior information, the identifiability problem represents no obstacle. The motivation for introducing the shock model is that the autopsy model is of special importance when components cannot be tested separately, because it is difficult to reproduce the conditions prevailing in the functioning system. In Gåsemyr & Natvig (1997) we treat the Bayesian approach to life-monitoring and conditional life-monitoring of components.

3.
Cascading failure is a common phenomenon we observe around us. Here an initial failure alters the structure function of the system, which leads to subsequent failures within a short period of time referred to as the threshold time. The concept of cascading failures within the framework of reliability theory, and the use of the Freund bivariate exponential distribution to model them, has been studied by only a few authors. The Freund bivariate exponential distribution allows the modelling of a parallel redundant system with two components. In this system, the lifetimes of the two components behave as if they are independent until one of the components fails, after which the remaining component suffers increased or decreased stress. In this article, we further generalize this model to accommodate cascading failures. Various properties of the model are investigated and statistical inference procedures are developed using L-moments and the method of moments. A practical application of this model is illustrated using data from www.espncricinfo.com. The well-analysed Diabetic Retinopathy Study (DRS) data are also re-analysed using this model and our findings are presented.
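A minimal simulation of the Freund mechanism described above, assuming illustrative rates `th1`, `th2` before the first failure and `th1p`, `th2p` for the surviving component afterwards; the cascading generalization and the inference procedures of the article are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def freund_sample(n, th1=1.0, th2=1.5, th1p=3.0, th2p=4.0):
    """Draw n pairs (X, Y) from the Freund bivariate exponential model:
    the components fail independently at rates th1 and th2 until the first
    failure, after which the survivor's rate switches to th1p (resp. th2p)."""
    x, y = np.empty(n), np.empty(n)
    for i in range(n):
        t1 = rng.exponential(1.0 / th1)
        t2 = rng.exponential(1.0 / th2)
        if t1 < t2:                                    # component 1 fails first
            x[i] = t1
            y[i] = t1 + rng.exponential(1.0 / th2p)    # component 2 under the new rate
        else:                                          # component 2 fails first
            y[i] = t2
            x[i] = t2 + rng.exponential(1.0 / th1p)
    return x, y

x, y = freund_sample(10_000)
print(f"corr(X, Y) = {np.corrcoef(x, y)[0, 1]:.3f}")
```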

4.
In the classical approach to qualitative reliability demonstration, system failure probabilities are estimated based on a binomial sample drawn from the running production. In this paper, we show how to take account of additional available sampling information for some or even all subsystems of a current system under test with serial reliability structure. In that connection, we present two approaches, a frequentist and a Bayesian one, for assessing an upper bound for the failure probability of serial systems under binomial subsystem data. In the frequentist approach, we introduce (i) a new way of deriving the probability distribution for the number of system failures, which might be randomly assembled from the failed subsystems, and (ii) a more accurate estimator for the Clopper–Pearson upper bound using a beta mixture distribution. In the Bayesian approach, however, we infer the posterior distribution for the system failure probability on the basis of the system/subsystem testing results and a prior distribution for the subsystem failure probabilities. We propose three different prior distributions and compare their performances in the context of high reliability testing. Finally, we apply the proposed methods to reduce the efforts of semiconductor burn-in studies by considering synergies, such as comparable chip layers, among different chip technologies.
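As a point of reference for the frequentist part, the sketch below computes the classical one-sided Clopper–Pearson upper bound for each subsystem from its binomial data (via the beta quantile) and combines the bounds with a naive series-system product; the subsystem counts are hypothetical, and the combination is not the paper's beta-mixture estimator, whose coverage properties it is meant to improve upon.

```python
import math
from scipy.stats import beta

def clopper_pearson_upper(x, n, conf=0.90):
    """Classical one-sided Clopper-Pearson upper confidence bound for a
    binomial failure probability, given x failures in n trials."""
    if x >= n:
        return 1.0
    return float(beta.ppf(conf, x + 1, n - x))

# Hypothetical subsystem test results: (failures, sample size).
subsystems = [(0, 800), (1, 500), (0, 1200)]
uppers = [clopper_pearson_upper(x, n) for x, n in subsystems]

# Naive series-system aggregation: the system fails if any subsystem fails.
# (The joint coverage of this product bound is not exactly the nominal level,
# which is precisely the gap the paper's estimators address.)
p_sys_upper = 1.0 - math.prod(1.0 - u for u in uppers)
print([round(u, 5) for u in uppers], round(p_sys_upper, 5))
```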

5.
As is well known, the monotonicity of the failure rate of a life distribution plays an important role in modeling failure time data. In this paper, we develop techniques for the determination of the increasing failure rate (IFR) and decreasing failure rate (DFR) property for a wide class of discrete distributions. Instead of using the failure rate, we make use of the ratio of two consecutive probabilities. The method developed is applied to various well-known families of discrete distributions which include the binomial, negative binomial and Poisson distributions as special cases. Finally, a formula is presented to determine explicitly the failure rate of the families considered. This formula is used to determine the failure rate of various classes of discrete distributions. These formulas are explicit but complicated and cannot normally be used to determine the monotonicity of the failure rates.
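A small numerical check of this idea on the Poisson case (an example of my choosing): the discrete failure rate r(k) = P(X = k)/P(X >= k) is computed directly and is seen to be increasing, while the ratio of consecutive probabilities, lam/(k+1) for the Poisson, is decreasing, which is the log-concavity condition behind the IFR property.

```python
import numpy as np
from scipy.stats import poisson

lam, K = 4.0, 40
k = np.arange(K)
pmf = poisson.pmf(k, lam)
sf_inclusive = pmf[::-1].cumsum()[::-1]          # P(X >= k), truncated at K
failure_rate = pmf / sf_inclusive                # discrete failure rate r(k)
ratio = pmf[1:] / pmf[:-1]                       # consecutive-probability ratio

print(np.all(np.diff(failure_rate) >= -1e-12))   # True: the Poisson is IFR
print(np.all(np.diff(ratio) <= 1e-12))           # True: ratio lam/(k+1) decreases
```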

6.
In this paper, a system of five components is studied; one of these components is a bridge network component. Each component has a non-constant failure rate, and the component lifetimes follow the linear failure rate distribution. The given system is improved by using three methods: reduction, warm standby with a perfect switch, and warm standby with an imperfect switch. The reliability equivalence factors of the bridge structure system are obtained, and the γ-fractiles are computed to compare the original system with the improved systems. Finally, we present numerical results to show the differences between these methods.
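A sketch of the two ingredients behind such computations, assuming the usual five-component bridge layout (paths 1-2 and 3-4 with component 5 as the crossing link) and the linear failure rate survival function R_i(t) = exp(-(a_i t + b_i t^2 / 2)); the parameter values are illustrative, and the reliability equivalence factors and γ-fractiles themselves are not computed here.

```python
import numpy as np

def component_reliability(t, a, b):
    """Survival function under a linear failure rate h(t) = a + b*t."""
    return np.exp(-(a * t + 0.5 * b * t ** 2))

def bridge_reliability(p):
    """Reliability of the classical five-component bridge network, obtained
    by conditioning on the crossing component p[4]: paths 1-2 and 3-4."""
    p1, p2, p3, p4, p5 = p
    up = (1 - (1 - p1) * (1 - p3)) * (1 - (1 - p2) * (1 - p4))   # bridge works
    down = 1 - (1 - p1 * p2) * (1 - p3 * p4)                     # bridge failed
    return p5 * up + (1 - p5) * down

params = [(0.01, 0.001)] * 5      # illustrative (a_i, b_i) for the 5 components
t = 5.0
p = [component_reliability(t, a, b) for a, b in params]
print(f"R_system({t}) = {bridge_reliability(p):.4f}")
```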

7.
A failure model with damage accumulation is considered. Damages occur according to a Poisson process and degenerate into failures after a random time. The rate of the Poisson process and the degeneration time distribution are unknown. Two sample populations are available: a sample of intervals between damages and a sample of degeneration times. The case of small samples is considered. The purpose is to estimate the expectation and the distribution of the number of damages and failures at time t. We consider plug-in and resampling estimators of the above-mentioned characteristics. The expectations and variances of the suggested estimators are investigated. Numerical examples show that the resampling estimator has some advantages.
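A sketch of a plug-in estimator for the quantities described, under the stated model: with damage rate lam and degeneration delay D, the expected number of damages by time t is lam*t and the expected number of failures by t is lam*E[(t - D)^+]. The sample values below are hypothetical, and the resampling estimator studied in the paper is not reproduced.

```python
import numpy as np

def plug_in_estimates(intervals, degeneration_times, t):
    """Plug-in estimates of the expected numbers of damages and failures by
    time t for a Poisson damage process of rate lam in which each damage
    turns into a failure after a random delay D:
        E[#damages(t)]  = lam * t
        E[#failures(t)] = lam * E[(t - D)^+]
    lam is estimated by the reciprocal mean interval, E[(t - D)^+] by its
    sample analogue."""
    lam_hat = 1.0 / np.mean(intervals)
    d = np.asarray(degeneration_times, dtype=float)
    return lam_hat * t, lam_hat * np.mean(np.clip(t - d, 0.0, None))

intervals = [2.1, 3.4, 1.7, 2.9, 4.2]      # hypothetical small samples
delays = [5.0, 8.5, 3.2, 6.1]
print(plug_in_estimates(intervals, delays, t=20.0))
```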

8.
We extend proportional hazards frailty models for lifetime data to allow a negative binomial, Poisson, geometric or other discrete distribution of the frailty variable. This might represent, for example, the unknown number of flaws in an item under test. Zero frailty corresponds to a limited failure model containing a proportion of units that never fail (long-term survivors). Ways of modifying the model to avoid this are discussed. The models are illustrated on a previously published set of data on failures of printed circuit boards and on new data on breaking strengths of samples of cord.
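A sketch of the discrete-frailty mechanism for the Poisson-frailty case, one member of the family the paper allows: given frailty N, the hazard is N times a baseline, so the population survival is the frailty probability generating function evaluated at the conditional survival probability, and the zero-frailty atom P(N = 0) is the long-term-survivor proportion. The Weibull baseline and all parameter values are my own illustrative assumptions.

```python
import numpy as np

def survival_poisson_frailty(t, mu=1.5, scale=0.2, shape=1.3, lin_pred=0.0):
    """Population survival under proportional hazards with a Poisson(mu)
    frailty N: given N the hazard is N * h0(t) * exp(lin_pred), so
        S(t) = E[exp(-N * H0(t) * e^lp)] = G_N(exp(-H0(t) * e^lp)),
    with G_N(s) = exp(-mu * (1 - s)) the Poisson pgf and a Weibull baseline
    cumulative hazard H0(t) = (scale * t)**shape."""
    H0 = (scale * np.asarray(t)) ** shape
    s = np.exp(-H0 * np.exp(lin_pred))
    return np.exp(-mu * (1.0 - s))

t = np.array([0.0, 1.0, 5.0, 50.0, 1e6])
print(survival_poisson_frailty(t))   # tends to exp(-mu) as t grows
print(np.exp(-1.5))                  # cure fraction P(N = 0) = exp(-mu)
```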

9.
A failed system is repaired minimally if, after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate (IFR). Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown–Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data from the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest that significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.

10.
A generalized negative binomial distribution is derived from the Markov Bernoulli sequence of successes and failures. We study the properties and applications of this distribution. The properties are illustrated by two examples of discrete time queueing systems. The distribution is then fitted to two data sets, the eruption record of Mt. Sangay, and a record of computer disk failure accesses. In the first case there is a strong serial dependence in the data and the generalized negative binomial provides a good fit, while in the second case, although there is a significant serial dependence, it is insufficient to justify the additional parameter of the distribution. We conclude by demonstrating the usefulness of the distribution in the field of statistical quality control.

11.
Let R = Rn denote the total (and unconditional) number of runs of successes or failures in a sequence of n Bernoulli (p) trials, where p is assumed to be known throughout. The exact distribution of R is related to a convolution of two negative binomial random variables with parameters p and q (= 1 - p). Using the representation of R as the sum of 1-dependent indicators, a Berry–Esséen theorem is derived; the obtained rate of sup-norm convergence is O(n^{-1/2}). This yields an unconditional version of the classical result of Wald and Wolfowitz (1940). The Stein–Chen method for m-dependent random variables is used, together with a suitable coupling, to prove a Poisson limit theorem for R, but with the limiting support set being the set of odd integers. Total variation error bounds (of order O(p)) are found for the last result. Applications are indicated.
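A quick simulation of the indicator representation R = 1 + sum of 1-dependent indicators used above, checked against the exact mean E[R] = 1 + 2(n - 1)p(1 - p) that this representation yields; the choices of n and p are arbitrary, and the Berry–Esséen and Stein–Chen arguments are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def total_runs(x):
    """Total number of runs of successes or failures in a 0/1 sequence:
    R = 1 + #{i : x[i] != x[i-1]} (a sum of 1-dependent indicators)."""
    return 1 + int(np.sum(x[1:] != x[:-1]))

n, p, reps = 200, 0.3, 20_000
samples = rng.random((reps, n)) < p
r = np.array([total_runs(row) for row in samples])

print(f"simulated E[R] = {r.mean():.2f}")
print(f"exact     E[R] = {1 + 2 * (n - 1) * p * (1 - p):.2f}")
```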

12.
Inspired by reliability issues in electric transmission networks, we use a probabilistic approach to study the occurrence of large failures in a stylized cascading line failure model. Such models capture the phenomenon where an initial line failure potentially triggers massive knock-on effects. Under certain critical conditions, the probability that the number of line failures exceeds a large threshold obeys a power-law distribution, a distinctive property observed in empirical blackout data. In this paper, we examine the robustness of the power-law behavior by exploring under which conditions this behavior prevails.

13.
We propose a new bivariate negative binomial model with a constant correlation structure, derived from a contagious bivariate distribution of two independent Poisson mass functions by mixing over the proposed bivariate gamma-type density with a constantly correlated covariance structure (Iwasaki & Tsubaki, 2005), which satisfies the integrability condition of McCullagh & Nelder (1989, p. 334). The proposed bivariate gamma-type density comes from a natural exponential family. Joe (1997) points out the necessity of a multivariate gamma distribution for deriving a multivariate distribution with negative binomial margins, and the lack of a convenient form of multivariate gamma distribution for obtaining a model with greater flexibility in the dependence structure with indices of dispersion. In this paper we first derive the new bivariate negative binomial distribution together with its first two cumulants, and secondly formulate bivariate generalized linear models with a constantly correlated negative binomial covariance structure, together with the moment estimator of the components of the covariance matrix. We finally fit the bivariate negative binomial models to two correlated environmental data sets.
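For orientation only, the sketch below uses the familiar shared-gamma mixture of two conditionally independent Poisson counts, which produces negative binomial margins with positive correlation; this is not the constant-correlation construction of the paper (which mixes over the Iwasaki & Tsubaki bivariate gamma-type density), and the parameter names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def bivariate_nb_sample(n, r=2.0, lam1=3.0, lam2=5.0):
    """Draw (X1, X2) by mixing two conditionally independent Poisson counts
    over a shared Gamma(r, 1) variable G: Xj | G ~ Poisson(lamj * G).
    Each Xj is marginally negative binomial; the common G induces positive
    correlation."""
    g = rng.gamma(shape=r, scale=1.0, size=n)
    return rng.poisson(lam1 * g), rng.poisson(lam2 * g)

x1, x2 = bivariate_nb_sample(50_000)
print(f"mean / var of X1: {x1.mean():.2f} / {x1.var():.2f}  (overdispersed)")
print(f"corr(X1, X2)    : {np.corrcoef(x1, x2)[0, 1]:.3f}")
```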

14.
In the present paper, we aim at providing plug-in-type empirical estimators that enable us to quantify the contribution of each operational and/or non-functioning state to the failures of a system described by a semi-Markov model. In the discrete-time and finite state space semi-Markov framework, we study different conditional versions of an important reliability measure for random repairable systems, the failure occurrence rate, which is based on counting processes. The identification of potential failure contributors through the conditional counterparts of the failure occurrence rate is of paramount importance, since it could lead to corrective actions that minimize the occurrence of the more important failure modes and therefore improve the reliability of the system. The aforementioned estimators are characterized by appealing asymptotic properties such as strong consistency and asymptotic normality. We further obtain detailed analytical expressions for the covariance matrices of the random vectors describing the conditional failure occurrence rates. As particular cases we present the failure occurrence rates for hidden (semi-)Markov models. We illustrate our results by means of a simulation study. Different applications are presented based on wind, earthquake and vibration data.

15.
The data are n independent random binomial events, each resulting in success or failure. The event outcomes are believed to be trials from a binomial distribution with success probability p, and tests of p = 1/2 are desired. However, there is the possibility that some unidentified event has a success probability different from the common value p for the other n - 1 events. Then, tests of whether this common p equals 1/2 are desired. Fortunately, two-sided tests can be obtained that are simultaneously applicable to both situations. That is, the significance level for a test is the same when one event has a different probability as when all events have the same probability. These tests are the usual equal-tail tests for p = 1/2 (based on n independent trials from a binomial distribution).
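A sketch of the usual equal-tail two-sided binomial test referred to in the last sentence, with the p-value computed as twice the smaller tail probability under Binomial(n, 1/2); the data values are hypothetical, and the robustness argument for the one-aberrant-event situation is not reproduced.

```python
from scipy.stats import binom

def equal_tail_pvalue(k, n, p0=0.5):
    """Equal-tail two-sided p-value for H0: p = p0, with X ~ Binomial(n, p0):
    twice the smaller of P(X <= k) and P(X >= k), capped at 1."""
    lower = binom.cdf(k, n, p0)        # P(X <= k)
    upper = binom.sf(k - 1, n, p0)     # P(X >= k)
    return min(1.0, 2.0 * min(lower, upper))

# Hypothetical data: 32 successes in 40 events.
print(f"p-value = {equal_tail_pvalue(32, 40):.4f}")
```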

16.
Accelerated life testing of a product under more severe than normal conditions is commonly used to reduce test time and costs. Data collected at such accelerated conditions are used to obtain estimates of the parameters of a stress translation function. This function is then used to make inference about the product's life under normal operating conditions. We consider the problem of accelerated life tests when the product of interest is a p-component series system. Each of the components is assumed to have an independent Weibull time-to-failure distribution with a different shape parameter and a different scale parameter that is an increasing function of stress. A general model is used for the scale parameter that includes the standard engineering models as special cases. This model also has an appealing biological interpretation.
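A sketch of the series-system reliability such a model induces, R(t; s) = exp(-sum_i (t / eta_i(s))**beta_i), using an inverse power law eta_i(s) = c_i / s**gamma_i as one concrete instance of the standard engineering models covered by the general scale model; all parameter values are illustrative.

```python
import numpy as np

def series_system_reliability(t, stress, shapes, c, gamma):
    """Reliability of a p-component series system of independent Weibull
    components with shape beta_i and stress-dependent scale eta_i(s).
    An inverse power law eta_i(s) = c_i / s**gamma_i is assumed here."""
    shapes, c, gamma = map(np.asarray, (shapes, c, gamma))
    eta = c / stress ** gamma
    return float(np.exp(-np.sum((t / eta) ** shapes)))

# Illustrative 3-component system at use stress (s = 1) and accelerated stress.
shapes, c, gamma = [1.2, 2.0, 0.9], [3000.0, 5000.0, 8000.0], [1.5, 1.0, 2.0]
for s in (1.0, 3.0):
    r = series_system_reliability(1000.0, s, shapes, c, gamma)
    print(f"stress {s}: R(1000 h) = {r:.4f}")
```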

17.
This paper presents three methods for estimating Weibull distribution parameters for the case of irregular interval group failure data with unknown failure times. The methods are based on the concepts of the piecewise linear distribution function (PLDF), an average interval failure rate (AIFR) and sequential updating of the distribution function (SUDF), and use an analytical approach similar to that of Ackoff and Sasieni for regular interval group data. Results from a large number of simulated case problems generated with specified values of Weibull distribution parameters have been presented, which clearly indicate that the SUDF method produces near-perfect parameter estimates for all types of failure pattern. The performances of the PLDF and AIFR methods have been evaluated by goodness-of-fit testing and statistical confidence limits on the shape parameter. It has been found that, while the PLDF method produces acceptable parameter estimates, the AIFR method may fail for low and high shape parameter values that represent the cases of random and wear-out types of failure. A real-life application of the proposed methods is also presented, by analyzing failures of hydrogen make-up compressor valves in a petroleum refinery.

18.
It is an important problem in reliability analysis to decide whether, for a given k-out-of-n system, the static or the sequential k-out-of-n model is appropriate. Components are often added redundantly to a system to protect against failure of the system. If the failure of any component of the system induces a higher rate of failure of the remaining components due to increased load, the sequential k-out-of-n model is appropriate. The increase in the failure rate of the remaining components after the failure of some component implies that the effect of the component redundancy is diminished. On the other hand, if all the components have the same failure distribution and, whenever a failure occurs, the remaining components are not affected, the static k-out-of-n model is adequate. In this paper, we consider nonparametric hypothesis tests to decide between these two models. We analyze test statistics based on the profile score process as well as test statistics based on a multivariate intensity ratio, and derive their asymptotic distributions. Finally, we compare the different test statistics.

19.
The two-sided power (TSP) distribution is a flexible two-parameter distribution that includes the uniform, power function and triangular distributions as special cases, and it is a reasonable alternative to the beta distribution in some settings. In this work, we introduce the TSP-binomial model, which is defined as a mixture of binomial distributions with the binomial parameter p following a TSP distribution. We study its distributional properties and demonstrate its use on some data. It is shown that the newly defined model is a useful candidate for overdispersed binomial data.
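A sketch of sampling from the TSP distribution by inverting its cdf and of the resulting TSP-binomial mixture, showing the overdispersion relative to a plain binomial with the same mean; the parameter values (theta, m, n) are arbitrary choices, and the paper's distributional results are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def rtsp(size, theta=0.3, m=4.0):
    """Sample from the two-sided power TSP(theta, m) distribution on (0, 1)
    by inverting its cdf:
        F(x) = theta * (x / theta)**m                        for x <= theta
        F(x) = 1 - (1 - theta) * ((1 - x) / (1 - theta))**m  for x >  theta
    (m = 1 gives the uniform and m = 2 the triangular distribution)."""
    u = rng.random(size)
    left = u <= theta
    x = np.empty(size)
    x[left] = theta * (u[left] / theta) ** (1.0 / m)
    x[~left] = 1.0 - (1.0 - theta) * ((1.0 - u[~left]) / (1.0 - theta)) ** (1.0 / m)
    return x

# TSP-binomial: mix the binomial success probability p over a TSP distribution.
n, reps = 20, 100_000
x = rng.binomial(n, rtsp(reps))
pbar = x.mean() / n
print(f"mixture mean = {x.mean():.2f}, variance = {x.var():.2f}")
print(f"binomial variance at the same mean = {n * pbar * (1 - pbar):.2f}")
```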

20.
In certain applications involving discrete data, it is sometimes found that X = 0 is observed with a frequency significantly higher than predicted by the assumed model. Zero-inflated Poisson, binomial and negative binomial models have been employed in some clinical trials and in some regression analysis problems.

In this paper, we study the zero-inflated modified power series distributions (IMPSD), which include among others the generalized Poisson and the generalized negative binomial distributions, and hence the Poisson, binomial and negative binomial distributions. The structural properties, along with the distribution of the sum of independent IMPSD variables, are studied. The maximum likelihood estimation of the parameters of the model is examined and the variance-covariance matrix of the estimators is obtained. Finally, examples are presented for the generalized Poisson distribution to illustrate the results.
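A sketch of the simplest zero-inflated special case, the zero-inflated Poisson (the generalized Poisson distribution mentioned above reduces to the Poisson when its dispersion parameter is zero), fitted by numerical maximum likelihood; the data are simulated, and the paper's analytical variance-covariance results are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_logpmf(k, omega, lam):
    """Log pmf of the zero-inflated Poisson:
       P(X = 0) = omega + (1 - omega) * exp(-lam)
       P(X = k) = (1 - omega) * exp(-lam) * lam**k / k!,  k >= 1."""
    k = np.asarray(k)
    log_pois = -lam + k * np.log(lam) - gammaln(k + 1)
    return np.where(k == 0,
                    np.log(omega + (1.0 - omega) * np.exp(-lam)),
                    np.log(1.0 - omega) + log_pois)

def zip_mle(data):
    """Maximize the zero-inflated Poisson log likelihood over (omega, lam)."""
    nll = lambda th: -np.sum(zip_logpmf(data, th[0], th[1]))
    res = minimize(nll, x0=[0.2, np.mean(data) + 0.5],
                   bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
    return res.x

rng = np.random.default_rng(5)
data = np.where(rng.random(2000) < 0.25, 0, rng.poisson(3.0, size=2000))
print(zip_mle(data))   # roughly (0.25, 3.0) up to sampling noise
```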
