Similar Documents
Found 20 similar documents (search time: 46 ms).
1.
We consider a multicomponent load-sharing system in which the failure rate of a given component depends on the set of working components at any given time. Such systems arise, for example, in software reliability models and in multivariate failure-time models in biostatistics. A load-share rule dictates how stress or load is redistributed to the surviving components after a component fails within the system. In this paper, we assume the load-share rule is unknown and derive maximum-likelihood methods for statistical inference on the load-share parameters. Components with (individually) constant failure rates are observed in two settings: (1) the system load is distributed evenly among the working components, and (2) we assume only that the load on each working component increases when other components in the system fail. Tests for these special load-share models are investigated.
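As a minimal sketch of setting (1), not the paper's actual estimator: if each of n components has baseline rate theta and the total load n is redistributed evenly among survivors, then with k survivors each fails at rate theta*n/k, the total hazard stays n*theta, and inter-failure gaps are i.i.d. Exp(n*theta), giving a closed-form MLE. All rates and sample sizes below are illustrative.

```python
import random

def simulate_equal_share_system(n, theta, rng):
    # With k survivors each carries load n/k, so each fails at rate
    # theta*n/k; the total hazard stays n*theta, and the gaps between
    # successive failures are i.i.d. Exp(n*theta).
    t, times = 0.0, []
    for _ in range(n):
        t += rng.expovariate(n * theta)
        times.append(t)
    return times

def mle_theta(systems, n):
    # Gaps are Exp(n*theta), so theta_hat = (#gaps) / (n * sum of gaps).
    gaps = [t - prev
            for times in systems
            for prev, t in zip([0.0] + times[:-1], times)]
    return len(gaps) / (n * sum(gaps))

rng = random.Random(1)
data = [simulate_equal_share_system(5, 0.4, rng) for _ in range(2000)]
theta_hat = mle_theta(data, 5)   # close to the true theta = 0.4
```

With 2000 simulated five-component systems the estimate recovers the true rate to within about one percent, illustrating why complete failure-time data make this special rule easy to fit.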

2.
The two-parameter Gompertz distribution has recently been introduced as a lifetime model for reliability inference. In this paper, the Gompertz distribution is proposed for the baseline lifetimes of components in a composite system, in which the failure of a component places increased load on the surviving components and thus raises their hazard rates via a power-trend process. Point estimates of the composite-system parameters are obtained by maximum likelihood. Interval estimates of the baseline survival function are obtained from the maximum-likelihood estimator via a bootstrap percentile method. Two parametric bootstrap procedures are proposed to test whether the hazard rate function changes with the number of failed components. Intensive simulations are carried out to evaluate the performance of the proposed estimation procedure.
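The bootstrap percentile idea can be sketched as follows. This is an illustration only: exponential lifetimes stand in for the Gompertz baseline (so the MLE has a closed form), and the sample size and nominal level are arbitrary.

```python
import math
import random

def surv_percentile_ci(sample, t, b=1000, alpha=0.05, seed=0):
    # Parametric-bootstrap percentile CI for S(t). Exponential
    # lifetimes are used as a simple stand-in for the Gompertz
    # baseline: S(t) = exp(-t/mean), with the mean estimated by MLE.
    rng = random.Random(seed)
    n = len(sample)
    mean_hat = sum(sample) / n
    s_hat = math.exp(-t / mean_hat)
    # Resample from the fitted model, refit, and re-evaluate S(t).
    stats = sorted(
        math.exp(-t * n / sum(rng.expovariate(1 / mean_hat) for _ in range(n)))
        for _ in range(b)
    )
    lo = stats[int(b * alpha / 2)]
    hi = stats[int(b * (1 - alpha / 2)) - 1]
    return s_hat, lo, hi

rng = random.Random(1)
lifetimes = [rng.expovariate(0.5) for _ in range(300)]   # true mean 2
s_hat, lo, hi = surv_percentile_ci(lifetimes, t=1.0)
```

The percentile interval is simply the empirical alpha/2 and 1 - alpha/2 quantiles of the refitted bootstrap statistics, exactly the mechanism the abstract applies to the Gompertz baseline survival function.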

3.
A mixture of the MANOVA and GMANOVA models is presented. The expected value of the response matrix in this model is the sum of two matrix components: the first represents the GMANOVA portion and the second the MANOVA portion. Maximum likelihood estimators are derived for the parameters in this model, and goodness-of-fit tests against fuller models are constructed via the likelihood ratio criterion. Finally, likelihood ratio tests for general linear hypotheses are developed and a numerical example is presented.

4.
The non-parametric maximum likelihood estimators (MLEs) are derived for survival functions associated with individual risks or system components in a reliability framework. Lifetimes are observed for systems that contain one or more of those components. Analogous to a competing risks model, the system is assumed to fail upon the first instance of any component failure; i.e. the system is configured in series. For any given risk or component type, the asymptotic distribution is shown to depend explicitly on the unknown survival function of the other risks, as well as the censoring distribution. Survival functions with increasing failure rate are investigated as a special case. The order restricted MLE is shown to be consistent under mild assumptions of the underlying component lifetime distributions.

5.
Component lifetime parameters of a series system are estimated from system lifetimes and masked cause-of-failure observations. The time and cause of system failure are assumed to follow a competing risks model, and the masking probabilities of the minimum random subsets are not subject to a symmetry assumption. Sufficient regularity conditions are provided to justify the maximum likelihood analysis. Maximum likelihood estimates of both the lifetime parameters and the masking probabilities are computed generically via an EM algorithm, and an appropriate set of asymptotically pivotal quantities is derived. The maximum-likelihood-based estimates are further refined by bootstrap. The techniques are illustrated by numerical examples with independent Weibull component lifetimes having distinct scale and shape parameters.

6.
Many engineering systems have multiple components with more than one degradation measure, and these measures are dependent on one another because of complex failure mechanisms, which creates serious difficulties for reliability work in engineering. To overcome these difficulties, system reliability prediction approaches based on performance degradation theory have developed rapidly in recent years and have shown their superiority over traditional approaches in many applications. This paper proposes reliability models for systems with two dependent degrading components, whose degradation paths are assumed to be governed by gamma processes. For a parallel system, the failure probability function can be approximated by the bivariate Birnbaum–Saunders distribution; from the relationship between parallel and series systems, the failure probability function of a series system can then be expressed via the bivariate Birnbaum–Saunders distribution and its marginal distributions. The resulting model is complicated, analytically intractable, and computationally cumbersome. For this reason, a Bayesian Markov chain Monte Carlo method is developed that allows the parameter estimates to be determined in an efficient manner. Confidence intervals for the system failure probability are then given. The proposed model is illustrated with a numerical example on railway track.
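A gamma-process degradation path can be simulated directly from its defining property of independent gamma-distributed increments; failure occurs when the path first crosses a threshold. The sketch below is a single-component illustration with invented parameters (rate 2.0, scale 0.5, threshold 5.0), not the paper's two-component dependent model.

```python
import random

def gamma_path(rate, scale, t_grid, rng):
    # Stationary gamma process: independent increments
    # X(t) - X(s) ~ Gamma(rate * (t - s), scale), so the path is
    # nondecreasing and E[X(t)] = rate * t * scale.
    x, prev, path = 0.0, 0.0, []
    for t in t_grid:
        x += rng.gammavariate(rate * (t - prev), scale)
        path.append(x)
        prev = t
    return path

def first_passage(t_grid, path, threshold):
    # First grid time at which degradation reaches the threshold.
    for t, x in zip(t_grid, path):
        if x >= threshold:
            return t
    return None                    # no failure within the horizon

rng = random.Random(3)
grid = [0.5 * k for k in range(1, 21)]              # t = 0.5, ..., 10
paths = [gamma_path(2.0, 0.5, grid, rng) for _ in range(2000)]
mean_end = sum(p[-1] for p in paths) / len(paths)   # E[X(10)] = 10
crossed = sum(first_passage(grid, p, 5.0) is not None for p in paths)
```

The first-passage times collected this way are the component lifetimes whose joint distribution, in the dependent two-component case, the paper approximates with the bivariate Birnbaum–Saunders distribution.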

7.
Competing risks models are of great importance in reliability and survival analysis. In the literature they are often assumed to have independent causes of failure, which may be unreasonable. In this article, dependent causes of failure are modelled using the Marshall–Olkin bivariate Weibull distribution. After deriving some useful results for the model, we use maximum likelihood, fiducial inference, and Bayesian methods to estimate the unknown model parameters with a parameter transformation. Simulation studies are carried out to assess the performance of the three methods; compared with maximum likelihood, the fiducial and Bayesian methods can provide better parameter estimates.

8.
In this paper we consider a binary, monotone system whose component states are dependent through the possible occurrence of independent common shocks, i.e. shocks that destroy several components at once; the individual failure of a component is also treated as a shock. Such systems can be used to model common cause failures in reliability analysis. The system may be a technological one, or a human being. It is observed until it fails or dies. At that instant, the set of failed components and the failure time of the system are noted; the failure times of the individual components are not known. These are the so-called autopsy data of the system. For the case of independent components, i.e. no common shocks, Meilijson (1981), Nowik (1990), Antoine et al. (1993) and Gåsemyr (1998) discuss the corresponding identifiability problem, i.e. whether the component life distributions can be determined from the distribution of the observed data. Assuming a model in which autopsy data are known to be enough for identifiability, Meilijson (1994) goes beyond the identifiability question and into maximum likelihood estimation of the parameters of the component lifetime distributions based on empirical autopsy data from a sample of several systems; he also considers life-monitoring of some components and conditional life-monitoring of others. Here a corresponding Bayesian approach is presented for the shock model. Owing to the prior information, one advantage of this approach is that the identifiability problem presents no obstacle. The motivation for introducing the shock model is that the autopsy model is of special importance when components cannot be tested separately, because it is difficult to reproduce the conditions prevailing in the functioning system. In Gåsemyr & Natvig (1997) we treat the Bayesian approach to life-monitoring and conditional life-monitoring of components.

9.
The Cox‐Aalen model, obtained by replacing the baseline hazard function in the well‐known Cox model with a covariate‐dependent Aalen model, allows for both fixed and dynamic covariate effects. In this paper, we examine maximum likelihood estimation for a Cox‐Aalen model based on interval‐censored failure times with fixed covariates. The resulting estimator globally converges to the truth slower than the parametric rate, but its finite‐dimensional component is asymptotically efficient. Numerical studies show that estimation via a constrained Newton method performs well in terms of both finite sample properties and processing time for moderate‐to‐large samples with few covariates. We conclude with an application of the proposed methods to assess risk factors for disease progression in psoriatic arthritis.

10.
Kalman filtering techniques are widely used by engineers to recursively estimate random signal parameters which are essentially coefficients in a large-scale time series regression model. These Bayesian estimators depend on the values assumed for the mean and covariance parameters associated with the initial state of the random signal. This paper considers a likelihood approach to estimation and tests of hypotheses involving the critical initial means and covariances. A computationally simple convergent iterative algorithm is used to generate estimators which depend only on standard Kalman filter outputs at each successive stage. Conditions are given under which the maximum likelihood estimators are consistent and asymptotically normal. The procedure is illustrated using a typical large-scale data set involving 10-dimensional signal vectors.
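The link between standard filter outputs and the likelihood of the initial state can be seen in a scalar sketch. This is a minimal local-level illustration, not the paper's 10-dimensional procedure: the log-likelihood accumulates the Gaussian innovation terms produced by the filter, so it is a function of the initial mean m0 and variance p0 computable from filter outputs alone.

```python
import math

def kalman_loglik(ys, m0, p0, q, r):
    # Scalar local-level model: x_t = x_{t-1} + w_t (var q),
    # y_t = x_t + v_t (var r). Returns the innovation-based
    # log-likelihood plus the final filtered mean and variance.
    m, p, ll = m0, p0, 0.0
    for y in ys:
        p += q                      # time update (predict)
        s = p + r                   # innovation variance
        ll += -0.5 * (math.log(2 * math.pi * s) + (y - m) ** 2 / s)
        k = p / s                   # Kalman gain
        m += k * (y - m)            # measurement update
        p *= (1 - k)
    return ll, m, p

ys = [0.9, 1.1, 1.0, 1.2, 0.95]
ll_good, m_good, p_good = kalman_loglik(ys, 1.0, 1.0, 0.01, 0.1)
ll_bad, _, _ = kalman_loglik(ys, 5.0, 1.0, 0.01, 0.1)
```

Maximizing this function over m0 (and p0) is the scalar analogue of the likelihood approach described above: an initial mean far from the data is penalized through the first few innovations.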

11.
A log-linear model is defined for multiway contingency tables with negative multinomial frequency counts. The maximum likelihood estimator of the model parameters and its covariance matrix are given. The likelihood ratio test for the general log-linear hypothesis is also presented.

12.
This paper deals with the regression analysis of failure time data when there are censoring and multiple types of failures. We propose a semiparametric generalization of a parametric mixture model of Larson & Dinse (1985), for which the marginal probabilities of the various failure types are logistic functions of the covariates. Given the type of failure, the conditional distribution of the time to failure follows a proportional hazards model. A marginal likelihood approach to estimating the regression parameters is suggested, whereby the baseline hazard functions are eliminated as nuisance parameters. The Monte Carlo method is used to approximate the marginal likelihood; the resulting function is easily maximized using existing software. Some guidelines for choosing the number of Monte Carlo replications are given. Fixing the regression parameters at their estimated values, the full likelihood is maximized via an EM algorithm to estimate the baseline survivor functions. The suggested methods are illustrated using the Stanford heart transplant data.

13.
A model for the lifetime of a system is considered in which the system is susceptible to simultaneous failures of two or more components, the failures having a common external cause. Three sets of discrete failure data from the US nuclear industry are examined to motivate and illustrate the model derivation: they are for motor-operated valves, cooling fans and emergency diesel generators. To achieve target reliabilities, these components must be placed in systems that have built-in redundancy. Consequently, multiple failures due to a common cause are critical in the risk of core meltdown. Vesely has offered a simple methodology for inference, called the binomial failure rate model: external events are assumed to be governed by a Poisson shock model in which each shock kills X out of the m system components, X having a binomial distribution with parameters (m, p), 0 < p < 1. In many applications the binomial failure rate model fits failure data poorly, and it has not typically been applied to probabilistic risk assessments in the nuclear industry. We introduce a realistic generalization of the binomial failure rate model by assigning a mixing distribution to the unknown parameter p. The distribution is generally identifiable, and its unique nonparametric maximum likelihood estimator can be obtained by a simple iterative scheme.
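The core of the binomial failure rate model is easy to sketch: each common-cause shock kills X ~ Binomial(m, p) of the m components, and with complete data on shock sizes the MLE of p is the sample mean of X divided by m. The parameters below (m = 4, p = 0.3) are illustrative, and this is the basic Vesely model, not the paper's mixture generalization.

```python
import random

def simulate_shock_sizes(m, p, n_shocks, rng):
    # Each common-cause shock kills X ~ Binomial(m, p) of the
    # m system components.
    return [sum(rng.random() < p for _ in range(m))
            for _ in range(n_shocks)]

def mle_p(xs, m):
    # Complete-data binomial MLE: p_hat = mean(X) / m.
    return sum(xs) / (m * len(xs))

rng = random.Random(2)
xs = simulate_shock_sizes(4, 0.3, 5000, rng)
p_hat = mle_p(xs, 4)   # close to the true p = 0.3
```

The paper's generalization replaces the single fixed p with a mixing distribution over p, estimated nonparametrically, precisely because real multiple-failure counts are more dispersed than a single binomial allows.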

14.
Nuisance parameter elimination is a central problem in capture–recapture modelling. In this paper, we consider a closed-population capture–recapture model in which the capture probabilities vary only with the sampling occasions. The capture probabilities are regarded as nuisance parameters and the unknown number of individuals is the parameter of interest. To eliminate the nuisance parameters, the likelihood function is integrated with respect to a weight function (uniform or Jeffreys') on the nuisance parameters, resulting in an integrated likelihood function that depends only on the population size. For these integrated likelihood functions, analytical expressions for the maximum likelihood estimates are obtained, and they are proved to be always finite and unique. Variance estimates of the proposed estimators are obtained via a parametric bootstrap resampling procedure. The proposed methods are illustrated on a real data set, and their frequentist properties are assessed by means of a simulation study.

15.
The class of inflated beta regression models generalizes that of beta regressions [S.L.P. Ferrari and F. Cribari-Neto, Beta regression for modelling rates and proportions, J. Appl. Stat. 31 (2004), pp. 799–815] by incorporating a discrete component that allows practitioners to model rates and proportions with observations that equal an interval limit; for instance, one can model responses that assume values in (0, 1]. The likelihood ratio test tends to be quite oversized (liberal, anticonservative) in inflated beta regressions estimated from a small number of observations: our numerical results show that its null rejection rate can be almost twice the nominal level. It is thus important to develop alternative testing strategies. This paper develops small-sample adjustments to the likelihood ratio and signed likelihood ratio test statistics in inflated beta regression models. The adjustments do not require orthogonality between the parameters of interest and the nuisance parameters and are fairly simple, since they only require first- and second-order log-likelihood cumulants. Simulation results show that the modified likelihood ratio tests deliver much more accurate inference in small samples. An empirical application is presented and discussed.

16.
Binary-response data arise in teratology and mutagenicity studies in which each treatment is applied to a group of litters. In a large experiment, a contingency table can be constructed to test the treatment × litter-size interaction (see Kastenbaum and Lamphiear 1959). In situations with a clumped category, as in the Kastenbaum and Lamphiear mice-depletion data, a clumped binomial model (Koch et al. 1976) or a clumped beta-binomial model (Paul 1979) can be used to analyze the data. When a clumped binomial model is appropriate, the maximum likelihood estimates of the model parameters, both under the hypothesis of no treatment × litter-size interaction and under the hypothesis of such an interaction, can be computed via the EM algorithm for maximum likelihood estimation from incomplete data (Dempster et al. 1977). In this article the EM algorithm is described and used to test the treatment × litter-size interaction for the Kastenbaum and Lamphiear data and for a data set given in Luning et al. (1966).
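The EM mechanics for a clumped category can be sketched on a simplified stand-in: a zero-inflated binomial, where with probability pi an observation falls into the zero clump and otherwise follows Binomial(m, p). This is not the paper's exact clumped binomial likelihood; pi, p, m and the sample size below are illustrative.

```python
import random

def em_clumped_binomial(ys, m, n_iter=200):
    # EM for a zero-inflated ("clumped at zero") binomial: with
    # probability pi an observation is forced into the zero clump,
    # otherwise y ~ Binomial(m, p).
    pi, p = 0.5, 0.5
    for _ in range(n_iter):
        # E-step: posterior probability that each zero is a clump zero.
        z = [pi / (pi + (1 - pi) * (1 - p) ** m) if y == 0 else 0.0
             for y in ys]
        # M-step: weighted updates of the clump weight and success prob.
        pi = sum(z) / len(ys)
        w = [1.0 - zi for zi in z]
        p = sum(wi * yi for wi, yi in zip(w, ys)) / (m * sum(w))
    return pi, p

rng = random.Random(7)
ys = [0 if rng.random() < 0.3
      else sum(rng.random() < 0.6 for _ in range(5))
      for _ in range(4000)]
pi_hat, p_hat = em_clumped_binomial(ys, 5)
```

The "incomplete data" here is the unobserved indicator of whether a zero came from the clump; the E-step fills it in with its posterior probability, which is exactly the Dempster et al. (1977) device the abstract applies.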

17.
In this paper, we consider a new mixture of varying coefficient models, in which each mixture component follows a varying coefficient model and the mixing proportions and dispersion parameters are also allowed to be unknown smooth functions. We systematically study the identifiability, estimation and inference for the new mixture model. The proposed new mixture model is rather general, encompassing many mixture models as its special cases such as mixtures of linear regression models, mixtures of generalized linear models, mixtures of partially linear models and mixtures of generalized additive models, some of which are new mixture models by themselves and have not been investigated before. The new mixture of varying coefficient model is shown to be identifiable under mild conditions. We develop a local likelihood procedure and a modified expectation–maximization algorithm for the estimation of the unknown non‐parametric functions. Asymptotic normality is established for the proposed estimator. A generalized likelihood ratio test is further developed for testing whether some of the unknown functions are constants. We derive the asymptotic distribution of the proposed generalized likelihood ratio test statistics and prove that the Wilks phenomenon holds. The proposed methodology is illustrated by Monte Carlo simulations and an analysis of a CO2‐GDP data set.

18.
System failure data are often analyzed to estimate component reliabilities. Owing to cost and time constraints, the exact component causing a system failure cannot always be identified; this phenomenon is called masking. It is also sometimes necessary to account for the influence of the operating environment. Here we consider a two-component series system, operating under an unknown environment, whose component failure times follow the Marshall-Olkin bivariate exponential distribution. We present a maximum likelihood approach for obtaining estimators from the masked data for this system. A simulation study shows that the relative errors of the estimates are well behaved even when the expected number of systems whose cause of failure is identified is small or moderate.

19.
We propose a semiparametric approach for the analysis of case–control genome-wide association studies. Parametric components are used to model both the conditional distribution of the case status given the covariates and the distribution of genotype counts, whereas the distribution of the covariates is modelled nonparametrically. This yields a direct and joint model of the case status, covariates and genotype counts, gives a better understanding of the disease mechanism, and leads to more reliable conclusions. Side information, such as the disease prevalence, can be conveniently incorporated through an empirical likelihood approach and yields more efficient estimates and a powerful test for detecting disease-associated SNPs. Profiling is used to eliminate a nuisance nonparametric component, and the resulting profile empirical likelihood estimates are shown to be consistent and asymptotically normal. For the hypothesis test of disease association, we apply the approximate Bayes factor (ABF), which is computationally simple and especially desirable in genome-wide association studies, where hundreds of thousands to a million genetic markers are tested. We treat the approximate Bayes factor as a hybrid Bayes factor that replaces the full data by the maximum likelihood estimates of the parameters of interest in the full model, and we derive it under a general setting. Deviation from Hardy–Weinberg equilibrium (HWE) is also taken into account, and the ABF for HWE using cases is shown to provide evidence of association between a disease and a genetic marker. Simulation studies and an application illustrate the utility of the proposed methodology.

20.
This article introduces a novel nonparametric penalized likelihood hazard estimator for settings in which the censoring time is dependent on the failure time for each subject under observation. More specifically, we model this dependence using a copula, and the method of maximum penalized likelihood (MPL) is adopted to estimate the hazard function; covariates are not considered in this article. The non-negatively constrained MPL hazard estimate is obtained using a multiplicative iterative algorithm. Consistency results and the asymptotic properties of the proposed hazard estimator are derived. Simulation studies show that our MPL estimator under dependent censoring with an assumed copula model is more accurate than the MPL estimator under independent censoring when the sign of the dependence is correctly specified in the copula function. The proposed method is applied to a real dataset, with a sensitivity analysis performed over various values of the correlation between failure and censoring times.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)