Similar Articles
20 similar articles found (search time: 177 ms)
1.
Summary.  Hip replacements provide a means of achieving a higher quality of life for individuals who have, through aging or injury, accumulated damage to their natural joints. This is a very common operation, with over a million people a year benefiting from the procedure. The replacements themselves fail mainly as a result of the mechanical loosening of the components of the artificial joint due to damage accumulation. This damage accumulation consists of the initiation and growth of cracks in the bone cement which is used to fixate the replacement in the human body. The data come from laboratory experiments that are designed to assess the effectiveness of the bone cement in resisting damage. We examine the properties of the bone cement, with the aim being to estimate the effect that both observable and unobservable spatially varying factors have on causing crack initiation. To do this, an explicit model for the damage process is constructed taking into account the tension and compression at different locations in the specimens. A gamma random field is used to model any latent spatial factors that may be influential in crack initiation. Bayesian inference is carried out for the parameters of this field and related covariates by using Markov chain Monte Carlo techniques.

2.
In this paper, we propose a spatial model for the initiation of cracks in the bone cement of hip replacement specimens. The failure of hip replacements can be attributed mainly to damage accumulation, consisting of crack initiation and growth, occurring in the cement mantle that interlocks the hip prosthesis and the femur bone. Since crack initiation is an important factor in determining the lifetime of a replacement, the understanding of the reasons for crack initiation is vital in attempting to prolong the life of the hip replacement. The data consist of crack location coordinates from five laboratory experimental models, together with stress measurements. It is known that stress plays a major role in the initiation of cracks, and it is also known that other unmeasurable factors such as air bubbles (pores) in the cement mantle are also influential. We propose an identity-link spatial Poisson regression model for the counts of cracks in discrete regions of the cement, incorporating both the measured (stress), and through a latent process, any unmeasured factors (possibly pores) that may be influential. All analysis is carried out in a Bayesian framework, allowing for the inclusion of prior information obtained from engineers, and parameter estimation for the model is done via Markov chain Monte Carlo techniques.
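The identity-link Poisson count model described in this abstract can be sketched in a few lines of NumPy. This is an illustrative simulation, not the authors' code: the latent (pore) process is omitted so that the single stress coefficient has a closed-form maximum likelihood estimate, and all names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: 200 grid cells of cement, each with a measured stress.
stress = rng.uniform(5.0, 20.0, size=200)
beta_true = 0.3                       # cracks per unit stress (illustrative)

# Identity link: the Poisson rate is linear in stress. (The paper's model
# adds a latent gamma field to this rate; dropped here for a closed form.)
rate = beta_true * stress
cracks = rng.poisson(rate)

# For rate = beta * stress, the Poisson MLE of beta is total counts over
# total stress.
beta_hat = cracks.sum() / stress.sum()
print(round(beta_hat, 3))
```

With a latent additive field the estimator above would be biased upward, which is exactly why the paper infers the field and the regression jointly by MCMC.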

3.
Missing covariate values are a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids the use of the EM algorithm. It exploits the fact that the observed hazard function is multiplicative in the baseline hazard function, the idea being to profile out this function before carrying out the estimation of the parameter of interest. In this step one uses a Breslow-type estimator to estimate the cumulative baseline hazard function. We focus on the situation where the observed covariates are categorical, which allows us to calculate estimators without having to assume anything about the distribution of the covariates. We show that the proposed estimator is consistent and asymptotically normal, and derive a consistent estimator of the variance–covariance matrix that does not involve any choice of a perturbation parameter. Moderate sample size performance of the estimators is investigated via simulation and by application to a real data example.
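To illustrate the Breslow profiling step mentioned above, here is a hedged sketch with one binary covariate, no censoring, and the regression coefficient treated as known; the data and values are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data from a Cox model h(t|x) = h0(t) * exp(beta * x) with
# h0(t) = 1, so the true cumulative baseline hazard is H0(t) = t.
n, beta = 500, 0.7
x = rng.integers(0, 2, size=n)
t = rng.exponential(1.0 / np.exp(beta * x))

# Breslow estimator: H0 jumps by 1 / sum_{j at risk} exp(beta * x_j)
# at each event time.
order = np.argsort(t)
t_sorted, x_sorted = t[order], x[order]
risk = np.exp(beta * x_sorted)
risk_set = np.cumsum(risk[::-1])[::-1]   # sum over subjects still at risk
H0 = np.cumsum(1.0 / risk_set)

# With h0 = 1, H0 should track t; compare at the median event time.
mid = n // 2
print(round(H0[mid], 3), round(t_sorted[mid], 3))
```

In the paper beta is of course unknown and the Breslow step is embedded in the profile estimation; this sketch only shows the inner step.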

4.
The increasing reliability of some manufactured products has led to fewer observed failures in reliability testing. Thus, useful inference on the distribution of failure times is often not possible using traditional survival analysis methods. Partly as a result of this difficulty, there has been increasing interest in inference from degradation measurements made on products prior to failure. In the degradation literature inference is commonly based on large-sample theory and, if the degradation path model is nonlinear, implementation can be complicated by the need for approximations. In this paper we review existing methods and then describe a fully Bayesian approach which allows approximation-free inference. We focus on predicting the failure time distribution of both future units and those that are currently under test. The methods are illustrated using fatigue crack growth data.

5.
This paper deals with the software reliability model based on a nonhomogeneous Poisson process. We introduce new types of mean functions which can be either NHPP-I or NHPP-II according to the choice of the distribution function. The proposed mean function is motivated by the fact that a strictly monotone increasing function can be modeled by a distribution function, and that an unknown distribution function can be approximated by a mixture of beta distributions. Some existing mean functions can be regarded as special cases of the proposed mean functions. The EM algorithm is used to obtain maximum likelihood estimates of the parameters in the proposed model.
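A beta-mixture mean function of the kind described above can be sketched numerically. This is an illustrative construction with hypothetical weights and shape parameters (not the paper's fitted values); the Beta CDF is computed by simple quadrature so that only NumPy and the standard library are needed.

```python
import math
import numpy as np

# Hypothetical NHPP mean function: m(t) = a * sum_k w_k * BetaCDF(t; p_k, q_k),
# with the testing horizon rescaled to t in [0, 1].
a = 120.0                          # expected total number of software faults
weights = [0.6, 0.4]               # mixture weights (sum to 1)
shapes = [(2.0, 5.0), (5.0, 2.0)]  # beta shape pairs (p_k, q_k)

t = np.linspace(0.0, 1.0, 2001)

def beta_cdf(grid, p, q):
    """Beta(p, q) CDF on a grid via the trapezoid rule (no SciPy needed)."""
    B = math.gamma(p) * math.gamma(q) / math.gamma(p + q)
    pdf = grid ** (p - 1) * (1 - grid) ** (q - 1) / B
    dt = np.diff(grid)
    return np.concatenate(([0.0], np.cumsum(dt * (pdf[1:] + pdf[:-1]) / 2)))

m = a * sum(w * beta_cdf(t, p, q) for w, (p, q) in zip(weights, shapes))
# m rises monotonically from 0 to (approximately) a, as a mean function must.
```

Because each component is a distribution function, m(t) is automatically strictly increasing, which is the motivating property quoted in the abstract.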

6.
In most reliability studies involving censoring, one assumes that censoring probabilities are unknown. We derive a nonparametric estimator for the survival function when information regarding censoring frequency is available. The estimator is constructed by adjusting the Nelson–Aalen estimator to incorporate censoring information. Our results indicate that significant improvements can be achieved if available information regarding censoring is used. We compare this model to the Koziol–Green model, which is also based on a form of proportional hazards for the lifetime and censoring distributions. Two examples of survival data help to illustrate the differences in the estimation techniques.
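For reference, the unadjusted Nelson–Aalen estimator that the paper modifies can be computed as follows. The censoring-information adjustment itself is not reproduced here, and the data are simulated under hypothetical rates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical right-censored sample: constant hazard 0.5, so the true
# cumulative hazard is H(t) = 0.5 * t; independent exponential censoring.
n = 2000
life = rng.exponential(2.0, size=n)     # failure times, rate 0.5
cens = rng.exponential(4.0, size=n)     # censoring times, rate 0.25
time = np.minimum(life, cens)
died = life <= cens

# Nelson-Aalen: H_hat(t) = sum over event times <= t of d_i / (number at risk).
order = np.argsort(time)
time, died = time[order], died[order]
at_risk = np.arange(n, 0, -1)           # after sorting: n, n-1, ..., 1 at risk
H = np.cumsum(died / at_risk)

# Compare to the truth H(1) = 0.5 at t = 1.
idx = np.searchsorted(time, 1.0)
print(round(H[idx - 1], 3))
```

The paper's point is that when the censoring frequency is known, this estimator can be adjusted to use the censored observations more efficiently.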

7.
The non-homogeneous Poisson process (NHPP) model is a very important class of software reliability models and is widely used in software reliability engineering. NHPPs are characterized by their intensity functions. In the literature it is usually assumed that the functional forms of the intensity functions are known and only some parameters in the intensity functions are unknown. Parametric statistical methods can then be applied to estimate or to test the unknown reliability models. However, in realistic situations it is often the case that the functional form of the failure intensity is not very well known or is completely unknown. In this case we have to use functional (non-parametric) estimation methods. Non-parametric techniques do not require any preliminary assumption on the software models and thus can reduce the parameter modeling bias. The existing non-parametric methods in the statistical literature are usually not applicable to software reliability data. In this paper we construct some non-parametric methods to estimate the failure intensity function of the NHPP model, taking the particularities of software failure data into consideration.

8.
The Type-II progressive censoring scheme has become very popular for analyzing lifetime data in reliability and survival analysis. However, no published papers address parameter estimation under progressive Type-II censoring for the mixed exponential distribution (MED), which is an important model for reliability and survival analysis. This is the problem that we address in this paper. It is noted that maximum likelihood estimation of the unknown parameters cannot be obtained in closed form due to the complicated log-likelihood function. We solve this problem by using the EM algorithm. Finally, we obtain closed-form estimates of the model parameters. The proposed methods are illustrated by simulation studies and a case analysis.
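As a hedged sketch of the E- and M-steps for a two-component mixed exponential distribution, the code below uses complete (uncensored) data; the paper's progressive Type-II censoring adds extra terms to both steps that are omitted here, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate a two-component mixed exponential sample (complete data).
n, pi_true, lam_true = 5000, 0.4, (2.0, 0.25)
z = rng.random(n) < pi_true
t = np.where(z, rng.exponential(1 / lam_true[0], n),
                rng.exponential(1 / lam_true[1], n))

pi_, lam = 0.5, np.array([1.0, 0.1])          # crude starting values
for _ in range(300):
    # E-step: posterior probability each observation came from component 1.
    d1 = pi_ * lam[0] * np.exp(-lam[0] * t)
    d2 = (1 - pi_) * lam[1] * np.exp(-lam[1] * t)
    r = d1 / (d1 + d2)
    # M-step: closed-form weighted updates (this is the "closed form"
    # convenience the EM approach buys for the MED).
    pi_ = r.mean()
    lam = np.array([r.sum() / (r * t).sum(),
                    (1 - r).sum() / ((1 - r) * t).sum()])

print(round(pi_, 2), np.round(lam, 2))
```

Under progressive censoring each censored unit contributes its expected component membership and expected residual life to the same two update formulas.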

9.
A Bayesian discovery procedure
Summary.  We discuss a Bayesian discovery procedure for multiple-comparison problems. We show that, under a coherent decision theoretic framework, a loss function combining true positive and false positive counts leads to a decision rule that is based on a threshold of the posterior probability of the alternative. Under a semiparametric model for the data, we show that the Bayes rule can be approximated by the optimal discovery procedure, which was recently introduced by Storey. Improving the approximation leads us to a Bayesian discovery procedure, which exploits the multiple shrinkage in clusters that are implied by the assumed non-parametric model. We compare the Bayesian discovery procedure and the optimal discovery procedure estimates in a simple simulation study and in an assessment of differential gene expression based on microarray data from tumour samples. We extend the setting of the optimal discovery procedure by discussing modifications of the loss function that lead to different single-thresholding statistics. Finally, we provide an application of the previous arguments to dependent (spatial) data.
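The thresholding rule on posterior probabilities of the alternative can be sketched as follows. The probabilities here are simulated stand-ins, not output of the semiparametric model, and the error level and mixture shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical posterior probabilities of the alternative, v_i, for 1000
# comparisons (in practice these would come from the fitted model).
m_tests = 1000
is_alt = rng.random(m_tests) < 0.2
v = np.where(is_alt, rng.beta(8, 2, m_tests), rng.beta(1, 12, m_tests))

# Threshold rule: flag the largest set {i : v_i >= t*} whose posterior
# expected FDR -- the running mean of (1 - v_i) over flagged tests -- stays
# below alpha.
alpha = 0.05
order = np.argsort(-v)                        # most promising tests first
fdr = np.cumsum(1.0 - v[order]) / np.arange(1, m_tests + 1)
k = int(np.searchsorted(fdr, alpha, side="right"))
flagged = order[:k]
print(k, "tests flagged")
```

Because the running mean of a nondecreasing sequence is nondecreasing, the largest qualifying set is found by a single pass down the sorted probabilities.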

10.
ABSTRACT

In the present paper, we aim at providing plug-in-type empirical estimators that enable us to quantify the contribution of each operational and/or non-functioning state to the failures of a system described by a semi-Markov model. In the discrete-time and finite state space semi-Markov framework, we study different conditional versions of an important reliability measure for random repairable systems, the failure occurrence rate, which is based on counting processes. The identification of potential failure contributors through the conditional counterparts of the failure occurrence rate is of paramount importance, since it could lead to corrective actions that minimize the occurrence of the more important failure modes and therefore improve the reliability of the system. The aforementioned estimators are characterized by appealing asymptotic properties such as strong consistency and asymptotic normality. We further obtain detailed analytical expressions for the covariance matrices of the random vectors describing the conditional failure occurrence rates. As particular cases we present the failure occurrence rates for hidden (semi-)Markov models. We illustrate our results by means of a simulation study. Different applications are presented based on wind, earthquake and vibration data.

11.
In this paper, it is demonstrated that the coefficient of determination of an ANOVA linear model provides a measure of polarization. Taking as the starting point the link between polarization and dispersion, we reformulate the measure of polarization of Zhang and Kanbur using the decomposition of the variance instead of the decomposition of the Theil index. We show that the proposed measure is equivalent to the coefficient of determination of an ANOVA linear model that explains, for example, the income of households as a function of any population characteristic such as education, gender, occupation, etc. This result provides an alternative way to analyse polarization by sub-population characteristics and at the same time allows us to compare sub-populations via the estimated coefficients of the ANOVA model.
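The claimed equivalence is easy to illustrate numerically: the between-group share of the total variance is exactly the R² of the one-way ANOVA regression on group dummies. The groups, sizes, and income distributions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical incomes for three education groups; polarization is measured
# by the share of income variance explained by group membership (ANOVA R^2).
groups = [rng.normal(mu, 8.0, size=400) for mu in (20.0, 35.0, 60.0)]

y = np.concatenate(groups)
grand = y.mean()
between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)  # between-group SS
total = ((y - grand) ** 2).sum()                                 # total SS
r_squared = between / total
print(round(r_squared, 2))
```

A value near 1 means incomes cluster tightly around well-separated group means, which is precisely the dispersion-based notion of polarization the abstract describes.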

12.
Dynamic reliability models with conditional proportional hazards
A dynamic approach to the stochastic modelling of reliability systems is further explored. This modelling approach is particularly appropriate for load-sharing, software reliability, and multivariate failure-time models, where component failure characteristics are affected by their degree of use, amount of load, or extent of stresses experienced. This approach incorporates the intuitive notion that when a set of components in a coherent system fail at a certain time, there is a jump from one structure function to another which governs the residual lifetimes of the remaining functioning components, and since the component lifetimes are intrinsically affected by the structure function which they constitute, then at such a failure time there should also be a jump in the stochastic structure of the lifetimes of the remaining components. For such dynamically-modelled systems, the stochastic characteristics of their jump times are studied. These properties of the jump times allow us to obtain the properties of the lifetime of the system. In particular, for a Markov dynamic model, specific expressions for the exact distribution functions of the jump times are obtained for a general coherent system, a parallel system, and a series-parallel system. We derive a new family of distribution functions which describes the distributions of the jump times for a dynamically-modelled system.

13.
Traditionally, reliability assessment of devices has been based on life tests (LTs) or accelerated life tests (ALTs). However, these approaches are not practical for high-reliability devices which are not likely to fail in experiments of reasonable length. For these devices, LTs or ALTs will end up with a high censoring rate compromising the traditional estimation methods. An alternative approach is to monitor the devices for a period of time and assess their reliability from the changes in performance (degradation) observed during the experiment. In this paper, we present a model to evaluate the problem of train wheel degradation, which is related to the failure modes of train derailments. We first identify the most significant working conditions affecting the wheel wear using a nonlinear mixed-effects (NLME) model where the log-rate of wear is a linear function of some working conditions such as side, truck and axle positions. Next, we estimate the failure time distribution by working condition analytically. Point and interval estimates of reliability figures by working condition are also obtained. We compare the results of the analysis via an NLME to the ones obtained by an approximate degradation analysis.

14.
System reliability depends on the reliability of the system’s components and the structure of the system. For example, in a competing risks model, the system fails when the weakest component fails. The reliability function and the quantile function of a complicated system are two important metrics for characterizing the system’s reliability. When there are data available at the component level, the system reliability can be estimated by using the component level information. Confidence intervals (CIs) are needed to quantify the statistical uncertainty in the estimation. Obtaining system reliability CI procedures with good properties is not straightforward, especially when the system structure is complicated. In this paper, we develop a general procedure for constructing a CI for the system failure-time quantile function by using the implicit delta method. We also develop general procedures for constructing a CI for the cumulative distribution function (cdf) of the system. We show that the recommended procedures are asymptotically valid and have good statistical properties. We conduct simulations to study the finite-sample coverage properties of the proposed procedures and compare them with existing procedures. We apply the proposed procedures to three applications; two applications in competing risks models and an application with a $k\text{-out-of-}s$ system. The paper concludes with some discussion and an outline of areas for future research.

15.
A system comprising k identical components is considered. The system load that is common to all components is treated as a random variable, modeled by a one-parameter Exponential distribution. Each component's resistance to load is also taken as a random variable, modeled by another one-parameter Exponential distribution that applies to each component. The system is considered redundant, meaning that the system remains operative so long as any m out of k components function, where 1≤m≤k. An expression for system reliability of this load-strength model is derived. The maximum likelihood estimator is compared with the structural expectation of system reliability.
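A Monte Carlo sketch of the m-out-of-k load-strength model described above, with an exponential common load and independent exponential strengths; the rate values are hypothetical. For these particular values the reliability also works out analytically to 5/7 ≈ 0.714, which the simulation should approximate.

```python
import numpy as np

rng = np.random.default_rng(11)

# m-out-of-k load-strength model: a common exponential load hits all k
# components; each has an independent exponential strength. The system
# survives if at least m strengths exceed the load.
k, m = 5, 3
lam_load, lam_strength = 1.0, 0.5     # rate parameters (illustrative)
n = 200_000

load = rng.exponential(1 / lam_load, size=n)
strength = rng.exponential(1 / lam_strength, size=(n, k))
survivors = (strength > load[:, None]).sum(axis=1)
reliability = (survivors >= m).mean()
print(round(reliability, 3))   # analytic value for these rates is 5/7
```

Analytically, conditioning on the load l makes the survivor count Binomial(k, e^(-λ_s l)); integrating over the exponential load gives the closed form the paper derives.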

16.
Joint modeling of degradation and failure time data
This paper surveys some approaches to model the relationship between failure time data and covariate data like internal degradation and external environmental processes. These models which reflect the dependency between system state and system reliability include threshold models and hazard-based models. In particular, we consider the class of degradation–threshold–shock models (DTS models) in which failure is due to the competing causes of degradation and trauma. For this class of reliability models we express the failure time in terms of degradation and covariates. We compute the survival function of the resulting failure time and derive the likelihood function for the joint observation of failure times and degradation data at discrete times. We consider a special class of DTS models where degradation is modeled by a process with stationary independent increments and related to external covariates through a random time scale and extend this model class to repairable items by a marked point process approach. The proposed model class provides a rich conceptual framework for the study of degradation–failure issues.

17.
The problem of comparing the linear calibration equations of several measuring methods, each designed to measure the same characteristic on a common group of individuals, is discussed. We consider the factor analysis version of the model and propose to estimate the model parameters using the EM algorithm. The equations that define the M-step are simple to implement and computationally inexpensive, requiring no additional maximization procedures. The derivation of the complete-data log-likelihood function makes it possible to obtain the expected and observed information matrices for any number p (> 3) of instruments in closed form, upon which large-sample inference on the parameters can be based. Re-analyses of two actual data sets are presented.

18.
This study investigates the effect of manufacturing defects on the failure rate for a population of repairable devices and for a population of non-repairable devices. A reliability function is obtained for a random number of manufacturing defects in a device following a general distribution. We observe that for any population, the failure rate decreases if the device-to-device variability of the number of defects is large enough. Considering further a case where the defect size initially follows a linear-power-law distribution and increases at a rate that is proportional to the defect size at any instant during field operation, we show that defect growth and defect clustering play an important role in inducing the decreasing property of the failure rate function.
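The mixing effect described above can be illustrated with exponential per-defect failure times: a degenerate defect count gives a constant population failure rate, while Poisson device-to-device variability already yields a strictly decreasing one. This setup is a hypothetical special case, not the paper's general defect-count distribution.

```python
import numpy as np

# Per-defect "time to cause failure" is Exp(1), so a device with n defects
# survives to t with probability exp(-n * t) (failure = first defect fires).
# Population reliability is R(t) = E[exp(-N * t)], the pgf of N at exp(-t).
t = np.linspace(0.01, 3.0, 300)
nu = 2.0

# Case 1: every device has exactly N = 2 defects -> constant failure rate 2.
R_fixed = np.exp(-2 * t)

# Case 2: N ~ Poisson(nu) across devices ->
# R(t) = exp(-nu * (1 - exp(-t))), with hazard nu * exp(-t), decreasing.
R_pois = np.exp(-nu * (1 - np.exp(-t)))

# Numerical hazards h(t) = -d/dt log R(t).
h_fixed = -np.gradient(np.log(R_fixed), t)
h_pois = -np.gradient(np.log(R_pois), t)
# h_fixed stays flat at 2; h_pois falls from about 2 toward 0 as the
# weaker (high-defect) devices are weeded out of the population early.
```

The decreasing hazard here is purely a population-mixing effect, which is the mechanism the abstract attributes to device-to-device variability in defect counts.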

19.
The use of bivariate distributions plays a fundamental role in survival and reliability studies. In this paper, we introduce a location-scale model for bivariate survival times, based on a copula, to model the dependence of bivariate survival data with a cure fraction. We create the correlation structure between the failure times using the Clayton family of copulas, while the marginal distributions are left unspecified. As a result, the model is very flexible with respect to the choice of the marginal distributions. For the proposed model, we consider inferential procedures based on constrained parameters under maximum likelihood. We derive the appropriate matrices for assessing local influence under different perturbation schemes and present some ways to perform global influence analysis. The relevance of the approach is illustrated using a real data set, and a diagnostic analysis is performed to select an appropriate model.
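A sketch of the Clayton-copula joint survival function, here with exponential margins purely for illustration (the paper allows arbitrary margins and a cure fraction, both omitted); theta and the rates are hypothetical.

```python
import numpy as np

# Clayton survival copula: S(t1, t2) = (S1^-theta + S2^-theta - 1)^(-1/theta).
# theta > 0 gives positive dependence; theta -> 0 recovers independence
# (Kendall's tau = theta / (theta + 2), so theta = 2 means tau = 0.5).
theta = 2.0

def joint_survival(t1, t2, rate1=1.0, rate2=0.5):
    s1, s2 = np.exp(-rate1 * t1), np.exp(-rate2 * t2)   # exponential margins
    return (s1 ** -theta + s2 ** -theta - 1.0) ** (-1.0 / theta)

# Sanity checks: margins are recovered on the axes, and positive dependence
# makes the joint survival exceed the independence product S1 * S2.
print(round(float(joint_survival(1.0, 0.0)), 4))   # equals exp(-1), since S2(0) = 1
```

Swapping in any other marginal survival functions for s1 and s2 leaves the dependence structure untouched, which is the flexibility the abstract emphasizes.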

20.
Asymptotics of an alternative extreme-value estimator for the autocorrelation parameter in a first-order bifurcating autoregressive (BAR) process with non-Gaussian innovations are derived. This contrasts with traditional estimators, whose asymptotic behavior depends on the central part of the innovation distribution. Within any BAR model, the main concern is addressing the complex dependency between generations. The inability of traditional methods to handle this dependency motivated an alternative procedure. With the combination of an extreme-value approach and a clever blocking argument, the dependency issue within the BAR process was resolved, which in turn allowed us to derive the limiting distribution for the proposed estimator through the use of regular variation and non-stationary point processes. Finally, the implications of our extreme-value approach are discussed with an extensive simulation study that not only assesses the reliability of our proposed estimator but also examines a new estimator of an unknown location parameter θ.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号