Similar Articles
Found 20 similar articles (search time: 0 ms)
1.
For series systems with k components, the cause of failure is assumed to be known to belong to one of the 2^k − 1 possible subsets of the failure modes. The latent times to failure due to the k causes are assumed to have independent Weibull distributions with equal shape parameters. After finding the MLEs and the observed information matrix of (λ1, …, λk, β), a prior distribution is proposed for (λ1, …, λk), which is shown to also yield a scale-invariant noninformative prior. No particular structure is imposed on the prior of β. Methods to obtain the marginal posterior distributions of the parameters and other parametric functions of interest, together with their Bayesian point and interval estimates, are discussed. The developed techniques are illustrated with a numerical example.
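As a generic illustration of the common-shape Weibull competing-risks likelihood described above: for a fixed shape β the cause-specific scale MLEs are closed form, so the shape can be profiled over a grid. This is a sketch under those assumptions (the grid-profiling helper and the hazard parameterization h_j(t) = λ_j·β·t^(β−1) are choices of this sketch, not the paper's exact procedure):

```python
import math
import random

def fit_series_weibull(times, causes, k, betas=None):
    """Profile-likelihood MLE for a k-component series system whose
    cause-specific hazards are h_j(t) = lam[j]*beta*t**(beta-1)
    (independent Weibulls sharing one shape beta).  For fixed beta the
    scale MLEs are closed form: lam_j = d_j / sum_i t_i**beta, where
    d_j is the number of failures attributed to cause j."""
    if betas is None:
        betas = [0.1 * m for m in range(1, 51)]   # shape grid 0.1 .. 5.0
    d = [sum(1 for c in causes if c == j) for j in range(k)]
    n = len(times)
    best = None
    for beta in betas:
        s = sum(t ** beta for t in times)
        lam = [dj / s for dj in d]
        # profile log-likelihood at this beta (terms with d_j = 0 drop out)
        ll = (n * math.log(beta)
              + (beta - 1) * sum(math.log(t) for t in times)
              + sum(dj * math.log(lj) for dj, lj in zip(d, lam) if dj > 0)
              - sum(lam) * s)
        if best is None or ll > best[0]:
            best = (ll, beta, lam)
    return {"beta": best[1], "lam": best[2], "loglik": best[0]}
```

With equal shapes, the failure cause is independent of the failure time, which makes simulation (and sanity checks) straightforward.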

2.
Many existing approaches to analysing interval-censored data lack flexibility or efficiency. In this paper, we propose an efficient, easy-to-implement approach based on the accelerated failure time model, with a logarithmic transformation of the failure time and a flexible specification of the error distribution. We use exact inference for the Dirichlet process, without approximation, in the imputation step. Our algorithm can be implemented with simple Gibbs sampling, which produces exact posterior distributions for the features of interest. Simulation and real data analysis demonstrate the advantage of our method over several alternatives.

3.
In this paper, we consider the Bayesian analysis of competing risks data when the data are partially complete in both the time and the type of failure. It is assumed that the latent causes of failure have independent Weibull distributions with a common shape parameter but different scale parameters. When the shape parameter is known, the scale parameters are assumed to have Beta–Gamma priors; in this case, the Bayes estimates and the associated credible intervals can be obtained in explicit form. When the common shape parameter is also unknown, it is assumed to have a very flexible log-concave prior density function; the Bayes estimates and credible intervals can then no longer be obtained in explicit form, and we propose a Markov chain Monte Carlo sampling technique to compute them. We further consider the case when covariates are also present. Two competing risks data sets, one with covariates and one without, are analysed for illustrative purposes. The proposed model is very flexible, and the method is easy to implement in practice.
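For the known-shape case above, conjugacy is what yields explicit Bayes estimates. The sketch below uses independent Gamma(a, b) priors on the cause-specific scales as a simple stand-in for the paper's Beta–Gamma structure (the independent-Gamma simplification is an assumption of this sketch):

```python
def gamma_posterior_scale(times, causes, k, beta, a=1.0, b=1.0):
    """With the common Weibull shape beta known, a Gamma(a, b) prior on
    each cause-specific scale lam_j is conjugate: the posterior is
    Gamma(a + d_j, b + sum_i t_i**beta), so the Bayes estimate under
    squared-error loss is the posterior mean (a + d_j)/(b + s)."""
    s = sum(t ** beta for t in times)
    estimates = []
    for j in range(k):
        d_j = sum(1 for c in causes if c == j)  # failures from cause j
        estimates.append((a + d_j) / (b + s))
    return estimates
```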

4.
5.
This paper gives matrix formulae for the O(n⁻¹) correction applicable to asymptotically efficient conditional moment tests. These formulae require only expectations of functions involving, at most, second-order derivatives of the log-likelihood, unlike those previously provided by Ferrari and Cordeiro (1994). The correction is used to assess the reliability of first-order asymptotic theory for arbitrary residual-based diagnostics in a class of accelerated failure time models; the correction is always parameter-free, depending only on the number of covariates included in the regression design. For all but one of the tests considered, first-order theory is found to be extremely unreliable, even in quite large samples, although this may not be widely appreciated by applied workers.

6.
Logistic regression using conditional maximum likelihood estimation has recently gained widespread use. Many applications of logistic regression arise in situations in which the independent variables are collinear. It is shown that collinearity among the independent variables seriously affects the conditional maximum likelihood estimator: the variance of this estimator is inflated in much the same way that collinearity inflates the variance of the least squares estimator in multiple regression. Drawing on the similarities between multiple and logistic regression, several alternative estimators, which reduce the effect of the collinearity and are easy to obtain in practice, are suggested and compared in a simulation study.
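One simple estimator in the spirit of those discussed above is ridge-penalized logistic regression, which shrinks coefficients to tame collinearity-driven variance inflation. The sketch below uses plain gradient descent (the optimizer and penalty form are choices of this sketch, not the paper's exact proposals):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ridge_logistic(X, y, alpha=1.0, lr=0.1, iters=2000):
    """Gradient-descent fit of logistic regression with an L2 (ridge)
    penalty alpha/2 * ||w||^2 added to the negative log-likelihood.
    The penalty stabilizes the estimator when columns of X are
    nearly collinear."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        grad = [alpha * wj for wj in w]                 # penalty gradient
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi))) - yi
            for j in range(p):
                grad[j] += err * xi[j]
        w = [wj - lr * gj / n for wj, gj in zip(w, grad)]
    return w
```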

7.
Han (2007) introduced an E-Bayesian estimation method for estimating a system failure probability and revealed the relationship between the E-Bayesian estimates under three different prior distributions of the hyperparameters. In this article, formulas for the hierarchical Bayesian estimate of a system failure probability are derived and, furthermore, the relationship between hierarchical Bayesian estimation and E-Bayesian estimation is discussed. Finally, a numerical example and an application example are provided for illustrative purposes.
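The E-Bayesian idea above amounts to averaging the conjugate Beta-posterior mean of the failure probability over a hyperprior on the Beta hyperparameters (a, b). A minimal numeric sketch, using an illustrative uniform hyperprior and a midpoint-rule integral rather than Han's closed-form expressions:

```python
def e_bayes_failure_prob(x, n, c=4.0, grid=200):
    """E-Bayesian estimate of a failure probability p given x failures
    in n trials: average the Beta(a, b) posterior mean
    (x + a) / (n + a + b) over the hyperprior a ~ U(0, 1), b ~ U(0, c).
    The hyperprior ranges and the value of c are illustrative."""
    total = 0.0
    for i in range(grid):
        a = (i + 0.5) / grid              # midpoint nodes for a in (0, 1)
        for j in range(grid):
            b = c * (j + 0.5) / grid      # midpoint nodes for b in (0, c)
            total += (x + a) / (n + a + b)
    return total / (grid * grid)
```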

8.
We propose what appears to be the first Bayesian procedure for the analysis of seasonal variation when the sample size and the amplitude are small. Such data occur often in the medical sciences, where seasonality analyses and environmental considerations can help clarify disease etiologies. The method is explained in terms of a simple physico-geometric setting. We present the Bayesian version of a frequentist test that performs well. Two examples of real data illustrate the procedure's application.

9.
Factorial experiments with spatially arranged units occur in many situations, particularly in agricultural field trials. The design of such experiments when observations are spatially correlated is investigated in this paper. We show that having a large number of within-factor level changes in rows and columns is important for efficient and robust designs, and demonstrate how designs with these properties can be constructed.

10.
A flexible Bayesian semiparametric accelerated failure time (AFT) model is proposed for analyzing arbitrarily censored survival data with covariates subject to measurement error. Specifically, the baseline error distribution in the AFT model is nonparametrically modeled as a Dirichlet process mixture of normals. Classical measurement error models are imposed for covariates subject to measurement error. An efficient and easy-to-implement Gibbs sampler, based on the stick-breaking formulation of the Dirichlet process combined with the techniques of retrospective and slice sampling, is developed for the posterior calculation. An extensive simulation study is conducted to illustrate the advantages of our approach.
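The stick-breaking formulation mentioned above can be sketched generically. This is the textbook truncated construction of a single Dirichlet process draw, not the paper's full retrospective/slice sampler:

```python
import random

def dp_stick_breaking(alpha, base_draw, eps=1e-8, max_atoms=1000):
    """Truncated stick-breaking construction of a Dirichlet process
    draw: weights w_k = v_k * prod_{l<k} (1 - v_l), with
    v_k ~ Beta(1, alpha), and atoms drawn i.i.d. from the base measure
    via `base_draw`.  Truncates once the leftover stick mass drops
    below `eps`."""
    weights, atoms, remaining = [], [], 1.0
    while remaining > eps and len(weights) < max_atoms:
        v = random.betavariate(1.0, alpha)
        weights.append(remaining * v)     # break off a piece of the stick
        atoms.append(base_draw())
        remaining *= 1.0 - v              # mass left to break
    return weights, atoms
```

Smaller `alpha` concentrates mass on a few atoms; larger `alpha` spreads it over many, which is the usual knob controlling how "nonparametric" the mixture behaves.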

11.
Joint modeling of degradation and failure time data
This paper surveys some approaches to modeling the relationship between failure time data and covariate data such as internal degradation and external environmental processes. These models, which reflect the dependency between system state and system reliability, include threshold models and hazard-based models. In particular, we consider the class of degradation–threshold–shock models (DTS models), in which failure is due to the competing causes of degradation and trauma. For this class of reliability models we express the failure time in terms of degradation and covariates. We compute the survival function of the resulting failure time and derive the likelihood function for the joint observation of failure times and degradation data at discrete times. We consider a special class of DTS models in which degradation is modeled by a process with stationary independent increments, related to external covariates through a random time scale, and extend this model class to repairable items via a marked point process approach. The proposed model class provides a rich conceptual framework for the study of degradation–failure issues.

12.
Interval-censored failure time data and panel count data are two types of incomplete data that commonly occur in event history studies, and many methods have been developed for their analysis separately (Sun in The statistical analysis of interval-censored failure time data. Springer, New York, 2006; Sun and Zhao in The statistical analysis of panel count data. Springer, New York, 2013). Sometimes one may be interested in, or need to conduct, their joint analysis, for example in clinical trials with composite endpoints, for which no established approach seems to exist in the literature. In this paper, a sieve maximum likelihood approach is developed for the joint analysis; in the proposed method, Bernstein polynomials are used to approximate the unknown functions. The asymptotic properties of the resulting estimators are established and, in particular, the proposed estimators of the regression parameters are shown to be semiparametrically efficient. In addition, an extensive simulation study was conducted, and the proposed method is applied to a set of real data arising from a skin cancer study.
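The Bernstein-polynomial approximation used in the sieve above is a standard device: any continuous function on [0, 1] is approximated by a degree-m Bernstein polynomial, and monotonicity of the coefficients carries over to the approximant. A minimal sketch of the approximation itself (not the sieve estimator):

```python
from math import comb

def bernstein_approx(f, m):
    """Degree-m Bernstein polynomial approximation of f on [0, 1]:
    B_m(f; t) = sum_k f(k/m) * C(m, k) * t**k * (1 - t)**(m - k).
    B_m(f) converges uniformly to f as m grows, and is monotone
    whenever the coefficients f(k/m) are."""
    def B(t):
        return sum(f(k / m) * comb(m, k) * t ** k * (1 - t) ** (m - k)
                   for k in range(m + 1))
    return B
```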

13.
Kleinbaum (1973) developed a generalized growth curve model for analyzing incomplete longitudinal data. In this paper the small-sample properties of several related test statistics are investigated via Monte Carlo techniques. The covariance matrix is estimated by each of three non-iterative methods. The null and non-null distributions of these test statistics are examined.

14.
All statistical methods involve basic model assumptions which, if violated, render the results of the analysis dubious. One remedy is to seek a more appropriate model, or to modify the customary model by introducing additional parameters; both approaches are in general cumbersome and demand uncommon expertise. An alternative is to transform the data to achieve compatibility with a well-understood and convenient customary model with readily available software. The best-known example is the Box–Cox data transformation, developed to make the normal-theory linear model usable even when the assumptions of normality and homoscedasticity are not met.

In reliability analysis, model appropriateness is determined by the nature of the hazard function. The well-known Weibull distribution is the most commonly employed model for this purpose. However, this model, which allows only a small spectrum of monotone hazard rates, is especially inappropriate when the data indicate bathtub-shaped hazard rates.

In this paper, a new model based on a data transformation is presented for modeling bathtub-shaped hazard rates. Parameter estimation methods are studied for this new (transformation) approach. Examples and comparisons between the new model and other bathtub-shaped models are shown to illustrate the applicability of the new model.
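The classical Box–Cox recipe referred to above transforms the data as (y^λ − 1)/λ (log y at λ = 0) and chooses λ by maximizing the normal-theory profile log-likelihood. A sketch of that standard recipe (not the paper's new bathtub-hazard model):

```python
import math
import random

def box_cox(y, lam):
    """Box-Cox transform: (y**lam - 1)/lam for lam != 0, log(y) at lam = 0."""
    return [math.log(v) if abs(lam) < 1e-12 else (v ** lam - 1.0) / lam
            for v in y]

def box_cox_profile_lam(y, lams):
    """Choose lambda on a grid by maximizing the profile log-likelihood
    -n/2 * log(sigma_hat^2(lam)) + (lam - 1) * sum(log y), where the
    second term is the Jacobian of the transformation."""
    n = len(y)
    slog = sum(math.log(v) for v in y)
    best = None
    for lam in lams:
        z = box_cox(y, lam)
        mu = sum(z) / n
        var = sum((v - mu) ** 2 for v in z) / n
        ll = -0.5 * n * math.log(var) + (lam - 1.0) * slog
        if best is None or ll > best[0]:
            best = (ll, lam)
    return best[1]
```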

15.
The authors show that for balanced data, the estimates of effects of interest and of their standard errors are unaffected when a covariate is removed from a multiplicative Poisson model. As they point out, this does not hold in the analogous linear model, nor in the logistic model. In the first case, only the estimated coefficients remain the same, while in the second case, both the estimated effects and their standard errors can change.

16.
An outcome-dependent sampling (ODS) design is a retrospective sampling scheme in which one observes the primary exposure variables with a probability that depends on the observed value of the outcome variable. When the outcome of interest is failure time, the observed data are often censored. By allowing the selection of the supplemental samples to depend on whether the event of interest has happened, and by oversampling subjects from the most informative regions, an ODS design for time-to-event data can reduce the cost of a study and improve its efficiency. We review recent progress and advances in research on ODS designs with failure time data, including related designs such as the case–cohort design, generalized case–cohort design, stratified case–cohort design, general failure-time ODS design, length-biased sampling design, and interval sampling design.

17.
We study reliability estimates for the non-standard mixture of degenerate (degenerate at zero) and exponential distributions. The uniformly minimum variance unbiased estimator (UMVUE) and the Bayes estimator of the reliability under selected priors are derived for the cases where the mixing proportion is known and unknown. The Bayes risk is computed for each Bayes estimator of the reliability. A simulation study is carried out to assess the performance of the estimators along with the true value and the maximum likelihood estimate (MLE) of the reliability. An example from Vannman (1991) is also discussed at the end of the paper.
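For the degenerate-at-zero/exponential mixture above, the reliability is R(t) = (1 − p)·exp(−t/θ), and the MLE has a simple plug-in form: the observed fraction of zeros estimates p, and the mean of the positive observations estimates θ. A sketch of that MLE (the benchmark the study compares against, not the UMVUE):

```python
import math
import random

def mixture_reliability_mle(data, t):
    """MLE of R(t) = (1 - p) * exp(-t / theta) for a mixture of a point
    mass at zero (probability p) and an exponential(theta) lifetime:
    p_hat is the fraction of exact zeros, theta_hat the sample mean of
    the positive observations."""
    zeros = sum(1 for x in data if x == 0)
    positives = [x for x in data if x > 0]
    p_hat = zeros / len(data)
    theta_hat = sum(positives) / len(positives)
    return (1.0 - p_hat) * math.exp(-t / theta_hat)
```

Note the exact `x == 0` comparison is appropriate here because the degenerate component places a genuine point mass at zero, not merely small values.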

18.
This paper considers the problem of hypothesis testing in a simple panel data regression model with random individual effects and serially correlated disturbances. Following Baltagi et al. (Econom. J. 11:554–572, 2008), we allow for the possibility of non-stationarity in the regressor and/or the disturbance term. While Baltagi et al. (2008) focus on the asymptotic properties and distributions of the standard panel data estimators, this paper focuses on hypothesis testing in this setting. One important finding is that, unlike in the time-series case, one does not necessarily need to rely on the "super-efficient" type AR estimator of Perron and Yabu (J. Econom. 151:56–69, 2009) to make inference in the panel data setting. In fact, we show that the simple t-ratio always converges to the standard normal distribution, regardless of whether the disturbances and/or the regressor are stationary.

19.
Prostate cancer (PrCA) is the most common cancer diagnosed in American men and the second leading cause of death from malignancies. Large geographical variation and racial disparities exist in PrCA survival rates. Much work on spatial survival models is based on the proportional hazards (PH) model, but little has focused on the accelerated failure time (AFT) model. In this paper, we investigate PrCA data for Louisiana from the Surveillance, Epidemiology, and End Results program; violation of the PH assumption suggests that a spatial survival model based on the AFT model is more appropriate for this data set. To account for possible extra-variation, we consider spatially referenced independent or dependent spatial structures. The deviance information criterion is used to select the best-fitting model within the Bayesian framework. The results of our study indicate that age, race, stage, and geographical distribution are significant in evaluating PrCA survival.

20.
Since the development of methods for the analysis of experiments with dependent data, see for example Gleeson and Cullis (1987), the design of such experiments has been an area of active research. We investigate the design of factorial experiments, complete and fractional, for various dependency structures. An algorithm for generating optimal or near-optimal designs is presented and shown to be useful across a wide range of dependency structures.
