Similar Literature
20 similar records found.
1.
In this article, we study a new class of nonnegative distributions generated by symmetric distributions around zero. For the special case generated by the normal distribution, properties such as the moment generating function, stochastic representation, and reliability connections are studied, along with inference via the method of moments and maximum likelihood. Moreover, a real data set is analyzed, illustrating that good fits can be obtained.
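
A minimal sketch of the folding construction alluded to above: if X is symmetric about zero, then |X| is a nonnegative random variable whose density is 2*f(x) on [0, inf). The snippet folds a standard normal into a half-normal and checks the sample mean against the known value sqrt(2/pi); the class studied in the paper may use a more general construction.

import numpy as np

rng = np.random.default_rng(0)

# Fold a symmetric-about-zero distribution onto [0, inf):
# if X is symmetric around 0, |X| is nonnegative with density 2*f(x) for x >= 0.
x = rng.normal(size=100_000)           # symmetric generator (standard normal)
y = np.abs(x)                          # folded (half-normal) sample

print(y.mean(), np.sqrt(2 / np.pi))    # sample mean vs. theoretical E|X| = sqrt(2/pi)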

2.
Bathtub distributions are characterized by bathtub-shaped failure rate functions. These are possibly more realistic models than the monotone failure rate models. A systematic account of such distributions is not available, and this review aims to give such an account. We give some easily verifiable conditions to check the bathtub property of a distribution, along with methods to construct such distributions. We also discuss some stochastic and reliability mechanisms which lead to bathtub distributions; these include mixtures (stochastic failure rate models), series systems, stochastic differential equation models, and so on. We also review inference on bathtub distributions. The paper concludes with a rather exhaustive list of bathtub distributions.
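
One easily verifiable way to construct a bathtub failure rate (offered here only as a generic illustration, not necessarily one of the constructions in this review) is to add a decreasing Weibull hazard to an increasing one; the parameter values below are illustrative.

import numpy as np

def weibull_hazard(t, shape, scale):
    # h(t) = (shape/scale) * (t/scale)**(shape - 1)
    return (shape / scale) * (t / scale) ** (shape - 1)

def additive_weibull_hazard(t, a1=0.5, b1=1.0, a2=3.0, b2=10.0):
    # shape < 1 gives a decreasing hazard, shape > 1 an increasing one;
    # their sum is bathtub-shaped (high early, flat middle, rising late).
    return weibull_hazard(t, a1, b1) + weibull_hazard(t, a2, b2)

t = np.linspace(0.01, 15, 6)
print(additive_weibull_hazard(t))      # values decrease, then increase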

3.
The family of skew distributions introduced by Azzalini and extended by others has received widespread attention. However, it suffers from complicated inference procedures. In this paper, a new family of skew distributions that overcomes these difficulties is introduced. This new family belongs to the exponential family. Many properties of this family are studied, inference procedures are developed, and simulation studies are performed to assess the procedures. Some particular cases of this family, evidence of its flexibility, and a real data application are presented. At least ten advantages of the new family over Azzalini's distributions are established.
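
For reference, the Azzalini skew-normal family mentioned above has density 2*phi(x)*Phi(alpha*x) and the stochastic representation X = delta*|Z0| + sqrt(1 - delta^2)*Z1 with delta = alpha/sqrt(1 + alpha^2). The sketch below illustrates Azzalini's construction, not the new exponential-family proposal of the paper.

import numpy as np
from scipy.stats import norm

def skew_normal_pdf(x, alpha):
    # Azzalini (1985): f(x) = 2 * phi(x) * Phi(alpha * x)
    return 2 * norm.pdf(x) * norm.cdf(alpha * x)

def skew_normal_rvs(alpha, size, rng):
    # Stochastic representation: X = delta*|Z0| + sqrt(1 - delta^2)*Z1
    delta = alpha / np.sqrt(1 + alpha ** 2)
    z0, z1 = rng.normal(size=(2, size))
    return delta * np.abs(z0) + np.sqrt(1 - delta ** 2) * z1

rng = np.random.default_rng(1)
sample = skew_normal_rvs(alpha=5.0, size=50_000, rng=rng)
print(sample.mean(), 5 / np.sqrt(26) * np.sqrt(2 / np.pi))   # E[X] = delta*sqrt(2/pi)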

4.
In this article, a new class of distributions is introduced, which generalizes the linear failure rate distribution and is obtained by compounding this distribution with the power series class of distributions. This new class, called the linear failure rate-power series distributions, contains some new distributions such as the linear failure rate-geometric, linear failure rate-Poisson, linear failure rate-logarithmic, and linear failure rate-binomial distributions, as well as the Rayleigh-power series class of distributions. Some earlier models, such as the exponential-power series class of distributions and the exponential-geometric, exponential-Poisson, and exponential-logarithmic distributions, are special cases of the proposed model. The linear failure rate-power series class can accommodate five possible hazard rate shapes: increasing, decreasing, upside-down bathtub (unimodal), bathtub, and increasing-decreasing-increasing. Several properties of this class, such as moments, a maximum likelihood estimation procedure via an EM algorithm, and large-sample inference, are discussed in this article. To show its flexibility and potential, the fitted results of the new class of distributions and some of its submodels are compared using two real datasets.
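
A minimal sketch of the compounding idea behind such classes, shown for the geometric member: take the minimum of N i.i.d. linear-failure-rate lifetimes, with N drawn from a zero-truncated power series (here geometric) distribution. The parameter values are illustrative and the paper's exact parameterization may differ.

import numpy as np

rng = np.random.default_rng(2)

def lfr_rvs(a, b, size, rng):
    # Linear failure rate: h(t) = a + b*t, so H(t) = a*t + b*t**2/2 and
    # F(t) = 1 - exp(-H(t)); solve H(t) = -log(1 - U) to sample by inversion.
    u = rng.uniform(size=size)
    e = -np.log1p(-u)
    return (-a + np.sqrt(a ** 2 + 2 * b * e)) / b

def lfr_geometric_rvs(a, b, p, size, rng):
    # Compounding: T = min(X_1, ..., X_N), X_i ~ LFR(a, b) i.i.d.,
    # N ~ geometric(p) on {1, 2, ...} -- one member of the power series class.
    n = rng.geometric(p, size=size)
    out = np.empty(size)
    for i, ni in enumerate(n):
        out[i] = lfr_rvs(a, b, ni, rng).min()
    return out

sample = lfr_geometric_rvs(a=0.5, b=1.0, p=0.3, size=10_000, rng=rng)
print(sample.mean())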

5.
The analysis of recurrent failure time data from longitudinal studies can be complicated by the presence of dependent censoring. A substantial literature has developed based on an artificial censoring device. In this article, we explore the connection between this class of methods and truncated data structures. In addition, a new procedure is developed for estimation and inference in a joint model for recurrent events and dependent censoring. Estimation proceeds using a mixed U-statistic based estimating function approach. New resampling-based methods for variance estimation and model checking are also described. The methods are illustrated by application to data from an HIV clinical trial, along with a limited simulation study.

6.
Accelerated life-testing (ALT) is a very useful technique for examining the reliability of highly reliable products. It allows the experimenter to obtain failure data more quickly at increased stress levels than under normal operating conditions. A step-stress model is one special class of ALT, and in this article we consider a simple step-stress model under the cumulative exposure model with lognormally distributed lifetimes in the presence of Type-I censoring. We then discuss maximum likelihood inference for the unknown parameters of the model. Numerical methods, such as the Newton–Raphson and quasi-Newton methods, are discussed for solving the corresponding non-linear likelihood equations. Next, we discuss the construction of confidence intervals for the unknown parameters based on (i) the asymptotic normality of the maximum likelihood estimators (MLEs), and (ii) a parametric bootstrap resampling technique. A Monte Carlo simulation study is carried out to examine the performance of these methods of inference. Finally, a numerical example is presented in order to illustrate all the methods of inference developed here.
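
To give the flavor of the quasi-Newton step, the sketch below fits a Type-I censored lognormal sample by minimizing the negative log-likelihood with BFGS via scipy. It is a generic censored-lognormal fit under assumed parameter values, not the cumulative exposure step-stress likelihood of the article.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
mu_true, sigma_true, tau = 1.0, 0.5, 4.0          # tau = Type-I censoring time

t = rng.lognormal(mu_true, sigma_true, size=200)
obs = t <= tau                                    # indicator of an observed failure
t = np.minimum(t, tau)

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)                     # keep sigma > 0
    z = (np.log(t) - mu) / sigma
    # density contribution for failures, survival contribution for censored units
    ll_obs = -np.log(sigma * t[obs]) + norm.logpdf(z[obs])
    ll_cen = norm.logsf(z[~obs])
    return -(ll_obs.sum() + ll_cen.sum())

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="BFGS")   # quasi-Newton
print(fit.x[0], np.exp(fit.x[1]))                 # estimates of mu and sigma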

7.
In order to quickly extract information on the life of a product, accelerated life-tests are usually employed. In this article, we discuss a k-stage step-stress accelerated life-test with M stress variables when the underlying data are progressively Type-I group censored. The life-testing model assumed is an exponential distribution with a link function that relates the failure rate to the stress variables linearly under the Box–Cox transformation, together with a cumulative exposure model for the effect of stress changes. Both the classical maximum likelihood method and a fully Bayesian method based on the Markov chain Monte Carlo (MCMC) technique are developed for inference on all the parameters of this model. Numerical examples are presented to illustrate all the methods of inference developed here, and a comparison of the ML and Bayesian methods is also carried out.

8.
The two distinct concepts of inference and criterion robustness are illustrated with a simple example based on the one-parameter exponential model. The study of inference and criterion robustness of point and interval estimators of the population mean under a more general exponential power model involves simple but fascinating distribution theory. This example could be usefully exploited in elementary mathematical statistics courses, where the topic of robustness is often neglected.

9.
In this article, we consider the situation, under a life test, in which the failure times of the test units are not related deterministically to an observable stochastic time-varying covariate. In such a case, the joint distribution of the failure time and a marker value is useful for modeling the step-stress life test. Accelerating such an experiment is the main aim of this article. We present a step-stress accelerated model based on a bivariate Wiener process, with one component as the latent (unobservable) degradation process, which determines the failure times, and the other as a marker process, whose degradation values are recorded at the times of failure. Parametric inference based on the proposed model is discussed, and an optimization procedure for obtaining the optimal time for changing the stress level is presented. The optimization criterion is to minimize the approximate variance of the maximum likelihood estimator of a percentile of the products’ lifetime distribution.

10.
The problem of spurious observations has been dealt with from a Bayesian perspective by, among others, Box and Tiao (1968) and in several papers by Guttman with various co-authors, beginning with Guttman (1973). The main objective of these papers has been to obtain posterior distributions of parameters and to base inference on these distributions. In the current paper, the Bayesian argument is carried one step further by deriving predictive distributions of future observations; inferences are then based on these distributions. We obtain predictive results for several models. First, we consider the univariate normal case with one spurious observation; this is then generalized to several spurious observations. The multivariate normal situation is studied next. Finally, we consider the general linear model with normal errors.

11.
Some matrix representations of diverse diagonal arrays are studied in this work; the results allow new definitions of classes of elliptical distributions indexed by kernels mixing Hadamard and usual products. A number of applications are derived in the setting of prior densities for the Bayesian multivariate regression model and families of non-elliptical distributions, such as the matrix multivariate generalized Birnbaum–Saunders density. The approach based on matrix representations of quadratic and inverse quadratic forms can be extended as a methodology for exploring possible new applications in non-standard distributions, matrix transformations, and inference.

12.
We discuss properties of the bivariate family of distributions introduced by Sarmanov (1966). It is shown that the correlation coefficients of this family have a wider range than those of the Farlie-Gumbel-Morgenstern distributions. Possible applications of this family of bivariate distributions as prior distributions in Bayesian inference are discussed. The density of the bivariate Sarmanov distribution with beta marginals can be expressed as a linear combination of products of independent beta densities. This pseudo-conjugate property greatly reduces the complexity of posterior computations when this bivariate beta distribution is used as a prior. Multivariate extensions are derived.
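
For orientation, the Sarmanov (1966) construction is h(x, y) = f1(x) f2(y) [1 + omega * phi1(x) * phi2(y)], where each mixing function phi_i integrates to zero against its marginal. The sketch below uses mean-centred mixing functions with beta marginals; the shape parameters and omega are assumptions chosen so the density stays nonnegative, and they need not match the paper's examples.

import numpy as np
from scipy.stats import beta

a1, b1, a2, b2 = 2.0, 3.0, 4.0, 2.0
m1, m2 = a1 / (a1 + b1), a2 / (a2 + b2)      # marginal means

def sarmanov_beta_pdf(x, y, omega):
    # h(x, y) = f1(x) f2(y) [1 + omega (x - m1)(y - m2)];
    # the centred mixing functions integrate to zero against the marginals,
    # so f1 and f2 remain the marginal densities.
    return beta.pdf(x, a1, b1) * beta.pdf(y, a2, b2) * (
        1 + omega * (x - m1) * (y - m2)
    )

# crude check that the joint density integrates to ~1 on the unit square
g = np.linspace(1e-6, 1 - 1e-6, 400)
X, Y = np.meshgrid(g, g)
print(np.trapz(np.trapz(sarmanov_beta_pdf(X, Y, omega=2.0), g), g))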

13.
Multivariate correlated failure time data arise in many medical and scientific settings. In the analysis of such data, it is important to use models whose parameters have simple interpretations. In this paper, we formulate a model for bivariate survival data based on the Plackett distribution. The model is an alternative to the Gamma frailty model proposed by Clayton and Oakes. The parameter in this distribution has a very appealing odds ratio interpretation for the dependence between the two failure times; in addition, it allows for negative dependence. We develop novel semiparametric estimation and inference procedures for the model, and the asymptotic properties of the estimator are derived. The performance of the proposed techniques in finite samples is examined using simulation studies; in addition, the proposed methods are applied to data from an observational study in cancer.
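
For context, the Plackett family is governed by a single global odds ratio theta (theta = 1 gives independence, theta > 1 positive and 0 < theta < 1 negative dependence). A minimal sketch of the corresponding copula is below; the article models bivariate survival functions, so its exact parameterization may differ.

import numpy as np

def plackett_copula(u, v, theta):
    # C_theta(u, v), where theta is the constant global odds ratio
    #   theta = C(1 - u - v + C) / ((u - C)(v - C)).
    if np.isclose(theta, 1.0):
        return u * v
    s = 1 + (theta - 1) * (u + v)
    return (s - np.sqrt(s ** 2 - 4 * u * v * theta * (theta - 1))) / (2 * (theta - 1))

u, v = 0.4, 0.7
for theta in (0.25, 1.0, 4.0):
    print(theta, plackett_copula(u, v, theta))   # below, equal to, above u*v = 0.28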

14.
In this article, it is shown that many intractable problems of Bayesian inference can be cast in a form called “artificial augmenting regression”, in which the application of Markov chain Monte Carlo techniques, especially Gibbs sampling with data augmentation, becomes rather convenient. The new techniques are illustrated using several challenging statistical problems, and numerical results are presented.
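
A minimal sketch of Gibbs sampling with data augmentation in its textbook form, the Albert-Chib probit sampler with a flat prior. It is offered only to illustrate the general technique, not the article's artificial-augmenting-regression formulation; the data are simulated.

import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)

# simulated probit data: y_i = 1{ x_i' beta + e_i > 0 }, e_i ~ N(0, 1)
n, beta_true = 500, np.array([-0.5, 1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)
beta = np.zeros(2)
draws = []
for it in range(2000):
    # 1) augment: z_i | beta, y_i is N(x_i' beta, 1) truncated to (0, inf)
    #    if y_i = 1, and to (-inf, 0] if y_i = 0
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    z = truncnorm.rvs(lo, hi, loc=mu, scale=1.0, random_state=rng)
    # 2) update: beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}) under a flat prior
    beta = rng.multivariate_normal(XtX_inv @ (X.T @ z), XtX_inv)
    if it >= 500:
        draws.append(beta)

print(np.mean(draws, axis=0))    # posterior means, close to beta_true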

15.
In this paper, we propose a model based on a class of symmetric distributions, which avoids the transformation of data, stabilizes the variance of the observations, and provides robust estimation of parameters and high flexibility for modeling different types of data. Probabilistic and statistical aspects of this new model are developed throughout the article, which include mathematical properties, estimation of parameters and inference. The obtained results are illustrated by means of real genomic data.

16.
The modelling process in Bayesian Statistics constitutes the fundamental stage of the analysis, since depending on the chosen probability laws the inferences may vary considerably. This is particularly true when conflicts arise between two or more sources of information. For instance, inference in the presence of an outlier (which conflicts with the information provided by the other observations) can be highly dependent on the assumed sampling distribution. When heavy‐tailed (e.g. t) distributions are used, outliers may be rejected whereas this kind of robust inference is not available when we use light‐tailed (e.g. normal) distributions. A long literature has established sufficient conditions on location‐parameter models to resolve conflict in various ways. In this work, we consider a location–scale parameter structure, which is more complex than the single parameter cases because conflicts can arise between three sources of information, namely the likelihood, the prior distribution for the location parameter and the prior for the scale parameter. We establish sufficient conditions on the distributions in a location–scale model to resolve conflicts in different ways as a single observation tends to infinity. In addition, for each case, we explicitly give the limiting posterior distributions as the conflict becomes more extreme.

17.

In analyzing failure data pertaining to a repairable system, perhaps the most widely used parametric model is a nonhomogeneous Poisson process with Weibull intensity, more commonly referred to as the Power Law Process (PLP) model. Investigations relating to inference on the parameters of the PLP under a frequentist framework abound in the literature. The focus of this article is to supplement those findings from a Bayesian perspective, which has thus far been explored to a limited extent in this context. The main emphasis is on inference for the intensity function of the PLP. Both estimation and future prediction are considered under traditional as well as more complex censoring schemes. Modern computational tools such as Markov chain Monte Carlo are exploited efficiently to facilitate the numerical evaluation process. Results from the Bayesian inference are contrasted with the corresponding findings from a frequentist analysis, from both a qualitative and a quantitative viewpoint. The developed methodology is implemented in analyzing interval-censored failure data of equipment in a fleet of marine vessels.
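
For concreteness, the PLP has intensity lambda(t) = (beta/eta) * (t/eta)^(beta - 1) and mean function Lambda(t) = (t/eta)^beta. The sketch below simulates one realization by mapping the arrival times of a unit-rate Poisson process through Lambda^{-1}; the symbols beta and eta are generic, not the article's notation.

import numpy as np

rng = np.random.default_rng(5)

def simulate_plp(beta, eta, t_end, rng):
    # Event times of an NHPP with Lambda(t) = (t/eta)**beta are obtained by
    # transforming the arrival times G_1 < G_2 < ... of a unit-rate Poisson
    # process: T_i = Lambda^{-1}(G_i) = eta * G_i**(1/beta).
    times = []
    g = 0.0
    while True:
        g += rng.exponential(1.0)        # next unit-rate arrival
        t = eta * g ** (1.0 / beta)      # map through the inverse mean function
        if t > t_end:
            break
        times.append(t)
    return np.array(times)

events = simulate_plp(beta=2.0, eta=10.0, t_end=50.0, rng=rng)
print(len(events), (50.0 / 10.0) ** 2.0)   # observed count vs. E[N(50)] = Lambda(50) = 25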

18.
This paper reviews some of the key statistical ideas that are encountered when trying to find empirical support for causal interpretations and conclusions by applying statistical methods to experimental or observational longitudinal data. In such data, typically a collection of individuals is followed over time; each individual has a registered sequence of covariate measurements, along with values of control variables that are to be interpreted as causes in the analysis, and finally the individual outcomes or responses are reported. Particular attention is given to the potentially important problem of confounding. We provide conditions under which, at least in principle, unconfounded estimation of the causal effects can be accomplished. Our approach for dealing with causal problems is entirely probabilistic, and we apply Bayesian ideas and techniques to deal with the corresponding statistical inference. In particular, we use the general framework of marked point processes for setting up the probability models, and consider posterior predictive distributions as providing the natural summary measures for assessing the causal effects. We also draw connections to relevant recent work in this area, notably to Judea Pearl's formulations based on graphical models and his calculus of so-called do-probabilities. Two examples illustrating different aspects of causal reasoning are discussed in detail.

19.
As in many studies, the data collected are limited and an exact value is recorded only if it falls within an interval range; hence, the responses can be left-, interval-, or right-censored. Linear (and nonlinear) regression models are routinely used to analyze these types of data and are based on normality assumptions for the error terms. However, such analyses might not provide robust inference when the normality assumptions are questionable. In this article, we develop a Bayesian framework for censored linear regression models by replacing the Gaussian assumptions for the random errors with scale mixtures of normal (SMN) distributions. The SMN class is an attractive class of symmetric heavy-tailed densities that includes the normal, Student-t, Pearson type VII, slash, and contaminated normal distributions as special cases. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo algorithm is introduced to carry out posterior inference. A new hierarchical prior distribution is suggested for the degrees-of-freedom parameter of the Student-t distribution. The likelihood function is utilized not only to compute some Bayesian model selection measures but also to develop Bayesian case-deletion influence diagnostics based on the q-divergence measure. The proposed Bayesian methods are implemented in the R package BayesCR. The newly developed procedures are illustrated with applications using real and simulated data.
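
As a reminder of the SMN idea used above, a Student-t error arises by mixing a normal over a gamma-distributed precision factor. The sketch below demonstrates this representation generically; it is not the BayesCR implementation.

import numpy as np

rng = np.random.default_rng(6)
nu, mu, sigma, n = 4.0, 0.0, 1.0, 200_000

# Scale mixture of normals: if U ~ Gamma(nu/2, rate = nu/2) and
# X | U ~ N(mu, sigma**2 / U), then X ~ mu + sigma * t_nu.
u = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)   # numpy uses scale = 1/rate
x = rng.normal(mu, sigma / np.sqrt(u))

print(x.var(), nu / (nu - 2))    # sample variance vs. Var(t_nu) = nu/(nu - 2) = 2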

20.
In this article, we apply the empirical likelihood method to make inference on the bivariate survival function of paired failure times, estimating the survival function of the censored time with the Kaplan–Meier estimator. Adjusted empirical likelihood (AEL) confidence intervals for the bivariate survival function are developed. We conduct a simulation study to compare the proposed AEL method with other methods; it shows that the proposed AEL method performs better than other existing methods. We illustrate the proposed method by analyzing the skin graft data.
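
For reference, the Kaplan–Meier estimator used in that construction can be computed in a few lines. The sketch below is a generic univariate implementation on made-up data, not the bivariate AEL procedure of the article.

import numpy as np

def kaplan_meier(time, event):
    # S_hat(t) = product over distinct event times t_j <= t of (1 - d_j / n_j),
    # where d_j = failures at t_j and n_j = number still at risk just before t_j.
    order = np.argsort(time)
    time, event = time[order], event[order]
    surv, s = {}, 1.0
    n_at_risk = len(time)
    for t in np.unique(time):
        at_t = time == t
        d = event[at_t].sum()            # observed failures at t
        if d > 0:
            s *= 1 - d / n_at_risk
        surv[t] = s
        n_at_risk -= at_t.sum()          # failures and censorings both leave the risk set
    return surv

t = np.array([2.0, 3.0, 3.0, 5.0, 8.0, 9.0])
d = np.array([1, 1, 0, 1, 0, 1])         # 1 = observed failure, 0 = censored
print(kaplan_meier(t, d))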
