Similar Literature
20 similar documents retrieved
1.
In this paper, we consider non-parametric copula inference under bivariate censoring. Based on an estimator of the joint cumulative distribution function, we define a discrete and two smooth estimators of the copula. The construction that we propose is valid for a large range of estimators of the distribution function and therefore for a large range of bivariate censoring frameworks. Under some conditions on the tails of the distributions, the weak convergence of the corresponding copula processes is obtained in ℓ^∞([0,1]^2). We derive the uniform convergence rates of the copula density estimators deduced from our smooth copula estimators. Investigation of the practical behaviour of these estimators is performed through a simulation study and two real data applications, corresponding to different censoring settings. We use our non-parametric estimators to define a goodness-of-fit procedure for parametric copula models. A new bootstrap scheme is proposed to compute the critical values.
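For intuition, the plug-in construction behind such estimators, C(u, v) = H(F1^{-1}(u), F2^{-1}(v)), can be illustrated in the uncensored special case, where the joint distribution estimator reduces to the empirical CDF. The sketch below is that minimal discrete (empirical) copula estimator, not the censoring-adapted estimators of the paper; the data and function names are purely illustrative.

```python
import numpy as np

def emp_quantile(x, p):
    """Empirical (generalized inverse) quantile F^{-1}(p) of a sample x."""
    xs = np.sort(x)
    idx = np.ceil(p * len(xs)).astype(int) - 1
    return xs[np.clip(idx, 0, len(xs) - 1)]

def empirical_copula(x, y, u, v):
    """Discrete copula estimate C(u, v) = H_n(F1^{-1}(u), F2^{-1}(v)), built from
    the empirical joint CDF H_n and the empirical marginal quantiles
    (uncensored special case), evaluated at the pairs (u_i, v_i)."""
    xq, yq = emp_quantile(x, u), emp_quantile(y, v)
    return np.mean((x[:, None] <= xq) & (y[:, None] <= yq), axis=0)

# toy example with correlated normal data (hypothetical, for illustration only)
rng = np.random.default_rng(0)
L = np.linalg.cholesky(np.array([[1.0, 0.6], [0.6, 1.0]]))
z = rng.normal(size=(1000, 2)) @ L.T
uu = np.linspace(0.1, 0.9, 5)
print(empirical_copula(z[:, 0], z[:, 1], uu, uu))   # estimates along the diagonal u = v
```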

2.
The issue of residual life (RL) estimation plays an important role for products while they are in use, especially for expensive and reliability-critical products. Many products may have two or more performance characteristics (PCs). Here, an adaptive method of RL estimation based on a bivariate Wiener degradation process with time-scale transformations is presented. It is assumed that a product has two PCs, and that each PC is governed by a Wiener process with a time-scale transformation. The dependency of the PCs is characterized by the Frank copula function. Parameters are estimated by using the Bayesian Markov chain Monte Carlo method. Once new degradation information is available, the RL is re-estimated in an adaptive manner. A numerical example about fatigue cracks is given to demonstrate the usefulness and validity of the proposed method.
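For a rough feel of this class of models, the sketch below simulates two degradation paths of the form X_k(t) = μ_k Λ_k(t) + σ_k B(Λ_k(t)) with power time-scale transformations Λ_k(t) = t^γ_k, coupling the per-step standard-normal shocks through a Frank copula sampled by the conditional-inverse method. The coupling of increments and all parameter values are illustrative assumptions; this is not the paper's fitted model or its Bayesian estimation procedure.

```python
import numpy as np
from scipy.stats import norm

def sample_frank_pair(n, theta, rng):
    """Draw n pairs (u, v) from a Frank copula via the conditional-inverse method."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    num = w * (np.exp(-theta) - 1.0)
    den = np.exp(-theta * u) * (1.0 - w) + w
    v = -np.log1p(num / den) / theta
    return u, v

def simulate_degradation(t, mu, sigma, gamma, theta, rng):
    """One pair of paths X_k(t) = mu_k*Lambda_k(t) + sigma_k*B(Lambda_k(t)),
    Lambda_k(t) = t**gamma_k, with the per-step shocks of the two PCs tied
    together by a Frank copula (an illustrative coupling choice)."""
    lam1, lam2 = t ** gamma[0], t ** gamma[1]
    d1, d2 = np.diff(lam1, prepend=0.0), np.diff(lam2, prepend=0.0)
    u, v = sample_frank_pair(len(t), theta, rng)
    z1, z2 = norm.ppf(u), norm.ppf(v)          # dependent standard-normal shocks
    x1 = np.cumsum(mu[0] * d1 + sigma[0] * np.sqrt(d1) * z1)
    x2 = np.cumsum(mu[1] * d2 + sigma[1] * np.sqrt(d2) * z2)
    return x1, x2

rng = np.random.default_rng(1)
t = np.linspace(0.1, 10.0, 50)
# hypothetical parameter values, chosen only for illustration
x1, x2 = simulate_degradation(t, mu=(0.8, 0.5), sigma=(0.2, 0.3),
                              gamma=(1.2, 0.9), theta=5.0, rng=rng)
```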

3.
Owing to its growing importance in maintenance scheduling, the issue of residual life (RL) estimation for highly reliable products based on degradation data has been studied quite extensively. However, most of the existing work only deals with one-dimensional degradation data, which may not be realistic in some cases. Here, an adaptive method of RL estimation is developed based on two-dimensional degradation data. It is assumed that a product has two performance characteristics (PCs) and that the degradation of each PC over time is governed by a non-stationary gamma degradation process. From a practical consideration, it is further assumed that these two PCs are dependent and that their dependency can be characterized by a copula function. As the likelihood function in such a situation is complicated and computationally quite intensive, a two-stage method is used to estimate the unknown parameters of the model. Once new degradation information of the product being monitored becomes available, the random effects are first updated by using the Bayesian method. Following that, the RL at the current time is estimated accordingly. As the degradation data information accumulates, the RL can be re-estimated in an adaptive manner. Finally, a numerical example about fatigue cracks is presented in order to illustrate the proposed model and the developed inferential method.

4.
Modern highly reliable products usually have complex structure and many functions. This means that they may have two or more performance characteristics. All the performance characteristics can reflect the product's performance degradation over time, and they may be independent or dependent. If the performance characteristics are independent, they can be modelled separately. But if they are not independent, it is very important to find the joint distribution function of the performance characteristics for estimating the reliability of the product as accurately as possible. Here, we suppose that a product has two performance characteristics, that the degradation paths of these two performance characteristics are governed by a Wiener process with a time-scale transformation, and that the dependency of the performance characteristics can be described by a copula function. The parameters of the two performance characteristics and the copula function can be estimated jointly. The model in such a situation is very complicated, analytically intractable and cumbersome from a computational viewpoint. For this reason, a Bayesian Markov chain Monte Carlo method is developed for this problem, which allows the estimates of the parameters to be determined in an efficient manner. For an illustration of the proposed model, a numerical example about fatigue cracks is presented.

5.
By adding a resilience parameter to the scale model, a general distribution family called the resilience-scale model is introduced, including the exponential, Weibull, generalized exponential, exponentiated Weibull and exponentiated Lomax distributions as special cases. This paper carries out stochastic comparisons on parallel and series systems with heterogeneous resilience-scaled components. On the one hand, it is shown that more heterogeneity among the resilience-scaled components of a parallel [series] system with an Archimedean [survival] copula leads to better [worse] performance in the sense of the usual stochastic order. On the other hand, the [reversed hazard] hazard rate order is established for two series [parallel] systems consisting of independent heterogeneous resilience-scaled components. The skewness and dispersiveness are also investigated for the lifetimes of two parallel systems consisting of independent heterogeneous and homogeneous [multiple-outlier] resilience-scaled components. Numerical examples are provided to illustrate the effectiveness of our theoretical findings. These results not only generalize and extend some known results in the literature, but also provide guidance for engineers to assemble systems with higher reliability in practical situations.
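As a concrete reading of the family, taking a unit-exponential baseline G in F(x) = [G(λx)]^α gives the generalized exponential distribution. The sketch below is a quick Monte Carlo check of the usual-stochastic-order claim for a two-component parallel system, assuming independent components, a common resilience parameter and a majorized scale vector; these assumptions and the parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def rs_sample(n, alpha, lam, rng):
    """Draw n lifetimes from the resilience-scale model F(x) = G(lam*x)**alpha
    with a unit-exponential baseline G (i.e. the generalized exponential law)."""
    u = rng.uniform(size=n)
    return -np.log1p(-u ** (1.0 / alpha)) / lam

def parallel_lifetime(alphas, lams, n, rng):
    """Lifetime of a parallel system of independent resilience-scaled components
    (the system fails when the last component fails)."""
    comps = np.column_stack([rs_sample(n, a, l, rng) for a, l in zip(alphas, lams)])
    return comps.max(axis=1)

rng = np.random.default_rng(2)
n = 200_000
# common resilience 2.0; the scale vector (0.5, 3.5) majorizes (2.0, 2.0)  (illustrative)
t_het = parallel_lifetime(alphas=(2.0, 2.0), lams=(0.5, 3.5), n=n, rng=rng)
t_hom = parallel_lifetime(alphas=(2.0, 2.0), lams=(2.0, 2.0), n=n, rng=rng)
for x in (0.5, 1.0, 2.0, 4.0):
    # heterogeneous scales should give the (stochastically) larger system lifetime
    print(x, round((t_het > x).mean(), 4), round((t_hom > x).mean(), 4))
```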

6.
This article conducts a Bayesian analysis for bivariate degradation models based on the inverse Gaussian (IG) process. Assume that a product has two quality characteristics (QCs) and that each of the QCs is governed by an IG process. The dependence of the QCs is described by a copula function. A bivariate simple IG process model and three bivariate IG process models with random effects are investigated by using the Bayesian method. In addition, a simulation example is given to illustrate the effectiveness of the proposed methods. Finally, an example about heavy machine tools is presented to validate the proposed models.

7.
Bootstrapping the conditional copula
This paper is concerned with inference about the dependence or association between two random variables conditionally upon the given value of a covariate. A way to describe such conditional dependence is via a conditional copula function. Nonparametric estimators for a conditional copula then lead to nonparametric estimates of conditional association measures such as a conditional Kendall's tau. The limiting distributions of nonparametric conditional copula estimators are rather involved. In this paper we propose a bootstrap procedure for approximating these distributions and their characteristics, and establish its consistency. We apply the proposed bootstrap procedure to construct confidence intervals for conditional association measures, such as a conditional Blomqvist beta and a conditional Kendall's tau. The performance of the proposed methods is investigated via a simulation study involving a variety of models, ranging from models in which the dependence (weak or strong) on the covariate is only through the copula and not through the marginals, to models in which this dependence appears in both the copula and the marginal distributions. As a conclusion we provide practical recommendations for constructing bootstrap-based confidence intervals for the discussed conditional association measures.
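To make the notion of a conditional association measure concrete, the sketch below computes a kernel-weighted conditional Blomqvist beta, 4·C_x(1/2, 1/2) − 1, from a weighted empirical conditional copula. It is a simplified illustration on hypothetical data with a Gaussian kernel, not the exact estimator or the bootstrap procedure studied in the paper.

```python
import numpy as np

def cond_blomqvist_beta(x0, x, y1, y2, h):
    """Kernel-weighted estimate of a conditional Blomqvist beta at covariate value x0:
    beta(x0) = 4*C_{x0}(1/2, 1/2) - 1, where C_{x0} is a weighted empirical copula
    built from Nadaraya-Watson weights (Gaussian kernel, bandwidth h)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    w /= w.sum()
    # weighted conditional pseudo-observations (weighted marginal CDFs at the data points)
    u = np.array([(w * (y1 <= yi)).sum() for yi in y1])
    v = np.array([(w * (y2 <= yi)).sum() for yi in y2])
    c_half = (w * ((u <= 0.5) & (v <= 0.5))).sum()   # weighted empirical copula at (1/2, 1/2)
    return 4.0 * c_half - 1.0

# toy data in which the dependence strength grows with the covariate (hypothetical model)
rng = np.random.default_rng(6)
n = 2000
x = rng.uniform(size=n)
z = rng.normal(size=(n, 2))
rho = 0.8 * x
y1 = z[:, 0]
y2 = rho * z[:, 0] + np.sqrt(1.0 - rho ** 2) * z[:, 1]
print(cond_blomqvist_beta(0.2, x, y1, y2, h=0.1),
      cond_blomqvist_beta(0.8, x, y1, y2, h=0.1))
```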

8.
To assess the reliability of highly reliable products that have two or more performance characteristics (PCs) in an accurate manner, the relations between the PCs should be taken duly into account. If they are not independent, it becomes important to describe the dependence of the PCs. For many products, the constant-stress degradation test cannot provide sufficient data for reliability evaluation, and for this reason an accelerated degradation test is usually performed. In this article, we assume that a product has two PCs, that the PCs are governed by a Wiener process with a time-scale transformation, and that the relationship between the PCs is described by the Frank copula function. The copula parameter depends on stress and is assumed to be a function of the stress level described by a logistic function. Based on these assumptions, a bivariate constant-stress accelerated degradation model is proposed here. Direct likelihood estimation of the parameters of such a model is analytically intractable, and so a Bayesian Markov chain Monte Carlo (MCMC) method is developed here to obtain the parameter estimates efficiently. For an illustration of the proposed model and the method of inference, a simulated example is presented along with the associated computational results.

9.
This article presents flexible new models for the dependence structure, or copula, of economic variables based on a latent factor structure. The proposed models are particularly attractive for relatively high-dimensional applications, involving 50 or more variables, and can be combined with semiparametric marginal distributions to obtain flexible multivariate distributions. Factor copulas generally lack a closed-form density, but we obtain analytical results for the implied tail dependence using extreme value theory, and we verify that simulation-based estimation using rank statistics is reliable even in high dimensions. We consider "scree" plots to aid the choice of the number of factors in the model. The model is applied to daily returns on all 100 constituents of the S&P 100 index, and we find significant evidence of tail dependence, heterogeneous dependence, and asymmetric dependence, with dependence being stronger in crashes than in booms. We also show that factor copula models provide superior estimates of some measures of systemic risk. Supplementary materials for this article are available online.
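A minimal sketch of the simulation step underlying such models is given below, assuming a one-factor structure with a heavy-tailed t-distributed common factor and normal idiosyncratic noise (the article's factor copulas may use other factor distributions); the pseudo-uniform observations are obtained from ranks, mirroring the rank-based estimation idea.

```python
import numpy as np

def simulate_factor_copula(T, betas, nu, rng):
    """Simulate T draws from a one-factor copula: X_i = beta_i*Z + eps_i with a
    heavy-tailed common factor Z ~ t(nu) and standard-normal idiosyncratic noise.
    The copula is read off the ranks of X, so the marginals of X are irrelevant."""
    betas = np.asarray(betas, dtype=float)
    z = rng.standard_t(df=nu, size=T)
    x = z[:, None] * betas[None, :] + rng.normal(size=(T, len(betas)))
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)   # 0-based ranks (no ties w.p. 1)
    return (ranks + 0.5) / T                            # pseudo-uniform observations

rng = np.random.default_rng(3)
u = simulate_factor_copula(T=5000, betas=[1.0] * 10, nu=4.0, rng=rng)
q = 0.05
# crude lower-tail dependence proxy for the first two series: P(U2 < q | U1 < q)
print(np.mean((u[:, 0] < q) & (u[:, 1] < q)) / q)
```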

10.
Consider two parallel systems with their independent components' lifetimes following heterogeneous exponentiated generalized gamma distributions, where the heterogeneity is in both shape and scale parameters. We then obtain the usual stochastic (reversed hazard rate) order between the lifetimes of two systems by using the weak submajorization order between the vectors of shape parameters and the p-larger (weak supermajorization) order between the vectors of scale parameters, under some restrictions on the involved parameters. Further, by reducing the heterogeneity of parameters in each system, the usual stochastic (reversed hazard rate) order mentioned above is strengthened to the hazard rate (likelihood ratio) order. Finally, two characterization results concerning the comparisons of two parallel systems, one with independent heterogeneous generalized exponential components and another with independent homogeneous generalized exponential components, are derived. These characterization results enable us to find some lower and upper bounds for the hazard rate and reversed hazard rate functions of a parallel system consisting of independent heterogeneous generalized exponential components. The results established here generalize some of the known results in the literature concerning the comparisons of parallel systems under generalized exponential and exponentiated Weibull models.

11.
We consider semiparametric multivariate data models based on a copula representation of the common distribution function. A copula is characterized by a parameter of association and by the marginal distribution functions; both the parameter and the marginals are unknown. In this article, we study the estimator of the parameter of association in copulas, with the marginal distribution functions treated as nuisance parameters restricted by the assumption that the components are identically distributed. The results of this work could be used to construct special kinds of tests of homogeneity for random vectors having dependent components.

12.
The use of bivariate distributions plays a fundamental role in survival and reliability studies. In this paper, we consider a location-scale model for bivariate survival times in which a copula is used to model the dependence of the bivariate survival data. For the proposed model, we consider inferential procedures based on maximum likelihood. Gains in efficiency from bivariate models are also examined in the censored data setting. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed and compared to the performance of the bivariate regression model for matched paired survival data. Sensitivity analysis methods such as local and total influence are presented and derived under three perturbation schemes. The martingale marginal and the deviance marginal residual measures are used to check the adequacy of the model. Furthermore, we propose a new measure which we call the modified deviance component residual. The methodology in the paper is illustrated on a lifetime data set for kidney patients.

13.
Summary. We discuss a method for combining different but related longitudinal studies to improve predictive precision. The motivation is to borrow strength across clinical studies in which the same measurements are collected at different frequencies. Key features of the data are heterogeneous populations and an unbalanced design across three studies of interest. The first two studies are phase I studies with very detailed observations on a relatively small number of patients. The third study is a large phase III study with over 1500 enrolled patients, but with relatively few measurements on each patient. Patients receive different doses of several drugs in the studies, with the phase III study containing significantly less toxic treatments. Thus, the main challenges for the analysis are to accommodate heterogeneous population distributions and to formalize borrowing strength across the studies and across the various treatment levels. We describe a hierarchical extension over suitable semiparametric longitudinal data models to achieve the inferential goal. A nonparametric random-effects model accommodates the heterogeneity of the population of patients. A hierarchical extension allows borrowing strength across different studies and different levels of treatment by introducing dependence across these nonparametric random-effects distributions. Dependence is introduced by building an analysis of variance (ANOVA)-like structure over the random-effects distributions for different studies and treatment combinations. Model structure and parameter interpretation are similar to standard ANOVA models. Instead of the unknown normal means as in standard ANOVA models, however, the basic objects of inference are random distributions, namely the unknown population distributions under each study. The analysis is based on a mixture of Dirichlet processes model as the underlying semiparametric model.

14.
The authors show how saddlepoint techniques lead to highly accurate approximations for Bayesian predictive densities and cumulative distribution functions in stochastic model settings where the prior is tractable, but not necessarily the likelihood or the predictand distribution. They consider more specifically models involving predictions associated with waiting times for semi-Markov processes whose distributions are indexed by an unknown parameter θ. Bayesian prediction for such processes when they are not stationary is also addressed; the inverse-Gaussian-based saddlepoint approximation of Wood, Booth & Butler (1993) is shown to deal accurately with the nonstationarity, whereas the normal-based Lugannani & Rice (1980) approximation cannot. Their methods are illustrated by predicting various waiting times associated with M/M/q and M/G/1 queues. They also discuss modifications to the matrix renewal theory needed for computing the moment generating functions that are used in the saddlepoint methods.
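The normal-based Lugannani & Rice approximation mentioned above has a simple closed form: with saddlepoint ŝ solving K'(ŝ) = x for the cumulant generating function K, set w = sign(ŝ)·sqrt(2(ŝx − K(ŝ))) and u = ŝ·sqrt(K''(ŝ)), and approximate F(x) ≈ Φ(w) + φ(w)(1/w − 1/u). The sketch below applies it to a Gamma (Erlang-type) waiting time as an illustrative stand-in for the queueing quantities discussed; it is not the authors' semi-Markov or inverse-Gaussian-based machinery.

```python
import numpy as np
from scipy.stats import norm, gamma

def lugannani_rice_gamma(x, a, lam):
    """Normal-based Lugannani & Rice (1980) saddlepoint approximation to the CDF
    of a Gamma(a, rate=lam) waiting time.  CGF: K(t) = -a*log(1 - t/lam), and the
    saddlepoint s solves K'(s) = a/(lam - s) = x.  The formula has a removable
    singularity at the mean x = a/lam, which this sketch does not handle."""
    s = lam - a / x
    K = -a * np.log(1.0 - s / lam)
    w = np.sign(s) * np.sqrt(2.0 * (s * x - K))
    u = s * np.sqrt(a / (lam - s) ** 2)              # u = s * sqrt(K''(s))
    return norm.cdf(w) + norm.pdf(w) * (1.0 / w - 1.0 / u)

x = np.array([2.0, 4.0, 8.0, 12.0])                  # avoid x = mean = a/lam
print(lugannani_rice_gamma(x, a=5.0, lam=1.0))       # saddlepoint approximation
print(gamma.cdf(x, a=5.0, scale=1.0))                # exact CDF, for comparison
```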

15.
Amarjit Kundu, Statistics, 2018, 52(1): 133-146
In this paper we compare the minima of two independent and heterogeneous samples, each following a Kumaraswamy (Kw)-G distribution, with the same and with different parent distribution functions. The comparisons are carried out with respect to the usual stochastic ordering and the hazard rate ordering with majorized shape parameters of the distributions. The likelihood ratio ordering between the minimum order statistics is established for heterogeneous multiple-outlier Kw-G random variables with the same parent distribution function.

16.
This paper discusses some stochastic models for dependence of observations which include angular ones. First, we provide a theorem which constructs four-dimensional distributions with specified bivariate marginals on certain manifolds such as two tori, cylinders or discs. Some properties of the submodel of the proposed models are investigated. The theorem is also applicable to the construction of a related Markov process, models for incomplete observations, and distributions with specified marginals on the disc. Second, two maximum entropy distributions on the cylinder are discussed. The circular marginal of each model is distributed as the generalized von Mises distribution, which can represent a symmetric or asymmetric, unimodal or bimodal shape. The proposed cylindrical model is applied to two data sets.
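For reference, one common form of the generalized von Mises density has two cosine terms, f(θ) ∝ exp(κ1·cos(θ − μ1) + κ2·cos 2(θ − μ2)), and the sketch below evaluates it with a numerically computed normalizing constant; the parametrization and values are assumptions for illustration, not the paper's fitted cylindrical model.

```python
import numpy as np

def gvm_density(theta, mu1, mu2, k1, k2):
    """Generalized von Mises density on the circle, normalized numerically:
    f(theta) is proportional to exp(k1*cos(theta - mu1) + k2*cos(2*(theta - mu2))).
    Depending on (mu1, mu2, k1, k2) the shape is unimodal or bimodal,
    symmetric or asymmetric."""
    kern = lambda t: np.exp(k1 * np.cos(t - mu1) + k2 * np.cos(2.0 * (t - mu2)))
    grid = np.linspace(-np.pi, np.pi, 4001)
    const = kern(grid).sum() * (grid[1] - grid[0])   # Riemann-sum normalizing constant
    return kern(np.asarray(theta)) / const

# illustrative parameter values only
print(gvm_density(np.linspace(-np.pi, np.pi, 7), mu1=0.0, mu2=0.4, k1=1.0, k2=1.5))
```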

17.
In this paper, we propose novel methods of quantifying expert opinion about prior distributions for multinomial models. Two different multivariate priors are elicited using median and quartile assessments of the multinomial probabilities. First, we start by eliciting a univariate beta distribution for the probability of each category. Then we elicit the hyperparameters of the Dirichlet distribution, as a tractable conjugate prior, from those of the univariate betas through various forms of reconciliation using least-squares techniques. However, a multivariate copula function will give a more flexible correlation structure between the multinomial parameters if it is used as their multivariate prior distribution. So, second, we use beta marginal distributions to construct a Gaussian copula as a multivariate normal distribution function that binds these marginals and expresses the dependence structure between them. The proposed method elicits a positive-definite correlation matrix for this Gaussian copula. The two proposed methods are designed to be used through interactive graphical software written in Java.
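The second construction can be sketched generatively: draw correlated standard normals, push them through Φ to obtain Gaussian-copula uniforms, and map each coordinate through a beta quantile function. The snippet below does exactly that for hypothetical hyperparameters; it does not reproduce the paper's elicitation steps or its handling of the sum-to-one constraint on the multinomial probabilities.

```python
import numpy as np
from scipy.stats import norm, beta

def sample_gaussian_copula_betas(n, corr, a, b, rng):
    """Draw n vectors whose marginals are Beta(a_i, b_i) and whose dependence comes
    from a Gaussian copula with correlation matrix `corr`.  (Sketch only: the
    sum-to-one constraint on multinomial probabilities is not enforced here.)"""
    L = np.linalg.cholesky(corr)
    z = rng.normal(size=(n, len(a))) @ L.T       # correlated standard normals
    u = norm.cdf(z)                              # Gaussian-copula uniforms
    return beta.ppf(u, np.asarray(a), np.asarray(b))

rng = np.random.default_rng(4)
# hypothetical elicited correlation matrix and beta hyperparameters
corr = np.array([[ 1.0, -0.4, -0.3],
                 [-0.4,  1.0, -0.2],
                 [-0.3, -0.2,  1.0]])
p = sample_gaussian_copula_betas(2000, corr, a=[2, 3, 4], b=[8, 7, 6], rng=rng)
print(p.mean(axis=0))
print(np.corrcoef(p, rowvar=False))
```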

18.
Summary. In functional data analysis, curves or surfaces are observed, up to measurement error, at a finite set of locations, for, say, a sample of n individuals. Often, the curves are homogeneous, except perhaps for individual-specific regions that provide heterogeneous behaviour (e.g. 'damaged' areas of irregular shape on an otherwise smooth surface). Motivated by applications with functional data of this nature, we propose a Bayesian mixture model, with the aim of dimension reduction, by representing the sample of n curves through a smaller set of canonical curves. We propose a novel prior on the space of probability measures for a random curve which extends the popular Dirichlet priors by allowing local clustering: non-homogeneous portions of a curve can be allocated to different clusters and the n individual curves can be represented as recombinations (hybrids) of a few canonical curves. More precisely, the proposed prior envisions a conceptual hidden factor with k levels that acts locally on each curve. We discuss several models incorporating this prior and illustrate its performance with simulated and real data sets. We examine theoretical properties of the proposed finite hybrid Dirichlet mixtures, specifically, their behaviour as the number of mixture components goes to ∞ and their connection with Dirichlet process mixtures.

19.
We consider stochastic volatility models that are defined by an Ornstein–Uhlenbeck (OU)-Gamma time change. These models are well suited to modeling financial time series and follow the general framework of the popular non-Gaussian OU models of Barndorff-Nielsen and Shephard. One current problem with these otherwise attractive nontrivial models is, in general, the unavailability of a tractable likelihood-based statistical analysis for the returns of financial assets, which requires the ability to sample from a nontrivial joint distribution. We show that an OU process driven by an infinite-activity Gamma process, that is, an OU-Gamma process, exhibits unique features which allow one to explicitly describe and exactly sample from the relevant joint distributions. This is a consequence of the OU structure and the calculus of Gamma and Dirichlet processes. We develop a particle marginal Metropolis–Hastings algorithm for this type of continuous-time stochastic volatility model and check its performance using simulated data. For illustration, we finally fit the model to S&P 500 index data.
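As a rough illustration of the process itself, the sketch below runs a crude Euler-type recursion for an OU process driven by a Gamma subordinator, v(t+Δ) ≈ e^{-λΔ} v(t) + ΔG, with Gamma-distributed increments ΔG. This is only an approximate simulation under illustrative parameters; the paper's contribution is exact sampling of the relevant joint distributions, which this sketch does not reproduce.

```python
import numpy as np

def simulate_ou_gamma(T, dt, lam, a, b, v0, rng):
    """Crude Euler-type simulation of an OU process driven by a Gamma subordinator:
    v(t+dt) ~= exp(-lam*dt)*v(t) + [G(lam*(t+dt)) - G(lam*t)], where G is a Gamma
    process whose increment over an interval of length h is Gamma(shape=a*h, rate=b).
    Approximate single-path simulation only; parameter values are illustrative."""
    n = int(T / dt)
    v = np.empty(n + 1)
    v[0] = v0
    decay = np.exp(-lam * dt)
    jumps = rng.gamma(shape=a * lam * dt, scale=1.0 / b, size=n)
    for i in range(n):
        v[i + 1] = decay * v[i] + jumps[i]
    return v

rng = np.random.default_rng(5)
v = simulate_ou_gamma(T=10.0, dt=0.001, lam=2.0, a=1.5, b=3.0, v0=0.5, rng=rng)
print(v.mean(), v.var())   # rough check of the long-run level under these toy parameters
```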

20.
Abstract. We propose a Bayesian semiparametric methodology for quantile regression modelling. In particular, working with parametric quantile regression functions, we develop Dirichlet process mixture models for the error distribution in an additive quantile regression formulation. The proposed non-parametric prior probability models allow the shape of the error density to adapt to the data and thus provide more reliable predictive inference than models based on parametric error distributions. We consider extensions to quantile regression for data sets that include censored observations. Moreover, we employ dependent Dirichlet processes to develop quantile regression models that allow the error distribution to change non-parametrically with the covariates. Posterior inference is implemented using Markov chain Monte Carlo methods. We assess and compare the performance of our models using both simulated and real data sets.
