Similar Articles
20 similar articles found.
1.
Asymptotic properties of a class of test statistics when applied to hazard-based residuals arising in survival and reliability models are presented. These test statistics are useful in goodness-of-fit tests and model validation. The properties are obtained by examining the asymptotic properties of generalized residual processes, which are (possibly random) time transformations of the processes associated with the incomplete failure times. Since the time transformations depend on unknown model parameters, the residual processes are obtained by replacing the unknown parameters by their estimators. The results shed light on the effects of estimating parameters to obtain the residual processes. Implications concerning possible pitfalls of some existing model validation procedures utilizing hazard-based residuals and ways to correct these problems are discussed.
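A minimal numpy sketch of the hazard-based (Cox-Snell type) residuals the abstract refers to, for the simplest case of an exponential model fitted by maximum likelihood to right-censored data. The simulated data, the exponential model, and the Nelson-Aalen diagnostic are illustrative assumptions, not taken from the paper; the point is only how the estimated cumulative hazard transforms the observed times into residuals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated right-censored exponential failure times (illustrative data).
n, lam_true = 300, 0.5
t_fail = rng.exponential(1 / lam_true, n)
t_cens = rng.exponential(2.0, n)
time = np.minimum(t_fail, t_cens)
event = (t_fail <= t_cens).astype(int)

# Maximum likelihood estimate of the exponential rate under right censoring.
lam_hat = event.sum() / time.sum()

# Hazard-based (Cox-Snell) residuals: r_i = Lambda_hat(T_i) = lam_hat * T_i.
# Under a correct model they behave like a censored unit-exponential sample, but the
# time transformation uses lam_hat rather than the true rate, which is exactly the
# parameter-estimation effect whose asymptotic consequences the paper studies.
resid = lam_hat * time

# Crude diagnostic: the Nelson-Aalen cumulative hazard of the residuals should lie
# close to the 45-degree line.
order = np.argsort(resid)
r_sorted, d_sorted = resid[order], event[order]
at_risk = np.arange(n, 0, -1)
na_cumhaz = np.cumsum(d_sorted / at_risk)
print(np.column_stack([r_sorted, na_cumhaz])[::60])
```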

2.
This paper considers residuals for time series regression. Despite much literature on visual diagnostics for uncorrelated data, there is little on the autocorrelated case. To examine various aspects of the fitted time series regression model, three residuals are considered. The fitted regression model can be checked using orthogonal residuals; the time series error model can be analysed using marginal residuals; and the white noise error component can be tested using conditional residuals. When used together, these residuals allow identification of outliers, model mis-specification and mean shifts. Due to the sensitivity of conditional residuals to model mis-specification, it is suggested that the orthogonal and marginal residuals be examined first.
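A hedged sketch of the three residual types for a regression with AR(1) errors. The definitions used here (marginal = response minus fitted regression mean, conditional = estimated AR innovations, orthogonal = GLS-whitened residuals) are one common construction and may not match the paper's exact definitions; the two-step Prais-Winsten-style fit is likewise only illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, phi = 200, 0.6

# Regression with AR(1) errors: y_t = b0 + b1 * x_t + e_t, with e_t = phi * e_{t-1} + a_t.
x = np.linspace(0, 10, n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = phi * e[t - 1] + rng.normal()
y = 2.0 + 0.5 * x + e
X = np.column_stack([np.ones(n), x])

# Step 1: OLS, then estimate the AR(1) coefficient from the OLS residuals.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
r_ols = y - X @ beta_ols
phi_hat = np.sum(r_ols[1:] * r_ols[:-1]) / np.sum(r_ols[:-1] ** 2)

# Step 2: feasible GLS by AR(1) whitening (first observation scaled, Prais-Winsten style).
def whiten(v):
    w = np.empty_like(v)
    w[0] = np.sqrt(1 - phi_hat ** 2) * v[0]
    w[1:] = v[1:] - phi_hat * v[:-1]
    return w

Xw = np.column_stack([whiten(X[:, j]) for j in range(X.shape[1])])
yw = whiten(y)
beta_gls = np.linalg.lstsq(Xw, yw, rcond=None)[0]

# Three residual types (one illustrative construction):
marginal = y - X @ beta_gls                            # checks the regression mean
conditional = marginal[1:] - phi_hat * marginal[:-1]   # should resemble white noise
orthogonal = yw - Xw @ beta_gls                        # whitened / GLS residuals

print(beta_gls, phi_hat)
```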

3.
A new family of statistics is proposed to test for the presence of serial correlation in linear regression models. The tests are based on partial sums of lagged cross-products of regression residuals that define a class of interesting Gaussian processes. These processes are characterized in terms of regressor functions, the serial-correlation structure, the distribution of the noise process, and the order of the lag of the cross-products of residuals. It is shown that these four factors affect the lagged residual processes independently. Large-sample distributional results are presented for test statistics under the null hypothesis of no serial correlation or for alternatives from a range of interesting hypotheses. Some indication of the circumstances to which the asymptotic results apply in finite-sample situations, and of those to which they should be applied with some caution, is obtained through a simulation study. Tables of selected quantiles of the proposed tests are also given. The tests are illustrated with two examples taken from the empirical literature. It is also proposed that plots of lagged residual processes be used as diagnostic tools to gain insight into the correlation structure of residuals derived from regression fits.
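A sketch of the partial-sum processes of lagged residual cross-products used as a graphical diagnostic, assuming an OLS fit with (here, simulated) AR(1) errors. The naive standardization by the residual variance is an assumption for display purposes only; the paper derives the correct limiting covariance of these processes.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
n = 300

# Linear regression with AR(1) errors, so serial correlation is present.
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.5 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

X = np.column_stack([np.ones(n), x])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

def lagged_partial_sum_process(r, k):
    """Standardized partial sums of the lag-k cross-products r_t * r_{t-k}."""
    u = r[k:] * r[:-k]
    # Naive standardization; the paper works out the proper limiting covariance.
    return np.cumsum(u) / (np.std(r) ** 2 * np.sqrt(len(u)))

for k in (1, 2, 3):
    plt.plot(lagged_partial_sum_process(resid, k), label=f"lag {k}")
plt.axhline(0.0, color="k", lw=0.5)
plt.legend()
plt.title("Partial-sum processes of lagged residual cross-products")
plt.show()
```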

4.
Conditional and unconditional confidence intervals have been compared by Grice, Bain, and Engelhardt (Commun. Statist. B7 (1978), 515–524) in terms of the location-scale model with double-exponential distribution form. Preference was found for the conditional intervals based on mean length and coverage probability for untrue parameter values. For a location-scale system, these two criteria are shown to be inappropriate for assessing the conditional versus unconditional approaches to inference. The usual ancillarity concept is also noted to be inappropriate. Support for many conditional analyses, however, is found in a more careful formulation of the statistical model.

5.
We investigate mixed analysis of covariance models for the 'one-step' assessment of conditional QT prolongation. Initially, we consider three different covariance structures for the data, where between-treatment covariance of repeated measures is modelled through random effects, through random coefficients, and through a combination of random effects and random coefficients, respectively. In all three of those models, an unstructured covariance pattern is used to model within-treatment covariance. In a fourth model, proposed earlier in the literature, between-treatment covariance is modelled through random coefficients but the residuals are assumed to be independent identically distributed (i.i.d.). Finally, we consider a mixed model with saturated covariance structure. We investigate the precision and robustness of those models by fitting them to a large group of real data sets from thorough QT studies. Our findings suggest: (i) Point estimates of treatment contrasts from all five models are similar. (ii) The random coefficients model with i.i.d. residuals is not robust; the model potentially leads to both under- and overestimation of standard errors of treatment contrasts and therefore cannot be recommended for the analysis of conditional QT prolongation. (iii) The combined random effects/random coefficients model does not always converge; in the cases where it converges, its precision is generally inferior to the other models considered. (iv) Both the random effects and the random coefficients model are robust. (v) The random effects, the random coefficients, and the saturated model have similar precision and all three models are suitable for the one-step assessment of conditional QT prolongation.

6.
The authors propose a novel class of cure rate models for right-censored failure time data. The class is formulated through a transformation on the unknown population survival function. It includes the mixture cure model and the promotion time cure model as two special cases. The authors propose a general form of the covariate structure which automatically satisfies an inherent parameter constraint and includes the corresponding binomial and exponential covariate structures in the two main formulations of cure models. The proposed class provides a natural link between the mixture and the promotion time cure models, and it offers a wide variety of new modelling structures as well. Within the Bayesian paradigm, a Markov chain Monte Carlo computational scheme is implemented for sampling from the full conditional distributions of the parameters. Model selection is based on the conditional predictive ordinate criterion. The use of the new class of models is illustrated with a set of real data involving a melanoma clinical trial.
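A small illustration of the two special cases that the proposed class links: the mixture cure model S(t) = π + (1 − π)S_u(t) and the promotion-time model S(t) = exp(−θF(t)). The exponential latency distribution and the parameter values are illustrative assumptions; the paper's general transformation and its Bayesian fitting are not reproduced.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 400)

# Latency (susceptibles') distribution: exponential with rate 0.7 (illustrative).
rate = 0.7
F = 1 - np.exp(-rate * t)          # cdf of the susceptibles' failure time
S_u = 1 - F

# Mixture cure model: a proportion pi of the population is cured and never fails.
pi = 0.3
S_mixture = pi + (1 - pi) * S_u

# Promotion-time cure model: S(t) = exp(-theta * F(t)), cure fraction exp(-theta).
theta = -np.log(pi)                # chosen so both models share the same cure fraction
S_promotion = np.exp(-theta * F)

plt.plot(t, S_mixture, label="mixture cure")
plt.plot(t, S_promotion, "--", label="promotion time")
plt.axhline(pi, color="k", lw=0.5, label="cure fraction")
plt.xlabel("t")
plt.ylabel("population survival")
plt.legend()
plt.show()
```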

7.
We define residuals for point process models fitted to spatial point pattern data, and we propose diagnostic plots based on them. The residuals apply to any point process model that has a conditional intensity; the model may exhibit spatial heterogeneity, interpoint interaction and dependence on spatial covariates. Some existing ad hoc methods for model checking (quadrat counts, scan statistic, kernel smoothed intensity and Berman's diagnostic) are recovered as special cases. Diagnostic tools are developed systematically, by using an analogy between our spatial residuals and the usual residuals for (non-spatial) generalized linear models. The conditional intensity λ plays the role of the mean response. This makes it possible to adapt existing knowledge about model validation for generalized linear models to the spatial point process context, giving recommendations for diagnostic plots. A plot of smoothed residuals against spatial location, or against a spatial covariate, is effective in diagnosing spatial trend or covariate effects. Q–Q plots of the residuals are effective in diagnosing interpoint interaction.
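A toy numpy sketch of quadrat-based raw and Pearson residuals for a fitted homogeneous Poisson model; the spatstat package in R is the reference implementation of this methodology, and the stand-alone version below (simulated pattern, 4×4 quadrats) is only meant to show the GLM analogy between observed counts and the fitted integral of the conditional intensity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Inhomogeneous Poisson pattern on the unit square with intensity lambda(x, y) = 200 * x,
# simulated by thinning a homogeneous pattern of intensity lam_max.
lam_max = 200.0
n_prop = rng.poisson(lam_max)
pts = rng.uniform(size=(n_prop, 2))
keep = rng.uniform(size=n_prop) < pts[:, 0]      # retention probability lambda / lam_max = x
pts = pts[keep]

# Fit a (deliberately wrong) homogeneous Poisson model: lambda_hat = n / area.
lam_hat = len(pts) / 1.0

# Raw residuals on a 4x4 quadrat grid: observed count minus fitted integral of the intensity.
k = 4
counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=k, range=[[0, 1], [0, 1]])
fitted = lam_hat / k ** 2
raw_resid = counts - fitted

# Pearson-type standardization, analogous to Pearson residuals in a GLM.
pearson_resid = raw_resid / np.sqrt(fitted)
print(np.round(pearson_resid, 2))   # a systematic sign pattern along x reveals the missed trend
```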

8.
Unlike continuous time series, discrete time series such as binary time series are not easy to model. This is due to the fact that there is no unique way to model the correlation structure of the repeated binary data. Some models may also provide a complicated correlation structure with narrow ranges for the correlations. In this paper, we consider a nonlinear dynamic binary time series model that provides a correlation structure which is easy to interpret and whose correlations span the full −1 to 1 range. For the estimation of the parameters of this nonlinear model, we use a conditional generalized quasilikelihood (CGQL) approach which provides the same estimates as those of the well-known maximum likelihood approach. Furthermore, we consider a competitive linear dynamic binary time series model and examine the performance of the CGQL approach through a simulation study in estimating the parameters of this linear model. The model mis-specification effects on estimation as well as forecasting are also examined through simulations.

9.
ARMA–GARCH models are widely used to model the conditional mean and conditional variance dynamics of returns on risky assets. Empirical results suggest heavy-tailed innovations with positive extreme value index for these models. Hence, one may use extreme value theory to estimate extreme quantiles of residuals. Using weak convergence of the weighted sequential tail empirical process of the residuals, we derive the limiting distribution of extreme conditional Value-at-Risk (CVaR) and conditional expected shortfall (CES) estimates for a wide range of extreme value index estimators. To construct confidence intervals, we propose to use self-normalization. This leads to improved coverage vis-à-vis the normal approximation, while delivering slightly wider confidence intervals. A data-driven choice of the number of upper order statistics in the estimation is suggested and shown to work well in simulations. An application to stock index returns documents improvements in CVaR and CES forecasts.
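A sketch of the tail step only: a Hill estimator applied to standardized residuals and a Weissman-type extreme quantile, plugged into assumed one-step conditional mean and volatility forecasts to give extreme CVaR and CES estimates. The ARMA-GARCH fit itself, the sequential tail empirical process theory and the self-normalized confidence intervals are not reproduced; the heavy-tailed residuals, the forecasts mu1 and sig1, and the choices of k and p below are illustrative assumptions.

```python
import numpy as np

def hill(losses_sorted, k):
    """Hill estimate of the extreme value index from the k largest standardized losses."""
    return np.mean(np.log(losses_sorted[:k]) - np.log(losses_sorted[k]))

def tail_quantile(z, k, p):
    """Weissman-type (1-p)-quantile of the loss -Z, based on the k largest order statistics."""
    losses = np.sort(-z)[::-1]
    gamma = hill(losses, k)
    q = losses[k] * (k / (len(z) * p)) ** gamma
    return q, gamma

rng = np.random.default_rng(4)
# z: standardized ARMA-GARCH residuals (an illustrative heavy-tailed stand-in here);
# mu1, sig1: one-step-ahead conditional mean and volatility forecasts (assumed given).
z = rng.standard_t(df=4, size=2000)
mu1, sig1 = 0.0002, 0.012

k, p = 100, 0.001                     # number of upper order statistics and tail level
q, gamma = tail_quantile(z, k, p)

# Conditional VaR and (valid for gamma < 1) conditional expected shortfall, as positive losses.
cvar = sig1 * q - mu1
ces = sig1 * q / (1 - gamma) - mu1
print(f"gamma = {gamma:.3f}, CVaR_{p} = {cvar:.4f}, CES_{p} = {ces:.4f}")
```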

10.
This paper explores the relationship between ignorability, sufficiency and ancillarity in the coarse data model of Heitjan and Rubin. Bayes or likelihood ignorability has a natural relationship to sufficiency, and frequentist ignorability an analogous relationship to ancillarity. Weaker conditions, termed observed likelihood sufficiency, observed specific sufficiency and observed ancillarity, expand the concepts to models where the coarsening mechanism is sometimes, but not always, ignorable.

11.
Conventional approaches for inference about efficiency in parametric stochastic frontier (PSF) models are based on percentiles of the estimated distribution of the one-sided error term, conditional on the composite error. When used as prediction intervals, coverage is poor when the signal-to-noise ratio is low, but improves slowly as sample size increases. We show that prediction intervals estimated by bagging yield much better coverages than the conventional approach, even with low signal-to-noise ratios. We also present a bootstrap method that gives confidence interval estimates for (conditional) expectations of efficiency, and which have good coverage properties that improve with sample size. In addition, researchers who estimate PSF models typically reject models, samples, or both when residuals have skewness in the “wrong” direction, i.e., in a direction that would seem to indicate absence of inefficiency. We show that correctly specified models can generate samples with “wrongly” skewed residuals, even when the variance of the inefficiency process is nonzero. Both our bagging and bootstrap methods provide useful information about inefficiency and model parameters irrespective of whether residuals have skewness in the desired direction.
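A quick simulation of the "wrong skewness" point: a correctly specified normal/half-normal production frontier can produce OLS residuals with positive (wrong-direction) skewness in a sizeable fraction of finite samples even though the inefficiency variance is nonzero. The parameter values are illustrative; the paper's bagging and bootstrap interval estimators are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def ols_residual_skewness(n, sigma_v=1.0, sigma_u=0.5):
    """Simulate y = b0 + b1*x + v - u (v normal, u half-normal) and return OLS residual skewness."""
    x = rng.normal(size=n)
    v = rng.normal(0, sigma_v, n)
    u = np.abs(rng.normal(0, sigma_u, n))      # half-normal inefficiency with nonzero variance
    y = 1.0 + 0.8 * x + v - u
    X = np.column_stack([np.ones(n), x])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return stats.skew(resid)

# For a production frontier the "right" residual skewness is negative; with a low
# signal-to-noise ratio a sizeable share of finite samples is nonetheless wrongly skewed.
skews = np.array([ols_residual_skewness(100) for _ in range(2000)])
print("share of samples with wrong (positive) skewness:", np.mean(skews > 0))
```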

12.
Integer-valued time series models make use of thinning operators to remain coherent with the count nature of the data. However, the thinning operators make residuals unobservable and are the main difficulty in developing diagnostic tools for autocorrelated count data. In this regard, we introduce a new residual, which takes the form of predictive distribution functions, to assess probabilistic forecasts, and this new residual is supplemented by a modified version of the usual residuals. Under integer-valued autoregressive (INAR) models, the properties of these two residuals are investigated and used to evaluate the predictive performance and model adequacy of the INAR models. We compare our residuals with the existing residuals through simulation studies and apply our method to select an appropriate INAR model for an over-dispersed real data set.
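A sketch of a predictive-distribution-function (randomized PIT) residual for a Poisson INAR(1) model, which is one natural reading of the residual described in the abstract but is an assumption rather than the paper's exact definition. The true parameters are used in place of estimates for brevity; under a correct model the PIT values are approximately uniform.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Simulate a Poisson INAR(1) series: X_t = alpha o X_{t-1} + eps_t, eps_t ~ Poisson(lam),
# where "o" is binomial thinning.
alpha, lam, n = 0.5, 2.0, 500
x = np.empty(n, dtype=int)
x[0] = rng.poisson(lam / (1 - alpha))          # stationary Poisson starting value
for t in range(1, n):
    x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)

def conditional_cdf(k, x_prev, alpha, lam, tail=60):
    """P(X_t <= k | X_{t-1} = x_prev): convolution of Binomial(x_prev, alpha) and Poisson(lam)."""
    bin_pmf = stats.binom.pmf(np.arange(x_prev + 1), x_prev, alpha)
    poi_pmf = stats.poisson.pmf(np.arange(tail), lam)
    pmf = np.convolve(bin_pmf, poi_pmf)
    return pmf[: k + 1].sum()

# Randomized PIT residuals (in practice, plug in estimated parameters).
u = np.empty(n - 1)
for t in range(1, n):
    F_hi = conditional_cdf(x[t], x[t - 1], alpha, lam)
    F_lo = conditional_cdf(x[t] - 1, x[t - 1], alpha, lam) if x[t] > 0 else 0.0
    u[t - 1] = F_lo + rng.uniform() * (F_hi - F_lo)

# Under a correctly specified INAR(1), u should look like an i.i.d. Uniform(0, 1) sample.
print(stats.kstest(u, "uniform"))
```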

13.
In this paper we develop multiple case deletion statistics for the general linear model, identifying a residual vector and a leverage matrix whose roles are analogous to those of residuals and leverage in ordinary least squares models. We extend the notion of the conditional deletion diagnostic to general linear models. The residuals, leverage and deletion diagnostics are illustrated with data modelled by a linear growth curve.

14.
A general theory is presented for residuals from the general linear model with correlated errors. It is demonstrated that there are two fundamental types of residual associated with this model, referred to here as the marginal and the conditional residual. These measure respectively the distance to the global aspects of the model as represented by the expected value and the local aspects as represented by the conditional expected value. These residuals may be multivariate. Some important dualities are developed which have simple implications for diagnostics. The results are illustrated by reference to model diagnostics in time series and in classical multivariate analysis with independent cases.

15.
In this paper, we consider joint modelling of repeated measurements and competing risks failure time data. For the competing risks failure time data, we use a semiparametric mixture model in which proportional hazards models are specified for the failure time distribution conditional on cause, and a multinomial model is specified for the marginal distribution of cause conditional on covariates. We also derive a score test based on the joint modelling of repeated measurements and competing risks failure time data to identify longitudinal biomarkers or surrogates for a time-to-event outcome in competing risks data.

16.
In robust and nonparametric MANOVA models, the basic assumptions of independence, homoscedasticity and multinormality of the error components have been relaxed to a certain extent. In mixed-effects MANOVA models, the random effects components (due to concomitant variates) rest on the linearity of the regression function and some other distributional homogeneity conditions that may not hold universally, and avoidance of such regularity conditions generally introduces complications. Some of these difficulties are eliminated here through a conditional functional estimation approach, and in this setup, improved estimation of the fixed effects parameters is presented in a unified manner. Robustness and efficacy of these nonparametric procedures are appraised, and the picture is compared with their parametric as well as semiparametric counterparts.

17.
In the context of ACD models for ultra-high-frequency data, different specifications are available to estimate the conditional mean of intertrade durations, while quantile estimation has been largely neglected in the literature, even though for trading purposes it can be more informative. The main problem with quantile estimation is the correct specification of the durations' probability law: the usual assumption of exponentially distributed residuals is very robust for estimating the parameters of the conditional mean, but dramatically fails as a distributional fit. In this paper a semiparametric approach is formalized and compared with the parametric one derived from the exponential assumption. Empirical evidence for a stock from the Italian financial market strongly supports the semiparametric approach.
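A sketch of the semiparametric idea: a conditional duration quantile obtained as the fitted conditional mean ψ̂_t times an empirical quantile of the standardized durations, compared with the exponential parametric quantile ψ̂_t(−ln(1 − p)). The ACD(1,1)-type recursion used as a stand-in for a fitted model, and all parameter values, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative durations with a non-exponential (Weibull, shape 0.7) error term.
n = 2000
eps = rng.weibull(0.7, n)
eps /= eps.mean()                                 # unit-mean multiplicative errors
psi_true = np.empty(n)
psi_true[0] = 1.0
x = np.empty(n)
for t in range(n):
    if t > 0:
        psi_true[t] = 0.1 + 0.1 * x[t - 1] + 0.8 * psi_true[t - 1]   # ACD(1,1)-type recursion
    x[t] = psi_true[t] * eps[t]

# Stand-in for the fitted conditional means psi_hat_t (in practice: exponential-QMLE ACD fit).
psi_hat = np.empty(n)
psi_hat[0] = x.mean()
for t in range(1, n):
    psi_hat[t] = 0.1 + 0.1 * x[t - 1] + 0.8 * psi_hat[t - 1]

std_dur = x / psi_hat
p = 0.95

# Parametric (exponential) vs semiparametric conditional p-quantile of the standardized duration.
q_exp = -np.log(1 - p)
q_emp = np.quantile(std_dur, p)
print("exponential:", q_exp, " semiparametric:", q_emp)
print("coverage, parametric:", np.mean(x <= psi_hat * q_exp),
      " semiparametric:", np.mean(x <= psi_hat * q_emp))
```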

18.
Bivariate exponential distributions (BVEDs) were introduced by Freund (1961), Marshall and Olkin (1967) and Block and Basu (1974) as models for the distribution of (X,Y), the failure times of dependent components (C1,C2). We study the structure of these models and observe that the Freund model leads to a regular exponential family with a four-dimensional orthogonal parameter. The Marshall-Olkin model, involving three parameters, leads to a conditional or piecewise exponential family, and the Block-Basu model, which also depends on three parameters, is a sub-model of the Freund model and is a curved exponential family. We obtain large-sample tests for symmetry as well as independence of (X,Y) in each of these models by using generalized likelihood ratio tests (GLRTs) or tests based on the MLEs of the parameters and root-n-consistent estimators of their variance-covariance matrices.
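A small simulation of the Marshall-Olkin BVED via its common-shock construction, checking the singular component P(X = Y) and the symmetry of the marginals; the GLRT statistics of the paper are not reproduced, and the parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

def marshall_olkin(n, lam1, lam2, lam12):
    """Shock construction: X = min(Z1, Z12), Y = min(Z2, Z12), with independent exponential Z's."""
    z1 = rng.exponential(1 / lam1, n)
    z2 = rng.exponential(1 / lam2, n)
    z12 = rng.exponential(1 / lam12, n)
    return np.minimum(z1, z12), np.minimum(z2, z12)

x, y = marshall_olkin(100_000, lam1=1.0, lam2=1.0, lam12=0.5)

# The common-shock component makes P(X = Y) positive: lam12 / (lam1 + lam2 + lam12).
print("P(X = Y):", np.mean(x == y), " theory:", 0.5 / 2.5)

# Symmetry (lam1 = lam2) implies exchangeability, so the marginal means should agree;
# the paper builds formal GLRT-type tests of symmetry and of independence (lam12 = 0).
print("means:", x.mean(), y.mean())
```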

19.
In this article, we introduce a new class of lifetime distributions, which includes several known distributions such as the generalized linear failure rate distribution and covers both positively and negatively skewed data. This new four-parameter distribution allows for flexible hazard rate behavior: the hazard rate function can be increasing, decreasing, bathtub-shaped, or upside-down bathtub-shaped. We first study some basic distributional properties of the new model such as the cumulative distribution function, the density of the order statistics, their moments, and the Rényi entropy. Estimation of the stress-strength parameter, an important reliability property, is also studied. The maximum likelihood estimation procedure for complete and censored data and a Bayesian method are used for estimating the parameters involved. Finally, application of the new model to three real datasets illustrates its flexibility and potential compared to rival models.
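A sketch of the hazard shapes for the generalized linear failure rate distribution named in the abstract, taking the special case F(t) = (1 − exp(−(at + bt²/2)))^α; the paper's four-parameter extension is not reproduced, and the parameter choices below are illustrative picks for the different shapes.

```python
import numpy as np
import matplotlib.pyplot as plt

def glfr_hazard(t, a, b, alpha):
    """Hazard of the GLFR distribution with cdf F(t) = (1 - exp(-(a*t + b*t^2/2)))**alpha."""
    w = a * t + 0.5 * b * t ** 2
    F = (1 - np.exp(-w)) ** alpha
    f = alpha * (a + b * t) * np.exp(-w) * (1 - np.exp(-w)) ** (alpha - 1)
    return f / (1 - F)

t = np.linspace(0.01, 4, 400)
# Illustrative parameter choices giving increasing, decreasing and bathtub-like shapes.
for a, b, alpha, lbl in [(1.0, 1.0, 1.0, "increasing"),
                         (1.0, 0.0, 0.3, "decreasing"),
                         (0.2, 1.5, 0.3, "bathtub-like")]:
    plt.plot(t, glfr_hazard(t, a, b, alpha), label=f"a={a}, b={b}, alpha={alpha} ({lbl})")
plt.xlabel("t")
plt.ylabel("hazard")
plt.legend()
plt.show()
```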

20.
A Multivariate Model for Repeated Failure Time Measurements
A parametric multivariate failure time distribution is derived from a frailty-type model with a particular frailty distribution. It covers as special cases certain distributions which have been used for multivariate survival data in recent years. Some properties of the distribution are derived: its marginal and conditional distributions lie within the parametric family, and association between the component variates can be positive or, to a limited extent, negative. The simple closed form of the survivor function is useful for right-censored data, as occur commonly in survival analysis, and for calculating uniform residuals. Also featured is the distribution of ratios of paired failure times. The model is applied to data from the literature.
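A sketch of one concrete frailty-type special case with a closed-form survivor function: a gamma frailty giving S(t₁,…,t_k) = (1 + θΣ_j λ_j t_j)^(−1/θ), together with uniform residuals from the marginal survivor transform. The paper uses a particular frailty distribution that need not be this one, so everything below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(9)

# Gamma-frailty construction: given W ~ Gamma(1/theta, scale=theta) (mean 1), the repeated
# failure times of a subject are independent exponentials with rates W * lam_j.
theta = 0.8
lam = np.array([1.0, 2.0, 0.5])
n = 5000
w = rng.gamma(shape=1 / theta, scale=theta, size=n)
T = rng.exponential(1.0 / np.outer(w, lam))        # n subjects x 3 repeated failure times

# Closed-form joint survivor: S(t1,...,tk) = (1 + theta * sum_j lam_j * t_j)**(-1/theta).
def joint_survivor(t):
    return (1 + theta * np.sum(lam * t)) ** (-1 / theta)

# Check the closed form against the empirical joint survivor at one point.
t0 = np.array([0.5, 0.2, 1.0])
print(joint_survivor(t0), np.mean(np.all(T > t0, axis=1)))

# Uniform residuals via the marginal survivor S_j(t) = (1 + theta * lam_j * t)**(-1/theta);
# with the true (or well-estimated) parameters these are marginally Uniform(0, 1).
U = (1 + theta * lam * T) ** (-1 / theta)
print(np.quantile(U, [0.1, 0.25, 0.5, 0.75, 0.9]))
```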
