Similar literature (20 results)
1.
Normally distributed residuals are a standard assumption in autoregressive models, but in practice we sometimes face non-negative residuals. In this paper, we derive modified maximum likelihood estimators of the residual parameters and of the autoregressive coefficient, and we obtain the asymptotic distributions of these estimators in both stationary and non-stationary models. From these, we derive the asymptotic distributions of the unit-root, Vuong's, and Cox's test statistics in the stationary case. Simulations show that the Akaike information criterion and Vuong's test can select the optimal autoregressive model with non-negative residuals. Vuong's test sometimes declares two competing models equivalent; such models may or may not be genuinely suitable as equivalents, so we use Cox's test to make inferences after model selection. The Kolmogorov–Smirnov test confirms our results. We also compute a tracking interval for choosing between two close competing models when neither Vuong's nor Cox's test can detect a difference.
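For intuition about the non-nested comparison used throughout these abstracts, here is a minimal sketch of Vuong's statistic for two fitted candidate densities. The exponential-versus-half-normal example is hypothetical, not the paper's AR setting, and the basic statistic below omits the complexity correction for estimated parameters.

```python
import numpy as np
from scipy import stats

def vuong_statistic(logf1, logf2):
    """Vuong's test for non-nested models: per-observation log-likelihood
    differences d_i = log f1(y_i) - log f2(y_i). Under the null that the
    models are equally close to the truth, V is asymptotically N(0, 1)."""
    d = np.asarray(logf1) - np.asarray(logf2)
    n = len(d)
    v = np.sqrt(n) * d.mean() / d.std(ddof=1)
    p = 2 * stats.norm.sf(abs(v))   # two-sided p-value
    return v, p

# Hypothetical example: exponential vs. half-normal fit to positive data.
rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=200)
logf1 = stats.expon.logpdf(y, scale=y.mean())                   # MLE scale
logf2 = stats.halfnorm.logpdf(y, scale=np.sqrt((y**2).mean()))  # MLE scale
print(vuong_statistic(logf1, logf2))
```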

2.
We propose a penalized empirical likelihood method, via a bridge estimator, for parameter estimation and variable selection in Cox's proportional hazards model. Under reasonable conditions, we show that the penalized empirical likelihood in Cox's proportional hazards model has the oracle property. A penalized empirical likelihood ratio for the vector of regression coefficients is defined, and its limiting distribution is shown to be chi-square. The advantage of penalized empirical likelihood as a nonparametric likelihood approach is illustrated in hypothesis testing and in constructing confidence sets. The method is further examined through extensive simulation studies and a real example.
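A sketch of the general form of such an objective, assuming a bridge penalty with exponent 0 < γ < 1 (the paper's exact notation may differ):

```latex
% Penalized empirical log-likelihood with a bridge penalty (sketch):
\ell_P(\beta) \;=\; \ell_E(\beta) \;-\; n\,\lambda_n \sum_{j=1}^{p} |\beta_j|^{\gamma},
\qquad 0 < \gamma < 1,
```

where ℓ_E(β) is the empirical log-likelihood built from the Cox-model estimating equations; the penalized empirical likelihood ratio for β is then formed in the usual way and, per the abstract, has a chi-square limit.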

3.
Model selection aims to find the best model. Most of the usual criteria are based on goodness of fit and parsimony and aim to maximize a transformed version of the likelihood. The situation is less clear when two models are equivalent: are they close to the unknown true model, or far from it? Based on simulations, we study the results of Vuong's test, Cox's test, AIC and BIC, and the ability of these four procedures to discriminate between models.

4.
The use of general linear modeling (GLM) procedures based on log-rank scores is proposed for the analysis of survival data and compared to standard survival analysis procedures. For the comparison of two groups, this approach performed similarly to the traditional log-rank test. For more complicated designs without ties in the survival times, the approach was only marginally less powerful than tests from proportional hazards models, and clearly less powerful than a likelihood ratio test for a fully parametric model; with ties in the survival times, however, it proved more powerful than tests from Cox's semi-parametric proportional hazards procedure. The method appears to provide a reasonably powerful alternative for the analysis of survival data: it is easily used in complicated study designs, avoids (semi-)parametric assumptions, and is computationally easy and inexpensive to employ.
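A minimal sketch of the idea, assuming the usual log-rank (Savage-type) scores δ_i − Λ̂(t_i) built from the Nelson–Aalen estimator, which are then treated as the response in an ordinary linear model; variable names are illustrative, and ties are handled crudely.

```python
import numpy as np

def logrank_scores(time, event):
    """Log-rank (Savage-type) scores: score_i = event_i - NelsonAalen(t_i).
    time: observed times; event: 1 = failure, 0 = censored."""
    order = np.argsort(time)
    t, d = np.asarray(time)[order], np.asarray(event)[order]
    n = len(t)
    at_risk = n - np.arange(n)          # risk-set size just before each time
    cumhaz = np.cumsum(d / at_risk)     # Nelson-Aalen estimate at each t_i
    scores = np.empty(n)
    scores[order] = d - cumhaz          # map back to the original ordering
    return scores

# Treat the scores as the response in a general linear model (OLS here).
rng = np.random.default_rng(1)
x = rng.normal(size=100)
time = rng.exponential(scale=np.exp(-0.5 * x))   # covariate shortens survival
event = rng.uniform(size=100) < 0.8              # ~20% random censoring
s = logrank_scores(time, event)
X = np.column_stack([np.ones(100), x])
beta = np.linalg.lstsq(X, s, rcond=None)[0]      # GLM fit on the scores
print(beta)
```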

5.
Model selection problems arise when constructing unbiased or asymptotically unbiased estimators of measures, known as discrepancies, used to find the best model. Most of the usual criteria are based on goodness of fit and parsimony and aim to maximize a transformed version of the likelihood. For linear regression models with normally distributed errors, the situation is less clear when two models are equivalent: are they close to or far from the unknown true model? In this work, based on stochastic and parametric simulation, we study the results of Vuong's test, Cox's test, the Akaike information criterion, the Bayesian information criterion, the Kullback information criterion and the bias-corrected Kullback information criterion, and the ability of these procedures to discriminate between non-nested linear models.
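For reference, a small helper computing some of the criteria compared in this study from a model's maximized log-likelihood. The KIC form below follows Cavanaugh's definition and is an assumption about the paper's exact version; the small-sample bias correction (KICc) is omitted here.

```python
import numpy as np

def aic(loglik, k):
    """Akaike information criterion: -2 logL + 2k."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Bayesian information criterion: -2 logL + k log n."""
    return -2.0 * loglik + k * np.log(n)

def kic(loglik, k):
    """Kullback information criterion, KIC = -2 logL + 3(k + 1)
    (Cavanaugh's form; assumed to match the paper's version)."""
    return -2.0 * loglik + 3.0 * (k + 1)

# Usage: smaller is better; compare two fitted models on the same data.
print(aic(-120.3, k=3), bic(-120.3, k=3, n=100), kic(-120.3, k=3))
```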

6.
As the number of random variables in categorical data increases, the number of log-linear models that can be fitted to the data grows rapidly, and various model selection methods have accordingly been developed. However, models chosen by different selection criteria often do not coincide. In this paper, we propose a comparison method for testing between final models that are non-nested. The statistic of Cox (1961, 1962) is applied to log-linear models for testing non-nested models, and the Kullback–Leibler measure of closeness (Pesaran, 1987) is explored. For log-linear models, pseudo estimators of the expectation and variance of Cox's statistic are derived and shown to be consistent.
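In outline, Cox's statistic for comparing a model f (with MLE θ̂) against a non-nested alternative g (with MLE γ̂) centres the log-likelihood ratio at its estimated expectation under f; a sketch in general notation:

```latex
T_f \;=\; \ell_f(\hat\theta) - \ell_g(\hat\gamma)
      \;-\; \widehat{E}_{\hat\theta}\!\left[\,\ell_f(\hat\theta) - \ell_g(\hat\gamma)\,\right],
```

which, normalized by an estimate of its standard deviation, is asymptotically standard normal; per the abstract, the paper supplies consistent pseudo estimators of the expectation and variance for log-linear models.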

7.
The Dantzig selector (DS) is a recent approach to estimation in high-dimensional linear regression models with a large number of explanatory variables and a relatively small number of observations. As with the least absolute shrinkage and selection operator (LASSO), this approach sets certain regression coefficients exactly to zero, thus performing variable selection. However, unlike the LASSO, this framework had never been used in regression models for survival data with censoring. A key motivation of this article is to study the estimation problem for Cox's proportional hazards (PH) regression model using a framework that extends the theory, the computational advantages and the optimal asymptotic rate properties of the DS to the class of Cox's PH models under appropriate sparsity scenarios. We perform a detailed simulation study to compare our approach with other methods and illustrate it on a well-known microarray gene expression data set for predicting survival from gene expressions.
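For intuition, here is a minimal sketch of the Dantzig selector in its original linear-model form, solved as a linear program; the article's contribution is its extension to Cox's PH partial likelihood, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """Dantzig selector for the linear model:
    minimize ||beta||_1 subject to ||X'(y - X beta)||_inf <= lam,
    written as an LP in the variables z = [beta, u] with |beta_j| <= u_j."""
    n, p = X.shape
    G = X.T @ X
    c = X.T @ y
    obj = np.concatenate([np.zeros(p), np.ones(p)])   # minimize sum(u)
    I = np.eye(p)
    A1 = np.hstack([I, -I])                   #  beta - u <= 0
    A2 = np.hstack([-I, -I])                  # -beta - u <= 0
    A3 = np.hstack([G, np.zeros((p, p))])     #  G beta <= c + lam
    A4 = np.hstack([-G, np.zeros((p, p))])    # -G beta <= lam - c
    A = np.vstack([A1, A2, A3, A4])
    b = np.concatenate([np.zeros(2 * p), c + lam, lam - c])
    bounds = [(None, None)] * p + [(0, None)] * p
    res = linprog(obj, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    return res.x[:p]

# Hypothetical sparse example: only the first two coefficients are nonzero.
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 10))
beta_true = np.zeros(10)
beta_true[:2] = [2.0, -1.5]
y = X @ beta_true + rng.normal(size=50)
print(np.round(dantzig_selector(X, y, lam=20.0), 2))  # most entries ~ 0
```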

8.
The paper proves the geometric optimality of Cox's partial likelihood score functions via estimating functions. As an illustration, Cox's proportional hazards model is considered.
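For reference, Cox's partial likelihood, whose score function the paper shows to be geometrically optimal:

```latex
L(\beta) \;=\; \prod_{i:\,\delta_i = 1}
\frac{\exp\!\big(\beta^{\top} x_i\big)}
     {\sum_{j \in R(t_i)} \exp\!\big(\beta^{\top} x_j\big)},
```

where R(t_i) is the risk set at event time t_i and δ_i is the event indicator; the partial-likelihood score is the gradient of log L(β).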

9.
Cox's (1972) proportional hazards failure time model, already widely used in the analysis of clinical trials, also provides an elegant formalization of the epidemiologic concept of relative risk. When used to compare the disease experience of a study cohort with that of an external control population, it generalizes the notions of the standardized morbidity ratio (SMR) and the proportional morbidity ratio (PMR). For studies in which matched sets of cases and controls are sampled retrospectively from the population at risk, the model provides a flexible tool for the regression analysis of multiple risk factors.

10.
In this paper, we provide a full Bayesian analysis of Cox's proportional hazards model under different hazard rate shape assumptions. To this end, we select the modified Weibull distribution family to model failure rates. A novel Markov chain Monte Carlo method allows one to tackle both exact and right-censored failure time data. Both simulated and real data are used to illustrate the methods.
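A sketch of one common "modified Weibull" parameterization (the Lai–Xie–Murthy form, with cumulative hazard H(t) = a t^b e^{λt}; the paper's exact family may differ), showing how the shape parameters drive the hazard:

```python
import numpy as np

def mw_hazard(t, a, b, lam):
    """Hazard of the modified Weibull with H(t) = a * t**b * exp(lam * t):
    h(t) = a * t**(b-1) * (b + lam * t) * exp(lam * t).
    lam = 0 recovers the ordinary Weibull; lam > 0 with b < 1 gives a
    bathtub-shaped (decreasing then increasing) hazard."""
    t = np.asarray(t, dtype=float)
    return a * t ** (b - 1.0) * (b + lam * t) * np.exp(lam * t)

def mw_survival(t, a, b, lam):
    """Survival function S(t) = exp(-H(t))."""
    t = np.asarray(t, dtype=float)
    return np.exp(-a * t ** b * np.exp(lam * t))

t = np.linspace(0.01, 3.0, 5)
print(mw_hazard(t, a=0.5, b=0.5, lam=1.0))   # bathtub shape
print(mw_survival(t, a=0.5, b=0.5, lam=1.0))
```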

11.
Prognostic studies are essential to understand the role of particular prognostic factors and, thus, to improve prognosis. In most studies, the disease progression trajectory of an individual patient may end in one of several mutually exclusive endpoints or may involve a sequence of different events.

One challenge in such studies concerns separating the effects of putative prognostic factors on these different endpoints and testing the differences between these effects.

In this article, we systematically evaluate and compare, through simulations, the performance of three alternative multivariable regression approaches for analyzing competing risks and multiple-event longitudinal data. The three approaches are: (1) fitting separate event-specific Cox proportional hazards models; (2) the extension of Cox's model to competing risks proposed by Lunn and McNeil; and (3) a Markov multi-state model.

The simulation design is based on a prognostic study of cancer progression, and several simulated scenarios help investigate different methodological issues relevant to the modeling of multiple-event processes of disease progression. The results highlight some practically important issues. Specifically, decreased precision in the observed timing of intermediary (non-fatal) events has a strong negative impact on the accuracy of regression coefficients estimated with either the Cox or the Lunn–McNeil model, while the Markov model is quite robust under the same circumstances. Furthermore, tests based on the Markov and Lunn–McNeil models had similar power for detecting a difference between the effects of the same covariate on the hazards of two mutually exclusive events. The Markov approach also yields an accurate Type I error rate and good empirical power for testing the hypothesis that the effect of a prognostic factor changes after an intermediary event, a hypothesis that cannot be directly tested with the Lunn–McNeil method. Bootstrap-based standard errors improve the coverage rates of the Markov model estimates. Overall, the results of our simulations validate the Markov multi-state model for a wide range of data structures encountered in prognostic studies of disease progression, and may guide end users in choosing the model(s) most appropriate for their specific application.
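To make approach (2) concrete: Lunn and McNeil fit a single Cox model after duplicating each subject's record once per competing cause, with cause indicators (and their covariate interactions) distinguishing the cause-specific hazards. A minimal data-augmentation sketch, with hypothetical column names:

```python
import pandas as pd

def lunn_mcneil_expand(df, causes, cause_col="cause"):
    """Duplicate each record once per competing cause. In each copy,
    'event' = 1 only if the subject's observed cause matches that copy;
    indicator columns flag the cause and can be interacted with covariates."""
    rows = []
    for k in causes:
        block = df.copy()
        block["cause_k"] = k
        block["event"] = (df[cause_col] == k).astype(int)
        rows.append(block)
    out = pd.concat(rows, ignore_index=True)
    for k in causes[1:]:                      # reference cause = causes[0]
        out[f"ind_{k}"] = (out["cause_k"] == k).astype(int)
    return out

# Hypothetical data: cause 0 = censored, causes 1 and 2 compete.
df = pd.DataFrame({"time": [5.0, 3.2, 7.1], "cause": [1, 2, 0],
                   "x": [0.3, -1.1, 0.8]})
print(lunn_mcneil_expand(df, causes=[1, 2]))
```

A Cox model stratified on cause_k, with x and the interaction x × ind_2 as covariates, then estimates the cause-specific effects and their difference.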

12.
The exponential–Poisson (EP) distribution with scale and shape parameters β>0 and λ∈ℝ, respectively, is a lifetime distribution obtained by mixing exponential and zero-truncated Poisson models. The EP distribution has been a good alternative to the gamma distribution for modelling lifetimes, reliability, and time intervals between successive natural disasters. The EP and gamma distributions share some similarities and properties: for example, their densities may be strictly decreasing or unimodal, and their hazard rate functions may be decreasing, increasing or constant, depending on their shape parameters. On the other hand, the EP distribution has several interesting applications based on stochastic representations involving the maximum and minimum of iid exponential variables (with random sample size), which give it a scientific relevance distinguishable from that of the gamma distribution. Given the similarities and distinct scientific uses of these models, one question of interest is how to discriminate between them. With this in mind, we propose a likelihood ratio test based on Cox's statistic to discriminate the EP and gamma distributions. The asymptotic distribution of the normalized logarithm of the ratio of the maximized likelihoods is provided under the two null hypotheses that the data come from an EP or from a gamma distribution. From this we obtain the probabilities of correct selection, and we propose to choose the model that maximizes the probability of correct selection (PCS). We also determine the minimum sample size required to discriminate between the EP and gamma distributions when the PCS and a tolerance level based on some distance are specified in advance. A simulation study evaluating the accuracy of the asymptotic probabilities of correct selection is also presented. The paper is motivated by two applications to real data sets.
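A sketch of the ratio-of-maximized-likelihoods statistic at the heart of this procedure, assuming Kuş's parameterization of the EP density with λ > 0 (the paper may work with an extended λ range):

```python
import numpy as np
from scipy import stats, optimize

def ep_negloglik(params, x):
    """Negative log-likelihood of the exponential-Poisson distribution
    (Kus 2007): f(x) = lam*beta*exp(-lam - beta*x + lam*exp(-beta*x))
                       / (1 - exp(-lam)),  beta, lam > 0."""
    lam, beta = params
    if lam <= 0 or beta <= 0:
        return np.inf
    ll = (np.log(lam * beta) - lam - beta * x + lam * np.exp(-beta * x)
          - np.log1p(-np.exp(-lam)))
    return -ll.sum()

def rml_statistic(x):
    """T = max log-likelihood under EP minus max log-likelihood under gamma;
    its sign (with its asymptotic normal law) is used to pick a model."""
    res = optimize.minimize(ep_negloglik, x0=[1.0, 1.0 / x.mean()],
                            args=(x,), method="Nelder-Mead")
    ll_ep = -res.fun
    a, loc, scale = stats.gamma.fit(x, floc=0)
    ll_gamma = stats.gamma.logpdf(x, a, loc=loc, scale=scale).sum()
    return ll_ep - ll_gamma

rng = np.random.default_rng(3)
x = rng.gamma(shape=2.0, scale=1.0, size=300)
print(rml_statistic(x))   # expected negative when the data are gamma
```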

13.
Ion Grama, Statistics, 2019, 53(4): 807–838.
We propose an extension of the regular Cox proportional hazards model that allows the estimation of the probabilities of rare events. It is known that when the data are heavily censored, the estimate of the tail of the survival distribution is not reliable. To improve the estimate of the baseline survival function within the range of the largest observations and to extend it beyond them, we adjust the tail of the baseline distribution beyond some threshold using an extreme value model, under appropriate assumptions. The survival distributions conditional on the covariates are then easily computed from the baseline. A procedure allowing an automatic choice of the threshold and an aggregated estimate of the survival probabilities is also proposed. The performance is studied by simulations, and an application to two data sets is given.
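The tail-adjustment step can be sketched as follows: below a threshold τ, keep the empirical (or baseline) survival estimate; above it, splice in a generalized Pareto tail S(τ)·(1 + ξ(t − τ)/σ)^(−1/ξ). This is an illustrative peaks-over-threshold construction on fully observed data, not the paper's exact censored-data procedure.

```python
import numpy as np
from scipy import stats

def adjusted_survival(t, times, threshold):
    """Empirical survival below the threshold; a generalized Pareto tail
    S(tau) * GPD_sf(t - tau) above it (POT sketch)."""
    times = np.sort(np.asarray(times, dtype=float))
    s_emp = lambda u: (times > u).mean()
    s_tau = s_emp(threshold)
    exc = times[times > threshold] - threshold      # exceedances over tau
    xi, _, sigma = stats.genpareto.fit(exc, floc=0)
    t = np.asarray(t, dtype=float)
    below = np.array([s_emp(u) for u in t])
    above = s_tau * stats.genpareto.sf(t - threshold, xi, loc=0, scale=sigma)
    return np.where(t <= threshold, below, above)

rng = np.random.default_rng(4)
data = rng.pareto(3.0, size=500) + 1.0   # heavy-tailed lifetimes
print(adjusted_survival([1.5, 3.0, 10.0], data, threshold=2.0))
```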

14.
In this short communication, we extend characterization theorems for distributions based on versions of the Chernoff inequality to the case where the distributions are not necessarily purely discrete or absolutely continuous (in the usual sense), and we relate these to Cox's representation of a survivor function in terms of the hazard measure, as presented by Kotz and Shanbhag (1980). (The original version of the representation had appeared in Cox, 1972.) Some corollaries giving explicit characteristic properties of certain well-known distributions are also presented.
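The representation referred to, written in product-integral form for a hazard measure Λ with continuous part Λ_c and atoms ΔΛ(s) (a standard modern statement, assumed to match the cited version):

```latex
S(t) \;=\; \prod_{(0,\,t]} \big(1 - \mathrm{d}\Lambda(s)\big)
      \;=\; e^{-\Lambda_c(t)} \prod_{s \le t} \big(1 - \Delta\Lambda(s)\big),
```

which reduces to S(t) = e^{−Λ(t)} in the absolutely continuous case and covers mixed discrete–continuous distributions.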

15.
Extensions of Cox's proportional hazards regression model (Cox, 1972) for the analysis of survival data are considered in a more general multistate framework. This framework allows several transient disease states between the initial entry state and death, as well as incorporating possible competing causes of death. Methods for parameter and function estimation within this extension are presented and applied to the analysis of data from the Stanford Heart Transplantation Program (Crowley and Hu, 1977).
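In practice such multistate extensions are fitted on episode-split (counting-process) records, with transplant handled as a time-dependent state; a hypothetical layout for one heart-transplant subject (column names illustrative):

```python
import pandas as pd

# One row per (start, stop] interval during which the covariates are
# constant. 'transplant' switches from 0 to 1 at the transplant time;
# 'event' marks death at the end of a row.
records = pd.DataFrame({
    "id":         [1, 1],
    "start":      [0.0, 35.0],    # days since acceptance into the program
    "stop":       [35.0, 180.0],
    "transplant": [0, 1],         # state occupied during the interval
    "event":      [0, 1],         # death at day 180
})
print(records)
```

Each transition type then gets its own (possibly stratified) Cox hazard, which is how the Stanford data are handled in this framework.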

16.
We revisit the problem of testing homoscedasticity (equality of variances) of several normal populations, which has applications in many statistical analyses, including the design of experiments. The standard textbooks and widely used statistical packages propose a few popular tests, including Bartlett's test, Levene's test and a few adjustments of the latter. Apparently, the popularity of these tests has been based on limited simulation studies carried out a few decades ago. The traditional tests, including the classical likelihood ratio test (LRT), are asymptotic in nature and hence do not perform well for small sample sizes. In this paper we propose a simple parametric bootstrap (PB) modification of the LRT, and compare it against the other popular tests, as well as their PB versions, in terms of size and power. Our comprehensive simulation study busts some popularly held myths about the commonly used tests and sheds new light on this important problem. Though most popular statistical software packages suggest using Bartlett's test, Levene's test, or the modified Levene's test, among a few others, our extensive simulation study, carried out under the normal model as well as several non-normal models, clearly shows that a PB version of the modified Levene's test (which does not use the F-distribution cut-off point as its critical value) and Loh's exact test are the “best” performers in terms of overall size as well as power.
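A minimal sketch of a parametric-bootstrap version of the modified (median-based) Levene test under normality. Because the statistic is location- and scale-invariant under the null of equal variances, the bootstrap samples can be drawn from a standard normal; this is an illustration of the PB idea, not the paper's exact algorithm.

```python
import numpy as np
from scipy import stats

def pb_levene(groups, n_boot=5000, seed=0):
    """Parametric bootstrap p-value for the median-based (modified) Levene
    statistic under normality. Uses the raw statistic, not the F cut-off."""
    stat_obs = stats.levene(*groups, center="median").statistic
    rng = np.random.default_rng(seed)
    sizes = [len(g) for g in groups]
    count = 0
    for _ in range(n_boot):
        sim = [rng.standard_normal(n) for n in sizes]   # H0: equal variances
        if stats.levene(*sim, center="median").statistic >= stat_obs:
            count += 1
    return (count + 1) / (n_boot + 1)

g1 = np.random.default_rng(1).normal(0, 1.0, 15)
g2 = np.random.default_rng(2).normal(5, 1.5, 12)
g3 = np.random.default_rng(3).normal(-2, 1.0, 20)
print(pb_levene([g1, g2, g3]))
```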

17.
Background: Many exposures in epidemiological studies have nonlinear effects, and the problem is to choose an appropriate functional relationship between such exposures and the outcome. One common approach is to investigate several parametric transformations of the covariate of interest and to select a posteriori the function that fits the data best. However, such an approach may result in an inflated Type I error. Methods: Through a simulation study, we generated data from Cox models with different transformations of a single continuous covariate. We investigated the Type I error rate and the power of the likelihood ratio test (LRT) corresponding to three different procedures that considered the same set of parametric dose-response functions. The first, unconditional, approach did not involve any model selection, while the second, conditional, approach was based on a posteriori selection of the parametric function. The proposed third approach was similar to the second, except that it used a corrected critical value for the LRT to ensure a correct Type I error. Results: The Type I error rate of the second approach was twice the nominal size. For simple monotone dose-response functions, the corrected test had power similar to the unconditional approach, while for non-monotone dose-response functions it had higher power. A real-life application focusing on the effect of body mass index on the risk of coronary heart disease death illustrates the advantage of the proposed approach. Conclusion: Our results confirm that a posteriori selection of the functional form of the dose-response relationship induces Type I error inflation. The corrected procedure, which can be applied in a wide range of situations, may provide a good trade-off between Type I error and power.
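A corrected critical value of this kind can be obtained by simulation: under the null of no covariate effect, refit every candidate transformation, record the maximal LRT, and take its (1 − α) quantile. A rough sketch using the lifelines package; the transformations, sample sizes and replication count are illustrative, not the paper's design.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

transforms = {"linear": lambda x: x, "square": lambda x: x**2,
              "sqrt": np.sqrt, "log": np.log}

def max_lrt(x, time, event):
    """Largest likelihood-ratio statistic over the candidate transformations."""
    stats_ = []
    for f in transforms.values():
        df = pd.DataFrame({"z": f(x), "time": time, "event": event})
        cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
        stats_.append(cph.log_likelihood_ratio_test().test_statistic)
    return max(stats_)

rng = np.random.default_rng(5)
null_max = []
for _ in range(200):                       # use many more replicates in practice
    x = rng.uniform(0.5, 2.0, size=150)    # positive exposure
    time = rng.exponential(1.0, size=150)  # H0: no effect of x on survival
    event = np.ones(150, dtype=int)
    null_max.append(max_lrt(x, time, event))
crit = np.quantile(null_max, 0.95)         # corrected critical value
print(crit)                                # exceeds the chi2(1) cut-off 3.84
```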

18.
The Fisher exact test has been unjustly dismissed by some as 'only conditional', whereas it is unconditionally the uniformly most powerful test among all unbiased tests, i.e., tests of size α whose power is never below the nominal level of significance α. The problem with this truly optimal test is that it requires randomization at the critical value(s) to be of exact size α. Obviously, in practice, one does not want to conclude that 'with probability x we have a statistically significant result.' Usually, the hypothesis is rejected only if the test statistic's outcome is more extreme than the critical value, which reduces the actual size considerably.

The randomized unconditional Fisher exact test is constructed (using Neyman-structure arguments) by deriving a conditional randomized test that randomizes at critical values c(t) with probabilities γ(t), both of which depend on the total number of successes T (the complete sufficient statistic for the nuisance parameter, the common success probability) that is conditioned upon.

In this paper, the Fisher exact test is approximated by deriving nonrandomized conditional tests whose critical region includes the critical value only if γ(t) > γ0, for a fixed threshold value γ0, chosen such that the size of the unconditional modified test is, for all values of the nuisance parameter (the common success probability), smaller than but as close as possible to α. It will be seen that this greatly improves the size of the test compared with the conservative nonrandomized Fisher exact test.

Size, power and p-value comparisons with the (virtual) randomized Fisher exact test, the conservative nonrandomized Fisher exact test, Pearson's chi-square test, the more competitive mid-p value, McDonald's modification, and Boschloo's modification are performed under the assumption of two binomial samples.
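To illustrate two of the competitors: the conservative one-sided Fisher p-value and its mid-p modification, both from the hypergeometric null distribution of the top-left cell (the 2×2 counts below are hypothetical):

```python
from scipy.stats import hypergeom

def fisher_one_sided(a, b, c, d):
    """One-sided (greater) Fisher p-value and mid-p for table [[a, b], [c, d]].
    Under H0, a ~ Hypergeometric(M = total, n = a + b, N = a + c)."""
    M, n, N = a + b + c + d, a + b, a + c
    p_exact = hypergeom.sf(a - 1, M, n, N)                  # P(X >= a)
    p_mid = hypergeom.sf(a, M, n, N) + 0.5 * hypergeom.pmf(a, M, n, N)
    return p_exact, p_mid

print(fisher_one_sided(7, 3, 2, 8))   # mid-p < exact p: less conservative
```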

19.
Left-truncated and right-censored (LTRC) data are encountered frequently because of prevalent cohort sampling in follow-up studies. Owing to the skewness of the distribution of survival time, quantile regression is a useful alternative to Cox's proportional hazards model and the accelerated failure time model for survival analysis. In this paper, we apply the quantile regression model to LTRC data and develop an unbiased estimating equation for the regression coefficients. The proposed estimation method uses an inverse-probability weighting technique based on the probabilities of truncation and censoring. The resulting estimator is uniformly consistent and asymptotically normal. The finite-sample performance of the proposed method is evaluated using extensive simulation studies. Finally, an analysis of real data is presented to illustrate the proposed method.
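A simplified sketch of the inverse-probability weighting idea for the right-censoring part only (the paper additionally weights by estimated truncation probabilities): uncensored subjects are reweighted by the reverse Kaplan–Meier estimate of the censoring survival, and a weighted check loss is minimized. All names below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def censoring_survival(time, event):
    """Reverse Kaplan-Meier: KM estimate of the censoring distribution,
    evaluated at each subject's observed time (censorings are the 'events')."""
    order = np.argsort(time)
    t, d = np.asarray(time)[order], 1 - np.asarray(event)[order]
    n = len(t)
    at_risk = n - np.arange(n)
    surv = np.cumprod(1.0 - d / at_risk)
    out = np.empty(n)
    out[order] = surv
    return np.clip(out, 1e-6, None)

def ipcw_quantreg(time, event, X, tau):
    """Weighted check-loss estimation: only uncensored subjects contribute,
    with weights delta_i / G_hat(t_i)."""
    w = np.asarray(event) / censoring_survival(time, event)
    y = np.log(time)
    def loss(beta):
        r = y - X @ beta
        return np.sum(w * r * (tau - (r < 0)))   # check loss rho_tau
    res = minimize(loss, x0=np.zeros(X.shape[1]), method="Nelder-Mead")
    return res.x

rng = np.random.default_rng(6)
n = 300
x = rng.normal(size=n)
T = np.exp(1.0 + 0.5 * x + rng.normal(scale=0.5, size=n))   # AFT-type truth
C = rng.exponential(scale=20.0, size=n)                     # random censoring
time, event = np.minimum(T, C), (T <= C).astype(int)
X = np.column_stack([np.ones(n), x])
print(ipcw_quantreg(time, event, X, tau=0.5))   # roughly (1.0, 0.5)
```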

20.
A modification of Tiku's (1981) test, which may be seriously biased, is proposed. The modified test is at most marginally biased and is substantially more powerful. A ratio test based on Tiku's (1967) modified likelihood function is also proposed and shown to have power comparable to that of the ratio test based on the likelihood function. The proposed ratio test is, however, much easier from a computational viewpoint.
