Similar Articles
20 similar articles found.
1.
This paper presents two-sample statistics suited for testing equality of survival functions against improper semi-parametric accelerated failure time alternatives. These tests are designed for comparing either the short- or the long-term effect of a prognostic factor, or both. These statistics are obtained as partial likelihood score statistics from a time-dependent Cox model. As a consequence, the proposed tests can be very easily implemented using widely available software. A breast cancer clinical trial is presented as an example to demonstrate the utility of the proposed tests.
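As a rough illustration only (not the authors' partial likelihood score statistics), the Python sketch below computes a generic weighted two-sample log-rank statistic of the kind such score tests reduce to; the weight function and the toy data are assumptions made up for the example.

```python
import numpy as np

def weighted_logrank(time, event, group, w=lambda t: np.ones_like(t, dtype=float)):
    """Generic weighted log-rank Z statistic comparing group 1 vs group 0.

    time  : observed times (event or censoring)
    event : 1 = event observed, 0 = censored
    group : 0/1 group labels
    w     : weight function evaluated at the distinct event times
    """
    time, event, group = map(np.asarray, (time, event, group))
    event_times = np.unique(time[event == 1])
    wts = w(event_times)

    num, var = 0.0, 0.0
    for t, wt in zip(event_times, wts):
        at_risk = time >= t                        # risk set just before t
        n_j = at_risk.sum()
        n1_j = (at_risk & (group == 1)).sum()
        d_j = ((time == t) & (event == 1)).sum()   # events at t
        d1_j = ((time == t) & (event == 1) & (group == 1)).sum()

        num += wt * (d1_j - d_j * n1_j / n_j)      # observed minus expected
        if n_j > 1:
            var += wt**2 * d_j * (n1_j / n_j) * (1 - n1_j / n_j) * (n_j - d_j) / (n_j - 1)

    return num / np.sqrt(var)

# toy example (illustrative data, not from the paper)
rng = np.random.default_rng(0)
t0, t1 = rng.exponential(1.0, 50), rng.exponential(1.5, 50)
time = np.concatenate([t0, t1])
event = np.ones(100, dtype=int)
group = np.repeat([0, 1], 50)
print(weighted_logrank(time, event, group))             # standard log-rank
print(weighted_logrank(time, event, group, w=np.sqrt))  # emphasizes later times
```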

2.
In this article, a robust multistage parameter estimator is proposed for nonlinear regression with heteroscedastic variance, where the residual variances are modeled as a general parametric function of the predictors. The motivation is based on the chi-square distribution of the calculated sample variance of the data. It is shown that outliers that are influential for the nonlinear regression parameter estimates are not necessarily influential for the sample variance. This motivates us not only to robustify the estimates of the parameters of both the regression function and the variance function, but also to replace the sample variance of the data with a robust scale estimate.

3.
A parametric robust test is proposed for comparing several coefficients of variation. This test is derived by properly correcting the normal likelihood function according to the technique suggested by Royall and Tsou. The proposed test statistic is asymptotically valid for general random variables, as long as their underlying distributions have finite fourth moments.

Simulation studies and real data analyses are provided to demonstrate the effectiveness of the novel robust procedure.

4.
In this paper we study the asymptotic theory of M-estimates and their associated tests for a one-factor experiment in a randomized block design. In this case one natural asymptotic theory corresponds to leaving the number of treatments fixed and letting the number of blocks tend to infinity. The classic asymptotic theory of M-estimates does not apply here, because the number of parameters and the number of observations are of the same order. In this paper we prove the consistency and asymptotic normality of the estimators of the treatment effects. It turns out that the asymptotic covariance matrix of the treatment effects estimators differs from the one derived from the classic theory of M-estimates for the linear model with a fixed number of parameters. We also study a test for treatment effects derived from M-estimates and we compare by Monte Carlo simulation the efficiency of this test with respect to the F-test, the Friedman test and the test based on aligned ranks.

5.
The determination of optimal sample sizes for estimating the difference between population means to a desired degree of confidence and precision is a question of economic significance. This question, however, is generally not discussed in statistics texts. Sample sizes to minimize linear sampling costs are proportional to the population standard deviations and inversely proportional to the square roots of the unit sampling costs. Sensitivity analysis shows that the impact of the use of equal rather than optimal sample sizes on the amount of sampling and its cost is not great as long as the unit costs and population variances are comparable.
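A minimal sketch of the allocation rule described above (sample sizes proportional to σi/√ci), assuming the goal is a confidence interval for μ1 − μ2 of half-width d at level 1 − α and a linear total cost c1·n1 + c2·n2; the function name and the numerical values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def optimal_sizes(sigma1, sigma2, c1, c2, d, alpha=0.05):
    """Cost-minimizing n1, n2 for estimating mu1 - mu2 to within +/- d.

    Minimizes c1*n1 + c2*n2 subject to
    Var(xbar1 - xbar2) = sigma1^2/n1 + sigma2^2/n2 = (d / z_{alpha/2})^2,
    which gives n_i proportional to sigma_i / sqrt(c_i).
    """
    z = norm.ppf(1 - alpha / 2)
    v = (d / z) ** 2                               # required variance of the difference
    k = sigma1 * np.sqrt(c1) + sigma2 * np.sqrt(c2)
    n1 = sigma1 / np.sqrt(c1) * k / v
    n2 = sigma2 / np.sqrt(c2) * k / v
    return int(np.ceil(n1)), int(np.ceil(n2))

# example: unequal spreads and unequal unit sampling costs (illustrative values)
print(optimal_sizes(sigma1=4.0, sigma2=2.0, c1=1.0, c2=4.0, d=1.0))
```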

6.
This work is devoted to robust principal component analysis (PCA). We give a comparison between some multivariate estimators of location and scatter by computing the influence functions of the sensitivity coefficient ρ corresponding to these estimators, and the mean squared error (MSE) of estimators of ρ. The coefficient ρ measures the closeness between the subspaces spanned by the initial eigenvectors and their corresponding version derived from an infinitesimal perturbation of the data distribution.
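The paper's coefficient ρ is defined through influence functions; the sketch below only illustrates the underlying notion of closeness between eigenvector subspaces, here measured by the average squared cosine of the principal angles before and after a single-outlier perturbation (the function names and data are illustrative assumptions).

```python
import numpy as np

def subspace_closeness(V1, V2):
    """Closeness of the column spans of V1 and V2 (both orthonormal, p x k).

    Returns the average squared cosine of the principal angles:
    1 means identical subspaces, 0 means orthogonal ones.
    """
    s = np.linalg.svd(V1.T @ V2, compute_uv=False)  # cosines of principal angles
    return np.mean(s ** 2)

def leading_eigvecs(X, k):
    """Top-k eigenvectors of the classical sample covariance."""
    C = np.cov(X, rowvar=False)
    _, vecs = np.linalg.eigh(C)                     # eigenvalues in ascending order
    return vecs[:, -k:]

rng = np.random.default_rng(1)
X = rng.multivariate_normal(np.zeros(5), np.diag([5, 3, 1, 1, 1]), size=200)
V = leading_eigvecs(X, k=2)

X_pert = np.vstack([X, 20 * np.ones(5)])            # one gross outlier
V_pert = leading_eigvecs(X_pert, k=2)

print(subspace_closeness(V, V_pert))                # sensitivity of classical PCA
```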

7.
8.
The two-sample location-scale problem arises in many situations such as climate dynamics, bioinformatics, medicine, and finance. To address this problem, the nonparametric approach is considered because, in practice, the normality assumption is often not fulfilled or the observations are too few to rely on the central limit theorem; moreover, outliers, heavy tails, and skewness may be present. In these situations a nonparametric test is generally more robust and powerful than a parametric test. Various nonparametric tests have been proposed for the two-sample location-scale problem. In particular, we consider tests due to Lepage, Cucconi, Podgor–Gastwirth, Neuhäuser, Zhang, and Murakami. So far, these tests have not been compared with one another, and for the Neuhäuser and Murakami tests the power has not been studied in detail. The aim of the article is to review and compare these tests for the joint detection of location and scale changes by means of a very detailed simulation study. It is shown that both the Podgor–Gastwirth test and the computationally simpler Cucconi test are preferable. Two actual examples within the medical context are discussed.
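As an illustration of one of the tests mentioned, here is a minimal Python sketch of the Cucconi location-scale test using its standard rank-based statistic, with a permutation p-value so that validity does not hinge on the asymptotic approximation; ties are not handled and the toy data are illustrative.

```python
import numpy as np

def cucconi_stat(x, y):
    """Cucconi location-scale statistic (large values indicate a difference)."""
    n1, n2 = len(x), len(y)
    N = n1 + n2
    ranks = np.empty(N)
    ranks[np.argsort(np.concatenate([x, y]))] = np.arange(1, N + 1)
    S = ranks[n1:]                                   # ranks of the second sample
    denom = np.sqrt(n1 * n2 * (N + 1) * (2 * N + 1) * (8 * N + 11) / 5)
    U = (6 * np.sum(S ** 2) - n2 * (N + 1) * (2 * N + 1)) / denom
    V = (6 * np.sum((N + 1 - S) ** 2) - n2 * (N + 1) * (2 * N + 1)) / denom
    rho = 2 * (N ** 2 - 4) / ((2 * N + 1) * (8 * N + 11)) - 1
    return (U ** 2 + V ** 2 - 2 * rho * U * V) / (2 * (1 - rho ** 2))

def cucconi_test(x, y, n_perm=2000, seed=0):
    """Permutation p-value for the Cucconi statistic."""
    rng = np.random.default_rng(seed)
    obs = cucconi_stat(x, y)
    z = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(z)
        count += cucconi_stat(z[:len(x)], z[len(x):]) >= obs
    return obs, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(42)
a = rng.normal(0.0, 1.0, 30)
b = rng.normal(0.5, 2.0, 30)                         # shifted location and scale
print(cucconi_test(a, b))
```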

9.
Elementary inductive proofs are presented for the binomial approximation to the hypergeometric distribution, the density of an order statistic, and the distribution of … when X1, …, Xn are a sample from N(μ, 1).
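A quick numerical check of the first result, comparing hypergeometric probabilities with their binomial approximation when the sample is small relative to the population; the numbers are illustrative.

```python
import numpy as np
from scipy.stats import hypergeom, binom

# Population of M items, K of them "successes"; draw n without replacement.
M, K, n = 10_000, 3_000, 20
k = np.arange(n + 1)

hyp = hypergeom.pmf(k, M, K, n)      # exact, without replacement
bino = binom.pmf(k, n, K / M)        # binomial approximation with p = K/M

print(np.max(np.abs(hyp - bino)))    # small when n << M
```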

10.
A test for two-sided equivalence of means has been developed under the assumption of normally distributed populations with heterogeneous variances. Its rejection region is limited by functions ±h that depend on the empirical variances. h is stated implicitly by a partial differential equation, an exact solution of which would provide a test that is exactly similar at the boundary of the null hypothesis of non-equivalence. h is approximated by a Taylor series up to third powers in the reciprocal number of degrees of freedom. This suffices to obtain error probabilities of the first kind that are very close to a nominal level of α = 0.05 at the boundary of the null hypothesis. For more than 10 data points in each group, they range between 0.04995 and 0.05005, and are thus much more precise than those obtained by other authors.

11.
A class of test statistics is introduced which is sensitive against the alternative of stochastic ordering in the two-sample censored data problem. The test statistics for evaluating a cumulative weighted difference in survival distributions are developed while taking into account the imbalances in base-line covariates between two groups. This procedure can be used to test the null hypothesis of no treatment effect, especially when base-line hazards cross and prognostic covariates need to be adjusted. The statistics are semiparametric, not rank based, and can be written as integrated weighted differences in estimated survival functions, where these survival estimates are adjusted for covariate imbalances. The asymptotic distribution theory of the tests is developed, yielding test procedures that are shown to be consistent under a fixed alternative. The choice of weight function is discussed and relies on stability and interpretability considerations. An example taken from a clinical trial for acquired immune deficiency syndrome is presented.
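The statistics in the paper adjust the survival estimates for covariate imbalances; the sketch below shows only the unadjusted core quantity, an integrated weighted difference of Kaplan–Meier curves evaluated on a common time grid (the weight function, grid, and data are illustrative assumptions).

```python
import numpy as np

def km_curve(time, event):
    """Kaplan-Meier estimate: returns (distinct event times, S at those times)."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    ts = np.unique(time[event == 1])
    s, surv = 1.0, []
    for t in ts:
        n_risk = np.sum(time >= t)
        d = np.sum((time == t) & (event == 1))
        s *= 1 - d / n_risk
        surv.append(s)
    return ts, np.array(surv)

def km_at(grid, ts, surv):
    """Evaluate the right-continuous step function S(t) on a grid."""
    idx = np.searchsorted(ts, grid, side="right") - 1
    return np.where(idx < 0, 1.0, surv[np.clip(idx, 0, len(surv) - 1)])

def integrated_weighted_diff(t1, e1, t2, e2, w=lambda t: np.ones_like(t)):
    """Riemann-sum approximation of the integral of w(t) * (S1(t) - S2(t)) dt."""
    tau = min(np.max(t1), np.max(t2))               # common follow-up window
    grid = np.linspace(0.0, tau, 500)
    s1 = km_at(grid, *km_curve(t1, e1))
    s2 = km_at(grid, *km_curve(t2, e2))
    return np.sum(w(grid) * (s1 - s2)) * (grid[1] - grid[0])

rng = np.random.default_rng(3)
t1, t2 = rng.exponential(1.0, 80), rng.exponential(1.4, 80)
e1 = e2 = np.ones(80, dtype=int)
print(integrated_weighted_diff(t1, e1, t2, e2))
```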

12.
This article presents results concerning the performance of both single-equation and system panel cointegration tests and estimators. The study considers the tests developed in Pedroni (1999, 2004), Westerlund (2005), Larsson et al. (2001), and Breitung (2005), and the estimators developed in Phillips and Moon (1999), Pedroni (2000), Kao and Chiang (2000), Mark and Sul (2003), Pedroni (2001), and Breitung (2005). We study the impact of stable autoregressive roots approaching the unit circle, of I(2) components, of short-run cross-sectional correlation, and of cross-unit cointegration on the performance of the tests and estimators. The data are simulated from three-dimensional individual-specific VAR systems with cointegrating ranks varying from zero to two for fourteen different panel dimensions. The usual specifications of deterministic components are considered.

13.
In a model of equioverlapping samples, maximum likelihood estimation of a Poisson parameter is examined and compared with two linear unbiased estimators in terms of mean squared error. Since the likelihood estimator is generally not available in closed form, a simulation study has been performed and the results are illustrated.

14.
15.
In a previous article, we investigated the performance of several classification methods for cDNA microarrays. Via simulations, various experimental settings could be explored without having to conduct expensive microarray studies. For the selection of the genes on which classification was based, one particular method was applied. Gene selection is, however, a very important aspect of classification. We extend the previous study by considering several gene selection methods. Furthermore, the stability of the methods with respect to distributional assumptions is examined by also considering data simulated from symmetric and asymmetric Laplace distributions, in addition to normally distributed microarray data.
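One common gene selection method of the kind compared in such studies is ranking genes by an ordinary two-sample t statistic; the sketch below is a generic illustration and not necessarily the particular method used in the earlier article (the data and the number of selected genes are made up).

```python
import numpy as np

def select_genes_by_t(X, y, k=50):
    """Rank genes (columns of X) by |two-sample t| and return the top-k indices.

    X : (samples, genes) expression matrix
    y : binary class labels (0/1)
    """
    X, y = np.asarray(X, float), np.asarray(y)
    A, B = X[y == 0], X[y == 1]
    m = A.mean(0) - B.mean(0)
    se = np.sqrt(A.var(0, ddof=1) / len(A) + B.var(0, ddof=1) / len(B))
    t = m / se
    return np.argsort(-np.abs(t))[:k]

# toy data: 40 samples, 1000 genes, the first 10 genes differentially expressed
rng = np.random.default_rng(7)
y = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 1000))
X[y == 1, :10] += 1.5
print(select_genes_by_t(X, y, k=10))
```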

16.
17.
Prognostic studies are essential to understand the role of particular prognostic factors and, thus, improve prognosis. In most studies, the disease progression trajectories of individual patients may end with one of several mutually exclusive endpoints or may involve a sequence of different events.

One challenge in such studies concerns separating the effects of putative prognostic factors on these different endpoints and testing the differences between these effects.

In this article, we systematically evaluate and compare, through simulations, the performance of three alternative multivariable regression approaches to analyzing competing risks and multiple-event longitudinal data. The three approaches are: (1) fitting separate event-specific Cox proportional hazards models; (2) the extension of Cox's model to competing risks proposed by Lunn and McNeil; and (3) a Markov multi-state model.

The simulation design is based on a prognostic study of cancer progression, and several simulated scenarios help investigate different methodological issues relevant to the modeling of multiple-event processes of disease progression. The results highlight some practically important issues. Specifically, decreased precision in the observed timing of intermediary (non-fatal) events has a strong negative impact on the accuracy of regression coefficients estimated with either the Cox or the Lunn-McNeil model, while the Markov model appears to be quite robust under the same circumstances. Furthermore, the tests based on both the Markov and Lunn-McNeil models had similar power for detecting a difference between the effects of the same covariate on the hazards of two mutually exclusive events. The Markov approach also yields an accurate Type I error rate and good empirical power for testing the hypothesis that the effect of a prognostic factor changes after an intermediary event, which cannot be directly tested with the Lunn-McNeil method. Bootstrap-based standard errors improve the coverage rates for Markov model estimates. Overall, the results of our simulations validate the Markov multi-state model for a wide range of data structures encountered in prognostic studies of disease progression, and may guide end users regarding the choice of the model(s) most appropriate for their specific application.
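A minimal sketch of the data-augmentation step behind the Lunn–McNeil approach, which stacks one record per subject and cause so that a single Cox fit can estimate cause-specific effects; the column names, cause codes, and toy data are illustrative assumptions, and the subsequent Cox fit is only indicated in a comment.

```python
import pandas as pd

def lunn_mcneil_expand(df, time="time", event="event", causes=(1, 2)):
    """Expand competing-risks data into the stacked Lunn-McNeil format:
    one row per (subject, cause), with status = 1 only for the row matching
    the observed cause.  `event` is 0 for censoring or the observed cause code.
    """
    rows = []
    for _, r in df.iterrows():
        for c in causes:
            new = r.copy()
            new["cause"] = c
            new["status"] = int(r[event] == c)
            rows.append(new)
    out = pd.DataFrame(rows).reset_index(drop=True)
    # cause indicator (and cause-by-covariate interactions) let a single Cox
    # model estimate cause-specific effects
    out["cause2"] = (out["cause"] == causes[1]).astype(int)
    return out

df = pd.DataFrame({"id": [1, 2, 3],
                   "time": [5.0, 3.2, 7.1],
                   "event": [1, 0, 2],        # 0 = censored
                   "x": [0.4, -1.0, 1.3]})
stacked = lunn_mcneil_expand(df)
stacked["x_cause2"] = stacked["x"] * stacked["cause2"]   # interaction term
print(stacked)
# a Cox model on (time, status) with x, cause2 (or strata by cause), and
# x_cause2 would then give cause-specific hazard ratios
```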

18.
Background: Many exposures in epidemiological studies have nonlinear effects, and the problem is to choose an appropriate functional relationship between such exposures and the outcome. One common approach is to investigate several parametric transformations of the covariate of interest and to select a posteriori the function that fits the data best. However, such an approach may result in an inflated Type I error. Methods: Through a simulation study, we generated data from Cox models with different transformations of a single continuous covariate. We investigated the Type I error rate and the power of the likelihood ratio test (LRT) corresponding to three different procedures that considered the same set of parametric dose-response functions. The first, unconditional, approach did not involve any model selection, while the second, conditional, approach was based on a posteriori selection of the parametric function. The proposed third approach was similar to the second except that it used a corrected critical value for the LRT to ensure a correct Type I error. Results: The Type I error rate of the second approach was two times higher than the nominal size. For simple monotone dose-response relationships, the corrected test had power similar to the unconditional approach, while for non-monotone dose-response relationships it had higher power. A real-life application focusing on the effect of body mass index on the risk of coronary heart disease death illustrated the advantage of the proposed approach. Conclusion: Our results confirm that selecting the functional form of the dose-response a posteriori inflates the Type I error. The corrected procedure, which can be applied in a wide range of situations, may provide a good trade-off between Type I error and power.
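To illustrate the idea of the corrected critical value, the sketch below calibrates, by simulation under the null, the critical value of the maximal likelihood ratio statistic over a set of candidate transformations; for brevity it uses a Gaussian linear model rather than the Cox model of the article, and the candidate functions and data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

CANDIDATES = [lambda x: x, np.log, np.sqrt, np.square]   # candidate dose-response forms

def max_lrt(x, y):
    """Largest Gaussian-linear-model LRT (= n * log(RSS0 / RSS1)) over the candidates."""
    n = len(y)
    rss0 = np.sum((y - y.mean()) ** 2)                    # null model: intercept only
    stats = []
    for f in CANDIDATES:
        Z = np.column_stack([np.ones(n), f(x)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        rss1 = np.sum((y - Z @ beta) ** 2)
        stats.append(n * np.log(rss0 / rss1))
    return max(stats)

def corrected_critical_value(x, n_sim=2000, alpha=0.05, seed=0):
    """Critical value of the *selected* (maximal) LRT, simulated under the null."""
    rng = np.random.default_rng(seed)
    sims = [max_lrt(x, rng.normal(size=len(x))) for _ in range(n_sim)]
    return np.quantile(sims, 1 - alpha)

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 5.0, 300)                            # positive exposure
y = 0.3 * np.log(x) + rng.normal(scale=1.0, size=300)

print("naive chi2(1) critical value:", chi2.ppf(0.95, df=1))
print("corrected critical value    :", corrected_critical_value(x))
print("observed maximal LRT        :", max_lrt(x, y))
```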

19.
This paper presents results on the size and power of first-generation panel unit root and stationarity tests obtained from a large-scale simulation study. The tests developed in the following papers are included: Levin et al. (2002), Harris and Tzavalis (1999), Breitung (2000), Im et al. (1997, 2003), Maddala and Wu (1999), Hadri (2000), and Hadri and Larsson (2005). Our simulation set-up is designed to address, inter alia, the following issues. First, we assess the performance as a function of the time and cross-section dimensions. Second, we analyze the impact on performance of serial correlation introduced by positive MA roots, which is known to have a detrimental impact on time series unit root tests. Third, we investigate the power of the panel unit root tests (and the size of the stationarity tests) for a variety of first-order autoregressive coefficients. Fourth, we consider both of the two usual specifications of deterministic variables in the unit root literature.

20.
We consider a nonparametric autoregression model under conditional heteroscedasticity, with the aim of testing whether the innovation distribution changes over time. To this end, we develop an asymptotic expansion for the sequential empirical process of nonparametrically estimated innovations (residuals). We suggest a Kolmogorov–Smirnov statistic based on the difference of the estimated innovation distributions built from the first ⌊ns⌋ and the last n − ⌊ns⌋ residuals, respectively (0 ≤ s ≤ 1). Weak convergence of the underlying stochastic process to a Gaussian process is proved under the null hypothesis of no change point. The result implies that the test is asymptotically distribution-free. Consistency against fixed alternatives is shown. The small-sample performance of the proposed test is investigated in a simulation study, and the test is applied to a data example.
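A rough sketch of the sequential comparison the statistic is built on: residuals from a fitted AR(1) model (the paper uses a nonparametric fit) are split at every point, the two empirical distribution functions are compared, and the maximal weighted discrepancy is calibrated by permutation rather than by the paper's asymptotic distribution-free limit; the weighting, the AR(1) fit, and the data are illustrative assumptions.

```python
import numpy as np

def seq_ks_stat(res):
    """max over split points k and over x of
    (k*(n-k)/n^2) * |F_hat_{1:k}(x) - F_hat_{k+1:n}(x)|."""
    n = len(res)
    xs = np.sort(res)
    best = 0.0
    for k in range(5, n - 5):                      # skip very short segments
        f1 = np.searchsorted(np.sort(res[:k]), xs, side="right") / k
        f2 = np.searchsorted(np.sort(res[k:]), xs, side="right") / (n - k)
        best = max(best, (k * (n - k) / n ** 2) * np.max(np.abs(f1 - f2)))
    return best

def change_point_test(res, n_perm=200, seed=0):
    """Permutation p-value: under the null of no change the residuals are
    (approximately) i.i.d., so permuting them gives a valid reference."""
    rng = np.random.default_rng(seed)
    obs = seq_ks_stat(res)
    perms = np.array([seq_ks_stat(rng.permutation(res)) for _ in range(n_perm)])
    return obs, (np.sum(perms >= obs) + 1) / (n_perm + 1)

# toy AR(1) series whose innovation distribution switches halfway through
rng = np.random.default_rng(2)
n = 300
eps = np.concatenate([rng.normal(0, 1, n // 2), rng.standard_t(3, n // 2)])
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + eps[t]

# crude residuals from a linear AR(1) fit (the paper uses a nonparametric fit)
phi = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
res = y[1:] - phi * y[:-1]
print(change_point_test(res))
```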
