Similar Documents (20 results)
1.
In many experiments where pre-treatment and post-treatment measurements are taken, investigators wish to determine whether there is a difference between two treatment groups. For this type of data, the post-treatment variable is used as the primary comparison variable and the pre-treatment variable is used as a covariate (Crager, 1987, Biometrics 43:895–901; Knoke, 1991, Biometrics 47:523–533). Although most of the discussion in this paper treats the pre-treatment variable as the covariate, the results apply to other choices of covariate. Tests based on residuals have been proposed as alternatives to the usual covariance methods. Our objective is to investigate how the powers of these tests are affected when the conditional variance of the post-treatment variable depends on the magnitude of the pre-treatment variable. In particular, we investigate two cases: (1) the conditional variance of the post-treatment variable gradually increases as the magnitude of the pre-treatment variable increases (as in many biological models); (2) the conditional variance of the post-treatment variable depends on natural or imposed subgroups within the pre-treatment variable. Power comparisons are made using Monte Carlo techniques.
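The Monte Carlo power comparison described above can be sketched as follows for the standard ANCOVA t-test. The constants, sample size, and the case-(1)-style error scale sd(ε) = 0.3·pre are illustrative assumptions, not the paper's settings, and the residual-based tests the paper studies are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def ancova_power(tau, n=40, reps=500, rng=rng):
    """Monte Carlo power of the ANCOVA t-test for a treatment effect
    when Var(post | pre) grows with the pre-treatment value."""
    hits = 0
    for _ in range(reps):
        g = np.repeat([0.0, 1.0], n // 2)            # two groups
        pre = rng.uniform(1, 5, n)
        # case (1): error sd proportional to the pre-treatment value
        post = 10 + tau * g + 0.8 * pre + rng.normal(0, 0.3 * pre)
        X = np.column_stack([np.ones(n), g, pre])
        beta, *_ = np.linalg.lstsq(X, post, rcond=None)
        e = post - X @ beta
        s2 = e @ e / (n - 3)
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        hits += abs(beta[1] / se) > 1.96             # ~5% level
    return hits / reps
```

Under this setup the rejection rate at tau = 0 stays near the nominal level, and rises with tau.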

2.
The order of experimental runs in a fractional factorial experiment matters when the cost of factor level changes is considered. The generalized foldover scheme of Coster and Cheng (1988, Annals of Statistics 16:1188–1205) gives an optimal run order for an experiment with specified defining contrasts. An experiment can be specified by a design requirement such as resolution or the estimation of particular interactions. To meet such a requirement, several sets of defining contrasts may be found. Applying the generalized foldover scheme to these sets yields designs with different numbers of level changes, from which the design with the minimum number of level changes can be chosen. The difficulty lies in finding all the sets of defining contrasts. An alternative approach for two-level fractional factorial experiments was investigated by Cheng, Martin, and Tang (1998, Annals of Statistics 26:1522–1539). In this paper, we investigate experiments with all factors at s levels.

3.
Based on Bradley Efron's observation that individual resamples in the regular bootstrap have support on approximately 63% of the original observations, Rao, Pathak, and Koltchinskii (1997, Journal of Statistical Planning and Inference 64:257–281) proposed a sequential resampling scheme. This sequential bootstrap stabilizes the information content of each resample by fixing the number of unique observations and letting N, the number of observations in each resample, vary. Rao, Pathak, and Koltchinskii establish the asymptotic correctness (consistency) of the sequential bootstrap. The main object of our investigation is to study the empirical properties of their sequential bootstrap compared with the regular bootstrap. In all our settings, the sequential bootstrap performs as well as or better than the regular bootstrap. In the particular case of estimating the standard error of the sample median, we find that the sequential bootstrap outperforms the regular bootstrap by reducing variability in the final bootstrap estimates.
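A minimal sketch of the sequential resampling idea: draw with replacement until the resample contains roughly 63% distinct original observations, letting the resample size N vary. The 0.632 threshold follows the Efron observation cited above; the stopping rule's details here are an illustrative reading of the scheme, not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sequential_resample(x, rng):
    """Draw with replacement until the resample contains
    ceil(0.632 * n) distinct original observations."""
    n = len(x)
    target = int(np.ceil(0.632 * n))
    seen, draws = set(), []
    while len(seen) < target:
        i = rng.integers(n)
        seen.add(i)
        draws.append(x[i])
    return np.array(draws)          # resample size N varies run to run

x = rng.normal(size=50)
boot_meds = [np.median(sequential_resample(x, rng)) for _ in range(500)]
se_med = np.std(boot_meds)          # bootstrap SE of the sample median
```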

4.
We develop an alternative random permutation testing method for multiple linear regression, which improves on the existing methods proposed by Kennedy (1995, Journal of Business and Economic Statistics 13:85–94) and by Freedman and Lane (1983, Journal of Business and Economic Statistics 1:292–298).
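For reference, the Freedman–Lane scheme cited above permutes residuals of the reduced (nuisance-only) model; this is a sketch of that 1983 baseline, not the paper's improved variant, and the function name is ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def freedman_lane_pvalue(y, x, Z, n_perm=999, rng=rng):
    """Permutation p-value for the coefficient of x in y ~ x + Z,
    permuting the residuals of the reduced model y ~ Z."""
    Zc = np.column_stack([np.ones(len(y)), Z])   # nuisance design
    X = np.column_stack([x, Zc])                 # full design

    def t_of(yv):
        beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
        resid = yv - X @ beta
        s2 = resid @ resid / (len(yv) - X.shape[1])
        cov = s2 * np.linalg.inv(X.T @ X)
        return beta[0] / np.sqrt(cov[0, 0])

    g, *_ = np.linalg.lstsq(Zc, y, rcond=None)   # reduced-model fit
    fitted, e = Zc @ g, y - Zc @ g
    t_obs = abs(t_of(y))
    hits = sum(abs(t_of(fitted + rng.permutation(e))) >= t_obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)
```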

5.
Palmer and Broemeling (1990, Communications in Statistics – Theory and Methods 19:953–975) compare Bayes and maximum likelihood estimates of the intraclass correlation coefficient (ICC). The prior information in their derivation of the Bayes estimator is placed on the variance components rather than on the ICC itself. This paper derives a Bayes estimator of the ICC with the prior placed on the ICC. Bayes estimates based on three different priors are then compared with the method of moments estimate.

6.
ABSTRACT

The systematic sampling (SYS) design (Madow and Madow, 1944, Annals of Mathematical Statistics 15:1–24) is widely used by statistical offices due to its simplicity and efficiency (e.g., Iachan, 1982, International Statistical Review 50:293–303). But it suffers from a serious defect: it is impossible to estimate the sampling variance unbiasedly (Iachan, 1982), and the usual variance estimators (Yates and Grundy, 1953, Journal of the Royal Statistical Society Series B 15:253–261) are inadequate and can overestimate the variance significantly (Särndal et al., 1992, Model Assisted Survey Sampling, Springer-Verlag, Ch. 3). We propose a novel variance estimator that is less biased and can be implemented with any given population order. We justify this estimator theoretically and with a Monte Carlo simulation study.
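For concreteness, a 1-in-k systematic draw and a textbook successive-difference variance estimator for the sample mean can be sketched as follows; the successive-difference formula is an illustrative stand-in, not the novel estimator the abstract proposes.

```python
import numpy as np

def sys_sample(y, k, start):
    """1-in-k systematic sample: every k-th unit from a given start."""
    return y[start::k]

def sdiff_var(sample, N):
    """Successive-difference variance estimator for the SYS sample mean
    (a common stand-in when no unbiased estimator exists)."""
    d = np.diff(sample)
    n = len(sample)
    return (1 - n / N) * np.sum(d**2) / (2 * n * (n - 1))
```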

7.
In this paper we introduce a class of estimators that includes the ordinary least squares (OLS), principal components regression (PCR), and Liu (1993, Communications in Statistics – Theory and Methods 22(2):393–402) estimators. In particular, we show that our new estimator is superior, in the scalar mean squared error (MSE) sense, to the Liu, OLS, and PCR estimators.
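The Liu (1993) estimator named above has the closed form (X′X + I)⁻¹(X′y + d·β̂_OLS), which reduces to OLS at d = 1; a minimal sketch (the new class the abstract introduces is not reproduced here):

```python
import numpy as np

def liu_estimator(X, y, d):
    """Liu (1993) estimator: (X'X + I)^{-1} (X'y + d * beta_ols).
    Setting d = 1 recovers the OLS estimator."""
    XtX = X.T @ X
    b_ols = np.linalg.solve(XtX, X.T @ y)
    p = XtX.shape[0]
    return np.linalg.solve(XtX + np.eye(p), X.T @ y + d * b_ols)
```

For 0 < d < 1 the estimator shrinks each component of the OLS fit in the eigenbasis of X′X, which is the source of its MSE advantage under multicollinearity.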

8.
《Econometric Reviews》2013,32(3):309-336
ABSTRACT

We examine the empirical relevance of three alternative asymptotic approximations to the distribution of instrumental variables estimators by Monte Carlo experiments. We find that conventional asymptotics provides a reasonable approximation to the actual distribution of instrumental variables estimators when the sample size is reasonably large. For most sample sizes, we find that the Bekker (1994, Econometrica 62:657–681) asymptotics provides a reasonably good approximation even when the first-stage R² is very small. We conclude that reporting the Bekker confidence interval would suffice for most microeconometric (cross-sectional) applications, and that the comparative advantage of the Staiger and Stock (1997, Econometrica 65:557–586) asymptotic approximation lies in applications with sample sizes typical of macroeconometric (time series) applications.

9.
Abstract

In a recent article, Hsueh, Liu, and Chen (2001, Biometrics 57:478–483) considered unconditional exact tests for equivalence or noninferiority for paired binary endpoints. They suggested two statistics, one of which is based on the restricted maximum-likelihood estimator. Properties of these statistics and the related tests are treated in this article.

10.
The local influence approach of Cook (1986, Journal of the Royal Statistical Society Series B 48:133–169) to regression diagnostics is developed and discussed, and compared with Cook's (1977, Technometrics 19:15–18) deletion approach. The ability of the local influence approach to handle cases simultaneously, as well as some of its theoretical and practical difficulties, is reviewed. The perturbation ideas of the approach are applied to the linear model, distinguishing between local perturbations of the model assumptions and of the data.
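Cook's (1977) deletion diagnostic mentioned above has a closed form in terms of residuals and leverages, which avoids refitting the model n times; a minimal sketch (function name ours):

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's (1977) deletion diagnostic for each observation in the
    linear model y = X beta + e (X must include an intercept column)."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
    h = np.diag(H)                          # leverages
    e = y - H @ y                           # residuals
    s2 = e @ e / (n - p)
    return (e**2 / (p * s2)) * h / (1 - h)**2
```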

11.
We consider the semiparametric regression model introduced by Duan and Li (1991, Annals of Statistics 19:505–530), in which the dependent variable y is linked to the index x′β through an unknown link function. Duan and Li (1991) and Li (1991, Journal of the American Statistical Association 86:316–342) present slicing methods (the sliced inverse regression methods SIR-I, SIR-II, and SIRα) for estimating the direction of the unknown slope parameter β. These methods are computationally simple and fast but depend on an arbitrary slicing fixed by the user. When the sample size is small, the number and position of the slices influence the estimated direction. In this paper, we suggest using the corresponding pooled slicing methods: PSIR-I (proposed by Aragon and Saracco, 1997, Computational Statistics 12:109–130), PSIR-II, and PSIRα. These methods combine the results from a number of slicings. We compare the sample behaviour of slicing and pooled slicing methods in simulations. We also propose a practical choice of α for the SIRα and PSIRα methods.
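A minimal sketch of a single SIR-I slicing, following the standard algorithm in Li (1991): standardize x, average the standardized x within slices of the ordered y, and take the leading eigenvector of the between-slice covariance. The pooled (PSIR) variants combine several such slicings; the slice count here is an illustrative choice.

```python
import numpy as np

def sir_direction(X, y, n_slices=5):
    """SIR-I sketch: estimate the direction of beta (up to sign and
    scale) in the single-index model y = g(x' beta, eps)."""
    n, p = X.shape
    mu, cov = X.mean(0), np.cov(X, rowvar=False)
    L = np.linalg.cholesky(cov)
    Z = np.linalg.solve(L, (X - mu).T).T       # standardized predictors
    order = np.argsort(y)
    M = np.zeros((p, p))
    for sl in np.array_split(order, n_slices): # slice the ordered y
        m = Z[sl].mean(0)
        M += (len(sl) / n) * np.outer(m, m)    # weighted slice means
    w, V = np.linalg.eigh(M)
    b = np.linalg.solve(L.T, V[:, -1])         # back to the x scale
    return b / np.linalg.norm(b)
```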

12.
It is known that, in the presence of short memory components, estimating the fractional parameter d in an autoregressive fractionally integrated moving average, ARFIMA(p, d, q), process presents some difficulties (see Smith, Taylor and Yadav, 1997, Journal of Time Series Analysis 18(5):507–527). In this paper, we continue the efforts of Smith et al. (1997) and Beveridge and Oickle (1993, Economics Letters 43:137–142) by conducting a simulation study to evaluate the convergence properties of the iterative estimation procedure suggested by Hosking (1981, Biometrika 68(1):165–176). In this context we consider some semiparametric approaches and a parametric method proposed by Fox and Taqqu (1986, Annals of Statistics 14(2):517–532). We also investigate the method proposed by Robinson (1995, Annals of Statistics 23(3):1048–1072) and a modification using the smoothed periodogram function.

13.
ABSTRACT

Hoerl and Kennard (1970a, Technometrics 12:55–67) introduced the ridge regression estimator as an alternative to the ordinary least squares estimator in the presence of multicollinearity. In this article, a new approach for choosing the ridge parameter K when multicollinearity exists among the columns of the design matrix is suggested and evaluated by simulation techniques, in terms of mean squared error (MSE). A number of factors that may affect the properties of these methods have been varied. The MSE from this approach is shown to be smaller than that of Hoerl and Kennard (1970a) in almost all situations.
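The ridge estimator and one classical data-driven choice of the ridge parameter can be sketched as follows. The rule shown is the well-known Hoerl–Kennard–Baldwin form, included only for illustration; it is not the new rule the abstract proposes.

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimator (X'X + k I)^{-1} X'y; k = 0 gives OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def hkb_k(X, y):
    """Hoerl-Kennard-Baldwin rule of thumb: k = p * s^2 / (b'b),
    with s^2 and b taken from the OLS fit."""
    n, p = X.shape
    b = ridge(X, y, 0.0)
    e = y - X @ b
    s2 = e @ e / (n - p)
    return p * s2 / (b @ b)
```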

14.
Abstract

The heteroskedasticity-consistent covariance matrix estimator proposed by White (1980, Econometrica 48:817–838), also known as HC0, is commonly used in practice and is implemented in a number of statistical software packages. Cribari-Neto, Ferrari, and Cordeiro (2000, Biometrika 87:907–918) developed a bias-adjustment scheme that delivers bias-corrected White estimators. Several variants of the original White estimator are also commonly used by practitioners, including the HC1, HC2, and HC3 estimators, which have superior small-sample behavior relative to White's estimator. This paper defines a general bias-correction mechanism that can be applied not only to White's estimator but also to its variants, such as HC1, HC2, and HC3. Numerical evidence on the usefulness of the proposed corrections is also presented. Overall, the results favor the sequence of improved HC2 estimators.
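The HC0–HC3 family discussed above differs only in how the squared OLS residuals are rescaled before forming the sandwich; a minimal sketch of the uncorrected estimators (the paper's bias corrections are not reproduced here):

```python
import numpy as np

def hc_cov(X, y, kind="HC0"):
    """White-type heteroskedasticity-consistent covariance estimators
    HC0-HC3 for the OLS coefficient vector."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    e = y - X @ (XtX_inv @ X.T @ y)                # OLS residuals
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)    # leverages h_ii
    w = {"HC0": e**2,
         "HC1": e**2 * n / (n - p),
         "HC2": e**2 / (1 - h),
         "HC3": e**2 / (1 - h)**2}[kind]
    meat = X.T @ (w[:, None] * X)
    return XtX_inv @ meat @ XtX_inv                # sandwich
```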

15.
In this paper, a procedure based on delete-1 cross-validation is given for estimating the number of superimposed exponential signals. Its limiting behavior is explored, and it is shown that the probability of overestimating the true number of signals exceeds a positive constant for sufficiently large samples. A general cross-validation procedure is also presented in which deletion proceeds according to a collection of subsets of indices; the result is similar to that of delete-1 cross-validation if the number of deletions is fixed. Simulation results are provided on the performance of the procedure when the collections of subsets of indices are chosen as suggested by Shao (1993, Journal of the American Statistical Association 88:486–494) for the linear model selection problem.

16.
Classification of a bivariate binary observation into one of two possible groups requires estimating the joint cell probabilities under each group. Two widely used approaches for estimating such joint cell probabilities are the kernel-based nonparametric approach (Seber, 1984, Multivariate Observations, John Wiley and Sons) and the multinomial-distribution-based cell counts approach (McLachlan, 1992, Discriminant Analysis and Statistical Pattern Recognition, John Wiley and Sons). In these traditional approaches, the joint cell probabilities are estimated without assuming any structural form for them; consequently, it is not clear how they account for the correlation that may exist between the two binary components. In this paper, we model the cell probabilities by a suitable bivariate binary distribution that accommodates the correlation in a natural way, and we examine the effect of this modelling on classifying a new correlated binary observation into one of the two groups. This is done by comparing the probability of misclassification under the proposed model-based approach with those of the kernel- and multinomial-based approaches. A simulation study shows that the misclassification probabilities of the model-based approach are substantially smaller than those of the other two approaches. We illustrate the proposed model-based approach by analyzing combined data from two epidemiological surveys of 6–11-year-old children conducted in Connecticut: the New Haven Child Survey (NHCS) and the Eastern Connecticut Child Survey (ECCS).

17.
ABSTRACT

In this paper, we present a modified Kelly and Rice method for testing synergism. This approach is consistent with Berenbaum's framework for additivity (Berenbaum, 1977, Clinical and Experimental Immunology 28:1–18; 1985, Journal of Theoretical Biology 114:413–431; 1989, Pharmacological Reviews 41:93–141). The delta method (Bishop, Fienberg and Holland, 1975, Discrete Multivariate Analysis: Theory and Practice, MIT Press) is applied to obtain the estimated variance of the predicted additivity proportion. A Monte Carlo simulation study evaluating the method's performance, i.e., global overall tests for synergism, is also discussed. Kelly and Rice (1990, Biometrics 46:1071–1085) do not provide a correct test statistic because the variance is underestimated; hence, based on the simulation findings, the performance of the Kelly–Rice method is generally anti-conservative. In addition, for larger sample sizes, the overall test of synergism with χ²(r) from the modified Kelly and Rice method performs better than that with χ²(1).

18.
Motivated by a discussion of an elementary probability puzzle provided by Anderson and Provost (1992, International Journal of Mathematical Education in Science and Technology 23:25–37), we review what may be called the fundamental problem of finite population sampling theory and propose that only super-model or Bayesian approaches to finite population sampling are acceptable.

19.
ABSTRACT

This article considers three practical hypotheses involving the equicorrelation matrix for grouped normal data. We obtain statistics and computing formulae for common test procedures such as the score test and the likelihood ratio test. In addition, statistics and computing formulae are obtained for various small-sample procedures proposed in Skovgaard (2001, Scandinavian Journal of Statistics 28:3–32). The properties of the tests for each of the three hypotheses are compared using Monte Carlo simulations.

20.
Consider the estimation of the regression parameters in the usual linear model. For design densities with infinite support, Faraldo Roca and González Manteiga (1987, in New Perspectives in Theoretical and Applied Statistics, Puri, Vilaplana and Wertz, eds., John Wiley, pp. 229–242) showed that it is possible to modify the classical least squares procedure to obtain estimators of the regression parameters whose mean squared errors (MSEs) are smaller than those of the usual least squares estimators. The modification consists of presmoothing the response variables by a kernel estimator of the regression function. These authors also show that the gain in efficiency is not possible for a design density with compact support. We show that in this case local linear presmoothing does not fix the inefficiency problem, despite the well-known fact that local linear fitting automatically corrects the bias at the endpoints of the design density's support. We demonstrate theoretically how the inefficiency can be rectified in the compact design case: we prove that presmoothing with the boundary kernels studied in Müller (1991, Biometrika 78:521–530) and Müller and Wang (1994, Biometrics 50:61–76) leads to regression estimators that are superior to the least squares estimators. A very careful analytic treatment is needed to arrive at these asymptotic results.
