Similar Articles (20 results)
1.
ABSTRACT

The systematic sampling (SYS) design (Madow and Madow, 1944) is widely used by statistical offices because of its simplicity and efficiency (e.g., Iachan, 1982). It suffers, however, from a serious defect: the sampling variance cannot be estimated unbiasedly (Iachan, 1982), and the usual variance estimators (Yates and Grundy, 1953) are inadequate and can overestimate the variance considerably (Särndal et al., 1992, Ch. 3). We propose a novel variance estimator that is less biased and can be implemented with any given population order. We justify this estimator theoretically and with a Monte Carlo simulation study.
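As background to the variance-estimation problem (this is a standard textbook approximation, not the estimator proposed in the article), a commonly used successive-difference estimator of the variance of the SYS sample mean is

\hat{V}_{\mathrm{SD}}(\bar{y}_{\mathrm{sys}}) = \frac{1-f}{n} \cdot \frac{1}{2(n-1)} \sum_{i=2}^{n} (y_i - y_{i-1})^{2}, \qquad f = n/N,

which is roughly unbiased when the list order is close to random but can be badly biased under trend or periodicity; this is what motivates estimators that account for the population order.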

2.
ABSTRACT

Hoerl and Kennard (1970a) introduced the ridge regression estimator as an alternative to the ordinary least squares estimator in the presence of multicollinearity. In this article, a new approach for choosing the ridge parameter (K) when multicollinearity exists among the columns of the design matrix is suggested and evaluated by simulation, in terms of mean squared error (MSE). A number of factors that may affect the properties of these methods have been varied. The MSE from this approach has been shown to be smaller than that of Hoerl and Kennard (1970a) in almost all situations.
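For orientation (the article's new rule for K is not given in the abstract), the ridge estimator and the choice usually attributed to Hoerl and Kennard, written in the usual canonical-form notation, are

\hat{\beta}(K) = (X^{\top}X + K I_p)^{-1} X^{\top} y, \qquad \hat{K}_{\mathrm{HK}} = \frac{\hat{\sigma}^{2}}{\hat{\alpha}_{\max}^{2}},

where \hat{\sigma}^{2} is the OLS residual mean square and \hat{\alpha}_{\max} is the largest OLS coefficient after orthogonalizing the columns of X; competing rules replace \hat{K}_{\mathrm{HK}} by other functions of \hat{\sigma}^{2} and the \hat{\alpha}_{j}.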

3.
ABSTRACT

In this paper, we present a modified Kelly and Rice method for testing synergism. This approach is consistent with Berenbaum's (1977, 1985, 1989) framework for additivity. The delta method (Bishop et al., 1975) is applied to obtain the estimated variance of the predicted additivity proportion. A Monte Carlo simulation study evaluating the method's performance, i.e., global overall tests for synergism, is also discussed. Kelly and Rice (1990) do not provide a correct test statistic because the variance is underestimated; hence, based on the simulation findings, the performance of the Kelly–Rice (1990) method is generally anti-conservative. In addition, for larger sample sizes, the overall test of synergism based on χ2(r) from the modified Kelly and Rice method performs better than the one based on χ2(1).
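The delta-method step referred to above, in its generic form (the specific predicted additivity proportion is defined in the paper, not here): for a smooth function g of an estimator \hat{\theta} with estimated covariance matrix \hat{\Sigma},

\widehat{\operatorname{Var}}\big(g(\hat{\theta})\big) \approx \nabla g(\hat{\theta})^{\top}\, \hat{\Sigma}\, \nabla g(\hat{\theta}),

which yields the estimated variance of the predicted additivity proportion once g, \hat{\theta}, and \hat{\Sigma} are specified.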

4.
Abstract

Kernel methods are very popular in nonparametric density estimation. In this article we suggest a simple estimator which reduces the bias to the fourth power of the bandwidth, while the variance of the estimator increases only by at most a moderate constant factor. Our proposal turns out to be a fourth-order kernel estimator and may be regarded as a new version of the generalized jackknifing approach of Schucany and Sommers (1977) applied to kernel density estimation.
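A minimal sketch of the generalized-jackknifing idea, assuming a symmetric second-order kernel K and a smooth density (this illustrates the principle rather than the authors' exact construction): combine two kernel estimates with bandwidths h and ah, a ≠ 1, so that the leading bias terms cancel,

\tilde{f}(x) = \frac{\hat{f}_{ah}(x) - a^{2}\,\hat{f}_{h}(x)}{1 - a^{2}}.

Since E\,\hat{f}_{b}(x) \approx f(x) + \tfrac{1}{2} b^{2} \mu_2(K) f''(x), the h^{2} terms cancel in this combination and the remaining bias is O(h^{4}), which is the fourth-order behaviour described above.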

5.
ABSTRACT

A confidence interval and test are obtained for the mean of an asymmetric distribution using a random sample of size n. The method is based on Johnson's (1978) modified t-test, in which terms of Cornish–Fisher expansions involving the third moment are used to adjust the conventional statistic so that it follows more closely a Student's t-distribution with n − 1 degrees of freedom. Johnson's (1978) test cannot be inverted uniquely, so the corresponding confidence set for the mean may be a union of disjoint pieces. However, an artificial term of small order can be added to make inversion of the test a uniquely defined operation, which prevents such disjointness. The resulting one-sided and two-sided intervals perform better than others in the literature for skewed distributions, and perform well for a normal distribution. The two-sided interval may be recommended for general use if the sample size is 10 or more and the nominal confidence coefficient is 95% or less, or if the sample size is 30 or more and the confidence coefficient is 99% or less.
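One common first-order form of the skewness correction (shown here as the usual textbook presentation; the paper works with further Cornish–Fisher terms) is

t_J = \frac{(\bar{X} - \mu_0) + \dfrac{\hat{\mu}_3}{6 S^{2} n}}{S/\sqrt{n}},

where \hat{\mu}_3 is the sample third central moment; the added term of order n^{-1} offsets the skewness-induced shift in the distribution of the ordinary t statistic.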

6.
Abstract

The heteroskedasticity-consistent covariance matrix estimator proposed by White (1980), also known as HC0, is commonly used in practical applications and is implemented in a number of statistical software packages. Cribari–Neto et al. (2000) developed a bias-adjustment scheme that delivers bias-corrected White estimators. There are several variants of the original White estimator that are also commonly used by practitioners, including the HC1, HC2, and HC3 estimators, which have proven to have superior small-sample behavior relative to White's estimator. This paper defines a general bias-correction mechanism that can be applied not only to White's estimator but also to its variants, such as HC1, HC2, and HC3. Numerical evidence on the usefulness of the proposed corrections is also presented. Overall, the results favor the sequence of improved HC2 estimators.
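For reference, the uncorrected family referred to above, in standard linear-model notation with OLS residuals \hat{u}_i and leverages h_i = x_i^{\top}(X^{\top}X)^{-1}x_i (the paper's bias-correction scheme itself is not reproduced here):

\widehat{\operatorname{cov}}(\hat{\beta}) = (X^{\top}X)^{-1} X^{\top} \operatorname{diag}(\omega_i)\, X\, (X^{\top}X)^{-1},

\omega_i^{\mathrm{HC0}} = \hat{u}_i^{2}, \quad
\omega_i^{\mathrm{HC1}} = \frac{n}{n-p}\,\hat{u}_i^{2}, \quad
\omega_i^{\mathrm{HC2}} = \frac{\hat{u}_i^{2}}{1-h_i}, \quad
\omega_i^{\mathrm{HC3}} = \frac{\hat{u}_i^{2}}{(1-h_i)^{2}}.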

7.
Abstract

Adaptive choice of smoothing parameters for nonparametric Poisson regression (O'Sullivan et al., 1986) is considered in this article. A computable approximation of the unbiased risk estimate (AUBR) for Poisson regression is introduced. This approximation can be used to automatically tune the smoothing parameter of the penalized likelihood estimator. An alternative choice is the generalized approximate cross-validation (GACV) proposed by Xiang and Wahba (1996). Although GACV enjoys great success in practice when applied to nonparametric logistic regression, its performance for Poisson regression is not clear. Numerical simulations have been conducted to evaluate the GACV- and AUBR-based tuning methods. We found that GACV has a tendency to oversmooth the data when the intensity function is small. As a consequence, we suggest tuning the smoothing parameter using AUBR in practice.
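The underlying penalized-likelihood problem, in one standard formulation (the exact scaling of the smoothing parameter may differ from the article's): with f the log intensity and J a roughness penalty, the estimator solves

\min_{f}\; -\sum_{i=1}^{n}\big\{ y_i f(x_i) - e^{f(x_i)} \big\} + \frac{n\lambda}{2}\, J(f),

and AUBR or GACV is then used to choose the smoothing parameter λ automatically.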

8.
Abstract

In this article two methods are proposed to make inferences about the parameters of a finite mixture of distributions in the context of partially identifiable censored data. The first method focuses on a mixture of location and scale models and relies on an asymptotic approximation to a suitably constructed augmented likelihood; the second method provides a full Bayesian analysis of the mixture based on a Gibbs sampler. Both methods make explicit use of latent variables and provide computationally efficient procedures compared to other methods which deal directly with the likelihood of the mixture. This may be crucial if the number of components in the mixture is not small. Our proposals are illustrated on a classical example of failure times for communication devices first studied by Mendenhall and Hader (1958). In addition, we study the coverage of the confidence intervals obtained from each of the methods by means of a small simulation exercise.
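A sketch of the latent-variable (data-augmentation) structure for a generic K-component mixture with right censoring, under standard assumptions and not the paper's specific partially identifiable setup: with allocation indicators z_{ik} ∈ {0, 1}, the complete-data likelihood factorizes as

L_c(\pi, \theta \mid y, \delta, z) = \prod_{i=1}^{n} \prod_{k=1}^{K} \Big[ \pi_k\, f_k(y_i \mid \theta_k)^{\delta_i}\, S_k(y_i \mid \theta_k)^{1-\delta_i} \Big]^{z_{ik}},

where δ_i is the failure indicator and S_k the component survival function; a Gibbs sampler then alternates between drawing the z_{ik}, the mixing weights π, and the component parameters θ.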

9.
ABSTRACT

This article considers three practical hypotheses involving the equicorrelation matrix for grouped normal data. We obtain statistics and computing formulae for common test procedures such as the score test and the likelihood ratio test. In addition, statistics and computing formulae are obtained for various small-sample procedures as proposed in Skovgaard (2001). The properties of the tests for each of the three hypotheses are compared using Monte Carlo simulations.
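In standard notation (the three hypotheses themselves are specified in the article), the equicorrelation structure for a p-dimensional observation is

\Sigma = \sigma^{2}\big[(1-\rho) I_p + \rho J_p\big], \qquad -\tfrac{1}{p-1} < \rho < 1,

where J_p is the p × p matrix of ones, so that all pairs of components share the common correlation ρ.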

10.
The order of experimental runs in a fractional factorial experiment is essential when the cost of level changes in factors is considered. The generalized foldover scheme given by Coster and Cheng (1988) gives an optimal order of experimental runs in an experiment with specified defining contrasts. An experiment can be specified by a design requirement such as resolution or estimation of some interactions. To meet such a requirement, we can find several sets of defining contrasts. Applying the generalized foldover scheme to these sets of defining contrasts, we obtain designs with different numbers of level changes and hence the design with the minimum number of level changes. The difficulty is to find all the sets of defining contrasts. An alternative approach is investigated by Cheng, Martin, and Tang (1998) for two-level fractional factorial experiments. In this paper, we investigate experiments with all factors at s levels.

11.
Abstract

In this paper we develop a Bayesian analysis for the nonlinear regression model with errors that follow a continuous autoregressive process. In this way, unequally spaced observations do not present a problem in the analysis. We employ the Gibbs sampler (see Gelfand and Smith, 1990) as the foundation for making Bayesian inferences. We illustrate these Bayesian inferences with an analysis of a real data set. Using these same data, we contrast the Bayesian approach with a generalized least squares technique.
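The continuous-time AR(1) error structure that makes unequal spacing straightforward, in a common parameterization (the article's exact parameterization may differ):

\operatorname{corr}\big(\varepsilon(t_i), \varepsilon(t_j)\big) = e^{-\phi\,|t_i - t_j|} = \rho^{\,|t_i - t_j|}, \qquad \rho = e^{-\phi} \in (0,1),

so the correlation between two errors depends only on the elapsed time between the observations, whatever the spacing.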

12.
In this paper we introduce a new measure for the analysis of association in cross-classifications having ordered categories. Association is measured in terms of the odds ratios in 2 × 2 subtables formed from adjacent rows and adjacent columns. We focus our attention on the uniform association model. Our measure is based on the family of divergences introduced by Burbea and Rao (1982a). Some well-known data sets are reanalyzed, and a simulation study is presented to analyze the behavior of the new families of test statistics introduced in this paper.
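The local (adjacent-category) odds ratios on which the measure is based, written in standard notation for an I × J table with cell probabilities p_{ij}:

\theta_{ij} = \frac{p_{ij}\, p_{i+1,\,j+1}}{p_{i,\,j+1}\, p_{i+1,\,j}}, \qquad i = 1, \dots, I-1,\; j = 1, \dots, J-1,

and the uniform association model requires \log \theta_{ij} to equal the same constant for every adjacent 2 × 2 subtable.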

13.
Considering the Wald, score, and likelihood ratio asymptotic test statistics, we analyze a multivariate null intercept errors-in-variables regression model, where the explanatory and the response variables are subject to measurement errors, and a possible structure of dependency between the measurements taken within the same individual is incorporated, representing a longitudinal structure. This model was proposed by Aoki et al. (2003b) and analyzed under the Bayesian approach. In this article, adopting the classical approach, we analyze the asymptotic test statistics and present a simulation study to compare the behavior of the three test statistics for different sample sizes, parameter values, and nominal levels of the test. Closed-form expressions for the score function and the Fisher information matrix are also presented. We consider two real numerical illustrations: the odontological data set from Hadgu and Koch (1999), and a quality control data set.
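For reference, the three statistics in their generic simple-null forms (the article applies them to the specific errors-in-variables hypotheses): with log-likelihood ℓ, score U, and Fisher information I,

W = (\hat{\theta} - \theta_0)^{\top} I(\hat{\theta})\, (\hat{\theta} - \theta_0), \qquad
S = U(\theta_0)^{\top} I(\theta_0)^{-1} U(\theta_0), \qquad
LR = 2\big\{\ell(\hat{\theta}) - \ell(\theta_0)\big\},

each asymptotically χ² under the null with degrees of freedom equal to the number of restrictions; for composite hypotheses θ_0 is replaced by the restricted maximum likelihood estimate.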

14.
Abstract

Bhattacharyya and Soejoeti (1989) proposed the tampered failure rate (TFR) model for step-stress accelerated life tests. Under the TFR model, this article proves that the maximum likelihood estimate of the shape parameter is unique for the Weibull distribution in a multiple step-stress accelerated life test, and investigates the accuracy of the maximum likelihood estimate using Monte Carlo simulation.
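The tampered failure rate idea in its simplest two-step form, under the usual formulation (the article treats the multiple-step Weibull case): once the stress is raised at time τ, the baseline hazard is multiplied by an unknown tampering factor α > 0,

\lambda(t) = \lambda_0(t)\, \mathbf{1}\{t < \tau\} + \alpha\, \lambda_0(t)\, \mathbf{1}\{t \ge \tau\},

with \lambda_0(t) = (\beta/\theta)(t/\theta)^{\beta-1} for a Weibull baseline; multiplying the hazard by α leaves the shape parameter β unchanged, which is why its maximum likelihood estimate is the key quantity.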

15.
In many experiments where pre-treatment and post-treatment measurements are taken, investigators wish to determine if there is a difference between two treatment groups. For this type of data, the post-treatment variable is used as the primary comparison variable and the pre-treatment variable is used as a covariate. Although most of the discussion in this paper is written with the pre-treatment variable as the covariate, the results are applicable to other choices of covariate. Tests based on residuals have been proposed as alternatives to the usual covariance methods. Our objective is to investigate how the powers of these tests are affected when the conditional variance of the post-treatment variable depends on the magnitude of the pre-treatment variable. In particular, we investigate two cases: (1) the conditional variance of the post-treatment variable gradually increases as the magnitude of the pre-treatment variable increases (as is the case in many biological models); and (2) the conditional variance of the post-treatment variable depends on natural or imposed subgroups contained within the pre-treatment variable. Power comparisons are made using Monte Carlo techniques.

16.
Abstract

Chiu (1999) proposed a nonparametric estimator for the survival function which is based on observable censoring times in the general censoring model. His estimator is less efficient than the product-limit estimator. Considering an informative censoring model, this drawback can partially be overcome. This is shown by a nonparametric, uniformly consistent estimator based on observable censoring times within the simple Koziol–Green model. Some asymptotic properties of the new estimator are investigated and it is compared with the well-known ACL estimator.
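For orientation, the simple Koziol–Green model and the ACL (Abdushukurov–Cheng–Lin) estimator mentioned above, in their usual forms (the paper's new estimator based on observable censoring times is not reproduced here): the censoring survival function is assumed to be a power of the lifetime survival function, 1 − G(t) = (1 − F(t))^{β} for some β > 0, and

\hat{S}_{\mathrm{ACL}}(t) = \big[\,1 - H_n(t)\,\big]^{\hat{p}}, \qquad \hat{p} = \frac{1}{n}\sum_{i=1}^{n} \delta_i,

where H_n is the empirical distribution function of the observed (possibly censored) times and δ_i are the uncensoring indicators.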

17.
ABSTRACT

In the case of equally spaced fixed-design nonparametric regression, the local constant M-smoother (LCM) in Chu, Glad, Godtliebsen, and Marron (1998) has the interesting property of jump preservation. However, it suffers from boundary effects. To correct for such adverse effects on the LCM, Rue, Chu, Godtliebsen, and Marron (2001) apply the local linear fit to the "inside" of the kernel function and propose the local linear M-smoother (LLM). Unfortunately, the LLM is more sensitive to random fluctuations, since an extra tuning parameter is included. To avoid this practical drawback of the LLM, we propose a new version of the LCM by applying the local linear fit to the "outside" of the kernel function. Our proposed estimator employs both the same tuning parameter associated with the ordinary LCM and the same weights assigned to the observations by the local linear smoother in Fan (1992, 1993). It has the same asymptotic mean squared error as the LLM. In practice, it can be calculated using the fast computation algorithm designed for the ordinary LCM by Chu et al. (1998), and does not suffer from the drawback of the LLM. More importantly, our results obtained for the new version of the LCM in the one-dimensional case extend directly to the multidimensional case. Simulation studies demonstrate that the asymptotic effects hold for reasonable sample sizes.
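As commonly written (the exact form in the cited papers may differ in detail), the local constant M-smoother replaces the local average by a local mode-type M-estimate, using a second kernel G on the residual scale with tuning parameter g:

\hat{m}_{\mathrm{LCM}}(x) = \arg\max_{\theta} \sum_{i=1}^{n} K\!\Big(\frac{x_i - x}{h}\Big)\, G\!\Big(\frac{Y_i - \theta}{g}\Big),

where the bounded, redescending influence of G is what yields the jump-preserving behaviour; the proposals above differ in where a local linear fit is introduced relative to G.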

18.
In calculating significance levels for statistical noninferiority tests, the critical regions that satisfy the Barnard convexity condition play a central role. According to a theorem proved by Röhmel and Mansmann (1999), when the critical regions satisfy this condition, the significance level for noninferiority tests can be calculated much more efficiently. In this study, the sets that fulfil the Barnard convexity condition are called Barnard convex sets, and because of their relevance, we study their properties independently of the context from which the sets originated. Among other results, we show that the Barnard convex sets form a convex geometry and that each Barnard convex set has a unique basis. We also provide an algorithm for calculating the Barnard convex hull of any set. Finally, we present some applications of the concept of the Barnard convex hull of a set to noninferiority tests.
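A hedged statement of the condition, under the convention that large counts in the test group and small counts in the reference group favor the alternative (the cited papers fix the exact convention): a critical region C ⊆ {0, …, n1} × {0, …, n2} is Barnard convex if

(x, y) \in C,\;\; x' \ge x,\;\; y' \le y \;\;\Longrightarrow\;\; (x', y') \in C,

that is, moving an outcome in the direction that makes the alternative more plausible can never remove it from the critical region; this monotonicity is what makes the exact significance-level calculation tractable.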

19.
Abstract

In this article we revisit Warner's (1965) randomized response model for estimating the proportion of a "sensitive" attribute in a population and propose a two-stage sequential sampling procedure for it. We show that the new procedure can potentially reduce, on average, the number of interviewees surveyed in the study while allowing a smaller error of estimation. The properties and some of the attractive features of the randomized response two-stage sequential sampling estimation procedure are discussed and illustrated.
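For orientation, Warner's original fixed-sample estimator, on which the sequential procedure builds: with p the known probability that the randomizing device directs the respondent to the sensitive statement (p ≠ 1/2) and \hat{\lambda} the observed proportion of "yes" answers,

\lambda = p\,\pi + (1-p)(1-\pi), \qquad
\hat{\pi} = \frac{\hat{\lambda} - (1-p)}{2p - 1}, \qquad
\operatorname{Var}(\hat{\pi}) = \frac{\lambda(1-\lambda)}{n\,(2p-1)^{2}},

where π is the population proportion carrying the sensitive attribute.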

20.
ABSTRACT

In this article we consider estimating the bivariate survival function for observations where one of the components is subject to left truncation and right censoring and the other is subject to right censoring only. Two types of nonparametric estimators are proposed. One is in the form of an inverse-probability-of-censoring weighted average (Satten and Datta, 2001) and the other is a generalization of Dabrowska's (1988) estimator. The two are then compared on the basis of their empirical performance.
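The univariate identity behind the first type of estimator, a standard inverse-probability-of-censoring-weighted representation of the Kaplan–Meier estimator in the spirit of Satten and Datta (2001) (the bivariate construction with left truncation is developed in the article): with observed times X_i, failure indicators δ_i, and \hat{G} the Kaplan–Meier estimate of the censoring survival function,

\hat{S}(t) = 1 - \frac{1}{n} \sum_{i=1}^{n} \frac{\delta_i\, \mathbf{1}\{X_i \le t\}}{\hat{G}(X_i^{-})},

so each observed failure is weighted by the inverse of its estimated probability of being uncensored.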
