Similar Literature
20 similar documents retrieved
1.
In this article, a multiple three-decision procedure is proposed to classify p (≥2) treatments as better or worse than the best of q (≥2) control treatments in a one-way layout. Critical constants required for implementing the proposed procedure are tabulated for several pre-specified values of the probability of no misclassification. The power function of the procedure is defined, and the common sample sizes necessary to guarantee various pre-specified power levels are tabulated under two optimal allocation schemes. Finally, the implementation of the proposed methodology is demonstrated through numerical examples based on real-life data.

2.
In this paper, we extend the work of Gjestvang and Singh [A new randomized response model, J. R. Statist. Soc. Ser. B 68 (2006), pp. 523–530] to propose a new unrelated-question randomized response model that can be used under any sampling scheme. Notably, the estimator based on a single sample does not require the proportion of the unrelated character to be known, unlike the models of Horvitz et al. [The unrelated question randomized response model, Social Statistics Section, Proceedings of the American Statistical Association, 1967, pp. 65–72], Greenberg et al. [The unrelated question randomized response model: Theoretical framework, J. Amer. Statist. Assoc. 64 (1969), pp. 520–539] and Mangat et al. [An improved unrelated question randomized response strategy, Calcutta Statist. Assoc. Bull. 42 (1992), pp. 167–168]. The relative efficiency of the proposed model with respect to the existing competitors is studied.
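For orientation, the classical one-sample unrelated-question estimator that this abstract contrasts with (the Horvitz et al. and Greenberg et al. setup) requires the proportion of the innocuous character to be known. A minimal sketch of that benchmark, in generic notation rather than the paper's: if each respondent answers the sensitive question with probability p and the unrelated question with probability 1−p, then
\[
\lambda = p\,\pi_A + (1-p)\,\pi_U, \qquad
\hat{\pi}_A = \frac{\hat{\lambda} - (1-p)\,\pi_U}{p}, \qquad
\operatorname{Var}(\hat{\pi}_A) = \frac{\lambda(1-\lambda)}{n\,p^{2}},
\]
where \(\hat{\lambda}\) is the observed proportion of "yes" answers in a simple random sample of size n drawn with replacement and \(\pi_U\) is the known proportion of the unrelated character. The proposed model removes the dependence on a known \(\pi_U\) in the single-sample case.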

3.
Sarjinder Singh. Statistics, 2013, 47(3): 566–574
In this note, a dual to the calibration of design weights in the Deville and Särndal [Calibration estimators in survey sampling, J. Amer. Statist. Assoc. 87 (1992), pp. 376–382] method is considered, leading to the linear regression estimator in survey sampling. We conclude that the chi-squared distance between the design weights and the calibrated weights equals the square of the standardized Z-score formed by the difference between the known population total of the auxiliary variable and its Horvitz and Thompson [A generalization of sampling without replacement from a finite universe, J. Amer. Statist. Assoc. 47 (1952), pp. 663–685] estimator, divided by the sample standard deviation of the auxiliary variable.
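As background for this duality, the Deville–Särndal chi-square calibration with a single auxiliary variable can be sketched as follows (standard notation, not the note's): minimize \(\sum_{i\in s}(w_i-d_i)^2/d_i\) subject to \(\sum_{i\in s} w_i x_i = X\). The calibrated weights, the resulting estimator, and the minimized distance are
\[
w_i = d_i + \frac{d_i x_i\,(X - \hat{X}_{HT})}{\sum_{j\in s} d_j x_j^{2}}, \qquad
\hat{Y}_{cal} = \hat{Y}_{HT} + \hat{\beta}\,\bigl(X - \hat{X}_{HT}\bigr), \qquad
\sum_{i\in s}\frac{(w_i-d_i)^{2}}{d_i} = \frac{\bigl(X-\hat{X}_{HT}\bigr)^{2}}{\sum_{j\in s} d_j x_j^{2}},
\]
with \(\hat{\beta} = \sum_{i\in s} d_i x_i y_i \big/ \sum_{i\in s} d_i x_i^{2}\). The last expression has the form of the squared standardized difference that the note identifies with the chi-squared distance.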

4.
In this article, new pseudo-Bayes and pseudo-empirical Bayes estimators are introduced for estimating the proportion of a potentially sensitive attribute in survey sampling. The proposed estimators are compared with the estimators of Odumade and Singh [Efficient use of two decks of cards in randomized response sampling, Comm. Statist. Theory Methods 38 (2009), pp. 439–446] and Warner [Randomized response: A survey technique for eliminating evasive answer bias, J. Amer. Statist. Assoc. 60 (1965), pp. 63–69].

5.
Testing the order of integration of economic and financial time series has become a conventional procedure prior to any modelling exercise. In this paper, we investigate and compare the finite-sample properties of the frequency-domain tests proposed by Robinson [Efficient tests of nonstationary hypotheses, J. Amer. Statist. Assoc. 89(428) (1994), pp. 1420–1437] and the time-domain procedure proposed by Hassler, Rodrigues, and Rubia [Testing for general fractional integration in the time domain, Econometric Theory 25 (2009), pp. 1793–1828] when applied to seasonal data. The results are of empirical relevance, as they provide some guidance on the finite-sample behaviour of these tests.

6.
This paper studies different types of tests for the two-sided c-sample scale problem. We consider the classical parametric test of Bartlett [M.S. Bartlett, Properties of sufficiency and statistical tests, Proc. R. Soc. Lond. Ser. A 160 (1937), pp. 268–282] and several nonparametric tests, especially the test of Fligner and Killeen [M.A. Fligner and T.J. Killeen, Distribution-free two-sample tests for scale, J. Amer. Statist. Assoc. 71 (1976), pp. 210–213], the test of Levene [H. Levene, Robust tests for equality of variances, in Contribution to Probability and Statistics, I. Olkin, ed., Stanford University Press, Palo Alto, 1960, pp. 278–292] and a robust version of it introduced by Brown and Forsythe [M.B. Brown and A.B. Forsythe, Robust tests for the equality of variances, J. Amer. Statist. Assoc. 69 (1974), pp. 364–367], as well as two adaptive tests proposed by Büning [H. Büning, Adaptive tests for the c-sample location problem – the case of two-sided alternatives, Comm. Statist. Theory Methods 25 (1996), pp. 1569–1582] and Büning [H. Büning, An adaptive test for the two sample scale problem, Nr. 2003/10, Diskussionsbeiträge des Fachbereich Wirtschaftswissenschaft der Freien Universität Berlin, Volkswirtschaftliche Reihe, 2003], which are based on the principle of Hogg [R.V. Hogg, Adaptive robust procedures. A partial review and some suggestions for future applications and theory, J. Amer. Statist. Assoc. 69 (1974), pp. 909–927]. For all tests, we also apply bootstrap sampling strategies. Via Monte Carlo methods we compare all the tests by investigating their level α and power β for distributions with differing degrees of tail weight and skewness and for various sample sizes. It turns out that the test of Fligner and Killeen, in combination with the bootstrap, is the best among all tests considered.
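As a concrete illustration of combining a scale test with the bootstrap, the sketch below shows one plausible resampling scheme (an assumption on our part, not the authors' exact algorithm): each group is centred at its median and pooled, which mimics the null hypothesis of equal scales, and groups of the original sizes are then resampled from the pool to obtain a bootstrap p-value for the Fligner–Killeen statistic.

    import numpy as np
    from scipy.stats import fligner

    def bootstrap_fligner_pvalue(samples, n_boot=2000, seed=0):
        # Observed Fligner-Killeen statistic for the c samples.
        stat_obs = fligner(*samples).statistic
        # Under H0 (equal scales), centre each group at its median and pool;
        # resampling groups of the original sizes from the pool mimics the null.
        pooled = np.concatenate([np.asarray(x) - np.median(x) for x in samples])
        sizes = [len(x) for x in samples]
        rng = np.random.default_rng(seed)
        exceed = 0
        for _ in range(n_boot):
            boot = [rng.choice(pooled, size=m, replace=True) for m in sizes]
            if fligner(*boot).statistic >= stat_obs:
                exceed += 1
        return (exceed + 1) / (n_boot + 1)

    # Example: the third sample has twice the spread of the first two.
    rng = np.random.default_rng(1)
    groups = [rng.normal(0, 1, 30), rng.normal(0, 1, 30), rng.normal(0, 2, 30)]
    print(bootstrap_fligner_pvalue(groups))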

7.
For two-sided comparisons of several treatments with a control, a common statistical problem is to decide which treatments are better than the control and which are worse. This paper studies a multiple three-decision procedure for this purpose, proposed by Bohrer (1979) and Bohrer et al. (1981), and provides tables of critical points to facilitate the application of the procedure. The paper defines a power function of the procedure and tabulates the sample sizes necessary to guarantee a given power level. It addresses the problem of optimal sampling allocation to maximize the power for a given total sample size, and considers a generalization to the situation where the treatments may have unequal numbers of observations.

8.
Biao Zhang. Statistics, 2016, 50(5): 1173–1194
Missing covariate data occur often in regression analysis. We study methods for estimating the regression coefficients in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Robins et al. [Estimation of regression coefficients when some regressors are not always observed. J. Amer. Statist. Assoc. 1994;89:846–866] on regression analyses with missing covariates, in which they pioneered the use of two working models, the working propensity score model and the working conditional score model. A recent approach to missing covariate data analysis is the empirical likelihood method of Qin et al. [Empirical likelihood in missing data problems. J. Amer. Statist. Assoc. 2009;104:1492–1503], which effectively combines unbiased estimating equations. In this paper, we consider an alternative likelihood approach based on the full likelihood of the observed data. This full likelihood-based method enables us to generate estimators of the vector of regression coefficients that are (a) asymptotically equivalent to those of Qin et al. [Empirical likelihood in missing data problems. J. Amer. Statist. Assoc. 2009;104:1492–1503] when the working propensity score model is correctly specified, and (b) doubly robust, like the augmented inverse probability weighting (AIPW) estimators of Robins et al. [Estimation of regression coefficients when some regressors are not always observed. J. Amer. Statist. Assoc. 1994;89:846–866]. Thus, the proposed full likelihood-based estimators improve on the efficiency of the AIPW estimators when the working propensity score model is correct but the working conditional score model is possibly incorrect, and also improve on the empirical likelihood estimators of Qin, Zhang and Leung [Empirical likelihood in missing data problems. J. Amer. Statist. Assoc. 2009;104:1492–1503] when the reverse is true, that is, when the working conditional score model is correct but the working propensity score model is possibly incorrect. In addition, we consider a regression method for estimating the regression coefficients when the working conditional score model is correctly specified; the asymptotic variance of the resulting estimator is no greater than the semiparametric variance bound characterized by the theory of Robins et al. [Estimation of regression coefficients when some regressors are not always observed. J. Amer. Statist. Assoc. 1994;89:846–866]. Finally, we compare the finite-sample performance of various estimators in a simulation study.
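Schematically, and in generic notation rather than that of the paper, the Robins et al. (1994) AIPW approach with fully observed variables Z, possibly missing covariates W, missingness indicator R, and working propensity \(\pi(Z)\) solves
\[
\sum_{i=1}^{n}\left[\frac{R_i}{\hat{\pi}(Z_i)}\,S(\beta;\,Y_i,W_i,Z_i)
-\frac{R_i-\hat{\pi}(Z_i)}{\hat{\pi}(Z_i)}\,\hat{\phi}(\beta;\,Y_i,Z_i)\right]=0,
\]
where S is the complete-data estimating function and \(\hat{\phi}\) approximates \(E\{S\mid Y,Z\}\) under the working conditional score model; consistency holds if either working model is correct, which is the double robustness the abstract refers to.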

9.
The randomized response (RR) technique pioneered by Warner, S.L. (1965) [Randomized response: a survey technique for eliminating evasive answer bias. J. Amer. Statist. Assoc. 60, 63–69] is a useful tool for estimating the proportion of persons in a community bearing sensitive or socially disapproved characteristics. Mangat, N.S. & Singh, R. (1990) [An alternative randomized response procedure. Biometrika 77, 439–442] proposed a modification of Warner's procedure using two RR techniques. Presented here is a generalized two-stage RR procedure, together with a derivation of the condition under which the proposed procedure produces a more precise estimator of the population parameter. A comparative study of the performance of this two-stage procedure and conventional RR techniques, assuming that the respondents' jeopardy level in the proposed method remains the same as that offered by the traditional RR procedures, is also reported. In addition, a numerical example compares the efficiency of the proposed method with the traditional RR procedures.
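For reference, Warner's original estimator, on which such two-stage procedures build, has the familiar form: with randomization probability p ≠ 1/2, the probability of a "yes" answer is \(\lambda = p\pi + (1-p)(1-\pi)\), so
\[
\hat{\pi}_W = \frac{\hat{\lambda}-(1-p)}{2p-1}, \qquad
\operatorname{Var}(\hat{\pi}_W) = \frac{\pi(1-\pi)}{n} + \frac{p(1-p)}{n\,(2p-1)^{2}},
\]
for a simple random sample of n responses drawn with replacement. Two-stage procedures such as Mangat and Singh's modify the randomization device with the aim of reducing this variance while keeping a comparable jeopardy level.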

10.
Local polynomial quasi-likelihood estimation has several good statistical properties, such as high minimax efficiency and adaptation to edge effects. In this paper, we construct a local quasi-likelihood regression estimator for a left-truncated model and establish the asymptotic normality of the proposed estimator when the observations form a stationary and α-mixing sequence, thereby extending the corresponding result of Fan et al. [Local polynomial kernel regression for generalized linear models and quasi-likelihood functions, J. Amer. Statist. Assoc. 90 (1995), pp. 141–150] from independent and complete data to dependent and truncated data. The finite-sample behaviour of the estimator is also investigated via simulation.
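To make the building block concrete, here is a minimal local-linear (degree-one local polynomial) least-squares smoother with a Gaussian kernel. It illustrates only the basic local polynomial fit underlying such estimators, not the quasi-likelihood, truncation, or mixing aspects studied in the paper; all names and settings are ours.

    import numpy as np

    def local_linear(x0, x, y, h):
        # Gaussian kernel weights centred at the target point x0.
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)
        # Local design matrix for a degree-one (local linear) fit.
        X = np.column_stack([np.ones_like(x), x - x0])
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        return beta[0]  # fitted regression function at x0

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 1, 200))
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)
    grid = np.linspace(0.05, 0.95, 10)
    print(np.round([local_linear(g, x, y, h=0.08) for g in grid], 2))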

11.
In the parametric regression model, we consider the problem of missing covariates under a missing-at-random mechanism. It is often desirable to use flexible parametric or semiparametric models for the covariate distribution, which can reduce potential misspecification problems. Recently, a completely nonparametric approach was developed by Chen [H.Y. Chen, Nonparametric and semiparametric models for missing covariates in parametric regression, J. Amer. Statist. Assoc. 99 (2004), pp. 1176–1189] and Zhang and Rockette [Z. Zhang and H.E. Rockette, On maximum likelihood estimation in parametric regression with missing covariates, J. Statist. Plann. Inference 47 (2005), pp. 206–223]. Although it does not require a model for the covariate distribution or the missing data mechanism, the proposed method assumes that the covariate distribution is supported only by the observed values. Consequently, their estimator is a restricted maximum likelihood estimator (MLE) rather than the global MLE. In this article, we show that the restricted semiparametric MLE can be very misleading in some cases. We discuss why this problem occurs and suggest an algorithm to obtain the global MLE. Then, we assess the performance of the proposed method via simulation experiments.

12.
Wang & Wells [J. Amer. Statist. Assoc. 95 (2000) 62] describe a non-parametric approach for checking whether the dependence structure of a random sample of censored bivariate data is appropriately modelled by a given family of Archimedean copulas. Their procedure is based on a truncated version of the Kendall process introduced by Genest & Rivest [J. Amer. Statist. Assoc. 88 (1993) 1034] and later studied by Barbe et al. [J. Multivariate Anal. 58 (1996) 197]. Although Wang & Wells (2000) determine the asymptotic behaviour of their truncated process, their model selection method is based exclusively on the observed value of its L2-norm. This paper shows how to compute asymptotic p-values for various goodness-of-fit test statistics based on a non-truncated version of Kendall's process. Conditions for weak convergence are met in the most common copula models, whether Archimedean or not. The empirical behaviour of the proposed goodness-of-fit tests is studied by simulation, and power comparisons are made with a test proposed by Shih [Biometrika 85 (1998) 189] for the gamma frailty family.
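As an illustration of the quantities involved, the sketch below (for complete, uncensored data and the Clayton family only; all variable names are ours) computes the Genest–Rivest pseudo-observations, the empirical Kendall distribution K_n, and an L2-type discrepancy from the theoretical Clayton Kendall distribution.

    import numpy as np

    def kendall_pseudo_obs(u, v):
        # Genest-Rivest pseudo-observations V_i = #{j: U_j < U_i, V_j < V_i}/(n-1).
        n = len(u)
        return np.array([np.sum((u < u[i]) & (v < v[i])) for i in range(n)]) / (n - 1)

    def K_clayton(t, theta):
        # Theoretical Kendall distribution K(t) = t - phi(t)/phi'(t) for Clayton.
        return t + t * (1.0 - t**theta) / theta

    # Simulate from a Clayton(theta = 2) copula by conditional inversion.
    rng = np.random.default_rng(0)
    theta, n = 2.0, 500
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)

    V = kendall_pseudo_obs(u, v)
    grid = np.linspace(0.01, 0.99, 99)
    K_n = np.mean(V[:, None] <= grid, axis=0)                    # empirical Kendall distribution
    discrepancy = np.mean((K_n - K_clayton(grid, theta)) ** 2)   # L2-type distance
    print(round(discrepancy, 4))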

13.
Doubly robust (DR) estimators of the mean with missing data are compared. An estimator is DR if it is consistent when either the regression of the missing variable on the observed variables or the missing data mechanism is correctly specified. One method is to include the inverse of the propensity score as a linear term in the imputation model [D. Firth and K.E. Bennett, Robust models in probability sampling, J. R. Statist. Soc. Ser. B 60 (1998), pp. 3–21; D.O. Scharfstein, A. Rotnitzky, and J.M. Robins, Adjusting for nonignorable drop-out using semiparametric nonresponse models (with discussion), J. Amer. Statist. Assoc. 94 (1999), pp. 1096–1146; H. Bang and J.M. Robins, Doubly robust estimation in missing data and causal inference models, Biometrics 61 (2005), pp. 962–972]. Another method is to calibrate the predictions from a parametric model by adding the mean of the weighted residuals [J.M. Robins, A. Rotnitzky, and L.P. Zhao, Estimation of regression coefficients when some regressors are not always observed, J. Amer. Statist. Assoc. 89 (1994), pp. 846–866; D.O. Scharfstein, A. Rotnitzky, and J.M. Robins, Adjusting for nonignorable drop-out using semiparametric nonresponse models (with discussion), J. Amer. Statist. Assoc. 94 (1999), pp. 1096–1146]. The penalized spline propensity prediction (PSPP) model includes the propensity score in the model non-parametrically [R.J.A. Little and H. An, Robust likelihood-based analysis of multivariate data with missing values, Statist. Sin. 14 (2004), pp. 949–968; G. Zhang and R.J. Little, Extensions of the penalized spline propensity prediction method of imputation, Biometrics 65(3) (2008), pp. 911–918]. All these methods have consistency properties under misspecification of the regression models, but their comparative efficiency and confidence coverage in finite samples have received little attention. In this paper, we compare the root mean square error (RMSE), the width of the confidence interval and the non-coverage rate of these methods under various mean and response propensity functions. We study the effects of sample size and robustness to model misspecification. The PSPP method yields estimates with smaller RMSE and narrower confidence intervals than the other methods in most situations. It also yields estimates with confidence coverage close to the 95% nominal level, provided the sample size is not too small.
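To fix ideas, a minimal sketch of the basic AIPW (doubly robust) estimator of a mean, with a logistic working propensity model and a linear working imputation model, might look like the following. It is illustrative only and is not any of the specific calibrated or PSPP estimators compared in the paper; the data-generating setup is ours.

    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    def aipw_mean(X, y, r):
        # Working propensity model: logistic regression of the response indicator.
        e = LogisticRegression().fit(X, r).predict_proba(X)[:, 1]
        # Working imputation model: linear regression fitted to the respondents.
        m = LinearRegression().fit(X[r == 1], y[r == 1]).predict(X)
        y0 = np.where(r == 1, y, 0.0)   # missing y never enters the weighted term
        return np.mean(r * y0 / e - (r - e) / e * m)

    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 2))
    y_full = 1.0 + X @ np.array([1.0, -0.5]) + rng.normal(size=n)
    p_obs = 1.0 / (1.0 + np.exp(-(0.5 + X[:, 0])))   # MAR response propensity
    r = rng.binomial(1, p_obs)
    y = np.where(r == 1, y_full, np.nan)
    print(round(aipw_mean(X, y, r), 3))              # should be close to E[Y] = 1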

14.
This paper considers the problem of testing equality between two independent binomial proportions. Hwang and Yang (Statist. Sinica 11 (2001) 807) apply the Neyman–Pearson fundamental lemma and the estimated truth approach to derive optimal procedures, named expected p-values. This p-value has been shown to be identical to the mid p-value of Lancaster (J. Amer. Statist. Assoc. (1961) 223) for the one-sided test. For the two-sided test, the paper proves that the usual two-sided mid p-value is identical to the expected p-value in the balanced sample case.
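For concreteness, the one-sided mid p-value based on the conditional (hypergeometric) distribution of the first count given the total number of successes can be computed as below; this is Lancaster's mid-p construction, which counts only half of the probability of the observed table. The function name and example counts are ours.

    from scipy.stats import hypergeom

    def mid_p_one_sided(x1, n1, x2, n2):
        # Conditional on the total number of successes s, x1 is hypergeometric.
        s = x1 + x2
        dist = hypergeom(n1 + n2, s, n1)
        # Lancaster's mid-p: P(X > x1) plus half of P(X = x1), for H1: p1 > p2.
        return dist.sf(x1) + 0.5 * dist.pmf(x1)

    print(round(mid_p_one_sided(9, 12, 3, 12), 4))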

15.
Meta-analysis refers to a quantitative method for combining results from independent studies in order to draw overall conclusions. We consider hierarchical models, including selection models, under a skewed heavy-tailed error distribution proposed originally by Chen, Dey, and Shao [M.H. Chen, D.K. Dey, Q.M. Shao, A new skewed link model for dichotomous quantal response data, J. Amer. Statist. Assoc. 94 (1999), pp. 1172–1186] and Branco and Dey [D. Branco and D.K. Dey, A general class of multivariate skew-elliptical distributions, J. Multivariate Anal. 79 (2001), pp. 99–113]. These rich classes of models combine the information of independent studies, allowing investigation of variability both between and within studies and incorporating weight functions. We construct a detailed computational scheme under skew-normal and skew Student's t error distributions using MCMC methods. Bayesian model selection is conducted by the Bayes factor under the different skewed errors. Finally, we illustrate our methodology using a real data example taken from Johnson [M.F. Johnson, Comparative efficacy of NaF and SMFP dentifrices in caries prevention: a meta-analysis overview, J. Eur. Organ. Caries Res. 27 (1993), pp. 328–336].

16.
We consider the problem of testing against trend and umbrella alternatives, with known and unknown peak, in two-way layouts with fixed effects. We consider the non-parametric two-way layout ANOVA model of Akritas and Arnold (J. Amer. Statist. Assoc. 89 (1994) 336), and use the non-parametric formulation of patterned alternatives introduced by Akritas and Brunner (Research Developments in Probability and Statistics: Festschrift in Honor of Madan L. Puri, VSP, Zeist, The Netherlands, 1996, pp. 277–288). The hypotheses of no main effects and of no simple effects are both considered. New rank test statistics are developed to specifically detect these types of alternatives. For main effects, we consider two types of statistics: one using weights similar to Hettmansperger and Norton (J. Amer. Statist. Assoc. 82 (1987) 292), and one with weights that maximize the asymptotic efficacy. For simple effects, we consider in detail only statistics to detect trend or umbrella patterns with known peaks, and refer to Callegari (Ph.D. Thesis, University of Padova, Italy) for a discussion of possible statistics for umbrella alternatives with unknown peaks. The null asymptotic distributions of the new statistics are derived. A number of simulation studies investigate their finite-sample behaviour and compare the achieved α levels and power with those of some alternative procedures. An application to data from a clinical study illustrates how to use some of the proposed tests for main effects.

17.
In this paper, we describe an overall strategy for robust estimation of multivariate location and shape, and the consequent identification of outliers and leverage points. Parts of this strategy have been described in a series of previous papers (Rocke, Ann. Statist., in press; Rocke and Woodruff, Statist. Neerlandica 47 (1993), 27–42, and J. Amer. Statist. Assoc., in press; Woodruff and Rocke, J. Comput. Graphical Statist. 2 (1993), 69–95, and J. Amer. Statist. Assoc. 89 (1994), 888–896), but the overall structure is presented here for the first time. After describing the first-level architecture of a class of algorithms for this problem, we review available information about possible tactics for each major step in the process. The major steps that we have found to be necessary are as follows: (1) partition the data into groups of perhaps five times the dimension; (2) for each group, search for the best available solution to a combinatorial estimator such as the Minimum Covariance Determinant (MCD); these are the preliminary estimates; (3) for each preliminary estimate, iterate to the solution of a smooth estimator chosen for robustness and outlier resistance; and (4) choose among the final iterates based on a robust criterion, such as minimum volume. Use of this algorithm architecture can enable reliable, fast, robust estimation of heavily contaminated multivariate data in high (>20) dimension, even with large quantities of data. A computer program implementing the algorithm is available from the authors.
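For readers who want a readily available, much simpler MCD-type analysis, the FAST-MCD implementation in scikit-learn can produce robust location and shape estimates and flag outliers via robust Mahalanobis distances. This is not the partition-based algorithm described above, only an off-the-shelf approximation; the simulated data and cutoff choice are ours.

    import numpy as np
    from scipy.stats import chi2
    from sklearn.covariance import MinCovDet

    rng = np.random.default_rng(0)
    p = 4
    X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=300)
    X[:15] += 8.0                                  # plant a cluster of outliers

    mcd = MinCovDet(random_state=0).fit(X)         # FAST-MCD location/shape fit
    d2 = mcd.mahalanobis(X)                        # squared robust distances
    flagged = d2 > chi2.ppf(0.975, df=p)           # chi-square cutoff at 97.5%
    print(int(flagged.sum()))                      # roughly the 15 shifted rows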

18.
For the assessment of agreement using probability criteria, we obtain an exact test, and for sample sizes exceeding 30 we give a bootstrap-t test that is remarkably accurate. We show that for assessing agreement, the total deviation index approach of Lin [2000. Total deviation index for measuring individual agreement with applications in laboratory performance and bioequivalence. Statist. Med. 19, 255–270] is not consistent and may not preserve its asymptotic nominal level, and that the coverage probability approach of Lin et al. [2002. Statistical methods in assessing agreement: models, issues and tools. J. Amer. Statist. Assoc. 97, 257–270] is overly conservative for moderate sample sizes. We also show that the nearly unbiased test of Wang and Hwang [2001. A nearly unbiased test for individual bioequivalence problems using probability criteria. J. Statist. Plann. Inference 99, 41–58] may be liberal for large sample sizes, and suggest a minor modification that gives a numerically equivalent approximation to the exact test for sample sizes of 30 or less. We present a simple and accurate sample size formula for planning studies on assessing agreement, and illustrate our methodology with a real data set from the literature.

19.
A stratified Warner's randomized response model
This paper proposes a new stratified randomized response model based on Warner's (J. Amer. Statist. Assoc. 60 (1965) 63) model that has an optimal allocation and a large gain in precision. It also presents a drawback of the Hong et al. (Korean J. Appl. Statist. 7 (1994) 141) model under their proportional sampling assumption. It is shown that the proposed model is more efficient than the Hong et al. (Korean J. Appl. Statist. 7 (1994) 141) stratified randomized response model. Additionally, it is shown that the estimator based on the proposed method is more efficient than the Warner (J. Amer. Statist. Assoc. 60 (1965) 63), the Mangat and Singh (Biometrika 77 (1990) 439) and the Mangat (J. Roy. Statist. Soc. Ser. B 56(1) (1994) 93) estimators under the conditions presented, in both the case of completely truthful reporting and that of not completely truthful reporting by the respondents.
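In outline, and in generic notation rather than the paper's, a stratified Warner design applies Warner's device with probability \(p_h\) in stratum h, and a Neyman-type allocation of the familiar form can then be written as
\[
V_h = \pi_h(1-\pi_h) + \frac{p_h(1-p_h)}{(2p_h-1)^{2}}, \qquad
\operatorname{Var}(\hat{\pi}) = \sum_h \frac{W_h^{2}\,V_h}{n_h}, \qquad
n_h = n\,\frac{W_h\sqrt{V_h}}{\sum_k W_k\sqrt{V_k}},
\]
where \(W_h\) is the stratum weight, \(\hat{\pi} = \sum_h W_h \hat{\pi}_h\), and \(V_h\) is the per-unit Warner variance in stratum h. This is only the standard Neyman allocation applied to the Warner variance; the exact allocation derived in the paper may differ in its details.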

20.
Khuri (Technometrics 27 (1985) 213) and Levy and Neill (Comm. Statist. A 19 (1990) 1987) presented regression lack-of-fit tests for multiresponse data with replicated observations available at points in the experimental region, thereby extending the classical univariate lack-of-fit test given by Fisher (J. Roy. Statist. Soc. 85 (1922) 597). In this paper, multivariate tests for lack of fit in a linear multiresponse model are derived for the common circumstance in which replicated observations are not obtained. The tests are based on the union–intersection principle, and provide multiresponse extensions of the univariate tests for between- and within-cluster lack of fit introduced by Christensen (Ann. Statist. 17 (1989) 673; J. Amer. Statist. Assoc. 86 (1991) 752). Since the properties of these tests depend on the choice of multivariate clusters of the observations, a multiresponse generalization of the maximin power clustering criterion given by Miller, Neill and Sherfey (Ann. Statist. 26 (1998) 1411; J. Amer. Statist. Assoc. 94 (1999) 610) is also developed.
