1.
Communications in Statistics: Theory and Methods, 2013, 42(9): 1789–1799
Abstract In a recent article, Hsueh et al. (Hsueh, H.-M., Liu, J.-P., Chen, J. J. (2001). Unconditional exact tests for equivalence or noninferiority for paired binary endpoints. Biometrics 57:478–483) considered unconditional exact tests for paired binary endpoints. They suggested two statistics, one of which is based on the restricted maximum-likelihood estimator. Properties of these statistics and the related tests are treated in this article.
2.
Communications in Statistics: Theory and Methods, 2013, 42(5): 1177–1182
ABSTRACT Hoerl and Kennard (1970a) introduced the ridge regression estimator as an alternative to the ordinary least squares estimator in the presence of multicollinearity. In this article, a new approach for choosing the ridge parameter (K) when multicollinearity exists among the columns of the design matrix is suggested and evaluated by simulation, in terms of mean squared error (MSE). A number of factors that may affect the properties of these methods have been varied. The MSE from this approach has been shown to be smaller than that obtained using Hoerl and Kennard (1970a) in almost all situations.
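A minimal sketch of the Hoerl–Kennard ridge estimator referred to above (a generic illustration, not the article's proposed rule for choosing K; the data and the fixed ridge value are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build nearly collinear columns to mimic multicollinearity.
n, p = 50, 3
z = rng.normal(size=n)
X = np.column_stack([z + 0.01 * rng.normal(size=n) for _ in range(p)])
beta = np.array([1.0, 2.0, 3.0])
y = X @ beta + rng.normal(size=n)

def ridge(X, y, k):
    """Ridge estimator (X'X + kI)^{-1} X'y; k = 0 recovers OLS."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

beta_ols = ridge(X, y, 0.0)
beta_ridge = ridge(X, y, 1.0)

# Under multicollinearity the OLS coefficients are inflated; the ridge
# solution has smaller norm (the shrinkage is monotone in k).
assert np.linalg.norm(beta_ridge) < np.linalg.norm(beta_ols)
```

Simulation comparisons of ridge rules, as in the article, would repeat this over many draws and average the squared estimation error.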
3.
Communications in Statistics: Theory and Methods, 2013, 42(6): 1019–1030
ABSTRACT In this paper, we present a modified Kelly and Rice method for testing synergism. This approach is consistent with Berenbaum's [1–3] framework for additivity. The delta method [4] is applied to obtain the estimated variance for the predicted additivity proportion. A Monte Carlo simulation study evaluating the method's performance, i.e., global overall tests for synergism, is also discussed. Kelly and Rice [5] do not provide a correct test statistic because the variance is underestimated. Hence, the performance of the Kelly–Rice [5] method is generally anti-conservative, based on the simulation findings. In addition, for larger sample sizes, the overall test of synergism with χ²(r) from the modified Kelly and Rice method performs better than that with χ²(1).
4.
Communications in Statistics: Theory and Methods, 2013, 42(5): 875–885
The order of experimental runs in a fractional factorial experiment is essential when the cost of level changes in factors is considered. The generalized foldover scheme given by [1] gives an optimal order of experimental runs in an experiment with specified defining contrasts. An experiment can be specified by a design requirement such as resolution or estimation of some interactions. To meet such a requirement, we can find several sets of defining contrasts. Applying the generalized foldover scheme to these sets of defining contrasts, we obtain designs with different numbers of level changes, and then the design with the minimum number of level changes. The difficulty is to find all the sets of defining contrasts. An alternative approach is investigated by [2] for two-level fractional factorial experiments. In this paper, we investigate experiments with all factors at s levels.
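The cost criterion above can be made concrete with a small sketch (illustrative only; the run orders below are standard examples, not designs from the paper): count how many factor levels change between consecutive runs.

```python
def level_changes(runs):
    """Total number of factor-level changes across consecutive runs.

    `runs` is a list of tuples, one tuple of factor levels per run.
    """
    return sum(
        sum(a != b for a, b in zip(r1, r2))
        for r1, r2 in zip(runs, runs[1:])
    )

# A 2^2 full factorial in standard (Yates) order vs. a Gray-code order.
standard = [(0, 0), (1, 0), (0, 1), (1, 1)]
gray = [(0, 0), (1, 0), (1, 1), (0, 1)]

print(level_changes(standard))  # 4
print(level_changes(gray))      # 3
```

Reordering the same runs reduced the cost from 4 level changes to 3; schemes such as the generalized foldover aim to minimize this count for a given set of defining contrasts.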
5.
Communications in Statistics: Theory and Methods, 2013, 42(10): 1951–1980
Abstract The heteroskedasticity-consistent covariance matrix estimator proposed by White [White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica 48:817–838], also known as HC0, is commonly used in practical applications and is implemented in a number of statistical software packages. Cribari-Neto et al. [Cribari-Neto, F., Ferrari, S. L. P., Cordeiro, G. M. (2000). Improved heteroscedasticity-consistent covariance matrix estimators. Biometrika 87:907–918] developed a bias-adjustment scheme that delivers bias-corrected White estimators. There are several variants of the original White estimator that are also commonly used by practitioners. These include the HC1, HC2, and HC3 estimators, which have proven to have superior small-sample behavior relative to White's estimator. This paper defines a general bias-correction mechanism that can be applied not only to White's estimator but also to its variants, such as HC1, HC2, and HC3. Numerical evidence on the usefulness of the proposed corrections is also presented. Overall, the results favor the sequence of improved HC2 estimators.
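For reference, a sketch of the standard (uncorrected) HC0–HC3 sandwich estimators named above; this shows their textbook forms, not the bias-corrected versions the article develops, and the simulated data are purely illustrative:

```python
import numpy as np

def hc_cov(X, resid, kind="HC0"):
    """Heteroskedasticity-consistent covariance estimate of the OLS slope."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)  # leverages h_i
    e2 = resid**2
    if kind == "HC0":                 # White's original estimator
        omega = e2
    elif kind == "HC1":               # degrees-of-freedom rescaling
        omega = e2 * n / (n - p)
    elif kind == "HC2":               # leverage adjustment
        omega = e2 / (1 - h)
    elif kind == "HC3":               # squared leverage adjustment
        omega = e2 / (1 - h) ** 2
    else:
        raise ValueError(kind)
    meat = (X * omega[:, None]).T @ X  # X' diag(omega) X
    return XtX_inv @ meat @ XtX_inv

rng = np.random.default_rng(1)
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=n)])
# Heteroskedastic errors: the error variance grows with the regressor.
y = 1.0 + 2.0 * X[:, 1] + np.abs(X[:, 1]) * rng.normal(size=n)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

for kind in ("HC0", "HC1", "HC2", "HC3"):
    se = np.sqrt(np.diag(hc_cov(X, resid, kind)))
```

Since 0 < 1 − h_i ≤ 1, the HC2 and HC3 weights inflate the squared residuals, so their standard errors are never smaller than HC0's; this is one source of their better small-sample behavior.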
6.
Communications in Statistics: Theory and Methods, 2013, 42(3): 603–617
Abstract Adaptive choice of smoothing parameters for nonparametric Poisson regression (O'Sullivan et al., 1986) is considered in this article. A computable approximation of the unbiased risk estimate (AUBR) for Poisson regression is introduced. This approximation can be used to automatically tune the smoothing parameter for the penalized likelihood estimator. An alternative choice is the generalized approximate cross-validation (GACV) proposed by Xiang and Wahba (1996). Although GACV enjoys great success in practice when applied to nonparametric logistic regression, its performance for Poisson regression is not clear. Numerical simulations were conducted to evaluate the GACV- and AUBR-based tuning methods. We found that GACV has a tendency to oversmooth the data when the intensity function is small. As a consequence, we suggest tuning the smoothing parameter using AUBR in practice.
7.
Communications in Statistics: Theory and Methods, 2013, 42(11): 2153–2162
Abstract Kernel methods are very popular in nonparametric density estimation. In this article we suggest a simple estimator which reduces the bias to the fourth power of the bandwidth, while the variance of the estimator increases by at most a moderate constant factor. Our proposal turns out to be a fourth-order kernel estimator and may be regarded as a new version of the generalized jackknifing approach (Schucany, W. R., Sommers, J. P. (1977). Improvement of kernel type estimators. Journal of the American Statistical Association 72:420–423) applied to kernel density estimation.
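To illustrate the mechanism (a generic sketch, not the article's estimator): a standard fourth-order kernel built from the Gaussian, K₄(u) = ½(3 − u²)φ(u), integrates to one but has vanishing second moment, which is what pushes the bias from O(h²) down to O(h⁴).

```python
import numpy as np

def phi(u):
    """Standard normal density."""
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def k4(u):
    """Fourth-order Gaussian-based kernel: unit mass, zero second moment."""
    return 0.5 * (3 - u**2) * phi(u)

def kde(x, data, h, kernel):
    """Kernel density estimate at the points in x."""
    u = (x[:, None] - data[None, :]) / h
    return kernel(u).mean(axis=1) / h

# Check the two moment conditions numerically on a fine grid.
u = np.linspace(-8, 8, 200001)
du = u[1] - u[0]
total_mass = (k4(u) * du).sum()
second_moment = (u**2 * k4(u) * du).sum()
print(total_mass)     # ≈ 1
print(second_moment)  # ≈ 0

rng = np.random.default_rng(4)
data = rng.normal(size=500)
est = kde(np.array([0.0]), data, h=0.8, kernel=k4)
```

Note that a fourth-order kernel takes negative values (here for |u| > √3), so the raw estimate is not guaranteed to be a density; this is the usual price of the bias reduction.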
8.
Communications in Statistics: Theory and Methods, 2013, 42(5): 833–848
ABSTRACT In the case of equally spaced fixed-design nonparametric regression, the local constant M-smoother (LCM) of Chu, Glad, Godtliebsen, and Marron [1] has the interesting property of jump preservation. However, it suffers from boundary effects. To correct these adverse effects on the LCM, Rue, Chu, Godtliebsen, and Marron [2] apply the local linear fit to the “inside” of the kernel function and propose the local linear M-smoother (LLM). Unfortunately, the LLM is more sensitive to random fluctuations, since an extra tuning parameter is included. To avoid this practical drawback of the LLM, we propose a new version of the LCM by applying the local linear fit to the “outside” of the kernel function. Our proposed estimator employs both the same tuning parameter associated with the ordinary LCM and the same weights assigned to the observations by the local linear smoother in Fan [3, 4]. It has the same asymptotic mean squared error as the LLM. In practice, it can be calculated using the fast computation algorithm designed for the ordinary LCM by Chu et al. [1], and does not suffer from the drawback of the LLM. More importantly, our results for the new version of the LCM in the one-dimensional case extend directly to the multidimensional case. Simulation studies demonstrate that the asymptotic effects hold for reasonable sample sizes.
9.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used in analyzing gene expression data while controlling the FDR via a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic. The introduction of the fudge factor into the test statistic aims at deflating test statistics whose large values are due to small standard errors of gene expression. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared to the SAM procedure without the fudge factor. Motivated by the simulation results presented in Lin et al. (2008), in this article we extend our study to compare several methods for choosing the fudge factor in modified t-type test statistics, and use simulation studies to investigate the power and FDR control of the considered methods.
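A minimal sketch of the SAM-style modified t statistic: the fudge factor s₀ is added to each gene's standard error, deflating statistics that are large only because the gene's variance is tiny. Taking s₀ as a percentile of the per-gene standard errors is one common choice; the article compares several such choices, and the percentile used here is only illustrative.

```python
import numpy as np

def sam_stat(group1, group2, s0=None, q=50):
    """Modified two-sample t-type statistic per gene (rows = genes)."""
    n1, n2 = group1.shape[1], group2.shape[1]
    m1, m2 = group1.mean(axis=1), group2.mean(axis=1)
    pooled_ss = ((group1 - m1[:, None]) ** 2).sum(axis=1) + \
                ((group2 - m2[:, None]) ** 2).sum(axis=1)
    # Ordinary two-sample t standard error with pooled variance.
    s = np.sqrt(pooled_ss / (n1 + n2 - 2) * (1 / n1 + 1 / n2))
    if s0 is None:
        s0 = np.percentile(s, q)  # fudge factor: a percentile of the s_i
    return (m2 - m1) / (s + s0)

rng = np.random.default_rng(2)
g1 = rng.normal(size=(1000, 5))
g2 = rng.normal(size=(1000, 5))
d = sam_stat(g1, g2)           # with fudge factor
t = sam_stat(g1, g2, s0=0.0)   # ordinary t statistic

# Adding s0 > 0 can only shrink the statistics in absolute value.
assert np.all(np.abs(d) <= np.abs(t))
```

The comparison at the end makes the deflation effect explicit: genes with s near zero are pulled in the most, which is exactly the behavior whose usefulness Lin et al. (2008) question.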
10.
Communications in Statistics: Theory and Methods, 2013, 42(6): 1119–1133
Abstract For randomly censored data, Satten and Datta (Satten, G. A., Datta, S. (2001). The Kaplan–Meier estimator as an inverse-probability-of-censoring weighted average. Amer. Statist. 55:207–210) showed that the Kaplan–Meier estimator (product-limit estimator (PLE)) can be expressed as an inverse-probability-weighted average. In this article, we consider two other PLEs: the truncation PLE and the censoring-truncation PLE. For data subject to left truncation, or to both left truncation and right censoring, it is shown that these two PLEs can also be expressed as inverse-probability-weighted averages.
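For readers unfamiliar with the object being rewritten, here is a minimal sketch of the Kaplan–Meier product-limit estimator for right-censored data (assuming no tied observation times; a full implementation would group tied event times):

```python
import numpy as np

def kaplan_meier(time, event):
    """Return the event times and the product-limit estimate S(t) at them.

    `event` is 1 for an observed failure, 0 for a censored observation.
    """
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    order = np.argsort(time)
    time, event = time[order], event[order]
    n = len(time)
    times, surv = [], []
    s = 1.0
    for i, (t, d) in enumerate(zip(time, event)):
        at_risk = n - i           # subjects still at risk just before t
        if d == 1:                # survival drops only at observed failures
            s *= 1 - 1 / at_risk
            times.append(t)
            surv.append(s)
    return np.array(times), np.array(surv)

t, s = kaplan_meier([1, 2, 3, 4, 5], [1, 0, 1, 1, 0])
# Survival drops to 0.8, 0.5333..., and 0.2667... at t = 1, 3, and 4.
print(t)
print(s)
```

Satten and Datta's point is that this product over risk sets can equivalently be written as an average of event indicators weighted by the inverse of the estimated censoring survival probability; the article extends that representation to the truncation PLEs.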
11.
Communications in Statistics: Theory and Methods, 2013, 42(12): 2655–2681
In this paper we introduce a new measure for the analysis of association in cross-classifications with ordered categories. Association is measured in terms of the odds ratios in the 2 × 2 subtables formed from adjacent rows and adjacent columns. We focus our attention on the uniform association model. Our measure is based on the family of divergences introduced by Burbea and Rao [1]. Some well-known data sets are reanalyzed, and a simulation study is presented to analyze the behavior of the new families of test statistics introduced in this paper.
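The local odds ratios on which such measures are built are easy to compute (an illustrative sketch; the table below is made up): for a table N, θᵢⱼ = Nᵢⱼ Nᵢ₊₁,ⱼ₊₁ / (Nᵢ,ⱼ₊₁ Nᵢ₊₁,ⱼ), one for each 2 × 2 subtable of adjacent rows and columns.

```python
import numpy as np

def local_odds_ratios(N):
    """Odds ratios of all adjacent 2x2 subtables of the contingency table N."""
    N = np.asarray(N, float)
    return (N[:-1, :-1] * N[1:, 1:]) / (N[:-1, 1:] * N[1:, :-1])

# Under the uniform association model, all local odds ratios are equal.
N = np.array([[10, 20, 40],
              [ 5, 20, 80]])
theta = local_odds_ratios(N)
print(theta)  # [[2. 2.]]
```

This 2 × 3 table satisfies uniform association exactly (both adjacent odds ratios equal 2), which is the structure the model-based tests in the paper target.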
12.
Communications in Statistics: Theory and Methods, 2013, 42(8): 1631–1646
Abstract In this paper we develop a Bayesian analysis for the nonlinear regression model with errors that follow a continuous autoregressive process. In this way, unequally spaced observations do not present a problem in the analysis. We employ the Gibbs sampler (see Gelfand, A., Smith, A. (1990). Sampling-based approaches to calculating marginal densities. J. Amer. Statist. Assoc. 85:398–409) as the foundation for making Bayesian inferences. We illustrate these Bayesian inferences with an analysis of a real data set. Using the same data, we contrast the Bayesian approach with a generalized least squares technique.
13.
Communications in Statistics: Theory and Methods, 2013, 42(7): 1675–1685
ABSTRACT In this article we consider estimating the bivariate survival function from observations where one component is subject to left truncation and right censoring and the other is subject to right censoring only. Two types of nonparametric estimators are proposed. One takes the form of an inverse-probability-weighted average (Satten and Datta, 2001) and the other is a generalization of Dabrowska's (1988) estimator. The two are then compared based on their empirical performance.
14.
Communications in Statistics: Theory and Methods, 2013, 42(9): 1515–1529
ABSTRACT This paper develops corrected score tests for heteroskedastic t regression models, thus generalizing results by Cordeiro, Ferrari, and Paula [1] and Cribari-Neto and Ferrari [2] for normal regression models, and by Ferrari and Arellano-Valle [3] for homoskedastic t regression models. We present, in matrix notation, Bartlett-type correction formulae to improve score tests in this class of models. The corrected score statistics have a chi-squared distribution to order n⁻¹, where n is the sample size. We apply our main result to a few special models and present simulation results comparing the performance of the usual score tests and their corrected versions.
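Bartlett-type corrections in this literature typically take the polynomial form S* = S·(1 − (c + bS + aS²)), where a, b, and c are O(n⁻¹) coefficients computed from cumulants of the score function; the coefficients below are purely hypothetical placeholders, since the actual formulae are model-specific.

```python
def bartlett_corrected(S, a, b, c):
    """Bartlett-type corrected score statistic S * (1 - (c + b*S + a*S**2))."""
    return S * (1 - (c + b * S + a * S**2))

S = 3.2                        # uncorrected score statistic (illustrative)
a, b, c = 1e-4, 5e-4, 2e-3     # hypothetical O(1/n) coefficients
print(bartlett_corrected(S, a, b, c))
```

With positive coefficients the correction pulls the statistic down slightly, improving the chi-squared approximation of its null distribution in small samples.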
15.
Communications in Statistics: Theory and Methods, 2013, 42(4): 857–873
ABSTRACT This article considers three practical hypotheses involving the equicorrelation matrix for grouped normal data. We obtain statistics and computing formulae for common test procedures such as the score test and the likelihood ratio test. In addition, statistics and computing formulae are obtained for various small-sample procedures proposed in Skovgaard (2001). The properties of the tests for each of the three hypotheses are compared using Monte Carlo simulations.
16.
Communications in Statistics: Theory and Methods, 2013, 42(4): 749–774
Abstract In this article two methods are proposed for making inferences about the parameters of a finite mixture of distributions in the context of partially identifiable censored data. The first method focuses on a mixture of location–scale models and relies on an asymptotic approximation to a suitably constructed augmented likelihood; the second provides a full Bayesian analysis of the mixture based on a Gibbs sampler. Both methods make explicit use of latent variables and provide computationally efficient procedures compared to methods that deal directly with the likelihood of the mixture. This may be crucial if the number of components in the mixture is not small. Our proposals are illustrated on a classical example of failure times for communication devices, first studied by Mendenhall and Hader (Mendenhall, W., Hader, R. J. (1958). Estimation of parameters of mixed exponentially distributed failure time distributions from censored life test data. Biometrika 45:504–520). In addition, we study the coverage of the confidence intervals obtained from each method by means of a small simulation exercise.
17.
Communications in Statistics: Theory and Methods, 2013, 42(12): 2321–2338
Abstract Bhattacharyya and Soejoeti (Bhattacharyya, G. K., Soejoeti, Z. A. (1989). Tampered failure rate model for step-stress accelerated life test. Commun. Statist. – Theory Meth. 18(5):1627–1643) proposed the TFR model for step-stress accelerated life tests. Under the TFR model, this article proves that the maximum likelihood estimate of the shape parameter is unique for the Weibull distribution in a multiple step-stress accelerated life test, and investigates the accuracy of the maximum likelihood estimate using Monte Carlo simulation.
18.
Communications in Statistics: Theory and Methods, 2013, 42(7): 1107–1122
ABSTRACT We consider the issue of bias correction when the kernel method is used to construct confidence intervals for wildlife abundance based on transect data. Our method is based on a linear model that arises as the limit of the weak convergence of the kernel estimate process (after suitable centering and scaling); see Bhattacharya and Mack [21], and also Mack and Müller [22]. The implementation of this method is demonstrated on a well-known example in transect sampling, alongside two other bias-reduction devices: one based on the jackknife and the other involving estimation of the second-derivative term in the Taylor expansion of the asymptotic bias. Some comments are made in comparison with the Fourier method.
19.
Communications in Statistics: Theory and Methods, 2013, 42(11): 2123–2131
ABSTRACT There are several indices for measuring the similarity of two populations, including the ratio of the number of shared species to the number of distinct species (Jaccard's index) and the conditional probability of observing a shared species (Smith et al., 1996). However, these indices take into account only the number of species and the species proportions of shared species. In this article, we propose a new similarity index which incorporates the species proportions of both the shared and non-shared species in each population, and we also propose a nonparametric maximum likelihood estimator (NPMLE) for this index. Bootstrap and delta methods are used to evaluate the standard errors of the NPMLE. Based on a loss function, we also compare a class of nonparametric estimators for the proposed index in various situations.
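Jaccard's index, the baseline the new index improves upon, is simply the ratio described above (the species names below are of course made up):

```python
def jaccard(sample_a, sample_b):
    """Number of shared species over the number of distinct species."""
    a, b = set(sample_a), set(sample_b)
    return len(a & b) / len(a | b)

print(jaccard(["wren", "robin", "crow"], ["robin", "crow", "gull"]))  # 0.5
```

Note that the index depends only on species presence/absence (2 shared out of 4 distinct here); it ignores how abundant each species is in each population, which is exactly the limitation motivating the proportion-aware index proposed in the article.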
20.
Communications in Statistics: Theory and Methods, 2013, 42(10): 2005–2021
In many experiments where pre-treatment and post-treatment measurements are taken, investigators wish to determine whether there is a difference between two treatment groups. For this type of data, the post-treatment variable is used as the primary comparison variable and the pre-treatment variable as a covariate. Although most of the discussion in this paper treats the pre-treatment variable as the covariate, the results are applicable to other choices of covariate. Tests based on residuals have been proposed as alternatives to the usual covariance methods. Our objective is to investigate how the powers of these tests are affected when the conditional variance of the post-treatment variable depends on the magnitude of the pre-treatment variable. In particular, we investigate two cases: (1) the conditional variance of the post-treatment variable gradually increases as the magnitude of the pre-treatment variable increases (in many biological models this is the case); and (2) the conditional variance of the post-treatment variable depends on natural or imposed subgroups within the pre-treatment variable. Power comparisons are made using Monte Carlo techniques.
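A hypothetical sketch of one such residual-based test under the first heteroskedasticity pattern (all data, the group effect, and the variance model are invented for illustration): regress the post-treatment response on the pre-treatment covariate pooled over groups, then compare the groups' mean residuals with a two-sample t-type statistic.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
pre = rng.uniform(1, 5, size=2 * n)
group = np.repeat([0, 1], n)          # two treatment groups
effect = 0.5                          # true treatment effect

# Conditional SD of the post-treatment variable grows with the
# pre-treatment value, mimicking the first case discussed above.
post = 2 + 1.5 * pre + effect * group + pre * rng.normal(size=2 * n)

# Pooled simple linear regression of post on pre (intercept + slope).
X = np.column_stack([np.ones(2 * n), pre])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
resid = post - X @ beta

# Welch-style t statistic comparing the groups' mean residuals.
r0, r1 = resid[group == 0], resid[group == 1]
t_stat = (r1.mean() - r0.mean()) / np.sqrt(
    r0.var(ddof=1) / n + r1.var(ddof=1) / n
)
print(t_stat)
```

A Monte Carlo power study, as in the paper, would repeat this over many simulated data sets, record how often |t| exceeds its critical value, and compare that rejection rate against the analysis-of-covariance test under the same variance patterns.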