Retrieved 20 similar documents (search time: 31 ms)
1.
Communications in Statistics: Theory and Methods, 2013, 42(5): 875-885
The order of experimental runs in a fractional factorial experiment is essential when the cost of level changes in factors is considered. The generalized foldover scheme given by [1] gives an optimal order of experimental runs in an experiment with specified defining contrasts. An experiment can be specified by a design requirement such as resolution or estimation of some interactions. To meet such a requirement, we can find several sets of defining contrasts. Applying the generalized foldover scheme to these sets of defining contrasts, we obtain designs with different numbers of level changes and then the design with the minimum number of level changes. The difficulty is to find all the sets of defining contrasts. An alternative approach is investigated by [2] for two-level fractional factorial experiments. In this paper, we investigate experiments with all factors at s levels.
2.
Communications in Statistics: Theory and Methods, 2013, 42(12): 2655-2681
In this paper we introduce a new measure for the analysis of association in cross-classifications having ordered categories. Association is measured in terms of the odds ratios in 2 × 2 subtables formed from adjacent rows and adjacent columns. We focus our attention on the uniform association model. Our measure is based on the family of divergences introduced by Burbea and Rao [1]. Some well-known sets of data are reanalyzed, and a simulation study is presented to analyze the behavior of the new families of test statistics introduced in this paper.
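The odds ratios in adjacent 2 × 2 subtables (local odds ratios) are straightforward to compute directly; the sketch below (the function name and example table are illustrative, not from the paper) assumes NumPy:

```python
import numpy as np

def local_odds_ratios(table):
    """Odds ratios of the 2 x 2 subtables formed from adjacent
    rows and adjacent columns of a contingency table."""
    t = np.asarray(table, dtype=float)
    return (t[:-1, :-1] * t[1:, 1:]) / (t[:-1, 1:] * t[1:, :-1])

# Under the uniform association model all local odds ratios share one value.
counts = np.array([[20, 10, 5],
                   [10, 20, 10],
                   [5, 10, 20]])
print(local_odds_ratios(counts))
```

Under the uniform association model every local odds ratio equals a common value θ, which is what a divergence-based association measure can target.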
3.
Econometric Reviews, 2013, 32(3): 309-336
ABSTRACT We examine the empirical relevance of three alternative asymptotic approximations to the distribution of instrumental variables estimators by Monte Carlo experiments. We find that conventional asymptotics provides a reasonable approximation to the actual distribution of instrumental variables estimators when the sample size is reasonably large. For most sample sizes, we find that Bekker [11] asymptotics provides a reasonably good approximation even when the first-stage R² is very small. We conclude that reporting the Bekker [11] confidence interval would suffice for most microeconometric (cross-sectional) applications, and that the comparative advantage of the Staiger and Stock [5] asymptotic approximation lies in applications with sample sizes typical of macroeconometric (time series) applications.
4.
Communications in Statistics: Theory and Methods, 2013, 42(12): 2699-2705
In this paper we introduce a class of estimators which includes the ordinary least squares (OLS), the principal components regression (PCR), and the Liu estimator [1]. In particular, we show that our new estimator is superior, in the scalar mean-squared error (MSE) sense, to the Liu estimator, to the OLS estimator, and to the PCR estimator.
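For reference, the Liu estimator [1] shrinks the OLS estimator through a parameter d; a minimal sketch (hypothetical names, assuming NumPy) is:

```python
import numpy as np

def liu_estimator(X, y, d):
    """Liu estimator: (X'X + I)^{-1} (X'y + d * beta_ols).
    Setting d = 1 recovers ordinary least squares."""
    XtX = X.T @ X
    beta_ols = np.linalg.solve(XtX, X.T @ y)
    p = X.shape[1]
    return np.linalg.solve(XtX + np.eye(p), X.T @ y + d * beta_ols)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, 2.0, -1.0]) + rng.standard_normal(50)
print(liu_estimator(X, y, d=0.5))
```

Setting d = 1 gives back OLS, which is one way a class of estimators like the one in the abstract can nest the OLS case.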
5.
Communications in Statistics: Theory and Methods, 2013, 42(10): 2023-2032
We developed an alternative random permutation testing method for multiple linear regression, which is an improvement over the existing one proposed by [1] or [2].
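A generic permutation test for a multiple linear regression fit can be sketched as follows; this is an illustrative scheme (permuting the response and recomputing R²), not the specific method of [1], [2], or the proposed improvement:

```python
import numpy as np

def permutation_f_test(X, y, n_perm=999, seed=0):
    """Permutation test of the overall regression fit by permuting y.
    (A generic permute-the-response scheme, for illustration only.)"""
    rng = np.random.default_rng(seed)

    def r2(design, resp):
        beta, *_ = np.linalg.lstsq(design, resp, rcond=None)
        resid = resp - design @ beta
        total = (resp - resp.mean()) @ (resp - resp.mean())
        return 1.0 - resid @ resid / total

    observed = r2(X, y)
    exceed = sum(r2(X, rng.permutation(y)) >= observed for _ in range(n_perm))
    return (exceed + 1) / (n_perm + 1)   # permutation p-value

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(40), rng.standard_normal((40, 2))])
y = X @ np.array([0.0, 1.5, 0.0]) + rng.standard_normal(40)
print(permutation_f_test(X, y))
```

More refined schemes permute residuals rather than the raw response, which matters when nuisance covariates are present.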
6.
Communications in Statistics: Theory and Methods, 2013, 42(9): 1789-1799
Abstract In a recent article, Hsueh et al. (Hsueh, H.-M., Liu, J.-P., Chen, J. J. (2001). Unconditional exact tests for equivalence or noninferiority for paired binary endpoints. Biometrics 57:478-483) considered unconditional exact tests for paired binary endpoints. They suggested two statistics, one of which is based on the restricted maximum-likelihood estimator. Properties of these statistics and the related tests are treated in this article.
7.
Communications in Statistics: Theory and Methods, 2013, 42(2): 371-380
Palmer and Broemeling [1] compare Bayes and maximum likelihood estimates of the intraclass correlation (ICC). The prior information in their derivation of the Bayes estimator is placed on the variance components instead of on the ICC itself. This paper finds a Bayes estimator of the ICC with the prior placed on the ICC. Bayes estimates based on three different priors are then compared to the method-of-moments estimate.
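The method-of-moments benchmark mentioned at the end is the usual one-way ANOVA estimator of the ICC; a minimal sketch (assuming equal group sizes and NumPy; names are illustrative) is:

```python
import numpy as np

def icc_moments(data):
    """One-way ANOVA (method-of-moments) estimate of the intraclass
    correlation for an n-groups-by-k-members array of equal group sizes."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    group_means = data.mean(axis=1)
    grand_mean = data.mean()
    msb = k * np.sum((group_means - grand_mean) ** 2) / (n - 1)   # between
    msw = np.sum((data - group_means[:, None]) ** 2) / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(2)
group_effects = rng.standard_normal((30, 1))      # true ICC = 1/(1+0.25) = 0.8
data = group_effects + 0.5 * rng.standard_normal((30, 4))
print(icc_moments(data))
```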
8.
A NOTE ON ESTIMATING THE NUMBER OF SUPERIMPOSED EXPONENTIAL SIGNALS BY THE CROSS-VALIDATION APPROACH
Communications in Statistics: Theory and Methods, 2013, 42(8-9): 1579-1589
In this paper, a procedure based on delete-1 cross-validation is given for estimating the number of superimposed exponential signals. Its limiting behavior is explored, and it is shown that the probability of overestimating the true number of signals is greater than a positive constant for sufficiently large samples. A general procedure based on cross-validation is also presented in which the deletion proceeds according to a collection of subsets of indices. The result is similar to that of delete-1 cross-validation if the number of deletions is fixed. Simulation results are provided for the performance of the procedure when the collections of subsets of indices are chosen as those suggested by Shao [1] in a linear model selection problem.
9.
Communications in Statistics: Simulation and Computation, 2013, 42(4): 787-803
It is known that, in the presence of short-memory components, the estimation of the fractional parameter d in an autoregressive fractionally integrated moving average, ARFIMA(p, d, q), process presents some difficulties (see [1]). In this paper, we continue the efforts made by Smith et al. [1] and Beveridge and Oickle [2] by conducting a simulation study to evaluate the convergence properties of the iterative estimation procedure suggested by Hosking [3]. In this context we consider some semiparametric approaches and a parametric method proposed by Fox and Taqqu [4]. We also investigate the method proposed by Robinson [5] and a modification using the smoothed periodogram function.
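As one concrete example of the semiparametric approaches mentioned, a log-periodogram (GPH-type) regression estimates d from the slope of log I(λ_j) on −2 log λ_j over the lowest m Fourier frequencies. The sketch below uses the plain log-frequency regressor rather than the exact GPH regressor −2 log(2 sin(λ/2)); all names are illustrative:

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH-type) estimate of the memory parameter d:
    regress log I(lambda_j) on -2 log(lambda_j), j = 1..m."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(n ** 0.5)                 # common rule-of-thumb bandwidth
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    periodogram = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    slope = np.polyfit(-2 * np.log(freqs), np.log(periodogram), 1)[0]
    return slope                          # slope estimates d

rng = np.random.default_rng(3)
white = rng.standard_normal(2048)
print(gph_estimate(white))                # white noise has true d = 0
```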
10.
Communications in Statistics: Simulation and Computation, 2013, 42(3): 489-511
We consider the semiparametric regression model introduced by [1]. The dependent variable y is linked to the index x′β through an unknown link function. [1] and [2] present slicing methods (the sliced inverse regression methods SIR-I, SIR-II, and SIRα) for estimating the direction of the unknown slope parameter β. These methods are computationally simple and fast but depend on an arbitrary slicing fixed by the user. When the sample size is small, the number and position of the slices influence the estimated direction. In this paper, we suggest using the corresponding pooled slicing methods: PSIR-I (proposed by [3]), PSIR-II, and PSIRα. These methods combine the results from a number of slicings. We compare the sample behaviour of slicing and pooled slicing methods in simulations. We also propose a practical choice of α for the SIRα and PSIRα methods.
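A minimal SIR-I sketch may help fix ideas: standardize X, slice the data by the order of y, and take the leading eigenvector of the between-slice covariance of slice means. This is a generic illustration (names and the simulated model are hypothetical), not the pooled-slicing procedure proposed here:

```python
import numpy as np

def sir_direction(X, y, n_slices=5):
    """SIR-I estimate of the index direction: leading eigenvector of the
    between-slice covariance of slice means of standardized X."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    L = np.linalg.cholesky(np.cov(X, rowvar=False))
    Z = np.linalg.solve(L, Xc.T).T            # standardized predictors
    slices = np.array_split(np.argsort(y), n_slices)
    M = sum(len(s) / n * np.outer(Z[s].mean(axis=0), Z[s].mean(axis=0))
            for s in slices)                  # between-slice covariance
    top = np.linalg.eigh(M)[1][:, -1]         # leading eigenvector
    b = np.linalg.solve(L.T, top)             # back to the original scale
    return b / np.linalg.norm(b)

rng = np.random.default_rng(7)
X = rng.standard_normal((500, 3))
beta = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
y = (X @ beta) ** 3 + 0.1 * rng.standard_normal(500)
print(sir_direction(X, y))
```

Pooled slicing, as described in the abstract, would combine the matrix M over several different slicings before extracting the eigenvector.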
11.
Communications in Statistics: Theory and Methods, 2013, 42(6): 1019-1030
ABSTRACT In this paper, we present a modified Kelly and Rice method for testing synergism. This approach is consistent with Berenbaum's [1-3] framework for additivity. The delta method [4] is applied to obtain the estimated variance of the predicted additivity proportion. A Monte Carlo simulation study evaluating the method's performance, i.e., global overall tests for synergism, is also discussed. Kelly and Rice [5] do not provide a correct test statistic because the variance is underestimated; hence, based on the simulation findings, the performance of the Kelly-Rice [5] method is generally anti-conservative. In addition, for larger sample sizes the overall test of synergism with χ²(r) from the modified Kelly and Rice method is better than that with χ²(1).
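The delta method invoked via [4] approximates Var g(θ̂) by ∇g(θ̂)′ Cov(θ̂) ∇g(θ̂); a small self-contained illustration (a ratio of two independent proportions, not the synergism setting itself) is:

```python
import numpy as np

def delta_method_var(grad, cov):
    """First-order delta-method variance of g(theta_hat):
    grad(g)' Cov(theta_hat) grad(g)."""
    grad = np.asarray(grad, dtype=float)
    return grad @ np.asarray(cov, dtype=float) @ grad

# Illustration: variance of the ratio p1/p2 of two independent proportions.
p1, p2, n1, n2 = 0.3, 0.5, 200, 200
cov = np.diag([p1 * (1 - p1) / n1, p2 * (1 - p2) / n2])
grad = np.array([1 / p2, -p1 / p2 ** 2])   # gradient of g(p1, p2) = p1/p2
print(delta_method_var(grad, cov))
```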
12.
This paper generalizes the weight-fused elastic net (Fu and Xu, 2012), which performs group variable selection by combining the weight-fused LASSO (wfLasso) and elastic net (Zou and Hastie, 2005) penalties. In this study, the elastic net penalty is replaced by the adaptive elastic net penalty (AdaEnet) (Zou and Zhang, 2009), and a new group variable selection algorithm with the oracle property (Fan and Li, 2001; Zou, 2006) is obtained.
13.
Communications in Statistics: Theory and Methods, 2013, 42(5): 1177-1182
ABSTRACT Hoerl and Kennard (1970a) introduced the ridge regression estimator as an alternative to the ordinary least squares estimator in the presence of multicollinearity. In this article, a new approach for choosing the ridge parameter (K) when multicollinearity exists among the columns of the design matrix is suggested and evaluated by simulation, in terms of mean squared error (MSE). A number of factors that may affect the properties of these methods have been varied. The MSE from this approach has been shown to be smaller than that obtained using Hoerl and Kennard (1970a) in almost all situations.
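For context, the ridge estimator and one classical Hoerl-Kennard-style choice of the ridge parameter can be sketched as follows; the k below (σ̂² divided by the largest squared OLS coefficient) is illustrative and is not the new approach proposed in this article:

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimator (X'X + kI)^{-1} X'y; k = 0 gives OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def hk_ridge_parameter(X, y):
    """Hoerl-Kennard-type choice k = sigma_hat^2 / max_i(beta_ols_i^2)."""
    n, p = X.shape
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta_ols
    sigma2 = resid @ resid / (n - p)
    return sigma2 / np.max(beta_ols ** 2)

rng = np.random.default_rng(4)
X = rng.standard_normal((60, 4))
y = X @ np.array([2.0, 1.0, 0.5, 0.0]) + rng.standard_normal(60)
k = hk_ridge_parameter(X, y)
print(k, ridge(X, y, k))
```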
14.
Marcelo de Paula, Communications in Statistics: Theory and Methods, 2013, 42(19): 5762-5786
ABSTRACT In this article, we propose an approach for incorporating continuous and discrete original outcome distributions into the usual exponential family regression models. The new approach is an extension of the works of Suissa (1991) and Suissa and Blais (1995), which present methods to estimate the risk of an event defined in a sample subspace of an original continuous outcome variable. Simulation studies are presented to illustrate the performance of the developed methodology, and real data sets are analyzed using the proposed models.
15.
Econometric Reviews, 2013, 32(3): 383-393
ABSTRACT This paper considers the computation of fitted values and marginal effects in the Box-Cox regression model. Two methods, (1) the "smearing" technique suggested by Duan [10] and (2) direct numerical integration, are examined and compared with the "naive" method often used in econometrics.
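Duan's smearing idea is simplest to see in the log-linear special case of the Box-Cox model (λ = 0): retransform the fitted values and multiply by the average of the exponentiated residuals. A sketch under that assumption (names illustrative):

```python
import numpy as np

def smearing_fitted(X, log_y):
    """Duan-style smearing estimate of E[y|x] for a log-linear model:
    exp(fitted) times the mean of exp(residual) over all residuals."""
    beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
    fitted = X @ beta
    resid = log_y - fitted
    return np.exp(fitted) * np.mean(np.exp(resid))

rng = np.random.default_rng(8)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
log_y = X @ np.array([1.0, 0.5]) + 0.3 * rng.standard_normal(200)
naive = np.exp(X @ np.linalg.lstsq(X, log_y, rcond=None)[0])
print(smearing_fitted(X, log_y)[:3], naive[:3])
```

The "naive" retransformation exp(x′β̂) ignores the residual distribution and understates E[y|x] whenever the errors are nondegenerate, which is exactly what the smearing factor corrects.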
16.
Gülesen Üstündağ Şiray, Communications in Statistics: Theory and Methods, 2013, 42(22): 4742-4756
Omission of relevant explanatory variables and multicollinearity in regression models are very serious problems in applied work. Some papers examine multicollinearity and the misspecification due to omission of relevant explanatory variables concurrently. To remedy the problem of multicollinearity, Kaçıranlar and Sakallıoğlu (2001) proposed the r-d class estimator, which includes the ordinary least squares, principal components regression, and Liu estimators as special cases. The aim of this paper is to examine the performance of the r-d class estimator in misspecified linear models.
17.
Communications in Statistics: Theory and Methods, 2013, 42(10): 1951-1980
Abstract The heteroskedasticity-consistent covariance matrix estimator proposed by White [White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica 48:817-838], also known as HC0, is commonly used in practical applications and is implemented in a number of statistical software packages. Cribari-Neto et al. [Cribari-Neto, F., Ferrari, S. L. P., Cordeiro, G. M. (2000). Improved heteroskedasticity-consistent covariance matrix estimators. Biometrika 87:907-918] have developed a bias-adjustment scheme that delivers bias-corrected White estimators. There are several variants of the original White estimator that are also commonly used by practitioners. These include the HC1, HC2, and HC3 estimators, which have proven to have superior small-sample behavior relative to White's estimator. This paper defines a general bias-correction mechanism that can be applied not only to White's estimator but also to its variants, such as HC1, HC2, and HC3. Numerical evidence on the usefulness of the proposed corrections is also presented. Overall, the results favor the sequence of improved HC2 estimators.
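The HC0-HC3 family differs only in how the squared OLS residuals are weighted in the "meat" of the sandwich; a compact sketch (assuming NumPy; this is not the bias-correction mechanism proposed in the paper) is:

```python
import numpy as np

def hc_covariance(X, y, kind="HC0"):
    """White-type heteroskedasticity-consistent covariance estimators
    HC0-HC3 for OLS (HC0 is White's 1980 estimator)."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta                                # OLS residuals
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)     # hat-matrix diagonal
    if kind == "HC0":
        w = u ** 2
    elif kind == "HC1":
        w = u ** 2 * n / (n - p)                    # degrees-of-freedom scaling
    elif kind == "HC2":
        w = u ** 2 / (1 - h)                        # leverage adjustment
    elif kind == "HC3":
        w = u ** 2 / (1 - h) ** 2                   # jackknife-type adjustment
    else:
        raise ValueError(kind)
    meat = X.T @ (X * w[:, None])
    return XtX_inv @ meat @ XtX_inv                 # sandwich estimator

rng = np.random.default_rng(5)
X = np.column_stack([np.ones(100), rng.standard_normal(100)])
y = X @ np.array([1.0, 2.0]) + np.abs(X[:, 1]) * rng.standard_normal(100)
print(np.sqrt(np.diag(hc_covariance(X, y, "HC3"))))
```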
18.
Communications in Statistics: Theory and Methods, 2013, 42(4): 857-873
ABSTRACT This article considers three practical hypotheses involving the equicorrelation matrix for grouped normal data. We obtain statistics and computing formulae for common test procedures such as the score test and the likelihood ratio test. In addition, statistics and computing formulae are obtained for various small sample procedures as proposed in Skovgaard (2001). The properties of the tests for each of the three hypotheses are compared using Monte Carlo simulations.
19.
Communications in Statistics: Theory and Methods, 2013, 42(8-9): 1661-1674
Based on Bradley Efron's observation that individual resamples in the regular bootstrap have support on approximately 63% of the original observations, C. R. Rao, P. K. Pathak, and V. I. Koltchinskii [1] have proposed a sequential resampling scheme. This sequential bootstrap stabilizes the information content of each resample by fixing the number of unique observations and letting N, the number of observations in each resample, vary. The Rao-Pathak-Koltchinskii paper establishes the asymptotic correctness (consistency) of the sequential bootstrap. The main object of our investigation is to study the empirical properties of the Rao-Pathak-Koltchinskii sequential bootstrap as compared with the regular bootstrap. In all our settings, the sequential bootstrap performs as well as or better than the regular bootstrap. In the particular case where we estimate standard errors of sample medians, we find that the sequential bootstrap outperforms the regular bootstrap by reducing variability in the final bootstrap estimates.
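A sketch of one sequential resample in the spirit of the Rao-Pathak-Koltchinskii scheme: draw with replacement until the resample contains ⌈n(1 − e⁻¹)⌉ distinct original observations, so the resample size N is random. Details of the published scheme may differ; names here are illustrative:

```python
import numpy as np

def sequential_resample(data, rng):
    """One sequential resample: draw with replacement until the resample
    holds m = ceil(n(1 - 1/e)) distinct original observations."""
    data = np.asarray(data)
    n = len(data)
    target = int(np.ceil(n * (1 - np.exp(-1.0))))
    seen, resample = set(), []
    while len(seen) < target:
        i = rng.integers(n)
        seen.add(i)
        resample.append(data[i])
    return np.array(resample)           # random length N >= target

rng = np.random.default_rng(6)
data = rng.standard_normal(100)
boot_medians = [np.median(sequential_resample(data, rng)) for _ in range(200)]
print(np.std(boot_medians))             # bootstrap s.e. of the sample median
```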
20.
Communications in Statistics: Theory and Methods, 2013, 42(5): 799-813
The local influence approach of Cook [1] to regression diagnostics is developed and discussed, and compared with Cook's [2] deletion approach. The ability of the local influence approach to handle cases simultaneously, as well as some of its theoretical and practical difficulties, is reviewed. The perturbation ideas of the approach are applied to the linear model, making a distinction between local perturbations on the assumptions of the model and on the data.