Similar Articles
14 similar articles found
1.
Two standard mixed models with interactions are discussed. When each is viewed in the context of superpopulation models, the mixed-models controversy is resolved. The tests suggested by the expected mean squares under the constrained-parameters model are correct for testing the main effects and interactions under both the unconstrained- and constrained-parameters models.

2.
This paper develops an exact random-permutation method for testing both interaction and main effects in the two-way ANOVA model. The method can be regarded as a substantial improvement over previous proposals such as Still and White (1981) and ter Braak (1992). A simulation experiment was conducted to check the statistical performance of the proposed method, which works relatively well for small sample sizes compared with existing methods. This work was supported by Korea Science and Engineering Foundation Grant (R14-2003-002-0100).
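As a hypothetical illustration only (not the authors' exact scheme; the function name, statistic, and simulated design are assumptions for the example), a naive permutation test for the interaction in a balanced two-way layout can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def interaction_stat(y, a, b):
    # sum of squared interaction residuals of the cell means:
    # (cell mean - row mean - column mean + grand mean)
    levels_a, levels_b = np.unique(a), np.unique(b)
    cell = np.array([[y[(a == i) & (b == j)].mean() for j in levels_b]
                     for i in levels_a])
    row = cell.mean(axis=1, keepdims=True)
    col = cell.mean(axis=0, keepdims=True)
    return ((cell - row - col + cell.mean()) ** 2).sum()

# hypothetical balanced 2x2 design, 10 replicates per cell, no true interaction
a = np.repeat([0, 0, 1, 1], 10)
b = np.tile(np.repeat([0, 1], 10), 2)
y = 1.0 * a + 0.5 * b + rng.normal(size=40)

obs = interaction_stat(y, a, b)
perms = [interaction_stat(rng.permutation(y), a, b) for _ in range(999)]
p = (1 + sum(t >= obs for t in perms)) / (1 + len(perms))
```

Permuting the raw responses also destroys the main effects, which is precisely the difficulty that Still and White (1981), ter Braak (1992), and the method of this paper address with more careful permutation schemes.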

3.
Long-term experiments are commonly used tools in agronomy, soil science and other disciplines for comparing the effects of different treatment regimes over an extended length of time. Periodic measurements, typically annual, are taken on experimental units and are often analysed by using customary tools and models for repeated measures. These models contain nothing that accounts for the random environmental variations that typically affect all experimental units simultaneously and can alter treatment effects. This added variability can dominate that from all other sources and can adversely influence the results of a statistical analysis and interfere with its interpretation. The effect that this has on the standard repeated measures analysis is quantified by using an alternative model that allows for random variations over time. This model, however, is not useful for analysis because the random effects are confounded with fixed effects that are already in the repeated measures model. Possible solutions are reviewed and recommendations are made for improving statistical analysis and interpretation in the presence of these extra random variations.

4.
This paper concerns maximum likelihood estimation for the semiparametric shared gamma frailty model, that is, the Cox proportional hazards model with the hazard function multiplied by a gamma random variable with mean 1 and variance θ. A hybrid ML-EM algorithm is applied to 26 400 simulated samples of 400 to 8000 observations with Weibull hazards. The hybrid algorithm is much faster than the standard EM algorithm, faster than standard direct maximum likelihood (ML, Newton-Raphson) for large samples, and gives almost identical results to the penalised likelihood method in S-PLUS 2000. When the true value θ0 of θ is zero, the estimates of θ are asymptotically distributed as a 50:50 mixture between a point mass at zero and a normal random variable on the positive axis. When θ0 > 0, the asymptotic distribution is normal. However, for small samples, simulations suggest that the estimates of θ are approximately distributed as an x : (100 − x)% mixture, 0 ≤ x ≤ 50, between a point mass at zero and a normal random variable on the positive axis even for θ0 > 0. In light of this, p-values and confidence intervals need to be adjusted accordingly. We indicate an approximate method for carrying out the adjustment.
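For the boundary case the abstract describes, the standard adjustment halves the naive chi-square tail probability. A minimal sketch, assuming a likelihood-ratio statistic and the 50:50 mixture asymptotics (`boundary_pvalue` is a hypothetical helper name, not from the paper):

```python
import math

def boundary_pvalue(lrt):
    # p-value for H0: theta = 0 when the LRT statistic is asymptotically a
    # 50:50 mixture of a point mass at zero and a chi-square(1) variable
    if lrt <= 0:
        return 1.0
    # chi-square(1) survival function via the normal tail:
    # P(chi2_1 > t) = P(|Z| > sqrt(t)) = erfc(sqrt(t / 2))
    return 0.5 * math.erfc(math.sqrt(lrt / 2))
```

For example, `boundary_pvalue(3.8415)` is about 0.025, half the usual 0.05, so the nominal 5% critical value drops from 3.84 to roughly 2.71.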

5.
Regression models that account for main state effects and nested county effects are considered for the assessment of farmland values. Empirical predictors obtained by replacing the unknown variances in the formulas of the optimal predictors by maximum likelihood estimates are presented. The computations are carried out by simple iterations between two SAS procedures. Estimators for the prediction variances are derived, and a modification to secure the robustness of the predictors is proposed. The procedure is applied to data on nonirrigated cropland in the Corn Belt states and is shown to yield predictors with considerably lower prediction mean squared errors than the survey estimators and other regression-type estimators.

6.
Detecting the number of signals and estimating their parameters is an important problem in signal processing. Many papers have appeared in the last twenty years on estimating the parameters of sinusoidal components, but much less attention has been given to estimating the number of terms present in a sinusoidal signal. Fuchs developed a criterion based on a perturbation analysis of the data autocorrelation matrix to estimate the number of sinusoids, which is in some sense a subjective method. Recently Reddy and Biradar proposed two criteria based on AIC and MDL and developed an analytical framework for analysing the performance of these criteria. In this paper we develop a method using extended-order modelling and the singular value decomposition technique, similar to that of Reddy and Biradar. We use a penalty-function technique, but instead of a fixed penalty such as AIC or MDL, a class of penalty functions satisfying certain special properties is used. We prove that any penalty function from this class gives a consistent estimate under the assumption that the error random variables are independent and identically distributed with mean zero and finite variance. We also obtain the probabilities of wrong detection for any particular penalty function under somewhat weaker assumptions than those of Reddy and Biradar or Kaveh et al. This gives some guidance for choosing the proper penalty function for a particular model. Simulations are performed to verify the usefulness of the analysis and to compare our method with existing ones.
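A rough sketch of the general idea (extended-order data matrix, singular values, and a penalty that grows with the sample size). The matrix construction, the penalty c·k·log n, and all constants are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

rng = np.random.default_rng(1)
n, L = 256, 20
t = np.arange(n)
# one real sinusoid (a rank-2 signal: two complex exponentials) plus noise
y = np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.normal(size=n)

# extended-order (Hankel-type) data matrix and its singular values
X = np.array([y[i:i + L] for i in range(n - L + 1)])
s = np.linalg.svd(X, compute_uv=False)

def select_order(s, n, c):
    # residual energy after keeping k singular values, plus a penalty
    # c * k * log(n); any penalty growing to infinity but of smaller order
    # than n falls in the consistent class the abstract alludes to
    crit = [np.sum(s[k:] ** 2) + c * k * np.log(n) for k in range(len(s))]
    return int(np.argmin(crit))

rank = select_order(s, n, c=5.0)   # estimated signal-subspace rank
num_sinusoids = rank // 2          # each real sinusoid contributes rank 2
```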

7.
In designed experiments and in particular longitudinal studies, the aim may be to assess the effect of a quantitative variable such as time on treatment effects. Modelling treatment effects can be complex in the presence of other sources of variation. Three examples are presented to illustrate an approach to analysis in such cases. The first example is a longitudinal experiment on the growth of cows under a factorial treatment structure where serial correlation and variance heterogeneity complicate the analysis. The second example involves the calibration of optical density and the concentration of a protein DNase in the presence of sampling variation and variance heterogeneity. The final example is a multienvironment agricultural field experiment in which a yield–seeding rate relationship is required for several varieties of lupins. Spatial variation within environments, heterogeneity between environments and variation between varieties all need to be incorporated in the analysis. In this paper, the cubic smoothing spline is used in conjunction with fixed and random effects, random coefficients and variance modelling to provide simultaneous modelling of trends and covariance structure. The key result that allows coherent and flexible empirical model building in complex situations is the linear mixed model representation of the cubic smoothing spline. An extension is proposed in which trend is partitioned into smooth and non-smooth components. Estimation and inference, the analysis of the three examples and a discussion of extensions and unresolved issues are also presented.

8.
Modeling data that are non-normally distributed with random effects is the major challenge in analyzing binomial data in split-plot designs. Seven methods for analyzing such data using mixed, generalized linear, or generalized linear mixed models are compared for the size and power of the tests. This study shows that analyzing random effects properly is more important than adjusting the analysis for non-normality. Methods based on mixed and generalized linear mixed models hold Type I error rates better than generalized linear models. Mixed model methods tend to have higher power than generalized linear mixed models when the sample size is small.

9.
The Akaike Information Criterion (AIC) is developed for selecting the variables of the nested error regression model, where an unobservable random effect is present. Using the idea of decomposing the likelihood into the two parts of "within" and "between" analysis of variance, we derive the AIC when the number of groups is large and the ratio of the variances of the random effects and the random errors is an unknown parameter. The proposed AIC is compared, using simulation, with Mallows' Cp, Akaike's AIC, and Sugiura's exact AIC. Based on the rates of selecting the true model, it is shown that the proposed AIC performs better.
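For context, classical AIC-based variable selection in an ordinary regression (with no random effect; the simulated covariates and candidate set are assumptions for the illustration) looks like this. The paper's contribution is an analogous criterion that also handles the "within"/"between" decomposition and the unknown variance ratio:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x1, x2, x3 = rng.normal(size=(3, n))
y = 2.0 + 1.5 * x1 - 0.8 * x2 + rng.normal(size=n)   # x3 is irrelevant

def aic(y, X):
    # Gaussian-likelihood AIC for a least-squares fit: n*log(RSS/n) + 2k
    n_obs, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n_obs * np.log(rss / n_obs) + 2 * k

ones = np.ones(n)
candidates = {
    "x1": np.column_stack([ones, x1]),
    "x1+x2": np.column_stack([ones, x1, x2]),
    "x1+x2+x3": np.column_stack([ones, x1, x2, x3]),
}
best = min(candidates, key=lambda name: aic(y, candidates[name]))
```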

10.
Analysis of means (ANOM) is a powerful tool for comparing means and variances in fixed-effects models. The graphical exhibit of ANOM is a great advantage because of its interpretability and its ability to evaluate the practical significance of the mean effects. However, the presence of random factors may be problematic for the ANOM method. In this paper, we propose an ANOM approach that can be applied to test random effects in many different balanced statistical models, including fixed-, random- and mixed-effects models. The proposed approach utilizes the range of the treatment averages to identify the dispersions of the underlying populations. The power performance of the proposed procedure is compared with the analysis of variance (ANOVA) approach in a wide range of situations via a Monte Carlo simulation study. Illustrative examples are used to demonstrate the usefulness of the proposed approach and its graphical exhibits, provide meaningful interpretations, and discuss the statistical and practical significance of factor effects.
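For reference, the classical fixed-effects ANOM decision limits (not the paper's range-based random-effects extension) can be sketched as follows; a Bonferroni-style normal critical value stands in here for the exact ANOM tables:

```python
import math
import statistics

def anom_limits(groups, alpha=0.05):
    # classical ANOM for means: flag any group whose average falls outside
    # grand_mean +/- h * s * sqrt((k - 1) / (k * n)); h below is a
    # Bonferroni normal approximation to the exact ANOM critical value
    k = len(groups)
    n = len(groups[0])                      # assumes a balanced layout
    means = [statistics.fmean(g) for g in groups]
    grand = statistics.fmean(means)
    s = math.sqrt(statistics.fmean([statistics.variance(g) for g in groups]))
    h = statistics.NormalDist().inv_cdf(1 - alpha / (2 * k))
    margin = h * s * math.sqrt((k - 1) / (k * n))
    return grand - margin, grand + margin, means
```

Any group mean outside the returned (lower, upper) band is declared significantly different from the grand mean, which is what the ANOM chart displays graphically.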

11.
The variance of short-term systematic measurement errors for the difference of paired data is estimated. The difference of paired data is determined by subtracting the measurement results of two methods that measure the same item only once, without measurement repetition. The unbiased estimators for short-term systematic measurement error variances based on the one-way random effects model are not fit for practical use because they can be negative. The estimators derived here, for balanced as well as unbalanced data, are always positive but biased; they are likewise based on the one-way random effects model. The biases, variances, and mean squared errors of the positive estimators are derived, as are their estimators. The positive estimators are fit for practical use.

12.
Regression models with random effects are proposed for the joint analysis of negative binomial and ordinal longitudinal data with nonignorable missing values under a fully parametric framework. The presented model simultaneously considers a multivariate probit regression model for the missing mechanisms, which makes it possible to examine the missing-data assumptions, and a multivariate mixed model for the responses. Random effects are used to take into account the correlation between longitudinal responses of the same individual. A full likelihood-based approach that yields maximum likelihood estimates of the model parameters is used. The model is applied to medical data obtained from an observational study on women, where the correlated responses are the ordinal response of osteoporosis of the spine and the negative binomial response of the joint damage count. The sensitivity of the results to the assumptions is also investigated, and the effects of several covariates on all responses are examined simultaneously.

13.
We consider the problem of testing hypotheses on the difference of the coefficients of variation from several two-armed experiments with normally distributed outcomes. In particular, we deal with testing the homogeneity of the difference of the coefficients of variation and testing the equality of the difference of the coefficients of variation to a specified value. The test statistics proposed are derived in a limiting one-way classification with fixed effects and heteroscedastic error variances, using results from analysis of variance. By way of simulation, the performance of these test statistics is compared for both testing problems considered.
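The abstract's statistics come from a limiting one-way ANOVA; a simpler Wald-type alternative for the second hypothesis (not the authors' statistic) uses the normal-theory asymptotic variance of a sample coefficient of variation, cv²(0.5 + cv²)/n:

```python
import math
import statistics

def cv_diff_z(x1, x2, delta0=0.0):
    # Wald-type z statistic for H0: CV(x1) - CV(x2) = delta0, using the
    # normal-theory asymptotic variance cv^2 * (0.5 + cv^2) / n of a
    # sample coefficient of variation (illustrative, not the paper's test)
    def cv_and_var(x):
        cv = statistics.stdev(x) / statistics.fmean(x)
        return cv, cv * cv * (0.5 + cv * cv) / len(x)
    cv1, v1 = cv_and_var(x1)
    cv2, v2 = cv_and_var(x2)
    return (cv1 - cv2 - delta0) / math.sqrt(v1 + v2)
```

Comparing |z| with a standard normal quantile gives an approximate large-sample test of whether the CV difference equals the specified value delta0.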

14.
The Stein effect, that one could improve frequentist risk by combining "independent" problems, has long been an intriguing paradox in statistics. We briefly review the Bayesian view of the paradox and indicate that previous justifications of the Stein effect, through concerns of "Bayesian robustness," were misleading. In the course of doing so, several existing robust Bayesian and Stein-effect estimators are compared for a variety of situations.
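The frequentist side of the paradox is easy to reproduce: for p ≥ 3 normal means, the positive-part James-Stein estimator dominates the maximum likelihood estimator in total squared-error risk. A self-contained simulation (the dimension, seed, and replication count are arbitrary choices for the illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
p, reps = 10, 2000
theta = np.zeros(p)                  # true means (shrinkage helps most here)
mle_risk = js_risk = 0.0
for _ in range(reps):
    x = rng.normal(theta, 1.0)       # one N(theta_i, 1) draw per coordinate
    # positive-part James-Stein: shrink x toward the origin
    shrink = max(0.0, 1.0 - (p - 2) / np.sum(x ** 2))
    js = shrink * x
    mle_risk += np.sum((x - theta) ** 2)
    js_risk += np.sum((js - theta) ** 2)
mle_risk /= reps                     # close to p = 10, the MLE's risk
js_risk /= reps                      # substantially smaller
```

Even though each coordinate is an "independent" estimation problem, combining them through the common shrinkage factor lowers the total risk, which is exactly the paradox the paper discusses.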

