Similar Articles
20 similar articles found (search time: 31 ms)
1.
Suppose two Poisson processes with rates γ1 and γ2 are observed for fixed times t1 and t2. This paper considers hypothesis tests and confidence intervals for the parameter ρ = γ2/γ1. Uniformly most powerful unbiased tests and uniformly most accurate unbiased confidence intervals exist for ρ, but they require randomization and so are not used in practice. Several alternative procedures have been proposed. In the context of one-sided hypothesis tests these procedures are reviewed and compared on numerical grounds and by use of the conditionality and repeated sampling principles. It is argued that a conditional binomial test which rejects with conditional level closest to, but not necessarily less than, the nominal α is the most reasonable. This test is different from the usual conditional binomial test, which rejects with conditional level closest to but less than or equal to the nominal α. Numerical results indicate that an approximate procedure based on the Poisson variance-stabilizing transformation has properties similar to the preferred conditional binomial test. Values of λ1 = t1γ1 required to achieve a specified power are considered. These results are also discussed in terms of test inversion to obtain confidence intervals.
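The conditional binomial test discussed above is concrete: given counts x1 and x2 with Xi ~ Poisson(ti γi), conditioning on the total n = x1 + x2 makes X2 binomial with success probability p(ρ) = ρt2/(t1 + ρt2). The sketch below is a minimal illustration of the one-sided conditional test and a crude grid inversion to a confidence interval; the function names, the equal-tail inversion, and the grid are illustrative assumptions, not the paper's preferred construction based on conditional levels closest to the nominal α.

```python
# Minimal sketch of the one-sided conditional binomial test for rho = gamma2/gamma1.
# Conditional on n = x1 + x2, X2 ~ Binomial(n, p) with p = rho*t2 / (t1 + rho*t2).
import numpy as np
from scipy import stats

def conditional_p_value(x1, x2, t1, t2, rho0=1.0):
    """One-sided p-value for H0: rho <= rho0 vs H1: rho > rho0."""
    n = x1 + x2
    p0 = rho0 * t2 / (t1 + rho0 * t2)
    # Probability of observing x2 or more second-process events under H0.
    return stats.binom.sf(x2 - 1, n, p0)

def conditional_ci(x1, x2, t1, t2, level=0.95, grid=np.logspace(-3, 3, 4001)):
    """Crude confidence set for rho obtained by inverting an equal-tail conditional test."""
    alpha = 1 - level
    n = x1 + x2
    keep = []
    for rho in grid:
        p = rho * t2 / (t1 + rho * t2)
        lower_tail = stats.binom.cdf(x2, n, p)
        upper_tail = stats.binom.sf(x2 - 1, n, p)
        if min(lower_tail, upper_tail) > alpha / 2:
            keep.append(rho)
    return (min(keep), max(keep)) if keep else (np.nan, np.nan)

if __name__ == "__main__":
    x1, x2, t1, t2 = 20, 35, 10.0, 10.0   # example counts and exposure times
    print("p-value:", conditional_p_value(x1, x2, t1, t2, rho0=1.0))
    print("95% CI for rho:", conditional_ci(x1, x2, t1, t2))
```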

2.

This paper compares several methods for constructing a confidence interval on contrasts of fixed effects in a balanced three-factor mixed factorial design with one fixed effect and two random effects. In particular, confidence intervals constructed using PROC MIXED of SAS are compared to other intervals that have been proposed in the literature. Computer simulation is used to compare interval lengths, and determine each method's ability to maintain the stated confidence coefficient.

3.
When a two-level multilevel model (MLM) is used for repeated growth data, the individuals constitute level 2 and the successive measurements constitute level 1, which is nested within the individuals that make up level 2. The heterogeneity among individuals is represented by either the random-intercept or random-coefficient (slope) model. The variance components at level 1 involve serial effects and measurement errors under constant variance or heteroscedasticity. This study hypothesizes that missing serial effects and/or heteroscedasticity may bias the results obtained from two-level models. To illustrate this effect, we conducted two simulation studies, where the simulated data were based on the characteristics of an empirical mouse tumour data set. The results suggest that for repeated growth data with constant variance (measurement error) and misspecified serial effects (ρ > 0.3), the proportion of level-2 variation (intra-class correlation coefficient) increases with ρ and the two-level random-coefficient model is the minimum AIC (or AICc) model when compared with the fixed model, heteroscedasticity model, and random-intercept model. In addition, when both the serial effect (ρ > 0.1) and heteroscedasticity are misspecified, the two-level random-coefficient model is the minimum AIC (or AICc) model when compared with the fixed model and random-intercept model. This study demonstrates that missing serial effects and/or heteroscedasticity may indicate heterogeneity among individuals in repeated growth data (mixed or two-level MLM). This issue is critical in biomedical research.

4.
We consider the evaluation of laboratory practice through the comparison of measurements made by participating metrology laboratories when the measurement procedures are considered to have both fixed effects (the residual error due to unrecognised sources of error) and random effects (drawn from a distribution of known variance after correction for all known systematic errors). We show that, when estimating the participant fixed effects, the random effects described can be ignored. We also derive the adjustment to the variance estimates of the participant fixed effects due to these random effects.

5.

In this article we consider the problem of constructing confidence intervals for a linear regression model with unbalanced nested error structure. A popular approach is the likelihood-based method employed by PROC MIXED of SAS. We examine the ability of MIXED to produce confidence intervals that maintain the stated confidence coefficient. Our results suggest that intervals for the regression coefficients work well, but intervals for the variance component associated with the primary level cannot be recommended. Accordingly, we propose alternative methods for constructing confidence intervals on the primary level variance component. Computer simulation is used to compare the proposed methods. A numerical example and SAS code are provided to demonstrate the methods.

6.
A semi-parametric additive model for variance heterogeneity
This paper presents a flexible model for variance heterogeneity in a normal error model. Specifically, both the mean and variance are modelled using semi-parametric additive models. We call this model a Mean And Dispersion Additive Model (MADAM). A successive relaxation algorithm for fitting the model is described and justified as maximizing a penalized likelihood function with penalties for lack of smoothness in the additive non-parametric functions in both mean and variance models. The algorithm is implemented in GLIM4, allowing flexible and interactive modelling of variance heterogeneity. Two data sets are used for demonstration.
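As a rough illustration of the mean-and-dispersion idea (not the GLIM4 implementation, and with simple polynomial columns standing in for the penalized smoothers), one can alternate between a weighted fit for the mean and a gamma GLM with log link for the squared residuals. The design matrices, convergence rule, and simulated data below are assumptions made for the sketch.

```python
# Hedged sketch of alternating mean/dispersion fitting (a MADAM-style iteration):
# the mean is fit by weighted least squares, the log-variance by a gamma GLM
# on squared residuals. Polynomial terms replace the paper's smooth functions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(0, 1, n)
true_mean = 2 + np.sin(2 * np.pi * x)
true_sd = np.exp(-0.5 + 1.5 * x)          # variance heterogeneity in x
y = rng.normal(true_mean, true_sd)

X_mean = np.column_stack([np.ones(n), x, x**2, x**3])   # assumed design for the mean
X_disp = np.column_stack([np.ones(n), x])               # assumed design for log-variance

var_hat = np.full(n, y.var())
for _ in range(20):
    mean_fit = sm.WLS(y, X_mean, weights=1.0 / var_hat).fit()
    resid2 = (y - mean_fit.fittedvalues) ** 2
    disp_fit = sm.GLM(resid2, X_disp,
                      family=sm.families.Gamma(link=sm.families.links.Log())).fit()
    new_var = disp_fit.fittedvalues
    if np.max(np.abs(np.log(new_var) - np.log(var_hat))) < 1e-6:
        var_hat = new_var
        break
    var_hat = new_var

print("mean coefficients:", np.round(mean_fit.params, 3))
print("log-variance coefficients:", np.round(disp_fit.params, 3))
```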

7.

Asymptotic confidence intervals are given for two functions of multinomial outcome probabilities: Gini's diversity measure and Shannon's entropy. “Adjusted” proportions are used in all asymptotic mean and variance formulas, along with a possible logarithmic transformation. Exact confidence coefficients are computed in some cases. Monte Carlo simulation is used in other cases to compare actual coverages to nominal ones. Some recommendations are made.
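For concreteness, delta-method (asymptotic normal) intervals for Shannon's entropy and Gini's diversity have simple closed-form variances. The sketch below uses one common "adjusted proportion" choice (adding 1/2 to each cell count), which is our assumption and not necessarily the adjustment studied in the paper.

```python
# Delta-method confidence intervals for Shannon entropy and Gini (Simpson) diversity
# from multinomial counts, using adjusted proportions (x_i + 0.5)/(n + k/2).
import numpy as np
from scipy import stats

def diversity_cis(counts, level=0.95):
    counts = np.asarray(counts, dtype=float)
    n, k = counts.sum(), counts.size
    p = (counts + 0.5) / (n + 0.5 * k)        # assumed adjustment to avoid zero cells
    z = stats.norm.ppf(0.5 + level / 2)

    H = -np.sum(p * np.log(p))                # Shannon entropy (natural log)
    var_H = (np.sum(p * np.log(p) ** 2) - H ** 2) / n

    D = 1.0 - np.sum(p ** 2)                  # Gini-Simpson diversity
    var_D = 4.0 * (np.sum(p ** 3) - np.sum(p ** 2) ** 2) / n

    return {"entropy": (H - z * np.sqrt(var_H), H + z * np.sqrt(var_H)),
            "gini":    (D - z * np.sqrt(var_D), D + z * np.sqrt(var_D))}

print(diversity_cis([12, 30, 45, 3, 10]))
```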

8.

It is well known that ignoring heteroscedasticity in regression analysis adversely affects the efficiency of estimation and renders the usual procedure for constructing prediction intervals inappropriate. In some applications, such as off-line quality control, knowledge of the variance function is also of considerable interest in its own right. Thus the modeling of variance constitutes an important part of regression analysis. A common practice in modeling variance is to assume that a certain function of the variance can be closely approximated by a function of a known parametric form. The logarithm link function is often used even if it does not fit the observed variation satisfactorily, as other alternatives may yield negative estimated variances. In this paper we propose a rich class of link functions for more flexible variance modeling which alleviates the major difficulty of negative variances. We suggest also an alternative analysis for heteroscedastic regression models that exploits the principle of “separation” discussed in Box (Signal-to-Noise Ratios, Performance Criteria and Transformation. Technometrics 1988, 30, 1–31). The proposed method does not require any distributional assumptions once an appropriate link function for modeling variance has been chosen. Unlike the analysis in Box (Signal-to-Noise Ratios, Performance Criteria and Transformation. Technometrics 1988, 30, 1–31), the estimated variances and their associated asymptotic variances are found in the original metric (although a transformation has been applied to achieve separation in a different scale), making interpretation of results considerably easier.

9.
The among variance component in the balanced one-factor nested components-of-variance model is of interest in many fields of application. Apart from an artificial method that uses a set of random numbers, and is therefore of no use in practical situations, no exact-size confidence interval on the among variance has yet been derived. This paper provides a detailed comparison of three approximate confidence intervals which possess certain desired properties and have been shown to be the better methods among many available approximate procedures. Specifically, the minimum and the maximum of the confidence coefficients for the one- and two-sided intervals of each method are obtained. The expected lengths of the intervals are also compared.
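To make the setting concrete, a widely used approximate interval for the among-group variance in the balanced one-way random model combines the ANOVA estimator (MSA − MSE)/n with a Satterthwaite chi-square approximation. The sketch below shows that classical construction; it is one approximation of this general kind and not necessarily one of the three specific methods compared in the paper.

```python
# Satterthwaite-type approximate CI for the among-group variance sigma_A^2
# in a balanced one-way random effects model with a groups and n obs per group.
import numpy as np
from scipy import stats

def among_variance_ci(data, level=0.95):
    """data: 2-D array, rows = groups, columns = replicates (balanced layout)."""
    data = np.asarray(data, dtype=float)
    a, n = data.shape
    group_means = data.mean(axis=1)
    msa = n * np.sum((group_means - data.mean()) ** 2) / (a - 1)
    mse = np.sum((data - group_means[:, None]) ** 2) / (a * (n - 1))

    sigma_a2 = (msa - mse) / n                      # ANOVA estimator (may be negative)
    # Satterthwaite degrees of freedom for the linear combination (MSA - MSE)/n.
    df = sigma_a2 ** 2 / ((msa / n) ** 2 / (a - 1) + (mse / n) ** 2 / (a * (n - 1)))
    alpha = 1 - level
    lower = df * sigma_a2 / stats.chi2.ppf(1 - alpha / 2, df)
    upper = df * sigma_a2 / stats.chi2.ppf(alpha / 2, df)
    return sigma_a2, (max(lower, 0.0), max(upper, 0.0))

rng = np.random.default_rng(7)
groups = rng.normal(0, 2.0, size=(8, 1)) + rng.normal(0, 1.0, size=(8, 5))
print(among_variance_ci(groups))
```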

10.

In a regression model with a random individual effect and a random time effect, explicit representations of the nonnegative quadratic minimum biased estimators of the corresponding variances are deduced. These estimators always exist and are unique. Moreover, under the assumption of normality of the dependent variable, unbiased estimators of the mean squared errors of the variance estimates are derived. Finally, confidence intervals on the variance components are considered.

11.

Markov processes offer a useful basis for modeling the progression of organisms through successive stages of their life cycle. When organisms are examined intermittently in developmental studies, likelihoods can be constructed based on the resulting panel data in terms of transition probability functions. In some settings, however, organisms cannot be tracked individually due to a difficulty in identifying distinct individuals, and in such cases aggregate counts of the number of organisms in different stages of development are recorded at successive time points. We consider the setting in which such aggregate counts are available for each of a number of tanks in a developmental study. We develop methods which accommodate clustering of the transition rates within tanks using a marginal modeling approach followed by robust variance estimation, and through use of a random effects model. Composite likelihood is proposed as a basis of inference in both settings. An extension which incorporates mortality is also discussed. The proposed methods are shown to perform well in empirical studies and are applied in an illustrative example on the growth of the Arabidopsis thaliana plant.
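To fix ideas, under a time-homogeneous Markov model with generator Q the stage-occupancy probabilities at time t are P(t) = exp(Qt), and under working independence the aggregate stage counts at a follow-up time contribute a multinomial term to a composite-style log-likelihood. The sketch below shows that working-independence likelihood for a simple progressive three-stage model; the generator parameterization and data layout are illustrative assumptions, and the within-tank clustering and mortality extensions discussed in the paper are not implemented.

```python
# Working-independence (composite-likelihood-style) log-likelihood for aggregate
# stage counts under a progressive 3-stage Markov model: 1 -> 2 -> 3.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize
from scipy.special import gammaln

def generator(log_rates):
    l1, l2 = np.exp(log_rates)          # progression intensities (kept positive via exp)
    return np.array([[-l1, l1, 0.0],
                     [0.0, -l2, l2],
                     [0.0, 0.0, 0.0]])

def neg_loglik(log_rates, times, counts):
    """counts[j] = stage counts at times[j]; all organisms assumed to start in stage 1."""
    Q = generator(log_rates)
    ll = 0.0
    for t, c in zip(times, counts):
        p = expm(Q * t)[0]              # occupancy probabilities starting from stage 1
        ll += gammaln(c.sum() + 1) - gammaln(c + 1).sum() + np.sum(c * np.log(p + 1e-12))
    return -ll

times = np.array([2.0, 4.0, 6.0])
counts = np.array([[60, 35, 5],          # aggregate counts per stage at each time
                   [20, 55, 25],
                   [5, 35, 60]])
fit = minimize(neg_loglik, x0=np.log([0.3, 0.3]), args=(times, counts), method="Nelder-Mead")
print("estimated progression rates:", np.exp(fit.x))
```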

12.

In panel data models and other regressions with unobserved effects, fixed effects estimation is often paired with cluster-robust variance estimation (CRVE) to account for heteroscedasticity and un-modeled dependence among the errors. Although asymptotically consistent, CRVE can be biased downward when the number of clusters is small, leading to hypothesis tests with rejection rates that are too high. More accurate tests can be constructed using bias-reduced linearization (BRL), which corrects the CRVE based on a working model, in conjunction with a Satterthwaite approximation for t-tests. We propose a generalization of BRL that can be applied in models with arbitrary sets of fixed effects, where the original BRL method is undefined, and describe how to apply the method when the regression is estimated after absorbing the fixed effects. We also propose a small-sample test for multiple-parameter hypotheses, which generalizes the Satterthwaite approximation for t-tests. In simulations covering a wide range of scenarios, we find that the conventional cluster-robust Wald test can severely over-reject while the proposed small-sample test maintains Type I error close to nominal levels. The proposed methods are implemented in an R package called clubSandwich. This article has online supplementary materials.
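For reference, the basic cluster-robust sandwich estimator that BRL adjusts is straightforward to compute. The sketch below implements the standard CR0 form and the common CR1 small-sample scaling in numpy; the BRL/CR2 adjustment matrices and the Satterthwaite degrees of freedom proposed in the paper are not reproduced here, and the simulated data and names are illustrative.

```python
# Plain OLS with cluster-robust (sandwich) variance: CR0 and the CR1 small-sample scaling.
import numpy as np

def cluster_robust_vcov(X, y, clusters):
    X, y = np.asarray(X, float), np.asarray(y, float)
    n, k = X.shape
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ X.T @ y
    resid = y - X @ beta

    meat = np.zeros((k, k))
    ids = np.unique(clusters)
    for g in ids:
        Xg, eg = X[clusters == g], resid[clusters == g]
        score = Xg.T @ eg
        meat += np.outer(score, score)

    G = len(ids)
    vcov_cr0 = bread @ meat @ bread
    cr1 = (G / (G - 1)) * ((n - 1) / (n - k))      # common small-sample correction factor
    return beta, vcov_cr0, cr1 * vcov_cr0

rng = np.random.default_rng(0)
G, m = 8, 25                                       # few clusters: the regime where CR0/CR1 over-reject
clusters = np.repeat(np.arange(G), m)
x = rng.normal(size=G * m)
u = np.repeat(rng.normal(size=G), m)               # cluster-level error component
y = 1.0 + 0.5 * x + u + rng.normal(size=G * m)
X = np.column_stack([np.ones(G * m), x])
beta, v0, v1 = cluster_robust_vcov(X, y, clusters)
print("beta:", beta, "CR1 SEs:", np.sqrt(np.diag(v1)))
```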

13.

The performances of six confidence intervals for estimating the arithmetic mean of a lognormal distribution are compared using simulated data. The first interval considered is based on an exact method and is recommended in U.S. EPA guidance documents for calculating upper confidence limits for contamination data. Two intervals are based on asymptotic properties due to the Central Limit Theorem, and the other three are based on transformations and maximum likelihood estimation. The effects of departures from lognormality on the performance of these intervals are also investigated. The gamma distribution is considered to represent departures from the lognormal distribution. The average width and coverage of each confidence interval are reported for varying mean, variance, and sample size. In the lognormal case, the exact interval gives good coverage, but for small sample sizes and large variances the confidence intervals are too wide. In these cases, an approximation that incorporates sampling variability of the sample variance tends to perform better. When the underlying distribution is a gamma distribution, the intervals based upon the Central Limit Theorem tend to perform better than those based upon lognormal assumptions.
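Two simple interval types of the kinds compared here are easy to write down: a naive CLT interval on the original scale and a log-scale (Cox-type) interval built around exp(ȳ + s²/2). The sketch below shows both as one plausible reading of those approaches; the exact interval recommended in the EPA guidance is not reproduced.

```python
# Two simple confidence intervals for the arithmetic mean of lognormal data:
# a naive CLT interval on the raw scale and a Cox-type log-scale interval.
import numpy as np
from scipy import stats

def lognormal_mean_cis(x, level=0.95):
    x = np.asarray(x, float)
    n = x.size
    z = stats.norm.ppf(0.5 + level / 2)

    # Naive CLT interval on the original scale.
    clt = (x.mean() - z * x.std(ddof=1) / np.sqrt(n),
           x.mean() + z * x.std(ddof=1) / np.sqrt(n))

    # Cox-type method: work with y = log(x); the arithmetic mean is exp(mu + sigma^2/2).
    y = np.log(x)
    ybar, s2 = y.mean(), y.var(ddof=1)
    est = ybar + s2 / 2
    se = np.sqrt(s2 / n + s2 ** 2 / (2 * (n - 1)))
    cox = (np.exp(est - z * se), np.exp(est + z * se))
    return {"clt": clt, "cox": cox}

rng = np.random.default_rng(3)
sample = rng.lognormal(mean=1.0, sigma=1.2, size=30)
print(lognormal_mean_cis(sample))
```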

14.

This paper is devoted to the fixed block effects model analysed with most of the classical designs. First, we find regularity conditions for such designs. Then, we obtain explicitly all the least squares estimators of the model. Particular attention is given to orthogonal blocked designs and their optimal properties.

15.
The paper deals with generalized confidence intervals for the between-group variance in one-way heteroscedastic (unbalanced) ANOVA with random effects. The approach used mimics the standard one applied in mixed linear models with two variance components, where interval estimators are based on a minimal sufficient statistic derived after an initial reduction by the principle of invariance. A minimal sufficient statistic under heteroscedasticity is found to resemble its homoscedastic counterpart and further analogies between heteroscedastic and homoscedastic cases lead us to two classes of fiducial generalized pivots for the between-group variance. The procedures suggested formerly by Wimmer and Witkovský [Between group variance component interval estimation for the unbalanced heteroscedastic one-way random effects model, J. Stat. Comput. Simul. 73 (2003), pp. 333–346] and Li [Comparison of confidence intervals on between group variance in unbalanced heteroscedastic one-way random models, Comm. Statist. Simulation Comput. 36 (2007), pp. 381–390] are found to belong to these two classes. We comment briefly on some of their properties that were not mentioned in the original papers. In addition, properties of another particular generalized pivot are considered.

16.
The method of target estimation developed by Cabrera and Fernholz [(1999). Target estimation for bias and mean square error reduction. The Annals of Statistics, 27(3), 1080–1104.] to reduce bias and variance is applied to logistic regression models of several parameters. The expectation functions of the maximum likelihood estimators for the coefficients in the logistic regression models of one and two parameters are analyzed and simulations are given to show a reduction in both bias and variability after targeting the maximum likelihood estimators. In addition to bias and variance reduction, it is found that targeting can also correct the skewness of the original statistic. An example based on real data is given to show the advantage of using target estimators for obtaining better confidence intervals of the corresponding parameters. The notion of the target median is also presented with some applications to the logistic models.
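The targeting idea can be illustrated in one dimension with a small Monte Carlo: the target estimate is the parameter value at which the expectation of the MLE, estimated by simulation with common random numbers, equals the observed MLE. The sketch below does this for the slope of a simple logistic model; the model, sample size, simulation budget, and root-finding setup are illustrative assumptions and not the authors' algorithm.

```python
# Monte Carlo illustration of target estimation: solve E_theta[MLE] = observed MLE.
import numpy as np
from scipy.optimize import brentq
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 80
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
true_beta = np.array([0.0, 1.0])
y_obs = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))

def mle_slope(y):
    return sm.Logit(y, X).fit(disp=0).params[1]

obs_mle = mle_slope(y_obs)

def expected_mle(slope, n_sim=200, seed=123):
    """Estimate E[MLE of slope] at beta = (0, slope) using common random numbers."""
    sim_rng = np.random.default_rng(seed)
    u = sim_rng.uniform(size=(n_sim, n))
    p = 1 / (1 + np.exp(-(0.0 + slope * x)))
    vals = []
    for b in range(n_sim):
        y_sim = (u[b] < p).astype(float)
        try:
            vals.append(mle_slope(y_sim))
        except Exception:          # skip rare non-convergent replicates
            continue
    return np.mean(vals)

# Target estimate: root of expected_mle(theta) - observed MLE over a wide bracket.
target = brentq(lambda b: expected_mle(b) - obs_mle, 0.01, 5.0, xtol=1e-3)
print("MLE:", round(obs_mle, 3), "target estimate:", round(target, 3))
```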

17.
The traditional method for estimating or predicting linear combinations of the fixed effects and realized values of the random effects in mixed linear models is first to estimate the variance components and then to proceed as if the estimated values of the variance components were the true values. This two-stage procedure gives unbiased estimators or predictors of the linear combinations provided the data vector is symmetrically distributed about its expected value and provided the variance component estimators are translation-invariant and are even functions of the data vector. The standard procedures for estimating the variance components yield even, translation-invariant estimators.

18.

Research in many disciplines involves data with spatially correlated observations. Spatial dependence violates the independent errors assumption required for techniques such as the standard one-way analysis of variance for a completely randomized design. The testing methodology within the correlated errors approach has not been investigated within a spatial context. For one-way fixed effects analysis of variance, a permutation test and tests associated with the correlated errors approach are investigated through simulation. No single test was superior with respect to both power and size but the standard Wald F test and a simple adjustment to it performed well overall.
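The permutation version of the one-way ANOVA F test is simple to state: recompute the F statistic under random relabelings of the group assignments and compare the observed statistic with that permutation distribution. The sketch below is a generic implementation under an exchangeability assumption; it does not build in any spatial correlation structure, which is precisely the complication the simulation study examines.

```python
# Permutation test for one-way fixed effects ANOVA: reference distribution of the
# F statistic under random permutation of group labels.
import numpy as np
from scipy import stats

def permutation_f_test(values, groups, n_perm=5000, seed=0):
    values = np.asarray(values, float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    f_obs = stats.f_oneway(*[values[groups == g] for g in labels]).statistic

    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(groups)
        f_perm = stats.f_oneway(*[values[perm == g] for g in labels]).statistic
        count += f_perm >= f_obs
    return f_obs, (count + 1) / (n_perm + 1)       # add-one permutation p-value

rng = np.random.default_rng(42)
g = np.repeat(["A", "B", "C"], 20)
y = rng.normal(loc=np.repeat([0.0, 0.3, 0.6], 20), scale=1.0)
print(permutation_f_test(y, g))
```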

19.
In a one-way fixed effects analysis of variance model, when normal variances are unknown and possibly unequal, a one-sided range test for testing the null hypothesis H0: μ1 = … = μk against an ordered alternative Ha: μ1 ≤ … ≤ μk by a single-stage and a two-stage procedure, respectively, is proposed. The critical values under H0 and the power under a specific alternative are calculated. The relation between the single-stage and two-stage test procedures is discussed. A numerical example to illustrate these procedures is given.

20.

This paper considers an optimal investment-reinsurance problem with default risk under the mean-variance criterion. We assume that the insurer is allowed to purchase proportional reinsurance and invest his/her surplus in a risk-free asset, a stock and a defaultable bond. The goal is to maximize the expectation and minimize the variance of the terminal wealth. We first formulate the problem as a stochastic linear-quadratic (LQ) control problem with constraints. The optimal investment-reinsurance strategies and the corresponding value functions are then obtained via the viscosity solutions of the Hamilton-Jacobi-Bellman (HJB) equations for the post-default and pre-default cases, respectively. Finally, we provide numerical examples to illustrate the effects of the model parameters on the optimal strategies and value functions.
