Similar literature
20 similar documents found (search time: 31 ms)
1.
Empirical Bayes (EB) estimates in general linear mixed models are useful for small area estimation because they increase the precision with which small area means are estimated. However, one potential difficulty of EB is that the overall estimate for a larger geographical area, based on a (weighted) sum of EB estimates, is not necessarily identical to the corresponding direct estimate such as the overall sample mean. Another difficulty is that EB estimates over-shrink, so that their sampling variance is smaller than the posterior variance. One way to fix these problems is the benchmarking approach based on constrained empirical Bayes (CEB) estimators, which satisfy the constraints that the aggregated mean and variance equal the requested values of mean and variance. In this paper, we treat general mixed models, derive asymptotic approximations of the mean squared error (MSE) of CEB, and provide second-order unbiased estimators of the MSE based on the parametric bootstrap method. These results are applied to natural exponential families with quadratic variance functions. As a specific example, the Poisson-gamma model is treated, and real mortality data illustrate that the CEB estimates and their MSE estimates work well.
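As a minimal illustration of the benchmarking idea, a difference adjustment that forces the weighted mean of EB estimates to equal a benchmark can be sketched as follows; this covers only the mean constraint (the CEB estimator of the paper additionally constrains the aggregated variance), and the function name and simple additive form are illustrative assumptions, not the paper's estimator:

```python
def benchmark_difference(eb, weights, target_mean):
    """Shift EB estimates by a common constant so that their weighted
    mean equals the benchmark value. Mean constraint only: the CEB
    estimator of the paper also matches a requested variance."""
    current = sum(w * t for w, t in zip(weights, eb)) / sum(weights)
    shift = target_mean - current
    return [t + shift for t in eb]
```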

2.
Confidence intervals for the difference of two binomial proportions are well known; however, confidence intervals for the weighted sum of two binomial proportions have been studied far less. We develop and compare seven methods for constructing confidence intervals for the weighted sum of two independent binomial proportions. The interval estimates are constructed by inverting the Wald test, the score test and the likelihood ratio test. Since the weights may be negative, our results generalize those for the difference between two independent proportions. We provide a numerical study showing that these confidence intervals, although based on large-sample approximations, perform very well even when a relatively small amount of data is available. The intervals based on inverting the score test showed the best performance. Finally, we show that, as for the difference of two binomial proportions, adding four pseudo-outcomes to the Wald interval for the weighted sum of two binomial proportions improves its coverage significantly, and we provide a justification for this correction.
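A sketch of the Wald-type interval for w1*p1 + w2*p2 with the four-pseudo-outcome correction; the exact allocation of the pseudo-outcomes in the paper is not shown here, so the Agresti-Caffo convention (one success and one failure added to each sample) is assumed:

```python
from statistics import NormalDist

def wald_interval_weighted_sum(x1, n1, x2, n2, w1, w2, alpha=0.05, pseudo=True):
    """Wald-type CI for w1*p1 + w2*p2 from two independent binomial samples.
    With pseudo=True, one success and one failure are added to each sample
    (four pseudo-outcomes in total, Agresti-Caffo style)."""
    if pseudo:
        x1, n1, x2, n2 = x1 + 1, n1 + 2, x2 + 1, n2 + 2
    p1, p2 = x1 / n1, x2 / n2
    est = w1 * p1 + w2 * p2
    se = (w1 ** 2 * p1 * (1 - p1) / n1 + w2 ** 2 * p2 * (1 - p2) / n2) ** 0.5
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return est - z * se, est + z * se
```

With w1 = 1 and w2 = -1 this reduces to the familiar adjusted Wald interval for a difference of two proportions.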

3.
Inference for a generalized linear model is generally performed using asymptotic approximations for the bias and the covariance matrix of the parameter estimators. For small experiments, these approximations can be poor and can yield estimators with considerable bias. We investigate the properties of designs for small experiments when the response follows a simple logistic regression model and the parameters are estimated by the maximum penalized likelihood method of Firth [Firth, D., 1993, Bias reduction of maximum likelihood estimates. Biometrika, 80, 27–38]. Although this method reduces the bias, we illustrate that the remaining bias may still be substantial in small experiments, and we propose minimization of the integrated mean squared error, based on Firth's estimates, as a suitable criterion for design selection. This approach is used to find locally optimal two-support-point designs.

4.
For a one-way mixed Gaussian ANOVA model we prove local asymptotic normality and local asymptotic minimaxity of the maximum likelihood estimator (MLE) and of certain iterative approximations to it. These iterative estimates are shown to converge in probability to the MLE at a geometric rate. Asymptotically optimal designs for large samples are also studied.

5.
We consider a bootstrap method for Markov chains where the original chain is broken into a (random) number of cycles based on an atom (regeneration point) and the bootstrap scheme resamples from these cycles. We investigate the asymptotic accuracy of this method for the case of a sum (or a sample mean) related to the Markov chain. Under some standard moment conditions, the method is shown to be at least as good as the normal approximation, and better (second-order accurate) in the case of nonlattice summands. We give three examples to illustrate the applicability of our results.
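A minimal sketch of the regenerative bootstrap for a sample mean, assuming a discrete chain and a designated atom state; the resampling rule used here (draw cycles with replacement until the total length reaches the original length) is one common convention and may differ in detail from the paper's scheme:

```python
import random

def regeneration_cycles(chain, atom):
    """Split a Markov-chain path into cycles, each starting at a visit
    to the atom; the incomplete final cycle is dropped."""
    cycles, current, started = [], [], False
    for x in chain:
        if x == atom:
            if started:
                cycles.append(current)
            current, started = [], True
        if started:
            current.append(x)
    return cycles

def bootstrap_means(chain, atom, n_boot=1000, rng=None):
    """Bootstrap replicates of the sample mean, resampling whole cycles
    with replacement (assumes the atom is visited at least twice)."""
    rng = rng or random.Random(0)
    cycles = regeneration_cycles(chain, atom)
    n = sum(len(c) for c in cycles)
    means = []
    for _ in range(n_boot):
        total_len, total_sum = 0, 0.0
        while total_len < n:
            c = rng.choice(cycles)
            total_len += len(c)
            total_sum += sum(c)
        means.append(total_sum / total_len)
    return means
```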

6.
Pareto sampling was introduced by Rosén in the late 1990s. It is a simple method to obtain a fixed-size πps sample, though with inclusion probabilities only approximately equal to the desired ones. Sampford sampling, introduced by Sampford in 1967, gives the desired inclusion probabilities exactly, but generating a sample may take time. Using probability functions and Laplace approximations, we show that from a probabilistic point of view these two designs are very close to each other and asymptotically identical. A Sampford sample can be generated rapidly in all situations by passing a Pareto sample through an acceptance-rejection filter. A new, very efficient method to generate conditional Poisson (CP) samples appears as a byproduct. Further, it is shown how the inclusion probabilities of all orders for the Pareto design can be calculated from those of the CP design. A new, explicit and very accurate approximation of the second-order inclusion probabilities, valid for several designs, is presented and applied to obtain single-sum-type variance estimates of the Horvitz-Thompson estimator.
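Rosén's Pareto πps draw itself is short enough to sketch (the acceptance-rejection filter that converts it into an exact Sampford sample is omitted); here `p` holds target inclusion probabilities in (0, 1) summing to the integer sample size `n`:

```python
import random

def pareto_sample(p, n, rng=None):
    """Pareto pi-ps sampling (Rosén): each unit i receives the ranking
    variable Q_i = (U_i/(1-U_i)) / (p_i/(1-p_i)) for U_i ~ Uniform(0,1),
    and the n units with smallest Q_i are selected. Actual inclusion
    probabilities are only approximately equal to p."""
    rng = rng or random.Random(0)
    q = []
    for i, pi in enumerate(p):
        u = rng.random()
        q.append(((u / (1 - u)) / (pi / (1 - pi)), i))
    q.sort()
    return sorted(i for _, i in q[:n])
```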

7.
We study the asymptotic behavior of a weighted sum of correlated chi-squared random variables. Both chi-squared and normal distributions are shown to approximate the exact distribution, with the two approximations obtained by matching the first two cumulants. A simulation study compares the numerical performance of the two approximations; we find that the chi-squared approximation outperforms the normal one.
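Matching the first two cumulants to a scaled chi-square can be sketched for the independent case; for correlated summands, as treated in the paper, the second cumulant must also include covariance terms, which this sketch omits:

```python
def satterthwaite(weights, dofs):
    """Match the first two cumulants of sum_i w_i * chi2(d_i)
    (independent case) to a scaled chi-square g * chi2(h):
    mean = sum w_i d_i, variance = 2 sum w_i^2 d_i."""
    mu = sum(w * d for w, d in zip(weights, dofs))
    var = 2 * sum(w * w * d for w, d in zip(weights, dofs))
    g = var / (2 * mu)
    h = 2 * mu * mu / var
    return g, h
```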

8.
In this paper, we discuss the problem of constructing designs that maximize the accuracy of nonparametric curve estimation in the possible presence of heteroscedastic errors. Our approach exploits the flexibility of wavelet approximations: the unknown response curve is approximated by its wavelet expansion, thereby eliminating the mathematical difficulty associated with the unknown structure. Only finitely many parameters of the resulting wavelet representation can be estimated by weighted least squares, and the bias arising from this truncation compounds the natural variation of the estimates. Robust minimax designs and weights are then constructed to minimize mean-squared-error-based loss functions of the estimates. We find the periodic and symmetric properties of the Euclidean norm of the multiwavelet system useful in eliminating some of the mathematical difficulties involved; these properties lead us to restrict the search for robust minimax designs to a specific class of symmetric designs. We also construct minimum-variance unbiased designs and weights which minimize the loss functions subject to a side condition of unbiasedness. We discuss an example from the nonparametric literature.

9.
We propose a weighted delete-one-cluster jackknife framework for data with few clusters and severe cluster-level heterogeneity. The proposed method estimates the mean for a condition by a weighted sum of the estimates from the individual jackknife replicates. The influence of a heterogeneous cluster can thus be down-weighted appropriately, and the conditional mean estimated with higher precision. An algorithm for estimating the variance of the proposed estimator is also provided, followed by a cluster permutation test for assessing the condition effect. Our simulation studies demonstrate that the proposed framework has good operating characteristics.
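The core estimator can be sketched as follows; how the paper chooses the replicate weights for heterogeneous clusters is not specified here, so they are passed in as an argument:

```python
def weighted_cluster_jackknife(clusters, weights):
    """Estimate a condition mean by a weighted sum of delete-one-cluster
    jackknife estimates. `weights` has one entry per deleted cluster and
    should sum to 1; down-weighting the replicate that keeps a
    heterogeneous cluster reduces its influence."""
    k = len(clusters)
    loo_means = []
    for j in range(k):
        pooled = [x for i, c in enumerate(clusters) if i != j for x in c]
        loo_means.append(sum(pooled) / len(pooled))
    return sum(w * m for w, m in zip(weights, loo_means))
```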

10.
This article investigates the finite-sample properties of a range of inference methods for propensity-score-based matching and weighting estimators frequently applied to evaluate the average treatment effect on the treated. We analyze both asymptotic approximations and bootstrap methods for computing variances and confidence intervals in our simulation designs, which are based on German register data and U.S. survey data. We vary the designs with respect to treatment selectivity, effect heterogeneity, share of treated, and sample size. The results suggest that, in general, theoretically justified bootstrap procedures (i.e., wild bootstrapping for pair matching and standard bootstrapping for "smoother" treatment effect estimators) dominate the asymptotic approximations in terms of coverage rates for both matching and weighting estimators. Most findings are robust across simulation designs and estimators.

11.
We discuss the effects of model misspecifications on higher-order asymptotic approximations of the distribution of estimators and test statistics. In particular we show that small deviations from the model can wipe out the nominal improvements of the accuracy obtained at the model by second-order approximations of the distribution of classical statistics. Although there is no guarantee that the first-order robustness properties of robust estimators and tests will carry over to second-order in a neighbourhood of the model, the behaviour of robust procedures in terms of second-order accuracy is generally more stable and reliable than that of their classical counterparts. Finally, we discuss some related work on robust adjustments of the profile likelihood and outline the role of computer algebra in this type of research.

12.
In this paper we obtain an asymptotic expression for the upper tail area of the distribution of an infinite weighted sum of chi-square random variables and show how this can be applied to the distributions of various goodness-of-fit test statistics. Results obtained by this general approach are comparable with those reported previously in the literature. In the case of the Cramér-von Mises statistic, an empirical adjustment is given which significantly improves on previous approximations. For the Kuiper statistic the corresponding empirical adjustment leads to an existing highly accurate approximation.
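A plain Monte Carlo baseline for the upper tail of a weighted sum of chi-square variables, useful for checking analytic tail approximations such as the one derived in the paper; truncating the infinite sum to the supplied weights is an assumption of this sketch:

```python
import random

def tail_prob_weighted_chisq(weights, x, n_sim=20000, rng=None):
    """Monte Carlo estimate of P(sum_i w_i * Z_i^2 > x) with Z_i iid
    N(0, 1); a simulation baseline against which analytic upper-tail
    approximations can be checked."""
    rng = rng or random.Random(1)
    count = 0
    for _ in range(n_sim):
        s = sum(w * rng.gauss(0, 1) ** 2 for w in weights)
        if s > x:
            count += 1
    return count / n_sim
```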

13.
Robust estimates for the parameters in the general linear model are proposed, based on weighted rank statistics. The method minimizes a dispersion function defined by a weighted Gini's mean difference. The asymptotic distribution of the estimate is derived together with an asymptotic linearity result. An influence function is determined to measure how the weights can reduce the influence of high-leverage points. The weights can also be used to base the ranking on a restricted set of comparisons. This is illustrated in several examples with stratified samples, treatment versus control groups, and ordered alternatives.

14.
Asymptotic distributions of the standardized estimators of the squared and non-squared multiple correlation coefficients under nonnormality were obtained using Edgeworth expansions up to O(1/n). Conditions for the normal-theory asymptotic biases and variances to hold under nonnormality were derived in terms of the parameter values and a weighted sum of the cumulants of the associated variables. The condition on the cumulants indicates a compensatory effect that yields robust normal-theory lower-order cumulants. Simulations were performed to assess the usefulness of the asymptotic expansion formulas, using a model with asymptotic robustness under nonnormality; the approximations given by the Edgeworth expansions proved satisfactory.

15.
One of the general problems in clinical trials and mortality studies is the comparison of competing risks. Most of the test statistics used for independent and dependent risks with censored data belong to the multivariate version of the class of weighted linear rank tests. In this paper, we introduce saddlepoint approximations as accurate and fast approximations to the exact p-values of this class of tests, in place of asymptotic calculations and permutation simulations. Real-data examples and extensive simulation studies show the accuracy and stability of the saddlepoint approximations across different lifetime distributions, sample sizes, and censoring scenarios.

16.
A fundamental issue in applied multivariate extreme value analysis is modelling dependence within joint tail regions. The primary focus of this work is to extend the classical pseudopolar treatment of multivariate extremes to develop an asymptotically motivated representation of extremal dependence that also encompasses asymptotic independence. Starting with the usual mild bivariate regular variation assumptions that underpin the coefficient of tail dependence as a measure of extremal dependence, our main result is a characterization of the limiting structure of the joint survivor function in terms of an essentially arbitrary non-negative measure that must satisfy some mild constraints. We then construct parametric models from this new class and study in detail one example that accommodates asymptotic dependence, asymptotic independence and asymmetry within a straightforward parsimonious parameterization. We provide a fast simulation algorithm for this example and detail likelihood-based inference including tests for asymptotic dependence and symmetry which are useful for submodel selection. We illustrate this model by application to both simulated and real data. In contrast with the classical multivariate extreme value approach, which concentrates on the limiting distribution of normalized componentwise maxima, our framework focuses directly on the structure of the limiting joint survivor function and provides significant extensions of both the theoretical and the practical tools that are available for joint tail modelling.

17.
The authors explore likelihood-based methods for making inferences about the components of variance in a general normal mixed linear model. In particular, they use local asymptotic approximations to construct confidence intervals for the components of variance when the components are close to the boundary of the parameter space. In the process, they explore the question of how to profile the restricted likelihood (REML). Also, they show that general REML estimates are less likely to fall on the boundary of the parameter space than maximum-likelihood estimates and that the likelihood-ratio test based on the local asymptotic approximation has higher power than the likelihood-ratio test based on the usual chi-squared approximation. They examine the finite-sample properties of the proposed intervals by means of a simulation study.

18.
A class of test statistics is introduced which is sensitive against the alternative of stochastic ordering in the two-sample censored-data problem. The test statistics, which evaluate a cumulative weighted difference in survival distributions, are developed while taking into account imbalances in baseline covariates between the two groups. This procedure can be used to test the null hypothesis of no treatment effect, especially when baseline hazards cross and prognostic covariates need to be adjusted for. The statistics are semiparametric, not rank-based, and can be written as integrated weighted differences in estimated survival functions, where the survival estimates are adjusted for covariate imbalances. The asymptotic distribution theory of the tests is developed, yielding test procedures that are shown to be consistent under a fixed alternative. The choice of weight function is discussed and relies on stability and interpretability considerations. An example taken from a clinical trial for acquired immune deficiency syndrome is presented.

19.
Methods for analyzing and modeling count-data time series are used in various fields of practice and are particularly relevant for applications in finance and economics. We consider the binomial AR(1) model for count-data processes with a first-order autoregressive dependence structure and a binomial marginal distribution. We present four approaches for estimating its model parameters from given time series data and derive expressions for the asymptotic distributions of these estimators. We then investigate the finite-sample performance of the estimators, and of the respective asymptotic approximations, in a simulation study, including a discussion of the 2-block jackknife. We illustrate our methods and findings with a real-data example on transactions at the Korean stock market, and conclude with an application of our results to obtaining reliable estimates of process capability indices.
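The binomial AR(1) model can be simulated via binomial thinning; this sketch assumes the standard parameterization X_t = α∘X_{t-1} + β∘(N - X_{t-1}) with α = p + ρ(1-p) and β = p(1-ρ), which keeps the Binomial(N, p) marginal (the paper's four estimation approaches are not reproduced here):

```python
import random

def simulate_binomial_ar1(N, p, rho, T, rng=None):
    """Simulate a binomial AR(1) path of length T via thinning:
    X_t = alpha o X_{t-1} + beta o (N - X_{t-1}), with
    alpha = p + rho*(1-p), beta = p*(1-rho); requires rho such that
    both alpha and beta lie in [0, 1]."""
    rng = rng or random.Random(0)
    alpha = p + rho * (1 - p)
    beta = p * (1 - rho)
    def binom(n, q):
        # draw from Binomial(n, q) by summing Bernoulli trials
        return sum(rng.random() < q for _ in range(n))
    x = binom(N, p)  # start from the stationary Binomial(N, p) marginal
    path = [x]
    for _ in range(T - 1):
        x = binom(x, alpha) + binom(N - x, beta)
        path.append(x)
    return path
```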

20.
The Bayesian design approach accounts for uncertainty about the parameter values on which an optimal design depends, but Bayesian designs themselves depend on the choice of prior distribution for those parameter values. This article investigates Bayesian D-optimal designs for two-parameter logistic models using numerical search. We show three things: (1) a prior with large variance leads to a design that remains highly efficient under other priors, (2) uniform and normal priors lead to equally efficient designs, and (3) designs with four or five equidistant, equally weighted design points are highly efficient relative to the Bayesian D-optimal designs.
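Evaluating a candidate design under the Bayesian D-criterion reduces to averaging log det of the logistic information matrix over prior draws; a sketch for the model η = a + bx (the numerical search over designs itself, as performed in the article, is not shown):

```python
import math

def bayesian_d_criterion(points, weights, prior_draws):
    """Bayesian D-criterion E_theta[log det M(theta, design)] for the
    two-parameter logistic model eta = a + b*x, where the 2x2 information
    matrix is M = sum_i w_i p_i (1-p_i) [[1, x_i], [x_i, x_i^2]].
    `prior_draws` is a list of (a, b) samples from the prior."""
    total = 0.0
    for a, b in prior_draws:
        m11 = m12 = m22 = 0.0
        for x, w in zip(points, weights):
            p = 1 / (1 + math.exp(-(a + b * x)))
            v = w * p * (1 - p)
            m11 += v
            m12 += v * x
            m22 += v * x * x
        total += math.log(m11 * m22 - m12 * m12)
    return total / len(prior_draws)
```

Larger values are better; a numerical search would maximize this over the design points and weights.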


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号