Similar Literature
20 similar documents found.
1.
Uncertainty and sensitivity analysis is an essential ingredient of model development and applications. For many uncertainty and sensitivity analysis techniques, sensitivity indices are calculated based on a relatively large sample to measure the importance of parameters in their contributions to uncertainties in model outputs. To statistically compare their importance, it is necessary that uncertainty and sensitivity analysis techniques provide standard errors of estimated sensitivity indices. In this paper, a delta method is used to analytically approximate standard errors of estimated sensitivity indices for a popular sensitivity analysis method, the Fourier amplitude sensitivity test (FAST). Standard errors estimated based on the delta method were compared with those estimated based on 20 sample replicates. We found that the delta method can provide a good approximation for the standard errors of both first-order and higher-order sensitivity indices. Finally, based on the standard error approximation, we also proposed a method to determine a minimum sample size to achieve the desired estimation precision for a specified sensitivity index. The standard error estimation method presented in this paper can make the FAST analysis computationally much more efficient for complex models.
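The abstract does not reproduce the FAST-specific derivation, but the delta-method step it builds on is easy to illustrate. Below is a minimal Python sketch (all names hypothetical, not the paper's code) that propagates the covariance of an estimator through a differentiable function to obtain an approximate standard error:

```python
import numpy as np

def delta_method_se(g_grad, cov_theta):
    """First-order delta-method standard error of g(theta_hat).

    g_grad: gradient of g evaluated at theta_hat (1-D array).
    cov_theta: covariance matrix of theta_hat.
    Var[g(theta_hat)] ~ grad' Cov(theta_hat) grad to first order.
    """
    g_grad = np.asarray(g_grad, dtype=float)
    var_g = g_grad @ np.asarray(cov_theta, dtype=float) @ g_grad
    return np.sqrt(var_g)

# Example: SE of a ratio r = a/b from estimates (a_hat, b_hat).
a_hat, b_hat = 2.0, 5.0
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
grad = np.array([1.0 / b_hat, -a_hat / b_hat**2])  # d(a/b)/d(a,b)
print(delta_method_se(grad, cov))
```

Applied to the FAST sensitivity-index estimators, the same first-order idea yields the analytic standard errors that the paper compares against replicate-based ones.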

2.
The commonly made assumption that all stochastic error terms in the linear regression model share the same variance (homoskedasticity) is oftentimes violated in practical applications, especially when they are based on cross-sectional data. As a precaution, a number of practitioners choose to base inference on the parameters that index the model on tests whose statistics employ asymptotically correct standard errors, i.e. standard errors that are asymptotically valid whether or not the errors are homoskedastic. In this paper, we use numerical integration methods to evaluate the finite-sample performance of tests based on different (alternative) heteroskedasticity-consistent standard errors. Emphasis is placed on a few recently proposed heteroskedasticity-consistent covariance matrix estimators. Overall, the results favor the HC4 and HC5 heteroskedasticity-robust standard errors. We also consider the use of restricted residuals when constructing asymptotically valid standard errors. Our results show that the only test that clearly benefits from such a strategy is the HC0 test.
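As a concrete, hedged illustration of the sandwich form behind these estimators, the Python sketch below (hypothetical names, not the paper's code) implements White-type robust standard errors for OLS with HC0 weights and the leverage-adjusted HC3 variant; the HC4 and HC5 estimators favoured by the paper instead use leverage-dependent exponents in the adjustment.

```python
import numpy as np

def hc_standard_errors(X, y, kind="HC0"):
    """Heteroskedasticity-consistent SEs for OLS (sandwich estimator).

    HC0 uses squared residuals directly; HC3 inflates them by
    (1 - h_ii)^-2, where h_ii are the hat-matrix leverages.
    """
    X = np.asarray(X, dtype=float)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)  # leverages h_ii
    if kind == "HC0":
        omega = resid**2
    elif kind == "HC3":
        omega = resid**2 / (1.0 - h) ** 2
    else:
        raise ValueError("only HC0/HC3 in this sketch")
    cov = XtX_inv @ (X.T * omega) @ X @ XtX_inv  # bread-meat-bread
    return beta, np.sqrt(np.diag(cov))
```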

3.
Small-area estimation of poverty-related variables is an increasingly important analytical tool in targeting the delivery of food and other aid in developing countries. We compare two methods for the estimation of small-area means and proportions, namely empirical Bayes and composite estimation, with what has become the international standard method of Elbers, Lanjouw & Lanjouw (2003). In addition to differences among the sets of estimates and associated estimated standard errors, we discuss data requirements, design and model selection issues and computational complexity. The Elbers, Lanjouw and Lanjouw (ELL) method is found to produce broadly similar estimates but to have much smaller estimated standard errors than the other methods. The question of whether these standard error estimates are downwardly biased is discussed. Although the question cannot yet be answered in full, as a precautionary measure it is strongly recommended that the ELL model be modified to include a small-area-level error component in addition to the cluster-level and household-level errors it currently contains. This recommendation is particularly important because the allocation of billions of dollars of aid funding is being determined and monitored via ELL. Under current aid distribution mechanisms, any downward bias in estimates of standard error may lead to allocations that are suboptimal because distinctions are made between estimated poverty levels at the small-area level that are not significantly different statistically.

4.
In this article, we develop an empirical Bayesian approach to the estimation of parameters in four bivariate exponential (BVE) distributions. We adopt a gamma distribution as the prior for the model parameters, with the hyperparameters estimated by the method of moments and by maximum likelihood (MLE). A simulation study was conducted to compute empirical Bayesian estimates of the parameters and their standard errors. Furthermore, we compare the posterior modes of the parameters obtained under different prior distributions; the Bayesian estimates based on gamma priors are much closer to the true values than those based on improper priors. We use an MCMC method to obtain the posterior means and compare them with the estimates under improper priors and with the classical estimates, the MLEs.

5.
In this article, we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo expectation–maximization (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate the parameters of interest, namely sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust information matrix estimates. We compare the adjusted information matrix-based standard error estimates with the bootstrap standard error estimates, both obtained using the fast MCEM algorithm, through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach estimates the standard error similarly to the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group study of significant cervical lesion diagnosis in women with atypical glandular cells of undetermined significance to compare the diagnostic accuracy of a histology-based evaluation, a carbonic anhydrase-IX biomarker-based test, and a human papillomavirus DNA test.
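The paper's MCEM algorithm handles covariate-dependent prevalence; as a simplified, hypothetical illustration of the underlying latent class machinery, the Python sketch below runs plain EM for a two-class model with K conditionally independent binary tests and no covariates.

```python
import numpy as np

def lcm_em(Y, n_iter=200):
    """EM for a two-class latent class model with K conditionally
    independent binary diagnostic tests (no covariates, unlike the
    paper's MCEM version).

    Y: (n, K) 0/1 matrix of test results.
    Returns prevalence pi, sensitivities se (K,), specificities sp (K,).
    """
    n, K = Y.shape
    pi, se, sp = 0.5, np.full(K, 0.8), np.full(K, 0.8)
    for _ in range(n_iter):
        # E-step: posterior probability that each subject is diseased
        l1 = pi * np.prod(se**Y * (1 - se)**(1 - Y), axis=1)
        l0 = (1 - pi) * np.prod((1 - sp)**Y * sp**(1 - Y), axis=1)
        r = l1 / (l1 + l0)
        # M-step: responsibility-weighted updates
        pi = r.mean()
        se = (r[:, None] * Y).sum(axis=0) / r.sum()
        sp = ((1 - r)[:, None] * (1 - Y)).sum(axis=0) / (1 - r).sum()
    return pi, se, sp
```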

6.
The bootstrap, like the jackknife, is a technique for estimating standard errors. The idea is to use Monte Carlo simulation, based on a nonparametric estimate of the underlying error distribution. The bootstrap will be applied to an econometric model describing the demand for capital, labor, energy, and materials. The model is fitted by three-stage least squares. In sharp contrast with previous results, the coefficient estimates and the estimated standard errors perform very well. However, the model's forecasts show serious bias and large random errors, significantly understated by the conventional standard error of forecast.
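The paper bootstraps a three-stage least squares system; a minimal single-equation analogue of the same idea (Monte Carlo resampling from the empirical residual distribution) looks like the following Python sketch, with hypothetical names.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_bootstrap_se(X, y, n_boot=2000):
    """Residual-bootstrap SEs for OLS coefficients.

    Fits the model once, then repeatedly resamples the centred
    residuals (a nonparametric estimate of the error distribution),
    rebuilds pseudo-responses y* = X beta_hat + e*, and refits.
    """
    X, y = np.asarray(X, float), np.asarray(y, float)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    resid -= resid.mean()
    boot = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        y_star = X @ beta + rng.choice(resid, size=len(y), replace=True)
        boot[b] = np.linalg.lstsq(X, y_star, rcond=None)[0]
    return boot.std(axis=0, ddof=1)
```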

7.
The authors show that for balanced data, the estimates of effects of interest and of their standard errors are unaffected when a covariate is removed from a multiplicative Poisson model. As they point out, this does not hold in the analogous linear model, nor in the logistic model. In the first case, only the estimated coefficients remain the same, while in the second case, both the estimated effects and their standard errors can change.

8.
Prediction in multilevel generalized linear models
Summary. We discuss prediction of random effects and of expected responses in multilevel generalized linear models. Prediction of random effects is useful, for instance, in small area estimation and disease mapping, effectiveness studies and model diagnostics. Prediction of expected responses is useful for planning, model interpretation and diagnostics. For prediction of random effects, we concentrate on empirical Bayes prediction and discuss three different kinds of standard errors: the posterior standard deviation and the marginal prediction error standard deviation (comparative standard errors), and the marginal sampling standard deviation (diagnostic standard error). Analytical expressions are available only for linear models and are provided in an appendix. For other multilevel generalized linear models we present approximations and suggest using parametric bootstrapping to obtain standard errors. We also discuss prediction of expectations of responses or probabilities for a new unit in a hypothetical cluster, in a new (randomly sampled) cluster, or in an existing cluster. The methods are implemented in gllamm and illustrated by applying them to survey data on reading proficiency of children nested in schools. Simulations are used to assess the performance of various predictions and associated standard errors for logistic random-intercept models under a range of conditions.

9.
Leave-one-out cross-validation (LOO) and the widely applicable information criterion (WAIC) are methods for estimating pointwise out-of-sample prediction accuracy from a fitted Bayesian model using the log-likelihood evaluated at the posterior simulations of the parameter values. LOO and WAIC have various advantages over simpler estimates of predictive error such as AIC and DIC but are less used in practice because they involve additional computational steps. Here we lay out fast and stable computations for LOO and WAIC that can be performed using existing simulation draws. We introduce an efficient computation of LOO using Pareto-smoothed importance sampling (PSIS), a new procedure for regularizing importance weights. Although WAIC is asymptotically equal to LOO, we demonstrate that PSIS-LOO is more robust in the finite case with weak priors or influential observations. As a byproduct of our calculations, we also obtain approximate standard errors for estimated predictive errors and for comparison of predictive errors between two models. We implement the computations in an R package called loo and demonstrate using models fit with the Bayesian inference package Stan.
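The paper's computations are implemented in the R package loo; as a language-neutral sketch, the Python function below (hypothetical names) computes WAIC and a plain, unsmoothed importance-sampling LOO from an S × n matrix of pointwise log-likelihoods, together with the Monte Carlo standard errors of the elpd sums. The Pareto-smoothing (PSIS) step that stabilises the raw importance weights is deliberately omitted here.

```python
import numpy as np
from scipy.special import logsumexp

def waic_and_is_loo(log_lik):
    """WAIC and plain importance-sampling LOO from an (S, n) matrix of
    pointwise log-likelihoods over S posterior draws.

    This sketch omits the Pareto smoothing (PSIS) the paper introduces
    to regularize the raw importance weights.
    """
    S, n = log_lik.shape
    lppd = logsumexp(log_lik, axis=0) - np.log(S)   # pointwise log pred. density
    p_waic = log_lik.var(axis=0, ddof=1)            # pointwise effective params
    elpd_waic = lppd - p_waic
    # Plain IS-LOO: weights proportional to 1 / p(y_i | theta_s)
    elpd_loo = np.log(S) - logsumexp(-log_lik, axis=0)
    def mc_se(v):                                   # SE of the sum of n pointwise terms
        return np.sqrt(len(v) * v.var(ddof=1))
    return (elpd_waic.sum(), mc_se(elpd_waic)), (elpd_loo.sum(), mc_se(elpd_loo))
```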

10.
One of the areas that has received little attention in index-number theory is the provision of standard errors for index-number estimates. Recently, Clements and Izan and Selvanathan used the stochastic approach to index numbers to derive standard errors for the rate of inflation and for Laspeyres and Paasche index numbers. In this article, we use this approach to compute standard errors associated with purchasing power parities computed using the Geary–Khamis aggregation procedure in the International Comparisons Project of the United Nations. We assess the quality of the standard errors using Efron's bootstrap technique.

11.
General mixed linear models for experiments conducted over a series of sites and/or years are described. The ordinary least squares (OLS) estimator is simple to compute, but is not the best unbiased estimator. Also, the usual formula for the variance of the OLS estimator is not correct and seriously underestimates the true variance. The best linear unbiased estimator is the generalized least squares (GLS) estimator. However, it requires an inversion of the variance-covariance matrix V, which is usually of large dimension. Also, in practice, V is unknown.

We present an estimator V̂ of the matrix V using the estimators of variance components [for sites, blocks (sites), etc.]. We also present a simple transformation of the data, such that an ordinary least squares regression on the transformed data gives the estimated generalized least squares (EGLS) estimator. The standard errors obtained from the transformed regression serve as asymptotic standard errors of the EGLS estimators. We also establish that the EGLS estimator is unbiased.

An example of fitting a linear model to data for 18 sites (environments) located in Brazil is given. One of the site variables (soil test phosphorus) was measured by plot rather than by site, which established the need for a covariance model such as the one used, rather than the usual analysis-of-variance model. It is for this variable that the resulting parameter estimates did not correspond well between the OLS and EGLS estimators. Regression statistics and the analysis of variance for the example are presented and summarized.
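The transformation described two paragraphs above is, in essence, a whitening of the data by the estimated covariance matrix. A minimal Python sketch of that step, assuming V̂ has already been assembled from the variance-component estimates (all names hypothetical, not the paper's code):

```python
import numpy as np

def egls(X, y, V_hat):
    """EGLS via a whitening transformation.

    With L the Cholesky factor of V_hat (V_hat = L L'), OLS on the
    transformed data (L^-1 X, L^-1 y) yields the EGLS estimator, and the
    usual OLS standard errors from the transformed regression serve as
    its asymptotic standard errors. Here V_hat is assumed known up to a
    proportionality constant, estimated by sigma2 below.
    """
    L = np.linalg.cholesky(np.asarray(V_hat, float))
    Xt = np.linalg.solve(L, np.asarray(X, float))
    yt = np.linalg.solve(L, np.asarray(y, float))
    beta, *_ = np.linalg.lstsq(Xt, yt, rcond=None)
    resid = yt - Xt @ beta
    n, p = Xt.shape
    sigma2 = resid @ resid / (n - p)
    cov = sigma2 * np.linalg.inv(Xt.T @ Xt)
    return beta, np.sqrt(np.diag(cov))
```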

12.
Modelling udder infection data using copula models for quadruples
We study copula models for correlated infection times in the four udder quarters of dairy cows. Both a semi-parametric and a nonparametric approach are considered to estimate the marginal survival functions, taking into account the effect of a binary udder quarter level covariate. We use a two-stage estimation approach and we briefly discuss the asymptotic behaviour of the estimators obtained in the first and the second stage of the estimation. A pseudo-likelihood ratio test is used to select an appropriate copula from the power variance copula family that describes the association between the outcomes in a cluster. We propose a new bootstrap algorithm to obtain the p-value for this test. This bootstrap algorithm also provides estimates for the standard errors of the estimated parameters in the copula. The proposed methods are applied to the udder infection data. A small simulation study for a setting similar to the setting of the udder infection data gives evidence that the proposed method provides a valid approach to select an appropriate copula within the power variance copula family.

13.
We consider the issue of performing testing inferences on the parameters that index the linear regression model under heteroskedasticity of unknown form. Quasi-t test statistics use asymptotically correct standard errors obtained from heteroskedasticity-consistent covariance matrix estimators. An alternative approach involves making an assumption about the functional form of the response variances and jointly modelling mean and dispersion effects. In this paper we compare the accuracy of testing inferences made using the two approaches. We consider several different quasi-t tests, and also z tests performed after estimated generalized least squares estimation carried out using three different estimation strategies. The numerical evidence shows that some quasi-t tests are typically considerably less size-distorted in small samples than the tests based on the joint modelling of mean and dispersion effects. Finally, we present and discuss two empirical applications.

14.
Constructing spatial density maps of seismic events, such as earthquake hypocentres, is complicated by the fact that events are not located precisely. In this paper, we present a method for estimating density maps from event locations that are measured with error. The estimator is based on the simulation–extrapolation (SIMEX) method of estimation and is appropriate for location errors that are either homoscedastic or heteroscedastic. A simulation study shows that the estimator outperforms the standard density estimator that ignores location errors in the data, even when location errors are spatially dependent. We apply our method to construct an estimated density map of earthquake hypocentres using data from the Alaska earthquake catalogue.
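To make the simulation–extrapolation idea concrete, here is a hedged one-dimensional Python sketch under the simplifying assumptions of additive, homoscedastic Gaussian location error and a quadratic extrapolant; the paper's spatial, possibly heteroscedastic setting is more general. All names are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

def simex_density(w, sigma_u, grid, lambdas=(0.5, 1.0, 1.5, 2.0), B=50):
    """SIMEX sketch for density estimation under additive location error.

    w: observed locations = true locations + N(0, sigma_u^2) error.
    For each lambda, add extra noise with variance lambda * sigma_u^2,
    average the KDE over B replicates, then extrapolate the fitted
    quadratic trend in lambda back to lambda = -1 (no error).
    """
    lam = np.concatenate(([0.0], lambdas))
    dens = np.empty((len(lam), len(grid)))
    dens[0] = gaussian_kde(w)(grid)
    for k, l in enumerate(lam[1:], start=1):
        reps = [gaussian_kde(w + rng.normal(0.0, np.sqrt(l) * sigma_u, len(w)))(grid)
                for _ in range(B)]
        dens[k] = np.mean(reps, axis=0)
    c2, c1, c0 = np.polyfit(lam, dens, deg=2)  # per-grid-point quadratics
    return c2 - c1 + c0                        # evaluate at lambda = -1
```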

15.
We propose a latent variable model for informative missingness in longitudinal studies that extends the latent dropout class model. In our model, the value of the latent variable is affected by the missingness pattern, and it is also used as a covariate in modeling the longitudinal response; the latent variable thus links the longitudinal response and the missingness process. Unlike in the latent dropout class model, the latent variable here is continuous rather than categorical, and we assume that it follows a normal distribution. The EM algorithm is used to obtain estimates of the parameters of interest, and Gauss–Hermite quadrature is used to approximate the integration over the latent variable. The standard errors of the parameter estimates can be obtained from the bootstrap method or from the inverse of the Fisher information matrix of the final marginal likelihood. Comparisons are made to the mixed model and to complete-case analysis using a clinical trial dataset, the Weight Gain Prevention among Women (WGPW) study. We use generalized Pearson residuals to assess the fit of the proposed latent variable model.
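The quadrature step is standard; the Python sketch below (hypothetical names) shows the change of variables by which Gauss–Hermite nodes and weights approximate an expectation over a normal latent variable, the same device used to integrate the latent variable out of the marginal likelihood.

```python
import numpy as np

def gh_normal_expectation(f, mu=0.0, sigma=1.0, n_nodes=20):
    """Approximate E[f(Z)] for Z ~ N(mu, sigma^2) by Gauss-Hermite quadrature.

    hermgauss targets the integral of exp(-x^2) g(x) dx, so the change
    of variables z = mu + sqrt(2) * sigma * x is applied.
    """
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    z = mu + np.sqrt(2.0) * sigma * x
    return (w * f(z)).sum() / np.sqrt(np.pi)

# Example: E[exp(Z)] for Z ~ N(0, 1) equals exp(1/2) ~ 1.6487.
print(gh_normal_expectation(np.exp))
```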

16.
The problem of error estimation of parameters b in a linear model, Y = Xb + e, is considered when the elements of the design matrix X are functions of an unknown 'design' parameter vector c. An estimated value ĉ is substituted in X to obtain a derived design matrix X̃. Even though the usual linear model conditions are not satisfied with X̃, there are situations in physical applications where the least squares solution to the parameters is used without concern for the magnitude of the resulting error. Such a solution can suffer from serious errors.

This paper examines bias and covariance errors of such estimators. Using a first-order Taylor series expansion, we derive approximations to the bias and covariance matrix of the estimated parameters. The bias approximation is a sum of two terms: one is due to the dependence between ĉ and Y; the other is due to the estimation errors of ĉ and is proportional to b, the parameter being estimated. The covariance matrix approximation, on the other hand, is composed of three components: one component is due to the dependence between ĉ and Y; the second is the covariance matrix Σ_b corresponding to the minimum-variance unbiased estimator of b, as if the design parameters were known without error; and the third is an additional component due to the errors in the design parameters. It is shown that the third error component is directly proportional to bb′. Thus, estimation of large parameters with the wrong design matrix X̃ will have larger errors of estimation. The results are illustrated with a simple linear example.
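The decomposition described above follows from a standard first-order expansion; in generic notation (not the paper's exact derivation), with J(c) the Jacobian of the estimator with respect to the design parameters:

```latex
\hat{b}(\hat{c}) \;\approx\; \hat{b}(c) + J(c)\,(\hat{c} - c),
\qquad
J(c) = \left.\frac{\partial \hat{b}}{\partial c^{\top}}\right|_{c},
```

so that, to first order, the covariance of the estimator picks up three pieces: the baseline Σ_b from Y alone, a term J Cov(ĉ) J′ from the design-parameter errors, and cross terms from any dependence between ĉ and Y, matching the three components the abstract lists.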

17.
In this paper, a linear mixed effects model is used to fit skewed longitudinal data in the presence of dropout. Two distributional assumptions are considered as a basis for heavy-tailed models: a linear mixed model with skew-normal random effects and normal errors, and a linear mixed model with skew-normal errors and normal random effects. An ECM algorithm is developed to obtain the parameter estimates, and an empirical Bayes approach is used to estimate the random effects. A simulation study is implemented to investigate the performance of the presented algorithm. Results of an application are also reported, where standard errors of the estimates are calculated using the bootstrap approach.

18.
A correlation curve measures the strength of the association between two variables locally, at different values of the covariate. This paper studies how to estimate the correlation curve in the multiplicative distortion measurement error setting, where the unobservable variables are both distorted in a multiplicative fashion by an observed confounding variable. We obtain asymptotic normality results for the estimated correlation curve and conduct Monte Carlo simulation experiments to examine the performance of the proposed estimator. The estimated correlation curve is then applied to a real dataset for illustration.

19.
We compare Bayesian and sampling-theory model specification criteria. For the Bayesian criteria we use the deviance information criterion (DIC) and the cumulative density of the mean squared errors of forecast; for the sampling-theory criterion we use the conditional Kolmogorov test (CKT). We use Markov chain Monte Carlo methods to obtain the Bayesian criteria and bootstrap sampling to obtain the CKT. The two non-nested models we consider are the CIR and Vasicek models for spot asset prices. Monte Carlo experiments show that the DIC performs better than the cumulative density of the mean squared errors of forecast and the CKT. According to the DIC and the mean squared errors of forecast, the CIR model explains the daily data on the uncollateralized Japanese call rate from January 1, 1990 to April 18, 1996; but according to the CKT, neither the CIR nor the Vasicek model explains these data.
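Of the criteria above, the DIC is the easiest to reproduce from MCMC output. A minimal Python sketch follows (hypothetical names; the CIR and Vasicek transition-density log-likelihoods themselves are not shown):

```python
import numpy as np

def dic(log_lik_draws, log_lik_at_posterior_mean):
    """Deviance information criterion from MCMC output.

    log_lik_draws: log p(y | theta_s) for each posterior draw s.
    log_lik_at_posterior_mean: log p(y | theta_bar).
    DIC = Dbar + pD, with deviance D = -2 * log-likelihood and
    pD = Dbar - D(theta_bar) the effective number of parameters.
    """
    D_bar = -2.0 * np.mean(log_lik_draws)
    D_hat = -2.0 * log_lik_at_posterior_mean
    p_D = D_bar - D_hat
    return D_bar + p_D, p_D
```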

20.
New multiple comparison with a control (MCC) procedures are developed in repeated measures incomplete block design settings based on R-estimates. It is assumed that the errors within each subject are exchangeable random variables. The R-estimators of the treatment effects are obtained by minimizing a sum of Jaeckel (1972)-type dispersion functions. Based on the R-estimators, Dunnett-type multiple comparison procedures are developed for comparing test-treatments with a control-treatment. Under exchangeable errors, it is demonstrated that for Cox-type designs, the new procedures are more efficient than the existing nonparametric procedures. The new MCC procedures are applied to a data set in a clinical trial which consists of patients with reversible obstructive pulmonary disease.
