Similar Documents
18 similar documents found (search time: 26 ms)
1.
We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; i.e. all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

2.
Female labor participation models have usually been studied through probit and logit specifications. Little attention has been paid to verifying the assumptions used in these sorts of models, chiefly the distributional assumptions and homoskedasticity. In this paper we apply semiparametric methods to test these hypotheses. We also estimate a Spanish female labor participation model using both parametric and semiparametric approaches. The parametric models include fixed- and random-coefficients probit specifications. The estimation procedures are parametric maximum likelihood for both the probit and logit models, and semiparametric quasi-maximum likelihood following Klein and Spady (1993). The results depend crucially on the assumed model.
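The Klein and Spady (1993) estimator has no standard library implementation, so the sketch below shows only the parametric probit side that the abstract compares against, on simulated participation data; the design, coefficients, and sample size are illustrative assumptions, not the Spanish data.

```python
# A minimal sketch of the parametric probit baseline on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
X = sm.add_constant(rng.normal(size=(n, 2)))          # intercept + two covariates
beta = np.array([0.5, 1.0, -0.8])                     # illustrative true coefficients
y = (X @ beta + rng.normal(size=n) > 0).astype(int)   # latent-index participation rule

probit = sm.Probit(y, X).fit(disp=0)                  # parametric maximum likelihood
print(probit.params)                                  # should be close to beta
```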

3.
We formulate a traditional growth and yield model as a Bayes model. We attempt to introduce as few new assumptions as possible. Zellner's Bayesian method of moments procedure is used, because the published model did not include any distributional assumptions. We generate predictive posterior samples for a number of stand variables using the Gibbs sampler. The means of the samples compare favorably with the predictions from the published model. In addition, our model delivers distributions of outcomes, from which it is easy to establish measures of uncertainty, such as highest posterior density regions.
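Zellner's Bayesian method of moments is not a packaged routine, so this sketch only illustrates the Gibbs-sampling mechanics the abstract relies on, applied to a conjugate normal linear model with a flat prior; the model and numbers are illustrative, not the growth and yield model itself.

```python
# Gibbs sampler: alternate draws of beta | sigma2 and sigma2 | beta.
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -1.5])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

XtX, Xty = X.T @ X, X.T @ y
sigma2 = 1.0
draws = []
for it in range(3000):
    # beta | sigma2, y ~ N((X'X)^-1 X'y, sigma2 (X'X)^-1) under a flat prior
    beta = rng.multivariate_normal(np.linalg.solve(XtX, Xty),
                                   sigma2 * np.linalg.inv(XtX))
    # sigma2 | beta, y ~ Inverse-Gamma(n/2, ||y - X beta||^2 / 2)
    resid = y - X @ beta
    sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / (resid @ resid))
    if it >= 500:                          # discard burn-in
        draws.append(beta)
print(np.mean(draws, axis=0))              # posterior means, close to beta_true
```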

4.
While randomization inference is well developed for continuous and binary outcomes, there has been comparatively little work for outcomes with nonnegative support and clumping at zero. Typically, outcomes of this type have been modeled using parametric models that impose strong distributional assumptions. This article proposes new randomization inference procedures for nonnegative outcomes with clumping at zero. Instead of making distributional assumptions, we propose various assumptions about the nature of the response to treatment and use permutation inference for both testing and estimation. This approach allows for some natural goodness-of-fit tests for model assessment, as well as flexibility in selecting test statistics sensitive to different potential alternatives. We illustrate our approach using two randomized trials, where job training interventions were designed to increase earnings of participants.
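A minimal sketch of randomization inference of this kind: under the sharp null of no effect, outcomes are fixed, so re-randomizing treatment labels gives the exact null distribution of any test statistic. The zero-clumped simulated outcomes, the multiplicative response assumption, and the difference-in-means statistic are illustrative choices, not the paper's.

```python
# Permutation (randomization) test for the sharp null of no treatment effect.
import numpy as np

rng = np.random.default_rng(2)
n = 300
z = rng.binomial(1, 0.5, size=n)                        # randomized treatment
y0 = rng.binomial(1, 0.4, size=n) * rng.lognormal(3.0, 1.0, size=n)  # clump at zero
y = y0 * np.where(z == 1, 1.3, 1.0)                     # effect on the positive part only

obs = y[z == 1].mean() - y[z == 0].mean()               # observed difference in means
perm = np.empty(5000)
for b in range(5000):
    zp = rng.permutation(z)                             # re-randomize labels
    perm[b] = y[zp == 1].mean() - y[zp == 0].mean()
p_value = np.mean(np.abs(perm) >= abs(obs))             # two-sided permutation p-value
print(obs, p_value)
```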

5.
A variety of statistical regression models have been proposed for the comparison of ROC curves for different markers across covariate groups. Pepe developed parametric models for the ROC curve that induce a semiparametric model for the marker distributions, relaxing the strong assumptions in fully parametric models. We investigate the analysis of the power ROC curve using these ROC-GLM models compared to the parametric exponential model and the estimating equations derived from the usual partial likelihood methods in time-to-event analyses. In exploring robustness to violations of distributional assumptions, we find that the ROC-GLM provides an extra measure of robustness.
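The ROC-GLM fitting machinery is not reproduced here; the sketch below only computes the empirical ROC curve and its area for one marker in cases versus controls, on simulated data, to fix the object the models above target. All values are illustrative assumptions.

```python
# Empirical ROC curve: true-positive rate at cutoffs set by control quantiles.
import numpy as np

rng = np.random.default_rng(3)
controls = rng.normal(0.0, 1.0, size=500)       # marker in non-diseased
cases = rng.normal(1.2, 1.0, size=300)          # marker in diseased

fpr = np.linspace(0.0, 1.0, 101)
thresholds = np.quantile(controls, 1.0 - fpr)   # control quantiles give the cutoffs
tpr = np.array([(cases > c).mean() for c in thresholds])
auc = np.trapz(tpr, fpr)                        # area under the empirical curve
print(auc)
```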

6.
Useful models for time series of counts or simply wrong ones?
There has been a considerable and growing interest in low integer-valued time series data, leading to a diversification of modelling approaches. In addition to static regression models, both observation-driven and parameter-driven models are considered here. We compare and contrast a variety of time series models for counts using two very different data sets as a testbed. A range of diagnostic devices is employed to help assess model adequacy. Special attention is paid to dynamic structure and underlying distributional assumptions, including associated dispersion properties. Competing models show attractive features, but overall no one modelling approach is seen to dominate.
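As a minimal concrete instance of the observation-driven class, the sketch below fits a Poisson GLM with the lagged log-count entering as a feedback regressor; the simulated series and coefficients are illustrative stand-ins, not the paper's data sets or models.

```python
# Observation-driven count model: Poisson GLM with lagged log(1 + count).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
T = 400
y = np.empty(T, dtype=int)
y[0] = 2
for t in range(1, T):                        # simulate a simple feedback process
    lam = np.exp(0.3 + 0.6 * np.log1p(y[t - 1]))
    y[t] = rng.poisson(lam)

X = sm.add_constant(np.log1p(y[:-1]))        # lagged log(1 + count) regressor
model = sm.GLM(y[1:], X, family=sm.families.Poisson()).fit()
print(model.params)                          # close to (0.3, 0.6)
```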

7.
Advances in computation mean that it is now possible to fit a wide range of complex models to data, but there remains the problem of selecting a model on which to base reported inferences. Following an early suggestion of Box & Tiao, it seems reasonable to seek 'inference robustness' in reported models, so that alternative assumptions that are reasonably well supported would not lead to substantially different conclusions. We propose a four-stage modelling strategy in which we iteratively assess and elaborate an initial model, measure the support for each of the resulting family of models, assess the influence of adopting alternative models on the conclusions of primary interest, and identify whether an approximate model can be reported. The influence-support plot is then introduced as a tool to aid model comparison. The strategy is semi-formal, in that it could be embedded in a decision-theoretic framework but requires substantive input for any specific application. The one restriction of the strategy is that the quantity of interest, or 'focus', must retain its interpretation across all candidate models. It is, therefore, applicable to analyses whose goal is prediction, or where a set of common model parameters are of interest and candidate models make alternative distributional assumptions. The ideas are illustrated by two examples. Technical issues include the calibration of the Kullback-Leibler divergence between marginal distributions, and the use of alternative measures of support for the range of models fitted.

8.
Goodness of fit tests for the multiple logistic regression model
Several test statistics are proposed for the purpose of assessing the goodness of fit of the multiple logistic regression model. The test statistics are obtained by applying a chi-square test for a contingency table in which the expected frequencies are determined using two different grouping strategies and two different sets of distributional assumptions. The null distributions of these statistics are examined by applying the theory for chi-square tests of Moore and Spruill (1975) and through computer simulations. All statistics are shown to have a chi-square distribution or a distribution which can be well approximated by a chi-square. The degrees of freedom are shown to depend on the particular statistic and the distributional assumptions.

The power of each of the proposed statistics is examined for the normal, linear, and exponential alternative models using computer simulations.
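A hedged sketch of one grouping strategy of this kind: fit the logistic model, group observations into deciles of fitted probability, and compare observed with expected counts through a chi-square statistic. The data are simulated, and the exact statistics and groupings studied in the paper differ.

```python
# 'Deciles of risk' chi-square goodness-of-fit test for logistic regression.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1000
X = sm.add_constant(rng.normal(size=(n, 2)))
p_true = 1.0 / (1.0 + np.exp(-(X @ np.array([-0.5, 1.0, 0.7]))))
y = rng.binomial(1, p_true)

p_hat = sm.Logit(y, X).fit(disp=0).predict()
groups = np.digitize(p_hat, np.quantile(p_hat, np.linspace(0.1, 0.9, 9)))
obs = np.array([y[groups == g].sum() for g in range(10)])       # observed events
exp = np.array([p_hat[groups == g].sum() for g in range(10)])   # expected events
ng = np.array([(groups == g).sum() for g in range(10)])
C = np.sum((obs - exp) ** 2 / (exp * (1.0 - exp / ng)))         # chi-square statistic
print(C, 1.0 - stats.chi2.cdf(C, df=8))                         # df = groups - 2
```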

9.
Hierarchical models are popular in many applied statistics fields, including small area estimation. One well-known model employed in this particular field is the Fay–Herriot model, in which unobservable parameters are assumed to be Gaussian. In hierarchical models, assumptions about unobservable quantities are difficult to check. For a special case of the Fay–Herriot model, Sinharay and Stern [2003. Posterior predictive model checking in hierarchical models. J. Statist. Plann. Inference 111, 209–221] showed that violations of the assumptions about the random effects are difficult to detect using posterior predictive checks. In the present paper we consider two extensions of the Fay–Herriot model in which the random effects are assumed to be distributed according to either an exponential power (EP) distribution or a skewed EP distribution. We aim to explore the robustness of the Fay–Herriot model for the estimation of individual area means as well as the empirical distribution function of their 'ensemble'. Our findings, which are based on a simulation experiment, are largely consistent with those of Sinharay and Stern as far as the efficient estimation of individual small area parameters is concerned. However, when estimating the empirical distribution function of the 'ensemble' of small area parameters, results are more sensitive to the failure of distributional assumptions.
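A minimal empirical-Bayes sketch of the baseline Gaussian Fay–Herriot model the paper extends: direct estimates shrink toward a regression synthetic estimate with weights driven by the known sampling variances. For brevity the random-effect variance A is treated as known here (in practice it is estimated, e.g., by method of moments); all values are illustrative assumptions.

```python
# Fay-Herriot: y_i = theta_i + e_i,  theta_i = x_i'beta + u_i,  Var(e_i) = D_i known.
import numpy as np

rng = np.random.default_rng(6)
m = 50
x = np.column_stack([np.ones(m), rng.normal(size=m)])
beta = np.array([2.0, 1.0])
A = 0.5                                          # random-effect variance
D = rng.uniform(0.2, 1.0, size=m)                # known sampling variances
theta = x @ beta + rng.normal(scale=np.sqrt(A), size=m)
y = theta + rng.normal(scale=np.sqrt(D))

A_hat = 0.5                                      # treated as known for brevity
V = A_hat + D
beta_hat = np.linalg.solve(x.T @ (x / V[:, None]), x.T @ (y / V))   # GLS estimate
gamma = A_hat / (A_hat + D)                      # shrinkage weights
theta_eb = gamma * y + (1.0 - gamma) * (x @ beta_hat)
print(np.mean((theta_eb - theta) ** 2), np.mean((y - theta) ** 2))  # EB beats direct
```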

10.
Frailty models are used in survival analysis to account for unobserved heterogeneity in individual risks of disease and death. To analyze bivariate data on related survival times (e.g., matched-pairs experiments, twin, or family data), shared frailty models were suggested. The most common shared frailty model is one in which the hazard function is the product of a random factor (the frailty) and a baseline hazard function common to all individuals, with certain assumptions about the baseline distribution and the distribution of the frailty. In this paper, we introduce shared gamma frailty models with reversed hazard rate. We describe a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters involved in the model. We present a simulation study to compare the true values of the parameters with the estimated values, and we apply the proposed model to the Australian twin data set.
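A small simulation sketch of the conventional shared gamma frailty mechanism (the ordinary-hazard version, not the reversed-hazard-rate variant introduced in the paper): conditional on a pair-level gamma frailty, both members share a proportionally scaled exponential baseline hazard. All parameters are illustrative.

```python
# Shared gamma frailty: conditional on Z, each member's hazard is Z * lambda0.
import numpy as np

rng = np.random.default_rng(7)
n_pairs, lambda0, k = 2000, 0.1, 2.0
Z = rng.gamma(shape=k, scale=1.0 / k, size=n_pairs)   # mean-1 frailty, variance 1/k
t1 = rng.exponential(1.0 / (Z * lambda0))             # first member of each pair
t2 = rng.exponential(1.0 / (Z * lambda0))             # second member, same frailty
print(np.corrcoef(t1, t2)[0, 1])                      # positive within-pair dependence
```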

11.
A Survey of Stochastic Frontier Analysis with Panel Data
In recent years, stochastic frontier analysis (SFA) with panel data has been used increasingly to measure the efficiency of various kinds of decision-making units, producing a large body of results; however, the empirical literature, both in China and abroad, suffers from over-reliance on a few models with strict assumptions and from insufficient attention to model limitations. This paper systematically reviews the development of panel SFA models within a unified econometric framework. We divide the models into two broad classes, those with time-invariant efficiency and those with time-varying efficiency, and further split each class, according to whether distributional assumptions are imposed on the efficiency term, into models with and without distributional assumptions. After clarifying and comparing the assumptions, estimation procedures, and limitations of the different models, we offer recommendations for future applications of panel SFA models.
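As a pointer to what the "models with distributional assumptions" in this survey look like, here is a hedged sketch of the classical cross-sectional normal/half-normal stochastic frontier fitted by maximum likelihood; it ignores the panel structure, and the data, coefficients, and starting values are illustrative assumptions.

```python
# Normal/half-normal stochastic frontier: y = x'beta + v - u, u >= 0.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(8)
n = 500
x = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, 0.8])
v = rng.normal(scale=0.3, size=n)                 # symmetric noise
u = np.abs(rng.normal(scale=0.5, size=n))         # half-normal inefficiency
y = x @ beta + v - u                              # production frontier

def negloglik(theta):
    b, sv, su = theta[:2], np.exp(theta[2]), np.exp(theta[3])
    sigma, lam = np.hypot(sv, su), su / sv
    eps = y - x @ b
    # f(eps) = (2/sigma) phi(eps/sigma) Phi(-eps*lambda/sigma)
    ll = (np.log(2.0 / sigma) + stats.norm.logpdf(eps / sigma)
          + stats.norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = optimize.minimize(negloglik, x0=[0.5, 0.5, np.log(0.3), np.log(0.5)],
                        method="BFGS")
print(res.x[:2], np.exp(res.x[2:]))               # beta and (sigma_v, sigma_u)
```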

12.
This article focuses on the distribution of price sensitivity across consumers. We employ a random-coefficient logit model in which brand-specific intercepts and price-slope coefficients are allowed to vary across households. The model is estimated with panel data for two product categories. The implications of the estimated model are deduced through an optimal retail pricing analysis that combines the panel data with chain-level cost figures. We test parametric distributional assumptions using semiparametric density estimates based on series expansions.

13.
This article presents an econometric model capable of accommodating a nonradial measure of input-specific technical inefficiency and suggests an estimation technique that reduces dependency on distributional assumptions on inefficiency. It also makes use of the demand system derived from a flexible cost function and imposes concavity restrictions as required by economic theory. Panel data on 12 Finnish foundry plants are used to estimate technical efficiency of labor and energy for each of these plants.

14.
The problem of constructing nonlinear regression models is investigated in order to analyze data with complex structure. We introduce radial basis functions with a hyperparameter that adjusts the amount of overlap among basis functions and incorporates information from both the input and response variables. Using these radial basis functions, we construct nonlinear regression models with the help of regularization. Crucial issues in the model-building process are the choices of the hyperparameter, the number of basis functions, and a smoothing parameter. We present information-theoretic criteria for evaluating statistical models under model misspecification of both distributional and structural assumptions. We use real data examples and Monte Carlo simulations to investigate the properties of the proposed nonlinear regression modeling techniques. The simulation results show that our nonlinear modeling performs well in various situations, and clear improvements are obtained from the use of the hyperparameter in the basis functions.
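A minimal sketch of regression with Gaussian radial basis functions and a ridge (L2) regularization term; the bandwidth plays the role of the overlap-controlling hyperparameter discussed above, and all settings are illustrative assumptions rather than the paper's information-criterion-selected values.

```python
# Gaussian RBF regression with ridge regularization.
import numpy as np

rng = np.random.default_rng(9)
x = np.sort(rng.uniform(0, 1, size=200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

centers = np.linspace(0, 1, 15)                 # number of basis functions
h = 0.1                                         # bandwidth: controls basis overlap
Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * h ** 2))

lam = 1e-3                                      # smoothing (ridge) parameter
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(centers.size), Phi.T @ y)
y_hat = Phi @ w
print(np.mean((y_hat - np.sin(2 * np.pi * x)) ** 2))   # error against the true curve
```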

15.
The aim of this study is to assess the biases of a Food Frequency Questionnaire (FFQ) by comparing total energy intake (TEI) with total energy expenditure (TEE) obtained from the doubly labelled water (DLW) biomarker, after adjusting for measurement errors in DLW. We develop several Bayesian hierarchical measurement error models of DLW with different distributional assumptions on TEI to obtain precise bias estimates of TEI. Inference is carried out using MCMC simulation techniques in a fully Bayesian framework, and model comparisons are done via the mean square predictive error (MSPE). Our results show that the joint model with random effects under the Gamma distribution is the best-fitting model in terms of the MSPE and residual diagnostics, and under it the bias in TEI is not significant based on the 95% credible interval.

16.
This paper discusses a general strategy for reducing measurement-error-induced bias in statistical models. It is assumed that the measurement error is unbiased with a known variance, although no other distributional assumptions on the measurement error are employed.

Using a preliminary fit of the model to the observed data, a transformation of the variable measured with error is estimated. The transformation is constructed so that the estimates obtained by refitting the model to the 'corrected' data have smaller bias.

Whereas the general strategy can be applied in a number of settings, this paper focuses on the problem of covariate measurement error in generalized linear models. Two estimators are derived and their effectiveness at reducing bias is demonstrated in a Monte Carlo study.
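The paper's transformation estimators are not reproduced here; the sketch below only illustrates the underlying phenomenon in the simplest case, assuming classical measurement error with known variance in a linear model, where the naive slope is attenuated by the reliability ratio and a method-of-moments correction undoes the bias. All numbers are illustrative.

```python
# Attenuation bias under classical measurement error, and its correction.
import numpy as np

rng = np.random.default_rng(10)
n, sigma_u2 = 5000, 0.5                              # known measurement error variance
x = rng.normal(size=n)                               # true covariate
w = x + rng.normal(scale=np.sqrt(sigma_u2), size=n)  # observed with error
y = 2.0 * x + rng.normal(scale=0.3, size=n)

beta_naive = np.cov(w, y)[0, 1] / np.var(w)          # attenuated slope
reliability = (np.var(w) - sigma_u2) / np.var(w)     # reliability ratio
beta_corr = beta_naive / reliability                 # method-of-moments correction
print(beta_naive, beta_corr)                         # ~1.33 vs. ~2.0
```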

17.
This paper extends stochastic conditional duration (SCD) models for financial transaction data to allow for correlation between the innovations of the observed duration process and the latent log-duration process. Suitable Markov chain Monte Carlo (MCMC) algorithms are developed to fit the resulting SCD models under various distributional assumptions about the innovation of the measurement equation. Unlike the estimation methods commonly used for SCD models in the literature, we work with the original specification of the model, without subjecting the observation equation to a logarithmic transformation. Results of simulation studies suggest that our proposed models and corresponding estimation methodology perform quite well. We also apply an auxiliary particle filter technique to construct one-step-ahead in-sample and out-of-sample duration forecasts from the fitted models. Applications to the IBM transaction data allow comparison of our models and methods to those existing in the literature.

18.
Forecasting Performance of an Open Economy DSGE Model
Econometric Reviews, 2007, 26(2): 289–328
This paper analyzes the forecasting performance of an open economy dynamic stochastic general equilibrium (DSGE) model, estimated with Bayesian methods, for the Euro area during 1994Q1-2002Q4. We compare the DSGE model and a few variants of this model to various reduced-form forecasting models such as vector autoregressions (VARs) and vector error correction models (VECM), estimated both by maximum likelihood and two different Bayesian approaches, and traditional benchmark models, e.g., the random walk. The accuracy of point forecasts, interval forecasts and the predictive distribution as a whole are assessed in an out-of-sample rolling event evaluation using several univariate and multivariate measures. The results show that the open economy DSGE model compares well with more empirical models and thus that the tension between rigor and fit in older generations of DSGE models is no longer present. We also critically examine the role of Bayesian model probabilities and other frequently used low-dimensional summaries, e.g., the log determinant statistic, as measures of overall forecasting performance.
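A hedged sketch of the kind of rolling one-step-ahead point-forecast comparison used in such evaluations, pitting an AR(1) fitted on an expanding window against the random-walk benchmark; the simulated series stands in for the Euro-area data, and nothing here reproduces the DSGE model itself.

```python
# Rolling one-step-ahead forecast comparison: AR(1) vs. random-walk benchmark.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(11)
T = 300
y = np.empty(T)
y[0] = 0.0
for t in range(1, T):                           # simulate a stationary AR(1)
    y[t] = 0.2 + 0.7 * y[t - 1] + rng.normal(scale=0.5)

errs_ar, errs_rw = [], []
for t in range(200, T - 1):                     # expanding estimation window
    fit = AutoReg(y[: t + 1], lags=1).fit()
    errs_ar.append(fit.predict(start=t + 1, end=t + 1)[0] - y[t + 1])
    errs_rw.append(y[t] - y[t + 1])             # random walk: forecast = last value
print(np.sqrt(np.mean(np.square(errs_ar))),     # AR(1) one-step RMSE
      np.sqrt(np.mean(np.square(errs_rw))))     # random-walk one-step RMSE
```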
