Similar Articles
A total of 20 similar articles were found.
1.
We often rely on the likelihood to obtain estimates of regression parameters, but it is not readily available for generalized linear mixed models (GLMMs). Inference for the regression coefficients and the covariance parameters is key in these models. We present alternative approaches for analyzing binary data from a hierarchical structure that do not rely on any distributional assumptions: a generalized quasi-likelihood (GQL) approach and a generalized method of moments (GMM) approach. These are alternatives to the usual maximum-likelihood approximation approaches in the Statistical Analysis System (SAS), such as the Laplace approximation (LAP). We examine and compare the performance of the GQL and GMM approaches with multiple random effects against the LAP approach as implemented in PROC GLIMMIX in SAS. The GQL approach tends to produce unbiased estimates, whereas the LAP approach can lead to highly biased estimates in certain scenarios. The GQL approach also yields more accurate estimates of both the regression coefficients and the covariance parameters, with smaller standard errors, than the GMM approach. We find that both the GQL and GMM approaches are less likely to fail to converge than the LAP approach. A simulation study is conducted and a numerical example is presented for illustrative purposes.

2.
Abstract

This paper introduces a multiscale Gaussian convolution model of the Gaussian mixture model (MGC-GMM), obtained by convolving a GMM with a multiscale Gaussian window function. It is shown that the MGC-GMM is still a Gaussian mixture model and that its parameters can be mapped back to those of the original GMM. Moreover, the multiscale probability density function (MPDF) of the MGC-GMM can be viewed as the mathematical expectation of a random process induced by the Gaussian window function and the GMM, and it can be estimated directly from sample data. Based on the estimated MPDF, a novel algorithm, denoted MGC, is proposed for model selection and parameter estimation of the GMM: the number of components and the component means are determined by the number and the locations of the local maxima of the MPDF, and numerical algorithms for the weight and variance parameters are derived. The MGC is suited to GMMs with diagonal covariance matrices. An MGC-EM algorithm is also presented for the general GMM, in which the GMM is estimated by the EM algorithm using the MGC estimates as initial parameters. The proposed algorithms are tested on a series of sample sets simulated from given GMM models, and the results show that they can effectively estimate the GMM.
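The core of the MGC idea (smooth the empirical density with a Gaussian window, then read the component number and means off the local maxima) can be sketched in a few lines of Python. The snippet below is an illustrative stand-in rather than the paper's algorithm: a kernel density estimate with a fixed bandwidth plays the role of the MPDF, the simulated data set is hypothetical, and the weight/variance step is replaced by a crude nearest-peak assignment.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
# simulate from a 3-component univariate GMM (hypothetical example data)
means = np.array([-4.0, 0.0, 5.0])
sds = np.array([1.0, 0.8, 1.2])
weights = np.array([0.3, 0.4, 0.3])
comp = rng.choice(3, size=3000, p=weights)
x = rng.normal(means[comp], sds[comp])

# smooth the empirical density with a Gaussian window (a KDE stands in for the MPDF)
grid = np.linspace(x.min() - 1.0, x.max() + 1.0, 2000)
mpdf = gaussian_kde(x, bw_method=0.15)(grid)

# component number = number of local maxima; component means = their locations
peaks, _ = find_peaks(mpdf, height=0.01)
est_means = grid[peaks]
k = len(est_means)

# crude weight/variance estimates: assign each point to its nearest detected mean
labels = np.argmin(np.abs(x[:, None] - est_means[None, :]), axis=1)
est_weights = np.bincount(labels, minlength=k) / len(x)
est_vars = np.array([x[labels == j].var() for j in range(k)])
print(k, est_means, est_weights, est_vars)
```

In the spirit of the MGC-EM variant, values obtained this way could then be used to seed a standard EM fit of the GMM.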

3.
杨振兵 《统计研究》2016,33(1):26-34
Existing research has paid little attention to the factor bias of innovation-driven technical progress. Based on stochastic frontier analysis with a translog production function, this paper measures the factor-bias index of innovation-driven technical progress in China's manufacturing sector, and then uses system GMM estimation to examine how the structure of innovation inputs, wage distortion for R&D personnel, and other factors affect the capital bias of innovation-driven technical progress. We find that the output elasticity of innovation capital is far larger than that of R&D personnel, that innovation-driven technical progress in manufacturing is biased toward capital overall, and that this bias exhibits clear path dependence. A higher degree of capitalization in the structure of innovation inputs and government subsidies weaken the capital bias, whereas wage distortion for R&D personnel and firms' own funding of innovation strengthen it. Optimizing the direction of innovation and raising the pay of R&D personnel are therefore crucial to implementing the national innovation strategy.

4.
We suggest a generalized spatial system GMM (SGMM) estimator for short dynamic panel data models with spatial errors and fixed effects when n is large and T is fixed (usually small). Monte Carlo studies are conducted to compare its finite-sample properties with those of quasi-maximum likelihood estimation (QMLE). The results show that QMLE, with a proper approximation for the initial observation, generally performs better than SGMM, but it performs poorly when spatial dependence is large. QMLE and SGMM perform better for different parameters when there is unknown heteroscedasticity in the disturbances and the data are highly persistent. Neither estimator is sensitive to the treatment of initial values. Estimation of the spatial autoregressive parameter is generally biased when either the data are highly persistent or spatial dependence is large. The choice of spatial weights matrix and the sign of spatial dependence do affect the performance of the estimators, especially with heteroscedastic disturbances. We also give empirical guidelines for applying the model.

5.
Abstract

In this article, we consider the optimal investment problem for a defined contribution (DC) pension plan with mispricing. We assume that the pension fund is allowed to invest in a risk-free asset, a market index, and a risky asset with mispricing, i.e. an asset whose prices are inconsistent across financial markets. Assuming that the price process of the risky asset follows the Heston model, the pension fund manager aims to maximize the expected power utility of terminal wealth. Applying stochastic control theory, we establish the corresponding Hamilton-Jacobi-Bellman (HJB) equation, and the optimal investment strategy under power utility is obtained explicitly. Finally, numerical examples are provided to analyze the effects of the parameters on the optimal strategy.
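For orientation, the block below gives a generic form of the HJB equation for a terminal-wealth power-utility problem. The notation (wealth w, variance state v, strategy pi, generator A^pi) is an illustrative placeholder; the paper's specific mispricing dynamics are not reproduced here.

```latex
% Generic dynamic programming equation for maximizing E[U(W_T)] with power utility,
% value function V(t, w, v) and admissible investment strategies \pi.
\[
  \sup_{\pi}\Big\{ \partial_t V(t,w,v) + \mathcal{A}^{\pi} V(t,w,v) \Big\} = 0,
  \qquad
  V(T,w,v) = \frac{w^{1-\gamma}}{1-\gamma}, \quad \gamma > 0,\ \gamma \neq 1 .
\]
```

Here A^pi denotes the infinitesimal generator of the controlled wealth-variance process under the strategy pi; solving the equation (typically via an exponential-affine ansatz in Heston-type settings) yields the explicit strategy referred to in the abstract.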

6.
Dynamic regression models are widely used because they express and model the behaviour of a system over time. In this article, two dynamic regression models, the distributed lag (DL) model and the autoregressive distributed lag model, are evaluated with a focus on their lag lengths. From a classical statistics point of view, there are various methods to determine the number of lags, but none of them is best in all situations. This is a serious issue, since a wrong choice yields poor estimates of the effects of the regressors on the response variable. We present an alternative to these methods by taking a Bayesian approach. The posterior distribution of the number of lags is derived under an improper prior for the model parameters. The fractional Bayes factor technique [A. O'Hagan, Fractional Bayes factors for model comparison (with discussion), J. R. Statist. Soc. B 57 (1995), pp. 99–138] is used to handle the indeterminacy in the likelihood function caused by the improper prior, and the zero-one loss function is used to penalize wrong decisions. A naive method using the specified maximum number of DLs is also presented. The proposed and naive methods are evaluated on simulated data, with promising results for the proposed method. An illustrative example with a real data set is provided.
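As a rough illustration of lag-length selection for a DL model, the sketch below fits OLS regressions over a range of candidate lag lengths and picks the minimizer of BIC. This is a classical stand-in for the paper's fractional-Bayes-factor posterior under zero-one loss, not the proposed method itself, and the data-generating coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400
x = rng.normal(size=T + 10)
beta = np.array([1.0, 0.6, 0.4, 0.25])                 # hypothetical DL coefficients, lags 0..3
y = sum(b * np.roll(x, i) for i, b in enumerate(beta))[10:] + rng.normal(scale=0.5, size=T)
x = x[10:]

def dl_design(x, y, p):
    """Design matrix with an intercept and lags 0..p of x (drops the first p observations)."""
    X = np.column_stack([x[p - i: len(x) - i] for i in range(p + 1)])
    return np.column_stack([np.ones(len(X)), X]), y[p:]

def bic(X, y):
    n, k = X.shape
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return n * np.log(resid @ resid / n) + k * np.log(n)

scores = {p: bic(*dl_design(x, y, p)) for p in range(9)}
print("selected lag length:", min(scores, key=scores.get))
```

With these settings the criterion should typically recover a lag length close to 3; the Bayesian procedure in the article replaces this ranking by posterior probabilities over lag lengths.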

7.
This paper assesses the econometric and economic-value consequences of neglecting structural breaks in dynamic correlation models within an asset allocation framework. It is shown that changes in the parameters of the conditional correlation process can lead to biased estimates of persistence. Monte Carlo simulations reveal that short-run persistence is biased downward while long-run persistence is severely biased upward, leading to spuriously high persistence of shocks to the conditional correlation. An application to stock returns supports these results and shows that neglecting such structural shifts can lead to misleading decisions on portfolio diversification, hedging, and risk management.

8.
Non-likelihood-based methods for repeated measures analysis of binary data in clinical trials can result in biased estimates of treatment effects and their standard errors when the dropout process is not completely at random. We tested the utility of a multiple imputation approach in reducing these biases. Simulations were used to compare the performance of multiple imputation with generalized estimating equations and restricted pseudo-likelihood in five representative clinical trial profiles for estimating (a) overall treatment effects and (b) treatment differences at the last scheduled visit. In clinical trials with moderate to high (40–60%) dropout rates, with dropouts missing at random, multiple imputation led to less biased and more precise estimates of treatment differences for binary outcomes based on underlying continuous scores. Copyright © 2005 John Wiley & Sons, Ltd.
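A minimal sketch of the multiple-imputation idea for a binary last-visit outcome with MAR dropout is shown below, assuming statsmodels is available. The simulated trial, the logistic imputation model, and the number of imputations are all hypothetical; the only point is the mechanics of imputing under MAR and pooling with Rubin's rules.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 400
treat = rng.integers(0, 2, size=n)
base = rng.normal(size=n)                                   # baseline severity score
p_resp = 1 / (1 + np.exp(-(-0.5 + 1.0 * treat + 0.8 * base)))
y = rng.binomial(1, p_resp).astype(float)                   # binary response at the last visit
# MAR dropout: missingness depends on the observed baseline, not on y itself
dropout = rng.uniform(size=n) < 1 / (1 + np.exp(-(-0.5 - 1.0 * base)))
y_obs = np.where(dropout, np.nan, y)

obs = ~np.isnan(y_obs)
X_full = sm.add_constant(np.column_stack([treat, base]))
imp_fit = sm.Logit(y_obs[obs], X_full[obs]).fit(disp=0)     # imputation model fitted to completers

m, ests, variances = 20, [], []
for _ in range(m):
    # draw imputation-model coefficients from their approximate posterior
    b = rng.multivariate_normal(imp_fit.params, imp_fit.cov_params())
    p_mis = 1 / (1 + np.exp(-X_full[~obs] @ b))
    y_imp = y_obs.copy()
    y_imp[~obs] = rng.binomial(1, p_mis)                    # stochastic imputation of missing outcomes
    fit = sm.Logit(y_imp, sm.add_constant(treat.astype(float))).fit(disp=0)
    ests.append(fit.params[1])
    variances.append(fit.cov_params()[1, 1])

qbar = np.mean(ests)                                        # pooled treatment log-odds ratio
total_var = np.mean(variances) + (1 + 1 / m) * np.var(ests, ddof=1)   # Rubin's rules
print(qbar, np.sqrt(total_var))
```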

9.
The multivariate maxima of moving maxima (M4) model has the potential to capture both the cross-sectional and the temporal tail dependence of a rich class of multivariate time series. The main difficulty in applying the M4 model to real data is the large number of parameters to be estimated and the intractability of its joint likelihood. In this paper, we consider a sparse M4 random coefficient model (SM4R), which has a parsimonious number of parameters and can potentially capture the major stylized facts exhibited by devolatized asset returns in empirical studies. We study the probabilistic properties of the newly proposed model. Statistical inference can be carried out with a generalized method of moments (GMM) approach. We also demonstrate through real data analysis that the SM4R model can be used to improve estimates of the value-at-risk (VaR) of portfolios of multivariate financial returns, whereas ignoring either temporal or cross-sectional tail dependence can lead to a serious underestimate of market risk.
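For readers unfamiliar with the model class, the sketch below simulates one common finite formulation of a multivariate moving-maxima (M4-type) process, in which each observation is a maximum of scaled unit-Fréchet innovations over signature patterns and lags. The dimensions, signature coefficients, and window sizes are hypothetical, and the sparse random-coefficient (SM4R) extension of the article is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
T, D, L, K = 1000, 2, 2, 3            # sample length, dimensions, signature patterns, lag window

# signature coefficients a[l, k, d], normalized so the sum over (l, k) equals 1 for each d
a = rng.uniform(size=(L, K, D))
a /= a.sum(axis=(0, 1), keepdims=True)

# iid unit-Frechet innovations Z[l, t] (K extra values cover the initial lags)
z = 1.0 / -np.log(rng.uniform(size=(L, T + K)))

# Y[t, d] = max over (l, k) of a[l, k, d] * Z[l, t - k]   (moving-maxima construction)
y = np.empty((T, D))
for t in range(T):
    lagged = z[:, t + K - np.arange(K)]                 # Z at times t, t-1, ..., t-K+1; shape (L, K)
    y[t] = np.max(a * lagged[:, :, None], axis=(0, 1))
print(y[:5])
```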

10.
This article provides a method to estimate search costs in a differentiated-product environment in which consumers are uncertain about the utility distribution. Consumers learn about the utility distribution by Bayesian updating of their Dirichlet process prior beliefs. The model provides expressions for bounds on the search costs that can rationalize observed search and purchasing behavior. Using individual-specific data on web browsing and purchasing behavior for MP3 players sold online, we show how to use these bounds to estimate search costs as well as the parameters of the utility distribution. Our estimates indicate that search costs are sizable. We also show that ignoring consumer learning while searching can lead to severely biased search cost and elasticity estimates.

11.
In this article we examine the small-sample properties of generalized method of moments (GMM) estimation using Monte Carlo simulations. We assume that the generated time series describe the stochastic variance rate of a stock index and use a mean-reverting square-root process to simulate the dynamics of this instantaneous variance rate. The simulated series are then used to estimate the parameters of the assumed variance-rate process by GMM. Our results are described and compared with estimates from empirical data consisting of volatility and daily volume data for the German stock market. One of our main findings is that estimates of the mean-reversion parameter that are not significantly different from zero do not necessarily imply a rejection of the hypothesis of mean-reverting behavior of the underlying stochastic process.
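The sketch below illustrates the kind of exercise described: simulate a mean-reverting square-root variance process and recover its parameters by GMM using Euler-discretization-based moment conditions with an identity weighting matrix. The parameter values and the particular moment conditions are illustrative assumptions, not those of the article; as the article notes, the mean-reversion parameter is typically estimated imprecisely.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
kappa, theta, sigma, dt, n = 3.0, 0.04, 0.3, 1 / 252, 5000   # hypothetical true values

# Euler simulation of the square-root (CIR-type) variance process
v = np.empty(n)
v[0] = theta
for t in range(1, n):
    v[t] = max(v[t - 1] + kappa * (theta - v[t - 1]) * dt
               + sigma * np.sqrt(v[t - 1] * dt) * rng.normal(), 1e-8)

def moments(params):
    k, th, s = params
    # scaled Euler residual: zero conditional mean, conditional variance s**2 * v
    e = (v[1:] - v[:-1] - k * (th - v[:-1]) * dt) / np.sqrt(dt)
    g = np.column_stack([e, e * v[:-1],
                         e**2 - s**2 * v[:-1], (e**2 - s**2 * v[:-1]) * v[:-1]])
    return g.mean(axis=0)

def objective(params):
    g = moments(params)
    return g @ g                                             # identity weighting matrix

res = minimize(objective, x0=[1.0, 0.1, 0.5], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-12})
print(res.x)                                                 # estimates of (kappa, theta, sigma)
```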

12.
In this article, we examine the limiting behavior of generalized method of moments (GMM) sample moment conditions and point out an important discontinuity that arises in their asymptotic distribution. We show that the part of the scaled sample moment conditions that gives rise to degeneracy in the asymptotic normal distribution is T-consistent and has a nonstandard limiting distribution. We derive the appropriate asymptotic (weighted chi-squared) distribution when this degeneracy occurs and show how to conduct asymptotically valid statistical inference. We also propose a new rank test that provides guidance on which (standard or nonstandard) asymptotic framework should be used for inference. The finite-sample properties of the proposed asymptotic approximation are demonstrated using simulated data from some popular asset pricing models.

13.
In this paper, we consider the problem of estimating semi-linear regression models. Using invariance arguments, Bhowmik and King [2007. Maximal invariant likelihood based testing of semi-linear models. Statist. Papers 48, 357–383] derived the probability density function of the maximal invariant statistic for the non-linear component of these models. Using this density function as a likelihood function allows us to estimate these models in a two-step process. First, the non-linear component parameters are estimated by maximising the maximal invariant likelihood function. Then the non-linear component, with its parameters replaced by these estimates, is treated as a regressor, and ordinary least squares is used to estimate the remaining parameters. We report the results of a simulation study conducted to compare the accuracy of this approach with full maximum likelihood and maximum profile-marginal likelihood estimation. We find that maximising the maximal invariant likelihood function typically results in less biased, lower-variance estimates than full maximum likelihood.
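The two-step logic (estimate the non-linear parameter first, then treat the fitted non-linear component as a regressor in OLS) is sketched below. The first step here is a simple concentrated least-squares grid search standing in for maximization of the maximal invariant likelihood, and the non-linear component g(z; gamma) = z**gamma is a hypothetical example.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x, z = rng.normal(size=n), rng.uniform(0.5, 3.0, size=n)
gamma_true, beta = 1.5, np.array([2.0, -1.0, 0.8])          # hypothetical true parameters
y = beta[0] + beta[1] * x + beta[2] * z**gamma_true + rng.normal(scale=0.3, size=n)

def fit_linear_part(gamma):
    """OLS of y on [1, x, z**gamma]; returns the residual sum of squares and the coefficients."""
    X = np.column_stack([np.ones(n), x, z**gamma])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ coef
    return r @ r, coef

# step 1: estimate the non-linear parameter by a grid search on the concentrated criterion
grid = np.linspace(0.1, 3.0, 300)
gamma_hat = grid[np.argmin([fit_linear_part(g)[0] for g in grid])]
# step 2: with gamma fixed at its estimate, the model is linear; re-fit the remaining parameters by OLS
_, beta_hat = fit_linear_part(gamma_hat)
print(gamma_hat, beta_hat)
```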

14.
Calibration in macroeconomics involves choosing free parameters by matching certain moments of simulated models with those of the data. We formally examine this method by treating the process of calibration as an econometric estimator. A numerical version of the Mehra-Prescott (1985) economy is the setting for an evaluation of calibration estimators via Monte Carlo methods. While these estimators sometimes have reasonable finite-sample properties, they are not robust to mistakes in setting the non-free parameters. In contrast, generalized method-of-moments (GMM) estimators have satisfactory finite-sample characteristics, quick convergence, and less stringent informational requirements than calibration estimators. For dynamic equilibrium models in which GMM is infeasible, we offer some suggestions for improving estimates based on the calibration methodology.

16.
We study the invariance properties of various test criteria which have been proposed for hypothesis testing in the context of incompletely specified models, such as models formulated in terms of estimating functions (Godambe, 1960) or moment conditions and estimated by generalized method of moments (GMM) procedures (Hansen, 1982), and models estimated by pseudo-likelihood (Gouriéroux, Monfort, and Trognon, 1984b,c) and M-estimation methods. The invariance properties considered include invariance to (possibly nonlinear) hypothesis reformulations and reparameterizations. The test statistics examined include Wald-type, LR-type, LM-type, score-type, and C(α)-type criteria. Extending the approach used in Dagenais and Dufour (1991), we first show that all these test statistics except the Wald-type ones are invariant to equivalent hypothesis reformulations (under usual regularity conditions), but that none of the five is generally invariant to model reparameterizations, including measurement-unit changes in nonlinear models. In other words, testing two equivalent hypotheses in the context of equivalent models may lead to completely different inferences; for example, this may occur after an apparently innocuous rescaling of some model variables. Then, with a view to avoiding such undesirable properties, we study restrictions that can be imposed on the objective functions used for pseudo-likelihood (or M-estimation) as well as on the structure of the test criteria used with estimating functions and GMM procedures in order to obtain invariant tests. In particular, we show that using linear exponential pseudo-likelihood functions allows one to obtain invariant score-type and C(α)-type test criteria, while in the context of estimating function (or GMM) procedures it is possible to modify a LR-type statistic proposed by Newey and West (1987) to obtain a test statistic that is invariant to general reparameterizations. The invariance associated with linear exponential pseudo-likelihood functions is interpreted as a strong argument for using such pseudo-likelihood functions in empirical work.

17.
Bayesian inference for pairwise interacting point processes
Pairwise interacting point processes are commonly used to model spatial point patterns. To perform inference, the established frequentist methods can produce good point estimates when the interaction in the data is moderate, but some methods may produce severely biased estimates when the interaction is strong. Furthermore, because the sampling distributions of the estimates are unclear, interval estimates are typically obtained by parametric bootstrap methods, whose behavior in this setting is not well understood. In this article we propose Bayesian methods for obtaining inferences in pairwise interacting point processes. The requisite application of Markov chain Monte Carlo (MCMC) techniques is complicated by an intractable function of the parameters in the likelihood. The acceptance probability in a Metropolis-Hastings algorithm involves the ratio of two likelihoods evaluated at different parameter values; the intractable functions do not cancel, and hence an intractable ratio r must be estimated within each iteration of the sampler. We propose the use of importance sampling techniques within MCMC to address this problem. While r may be estimated by other methods, these are, in general, not readily applicable in a Bayesian setting. We demonstrate the validity of our importance sampling approach with a small simulation study. Finally, we analyze the Swedish pine sapling data set (Strand 1972) and contrast the results with those in the literature.
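To make the idea concrete, the toy sketch below runs a Metropolis-Hastings sampler for a one-dimensional doubly-intractable model, re-estimating the intractable normalizing constants by importance sampling inside each iteration. The unnormalized density, prior, and proposal are hypothetical stand-ins for a pairwise-interaction point process likelihood, and estimating the two constants separately (rather than the ratio r directly) is just one simple, approximate way to implement the idea.

```python
import numpy as np

rng = np.random.default_rng(4)

def unnorm(x, theta):
    """Unnormalized density q(x | theta) = exp(-theta * x**4), a toy stand-in."""
    return np.exp(-theta * x**4)

def z_hat(theta, m=3000):
    """Importance-sampling estimate of the normalizing constant Z(theta), N(0, 1) proposal."""
    u = rng.normal(size=m)
    w = unnorm(u, theta) / (np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi))
    return w.mean()

x_obs = rng.normal(scale=0.8, size=50)                   # hypothetical observed data
log_prior = lambda th: -th                               # Exp(1) prior on theta > 0

theta, chain = 1.0, []
for _ in range(2000):
    prop = abs(theta + 0.3 * rng.normal())               # symmetric reflected random-walk proposal
    loglik = np.log(unnorm(x_obs, prop)).sum() - np.log(unnorm(x_obs, theta)).sum()
    # the intractable constants do not cancel: estimate Z(theta)/Z(prop) within the iteration
    logZ = len(x_obs) * (np.log(z_hat(theta)) - np.log(z_hat(prop)))
    if np.log(rng.uniform()) < loglik + logZ + log_prior(prop) - log_prior(theta):
        theta = prop
    chain.append(theta)
print(np.mean(chain[500:]))                              # approximate posterior mean of theta
```

Because the plug-in estimates of the constants are noisy, this sketch only approximates the target posterior; the article's simulation study addresses the validity of its importance-sampling scheme in the point-process setting.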

18.
Accelerated life testing of a product under conditions more severe than normal is commonly used to reduce test time and cost. Data collected under such accelerated conditions are used to estimate the parameters of a stress translation function, which is then used to make inferences about the product's performance under normal conditions. This problem is considered when the product is a p-component series system with Weibull-distributed component lifetimes having a common shape parameter. A general stress translation function is used, and estimates of the model parameters are obtained under various censoring schemes.
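The data structure described (a series system whose Weibull components share a shape parameter, with scales tied to stress through a translation function) can be illustrated with a short simulation. The log-linear translation function, the parameter values, and the type-II censoring rule below are all hypothetical; the sketch only generates accelerated-test data and does not reproduce the article's estimators.

```python
import numpy as np

rng = np.random.default_rng(8)
p, n, shape = 3, 50, 2.0                       # components in series, units per stress level, common Weibull shape
stresses = np.array([1.5, 2.0, 2.5])           # accelerated stress levels (normal use would be 1.0)
alpha, beta = np.array([4.0, 4.3, 4.6]), -1.2  # assumed log-linear translation: log(scale_j) = alpha_j + beta * s

for s in stresses:
    scales = np.exp(alpha + beta * s)                        # component scale parameters at stress s
    comp_lives = scales * rng.weibull(shape, size=(n, p))    # Weibull(shape, scale_j) component lifetimes
    sys_lives = comp_lives.min(axis=1)                       # a series system fails at its first component failure
    # (the minimum of independent Weibulls with a common shape is again Weibull with that shape)
    r = int(0.8 * n)                                         # type-II censoring: stop after the first r failures
    observed = np.sort(sys_lives)[:r]
    print(f"stress {s}: first failures {observed[:3].round(2)}, test stopped at {observed[-1]:.2f}")
```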

19.
For the portfolio problem with unknown parameter values, we compare the conventional certainty-equivalence portfolio choice with the optimal Bayes portfolio. In the important single-risky-asset case, a diffuse Bayes rule leads to portfolios that differ significantly from those suggested by a certainty-equivalence rule, which we show to be inadmissible relative to a quadratic utility function over the range of parameters we consider. These results are invariant to arbitrary changes in the utility function parameters. We illustrate the results with a simple mutual fund example.
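A tiny numerical illustration of the gap between the two rules in the single-risky-asset case is given below, using a mean-variance weight as a stand-in for quadratic utility and assuming a known return variance with a diffuse prior on the mean, so that the Bayes rule uses the predictive variance sigma**2 * (1 + 1/n). All numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
n, mu, sigma, rf, gamma = 24, 0.08, 0.20, 0.02, 3.0      # hypothetical sample size and market parameters
r = rng.normal(mu, sigma, size=n)                        # observed return sample

xbar = r.mean()
w_ce = (xbar - rf) / (gamma * sigma**2)                  # certainty equivalence: plug in the sample mean as if known
w_bayes = (xbar - rf) / (gamma * sigma**2 * (1 + 1/n))   # diffuse-prior Bayes rule: uses the larger predictive variance
print(w_ce, w_bayes, w_bayes / w_ce)                     # the Bayes weight is shrunk by the factor n / (n + 1)
```

The difference grows as the sample shortens, which is the flavour of the divergence between the two rules that the article documents; its inadmissibility result is of course specific to its own setting.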

20.
This paper derives a test statistic for the variance-covariance parameters that is a quadratic function of their MINQUE (minimum norm quadratic unbiased estimation) estimates. The test is a Wald-type test, and its development closely parallels the theory used to derive a similar test for the coefficients in linear models. In fact, the derivation proceeds by first setting up the estimation problem in a derived linear model in which the dispersion parameters are the coefficients. The test statistic is shown to be the sum of squares of independent standardized χ² variables.

