Similar Literature
20 similar documents retrieved (search time: 265 ms)
1.
This paper provides a first order asymptotic theory for generalized method of moments (GMM) estimators when the number of moment conditions is allowed to increase with the sample size and the moment conditions may be weak. Examples in which these asymptotics are relevant include instrumental variable (IV) estimation with many (possibly weak or uninformed) instruments and some panel data models that cover moderate time spans and have correspondingly large numbers of instruments. Under certain regularity conditions, the GMM estimators are shown to converge in probability but not necessarily to the true parameter, and conditions for consistent GMM estimation are given. A general framework for the GMM limit distribution theory is developed based on epiconvergence methods. Some illustrations are provided, including consistent GMM estimation of a panel model with time varying individual effects, consistent limited information maximum likelihood estimation as a continuously updated GMM estimator, and consistent IV structural estimation using large numbers of weak or irrelevant instruments. Some simulations are reported.
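As a rough illustration of the many-weak-instruments setting discussed above, the following Python sketch (a hypothetical data-generating process, not the paper's framework or estimators) simulates 2SLS with many weak instruments alongside OLS:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, beta = 200, 30, 1.0          # sample size, number of instruments, true coefficient

# Instruments are weak: each contributes only slightly to the first stage.
Z = rng.normal(size=(n, K))
pi = np.full(K, 0.05)              # small first-stage coefficients (illustrative)
u = rng.normal(size=n)
v = 0.8 * u + rng.normal(size=n)   # endogeneity: corr(u, v) > 0
x = Z @ pi + v
y = beta * x + u

# 2SLS: project x on Z, then regress y on the fitted values.
Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
x_hat = Pz @ x
beta_2sls = (x_hat @ y) / (x_hat @ x)

# OLS for comparison (biased upward here by the positive endogeneity).
beta_ols = (x @ y) / (x @ x)
print(beta_2sls, beta_ols)
```

With many weak instruments the 2SLS estimate is pulled toward the OLS estimate rather than the true value 1.0, which is the kind of inconsistency the asymptotic theory above characterizes.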

2.
Threshold models have a wide variety of applications in economics. Direct applications include models of separating and multiple equilibria. Other applications include empirical sample splitting when the sample split is based on a continuously‐distributed variable such as firm size. In addition, threshold models may be used as a parsimonious strategy for nonparametric function estimation. For example, the threshold autoregressive model (TAR) is popular in the nonlinear time series literature. Threshold models also emerge as special cases of more complex statistical frameworks, such as mixture models, switching models, Markov switching models, and smooth transition threshold models. It may be important to understand the statistical properties of threshold models as a preliminary step in the development of statistical tools to handle these more complicated structures. Despite the large number of potential applications, the statistical theory of threshold estimation is undeveloped. It is known that threshold estimates are super‐consistent, but a distribution theory useful for testing and inference has yet to be provided. This paper develops a statistical theory for threshold estimation in the regression context. We allow for either cross‐section or time series observations. Least squares estimation of the regression parameters is considered. An asymptotic distribution theory for the regression estimates (the threshold and the regression slopes) is developed. It is found that the distribution of the threshold estimate is nonstandard. A method to construct asymptotic confidence intervals is developed by inverting the likelihood ratio statistic. It is shown that this yields asymptotically conservative confidence regions. Monte Carlo simulations are presented to assess the accuracy of the asymptotic approximations. The empirical relevance of the theory is illustrated through an application to the multiple equilibria growth model of Durlauf and Johnson (1995).
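A minimal sketch of least-squares threshold estimation of the kind described above: profile out the slopes by OLS on each side of a candidate threshold, then grid-search the threshold minimizing the sum of squared residuals. The design and all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
q = rng.uniform(0.0, 1.0, n)          # threshold variable (e.g. firm size)
x = rng.normal(size=n)
gamma_true = 0.5                      # true threshold (assumed for the simulation)
# Slope switches from 1.0 to 3.0 at the threshold.
y = np.where(q <= gamma_true, 1.0 * x, 3.0 * x) + 0.5 * rng.normal(size=n)

def ssr_at(gamma):
    """Profiled sum of squared residuals: OLS slope fit on each side of gamma."""
    ssr = 0.0
    for mask in (q <= gamma, q > gamma):
        xs, ys = x[mask], y[mask]
        b = (xs @ ys) / (xs @ xs)     # one-regressor OLS slope
        ssr += np.sum((ys - b * xs) ** 2)
    return ssr

# Grid search over interior candidate thresholds (trim the tails).
grid = np.quantile(q, np.linspace(0.1, 0.9, 81))
gamma_hat = grid[np.argmin([ssr_at(g) for g in grid])]
print(gamma_hat)
```

The grid over sample quantiles with trimmed tails mirrors common practice, ensuring both regimes retain enough observations for the side-by-side OLS fits.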

3.
A new approach is proposed to evaluate new-product opportunities. This approach uses the distribution of brand-purchase probability of the new product over a population of potential customers and the outputs from conjoint analysis. The heterogeneous distribution of brand-purchase probability is expressed by a beta binomial brand-choice model compounded with a negative binomial product-class purchase-incidence model. The resulting model provides a way to predict trial and repeat-purchase patterns of new-product concepts. The paper discusses the development of the model. It also discusses issues of measurement, estimation, testing, and implementation of the proposed approach based on actual empirical data.  
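The beta-binomial brand-choice component can be sketched directly: each customer's brand-purchase probability is a Beta(a, b) draw, so the number of brand purchases out of a given number of category-purchase occasions is beta-binomial. The parameter values below are hypothetical, and the negative binomial purchase-incidence layer is held fixed at a single occasion count for simplicity:

```python
import numpy as np
from scipy.stats import betabinom

a, b = 2.0, 5.0        # heterogeneity of brand-choice probability across customers
n_occasions = 10       # category purchases per customer (an NBD draw in the full model)

# Beta-binomial P(k brand purchases out of n_occasions), mixing over customers.
k = np.arange(n_occasions + 1)
pmf = betabinom.pmf(k, n_occasions, a, b)

p_trial = 1.0 - betabinom.pmf(0, n_occasions, a, b)  # P(at least one purchase)
mean_purchases = betabinom.mean(n_occasions, a, b)   # = n * a / (a + b)
print(p_trial, mean_purchases)
```

In the full model one would also mix over the negative binomial distribution of `n_occasions` to get population-level trial and repeat predictions.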

4.
This paper shows that the semiparametric efficiency bound for a parameter identified by an unconditional moment restriction with data missing at random (MAR) coincides with that of a particular augmented moment condition problem. The augmented system consists of the inverse probability weighted (IPW) original moment restriction and an additional conditional moment restriction which exhausts all other implications of the MAR assumption. The paper also investigates the value of additional semiparametric restrictions on the conditional expectation function (CEF) of the original moment function given always observed covariates. In the program evaluation context, for example, such restrictions are implied by semiparametric models for the potential outcome CEFs given baseline covariates. The efficiency bound associated with this model is shown to also coincide with that of a particular moment condition problem. Some implications of these results for estimation are briefly discussed.

5.
Risk Analysis, 2018, 38(10): 2073–2086
The guidelines for setting environmental quality standards are increasingly based on probabilistic risk assessment due to a growing general awareness of the need for probabilistic procedures. One of the commonly used tools in probabilistic risk assessment is the species sensitivity distribution (SSD), which represents the proportion of species affected belonging to a biological assemblage as a function of exposure to a specific toxicant. Our focus is on the inverse use of the SSD curve with the aim of estimating the concentration, HCp, of a toxic compound that is hazardous to p% of the biological community under study. Toward this end, we propose the use of robust statistical methods in order to take into account the presence of outliers or apparent skew in the data, which may occur without any ecological basis. A robust approach exploits the full neighborhood of a parametric model, enabling the analyst to account for the typical real‐world deviations from ideal models. We examine two classic HCp estimation approaches and consider robust versions of these estimators. In addition, we also use data transformations in conjunction with robust estimation methods in case of heteroscedasticity. Different scenarios using real data sets as well as simulated data are presented in order to illustrate and compare the proposed approaches. These scenarios illustrate that the use of robust estimation methods enhances HCp estimation.
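For concreteness, here is the classic (non-robust) version of the inverse SSD calculation that the robust estimators above improve upon: fit a log-normal SSD to species toxicity endpoints and read off the HC5 as the 5th percentile. The toxicity values are invented for illustration:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical toxicity endpoints (e.g. EC50 values, mg/L) for a set of species.
conc = np.array([0.8, 1.2, 2.5, 3.1, 4.8, 7.0, 9.5, 14.0, 22.0, 35.0])

log_c = np.log(conc)
mu, sigma = log_c.mean(), log_c.std(ddof=1)  # classic normal fit on the log scale

# Inverse use of the SSD: HC5 is the concentration hazardous to 5% of species.
p = 0.05
hc5 = np.exp(mu + sigma * norm.ppf(p))
print(hc5)
```

A robust variant in the spirit of the paper would replace `mean`/`std` with estimators that downweight outlying species (e.g. median and a robust scale estimate), so a single anomalously sensitive or tolerant species cannot drag the HC5.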

6.
Variable-Bandwidth Estimation Theory for Nonparametric Simultaneous-Equations Econometric Models
Simultaneous-equations models play an important role in economic policy making, economic structural analysis, and economic forecasting. This paper combines the local linear estimation method for nonparametric regression with traditional estimation methods for simultaneous-equations models. Under a random design (all variables in the model are random), it proposes a local linear instrumental-variable variable-bandwidth estimator for nonparametric simultaneous-equations econometric models and studies its large-sample properties at interior points using the law of large numbers and the central limit theorem. Consistency and asymptotic normality are proved, and the estimator's convergence rate at interior points attains the optimal rate for nonparametric function estimation.
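The local linear building block used above can be sketched for ordinary nonparametric regression (the instrumental-variable and simultaneous-equations layers of the paper are omitted; the data-generating process and bandwidth are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(-1, 1, n)
y = np.sin(np.pi * x) + 0.3 * rng.normal(size=n)  # unknown regression function

def local_linear(x0, h):
    """Local linear fit at x0 with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones(n), x - x0])
    # Weighted least squares; the fitted intercept estimates m(x0).
    beta = np.linalg.solve(X.T * w @ X, X.T * w @ y)
    return beta[0]

m_hat = local_linear(0.5, h=0.15)  # true m(0.5) = sin(pi/2) = 1
print(m_hat)
```

A variable-bandwidth version, as in the paper, would let `h` depend on the evaluation point (e.g. via a local density estimate) rather than stay fixed.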

7.
A decision making model is often very sensitive to the subjective probability estimates that are used. To reduce this sensitivity, it is necessary to utilize elicitation procedures that yield more valid estimates and increase the decision maker's confidence in the estimates. This paper discusses an experiment in which the subjects were asked to estimate the areas of various squares. Two procedures were evaluated. In one procedure, the subjects were told that estimates were distributed typically around the true value in accordance with the normal probability distribution. In the other, the subjects were asked to estimate the length as a component that could be used to calculate the area. The results indicate that the normal error model is a reasonable representation of the estimation procedure. Both procedures provided estimates with significant validity. There is also an indication that the procedures reduce bias and that both procedures result in more consistent estimates when the sizes of the squares are varied.

8.
For a risk-averse retailer, this paper builds a mean-risk inventory optimization model that trades off expected profit against conditional value-at-risk (CVaR), and derives two robust counterparts under an uncertain discrete demand distribution: one that achieves Pareto optimality but is more conservative, and one that is non-Pareto-optimal but less conservative. With only historical demand samples available, statistical inference theory is used to construct a likelihood-based ambiguity set of demand distributions satisfying a given confidence level. Using Lagrangian duality, both robust counterparts are transformed into tractable concave optimization problems, and their equivalence to the original problems is proved. Finally, numerical experiments on a real case analyze how system parameters and sample size affect the retailer's optimal inventory decision and operational performance, and trace the Pareto frontier between the two objectives of expected profit and CVaR. The results show that the likelihood-based robust inventory policy is highly robust and effectively limits the impact of demand-distribution uncertainty on inventory performance; the larger the historical demand sample, the closer the retailer's performance under the robust policy comes to the optimum. Moreover, although the two robust counterparts differ in conservatism, they yield the same final inventory policy.
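A simplified, non-robust version of the mean-CVaR newsvendor trade-off above can be sketched with sampled demand scenarios (the likelihood-based ambiguity set and duality steps of the paper are omitted; prices, costs, and the demand distribution are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
price, cost = 10.0, 6.0
demand = rng.normal(100.0, 20.0, size=5000).clip(min=0)  # demand scenarios

def profit(q):
    """Newsvendor profit for order quantity q under each demand scenario."""
    return price * np.minimum(q, demand) - cost * q

def cvar_profit(pi, alpha=0.95):
    """Mean of the worst (1 - alpha) fraction of profit outcomes."""
    var = np.quantile(pi, 1 - alpha)
    return pi[pi <= var].mean()

def objective(q, lam=0.5, alpha=0.95):
    """Mean-CVaR trade-off: lam weights expected profit against tail profit."""
    pi = profit(q)
    return lam * pi.mean() + (1 - lam) * cvar_profit(pi, alpha)

qs = np.arange(60, 141)
q_star = qs[np.argmax([objective(q) for q in qs])]
print(q_star)
```

Sweeping `lam` from 1 toward 0 traces an empirical version of the Pareto frontier between expected profit and CVaR; the risk-averse optimum orders less than the risk-neutral newsvendor quantity.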

9.
Conditional moment restrictions can be combined through GMM estimation to construct more efficient semiparametric estimators. This paper is about attainable efficiency for such estimators. We define and use a moment tangent set, the directions of departure from the truth allowed by the moments, to characterize when the semiparametric efficiency bound can be attained. The efficiency condition is that the moment tangent set equals the model tangent set. We apply these results to transformed, censored, and truncated regression models, e.g., finding that the conditional moment restrictions from Powell's (1986) censored regression quantile estimators can be combined to approximate efficiency when the disturbance is independent of regressors.

10.
Computation of typical statistical sample estimates such as the median or least squares fit usually requires the solution of an unconstrained optimization problem with a convex objective function, which can be solved efficiently by various methods. The presence of outliers in the data dictates the computation of a robust estimate, which can be defined as the optimum statistical estimate for a subset that contains at least half of the observations. The resulting problem is now a combinatorial optimization problem which is often computationally intractable. Classical statistical methods for multivariate location μ and scatter matrix Σ estimation are based on the sample mean vector and covariance matrix, which are very sensitive in the presence of outlier observations. We propose a new method for robust location and scatter estimation which is composed of two stages. In the first stage an unbiased multivariate L1-median center for all the observations is attained by a novel procedure called the least trimmed Euclidean deviations estimator. This robust median defines a coverage set of observations which is used in the second stage to iteratively compute the set of outliers which violate the correlational structure of the data set. Extensive computational experiments indicate that the proposed method outperforms existing methods in accuracy, robustness and computational time.

11.
Ilias S. Kevork, Omega, 2010, 38(3–4): 218–227
The paper considers the classical single-period inventory model, also known as the Newsboy Problem, with the demand normally distributed and fully observed in successive inventory cycles. The extent of applicability of such a model to inventory management depends upon demand estimation. Appropriate estimators for the optimal order quantity and the maximum expected profit are developed. The statistical properties of the two estimators are explored for both small and large samples, analytically and through Monte-Carlo simulations. For small samples, both estimators are biased. The form of distribution of the optimal order quantity estimator depends upon the critical fractile, while the distribution of the maximum expected profit estimator is always left-skewed. Small-sample properties of the estimators indicate that, when the critical fractile is set over a half, the optimal order quantity is underestimated and the maximum expected profit is overestimated with probability over 50%, whereas the probability of overestimating both quantities again exceeds 50% when the critical fractile is below a half. For large samples, based on the asymptotic properties of the two estimators, confidence intervals are derived for the corresponding true population values. The validity of confidence intervals using small samples is tested by developing appropriate Monte-Carlo simulations. In small samples, these intervals attain acceptable confidence levels, but with high unit shortage cost, for the case of maximum expected profit, significant reductions in their precision and stability are observed.
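The small-sample underestimation effect described above can be reproduced with a quick Monte-Carlo sketch (all parameter values are illustrative): with a critical fractile above one half, the plug-in order quantity estimator falls below the true optimum in more than half of the replications, largely because the sample standard deviation tends to understate the true one in small samples:

```python
import numpy as np
from scipy.stats import norm

# Newsboy critical fractile beta = (p - c) / p; here beta = 0.8 > 0.5.
p, c = 10.0, 2.0
z = norm.ppf((p - c) / p)

mu_true, sigma_true = 100.0, 20.0
Q_true = mu_true + z * sigma_true      # optimal order quantity, demand known

rng = np.random.default_rng(5)
reps, n = 5000, 5                      # small demand history per replication
under = 0
for _ in range(reps):
    sample = rng.normal(mu_true, sigma_true, n)
    Q_hat = sample.mean() + z * sample.std(ddof=1)   # plug-in estimator
    under += Q_hat < Q_true
rate = under / reps
print(rate)                            # fraction of runs underestimating Q_true
```

With the critical fractile below one half (`z < 0`), the same mechanism flips the bias toward overestimation, matching the abstract's description.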

12.
This paper studies the measurement of credit portfolio risk when the risk factors follow multivariate heavy-tailed distributions. A multivariate t-copula is used to capture the heavy tails of the underlying asset return distributions, and a three-step importance-sampling technique is extended to portfolio models based on the multivariate t-copula, broadening and enriching the class of credit portfolio risk measurement models. In addition, the Levenberg-Marquardt algorithm from nonlinear optimization is applied to estimate the mean vector of the risk factors required by the importance-sampling technique. Simulation results show that the algorithm is more computationally efficient than plain Monte Carlo simulation and greatly reduces the variance of the estimated loss probabilities, yielding more precise estimates of the tail probabilities of the credit portfolio loss distribution and of the portfolio VaR at a given confidence level.
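The t-copula ingredient above can be sketched directly: correlated normals divided by a common chi-square mixing variable give multivariate t draws, and applying the t CDF to each margin yields the copula sample. This shows only the sampling step, not the paper's three-step importance-sampling scheme; the correlation and degrees of freedom are illustrative:

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(6)
nu = 4                                   # degrees of freedom: heavy joint tails
R = np.array([[1.0, 0.6], [0.6, 1.0]])   # correlation of the two risk factors
L = np.linalg.cholesky(R)

n = 100_000
Z = rng.standard_normal((n, 2)) @ L.T    # correlated normals
W = rng.chisquare(nu, size=n)[:, None]   # common chi-square mixing variable
T = Z / np.sqrt(W / nu)                  # multivariate t draws
U = t.cdf(T, df=nu)                      # t-copula uniforms (the copula sample)

# Joint lower-tail mass: far above the 1e-4 implied by independence,
# reflecting the tail dependence that motivates t-copulas in credit risk.
joint_tail = np.mean((U[:, 0] < 0.01) & (U[:, 1] < 0.01))
print(joint_tail)
```

It is this clustering of joint extreme moves, absent under a Gaussian copula with the same correlation, that makes the loss-tail probabilities hard to estimate by plain Monte Carlo and motivates importance sampling.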

13.
This paper considers the problem of testing a finite number of moment inequalities. We propose a two‐step approach. In the first step, a confidence region for the moments is constructed. In the second step, this set is used to provide information about which moments are “negative.” A Bonferroni‐type correction is used to account for the fact that, with some probability, the moments may not lie in the confidence region. It is shown that the test controls size uniformly over a large class of distributions for the observed data. An important feature of the proposal is that it remains computationally feasible, even when the number of moments is large. The finite‐sample properties of the procedure are examined via a simulation study, which demonstrates, among other things, that the proposal remains competitive with existing procedures while being computationally more attractive.

14.
Traditional inventory models usually treat the lead time and the setup cost as uncontrollable. In practice, both can be shortened or reduced through additional investment. During stockouts, the supplier may offer a price discount to reduce the volume of lost orders and compensate customers for their losses. In real inventory systems, the mean and standard deviation of demand are easy to obtain, but the demand distribution itself is often unknown. Accordingly, this paper proposes an EOQ model with arbitrarily distributed demand and controllable lead time and setup cost, in which the backorder rate depends on the price discount and on the inventory level during the stockout period. The model is shown to have a unique optimal solution, and a search algorithm is provided. Numerical simulation shows that, in general, compressing the lead time and reducing the setup cost lower the order quantity, the safety stock, and the total inventory cost; the backorder coefficient and the stockout probability have a large influence on total inventory cost, so firms should keep the stockout probability as low as possible, especially when the backorder coefficient is small.

15.
This paper examines the efficient estimation of partially identified models defined by moment inequalities that are convex in the parameter of interest. In such a setting, the identified set is itself convex and hence fully characterized by its support function. We provide conditions under which, despite being an infinite dimensional parameter, the support function admits √n‐consistent regular estimators. A semiparametric efficiency bound is then derived for its estimation, and it is shown that any regular estimator attaining it must also minimize a wide class of asymptotic loss functions. In addition, we show that the “plug‐in” estimator is efficient, and devise a consistent bootstrap procedure for estimating its limiting distribution. The setting we examine is related to an incomplete linear model studied in Beresteanu and Molinari (2008) and Bontemps, Magnac, and Maurin (2012), which further enables us to establish the semiparametric efficiency of their proposed estimators for that problem.

16.
This study presents a new robust estimation method that can produce a regression median hyperplane for any data set. The robust method starts with dual variables obtained by least absolute value estimation. It then utilizes two specially designed goal programming models to obtain regression median estimators that are less sensitive to a small sample size and a skewed error distribution than least absolute value estimators. The superiority of the new robust estimators over least absolute value estimators is confirmed by two illustrative data sets and a Monte Carlo simulation study.
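The least absolute value (LAV) starting point described above is itself a linear program: split each residual into positive and negative parts and minimize their sum. The sketch below shows this baseline LAV fit (not the paper's goal programming refinement) on invented data with heavy-tailed errors:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
n = 60
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.laplace(0.0, 1.0, n)  # heavy-tailed errors suit LAV
X = np.column_stack([np.ones(n), x])

# LAV / regression median: min sum |y - Xb| as an LP with residual splits
# y = Xb + u - v, u, v >= 0.  Variables: [b (free, k), u (n), v (n)].
k = X.shape[1]
c = np.concatenate([np.zeros(k), np.ones(n), np.ones(n)])
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
b_eq = y
bounds = [(None, None)] * k + [(0, None)] * (2 * n)
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
b_lav = res.x[:k]
print(b_lav)                                  # intercept and slope estimates
```

The LP's optimal dual variables are exactly the quantities the paper's goal programming models take as input.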

17.
This paper considers a generalized method of moments (GMM) estimation problem in which one has a vector of moment conditions, some of which are correct and some incorrect. The paper introduces several procedures for consistently selecting the correct moment conditions. The procedures also can consistently determine whether there is a sufficient number of correct moment conditions to identify the unknown parameters of interest. The paper specifies moment selection criteria that are GMM analogues of the widely used BIC and AIC model selection criteria. (The latter is not consistent.) The paper also considers downward and upward testing procedures. All of the moment selection procedures discussed in this paper are based on the minimized values of the GMM criterion function for different vectors of moment conditions. The procedures are applicable in time-series and cross-sectional contexts. Application of the results of the paper to instrumental variables estimation problems yields consistent procedures for selecting instrumental variables.

18.
We consider the problem of estimating the probability of detection (POD) of flaws in an industrial steel component. Modeled as an increasing function of the flaw height, the POD characterizes the detection process; it is also involved in the estimation of the flaw size distribution, a key input parameter of physical models describing the behavior of the steel component when submitted to extreme thermodynamic loads. Such models are used to assess the resistance of highly reliable systems whose failures are seldom observed in practice. We develop a Bayesian method to estimate the flaw size distribution and the POD function, using flaw height measures from periodic in‐service inspections conducted with an ultrasonic detection device, together with measures from destructive lab experiments. Our approach, based on approximate Bayesian computation (ABC) techniques, is applied to a real data set and compared to maximum likelihood estimation (MLE) and a more classical approach based on Markov Chain Monte Carlo (MCMC) techniques. In particular, we show that the parametric model describing the POD as the cumulative distribution function (cdf) of a log‐normal distribution, though often used in this context, can be invalidated by the data at hand. We propose an alternative nonparametric model, which assumes no predefined shape, and extend the ABC framework to this setting. Experimental results demonstrate the ability of this method to provide a flexible estimation of the POD function and describe its uncertainty accurately.
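The parametric model criticized above, POD(a) = Φ((ln a − m)/s), can be sketched with a simple maximum-likelihood fit to hit/miss inspection outcomes. The data here are simulated under that very model (all parameter values are hypothetical), which is the MLE baseline the paper compares against:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(8)
n = 300
height = rng.lognormal(mean=0.5, sigma=0.7, size=n)  # flaw heights (e.g. mm)

# True POD: Phi((ln a - m) / s); simulate hit/miss inspection outcomes.
m_true, s_true = 0.8, 0.5
pod_true = norm.cdf((np.log(height) - m_true) / s_true)
hit = rng.uniform(size=n) < pod_true

def neg_loglik(theta):
    """Bernoulli negative log-likelihood of the hit/miss data."""
    m, log_s = theta
    p = norm.cdf((np.log(height) - m) / np.exp(log_s))
    p = np.clip(p, 1e-10, 1 - 1e-10)
    return -np.sum(hit * np.log(p) + (~hit) * np.log1p(-p))

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
m_hat, s_hat = res.x[0], np.exp(res.x[1])
print(m_hat, s_hat)
```

When real data reject this log-normal shape, the paper's nonparametric ABC approach drops the Φ((ln a − m)/s) assumption and lets the POD curve take a free monotone form.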

19.
This paper is concerned with the Bayesian estimation of nonlinear stochastic differential equations when observations are discretely sampled. The estimation framework relies on the introduction of latent auxiliary data to complete the missing diffusion between each pair of measurements. Tuned Markov chain Monte Carlo (MCMC) methods based on the Metropolis‐Hastings algorithm, in conjunction with the Euler‐Maruyama discretization scheme, are used to sample the posterior distribution of the latent data and the model parameters. Techniques for computing the likelihood function, the marginal likelihood, and diagnostic measures (all based on the MCMC output) are developed. Examples using simulated and real data are presented and discussed in detail.
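The Euler-Maruyama scheme mentioned above is simple to sketch on its own: discretize dX = κ(θ − X)dt + σ dW (an Ornstein-Uhlenbeck process, chosen here purely for illustration) into small time steps with Gaussian increments. The same recursion generates the latent path segments between observations in the MCMC framework:

```python
import numpy as np

# Euler-Maruyama discretization of dX = kappa*(theta - X) dt + sigma dW.
rng = np.random.default_rng(9)
kappa, theta, sigma = 2.0, 1.0, 0.3   # illustrative parameter values
T, M = 5.0, 5000                      # horizon and number of steps
dt = T / M
x = np.empty(M + 1)
x[0] = 0.0
for i in range(M):
    dw = rng.normal(0.0, np.sqrt(dt))             # Brownian increment
    x[i + 1] = x[i] + kappa * (theta - x[i]) * dt + sigma * dw
print(x[-1])                          # path ends near the long-run mean theta
```

For nonlinear drifts or diffusions with no closed-form transition density, this discretized transition is exactly what makes the augmented likelihood tractable between measurement times.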

20.
Taking the secondary resource conflict dilemma in the critical chain method as its starting point, this paper proposes a resolution strategy from the perspective of robust scheduling optimization. First, the problem is described and formulated mathematically: the mechanism by which inserting feeding buffers triggers secondary resource conflicts is analyzed, and scenario analysis is used to decompose the complex conflict phenomena into four basic conflict scenario components. Second, based on robust scheduling optimization theory, effective countermeasures for each conflict subproblem are explored and classified, and from them a local-rescheduling heuristic coordination strategy for eliminating secondary resource conflicts is developed; robustness indicators based on two scheduling passes and the dynamic consumption of two types of buffers are designed, and the robust critical chain project scheduling problem is solved to output the schedule with maximum robustness. Third, a simulation program and three test indicators are designed: the actual on-time project completion rate, the sum of the absolute deviations of activity start times, and the variance of those absolute deviations; test instance sets are generated randomly with ProGen for numerical experiments. The results show that when the project is executed according to the robust schedule, all three statistics are better than the corresponding values under the traditional critical chain schedule. The conclusion is that the secondary-resource-conflict elimination strategy based on robust scheduling optimization, together with the designed critical chain robustness indicators, yields good stability in project implementation.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号