Similar literature
20 similar documents found.
1.
We develop the measurement theory of polarization for the case in which income distributions can be described using density functions. The main theorem uniquely characterizes a class of polarization measures that fits into what we call the “identity‐alienation” framework, and simultaneously satisfies a set of axioms. We also provide sample estimators of population polarization indices that can be used to compare polarization across time or entities. Distribution‐free statistical inference results are also used in order to ensure that the orderings of polarization across entities are not simply due to sampling noise. An illustration of the use of these tools using data from 21 countries shows that polarization and inequality orderings can often differ in practice.
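As a rough illustration of how an identity‐alienation polarization index can be estimated from a sample, the sketch below computes a plug-in version of a Duclos–Esteban–Ray-type index, P_α = ∫∫ f(x)^{1+α} f(y) |x − y| dy dx, using a kernel density estimate. The functional form, the choice of α, and the kernel estimator are illustrative assumptions, not the paper's exact estimator.

```python
# Hedged sketch: plug-in estimate of a DER-type polarization index,
# P_alpha = E[ f(X)^alpha * |X - Y| ] for independent draws X, Y from f.
# The kernel density estimator and alpha = 0.5 are illustrative choices.
import numpy as np
from scipy.stats import gaussian_kde

def polarization_index(incomes, alpha=0.5):
    incomes = np.asarray(incomes, dtype=float)
    f_hat = gaussian_kde(incomes)              # nonparametric density estimate
    dens = f_hat(incomes)                      # f_hat evaluated at each observation
    abs_diff = np.abs(incomes[:, None] - incomes[None, :])  # |x_i - x_j|
    return float(np.mean(dens[:, None] ** alpha * abs_diff))

rng = np.random.default_rng(0)
bimodal = np.concatenate([rng.normal(1.0, 0.2, 500), rng.normal(3.0, 0.2, 500)])
unimodal = rng.normal(2.0, 0.6, 1000)
print(polarization_index(bimodal), polarization_index(unimodal))
```

A clearly bimodal sample will typically score higher than a unimodal one with comparable dispersion, which is the sense in which polarization and inequality orderings can diverge.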

2.
We examine challenges to estimation and inference when the objects of interest are nondifferentiable functionals of the underlying data distribution. This situation arises in a number of applications of bounds analysis and moment inequality models, and in recent work on estimating optimal dynamic treatment regimes. Drawing on earlier work relating differentiability to the existence of unbiased and regular estimators, we show that if the target object is not differentiable in the parameters of the data distribution, there exist no estimator sequences that are locally asymptotically unbiased or α‐quantile unbiased. This places strong limits on estimators, bias correction methods, and inference procedures, and provides motivation for considering other criteria for evaluating estimators and inference procedures, such as local asymptotic minimaxity and one‐sided quantile unbiasedness.

3.
One of the main goals of any country is to secure the general welfare of society, entailing positive levels of education, health and income, coupled with low levels of social inequality. The following paper studies the efficient use of economic and social resources to generate social welfare in the presence of bad outputs in the states of Mexico during 2010. A two-level data envelopment analysis model was used to determine how efficient the 32 states of the Mexican Republic were, considering as model variables the socioeconomic indicators of the three dimensions of human development (education, health and income), and the data on poverty or inequity in the country. The analysis of the results reveals that only 5 of the 32 units studied were efficient in generating welfare and in reducing poverty, while the rest need to increase their welfare levels and especially reduce inequity in education and income using the economic and social resources they possess.
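To make the DEA machinery concrete, the sketch below solves the standard input-oriented CCR envelopment problem for one decision-making unit with SciPy's linear programming routine. The two-level structure and the treatment of bad outputs described in the abstract are not reproduced, and the data are made up.

```python
# Hedged sketch: input-oriented CCR DEA efficiency score for one unit.
# min theta  s.t.  sum_j lambda_j * x_j <= theta * x_0,
#                  sum_j lambda_j * y_j >= y_0,  lambda_j >= 0.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, unit):
    """X: (n_units, n_inputs), Y: (n_units, n_outputs); returns theta for `unit`."""
    n, m = X.shape
    _, s = Y.shape
    # Decision vector: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1); c[0] = 1.0                      # minimize theta
    # Inputs: sum_j lambda_j x_j - theta * x_unit <= 0
    A_in = np.hstack([-X[unit].reshape(m, 1), X.T])
    # Outputs: -sum_j lambda_j y_j <= -y_unit
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[unit]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])                           # outputs
print([round(ccr_efficiency(X, Y, k), 3) for k in range(len(X))])
```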

4.
The paper proposes a new and normative approach for adjusting households’ incomes in order to account for the heterogeneity of needs across income recipients when measuring inequality and welfare. We derive the implications for the structure of the adjustment method of two conditions concerned with the way the ranking of situations is modified by a change in the reference household type and by more equally distributed living standards across households. Our results suggest that concern for greater equality in living standards conflicts with the basic welfarist principle of symmetrical treatment of individuals that is at the core of the standard equivalence scale approach.

5.
We study inference in structural models with a jump in the conditional density, where location and size of the jump are described by regression curves. Two prominent examples are auction models, where the bid density jumps from zero to a positive value at the lowest cost, and equilibrium job‐search models, where the wage density jumps from one positive level to another at the reservation wage. General inference in such models remained a long‐standing, unresolved problem, primarily due to nonregularities and computational difficulties caused by discontinuous likelihood functions. This paper develops likelihood‐based estimation and inference methods for these models, focusing on optimal (Bayes) and maximum likelihood procedures. We derive convergence rates and distribution theory, and develop Bayes and Wald inference. We show that Bayes estimators and confidence intervals are attractive both theoretically and computationally, and that Bayes confidence intervals, based on posterior quantiles, provide a valid large sample inference method.

6.
Propensity score matching estimators (Rosenbaum and Rubin (1983)) are widely used in evaluation research to estimate average treatment effects. In this article, we derive the large sample distribution of propensity score matching estimators. Our derivations take into account that the propensity score is itself estimated in a first step, prior to matching. We prove that first step estimation of the propensity score affects the large sample distribution of propensity score matching estimators, and derive adjustments to the large sample variances of propensity score matching estimators of the average treatment effect (ATE) and the average treatment effect on the treated (ATET). The adjustment for the ATE estimator is negative (or zero in some special cases), implying that matching on the estimated propensity score is more efficient than matching on the true propensity score in large samples. However, for the ATET estimator, the sign of the adjustment term depends on the data generating process, and ignoring the estimation error in the propensity score may lead to confidence intervals that are either too large or too small.
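A minimal sketch of the kind of estimator the abstract refers to: the propensity score is estimated by logistic regression in a first step and each unit is matched to its nearest neighbor on the estimated score. The simulated data are purely illustrative, and the large-sample variance adjustment derived in the paper is not implemented here.

```python
# Hedged sketch: ATE via one-to-one nearest-neighbor matching on an
# estimated propensity score (first-step logit). Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 2))
p_true = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.4 * x[:, 1])))
d = rng.binomial(1, p_true)                         # treatment indicator
y = 1.0 * d + x[:, 0] + rng.normal(size=n)          # true ATE = 1

ps = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]   # first-step estimate
treated, control = np.where(d == 1)[0], np.where(d == 0)[0]

def nearest_outcome(targets, pool):
    """Outcome of the pool unit whose estimated score is closest to each target."""
    j = np.abs(ps[pool][None, :] - ps[targets][:, None]).argmin(axis=1)
    return y[pool][j]

y1, y0 = y.copy(), y.copy()
y0[treated] = nearest_outcome(treated, control)     # impute Y(0) for the treated
y1[control] = nearest_outcome(control, treated)     # impute Y(1) for the controls
print(round((y1 - y0).mean(), 3))                   # should be close to 1
```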

7.
We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model, based on the Gaussian likelihood conditional on initial values. We give conditions on the parameters such that the process Xt is fractional of order d and cofractional of order d − b; that is, there exist vectors β for which β′Xt is fractional of order d − b and no other fractionality order is possible. For b = 1, the model nests the I(d − 1) vector autoregressive model. We define the statistical model by 0 < b ≤ d, but conduct inference when the true values satisfy 0 ≤ d0 − b0 < 1/2 and b0 ≠ 1/2, for which β0′Xt is (asymptotically) a stationary process. Our main technical contribution is the proof of consistency of the maximum likelihood estimators. To this end, we prove weak convergence of the conditional likelihood as a continuous stochastic process in the parameters when errors are independent and identically distributed with suitable moment conditions and initial values are bounded. Because the limit is deterministic, this implies uniform convergence in probability of the conditional likelihood function. If the true value b0 > 1/2, we prove that the limit distribution of the estimator of β is mixed Gaussian, while for the remaining parameters it is Gaussian. The limit distribution of the likelihood ratio test for cointegration rank is a functional of fractional Brownian motion of type II. If b0 < 1/2, all limit distributions are Gaussian or chi‐squared. We derive similar results for the model with d = b, allowing for a constant term.

8.
This paper applies revealed preference theory to the nonparametric statistical analysis of consumer demand. Knowledge of expansion paths is shown to improve the power of nonparametric tests of revealed preference. The tightest bounds on indifference surfaces and welfare measures are derived using an algorithm for which revealed preference conditions are shown to guarantee convergence. Nonparametric Engel curves are used to estimate expansion paths and provide a stochastic structure within which to examine the consistency of household level data and revealed preference theory. An application is made to a long time series of repeated cross-sections from the Family Expenditure Survey for Britain. The consistency of these data with revealed preference theory is examined. For periods of consistency with revealed preference, tight bounds are placed on true cost of living indices.
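The core consistency check that such nonparametric tests build on can be written in a few lines. The sketch below tests a set of price–quantity observations for GARP by taking the transitive closure of the direct revealed preference relation (Warshall's algorithm); the data are hypothetical, and the power-improving use of expansion paths described in the abstract is not shown.

```python
# Hedged sketch: test T price/quantity observations for consistency with GARP.
# q_t is directly revealed preferred to q_s if p_t . q_t >= p_t . q_s.
# GARP fails if q_t is (transitively) revealed preferred to q_s while
# p_s . q_s > p_s . q_t, i.e. q_t was strictly cheaper than the chosen bundle at s.
import numpy as np

def satisfies_garp(P, Q):
    """P, Q: arrays of shape (T, n_goods) of prices and chosen bundles."""
    expend = P @ Q.T                          # expend[t, s] = p_t . q_s
    own = np.diag(expend)                     # p_t . q_t
    R = own[:, None] >= expend                # direct revealed preference relation
    for k in range(len(P)):                   # Warshall: transitive closure
        R = R | (R[:, [k]] & R[[k], :])
    strict = own[:, None] > expend            # strict direct relation, row s vs col t
    return not np.any(R & strict.T)

P = np.array([[1.0, 2.0], [2.0, 1.0]])
Q = np.array([[2.0, 1.0], [1.0, 2.0]])
print(satisfies_garp(P, Q))                   # True: these two choices are consistent
```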

9.
In this article, we study the control of stochastic make‐to‐stock manufacturing lines in the presence of electricity costs. Electricity costs are difficult to manage because unit costs increase with the total load, that is, the amount of electricity needed by the manufacturing line at a certain point in time. We demonstrate that standard methods for controlling manufacturing lines cannot be used and that standard analytic results for stochastic manufacturing lines do not hold in the presence of electricity costs. We develop a control policy that balances electricity costs with inventory holding and backorder costs. We derive closed‐form expressions and analytic properties of the expected total cost for manufacturing lines with two workstations and demonstrate the accuracy and robustness of the policy for manufacturing lines with more than two workstations. The results indicate that avoiding electricity peak loads requires additional investment in manufacturing capacity and higher inventory and backorder costs. Our approach also applies to companies which aim at reducing their carbon emissions in addition to their operating costs.

10.
We introduce and derive the asymptotic behavior of a new measure constructed from high‐frequency data which we call the realized Laplace transform of volatility. The statistic provides a nonparametric estimate for the empirical Laplace transform function of the latent stochastic volatility process over a given interval of time and is robust to the presence of jumps in the price process. With a long span of data, that is, under joint long‐span and infill asymptotics, the statistic can be used to construct a nonparametric estimate of the volatility Laplace transform as well as of the integrated joint Laplace transform of volatility over different points of time. We derive feasible functional limit theorems for our statistic both under fixed‐span and infill asymptotics as well as under joint long‐span and infill asymptotics which allow us to quantify the precision in estimation under both sampling schemes.
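A hedged sketch of the basic construction: by the identity E[cos(a Z)] = exp(−a²/2) for standard normal Z, averaging cos(√(2u) Δᵢp / √Δₙ) over high-frequency increments estimates the time average of exp(−u σ_t²), i.e., an empirical Laplace transform of spot variance. The simulated volatility path and the exact normalization below are illustrative assumptions, not the paper's full treatment (which also handles jumps and the joint asymptotics).

```python
# Hedged sketch: realized-Laplace-transform-type statistic from simulated
# high-frequency returns. Uses E[cos(a Z)] = exp(-a^2 / 2) for Z ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(2)
n, T = 23_400, 1.0                     # one "day" of one-second returns
dt = T / n
# A simple mean-reverting variance path (illustrative only, no jumps).
sigma2 = np.empty(n); sigma2[0] = 0.02
for i in range(1, n):
    sigma2[i] = max(1e-6, sigma2[i-1] + 5.0 * (0.02 - sigma2[i-1]) * dt
                    + 0.1 * np.sqrt(sigma2[i-1] * dt) * rng.normal())
returns = np.sqrt(sigma2 * dt) * rng.normal(size=n)

def realized_laplace(returns, dt, u):
    """Time-averaged estimate of exp(-u * sigma_t^2) over the interval."""
    return np.mean(np.cos(np.sqrt(2.0 * u) * returns / np.sqrt(dt)))

u = 2.0
print(realized_laplace(returns, dt, u), np.mean(np.exp(-u * sigma2)))  # close
```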

11.
The choice of stochastic process used to describe the price dynamics of an underlying asset has a substantial impact on derivative pricing and risk management. In the literature, the stochastic processes adopted for the same asset are often inconsistent or even contradictory. Taking the GBM and OU processes as examples, this paper proposes a statistical inference method for selecting, from several candidate models, the stochastic process that better describes the price dynamics of the underlying asset. Applying the principle of backtesting, the method splits the data into an estimation window and a testing window: the estimation window is used to estimate the parameters of the stochastic process, and then, under the assumption that the model parameters remain constant, the out-of-sample distribution of the asset price at each point of the testing window is derived under the null hypothesis; whether the null hypothesis is accepted is judged by the frequency with which the actual data fall in the acceptance or rejection region. The paper carries out an empirical analysis of stochastic process selection using commodities, exchange rates, interest rates, and stocks as underlying assets. The empirical results show that some commonly used stochastic process models are not necessarily optimal.
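A rough sketch of the backtesting idea for the GBM case: parameters are estimated on the estimation window, the out-of-sample distribution of the log price at each horizon in the testing window is derived under the null that the GBM holds with those parameters, and the fraction of observations falling outside a 95% acceptance region is compared with the nominal 5%. The data are simulated, and the window sizes and significance level are illustrative assumptions; repeating the exercise under each candidate process is what allows the comparison described above.

```python
# Hedged sketch: out-of-sample acceptance-region check for a GBM null.
# The estimation window fixes (mu, sigma) of log returns; under H0 the log
# price k steps after the window end is Normal(logS0 + k*mu, k*sigma^2).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_est, n_test = 500, 250
r_all = rng.normal(0.0004, 0.01, n_est + n_test)   # log returns under a GBM-type null
logp = np.cumsum(r_all)                            # log price path

est, test = logp[:n_est], logp[n_est:]
mu, sigma = np.diff(est).mean(), np.diff(est).std(ddof=1)

k = np.arange(1, n_test + 1)                       # forecast horizons
mean_k = est[-1] + k * mu                          # predictive mean of the log price
sd_k = sigma * np.sqrt(k)                          # predictive std under the null
lo, hi = norm.ppf(0.025, mean_k, sd_k), norm.ppf(0.975, mean_k, sd_k)
outside = np.mean((test < lo) | (test > hi))
print(f"fraction outside the 95% region: {outside:.1%} (nominal 5%)")
```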

12.
This paper presents point and interval estimators of both long-run and single-period target quantities in a simple cost-volume-profit (C-V-P) model. This model is a stochastic version of the “accountant's break-even chart” where the major component is a semivariable cost function. Although these features suggest obvious possibilities for practical application, a major purpose of this paper is to examine the statistical properties of target quantity estimators in C-V-P analysis. It is shown that point estimators of target quantity are biased and possess no moments of positive order, but are consistent. These properties are also shared by previous break-even models, even when all parameters are assumed known with certainty. After a test for positive variable margins, Fieller's [6] method is used to obtain interval estimators of relevant target quantities. This procedure therefore minimizes possible ambiguities in stochastic break-even analysis (noted by Ekern [3]).
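To illustrate the use of Fieller's method in this setting, the sketch below computes a confidence interval for a break-even quantity written as the ratio F/(p − v) of a known fixed cost to an estimated unit contribution margin, treating the margin estimate as approximately normal. The numbers, the normality assumption, and the zero covariance are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch: Fieller confidence interval for a break-even quantity
# Q = F / m, with F a known fixed cost and m-hat an (approximately) normal
# estimate of the unit contribution margin (price minus variable cost).
import numpy as np
from scipy.stats import t as t_dist

def fieller_interval(a, var_a, b, var_b, cov_ab, df, level=0.95):
    """Fieller CI for the ratio a/b; returns None when the set is unbounded/empty."""
    tcrit = t_dist.ppf(0.5 + level / 2, df)
    A = b**2 - tcrit**2 * var_b               # quadratic in rho: A*rho^2 + B*rho + C <= 0
    B = -2 * (a * b - tcrit**2 * cov_ab)
    C = a**2 - tcrit**2 * var_a
    disc = B**2 - 4 * A * C
    if A <= 0 or disc < 0:
        return None                            # margin not clearly positive
    roots = ((-B - np.sqrt(disc)) / (2 * A), (-B + np.sqrt(disc)) / (2 * A))
    return min(roots), max(roots)

F = 50_000.0                                   # fixed cost (assumed known)
m_hat, se_m, df = 12.5, 1.1, 30                # estimated unit margin, its s.e., d.o.f.
print(fieller_interval(F, 0.0, m_hat, se_m**2, 0.0, df))
```

The A <= 0 branch corresponds to a margin that is not significantly positive, which is why a test for positive variable margins naturally precedes the interval construction.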

13.
姚辉  徐亚豪 《管理学报》2008,5(2):301-304
By computing the opportunity cost of portfolios in two restricted cases, investment in stocks only and investment in bonds only, this paper studies the utility loss produced by such restrictions and the influence of the degree of risk aversion on the opportunity cost. The calculation uses five years of monthly data on the Shanghai Composite Index, the Shenzhen Component Index, the CITIC government bond index, the CITIC corporate bond index, and the 7-day interbank bond repurchase rate; a vector autoregression is used to estimate the next-period expected returns and the joint probability distribution of returns, and a quasi-Newton optimization method is used to obtain the optimal portfolio and hence the opportunity cost.

14.
The main results of this paper are monotonicity statements about the risk measures value-at-risk (VaR) and tail value-at-risk (TVaR) with respect to the parameters of single and multi risk factor models, which are standard models for the quantification of credit and insurance risk. In the context of single risk factor models, non-Gaussian distributed latent risk factors are allowed. It is shown that the TVaR increases with increasing claim amounts, probabilities of claims and correlations, whereas the VaR is in general not monotone in the correlation parameters. To compare the aggregated risks arising from single and multi risk factor models, the usual stochastic order and the increasing convex order are used in this paper, since these stochastic orders can be interpreted as being induced by the VaR-concept and the TVaR-concept, respectively. To derive monotonicity statements about these risk measures, properties of several further stochastic orders are used and their relation to the usual stochastic order and to the increasing convex order are applied.
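For intuition about the two risk measures discussed, the sketch below simulates aggregate losses from the Gaussian special case of a single risk factor model and computes the empirical VaR and TVaR at a given level. The factor loading, claim amounts, and default probabilities are invented inputs, and the exercise does not reproduce the paper's monotonicity proofs; re-running it with different correlation values only illustrates, rather than proves, the stated behavior.

```python
# Hedged sketch: empirical VaR and TVaR of aggregate loss in a Gaussian
# single-risk-factor model (illustrative parameters only).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n_obligors, n_sims = 100, 200_000
pd_i = np.full(n_obligors, 0.02)          # claim (default) probabilities
ead_i = np.full(n_obligors, 1.0)          # claim amounts
rho = 0.2                                 # common-factor correlation

z = rng.normal(size=(n_sims, 1))                       # systematic factor
eps = rng.normal(size=(n_sims, n_obligors))            # idiosyncratic shocks
asset = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps
loss = (asset < norm.ppf(pd_i)) @ ead_i                # aggregate loss per scenario

alpha = 0.99
var_alpha = np.quantile(loss, alpha)
tvar_alpha = loss[loss >= var_alpha].mean()            # mean loss beyond the VaR
print(f"VaR({alpha}) = {var_alpha:.1f}, TVaR({alpha}) = {tvar_alpha:.1f}")
```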

15.
For the problem of locating last-mile distribution centers and pre-stocking inventory under uncertain customer demand, this paper formulates a distribution-center location and delivery problem based on a combined "self-operated plus outsourced" delivery mode. A two-stage continuous stochastic programming model is built with the objective of minimizing the sum of the fixed operating cost of the self-operated distribution centers, the pre-stocking cost, and the expected values across scenarios of the self-operated delivery cost, the outsourced delivery cost, and the shortage loss cost. The first stage determines the locations of the self-operated distribution centers and the pre-stocked quantity at each center; the second stage determines, for each scenario, the self-operated shipment volumes, the outsourced shipment volumes, and the shortages at customer sites, so that the expected total cost is minimized. A sample average approximation method based on Monte Carlo sampling is designed to solve the model, together with an L-shaped decomposition algorithm for large-scale instances. Simulated examples verify the advantage of the two-stage stochastic programming model and the effectiveness of the sample average approximation method. Sensitivity analyses on the fixed operating cost of the self-operated distribution centers and on the per-unit self-operated and outsourced delivery costs yield the optimal delivery strategy under different parameter values; the results show that under normal conditions the "self-operated plus outsourced" delivery mode is the best choice for the firm. By treating both the distribution-center locations and the pre-stocked quantities as first-stage decision variables of the stochastic programming model, the approach can help firms reduce logistics costs and improve customer satisfaction.
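A stripped-down sketch of the sample average approximation idea underlying the solution method: demand scenarios are drawn by Monte Carlo, and a single deterministic linear program over all scenarios chooses the first-stage pre-stock quantity while the second stage splits each scenario's demand between self-operated delivery (limited by the pre-stock) and outsourcing. One site and one customer are used, all costs are invented for illustration, and the location decision and the L-shaped decomposition are omitted.

```python
# Hedged sketch: sample average approximation (SAA) for a toy two-stage
# pre-stocking problem with self-operated vs. outsourced delivery.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
S = 200                                    # Monte Carlo demand scenarios
demand = rng.lognormal(mean=3.0, sigma=0.4, size=S)
h, c_self, c_out = 0.6, 1.0, 2.5           # pre-stock, self-delivery, outsourcing cost

# Variables: [q, x_1..x_S (self-delivered), o_1..o_S (outsourced)]
c = np.concatenate([[h], np.full(S, c_self / S), np.full(S, c_out / S)])
# Serve all demand:  x_s + o_s >= d_s   ->  -x_s - o_s <= -d_s
A1 = np.hstack([np.zeros((S, 1)), -np.eye(S), -np.eye(S)])
# Self-delivery limited by pre-stock:  x_s - q <= 0
A2 = np.hstack([-np.ones((S, 1)), np.eye(S), np.zeros((S, S))])
res = linprog(c, A_ub=np.vstack([A1, A2]),
              b_ub=np.concatenate([-demand, np.zeros(S)]),
              bounds=[(0, None)] * (1 + 2 * S))
print(f"pre-stock quantity: {res.x[0]:.1f}, SAA expected cost: {res.fun:.1f}")
```

The optimal pre-stock balances the pre-stocking cost against the per-unit saving from avoiding outsourcing, a newsvendor-type trade-off that the full model embeds within the location decision.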

16.
Bayesian dynamic process capability estimation and evaluation based on multiple subsamples   Cited by: 2 (0 self-citations, 2 by others)
To evaluate production process capability when the model parameters are treated as random, a new method for estimating and evaluating process capability indices is proposed. Through an analysis of the statistical structure of the quality control model, the posterior distributions of the parameters under a diffuse prior are studied and used to construct Bayesian point and interval estimates of the process capability index. On this basis, the posterior distribution of the model parameters from the previous stage is used as the prior distribution for the next stage, so that historical data are fully exploited, and a Bayesian dynamic evaluation model for the process capability index and its lower limit is established. The results show that, compared with existing Bayesian process capability index estimation methods, the dynamic Bayesian process capability index has better predictive accuracy and better reflects the actual capability of the production process.
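A minimal sketch of the Bayesian ingredient described above: with normally distributed measurements and a diffuse (Jeffreys-type) prior, (n−1)s²/σ² has a chi-square posterior distribution, so draws of σ² translate into posterior draws of Cp = (USL − LSL)/(6σ) and a lower credible limit. The specification limits and data are invented, and the multi-subsample dynamic updating scheme of the paper (posterior of one stage feeding the prior of the next) is not reproduced.

```python
# Hedged sketch: Bayesian point and lower-limit estimate of Cp = (USL-LSL)/(6*sigma)
# under a diffuse (Jeffreys) prior for normal measurements, using the fact that
# (n-1)*s^2 / sigma^2 | data  ~  chi-square(n-1).
import numpy as np

rng = np.random.default_rng(6)
usl, lsl = 10.6, 9.4                       # specification limits (illustrative)
data = rng.normal(10.0, 0.15, size=50)     # one subsample of measurements
n, s2 = len(data), data.var(ddof=1)

chi2_draws = rng.chisquare(n - 1, size=100_000)
sigma2_post = (n - 1) * s2 / chi2_draws    # posterior draws of sigma^2
cp_post = (usl - lsl) / (6 * np.sqrt(sigma2_post))

print(f"posterior mean Cp: {cp_post.mean():.2f}, "
      f"95% lower credible limit: {np.quantile(cp_post, 0.05):.2f}")
```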

17.
In risk assessment, the moment‐independent sensitivity analysis (SA) technique for reducing the model uncertainty has attracted a great deal of attention from analysts and practitioners. It aims at measuring the relative importance of an individual input, or a set of inputs, in determining the uncertainty of model output by looking at the entire distribution range of model output. In this article, along the lines of Plischke et al., we point out that the original moment‐independent SA index (also called delta index) can also be interpreted as the dependence measure between model output and input variables, and introduce another moment‐independent SA index (called extended delta index) based on copula. Then, nonparametric methods for estimating the delta and extended delta indices are proposed. Both methods need only a set of samples to compute all the indices; thus, they conquer the problem of the “curse of dimensionality.” At last, an analytical test example, a risk assessment model, and the Level E model are employed for comparing the delta and the extended delta indices and testing the two calculation methods. Results show that the delta and the extended delta indices produce the same importance ranking in these three test examples. It is also shown that these two proposed calculation methods dramatically reduce the computational burden.
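As a rough illustration of what the delta index measures, the sketch below estimates δᵢ = ½ E_{Xᵢ}[∫ |f_Y(y) − f_{Y|Xᵢ}(y)| dy] for a simple test function from a single Monte Carlo sample, using histogram densities on a common grid and a partition of the input range. The binning scheme is a crude stand-in for the nonparametric estimators proposed in the article; it shares their given-data flavor (one sample suffices for all inputs) but not their efficiency.

```python
# Hedged sketch: given-data estimate of the moment-independent (delta) index
# delta_i = 0.5 * E_X[ integral |f_Y(y) - f_{Y|X_i}(y)| dy ]
# using histogram densities on a common grid.
import numpy as np

def delta_index(x, y, n_xbins=20, n_ybins=50):
    edges = np.linspace(y.min(), y.max(), n_ybins + 1)
    f_y, _ = np.histogram(y, bins=edges, density=True)    # unconditional density
    width = np.diff(edges)
    x_edges = np.quantile(x, np.linspace(0, 1, n_xbins + 1))  # equal-probability slices
    total, n = 0.0, len(y)
    for lo, hi in zip(x_edges[:-1], x_edges[1:]):
        mask = (x >= lo) & (x <= hi)
        if mask.sum() < 2:
            continue
        f_cond, _ = np.histogram(y[mask], bins=edges, density=True)
        total += mask.sum() * np.sum(np.abs(f_y - f_cond) * width)  # L1 distance
    return 0.5 * total / n

rng = np.random.default_rng(7)
x1, x2 = rng.normal(size=100_000), rng.normal(size=100_000)
y = x1 + 0.2 * x2                          # x1 should matter far more than x2
print(delta_index(x1, y), delta_index(x2, y))
```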

18.
This paper analyzes the properties of standard estimators, tests, and confidence sets (CS's) for parameters that are unidentified or weakly identified in some parts of the parameter space. The paper also introduces methods to make the tests and CS's robust to such identification problems. The results apply to a class of extremum estimators and corresponding tests and CS's that are based on criterion functions that satisfy certain asymptotic stochastic quadratic expansions and that depend on the parameter that determines the strength of identification. This covers a class of models estimated using maximum likelihood (ML), least squares (LS), quantile, generalized method of moments, generalized empirical likelihood, minimum distance, and semi‐parametric estimators. The consistency/lack‐of‐consistency and asymptotic distributions of the estimators are established under a full range of drifting sequences of true distributions. The asymptotic sizes (in a uniform sense) of standard and identification‐robust tests and CS's are established. The results are applied to the ARMA(1, 1) time series model estimated by ML and to the nonlinear regression model estimated by LS. In companion papers, the results are applied to a number of other models.

19.
In weighted moment condition models, we show a subtle link between identification and estimability that limits the practical usefulness of estimators based on these models. In particular, if it is necessary for (point) identification that the weights take arbitrarily large values, then the parameter of interest, though point identified, cannot be estimated at the regular (parametric) rate and is said to be irregularly identified. This rate depends on relative tail conditions and can be as slow in some examples as n^(−1/4). This nonstandard rate of convergence can lead to numerical instability and/or large standard errors. We examine two weighted model examples: (i) the binary response model under mean restriction introduced by Lewbel (1997) and further generalized to cover endogeneity and selection, where the estimator in this class of models is weighted by the density of a special regressor, and (ii) the treatment effect model under exogenous selection (Rosenbaum and Rubin (1983)), where the resulting estimator of the average treatment effect is one that is weighted by a variant of the propensity score. Without strong relative support conditions, these models, similar to well known “identified at infinity” models, lead to estimators that converge at slower than parametric rate, since essentially, to ensure point identification, one requires some variables to take values on sets with arbitrarily small probabilities, or thin sets. For the two models above, we derive some rates of convergence and propose that one conducts inference using rate adaptive procedures that are analogous to Andrews and Schafgans (1998) for the sample selection model.

20.
Risk Analysis, 2018, 38(3): 489-503
Flooding remains a major problem for the United States, causing numerous deaths and damaging countless properties. To reduce the impact of flooding on communities, the U.S. government established the Community Rating System (CRS) in 1990 to reduce flood damages by incentivizing communities to engage in flood risk management initiatives that surpass those required by the National Flood Insurance Program. In return, communities enjoy discounted flood insurance premiums. Despite the fact that the CRS raises concerns about the potential for unevenly distributed impacts across different income groups, no study has examined the equity implications of the CRS. This study thus investigates the possibility of unintended consequences of the CRS by answering the question: What is the effect of the CRS on poverty and income inequality? Understanding the impacts of the CRS on poverty and income inequality is useful in fully assessing the unintended consequences of the CRS. The study estimates four fixed‐effects regression models using a panel data set of neighborhood‐level observations from 1970 to 2010. The results indicate that median incomes are lower in CRS communities, but rise in floodplains. Also, the CRS attracts poor residents, but relocates them away from floodplains. Additionally, the CRS attracts top earners, including in floodplains. Finally, the CRS encourages income inequality, but discourages income inequality in floodplains. A better understanding of these unintended consequences of the CRS on poverty and income inequality can help to improve the design and performance of the CRS and, ultimately, increase community resilience to flood disasters.
