Similar Literature
20 similar records found.
1.
Counterfactual distributions are important ingredients for policy analysis and decomposition analysis in empirical economics. In this article, we develop modeling and inference tools for counterfactual distributions based on regression methods. The counterfactual scenarios that we consider consist of ceteris paribus changes in either the distribution of covariates related to the outcome of interest or the conditional distribution of the outcome given covariates. For either of these scenarios, we derive joint functional central limit theorems and bootstrap validity results for regression‐based estimators of the status quo and counterfactual outcome distributions. These results allow us to construct simultaneous confidence sets for function‐valued effects of the counterfactual changes, including the effects on the entire distribution and quantile functions of the outcome as well as on related functionals. These confidence sets can be used to test functional hypotheses such as no‐effect, positive effect, or stochastic dominance. Our theory applies to general counterfactual changes and covers the main regression methods including classical, quantile, duration, and distribution regressions. We illustrate the results with an empirical application to wage decompositions using data for the United States. As a part of developing the main results, we introduce distribution regression as a comprehensive and flexible tool for modeling and estimating the entire conditional distribution. We show that distribution regression encompasses the Cox duration regression and represents a useful alternative to quantile regression. We establish functional central limit theorems and bootstrap validity results for the empirical distribution regression process and various related functionals.
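A minimal sketch of the distribution-regression idea described above, on synthetic data: the conditional CDF is estimated by a sequence of binary logits of 1{Y ≤ y} on the covariates, and a counterfactual outcome distribution is obtained by averaging the fitted conditional CDF over an alternative covariate distribution. The data-generating process and all variable names are illustrative, not from the paper.

```python
# Distribution regression and a counterfactual distribution: a sketch
# on synthetic data (groups, coefficients, and thresholds are invented).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
x1 = rng.normal(1.0, 1.0, n)          # covariates of the status quo group
x0 = rng.normal(0.0, 1.0, n)          # covariates of the counterfactual group
y1 = 0.5 * x1 + rng.normal(0, 1, n)   # outcomes observed in the status quo group

X1 = sm.add_constant(x1)
X0 = sm.add_constant(x0)
thresholds = np.quantile(y1, np.linspace(0.05, 0.95, 19))

F_status_quo, F_counterfactual = [], []
for y in thresholds:
    # Distribution regression: a binary logit of 1{Y <= y} on X
    fit = sm.Logit((y1 <= y).astype(float), X1).fit(disp=0)
    # Average the fitted conditional CDF over each covariate distribution
    F_status_quo.append(fit.predict(X1).mean())
    F_counterfactual.append(fit.predict(X0).mean())

for y, f1, f0 in zip(thresholds, F_status_quo, F_counterfactual):
    print(f"y={y:+.2f}  F_status_quo(y)={f1:.3f}  F_counterfactual(y)={f0:.3f}")
```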

2.
We consider the identification of counterfactual distributions and treatment effects when the outcome variables and conditioning covariates are observed in separate data sets. Under the standard selection on observables assumption, the counterfactual distributions and treatment effect parameters are no longer point identified. However, applying the classical monotone rearrangement inequality, we derive sharp bounds on the counterfactual distributions and policy parameters of interest.
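A small illustration of the classical Fréchet–Hoeffding bounds that drive results of this kind: when the outcome and the covariates come from separate samples, only the marginals are identified, so any joint probability P(Y ≤ y, X ≤ x) is only bounded between max(F_Y(y) + F_X(x) − 1, 0) and min(F_Y(y), F_X(x)). This is the textbook inequality, not the paper's full set of sharp bounds on policy parameters.

```python
# Fréchet–Hoeffding bounds on a joint probability from two separate
# samples; data and evaluation points are synthetic illustrations.
import numpy as np

rng = np.random.default_rng(1)
y_sample = rng.normal(0, 1, 5000)   # data set containing the outcome
x_sample = rng.normal(0, 1, 5000)   # separate data set containing the covariate

def ecdf(sample, t):
    # Empirical CDF of a sample evaluated at t
    return np.mean(sample <= t)

for y, x in [(-1.0, 0.0), (0.0, 0.0), (1.0, 1.0)]:
    Fy, Fx = ecdf(y_sample, y), ecdf(x_sample, x)
    lower = max(Fy + Fx - 1.0, 0.0)
    upper = min(Fy, Fx)
    print(f"P(Y<={y:+.1f}, X<={x:+.1f}) in [{lower:.3f}, {upper:.3f}]")
```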

3.
This study analyzes subsidy schemes that are widely used in reducing waiting times for public healthcare service. We assume that public healthcare service has no user fee but an observable delay, while private healthcare service has a fee but no delay. Patients in the public system are given a subsidy s to use private service if their waiting times exceed a pre‐determined threshold t. We call these subsidy schemes (s, t) policies. As two extreme cases, the (s, t) policy is called an unconditional subsidy scheme if t = 0, and a full subsidy scheme if s is equal to the private service fee. There is a fixed budget constraint, so that a scheme with larger s has a larger t. We assess policies using two criteria: total patient cost and serviceability (i.e., the probability of meeting a waiting time target for public service). We prove analytically that, if patients are equally sensitive to delay, a scheme with a smaller subsidy outperforms one with a larger subsidy on both criteria. Thus, the unconditional scheme dominates all other policies. Using empirically derived parameter values from the Hong Kong Cataract Surgery Program, we then compare policies numerically when patients differ in delay sensitivity. Total patient cost is now unimodal in the subsidy amount: the unconditional scheme still yields the lowest total patient cost, but the full subsidy scheme can outperform some intermediate policies. Serviceability is unimodal too, and the full subsidy scheme can outperform the unconditional scheme in serviceability when the waiting time target is long.

4.
We propose a new regression method to evaluate the impact of changes in the distribution of the explanatory variables on quantiles of the unconditional (marginal) distribution of an outcome variable. The proposed method consists of running a regression of the (recentered) influence function (RIF) of the unconditional quantile on the explanatory variables. The influence function, a widely used tool in robust estimation, is easily computed for quantiles, as well as for other distributional statistics. Our approach, thus, can be readily generalized to other distributional statistics.
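A minimal sketch of the proposed RIF regression for an unconditional quantile, on synthetic data: compute the recentered influence function RIF(y) = q_τ + (τ − 1{y ≤ q_τ}) / f_Y(q_τ) and regress it on the explanatory variables by OLS. The kernel density estimator and the data-generating process are illustrative choices, not the paper's.

```python
# Unconditional quantile (RIF) regression: a sketch with a Gaussian
# kernel density estimate for f_Y(q) and synthetic data.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(0, 1, n)
y = 1.0 + 0.8 * x + rng.normal(0, 1, n)

tau = 0.5
q = np.quantile(y, tau)
f_q = gaussian_kde(y)(q)[0]            # density of Y at the tau-quantile

# Recentered influence function of the tau-quantile
rif = q + (tau - (y <= q)) / f_q

# OLS of the RIF on the explanatory variables
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, rif, rcond=None)[0]
print("Unconditional quantile partial effect of x:", beta[1])
```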

5.
Adele Bergin, LABOUR, 2015, 29(2): 194–223
Self‐reported tenure is often used to determine job changes. We show there are substantial inconsistencies in these responses; consequently, we risk misclassifying job changes as stays and vice versa. An estimator from Hausman et al. is applied to a job change model for Ireland, and we find that ignoring misclassification may substantially underestimate the true number of changes and lead to diminished covariate effects. The main contribution of the paper is to control for misclassification when estimating the wage effects of job mobility. A two‐step approach is adopted. We find that ignoring misclassification leads to a significant downward bias in the estimated wage impact, and we provide an estimate that corrects for the measurement error.
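A hedged sketch of the misclassification-robust binary response model in the spirit of the Hausman et al. estimator referenced above: the observed response probability is modeled as P(reported = 1 | x) = a0 + (1 − a0 − a1)Φ(x′β), where a0 and a1 are the misreporting rates. The synthetic data, starting values, and optimizer are illustrative assumptions.

```python
# Maximum likelihood for a probit with misclassified responses:
# P(reported=1|x) = a0 + (1 - a0 - a1) * Phi(b0 + b1*x). Synthetic data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(0, 1, n)
true_change = (0.5 * x + rng.normal(0, 1, n) > 0).astype(float)

a0_true, a1_true = 0.05, 0.10       # false positive / false negative rates
flip = rng.uniform(size=n)
reported = np.where(true_change == 1,
                    np.where(flip < a1_true, 0.0, 1.0),   # true changes reported as stays
                    np.where(flip < a0_true, 1.0, 0.0))   # true stays reported as changes

def negloglik(theta):
    b0, b1, a0, a1 = theta
    if not (0 <= a0 < 0.5 and 0 <= a1 < 0.5):   # identification requires a0 + a1 < 1
        return np.inf
    p = a0 + (1 - a0 - a1) * norm.cdf(b0 + b1 * x)
    p = np.clip(p, 1e-10, 1 - 1e-10)
    return -np.sum(reported * np.log(p) + (1 - reported) * np.log(1 - p))

fit = minimize(negloglik, x0=[0.0, 0.3, 0.02, 0.02], method="Nelder-Mead",
               options={"maxiter": 5000, "maxfev": 5000})
print("b0, b1, a0, a1 =", np.round(fit.x, 3))
```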

6.
Lídia Farré & Francis Vella, LABOUR, 2008, 22(3): 383–410
This paper analyses the impact of changes in macroeconomic conditions on the income distribution in Spain. Using household data from the Encuesta Continuada de Presupuestos Familiares (ECPF) from 1985 to 1996, we disentangle the effect of aggregate variables on the income distribution by estimating counterfactual densities conditional on different macroeconomic scenarios. In estimation, we use a semi‐parametric least squares procedure that allows a flexible interaction between the level of income, a first index of individual characteristics, and a second index that captures the role of macroeconomic variables. We find that although inequality displays a decreasing trend over the earlier part of the period examined, the poor performance of the Spanish economy during the early 1990s appears to have reversed this trend. We also conclude that while inflation appears to have no impact on the distribution of income for the period examined, there were important redistributive roles for unemployment, government expenditure, and the level of GDP.

7.
Giulio Bosio, LABOUR, 2014, 28(1): 64–86
Using Italian data, this paper investigates the wage implications of temporary jobs across the whole pay profile using unconditional quantile regression (UQR) models. Results clearly indicate that the wage penalty associated with temporary jobs is significantly larger at the bottom of the wage profile and almost absent for high‐wage jobs. This is in line with the sticky-floors hypothesis, supporting the idea that the wage gap for temporary employees in low-paid jobs depends on their position in the wage distribution. To recover a causal interpretation, I employ an instrumental variable (IV) strategy, adopting the unconditional instrumental variable quantile treatment effects (IVQTE) estimator proposed by Frölich and Melly, which corrects for endogenous selection into temporary contracts. The IVQTE estimates yield results similar to standard UQR, although the wage penalty is larger at the bottom of the wage distribution and disappears at the top quantiles. This evidence highlights that policies aimed at increasing flexibility may reinforce the two‐tier nature of the Italian labour market and relative wage inequality.

8.
This paper makes the following original contributions to the literature. (i) We develop a simpler analytical characterization and numerical algorithm for Bayesian inference in structural vector autoregressions (VARs) that can be used for models that are overidentified, just‐identified, or underidentified. (ii) We analyze the asymptotic properties of Bayesian inference and show that in the underidentified case, the asymptotic posterior distribution of contemporaneous coefficients in an n‐variable VAR is confined to the set of values that orthogonalize the population variance–covariance matrix of ordinary least squares residuals, with the height of the posterior proportional to the height of the prior at any point within that set. For example, in a bivariate VAR for supply and demand identified solely by sign restrictions, if the population correlation between the VAR residuals is positive, then even if one has available an infinite sample of data, any inference about the demand elasticity is coming exclusively from the prior distribution. (iii) We provide analytical characterizations of the informative prior distributions for impulse‐response functions that are implicit in the traditional sign‐restriction approach to VARs, and we note, as a special case of result (ii), that the influence of these priors does not vanish asymptotically. (iv) We illustrate how Bayesian inference with informative priors can be both a strict generalization and an unambiguous improvement over frequentist inference in just‐identified models. (v) We propose that researchers need to explicitly acknowledge and defend the role of prior beliefs in influencing structural conclusions and we illustrate how this could be done using a simple model of the U.S. labor market.
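A minimal sketch of the traditional sign-restriction approach whose implicit priors the paper characterizes: candidate impact matrices A with AA′ = Σ are generated by rotating a Cholesky factor, and a draw is kept only if the implied impact responses satisfy the sign restrictions. The bivariate residual covariance and the particular restriction below are illustrative.

```python
# Sign-restricted identification in a bivariate VAR: accept rotations of
# the Cholesky factor whose impact responses satisfy the restrictions.
import numpy as np

rng = np.random.default_rng(4)
Sigma = np.array([[1.0, 0.6],
                  [0.6, 1.0]])       # covariance of the VAR residuals (synthetic)
P = np.linalg.cholesky(Sigma)

accepted = []
for _ in range(20000):
    theta = rng.uniform(0, 2 * np.pi)
    Q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    A = P @ Q                        # candidate impact matrix, A @ A.T == Sigma
    # Illustrative sign restriction: the first shock moves both variables up
    if A[0, 0] > 0 and A[1, 0] > 0:
        accepted.append(A[1, 0] / A[0, 0])   # an elasticity-type impact ratio

accepted = np.array(accepted)
print("accepted draws:", len(accepted))
print("range of the accepted impact ratio:",
      accepted.min().round(3), "to", accepted.max().round(3))
```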

9.
We propose a method to correct for sample selection in quantile regression models. Selection is modeled via the cumulative distribution function, or copula, of the percentile error in the outcome equation and the error in the participation decision. Copula parameters are estimated by minimizing a method‐of‐moments criterion. Given these parameter estimates, the percentile levels of the outcome are readjusted to correct for selection, and quantile parameters are estimated by minimizing a rotated “check” function. We apply the method to correct wage percentiles for selection into employment, using data for the UK for the period 1978–2000. We also extend the method to account for the presence of equilibrium effects when performing counterfactual exercises.
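A toy illustration of the copula-based correction, with the copula treated as known rather than estimated as in the paper: conditional on participation with propensity p, the rank of the outcome error U satisfies P(U ≤ τ | selected) = C(τ, p)/p, so the population τ-quantile can be read off the selected sample at that adjusted rank. One covariate cell and a Gaussian copula; all numbers are illustrative.

```python
# Selection-corrected quantile via the adjusted rank C(tau, p)/p,
# with a known Gaussian copula between outcome rank U and selection error V.
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(9)
rho, p = 0.6, 0.5                     # copula parameter, participation rate
n = 200000

# Gaussian-copula draws for (U, V)
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
U, V = norm.cdf(z[:, 0]), norm.cdf(z[:, 1])
y = norm.ppf(U)                       # outcome whose rank is U
selected = V <= p                     # participation decision

tau = 0.5
C = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).cdf(
        [norm.ppf(tau), norm.ppf(p)])  # C(tau, p) under the Gaussian copula
adjusted_rank = C / p

naive = np.quantile(y[selected], tau)           # ignores selection
corrected = np.quantile(y[selected], adjusted_rank)
print("true quantile:", norm.ppf(tau),
      " naive:", round(naive, 3), " corrected:", round(corrected, 3))
```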

10.
Li R, Englehardt JD, Li X, Risk Analysis, 2012, 32(2): 345–359
Multivariate probability distributions, such as may be used for mixture dose‐response assessment, are typically highly parameterized and difficult to fit to available data. However, such distributions may be useful in analyzing the large electronic data sets becoming available, such as dose‐response biomarker and genetic information. In this article, a new two‐stage computational approach is introduced for estimating multivariate distributions and addressing parameter uncertainty. The proposed first stage comprises a gradient Markov chain Monte Carlo (GMCMC) technique to find Bayesian posterior mode estimates (PMEs) of parameters, equivalent to maximum likelihood estimates (MLEs) in the absence of subjective information. In the second stage, these estimates are used to initialize a Markov chain Monte Carlo (MCMC) simulation, replacing the conventional burn‐in period to allow convergent simulation of the full joint Bayesian posterior distribution and the corresponding unconditional multivariate distribution (not conditional on uncertain parameter values). When the distribution of parameter uncertainty is such a Bayesian posterior, the unconditional distribution is termed predictive. The method is demonstrated by finding conditional and unconditional versions of the recently proposed emergent dose‐response function (DRF). Results are shown for the five‐parameter common‐mode and seven‐parameter dissimilar‐mode models, based on published data for eight benzene–toluene dose pairs. The common‐mode conditional DRF is obtained with a 21‐fold reduction in data requirement versus MCMC. Example common‐mode unconditional DRFs are then found using synthetic data, showing a 71% reduction in required data. The approach is further demonstrated for a PCB 126–PCB 153 mixture. Applicability is analyzed and discussed. Matlab® computer programs are provided.
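A minimal sketch of the two-stage logic on a toy Gaussian model: stage one finds the posterior mode with a generic gradient-based optimizer (standing in here for the paper's GMCMC technique), and stage two starts a Metropolis–Hastings chain at that mode in place of a conventional burn-in. Nothing here reproduces the emergent dose–response function itself.

```python
# Two-stage estimation: posterior mode first, then MCMC initialized at
# the mode instead of a long burn-in. Toy model: unknown Gaussian mean.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
data = rng.normal(2.0, 1.0, 200)

def neg_log_post(mu):
    # Flat prior, Gaussian likelihood with known unit variance
    return 0.5 * np.sum((data - mu) ** 2)

# Stage 1: posterior mode (equals the MLE under the flat prior)
mode = minimize(lambda m: neg_log_post(m[0]), x0=[0.0]).x[0]

# Stage 2: Metropolis-Hastings chain started at the mode
chain, current = [mode], mode
for _ in range(5000):
    proposal = current + rng.normal(0, 0.2)
    if np.log(rng.uniform()) < neg_log_post(current) - neg_log_post(proposal):
        current = proposal
    chain.append(current)

print("posterior mode:", round(mode, 3))
print("posterior mean / sd:", round(np.mean(chain), 3), round(np.std(chain), 3))
```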

11.
Staffing decisions are crucial for retailers since staffing levels affect store performance and labor‐related expenses constitute one of the largest components of retailers’ operating costs. With the goal of improving staffing decisions and store performance, we develop a labor‐planning framework using proprietary data from an apparel retail chain. First, we propose a sales response function based on labor adequacy (the labor to traffic ratio) that exhibits variable elasticity of substitution between traffic and labor. When compared to a frequently used function with constant elasticity of substitution, our proposed function exploits information content from data more effectively and better predicts sales under extreme labor/traffic conditions. We use the validated sales response function to develop a data‐driven staffing heuristic that incorporates the prediction loss function and uses past traffic to predict optimal labor. In counterfactual experimentation, we show that profits achieved by our heuristic are within 0.5% of the optimal (attainable if perfect traffic information was available) under stable traffic conditions, and within 2.5% of the optimal under extreme traffic variability. We conclude by discussing implications of our findings for researchers and practitioners.
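A hypothetical sketch of the labor-planning logic: a sales response function that rises and saturates in labor adequacy (the labor-to-traffic ratio), combined with a grid search for the profit-maximizing staffing level. The saturating-exponential form, wage, and margin below are invented for illustration and are not the paper's estimated specification.

```python
# Staffing from a sales response function in labor adequacy:
# an illustrative, invented functional form, not the paper's model.
import numpy as np

def expected_sales(traffic, labor, a=0.8, b=2.0):
    # Conversion rises with labor adequacy (labor/traffic) and saturates
    adequacy = labor / np.maximum(traffic, 1e-9)
    return traffic * a * (1 - np.exp(-b * adequacy))

def optimal_labor(traffic_forecast, wage=15.0, margin=40.0):
    # Grid-search the staffing level that maximizes expected profit
    grid = np.linspace(1, 60, 600)
    profit = margin * expected_sales(traffic_forecast, grid) - wage * grid
    return grid[np.argmax(profit)]

for traffic in (100, 300, 600):
    print(f"forecast traffic {traffic}: staff about {optimal_labor(traffic):.1f}")
```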

12.
Legal liability for risk‐generating technological activities is evaluated against requirements that are necessary for peaceful human coexistence and progress, with the aim of identifying possibilities for improvement. Given that political decision making about such activities proceeds on the basis of majority rule, the requirements imply that legal liability should be unconditional (absolute, strict) and unlimited (full). We analyze actual liability in international law for various risk‐generating technological activities and conclude that nowhere is the standard of unconditional and unlimited liability fully met. Beyond that, the differences are enormous: although significant international liability legislation is in place for some risk‐generating technological activities, legislation is virtually absent for others. We discuss fundamental possibilities and limitations of liability and private insurance for securing credible and ethically sound risk assessment and risk management practices. The limitations stem from problems of establishing a causal link between an activity and a harm; compensating irreparable harm; financial warranty; moral hazard in insurance and in organizations; and discounting future damage to present value. As our requirements call for prior agreement among all who are subjected to the risks of an activity about the settlement of these difficult problems, precautionary ex ante regulation of risk‐generating activities may be a more attractive option, whether or not combined with liability stipulations. However, if ex ante regulation is not based on the consent of all those subjected to the risks, the basis of liability in the law should remain unconditional and unlimited liability.

13.
We consider empirical measurement of equivalent variation (EV) and compensating variation (CV) resulting from price change of a discrete good using individual‐level data when there is unobserved heterogeneity in preferences. We show that for binary and unordered multinomial choice, the marginal distributions of EV and CV can be expressed as simple closed‐form functionals of conditional choice probabilities under essentially unrestricted preference distributions. These results hold even when the distribution and dimension of unobserved heterogeneity are neither known nor identified, and utilities are neither quasilinear nor parametrically specified. The welfare distributions take simple forms that are easy to compute in applications. In particular, average EV for a price rise equals the change in average Marshallian consumer surplus and is smaller than average CV for a normal good. These nonparametric point‐identification results fail for ordered choice if the unit price is identical for all alternatives, thereby providing a connection to Hausman–Newey's (2014) partial identification results for the limiting case of continuous choice.
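A minimal numeric illustration of the headline result for binary choice: the average welfare effect of raising a good's price from p0 to p1 is recovered by integrating the choice probability over the price interval, i.e., the magnitude of the change in average Marshallian consumer surplus. A known logit probability stands in here for the estimated conditional choice probabilities an application would use.

```python
# Average welfare loss of a price rise as the integral of the choice
# probability over [p0, p1]; logit DGP used purely for illustration.
import numpy as np

def choice_prob(p, intercept=2.0, beta_p=-1.0):
    # Probability of buying the good at price p under the synthetic logit DGP
    return 1.0 / (1.0 + np.exp(-(intercept + beta_p * p)))

p0, p1 = 1.0, 2.0
grid = np.linspace(p0, p1, 1001)
probs = choice_prob(grid)

# Trapezoidal integration of demand over the price interval
avg_loss = np.sum(0.5 * (probs[1:] + probs[:-1]) * np.diff(grid))
print("average surplus loss from the price rise:", round(avg_loss, 4))
```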

14.
The purpose of this paper is to provide a deeper process understanding of team mental model dynamics in a context of strategic change implementation. To do so, we adopt a change recipient sensemaking perspective with the objective of identifying salient determinants of team mental model dynamics. We aim to contribute to the managerial and organizational cognition literature by identifying critical micro-foundations that shape team cognition and interpretation processes during strategic change implementation. This adds to the field’s understanding of the under-researched collective dimension of strategic processes in general and of strategic change implementation more specifically. Through an explorative case study conducted at a professional service organization, we identified five determinants of team mental model dynamics: coherence between ostensive and performative aspects of organizational routines, equivocality of expectations, dominance of organizational discourse, shifts in organizational identification, and cross-understanding between departmental thought worlds. Case findings reveal that strategic change implementation becomes intricate and difficult if change recipient sensemaking is not effectively acted upon. The five determinants identified require adequate managerial attention in order to avoid slipping into organizational inertia. As a consequence of such neglect, professional workers are unable to ‘drop their tools’ and fail to integrate the strategic change effort into updated team mental models.

15.
We provide a tractable characterization of the sharp identification region of the parameter vector θ in a broad class of incomplete econometric models. Models in this class have set‐valued predictions that yield a convex set of conditional or unconditional moments for the observable model variables. In short, we call these “models with convex moment predictions.” Examples include static, simultaneous‐move finite games of complete and incomplete information in the presence of multiple equilibria; best linear predictors with interval outcome and covariate data; and random utility models of multinomial choice in the presence of interval regressors data. Given a candidate value for θ, we establish that the convex set of moments yielded by the model predictions can be represented as the Aumann expectation of a properly defined random set. The sharp identification region of θ, denoted ΘI, can then be obtained as the set of minimizers of the distance from a properly specified vector of moments of random variables to this Aumann expectation. Algorithms in convex programming can be exploited to efficiently verify whether a candidate θ is in ΘI. We use examples analyzed in the literature to illustrate the gains in identification and computational tractability afforded by our method.

16.
This paper develops a generalization of the widely used difference‐in‐differences method for evaluating the effects of policy changes. We propose a model that allows the control and treatment groups to have different average benefits from the treatment. The assumptions of the proposed model are invariant to the scaling of the outcome. We provide conditions under which the model is nonparametrically identified and propose an estimator that can be applied using either repeated cross section or panel data. Our approach provides an estimate of the entire counterfactual distribution of outcomes that would have been experienced by the treatment group in the absence of the treatment and likewise for the untreated group in the presence of the treatment. Thus, it enables the evaluation of policy interventions according to criteria such as a mean–variance trade‐off. We also propose methods for inference, showing that our estimator for the average treatment effect is root‐N consistent and asymptotically normal. We consider extensions to allow for covariates, discrete dependent variables, and multiple groups and time periods.
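A brief sketch, on synthetic data, of the changes-in-changes construction associated with this generalization of difference-in-differences: the counterfactual untreated period-1 outcome for a treated unit with period-0 outcome y is F01^{-1}(F00(y)), where F00 and F01 are the control group's period-0 and period-1 distributions. The lognormal data-generating process is illustrative.

```python
# Changes-in-changes: counterfactual distribution for the treated group
# via the control group's period-0 -> period-1 quantile transformation.
import numpy as np

rng = np.random.default_rng(7)
n = 4000
y00 = rng.lognormal(0.0, 0.5, n)          # control, period 0
y01 = rng.lognormal(0.2, 0.5, n)          # control, period 1
y10 = rng.lognormal(0.3, 0.5, n)          # treated, period 0
y11 = rng.lognormal(0.5, 0.5, n) + 0.5    # treated, period 1 (effect built in)

def ecdf(sample, y):
    # Empirical CDF of `sample` evaluated at each point of `y`
    return np.searchsorted(np.sort(sample), y, side="right") / len(sample)

# Counterfactual period-1 outcomes the treated would have had untreated:
# F01^{-1}(F00(y)) applied to the treated period-0 outcomes
y11_counterfactual = np.quantile(y01, np.clip(ecdf(y00, y10), 0.0, 1.0))

att = y11.mean() - y11_counterfactual.mean()
print("estimated average treatment effect on the treated:", round(att, 3))
```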

17.
We show that a simple “reputation‐style” test can always identify which of two experts is informed about the true distribution. The test presumes no prior knowledge of the true distribution, achieves any desired degree of precision in some fixed finite time, and does not use “counterfactual” predictions. Our analysis capitalizes on a result of Fudenberg and Levine (1992) on the rate of convergence of supermartingales. We use our setup to shed some light on the apparent paradox that a strategically motivated expert can ignorantly pass any test. We point out that this paradox arises because in the single‐expert setting, any mixed strategy for Nature over distributions is reducible to a pure strategy. This eliminates any meaningful sense in which Nature can randomize. Comparative testing reverses the impossibility result because the presence of an expert who knows the realized distribution eliminates the reducibility of Nature's compound lotteries.

18.
In December 2015, a cyber‐physical attack took place on the Ukrainian electricity distribution network. This is regarded as one of the first cyber‐physical attacks on electricity infrastructure to have led to a substantial power outage and is illustrative of the increasing vulnerability of Critical National Infrastructure to this type of malicious activity. A scarcity of data points, coupled with the rapid emergence of cyber phenomena, has held back the development of resilience analytics for cyber‐physical attacks, relative to many other threats. We propose to overcome data limitations by applying stochastic counterfactual risk analysis as part of a new vulnerability assessment framework. The method is developed in the context of the direct and indirect socioeconomic impacts of a Ukrainian‐style cyber‐physical attack taking place on the electricity distribution network serving London and its surrounding regions. A key finding is that if decision‐makers wish to mitigate major population disruptions, then they must invest resources more‐or‐less equally across all substations to prevent the scaling of a cyber‐physical attack. However, some substations are associated with higher economic value due to their support of other Critical National Infrastructure assets, which justifies the allocation of additional cyber security investment to reduce the chance of cascading failure. Further cyber‐physical vulnerability research must address the tradeoffs inherent in a system made up of multiple institutions with different strategic risk mitigation objectives and metrics of value, such as governments, infrastructure operators, and commercial consumers of infrastructure services.

19.
If voter preferences depend on a noisy state variable, under what conditions do large elections deliver outcomes “as if” the state were common knowledge? While the existing literature models elections using the jury metaphor where a change in information regarding the state induces all voters to switch in favor of only one alternative, we allow for more general preferences where a change in information can induce a switch in favor of either alternative. We show that information is aggregated for any voting rule if, for a randomly chosen voter, the probability of switching in favor of one alternative is strictly greater than the probability of switching away from that alternative for any given change in belief over states. If the preference distribution violates this condition, there exist equilibria that produce outcomes different from the full information outcome with high probability for large classes of voting rules. In other words, unless preferences closely conform to the jury metaphor, information aggregation is not guaranteed to obtain.

20.
Post-decision evaluation plays an important role in measuring the effectiveness of decisions. This paper introduces counterfactual reasoning to enable comparative post-evaluation of decision outcomes. We first propose a normative decision-making process that includes a post-evaluation stage, and then describe the steps for performing counterfactual reasoning with causal influence diagram models. We define two indices, self-satisfaction and relative satisfaction, whose scores are computed from the results of the counterfactual reasoning, thereby providing an evaluation of the decision. A worked example illustrates the computation and shows that the method is feasible and effective.
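A minimal sketch of the abduction–action–prediction recipe that underlies counterfactual reasoning in causal models of this kind, on a linear toy structural model; the causal influence diagram machinery and the two satisfaction indices defined in the paper are not reproduced here.

```python
# Counterfactual evaluation of a decision in a toy structural causal model:
# abduction (recover noise), action (swap the decision), prediction (replay).
import numpy as np

rng = np.random.default_rng(8)

# Structural model: outcome = 2 * decision + noise
decision_taken = 1.0
noise = rng.normal(0, 1, 1000)
outcome_observed = 2.0 * decision_taken + noise

# Abduction: with the model known, recover the noise consistent with the data
noise_recovered = outcome_observed - 2.0 * decision_taken

# Action + prediction: replay the same noise under the alternative decision
decision_alternative = 0.0
outcome_counterfactual = 2.0 * decision_alternative + noise_recovered

print("observed mean outcome:      ", outcome_observed.mean().round(3))
print("counterfactual mean outcome:", outcome_counterfactual.mean().round(3))
```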
