Wealth aggregates implied by the Household Finance and Consumption Survey (HFCS) are usually much lower than the macroeconomic aggregates reported in the National Accounts. An important source of this gap may be the under-representation of the wealthiest households in the HFCS. This article therefore combines a semi-parametric Pareto model, estimated from top survey observations and rich-list data, with a non-parametric stratification approach to quantify the impact of the missing wealthy households on component-specific micro-macro gaps. We find that unadjusted micro data substantially underestimate wealth inequality. The largest effects are documented for equity; for other components, the missing wealthy explain less than ten percentage points of the micro-macro gap. We also find that differences in oversampling strategies limit the cross-country comparability of unadjusted survey-implied wealth distributions, and that our top-tail adjustment yields measures that are more comparable internationally.
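As a rough illustration of the kind of top-tail adjustment described above, the sketch below fits a Pareto tail to pooled top-wealth observations (survey top tail plus rich-list entries) and derives the implied mean wealth above the threshold. The threshold, the simulated inputs, and the simple Hill-type estimator are assumptions for illustration, not the article's estimator or data.

```python
import numpy as np

def hill_alpha(wealth, w_min):
    """Maximum-likelihood (Hill) estimate of the Pareto tail index
    for observations at or above the threshold w_min."""
    tail = wealth[wealth >= w_min]
    return len(tail) / np.sum(np.log(tail / w_min))

# Hypothetical inputs: top-tail survey observations and a national rich list
# (both simulated here purely for illustration, values in EUR).
survey_top = np.random.pareto(1.4, size=2000) * 1e6 + 1e6
rich_list  = np.random.pareto(1.4, size=50) * 5e7 + 5e7

w_min = 1e6                                   # tail threshold (assumption)
combined = np.concatenate([survey_top, rich_list])
alpha = hill_alpha(combined, w_min)

# Mean wealth above the threshold implied by the fitted Pareto tail
# (finite only if alpha > 1): E[W | W > w_min] = alpha / (alpha - 1) * w_min.
implied_mean_top = alpha / (alpha - 1.0) * w_min
print(f"alpha = {alpha:.2f}, implied mean top-tail wealth = {implied_mean_top:,.0f}")
```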
Information ambiguity is prevalent in organizations and likely influences management decisions. This study examines how managers make choices under imprecise probabilities and outcomes when they are provided with single-figure benchmarks. Seventy-nine MBA students completed two experiments. We found that, in a decision framed as one under certainty but involving an ambiguous outcome, the majority of subjects were ambiguity prone in the loss condition and switched to ambiguity aversion in the gain condition. However, in the presence of probabilistic ambiguity in a decision under risk, this switching pattern appeared only when the difference in riskiness between the two choice options in the loss condition was perceived to be relatively small. In a companion study, we used a written-protocol approach to identify factors that affect decision makers' investment choices when faced with ambiguous outcomes. Protocols frequently described the ambiguous-outcome option as risky, even in the case framed as a decision under certainty in the problem statement. In a decision under risk with ambiguous outcomes, the combination of probabilistic risk and outcome ambiguity was seen as even riskier.
Karni and Safra [8] prove that the Becker-DeGroot-Marschak mechanism reveals a decision maker's true certainty equivalent of a lottery if and only if he satisfies the independence axiom. Segal [17] claims that this mechanism may instead reveal a violation of the reduction of compound lotteries axiom. This paper empirically tests these two interpretations. Our results show that the second interpretation fits the collected data better. Moreover, we show by means of some non-expected utility examples that these results are consistent with a wide range of functionals.
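For readers unfamiliar with the mechanism, the following sketch simulates one subject's payoff under the Becker-DeGroot-Marschak procedure: a random buying offer is drawn, the lottery is sold if the offer meets the stated valuation, and is played out otherwise. The specific lottery, offer range, and stated valuation are hypothetical choices made purely for illustration.

```python
import random

def bdm_round(stated_ce, play_lottery, max_offer=100.0):
    """One round of the Becker-DeGroot-Marschak mechanism: a random offer is
    drawn; if it meets or exceeds the stated certainty equivalent, the subject
    sells the lottery for the offer, otherwise the lottery is played out."""
    offer = random.uniform(0.0, max_offer)
    if offer >= stated_ce:
        return offer            # lottery sold at the random offer
    return play_lottery()       # lottery played

# Illustrative lottery: win 100 with probability 0.5, else 0.
lottery = lambda: 100.0 if random.random() < 0.5 else 0.0

# Average payoff for a subject who states 40 as the certainty equivalent.
payoffs = [bdm_round(40.0, lottery) for _ in range(10000)]
print(sum(payoffs) / len(payoffs))
```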
A series of studies investigates the decision processes of actuaries, underwriters, and reinsurers in setting premiums for ambiguous and uncertain risks. Survey data on prices reveal that all three types of insurance decision makers are risk averse and ambiguity averse. In addition, the groups appear to be influenced in their premium-setting decisions by specific reference points such as the expected loss and concern with insolvency. This behavior is consistent with a growing analytical and empirical literature in economics and decision processes that investigates the role uncertainty plays in managerial choices. Improved risk-assessment procedures and government involvement in providing protection against catastrophic losses may induce insurers to reduce premiums and broaden available coverage. This article is part of a larger effort supported by the National Science Foundation on The Role of Insurance, Compensation, Regulation, and Protective Behavior in Decision Making about Risk and Misfortune. We greatly appreciate the many helpful comments and suggestions by our colleagues on the project: Jon Baron, Colin Camerer, Neil Doherty, Jack Hershey, Eric Johnson, and Paul Kleindorfer. Support from NSF Grant #SES8809299 is gratefully acknowledged.
This article compares the performance of the expected utility (EU) and lottery-dependent expected utility (LDEU) models in predicting the actual choices of experimental subjects among risky options. In the process, we present two approaches for calibrating the LDEU model for an individual decision maker. The results indicate that while LDEU exhibits a higher potential for correctly predicting choice, the version of the model calibrated by indifference judgments does not outperform EU. We suggest a functional form for the parametric functions that define the LDEU model, and discuss ways in which this function can be incorporated into choice-based assessment approaches to improve predictions. This research was supported in part by the Business Associates Fund at the Fuqua School of Business, Duke University.
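To make the model comparison concrete, the sketch below contrasts choice predictions from a standard EU evaluation and a lottery-dependent evaluation in which the utility curvature depends on the lottery itself. The exponential utility form, the function h, and the two options are illustrative assumptions, not the calibrations or functional form proposed in the article.

```python
import numpy as np

def eu(probs, outcomes, u=lambda x: x ** 0.8):
    """Expected utility with an illustrative power utility function."""
    return float(np.dot(probs, u(np.asarray(outcomes, dtype=float))))

def ldeu(probs, outcomes, h=lambda x: 2.0 * x - 1.0):
    """Lottery-dependent expected utility: the curvature parameter c_L of an
    exponential utility depends on the lottery via c_L = sum p_i h(x_i).
    Outcomes are assumed normalized to [0, 1]; h and the utility form are
    illustrative assumptions."""
    x = np.asarray(outcomes, dtype=float)
    c = float(np.dot(probs, h(x)))
    if abs(c) < 1e-9:                       # limit c -> 0 gives linear utility
        return float(np.dot(probs, x))
    u = (1.0 - np.exp(-c * x)) / (1.0 - np.exp(-c))
    return float(np.dot(probs, u))

# Predict the choice between two risky options under each model.
A = ([0.5, 0.5], [0.0, 1.0])        # 50-50 gamble on the normalized scale
B = ([1.0], [0.45])                 # sure amount
print("EU prefers A:  ", eu(*A) > eu(*B))
print("LDEU prefers A:", ldeu(*A) > ldeu(*B))
```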
Differences between plant varieties are based on phenotypic observations, which are both space- and time-consuming. Moreover, phenotypic data result from the combined effects of genotype and environment. In contrast, molecular data are easier to obtain and give direct access to the genotype. In order to save experimental trials and to concentrate efforts on the relevant comparisons between varieties, we study the relationship between phenotypic and genetic distances. It appears that the classical genetic distances based on molecular data are not appropriate for predicting phenotypic distances. Within the linear model framework, we define a new pseudo genetic distance, which is a prediction of the phenotypic one, and we establish the distribution of the phenotypic distance given this pseudo genetic distance. Statistical properties of the predicted distance are derived when the parameters of the model are either given or estimated. We finally apply these results to distinguishing between 144 maize lines. This case study is very satisfactory because the use of anonymous molecular markers (RFLP) leads to saving 29% of the trials with an acceptable error risk. These results need to be confirmed on other varieties and species and would certainly be improved by using genes coding for phenotypic traits.
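A minimal sketch of the idea, under simplified assumptions: pairwise phenotypic distances are regressed on per-marker mismatch indicators, and the fitted value plays the role of the pseudo genetic distance. The simulated marker and trait data, the squared-Euclidean distance, and the ordinary least-squares fit are illustrative and do not reproduce the article's model.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical data: binary RFLP marker scores and phenotypic trait means
# for a set of lines (both simulated for illustration).
n_lines, n_markers, n_traits = 30, 100, 5
markers = rng.integers(0, 2, size=(n_lines, n_markers))
phenos = markers @ rng.normal(size=(n_markers, n_traits)) * 0.1 \
         + rng.normal(size=(n_lines, n_traits))

pairs = list(combinations(range(n_lines), 2))
# Per-marker mismatch profile for each pair (predictors) and the observed
# squared phenotypic distance (response).
X = np.array([(markers[i] != markers[j]).astype(float) for i, j in pairs])
y = np.array([np.sum((phenos[i] - phenos[j]) ** 2) for i, j in pairs])

# Linear model: the fitted value is the "pseudo genetic distance", a
# marker-based prediction of the phenotypic distance.
model = LinearRegression().fit(X, y)
pseudo_distance = model.predict(X)
print("R^2 of pseudo genetic distance:", model.score(X, y))
```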
Cornwell, Schmidt, and Sickles (1990) and Kumbhakar (1990), among others, developed stochastic frontier production models which allow firm-specific inefficiency levels to change over time. These studies imposed arbitrary restrictions on the short-run dynamics of efficiency levels, restrictions that have little theoretical justification; further, the models are inappropriate for estimating long-run efficiencies. We consider estimation of an alternative frontier model in which firm-specific technical inefficiency levels are autoregressive. This model is particularly useful for examining a potential dynamic link between technical innovations and production inefficiency levels. We apply our methodology to a panel of US airlines.
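As a rough illustration of an autoregressive inefficiency structure, the sketch below simulates a panel in which each firm's technical inefficiency follows an AR(1) process with non-negative innovations. The functional form, error scales, and parameter values are assumptions for illustration, not the article's specification or estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_panel(n_firms=50, n_periods=20, beta=0.7, rho=0.6):
    """Simulate a stochastic frontier panel in which firm-specific technical
    inefficiency follows an AR(1) process (an illustrative parameterisation):
        y_it = beta * x_it + v_it - u_it,
        u_it = rho * u_{i,t-1} + eta_it,   eta_it >= 0 (half-normal).
    """
    x = rng.normal(size=(n_firms, n_periods))
    v = rng.normal(scale=0.1, size=(n_firms, n_periods))
    eta = np.abs(rng.normal(scale=0.2, size=(n_firms, n_periods)))
    u = np.zeros((n_firms, n_periods))
    u[:, 0] = eta[:, 0]
    for t in range(1, n_periods):
        u[:, t] = rho * u[:, t - 1] + eta[:, t]
    y = beta * x + v - u
    return x, y, u

x, y, u = simulate_panel()
# The AR(1) structure implies a well-defined long-run inefficiency level,
# E[u] = E[eta] / (1 - rho), which is what motivates using such a model
# for long-run efficiency measurement.
print("simulated mean inefficiency:", u.mean())
```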
If the unknown mean of a univariate population is sufficiently close to the value of an initial guess, then an appropriate shrinkage estimator has smaller average squared error than the sample mean. This principle has been known for some time, but it does not appear to have been extended to problems of interval estimation. The author presents valid two-sided 95% and 99% "shrinkage" confidence intervals for the mean of a normal distribution. These intervals are narrower than the usual interval based on the Student distribution when the population mean lies in an "effective interval" around the guess. A reduction of 20% in the mean width of the interval is possible when the population mean is sufficiently close to the value of the guess. The author also describes a modification to existing shrinkage point estimators of the general univariate mean that enables the effective interval to be enlarged.
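The trade-off behind the shrinkage idea can be seen in a small simulation: shrinking the sample mean toward a guess reduces mean squared error when the true mean is close to the guess and inflates it otherwise. The fixed shrinkage weight and normal sampling setup below are illustrative and are not the interval construction proposed in the article.

```python
import numpy as np

rng = np.random.default_rng(2)

def shrinkage_estimate(sample, guess, c=0.5):
    """Shrink the sample mean toward an initial guess. The fixed weight c is
    illustrative; the article chooses the amount of shrinkage more carefully."""
    return c * guess + (1.0 - c) * sample.mean()

def mse(true_mean, guess, n=20, reps=20000):
    samples = rng.normal(loc=true_mean, size=(reps, n))
    plain = samples.mean(axis=1)
    shrunk = np.array([shrinkage_estimate(s, guess) for s in samples])
    return np.mean((plain - true_mean) ** 2), np.mean((shrunk - true_mean) ** 2)

# Near the guess the shrinkage estimator wins; far from it, the sample mean wins.
for true_mean in (0.0, 0.2, 1.0):
    m_plain, m_shrunk = mse(true_mean, guess=0.0)
    print(f"true mean {true_mean:.1f}: MSE mean={m_plain:.4f}, shrinkage={m_shrunk:.4f}")
```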
The Erdős–Rényi model of a network is simple and possesses many explicit expressions for average and asymptotic properties, but it does not fit well to real-world networks. The vertices of those networks are often structured into unknown classes (functionally related proteins or social communities, for example) with different connectivity properties. The stochastic block structures model was proposed for this purpose in the context of the social sciences, using a Bayesian approach. We consider the same model in a frequentist statistical framework. We give the degree distribution and the clustering coefficient associated with this model, a variational method to estimate its parameters, and a model selection criterion to choose the number of classes. This estimation procedure allows us to deal with large networks containing thousands of vertices. The method is used to uncover the modular structure of a network of enzymatic reactions.
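To illustrate why block structure matters, the sketch below simulates a small stochastic block model graph and a density-matched Erdős–Rényi graph, then compares simple summary statistics such as the clustering coefficient. The class sizes and connectivity matrix are illustrative, not estimates from the enzymatic-reaction network.

```python
import numpy as np
import networkx as nx

# Illustrative mixture parameters: three latent classes with higher
# within-class than between-class connectivity.
sizes = [40, 30, 30]
probs = [[0.25, 0.02, 0.02],
         [0.02, 0.20, 0.03],
         [0.02, 0.03, 0.30]]

sbm = nx.stochastic_block_model(sizes, probs, seed=0)
er = nx.erdos_renyi_graph(sum(sizes), p=nx.density(sbm), seed=0)

# The block structure shows up in summary statistics that a plain
# Erdos-Renyi graph of the same density cannot match.
print("SBM clustering:", nx.average_clustering(sbm))
print("ER  clustering:", nx.average_clustering(er))
print("SBM mean degree:", np.mean([d for _, d in sbm.degree()]))
```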