Similar Literature
20 similar documents found
1.
The Safe Drinking Water Act of 1974 regulates water quality in public drinking water supply systems but does not pertain to private domestic wells, often found in rural areas throughout the country. The recent decision to tighten the drinking water standard for arsenic from 50 parts per billion (ppb) to 10 ppb may therefore affect some households in rural communities, but may not directly reduce health risks for those on private wells. The article reports results from a survey conducted in a U.S. arsenic hot spot, the rural area of Churchill County, Nevada. This area has elevated levels of arsenic in groundwater. We find that a significant proportion of households on private wells are consuming drinking water with arsenic levels that pose a health risk. The decision to treat tap water for those on private wells in this area is modeled, and the predicted probability of treatment is used to help explain drinking water consumption. This probability represents behaviors relating to the household's perception of risk.
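A minimal sketch of the two-stage structure described above, on simulated data with hypothetical variable names (arsenic, income): a logit model for the decision to treat tap water, whose fitted probability then enters a second-stage regression for drinking water consumption. The coefficients and data are illustrative, not the survey's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Hypothetical household covariates: arsenic concentration (ppb) and income.
arsenic = rng.lognormal(mean=3.0, sigma=0.8, size=n)
income = rng.normal(50, 15, size=n)

# Stage 1: decision to treat tap water (logit), driven by arsenic and income.
z = -4.0 + 0.03 * arsenic + 0.03 * income
treat = rng.binomial(1, 1 / (1 + np.exp(-z)))

X1 = sm.add_constant(np.column_stack([arsenic, income]))
logit_fit = sm.Logit(treat, X1).fit(disp=0)
p_treat = logit_fit.predict(X1)          # predicted probability of treatment

# Stage 2: water consumption explained by the predicted treatment probability,
# which proxies the household's perception of risk.
consumption = 2.0 + 1.5 * p_treat + rng.normal(0, 0.5, size=n)
ols_fit = sm.OLS(consumption, sm.add_constant(p_treat)).fit()
print("stage 1:", logit_fit.params)
print("stage 2:", ols_fit.params)
```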

2.
Adaptive Spatial Sampling of Contaminated Soil
Cox, Louis Anthony. Risk Analysis, 1999, 19(6): 1059-1069

Suppose that a residential neighborhood may have been contaminated by a nearby abandoned hazardous waste site. The suspected contamination consists of elevated soil concentrations of chemicals that are also found in the absence of site-related contamination. How should a risk manager decide which residential properties to sample and which ones to clean? This paper introduces an adaptive spatial sampling approach which uses initial observations to guide subsequent search. Unlike some recent model-based spatial data analysis methods, it does not require any specific statistical model for the spatial distribution of hazards, but instead constructs an increasingly accurate nonparametric approximation to it as sampling proceeds. Possible cost-effective sampling and cleanup decision rules are described by decision parameters such as the number of randomly selected locations used to initialize the process, the number of highest-concentration locations searched around, the number of samples taken at each location, a stopping rule, and a remediation action threshold. These decision parameters are optimized by simulating the performance of each decision rule. The simulation is performed using the data collected so far to impute multiple probable values of unknown soil concentration distributions during each simulation run. This optimized adaptive spatial sampling technique has been applied to real data using error probabilities for wrongly cleaning or wrongly failing to clean each location (compared to the action that would be taken if perfect information were available) as evaluation criteria. It provides a practical approach for quantifying trade-offs between these different types of errors and expected cost. It also identifies strategies that are undominated with respect to all of these criteria.

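A toy sketch of the adaptive search loop described above, not the paper's optimized procedure: initialize at random locations, then repeatedly sample the unvisited neighbors of the highest observed concentrations until a fixed budget is exhausted, and flag locations above a remediation threshold. The grid, noise level, and decision parameters are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 20x20 grid of true soil concentrations with one hot spot.
side = 20
xx, yy = np.meshgrid(np.arange(side), np.arange(side))
true_conc = 5 + 60 * np.exp(-((xx - 14) ** 2 + (yy - 6) ** 2) / 8.0)

def measure(i, j):
    """Take one noisy sample at grid cell (i, j)."""
    return true_conc[i, j] + rng.normal(0, 2.0)

# Illustrative decision parameters: initial random locations, number of
# top locations searched around, total sampling budget, action threshold.
n_init, n_top, budget, threshold = 15, 3, 80, 30.0

observed = {}
while len(observed) < n_init:
    i, j = rng.integers(0, side), rng.integers(0, side)
    observed[(i, j)] = measure(i, j)

while len(observed) < budget:
    before = len(observed)
    # Sample the unvisited neighbors of the highest observed concentrations.
    top = sorted(observed, key=observed.get, reverse=True)[:n_top]
    for i, j in top:
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < side and 0 <= nj < side and (ni, nj) not in observed:
                observed[(ni, nj)] = measure(ni, nj)
    if len(observed) == before:      # neighborhood exhausted: random restart
        i, j = rng.integers(0, side), rng.integers(0, side)
        observed.setdefault((i, j), measure(i, j))

clean = [loc for loc, c in observed.items() if c > threshold]
print(f"sampled {len(observed)} cells, flagged {len(clean)} for cleanup")
```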

3.
Standard errors of the coefficients of a logistic regression (a binary response model) based on the asymptotic formula are compared to those obtained from the bootstrap through Monte Carlo simulations. The computer-intensive bootstrap method, a nonparametric alternative to the asymptotic estimate, overestimates the true value of the standard errors while the asymptotic formula underestimates it. However, for small samples the bootstrap estimates are substantially closer to the true value than their counterparts derived from the asymptotic formula. The methodology is discussed using two illustrative data sets. The first example deals with a logistic model explaining the log-odds of passing the ERA amendment by the 1982 deadline as a function of the percent of women legislators and the percent vote for Reagan. In the second example, the probability that an ingot is ready to roll is modelled using heating time and soaking time as explanatory variables. The results agree with those obtained from the simulations. The value of the study for better decision making through accurate statistical inference is discussed.
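A minimal sketch of the comparison on simulated data (rather than the ERA or ingot data): asymptotic standard errors from the information matrix versus a nonparametric bootstrap that resamples observations and refits the logistic regression.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, B = 60, 500                       # small sample, bootstrap replications

# Simulated binary-response data with two explanatory variables.
X = sm.add_constant(rng.normal(size=(n, 2)))
p = 1 / (1 + np.exp(-(X @ np.array([-0.5, 1.0, -1.0]))))
y = rng.binomial(1, p)

fit = sm.Logit(y, X).fit(disp=0)
print("asymptotic SEs:", fit.bse)

boot = []
for b in range(B):
    idx = rng.integers(0, n, n)      # resample rows with replacement
    try:
        boot.append(sm.Logit(y[idx], X[idx]).fit(disp=0).params)
    except Exception:                # skip rare non-converging resamples
        continue
print("bootstrap SEs: ", np.std(boot, axis=0, ddof=1))
```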

4.
The primary source of evidence that inorganic arsenic in drinking water is associated with increased mortality from cancer at internal sites (bladder, liver, lung, and other organs) is a large ecologic study conducted in regions of Southwest Taiwan endemic to Blackfoot disease. The dose-response patterns for lung, liver, and bladder cancers display a nonlinear dose-response relationship with arsenic exposure. The data do not appear suitable, however, for the more refined task of dose-response assessment, particularly for inference of risk at the low arsenic concentrations found in some U.S. water supplies. The problem lies in variable arsenic concentrations between the wells within a village, largely due to a mix of shallow wells and deep artesian wells, and in having only one well test for 24 (40%) of the 60 villages. The current analysis identifies 14 villages where the exposure appears most questionable, based on criteria described in the text. The exposure values were then changed for seven of the villages, from the median well test being used as a default to some other point in the village's range of well tests that would contribute to smoothing the appearance of a dose-response curve. The remaining seven villages, six of which had only one well test, were deleted as outliers. The resultant dose-response patterns showed no evidence of excess risk below arsenic concentrations of 0.1 mg/l. Of course, that outcome is dependent on manipulation of the data, as described. Inclusion of the seven deleted villages would make estimates of risk much higher at low doses. In those seven villages, the cancer mortality rates are significantly high for their exposure levels, suggesting that their exposure values may be too low or that other etiological factors need to be taken into account.

5.
Using the 481 trading days from January 4, 2012 to December 31, 2013 as the sample period, this paper studies three types of information on the listed companies in the SSE 180 index released over that period: corporate operating announcements, financial reports, and analysts' stock commentary based on those disclosures. Jumps are linked to the different information types along four dimensions: the choice of verification method, the selection of high-frequency data, the optimization of the information observation window, and the construction of a multivariate logistic regression model on panel data; on this basis, the relationship between stock price volatility and the different forms of information disclosure is analyzed. The results show that when the information set consists of firm-specific operating announcements, financial reports, and analyst recommendations, operating announcements are the most influential disclosure channel, whereas analyst recommendations are not the most important information driving abnormal price movements. Moreover, only 20% of jumps are associated with such disclosures; even when the explanatory variables are extended to cover "systematic events" representing macro-level information and "industry events" representing industry and sector information, only 40% of price jumps are related to information disclosure. These findings not only indicate which type of information may carry greater investment value, but also suggest that further investigation of other triggering events behind abnormal price movements may be of even greater importance.
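A minimal sketch of the final modeling step, on simulated data with hypothetical variable names: a logistic regression of a daily jump indicator on dummies for the three disclosure types. The paper's panel structure and event-window optimization are omitted.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_firms, n_days = 50, 481
N = n_firms * n_days

# Hypothetical firm-day event dummies: operating announcement,
# financial report, analyst recommendation.
announce = rng.binomial(1, 0.05, N)
report = rng.binomial(1, 0.02, N)
analyst = rng.binomial(1, 0.10, N)

# Simulated jump indicator: announcements most influential, analysts least.
z = -2.5 + 1.2 * announce + 0.8 * report + 0.2 * analyst
jump = rng.binomial(1, 1 / (1 + np.exp(-z)))

X = sm.add_constant(np.column_stack([announce, report, analyst]))
fit = sm.Logit(jump, X).fit(disp=0)
print(fit.summary(xname=["const", "announce", "report", "analyst"]))
```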

6.
This paper applies revealed preference theory to the nonparametric statistical analysis of consumer demand. Knowledge of expansion paths is shown to improve the power of nonparametric tests of revealed preference. The tightest bounds on indifference surfaces and welfare measures are derived using an algorithm for which revealed preference conditions are shown to guarantee convergence. Nonparametric Engel curves are used to estimate expansion paths and provide a stochastic structure within which to examine the consistency of household level data and revealed preference theory. An application is made to a long time series of repeated cross-sections from the Family Expenditure Survey for Britain. The consistency of these data with revealed preference theory is examined. For periods of consistency with revealed preference, tight bounds are placed on true cost of living indices.
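A compact sketch of the consistency check that underlies this kind of analysis: given observed price and quantity bundles, test the Generalized Axiom of Revealed Preference (GARP) by forming the direct revealed-preference relation and taking its transitive closure with Warshall's algorithm. The three-observation data set is invented for illustration.

```python
import numpy as np

# Observed price and quantity bundles: rows are observations, columns goods.
P = np.array([[1.0, 2.0], [2.0, 1.0], [1.5, 1.5]])
Q = np.array([[3.0, 1.0], [1.0, 3.0], [2.0, 2.0]])

expend = np.einsum("ij,ij->i", P, Q)   # cost of own bundle at own prices
cross = P @ Q.T                        # cross[i, j] = cost of bundle j at prices i

# Direct (weak) revealed preference: i chosen when j was affordable.
R = cross <= expend[:, None] + 1e-12

# Transitive closure via Warshall's algorithm.
T = R.copy()
n = len(expend)
for k in range(n):
    T |= T[:, [k]] & T[[k], :]

# GARP violation: i revealed preferred to j (closure) while j is strictly
# directly revealed preferred to i (j's bundle strictly cheaper at j's prices).
strict = cross < expend[:, None] - 1e-12
violations = T & strict.T
print("GARP satisfied:", not violations.any())
```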

7.
Recently there has been a great deal of interest in studying monetary policy under model uncertainty. We point out that different assumptions about the uncertainty may result in drastically different “robust” policy recommendations. Therefore, we develop new methods to analyze uncertainty about the parameters of a model, the lag specification, the serial correlation of shocks, and the effects of real‐time data in one coherent structure. We consider both parametric and nonparametric specifications of this structure and use them to estimate the uncertainty in a small model of the U.S. economy. We then use our estimates to compute robust Bayesian and minimax monetary policy rules, which are designed to perform well in the face of uncertainty. Our results suggest that the aggressiveness recently found in robust policy rules is likely to be caused by overemphasizing uncertainty about economic dynamics at low frequencies. (JEL: E52, C32, D81)
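A stylized sketch of the minimax idea, not the paper's estimated model: in a one-period economy where inflation responds to the policy instrument with uncertain parameters, pick the feedback coefficient that minimizes worst-case expected loss over a parameter set, and compare it with the average-risk (Bayesian) rule under a uniform prior.

```python
import numpy as np

# Stylized one-period model: pi' = a * pi - b * i + e, with policy i = f * pi.
# With Var(pi) = 1 and Var(e) = s2, expected loss E[pi'^2 + lam * i^2] is:
#   loss(f; a, b) = (a - b * f)^2 + lam * f^2 + s2
lam, s2 = 0.5, 0.1

def loss(f, a, b):
    return (a - b * f) ** 2 + lam * f ** 2 + s2

f_grid = np.linspace(0.0, 3.0, 301)
a_grid = np.linspace(0.7, 0.95, 26)    # uncertain persistence
b_grid = np.linspace(0.5, 1.5, 26)     # uncertain policy effectiveness

# Worst-case loss for each candidate rule, then the minimax rule.
worst = np.array([max(loss(f, a, b) for a in a_grid for b in b_grid)
                  for f in f_grid])
f_minimax = f_grid[worst.argmin()]

# Bayesian (average-risk) rule under a uniform prior over the same set.
avg = np.array([np.mean([loss(f, a, b) for a in a_grid for b in b_grid])
                for f in f_grid])
f_bayes = f_grid[avg.argmin()]
print(f"minimax rule f = {f_minimax:.2f}, Bayes rule f = {f_bayes:.2f}")
```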

8.
There are often several data sets that may be used in developing a quantitative risk estimate for a carcinogen. These estimates are usually based, however, on the dose-response data for tumor incidences from a single sex/strain/species of animal. When appropriate, the use of more data should result in a higher level of confidence in the risk estimate. The decision to use more than one data set (e.g., representing different animal sexes, strains, species, or tumor sites) can be made following biological and statistical analyses of the compatibility of these data sets. Biological analysis involves consideration of factors such as the relevance of the animal models, study design and execution, dose selection and route of administration, the mechanism of action of the agent, its pharmacokinetics, any species- and/or sex-specific effects, and tumor site specificity. If the biological analysis does not prohibit combining data sets, statistical compatibility of the data sets is then investigated. A generalized likelihood ratio test is proposed for determining the compatibility of different data sets with respect to a common dose-response model, such as the linearized multistage model. The biological and statistical factors influencing the decision to combine data sets are described, followed by a case study of bromodichloromethane.
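A minimal sketch of the statistical step, using a simple one-hit dose-response model P(d) = 1 − exp(−(q0 + q1·d)) rather than the linearized multistage model, and invented tumor-incidence data from two bioassays: fit each data set separately and under a common model, then compare via a likelihood ratio test.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

# Two simulated bioassays run at the same doses: animals and tumor counts.
dose = np.array([0.0, 10.0, 50.0, 100.0])
n = np.array([50, 50, 50, 50])
tumors_a = np.array([2, 5, 14, 25])
tumors_b = np.array([1, 6, 12, 27])

def negll(q, tumors):
    """Binomial negative log-likelihood under the one-hit model."""
    p = 1 - np.exp(-(q[0] + q[1] * dose))
    p = np.clip(p, 1e-10, 1 - 1e-10)
    return -np.sum(tumors * np.log(p) + (n - tumors) * np.log(1 - p))

def fitted_negll(fun):
    return minimize(fun, x0=[0.05, 0.005],
                    bounds=[(1e-8, None), (1e-8, None)]).fun

ll_separate = -(fitted_negll(lambda q: negll(q, tumors_a))
                + fitted_negll(lambda q: negll(q, tumors_b)))
ll_pooled = -fitted_negll(lambda q: negll(q, tumors_a) + negll(q, tumors_b))

# Separate fits add 2 free parameters relative to the common-model fit.
lrt = 2 * (ll_separate - ll_pooled)
p_value = chi2.sf(lrt, df=2)
print(f"LRT = {lrt:.2f}, p = {p_value:.3f} (large p: data sets look compatible)")
```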

9.
We show that it is possible to adapt to nonparametric disturbance autocorrelation in time series regression in the presence of long memory in both regressors and disturbances by using a smoothed nonparametric spectrum estimate in frequency-domain generalized least squares. When the collective memory in regressors and disturbances is sufficiently strong, ordinary least squares is not only asymptotically inefficient but asymptotically non-normal and has a slow rate of convergence, whereas generalized least squares is asymptotically normal and Gauss–Markov efficient with standard convergence rate. Despite the anomalous behavior of nonparametric spectrum estimates near a spectral pole, we are able to justify a standard construction of frequency-domain generalized least squares, earlier considered in the case of short memory disturbances. A small Monte Carlo study of finite sample performance is included.
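A rough numerical sketch of the frequency-domain GLS construction (a textbook version with short-memory AR(1) errors, not the paper's exact estimator or its long-memory setting): weight the discrete Fourier transforms of y and X by the inverse of a smoothed periodogram of first-stage OLS residuals.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 512

# Regression with strongly autocorrelated (AR(1)) disturbances.
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.9 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_ols

# Smoothed periodogram of the OLS residuals (simple moving-average smoother).
I = np.abs(np.fft.fft(resid)) ** 2 / n
kernel = np.ones(9) / 9
f_hat = np.convolve(np.r_[I[-4:], I, I[:4]], kernel, mode="valid")

# Frequency-domain GLS: weight DFTs by 1/f_hat, solve the normal equations.
wX = np.fft.fft(X, axis=0)
wy = np.fft.fft(y)
W = 1.0 / f_hat
A = (wX.conj().T * W) @ wX
b = (wX.conj().T * W) @ wy
beta_gls = np.real(np.linalg.solve(A, b))
print("OLS:", beta_ols, " FD-GLS:", beta_gls)
```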

10.
This paper develops a new estimation procedure for characteristic‐based factor models of stock returns. We treat the factor model as a weighted additive nonparametric regression model, with the factor returns serving as time‐varying weights and a set of univariate nonparametric functions relating security characteristic to the associated factor betas. We use a time‐series and cross‐sectional pooled weighted additive nonparametric regression methodology to simultaneously estimate the factor returns and characteristic‐beta functions. By avoiding the curse of dimensionality, our methodology allows for a larger number of factors than existing semiparametric methods. We apply the technique to the three‐factor Fama–French model, Carhart's four‐factor extension of it that adds a momentum factor, and a five‐factor extension that adds an own‐volatility factor. We find that momentum and own‐volatility factors are at least as important, if not more important, than size and value in explaining equity return comovements. We test the multifactor beta pricing theory against a general alternative using a new nonparametric test.

11.
We introduce and derive the asymptotic behavior of a new measure constructed from high‐frequency data which we call the realized Laplace transform of volatility. The statistic provides a nonparametric estimate for the empirical Laplace transform function of the latent stochastic volatility process over a given interval of time and is robust to the presence of jumps in the price process. With a long span of data, that is, under joint long‐span and infill asymptotics, the statistic can be used to construct a nonparametric estimate of the volatility Laplace transform as well as of the integrated joint Laplace transform of volatility over different points of time. We derive feasible functional limit theorems for our statistic both under fixed‐span and infill asymptotics as well as under joint long‐span and infill asymptotics which allow us to quantify the precision in estimation under both sampling schemes.
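A small sketch of the statistic as we read it, on a simulated stochastic-volatility path: V_n(u) = Δ_n Σ_i cos(√(2u) Δ_i X / √Δ_n), which estimates ∫ exp(−u σ_t²) dt over the interval and is insensitive to the added jumps (this rests on E[cos(√(2u) σZ)] = exp(−u σ²) for standard normal Z). The price process and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 23400                       # e.g., one day of 1-second returns
dt = 1.0 / n                    # time in days, so T = 1

# Simulated slowly varying volatility path plus rare jumps.
t = np.linspace(0, 1, n)
sigma = 0.2 * (1 + 0.5 * np.sin(2 * np.pi * t))
dx = sigma * np.sqrt(dt) * rng.normal(size=n)
dx += rng.binomial(1, 5.0 / n, n) * rng.normal(0, 0.05, n)   # jumps

def realized_laplace(dx, dt, u):
    """V_n(u) = dt * sum_i cos(sqrt(2u) * dx_i / sqrt(dt))."""
    return dt * np.sum(np.cos(np.sqrt(2 * u) * dx / np.sqrt(dt)))

for u in (1.0, 2.0, 5.0):
    est = realized_laplace(dx, dt, u)
    target = np.mean(np.exp(-u * sigma ** 2))   # integral of e^{-u sigma_t^2}
    print(f"u={u}: estimate {est:.4f}, target {target:.4f}")
```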

12.
We present new identification results for nonparametric models of differentiated products markets, using only market level observables. We specify a nonparametric random utility discrete choice model of demand allowing rich preference heterogeneity, product/market unobservables, and endogenous prices. Our supply model posits nonparametric cost functions, allows latent cost shocks, and nests a range of standard oligopoly models. We consider identification of demand, identification of changes in aggregate consumer welfare, identification of marginal costs, identification of firms' marginal cost functions, and discrimination between alternative models of firm conduct. We explore two complementary approaches. The first demonstrates identification under the same nonparametric instrumental variables conditions required for identification of regression models. The second treats demand and supply in a system of nonparametric simultaneous equations, leading to constructive proofs exploiting exogenous variation in demand shifters and cost shifters. We also derive testable restrictions that provide the first general formalization of Bresnahan's (1982) intuition for empirically distinguishing between alternative models of oligopoly competition. From a practical perspective, our results clarify the types of instrumental variables needed with market level data, including tradeoffs between functional form and exclusion restrictions.

13.
Agiwal, Swati. Risk Analysis, 2012, 32(8): 1309-1325
In the aftermath of 9/11, concern over security increased dramatically in both the public and the private sector. Yet, no clear algorithm exists to inform firms on the amount and the timing of security investments to mitigate the impact of catastrophic risks. The goal of this article is to devise an optimum investment strategy for firms to mitigate exposure to catastrophic risks, focusing on how much to invest and when to invest. The latter question addresses the issue of whether postponing a risk-mitigating decision is an optimal strategy. Accordingly, we develop and estimate both a one‐period model and a multiperiod model within the framework of extreme value theory (EVT). We calibrate these models using probability measures for catastrophic terrorism risks associated with attacks on the food sector. We then compare our findings with the purchase of catastrophic risk insurance.
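A stylized one-period sketch, not the paper's calibrated model: losses follow a generalized Pareto distribution fitted with scipy, security investment scales expected losses down by an assumed effectiveness function, and the firm picks the spending level minimizing investment plus expected residual loss.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(6)

# Simulated historical catastrophic losses (heavy-tailed), in millions.
losses = genpareto.rvs(c=0.3, scale=10.0, size=200, random_state=rng)
c_hat, loc_hat, scale_hat = genpareto.fit(losses, floc=0.0)
expected_loss = genpareto.mean(c_hat, loc=loc_hat, scale=scale_hat)

p_event = 0.05                       # assumed annual catastrophe probability

def total_cost(x):
    """Investment x plus expected residual loss; the mitigation
    effectiveness exp(-x / 5) is an illustrative assumption."""
    return x + p_event * expected_loss * np.exp(-x / 5.0)

x_grid = np.linspace(0, 20, 401)
costs = np.array([total_cost(x) for x in x_grid])
x_star = x_grid[costs.argmin()]
print(f"optimal one-period security investment: {x_star:.2f} million")
```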

14.
We estimate the country-level risk of extreme wildfires defined by burned area (BA) for Mediterranean Europe and carry out a cross-country comparison. To this end, we avail of the European Forest Fire Information System (EFFIS) geospatial data from 2006 to 2019 to perform an extreme value analysis. More specifically, we apply a point process characterization of wildfire extremes using maximum likelihood estimation. By modeling covariates, we also evaluate potential trends and correlations with commonly known factors that drive or affect wildfire occurrence, such as the Fire Weather Index as a proxy for meteorological conditions, population density, land cover type, and seasonality. We find that the highest risk of extreme wildfires is in Portugal (PT), followed by Greece (GR), Spain (ES), and Italy (IT) with a 10-year BA return level of 50,338 ha, 33,242 ha, 25,165 ha, and 8,966 ha, respectively. Coupling our results with existing estimates of the monetary impact of large wildfires suggests expected losses of 162–439 million € (PT), 81–219 million € (ES), 41–290 million € (GR), and 18–78 million € (IT) for such 10-year return period events.

SUMMARY

We model the risk of extreme wildfires for Italy, Greece, Portugal, and Spain in the form of burned area return levels, compare them, and estimate expected losses.
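A simplified sketch of the return-level calculation using annual-maximum burned areas and a GEV fit (the paper uses a point-process characterization, which is closely related): the 10-year return level is the level exceeded with probability 1/10 in any given year. The data and the per-hectare loss range are invented for illustration.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)

# Hypothetical annual maxima of burned area (ha) for one country, 2006-2019.
annual_max_ba = rng.gumbel(loc=8000, scale=4000, size=14)

shape, loc, scale = genextreme.fit(annual_max_ba)

# 10-year return level: exceeded with probability 1/10 in any given year.
rl_10 = genextreme.ppf(1 - 1 / 10, shape, loc=loc, scale=scale)
print(f"10-year burned-area return level: {rl_10:,.0f} ha")

# Coupling with a per-hectare loss range gives an expected-loss interval.
loss_low, loss_high = 3200, 8700      # illustrative EUR/ha, not from the paper
print(f"expected loss for such an event: {rl_10 * loss_low / 1e6:.0f}-"
      f"{rl_10 * loss_high / 1e6:.0f} million EUR")
```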

15.
This paper exploits dynamic features of insurance contracts in the empirical analysis of moral hazard. We first show that experience rating implies negative occurrence dependence under moral hazard: individual claim intensities decrease with the number of past claims. We then show that dynamic insurance data allow us to distinguish this moral‐hazard effect from dynamic selection on unobservables. We develop nonparametric tests and estimate a flexible parametric model. We find no evidence of moral hazard in French car insurance. Our analysis contributes to a recent literature based on static data that has problems distinguishing between moral hazard and selection and dealing with dynamic features of actual insurance contracts. Methodologically, this paper builds on and extends the literature on state dependence and heterogeneity in event‐history data. (JEL: D82, G22, C41, C14)

16.
This paper develops asymptotic optimality theory for statistical treatment rules in smooth parametric and semiparametric models. Manski (2000, 2002, 2004) and Dehejia (2005) have argued that the problem of choosing treatments to maximize social welfare is distinct from the point estimation and hypothesis testing problems usually considered in the treatment effects literature, and advocate formal analysis of decision procedures that map empirical data into treatment choices. We develop large‐sample approximations to statistical treatment assignment problems using the limits of experiments framework. We then consider some different loss functions and derive treatment assignment rules that are asymptotically optimal under average and minmax risk criteria.

17.
The desirability of a merger/acquisition alternative depends in part on the perceptions of the decision maker. What sources of information are “useful” to the decision maker? Does the set of useful information remain constant for all decision makers; if not, do individuals using similar information sets have similar information processing characteristics? Do these sets vary as feedback is obtained during the decision process? To answer these questions, graduate students participated in a modified Delphi experiment, and the resulting data were analyzed by the two-way aligned-ranks nonparametric test. These test results affirm that in a merger/acquisition scenario, decision makers with different cognitive styles prefer different sets of information and these sets vary dynamically as feedback is incorporated in the decision-making process. Furthermore, information that contains worker and community welfare considerations is identified as “useful” five times more frequently by decision makers with a “feeling” cognitive style than those with a “thinking” style.

18.
Applying a hockey stick parametric dose-response model to data on late or retarded development in Iraqi children exposed in utero to methylmercury, with mercury (Hg) exposure characterized by the peak Hg concentration in mothers' hair during pregnancy, Cox et al. calculated the "best statistical estimate" of the threshold for health effects as 10 ppm Hg in hair, with a 95% range of uncertainty of between 0 and 13.6 ppm.(1) A new application of the hockey stick model to the Iraqi data shows, however, that the statistical upper limit of the threshold based on the hockey stick model could be as high as 255 ppm. Furthermore, the maximum likelihood estimate of the threshold using a different parametric model is virtually zero. These and other analyses demonstrate that threshold estimates based on parametric models exhibit high statistical variability and model dependency, and are highly sensitive to the precise definition of an abnormal response. Consequently, they are not a reliable basis for setting a reference dose (RfD) for methylmercury. Benchmark analyses and statistical analyses useful for deriving NOAELs are also presented. We believe these latter analyses, particularly the benchmark analyses, generally form a sounder basis for determining RfDs than the type of hockey stick analysis presented by Cox et al. However, the acute nature of the exposures, as well as other limitations in the Iraqi data, suggest that other data may be more appropriate for determining acceptable human exposures to methylmercury.
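A small sketch of a hockey-stick (threshold) dose-response fit on simulated data: the response is flat at background below the threshold and rises linearly above it, and the threshold is estimated by profiling the residual sum of squares over candidate kink points. The flatness of that profile near its minimum illustrates the statistical variability discussed above.

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulated exposure (ppm Hg in hair) and response with a true threshold of 10.
dose = rng.uniform(0, 100, 150)
resp = 0.05 + 0.004 * np.clip(dose - 10.0, 0, None) + rng.normal(0, 0.05, 150)

def rss_at(tau):
    """Least-squares fit of the hockey stick with the kink fixed at tau."""
    X = np.column_stack([np.ones_like(dose), np.clip(dose - tau, 0, None)])
    beta = np.linalg.lstsq(X, resp, rcond=None)[0]
    return np.sum((resp - X @ beta) ** 2)

taus = np.linspace(0, 50, 501)
rss = np.array([rss_at(t) for t in taus])
tau_hat = taus[rss.argmin()]
print(f"estimated threshold: {tau_hat:.1f} ppm")

# A flat RSS profile near the minimum signals a weakly identified threshold.
near = rss < rss.min() * 1.02
print(f"thresholds within 2% of the best fit: {taus[near].min():.1f}"
      f"-{taus[near].max():.1f} ppm")
```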

19.
Research suggests that two methods of introducing dissent, the dialectic inquiry (DI) and devil's advocate (DA) methods, show promise for increasing the cognitive complexity of decision makers. We investigated the joint effects of formalized dissent and group cognitive complexity by manipulating the formalized dissent method (DI or DA) used by 25 interacting groups engaged in a complex, ill-structured planning task. Participants were classified as either high or low cognitive complexity and assigned to stratified groups with members of homogeneous complexity. Results indicated that: (1) DA groups produced higher quality assumptions but took longer to generate plans than did DI groups, (2) high complexity groups generated more recommendations relative to low complexity groups, and (3) DA groups with low complexity members produced lower quality recommendations and participated less equally in decision making than did the other groups. We conclude by discussing the implications of the results for formalized dissent, cognitive complexity, and assessing managerial performance.

20.
This paper proposes a new nested algorithm (NPL) for the estimation of a class of discrete Markov decision models and studies its statistical and computational properties. Our method is based on a representation of the solution of the dynamic programming problem in the space of conditional choice probabilities. When the NPL algorithm is initialized with consistent nonparametric estimates of conditional choice probabilities, successive iterations return a sequence of estimators of the structural parameters which we call K-stage policy iteration estimators. We show that the sequence includes as extreme cases a Hotz–Miller estimator (for K=1) and Rust's nested fixed point estimator (in the limit when K→∞). Furthermore, the asymptotic distribution of all the estimators in the sequence is the same and equal to that of the maximum likelihood estimator. We illustrate the performance of our method with several examples based on Rust's bus replacement model. Monte Carlo experiments reveal a trade-off between finite sample precision and computational cost in the sequence of policy iteration estimators.
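A compact toy version of the K-stage idea on a Rust-style machine replacement model (our simplification, with illustrative parameters): starting from frequency-estimated conditional choice probabilities, each stage maximizes a pseudo-likelihood built from the CCP representation of the value function, then updates the choice probabilities by one policy-iteration step.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
S, beta, gamma = 5, 0.95, 0.5772156649   # states, discount, Euler constant

def flow_u(theta):
    """Flow utilities u[a, s]: a=0 keep (cost rises with state), a=1 replace."""
    rc, c = theta
    return np.vstack([-c * np.arange(S), -rc * np.ones(S)])

# Deterministic transitions: keep -> s+1 (capped), replace -> 0.
nxt = np.vstack([np.minimum(np.arange(S) + 1, S - 1), np.zeros(S, dtype=int)])

def psi(P, theta):
    """One policy-iteration step: value of following P, then best-response CCPs."""
    u = flow_u(theta)
    # V solves V = sum_a P[a] * (u[a] + gamma - ln P[a] + beta * V[nxt[a]]).
    b = np.sum(P * (u + gamma - np.log(P)), axis=0)
    M = np.zeros((S, S))
    for a in range(2):
        for s in range(S):
            M[s, nxt[a, s]] += P[a, s]
    V = np.linalg.solve(np.eye(S) - beta * M, b)
    v = u + beta * V[nxt]                      # choice-specific values
    ev = np.exp(v - v.max(axis=0))
    return ev / ev.sum(axis=0)                 # logit best response

# Simulate data from the true model's fixed point P = psi(P).
theta_true = (5.0, 0.5)
P = np.full((2, S), 0.5)
for _ in range(200):
    P = psi(P, theta_true)

T = 5000
states = np.zeros(T, dtype=int)
actions = np.zeros(T, dtype=int)
for t in range(T):
    actions[t] = rng.random() < P[1, states[t]]
    if t + 1 < T:
        states[t + 1] = nxt[actions[t], states[t]]

# Stage 0: frequency estimates of the CCPs (clipped away from 0 and 1).
P_hat = np.full((2, S), 0.5)
for s in range(S):
    m = states == s
    if m.any():
        P_hat[1, s] = np.clip(actions[m].mean(), 0.01, 0.99)
P_hat[0] = 1 - P_hat[1]

def neg_pseudo_ll(theta, P_in):
    Q = psi(P_in, theta)
    return -np.sum(np.log(Q[actions, states]))

theta_k = np.array([1.0, 1.0])                 # crude starting values
for k in range(3):                             # K = 3 policy-iteration stages
    theta_k = minimize(neg_pseudo_ll, theta_k, args=(P_hat,),
                       method="Nelder-Mead").x
    P_hat = psi(P_hat, theta_k)                # update CCPs by one iteration
    print(f"stage {k + 1}: RC = {theta_k[0]:.2f}, c = {theta_k[1]:.2f}")
```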
