Similar Literature
A total of 20 similar articles were found.
1.
Bayesian methods are presented for updating the uncertainty in the predictions of an integrated Environmental Health Risk Assessment (EHRA) model. The methods allow the estimation of posterior uncertainty distributions based on the observation of different model outputs along the chain of the linked assessment framework. Analytical equations are derived for the case of the multiplicative lognormal risk model where the sequential log outputs (log ambient concentration, log applied dose, log delivered dose, and log risk) are each normally distributed. Given observations of a log output made with a normally distributed measurement error, the posterior distributions of the log outputs remain normal, but with modified means and variances, and induced correlations between successive log outputs and log inputs. The analytical equations for forward and backward propagation of the updates are generally applicable to sums of normally distributed variables. The Bayesian Monte-Carlo (BMC) procedure is presented to provide an approximate, but more broadly applicable method for numerically updating uncertainty with concurrent backward and forward propagation. Illustrative examples, presented for the multiplicative lognormal model, demonstrate agreement between the analytical and BMC methods, and show how uncertainty updates can propagate through a linked EHRA. The Bayesian updating methods facilitate the pooling of knowledge encoded in predictive models with that transmitted by research outcomes (e.g., field measurements), and thereby support the practice of iterative risk assessment and value of information appraisals.
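Because every log output in the multiplicative lognormal model is a sum of normal log inputs, the forward/backward update described above reduces to Gaussian conditioning. The sketch below is a minimal numerical illustration of that idea, not the authors' code; the chain structure, prior moments, observed value, and measurement-error standard deviation are all invented for illustration.

```python
# Minimal sketch: Bayesian updating in a multiplicative lognormal risk chain.
# Log outputs are cumulative sums of independent normal log inputs, so
# conditioning on a noisy log measurement is a multivariate-normal update.
import numpy as np

# Hypothetical prior means/SDs of independent log inputs (natural-log scale):
# emission-to-concentration factor, intake factor, uptake factor, potency.
mu = np.array([1.0, -2.0, -0.5, -6.0])
sd = np.array([0.8, 0.5, 0.3, 1.0])

# Log outputs (concentration, applied dose, delivered dose, risk) are
# cumulative sums of the log inputs: output = L @ input, L lower triangular.
L = np.tril(np.ones((4, 4)))
m_out = L @ mu                       # prior means of the log outputs
S_out = L @ np.diag(sd**2) @ L.T     # prior covariance of the log outputs

# Observe the 2nd log output (log applied dose) with measurement error.
obs_idx, y_obs, tau = 1, -0.6, 0.4   # tau = assumed measurement-error SD

# Gaussian conditioning: posterior mean/covariance of all log outputs.
s2 = S_out[obs_idx, obs_idx] + tau**2
k = S_out[:, obs_idx] / s2                      # gain vector
m_post = m_out + k * (y_obs - m_out[obs_idx])   # forward and backward update
S_post = S_out - np.outer(k, S_out[obs_idx, :])

print("prior mean of log risk:    ", m_out[-1])
print("posterior mean of log risk:", m_post[-1])
print("posterior SD of log risk:  ", np.sqrt(S_post[-1, -1]))
```

Replacing the analytic conditioning step with importance-weighted Monte Carlo samples gives the BMC variant described in the abstract.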

2.
Although prior research has addressed the influence of production activity and research and development (R&D) on productivity, it is not clear whether production and R&D affect the market value of a firm. This study proposes and verifies an R&D value chain framework to explore the relationship among productivity, R&D, and firm market values, as measured by Tobin's q. By doing so, we attempt to link new theoretical insights and empirical evidence on the effects of R&D efforts and basic production activities to the market valuations of high-technology firms. A value chain data envelopment analysis approach is proposed to estimate the parallel-serial processes of basic operations and R&D efforts. This approach can be used to simultaneously estimate the profitability efficiency and marketability efficiency of high-technology firms. This area has rarely been studied, but it is particularly important for high-technology R&D policies and for further industrial development. Using the R&D value chain perspectives of model innovations and extensions proposed in several previous studies, we examined the appropriate levels of intermediate outputs. Production efficiency and R&D were combined to estimate the appropriate levels of intermediate outputs for high-technology firms. Based on the intermediate output analyses, we developed an R&D efforts decision matrix to explore and identify operational and R&D efficiency for high-technology firms. Our sample firms are displayed on a four-quadrant action grid that provides visual information on current short-term operational efficiency and supports decision making on long-term R&D strategic positions. The empirical findings from the R&D value chain model can provide information for policymakers and managers and suggest the adoption of policies that place more emphasis on profitability and marketability strategies.
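For readers unfamiliar with the underlying machinery, the sketch below computes a plain single-stage, input-oriented CCR DEA efficiency score as a linear program. It is a simplified stand-in for the paper's two-stage (profitability/marketability) value-chain model, and the input-output data are made up.

```python
# Minimal sketch of input-oriented CCR DEA efficiency via linear programming.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0], [5.0, 4.0]])  # inputs (n DMUs x m inputs)
Y = np.array([[1.0], [1.0], [1.5], [2.0]])                      # outputs (n DMUs x s outputs)
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(o):
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(-1, 1), X.T])
    b_in = np.zeros(m)
    # Output constraints: sum_j lambda_j * y_rj >= y_ro  (written as <=)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.fun

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```

The value-chain extension in the abstract chains two such stages, with the first stage's outputs serving as the second stage's inputs.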

3.
This paper considers the problem of choosing the number of bootstrap repetitions B for bootstrap standard errors, confidence intervals, confidence regions, hypothesis tests, p-values, and bias correction. For each of these problems, the paper provides a three-step method for choosing B to achieve a desired level of accuracy. Accuracy is measured by the percentage deviation of the bootstrap standard error estimate, confidence interval length, test's critical value, test's p-value, or bias-corrected estimate based on B bootstrap simulations from the corresponding ideal bootstrap quantities for which B = ∞. The results apply quite generally to parametric, semiparametric, and nonparametric models with independent and dependent data. The results apply to the standard nonparametric iid bootstrap, moving block bootstraps for time series data, parametric and semiparametric bootstraps, and bootstraps for regression models based on bootstrapping residuals. Monte Carlo simulations show that the proposed methods work very well.
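The sketch below is not the paper's three-step algorithm; it merely illustrates the quantity the method controls, namely the percentage deviation of a bootstrap standard-error estimate based on B repetitions from a large-B proxy for the ideal (B = ∞) value. Sample size, distribution, and the B grid are arbitrary.

```python
# Illustration: how the percentage deviation of a bootstrap SE from a
# large-B reference shrinks as the number of repetitions B grows.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_t(df=5, size=100)       # observed sample; statistic = sample mean

def boot_se(data, B):
    # Nonparametric bootstrap SE of the sample mean based on B resamples.
    idx = rng.integers(0, len(data), size=(B, len(data)))
    return data[idx].mean(axis=1).std(ddof=1)

se_ideal = boot_se(x, 100_000)           # proxy for the ideal (B = infinity) bootstrap SE
for B in (50, 200, 1000, 4000):
    devs = [abs(boot_se(x, B) - se_ideal) / se_ideal * 100 for _ in range(100)]
    print(f"B = {B:4d}: average % deviation from the ideal bootstrap SE = {np.mean(devs):.2f}%")
```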

4.
谢建辉  李勇军  梁樑  吴记 《管理科学》2018,21(11):50-60
Traditional DEA models assume that the inputs and outputs of the observed sample are deterministic, which limits DEA in practical applications. This paper proposes a multi-input, multi-output stochastic nonparametric envelopment of data method based on quasi-likelihood estimation (PLE-StoNED) that relaxes this assumption and can estimate the production frontier in a stochastic environment. We prove that, under the production-possibility-set assumptions, the frontier can be represented by a function subject to concavity and monotonicity restrictions. Compared with earlier StoNED methods, the proposed method can estimate the frontier of multi-input, multi-output decision making units (DMUs) in a stochastic environment. Monte Carlo experiments verify the effectiveness of the multi-input, multi-output PLE-StoNED method, which can correct the bias produced by DEA and other traditional methods. Finally, an empirical study applies the newly proposed method to estimate the production frontier and efficiency of commercial banks in mainland China. The proposed method remedies DEA's lack of a statistical foundation and provides decision support for evaluating the productivity and efficiency of multi-input, multi-output DMUs in stochastic environments.

5.
The dose to human and nonhuman individuals inflicted by anthropogenic radiation is an important issue in international and domestic policy. The current paradigm for nonhuman populations asserts that if the dose to the maximally exposed individuals in a population is below a certain criterion (e.g., <10 mGy d⁻¹) then the population is adequately protected. Currently, there is no consensus in the regulatory community as to the best statistical approach. Statistics currently considered include the maximum likelihood estimator for the 95th percentile of the sample mean and the sample maximum. Recently, the investigators have proposed the use of the maximum likelihood estimate of a very high quantile as an estimate of dose to the maximally exposed individual. In this study, we compare all of the above-mentioned statistics to an estimate based on extreme value theory. To determine and compare the bias and variance of these statistics, we use Monte Carlo simulation techniques, in a procedure similar to a parametric bootstrap. Our results show that a statistic based on extreme value theory has the least bias of those considered here, but requires reliable estimates of the population size. We recommend establishing the criterion based on what would be considered acceptable if only a small percentage of the population exceeded the limit, and hence recommend using the maximum likelihood estimator of a high quantile in the case that reliable estimates of the population size are not available.
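As a rough illustration of the kind of Monte Carlo bias comparison described (not the paper's estimators or dose model), the sketch below compares the sample maximum with the lognormal MLE of the (1 - 1/N) quantile as estimates of the dose to the maximally exposed individual in a population of assumed size N. All distributional parameters are invented.

```python
# Monte Carlo bias comparison for estimators of the maximally exposed dose.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N, n = 500, 60                       # assumed population size and sample size
mu, sigma = np.log(1.0), 0.8         # hypothetical lognormal dose distribution

# Target: expected maximum dose over a population of N individuals (by simulation).
true_max = np.exp(mu + sigma * rng.standard_normal((20_000, N))).max(axis=1).mean()

bias_max, bias_q = [], []
for _ in range(2000):
    sample = np.exp(mu + sigma * rng.standard_normal(n))
    logs = np.log(sample)
    # MLE of the (1 - 1/N) quantile under a fitted lognormal model.
    q_mle = np.exp(logs.mean() + logs.std(ddof=0) * stats.norm.ppf(1 - 1.0 / N))
    bias_max.append(sample.max() - true_max)
    bias_q.append(q_mle - true_max)

print(f"sample maximum:          mean bias = {np.mean(bias_max):+.3f}")
print(f"MLE of (1 - 1/N) quantile: mean bias = {np.mean(bias_q):+.3f}")
```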

6.
A decision making model is often very sensitive to the subjective probability estimates that are used. To reduce this sensitivity, it is necessary to utilize elicitation procedures that yield more valid estimates and increase the decision maker's confidence in the estimates. This paper discusses an experiment in which the subjects were asked to estimate the areas of various squares. Two procedures were evaluated. In one procedure, the subjects were told that estimates were distributed typically around the true value in accordance with the normal probability distribution. In the other, the subjects were asked to estimate the length as a component that could be used to calculate the area. The results indicate that the normal error model is a reasonable representation of the estimation procedure. Both procedures provided estimates with significant validity. There is also an indication that the procedures reduce bias and that both procedures result in more consistent estimates when the sizes of the squares are varied.

7.
Decision analysis tools and mathematical modeling are increasingly emphasized in malaria control programs worldwide to improve resource allocation and address ongoing challenges with sustainability. However, such tools require substantial scientific evidence, which is costly to acquire. The value of information (VOI) has been proposed as a metric for gauging the value of reduced model uncertainty. We apply this concept to an evidence-based Malaria Decision Analysis Support Tool (MDAST) designed for application in East Africa. In developing MDAST, substantial gaps in the scientific evidence base were identified regarding insecticide resistance in malaria vector control and the effectiveness of alternative mosquito control approaches, including larviciding. We identify four entomological parameters in the model (two for insecticide resistance and two for larviciding) that involve high levels of uncertainty and to which outputs in MDAST are sensitive. We estimate and compare a VOI for combinations of these parameters in evaluating three policy alternatives relative to a status quo policy. We find that having perfect information on the uncertain parameters could improve program net benefits by 5–21%, with the highest VOI associated with jointly eliminating uncertainty about the reproductive speed of malaria-transmitting mosquitoes and the initial efficacy of larviciding at reducing the emergence of new adult mosquitoes. Future research on parameter uncertainty in decision analysis of malaria control policy should investigate the VOI with respect to other aspects of malaria transmission (such as antimalarial resistance), the costs of reducing uncertainty in these parameters, and the extent to which imperfect information about these parameters can improve payoffs.
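Headline figures such as "5–21%" come from an expected-value-of-perfect-information calculation. The sketch below shows the generic Monte Carlo EVPI recipe with an invented payoff model and hypothetical Beta priors; it is not the MDAST model.

```python
# Generic Monte Carlo EVPI: value of resolving parameter uncertainty
# before choosing among policy options. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
M = 100_000

# Two uncertain entomological parameters with hypothetical Beta priors.
efficacy   = rng.beta(4, 6, M)     # e.g., larviciding efficacy
resistance = rng.beta(2, 8, M)     # e.g., an insecticide-resistance parameter

# Net benefit of each policy as a made-up function of the parameters.
net_benefit = np.column_stack([
    np.full(M, 1.00),                          # status quo
    0.8 + 1.2 * efficacy * (1 - resistance),   # add larviciding
    0.9 + 0.9 * (1 - resistance),              # switch insecticide class
])

ev_current = net_benefit.mean(axis=0).max()    # pick the best policy now
ev_perfect = net_benefit.max(axis=1).mean()    # pick after learning the true parameters
evpi = ev_perfect - ev_current
print(f"EVPI = {evpi:.4f} ({100 * evpi / ev_current:.1f}% of current expected net benefit)")
```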

8.
Changes in productivity of Spanish university libraries
This paper analyzes productivity growth, technical progress, and efficiency change in a sample of 34 Spanish university libraries between 2003 and 2007. Data envelopment analysis and a Malmquist index are combined with a bootstrap method to provide statistical inference estimators of individual productivity, technical progress, pure efficiency, and scale efficiency scores. To calculate productivity, a three-stage service model has been developed, examining productivity changes in the relationships between the libraries' basic inputs, intermediate outputs, and final outputs. The results indicate a growth in the productivity of the libraries (relationship between basic inputs and intermediate outputs) and in the productivity of the service (relationship between basic inputs and final outputs). The growth in productivity in both relationships is due to technical progress. If the variable representing the use of electronic information resources is removed from the final output, the result is a significant reduction in productivity.

9.
In a number of semiparametric models, smoothing seems necessary in order to obtain estimates of the parametric component which are asymptotically normal and converge at parametric rate. However, smoothing can inflate the error in the normal approximation, so that refined approximations are of interest, especially in sample sizes that are not enormous. We show that a bootstrap distribution achieves a valid Edgeworth correction in the case of density-weighted averaged derivative estimates of semiparametric index models. Approaches to bias reduction are discussed. We also develop a higher-order expansion to show that the bootstrap achieves a further reduction in size distortion in the case of two-sided testing. The finite-sample performance of the methods is investigated by means of Monte Carlo simulations from a Tobit model.

10.
This paper introduces a novel bootstrap procedure to perform inference in a wide class of partially identified econometric models. We consider econometric models defined by finitely many weak moment inequalities (models defined by moment equalities can also be admitted by combining pairs of weak moment inequalities), which encompass many applications of economic interest. The objective of our inferential procedure is to cover the identified set with a prespecified probability (the objective of covering each element of the identified set with a prespecified probability is treated in Bugni (2010a)). We compare our bootstrap procedure, a competing asymptotic approximation, and subsampling procedures in terms of the rate at which they achieve the desired coverage level, also known as the error in the coverage probability. Under certain conditions, we show that our bootstrap procedure and the asymptotic approximation have the same order of error in the coverage probability, which is smaller than that obtained by using subsampling. This implies that inference based on our bootstrap and asymptotic approximation should eventually be more precise than inference based on subsampling. A Monte Carlo study confirms this finding in a small sample simulation.

11.
A methodology that simulates outcomes from future data collection programs, utilizes Bayesian Monte Carlo analysis to predict the resulting reduction in uncertainty in an environmental fate-and-transport model, and estimates the expected value of this reduction in uncertainty to a risk-based environmental remediation decision is illustrated for polychlorinated biphenyl (PCB) sediment contamination and uptake by winter flounder in New Bedford Harbor, MA. The expected value of sample information (EVSI), the difference between the expected loss of the optimal decision based on the prior uncertainty analysis and the expected loss of the optimal decision from an updated information state, is calculated for several sampling plans. For the illustrative application we have posed, the EVSI for a sampling plan of two data points is $9.4 million, for five data points is $10.4 million, and for ten data points is $11.5 million. The EVSI for sampling plans involving larger numbers of data points is bounded by the expected value of perfect information, $15.6 million. A sensitivity analysis is conducted to examine the effect of selected model structure and parametric assumptions on the optimal decision and the EVSI. The optimal decision (total area to be dredged) is sensitive to the assumption of linearity between PCB sediment concentration and flounder PCB body burden and to the assumed relationship between area dredged and the harbor-wide average sediment PCB concentration; these assumptions also have a moderate impact on the computed EVSI. The EVSI is most sensitive to the unit cost of remediation and rather insensitive to the penalty cost associated with under-remediation.

12.
A number of investigators have explored the use of value of information (VOI) analysis to evaluate alternative information collection procedures in diverse decision-making contexts. This paper presents an analytic framework for determining the value of toxicity information used in risk-based decision making. The framework is specifically designed to explore the trade-offs between cost, timeliness, and uncertainty reduction associated with different toxicity-testing methodologies. The use of the proposed framework is demonstrated by two illustrative applications which, although based on simplified assumptions, show the insights that can be obtained through the use of VOI analysis. Specifically, these results suggest that the timeliness of information collection has a significant impact on estimates of the VOI of chemical toxicity tests, even when the accompanying reductions in uncertainty are smaller. The framework introduces the concept of the expected value of delayed sample information, as an extension of the usual expected value of sample information, to accommodate the reductions in value resulting from delayed decision making. Our analysis also suggests that lower-cost, higher-throughput testing may be beneficial in terms of public health by increasing the number of substances that can be evaluated within a given budget. When the relative value is expressed in terms of return on investment per testing strategy, the differences can be substantial.
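A minimal sketch of the "expected value of delayed sample information" idea follows: a standard preposterior EVSI for a hypothetical toxicity test (conjugate beta-binomial) that is then discounted for the delay before the result can inform the decision. The loss values, prior, test size, discount rate, and delay are all assumptions, not the paper's inputs.

```python
# EVSI by preposterior analysis, then discounted for decision delay (EVDSI).
import numpy as np

rng = np.random.default_rng(3)
M = 50_000
a0, b0 = 2, 8                           # Beta prior on the probability p of toxicity
n_test = 20                             # size of the hypothetical toxicity test
loss_regulate, loss_ignore = 1.0, 10.0  # losses: needless regulation vs. missed toxicant

def expected_loss(p_mean):
    # Best of two actions given the current belief about p:
    # regulate (loss if the chemical is actually non-toxic) vs. do nothing.
    return min(loss_regulate * (1 - p_mean), loss_ignore * p_mean)

prior_loss = expected_loss(a0 / (a0 + b0))

# Preposterior analysis: simulate test outcomes, update the prior, act optimally.
p = rng.beta(a0, b0, M)
k = rng.binomial(n_test, p)
post_mean = (a0 + k) / (a0 + b0 + n_test)
preposterior_loss = np.mean([expected_loss(m) for m in post_mean])

evsi = prior_loss - preposterior_loss
r, delay_years = 0.05, 3.0              # assumed discount rate and reporting delay
evdsi = evsi * np.exp(-r * delay_years)
print(f"EVSI = {evsi:.4f}   EVDSI after a {delay_years:.0f}-year delay = {evdsi:.4f}")
```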

13.
Decision biases can distort cost-benefit evaluations of uncertain risks, leading to risk management policy decisions with predictably high retrospective regret. We argue that well-documented decision biases encourage learning aversion, or predictably suboptimal learning and premature decision making in the face of high uncertainty about the costs, risks, and benefits of proposed changes. Biases such as narrow framing, overconfidence, confirmation bias, optimism bias, ambiguity aversion, and hyperbolic discounting of the immediate costs and delayed benefits of learning contribute to deficient individual and group learning, avoidance of information seeking, underestimation of the value of further information, and hence needlessly inaccurate risk-cost-benefit estimates and suboptimal risk management decisions. In practice, such biases can create predictable regret in the selection of potential risk-reducing regulations. Low-regret learning strategies based on computational reinforcement learning models can potentially overcome some of these suboptimal decision processes by replacing aversion to uncertain probabilities with actions calculated to balance exploration (deliberate experimentation and uncertainty reduction) and exploitation (taking actions to maximize the sum of expected immediate reward, expected discounted future reward, and value of information). We discuss the proposed framework for understanding and overcoming learning aversion and for implementing low-regret learning strategies, using regulation of air pollutants with uncertain health effects as an example.
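As a concrete, deliberately simplified example of a low-regret learning strategy of the kind the authors point to, the sketch below runs a UCB1 bandit that balances exploration and exploitation across candidate interventions with unknown mean payoffs; the payoff values are illustrative and unrelated to any specific pollutant.

```python
# UCB1 bandit: optimism in the face of uncertainty instead of ambiguity aversion.
import numpy as np

rng = np.random.default_rng(4)
true_means = np.array([0.30, 0.50, 0.45])   # unknown expected net benefits of 3 options
T = 5000
counts = np.zeros(3)
sums = np.zeros(3)

for t in range(T):
    if t < 3:
        arm = t                              # try every option once
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t + 1) / counts)
        arm = int(np.argmax(ucb))            # exploration bonus shrinks as counts grow
    reward = rng.normal(true_means[arm], 0.1)
    counts[arm] += 1
    sums[arm] += reward

regret = T * true_means.max() - sums.sum()
print("pulls per option:", counts.astype(int))
print(f"realized regret after {T} rounds: {regret:.1f} "
      f"(vs. about {T * (true_means.max() - true_means.min()):.0f} for always picking the worst option)")
```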

14.
Standard errors of the coefficients of a logistic regression (a binary response model) based on the asymptotic formula are compared to those obtained from the bootstrap through Monte Carlo simulations. The computer-intensive bootstrap method, a nonparametric alternative to the asymptotic estimate, overestimates the true value of the standard errors, while the asymptotic formula underestimates it. However, for small samples the bootstrap estimates are substantially closer to the true value than their counterparts derived from the asymptotic formula. The methodology is discussed using two illustrative data sets. The first example deals with a logistic model explaining the log-odds of passing the ERA amendment by the 1982 deadline as a function of the percent of women legislators and the percent vote for Reagan. In the second example, the probability that an ingot is ready to roll is modelled using heating time and soaking time as explanatory variables. The results agree with those obtained from the simulations. The value of the study to better decision making through accurate statistical inference is discussed.
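The comparison can be reproduced in miniature on simulated data. The sketch below contrasts the asymptotic (information-matrix) standard errors reported by statsmodels with nonparametric bootstrap standard errors; the simulated covariates and coefficients stand in for the ERA and ingot examples.

```python
# Asymptotic vs. bootstrap standard errors for logistic-regression coefficients.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, B = 200, 1000
X = sm.add_constant(rng.normal(size=(n, 2)))
beta = np.array([-0.5, 1.0, -0.8])               # made-up true coefficients
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))

fit = sm.Logit(y, X).fit(disp=0)

boot = []
for _ in range(B):
    idx = rng.integers(0, n, n)                  # resample observations with replacement
    try:
        boot.append(sm.Logit(y[idx], X[idx]).fit(disp=0).params)
    except Exception:
        continue                                 # skip rare non-converged resamples
boot_se = np.std(np.array(boot), axis=0, ddof=1)

for name, a_se, b_se in zip(["const", "x1", "x2"], fit.bse, boot_se):
    print(f"{name}: asymptotic SE = {a_se:.3f}, bootstrap SE = {b_se:.3f}")
```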

15.
The benchmark dose (BMD) is an exposure level that would induce a small risk increase (the BMR level) above the background. The BMD approach to deriving a reference dose for risk assessment of noncancer effects is advantageous in that the estimate of BMD is not restricted to experimental doses and utilizes most available dose-response information. To quantify statistical uncertainty of a BMD estimate, we often calculate and report its lower confidence limit (i.e., BMDL), and may even consider it as a more conservative alternative to BMD itself. Computation of BMDL may involve normal confidence limits to BMD in conjunction with the delta method. Therefore, factors such as small sample size and nonlinearity in model parameters can affect the performance of the delta method BMDL, and alternative methods are useful. In this article, we propose a bootstrap method to estimate BMDL utilizing a scheme that consists of a resampling of residuals after model fitting and a one-step formula for parameter estimation. We illustrate the method with clustered binary data from developmental toxicity experiments. Our analysis shows that with moderately elevated dose-response data, the distribution of the BMD estimator tends to be left-skewed and bootstrap BMDLs are smaller than the delta method BMDLs on average, hence quantifying risk more conservatively. Statistically, the bootstrap BMDL quantifies the uncertainty of the true BMD more honestly than the delta method BMDL, as its coverage probability is closer to the nominal level than that of the delta method BMDL. We find that BMD and BMDL estimates are generally insensitive to model choices provided that the models fit the data comparably well near the region of the BMD. Our analysis also suggests that, in the presence of a significant and moderately strong dose-response relationship, developmental toxicity experiments under the standard protocol support dose-response assessment at 5% BMR for BMD and a 95% confidence level for BMDL.
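The sketch below illustrates the percentile-lower-limit idea behind a bootstrap BMDL on a simple quantal logistic model with a parametric bootstrap and no litter clustering; it is not the paper's residual-resampling, one-step scheme, and the dose-response data are invented.

```python
# Simplified BMD/BMDL sketch: quantal logistic model, extra risk BMR = 5%,
# parametric bootstrap, BMDL taken as the 5th percentile of bootstrap BMDs.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, logit, xlogy

dose = np.array([0.0, 12.5, 25.0, 50.0, 100.0])
n    = np.array([25, 25, 25, 25, 25])            # animals per dose group
resp = np.array([1, 3, 6, 12, 20])               # responders (hypothetical)
BMR  = 0.05

def negloglik(theta, resp):
    p = expit(theta[0] + theta[1] * dose)
    return -np.sum(xlogy(resp, p) + xlogy(n - resp, 1 - p))

def fit_theta(resp):
    return minimize(negloglik, x0=[-2.0, 0.05], args=(resp,), method="Nelder-Mead").x

def bmd_from(theta):
    p0 = expit(theta[0])
    p_star = p0 + BMR * (1 - p0)                 # target probability at extra risk BMR
    return (logit(p_star) - theta[0]) / theta[1]

theta_hat = fit_theta(resp)
bmd_hat = bmd_from(theta_hat)

rng = np.random.default_rng(6)
p_hat = expit(theta_hat[0] + theta_hat[1] * dose)
boot_bmds = [bmd_from(fit_theta(rng.binomial(n, p_hat))) for _ in range(1000)]
bmdl = np.percentile(boot_bmds, 5)               # one-sided 95% lower confidence limit

print(f"BMD estimate = {bmd_hat:.1f}, bootstrap BMDL = {bmdl:.1f}")
```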

16.
A nonparametric, residual-based block bootstrap procedure is proposed in the context of testing for integrated (unit root) time series. The resampling procedure is based on weak assumptions on the dependence structure of the stationary process driving the random walk and successfully generates unit root integrated pseudo-series retaining the important characteristics of the data. It is more general than previous bootstrap approaches to the unit root problem in that it allows for a very wide class of weakly dependent processes and it is not based on any parametric assumption on the process generating the data. As a consequence the procedure can accurately capture the distribution of many unit root test statistics proposed in the literature. Large sample theory is developed and the asymptotic validity of the block bootstrap-based unit root testing is shown via a bootstrap functional limit theorem. Applications to some particular test statistics of the unit root hypothesis, i.e., least squares and Dickey-Fuller type statistics, are given. The power properties of our procedure are investigated and compared to those of alternative bootstrap approaches to carry out the unit root test. Some simulations examine the finite sample performance of our procedure.
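A minimal sketch of the residual-based block bootstrap idea follows: center the first differences, resample them in blocks, cumulate to rebuild unit-root pseudo-series, and use the resulting Dickey-Fuller statistics as the reference distribution. The block length, test regression, and data-generating process are illustrative choices, not the paper's.

```python
# Residual-based block bootstrap for a Dickey-Fuller-type unit root test.
import numpy as np

rng = np.random.default_rng(7)

def df_stat(y):
    """t-statistic of rho in the regression  dy_t = rho * y_{t-1} + e_t."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
    return rho / se

# Example data: a random walk driven by AR(1) errors (the null is true).
T, b = 300, 12                                   # sample size, block length
u = np.zeros(T)
e = rng.standard_normal(T)
for t in range(1, T):
    u[t] = 0.4 * u[t - 1] + e[t]
y = np.cumsum(u)

obs = df_stat(y)
d = np.diff(y) - np.diff(y).mean()               # centered differences (stationary under H0)

stats = []
for _ in range(2000):
    starts = rng.integers(0, len(d) - b + 1, size=int(np.ceil(len(d) / b)))
    blocks = np.concatenate([d[s:s + b] for s in starts])[:len(d)]
    y_star = np.concatenate([[0.0], np.cumsum(blocks)])   # unit-root pseudo-series
    stats.append(df_stat(y_star))

print(f"observed DF statistic = {obs:.2f}, bootstrap 5% critical value = {np.percentile(stats, 5):.2f}")
print(f"bootstrap p-value = {np.mean(np.array(stats) <= obs):.3f}")
```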

17.
Estimating the unknown minimum (location) of a random variable has received some attention in the statistical literature, but not enough in the area of decision sciences. This is surprising, given that such estimation needs often arise in simulation and global optimization. This study explores the characteristics of two previously used simple percentile estimators of location. The study also identifies a new percentile estimator of the location parameter for the gamma, Weibull, and log-normal distributions with a smaller bias than the other two estimators. The performance of the new estimator, the minimum-bias percentile (MBP) estimator, and the other two percentile estimators is compared using Monte Carlo simulation. The results indicate that, of the three estimators, the MBP estimator developed in this study provides, in most cases, the estimate with the lowest bias and smallest mean square error of the location for populations drawn from log-normal, gamma, or Weibull (but not exponential) distributions. A decision diagram is provided for location estimator selection, based on the value of the coefficient of variation, when the statistical distribution is known or unknown.
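This kind of Monte Carlo bias comparison can be illustrated as below. The sketch compares the bias of the sample minimum with the simple Robson-Whitlock endpoint estimator x(1) - (x(2) - x(1)) for a shifted Weibull population; the paper's MBP estimator itself is not reproduced here, and the parameters are illustrative.

```python
# Monte Carlo bias of two simple estimators of an unknown location (minimum).
import numpy as np

rng = np.random.default_rng(8)
gamma_true, shape, scale = 10.0, 1.5, 2.0        # location, Weibull shape and scale
n, reps = 30, 20_000

bias_min, bias_rw = [], []
for _ in range(reps):
    x = np.sort(gamma_true + scale * rng.weibull(shape, n))
    bias_min.append(x[0] - gamma_true)           # sample minimum
    bias_rw.append(x[0] - (x[1] - x[0]) - gamma_true)   # Robson-Whitlock endpoint estimator

print(f"sample minimum:  mean bias = {np.mean(bias_min):+.4f}")
print(f"Robson-Whitlock: mean bias = {np.mean(bias_rw):+.4f}")
```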

18.
We consider forecasting with uncertainty about the choice of predictor variables. The researcher wants to select a model, estimate the parameters, and use the parameter estimates for forecasting. We investigate the distributional properties of a number of different schemes for model choice and parameter estimation, including: in-sample model selection using the Akaike information criterion; out-of-sample model selection; and splitting the data into subsamples for model selection and parameter estimation. Using a weak-predictor local asymptotic scheme, we provide a representation result that facilitates comparison of the distributional properties of the procedures and their associated forecast risks. This representation isolates the source of inefficiency in some of these procedures. We develop a simulation procedure that improves the accuracy of the out-of-sample and split-sample methods uniformly over the local parameter space. We also examine how bootstrap aggregation (bagging) affects the local asymptotic risk of the estimators and their associated forecasts. Numerically, we find that for many values of the local parameter, the out-of-sample and split-sample schemes perform poorly if implemented in the conventional way. But they perform well if implemented in conjunction with our risk-reduction method or bagging.

19.
The standard value of information approach of decision analysis assumes that the individual or agency that collects the information is also in control of the subsequent decisions based on the information. We refer to this situation as the "value of information with control (VOI-C)." This paradigm leads to powerful results, for example, that the value of information cannot be negative and that it is zero when the information cannot change subsequent decisions. In many real world situations, however, the agency collecting the information is different from the one that makes the decision on the basis of that information. For example, an environmental research group may contemplate funding a study that can affect an environmental policy decision that is made by a regulatory organization. In this two-agency formulation, the information-acquiring agency has to decide whether an investment in research is worthwhile, while not being in control of the subsequent decision. We refer to this situation as "value of information without control (VOI-NC)." In this article, we present a framework for the VOI-NC and illustrate it with an example of a specific problem of determining the value of a research program on the health effects of power-frequency electromagnetic fields. We first compare the VOI-C approach with the VOI-NC approach. We show that the VOI-NC can be negative, but that with high-quality research (low probabilities of errors of type I and II) it is positive. We also demonstrate, both in the example and in more general mathematical terms, that the VOI-NC for environmental studies breaks down into a sum of the VOI-NC due to the possible reduction of environmental impacts and the VOI-NC due to the reduction of policy costs, with each component being positive for low environmental impacts and high-quality research. Interesting results include that the environmental and cost components of the VOI-NC move in opposite directions as a function of the probability of environmental impacts and that the VOI-NC can be positive even though the probability of environmental impacts is zero or one.

20.
Opportunities to improve our information about risk continue to arise and lead decision makers to indirectly address the issue of the value of improved information through resource allocation decisions. Statistical decision analysis techniques provide an analytical framework for valuing information explicitly in the context of regulatory decision making. This paper provides estimates of the value of improved national estimates of perchloroethylene (perc) exposure from U.S. dry cleaners in the context of EPA's recently promulgated National Emissions Standard for Hazardous Air Pollutants (NESHAP), with emphasis on exposure information. Consistent with the NESHAP decision, we relied on EPA's technology and economic assessments. In this first-cut analysis, estimates of the exposures of workers, consumers of dry cleaning services, and the general public are probabilistically characterized to reflect uncertainty about exposure and potency. We consider the net benefits of the different control options by assessing the associated changes in the total annual population risks and valuing them in monetary terms, with no constraints placed on maximum individual risks. The results suggest that the expected value of perfect information (EVPI) about potency exceeds the EVPI about exposure. Sensitivity analyses demonstrate how the choices of the valuation parameters and distributions used to characterize uncertainty in the model affect the estimates of the value of information.
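The potency-versus-exposure comparison rests on partial EVPI (parameter-specific EVPI) calculations. The sketch below shows the generic recipe with invented distributions, costs, and risk-reduction fractions, and a net-benefit function kept linear so that conditioning on one parameter and averaging out the other is exact; it is not EPA's perc model.

```python
# Partial EVPI: value of perfect information about potency vs. about exposure.
import numpy as np

rng = np.random.default_rng(9)
M = 200_000
exposure = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=M)   # population exposure index
potency  = rng.lognormal(mean=np.log(0.01), sigma=1.0, size=M)  # risk per unit exposure

V = 50.0                                          # monetized value per unit of risk removed
reduction = np.array([0.0, 0.4, 0.8])             # risk reduction of each control option
cost      = np.array([0.0, 0.3, 1.2])             # annualized control cost

def nb(e, p):
    # Net benefit of each option (columns) for each parameter draw (rows).
    return V * np.outer(e * p, reduction) - cost

baseline = nb(exposure, potency).mean(axis=0).max()

# Perfect information about ONE parameter: condition on it and average out the
# other (exact here because net benefit is linear in the remaining parameter).
evppi_potency  = nb(np.full(M, exposure.mean()), potency).max(axis=1).mean() - baseline
evppi_exposure = nb(exposure, np.full(M, potency.mean())).max(axis=1).mean() - baseline
evpi           = nb(exposure, potency).max(axis=1).mean() - baseline

print(f"partial EVPI (potency)  = {evppi_potency:.4f}")
print(f"partial EVPI (exposure) = {evppi_exposure:.4f}")
print(f"EVPI (both parameters)  = {evpi:.4f}")
```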

