Similar Documents
10 similar documents retrieved.
1.
"Uncertainty in statistics and demographic projections for aging and other policy purposes comes from four sources: differences in definitions, sampling error, nonsampling error, and scientific uncertainty. Some of these uncertainties can be reduced by proper planning and coordination, but most often decisions have to be made in the face of some remaining uncertainty. Although decision makers have a tendency to ignore uncertainty, doing so does not lead to good policy-making. Techniques for estimating and reporting on uncertainty include sampling theory, assessment of experts' subjective distributions, sensitivity analysis, and multiple independent estimates." The primary geographical focus is on the United States.  相似文献   

2.
Uncertainty and sensitivity analysis is an essential ingredient of model development and applications. For many uncertainty and sensitivity analysis techniques, sensitivity indices are calculated based on a relatively large sample to measure the importance of parameters in their contributions to uncertainties in model outputs. To statistically compare their importance, it is necessary that uncertainty and sensitivity analysis techniques provide standard errors of estimated sensitivity indices. In this paper, a delta method is used to analytically approximate standard errors of estimated sensitivity indices for a popular sensitivity analysis method, the Fourier amplitude sensitivity test (FAST). Standard errors estimated based on the delta method were compared with those estimated based on 20 sample replicates. We found that the delta method can provide a good approximation for the standard errors of both first-order and higher-order sensitivity indices. Finally, based on the standard error approximation, we also proposed a method to determine a minimum sample size to achieve the desired estimation precision for a specified sensitivity index. The standard error estimation method presented in this paper can make the FAST analysis computationally much more efficient for complex models.
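As a hedged illustration of the replicate-based baseline mentioned in this abstract, the sketch below estimates first-order sensitivity indices for the standard Ishigami test function and derives standard errors from 20 independent sample replicates. A pick-freeze (Sobol/Saltelli-style) estimator stands in for FAST purely for brevity; the test function, sample sizes, and estimator choice are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    # Standard Ishigami test function with three inputs on [-pi, pi].
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

def first_order_indices(model, d, n, rng):
    # Pick-freeze (Sobol/Saltelli-style) first-order indices from two
    # independent input matrices A and B.
    A = rng.uniform(-np.pi, np.pi, size=(n, d))
    B = rng.uniform(-np.pi, np.pi, size=(n, d))
    fA, fB = model(A), model(B)
    var_y = np.concatenate([fA, fB]).var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                     # column i of A replaced by column i of B
        S[i] = np.mean(fB * (model(ABi) - fA)) / var_y
    return S

rng = np.random.default_rng(0)
R, n, d = 20, 4096, 3                           # 20 replicates, as in the abstract's comparison
reps = np.array([first_order_indices(ishigami, d, n, rng) for _ in range(R)])
print("mean S_i:", np.round(reps.mean(axis=0), 3))
print("replicate-based SE:", np.round(reps.std(axis=0, ddof=1) / np.sqrt(R), 4))
```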

3.
As modeling efforts expand to a broader spectrum of areas, the amount of computer time required to exercise the corresponding computer codes has become quite costly (several hours for a single run is not uncommon). This cost can be tied directly to the complexity of the modeling and to the large number of input variables (often numbering in the hundreds). Further, the complexity of the modeling (usually involving systems of differential equations) makes the relationships among the input variables mathematically intractable. In this setting it is desired to perform sensitivity studies of the input-output relationships. Hence, a judicious procedure for selecting the values of input variables is required; Latin hypercube sampling has been shown to work well on this type of problem.

However, a variety of situations require that decisions and judgments be made in the face of uncertainty. The source of this uncertainty may be lack of knowledge about probability distributions associated with input variables, or about different hypothesized future conditions, or may be present as a result of different strategies associated with a decision-making process. In this paper a generalization of Latin hypercube sampling is given that allows these areas to be investigated without making additional computer runs. In particular, it is shown how weights associated with Latin hypercube input vectors may be changed to reflect different probability distribution assumptions on key input variables and yet provide an unbiased estimate of the cumulative distribution function of the output variable. This allows different distribution assumptions on input variables to be studied without additional computer runs and without fitting a response surface. In addition, these same weights can be used in a modified nonparametric Friedman test to compare treatments. Sample size requirements needed to apply the results of the work are also considered. The procedures presented in this paper are illustrated using a model associated with the risk assessment of geologic disposal of radioactive waste.
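The reweighting idea can be sketched with importance-style weights: run the model once on a Latin hypercube sample drawn under the original input distribution, then rescale each run by the ratio of the alternative input density to the original one to estimate the output CDF under the new assumption without further runs. The model, distributions, and weight construction below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np
from scipy import stats
from scipy.stats import qmc

def model(x):
    # Hypothetical stand-in for an expensive code: a simple nonlinear response.
    return x[:, 0] ** 2 + 2.0 * x[:, 1] + np.sin(x[:, 2])

d, n = 3, 500
sampler = qmc.LatinHypercube(d=d, seed=1)
u = sampler.random(n)                          # LHS on the unit cube

# Original assumption: inputs i.i.d. Uniform(0, 2).
x = qmc.scale(u, [0.0] * d, [2.0] * d)
y = model(x)                                   # the single (expensive) set of runs

# Alternative assumption on input 0: Normal(1, 0.3) truncated to [0, 2].
orig_pdf = stats.uniform(0, 2).pdf(x[:, 0])
new_pdf = stats.truncnorm((0 - 1) / 0.3, (2 - 1) / 0.3, loc=1, scale=0.3).pdf(x[:, 0])
w = new_pdf / orig_pdf
w /= w.sum()                                   # normalized importance weights

def weighted_cdf(y, w, t):
    # Weighted empirical CDF of the output at threshold t.
    return np.sum(w[y <= t])

print("P(Y <= 3) under original assumption :", np.mean(y <= 3.0))
print("P(Y <= 3) under reweighted assumption:", weighted_cdf(y, w, 3.0))
```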

4.
This paper reviews five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main questions are: when should which type of analysis be applied, and which statistical techniques may then be used? This paper claims that the proper sequence to follow in the evaluation of simulation models is as follows. 1) Validation: the availability of data on the real system determines which type of statistical technique to use for validation. 2) Screening: in the simulation's pilot phase the really important inputs can be identified through a novel technique, called sequential bifurcation, which uses aggregation and sequential experimentation. 3) Sensitivity analysis: the really important inputs should be subjected to a more detailed analysis, which includes interactions between these inputs; relevant statistical techniques are design of experiments (DOE) and regression analysis. 4) Uncertainty analysis: the important environmental inputs may have values that are not precisely known, so the uncertainties of the model outputs that result from the uncertainties in these model inputs should be quantified; relevant techniques are the Monte Carlo method and Latin hypercube sampling. 5) Optimization: the policy variables should be controlled; a relevant technique is Response Surface Methodology (RSM), which combines DOE, regression analysis, and steepest-ascent hill-climbing. The recommended sequence implies that sensitivity analysis precede uncertainty analysis. Several case studies for each phase are briefly discussed in this paper.
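Step 3 (sensitivity analysis via DOE and regression) can be illustrated with a minimal sketch: a 2^3 factorial design in coded units, a regression metamodel with main effects and two-factor interactions, and inspection of the fitted coefficients. The toy simulation and effect sizes are hypothetical.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

def simulate(x1, x2, x3):
    # Hypothetical simulation response with one two-factor interaction plus noise.
    return 3.0 * x1 + 0.5 * x2 + 2.0 * x1 * x3 + rng.normal(0.0, 0.1)

# 2^3 full factorial design in coded units (-1, +1): a basic DOE.
design = np.array(list(product([-1.0, 1.0], repeat=3)))
y = np.array([simulate(*row) for row in design])

# Regression metamodel: intercept, main effects, two-factor interactions.
X = np.column_stack([
    np.ones(len(design)),
    design[:, 0], design[:, 1], design[:, 2],
    design[:, 0] * design[:, 1],
    design[:, 0] * design[:, 2],
    design[:, 1] * design[:, 2],
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["const", "x1", "x2", "x3", "x1:x2", "x1:x3", "x2:x3"], beta):
    print(f"{name:6s} {b:+.3f}")    # a large |coefficient| flags an important input or interaction
```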

5.
Composite indicators are increasingly used for benchmarking countries' performances. Yet doubts are often raised about the robustness of the resulting countries' rankings and about the significance of the associated policy message. We propose the use of uncertainty analysis and sensitivity analysis to gain useful insights during the process of building composite indicators, including a contribution to the indicators' definition of quality and an assessment of the reliability of countries' rankings. We discuss to what extent the use of uncertainty and sensitivity analysis may increase transparency or make policy inference more defensible by applying the methodology to a known composite indicator: the United Nations' Technology Achievement Index.
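One common way to carry out such an uncertainty analysis is to perturb the aggregation weights and track how the countries' ranks move. The sketch below does this for hypothetical scores and Dirichlet-distributed weights; both are illustrative assumptions rather than the Technology Achievement Index data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical normalized sub-indicator scores (rows: countries, columns: indicators).
countries = ["A", "B", "C", "D", "E"]
scores = rng.uniform(0.0, 1.0, size=(5, 4))

def ranks(composite):
    # Rank 1 = best (highest composite score).
    order = np.argsort(-composite)
    r = np.empty_like(order)
    r[order] = np.arange(1, len(composite) + 1)
    return r

# Uncertainty analysis: perturb the aggregation weights and record the ranks.
n_draws = 2000
rank_samples = np.empty((n_draws, len(countries)), dtype=int)
for k in range(n_draws):
    w = rng.dirichlet(np.ones(scores.shape[1]))   # random weights summing to 1
    rank_samples[k] = ranks(scores @ w)

for i, c in enumerate(countries):
    med = np.median(rank_samples[:, i])
    lo, hi = np.percentile(rank_samples[:, i], [5, 95])
    print(f"country {c}: median rank {med:.0f}, 90% interval [{lo:.0f}, {hi:.0f}]")
```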

6.
The paper subjectively elicits the Dirichlet prior hyperparameters based on realistic opinion collected from experts. The elicitation procedure involves several stages: the choice of experts, the formulation of relevant questions to ask the experts in order to obtain their opinion, the pooling of opinion, the quantification of information, and finally the construction of an exact prior distribution through quantile assessment based on an iterative procedure. The resulting prior distribution is used to provide a Bayes analysis assuming a multinomial sampling plan. The results are illustrated by means of a data set involving two lifestyle factors of gallbladder carcinoma patients, and they match closely the opinion given by the medical experts.
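The Bayes analysis under a multinomial sampling plan follows Dirichlet-multinomial conjugacy: the elicited Dirichlet hyperparameters are simply updated with the observed counts. In the sketch below the hyperparameter and count values are hypothetical stand-ins for the elicited and observed data.

```python
import numpy as np

# Elicited Dirichlet prior hyperparameters for a 3-category outcome
# (hypothetical values standing in for the expert-elicited ones).
alpha_prior = np.array([2.0, 5.0, 3.0])

# Observed multinomial counts (hypothetical data).
counts = np.array([12, 30, 18])

# Conjugacy: Dirichlet(alpha) prior + multinomial counts -> Dirichlet(alpha + counts).
alpha_post = alpha_prior + counts

post_mean = alpha_post / alpha_post.sum()            # posterior mean of category probabilities
post_draws = np.random.default_rng(3).dirichlet(alpha_post, size=10000)
ci = np.percentile(post_draws, [2.5, 97.5], axis=0)  # 95% credible intervals

for j in range(len(alpha_prior)):
    print(f"category {j}: mean {post_mean[j]:.3f}, 95% CI ({ci[0, j]:.3f}, {ci[1, j]:.3f})")
```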

7.
‘Success’ in drug development is bringing to patients a new medicine that has an acceptable benefit–risk profile and that is also cost-effective. Cost-effectiveness means that the incremental clinical benefit is deemed worth paying for by a healthcare system, and it has an important role in enabling manufacturers to bring new medicines to patients as soon as possible following regulatory approval. Subgroup analyses are increasingly being utilised by decision-makers in the determination of the cost-effectiveness of new medicines when making recommendations. This paper highlights the statistical considerations when using subgroup analyses to support cost-effectiveness for a health technology assessment. The key principles recommended for subgroup analyses supporting clinical effectiveness published by Paget et al. are evaluated with respect to subgroup analyses supporting cost-effectiveness. A health technology assessment case study is included to highlight the importance of subgroup analyses when incorporated into cost-effectiveness analyses. In summary, we recommend planning subgroup analyses for cost-effectiveness analyses early in the drug development process and adhering to good statistical principles when using subgroup analyses in this context. In particular, we consider it important to provide transparency in how subgroups are defined, to be able to demonstrate the robustness of the subgroup results, and to be able to quantify the uncertainty in the subgroup analyses of cost-effectiveness.
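One standard way to quantify uncertainty in a subgroup's cost-effectiveness is a nonparametric bootstrap of the incremental cost-effectiveness ratio (ICER) within the pre-defined subgroup. The sketch below uses hypothetical patient-level data and is an illustrative approach, not the case study in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical patient-level data for one pre-defined subgroup:
# per-patient costs and QALYs under the treatment and control arms.
cost_t, qaly_t = rng.normal(12000, 3000, 150), rng.normal(1.60, 0.40, 150)
cost_c, qaly_c = rng.normal(9000, 2500, 160), rng.normal(1.45, 0.40, 160)

def icer(ct, qt, cc, qc):
    # Incremental cost-effectiveness ratio: incremental cost / incremental effect.
    return (ct.mean() - cc.mean()) / (qt.mean() - qc.mean())

# Nonparametric bootstrap over patients to quantify uncertainty in the subgroup ICER.
B = 5000
boot = np.empty(B)
for b in range(B):
    it = rng.integers(0, len(cost_t), len(cost_t))
    ic = rng.integers(0, len(cost_c), len(cost_c))
    boot[b] = icer(cost_t[it], qaly_t[it], cost_c[ic], qaly_c[ic])

print("point ICER:", round(icer(cost_t, qaly_t, cost_c, qaly_c)))
print("bootstrap 95% interval:", np.round(np.percentile(boot, [2.5, 97.5])))
```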

8.
We consider the use of emulator technology as an alternative to second-order Monte Carlo (2DMC) in the uncertainty analysis for a percentile from the output of a stochastic model. 2DMC is a technique that uses repeated sampling in order to make inferences on the uncertainty and variability in a model output. The conventional 2DMC approach can often be computationally expensive, making methods for uncertainty and sensitivity analysis infeasible. We explore the adequacy and efficiency of the emulation approach, and we find that emulation provides a viable alternative in this situation. We demonstrate these methods using two examples of different input dimensions, including an application that considers contamination in pre-pasteurised milk.
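A minimal 2DMC sketch, assuming a hypothetical log-normal stand-in for the stochastic model: the outer loop samples the uncertain parameters, the inner loop samples variability, and the 95th percentile computed in each inner loop yields an uncertainty distribution for that percentile. The emulation alternative studied in the paper would replace the expensive outer-loop evaluations.

```python
import numpy as np

rng = np.random.default_rng(5)

def stochastic_model(mu, sigma, n_inner):
    # Inner loop: variability. A hypothetical stand-in for a stochastic model output,
    # e.g. log-normal contamination levels across sampled units.
    return rng.lognormal(mean=mu, sigma=sigma, size=n_inner)

# Outer loop: uncertainty about the model parameters themselves.
n_outer, n_inner = 500, 2000
p95_samples = np.empty(n_outer)
for k in range(n_outer):
    mu = rng.normal(0.0, 0.2)              # uncertain mean on the log scale
    sigma = rng.uniform(0.4, 0.8)          # uncertain spread on the log scale
    y = stochastic_model(mu, sigma, n_inner)
    p95_samples[k] = np.percentile(y, 95)  # percentile over variability

# Uncertainty distribution of the 95th percentile.
print("median 95th percentile:", np.round(np.median(p95_samples), 3))
print("95% uncertainty interval:", np.round(np.percentile(p95_samples, [2.5, 97.5]), 3))
```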

9.
Risk assessment of modeling predictions is becoming increasingly important as input to decision makers. Probabilistic risk analysis is typically expensive to perform, since it generally requires the calculation of a model output probability distribution function (PDF) followed by the integration of the risk portion of the PDF. Here we describe the new risk-analysis Guided Monte Carlo (GMC) technique. It maintains the global coverage of Monte Carlo (MC) while judiciously combining model reruns with efficient sensitivity analysis predictions to accurately evaluate the integrated risk portion of the PDF. This GMC technique will facilitate risk analysis of complex models where the expense was previously prohibitive. Two examples are presented to illustrate the technique, its computational savings, and its broad applicability: an ordinary differential equation based chemical kinetics model and an analytic dosimetry model. For any particular example, the degree of savings will depend on the relative risk being evaluated. In general, the highest fractional savings with the GMC technique occur when estimating risk levels specified in the far wing of the PDF. If no savings are possible, the GMC technique defaults to the true MC limit. In the illustrations presented here, the GMC analysis saved approximately a factor of four in computational effort relative to a full MC analysis. Furthermore, the GMC technique can also be implemented with other sampling strategies, such as Latin hypercube, when appropriate.
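For orientation, the plain Monte Carlo baseline that GMC aims to improve on looks like the sketch below: sample the uncertain inputs, evaluate the model, and integrate the tail of the output PDF as an exceedance probability. The model and threshold are hypothetical, and the sensitivity-guided rerun selection that defines GMC is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

def model(x):
    # Hypothetical cheap stand-in for an expensive model (e.g. a dose estimate).
    return np.exp(0.8 * x[:, 0] + 0.5 * x[:, 1]) + 0.1 * x[:, 2] ** 2

threshold = 8.0                       # "risk" = probability the output exceeds this level
n = 200_000
x = rng.normal(size=(n, 3))           # uncertain inputs
y = model(x)

p_exceed = np.mean(y > threshold)     # integrated tail area of the output PDF
se = np.sqrt(p_exceed * (1 - p_exceed) / n)   # binomial standard error of the estimate
print(f"P(Y > {threshold}) = {p_exceed:.4f} +/- {se:.4f}")
```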

10.
A two-stage sampling scheme for estimating the mean of a distribution, proposed by Katti (1962), is investigated and some of its properties are generalized. The method utilizes subjective information in the analysis but remains within the classical framework. The generalized mean square error of the estimator is computed as the loss function and is compared with that of the usual classical approach. Modifications are suggested to improve the efficiency of the estimator by incorporating the uncertainty of the subjective information. The procedure, referred to as "the Method of Tested Priors", is an alternative way of using subjective information without necessarily agreeing with the philosophical aspects of the Bayesian approach.
