Similar literature
1.
Probabilistic sensitivity analysis of complex models: a Bayesian approach
Summary. In many areas of science and technology, mathematical models are built to simulate complex real world phenomena. Such models are typically implemented in large computer programs and are also very complex, such that the way that the model responds to changes in its inputs is not transparent. Sensitivity analysis is concerned with understanding how changes in the model inputs influence the outputs. This may be motivated simply by a wish to understand the implications of a complex model but often arises because there is uncertainty about the true values of the inputs that should be used for a particular application. A broad range of measures have been advocated in the literature to quantify and describe the sensitivity of a model's output to variation in its inputs. In practice the most commonly used measures are those that are based on formulating uncertainty in the model inputs by a joint probability distribution and then analysing the induced uncertainty in outputs, an approach which is known as probabilistic sensitivity analysis. We present a Bayesian framework which unifies the various tools of probabilistic sensitivity analysis. The Bayesian approach is computationally highly efficient. It allows effective sensitivity analysis to be achieved by using far smaller numbers of model runs than standard Monte Carlo methods. Furthermore, all measures of interest may be computed from a single set of runs.
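For contrast with the emulator-based approach described above, the following is a minimal sketch of the standard Monte Carlo (pick-freeze) estimator of first-order variance-based sensitivity indices, the kind of brute-force computation the Bayesian framework is designed to avoid. The toy model, input distribution and sample sizes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def first_order_sobol(model, sampler, n, dim):
    """Crude Monte Carlo (pick-freeze) estimate of first-order Sobol indices."""
    A, B = sampler(n), sampler(n)            # two independent samples of the inputs
    f_A, f_B = model(A), model(B)
    var_y = np.var(np.concatenate([f_A, f_B]), ddof=1)
    s = np.empty(dim)
    for i in range(dim):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]                 # all columns from A except column i, taken from B
        s[i] = np.mean(f_B * (model(AB_i) - f_A)) / var_y   # Saltelli-type estimator of S_i
    return s

# toy example: linear model with independent uniform inputs (illustrative only)
rng = np.random.default_rng(0)
sampler = lambda n: rng.uniform(-1.0, 1.0, size=(n, 3))
model = lambda X: X @ np.array([1.0, 2.0, 0.5])
print(first_order_sobol(model, sampler, n=200_000, dim=3))
```

Note the cost: (2 + dim) × n model runs, which is exactly what makes the reduced-run Bayesian alternative attractive for expensive simulators.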

2.
Summary. A two-level regression mixture model is discussed and contrasted with the conventional two-level regression model. Simulated and real data shed light on the modelling alternatives. The real data analyses investigate gender differences in mathematics achievement from the US National Education Longitudinal Survey. The two-level regression mixture analyses show that unobserved heterogeneity should not be presupposed to exist only at level 2 at the expense of level 1. Both the simulated and the real data analyses show that level 1 heterogeneity in the form of latent classes can be mistaken for level 2 heterogeneity in the form of the random effects that are used in conventional two-level regression analysis. Because of this, mixture models have an important role to play in multilevel regression analyses. Mixture models allow heterogeneity to be investigated more fully, more correctly attributing different portions of the heterogeneity to the different levels.

3.
In this article, we are interested in the comparison, under a third-order framework, of classes of second-order, reduced-bias tail index estimators, giving particular emphasis to minimum-variance reduced-bias estimators of the tail index γ. The full asymptotic distributional properties of the proposed classes are derived under a third-order framework and the estimators are compared with other alternatives, not only asymptotically, but also for finite samples through Monte Carlo techniques. An application to the log-exchange rates of the Euro against the US dollar is also provided.
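For orientation, the sketch below implements the classical (first-order) Hill estimator of the tail index γ; the minimum-variance reduced-bias classes studied in the article refine this baseline. The simulated Pareto-type data and the choice of k are purely illustrative.

```python
import numpy as np

def hill_estimator(x, k):
    """Classical Hill estimator of the tail index gamma from the k largest observations."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]     # descending order statistics
    if not 0 < k < len(x):
        raise ValueError("k must satisfy 0 < k < n")
    # average log-excess of the k largest observations over the (k+1)-th largest
    return np.mean(np.log(x[:k])) - np.log(x[k])

# illustrative data: standard Pareto sample with true gamma = 1/alpha = 0.5
rng = np.random.default_rng(1)
sample = rng.pareto(2.0, size=5000) + 1.0
print(hill_estimator(sample, k=200))
```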

4.
Decision theory is applied to the general problem of comparing two treatments in an experiment with subjects assigned to the treatments at random. The inferential agenda covers collection of evidence about superiority, non‐inferiority and average bioequivalence of the treatments. The proposed approach requires defining the terms ‘small’ and ‘large’ to qualify the magnitude of the treatment effect and specifying the losses (or loss functions) that quantify the consequences of the incorrect conclusions. We argue that any analysis that ignores these two inputs is deficient, and so is any ad hoc way of taking them into account. Sample size calculation for studies intended to be analysed by this approach is also discussed.

5.
Many hypothesis testing problems in practice require the selection of the left-side or the right-side alternative when the null is rejected. For parametric models, this problem can be stated as H0: θ = θ0 vs. H−: θ < θ0 or H+: θ > θ0. Frequentists use the Type-III error (directional error) to develop statistical methodologies. This approach and other approaches considered in the literature do not take into account situations where the selection of one side may be more important, or where one side may be more probable, than the other. This problem can be tackled by specifying a loss function and/or by specifying a hierarchical prior structure that allows skewness in the alternatives. Based on this, we develop a Bayesian decision-theoretic methodology and show that the resulting Bayes rule performs better on the side of the alternatives that is more probable. The methodology can also be used in a frequentist framework when it is desired to discover an alternative that is more important. We also consider the multiple-hypotheses problem and develop new false discovery rates for the selection of the left and right sides of the alternatives. These discovery rates would be useful in situations where one side of the alternatives is more important or more probable than the other.
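A minimal sketch of the directional decision idea, assuming a normal likelihood with a conjugate normal prior and simple asymmetric losses for declaring the wrong side; the hierarchical, skewed prior structure and the multiple-testing extension developed in the paper are not reproduced here. All numerical values are illustrative.

```python
import numpy as np
from scipy import stats

def choose_side(xbar, n, sigma, theta0, prior_mean, prior_sd,
                loss_wrong_minus=1.0, loss_wrong_plus=4.0):
    """Pick the left (H-) or right (H+) alternative by minimising posterior expected loss.

    Assumes x_i ~ N(theta, sigma^2) with a conjugate N(prior_mean, prior_sd^2) prior.
    The losses penalise declaring a side when theta lies on the other side of theta0;
    unequal values encode that one directional error is considered more serious.
    """
    prec = 1.0 / prior_sd**2 + n / sigma**2                  # posterior precision
    post_mean = (prior_mean / prior_sd**2 + n * xbar / sigma**2) / prec
    post_sd = prec ** -0.5
    p_minus = stats.norm.cdf(theta0, loc=post_mean, scale=post_sd)   # P(theta < theta0 | data)
    p_plus = 1.0 - p_minus
    risk_minus = loss_wrong_minus * p_plus     # expected loss of declaring theta < theta0
    risk_plus = loss_wrong_plus * p_minus      # expected loss of declaring theta > theta0
    side = "H-: theta < theta0" if risk_minus < risk_plus else "H+: theta > theta0"
    return side, p_minus, p_plus

print(choose_side(xbar=0.3, n=25, sigma=1.0, theta0=0.0, prior_mean=0.0, prior_sd=1.0))
```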

6.
Complex computer codes are widely used in science to model physical systems. Sensitivity analysis aims to measure the contributions of the inputs to the variability of the code output. An efficient tool for such analysis is the class of variance-based methods, which have recently been investigated in the framework of dependent inputs. One issue is that they require a large number of runs of the complex simulator. To handle this, a Gaussian process (GP) regression model may be used to approximate the complex code. In this work, we propose to decompose a GP into a high-dimensional representation. This leads to the definition of a variance-based sensitivity measure well tailored for non-independent inputs. We give a methodology to estimate these indices and to quantify their uncertainty. Finally, the approach is illustrated on toy functions and on a river flood model.
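A hedged sketch of the surrogate idea using scikit-learn's Gaussian process regressor (an assumed tool, not the paper's implementation, which decomposes the GP itself and handles dependent inputs): fit a GP emulator to a small design, then run the cheap surrogate inside a crude pick-freeze estimate of a first-order index. The test function, design size and index estimator are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_code(X):
    """Stand-in for a costly simulator (illustrative)."""
    return np.sin(X[:, 0]) + 0.7 * X[:, 1] ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(-np.pi, np.pi, size=(60, 2))       # small training design
y_train = expensive_code(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[1.0, 1.0]),
                              normalize_y=True)
gp.fit(X_train, y_train)

# use the cheap surrogate in place of the simulator for a crude first-order index of X1
N = 50_000
A = rng.uniform(-np.pi, np.pi, size=(N, 2))
B = rng.uniform(-np.pi, np.pi, size=(N, 2))
AB = A.copy(); AB[:, 0] = B[:, 0]
f_A, f_B, f_AB = (gp.predict(Z) for Z in (A, B, AB))
S1 = np.mean(f_B * (f_AB - f_A)) / np.var(np.concatenate([f_A, f_B]))
print("first-order index of X1 (surrogate-based):", S1)
```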

7.
This paper reviews five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main questions are: when should which type of analysis be applied, and which statistical techniques may then be used? This paper claims that the proper sequence to follow in the evaluation of simulation models is as follows. 1) Validation, in which the availability of data on the real system determines which type of statistical technique to use for validation. 2) Screening: in the simulation's pilot phase the really important inputs can be identified through a novel technique, called sequential bifurcation, which uses aggregation and sequential experimentation. 3) Sensitivity analysis: the really important inputs should be subjected to a more detailed analysis, which includes interactions between these inputs; relevant statistical techniques are design of experiments (DOE) and regression analysis. 4) Uncertainty analysis: the important environmental inputs may have values that are not precisely known, so the uncertainties of the model outputs that result from the uncertainties in these model inputs should be quantified; relevant techniques are the Monte Carlo method and Latin hypercube sampling. 5) Optimization: the policy variables should be controlled; a relevant technique is Response Surface Methodology (RSM), which combines DOE, regression analysis, and steepest-ascent hill-climbing. The recommended sequence implies that sensitivity analysis precedes uncertainty analysis. Several case studies for each phase are briefly discussed in this paper.
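As an illustration of step 4 (uncertainty analysis via Latin hypercube sampling), the sketch below uses SciPy's quasi-Monte Carlo module (scipy.stats.qmc, available in SciPy 1.7 and later); the two uncertain inputs, their distributions and the stand-in simulation model are hypothetical.

```python
import numpy as np
from scipy.stats import qmc, norm

# Latin hypercube design over two uncertain environmental inputs (illustrative)
sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=1000)                       # points in the unit square

# map the unit-square sample to the assumed input distributions
demand = qmc.scale(u[:, [0]], l_bounds=[80.0], u_bounds=[120.0]).ravel()  # uniform input
rate = norm.ppf(u[:, 1], loc=0.05, scale=0.01)                            # normal input

def simulation_model(demand, rate):
    """Stand-in for the simulation model under study (hypothetical)."""
    return demand * np.exp(-rate * 10.0)

output = simulation_model(demand, rate)
print("mean:", output.mean(), " 95% interval:", np.percentile(output, [2.5, 97.5]))
```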

8.
A framework for the asymptotic analysis of local power properties of tests of stationarity in time series analysis is developed. Appropriate sequences of locally stationary processes are defined that converge at a controlled rate to a limiting stationary process as the length of the time series increases. Different interesting classes of local alternatives to the null hypothesis of stationarity are then considered, and the local power properties of some recently proposed, frequency domain‐based tests for stationarity are investigated. Some simulations illustrate our theoretical findings.

9.
A novel framework is proposed for the estimation of multiple sinusoids from irregularly sampled time series. This spectral analysis problem is addressed as an under-determined inverse problem, where the spectrum is discretized on an arbitrarily thin frequency grid. As we focus on line spectra estimation, the solution must be sparse, i.e. the amplitude of the spectrum must be zero almost everywhere. Such prior information is taken into account within the Bayesian framework. Two models are used to account for the prior sparseness of the solution, namely a Laplace prior and a Bernoulli–Gaussian prior, associated with optimization and stochastic sampling algorithms, respectively. Such approaches are efficient alternatives to the usual sequential prewhitening methods, especially in the case of strong sampling aliases perturbing the Fourier spectrum. Both methods should be intensively tested on real data sets by physicists.
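At the posterior mode, a Laplace prior on the amplitudes corresponds to an ℓ1-penalised least-squares fit, so a rough impression of the optimization variant can be obtained with an off-the-shelf Lasso applied to a sine/cosine dictionary on a thin frequency grid. The sampling times, frequencies, noise level, penalty and detection threshold below are illustrative choices, not the paper's algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 100.0, size=200))            # irregular sampling times
y = 1.5 * np.sin(2 * np.pi * 0.12 * t) + 0.8 * np.sin(2 * np.pi * 0.31 * t)
y += 0.3 * rng.standard_normal(t.size)

# overcomplete dictionary of sinusoids on a thin frequency grid
freqs = np.linspace(0.01, 0.5, 500)
design = np.hstack([np.cos(2 * np.pi * np.outer(t, freqs)),
                    np.sin(2 * np.pi * np.outer(t, freqs))])

# Laplace prior on the amplitudes <-> l1 penalty at the posterior mode
fit = Lasso(alpha=0.05, max_iter=50_000).fit(design, y)
amp = np.hypot(fit.coef_[:500], fit.coef_[500:])           # amplitude per grid frequency
print("frequencies with non-negligible amplitude:", freqs[amp > 0.1])
```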

10.
We establish general conditions for the asymptotic validity of single-stage multiple-comparison procedures (MCPs) under the following general framework. There is a finite number of independent alternatives to compare, where each alternative can represent, e.g., a population, treatment, system or stochastic process. Associated with each alternative is an unknown parameter to be estimated, and the goal is to compare the alternatives in terms of the parameters. We establish the MCPs' asymptotic validity, which occurs as the sample size of each alternative grows large, under two assumptions. First, for each alternative, the estimator of its parameter satisfies a central limit theorem (CLT). Second, we have a consistent estimator of the variance parameter appearing in the CLT. Our framework encompasses comparing means (or other moments) of independent (not necessarily normal) populations, functions of means, quantiles, steady-state means of stochastic processes, and optimal solutions of stochastic approximation by the Kiefer–Wolfowitz algorithm. The MCPs we consider are multiple comparisons with the best, all pairwise comparisons, all contrasts, and all linear combinations, and they allow for unknown and unequal variance parameters and unequal sample sizes across alternatives.
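A minimal sketch of one such single-stage MCP in the simplest covered setting, comparing means of independent (not necessarily normal) populations: CLT-based simultaneous intervals for all pairwise differences with consistent variance estimates, using a Bonferroni split of α as an illustrative, conservative adjustment rather than the paper's construction.

```python
import numpy as np
from scipy import stats
from itertools import combinations

def all_pairwise(samples, alpha=0.05):
    """CLT-based simultaneous intervals for all pairwise differences of means.

    samples : list of 1-D arrays, one per alternative (unequal sizes allowed).
    A Bonferroni split of alpha across the comparisons is used (illustrative choice).
    """
    k = len(samples)
    pairs = list(combinations(range(k), 2))
    z = stats.norm.ppf(1.0 - alpha / (2 * len(pairs)))
    intervals = {}
    for i, j in pairs:
        xi, xj = samples[i], samples[j]
        diff = xi.mean() - xj.mean()
        se = np.sqrt(xi.var(ddof=1) / xi.size + xj.var(ddof=1) / xj.size)  # consistent variance estimates
        intervals[(i, j)] = (diff - z * se, diff + z * se)
    return intervals

rng = np.random.default_rng(3)
data = [rng.exponential(m, size=n) for m, n in [(1.0, 400), (1.2, 300), (1.5, 500)]]
for pair, ci in all_pairwise(data).items():
    print(pair, ci)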

11.
A framework for designing and analyzing computer experiments is presented, which is constructed for dealing with functional and scalar inputs and scalar outputs. For designing experiments with both functional and scalar inputs, a two-stage approach is suggested. The first stage consists of constructing a candidate set for each functional input. During the second stage, an optimal combination of the found candidate sets and a Latin hypercube for the scalar inputs is sought. The resulting designs can be considered to be generalizations of Latin hypercubes. Gaussian process models are explored as metamodels. The functional inputs are incorporated into the Kriging model by applying norms in order to define distances between two functional inputs. We propose the use of B-splines to make the calculation of these norms computationally feasible.
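A small sketch of the norm idea, assuming each functional input is observed on its own grid and represented by a cubic B-spline (SciPy's make_interp_spline); the L2 distance between two such curves is then approximated by quadrature on a fine common grid. How such distances enter the Kriging correlation function is not shown, and the curves used are illustrative.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

def l2_distance(t1, f1, t2, f2, n_grid=2001):
    """Approximate L2 distance between two functional inputs via cubic B-splines."""
    s1 = make_interp_spline(t1, f1, k=3)      # B-spline representation of each curve
    s2 = make_interp_spline(t2, f2, k=3)
    lo = max(np.min(t1), np.min(t2))
    hi = min(np.max(t1), np.max(t2))
    grid = np.linspace(lo, hi, n_grid)        # common evaluation grid
    diff = s1(grid) - s2(grid)
    w = np.gradient(grid)                     # simple quadrature weights
    return np.sqrt(np.sum(w * diff**2))

t = np.linspace(0.0, 1.0, 30)
print(l2_distance(t, np.sin(2 * np.pi * t), t, np.sin(2 * np.pi * t + 0.3)))
```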

12.
The proportional hazards regression model of Cox (1972) is widely used in analyzing survival data. We examine several goodness-of-fit tests for checking the proportionality of hazards in the Cox model with two-sample censored data, and compare the performance of these tests by a simulation study. The strengths and weaknesses of the tests are pointed out. The effects of the extent of random censoring on the size and power are also examined. Results of a simulation study demonstrate that Gill and Schumacher's test is most powerful against a broad range of monotone departures from the proportional hazards assumption, but it may not perform well for alternatives with a nonmonotone hazard ratio. For the latter kind of alternatives, Andersen's test may detect patterns of irregular changes in hazards.

13.
There are two distinct definitions of "P-value" for evaluating a proposed hypothesis or model for the process generating an observed dataset. The original definition starts with a measure of the divergence of the dataset from what was expected under the model, such as a sum of squares or a deviance statistic. A P-value is then the ordinal location of the measure in a reference distribution computed from the model and the data, and is treated as a unit-scaled index of compatibility between the data and the model. In the other definition, a P-value is a random variable on the unit interval whose realizations can be compared to a cutoff α to generate a decision rule with known error rates under the model and specific alternatives. It is commonly assumed that realizations of such decision P-values always correspond to divergence P-values. But this need not be so: Decision P-values can violate intuitive single-sample coherence criteria where divergence P-values do not. It is thus argued that divergence and decision P-values should be carefully distinguished in teaching, and that divergence P-values are the relevant choice when the analysis goal is to summarize evidence rather than implement a decision rule.
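A minimal simulation sketch of the divergence definition: the P-value is the ordinal location of an observed divergence statistic within a reference distribution computed under the proposed model. The Poisson model and the Pearson-type statistic are illustrative assumptions, not taken from the article.

```python
import numpy as np

def divergence_p_value(observed, model_sampler, statistic, n_ref=10_000, rng=None):
    """Ordinal location of the observed divergence in a model-based reference distribution."""
    rng = np.random.default_rng(rng)
    t_obs = statistic(observed)
    t_ref = np.array([statistic(model_sampler(rng)) for _ in range(n_ref)])
    # proportion of reference draws at least as divergent as the data
    return (1 + np.sum(t_ref >= t_obs)) / (n_ref + 1)

# illustrative example: Poisson(5) model checked with a Pearson-type divergence
rng = np.random.default_rng(4)
data = rng.poisson(7.0, size=40)                           # data actually generated from Poisson(7)
sampler = lambda g: g.poisson(5.0, size=40)                # draws under the proposed model
pearson = lambda x: np.sum((x - 5.0) ** 2 / 5.0)           # divergence from the model mean
print(divergence_p_value(data, sampler, pearson))
```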

14.
Summary. Standard goodness-of-fit tests for a parametric regression model against a series of nonparametric alternatives are based on residuals arising from a fitted model. When a parametric regression model is compared with a nonparametric model, goodness-of-fit testing can be naturally approached by evaluating the likelihood of the parametric model within a nonparametric framework. We employ the empirical likelihood for an α-mixing process to formulate a test statistic that measures the goodness of fit of a parametric regression model. The technique is based on a comparison with kernel smoothing estimators. The empirical likelihood formulation of the test has two attractive features. One is its automatic consideration of the variation that is associated with the nonparametric fit due to empirical likelihood's ability to Studentize internally. The other is that the asymptotic distribution of the test statistic is free of unknown parameters, avoiding plug-in estimation. We apply the test to a discretized diffusion model which has recently been considered in financial market analysis.

15.
Several omnibus tests of the proportional hazards assumption have been proposed in the literature. In the two-sample case, tests have also been developed against ordered alternatives like monotone hazard ratio and monotone ratio of cumulative hazards. Here we propose a natural extension of these partial orders to the case of continuous and potentially time varying covariates, and develop tests for the proportional hazards assumption against such ordered alternatives. The work is motivated by applications in biomedicine and economics where covariate effects often decay over lifetime. The proposed tests do not make restrictive assumptions on the underlying regression model, and are applicable in the presence of time varying covariates, multiple covariates and frailty. Small sample performance and an application to real data highlight the use of the framework and methodology to identify and model the nature of departures from proportionality.

16.
Financial, social and ecological cost and benefit criteria were used to examine the sustainability of land-use options on an upland Scottish estate. The costs and benefits of alternative land uses were examined using a psychometric modelling technique based on decision-conferencing with an expert group. Three decision models based respectively on financial, social and ecological criteria were developed, compared and then integrated. The management options that emerged from the integrated model as the best compromises appeared to be reasonable and feasible within the framework of facts and projections available to the group. The effects of different future scenarios, such as changes in fiscal policy, were also examined using the models. A three-dimensional model constructed from a principal components analysis of the input data produced axes that seemed to reflect certain socio-political archetypes.

17.
Two discrete-time insurance models are studied in the framework of the cost approach. Since the models are non-deterministic, one deals with decision making under uncertainty. Three different situations are investigated: (1) the underlying processes are stochastic but their probability distributions are given; (2) information concerning the distribution laws is incomplete; (3) nothing is known about the processes under consideration. Mathematical methods useful for establishing the (asymptotically) optimal control are demonstrated in each case. Algorithms for the calculation of critical levels are proposed. Numerical results are presented as well.

18.
Estimating how a model output is influenced by variations in its inputs has become an important problem in reliability and sensitivity analysis. This article is concerned with estimating sensitivity indices that quantify the contribution of the inputs to the variance of the model output. A multivariate mixed kernel estimator is investigated since, until now, discrete and continuous inputs have been considered separately in kernel estimation of sensitivity indices. To illustrate the differences between the influence of mixed, discrete, and continuous inputs, analytical expressions of the Sobol sensitivity indices are derived in these three cases for the Ishigami test function. In addition, the performance of the mixed kernel estimator is illustrated through simulations in which a Bayesian procedure is applied for the choice of the bandwidth parameter. An application to a real example is also presented. Finally, the use of a kernel estimator appropriate to the type of inputs is found to influence the accuracy of the sensitivity index estimates.
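For reference, the sketch below gives the Ishigami test function and its closed-form first-order Sobol indices in the standard all-continuous case (inputs uniform on [−π, π]), together with a crude pick-freeze Monte Carlo check; the mixed and discrete variants analysed in the article are not reproduced.

```python
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    """Ishigami test function with inputs uniform on [-pi, pi]."""
    return np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2 + b * X[:, 2] ** 4 * np.sin(X[:, 0])

# closed-form first-order Sobol indices in the all-continuous case
a, b = 7.0, 0.1
V = a**2 / 8 + b * np.pi**4 / 5 + b**2 * np.pi**8 / 18 + 0.5
S_exact = np.array([0.5 * (1 + b * np.pi**4 / 5) ** 2, a**2 / 8, 0.0]) / V
print("analytical S1, S2, S3:", S_exact)

# crude pick-freeze Monte Carlo check of the same quantities
rng = np.random.default_rng(5)
n = 200_000
A = rng.uniform(-np.pi, np.pi, size=(n, 3))
B = rng.uniform(-np.pi, np.pi, size=(n, 3))
fA, fB = ishigami(A), ishigami(B)
var_y = np.var(np.concatenate([fA, fB]))
for i in range(3):
    AB = A.copy(); AB[:, i] = B[:, i]
    print(f"Monte Carlo S{i + 1}:", np.mean(fB * (ishigami(AB) - fA)) / var_y)
```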

19.
In the present paper we find finite-dimensional spaces W of alternatives with high power for a given class of tests and non-parametric alternatives. On the orthogonal complement of W the power function is flat. These methods can be used to reduce the dimension of the interesting alternatives. We sketch a device for calculating (approximately) an alternative with maximum power of a fixed test on a given ball of certain non-parametric alternatives.

The calculations are done within different asymptotic models specified by signal detection tests. Specific tests are Kolmogorov–Smirnov type tests, integral tests (like the Anderson–Darling test) and Rényi tests for hazard-based models. The statistical meaning and interpretation of the spaces of alternatives with high power is discussed. These alternatives belong to least favorable directions of a class of statistical functionals which are linear combinations of quantile functions. For various cases their meaning is explained for parametric submodels, in particular for location alternatives.

