1.
Summary.  We deal with contingency table data that are used to examine the relationships between a set of categorical variables or factors. We assume that such relationships can be adequately described by the conditional independence structure that is imposed by an undirected graphical model. If the contingency table is large, a desirable simplified interpretation can be achieved by combining some categories, or levels, of the factors. We introduce conditions under which such an operation does not alter the Markov properties of the graph. Implementation of these conditions leads to Bayesian model uncertainty procedures based on reversible jump Markov chain Monte Carlo methods. The methodology is illustrated on contingency tables ranging from 2×3×4 up to 4×5×5×2×2.
2.
In this paper, we investigate the use of the contribution to the sample mean plot (CSM plot) as a graphical tool for sensitivity analysis (SA) of computational models. We first provide an exact formula that links, for each uncertain model input Xj, the CSM plot Cj(·) with the first-order variance-based sensitivity index Sj. We then build a new estimate for Sj using polynomial regression of the CSM plot. This estimation procedure allows the computation of Sj from given data, without any SA-specific design of experiment. Numerical results show that this new Sj estimate is efficient for large sample sizes, but that at small sample sizes it does not compare well with other Sj estimation techniques based on given data, such as the effective algorithm for computing global sensitivity indices (EASI) method or metamodel-based approaches.
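The CSM plot itself is straightforward to compute from given data. The sketch below (the function name `csm_plot` is illustrative, not from the paper) evaluates Cj(q): the fraction of the sample mean of the output contributed by the runs whose Xj value falls below the q-quantile of Xj. For a non-influential input the curve hugs the diagonal Cj(q) ≈ q.

```python
import random

def csm_plot(xj, y, quantiles):
    """Contribution to the sample mean (CSM) plot for one input.

    C_j(q) is the fraction of the sample mean of y contributed by the
    model runs whose x_j value falls below the q-quantile of x_j.
    For an input with no influence, C_j(q) is close to q (the diagonal).
    """
    n = len(y)
    y_by_xj = [yi for _, yi in sorted(zip(xj, y))]  # sort outputs by x_j
    total = sum(y_by_xj)
    return [sum(y_by_xj[:int(q * n)]) / total for q in quantiles]
```

Departure of the curve from the diagonal signals influence; the paper's Sj estimate would then fit a polynomial to this curve, a step not shown here.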
3.
Saltelli, Andrea; Tarantola, Stefano; Chan, Karen. Risk Analysis, 1998, 18(6): 799–803
The motivation of the present work is to provide an auxiliary tool for the decision-maker (DM) faced with predictive model uncertainty. The tool is especially suited for the allocation of R&D resources. When taking decisions under uncertainty, making use of the output from mathematical or computational models, the DM might be helped if the uncertainty in model predictions were decomposed in a quantitative, rather than qualitative, fashion, apportioning uncertainty according to source. This would allow optimal use of resources to reduce the imprecision in the prediction. For complex models, such a decomposition of the uncertainty into constituent elements could be impractical as such, due to the large number of parameters involved. If instead parameters could be grouped into logical subsets, then the analysis could be more useful, also because the decision-maker might well have different perceptions (and degrees of acceptance) for different kinds of uncertainty. For instance, the decomposition into groups could involve one subset of factors for each constituent module of the model; or one set for the weights and one for the factors in a multicriteria analysis; or phenomenological parameters of the model vs. factors driving the model configuration/structure (aggregation level, etc.); finally, one might imagine that a partition of the uncertainty could be sought between stochastic (or aleatory) and subjective (or epistemic) uncertainty. The present note shows how to compute a rigorous decomposition of the output's variance with grouped parameters, and how this approach may be beneficial for the efficiency and transparency of the analysis.
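A variance decomposition over a group of inputs can be estimated by Monte Carlo with a pick-freeze scheme: evaluate the model twice, keeping the grouped inputs fixed and resampling the rest, and take the covariance of the two outputs. The sketch below is a minimal illustration of this idea (not the paper's own algorithm); the function name and the uniform-input assumption are ours.

```python
import random

def grouped_first_order(model, d, group, n, rng):
    """Pick-freeze Monte Carlo estimate of the first-order sensitivity
    index of a *group* of inputs: S_u = Cov(Y, Y_u) / Var(Y), where Y_u
    re-evaluates the model with the group-u inputs held fixed and all
    remaining inputs independently resampled.  Inputs ~ U(0, 1)."""
    ya, yb = [], []
    for _ in range(n):
        a = [rng.random() for _ in range(d)]
        # Keep the grouped coordinates, resample the others.
        b = [a[i] if i in group else rng.random() for i in range(d)]
        ya.append(model(a))
        yb.append(model(b))
    ma, mb = sum(ya) / n, sum(yb) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(ya, yb)) / n
    var = sum((p - ma) ** 2 for p in ya) / n
    return cov / var
```

For the additive model Y = X1 + X2 + 2·X3 with independent U(0, 1) inputs, the group {X1, X2} carries (1/12 + 1/12) / (6/12) = 1/3 of the output variance, which the estimator should recover.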
4.
Summary.  Composite indicators are increasingly used for benchmarking countries' performances. Yet doubts are often raised about the robustness of the resulting country rankings and about the significance of the associated policy message. We propose the use of uncertainty analysis and sensitivity analysis to gain useful insights during the process of building composite indicators, including a contribution to the indicators' definition of quality and an assessment of the reliability of country rankings. We discuss to what extent the use of uncertainty and sensitivity analysis may increase transparency or make policy inference more defensible by applying the methodology to a known composite indicator: the United Nations' technology achievement index.
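One simple form of the uncertainty analysis described here is to re-rank the countries many times under randomly perturbed aggregation weights and inspect how stable each rank is. The sketch below is a generic illustration of that idea, not the paper's procedure; the function name, the uniform weight perturbation, and the toy data are all assumptions.

```python
import random

def rank_distribution(scores, n_draws, rng):
    """Monte Carlo check of ranking robustness: re-rank the units under
    randomly drawn (normalised) weights and record each unit's rank
    frequencies.  `scores` maps unit name -> list of normalised
    sub-indicator values; higher composite score = better rank."""
    units = list(scores)
    k = len(next(iter(scores.values())))
    counts = {u: [0] * len(units) for u in units}
    for _ in range(n_draws):
        w = [rng.random() for _ in range(k)]
        s = sum(w)
        w = [wi / s for wi in w]  # weights sum to one
        ci = {u: sum(wi * v for wi, v in zip(w, scores[u])) for u in units}
        for r, u in enumerate(sorted(units, key=lambda u: -ci[u])):
            counts[u][r] += 1
    return counts
```

A unit that dominates every sub-indicator is ranked first under any weighting, so its rank distribution is degenerate; wide rank distributions for other units are exactly the robustness doubts the abstract refers to.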
5.
The aim of this paper is to propose conditions for exploring the class of identifiable Gaussian models with one latent variable. In particular, we focus attention on the topological structure of the complementary graph of the residuals. These conditions are mainly based on the presence of odd cycles and bridge edges in the complementary graph. We propose to use the spanning tree representation of the graph and the associated matrix of fundamental cycles. In this way it is possible to obtain an algorithm able to establish in advance whether, after modifying the graph corresponding to an identifiable model, the resulting graph still corresponds to an identifiable model.
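Of the two graph properties mentioned, the odd-cycle check is the classical bipartiteness test: a graph contains an odd cycle if and only if it is not 2-colourable. The sketch below shows that standard test (it is not the paper's spanning-tree algorithm, and bridge detection, which would use a DFS low-link pass, is omitted).

```python
def has_odd_cycle(adj):
    """A graph contains an odd cycle iff it is not bipartite; test by
    greedy 2-colouring.  adj: node -> iterable of neighbour nodes,
    with every node present as a key."""
    colour = {}
    for start in adj:
        if start in colour:
            continue
        colour[start] = 0
        stack = [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in colour:
                    colour[v] = 1 - colour[u]  # opposite colour
                    stack.append(v)
                elif colour[v] == colour[u]:
                    return True  # same-colour edge: odd cycle found
    return False
```

A triangle has an odd cycle; a path or a 4-cycle does not.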
6.
Sensitivity analysis aims to ascertain how each model input factor influences the variation in the model output. In performing global sensitivity analysis, we often encounter the problem of selecting the required number of runs in order to estimate the first-order and/or the total indices accurately at a reasonable computational cost. The Winding Stairs sampling scheme (Jansen, M.J.W., Rossing, W.A.H., and Daamen, R.A. 1994. In: Gasman, J. and van Straten, G. (Eds.), Predictability and Nonlinear Modelling in Natural Sciences and Economics, pp. 334–343) is designed to provide an economic way to compute these indices. Its main advantage is the multiple use of model evaluations, reducing the total number of model evaluations by more than half. The scheme is used in three simulation studies to compare its performance with the classic Sobol' LPτ approach. Results suggest that the Jansen Winding Stairs method provides better estimates of the total sensitivity indices at small sample sizes.
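The total sensitivity indices discussed here can be estimated with Jansen's formula, which likewise reuses model evaluations across inputs. The sketch below uses a simple radial design rather than the winding-stairs recycling pattern itself; the function name and the uniform-input assumption are ours.

```python
import random

def total_indices(model, d, n, rng):
    """Jansen's estimator of the total sensitivity indices:
    ST_j = E[(Y_A - Y_AB_j)^2] / (2 Var(Y)), where AB_j is the sample
    point A with coordinate j replaced by that of an independent point
    B.  Each base evaluation Y_A is reused for all d inputs, in the
    spirit of economical schemes such as Winding Stairs."""
    num = [0.0] * d
    ya_all = []
    for _ in range(n):
        a = [rng.random() for _ in range(d)]
        b = [rng.random() for _ in range(d)]
        ya = model(a)
        ya_all.append(ya)
        for j in range(d):
            ab = a[:]
            ab[j] = b[j]  # perturb only input j
            num[j] += (ya - model(ab)) ** 2
    m = sum(ya_all) / n
    var = sum((y - m) ** 2 for y in ya_all) / n
    return [nj / (2 * n * var) for nj in num]
```

For Y = X1 + 2·X2 with independent U(0, 1) inputs there are no interactions, so the total indices equal the first-order ones: ST1 = 0.2 and ST2 = 0.8.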
7.
Sensitivity analysis is an essential tool in the development of robust models for engineering, physical sciences, economics and policy-making, but typically requires running the model a large number of times in order to estimate sensitivity measures. While statistical emulators allow sensitivity analysis even on complex models, they only perform well with a moderately low number of model inputs: in higher-dimensional problems they tend to require a restrictively high number of model runs unless the model is relatively linear. Therefore, an open question is how to tackle sensitivity problems in higher dimensionalities, at very low sample sizes. This article examines the relative performance of four sampling-based measures which can be used in such high-dimensional nonlinear problems. The measures tested are the Sobol' total sensitivity indices, the absolute mean of elementary effects, a derivative-based global sensitivity measure, and a modified derivative-based measure. Performance is assessed in a ‘screening’ context, by assessing the ability of each measure to identify influential and non-influential inputs on a wide variety of test functions at different dimensionalities. The results show that the best-performing measure in the screening context is dependent on the model or function, but derivative-based measures have a significant potential at low sample sizes that is currently not widely recognised.
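The absolute mean of elementary effects mentioned above is cheap to sketch. The version below computes each effect radially from a fresh random base point, a simplification of the original Morris trajectory design; the function name and parameter choices are illustrative.

```python
import random

def mean_abs_elementary_effects(model, d, r, delta, rng):
    """Screening via the absolute mean of elementary effects:
    mu*_j = (1/r) * sum over r base points of
            |f(x + delta * e_j) - f(x)| / delta.
    Base points are drawn uniformly so that x_j + delta stays in [0, 1]."""
    mu = [0.0] * d
    for _ in range(r):
        x = [rng.uniform(0, 1 - delta) for _ in range(d)]
        fx = model(x)
        for j in range(d):
            xp = x[:]
            xp[j] += delta  # one-at-a-time step along input j
            mu[j] += abs(model(xp) - fx) / delta
    return [m / r for m in mu]
```

A large mu* flags an influential input; a small mu* flags one that can be fixed at a nominal value, which is exactly the screening task the article evaluates.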
8.
This paper extends the ordinary quasi‐symmetry (QS) model for square contingency tables with commensurable classification variables. The proposed generalised QS model is defined in terms of odds ratios that apply to ordinal variables. In particular, we present QS models based on global, cumulative and continuation odds ratios and discuss their properties. Finally, the conditional generalised QS model is introduced for local and global odds ratios. These models are illustrated through the analysis of two data sets.
9.
Moment independent methods for the sensitivity analysis of model output are attracting growing attention among both academics and practitioners. However, the lack of benchmarks against which to compare numerical strategies forces one to rely on ad hoc experiments in estimating the sensitivity measures. This article introduces a methodology that allows one to obtain moment independent sensitivity measures analytically. We illustrate the procedure by implementing four test cases with different model structures and model input distributions. Numerical experiments are performed at increasing sample size to check convergence of the sensitivity estimates to the analytical values.
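The best-known moment-independent measure compares the unconditional output density with the density conditional on one input. A crude given-data estimate, using histograms rather than the analytical route the article develops, can be sketched as follows (function name and bin counts are our assumptions):

```python
import random

def delta_index(xj, y, n_xbins=10, n_ybins=20):
    """Crude histogram estimate of a density-based sensitivity measure:
    delta_j = 0.5 * E_{Xj}[ integral |f_Y(y) - f_{Y|Xj}(y)| dy ].
    Both densities are approximated on a common grid of y bins."""
    n = len(y)
    lo, hi = min(y), max(y)
    width = (hi - lo) / n_ybins or 1.0
    ybin = [min(int((yi - lo) / width), n_ybins - 1) for yi in y]
    # Marginal histogram of Y (probability mass per bin).
    marg = [0.0] * n_ybins
    for b in ybin:
        marg[b] += 1.0 / n
    # Slice the sample into equal-count bins of x_j and compare
    # each conditional histogram with the marginal one.
    order = sorted(range(n), key=lambda i: xj[i])
    delta, size = 0.0, n // n_xbins
    for k in range(n_xbins):
        idx = order[k * size:(k + 1) * size]
        cond = [0.0] * n_ybins
        for i in idx:
            cond[ybin[i]] += 1.0 / len(idx)
        shift = 0.5 * sum(abs(c - m) for c, m in zip(cond, marg))
        delta += shift / n_xbins
    return delta
```

The estimate is near zero for an input that leaves the output distribution unchanged and large for one that reshapes it; the bias of such histogram estimators at small samples is precisely why analytical benchmarks like those in this article are useful.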
10.
We deal with two-way contingency tables having ordered column categories. We use a row effects model wherein each interaction term is assumed to have a multiplicative form involving a row effect parameter and a fixed column score. We propose a methodology to cluster row effects in order to simplify the interaction structure and to enhance the interpretation of the model. Our method uses a product partition model with a suitable specification of the cohesion function, so that we can carry out our analysis on a collection of models of varying dimensions using a straightforward MCMC sampler. The methodology is illustrated with reference to simulated and real data sets.