Similar Literature (20 results)
1.
This paper reviews five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main questions are: when should which type of analysis be applied, and which statistical techniques may then be used? This paper claims that the proper sequence to follow in the evaluation of simulation models is as follows. 1) Validation, in which the availability of data on the real system determines which type of statistical technique to use. 2) Screening: in the simulation's pilot phase the really important inputs can be identified through a novel technique, called sequential bifurcation, which uses aggregation and sequential experimentation. 3) Sensitivity analysis: the really important inputs should be subjected to a more detailed analysis, which includes interactions between these inputs; relevant statistical techniques are design of experiments (DOE) and regression analysis. 4) Uncertainty analysis: the important environmental inputs may have values that are not precisely known, so the uncertainties of the model outputs that result from the uncertainties in these model inputs should be quantified; relevant techniques are the Monte Carlo method and Latin hypercube sampling. 5) Optimization: the policy variables should be controlled; a relevant technique is Response Surface Methodology (RSM), which combines DOE, regression analysis, and steepest-ascent hill-climbing. The recommended sequence implies that sensitivity analysis precedes uncertainty analysis. Several case studies for each phase are briefly discussed in this paper.
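To make step 4 concrete, here is a minimal uncertainty-analysis sketch in Python using Latin hypercube sampling via SciPy's qmc module; the toy simulator f and the input ranges are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.stats import qmc

# Toy deterministic simulator standing in for a real simulation model
# (hypothetical; any function of the uncertain inputs would do).
def f(x1, x2):
    return x1 ** 2 + 3.0 * x1 * x2

# Uncertain environmental inputs, each known only up to a range.
lower = np.array([0.5, 1.0])
upper = np.array([1.5, 2.0])

# Latin hypercube sample of the input space: each input's range is
# divided into n strata and each stratum is sampled exactly once.
sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=1000)            # points in [0, 1)^2
x = qmc.scale(u, lower, upper)        # rescale to the input ranges

# Propagate the input uncertainty through the model and summarize
# the induced uncertainty in the output.
y = f(x[:, 0], x[:, 1])
print(f"output mean  = {y.mean():.3f}")
print(f"output sd    = {y.std(ddof=1):.3f}")
print(f"90% interval = ({np.percentile(y, 5):.3f}, {np.percentile(y, 95):.3f})")
```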

2.
A sequential procedure for selecting the better of two trinomial populations has been proposed by ?idók (1988). The present paper reports Monte Carlo results for four different strategies of sequential experimentation in this procedure, compares the strategies on this basis, and gives some practical recommendations for choosing among them.

3.
Celebrating the 20th anniversary of the presentation of the paper by Dempster, Laird and Rubin which popularized the EM algorithm, we investigate, after a brief historical account, strategies that aim to make the EM algorithm converge faster while maintaining its simplicity and stability (e.g. automatic monotone convergence in likelihood). First we introduce the idea of a 'working parameter' to facilitate the search for efficient data augmentation schemes and thus fast EM implementations. Second, summarizing various recent extensions of the EM algorithm, we formulate a general alternating expectation–conditional maximization (AECM) algorithm that couples flexible data augmentation schemes with model reduction schemes to achieve efficient computations. We illustrate these methods using multivariate t-models with known or unknown degrees of freedom and Poisson models for image reconstruction. We show, through both empirical and theoretical evidence, the potential for a dramatic reduction in computational time with little increase in human effort. We also discuss the intrinsic connection between EM-type algorithms and the Gibbs sampler, and the possibility of using the techniques presented here to speed up the latter. The main conclusion of the paper is that, with the help of statistical considerations, it is possible to construct algorithms that are simple, stable and fast.
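As a hedged illustration of the kind of EM iteration discussed, the sketch below implements EM for the simplest case mentioned, a univariate location-scale t-model with known degrees of freedom; the data and starting values are synthetic.

```python
import numpy as np

def em_t(x, nu, mu=0.0, sigma2=1.0, iters=200, tol=1e-10):
    """EM for the location-scale t-model with known df nu.

    The missing data are the latent gamma weights in the normal/gamma
    mixture representation of the t distribution; each iteration
    monotonically increases the observed-data likelihood.
    """
    n = len(x)
    for _ in range(iters):
        # E-step: conditional expectation of the latent weights.
        d2 = (x - mu) ** 2 / sigma2
        w = (nu + 1.0) / (nu + d2)
        # M-step: weighted location and scale updates.
        mu_new = np.sum(w * x) / np.sum(w)
        sigma2_new = np.sum(w * (x - mu_new) ** 2) / n
        if abs(mu_new - mu) + abs(sigma2_new - sigma2) < tol:
            mu, sigma2 = mu_new, sigma2_new
            break
        mu, sigma2 = mu_new, sigma2_new
    return mu, sigma2

rng = np.random.default_rng(0)
x = 2.0 + rng.standard_t(df=4, size=500)   # synthetic data, df assumed known
print(em_t(x, nu=4.0))
```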

4.
Time-sharing computer configurations have introduced a new dimension in applying statistical and mathematical models to sequential decision problems. When the outcome of one step in the process influences subsequent decisions, an interactive time-sharing system is of great help. Since the forecasting function involves such a sequential process, it can be handled particularly well with an appropriate time-shared computer system. This paper describes such a system, which allows the user to do preliminary analysis of his data to identify the forecasting technique or class of techniques most appropriate for his situation and to apply those in developing a forecast. This interactive forecasting system has met with excellent success both in teaching the fundamentals of forecasting for business decision making and in actually applying those techniques in management situations.

5.
This paper reviews a number of extreme value models which have been applied to corrosion problems. The techniques considered are used to model and predict the statistical behaviour of corrosion extremes, such as the largest pit, thinnest wall, maximum penetration or similar assessment of the corrosion phenomenon. These techniques can be applied to measurements over a regular grid or to measurements of selected extremes, and can be adapted to accommodate all values over a selected threshold, a selected number of the largest values, or only the single largest value. Data can come from one coupon or several coupons, and can be modelled to allow for dependence on environmental conditions, surface area examined, and duration of exposure or of experimentation. The techniques are demonstrated on data from laboratory experiments and also on data collected in an industrial context.
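As a sketch of the simplest such analysis, the code below fits a generalized extreme value distribution to per-coupon maxima; the pit-depth data are synthetic, and note that SciPy's shape parameter c is the negative of the usual GEV shape parameter.

```python
import numpy as np
from scipy import stats

# Synthetic "largest pit depth per coupon" data (invented for illustration);
# in practice these would be measured extremes from inspected coupons.
rng = np.random.default_rng(1)
pit_max = rng.gumbel(loc=0.8, scale=0.15, size=40)   # depths in mm

# Fit a generalized extreme value distribution to the coupon maxima.
c, loc, scale = stats.genextreme.fit(pit_max)

# Predicted depth exceeded by the largest pit on 1 coupon in 100,
# i.e. the 99th percentile of the fitted extreme value model.
q99 = stats.genextreme.ppf(0.99, c, loc=loc, scale=scale)
print("99% quantile of max pit depth: %.3f mm" % q99)
```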

6.
Deterministic computer simulations are often used as replacements for complex physical experiments. Although less expensive than physical experimentation, computer codes can still be time-consuming to run. An effective strategy for exploring the response surface of the deterministic simulator is the use of an approximation to the computer code, such as a Gaussian process (GP) model, coupled with a sequential sampling strategy for choosing design points that can be used to build the GP model. The ultimate goal of such studies is often the estimation of specific features of interest of the simulator output, such as the maximum, minimum, or a level set (contour). Before approximating such features with the GP model, sufficient runs of the computer simulator must be completed. Sequential designs with an expected improvement (EI) design criterion can yield good estimates of the features with a minimal number of runs. The challenge is that the expected improvement function itself is often multimodal and difficult to maximize. We develop branch and bound algorithms for efficiently maximizing the EI function in specific problems, including the simultaneous estimation of a global maximum and minimum, and the estimation of a contour. These branch and bound algorithms outperform other optimization strategies such as genetic algorithms, and can lead to significantly more accurate estimation of the features of interest.
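The expected improvement criterion itself is easy to state; a minimal sketch follows, assuming GP posterior means and standard deviations are already available (the stand-in values below are invented). The paper maximizes EI by branch and bound; here a plain grid evaluation stands in for that step.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sd, f_min):
    """EI for minimization at points with GP posterior mean mu and sd sd.

    EI(x) = (f_min - mu) * Phi(z) + sd * phi(z),  z = (f_min - mu) / sd.
    """
    sd = np.maximum(sd, 1e-12)        # guard against zero predictive sd
    z = (f_min - mu) / sd
    return (f_min - mu) * norm.cdf(z) + sd * norm.pdf(z)

# Hypothetical GP predictions on a candidate grid (in practice these come
# from a GP model fitted to the completed simulator runs).
xs = np.linspace(0.0, 1.0, 201)
mu = np.sin(6.0 * xs)                 # stand-in posterior mean
sd = 0.3 * np.ones_like(xs)          # stand-in posterior sd
f_min = mu.min()                      # best simulator output so far (stand-in)

ei = expected_improvement(mu, sd, f_min)
print("next run at x = %.3f (EI = %.4f)" % (xs[np.argmax(ei)], ei.max()))
```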

7.
Response surface experimentation is an integral part of the development of a new process or product, but the relatively efficient statistical methodologies for such experimentation are underutilized by research and development scientists and engineers because of a lack of knowledge and/or understanding of these methodologies. To help increase their utilization, a simplified approach to one such statistical methodology, known as the determination of optimum conditions, has been developed which can be used by scientists and engineers with a minimum of statistical knowledge.

8.
For many stochastic models, it is difficult to make inference about the model parameters because it is impossible to write down a tractable likelihood given the observed data. A common solution is data augmentation in a Markov chain Monte Carlo (MCMC) framework. However, there are statistical problems where this approach has proved infeasible but where simulation from the model is straightforward, leading to the popularity of the approximate Bayesian computation algorithm. We introduce a forward simulation MCMC (fsMCMC) algorithm, which is primarily based upon simulation from the model. The fsMCMC algorithm formulates the simulation of the process explicitly as a data augmentation problem. By exploiting non-centred parameterizations, an efficient MCMC updating scheme for the parameters and augmented data is introduced, whilst maintaining straightforward simulation from the model. The fsMCMC algorithm is successfully applied to two distinct epidemic models, including a birth–death–mutation model that has previously been analysed only using approximate Bayesian computation methods.
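The fsMCMC algorithm is tied to the epidemic models in the paper, so the sketch below only illustrates the underlying idea on an invented toy: a binomial model written in non-centred form, where the model's random draws are kept as uniforms and the augmented likelihood reduces to an exact-match indicator, so every update requires only forward simulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: x_obs ~ Binomial(n, theta), theta ~ Uniform(0, 1).
# Non-centred augmentation: u_1..u_n iid Uniform(0, 1) and the forward
# simulation is x(theta, u) = #{i : u_i < theta}.  The augmented
# posterior is proportional to the indicator 1{x(theta, u) = x_obs}.
n, x_obs = 50, 18

theta = 0.5
u = np.empty(n)
# Start from an augmented state consistent with the observed data.
u[:x_obs] = rng.uniform(0.0, theta, size=x_obs)
u[x_obs:] = rng.uniform(theta, 1.0, size=n - x_obs)

def count(theta, u):
    return int(np.sum(u < theta))

draws = []
for it in range(20000):
    # Update theta given u: symmetric random-walk proposal, accepted
    # iff the forward simulation still reproduces the observed count.
    prop = theta + 0.05 * rng.standard_normal()
    if 0.0 < prop < 1.0 and count(prop, u) == x_obs:
        theta = prop
    # Update one augmented uniform given theta, same indicator rule.
    i = rng.integers(n)
    old = u[i]
    u[i] = rng.uniform()
    if count(theta, u) != x_obs:
        u[i] = old
    draws.append(theta)

post = np.array(draws[5000:])
# The exact posterior here is Beta(x_obs + 1, n - x_obs + 1).
print("posterior mean %.3f (exact %.3f)" % (post.mean(), (x_obs + 1) / (n + 2)))
```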

9.
Response surface methodology is a collection of mathematical and statistical techniques that are useful for the modeling and analysis of problems in which a response of interest is influenced by several independent variables and the objective is to optimize this response. When we are at a point on the response surface that is remote from the optimum, such as the current operating conditions, there is little curvature in the system and the first-order model will be appropriate. In these circumstances, a preliminary procedure such as steepest ascent is usually employed to move sequentially in the direction of maximum increase in the response. To improve the estimation of the parameters of the steepest ascent path, we present, in an efficient way, the augmentation of existing data such that the independent variables are made more orthogonal to each other. Additionally, when we estimate the true path using this method, the bias and the magnitude of the covariance matrix of the estimated path are significantly decreased.
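A minimal sketch of computing a steepest ascent path from a fitted first-order model, which is the basic procedure the paper improves upon; the design and response values are invented.

```python
import numpy as np

# Coded 2^2 factorial with centre runs (invented data); y is the response.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [0, 0], [0, 0], [0, 0]], dtype=float)
y = np.array([54.3, 60.1, 64.6, 68.0, 62.3, 61.8, 62.4])

# Fit the first-order model y = b0 + b1*x1 + b2*x2 by least squares.
A = np.column_stack([np.ones(len(y)), X])
b0, b1, b2 = np.linalg.lstsq(A, y, rcond=None)[0]

# The steepest ascent direction is proportional to the coefficient vector.
direction = np.array([b1, b2])
direction /= np.linalg.norm(direction)

# Candidate runs along the path, one coded unit apart.
for t in range(1, 6):
    print("step %d: coded point %s" % (t, np.round(t * direction, 3)))
```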

10.
Under some very reasonable hypotheses, it becomes evident that randomizing the run order of a factorial experiment does not always neutralize the effect of undesirable factors; these factors can still influence the response, depending on the order in which the experiments are conducted. On the other hand, changing the factor levels is often costly, so it is not reasonable to leave the number of changes required to chance. For this reason, run orders that offer the minimum number of factor level changes and at the same time minimize the possible influence of undesirable factors on the experimentation have been sought. Sequences which are known to produce the desired properties in designs with 8 and 16 experiments can be found in the literature. In this paper, we provide the best possible sequences for designs with 32 experiments, as well as sequences that offer excellent properties for designs with 64 and 128 experiments. The method used to find them is based on a mixture of algorithmic searches and an augmentation of smaller designs.
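The two competing criteria, few factor-level changes and little aliasing with an unwanted (here, linear time) trend, can be illustrated by brute force on a tiny 2^3 design; this is only a toy stand-in for the algorithmic search the paper uses for 32 to 128 runs.

```python
import numpy as np
from itertools import product, permutations

runs = np.array(list(product([-1, 1], repeat=3)))   # full 2^3 factorial
trend = np.arange(8) - 3.5                          # centred linear time trend

def level_changes(order):
    d = runs[list(order)]
    return int(np.sum(d[1:] != d[:-1]))             # total factor level changes

def trend_bias(order):
    d = runs[list(order)]
    return float(np.max(np.abs(d.T @ trend)))       # worst trend "alias"

# Brute force over all 8! run orders (feasible only at this tiny size);
# prefer few level changes, break ties by small trend bias.
best = min(permutations(range(8)),
           key=lambda o: (level_changes(o), trend_bias(o)))
print("run order      :", best)
print("level changes  :", level_changes(best))
print("max trend bias :", trend_bias(best))
```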

11.
This paper applies the theory of unimodular matrices to prove that all saturated main effect plans of an s1 × s2 factorial are equivalent from the point of view of D-optimality and are hence all D-optimal. The A- and E-optimal plans in this context have also been derived. An application in sequential experimentation has been considered.

12.
Rotatable designs available for process/product optimization trials are mostly symmetric in nature. In many practical situations, response surface designs (RSDs) with mixed (unequal) factor levels are more suitable, as these designs explore more regions in the design space, but it is hard to obtain rotatable designs with a given level of asymmetry. When experimenting with unequal factor levels via asymmetric second order rotatable designs (ASORDs), the lack of fit of the model may become significant, which ultimately leads to the estimation of parameters based on a higher (third) order model. Experimenting with a new third order rotatable design (TORD) in such a situation would be expensive, as the responses observed from the first-stage runs would be left underutilized. In this paper, we propose a method of constructing asymmetric TORDs by sequentially augmenting some additional points to the ASORDs without discarding the runs in the first stage. The proposed designs will be more economical for obtaining the optimum response, as the design in the first stage can be used to fit the second order model and, with some additional runs, the third order model can be fitted without discarding the initial design.
Keywords: response surface methodology, rotatability, orthogonal transformation, asymmetric, sequential experimentation, third order designs

13.
Optimal three-stage designs with equal sample sizes at each stage are presented and compared to fixed sample designs, fully sequential designs, designs restricted to use the fixed sample critical value at the final stage, and modifications of other group sequential designs previously proposed in the literature. Typically, the greatest savings realized with interim analyses are obtained by the first interim look. More than 50% of the savings possible with a fully sequential design can be realized with a simple two-stage design. Three-stage designs can realize as much as 75% of the possible savings. Without much loss in efficiency, the designs can be modified so that the critical value at the final stage equals the usual fixed sample value while maintaining the overall level of significance, alleviating some potential confusion should a final stage be necessary. Some common group sequential designs, modified to allow early acceptance of the null hypothesis, are shown to be nearly optimal in some settings while performing poorly in others. An example is given to illustrate the use of several three-stage plans in the design of clinical trials.

14.
In this article, it is shown that many intractable problems of Bayesian inference can be cast in a form called “artificial augmenting regression”, in which the application of Markov chain Monte Carlo techniques, especially Gibbs sampling with data augmentation, is rather convenient. The new techniques are illustrated using several challenging statistical problems, and numerical results are presented.
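The paper's artificial augmenting regression construction is its own device; as a generic, hedged illustration of Gibbs sampling with data augmentation in a regression setting, the sketch below implements the classic Albert-Chib probit sampler on synthetic data.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)

# Synthetic probit data (invented): y_i = 1{x_i' beta + e_i > 0}, e_i ~ N(0, 1).
n, beta_true = 300, np.array([-0.5, 1.0])
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)
L = np.linalg.cholesky(XtX_inv)
beta = np.zeros(2)
draws = []

for it in range(3000):
    # Augmentation step: latent z_i ~ N(x_i' beta, 1) truncated to the
    # half-line implied by y_i (positive if y_i = 1, negative otherwise).
    m = X @ beta
    lo = np.where(y == 1, -m, -np.inf)    # standardized truncation bounds
    hi = np.where(y == 1, np.inf, -m)
    z = m + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # Regression step: beta | z ~ N((X'X)^-1 X'z, (X'X)^-1) under a flat prior.
    beta = XtX_inv @ (X.T @ z) + L @ rng.standard_normal(2)
    draws.append(beta.copy())

print("posterior means:", np.round(np.mean(draws[1000:], axis=0), 2))
```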

15.
Constant block-sum designs are of interest in repeated measures experimentation where the treatment levels are quantitative and it is desired that, at the end of the experiment, all units have been exposed to the same constant cumulative dose. It has been shown earlier that constant block-sum balanced incomplete block designs do not exist. As the next choice, we, in this article, explore and construct several constant block-sum partially balanced incomplete block designs. A natural choice is to first explore these designs via magic squares, and the Parshvanath yantram is found to be especially useful in generating designs for block size 4. Using other techniques, such as pair-sums and circular and radial arrangements, we generate a large number of constant block-sum partially balanced incomplete block designs. Their relationship with mixture designs is explored. Finally, we explore the optimization issues that arise when a constant block sum may not be possible for the class of designs with a given set of parameters.
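A hedged sketch of the magic-square route for block size 4: the rows and columns of any 4×4 magic square give blocks with a constant sum of 34. The particular square below is the well-known pandiagonal square often identified with the Parshvanath yantram; whether it matches the paper's exact arrangement is an assumption.

```python
import numpy as np

# A 4x4 pandiagonal magic square (the arrangement often identified with
# the Parshvanath yantram); every row, column and diagonal sums to 34.
M = np.array([[ 7, 12,  1, 14],
              [ 2, 13,  8, 11],
              [16,  3, 10,  5],
              [ 9,  6, 15,  4]])

# Treat the 16 cells as treatment labels 1..16 and take rows and columns
# as blocks of size 4: every block then has the same (constant) block sum.
blocks = np.vstack([M, M.T])
print("block sums:", blocks.sum(axis=1))        # all equal to 34

# Concurrence counts show the design is only *partially* balanced:
# some treatment pairs appear together in a block, others never do.
lam = np.zeros((16, 16), dtype=int)
for b in blocks:
    for i in b:
        for j in b:
            if i != j:
                lam[i - 1, j - 1] += 1
print("distinct concurrence counts:", sorted(set(lam[np.triu_indices(16, 1)])))
```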

16.
We describe two sequential sampling procedures for Bernoulli subset selection which were shown to exhibit desirable behavior for large-sample problems. These procedures have identical performance characteristics in terms of the number of observations taken from any one of the populations under investigation, but one of the procedures employs one-at-a-time sampling while the other allows observations to be taken in blocks during early stages of experimentation. In this paper, a simulation study of their behavior for small-sample cases (n ≤ 25) reveals that they can result in a savings (sometimes substantial) in the expected total number of observations required to terminate the experiment as compared to single-stage procedures. Hence they may be quite useful to a practitioner for screening purposes when sampling is limited.

17.
This paper investigates several techniques for discriminating between two multivariate stationary signals. The methods considered include Gaussian likelihood ratio tests for variance equality, a chi-squared time-domain test, and a spectral-based test. The latter two tests assess equality of the multivariate autocovariance functions of the two signals over many different lags. The Gaussian likelihood ratio test is perhaps best viewed as principal component analysis (PCA) without the dimension reduction aspects; it can be modified to consider covariance features other than variances via dimension augmentation tactics. A simulation study is constructed that shows how one can draw inappropriate conclusions with PCA tests, even when dimension augmentation techniques are used to incorporate non-zero-lag autocovariances into the analysis. The various discrimination methods are first discussed. A simulation study then illuminates the various properties of the methods; in this pursuit, calculations are needed to identify several multivariate time series models with specific autocovariance properties. To demonstrate the applicability of the methods, nine US and Canadian weather stations from three distinct regions are clustered. Here, the spectral clustering perfectly identified the distinct regions, the chi-squared test performed marginally, and the PCA/likelihood ratio method did not perform well.

18.
In a two-sample testing problem, observations from one of the samples are sometimes more difficult and/or costlier to collect than those from the other. It may also be the case that sample observations from one of the populations have been collected previously, and for operational advantages we do not wish to collect any more observations from the second population than are necessary for reaching a decision. Partially sequential techniques are found to be very useful in such situations. The technique gained its popularity in the statistics literature because it capitalizes on the best aspects of both fixed and sequential procedures. The literature is enriched with various types of partially sequential techniques usable under different types of data set-up. Nonetheless, there is no mention of a multivariate data framework in this context, although it is very common in practice. The present paper aims at developing a class of partially sequential nonparametric test procedures for two-sample multivariate continuous data. For this we suggest a suitable stopping rule adopting an inverse sampling technique and propose a class of test statistics based on the samples drawn using the suggested sampling scheme. Various asymptotic properties of the proposed tests are explored. An extensive simulation study is also performed to study the asymptotic performance of the tests. Finally, the benefit of the proposed test procedure is demonstrated with an application to real-life data on liver disease.

19.
We consider the problem of constructing a set of fixed-width simultaneous confidence intervals for the treatment-control differences of means for several independent normal populations with a common unknown variance. Taking c observations from the control population instead of the usual vector-at-a-time approach, a purely sequential estimation methodology is developed and asymptotic second-order characteristics are provided. Brief remarks on the accelerated sequential and three-stage methodologies have been added. Next, with the help of simulations, the performances of the purely sequential, accelerated sequential and three-stage estimation techniques are compared. Overall, the second-order asymptotics are found to provide useful approximations even for moderate sample sizes.
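As a one-population analogue of the purely sequential idea (a Chow-Robbins-type rule, not the paper's treatment-control methodology), the sketch below keeps sampling until the estimated variance supports a confidence interval of the prescribed half-width.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def sequential_fixed_width(d=0.5, alpha=0.05, n0=5):
    """Purely sequential sampling for a fixed-width confidence interval.

    Chow-Robbins-type rule (one-sample analogue, stated here as an
    assumption-laden sketch): stop at the first n >= n0 with
    n >= (z / d)^2 * S_n^2, where S_n^2 is the sample variance.
    """
    z = norm.ppf(1 - alpha / 2)
    x = list(rng.normal(10.0, 2.0, size=n0))   # sigma = 2, unknown to the rule
    while True:
        n, s2 = len(x), np.var(x, ddof=1)
        if n >= (z / d) ** 2 * s2:
            break
        x.append(rng.normal(10.0, 2.0))        # draw one more observation
    mean = np.mean(x)
    return n, (mean - d, mean + d)

# One realization; the asymptotically optimal fixed n is (z*sigma/d)^2, about 62.
n, ci = sequential_fixed_width()
print("stopped at n = %d, interval = (%.2f, %.2f)" % (n, ci[0], ci[1]))
```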

20.
This article presents a design approach for sequential constant-stress accelerated life tests (ALT) with an auxiliary acceleration factor (AAF). The use of an AAF, if one exists, is to further amplify the failure probability of highly reliable test items at low stress levels while maintaining an acceptable degree of extrapolation for reliability inference. Based on a Bayesian design criterion, the optimal plan optimizes the sample allocation, the stress combination, and the loading profile of the AAF. In particular, a step-stress loading profile based on an appropriate cumulative exposure (CE) model is chosen for the AAF so that the initial auxiliary stress will not be too harsh. A case study, providing the motivation and practical importance of our study, is presented to illustrate the proposed planning approach.
