Similar Articles (20 results)
1.
Computer models simulating a physical process are used in many areas of science. Due to the complex nature of these codes, it is often necessary to approximate the code, which is typically done using a Gaussian process. In many situations the number of code runs available to build the Gaussian process approximation is limited. When the initial design is small or the underlying response surface is complicated, this can lead to poor approximations of the code output. In order to improve the fit of the model, sequential design strategies must be employed. In this paper we introduce two simple distance-based metrics that can be used to augment an initial design in a batch sequential manner. In addition, we propose a sequential updating strategy for an orthogonal array-based Latin hypercube sample. We show via various real and simulated examples that the distance metrics and the extension of the orthogonal array-based Latin hypercubes work well in practice.
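The paper's two distance metrics are not reproduced here; as a hedged illustration of batch-sequential, distance-based augmentation, the sketch below greedily adds candidate points that maximize the minimum Euclidean distance to the design accumulated so far. All names and the candidate-pool construction are illustrative.

```python
import numpy as np

def augment_maximin(design, candidates, batch_size):
    """Greedily pick `batch_size` candidate points, each maximizing the
    minimum distance to the design selected so far.  A generic
    distance-based rule, not the paper's exact metrics."""
    design = design.copy()
    chosen = []
    for _ in range(batch_size):
        # distance from every remaining candidate to its nearest design point
        d = np.min(np.linalg.norm(
            candidates[:, None, :] - design[None, :, :], axis=2), axis=1)
        best = int(np.argmax(d))
        chosen.append(candidates[best])
        design = np.vstack([design, candidates[best]])
        candidates = np.delete(candidates, best, axis=0)
    return np.array(chosen)

rng = np.random.default_rng(0)
initial = rng.random((10, 2))    # a small initial design in [0, 1]^2
pool = rng.random((500, 2))      # candidate pool of potential code runs
print(augment_maximin(initial, pool, batch_size=5))
```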

2.
In electrical engineering, circuit designs are now often optimized via circuit simulation computer models. Typically, many response variables characterize the circuit's performance. Each response is a function of many input variables, including factors that can be set in the engineering design and noise factors representing manufacturing conditions. We describe a modelling approach which is appropriate for the simulator's deterministic input–output relationships. Non-linearities and interactions are identified without explicit assumptions about the functional form. These models lead to predictors to guide the reduction of the ranges of the designable factors in a sequence of experiments. Ultimately, the predictors are used to optimize the engineering design. We also show how a visualization of the fitted relationships facilitates an understanding of the engineering trade-offs between responses. The example used to demonstrate these methods, the design of a buffer circuit, has multiple targets for the responses, representing different trade-offs between the key performance measures.

3.
In this paper we define a new class of designs for computer experiments. A projection array based design defines sets of simulation runs with properties that extend the conceptual properties of orthogonal array based Latin hypercube sampling, particularly to underlying design structures other than orthogonal arrays. Additionally, we illustrate how these designs can be sequentially augmented to improve the overall projection properties of the initial design or focus on interesting regions of the design space that need further exploration to improve the overall fit of the underlying response surface. We also illustrate how an initial Latin hypercube sample can be expressed as a projection array based design and show how one can augment these designs to improve higher dimensional space filling properties.

4.
Empirical Bayes methods and a bootstrap bias adjustment procedure are used to estimate the size of a closed population when the individual capture probabilities are independently and identically distributed with a Beta distribution. The method is examined in simulations and applied to several well-known datasets. The simulations show the estimator performs as well as several other proposed parametric and non-parametric estimators.
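The empirical Bayes and bootstrap bias-adjustment machinery of the paper is not reproduced; the sketch below only simulates the setting described (Beta-distributed capture probabilities for a closed population) and applies Chao's lower-bound estimator as a familiar benchmark. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Closed population: capture probabilities are i.i.d. Beta(a, b);
# each of T occasions is an independent Bernoulli trial per animal.
N, T, a, b = 400, 8, 1.0, 4.0
p = rng.beta(a, b, size=N)
captures = rng.binomial(T, p)        # times each animal is caught
observed = captures[captures > 0]    # animals never caught are unobserved

# Chao's lower-bound estimator (a standard benchmark, not the paper's
# empirical Bayes / bootstrap estimator).
f1 = np.sum(observed == 1)
f2 = np.sum(observed == 2)
S_obs = observed.size
N_hat = S_obs + (f1 ** 2 / (2 * f2) if f2 > 0 else f1 * (f1 - 1) / 2)

print(f"true N = {N}, observed = {S_obs}, Chao estimate = {N_hat:.1f}")
```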

5.
The step-stress model is a special case of accelerated life testing that allows for testing of units under different levels of stress, with changes occurring at various intermediate stages of the experiment. Interest then lies in inference for the mean lifetime at each stress level. All step-stress models discussed so far in the literature are based on a single experiment. For the situation when data have been collected from different experiments wherein all the test units had been exposed to the same levels of stress, but with possibly different points of change of stress, we introduce a model that combines the different experiments and facilitates a meta-analysis for the estimation of the mean lifetimes. We then discuss in detail the likelihood inference for the case of simple step-stress experiments under exponentially distributed lifetimes with Type-II censoring.

6.
This article considers the problem of response surface model fit in computer experiments. We propose a new sequential adaptive design through the “maximum expected improvement” approach. The new method defines the improvement via a first-order approximation from the known design points using derivative information and sequentially seeks points in areas with large curvature and variance. A version with a distance penalty is also considered. We demonstrate their superiority over some existing methods by simulation.
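The authors' exact improvement criterion is not reproduced; the sketch below is a schematic stand-in that scores candidate points by a finite-difference gradient term, the Gaussian process predictive standard deviation, and a distance penalty, using scikit-learn's Gaussian process regressor. The weighting of the three terms is an assumption for illustration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def next_point(X, y, candidates, eps=1e-3, penalty=1.0):
    """Schematic 'local improvement + variance + distance penalty' score;
    an illustration of the idea, not the paper's exact criterion."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-8)
    gp.fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)

    # crude finite-difference estimate of the gradient norm of the GP mean
    grad_norm = np.zeros(len(candidates))
    for j in range(candidates.shape[1]):
        shifted = candidates.copy()
        shifted[:, j] += eps
        grad_norm += ((gp.predict(shifted) - mean) / eps) ** 2
    grad_norm = np.sqrt(grad_norm)

    # distance to the nearest existing design point (discourages clustering)
    dist = np.min(np.linalg.norm(candidates[:, None, :] - X[None, :, :],
                                 axis=2), axis=1)
    score = grad_norm * dist + std + penalty * dist
    return candidates[np.argmax(score)]

rng = np.random.default_rng(2)
X = rng.random((12, 2))
y = np.sin(6 * X[:, 0]) * np.cos(4 * X[:, 1])   # toy deterministic simulator
print(next_point(X, y, rng.random((1000, 2))))
```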

7.
This article deals with a Bayesian predictive approach for two-stage sequential analyses in clinical trials, applied to both frequentist and Bayesian tests. We propose to make a predictive inference based on the notion of a satisfaction index and the data accrued so far together with future data. The computations and the simulation results concern an inferential problem related to the binomial model.
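The satisfaction index itself is not reproduced; the sketch below shows the basic predictive calculation such an approach rests on for the binomial model: after stage 1, the probability that the completed trial reaches a given number of successes, computed from the Beta-Binomial predictive distribution. The threshold and prior are placeholders.

```python
from scipy.stats import betabinom

def predictive_success(n1, s1, n2, s_total_needed, a=1.0, b=1.0):
    """Posterior predictive probability, after stage 1 of a two-stage
    binomial trial, that the whole trial reaches `s_total_needed`
    successes.  A generic stand-in for the paper's satisfaction index."""
    a_post, b_post = a + s1, b + n1 - s1   # Beta posterior after stage 1
    need = s_total_needed - s1             # successes still required
    if need <= 0:
        return 1.0
    if need > n2:
        return 0.0
    # P(stage-2 successes >= need) under the Beta-Binomial predictive law
    return float(betabinom(n2, a_post, b_post).sf(need - 1))

print(predictive_success(n1=20, s1=12, n2=20, s_total_needed=24))
```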

8.
A fully parametric multistate model is explored for the analysis of animal carcinogenicity experiments in which the time of tumour onset is not known. This model does not require assumptions about tumour lethality or cause of death judgements and can be fitted in the absence of sacrifice data. The model is constructed as a three-state model with simple parametric forms for the transition rates. Maximum likelihood methods are used to estimate the transition rates and different treatment groups are compared using likelihood ratio tests. Selection of an appropriate model and methods to assess the fit of the model are illustrated with data from animal experiments. Comparisons with standard methods are made.
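The paper's parametric forms for the transition rates are not given here; as a minimal sketch of the three-state structure (healthy, tumour present, dead) with tumour onset unobserved, the following simulation uses constant transition rates and records only what such an experiment observes: the death or censoring time and tumour status at death. Rates and follow-up length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_animal(l01, l02, l12, follow_up):
    """One animal in a three-state model with constant rates:
    healthy -> tumour (l01), healthy -> death (l02), tumour -> death (l12).
    The tumour onset time is latent; only death/censoring time and
    tumour status at death are returned."""
    t_onset = rng.exponential(1 / l01)   # latent onset time
    t_death0 = rng.exponential(1 / l02)  # potential death before onset
    if t_death0 < t_onset:
        t_death, tumour = t_death0, False
    else:
        t_death = t_onset + rng.exponential(1 / l12)
        tumour = True
    observed_time = min(t_death, follow_up)
    died = t_death <= follow_up
    return observed_time, died, (tumour if died else None)

sample = [simulate_animal(0.002, 0.001, 0.004, follow_up=730) for _ in range(5)]
print(sample)
```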

9.
In this paper we develop a measure of polarization for discrete distributions of non-negative grouped data. The measure takes into account the relative sizes and homogeneities of individual groups as well as the heterogeneities between all pairs of groups. It is based on the assumption that the total polarization within the distribution can be understood as a function of the polarizations between all pairs of groups. The measure allows information on existing groups within a population to be used directly to determine the degree of polarization. Thus the impact of various classifications on the degree of polarization can be analysed. The treatment of the distribution’s total polarization as a function of pairwise polarizations allows statements concerning the effect of an individual pair or an individual group on the total polarization.
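The index defined in the paper is not reproduced; purely to illustrate the "sum over all pairs of groups" construction, the sketch below evaluates an Esteban-Ray-type form, the sum over i, j of shares_i^(1+alpha) * shares_j * |y_i - y_j|, where the shares are group sizes and the y are group means. This is a stand-in, not the authors' measure.

```python
import numpy as np

def pairwise_polarization(shares, group_means, alpha=1.0):
    """Aggregate pairwise group contrasts into one polarization number
    (Esteban-Ray-type form, used only to illustrate the pairwise
    construction; not the paper's index)."""
    shares = np.asarray(shares, dtype=float)
    y = np.asarray(group_means, dtype=float)
    contrib = (shares[:, None] ** (1 + alpha)) * shares[None, :] * \
              np.abs(y[:, None] - y[None, :])
    return contrib.sum()

# three groups: population shares and group means (illustrative numbers)
print(pairwise_polarization([0.40, 0.35, 0.25], [10_000, 30_000, 90_000]))
```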

10.
This paper looks at a new approach to the problem of finding the maximal tolerated dose (or optimal dose, Eichhorn and Zacks, 1973) of certain drugs which, in addition to their therapeutic effects, have secondary harmful effects. The problem is investigated in a sequential setting from a Bayesian predictive approach. Search procedures are proposed for parametric and nonparametric models.

11.
12.
Mixture experiments are commonly encountered in many fields, including the chemical, pharmaceutical and consumer product industries. Because of their wide application, mixture experiments, a special area of response surface methodology, have received greater attention in both model building and the determination of designs compared with other experimental studies. In this paper, some new approaches are suggested for model building and selection in the analysis of data from mixture experiments, using a special generalized linear model, the logistic regression model proposed by Chen et al. [7]. Generally, the special mixture models, which do not have a constant term, are highly affected by collinearity when modeling mixture experiments. For this reason, in order to alleviate the undesired effects of collinearity in the analysis of mixture experiments with logistic regression, a new mixture model is defined with an alternative ratio variable. The deviance analysis table is given for standard mixture polynomial models defined by transformations and for special mixture models used as linear predictors. The effects of the components on the response in the restricted experimental region are described using an alternative representation of Cox's direction approach. In addition, odds ratios and their confidence intervals are obtained according to the chosen reference and control groups. To compare the suggested models, some model selection criteria, graphical odds ratios and the confidence intervals of the odds ratios are used. The advantage of the suggested approaches is illustrated on a tumor incidence data set.
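The ratio-variable reparameterization and the specific mixture polynomial terms are particular to the paper; the sketch below only shows the constant-free logistic fit that such mixture models build on, maximizing the binomial log-likelihood over the component proportions directly. Data and coefficients are simulated placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def fit_mixture_logit(X, y):
    """Maximum-likelihood logistic fit without a constant term, as used
    for mixture data whose components sum to one.  X: n x q matrix of
    proportions, y: 0/1 responses.  (The paper's ratio-variable model
    is not reproduced.)"""
    def negloglik(beta):
        eta = X @ beta
        return np.sum(np.logaddexp(0.0, eta)) - y @ eta   # -log-likelihood
    return minimize(negloglik, np.zeros(X.shape[1]), method="BFGS").x

rng = np.random.default_rng(4)
raw = rng.random((200, 3))
X = raw / raw.sum(axis=1, keepdims=True)        # three-component mixtures
true_beta = np.array([2.0, -1.0, 0.5])
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_beta))))
print(fit_mixture_logit(X, y))
```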

13.
Many clinical trials involve two-stage sequential designs with one interim analysis (Elashoff and Reedy, 1984, Biometrics 41, 791-795). In this paper we present a situation where events are counted only at two fixed calendar time points and some patients may drop out during the time intervals. In the two-stage case, naive application of Tsiatis’s (1982, JASA 77, 855-861) logrank and Wilcoxon tests, which are for continuous survival time, is shown to lead to conservative type-I error rates and lower power. The two-stage sequential boundaries can also be calculated directly, rather than by simulation as was done by DeMets and Gail (1985, Biometrics 41, 1039-1044) under assumed survival models, and are shown to be more flexible than the Pocock (1977, Biometrika 64, 191-199) and O’Brien-Fleming (1979, Biometrics 35, 549-556) boundaries, since the former do not require an assumption on the correlation of the test statistics for the two stages. Repeated confidence intervals are also discussed. The design and approach are motivated by clinical trials studying treatment effects on vertebral fracture rates in elderly osteoporotic women. An example (Tilyard et al., New England Journal of Medicine, 1992) is given to illustrate the method.

14.
In an experiment to compare K (≥ 2) treatments, suppose that eligible subjects arrive at an experimental site sequentially and must be treated immediately. In this paper, we assume that the size of the experiment cannot be predetermined, and we propose and analyze a class of treatment assignment rules which offer compromises between the complete randomization and the perfect balance schemes. A special case of these assignment rules is thoroughly investigated and is featured in the numerical computations. For practical use, a method of implementation of this special rule is provided.
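The class of rules proposed in the paper is not reproduced; Efron's biased coin is shown below only as a familiar example of an assignment rule that sits between complete randomization (p = 1/2) and perfect balance (p = 1) for two treatments.

```python
import random

def biased_coin_assign(n_subjects, p=2/3, seed=0):
    """Efron-style biased coin: assign the under-represented arm with
    probability p, toss a fair coin when the arms are balanced.  A
    well-known compromise rule, not the specific class in the paper."""
    rng = random.Random(seed)
    counts = {"A": 0, "B": 0}
    assignments = []
    for _ in range(n_subjects):
        if counts["A"] == counts["B"]:
            arm = rng.choice("AB")
        else:
            lagging = "A" if counts["A"] < counts["B"] else "B"
            other = "B" if lagging == "A" else "A"
            arm = lagging if rng.random() < p else other
        counts[arm] += 1
        assignments.append(arm)
    return assignments, counts

print(biased_coin_assign(20))
```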

15.
In this paper, we consider posterior predictive distributions of Type-II censored data for an inverse Weibull distribution. These functions are given by using conditional density functions and conditional survival functions. Although the conditional survival functions were expressed in integral form in previous studies, we derive them in closed form and thereby reduce the computational cost. In addition, we calculate the predictive confidence intervals for unobserved values and the corresponding coverage probabilities by using the posterior predictive survival functions.
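The closed-form conditional survival functions derived in the paper are not shown here; the sketch below illustrates the Monte Carlo route they replace: given posterior draws of the inverse Weibull parameters (assumed to come from some sampler not shown), the unobserved lifetimes are drawn from the distribution left-truncated at the largest observed failure time, and a predictive interval for the next failure is read off. The "posterior draws" are placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)

def inv_weibull_cdf(x, shape, scale):
    return np.exp(-(scale / x) ** shape)

def inv_weibull_icdf(u, shape, scale):
    return scale * (-np.log(u)) ** (-1.0 / shape)

def predictive_draws(t_r, n_censored, post_shape, post_scale):
    """For each posterior draw of (shape, scale), simulate the n - r
    censored lifetimes from the inverse Weibull left-truncated at t_r."""
    out = []
    for a, lam in zip(post_shape, post_scale):
        lo = inv_weibull_cdf(t_r, a, lam)
        u = rng.uniform(lo, 1.0, size=n_censored)
        out.append(inv_weibull_icdf(u, a, lam))
    return np.array(out)

# placeholder posterior draws (not a fitted posterior)
post_shape = rng.normal(2.0, 0.1, size=1000)
post_scale = rng.normal(5.0, 0.2, size=1000)
samples = predictive_draws(t_r=7.5, n_censored=3,
                           post_shape=post_shape, post_scale=post_scale)
next_failure = samples.min(axis=1)   # predictive draws of the (r+1)-th failure
print(np.percentile(next_failure, [2.5, 50.0, 97.5]))
```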

16.
This paper describes a computer program GTEST for designing group testing experiments for classifying each member of a population of items as “good” or “defective”. The outcome of a test on a group of items is either “negative” (if all items in the group are good) or “positive” (if at least one of the items is defective, but it is not known which). GTEST is based on a Bayesian approach. At each stage, it attempts to (nearly) maximize the expected reduction in “entropy”, which is a quantitative measure of the amount of uncertainty about the state of the items. The user controls the procedure through specification of the prior probabilities of being defective, restrictions on the construction of the test group, and priorities that are assigned to the items. The nominal prior probabilities can be modified adaptively, to reduce the sensitivity of the procedure to the proportion of defectives in the population.
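GTEST's actual search, priority handling and adaptive prior updates are not reproduced; the sketch below shows the core quantity only: when items are a priori independent, the expected entropy reduction from testing a group equals the binary entropy of the probability that every item in the group is good (the test outcome is a deterministic function of the item states). The group-size limit and the exhaustive search are simplifications.

```python
import numpy as np
from itertools import combinations

def binary_entropy(q):
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -(q * np.log2(q) + (1 - q) * np.log2(1 - q))

def expected_information(group, prior_defect):
    """Expected entropy reduction from testing `group` under independent
    priors: H(outcome), since the outcome is determined by the states."""
    p_negative = np.prod(1.0 - prior_defect[list(group)])
    return binary_entropy(p_negative)

def best_group(prior_defect, max_size):
    """Score all groups up to `max_size` items exhaustively (GTEST's real
    search and priority rules are more elaborate)."""
    items = range(len(prior_defect))
    scored = ((expected_information(g, prior_defect), g)
              for k in range(1, max_size + 1)
              for g in combinations(items, k))
    return max(scored)

priors = np.array([0.02, 0.05, 0.08, 0.10, 0.20, 0.40])
print(best_group(priors, max_size=4))
```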

17.
In this paper we introduce a sequential seasonal unit root testing approach which explicitly addresses its application to high-frequency data. The main idea is to see which unit roots found at the higher frequency can also be found in temporally aggregated data. We apply our procedure to the analysis of monthly data, and we find, upon analysing the aggregated quarterly data, that a smaller number of test statistics can sometimes be considered. Monte Carlo simulations and empirical illustrations emphasize the practical relevance of our method.

18.
Zhenmin Chen & Jie Mi, Statistics, 2013, 47(6): 519-527
The gamma distribution has been discussed by many authors. This article proposes an exact confidence region for the parameters of a two-parameter gamma distribution. The result is based on the fact that the percentiles of the F-distribution, with equal degrees of freedom k, are monotonic in k.

19.
We present a graphical method based on the empirical probability generating function for preliminary statistical analysis of distributions for counts. The method is especially useful in fitting a Poisson model, or for identifying alternative models as well as possible outlying observations from general discrete distributions.
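A minimal version of the diagnostic, with illustrative data: tabulate the empirical probability generating function g_n(t) = n^{-1} * sum(t^{x_i}) against the fitted Poisson PGF exp(lambda_hat * (t - 1)) on a grid of t in [0, 1]; close agreement supports the Poisson model. The paper's display is graphical rather than tabular.

```python
import numpy as np

def empirical_pgf(x, t):
    """Empirical probability generating function  g_n(t) = mean(t ** x)."""
    x = np.asarray(x)
    return np.array([np.mean(tt ** x) for tt in np.atleast_1d(t)])

rng = np.random.default_rng(6)
counts = rng.poisson(2.5, size=200)      # toy count data
lam_hat = counts.mean()

t = np.linspace(0.0, 1.0, 11)
emp = empirical_pgf(counts, t)
poi = np.exp(lam_hat * (t - 1.0))        # fitted Poisson PGF

for tt, e, p in zip(t, emp, poi):
    print(f"t={tt:.1f}  empirical={e:.3f}  Poisson fit={p:.3f}")
```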

20.
Drug delivery devices are required to have excellent technical specifications to deliver drugs accurately; in addition, the devices should provide a satisfactory experience to patients, because this can have a direct effect on drug compliance. To compare patients' experience with two devices, cross-over studies with patient-reported outcomes (PRO) as response variables are often used. Because of the strength of cross-over designs, each subject can directly compare the two devices using the PRO variables, and variables indicating preference (preferring A, preferring B, or no preference) can easily be derived. Traditionally, methods based on frequentist statistics have been used to analyze such preference data, but these methods have some limitations. Recently, Bayesian methods have come to be considered acceptable by the US Food and Drug Administration for designing and analyzing device studies. In this paper, we propose a Bayesian statistical method to analyze data from preference trials. We demonstrate that the new Bayesian estimator enjoys some optimal properties relative to the frequentist estimator.
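The paper's estimator and its optimality results are not reproduced; the sketch below shows one simple Bayesian treatment of three-category preference counts (prefer A, prefer B, no preference) that such a discussion presupposes: a Dirichlet prior, the Dirichlet posterior, and posterior summaries for the preference contrast. The counts and the flat prior are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# illustrative preference counts from a cross-over trial:
# [prefer A, prefer B, no preference]
counts = np.array([34, 21, 15])
prior = np.array([1.0, 1.0, 1.0])   # flat Dirichlet prior (an assumption)

# Dirichlet posterior: draw preference probabilities and summarize
draws = rng.dirichlet(prior + counts, size=100_000)
pr_A_preferred = np.mean(draws[:, 0] > draws[:, 1])
ci_diff = np.percentile(draws[:, 0] - draws[:, 1], [2.5, 97.5])

print(f"P(prefer A > prefer B | data) = {pr_A_preferred:.3f}")
print(f"95% credible interval for the preference difference: "
      f"[{ci_diff[0]:.3f}, {ci_diff[1]:.3f}]")
```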
