Similar documents
20 similar documents found
1.
2.
It is known that the distributions of statistics commonly used in experimental design (notably the F statistic) involve certain nuisance parameters which depend on the model and the design. Randomization was a technique introduced by R.A. Fisher to eliminate some of these parameters and produce a usable distribution function. We will show that there is a close relationship between the analytical properties of the non-randomized distribution and the more combinatorial properties of the nuisance parameters. This relationship allows us to determine theoretically, and practically in some cases, how good an approximation the central F distribution is to the randomized distribution.
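As a rough illustration of the idea (with invented data, not anything from the paper), the following Python sketch compares the randomization distribution of the one-way ANOVA F statistic, obtained by re-randomizing the treatment labels, with the central F approximation:

```python
# Hypothetical sketch: randomization distribution of the one-way ANOVA
# F statistic versus the central F approximation, on made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.normal(size=12)                 # 12 hypothetical responses
labels = np.repeat([0, 1, 2], 4)        # 3 treatments, 4 units each

def f_stat(y, labels, k=3):
    groups = [y[labels == g] for g in range(k)]
    return stats.f_oneway(*groups).statistic

obs = f_stat(y, labels)

# Re-randomize treatment labels many times to build the reference distribution.
perm = np.array([f_stat(y, rng.permutation(labels)) for _ in range(5000)])
p_rand = np.mean(perm >= obs)                      # randomization p-value
p_f = stats.f.sf(obs, dfn=2, dfd=9)                # central F approximation
print(f"randomization p = {p_rand:.3f}, central F p = {p_f:.3f}")
```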

3.
When all experimental runs cannot be performed under homogeneous conditions, blocking can be used to increase the power for testing the treatment effects. Orthogonal blocking provides the same estimator of the polynomial effects as the one that would be obtained by ignoring the blocks. In many real-life design scenarios, there is at least one factor that is hard to change, leading to a split-plot structure. This paper shows that for a balanced ordinary least squares–generalized least squares equivalent split-plot design, orthogonal blocking can be achieved. Orthogonally blocked split-plot central composite designs are constructed and a catalog is provided.
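For the ordinary (non-split-plot) case, the classical Box–Hunter conditions for orthogonal blocking of a central composite design can be checked numerically. The toy example below (a two-factor CCD with two centre runs per block, all values assumed for illustration) is not the paper's split-plot construction:

```python
# Hypothetical sketch: a two-factor central composite design split into a
# factorial block and an axial block, with a numerical check of the
# classical orthogonal-blocking conditions.
import numpy as np

alpha = np.sqrt(2.0)  # axial distance giving orthogonal blocks here
cube = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                 [0, 0], [0, 0]])                       # factorial block + 2 centre runs
axial = np.array([[-alpha, 0], [alpha, 0],
                  [0, -alpha], [0, alpha],
                  [0, 0], [0, 0]])                      # axial block + 2 centre runs

for name, block in [("cube", cube), ("axial", axial)]:
    # First-order orthogonality within each block: column sums and the
    # cross-product must vanish.
    print(name, block.sum(axis=0), (block[:, 0] * block[:, 1]).sum())

# Orthogonal blocking also requires each block's share of the pure
# quadratic sum of squares to equal its share of the runs.
ss_cube, ss_axial = (cube**2).sum(axis=0), (axial**2).sum(axis=0)
print(ss_cube / (ss_cube + ss_axial), len(cube) / (len(cube) + len(axial)))
```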

4.
Estimation and experimental design in a non-linear regression model that is used in microbiology are studied. The Monod model is defined implicitly by a differential equation and has numerous applications in microbial growth kinetics, water research, pharmacokinetics and plant physiology. It is proved that least squares estimates are asymptotically unbiased and normally distributed. The asymptotic covariance matrix of the estimator is the basis for the construction of efficient designs of experiments. In particular, locally D-, E- and c-optimal designs are determined and their properties are studied theoretically and by simulation. If certain intervals for the non-linear parameters can be specified, locally optimal designs can be constructed which are robust with respect to a misspecification of the initial parameters and which allow efficient parameter estimation. Parameter variances can be decreased by a factor of 2 by simply sampling at optimal times during the experiment.
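A minimal sketch of the design problem, assuming the standard two-state Monod system (biomass y, substrate s) and illustrative parameter values: the local D-criterion is computed from finite-difference sensitivities of the solution of the differential equation, so candidate sampling-time designs can be compared.

```python
# Hypothetical sketch: the Monod growth model solved numerically, with a
# finite-difference Fisher information matrix and the local D-criterion
# for a candidate set of sampling times. All parameter values are invented.
import numpy as np
from scipy.integrate import solve_ivp

def monod(times, theta, y0=0.05, s0=1.0):
    mu_max, K_s, Y = theta
    def rhs(t, x):
        y, s = x
        growth = mu_max * s / (K_s + s) * y
        return [growth, -growth / Y]
    sol = solve_ivp(rhs, (0, times[-1]), [y0, s0], t_eval=times, rtol=1e-8)
    return sol.y[0]                      # observed biomass at the sampling times

def d_criterion(times, theta, h=1e-6):
    # Parameter sensitivities by central differences.
    grads = []
    for j in range(len(theta)):
        tp, tm = np.array(theta), np.array(theta)
        tp[j] += h; tm[j] -= h
        grads.append((monod(times, tp) - monod(times, tm)) / (2 * h))
    F = np.array(grads)                  # 3 x n sensitivity matrix
    M = F @ F.T                          # Fisher information (unit error variance)
    return np.linalg.slogdet(M)[1]       # log det, to be maximized

theta0 = (0.5, 0.3, 0.6)                 # illustrative (mu_max, K_s, Y)
print(d_criterion(np.array([2.0, 5.0, 8.0, 12.0]), theta0))
print(d_criterion(np.linspace(1, 12, 4), theta0))   # compare two designs
```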

5.
In a rank-order choice-based conjoint experiment, the respondent is asked to rank the alternatives in a number of choice sets. In this paper, we study the efficiency of such experiments and propose a D-optimality criterion for rank-order experiments to find designs yielding the most precise parameter estimators. For that purpose, an expression of the Fisher information matrix for the rank-ordered conditional logit model is derived which clearly shows how much additional information is provided by each extra ranking step. A simulation study shows that, besides the Bayesian D-optimal ranking design, the Bayesian D-optimal choice design is also an appropriate design for this type of experiment. Finally, it is shown that considerable improvements in estimation and prediction accuracy are obtained by including extra ranking steps in an experiment.
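The information decomposition can be made concrete through the exploded-logit view of the rank-ordered conditional logit: the expected information of a ranking is the first-stage conditional-logit information plus the expected information of the remaining stages with the chosen alternative removed. The sketch below uses an invented attribute matrix X and parameter vector beta, not anything from the paper:

```python
# Hypothetical sketch: Fisher information of the rank-ordered (exploded)
# conditional logit for one choice set, showing the extra information
# contributed by each additional ranking step.
import numpy as np

def logit_information(X, beta):
    # Information of a single conditional-logit choice among the rows of X.
    p = np.exp(X @ beta)
    p /= p.sum()
    Xc = X - p @ X                        # attributes centred at their mean
    return (Xc * p[:, None]).T @ Xc

def ranking_information(X, beta, steps):
    # Exploded logit: after each step the chosen alternative is removed;
    # the expectation averages over which alternative was chosen.
    if steps == 0 or len(X) <= 1:
        return np.zeros((len(beta), len(beta)))
    p = np.exp(X @ beta); p /= p.sum()
    info = logit_information(X, beta)
    for i in range(len(X)):
        rest = np.delete(X, i, axis=0)
        info = info + p[i] * ranking_information(rest, beta, steps - 1)
    return info

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])  # 4 alternatives
beta = np.array([0.5, -0.5])
for s in range(1, 4):                     # log det grows with each ranking step
    print(s, np.linalg.slogdet(ranking_information(X, beta, s))[1])
```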

6.
DEXPERT is an expert system, built using KEE, for the design and analysis of experiments. From a mathematical model, expected mean squares are computed, tests are determined, and the power of the tests is computed. Comparisons between designs are aided by suggestions and verbal interpretations provided by DEXPERT. DEXPERT provides a layout sheet for the collection of the data and then analyzes and interprets the results using analytical and graphical methods.
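A hedged sketch of the kind of power calculation such a system automates: power of the fixed-effects one-way ANOVA F test via the noncentral F distribution (the treatment effects and replication below are invented, not DEXPERT output):

```python
# Hypothetical sketch: power of the fixed-effects one-way ANOVA F test.
from scipy import stats

def anova_power(tau, n, sigma2=1.0, alpha=0.05):
    k = len(tau)                                # number of treatment levels
    lam = n * sum(t**2 for t in tau) / sigma2   # noncentrality parameter
    df1, df2 = k - 1, k * (n - 1)
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return stats.ncf.sf(f_crit, df1, df2, lam)  # P(reject | effects tau)

# Treatment effects (deviations from the grand mean) and replicates per level.
print(anova_power(tau=[-0.5, 0.0, 0.5], n=6))
```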

7.
Conjoint choice experiments have become a powerful tool to explore individual preferences. The consistency of respondents' choices depends on the choice complexity. For example, it is easier to make a choice between two alternatives with few attributes than between five alternatives with several attributes. In the latter case it will be much harder to choose the preferred alternative, which is reflected in a higher response error. Several authors have dealt with this choice complexity in the estimation stage, but very little attention has been paid to setting up designs that take this complexity into account. The core issue of this paper is to find out whether it is worthwhile to take this complexity into account in the design stage. We construct efficient semi-Bayesian D-optimal designs for the heteroscedastic conditional logit model, which is used to model the across-respondent variability that occurs due to the choice complexity. The degree of complexity is measured by the entropy, as suggested by Swait and Adamowicz (2001). The proposed designs are compared with a semi-Bayesian D-optimal design constructed without taking the complexity into account. The simulation study shows that it is much better to take the choice complexity into account when constructing conjoint choice experiments.
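The entropy measure is easy to state: for conditional-logit choice probabilities p_i, complexity is measured by -sum p_i log p_i, which is largest when the alternatives are close in utility. A small sketch with invented utilities:

```python
# Hypothetical sketch: entropy as a measure of choice-set complexity,
# computed from conditional-logit choice probabilities.
import numpy as np

def choice_entropy(utilities):
    p = np.exp(utilities - np.max(utilities))   # softmax, numerically stable
    p /= p.sum()
    return -np.sum(p * np.log(p))

print(choice_entropy([0.2, 0.1]))          # two similar alternatives: hard choice
print(choice_entropy([2.0, -1.0, -1.0]))   # one dominant alternative: easy choice
```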

8.
The thin plate volume matching and volume smoothing histosplines are described. These histosplines are suitable for estimating densities or incidence rates as a function of position on the plane when data are aggregated by area, for example by county. We give a numerical algorithm for the volume matching histospline and for the volume smoothing histospline, using generalized cross validation to estimate the smoothing parameter. Some numerical experiments were performed using synthetic data, population data and SMRs (standardized mortality ratios) aggregated by county over the state of Wisconsin. The method turns out to be not particularly suited for obtaining population density maps where the population density can vary by two orders of magnitude, because the histospline can be negative in …
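The generalized cross validation step can be sketched for a generic linear smoother with influence matrix A(lambda): GCV(lambda) = n * ||(I - A(lambda)) y||^2 / [tr(I - A(lambda))]^2. The code below uses a simple ridge-type smoother as a stand-in for the thin plate histospline; the basis and data are simulated, not the paper's:

```python
# Hypothetical sketch: generalized cross validation for choosing the
# smoothing parameter of a linear smoother y_hat = A(lam) y.
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(40, 10))            # illustrative basis matrix
y = B @ rng.normal(size=10) + 0.3 * rng.normal(size=40)

def gcv(lam):
    # Influence matrix of a ridge-type smoother.
    A = B @ np.linalg.solve(B.T @ B + lam * np.eye(10), B.T)
    r = y - A @ y
    n = len(y)
    return n * (r @ r) / (n - np.trace(A)) ** 2

lams = 10.0 ** np.arange(-6, 2)
print(min(lams, key=gcv))                # lambda minimizing the GCV score
```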

9.
This paper describes a computer program GTEST for designing group testing experiments for classifying each member of a population of items as “good” or “defective”. The outcome of a test on a group of items is either “negative” (if all items in the group are good) or “positive” (if at least one of the items is defective, but it is not known which). GTEST is based on a Bayesian approach. At each stage, it attempts to maximize (nearly) the expected reduction in the “entropy”, which is a quantitative measure of the amount of uncertainty about the state of the items. The user controls the procedure through specification of the prior probabilities of being defective, restrictions on the construction of the test group, and priorities that are assigned to the items. The nominal prior probabilities can be modified adaptively, to reduce the sensitivity of the procedure to the proportion of defectives in the population.
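The entropy heuristic takes a convenient form under independent priors: since the test outcome is a deterministic function of the items' states, the expected entropy reduction equals the entropy of the outcome itself, so GTEST-style selection amounts to finding a group whose probability of a negative result is near 1/2. A sketch under these assumed conditions (not GTEST's actual code, which also handles restrictions and priorities):

```python
# Hypothetical sketch: choose the test group maximizing the expected
# entropy reduction, assuming independent prior defect probabilities.
import numpy as np
from itertools import combinations

def binary_entropy(q):
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -(q * np.log(q) + (1 - q) * np.log(1 - q))

def expected_entropy_reduction(priors, group):
    q_neg = np.prod([1 - priors[i] for i in group])  # P(all in group good)
    return binary_entropy(q_neg)                     # = mutual information

priors = [0.05, 0.10, 0.20, 0.30, 0.40]              # illustrative priors
groups = [g for r in (1, 2, 3) for g in combinations(range(5), r)]
best = max(groups, key=lambda g: expected_entropy_reduction(priors, g))
print(best, expected_entropy_reduction(priors, best))
```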

10.
For consistency, the parameter space in the Gauss-Markov model with a singular covariance matrix is usually restricted by the observation vector. This restriction raises some difficulties in the comparison of linear experiments. To avoid it, we reduce the problem of comparison from the singular to the nonsingular case.

11.
In electrical engineering, circuit designs are now often optimized via circuit simulation computer models. Typically, many response variables characterize the circuit's performance. Each response is a function of many input variables, including factors that can be set in the engineering design and noise factors representing manufacturing conditions. We describe a modelling approach which is appropriate for the simulator's deterministic input–output relationships. Non-linearities and interactions are identified without explicit assumptions about the functional form. These models lead to predictors to guide the reduction of the ranges of the designable factors in a sequence of experiments. Ultimately, the predictors are used to optimize the engineering design. We also show how a visualization of the fitted relationships facilitates an understanding of the engineering trade-offs between responses. The example used to demonstrate these methods, the design of a buffer circuit, has multiple targets for the responses, representing different trade-offs between the key performance measures.

12.
Observations with correlated error structures are sometimes unavoidable. Appropriate designs and analyses are reviewed for such situations. Serious problems can occur if conventional designs and analyses are used when the correlated error structure and layout are ignored, or when the error structure is not known. Robust designs are discussed which guard against these problems.
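A small numerical sketch of the basic problem, with an AR(1) error structure assumed purely for illustration: the naive OLS variance formula misstates the true variance of a treatment contrast, while GLS with the correct covariance gives the efficient answer.

```python
# Hypothetical sketch: OLS versus GLS for a treatment contrast under
# AR(1)-correlated errors (unit innovation variance, invented design).
import numpy as np

n, rho = 20, 0.6
X = np.column_stack([np.ones(n), np.tile([0, 1], n // 2)])  # alternating treatments
V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # AR(1) covariance

ols = np.linalg.inv(X.T @ X) @ X.T
var_ols_true = (ols @ V @ ols.T)[1, 1]        # true variance of the OLS estimator
var_ols_naive = np.linalg.inv(X.T @ X)[1, 1]  # what the conventional analysis reports
var_gls = np.linalg.inv(X.T @ np.linalg.solve(V, X))[1, 1]
print(var_ols_naive, var_ols_true, var_gls)
```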

13.
An approach to the analysis of time-dependent ordinal quality score data from robust design experiments is developed and applied to an experiment from commercial horticultural research, using concepts of product robustness and longevity that are familiar to analysts in engineering research. A two-stage analysis is used to develop models describing the effects of a number of experimental treatments on the rate of post-sales product quality decline. The first stage uses a polynomial function on a transformed scale to approximate the quality decline for an individual experimental unit using derived coefficients and the second stage uses a joint mean and dispersion model to investigate the effects of the experimental treatments on these derived coefficients. The approach, developed specifically for an application in horticulture, is exemplified with data from a trial testing ornamental plants that are subjected to a range of treatments during production and home-life. The results of the analysis show how a number of control and noise factors affect the rate of post-production quality decline. Although the model is used to analyse quality data from a trial on ornamental plants, the approach developed is expected to be more generally applicable to a wide range of other complex production systems.

14.
Many scientists believe that small experiments, guided by scientific intuition, are simpler and more efficient than design of experiments. This belief is strong and persists even in the face of data demonstrating that it is clearly wrong. In this paper, we present two powerful teaching examples illustrating the dangers of small experiments guided by scientific intuition. We describe two simple two-dimensional spaces. These two spaces give rise to, and at the same time appear to generate supporting data for, scientific intuitions that are deeply flawed or wholly incorrect. We find these spaces useful in unfreezing scientific thinking and challenging the misplaced confidence in scientific intuition.

15.
When designing two-level fractional factorial experiments sequentially, there is a wide choice of designs that could be used at each stage. Designs in which one of the factors is fixed at a particular level after the first experiment are studied in this paper. This sometimes allows all important effects to be estimated in fewer runs than would the standard sequences of designs, and effects can sometimes be estimated more efficiently. The properties of some sequences are presented, and extensions to fixing more than one factor and to factors with more than two levels are discussed.
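The estimability gain from fixing a factor can be checked mechanically with a model-matrix rank computation. The sketch below is an invented example, not one of the paper's sequences: a 2^(4-1) first stage with defining relation I = ABCD, followed by a second stage in which factor D is fixed at +1; the combined 16 runs turn out to have full rank for the intercept, all four main effects and all six two-factor interactions.

```python
# Hypothetical sketch: rank check for a sequential two-level design in
# which factor D is fixed at +1 in the second stage.
import numpy as np
from itertools import combinations, product

first = np.array([r + (r[0] * r[1] * r[2],) for r in product([-1, 1], repeat=3)])
second = np.array([r + (1,) for r in product([-1, 1], repeat=3)])  # D fixed at +1
runs = np.vstack([first, second])

cols = [np.ones(len(runs))] + [runs[:, j] for j in range(4)]
cols += [runs[:, i] * runs[:, j] for i, j in combinations(range(4), 2)]
model = np.column_stack(cols)      # intercept, 4 main effects, 6 interactions
print(model.shape, np.linalg.matrix_rank(model))   # (16, 11) and rank 11
```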

16.
In the sequential design of experiments, an experimenter sequentially performs experiments to help him make an inference about the true state of nature. Using results from renewal theory, we derive approximations for the operating characteristics and average sample numbers for this problem when there are two states of nature.

A critical problem in the sequential design of experiments is finding a good procedure. We investigate a Bayesian formulation of this problem and use our approximations to approximate the Bayes risk. Minimization of this approximate Bayes risk over procedures is discussed as a method of finding a good procedure, but difficulties are encountered due to the discrete time character of the sequential process. To avoid these difficulties, we consider minimization of an approximation related to a diffusion process. This leads to a simple rule for the sequential selection of experiments.

17.
The construction of optimal designs for change-over experiments requires consideration of the two component treatment designs: one for the direct treatments and the other for the residual (carry-over) treatments. A multi-objective approach is introduced using simulated annealing, which simultaneously optimises each of the component treatment designs to produce a set of dominant designs in one run of the algorithm. The algorithm is used to demonstrate that a wide variety of change-over designs can be generated quickly on a desktop computer. These are generally better than those previously recorded in the literature.
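A toy version of the annealing move, with a stand-in single-objective criterion (trace of the pseudo-inverse of the information matrix for a direct-treatments-plus-blocks model) rather than the paper's multi-objective one; the design sizes and cooling schedule are invented:

```python
# Hypothetical sketch: simulated annealing over change-over designs,
# swapping two periods within one subject's treatment sequence.
import numpy as np

rng = np.random.default_rng(0)
t, subjects, periods = 4, 8, 4
design = np.array([rng.permutation(t) for _ in range(subjects)])  # rows: subjects

def neg_criterion(d):
    # Model matrix: direct treatment, subject and period effects.
    rows = []
    for s in range(subjects):
        for p in range(periods):
            x = np.zeros(t + subjects + periods)
            x[d[s, p]] = 1; x[t + s] = 1; x[t + subjects + p] = 1
            rows.append(x)
    X = np.array(rows)
    return np.trace(np.linalg.pinv(X.T @ X))   # smaller is better (A-type proxy)

current, temp = design.copy(), 1.0
cost = neg_criterion(current)
for step in range(500):
    cand = current.copy()
    s = rng.integers(subjects); i, j = rng.integers(periods, size=2)
    cand[s, [i, j]] = cand[s, [j, i]]           # swap two periods for one subject
    c = neg_criterion(cand)
    if c < cost or rng.random() < np.exp((cost - c) / temp):   # Metropolis step
        current, cost = cand, c
    temp *= 0.99                                # geometric cooling
print(cost)
```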

18.
In comparing two treatments, suppose the suitable subjects arrive sequentially and must be treated at once. Known or unknown to the experimenter, there may be nuisance factors systematically affecting the subjects. Accidental bias is a measure of the influence of these factors on the analysis of data. We show in this paper that the random allocation design minimizes the accidental bias among all designs that allocate n, out of 2n, subjects to each treatment and do not prefer either treatment in the assignment. When the final imbalance is allowed to be nonzero, optimal and efficient designs are given. In particular, the random allocation design is shown to be very efficient in this broader setup.
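A simulation sketch of the phenomenon, with an unobserved linear time trend assumed for illustration: a deterministic alternating rule picks up a systematic bias from the trend, while random allocation is unbiased on average with a modest spread.

```python
# Hypothetical sketch: bias in the treatment contrast induced by an
# unobserved time trend, under alternating versus random allocation.
import numpy as np

rng = np.random.default_rng(0)
n = 10                                   # n subjects per arm, 2n total
trend = np.linspace(-1, 1, 2 * n)        # unobserved nuisance factor

def contrast_bias(assign):
    # Bias of (mean treated - mean control) attributable to the trend.
    return trend[assign == 1].mean() - trend[assign == 0].mean()

alt = np.tile([1, 0], n)                 # deterministic alternation
random_biases = []
for _ in range(2000):
    a = np.zeros(2 * n, dtype=int)
    a[rng.choice(2 * n, n, replace=False)] = 1   # random allocation, n per arm
    random_biases.append(contrast_bias(a))
print("alternating bias:", contrast_bias(alt))
print("random allocation: mean", np.mean(random_biases),
      "sd", np.std(random_biases))
```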

19.
The paper gives explicit formulae for analysing an experiment carried out in an affine resolvable proper block design. They follow from a randomization model, decomposed into stratum submodels. Analyses within the four relevant strata, and then the combined analysis, are considered in detail. The paper is essentially an extension of some results presented in recent books by Caliński and Kageyama [2000. Block Designs: A Randomization Approach, Volume I: Analysis. Lecture Notes in Statistics, vol. 150. Springer, New York; 2003. Block Designs: A Randomization Approach, Volume II: Design. Lecture Notes in Statistics, vol. 170. Springer, New York].

20.
Industrial statistics plays a major role in the areas of both quality management and innovation. However, existing methodologies must be integrated with the latest tools from the field of Artificial Intelligence. To this end, a background on the joint application of Design of Experiments (DOE) and Machine Learning (ML) methodologies in industrial settings is presented here, along with a case study from the chemical industry. A DOE study is used to collect data, and two ML models are applied to predict the responses; their performance shows an advantage over the traditional modelling approach. Emphasis is placed on causal investigation and quantification of prediction uncertainty, as these are crucial for assessing the goodness and robustness of the models developed. Within the scope of the case study, the models learned can be implemented in a semi-automatic system that can assist practitioners who are inexperienced in data analysis in the process of new product development.
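A hedged sketch of the DOE-plus-ML workflow the abstract describes, with a simulated response and hypothetical factor effects; the random forest and its per-tree spread stand in for the paper's ML models and uncertainty quantification:

```python
# Hypothetical sketch: fit an ML model to data from a replicated 2^3
# factorial design and use per-tree spread as a rough uncertainty measure.
import numpy as np
from itertools import product
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = np.array(list(product([-1, 1], repeat=3)) * 2, dtype=float)  # replicated 2^3 design
y = 3 * X[:, 0] - 2 * X[:, 1] + 1.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.5, len(X))

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
x_new = np.array([[0.5, -0.5, 0.0]])
per_tree = np.array([t.predict(x_new)[0] for t in rf.estimators_])
print(f"prediction {per_tree.mean():.2f} +/- {per_tree.std():.2f}")
```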
