Similar Articles
1.
The importance of statistically designed experiments in industry has been well recognized. However, the use of 'design of experiments' is still not pervasive, owing in part to the inefficient learning process experienced by many non-statisticians. In this paper, the nature of design of experiments, in contrast to the usual statistical process control techniques, is discussed. It is then pointed out that for design of experiments to be appreciated and applied, appropriate approaches should be taken to training, learning and application. Perspectives based on the concepts of objective setting and design under constraints can be used to help experimenters formulate plans for the collection, analysis and interpretation of empirical information. The expanding role of design of experiments over the past several decades is reviewed, with comparisons of the various formats and contexts of experimental design applications, such as Taguchi methods and Six Sigma. The trend of development shows that, from the realm of scientific research to business improvement, the competitive advantage offered by design of experiments is being increasingly felt.

2.
Structure learning for Bayesian networks is typically carried out heuristically in the search for an optimal model, to avoid an explosive computational burden. In the learning process, a structural error made at one point may degrade all subsequent learning. We propose a remedial approach to this error-compounding process that uses marginal model structures. The remedy fixes local errors in the structure by reference to the marginal structures; in this sense, we call it a marginally corrective procedure. We devise a new score function for the procedure which consists of two components: the likelihood function of the model and a discrepancy measure on the marginal structures. The proposed method compares favourably with two of the most popular algorithms, as shown in experiments with benchmark data sets.
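A minimal sketch of the kind of two-component score described above, assuming a discrete Bayesian network: the multinomial log-likelihood of a candidate structure minus a weighted count of marginal-structure edges that the candidate misses. The function names, the edge-count discrepancy and the weight `lam` are illustrative choices, not the authors' exact formulation.

```python
# Hedged sketch of a two-component structure score: log-likelihood of the
# candidate DAG minus a penalty for disagreement with given marginal
# structures. Names (marginal_edges, lam) are illustrative, not the paper's.
import numpy as np
import pandas as pd

def log_likelihood(data: pd.DataFrame, parents: dict) -> float:
    """Multinomial log-likelihood of a discrete Bayesian network.
    `parents` maps each variable to a list of its parent variables."""
    ll = 0.0
    for var, pa in parents.items():
        if pa:
            for _, group in data.groupby(pa):
                counts = group[var].value_counts().to_numpy(dtype=float)
                ll += np.sum(counts * np.log(counts / counts.sum()))
        else:
            counts = data[var].value_counts().to_numpy(dtype=float)
            ll += np.sum(counts * np.log(counts / counts.sum()))
    return ll

def marginal_discrepancy(parents: dict, marginal_edges: set) -> int:
    """Count edges of the marginal structures missing from the candidate DAG
    (one plausible discrepancy measure; the paper's measure may differ)."""
    dag_edges = {(p, v) for v, pa in parents.items() for p in pa}
    return len(marginal_edges - dag_edges)

def corrective_score(data, parents, marginal_edges, lam=1.0):
    return log_likelihood(data, parents) - lam * marginal_discrepancy(parents, marginal_edges)
```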

3.
In software engineering, empirical comparisons of different ways of writing computer code are often made. This creates a need for planned experimentation and has recently established a new area of application for DoE. This paper is motivated by an experiment on the production of multimedia services on the web, performed at the Telecom Research Centre in Turin, in which two different ways of developing code, with or without a framework, were compared. As the experiment progresses, a programmer's performance improves through a learning process; this must be taken into account, as it may affect the outcome of the trial. In this paper we discuss statistical models and D-optimal plans for such experiments and indicate some heuristics which allow a much speedier search for the optimum. Solutions differ according to whether or not we assume that the learning process depends on the treatments.

4.
Time dependence is an important characteristic of mineral processing plant data. This paper finds that the time dependence in the recovery data from an experiment at Bougainville Copper Limited (BCL) (Napier-Munn, 1995) can be described by a first-order autoregressive (AR(1)) process. The paper investigates the optimum form of experimental design for such data. Two intuitive approaches to the design of experiments involving time-dependent data have been disproved recently: Cheng & Steinberg (1991) showed that in some circumstances systematic experiments are preferable to replicated randomized block designs, and Saunders & Eccleston (1992) showed that, rather than sampling at a frequency which ensures independent data, in some circumstances sampling intervals should be as small as possible. A third issue, raised in this paper, concerns the use of standard statistical tests when the data are serially correlated. It is shown that the simple paired t-test, suitably modified for time dependence, is appropriate and easily adapted to allow for a covariate and a sequential analysis. The results are illustrated using the BCL data and are already being used to design major experiments involving another mineral process.
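As a rough illustration of a paired t-test adjusted for serial correlation, the sketch below inflates the variance of the mean difference by the large-sample AR(1) factor (1 + ρ)/(1 − ρ) and shrinks the degrees of freedom accordingly; the exact modification used in the paper may differ.

```python
# Minimal sketch: a paired t-test on time-ordered differences, with the
# variance of the mean inflated for AR(1) autocorrelation. The factor
# (1 + rho)/(1 - rho) is a common large-sample approximation.
import numpy as np
from scipy import stats

def ar1_adjusted_paired_t(x: np.ndarray, y: np.ndarray):
    d = np.asarray(x, float) - np.asarray(y, float)   # paired differences in time order
    n = d.size
    dc = d - d.mean()
    rho = np.sum(dc[1:] * dc[:-1]) / np.sum(dc ** 2)  # lag-1 autocorrelation estimate
    infl = (1 + rho) / (1 - rho)                      # variance inflation for AR(1)
    se = np.sqrt(d.var(ddof=1) / n * infl)
    t = d.mean() / se
    df = max(n / infl - 1, 1)                         # rough effective degrees of freedom
    p = 2 * stats.t.sf(abs(t), df)
    return t, p, rho
```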

5.
New data collection and storage technologies have given rise to a new field of streaming data analytics: real-time statistical methodology for online data analysis. Most existing online learning methods are based on homogeneity assumptions, which require the samples in a sequence to be independent and identically distributed. However, inter-batch correlation and dynamically evolving batch-specific effects are among the key defining features of real-world streaming data such as electronic health records and mobile health data. This article is built on a state-space mixed model framework in which the observed data stream is driven by a latent state process that follows a Markov process. In this setting, online maximum likelihood estimation is made challenging by high-dimensional integrals and complex covariance structures. We develop a real-time Kalman-filter-based regression analysis method that updates both point estimates and their standard errors for fixed population-average effects while adjusting for dynamic hidden effects. Both theoretical justification and numerical experiments demonstrate that the proposed online method has statistical properties similar to those of its offline counterpart and enjoys great computational efficiency. We also apply the method to analyze an electronic health record dataset.
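A minimal sketch of the general idea, not the authors' estimator: augment the Kalman-filter state with the fixed effects (given no process noise) and an AR(1) latent batch effect, so each arriving batch updates both the point estimate of the fixed effects and its covariance in one pass. The AR coefficient and noise variances are treated as known here.

```python
# Minimal sketch, not the paper's estimator: the state stacks the fixed effects
# beta (no process noise) with an AR(1) latent batch effect alpha, and a
# standard Kalman filter updates both the estimate of beta and its covariance
# as each data batch arrives. phi, sigma_eta and sigma_eps are assumed known.
import numpy as np

def online_kalman_regression(batches, p, phi=0.9, sigma_eta=0.5, sigma_eps=1.0):
    """batches: iterable of (X, y) pairs arriving over time; p: number of covariates."""
    m = np.zeros(p + 1)                          # state mean: [beta (p), alpha]
    P = np.eye(p + 1) * 1e3                      # diffuse prior covariance
    F = np.eye(p + 1)
    F[-1, -1] = phi                              # beta static, alpha AR(1)
    Q = np.zeros((p + 1, p + 1))
    Q[-1, -1] = sigma_eta ** 2
    for X, y in batches:
        m = F @ m                                # predict
        P = F @ P @ F.T + Q
        H = np.hstack([X, np.ones((len(y), 1))]) # observation: y = X beta + alpha + noise
        S = H @ P @ H.T + sigma_eps ** 2 * np.eye(len(y))
        K = np.linalg.solve(S, H @ P).T          # Kalman gain
        m = m + K @ (y - H @ m)                  # update
        P = (np.eye(p + 1) - K @ H) @ P
        yield m[:p], np.sqrt(np.diag(P)[:p])     # running beta estimates and standard errors
```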

6.
A mixture experiment is an experiment in which the response is assumed to depend on the relative proportions of the ingredients present in the mixture and not on the total amount of the mixture. In such experiments, process variables do not form any portion of the mixture, but changes in their levels can affect the blending properties of the ingredients. Mixture experiments can be costly, so they may have to be conducted in a small number of runs. Here, a general method is presented for constructing efficient mixture experiments in a minimum number of runs by projecting an efficient response surface design onto the constrained region. Efficient designs with a small number of runs are constructed for three-, four-, and five-component mixture experiments with one process variable.

7.
In his book 'Out of the Crisis', the late Dr Edwards Deming asserted that 'if anyone adjusts a stable process to try to compensate for a result that is undesirable, or for a result that is extra good, the output will be worse than if he had left the process alone'. His famous funnel experiments supported this assertion. The development of the control chart by Dr Walter Shewhart stemmed from an approach made to him by the management of a Western Electric Company plant because of their awareness that adjustments made to processes often made matters worse. However, many industrial processes are such that the mean values of product quality characteristics shift and drift over time, so that, instead of the sequences of independent observations to which Deming's assertion applies, process owners are faced with autocorrelated data. The truth of Deming's assertion is demonstrated, both theoretically and via computer simulation. The use of the exponentially weighted moving average (EWMA) for process monitoring is demonstrated and, for situations where process data exhibit autocorrelation, its use for feedback adjustment is discussed and demonstrated. Finally, successful applications of process improvement using EWMA-based control algorithms are discussed.
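For reference, a minimal EWMA monitoring sketch with the standard time-varying control limits; the smoothing weight and limit width are illustrative choices, and the feedback-adjustment use discussed above is not shown.

```python
# Minimal sketch of an EWMA monitoring statistic with standard control limits;
# lam (smoothing weight) and L (limit width) are illustrative choices.
import numpy as np

def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
    x = np.asarray(x, float)
    z = np.empty_like(x)
    z_prev = mu0
    for i, xi in enumerate(x):
        z_prev = lam * xi + (1 - lam) * z_prev   # EWMA recursion
        z[i] = z_prev
    t = np.arange(1, x.size + 1)
    # Exact time-varying limits; they converge to the asymptotic limits.
    width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    return z, mu0 - width, mu0 + width           # statistic, lower and upper limits
```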

8.
Methods for selecting a first-order or second-order rotatable response surface design when both variance and bias errors exist are applied to a situation in which it is desired to extrapolate the fitted model in all directions outside a sphere within which all the experiments are to be made. The extrapolation region is a spherical shell.

9.
Blending experiments with mixtures in the presence of process variables are considered. We present an experimental design for quadratic (or linear) blending. The design, in two orthogonal blocks, is D-optimized for the case where there are no restrictions on the blending components; a design in two orthogonal blocks is also presented for the case where there are arbitrary restrictions on the blending components. The pair of orthogonal blocks can be used with an arbitrary number of process variables. The number of design points needed when different orthogonal blocks are used is usually smaller than when a single block is repeated at the various process-variable levels.

10.
Mixture experiments are often carried out in the presence of process variables, such as days of the week or different machines in a manufacturing process, or different ovens in bread and cake making. In such experiments it is particularly useful to be able to arrange the design in orthogonal blocks, so that the model in the mixture variables may be fitted independently of the block effects introduced to take account of the changes in the process variables. It is possible in some situations that some of the ingredients in the mixture, such as additives or flavourings, are present in small quantities, perhaps as low as 5% or even 1%, resulting in the design space being restricted to only part of the mixture simplex. Hau and Box (1990) discussed the construction of experimental designs for situations where constraints are placed on the design variables. They considered projecting standard response surface designs, including factorial designs and central composite designs, into the restricted design space, and showed that the desirable property of block orthogonality is preserved by the projections considered. Here we present a number of examples of projection designs and illustrate their use when some of the ingredients are restricted to small values, such that the design space is restricted to a sub-region within the usual simplex in the mixture variables.

11.
In experiments with mixtures that involve process variables, if the response function is expressed as the sum of a function of the mixture components and a function of the process variables, then the parameters in the mixture part and in the process part can be estimated independently using orthogonal block designs. This paper is concerned with such a block design for parameter estimation in the mixture part of a quadratic mixture model for three mixture components. The behaviour of the eigenvalues of the moment matrix of the design is investigated in detail, the design is optimized according to E- and A-optimality criteria, and the results are compared with a known result on D-optimality. It is found that this block design is robust with respect to these different optimality criteria against the shifting of experimental points. As a result, we recommend experimental points of the form (a, b, c) in the simplex S2, where c = 0, b = 1 - a, and a can be any value in the range 0.17 ± 0.02.
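A hedged sketch of the kind of calculation behind such comparisons: assemble the Scheffé quadratic model matrix for a three-component design, form the moment matrix M = X'X/n, and read D-, A- and E-criterion values off its eigenvalues. The candidate design used here (the simplex vertices plus the six permutations of (a, 1 − a, 0) with a = 0.17) only illustrates the recommended point form and is not the paper's block design.

```python
# Hedged sketch: Scheffe quadratic model matrix for a three-component design,
# with D-, A- and E-criteria computed from the eigenvalues of the moment
# matrix M = X'X/n. The candidate points are illustrative only.
from itertools import permutations
import numpy as np

def scheffe_quadratic(points):
    return np.array([[x1, x2, x3, x1 * x2, x1 * x3, x2 * x3] for x1, x2, x3 in points])

def criteria(points):
    X = scheffe_quadratic(points)
    eig = np.linalg.eigvalsh(X.T @ X / len(points))   # eigenvalues of the moment matrix
    return {"D": np.prod(eig),       # maximise the determinant
            "A": np.sum(1.0 / eig),  # minimise the trace of the inverse
            "E": eig.min()}          # maximise the smallest eigenvalue

a = 0.17
design = [(1, 0, 0), (0, 1, 0), (0, 0, 1)] + sorted(set(permutations((a, 1 - a, 0.0))))
print(criteria(design))
```

Sweeping a over a small grid around 0.17 and recomputing the three criteria is a quick way to probe the robustness claim numerically.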

12.
A supersaturated design is a factorial design in which the number of effects to be estimated is greater than the available number of experimental runs. It is used in many experiments for screening purposes, i.e., for studying a large number of factors and then identifying the active ones. The goal with such a design is to identify just the few factors under consideration that have dominant effects, and to do so at minimum cost. While most of the literature on supersaturated designs has focused on the construction of designs and their optimality, the data analysis of such designs remains at an early stage. In this paper, we incorporate model complexity into the analysis of supersaturated designs by assuming generalized linear models for a Bernoulli response, analyzing main-effects designs and simultaneously discovering the effects that are significant.
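As a hedged stand-in for such an analysis (not necessarily the authors' procedure), the sketch below fits an L1-penalised logistic regression to a Bernoulli response on a supersaturated main-effects layout, so that only a few factors are declared active; the design matrix here is a random ±1 placeholder rather than a real supersaturated design.

```python
# Hedged stand-in for analysing a supersaturated design with a Bernoulli
# response: an L1-penalised logistic regression keeps only a few main effects,
# mimicking the screening goal (the paper's complexity-penalised GLM procedure
# may differ). Requires scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_runs, n_factors = 14, 23                               # more factors than runs: supersaturated
X = rng.choice([-1.0, 1.0], size=(n_runs, n_factors))    # placeholder +/-1 design
eta = 1.5 * X[:, 2] - 2.0 * X[:, 7]                      # only factors 3 and 8 are active
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.3)
model.fit(X, y)
active = np.flatnonzero(np.abs(model.coef_[0]) > 1e-8)
print("declared active factors:", active + 1)
```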

13.
In this paper we consider screening experiments where a two-level fractional factorial design is to be used to identify significant factors in an experimental process and where the runs in the experiment are to occur in blocks of equal size. A simple method based on the foldover technique is given for constructing resolution IV orthogonal and non-orthogonal blocked designs, and examples are given to illustrate the process.
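A minimal sketch of the foldover step itself: build a two-level fraction from basic columns and generators, append its complete sign reversal, and treat the two halves as blocks. Combining a resolution III fraction with its full foldover yields resolution IV; the generators and block sizes below are illustrative, and the paper's specific blocking constructions are not reproduced.

```python
# Minimal sketch of the foldover technique: an illustrative 2^(7-4) fraction
# plus its complete sign reversal, with the two halves labelled as blocks.
import itertools
import numpy as np

def fractional_factorial(k_basic, generators):
    """Rows of +/-1 runs: k_basic full-factorial columns plus generated columns."""
    base = np.array(list(itertools.product([-1, 1], repeat=k_basic)))
    gen_cols = [np.prod(base[:, list(cols)], axis=1) for cols in generators]
    return np.column_stack([base] + gen_cols)

# 2^(7-4) resolution III: basic factors A, B, C; D = AB, E = AC, F = BC, G = ABC
design = fractional_factorial(3, [(0, 1), (0, 2), (1, 2), (0, 1, 2)])
foldover = -design                               # reverse every sign
full = np.vstack([design, foldover])             # 16 runs, resolution IV overall
blocks = np.repeat([1, 2], len(design))          # one block per half
print(full.shape, np.bincount(blocks)[1:])
```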

14.
As an effective tool for data storage, processing, and computing, ontology has been used in many fields of computer science and information technology. Owing to its powerful performance in semantic query and knowledge extraction, domain ontologies have been built for various disciplines such as biology, pharmaceutics, geography and chemistry, and have been smoothly employed in their engineering applications. In these ontology applications, we aim to obtain an optimal ontology function which maps each ontology concept to a real number, and then determine the similarity between concepts from the distance between their corresponding real numbers. In former ontology learning approaches, all the instances in the training sample have equal status in the learning process. In this article, we present the disequilibrium multi-dividing ontology algorithm, in which the important ontology data are highlighted during learning while the less relevant ontology data tend to be eliminated. Four experiments are designed to test the usefulness of our disequilibrium multi-dividing algorithm from the angles of ontology similarity measuring and ontology mapping construction.

15.
This paper describes the design philosophy of, and current issues concerning, a knowledge acquisition system named kaiser. The system is an intelligent workbench with which domain experts can themselves construct knowledge bases for classification tasks. It first learns classification knowledge inductively from examples given by a human expert, then analyzes the result and the learning process based on abstract domain knowledge which is also given by the expert. Based on this analysis, it asks sophisticated questions for acquiring new knowledge. The queries stimulate the human expert and help him to revise the learned results, control the learning process, and prepare new examples and domain knowledge. Viewed from an AI perspective, the system aims at integrating similarity-based inductive learning and explanation-based deductive reasoning by guiding inductive inference with theoretical and/or heuristic knowledge about the domain. This interactive induce-evaluate-ask cycle produces a rational interview which promotes incremental acquisition of domain knowledge as well as efficient induction of operational and reasonable knowledge proved by the domain knowledge.

16.
The current work deals with the modelling of response error components in supervised interview-reinterview surveys. The model considers several stages of an interactive process to obtain and record a response. The response process is evaluated as a controller-interviewer-respondent-interviewer-controller interaction setting under a supervised interviewing process. The allocation of controllers, interviewers and respondents is made by a hierarchical design for the interview-reinterview process. In addition, a coder error component is added to the proposed model. The proposed model operates under two major sub-models, namely an error detection model and a response model.

17.
Many products are mixtures of several components (ingredients). Characteristics of the product, such as the strength of steel, the efficacy of a chemical pesticide, or the viscosity of a liquid detergent, depend only on the relative proportions of the components in the mixture. Studying changes in a product's properties caused by varying the ingredient proportions is the objective of performing mixture experiments. The inherent restriction that the component proportions sum to unity leads to different design strategies from those usually employed with independent factors, where factorial arrangements are quite common. Experimental designs for exploring the entire mixture simplex region, as well as for exploring only a subregion of the simplex, are presented. In those cases where four or more components are considered and a subregion is to be investigated, computer-aided designs are the rule rather than the exception. Design criteria based on the properties (variance and bias) of the prediction equation are mentioned briefly, and some suggestions are made for future research in mixture experiments.
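One standard design for exploring the entire simplex mentioned above is the {q, m} simplex-lattice; a minimal generator is sketched below.

```python
# Minimal sketch: generate the {q, m} simplex-lattice design, a standard way to
# cover the whole mixture simplex; each component takes proportions 0, 1/m, ..., 1
# and every point's proportions sum to one.
from itertools import product

def simplex_lattice(q, m):
    points = []
    for combo in product(range(m + 1), repeat=q):
        if sum(combo) == m:
            points.append(tuple(c / m for c in combo))
    return points

# {3, 2} lattice: the three vertices plus the three edge midpoints, 6 runs
for pt in simplex_lattice(3, 2):
    print(pt)
```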

18.
Adaptive cluster sampling (ACS) is considered to be the most suitable sampling design for the estimation of rare, hidden, clustered and hard-to-reach population units. The main characteristic of this design is that it may select more meaningful samples and provide more efficient estimates for the field investigator than other conventional sampling designs. In this paper, we propose a generalized estimator with a single auxiliary variable for the estimation of the variance of a rare, hidden and highly clustered population under an ACS design. Expressions for the approximate bias and mean square error are derived, and efficiency comparisons are made with other existing estimators. A numerical study is carried out on a real population of aquatic birds together with an artificial population generated by a Poisson cluster process. The results of the numerical study show that the proposed generalized variance estimator provides considerably better results than the competing estimators.

19.
Two types of symmetry can arise when the proportions of mixture components are constrained by upper and lower bounds. These two types of symmetry are shown to be useful for blocking first-order designs, as well as for finding the centroid of the experimental region. Orthogonal blocking of first-order mixture designs provides a method of including process variables in the mixture experiment, with the mixture terms orthogonal to the process factors. Symmetric regions are used to develop spherical and rotatable response surface designs for mixtures. The central composite design and designs based on the icosahedron and the dodecahedron are given for four-component mixtures. The uniform shell designs are three-level designs when applied to mixture experiments.
