Similar Documents
Found 20 similar documents (search time: 343 ms)
1.
Robust parameter design, originally proposed by Taguchi (1987. System of Experimental Design, vols. I and II. UNIPUB, New York), is an off-line production technique for reducing variation and improving a product's quality by using product arrays. However, the use of product arrays results in an exorbitant number of runs. To overcome the drawbacks of the product array several scientists proposed the use of combined arrays, where the control and noise factors are combined in a single array. In this paper, we use certain orthogonal arrays that are embedded into Hadamard matrices as combined arrays, in order to identify a model that contains all the main effects (control and noise) and their control-by-noise interactions with high efficiency. Aliasing of effects in each case is also discussed.

2.
Robust parameter design, originally proposed by Taguchi (1987), is an offline production technique for reducing variation and improving a product's quality. To achieve this objective Taguchi proposed the use of product arrays. However, the product array approach results in an exorbitant number of runs. To overcome the drawbacks of the product array, Welch, Yu, Kang and Sacks (1990), Shoemaker, Tsui and Wu (1991) and Montgomery (1991a) proposed the use of combined arrays, where the control factors and noise factors are combined in a single array. In this paper we study the concept of the combined array for an intermediate class of designs where n ≡ 1 (mod 4), n ≡ 2 (mod 4) and n ≡ 3 (mod 4). The designs presented in this paper, though not orthogonal, offer a great reduction in the run-size.
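As a toy illustration of the run-count argument in the two abstracts above (the factor counts and the particular half-fraction below are invented for illustration, not taken from either paper), a crossed product array multiplies the sizes of the control and noise arrays, while a combined array studies all factors in one design:

```python
from itertools import product

# Hypothetical illustration: a crossed (product) array runs every control
# setting against every noise setting, so run counts multiply.
control_array = list(product([-1, 1], repeat=3))   # 2^3 = 8 control settings
noise_array = list(product([-1, 1], repeat=2))     # 2^2 = 4 noise settings

product_runs = [(c, n) for c in control_array for n in noise_array]
print(len(product_runs))   # 8 * 4 = 32 runs

# A combined array treats all 5 factors (control + noise) in one design;
# a single half-fraction (defining relation I = ABCDE) needs only 16 runs.
combined_runs = [r for r in product([-1, 1], repeat=5)
                 if r[0] * r[1] * r[2] * r[3] * r[4] == 1]
print(len(combined_runs))  # 16 runs
```

The combined array halves the budget here while still allowing the control and noise main effects and selected control-by-noise interactions to be estimated.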

3.
The combined array provides a powerful, more statistically rigorous alternative to Taguchi's crossed-array approach to robust parameter design. The combined array assumes a single linear model in the control and the noise factors. One may then find conditions for the control factors which will minimize an appropriate loss function that involves the noise factors. The most appropriate loss function is often simply the resulting process variance, recognizing that the noise factors are actually random effects in the process. Because the major focus of such an experiment is to optimize the estimated process variance, it is vital to understand the resulting prediction properties. This paper develops the mean squared error for the estimated process variance for the combined array approach, under the assumption that the model is correctly specified. Specific combined arrays are compared for robustness. A practical example outlines how this approach may be used to select appropriate combined arrays within a particular experimental situation.
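The variance-minimization step described above can be sketched as follows, assuming the standard combined-array response model y = β₀ + x′β + z′γ + x′Δz + ε with random noise z of mean zero and covariance Σz, so that Var(y | x) = (γ + Δ′x)′ Σz (γ + Δ′x) + σ². All coefficient values are invented for illustration:

```python
import numpy as np

# Hedged sketch of the usual combined-array response model; the numbers
# below are made up, not taken from the paper.
sigma2 = 0.5
g = np.array([1.0, -0.5])        # noise-factor main effects (gamma)
D = np.array([[0.8, 0.0],        # control-by-noise interactions (Delta)
              [-0.4, 0.6]])
Sigma_z = np.eye(2)              # assumed noise covariance

def process_variance(x):
    v = g + D.T @ x              # slope of y in z at control setting x
    return float(v @ Sigma_z @ v + sigma2)

print(process_variance(np.array([0.0, 0.0])))  # 1.0 + 0.25 + 0.5 = 1.75

# Choosing x to cancel the noise slope leaves only the residual variance.
x_opt = np.linalg.solve(D.T, -g)
print(process_variance(x_opt))   # approximately sigma2 = 0.5
```

The point of the abstract is that this minimized quantity is itself an estimate, so its mean squared error matters when comparing candidate combined arrays.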

4.
Robust parameter design, originally proposed by Taguchi [System of Experimental Design, Vols. I and II, UNIPUB, New York, 1987], is an offline production technique for reducing variation and improving a product's quality by using product arrays. However, the use of product arrays results in an exorbitant number of runs. To overcome this drawback, several scientists proposed the use of combined arrays, where the control and noise factors are combined in a single array. In this paper, we use non-isomorphic orthogonal arrays as combined arrays, in order to identify a model that contains all the main effects (control and noise), their control-by-noise interactions and their control-by-control interactions with high efficiency. Some cases where the control-by-control-by-noise interactions are of interest are also considered.

5.
Taguchi's robust design technique, also known as parameter design, focuses on making product and process designs insensitive (i.e., robust) to hard-to-control variations. In some applications, however, his approach of modeling expected loss and the resulting “product array” experimental format leads to unnecessarily expensive and less informative experiments. The response model approach to robust design proposed by Welch, Yu, Kang, and Sacks (1990), Box and Jones (1990), Lucas (1989), and Shoemaker, Tsui and Wu (1991) offers more flexibility and economy in experiment planning and more informative modeling. This paper develops a formal basis for the graphical data-analytic approach presented in Shoemaker et al. In particular, we decompose overall response variation into components representing the variability contributed by each noise factor, and show when this decomposition allows us to use individual control-by-noise interaction plots to minimize response variation. We then generalize the control-by-noise interaction plots to extend their usefulness, and develop a formal analysis strategy using these plots to minimize response variation.

6.
Robust parameter design has been widely used to improve the quality of products and processes. Although a product array, in which an orthogonal array for control factors is crossed with an orthogonal array for noise factors, is commonly used for parameter design experiments, this may lead to an unacceptably large number of experimental runs. The compound noise strategy proposed by Taguchi [System of Experimental Design: Engineering Methods to Optimize Quality and Minimize Costs, UNIPUB/Kraus International, White Plains, New York, 1987] can be used to reduce the number of experimental runs. In this strategy, a compound noise factor is formed based on the directionality of the effects of noise factors. However, the directionality is usually unknown in practice. Recently, Singh et al. [Compound noise: Evaluation as a robust parameter design method, Qual. Reliab. Eng. Int. 23 (2007), 387–398] proposed a random compound noise strategy, in which a compound noise factor is formed by randomly selecting a setting of the levels of noise factors. The present paper evaluates the random compound noise strategy in terms of the precision of the estimators of the response mean and the response variance. In addition, the variances of the estimators in the random compound noise strategy are compared with those in the n-replication design. The random compound noise strategy is shown to have smaller variances of the estimators than the 2-replication design, especially when the control-by-noise interactions are strong.
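A minimal sketch of the random compound noise idea, under the assumption (ours, not necessarily the authors' exact construction) that the two levels of the compound factor are a randomly drawn noise setting and its mirror image:

```python
import random

# Hypothetical sketch: instead of crossing each control run with every
# noise combination, draw ONE random noise-level setting and use it and
# its sign-reversed mirror as the two levels of a compound noise factor.
random.seed(1)
n_noise = 4                                      # number of noise factors
setting = [random.choice([-1, 1]) for _ in range(n_noise)]
compound_levels = {
    +1: setting,                                 # compound factor "high"
    -1: [-s for s in setting],                   # mirrored setting, "low"
}
print(compound_levels[+1], compound_levels[-1])
```

Each control setting is then run only at these two compound levels, so the noise dimension costs 2 runs rather than 2^4 = 16, at the price of some loss of information about individual noise effects.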

7.
Genichi Taguchi has emphasized the use of designed experiments in several novel and important applications. In this paper we focus on the use of statistical experimental designs in designing products to be robust to environmental conditions. The engineering concept of robust product design is very important because it is frequently impossible or prohibitively expensive to control or eliminate variation resulting from environmental conditions. Robust product design enables the experimenter to discover how to modify the design of the product to minimize the effect due to variation from environmental sources. In experiments of this kind, Taguchi's total experimental arrangement consists of a cross-product of two experimental designs: an inner array containing the design factors and an outer array containing the environmental factors. Except in situations where both these arrays are small, this arrangement may involve a prohibitively large amount of experimental work. One of the objectives of this paper is to show how this amount of work can be reduced. In this paper we investigate the applicability of split-plot designs for this particular experimental situation. Consideration of the efficiency of split-plot designs and an examination of several variants of split-plot designs indicates that experiments conducted in a split-plot mode can be of tremendous value in robust product design, since they not only enable the contrasts of interest to be estimated efficiently but the experiments can also be considerably easier to conduct than the designs proposed by Taguchi.

8.
The two experimental methods most commonly used for reducing the effect of noise factors on a response of interest Y aim either to estimate a model of the variability (V(Y), or an associated function) that is transmitted by the noise factors, or to estimate a model of the relationship between the response (Y) and all the control and noise factors involved therein. Both methods aim to determine which control factor conditions minimise the noise factors' effect on the response of interest, and a series of analytical guidelines are established to reach this end. Product array designs allow robustness problems to be solved in both ways, but require a large number of experiments. Thus, practitioners tend to choose more economical designs that only allow them to model the response surface for Y. The general assumption is that both methods would lead to similar conclusions. In this article we present a case that utilises a design based on a product design and for which the conclusions yielded by the two analytical methods are quite different. This example casts doubt on the guidelines that experimental practice follows when using either of the two methods. Based on this example, we show the causes behind these discrepancies and we propose a number of guidelines to help researchers in the design and interpretation of robustness problems when using either of the two methods.

9.
An approach to the analysis of time-dependent ordinal quality score data from robust design experiments is developed and applied to an experiment from commercial horticultural research, using concepts of product robustness and longevity that are familiar to analysts in engineering research. A two-stage analysis is used to develop models describing the effects of a number of experimental treatments on the rate of post-sales product quality decline. The first stage uses a polynomial function on a transformed scale to approximate the quality decline for an individual experimental unit using derived coefficients and the second stage uses a joint mean and dispersion model to investigate the effects of the experimental treatments on these derived coefficients. The approach, developed specifically for an application in horticulture, is exemplified with data from a trial testing ornamental plants that are subjected to a range of treatments during production and home-life. The results of the analysis show how a number of control and noise factors affect the rate of post-production quality decline. Although the model is used to analyse quality data from a trial on ornamental plants, the approach developed is expected to be more generally applicable to a wide range of other complex production systems.

10.
Illumina BeadArrays are becoming an increasingly popular microarray platform due to their high data quality and relatively low cost. One distinct feature of Illumina BeadArrays is that each array has thousands of negative control bead types containing oligonucleotide sequences that are not specific to any target genes in the genome. This design provides a way of directly estimating the distribution of the background noise. In the literature on background correction for BeadArray data, the information from negative control beads is either ignored, used in a naive way that can lead to a loss in efficiency, or the noise is assumed to be normally distributed. However, we show with real data that the noise can be skewed. In this study, we propose an exponential-gamma convolution model for background correction of Illumina BeadArray data. Using both simulated and real data examples, we show that the proposed method can improve the signal estimation and detection of differentially expressed genes when the signal to noise ratio is large and the noise has a skewed distribution.
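A minimal simulation of the modeling assumption above, with all parameter values invented: observed intensity is an exponential signal plus gamma (hence right-skewed, non-normal) background noise, the kind of skewness the negative-control beads make detectable:

```python
import numpy as np

# Hedged sketch of the exponential-gamma convolution idea; the scale and
# shape parameters below are illustrative only, not fitted to BeadArray data.
rng = np.random.default_rng(7)
signal = rng.exponential(scale=100.0, size=50_000)      # true signal
noise = rng.gamma(shape=2.0, scale=30.0, size=50_000)   # skewed background
observed = signal + noise                               # what the bead reports

# Theoretical skewness of Gamma(shape=2) noise is 2/sqrt(2) ≈ 1.41,
# clearly nonzero, so a normal-noise background model would be misspecified.
m = noise.mean()
skew = np.mean((noise - m) ** 3) / noise.std() ** 3
print(round(skew, 2))
```

Background correction then amounts to deconvolving the signal distribution from this skewed noise distribution, estimated from the negative-control beads.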

11.
We present an almost sure central limit theorem for the product of the partial sums of m-dependent random variables. In order to obtain the main result, we prove a corresponding almost sure central limit theorem for a triangular array.

12.
This paper concerns the calculation of Bayes estimators of ratios of outcome proportions generated by the replication of an arbitrary tree-structured compound Bernoulli experiment under a multinomial-type sampling scheme. Here the compound Bernoulli experiment is treated as a collection of linear sequences of independent generalized Bernoulli trials having Dirichlet type 1 prior probability distributions. A method of obtaining a closed-form expression of the cumulative distribution function of the ratio of proportions – from its Meijer G-function representation – is described. Bayes point and interval estimators are directly obtained from the properties of the distribution function as well as its related probability density function. In addition, the density function is used to derive the probability mass function of the predictive distribution of any two associated outcome categories of the experiment – under an inverse multinomial-type sampling scheme. An illustrative numerical example concerning a Bayesian analysis of a simple tree-structured mortality model for medical patients who have suffered an acute myocardial infarction (heart attack) is also included.

13.
To compare several promising product designs, manufacturers must measure their performance under multiple environmental conditions. In many applications, a product design is considered to be seriously flawed if its performance is poor for any level of the environmental factor. For example, if a particular automobile battery design does not function well under temperature extremes, then a manufacturer may not want to put this design into production. Thus, this paper considers the measure of a product's quality to be its worst performance over the levels of the environmental factor. We develop statistical procedures to identify (a near) optimal product design among a given set of product designs, i.e., the manufacturing design that maximizes the worst product performance over the levels of the environmental variable. We accomplish this by intuitive procedures based on the split-plot experimental design (and the randomized complete block design as a special case); split-plot designs have the essential structure of a product array and the practical convenience of local randomization. Two classes of statistical procedures are provided. In the first, the δ-best formulation of selection problems, we determine the number of replications of the basic split-plot design that are needed to guarantee, with a given confidence level, the selection of a product design whose minimum performance is within a specified amount, δ, of the performance of the optimal product design. In particular, if the difference between the quality of the best and second best manufacturing designs is δ or more, then the procedure guarantees that the best design will be selected with specified probability. 
For applications where a split-plot experiment that involves several product designs has been completed without the planning required of the δ-best formulation, we provide procedures to construct a ‘confidence subset’ of the manufacturing designs; the selected subset contains the optimal product design with a prespecified confidence level. The latter is called the subset selection formulation of selection problems. Examples are provided to illustrate the procedures.

14.
Taguchi's quality engineering concepts are of great importance in designing and improving product quality and processes. However, most of the controversy and mystique have been centred on Taguchi's tactics. This research proposes an extension to the on-going research by investigating the probability of identifying insignificant factors as significant, or the so-called alpha error, with the L16 orthogonal array for the larger-the-better type response variable using simulation. The response variables in the L16 array are generated from a normal distribution with the same mean and standard deviation. Consequently, the null hypothesis that all factors in the L16 array will be identified as insignificant is true. Simulation results, however, reveal that some insignificant factors are wrongly identified as significant with a very high probability, which may produce a risky parameter design. Therefore, efficient and more valid statistical tactics should be developed to put Taguchi's important quality engineering concepts into practice.
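A hedged simulation in the spirit of this abstract: with all 15 columns of a 16-run two-level array truly inert, the chance of flagging at least one factor as significant is already high. The ±1.96 cutoff on the standardized effect is our assumption, not necessarily the exact test used in the paper:

```python
import numpy as np

# Build a 16x16 Hadamard matrix by the Sylvester construction; its 15
# non-constant columns play the role of the factor columns of an L16 array.
rng = np.random.default_rng(0)
H = np.array([[1]])
for _ in range(4):
    H = np.kron(np.array([[1, 1], [1, -1]]), H)
X = H[:, 1:]                               # 15 contrast columns

sigma, n_sims, hits = 1.0, 2000, 0
for _ in range(n_sims):
    y = rng.normal(0.0, sigma, size=16)    # null case: no factor matters
    effects = X.T @ y / 8                  # mean(+1 half) - mean(-1 half)
    z = effects / (sigma / 2)              # each effect ~ N(0, sigma^2/4)
    hits += np.any(np.abs(z) > 1.96)
print(hits / n_sims)                       # roughly 1 - 0.95^15 ≈ 0.54
```

Even with a nominal 5% test per column, testing 15 inert columns flags at least one of them more than half the time, which is the alpha-error inflation the abstract warns about.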

15.
The Tweedie compound Poisson distribution is a subclass of the exponential dispersion family with a power variance function, in which the value of the power index lies in the interval (1,2). It is well known that the Tweedie compound Poisson density function is not analytically tractable, and numerical procedures that allow the density to be accurately and quickly evaluated did not appear until fairly recently. Unsurprisingly, there has been little statistical literature devoted to full maximum likelihood inference for Tweedie compound Poisson mixed models. To date, the focus has been on estimation methods in the quasi-likelihood framework. Further, Tweedie compound Poisson mixed models involve an unknown variance function, which has a significant impact on hypothesis tests and predictive uncertainty measures. The estimation of the unknown variance function is thus of independent interest in many applications. However, quasi-likelihood-based methods are not well suited to this task. This paper presents several likelihood-based inferential methods for the Tweedie compound Poisson mixed model that enable estimation of the variance function from the data. These algorithms include the likelihood approximation method, in which both the integral over the random effects and the compound Poisson density function are evaluated numerically; and the latent variable approach, in which maximum likelihood estimation is carried out via the Monte Carlo EM algorithm, without the need for approximating the density function. In addition, we derive the corresponding Markov chain Monte Carlo algorithm for a Bayesian formulation of the mixed model. We demonstrate the use of the various methods through a numerical example, and conduct an array of simulation studies to evaluate the statistical properties of the proposed estimators.
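Although the Tweedie compound Poisson density is intractable, the variable itself is easy to simulate as a Poisson sum of gamma variables; this sketch (with invented parameter values) checks the simulated mean and variance against their closed-form counterparts:

```python
import numpy as np

# Hedged sketch: Y = G_1 + ... + G_N with N ~ Poisson(lam) and
# G_i ~ Gamma(shape, scale) is Tweedie with power index p = (shape+2)/(shape+1),
# which always lies in (1, 2). Parameter values below are made up.
rng = np.random.default_rng(42)
lam, shape, scale = 2.0, 3.0, 0.5

def rtweedie_cp(size):
    n = rng.poisson(lam, size=size)        # number of gamma summands
    out = np.zeros(size)                   # N = 0 gives an exact zero (mass at 0)
    pos = n > 0
    out[pos] = rng.gamma(n[pos] * shape, scale)  # sum of n gammas is Gamma(n*shape)
    return out

y = rtweedie_cp(200_000)
mu = lam * shape * scale                    # theoretical mean  = 3.0
var = lam * shape * (shape + 1) * scale**2  # theoretical variance = 6.0
p = (shape + 2) / (shape + 1)               # power index = 1.25
print(round(y.mean(), 2), round(y.var(), 2), p)
```

The point mass at zero plus a continuous positive part is exactly what makes the density awkward to evaluate, even though simulation is trivial.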

16.
Intersection matrices help identify the common graphical structure of two or more objects. They arise naturally in a variety of settings. Several examples of their use in a computer algebra environment are given. These include: simplifying an expression involving array products, automating cumulant calculations, determining the behaviour of an expected value operator and identifying model hierarchy in a factorial experiment. The emphasis is placed on the graphical structure, and the symmetry of arrays helps reduce the complexity of the graphical problem.

17.
The error distribution is generally unknown in deconvolution problems with real applications. A separate independent experiment is thus often conducted to collect the additional noise data in those studies. In this paper, we study nonparametric deconvolution estimation from a contaminated sample coupled with an additional noise sample. A ridge-based kernel deconvolution estimator is proposed and its asymptotic properties are investigated depending on the error magnitude. We then present a data-driven bandwidth selection algorithm that combines the bootstrap method with the idea of simulation extrapolation. The finite sample performance of the proposed methods and the effects of error magnitude are evaluated through simulation studies. A real data analysis for a gene Illumina BeadArray study is performed to illustrate the use of the proposed methods.

18.
Minimum aberration designs are preferred in practice, especially when it is desired to carry out a multi-factor experiment using a small number of runs. Several authors have considered the construction of minimum aberration designs. Some used computer algorithms and some listed good designs found by exhaustive search. We propose a simple method to obtain minimum aberration designs for experiments of size less than or equal to thirty-two. Here, we use an ordered sequence of columns from an orthogonal array to design experiments and blocked experiments. When the method is implemented in MS Excel, minimum aberration designs can be easily achieved.
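A small sketch of how word-length patterns, the quantity sequentially minimized by the minimum aberration criterion, can be computed from a design's generators; the two candidate 2^(6−2) designs below are textbook-style examples, not designs from the paper:

```python
from itertools import combinations

# Factors A..F are bits 0..5; a defining word is the XOR of its factor bits.
def word(s):                                # e.g. "ABCE" -> bitmask
    m = 0
    for ch in s:
        m ^= 1 << (ord(ch) - ord('A'))
    return m

def wlp(generators, n_factors=6):
    """Word-length pattern (A3, A4, ...) of the defining contrast subgroup."""
    gs = [word(g) for g in generators]
    words = set()
    for r in range(1, len(gs) + 1):         # all nonempty products of generators
        for combo in combinations(gs, r):
            w = 0
            for g in combo:
                w ^= g                      # product of words = XOR of bitmasks
            words.add(w)
    pattern = [0] * (n_factors + 1)
    for w in words:
        pattern[bin(w).count("1")] += 1
    return pattern[3:]                      # (A3, A4, A5, A6)

print(wlp(["ABCE", "BCDF"]))   # [0, 3, 0, 0]: resolution IV
print(wlp(["ABE", "ACDF"]))    # [1, 1, 1, 0]: resolution III, more aberration
```

The first design has no three-letter words, so it dominates the second under the minimum aberration ordering; sorting an ordered sequence of candidate columns by this criterion is essentially what the proposed spreadsheet method automates.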

19.
20.
We propose generalized linear models for time or age-time tables of seasonal counts, with the goal of better understanding seasonal patterns in the data. The linear predictor contains a smooth component for the trend and the product of a smooth component (the modulation) and a periodic time series of arbitrary shape (the carrier wave). To model rates, a population offset is added. Two-dimensional trends and modulation are estimated using a tensor product B-spline basis of moderate dimension. Further smoothness is ensured using difference penalties on the rows and columns of the tensor product coefficients. The optimal penalty tuning parameters are chosen based on minimization of a quasi-information criterion. Computationally efficient estimation is achieved using array regression techniques, avoiding excessively large matrices. The model is applied to female death rates in the US due to cerebrovascular diseases and respiratory diseases.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号