Similar Documents
20 similar documents found (search time: 421 ms)
1.
In mixture experiments the properties of mixtures are usually studied by mixing the amounts of the mixture components that are required to obtain the necessary proportions. This paper considers the impact of inaccuracies in discharging the required amounts of the mixture components on the statistical analysis of the data. It shows how the regression calibration approach can be used to minimize the resulting bias in the model and in the estimates of the model parameters, as well as to find correct estimates of the corresponding variances. Its application is made difficult by the complex structure of these errors. We also show how knowledge of the form of the model bias allows for choosing a manufacturing setting for a mixture product that is not biased and has a smaller signal-to-noise ratio.

2.
In experiments with mixtures involving process variables, orthogonal block designs may be used to allow estimation of the parameters of the mixture components independently of estimation of the parameters of the process variables. In the class of orthogonally blocked designs based on pairs of suitably chosen Latin squares, the optimal designs consist primarily of binary blends of the mixture components, regardless of how many ingredients are available for the mixture. This paper considers ways of modifying these optimal designs so that some or all of the runs used in the experiment include a minimum proportion of each mixture ingredient. The designs considered are nearly optimal in the sense that the experimental points are chosen to follow ridges of maxima in the optimality criteria. Specific designs are discussed for mixtures involving three and four components and distinctions are identified for different designs with the same optimality properties. The ideas presented for these specific designs are readily extended to mixtures with q>4 components.
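The role of binary blends in such optimal designs can be illustrated with the D-criterion det(X'X) for a Scheffé quadratic model on the {3,2} simplex-lattice, whose points are exactly the pure components and the 50:50 binary blends. This is a generic sketch, not the paper's Latin-square construction; the design points and model are textbook choices, not taken from the abstract.

```python
from itertools import combinations

def scheffe_quadratic_row(x):
    # Scheffé quadratic mixture model: terms x_i plus all cross-products x_i*x_j
    q = len(x)
    return list(x) + [x[i] * x[j] for i, j in combinations(range(q), 2)]

def det(m):
    # determinant via Gaussian elimination with partial pivoting
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        if abs(m[p][c]) < 1e-12:
            return 0.0
        if p != c:
            m[c], m[p] = m[p], m[c]
            d = -d
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return d

# {3,2} simplex-lattice: the pure components and the three 50:50 binary blends
points = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
          (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
X = [scheffe_quadratic_row(p) for p in points]
k = len(X[0])
XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
print(det(XtX))   # D-criterion value for this six-point design
```

Here det(X'X) works out to (1/64)^2: dropping any of the binary blends leaves the quadratic terms unidentifiable and drives the criterion to zero, which is one way to see why binary blends carry the quadratic information.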

3.
In experiments with mixtures that involve process variables, if the response function is expressed as the sum of a function of mixture components and a function of process variables, then the parameters in the mixture part and in the process part can be estimated independently using orthogonal block designs. This paper is concerned with such a block design for parameter estimation in the mixture part of a quadratic mixture model for three mixture components. The behaviour of the eigenvalues of the moment matrix of the design is investigated in detail, the design is optimized according to E- and A-optimality criteria, and the results are compared together with a known result on D-optimality. It is found that this block design is robust with respect to these different optimality criteria against the shifting of experimental points. As a result, we recommend experimental points of the form (a, b, c) in the simplex S2, where c=0, b=1-a, and a can be any value in the range 0.17 ± 0.02.

4.
Control charts are statistical tools to monitor a process or a product. However, some processes cannot be controlled by monitoring a single characteristic; instead, they need to be monitored using profiles. Economic-statistical design of profile monitoring means determining the parameters of a profile monitoring scheme such that total costs are minimized while statistical measures maintain proper values. Since varying the sampling interval usually increases the effectiveness of profile monitoring, the economic-statistical design of variable sampling interval (VSI) profile monitoring is investigated in this paper. An extended Lorenzen–Vance function is used to model total costs in the VSI model, where the average time to signal is employed as the statistical measure of the obtained profile monitoring scheme. The two sampling intervals, the number of set points, and the parameters of the control charts used in profile monitoring are the decision variables obtained through the economic-statistical model. A genetic algorithm is employed to optimize the model, and an experimental design approach is used for tuning its parameters. Sensitivity analysis and numerical results indicate satisfactory performance for the proposed model.

5.
This article presents a case study of a chemical compound used in the delay mechanism that starts a rocket engine. The compound consists of a three-component mixture. Besides the component proportions, two process variables are considered. The aim of the study is to find the mixture component proportions and the levels of the process variables that bring the expected delay time as close as possible to the target value while minimizing the width of the prediction interval for the response. A linear regression model with normal responses was fitted. Through the model developed, the optimal component proportions and the levels of the process variables were determined. For model selection, the backward method combined with an information criterion proved to be efficient in the case under study.

6.
Rounding errors can have a considerable impact on statistical inference, especially when the data size is large, and the finite normal mixture model is important in many applied statistical problems, such as bioinformatics. In this article, we investigate the statistical impact of rounding errors on the finite normal mixture model with a known number of components, and we develop a new estimation method that yields consistent and asymptotically normal estimates of the unknown parameters from rounded data drawn from such models.
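A minimal, self-contained illustration of the effect (not the estimator proposed in the article): rounding data to a grid of width h inflates the sample variance by roughly h²/12, and the classical Sheppard correction removes most of that bias. The grid width, seed, and sample size below are arbitrary.

```python
import random
import statistics

random.seed(7)
h = 1.0                                      # rounding grid width
data = [random.gauss(0.0, 1.0) for _ in range(100_000)]
rounded = [round(x / h) * h for x in data]   # data as recorded after rounding

v_true = statistics.pvariance(data)
v_rounded = statistics.pvariance(rounded)    # inflated by about h**2 / 12
v_corrected = v_rounded - h ** 2 / 12        # Sheppard's correction

print(v_true, v_rounded, v_corrected)
```

With a coarse grid like h = 1 on unit-variance data, the inflation is about 0.083, large enough to matter for downstream inference, which is the phenomenon the abstract is addressing for mixtures.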

7.
Two types of symmetry can arise when the proportions of mixture components are constrained by upper and lower bounds. These two types of symmetry are shown to be useful for blocking first-order designs, as well as for finding the centroid of the experimental region. Orthogonal blocking of first-order mixture designs provides a method of including process variables in the mixture experiment, with the mixture terms orthogonal to the process factors. Symmetric regions are used to develop spherical and rotatable response surface designs for mixtures. The central composite design and designs based on the icosahedron and the dodecahedron are given for four-component mixtures. The uniform shell designs are three-level designs when applied to mixture experiments.

8.
We propose a semiparametric modeling approach for mixtures of symmetric distributions. The mixture model is built from a common symmetric density with different components arising through different location parameters. This structure ensures identifiability for mixture components, which is a key feature of the model as it allows applications to settings where primary interest is inference for the subpopulations comprising the mixture. We focus on the two-component mixture setting and develop a Bayesian model using parametric priors for the location parameters and for the mixture proportion, and a nonparametric prior probability model, based on Dirichlet process mixtures, for the random symmetric density. We present an approach to inference using Markov chain Monte Carlo posterior simulation. The performance of the model is studied with a simulation experiment and through analysis of a rainfall precipitation data set as well as with data on eruptions of the Old Faithful geyser.

9.
We use minimum message length (MML) estimation for mixture modelling. MML estimates are derived to choose the number of components in the mixture model that best describes the data and to estimate the parameters of the component densities for Gaussian mixture models. An empirical comparison of criteria prominent in the literature for estimating the number of components in a data set is performed. We have found that MML coding considerations allow the derivation of useful results to guide our implementation of a mixture modelling program. These advantages allow model search to be controlled based on the minimum variance for a component and the amount of data required to distinguish two overlapping components.
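The MML expressions themselves are specific to the paper, but the mechanics of choosing the number of components by penalizing fit against model complexity can be sketched with BIC as a stand-in, using a deliberately minimal one-dimensional EM fit. Everything below (data, initialization, iteration count) is a hypothetical toy, not the authors' program.

```python
import math
import random

def npdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_loglik(xs, k, iters=100):
    # minimal EM for a k-component 1-D Gaussian mixture; quantile initialization
    n = len(xs)
    s = sorted(xs)
    mean = sum(xs) / n
    mus = [s[(2 * j + 1) * n // (2 * k)] for j in range(k)]
    vs = [sum((x - mean) ** 2 for x in xs) / n] * k
    ws = [1.0 / k] * k
    ll = 0.0
    for _ in range(iters):
        ll, resp = 0.0, []
        for x in xs:                       # E-step: responsibilities
            dens = [ws[j] * npdf(x, mus[j], vs[j]) for j in range(k)]
            tot = sum(dens)
            ll += math.log(tot)
            resp.append([d / tot for d in dens])
        for j in range(k):                 # M-step: weighted updates
            nj = sum(r[j] for r in resp)
            ws[j] = nj / n
            mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            vs[j] = max(sum(r[j] * (x - mus[j]) ** 2
                            for r, x in zip(resp, xs)) / nj, 1e-6)
    return ll

def bic(xs, k):
    p = 3 * k - 1                          # k means, k variances, k-1 free weights
    return -2 * em_loglik(xs, k) + p * math.log(len(xs))

random.seed(1)
xs = ([random.gauss(0, 1) for _ in range(150)] +
      [random.gauss(6, 1) for _ in range(150)])
scores = {k: bic(xs, k) for k in (1, 2, 3)}
print(min(scores, key=scores.get))         # the selected number of components
```

MML replaces the BIC penalty with an explicit two-part code length, but the search pattern, fit each candidate k and keep the shortest description, is the same.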

10.
In the framework of model-based cluster analysis, finite mixtures of Gaussian components represent an important class of statistical models widely employed for dealing with quantitative variables. Within this class, we propose novel models in which constraints on the component-specific variance matrices define Gaussian parsimonious clustering models. Specifically, the proposed models are obtained by assuming that the variables can be partitioned into groups that are conditionally independent within components, producing component-specific variance matrices with a block-diagonal structure. This approach extends the methods for model-based cluster analysis and makes them more flexible and versatile. In this paper, Gaussian mixture models are studied under this assumption. Identifiability conditions are proved, and the model parameters are estimated through the maximum likelihood method using the Expectation-Maximization algorithm. The Bayesian information criterion is proposed for selecting the partition of the variables into conditionally independent groups, and the consistency of this criterion is proved under regularity conditions. A hierarchical algorithm is suggested for examining and comparing models with different partitions of the set of variables. A wide class of parsimonious Gaussian models is also presented by parameterizing the component variance matrices according to their spectral decomposition. The effectiveness and usefulness of the proposed methodology are illustrated with two examples based on real datasets.
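The parsimony gained from the block-diagonal assumption is easy to quantify: if the variables are partitioned into conditionally independent groups, only within-block covariances remain free. A small sketch under a hypothetical partition of six variables:

```python
def cov_params_full(d):
    # free parameters in an unrestricted d x d symmetric covariance matrix
    return d * (d + 1) // 2

def cov_params_block(block_sizes):
    # block-diagonal covariance: groups of variables are conditionally
    # independent within a component, so only within-block covariances remain
    return sum(b * (b + 1) // 2 for b in block_sizes)

d = 6
partition = [2, 2, 2]          # hypothetical partition of the six variables
assert sum(partition) == d
print(cov_params_full(d), cov_params_block(partition))
```

For six variables the count drops from 21 to 9 covariance parameters per component, and the saving is multiplied by the number of mixture components, which is the sense in which these models are parsimonious.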

11.
In this paper, the normal mixture model is utilized as an alternative distribution to represent the characteristics of daily stock returns over different bull and bear markets. First, we conduct a normality test on the return data and compare the Kolmogorov-Smirnov statistics of normal mixture models with different numbers of components. Second, we analyze the likely reasons why the parameters change over different sub-periods. Our empirical examination shows that the majority of the data series reject the normality assumption and that mixture models with three components model the behavior of daily returns more appropriately and stably. This result has both statistical and economic significance.
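A hedged sketch of the kind of check described here: compute a one-sample Kolmogorov-Smirnov statistic against a normal distribution fitted by moments, and observe that bimodal, mixture-like data gives a much larger statistic than genuinely normal data. The synthetic samples below stand in for the return series, which are not reproduced in the abstract.

```python
import math
import random

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_stat(xs, cdf):
    # D = sup |F_n(x) - F(x)|, checked just below and at each order statistic
    xs = sorted(xs)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

def fitted_ks(xs):
    # KS statistic against a normal fitted by the sample mean and sd
    mu = sum(xs) / len(xs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))
    return ks_stat(xs, lambda x: normal_cdf(x, mu, sigma))

random.seed(3)
unimodal = [random.gauss(0, 1) for _ in range(500)]
bimodal = ([random.gauss(-2, 0.5) for _ in range(250)] +
           [random.gauss(2, 0.5) for _ in range(250)])

d_uni = fitted_ks(unimodal)
d_bi = fitted_ks(bimodal)
print(round(d_uni, 3), round(d_bi, 3))   # the bimodal sample fits far worse
```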

12.
The power function distribution is often used to study the reliability of electrical components. In this paper, we model a heterogeneous population using a two-component mixture of power function distributions. A comprehensive simulation scheme including a large number of parameter points is followed to highlight the properties and behavior of the estimates in terms of sample size, censoring rate, parameter size, and the proportions of the mixture components. The parameters of the power function mixture are estimated and compared using Bayes estimates. A simulated mixture data set with censored observations is generated by probabilistic mixing for computational purposes. Elegant closed-form expressions for the Bayes estimators and their variances are derived for the censored sample as well as for the complete sample. Some interesting comparisons and properties of the estimates are observed and presented. The system of three non-linear equations that must be solved iteratively to compute the maximum likelihood (ML) estimates is derived. The complete-sample expressions for the ML estimates and their variances are also given, and the components of the information matrix are constructed. Both uninformative and informative priors are assumed for the derivation of the Bayes estimators. A real-life mixture data example is also discussed. The posterior predictive distribution with the informative gamma prior is derived, and the equations required to find the lower and upper limits of the predictive intervals are constructed. The Bayes estimates are evaluated under the squared error loss function.
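The probabilistic-mixing step used to generate the simulated data can be sketched directly: the power function distribution on (0, 1) has F(x) = x^beta, so inverse-CDF sampling is just U^(1/beta), and a mixture draw picks a component with the mixing proportion. The parameters and the fixed censoring point below are hypothetical, not the paper's settings.

```python
import random

def rpower(beta):
    # power function distribution on (0, 1): F(x) = x**beta, so X = U**(1/beta)
    return random.random() ** (1.0 / beta)

def rmixture(n, p, beta1, beta2):
    # probabilistic mixing: each draw comes from component 1 with probability p
    return [rpower(beta1) if random.random() < p else rpower(beta2)
            for _ in range(n)]

random.seed(11)
sample = rmixture(1000, p=0.4, beta1=2.0, beta2=5.0)

x0 = 0.9                                  # hypothetical type-I censoring point
censored = [min(x, x0) for x in sample]   # values cut off at x0 are censored

print(len(sample), round(sum(sample) / len(sample), 3))
```

Since the power function mean is beta/(beta+1), the mixture mean here should be close to 0.4(2/3) + 0.6(5/6) ≈ 0.77, a quick sanity check on the mixing step.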

13.
Particle filters for mixture models with an unknown number of components
We consider the analysis of data under mixture models where the number of components in the mixture is unknown. We concentrate on mixture Dirichlet process models, and in particular we consider such models under conjugate priors. This conjugacy enables us to integrate out many of the parameters in the model, and to discretize the posterior distribution. Particle filters are particularly well suited to such discrete problems, and we propose the use of the particle filter of Fearnhead and Clifford for this problem. The performance of this particle filter, when analyzing both simulated and real data from a Gaussian mixture model, is uniformly better than the particle filter algorithm of Chen and Liu. In many situations it outperforms a Gibbs sampler. We also show how models without the required amount of conjugacy can be efficiently analyzed by the same particle filter algorithm.
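A stripped-down sketch of the sequential idea (with known component variance, a conjugate normal prior on the means, and crude keep-the-best pruning in place of the Fearnhead-Clifford optimal resampling step): each particle is a partition of the observations seen so far, propagated by scoring every possible cluster assignment of the next point with its Chinese-restaurant prior weight times the conjugate posterior predictive. All constants are hypothetical.

```python
import math
import random

# Hypothetical settings: known component variance SIGMA2, conjugate
# N(M0, TAU2) prior on component means, DP concentration ALPHA.
SIGMA2, M0, TAU2, ALPHA = 1.0, 0.0, 25.0, 1.0
N_PARTICLES = 100

def npdf(y, mu, var):
    return math.exp(-(y - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def predictive(y, n, s):
    # posterior predictive of y for a cluster with n members summing to s
    prec = 1.0 / TAU2 + n / SIGMA2
    mu_n = (M0 / TAU2 + s / SIGMA2) / prec
    return npdf(y, mu_n, 1.0 / prec + SIGMA2)

def particle_filter(ys):
    # a particle is (assignments, ((count, sum) per cluster), log-weight)
    particles = [((), (), 0.0)]
    for t, y in enumerate(ys):
        children = []
        for assign, stats, lw in particles:
            denom = t + ALPHA
            for j, (n, s) in enumerate(stats):   # join an existing cluster
                w = (n / denom) * predictive(y, n, s)
                new_stats = stats[:j] + ((n + 1, s + y),) + stats[j + 1:]
                children.append((assign + (j,), new_stats, lw + math.log(w)))
            w = (ALPHA / denom) * predictive(y, 0, 0.0)   # open a new cluster
            children.append((assign + (len(stats),), stats + ((1, y),),
                             lw + math.log(w)))
        # crude pruning: keep the N_PARTICLES highest-weight children
        # (Fearnhead and Clifford use an optimal resampling step instead)
        children.sort(key=lambda p: p[2], reverse=True)
        particles = children[:N_PARTICLES]
    return particles

random.seed(5)
ys = ([random.gauss(-5, 1) for _ in range(12)] +
      [random.gauss(5, 1) for _ in range(12)])
best = particle_filter(ys)[0]
print(len(best[1]))   # number of clusters in the highest-weight particle
```

Conjugacy is what makes each cluster reducible to a (count, sum) pair, so the posterior over partitions really is discrete and the filter never has to carry continuous parameters.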

14.
New methodology for fully Bayesian mixture analysis is developed, making use of reversible jump Markov chain Monte Carlo methods that are capable of jumping between the parameter subspaces corresponding to different numbers of components in the mixture. A sample from the full joint distribution of all unknown variables is thereby generated, and this can be used as a basis for a thorough presentation of many aspects of the posterior distribution. The methodology is applied here to the analysis of univariate normal mixtures, using a hierarchical prior model that offers an approach to dealing with weak prior information while avoiding the mathematical pitfalls of using improper priors in the mixture context.

15.
The experimental design to model the response of a mixture of four components in the presence of process variables is considered. Two different blocks of blends that are orthogonal for linear or quadratic blending are D-optimized. The two orthogonal blocks of blends are generalized and D-optimized in some cases (and possibly D-optimized in others) to deal with restrictions on the blending component proportions. The pair of orthogonal D-optimal blocks of blends can be used with an arbitrary number of process variables and requires a reduced number of observations.

16.
An economic design of a sign chart to control the median is proposed. Since the sign chart is distribution-free, it can easily be applied to any process without prior knowledge of the process quality distribution. The effect on loss cost observed for different shifts in location shows that the sign chart performs better for large shifts. The economic-statistical performance study reveals that the statistical performance of the sign chart can be improved considerably for moderate shifts in the process. A sensitivity study shows that the design is most sensitive to changes in the penalty loss cost and in the time required to search for and repair an assignable cause.
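The statistic behind a sign chart is distribution-free because, in control, the count of observations above the target median is Binomial(n, 1/2); control limits then come from binomial tail probabilities. A sketch with a hypothetical subgroup and a hypothetical per-tail false-alarm rate:

```python
from math import comb

def binom_cdf(k, n, p=0.5):
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def sign_chart_limits(n, alpha=0.005):
    # signal when S <= lcl or S >= ucl, each tail with probability <= alpha
    lcl = -1
    while binom_cdf(lcl + 1, n) <= alpha:
        lcl += 1
    ucl = n + 1
    while 1 - binom_cdf(ucl - 2, n) <= alpha:
        ucl -= 1
    return lcl, ucl

n = 20
lcl, ucl = sign_chart_limits(n)

# hypothetical subgroup of 20 measurements against a target median of 10.0
sample = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.6, 10.3, 9.7, 10.0,
          10.7, 10.2, 10.1, 9.6, 10.4, 10.8, 10.3, 10.5, 9.9, 10.2]
target_median = 10.0
s = sum(1 for x in sample if x > target_median)
print(lcl, ucl, s, s <= lcl or s >= ucl)   # limits, statistic, signal flag
```

For n = 20 and alpha = 0.005 per tail the limits are (3, 17); the subgroup above has 14 points over the target, so it does not signal even though it leans high, which matches the abstract's observation that sign charts are strongest for large shifts.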

17.
We investigate the likelihood function of small generalized Laplace laws and variance gamma Lévy processes in the short time framework. We prove the local asymptotic normality property in statistical inference for the variance gamma Lévy process under high-frequency sampling with its associated optimal convergence rate and Fisher information matrix. The location parameter is required to be given in advance for this purpose, while the remaining three parameters are jointly well behaved with an invertible Fisher information matrix. The results are discussed with relation to equivalent formulations of the variance gamma Lévy process, that is, as a time-changed Brownian motion and as a difference of two independent gamma processes.

18.
Control charts distinguish between the random and assignable causes of variation in a process. A real process may be affected by many characteristics and several assignable causes, so the economic-statistical design of a multiple control chart under a Burr XII shock model with multiple assignable causes is an appropriate candidate model. In this paper, we develop a cost model based on the optimization of the average cost per unit of time. The cost model under a single matched assignable cause is compared with the model under multiple assignable causes, using the same cost and time parameters. A sensitivity analysis is also presented, in which the variability of the loss cost and the design parameters is evaluated with respect to changes in the cost, time, and Burr XII distribution parameters.
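The Burr XII shock model enters through the distribution of the time until an assignable cause occurs; its CDF F(x) = 1 - (1 + x^c)^(-k) inverts in closed form, so occurrence times can be simulated directly. The shape parameters below are hypothetical, not the paper's.

```python
import random

def rburr12(c, k):
    # Burr XII: F(x) = 1 - (1 + x**c)**(-k), so x = ((1-u)**(-1/k) - 1)**(1/c)
    u = random.random()
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

def burr12_cdf(x, c, k):
    return 1.0 - (1.0 + x ** c) ** (-k)

random.seed(2)
c, k = 2.0, 3.0
times = [rburr12(c, k) for _ in range(5000)]

# sanity check against the closed-form median (2**(1/k) - 1)**(1/c)
median = (2.0 ** (1.0 / k) - 1.0) ** (1.0 / c)
frac = sum(1 for t in times if t <= median) / len(times)
print(round(frac, 2))
```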

19.
Blending experiments with mixtures in the presence of process variables are considered. We present an experimental design for quadratic (or linear) blending. The design, in two orthogonal blocks, is D-optimized in the case where there are no restrictions on the blending components, and a generalization in two orthogonal blocks is presented for the case of arbitrary restrictions on the blending components. The pair of orthogonal blocks can be used with an arbitrary number of process variables. The number of design points needed when different orthogonal blocks are used is usually smaller than when a single block is repeated at the various levels of the process variables.

20.
Many statistical agencies, survey organizations, and research centers collect data that suffer from item nonresponse and erroneous or inconsistent values. These data may be required to satisfy linear constraints, for example, bounds on individual variables and inequalities for ratios or sums of variables. Often these constraints are designed to identify faulty values, which then are blanked and imputed. The data also may exhibit complex distributional features, including nonlinear relationships and highly nonnormal distributions. We present a fully Bayesian, joint model for modeling or imputing data with missing/blanked values under linear constraints that (i) automatically incorporates the constraints in inferences and imputations, and (ii) uses a flexible Dirichlet process mixture of multivariate normal distributions to reflect complex distributional features. Our strategy for estimation is to augment the observed data with draws from a hypothetical population in which the constraints are not present, thereby taking advantage of computationally expedient methods for fitting mixture models. Missing/blanked items are sampled from their posterior distribution using the Hit-and-Run sampler, which guarantees that all imputations satisfy the constraints. We illustrate the approach using manufacturing data from Colombia, examining the potential to preserve joint distributions and a regression from the plant productivity literature. Supplementary materials for this article are available online.
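The Hit-and-Run step that keeps every imputation inside the linear constraints can be sketched on a toy polytope (x >= 0, x1 + x2 <= 1): pick a random direction, intersect the line through the current point with the constraint set, and draw uniformly on that chord. The constraint set here is hypothetical, chosen only so the chord endpoints have closed forms.

```python
import math
import random

def feasible(x, tol=1e-9):
    # hypothetical linear constraints: x >= 0 componentwise and x1 + x2 <= 1
    return all(v >= -tol for v in x) and x[0] + x[1] <= 1 + tol

def hit_and_run(x, steps):
    d = len(x)
    samples = []
    for _ in range(steps):
        # 1) random direction, uniform on the unit sphere
        u = [random.gauss(0, 1) for _ in range(d)]
        norm = math.sqrt(sum(v * v for v in u))
        u = [v / norm for v in u]
        # 2) chord [lo, hi]: the segment of the line x + t*u inside the region
        lo, hi = -1e9, 1e9
        for i in range(d):                 # x_i + t*u_i >= 0
            if u[i] > 0:
                lo = max(lo, -x[i] / u[i])
            elif u[i] < 0:
                hi = min(hi, -x[i] / u[i])
        su, sx = u[0] + u[1], x[0] + x[1]  # x1 + x2 + t*(u1+u2) <= 1
        if su > 0:
            hi = min(hi, (1 - sx) / su)
        elif su < 0:
            lo = max(lo, (1 - sx) / su)
        # 3) next state: uniform draw on the chord, feasible by construction
        t = random.uniform(lo, hi)
        x = [x[i] + t * u[i] for i in range(d)]
        samples.append(x)
    return samples

random.seed(4)
draws = hit_and_run([0.2, 0.2], steps=2000)
print(all(feasible(p) for p in draws))
```

Because each new state is drawn from the chord inside the polytope, every draw satisfies the constraints, which is the guarantee the abstract relies on for its imputations.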


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号