Similar Documents
20 similar documents found (search time: 762 ms)
1.
Process control involves repeated hypothesis testing based on successive samples. It is not hypothesis testing in the strict sense, however, since it also deals with the detection of non-random patterns of variation, and does so in a changing population, whereas classical hypothesis testing is principally meant for a static population. Dr Walter A. Shewhart introduced a graphic method for carrying out such testing in a changing population in 1924. This graphic method came to be known as the control chart and is widely used throughout the world today for process management. Process control techniques have advanced considerably since then. In particular, for the case of more than one variable, process control techniques were developed mainly by Hicks (1955), Jackson (1956 and 1959) and Montgomery and Wadsworth (1972), building on the pioneering work of Hotelling in 1931. Most of this work concerns multivariate variable control charts with multivariate normality as the underlying distribution. For the case of more than one attribute variable, some work on hypothesis testing was done by Mahalanobis (1946), also based on Hotelling's T2 test. This paper extends the concept of the Mahalanobis distance to the multinomial distribution and thereby proposes a multivariate attribute control chart.
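The multinomial link can be made concrete: with a generalized inverse of the singular multinomial covariance matrix, the squared Mahalanobis distance of observed category counts from their expectation reduces to Pearson's chi-square statistic. A minimal sketch, where the function name and the example counts are illustrative rather than taken from the paper:

```python
import numpy as np

def multinomial_mahalanobis_sq(counts, p):
    """Squared Mahalanobis distance of observed multinomial counts
    from their expectation; with a generalized inverse of the
    (singular) multinomial covariance this reduces to Pearson's
    chi-square statistic."""
    counts = np.asarray(counts, float)
    p = np.asarray(p, float)
    n = counts.sum()
    expected = n * p
    return float(((counts - expected) ** 2 / expected).sum())

# 100 items classified into 3 attribute categories (illustrative data)
d2 = multinomial_mahalanobis_sq([18, 52, 30], [0.2, 0.5, 0.3])
```

A control chart would plot this distance for each sample against a chi-square-based limit.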

2.
This paper extends an analysis of variance for categorical data (CATANOVA) procedure to multidimensional contingency tables involving several factors and a response variable measured on a nominal scale. Using an appropriate measure of total variation for multinomial data, partial and multiple association measures are developed as R2 quantities which parallel the analogous statistics in multiple linear regression for quantitative data. In addition, test statistics are derived in terms of these R2 criteria. Finally, this CATANOVA approach is illustrated within the context of a three-way contingency table from a multicenter clinical trial.
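Using the Gini concentration as the measure of total variation for multinomial data (as in Light and Margolin's CATANOVA), the R2 quantity is between-group variation over total variation. A sketch for the simplest one-factor case, with an illustrative 2x2 count table:

```python
import numpy as np

def catanova_r2(table):
    """CATANOVA R2 for a groups-by-response-categories count table,
    using the Gini-based total variation of Light and Margolin:
    TSS = n/2 - (1/2n) * sum_j C_j^2 over response column totals C_j."""
    N = np.asarray(table, float)
    n = N.sum()
    tss = n / 2.0 - (N.sum(axis=0) ** 2).sum() / (2.0 * n)                # total
    wss = n / 2.0 - ((N ** 2).sum(axis=1) / (2.0 * N.sum(axis=1))).sum()  # within
    return (tss - wss) / tss                                              # between / total

# rows: two groups; columns: two nominal response categories (illustrative)
r2 = catanova_r2([[8, 2], [3, 7]])
```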

3.
Recent studies have shown that the X-bar control chart with a variable sampling interval detects shifts in the process mean faster than the traditional X-bar chart. These studies are usually based on the assumption that the process data are independently and normally distributed. However, many situations in practice violate these assumptions. In this study, a methodology is developed to economically design a variable sampling interval X-bar control chart that takes into consideration correlated, non-normal sample data. An example is provided to illustrate the solution procedure. A sensitivity analysis on the input parameters (i.e., the cost and the process parameters) is performed, taking into account the effects of non-normality and correlation on the optimal design of the chart.
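The adaptive rule at the heart of a VSI chart can be sketched very simply: sample again sooner when the standardized subgroup mean lands in a warning region. The region boundaries and interval lengths below are illustrative choices, not the paper's optimized values:

```python
def next_interval(z, warn=1.0, ctrl=3.0, short=0.25, long_=1.0):
    """Variable-sampling-interval rule for an X-bar chart: sample
    sooner when the standardized subgroup mean falls in the warning
    region, and signal beyond the control limit."""
    if abs(z) >= ctrl:
        return 0.0          # out-of-control signal: react immediately
    return short if abs(z) >= warn else long_
```

An economic design would choose `warn`, `short`, and `long_` to minimize expected cost per unit time.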

4.
Control charts are the statistical process control tool for monitoring stochastic systems, where drift in the characteristics of the output may be due to one or several assignable causes. Although much research has been done on the design of control charts, the economic statistical design of the T2 control chart under the Weibull shock model with multiple assignable causes has not yet been addressed. We therefore address it in this paper by developing a cost model based on a variable sampling interval, and we give an example to support the practical use of the T2 chart under the Weibull shock model with multiple assignable causes. Optimal values of the design parameters were derived by minimizing the average cost per unit of time over different combinations of the Weibull distribution parameter values. The cost models under a single assignable cause and under multiple assignable causes were then compared using the same cost and time parameters. A sensitivity analysis was also conducted to evaluate how the loss cost and the design parameters vary with changes in the cost, time and Weibull distribution parameters.
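Under a Weibull shock model, the probability that an assignable cause strikes during the next sampling interval depends on the elapsed time, unlike the memoryless exponential case. A minimal sketch of that conditional probability, with illustrative parameter values:

```python
import math

def shock_prob(t, h, beta, theta):
    """Probability that an assignable cause occurs in (t, t+h]
    given no shock up to time t, under a Weibull shock model with
    shape beta and scale theta (cumulative hazard H(u) = (u/theta)^beta)."""
    H = lambda u: (u / theta) ** beta
    return 1.0 - math.exp(H(t) - H(t + h))

# With beta > 1 the hazard increases, so later intervals are riskier
p_early = shock_prob(0.0, 1.0, beta=2.0, theta=1.0)
p_late = shock_prob(1.0, 1.0, beta=2.0, theta=1.0)
```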

5.
ADE-4: a multivariate analysis and graphical display software
We present ADE-4, a multivariate analysis and graphical display software. Multivariate analysis methods available in ADE-4 include usual one-table methods like principal component analysis and correspondence analysis, spatial data analysis methods (using a total variance decomposition into local and global components, analogous to Moran and Geary indices), discriminant analysis and within/between groups analyses, many linear regression methods including lowess and polynomial regression, multiple and PLS (partial least squares) regression and orthogonal regression (principal component regression), projection methods like principal component analysis on instrumental variables, canonical correspondence analysis and many other variants, coinertia analysis and the RLQ method, and several three-way table (k-table) analysis methods. Graphical display techniques include an automatic collection of elementary graphics corresponding to groups of rows or to columns in the data table, thus providing a very efficient way for automatic k-table graphics and geographical mapping options. A dynamic graphic module allows interactive operations like searching, zooming, selection of points, and display of data values on factor maps. The user interface is simple and homogeneous among all the programs; this contributes to making the use of ADE-4 very easy for non-specialists in statistics, data analysis or computer science.
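The one-table methods ADE-4 starts from, such as principal component analysis, reduce to a singular value decomposition of the centered data table. A minimal numpy sketch (not ADE-4 code):

```python
import numpy as np

def pca(X, k):
    """Minimal principal component analysis via SVD of centered data:
    returns component scores, loadings, and variance explained."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :k] * s[:k]
    explained = s[:k] ** 2 / (s ** 2).sum()
    return scores, Vt[:k], explained

# Three rows lying exactly on a line: one component captures everything
X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
scores, loadings, explained = pca(X, k=1)
```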

6.
High-content automated imaging platforms allow the multiplexing of several targets simultaneously to generate multi-parametric single-cell data sets over extended periods of time. Typically, standard simple measures such as the mean value of all cells at every time point are calculated to summarize the temporal process, resulting in loss of the time dynamics of the single cells. Multiple experiments are performed but observation time points are not necessarily identical, leading to difficulties when integrating summary measures from different experiments. We used functional data analysis to analyze continuous curve data, where the temporal process of a response variable for each single cell can be described using a smooth curve. This allows analyses to be performed on continuous functions, rather than on original discrete data points. Functional regression models were applied to determine common temporal characteristics of a set of single cell curves, and random effects were employed in the models to explain variation between experiments. The aim of the multiplexing approach is to simultaneously analyze the effect of a large number of compounds in comparison to control to discriminate between their mode of action. Functional principal component analysis based on T-statistic curves for pairwise comparison to control was used to study time-dependent compound effects.
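The core step, replacing each cell's irregularly spaced observations by a smooth curve evaluated on a common grid, can be sketched with a simple polynomial fit standing in for a proper spline or functional basis:

```python
import numpy as np

def smooth_curves(times_list, values_list, grid, deg=2):
    """Fit a polynomial to each cell's irregular time points and
    evaluate it on a common grid, so curves from different
    experiments become directly comparable (a crude stand-in for
    the spline bases used in functional data analysis)."""
    curves = []
    for t, y in zip(times_list, values_list):
        coef = np.polyfit(t, y, deg)
        curves.append(np.polyval(coef, grid))
    return np.vstack(curves)

# One cell observed at t = 0..3 with response y = t^2 (illustrative)
t = np.array([0.0, 1.0, 2.0, 3.0])
curves = smooth_curves([t], [t ** 2], grid=np.array([0.0, 0.5, 1.0]))
```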

7.
There are many situations in which a researcher would like to analyse data from a two‐way layout. Often, the assumptions of linearity and normality may not hold. To address such situations, we introduce a semiparametric model. The model extends the well‐known density ratio model from the one‐way to the two‐way layout and provides a useful framework for semiparametric analysis of variance type problems under order restrictions. In particular, the likelihood ratio order is emphasized. The model enables highly efficient inference without resorting to fully parametric assumptions or the use of transformations. Estimation and testing procedures under order restrictions are developed and investigated in detail. It is shown that the model is robust to misspecification, and several simulations suggest that it performs well in practice. The methodology is illustrated using two data examples; in the first, the response variable is discrete, whereas in the second, it is continuous.

8.

A vast majority of the literature on the design of sampling plans by variables assumes that the distribution of the quality characteristic variable is normal, and that only its mean varies while its variance is known and remains constant. But, for many processes, the quality variable is nonnormal, and either one or both of the mean and the variance of the variable can vary randomly. In this paper, an optimal economic approach is developed for the design of plans for acceptance sampling by variables having Inverse Gaussian (IG) distributions. The advantage of developing an IG distribution based model is that it can be used for diverse quality variables ranging from highly skewed to almost symmetrical. We assume that the process has two independent assignable causes, one of which shifts the mean of the quality characteristic variable of a product and the other shifts the variance. Since a product quality variable may be affected by any one or both of the assignable causes, three different likely cases of shift (mean shift only, variance shift only, and both mean and variance shift) have been considered in the modeling process. For all of these likely scenarios, mathematical models giving the cost of using a variable acceptance sampling plan are developed. The cost models are optimized in selecting the optimal sampling plan parameters, such as the sample size, and the upper and lower acceptance limits. A large set of numerical example problems is solved for all the cases. Some of these numerical examples are also used in depicting the consequences of: 1) using the assumption that the quality variable is normally distributed when the true distribution is IG, and 2) using sampling plans from the existing standards instead of the optimal plans derived by the methodology developed in this paper. Sensitivities of some of the model input parameters are also studied using the analysis of variance technique. The information obtained on the parameter sensitivities can be used by model users in prudently allocating resources for the estimation of input parameters.
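The basic object an acceptance-sampling cost model needs is the probability that a lot is accepted given the true process state. A known-sigma, normal-theory sketch is shown below for orientation; the IG case treated in the paper has no such simple closed form, and all numbers here are illustrative:

```python
import math

def accept_prob(mu, sigma, n, low, high):
    """Probability that a variables sampling plan accepts the lot:
    the mean of a sample of n items must fall inside the acceptance
    limits (known-sigma normal-theory sketch, not the paper's IG model)."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    se = sigma / math.sqrt(n)
    return Phi((high - mu) / se) - Phi((low - mu) / se)

# In-control process, symmetric limits at +/- 1.96 standard errors
p = accept_prob(mu=0.0, sigma=1.0, n=4, low=-0.98, high=0.98)
```

An economic design would trade this acceptance probability off against sampling and rejection costs when choosing n and the limits.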

9.
Since the product quality of many industrial processes depends upon more than one dependent variable or attribute, they are either multivariate or multi-attribute in nature. Although multivariate statistical process control is receiving increased attention in the literature, little work has been done to deal with multi-attribute processes. In this article, we develop a new methodology to monitor multi-attribute processes. To do this, first we transform multi-attribute data in a way that their marginal probability distributions have almost zero skewness. Then, we estimate the transformed covariance matrix and apply the well-known T2 control chart. In order to illustrate the proposed method and evaluate its performance, we use two simulation experiments and compare the results with those from both the MNP chart and the χ2 control chart.
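After the skewness-reducing transformation, monitoring proceeds with the standard Hotelling T2 statistic for a subgroup mean. A minimal sketch:

```python
import numpy as np

def hotelling_t2(xbar, mu, S, n):
    """Hotelling T2 statistic for the mean of a subgroup of size n:
    T2 = n * (xbar - mu)' S^{-1} (xbar - mu)."""
    d = np.asarray(xbar, float) - np.asarray(mu, float)
    return float(n * d @ np.linalg.solve(S, d))

# Identity covariance makes the answer easy to check by hand
t2 = hotelling_t2([1.0, 1.0], [0.0, 0.0], np.eye(2), n=5)
```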

10.
The multivariate adaptive regression splines (MARS) model is one of the well-known additive non-parametric models that can deal with highly correlated and nonlinear datasets successfully. From our previous analyses, we have seen that lasso-type MARS (LMARS) can be a strong alternative to the Gaussian graphical model (GGM), a well-known probabilistic method that describes the steady-state behaviour of complex biological systems via the lasso regression. In this study, we extend our original LMARS model by taking into account the second-order interaction effects of genes as the representative of the feed-forward loop in biological networks. In this way, we can describe both linear and nonlinear activations of the genes in the same model. We evaluate the performance of our new model on simulated and real systems of different dimensions, and then compare the accuracy of the estimates with GGM and LMARS outputs. The results show the advantage of this new model over its close alternatives.
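MARS builds its fit from hinge (truncated linear) basis functions, and the second-order extension adds products of hinges as interaction terms. A sketch of such a degree-2 design matrix; in real MARS the knot locations below would be selected by forward search rather than fixed:

```python
import numpy as np

def hinge(x, knot):
    """Truncated linear (hinge) basis function max(0, x - knot)."""
    return np.maximum(0.0, x - knot)

def mars_design(x1, x2, k1, k2):
    """Design matrix with an intercept, two first-order hinge terms,
    and one second-order (interaction) term, as in degree-2 MARS."""
    h1, h2 = hinge(x1, k1), hinge(x2, k2)
    return np.column_stack([np.ones_like(x1), h1, h2, h1 * h2])

D = mars_design(np.array([0.0, 2.0]), np.array([0.0, 3.0]), k1=1.0, k2=1.0)
```

Ordinary (or lasso-penalized, as in LMARS) least squares on such a matrix gives the fitted model.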

11.
Analysis of a large dimensional contingency table is quite involved. Models corresponding to layers of a contingency table are easier to analyze than the full model. Relationships between the interaction parameters of the full log-linear model and those of its corresponding layer models are obtained. These relationships not only simplify the analysis but also help interpret various hierarchical models. We obtain these relationships for layers of one variable, and extend the results to the case when layers of more than one variable are considered. We also establish, under conditional independence, relationships between the interaction parameters of the full model and those of the corresponding marginal models. We discuss the concept of merging of factor levels based on these interaction parameters. Finally, we use the relationships between layer models and the full model to obtain conditions for level merging based on layer interaction parameters. Several examples are discussed to illustrate the results.

12.
Elasticity (or the elasticity function) is a new concept that characterizes the probability distribution of any random variable, in the same way as characteristic functions and hazard and reverse hazard functions do. Initially defined for continuous variables, the definition of elasticity needed to be extended, and its properties studied, for the case of discrete variables. A first attempt to define discrete elasticity is seen in Veres-Ferrer and Pavía (2014a). This paper develops that definition and makes a comparative study of its properties, relating them to the properties of discrete hazard and reverse hazard, both as defined in Chechile (2011). As with continuous elasticity, one of the most interesting properties of discrete elasticity concerns the rate of change it undergoes throughout its support. This paper centers on the study of this rate of change and develops a set of properties that allows a detailed analysis. Finally, it addresses the calculation of the elasticity for the variable obtained by discretizing a continuous random variable, distinguishing whether its domain lies in the positive or the negative reals.
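Taking discrete elasticity to be e(x) = x · P(X = x) / F(x), the natural discrete analogue of the continuous form x · f(x) / F(x) (this specific form is an assumption here, not quoted from Veres-Ferrer and Pavía), it can be computed directly from a probability mass function:

```python
def discrete_elasticity(pmf):
    """Discrete elasticity e(x) = x * P(X = x) / F(x), computed over
    the sorted support of a pmf given as a {value: probability} dict.
    This definition is an illustrative assumption, one discrete
    analogue of the continuous elasticity x * f(x) / F(x)."""
    xs = sorted(pmf)
    F, out = 0.0, {}
    for x in xs:
        F += pmf[x]          # running cdf value F(x)
        out[x] = x * pmf[x] / F
    return out

# Two-point illustrative pmf
e = discrete_elasticity({1: 0.5, 2: 0.5})
```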

13.

Profile monitoring is applied when the quality of a product or a process can be determined by the relationship between a response variable and one or more independent variables. In most Phase II monitoring approaches, it is assumed that the process parameters are known. However, this assumption is not valid in many real-world applications; in fact, the process parameters should be estimated from the in-control Phase I samples. In this study, the effect of parameter estimation on the performance of four Phase II control charts for monitoring multivariate multiple linear profiles is evaluated. In addition, since the accuracy of the parameter estimation has a significant impact on the performance of Phase II control charts, a new cluster-based approach is developed to address this effect. Moreover, we evaluate and compare the performance of the proposed approach with a previous approach in terms of two metrics, the average of the average run length and its standard deviation, which account for practitioner-to-practitioner variability. In this approach, it is not necessary to know the distribution of the chart statistic. Therefore, in addition to ease of use, the proposed approach can be applied to other types of profiles. The superior performance of the proposed method compared to the competing one is shown in terms of all metrics. Based on the results obtained, our method yields less bias with small-variance Phase I estimates compared to the competing approach.
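The run-length metrics used in such comparisons are easiest to see in the simplest known-parameter case: for a Shewhart chart on individual normal observations, the average run length (ARL) is the reciprocal of the per-sample signal probability. A sketch:

```python
import math

def shewhart_arl(k=3.0, shift=0.0):
    """ARL of a Shewhart chart on individual normal observations with
    k-sigma limits; shift is the process mean shift in sigma units
    (shift=0 gives the in-control ARL)."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    p = Phi(-k - shift) + 1.0 - Phi(k - shift)   # per-sample signal prob.
    return 1.0 / p

arl0 = shewhart_arl()            # classic in-control value, about 370
arl1 = shewhart_arl(shift=1.0)   # much shorter after a 1-sigma shift
```

When parameters are estimated from Phase I data, this ARL itself becomes random, which is why the paper reports its average and standard deviation across practitioners.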

14.
Chemical transport through human skin can play a significant role in human exposure to toxic chemicals in the workplace, as well as to chemical/biological warfare agents in the battlefield. The viability of transdermal drug delivery also relies on chemical transport processes through the skin. Models of percutaneous absorption are needed for risk-based exposure assessments and drug-delivery analyses, but previous mechanistic models have been largely deterministic. A probabilistic, transient, three-phase model of percutaneous absorption of chemicals has been developed to assess the relative importance of uncertain parameters and processes that may be important to risk-based assessments. Penetration routes through the skin that were modeled include the following: (1) intercellular diffusion through the multiphase stratum corneum; (2) aqueous-phase diffusion through sweat ducts; and (3) oil-phase diffusion through hair follicles. Uncertainty distributions were developed for the model parameters, and a Monte Carlo analysis was performed to simulate probability distributions of mass fluxes through each of the routes. Sensitivity analyses using stepwise linear regression were also performed to identify model parameters that were most important to the simulated mass fluxes at different times. This probabilistic analysis of percutaneous absorption (PAPA) method has been developed to improve risk-based exposure assessments and transdermal drug-delivery analyses, where parameters and processes can be highly uncertain.
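The probabilistic element can be sketched with a one-route steady-state case: dermal flux J = Kp · C (Fick's law), with Monte Carlo sampling propagating uncertainty in the permeability coefficient Kp into a distribution of fluxes. The lognormal parameters below are illustrative assumptions, not the paper's distributions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Steady-state dermal flux J = Kp * C (Fick's law). Sample an uncertain
# permeability coefficient Kp from an illustrative lognormal distribution
# and propagate it through the flux equation by Monte Carlo.
Kp = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=10_000)  # cm/h (assumed)
C = 10.0                           # mg/cm^3, applied concentration (assumed)
flux = Kp * C                      # mg/(cm^2 h)

lo, hi = np.percentile(flux, [2.5, 97.5])   # uncertainty interval for J
```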

15.
Non-symmetric correspondence analysis (NSCA) is a useful technique for analysing a two-way contingency table. Frequently, there is more than one predictor variable; in this paper, we consider two categorical variables as predictor variables and one response variable. Interaction represents the joint effects of the predictor variables on the response variable. When interaction is present, the interpretation of the main effects is incomplete or misleading. To separate the main effects and the interaction term, we introduce a method that, starting from the coordinates of multiple NSCA and using a two-way analysis of variance without interaction, allows a better interpretation of the impact of the predictor variables on the response variable. The proposed method has been applied to a well-known three-way contingency table proposed by Bockenholt and Bockenholt, in which they cross-classify subjects by attitude towards abortion, number of years of education and religion. We analyse the case where the variables education and religion influence a person's attitude towards abortion.

16.
It has recently been shown that Shewhart control charts with a variable sampling interval (VSI) perform better than the traditional Shewhart chart with a fixed sampling interval in detecting process shifts. Most of this research assumes normality and independence of the process data or measurements and that the process is subject to only one assignable cause. In practice, these assumptions usually do not hold, and some recent studies have addressed only one or two of these violations. In this paper, the situation in which the process data are correlated and follow a non-normal distribution, and in which there are multiple assignable causes in the process, is considered. For this case, a cost model for the economic design of the VSI X-bar control chart is developed, where the Burr distribution is employed to represent the non-normal distribution of the process data. To obtain the optimal values of the design parameters, a genetic algorithm is employed in which the response surface methodology is applied. A numerical example is presented to show the applicability and effectiveness of the proposed methodology. Sensitivity analysis is also carried out to evaluate the effects of cost and input parameters on the performance of the chart.
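The Burr XII distribution used here for the non-normal process data has the closed-form CDF F(x) = 1 − (1 + x^c)^(−k) for x > 0, which makes the probabilities needed by the economic model easy to evaluate:

```python
def burr12_cdf(x, c, k):
    """CDF of the Burr XII distribution, F(x) = 1 - (1 + x^c)^(-k)
    for x > 0; a flexible family often used to represent non-normal
    process data in economic control chart design."""
    return 1.0 - (1.0 + x ** c) ** (-k)

# Probability a Burr-distributed measurement falls below 1.0
p = burr12_cdf(1.0, c=1.0, k=2.0)
```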

17.
Two important models in survival analysis are that of general random censorship and the proportional hazards submodel of Koziol and Green. The difference between the two models is the way in which the lifetime variable is censored (informative versus non-informative censoring). In this paper the two viewpoints are combined into a new model which allows the lifetimes to be censored by two types of variables, one of which censors in an informative way and the other one in a non-informative way. The lifetimes and the censoring times are also allowed to depend on covariates in a very general way. The estimator for the conditional distribution of the lifetimes generalizes that of Gather and Pawlitschko (1998. Metrika 48, 189–209), who recently studied the situation without covariate information. Results obtained are the uniform strong consistency (with rate), an almost sure asymptotic representation and the weak convergence of the process.

18.
Taguchi's statistic has long been known to be a more appropriate measure of association for ordinal variables than the Pearson chi-squared statistic. There is therefore some advantage in using Taguchi's statistic in the correspondence analysis context when a two-way contingency table includes at least one ordinal categorical variable. The aim of this paper, considering a contingency table with two ordinal categorical variables, is to show a decomposition of Taguchi's index into linear, quadratic and higher-order components. This decomposition has been developed using Emerson's orthogonal polynomials. Moreover, two case studies are analyzed to explain the methodology.

19.
When using the co-twin control design for the analysis of event times, one needs a model to address the possible within-pair association. One such model is the shared frailty model, in which the random frailty variable creates the desired within-pair association. Standard inference for this model requires independence between the random effect and the covariates. We study how violations of this assumption affect inference for the regression coefficients and conclude that substantial bias may occur. We propose an alternative way of making inference for the regression parameters by using a fixed-effects model for survival in matched pairs. Fitting this model to data generated from the frailty model provides consistent and asymptotically normal estimates of the regression coefficients, whether or not the independence assumption is met.

20.
Bivariate responses in repeated measures data have usually been analysed as two separate responses in the literature. The two responses usually tend to be related in some way, and analysing the data jointly presents an opportunity to account for the joint movement, which may affect the conclusions reached compared to analysing the responses separately. In this paper, a bivariate regression model with random effects (a linear mixed model) is used to detect any change in prescribing habits at the general practice (family medicine) level in the UK due to an educational intervention, given repeated measures data before and after the intervention and a control group. The message of the intervention was to increase the prescribing of one drug while simultaneously decreasing the prescribing of another. The effects of modelling a bivariate auto-regressive process are evaluated.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号