Similar Articles
20 similar articles found.
1.
In his book 'Out of the Crisis', the late Dr Edwards Deming asserted that 'if anyone adjusts a stable process to try to compensate for a result that is undesirable, or for a result that is extra good, the output will be worse than if he had left the process alone'. His famous funnel experiments supported this assertion. The development of the control chart by Dr Walter Shewhart stemmed from an approach made to him by the management of a Western Electric Company plant because of their awareness that adjustments made to processes often made matters worse. However, many industrial processes are such that the mean values of product quality characteristics shift and drift over time, so that, instead of the sequences of independent observations to which Deming's assertion applies, process owners are faced with autocorrelated data. The truth of Dr Deming's assertion is demonstrated, both theoretically and via computer simulation. The use of the Exponentially Weighted Moving Average (EWMA) for process monitoring is demonstrated and, for situations where process data exhibit autocorrelation, its use for feedback adjustment is discussed and demonstrated. Finally, successful applications of process improvement using EWMA-based control algorithms are discussed.
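A minimal sketch of the EWMA statistic computed on simulated autocorrelated (AR(1)) data; the smoothing constant, target value and AR coefficient are illustrative choices, not values from the paper.

```python
import numpy as np

def ewma(x, lam=0.2, start=None):
    """Return the EWMA sequence z_t = lam * x_t + (1 - lam) * z_{t-1}."""
    z = np.empty(len(x))
    prev = x[0] if start is None else start
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev
        z[t] = prev
    return z

# Illustrative autocorrelated data: an AR(1) process drifting around a target of 10
rng = np.random.default_rng(0)
n = 300
x = np.empty(n)
x[0] = 10.0
for t in range(1, n):
    x[t] = 10.0 + 0.8 * (x[t - 1] - 10.0) + rng.normal(0.0, 0.5)

z = ewma(x, lam=0.2, start=10.0)
# z[t-1] serves as a one-step-ahead forecast of x[t]; the forecast error can drive
# either a monitoring chart or a feedback adjustment back towards the target.
print(z[-5:])
```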

2.
This article assesses the impact that the late W. Edwards Deming had on statistical practice in general and industrial statistics in particular. It provides a direct-from-the-trenches comparison of Deming's goals with their realizations and a commentary on the relevance and challenges of Deming's concepts to today's industrial environment.

3.
When a process consists of several identical streams that are not highly correlated, an alternative to using separate control charts for each stream is to use a group control chart. Rather than plotting sample means from each stream at every time point, one can plot only the largest and/or smallest sample mean from among all the streams. Using the theory of stochastic processes and majorization together with numerical methods, the properties of a test that signals if r consecutive extreme values come from the same stream are examined. Both one- and two-sided cases are considered. Average run lengths (ARLs), the least favorable configuration of the stream (population) means, and the sample sizes necessary to achieve specified in-control and out-of-control ARLs are obtained. A test that signals if r-1 out of r consecutive extreme values come from the same stream is also considered.
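A minimal sketch of the one-sided run rule described above: signal when the largest sample mean has come from the same stream for r consecutive periods. The number of streams, the shift and the choice of r are illustrative, not taken from the paper.

```python
import numpy as np

def group_chart_signal(stream_means, r=4):
    """Return the first period at which the largest sample mean has come from
    the same stream for r consecutive periods (one-sided rule), or None."""
    top = np.argmax(stream_means, axis=1)   # index of the extreme stream in each period
    run = 1
    for t in range(1, len(top)):
        run = run + 1 if top[t] == top[t - 1] else 1
        if run >= r:
            return t
    return None

# Illustrative example: 5 nominally identical streams; stream 2 shifts upward at period 30
rng = np.random.default_rng(2)
means = rng.normal(0.0, 1.0, size=(60, 5))
means[30:, 2] += 2.0
print(group_chart_signal(means, r=4))
```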

4.
James Testa, Serials Review, 2013, 39(3):210-212
Abstract

For more than four decades, Thomson ISI (Institute for Scientific Information) has been committed to a fundamental mission: to provide essential products and services that enable access to and management of the highest quality, most relevant materials for all participants in the research process. In 1958, Dr. Eugene Garfield started ISI by borrowing five hundred dollars from Household Finance. Current Contents® of Chemical, Pharmaco-Medical & Life Sciences was the sole product, covering 286 journals. Today the Thomson ISI database covers more than 16,000 international journals, books, and proceedings in the sciences, social sciences, and arts and humanities. This article describes the processes and standards that result in ISI's abstracting and indexing services. Serials Review 2003; 29:210–212.

5.
Despite its importance, the modeling of categorical time series data has received little attention in the recent past. In this paper, we present a framework based on Pegram's operator [An autoregressive model for multilag Markov chains. Journal of Applied Probability 17, 350–362], which was originally proposed only to construct discrete AR(p) processes. We extend Pegram's operator to accommodate categorical processes with ARMA representations. We observe that the concept of correlation is not always suitable for categorical data. As a sensible alternative, we use the concept of mutual information, and introduce auto-mutual information to define the time series process of categorical data. Some model selection and inferential aspects are also discussed. We implement the developed methodologies to analyze a time series data set on infant sleep status.
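A minimal sketch of estimating the lag-k auto-mutual information of a categorical sequence from empirical (plug-in) frequencies; the example series and lag are illustrative, and the naive plug-in estimator shown is only an assumption, not necessarily the estimator used in the paper.

```python
import numpy as np
from collections import Counter

def auto_mutual_information(x, lag=1):
    """Plug-in estimate (in nats) of the mutual information between x_t and x_{t+lag}
    for a categorical sequence x."""
    pairs = list(zip(x[:-lag], x[lag:]))
    n = len(pairs)
    joint = Counter(pairs)
    left = Counter(a for a, _ in pairs)
    right = Counter(b for _, b in pairs)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * np.log(p_ab / ((left[a] / n) * (right[b] / n)))
    return mi

# Illustrative categorical series (e.g., sleep states coded as letters)
states = list("AAABBBAABBBCCCBBBAAACCCAAABBB")
print(auto_mutual_information(states, lag=1))
```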

6.
A practicing statistician looks at the multiple comparison controversy and related issues through the eyes of the users. The concept of consistency is introduced and discussed in relation to five of the more common multiple comparison procedures. All of the procedures are found to be inconsistent except the simplest procedure, the unrestricted least significant difference (LSD) procedure (or multiple t test). For this and other reasons the unrestricted LSD procedure is recommended for general use, with the proviso that it should be viewed as a hypothesis generator rather than as a method for simultaneous hypothesis generation and testing. The implications for Scheffé's test for general contrasts are also discussed, and a new recommendation is made.

7.
The importance of statistically designed experiments in industry has been well recognized. However, the use of 'design of experiments' is still not pervasive, owing in part to the inefficient learning process experienced by many non-statisticians. In this paper, the nature of design of experiments, in contrast to the usual statistical process control techniques, is discussed. It is then pointed out that for design of experiments to be appreciated and applied, appropriate approaches should be taken in training, learning and application. Perspectives based on the concepts of objective setting and design under constraints can be used to facilitate the experimenters' formulation of plans for collection, analysis and interpretation of empirical information. A review is made of the expanding role of design of experiments in the past several decades, with comparisons made of the various formats and contexts of experimental design applications, such as Taguchi methods and Six Sigma. The trend of development shows that, from the realm of scientific research to business improvement, the competitive advantage offered by design of experiments is being increasingly felt.

8.
Confidence intervals for location parameters are expanded (in either direction) to some "crucial" points and the resulting increase in the confidence coefficient investigated. Particular crucial points are chosen to illuminate some hypothesis testing problems. Special results are derived for the normal distribution with estimated variance and, in particular, for the problem of classifying treatments as better or worse than a control. For this problem the usual two-sided Dunnett procedure is seen to be inefficient. Suggestions are made for the use of already published tables for this problem. Mention is made of the use of expanded confidence intervals for all pairwise comparisons of treatments using an "honest ordering difference" rather than Tukey's "honest significant difference".

9.
O.D. Anderson, Statistics, 2013, 47(4):525-529
Conditions for the general Moving Average process, of order q, to be invertible or borderline non-invertible are deduced. These are termed the acceptability conditions. It turns out that they depend on the magnitude of the final moving average parameter, θ_q. If |θ_q| > 1, the process is not acceptable. Should |θ_q| = 1, the conditions, for any particular q, follow simply if use is made of the remainder theorem. When |θ_q| < 1, an appeal is made to Rouché's theorem to establish the conditions. Analogous stationarity results immediately follow for autoregressive processes.
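A minimal numerical sketch of checking these acceptability conditions by locating the roots of the MA polynomial θ(z) = 1 + θ_1 z + … + θ_q z^q: invertible if all roots lie strictly outside the unit circle, borderline non-invertible if none lie inside but some lie on it. The coefficients and tolerance below are illustrative.

```python
import numpy as np

def ma_acceptability(theta, tol=1e-10):
    """Classify an MA(q) process x_t = e_t + theta[0]*e_{t-1} + ... + theta[-1]*e_{t-q}
    via the roots of its MA polynomial theta(z) = 1 + theta_1*z + ... + theta_q*z^q."""
    coeffs = list(reversed(theta)) + [1.0]      # [theta_q, ..., theta_1, 1] for np.roots
    mods = np.abs(np.roots(coeffs))
    if np.all(mods > 1.0 + tol):
        return "invertible (acceptable)"
    if np.all(mods > 1.0 - tol):
        return "borderline non-invertible (acceptable)"
    return "not acceptable"

# |theta_q| > 1 forces at least one root inside the unit circle, as stated above:
print(ma_acceptability([0.5]))        # MA(1), theta_1 = 0.5 -> invertible
print(ma_acceptability([1.0]))        # theta_1 = 1          -> borderline
print(ma_acceptability([0.3, 1.4]))   # |theta_2| > 1        -> not acceptable
```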

10.
Birnbaum's proof that C and M imply L would lose its force if it were shown that in some situations M is not acceptable. Godambe (1979) has shown that Birnbaum's M is not as obvious or intuitive as the concept that a 'mere relabelling' of sample points should make no difference to the inference that can appropriately be drawn from a particular outcome of a given experiment. Akaike (1982) has shown that in certain situations M amounts to the assertion that a relabelling of sample points involving a false reporting of the outcome of an experiment should make no difference to the inference drawn from a particular outcome of a given experiment. It is shown in this paper that in the situation discussed by Akaike, even if M were to be considered acceptable, it is only a modified conditionality principle C′ together with M that can formally imply L; Birnbaum's conditionality principle C and M do not imply L.

11.
The question of whether Zipf's law arises only as a consequence of interactions in a complex system, or whether it is also valid for random structures, has been controversially discussed in the literature over several decades. We show by means of simulations that the frequency distributions of simple random sequences usually obey this regularity. For tens of thousands of cases the goodness-of-fit is explicitly demonstrated by estimating the parameter and calculating the corresponding chi-square value. From the study it becomes clear that some existing results in the literature should be revised, and some ideas concerning the explanation of Zipf's law are provided.
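A minimal simulation in the spirit of the abstract, assuming random "text" built from uniformly chosen letters with a space as word separator; the fitted log-log slope is only a crude stand-in for the chi-square goodness-of-fit analysis described above.

```python
import numpy as np
from collections import Counter

# Random "text": letters drawn uniformly, with a space acting as the word separator
rng = np.random.default_rng(3)
alphabet = list("abcdefghij ")                  # ten letters plus the separator
text = "".join(rng.choice(alphabet, size=200_000))
words = [w for w in text.split(" ") if w]

# Rank words by frequency and fit a straight line to log(frequency) vs log(rank);
# Zipf's law corresponds to a slope close to -1
freqs = np.array(sorted(Counter(words).values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1)
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(f"fitted Zipf exponent: {slope:.2f}")
```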

12.
Mis-specification analyses of gamma and Wiener degradation processes
Degradation models are widely used these days to assess the lifetime information of highly reliable products when there exist quality characteristics (QC) whose degradation over time can be related to the reliability of the product. In this study, motivated by a laser dataset, we investigate the mis-specification effect on the prediction of a product's MTTF (mean time to failure) when the degradation model is wrongly fitted. More specifically, we derive an expression for the asymptotic distribution of the quasi-MLE (QMLE) of the product's MTTF when the true model is a gamma degradation process but is wrongly assumed to be a Wiener degradation process. The penalty for the model mis-specification can then be addressed sequentially. The result demonstrates that the effect on the accuracy of the product's MTTF prediction depends strongly on the ratio of the critical value to the scale parameter of the gamma degradation process. The effects on the precision of the product's MTTF prediction are observed to be serious when the shape and scale parameters of the gamma degradation process are large. We then carry out a simulation study to evaluate the penalty of the model mis-specification, which shows that the simulation results are quite close to the theoretical ones even when the sample size and termination time are not large. For the reverse mis-specification problem, i.e., when the true degradation is a Wiener process but is wrongly assumed to be a gamma degradation process, we carry out a Monte Carlo simulation study to examine the effect of the corresponding model mis-specification. The obtained results reveal that the effect of this model mis-specification is negligible.
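A rough Monte Carlo sketch, in the spirit of the simulation study described, comparing the empirical MTTF of a simulated gamma degradation process with the MTTF predicted by a Wiener (drift-only) fit to the same increments; all parameter values are made up, and this is not the authors' estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical parameters for the true gamma degradation process
alpha, beta = 2.0, 0.5          # shape rate per unit time and scale of the increments
omega = 10.0                    # critical degradation level defining failure
dt, n_steps, n_paths = 0.5, 400, 2000

# Simulate gamma-increment degradation paths and record the first passage of omega
increments = rng.gamma(shape=alpha * dt, scale=beta, size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)
first_passage = ((paths >= omega).argmax(axis=1) + 1) * dt
empirical_mttf = first_passage.mean()            # MTTF under the true gamma model

# Wiener-model quasi-fit to the same increments: drift = mean increment / dt;
# the Wiener (inverse-Gaussian) first-passage time then has mean omega / drift
mu_hat = increments.mean() / dt
wiener_mttf = omega / mu_hat

print(f"empirical gamma-process MTTF: {empirical_mttf:.2f}")
print(f"Wiener-based MTTF prediction: {wiener_mttf:.2f}")
```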

13.
There is a tendency for the true variability of feasible GLS estimators to be understated by asymptotic standard errors. For estimation of SUR models, this tendency becomes more severe in large equation systems when estimation of the error covariance matrix, C, becomes problematic. We explore a number of potential solutions involving the use of improved estimators for the disturbance covariance matrix and bootstrapping. In particular, Ullah and Racine (1992) have recently introduced a new class of estimators for SUR models that use nonparametric kernel density estimation techniques. The proposed estimators have the same structure as the feasible GLS estimator of Zellner (1962) differing only in the choice of estimator for C. Ullah and Racine (1992) prove that their nonparametric density estimator of C can be expressed as Zellner's original estimator plus a positive definite matrix that depends on the smoothing parameter chosen for the density estimation. It is this structure of the estimator that most interests us as it has the potential to be especially useful in large equation systems.

Atkinson and Wilson (1992) investigated the bias in the conventional and bootstrap estimators of coefficient standard errors in SUR models. They demonstrated that under certain conditions the former were superior, but they caution that neither estimator uniformly dominated and hence bootstrapping provides little improvement in the estimation of standard errors for the regression coefficients. Rilstone and Veall (1996) argue that an important qualification needs to be made to this somewhat negative conclusion. They demonstrated that bootstrapping can result in improvements in inferences if the procedures are applied to the t-ratios rather than to the standard errors. These issues are explored for the case of large equation systems and when bootstrapping is combined with improved covariance estimation.

14.
Abstract. Properties of a specification test for the parametric form of the variance function in diffusion processes are discussed. The test is based on the estimation of certain integrals of the volatility function. If the volatility function does not depend on the variable x, it is known that the corresponding statistics have an asymptotic normal distribution. However, most models of mathematical finance use a volatility function which depends on the state x. In this paper we prove that in the general case, where σ also depends on x, the estimates of integrals of the volatility converge stably in law to random variables with a non-standard limit distribution. The limit distribution depends on the diffusion process X_t itself, and we use this result to develop a bootstrap test for the parametric form of the volatility function, which is consistent in the general diffusion model.

15.

Consider the simple linear regression problem with right-censored data. The Buckley-James estimator (BJE) (1979) has been considered superior to the other available procedures. So far, existing algorithms fail to present all possible BJEs. In this note, we present a feasible non-iterative algorithm which can find all BJEs. We use the Stanford heart transplant data to show that if we do not find all BJEs (6 BJEs for the data), then the analysis may be misleading, as is the case in Buckley and James (1979, p. 435) and in Miller (1981, p. 156). Extension to multiple linear regression is also discussed.

16.
The use of a range estimator of the population standard deviation, sigma (σ), for determining sample sizes is discussed in this study. Standardized mean ranges (d_n's), when divided into the ranges of sampling frames, provide estimates of the standard deviation of the population. These estimates can be used for determining sample sizes. The d_n's are provided for seven different distributions for sampling frame sizes that range from 2 to 2000. For each of the seven distributions, functional relationships are developed such that d_n = f(n_SF), where n_SF is the size of the sampling frame. From these functions, d_n's can be estimated for sampling frame sizes which are not presented in the study.
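A minimal sketch assuming a d_n value has already been looked up (from a table such as the one the paper provides) for the given frame size and distribution; the σ estimate is range / d_n and is then plugged into the usual n = (z·σ/E)² formula for estimating a mean. All numbers, including d_n = 5.0, are hypothetical.

```python
import math

def sample_size_from_range(frame_range, d_n, z=1.96, margin_of_error=1.0):
    """Estimate sigma as range / d_n, then apply the usual n = (z*sigma/E)^2 formula
    for estimating a population mean with margin of error E."""
    sigma_hat = frame_range / d_n
    n = math.ceil((z * sigma_hat / margin_of_error) ** 2)
    return n, sigma_hat

# Hypothetical numbers: a sampling frame whose values span 120 units, with a
# standardized mean range d_n = 5.0 looked up for that frame size and distribution
n, sigma_hat = sample_size_from_range(frame_range=120, d_n=5.0, z=1.96, margin_of_error=2.0)
print(f"sigma estimate: {sigma_hat:.1f}, required sample size: {n}")
```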

17.
For the class of autoregressive-moving average (ARMA) processes, we examine the relationship between the dual and the inverse processes. It is demonstrated that the inverse process generated by a causal and invertible ARMA (p, q) process is a causal and invertible ARMA (q, p) model. Moreover, it is established that this representation is strong if and only if the generating process is Gaussian. More precisely, it is derived that the linear innovation process of the inverse process is an all-pass model. Some examples and applications to time reversibility are given to illustrate the obtained results.

18.
Remote sensing of the earth with satellites yields datasets that can be massive in size, nonstationary in space, and non-Gaussian in distribution. To overcome computational challenges, we use the reduced-rank spatial random effects (SRE) model in a statistical analysis of cloud-mask data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on board NASA's Terra satellite. Parameterisations of cloud processes are the biggest source of uncertainty and sensitivity in different climate models' future projections of Earth's climate. An accurate quantification of the spatial distribution of clouds, as well as a rigorously estimated pixel-scale clear-sky-probability process, is needed to establish reliable estimates of cloud-distributional changes and trends caused by climate change. Here we give a hierarchical spatial-statistical modelling approach for a very large spatial dataset of 2.75 million pixels, corresponding to a granule of MODIS cloud-mask data, and we use spatial change-of-support relationships to estimate cloud fraction at coarser resolutions. Our model is non-Gaussian; it postulates a hidden process for the clear-sky probability that makes use of the SRE model, EM estimation, and optimal (empirical Bayes) spatial prediction of the clear-sky-probability process. Measures of prediction uncertainty are also given.

19.
By making use of a result of Imhof, an integral representation of the distribution function of linear combinations of the components of a Dirichlet random vector is obtained. In fact, the distributions of several statistics, such as Moran's and Geary's indices, the Cliff-Ord statistic for spatial correlation, the sample coefficient of determination, F-ratios and the sample autocorrelation coefficient, can be determined in the same way. Linear combinations of the components of Dirichlet random vectors also turn out to be a key component in a decomposition of quadratic forms in spherically symmetric random vectors. An application involving the sample spectrum associated with series generated by ARMA processes is discussed.

20.
Stochastic Models, 2013, 29(3):313-339
This paper focuses on the valuation of financial derivatives with transaction costs. These financial products derive their value from other financial products, which are usually modeled as diffusion processes and called “underlyings”. The prices of such financial derivatives are generally written as the expectation of a function of the underlying process and can therefore be written as solutions of Partial Differential Equations. If there are transaction costs in the market, we prove that the price of a derivative converges towards the solution of a non-linear Partial Differential Equation as these transaction costs go to zero and the frequency of their payment goes to infinity. Our result generalizes those of Leland (1985) and Henrotte (1994). It holds for derivatives that are functionals of the underlying process rather than just functions (so-called “path-dependent”). It also holds if other derivatives are used in lieu of underlyings, and for derivatives whose value is not supposed to be a convex function of the underlying's price.
