Similar Documents
20 similar documents found (search time: 740 ms)
1.
This study addresses the appropriate d3 values for constructing range control charts (R-charts) when the process distribution is uniform, triangular, exponential, or Erlang. The range charts are compared on the basis of Type I error probabilities obtained by simulation. The results reveal that inappropriate d3 values strongly affect the performance of the R-charts. Practitioners should take care to select suitable coefficients when applying R-chart methods to process data, and the distribution of the process must be examined before the coefficients are chosen.
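As a hedged illustration of the effect described above (not the paper's exact study), the sketch below estimates the d2 and d3 constants of the subgroup range by simulation for an exponential process, then measures the R-chart's Type I error when normal-theory constants are (mis)used instead. The subgroup size, sample counts, and all names are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 200_000                      # subgroup size, number of subgroups

# Exponential(1) process: sigma = 1, so E[R]/sigma and SD[R]/sigma estimate d2, d3
samples = rng.exponential(1.0, size=(m, n))
R = samples.max(axis=1) - samples.min(axis=1)
d2_exp, d3_exp = R.mean(), R.std()

# Normal-theory constants for n = 5 (standard control-chart tables)
d2_norm, d3_norm = 2.326, 0.864

def type1_error(R, d2, d3):
    """False-alarm rate of the R-chart with 3-sigma limits built from d2, d3."""
    Rbar = R.mean()
    ucl = Rbar * (1 + 3 * d3 / d2)
    lcl = max(0.0, Rbar * (1 - 3 * d3 / d2))
    return np.mean((R > ucl) | (R < lcl))

alpha_wrong = type1_error(R, d2_norm, d3_norm)   # normal constants, exp. data
alpha_right = type1_error(R, d2_exp, d3_exp)     # matched constants
print(f"Type I error, normal-theory constants: {alpha_wrong:.4f}")
print(f"Type I error, matched constants:       {alpha_right:.4f}")
```

With the mismatched (normal-theory) constants the upper control limit is tighter than it should be for exponential data, so the false-alarm rate is inflated.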

2.
This paper derives explicit analytical solutions for the Average Run Length (ARL) of a CUSUM chart for the SARFIMA(P,D,Q)S process with exponential white noise. Performance was measured by the ARL in terms of percentage error and CPU time. The results from the explicit formulas were compared with those of the numerical integral equation (NIE) method. The two methods were in excellent agreement, with a percentage error of less than 0.25%, while the explicit formulas consumed less CPU time than the NIE method. The explicit formulas are thus a good alternative for real applications.

3.
This paper is concerned with estimating the parameters of Tadikamalla-Johnson's LB distribution using the first four moments. Tables of the parameters of the LB distribution are given for selected values of skewness (0.0(0.05)1.0(0.1)2.0) and corresponding available values of kurtosis at intervals of 0.2. The construction and use of these tables is explained with a numerical example.

4.
The well-known Johnson system of distributions was developed by N. L. Johnson (1949). Slifker and Shapiro (1980) presented a criterion for choosing a member from the three distributional classes (SB, SL, and SU) in the Johnson system to fit a set of data. The criterion is based on the value of a quantile ratio which depends on a specified positive z value and the parameters of the distribution. In this paper, we present some properties of the quantile ratio for various distributions and for some selected z values. Some comments are made on using the criterion for selecting a Johnson distribution to fit empirical data.

5.
In this article, we propose a method of averaging generalized least squares estimators for linear regression models with heteroskedastic errors. The averaging weights are chosen to minimize Mallows’ Cp-like criterion. We show that the weight vector selected by our method is optimal. It is also shown that this optimality holds even when the variances of the error terms are estimated and the feasible generalized least squares estimators are averaged. The variances can be estimated parametrically or nonparametrically. Monte Carlo simulation results are encouraging. An empirical example illustrates that the proposed method is useful for predicting a measure of firms’ performance.

6.
Process capability indices (PCIs) have been widely used in manufacturing industries to provide a quantitative measure of process potential and performance. While some effort has been devoted in the literature to the statistical properties of PCI estimators, scarce attention has been given to the evaluation of these properties when sample data are affected by measurement errors. In this work we deal with the effects of measurement errors on the performance of PCIs. The analysis is illustrated with reference to Cp, the simplest and most common measure suggested for evaluating process capability. The authors would like to thank two anonymous referees for their comments and suggestions, which were useful in the preparation and improvement of this paper. This work was partially supported by a MURST research grant.

7.
A fast routine for converting regression algorithms into corresponding orthogonal regression (OR) algorithms was introduced in Ammann and Van Ness (1988). The present paper discusses the properties of various ordinary and robust OR procedures created using this routine. OR minimizes the sum of the orthogonal distances from the regression plane to the data points. OR has three types of applications. First, L2 OR is the maximum likelihood solution of the Gaussian errors-in-variables (EV) regression problem. This L2 solution is unstable, so the robust OR algorithms created from robust regression algorithms should prove very useful. Second, OR is intimately related to principal components analysis; the routine can therefore also be used to create L1, robust, and other principal components algorithms. Third, OR treats the x and y variables symmetrically, which is important in many modeling problems. Using Monte Carlo studies, this paper compares the performance of standard regression, robust regression, OR, and robust OR on Gaussian EV data, contaminated Gaussian EV data, heavy-tailed EV data, and contaminated heavy-tailed EV data.
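A minimal sketch of the L2 orthogonal regression mentioned above (also known as total least squares) via the SVD, which also illustrates the link to principal components: the fitted plane's normal is the last principal component. The function name and data are ours, not from the cited routine.

```python
import numpy as np

def orthogonal_regression(X):
    """Fit a hyperplane minimizing the sum of squared orthogonal distances.

    Returns (centroid, normal): the plane is {p : normal @ (p - centroid) = 0}.
    """
    centroid = X.mean(axis=0)
    # The normal of the best-fit plane is the right singular vector with the
    # smallest singular value -- i.e. the last principal component direction.
    _, _, Vt = np.linalg.svd(X - centroid)
    return centroid, Vt[-1]

# Noise-free check: points on the line y = 2x + 1
x = np.linspace(0, 10, 50)
pts = np.column_stack([x, 2 * x + 1])
c, normal = orthogonal_regression(pts)
slope = -normal[0] / normal[1]      # convert normal form to y = slope*x + b
print(round(slope, 6))              # → 2.0
```

Because x and y enter symmetrically, swapping the two columns simply negates/reciprocates the fitted direction, unlike ordinary least squares.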

8.
The performance of nine different nonparametric regression estimates is empirically compared on ten different real datasets. The number of data points in the real datasets varies between 7,900 and 18,000, where each real dataset contains between 5 and 20 variables. The nonparametric regression estimates include kernel, partitioning, nearest neighbor, additive spline, neural network, penalized smoothing splines, local linear kernel, regression trees, and random forests estimates. The main result is a table containing the empirical L2 risks of all nine nonparametric regression estimates on the evaluation part of the different datasets. The neural networks and random forests are the two estimates performing best. The datasets are publicly available, so that any new regression estimate can be easily compared with all nine estimates considered in this article by just applying it to the publicly available data and by computing its empirical L2 risks on the evaluation part of the datasets.
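A hedged sketch of the evaluation protocol described above: split a dataset into a learning part and an evaluation part, fit each regression estimate on the former, and report its empirical L2 risk on the latter. We use two simple estimates (k-nearest-neighbor and a partitioning estimate) on synthetic data; the dataset and estimator details here are stand-ins, not those of the article.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0, 0.1, 500)
X_learn, y_learn = X[:400], y[:400]     # learning part
X_eval, y_eval = X[400:], y[400:]       # evaluation part

def knn_predict(Xl, yl, Xq, k=5):
    """k-nearest-neighbor estimate (1-d covariate)."""
    d = np.abs(Xl[:, 0][None, :] - Xq[:, 0][:, None])
    idx = np.argsort(d, axis=1)[:, :k]
    return yl[idx].mean(axis=1)

def partition_predict(Xl, yl, Xq, bins=20):
    """Partitioning estimate: average of y within each cell of a fixed grid."""
    edges = np.linspace(0, 1, bins + 1)
    cell_l = np.clip(np.digitize(Xl[:, 0], edges) - 1, 0, bins - 1)
    cell_q = np.clip(np.digitize(Xq[:, 0], edges) - 1, 0, bins - 1)
    means = np.array([yl[cell_l == c].mean() if np.any(cell_l == c)
                      else yl.mean() for c in range(bins)])
    return means[cell_q]

def l2_risk(y_true, y_pred):
    """Empirical L2 risk on the evaluation part."""
    return np.mean((y_true - y_pred) ** 2)

risk_knn = l2_risk(y_eval, knn_predict(X_learn, y_learn, X_eval))
risk_part = l2_risk(y_eval, partition_predict(X_learn, y_learn, X_eval))
print(f"kNN: {risk_knn:.4f}  partitioning: {risk_part:.4f}")
```

Any new estimate can be slotted into the same split-and-score loop, which is exactly the comparison the article's table enables.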

9.
Franklin and Wasserman (1991) introduced the use of bootstrap sampling procedures for deriving nonparametric confidence intervals for the process capability index Cpk, which are applicable when at least twenty data points are available. This represents a significant reduction in the usually recommended sample requirement of 100 observations (see Gunther 1989). To facilitate and encourage the use of these procedures, a FORTRAN program is provided for computing confidence intervals for Cpk. Three methods are provided for this calculation: the standard method, the percentile confidence interval, and the bias-corrected percentile confidence interval.
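A minimal sketch of one of the three methods mentioned above, the percentile bootstrap interval for Cpk, written in Python rather than the original FORTRAN. The specification limits and data below are illustrative assumptions.

```python
import numpy as np

def cpk(x, lsl, usl):
    """Point estimate of Cpk from a sample and spec limits."""
    m, s = x.mean(), x.std(ddof=1)
    return min(usl - m, m - lsl) / (3 * s)

def bootstrap_ci_cpk(x, lsl, usl, B=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for Cpk."""
    rng = np.random.default_rng(seed)
    stats = np.array([cpk(rng.choice(x, size=len(x), replace=True), lsl, usl)
                      for _ in range(B)])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

x = np.random.default_rng(42).normal(10.0, 0.5, 30)   # n >= 20, as noted above
lo, hi = bootstrap_ci_cpk(x, lsl=8.0, usl=12.0)
print(f"Cpk = {cpk(x, 8.0, 12.0):.3f}, 95% percentile CI = ({lo:.3f}, {hi:.3f})")
```

The bias-corrected variant adjusts the two quantile levels using the fraction of bootstrap statistics below the point estimate; only the final `np.quantile` call changes.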

10.
In this paper, we discuss a parsimonious approach to estimation of high-dimensional covariance matrices via the modified Cholesky decomposition with lasso. Two different methods are proposed. They are the equi-angular and equi-sparse methods. We use simulation to compare the performance of the proposed methods with others available in the literature, including the sample covariance matrix, the banding method, and the L1-penalized normal loglikelihood method. We then apply the proposed methods to a portfolio selection problem using 80 series of daily stock returns. To facilitate the use of lasso in high-dimensional time series analysis, we develop the dynamic weighted lasso (DWL) algorithm that extends the LARS-lasso algorithm. In particular, the proposed algorithm can efficiently update the lasso solution as new data become available. It can also add or remove explanatory variables. The entire solution path of the L1-penalized normal loglikelihood method is also constructed.

11.
The existing process capability indices (PCIs) assume that the distribution of the process being investigated is normal. For non-normal distributions, PCIs become unreliable in that they may indicate the process is capable when in fact it is not. In this paper, we propose a new index which can be applied to any distribution. The proposed index, Cf, is directly related to the probability of non-conformance of the process. For a given random sample, the estimation of Cf boils down to estimating non-parametrically the tail probabilities of an unknown distribution. The approach discussed in this paper is based on the works of Pickands (1975) and Smith (1987). We also discuss the construction of bootstrap confidence intervals for Cf based on the so-called accelerated bias-correction method (BCa). Several simulations are carried out to demonstrate the flexibility and applicability of Cf. Two real-life data sets are analyzed using the proposed index.

12.
By using difference sets, we give an answer to the following problem concerning graphical codes: When is the binary code generated by the complete graph Kn contained in some binary Hamming code? It turns out that this holds if and only if n is one of the numbers 2, 3 and 6.

13.
This paper is concerned with estimating the parameters of Tadikamalla-Johnson's LU distributions based on the method of moments. Tables of the parameters of the LU distribution are given for selected values of skewness (0.0(0.05)1.0(0.1)2.0) and for twenty values of kurtosis at intervals of 0.2. The construction and use of these tables is explained with a numerical example.

14.
The hypothesis tests of performance measures for an M/Ek/1 queueing system are considered. With pivotal models deduced from sufficient statistics for the unknown parameters, a generalized p-value approach to deriving tests about parametric functions is proposed. The focus is on deriving the p-values of hypothesis tests for five popular performance measures of the system in the steady state. Given a sample T, let p(T) be the p-value we developed. We derive a closed-form expression to show that, for small samples, the probability P(p(T) ≤ γ) is approximately equal to γ, for 0 ≤ γ ≤ 1.
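For context, the steady-state performance measures of an M/Ek/1 queue have textbook closed forms via the Pollaczek-Khinchine formula. The sketch below computes them; these are the standard expressions, not the paper's test statistics, and the names are ours.

```python
def mek1_measures(lam, mu, k):
    """Steady-state measures of an M/Ek/1 queue.

    lam : Poisson arrival rate
    mu  : service rate (Erlang-k service, mean 1/mu, variance 1/(k*mu**2))
    """
    rho = lam / mu
    assert rho < 1, "queue must be stable"
    Lq = rho**2 * (1 + 1 / k) / (2 * (1 - rho))  # Pollaczek-Khinchine
    Wq = Lq / lam                                 # Little's law
    W = Wq + 1 / mu                               # add mean service time
    L = lam * W
    return {"rho": rho, "Lq": Lq, "Wq": Wq, "L": L, "W": W}

m = mek1_measures(lam=1.0, mu=2.0, k=2)
print(m)   # rho = 0.5, Lq = 0.375, L = 0.875
```

The five quantities returned (utilization, mean queue lengths, mean waits) are the kind of steady-state measures for which the article derives generalized p-values.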

15.
This article presents a constrained maximization of the Shapiro-Wilk W statistic for estimating the parameters of the Johnson SB distribution. The gradient of the W statistic with respect to the minimum and range parameters is used within a quasi-Newton framework to achieve a fit for all four parameters. The method is evaluated with measures of bias and precision using pseudo-random samples from three different SB populations. The population means were estimated with an average relative bias of less than 0.1% and the population standard deviations with less than 4.0% relative bias. The methodology appears promising as a tool for fitting this sometimes difficult distribution.

16.
Neighbor balanced designs are used to remove neighbor effects, but most of these designs require a large number of blocks. To neutralize the neighbor effects in such situations, GN2-designs are most desirable. This article deals with the (i) refinement of some series of GN2-designs constructed by Zafaryab et al. (2010) and (ii) construction of some new series of GN2-designs in circular blocks of equal size.

17.
Optimal design theory deals with the assessment of the optimal joint distribution of all independent variables prior to data collection. In many practical situations, however, covariates are involved for which the distribution is not previously determined. The optimal design problem may then be reformulated in terms of finding the optimal marginal distribution for a specific set of variables. In general, the optimal solution may depend on the unknown (conditional) distribution of the covariates. This article discusses the DA-maximin procedure to account for the uncertain distribution of the covariates. Sufficient conditions will be given under which the uniform design of a subset of independent discrete variables is DA-maximin. The sufficient conditions are formulated for Generalized Linear Mixed Models with an arbitrary number of quantitative and qualitative independent variables and random effects.

18.
The L1-type regularization provides a useful tool for variable selection in high-dimensional regression modeling. Various algorithms have been proposed to solve optimization problems for L1-type regularization. In particular, the coordinate descent algorithm has been shown to be effective in sparse regression modeling. Although the algorithm shows remarkable performance in solving optimization problems for L1-type regularization, it suffers from outliers, since the procedure is based on the inner product of predictor variables and partial residuals obtained in a non-robust manner. To overcome this drawback, we propose a robust coordinate descent algorithm, focusing especially on high-dimensional regression modeling based on the principal components space. We show that the proposed robust algorithm converges to the minimum value of its objective function. Monte Carlo experiments and real data analysis are conducted to examine the efficiency of the proposed robust algorithm. We observe that our robust coordinate descent algorithm performs effectively for high-dimensional regression modeling even in the presence of outliers.

19.
The resistance of least absolute values (L1) estimators to outliers and their robustness to heavy-tailed distributions make these estimators useful alternatives to the usual least squares estimators. The recent development of efficient algorithms for L1 estimation in linear models has permitted their use in practical data analysis. Although in general the L1 estimators are not unique, there are a number of properties they all share. The set of all L1 estimators for a given model and data set can be characterized as the convex hull of some extreme estimators. Properties of the extreme estimators and of the L1-estimate set are considered.
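An illustrative sketch of the outlier resistance discussed above: L1 (least absolute values) regression computed by iteratively reweighted least squares. IRLS is one practical algorithm; the LP-based methods the literature favors are exact, so this is an assumption-laden stand-in, not the article's method.

```python
import numpy as np

def l1_regression(X, y, iters=100, eps=1e-8):
    """Approximate L1 (least absolute values) fit via IRLS."""
    A = np.column_stack([np.ones(len(X)), X])
    beta = np.linalg.lstsq(A, y, rcond=None)[0]      # start from OLS
    for _ in range(iters):
        r = y - A @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)         # downweight big residuals
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
    return beta

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 50)
y = 1.0 + 2.0 * x + rng.normal(0, 0.2, 50)
y[5] += 50.0                                          # a single gross outlier
b_l1 = l1_regression(x, y)
b_ols = np.linalg.lstsq(np.column_stack([np.ones(50), x]), y, rcond=None)[0]
print(f"L1 slope: {b_l1[1]:.3f}, OLS slope: {b_ols[1]:.3f}")
```

The single contaminated point drags the least squares slope noticeably, while the L1 fit stays close to the true slope of 2, which is the resistance property the abstract highlights.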

20.
Dummy (0, 1) variables are frequently used in statistical modeling to represent the effect of certain extraneous factors. This paper presents a special purpose linear programming algorithm for obtaining least-absolute-value estimators in a linear model with dummy variables. The algorithm employs a compact basis inverse procedure and incorporates the advanced basis exchange techniques available in specialized algorithms for the general linear least-absolute-value problem. Computational results with a computer code version of the algorithm are given.
