Similar Documents
 20 similar documents retrieved (search time: 31 ms)
1.
Control charts distinguish between the random and the assignable causes of variation in a process. A real process may be affected by many characteristics and several assignable causes, so the economic-statistical design of multiple control charts under a Burr XII shock model with multiple assignable causes is an appropriate candidate model. In this paper, we develop a cost model based on minimizing the average cost per unit of time. The cost model under a single matched assignable cause is compared with the model under multiple assignable causes, using the same cost and time parameters. A sensitivity analysis is also presented, in which the variability of the loss-cost and of the design parameters is evaluated with respect to changes in the cost, time and Burr XII distribution parameters.
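To make the ingredients of such a design concrete, the sketch below evaluates a highly simplified loss-cost function built around a Burr XII shock distribution and searches for a low-cost design. The cost terms, the power function and all parameter values (n, h, c, k, the cost rates) are illustrative assumptions, not the cost model of the paper.

```python
# A minimal sketch, assuming a toy loss-cost structure: sampling cost, false-alarm
# cost while in control, and out-of-control exposure once a Burr XII shock arrives.
import numpy as np
from scipy.optimize import minimize

def burr_xii_cdf(t, c, k):
    """P(a shock has occurred by time t) under a Burr XII shock model."""
    return 1.0 - (1.0 + t**c) ** (-k)

def expected_cost_per_time(x, c=2.0, k=1.5, cost_sampling=0.5,
                           cost_false_alarm=50.0, cost_out_of_control=100.0,
                           far=0.005):
    """Toy average cost per unit time for a design x = (sample size n, interval h)."""
    n, h = x
    power = 1.0 - 0.7 ** n                    # detection power grows with sample size (toy)
    p_shock = burr_xii_cdf(h, c, k)           # chance a shock arrives within one interval
    cycle = h / max(p_shock, 1e-9)            # rough expected time until a shock occurs
    return (cost_sampling * n / h             # sampling cost rate
            + cost_false_alarm * far / h      # false-alarm cost rate while in control
            + cost_out_of_control * h / (power * cycle))  # out-of-control exposure

res = minimize(expected_cost_per_time, x0=[5.0, 1.0],
               bounds=[(1, 30), (0.1, 10)], method="L-BFGS-B")
print("approximate optimal (n, h):", res.x, "loss-cost:", res.fun)
```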

2.
ABSTRACT

In this article, we introduce new nonparametric Shewhart-type control charts that take into account both the location of two order statistics of the test sample and the number of observations in that sample that lie between the control limits. Exact formulae for the alarm rate, the run length distribution and the average run length (ARL) are derived. A key advantage of the new charts is that, due to their nonparametric nature, the false alarm rate (FAR) and the in-control run length distribution are the same for all continuous process distributions. Tables are provided for implementing the proposed charts at some typical FAR and ARL values. Furthermore, a numerical study reveals that the new charts are quite flexible and efficient in detecting shifts to Lehmann-type out-of-control situations, and they appear preferable from a robustness point of view in comparison with the distribution-free control chart of Balakrishnan et al. (2009).
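The distribution-free property can be illustrated by simulation: a chart whose limits are two order statistics of a reference sample, and which signals when too few test observations fall between them, has (approximately) the same in-control ARL whatever the continuous distribution. This is a generic chart in that spirit, not the authors' exact statistic; the indices (a, b), the threshold m and the sample sizes are illustrative.

```python
# Sketch: Monte Carlo estimate of the in-control ARL of an order-statistic-based chart.
import numpy as np

rng = np.random.default_rng(0)

def in_control_arl(dist_sampler, ref_size=100, n=5, a=4, b=95, m=4, reps=1000):
    """Estimate the in-control ARL by simulation for a given in-control sampler."""
    run_lengths = []
    for _ in range(reps):
        ref = np.sort(dist_sampler(ref_size))
        lcl, ucl = ref[a], ref[b]             # control limits = two order statistics
        t = 0
        while True:
            t += 1
            sample = dist_sampler(n)
            inside = np.sum((sample >= lcl) & (sample <= ucl))
            if inside < m or t >= 100000:     # signal (or safety cap for the sketch)
                run_lengths.append(t)
                break
    return np.mean(run_lengths)

# Similar ARL values across different continuous distributions illustrate
# the distribution-free (nonparametric) property.
print("normal      :", in_control_arl(lambda k: rng.normal(size=k)))
print("exponential :", in_control_arl(lambda k: rng.exponential(size=k)))
```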

3.
4.
5.
6.
7.
8.
9.
10.
ABSTRACT

A dual-record system (DRS) (equivalently two sample capture–recapture experiments) model, with time and behavioural response variation, has attracted much attention specifically in the domain of official statistics and epidemiology, as the assumption of list independence often fails. The relevant model suffers from parameter identifiability problem, and suitable Bayesian methodologies could be helpful. In this article, we formulate population size estimation in DRS as a missing data problem and two empirical Bayes approaches are proposed along with the discussion of an existing Bayes treatment. Some features and associated posterior convergence for these methods are mentioned. Investigation through an extensive simulation study finds that our proposed approaches compare favourably with the existing Bayes approach for this complex model depending upon the availability of directional nature of underlying behavioural response effect. A real-data example is given to illustrate these methods.  相似文献   
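For orientation, the classical two-sample point estimate under list independence is the Chapman-corrected Lincoln–Petersen estimator; it is the baseline that behavioural-response DRS models generalise. The sketch below is that baseline only, not the article's empirical Bayes procedures, and the counts are made-up illustrative numbers.

```python
# Sketch: Chapman estimator for a two-list (dual-record) system under independence.
def chapman_estimate(n1, n2, m):
    """n1, n2: individuals on list 1 and list 2; m: individuals on both lists."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)
           / ((m + 1) ** 2 * (m + 2)))
    return n_hat, var ** 0.5

n_hat, se = chapman_estimate(n1=420, n2=510, m=180)   # hypothetical list counts
print(f"estimated population size ~ {n_hat:.0f} (se ~ {se:.1f})")
```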

11.
12.
13.
14.
In recent years, the notion of data depth has been used in nonparametric multivariate data analysis, since it gives a natural 'centre-outward' ordering of multivariate data points with respect to a given data cloud. Various nonparametric tests based on data depth have been developed in the literature for testing the equality of locations of two multivariate distributions. Here, we define two nonparametric tests, based on two different test statistics, for testing the equality of locations of two multivariate distributions. We compare the performance of these tests with the tests developed by Li and Liu [New nonparametric tests of multivariate locations and scales using data depth. Statist Sci. 2004;19(4):686–696]. The comparison, in terms of power, is carried out by simulation for multivariate symmetric and skewed distributions using three popular depth functions. An application of the tests to real-life data is provided, together with conclusions and recommendations.
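The centre-outward ordering that such tests are built on can be shown with a small example: compute a depth of one sample's points relative to the other sample's cloud and compare mean depths by permutation. This uses Mahalanobis depth as a stand-in for "a popular depth function"; it is not one of the article's two proposed statistics.

```python
# Sketch: a generic depth-based two-sample location comparison (Mahalanobis depth).
import numpy as np

def mahalanobis_depth(points, cloud):
    """Depth of each row of `points` with respect to the data cloud."""
    mu = cloud.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(cloud, rowvar=False))
    diff = points - mu
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)   # squared Mahalanobis distance
    return 1.0 / (1.0 + d2)

def depth_permutation_test(x, y, n_perm=999, seed=0):
    """Permutation p-value based on the mean depth of sample y in x's cloud."""
    rng = np.random.default_rng(seed)
    obs = mahalanobis_depth(y, x).mean()
    pooled, n = np.vstack([x, y]), len(x)
    stats = []
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        stats.append(mahalanobis_depth(pooled[idx[n:]], pooled[idx[:n]]).mean())
    # a low mean depth of y relative to x's cloud suggests a location difference
    return (1 + sum(s <= obs for s in stats)) / (n_perm + 1)

rng = np.random.default_rng(1)
x = rng.normal(size=(60, 3))
y = rng.normal(loc=0.8, size=(60, 3))       # second sample with shifted location
print("permutation p-value:", depth_permutation_test(x, y))
```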

15.
In this paper, we introduce two new statistics for detecting outliers in the Pareto distribution. These statistics extend existing statistics for detecting outliers in the exponential and gamma distributions. We compare the power of our test statistics with that of the existing statistics and select the best test statistic for detecting outliers in the Pareto distribution. Finally, numerical examples based on different insurance claims illustrate the performance of the tests.
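The "extension" idea can be illustrated as follows: since the logarithm of a Pareto variable (relative to its scale) is exponentially distributed, an exponential-type outlier statistic can be carried over to Pareto data. The sketch below uses a classical "largest value over total" statistic with a simulated critical value; it is not one of the article's two proposed statistics, and the claims data are made up.

```python
# Sketch: an upper-outlier check for Pareto data via the exponential scale.
import numpy as np

rng = np.random.default_rng(0)

def max_over_sum(z):
    return z.max() / z.sum()

def pareto_upper_outlier_test(x, n_sim=20000):
    x = np.sort(np.asarray(x, dtype=float))
    sigma_hat = x[0]                              # MLE of the Pareto scale parameter
    z = np.log(x / sigma_hat)[1:]                 # approx. exponential under the null
    t_obs, n = max_over_sum(z), len(x) - 1
    t_null = np.array([max_over_sum(rng.exponential(size=n)) for _ in range(n_sim)])
    return t_obs, np.mean(t_null >= t_obs)        # statistic and Monte Carlo p-value

claims = [1.2, 1.5, 1.9, 2.4, 3.1, 4.0, 5.2, 48.0]   # hypothetical insurance claims
t, p = pareto_upper_outlier_test(claims)
print(f"statistic = {t:.3f}, Monte Carlo p-value ~ {p:.3f}")
```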

16.
A queueing system with two distinct arrival streams and services is considered. Two kinds of customers enter the system according to Poisson processes, and the service times are assumed to follow general distributions. After completing the first kind of service, a customer may feed back to repeat that service, leave the system, or proceed to the second kind of service; the same policy applies to the other kind of customer. All stochastic processes involved in the system are assumed independent. We derive the probability generating function for each kind of customer and for the system as a whole, from which the performance measures follow. Numerical examples examine the validity of the results.
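Performance measures obtained from such generating functions are often cross-checked numerically. The sketch below is a small discrete-event simulation of a single-server queue with two Poisson streams, general (here gamma) service times and Bernoulli feedback, estimating the time-average number in the system. The rates, service distributions and feedback probabilities are illustrative, and the generating-function derivation itself is not reproduced.

```python
# Sketch: event-driven simulation of a two-class single-server queue with feedback.
import heapq, random, itertools

random.seed(0)
counter = itertools.count()            # tie-breaker so heap entries always compare

LAM = {1: 0.3, 2: 0.2}                 # Poisson arrival rates of the two classes
SERVICE = {1: lambda: random.gammavariate(2.0, 0.5),   # general service, class 1
           2: lambda: random.gammavariate(1.5, 0.6)}   # general service, class 2
P_REPEAT, P_SWITCH = 0.1, 0.2          # after service: repeat it / take the other one

def simulate(t_end=2e5):
    events = []
    for c in (1, 2):
        heapq.heappush(events, (random.expovariate(LAM[c]), next(counter), "arr", c))
    queue, server_busy = [], False
    in_system, area, last_t = 0, 0.0, 0.0

    def start_service(t):
        nonlocal server_busy
        c = queue.pop(0)
        server_busy = True
        heapq.heappush(events, (t + SERVICE[c](), next(counter), "dep", c))

    while events:
        t, _, kind, c = heapq.heappop(events)
        if t > t_end:
            break
        area += in_system * (t - last_t)   # time-average of number in system
        last_t = t
        if kind == "arr":
            heapq.heappush(events, (t + random.expovariate(LAM[c]), next(counter), "arr", c))
            in_system += 1
            queue.append(c)
            if not server_busy:
                start_service(t)
        else:                              # a service completion
            server_busy = False
            u = random.random()
            if u < P_REPEAT:
                queue.append(c)            # feedback: repeat the same service
            elif u < P_REPEAT + P_SWITCH:
                queue.append(3 - c)        # feedback: go for the other service
            else:
                in_system -= 1             # customer leaves the system
            if queue:
                start_service(t)
    return area / last_t

print("time-average number in system ~", round(simulate(), 3))
```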

17.
18.
19.
Penalized logistic regression is a useful tool for classifying samples and selecting features. Although the methodology has been widely used in many fields of research, its performance deteriorates sharply in the presence of outliers, because logistic regression is based on maximum log-likelihood estimation, which is sensitive to outliers. As a result, samples cannot be classified accurately, nor can the factors carrying crucial information for classification be identified. To overcome this problem, we propose a robust penalized logistic regression based on a weighted likelihood methodology. We also derive an information criterion, in line with generalized information criteria, for choosing the tuning parameters, a vital issue in robust penalized logistic regression modelling. Monte Carlo simulations and a real-world example demonstrate that the proposed robust modelling strategies perform well for sparse logistic regression modelling even in the presence of outliers.
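One simple way to combine an L1 penalty with a weighted likelihood is to iteratively downweight observations with large Pearson residuals under the current fit. The sketch below takes that generic route; it is not the authors' estimator, weight function or information criterion, and the tuning constants and toy data are illustrative assumptions.

```python
# Sketch: iteratively reweighted L1-penalized logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

def robust_sparse_logit(X, y, C=0.5, n_iter=10, c0=2.0):
    w = np.ones(len(y))
    model = None
    for _ in range(n_iter):
        model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        model.fit(X, y, sample_weight=w)
        p = model.predict_proba(X)[:, 1]
        r = (y - p) / np.sqrt(p * (1 - p) + 1e-8)            # Pearson residuals
        w = np.where(np.abs(r) <= c0, 1.0, c0 / np.abs(r))    # downweight outliers
    return model, w

# toy sparse problem with a few mislabelled (outlying) observations
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta = np.array([2.0, -2.0] + [0.0] * 8)                      # sparse true coefficients
y = (rng.random(200) < 1 / (1 + np.exp(-X @ beta))).astype(int)
y[:5] = 1 - y[:5]                                             # flip a few labels
model, w = robust_sparse_logit(X, y)
print("estimated coefficients:", np.round(model.coef_.ravel(), 2))
```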

20.
Sensitivity analysis is an essential tool in the development of robust models for engineering, the physical sciences, economics and policy-making, but it typically requires running the model a large number of times in order to estimate sensitivity measures. While statistical emulators allow sensitivity analysis even on complex models, they only perform well with a moderately low number of model inputs: in higher-dimensional problems they tend to require a restrictively high number of model runs unless the model is relatively linear. An open question is therefore how to tackle sensitivity problems in higher dimensionalities at very low sample sizes. This article examines the relative performance of four sampling-based measures which can be used in such high-dimensional nonlinear problems. The measures tested are the Sobol' total sensitivity indices, the absolute mean of elementary effects, a derivative-based global sensitivity measure, and a modified derivative-based measure. Performance is assessed in a ‘screening’ context, by assessing the ability of each measure to identify influential and non-influential inputs on a wide variety of test functions at different dimensionalities. The results show that the best-performing measure in the screening context depends on the model or function, but derivative-based measures have a significant potential at low sample sizes that is currently not widely recognised.
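As a small illustration of one of the four measures, the sketch below computes a bare-bones absolute mean of elementary effects (a one-at-a-time screening design) for a toy nonlinear test function; the test function, dimension and number of base points are illustrative, and the article's actual comparison study is not reproduced.

```python
# Sketch: absolute mean of elementary effects (mu*) on a toy test function.
import numpy as np

rng = np.random.default_rng(0)

def test_function(x):
    # toy model: inputs 0 and 2 are influential, input 1 mildly so, the rest inert
    return np.sin(2 * np.pi * x[..., 0]) + 0.3 * x[..., 1] + 2.0 * x[..., 2] ** 2

def abs_mean_elementary_effects(f, dim, n_base=20, delta=0.1):
    base = rng.random((n_base, dim))          # base points in the unit hypercube
    f_base = f(base)
    effects = np.zeros((n_base, dim))
    for j in range(dim):
        step = np.where(base[:, j] + delta <= 1.0, delta, -delta)   # stay inside [0, 1]
        perturbed = base.copy()
        perturbed[:, j] += step                # one-at-a-time perturbation of input j
        effects[:, j] = (f(perturbed) - f_base) / step
    return np.mean(np.abs(effects), axis=0)    # mu*: absolute mean of elementary effects

mu_star = abs_mean_elementary_effects(test_function, dim=6)   # 20 * (6 + 1) model runs
for j, m in enumerate(mu_star):
    print(f"input {j}: mu* = {m:.2f}")
```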
