20 related articles found (search time: 31 ms)
1.
Based on the recursions in Huffer (1988) and Huffer and Lin (2001), we present a two-stage algorithm and two specialized methods for evaluating probabilities involving linear combinations of spacings of special forms. The two-stage algorithm combines the advantages of the marking algorithm in Huffer and Lin (1997) and the general algorithm in Huffer and Lin (2001). The proposed methods can analytically derive exact expressions for some specific problems, and efficiently handle problems such as the distribution of the circular scan statistic and multiple coverage probabilities.
2.
Leila Golparvar, Communications in Statistics - Simulation and Computation, 2017, 46(1): 404-422
The problem of selecting a population according to “selection and ranking” is an important statistical problem. The idea of selecting the best populations subject to an optimality criterion was originally suggested by Bechhofer (1954) and Gupta (1956, 1965). Most of the ranking-and-selection literature deals with a single criterion, which may not satisfy the experimenter's demands. Following the methodology of Huang and Lai (1999), the main focus of this article is to select a best population under Type-II progressively censored data for right-tail exponential distributions with bounded and unbounded supports for μi. We formulate the problem and develop a Bayesian setup with two kinds of priors, bounded and unbounded, for μi. We introduce an empirical Bayes procedure and study the large-sample behavior of the proposed rule. It is shown that the proposed empirical Bayes selection rule is asymptotically optimal.
3.
Housila P. Singh, Communications in Statistics - Theory and Methods, 2013, 42(23): 4222-4238
This article considers some classes of estimators of the population median of the study variable that use information on an auxiliary variable, along with their properties under large-sample approximation. The asymptotically optimum estimator (AOE) in each class of estimators has been investigated, together with approximate mean square error formulae. It is shown that the proposed classes of estimators are better than those considered by Gross (1980), Kuk and Mak (1989), Singh et al. (2003a), and Al and Cingi (2009). An empirical study is carried out to judge the merits of the suggested classes of estimators over other existing estimators.
4.
In this article, we evaluate the performance of different forecasters and test the association between their performances for different pairs of variables. We use three data sets of track records of professional U.S. economic forecasters participating in the Blue Chip consensus forecasting service (the data sets contain the root mean square errors (RMSE) of different forecasters for different years). To evaluate the performance of the forecasters we consider three well-known tests, namely the usual F test (cf. Fisher (1923)), the Kruskal–Wallis test (cf. Kruskal and Wallis (1952)), and the extension of the median test (cf. Daniel (1990)). To test the association between the forecasters' performances for different pairs of variables, we consider the Gini mean correlation coefficient rg1 (cf. Yitzhaki and Olkin (1991) and Yitzhaki (2003)), a modified rank correlation coefficient (cf. Zimmerman (1994)), and three modifications of the Spearman rank correlation coefficient. We observe that different forecasters do not necessarily offer the same average performance. Moreover, evidence of association between two criteria does not always lead to the same decision. The outcomes of the study may help practitioners in selecting the best forecaster(s) for policymaking purposes.
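The rank-based association measures discussed in this abstract all build on the classical Spearman rank correlation, i.e. the Pearson correlation of the two rank vectors. As a rough illustration (a minimal sketch, not the modified coefficients studied in the article), it can be computed in plain Python:

```python
def ranks(xs):
    # Average ranks (1-based); ties receive the mean of their positions.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, converted to 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Pearson correlation applied to the rank vectors.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Perfectly monotone series have rank correlation 1.
print(spearman([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0
```

Applied to two forecasters' yearly RMSE records, a value near 1 would indicate that they tend to do well or badly in the same years.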
5.
We propose a new ratio-type estimator for estimating the finite population mean using two auxiliary variables in stratified two-phase sampling. Expressions for the bias and mean squared error of the proposed estimator are derived up to the first order of approximation. The proposed estimator is more efficient than the usual stratified sample mean estimator, the traditional stratified ratio estimator, and several other stratified estimators, including those of Bahl and Tuteja (1991), Chami et al. (2012), Chand (1975), Choudhury and Singh (2012), Hamad et al. (2013), Vishwakarma and Gangele (2014), Sanaullah et al. (2014), and Chanu and Singh (2014).
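The stratified two-phase estimators compared here all extend the classical single-phase ratio estimator, which scales the sample mean of y by the ratio of the known population mean of x to its sample mean. A minimal sketch with made-up toy data (not the article's estimator):

```python
def ratio_estimator(y_sample, x_sample, x_pop_mean):
    """Classical ratio estimator of the population mean of y:
    ybar_R = ybar * (Xbar / xbar), exploiting a positive y-x correlation."""
    ybar = sum(y_sample) / len(y_sample)
    xbar = sum(x_sample) / len(x_sample)
    return ybar * x_pop_mean / xbar

# Toy data: y roughly proportional to x. The sample underrepresents
# large x (xbar = 25 vs. Xbar = 30), so the estimator scales ybar up.
y = [20.0, 41.0, 59.0, 82.0]
x = [10.0, 20.0, 30.0, 40.0]
print(ratio_estimator(y, x, x_pop_mean=30.0))  # 60.6
```

The gain over the plain sample mean comes precisely from the y-x correlation; with uncorrelated variables the ratio adjustment only adds noise.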
6.
7.
The first-order product autoregressive (PAR(1)) model introduced by McKenzie in 1982 did not attract the attention of practitioners due to the unavailability of a proper estimation method. This article proposes an estimating function (EF) method to fill this gap. In particular, we suggest an optimal combination of linear and quadratic EFs to overcome the problem of parameter identification. The procedure is applied to Weibull and Gamma PAR(1) models. Simulation and data analysis show that the proposed method performs better than existing methods.
8.
Chen Li, Communications in Statistics - Theory and Methods, 2017, 46(8): 3934-3948
This article further investigates the allocation of coverage limits and deductibles to multiple independent risks from the viewpoint of policyholders with increasing utility functions. In a more general setup, we develop the usual stochastic orders on the retained loss, which either generalize or supplement the corresponding results of Lu and Meng (2011) and Hu and Wang (2014). The most unfavorable and favorable allocations of coverage limits and deductibles are also derived for multiple risks with dominated reversed hazard rates and hazard rates, respectively.
9.
In this work, we propose the construction of a chi-squared goodness-of-fit test for censored data for the Bertholon model, which can analyse various competing risks of failure or death. The test is based on a modification of the Nikulin-Rao-Robson (NRR) statistic proposed by Bagdonavicius and Nikulin (2011a, 2011b) for censored data. We apply the test to numerical examples from simulated samples and real data.
10.
M. Pilar Alonso, Asunción Beamonte, Manuel Salvador, Journal of Applied Statistics, 2015, 42(5): 1043-1063
In this paper a methodology for the delineation of local labour markets (LLMs) using evolutionary algorithms is proposed. The procedure, based on that in Flórez-Revuelta et al. [13,14], introduces three modifications. First, initial groups of municipalities satisfying a minimum size requirement are built using the travel time between them. Second, a not fully random initialization algorithm is proposed. And third, a contiguity step is implemented as the final stage of the procedure. These modifications significantly decrease the computational time of the algorithm (by up to 99%) without any deterioration in the quality of the solutions. The optimization algorithm may return a set of potential solutions with very similar objective-function values, which would lead to different partitions, both in the number of markets and in their composition. In order to capture their common aspects, an algorithm based on a k-means-type cluster partitioning is presented. This stage of the procedure also provides a ranking of LLM foci useful for planners and administrations in decision-making processes on issues related to labour activities. Finally, to evaluate the performance of the algorithm, a toy example with artificial data is analysed. The full methodology is illustrated with a real commuting data set for the region of Aragón (Spain).
11.
Testing exponentiality based on Kullback–Leibler information for progressively Type-II censored data
Hadi Alizadeh Noughabi, Communications in Statistics - Simulation and Computation, 2017, 46(10): 7624-7638
In many life-testing and reliability experiments, data are censored in order to reduce the cost and time of testing, and since the conventional Type-I and Type-II censoring schemes are not flexible enough, progressive censoring has been developed. In this article, we develop a general goodness-of-fit test using a new estimate of Kullback–Leibler information based on progressively Type-II censored data. Consistency and other properties of the proposed test are shown. We then use the proposed statistic to test for exponentiality based on progressively Type-II censored data. The power of the proposed test under different progressively Type-II censoring schemes is computed through Monte Carlo simulations. The proposed test is observed to be quite powerful compared with the test of Balakrishnan et al. (2007). Two real data sets from the progressive censoring literature are presented for illustrative purposes.
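The idea behind such tests is easiest to see in the complete-sample case: estimate the sample entropy from spacings (Vasicek's estimator), and combine it with the fitted exponential mean into an estimated Kullback–Leibler distance, which is near zero when the data are exponential. The sketch below is this classical complete-data analogue, not the article's censored-data statistic:

```python
import math
import random

def vasicek_entropy(x, m):
    # Vasicek (1976) spacing-based entropy estimate for a sample x.
    n = len(x)
    s = sorted(x)
    total = 0.0
    for i in range(n):
        lo = s[max(i - m, 0)]
        hi = s[min(i + m, n - 1)]
        total += math.log(n * (hi - lo) / (2 * m))
    return total / n

def kl_exponentiality_stat(x, m=10):
    """Estimated KL distance between the sample density and the fitted
    exponential: KL ~ -H_vasicek + log(mean) + 1. Large values suggest
    departure from exponentiality."""
    mean = sum(x) / len(x)
    return -vasicek_entropy(x, m) + math.log(mean) + 1.0

random.seed(1)
exp_data = [random.expovariate(1.0) for _ in range(500)]
uni_data = [random.random() for _ in range(500)]
print(kl_exponentiality_stat(exp_data))  # typically near 0
print(kl_exponentiality_stat(uni_data))  # noticeably larger
```

In practice the rejection threshold would be calibrated by simulation, which is exactly what the article does for its progressively censored version.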
12.
Abouzar Bazyari, Communications in Statistics - Simulation and Computation, 2017, 46(9): 7194-7209
Testing homogeneity of multivariate normal mean vectors under an order restriction, when the covariance matrices are unknown, arbitrary positive definite, and unequal, is considered. This testing problem has been studied to some extent, for example by Kulatunga and Sasabuchi (1984) when the covariance matrices are known, and by Sasabuchi et al. (2003) and Sasabuchi (2007) when the covariance matrices are unknown but common. In this paper, a test statistic is proposed. Because the main advantage of the bootstrap test is that it avoids deriving the complex null distribution analytically, a bootstrap test statistic is derived; since the proposed statistic is location invariant, the bootstrap p-value is well defined, and steps to estimate it are presented. Our numerical studies via Monte Carlo simulation show that the proposed bootstrap test correctly controls the type I error rate. The power of the test for some p-dimensional normal distributions is computed by Monte Carlo simulation, and the null distribution of the test statistic is estimated using a kernel density. Finally, the bootstrap test is illustrated using real data.
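The bootstrap p-value machinery the abstract refers to is generic: transform the sample so the null hypothesis holds, resample from it, and count how often the resampled statistic exceeds the observed one. A minimal sketch with a deliberately simple one-sample mean test (not the article's multivariate order-restricted statistic):

```python
import random

def bootstrap_pvalue(data, stat, null_transform, B=2000, seed=0):
    """Generic bootstrap p-value: resample from a null-transformed sample
    and count how often the resampled statistic exceeds the observed one."""
    rng = random.Random(seed)
    observed = stat(data)
    null_data = null_transform(data)
    n = len(null_data)
    count = 0
    for _ in range(B):
        boot = [null_data[rng.randrange(n)] for _ in range(n)]
        if stat(boot) >= observed:
            count += 1
    return (count + 1) / (B + 1)  # add-one correction avoids p = 0

# Toy example: test H0: mean = 0 using |sample mean| as the statistic;
# under H0 we recenter the sample so its mean is exactly 0.
data = [0.3, -0.1, 0.4, 0.2, 0.5, 0.1, 0.6, 0.0]
mean = sum(data) / len(data)
p = bootstrap_pvalue(
    data,
    stat=lambda xs: abs(sum(xs) / len(xs)),
    null_transform=lambda xs: [v - mean for v in xs],
)
print(p)  # small: the observed mean is far from 0
```

The article's point is that the same recipe works even when the statistic's null distribution (here, under an order restriction with unequal covariance matrices) is analytically intractable, provided the statistic is invariant under the null transformation.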
13.
When a sufficient correlation between the study variable and the auxiliary variable exists, the ranks of the auxiliary variable are also correlated with the study variable, and these ranks can thus be used as an effective tool for increasing the precision of an estimator. In this paper, we propose a new improved estimator of the finite population mean that incorporates supplementary information in the form of (i) the auxiliary variable and (ii) the ranks of the auxiliary variable. Mathematical expressions for the bias and the mean-squared error of the proposed estimator are derived under the first order of approximation. The theoretical and empirical studies reveal that the proposed estimator always performs better than the usual mean, ratio, product, exponential-ratio and -product, and classical regression estimators, as well as the estimators of Rao (1991), Singh et al. (2009), Shabbir and Gupta (2010), and Grover and Kaur (2011, 2014).
14.
In this article, we analyze the performance of five estimation methods for the long-memory parameter d. The goal is to construct a wavelet estimator of the fractional differencing parameter in nonstationary long-memory processes that dominates the well-known estimator of Shimotsu and Phillips (2005). The simulation results show that the wavelet estimation method of Lee (2005), with several tapering techniques, performs better in most nonstationary long-memory cases. The comparison is based on the empirical root mean squared error of each estimator.
15.
Antonello D'Ambra, Communications in Statistics - Theory and Methods, 2014, 43(6): 1209-1221
Non Symmetric Correspondence Analysis (NSCA) (D'Ambra and Lauro, 1989) is a useful technique for analyzing a two-way contingency table. The key difference between the symmetric and non symmetric versions of correspondence analysis rests on the measure of association used to quantify the relationship between the variables. For a two-way, or multi-way, contingency table, the Pearson chi-squared statistic is commonly used when the categorical variables can be assumed to be symmetrically related. However, in a two-way table one variable may be treated as a predictor and the other as a response, and for such a structure the Pearson chi-squared statistic is not an appropriate measure of association. Instead, one may consider the Goodman-Kruskal tau index. When there are more than two cross-classified variables, multivariate versions of the Goodman-Kruskal tau index can be considered, including Marcotorchino's index (Marcotorchino, 1985) and the Gray-Williams index (Gray and Williams, 1975). In this article, Multiple Non Symmetric Correspondence Analysis (MNSCA), along with the decomposition of the Gray-Williams tau into main effects and interactions (D'Ambra et al., 2011), is used to evaluate the innovative performance of manufacturing enterprises in Campania. Finally, to identify statistically significant categories, confidence ellipses are proposed for MNSCA, starting from the ellipses suggested by Beh (2010) for the symmetric analysis.
16.
Mansson and Shukur (2011) investigated the performance of the Poisson ridge regression (PRR) estimator in terms of the mean square error (MSE) criterion. Similarly, Mansson (2012) investigated the performance of the negative binomial ridge regression (NBRR) estimator according to the MSE criterion. However, there has been no analysis of the predictive performance of the PRR and NBRR estimators. We therefore define the PRR and NBRR predictors and evaluate their predictive performance according to the prediction mean squared error under the target function. Monte Carlo simulations and a real-life numerical example are conducted to investigate the performance of the defined predictors.
17.
Yasin Asar, Communications in Statistics - Simulation and Computation, 2017, 46(4): 2576-2586
Binary logistic regression is a widely used statistical method when the dependent variable is binary or dichotomous. In some logistic regression settings the independent variables are collinear, which leads to the problem of multicollinearity. It is known that multicollinearity inflates the variance of the maximum likelihood estimator (MLE). This article therefore introduces new methods to estimate the shrinkage parameters of the Liu-type logistic estimator proposed by Inan and Erdogan (2013), a generalization of the Liu-type estimator defined by Liu (2003) for the linear model. A Monte Carlo study demonstrates the effectiveness of the proposed methods over the MLE using the mean squared error (MSE) and mean absolute error (MAE) criteria, and a real data application illustrates their benefits. According to the results of the simulation and the application, the proposed methods perform better than the MLE.
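The general mechanism shared by ridge and Liu-type logistic estimators is that a shrinkage parameter is added to the Newton (iteratively reweighted least squares) updates, pulling the coefficients toward zero and trading a little bias for a large variance reduction. A one-coefficient ridge-penalized sketch (an illustration of the shrinkage principle, not the Liu-type estimator of Inan and Erdogan):

```python
import math

def ridge_logit_1d(x, y, k, iters=50):
    """Newton iterations for a one-coefficient logistic model
    P(y=1 | x) = sigmoid(b*x), with a ridge penalty (k/2)*b^2 added
    to the negative log-likelihood; k = 0 recovers the plain MLE."""
    b = 0.0
    for _ in range(iters):
        grad, hess = -k * b, k          # penalty contributions
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-b * xi))
            grad += xi * (yi - p)           # score contribution
            hess += xi * xi * p * (1 - p)   # information contribution
        b += grad / hess
    return b

x = [-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0]
y = [0, 0, 0, 1, 0, 1, 1, 1]
b_mle = ridge_logit_1d(x, y, k=0.0)    # unpenalized (maximum likelihood) fit
b_ridge = ridge_logit_1d(x, y, k=2.0)  # shrunken fit, pulled toward zero
print(b_mle, b_ridge)
```

With multiple collinear predictors the same k is added to the diagonal of the information matrix, which is what stabilizes the otherwise near-singular Newton step.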
18.
Communications in Statistics - Theory and Methods, 2013, 42(8-9): 1789-1810
Mudholkar and Srivastava [1] adapted Mudholkar and Subbaiah's [2] modified stepwise procedure, using trimmed means in place of means with appropriate studentization, to construct robust tests for the significance of a mean vector. They concluded that the robust alternatives provide excellent type I error control and a substantial gain in power over Hotelling's T2 test for heavy-tailed populations, without significant loss of power when the population is normal. In this paper we adapt the modified stepwise approach to construct simple tests for the significance of the orthant-constrained mean vector of a p-variate normal population with unknown covariance matrix, and also robust tests that do not assume normality. The simple normal-theory tests have exact type I error, whereas the robust tests provide reasonable type I error control and a substantial power advantage over Perlman's [3] likelihood ratio test.
19.
The generalized exponential (GE) distribution, introduced by Mudholkar and Srivastava in 1993, has been studied for various applications in lifetime modeling. In this article, five control charts are investigated: the Shewhart-type chart and four parametric bootstrap charts for the GE percentiles, based on the maximum likelihood estimation method, the moment estimation method, the probability plot method, and the least-squares error method. An extensive Monte Carlo simulation study is conducted to compare the performance of all five control charts in terms of average run length. Finally, an example is given for illustration.
20.
Witold Orzeszko, Communications in Statistics - Simulation and Computation, 2017, 46(7): 5151-5165
Serial independence is tested using two measures of the effects of noise reduction in chaotic data, proposed by Orzeszko (2005). Extensive Monte Carlo simulations on the size and power of the new permutation-based tests are performed, with four popular nonparametric tests for serial independence employed as a benchmark. The simulations show that the new tests can be effective tools for detecting different kinds of dependencies; moreover, they can distinguish between nonlinearity in the mean and nonlinearity in the variance.