20 similar records found; search time: 31 ms
1.
Susanne Fuchs-Seliger 《Statistical Papers》1986,27(1):101-115
In the following, the economic counterparts of Eichhorn and Voeller's tests for statistical price indices are studied. We show that replacing the Commensurability Axiom of statistical price index theory with a property concerned only with price changes leads to relationships between this property and several other tests that parallel those of statistical price index theory.
2.
Bootstrap for generalized linear models    Cited by 1 (self-citations: 1; citations by others: 0)
Günter Rothe 《Statistical Papers》1989,30(1):17-26
We consider the distribution of the (standardized) ML estimator of the unknown parameter vector in a generalized linear model with canonical link function. It is shown that its (parametric) bootstrap estimator is consistent under the same assumptions that Fahrmeir and Kaufmann (1985, 1986) needed to establish its asymptotic normality.
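As a hedged illustration of the parametric bootstrap idea for a GLM with canonical link, the sketch below fits a Poisson model with log link by Newton's method and resamples responses from the fitted model. The specific model, function names, and fixed iteration count are illustrative assumptions, not code from the paper.

```python
import numpy as np

def fit_poisson_glm(X, y, n_iter=50):
    """ML fit of a Poisson GLM with canonical log link via Newton's method."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        score = X.T @ (y - mu)              # score vector
        info = X.T @ (mu[:, None] * X)      # Fisher information (canonical link)
        beta = beta + np.linalg.solve(info, score)
    return beta

def parametric_bootstrap(X, y, B=200, rng=None):
    """Parametric bootstrap: draw responses from the fitted model and refit."""
    rng = np.random.default_rng(rng)
    beta_hat = fit_poisson_glm(X, y)
    mu_hat = np.exp(X @ beta_hat)
    draws = np.empty((B, X.shape[1]))
    for b in range(B):
        y_star = rng.poisson(mu_hat)        # simulate from the fitted model
        draws[b] = fit_poisson_glm(X, y_star)
    return beta_hat, draws
```

The spread of the bootstrap draws then estimates the sampling distribution of the (standardized) ML estimator.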
3.
We consider the problem of testing the null hypothesis of no change against the alternative of multiple change points in a series of independent observations. We propose an ANOVA-type test statistic and obtain its asymptotic null distribution, together with approximations of its limiting critical values. We report the results of Monte Carlo studies conducted to compare the power of the proposed test with that of a number of competitors. As illustrations, we analyze three real data sets.
4.
W. Krämer 《Statistical Papers》1991,32(1):71-73
The OLS estimator of the disturbance variance in the linear regression model is shown to be asymptotically unbiased in the context of AR(1) disturbances, although for any given design, \(E(s^2/\sigma^2)\) tends to zero as the correlation increases.
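The fixed-design behaviour behind this result can be checked exactly, since \(E(s^2/\sigma^2)=\mathrm{tr}(M\Omega)/(n-k)\), where \(M\) is the residual-maker matrix and \(\Omega\) the AR(1) correlation matrix. A minimal sketch (function name and design are illustrative, not from the paper):

```python
import numpy as np

def rel_bias_s2(X, rho):
    """Exact E(s^2 / sigma^2) under AR(1) disturbances with correlation rho."""
    n, k = X.shape
    M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)  # residual-maker matrix
    i, j = np.indices((n, n))
    Omega = rho ** np.abs(i - j)                        # AR(1) correlation matrix
    return np.trace(M @ Omega) / (n - k)
```

With an intercept in the design, the column of ones is annihilated by \(M\), so the ratio falls toward zero as the correlation approaches one, matching the statement above.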
5.
Neveka M. Olmos Héctor Varela Heleno Bolfarine Héctor W. Gómez 《Statistical Papers》2014,55(4):967-981
In this paper we propose an extension of the generalized half-normal distribution studied in Cooray and Ananda (Commun Stat 37:1323–1337, 2008). The new distribution is defined as the quotient of two independent random variables: a generalized half-normal variable in the numerator and a power of a uniform variable on \((0,1)\) in the denominator. The resulting distribution has greater kurtosis than the generalized half-normal distribution. The density function of this more general distribution is derived, together with some of its properties and moments. We discuss the stochastic representation, maximum likelihood estimation, and moment estimation. Applications to real data sets are reported, revealing that the proposed distribution can fit real data better than the slashed half-normal, generalized half-normal, and Birnbaum–Saunders distributions.
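The quotient construction can be sketched directly: draw a generalized half-normal variable as \(\theta|Z|^{1/\alpha}\) with \(Z\sim N(0,1)\), and divide by \(U^{1/q}\) with \(U\sim U(0,1)\). This is a hedged sketch: the function names and the \(|Z|^{1/\alpha}\) sampler reflect my reading of the Cooray–Ananda parametrization, not code from the paper.

```python
import numpy as np

def rghn(n, theta, alpha, rng):
    """Generalized half-normal draws: X = theta * |Z|**(1/alpha), Z ~ N(0,1)."""
    return theta * np.abs(rng.standard_normal(n)) ** (1.0 / alpha)

def rslash_ghn(n, theta, alpha, q, rng):
    """Slashed GHN: a GHN variable divided by U**(1/q), U ~ Uniform(0,1)."""
    u = rng.uniform(size=n)
    return rghn(n, theta, alpha, rng) / u ** (1.0 / q)
```

Since \(U^{-1/q}\ge 1\) almost surely, the quotient inflates the tails, which is the source of the larger kurtosis noted in the abstract.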
6.
Sebastian Steinmetz 《AStA Advances in Statistical Analysis》2014,98(4):371-387
Widely used tools within the area of Statistical Process Control are control charts of various designs. Control charts are applied to keep process parameters (e.g., mean \(\mu\), standard deviation \(\sigma\), or percent defective \(p\)) under surveillance so that a certain level of process quality can be assured. Well-established schemes such as exponentially weighted moving average (EWMA) charts, cumulative sum charts, and the classical Shewhart charts are frequently treated in theory and practice. Since Shewhart introduced the \(p\) chart (for attribute data), controlling the percent defective has rarely been a subject of analysis, although several extensions using more advanced schemes (e.g., EWMA) have been made to monitor parameter deteriorations. Here, performance comparisons are presented between a newly designed EWMA \(p\) control chart for continuous data, with \(p=f(\mu,\sigma)\), and popular EWMA designs (\(\bar{X}\), \(\bar{X}\)-\(S^2\)). Isolines of the average run length are introduced for each scheme, taking changes in both mean and standard deviation into account. Adequate extensions of the classical EWMA designs are used to make these specific comparisons feasible. The results presented are computed using numerical methods.
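For readers unfamiliar with the scheme these designs build on, the generic EWMA recursion with exact (time-varying) control limits can be sketched as follows. This is a textbook EWMA for individual observations, not the paper's \(p\) chart; the function name and limit constant are illustrative.

```python
def ewma_chart(x, lam, mu0, sigma0, L=2.7):
    """Classic EWMA: z_t = lam*x_t + (1-lam)*z_{t-1}; flag out-of-control points."""
    z = mu0
    out = []
    for t, xt in enumerate(x, 1):
        z = lam * xt + (1 - lam) * z
        # exact control-limit half-width at time t
        width = L * sigma0 * (lam / (2 - lam) * (1 - (1 - lam) ** (2 * t))) ** 0.5
        out.append((z, abs(z - mu0) > width))
    return out
```

A sustained mean shift drives the smoothed statistic outside the limits, which is what the average-run-length isolines above quantify.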
7.
Yo Sheena 《Statistics》2013,47(5):387-399
We consider the orthogonally invariant estimation problem of the inverse of the scale matrix of a Wishart distribution under Stein's loss (entropy loss). For this problem, Krishnamoorthy and Gupta [2] proposed an estimator and showed its good performance in a Monte Carlo simulation, conjecturing that their estimator is minimax. Perron [3] proved its minimaxity for p = 2. In this paper we prove it for p = 3 using a new method.
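For reference, Stein's (entropy) loss for an estimator \(\hat{\Sigma}^{-1}\) of \(\Sigma^{-1}\) is \(L=\mathrm{tr}(\hat{\Sigma}^{-1}\Sigma)-\log\det(\hat{\Sigma}^{-1}\Sigma)-p\); it is nonnegative and zero exactly at the truth. A quick numerical sketch (the function name is mine):

```python
import numpy as np

def stein_loss(est_inv, sigma):
    """Stein's (entropy) loss for an estimator of the inverse scale matrix.

    Assumes est_inv @ sigma has positive determinant."""
    p = sigma.shape[0]
    A = est_inv @ sigma
    sign, logdet = np.linalg.slogdet(A)
    return np.trace(A) - logdet - p
```

Averaging this loss over simulated Wishart draws is how a Monte Carlo comparison like the one cited above would score competing estimators.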
8.
Testing the fractionally integrated order of seasonal and nonseasonal unit roots is quite important for economic and financial time series modeling. In this article, Robinson's (1994) widely used test is applied to various well-known long memory models. Via Monte Carlo experiments, we study and compare the performance of this test across several sample sizes.
9.
The empirical likelihood (EL) technique has been well addressed in both the theoretical and applied literature in the context of powerful nonparametric methods for testing and interval estimation. A nonparametric version of Wilks' theorem (Wilks, 1938) can usually provide an asymptotic evaluation of the Type I error of EL ratio-type tests. In this article, we examine the performance of this asymptotic result when the EL is based on finite samples from various distributions. In the context of Type I error control, we show that the classical EL procedure and Student's t-test have asymptotically similar structures, and we conclude that modifications of t-type tests can be adopted to improve the EL ratio test. We propose applying the Chen (1995) t-test modification to the EL ratio test. We show that the Chen approach leads to a location change of the observed data, whereas the classical Bartlett method is a scale correction of the data distribution. Finally, we modify the EL ratio test via both the Chen and Bartlett corrections. We support our argument with theoretical proofs as well as a Monte Carlo study, and a real data example illustrates the proposed approach in practice.
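For context, the classical EL ratio statistic for a univariate mean reduces to solving a one-dimensional Lagrange-multiplier equation. The sketch below is the standard textbook construction, not the article's modified tests; the function name and bisection tolerances are mine.

```python
import numpy as np

def el_ratio_stat(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu (univariate)."""
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf                      # mu outside the convex hull of the data
    lo = -1.0 / d.max() + 1e-10            # keep all weights 1 + lam*d_i positive
    hi = -1.0 / d.min() - 1e-10
    for _ in range(200):                   # bisection on the Lagrange multiplier
        lam = 0.5 * (lo + hi)
        g = np.sum(d / (1 + lam * d))      # derivative of the constrained problem
        if g > 0:
            lo = lam
        else:
            hi = lam
    return 2 * np.sum(np.log1p(lam * d))
```

Under the null this statistic is asymptotically \(\chi^2_1\), which is exactly the Wilks-type approximation whose finite-sample accuracy the article corrects.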
10.
《Communications in Statistics - Theory and Methods》2013,42(8):1309-1333
The search for optimal non-parametric estimates of the cumulative distribution and hazard functions under order constraints inspired at least two classic papers in mathematical statistics: those of Kiefer and Wolfowitz [1] and Grenander [2], respectively. In both cases, either the greatest convex minorant or the least concave majorant played a fundamental role. Based on Kiefer and Wolfowitz's work, Wang [3,4] found asymptotically minimax estimates of the distribution function F and its cumulative hazard function Λ in the class of all increasing failure rate (IFR) and all increasing failure rate average (IFRA) distributions. In this paper, we prove limit theorems which extend Wang's asymptotic results to the mixed censorship/truncation model, and we provide some other relevant results. The methods are illustrated on the Channing House data, originally analysed by Hyde [5,6].
11.
In this article, we consider progressively Type-II right-censored samples from the Pareto distribution. We introduce a new approach for constructing simultaneous confidence intervals for the unknown parameters of this distribution under progressive censoring. A Monte Carlo study is presented for illustration. It is shown that this confidence region has a smaller area than that introduced by Kuş and Kaya (2007).
12.
13.
Jiro Hodoshima 《Communications in Statistics - Theory and Methods》2014,43(3):578-598
This article investigates the properties of the likelihood function of Spanos' conditional t heteroskedastic model (Spanos, 1994, "On modeling heteroskedasticity: the Student's t and elliptical linear regression models"). It is shown that, for this model, the degrees of freedom of the t distribution are estimable, and that the information matrix of the joint likelihood function is block-diagonal with respect to the conditional mean parameters and the remaining parameters. The joint maximum likelihood estimator and inference based on the t-statistic and χ²-statistic are examined in finite samples by simulation, both when the degrees of freedom are known and when they are unknown.
14.
Chung-Ho Chen 《Communications in Statistics - Theory and Methods》2013,42(10):1767-1778
Economic selection of process parameters has been an important topic in modern statistical process control: the process-parameter setting has a major effect on the expected profit (or cost) per item, and several concerns arise in choosing it. Boucher and Jafari (1991) first considered an attribute single sampling plan applied to the selection of the process target. Pulak and Al-Sultan (1996) extended Boucher and Jafari's model and presented a rectifying inspection plan for determining the optimum process mean. In this article, we further propose a modified Pulak and Al-Sultan model for determining the optimum process mean and standard deviation under a rectifying inspection plan with average outgoing quality limit (AOQL) protection. Taguchi's (1986) symmetric quadratic quality loss function is adopted for evaluating product quality. By solving the modified model, we obtain the process parameters that maximize the expected profit per item while attaining the specified quality level.
15.
In this paper, the focus is on sequential analysis of multivariate financial time series with heavy tails. The mean vector and the covariance matrix of multivariate nonlinear models are monitored simultaneously by modifying conventional control charts to identify structural changes in the data. The considered target process is a constant conditional correlation model (cf. Bollerslev, 1990), an extended constant conditional correlation model (cf. He and Teräsvirta, 2004), a dynamic conditional correlation model (cf. Engle, 2002), or a generalized dynamic conditional correlation model (cf. Cappiello et al., 2006). For statistical surveillance we use control charts based on residuals, and the procedures are constructed for the t-distribution. The detection speed of these charts is compared via Monte Carlo simulation. In the empirical study, the procedure with the best performance is applied to log-returns of the stock market indices FTSE and CAC.
16.
Coppi et al. [7] applied Yang and Wu's [20] idea to propose a possibilistic k-means (PkM) clustering algorithm for LR-type fuzzy numbers. The memberships in the objective function of PkM no longer need to satisfy the fuzzy k-means constraint that the memberships of a data point across classes sum to one. However, the clustering performance of PkM depends on the initialization and the weighting exponent. In this paper, we propose a robust clustering method based on a self-updating procedure. The proposed algorithm not only resolves the initialization problem but also obtains good clustering results. Several numerical examples demonstrate the effectiveness and accuracy of the proposed clustering method, especially its robustness to initial values and noise. Finally, three real fuzzy data sets are used to illustrate the superiority of the proposed algorithm.
17.
《Communications in Statistics - Theory and Methods》2013,42(8-9):1789-1810
Mudholkar and Srivastava [1] adapted Mudholkar and Subbaiah's [2] modified stepwise procedure, using trimmed means in place of means together with appropriate studentization, to construct robust tests for the significance of a mean vector. They concluded that the robust alternatives provide excellent Type I error control and a substantial gain in power over Hotelling's T² test for heavy-tailed populations, without significant loss of power when the population is normal. In this paper we adapt the modified stepwise approach to construct simple tests for the significance of the orthant-constrained mean vector of a p-variate normal population with unknown covariance matrix, and also to construct robust tests without assuming normality. The simple normal-theory tests have exact Type I error, whereas the robust tests provide reasonable Type I error control and a substantial power advantage over Perlman's [3] likelihood ratio test.
18.
R. Hasan Abadi 《Communications in Statistics - Simulation and Computation》2013,42(8):1430-1443
Censored data arise naturally in a number of fields, particularly in problems of reliability and survival analysis. There are several types of censoring; in this article, we confine ourselves to right random censoring. Recently, Ahmadi et al. (2010) considered the problem of estimating unknown parameters in a general framework based on right randomly censored data, assuming that the survival function of the censoring time is free of the unknown parameter. This assumption is sometimes inappropriate, and in such cases a proportional odds (PO) model may be more suitable (Lam and Leung, 2001). Under this model, point and interval estimators of the unknown parameters are obtained in this article. Since it is important to check the adequacy of the models upon which inferences are based (Lawless, 2003, p. 465), two new goodness-of-fit tests for the PO model based on right randomly censored data are proposed. The proposed procedures are applied to two real data sets due to Smith (2002), and a Monte Carlo simulation study is conducted to investigate the behavior of the proposed estimators.
19.
The concept of inclusion probability proportional to size sampling plans excluding adjacent units separated by at most a distance of m (≥ 1) units {IPPSEA plans} is introduced. IPPSEA plans ensure that the first-order inclusion probabilities of units are proportional to the size measures of the units, while the second-order inclusion probabilities are zero for pairs of adjacent units separated by a distance of m units or less. IPPSEA plans are obtained by making use of binary, proper, unequireplicated block designs and a linear programming approach. The performance of IPPSEA plans using the Horvitz–Thompson estimator of the population total is compared with that of existing sampling plans such as simple random sampling without replacement (SRSWOR), balanced sampling plans excluding adjacent units {BSA (m) plans}, probability proportional to size with replacement, Hartley and Rao's plan (1962), Rao et al.'s strategy (1962), and Sampford's IPPS plan (1967), using a real-life population. Unbiased estimation of the variance of the Horvitz–Thompson estimator of the population total is not possible under such plans because some of the second-order inclusion probabilities are zero; to resolve this problem, an approximate variance estimation technique is suggested.
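For reference, the Horvitz–Thompson estimator and first-order IPPS probabilities are simple to state in code; when the inclusion probabilities are exactly proportional to size and \(y\) is itself proportional to size, the estimator reproduces the population total exactly. This is an illustrative sketch, not the article's IPPSEA construction; the function names and the assumption \(n x_i/\sum x \le 1\) are mine.

```python
def ipps_probabilities(sizes, n):
    """First-order inclusion probabilities proportional to size.

    Assumes n * size_i / sum(sizes) <= 1 for every unit."""
    total = sum(sizes)
    return [n * s / total for s in sizes]

def horvitz_thompson(y_sample, pi_sample):
    """Horvitz-Thompson estimator of the population total: sum of y_i / pi_i."""
    return sum(y / p for y, p in zip(y_sample, pi_sample))
```

The variance estimation difficulty noted above arises because the usual unbiased variance estimators divide by the second-order probabilities \(\pi_{ij}\), which an IPPSEA plan deliberately sets to zero for adjacent pairs.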
20.
Thomas Parker 《Communications in Statistics - Theory and Methods》2017,46(11):5195-5202
In this note, it is shown that the finite-sample distributions of the Wald, likelihood ratio, and Lagrange multiplier statistics in the classical linear regression model are members of the generalized beta model introduced by McDonald and Xu (1995a). This is useful for examining the properties of these test statistics. For example, this characterization makes it easy to find distribution, quantile, and density functions for each test statistic, makes it clear why Wald tests may overreject the null hypothesis using asymptotic critical values, and formalizes the fact that the Lagrange multiplier statistic follows a distribution with bounded support.
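The overrejection remark reflects the well-known finite-sample ordering \(W \ge LR \ge LM\) in the classical linear model, which is easy to check numerically. A hedged sketch using the standard large-sample forms of the three statistics (function names and the test data are illustrative, not from the note):

```python
import numpy as np

def classical_test_trio(y, X, R_cols):
    """Wald, LR, LM statistics for H0: beta[j] = 0 for j in R_cols."""
    n = len(y)
    Xr = np.delete(X, R_cols, axis=1)       # restricted design
    def rss(Z):
        b = np.linalg.lstsq(Z, y, rcond=None)[0]
        e = y - Z @ b
        return e @ e
    ru, rr = rss(X), rss(Xr)                # unrestricted and restricted RSS
    W = n * (rr - ru) / ru
    LR = n * np.log(rr / ru)
    LM = n * (rr - ru) / rr
    return W, LR, LM
```

Writing \(a=(rr-ru)/ru\ge 0\) gives \(W=na\), \(LR=n\log(1+a)\), \(LM=na/(1+a)\), so the ordering follows from \(a\ge\log(1+a)\ge a/(1+a)\): against a common \(\chi^2\) critical value, the Wald test rejects most often.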