Similar Articles
20 similar articles found.
1.
In the following, the economic counterparts of Eichhorn's and Voeller's tests for statistical price indices are studied. We show that replacing the statistical commensurability axiom in economic price index theory with a property concerned only with price changes leads to relationships between this property and several other tests that parallel those in statistical price index theory.

2.
Bootstrap for generalized linear models
We consider the distribution of the (standardized) ML estimator of the unknown parameter vector in a generalized linear model with canonical link function. We show that its parametric bootstrap estimator is consistent under the same assumptions that Fahrmeir and Kaufmann (1985, 1986) used to establish its asymptotic normality.
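As a hedged illustration of the procedure whose consistency is studied (not the authors' setup), the following Python sketch runs a parametric bootstrap of the ML estimator in a Poisson regression with canonical log link using statsmodels; the design, sample size, and number of replications are invented for the example.

```python
# A minimal sketch of the parametric bootstrap for a GLM with canonical link,
# here a Poisson regression fitted with statsmodels; the simulation design
# (sample size, covariates, B) is illustrative, not taken from the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, B = 200, 500
X = sm.add_constant(rng.normal(size=(n, 1)))            # design matrix with intercept
beta_true = np.array([0.5, 0.3])
y = rng.poisson(np.exp(X @ beta_true))                  # canonical log link

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()  # ML estimate beta_hat
mu_hat = fit.mu                                          # fitted means

# Parametric bootstrap: resample responses from the fitted model and refit.
boot = np.empty((B, X.shape[1]))
for b in range(B):
    y_star = rng.poisson(mu_hat)
    boot[b] = sm.GLM(y_star, X, family=sm.families.Poisson()).fit().params

# Bootstrap approximation to the sampling distribution of beta_hat
print("ML estimate   :", fit.params)
print("bootstrap s.e.:", boot.std(axis=0))
```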

3.
We consider the problem of testing the null hypothesis of no change against the alternative of multiple change points in a series of independent observations. We propose an ANOVA-type test statistic, obtain its asymptotic null distribution, and give approximations of its limiting critical values. We report the results of Monte Carlo studies conducted to compare the power of the proposed test against a number of its competitors, and we analyze three real data sets as illustrations.

4.
The OLS estimator of the disturbance variance in the linear regression model is shown to be asymptotically unbiased under AR(1) disturbances, although for any given design, \(E(s^2)/\sigma^2\) tends to zero as the correlation increases.
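A small Monte Carlo sketch of this effect is given below, under the assumption that the quantity in question is \(E(s^2)/\sigma^2\) with stationary AR(1) disturbances; the fixed design (intercept plus trend, n = 25) is an arbitrary choice for illustration.

```python
# A minimal Monte Carlo sketch: for a fixed design, E(s^2)/sigma^2 drops as the
# AR(1) correlation rho increases, even though s^2 is asymptotically unbiased.
import numpy as np

rng = np.random.default_rng(1)
n, reps = 25, 20000
X = np.column_stack([np.ones(n), np.arange(n)])   # fixed design: intercept + trend
H = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T  # residual-maker matrix
k = X.shape[1]

for rho in (0.0, 0.5, 0.9, 0.99):
    sigma2 = 1.0 / (1.0 - rho**2)                 # stationary Var(u_t) with sigma_eps^2 = 1
    s2 = np.empty(reps)
    for r in range(reps):
        eps = rng.normal(size=n)
        u = np.empty(n)
        u[0] = eps[0] / np.sqrt(1.0 - rho**2)     # stationary start
        for t in range(1, n):
            u[t] = rho * u[t - 1] + eps[t]
        s2[r] = (u @ H @ u) / (n - k)             # OLS disturbance-variance estimator
    print(f"rho = {rho:4.2f}   E(s^2)/sigma^2 approx {s2.mean() / sigma2:.3f}")
```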

5.
In this paper we propose an extension of the generalized half-normal distribution studied in Cooray and Ananda (2008). The new distribution is defined as the quotient of two random variables: a generalized half-normal variable in the numerator and a power of a uniform variable on \((0,1)\) in the denominator. The resulting distribution has greater kurtosis than the generalized half-normal distribution. The density of this more general distribution is derived, together with some of its properties and moments, and we discuss its stochastic representation as well as maximum likelihood and moment estimation. Applications to real data sets show that the proposed distribution can fit data better than the slashed half-normal, generalized half-normal, and Birnbaum–Saunders distributions.
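As a hedged illustration of the quotient construction, the Python sketch below simulates from such a distribution assuming independence of numerator and denominator, the generalized half-normal representation \(X=\theta|Z|^{1/\alpha}\) for standard normal \(Z\), and an illustrative exponent \(q\); these parameterization choices are assumptions for the example, not taken verbatim from the paper.

```python
# A minimal simulation sketch of the quotient construction described above:
# Y = X / U**(1/q), with X generalized half-normal and U uniform on (0, 1),
# assumed independent.  theta, alpha, q are illustrative; moments of Y of
# order below q exist, so q = 6 keeps the kurtosis finite.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
theta, alpha, q, n = 1.0, 1.5, 6.0, 100_000

Z = rng.standard_normal(n)
X = theta * np.abs(Z) ** (1.0 / alpha)     # generalized half-normal draws
U = rng.uniform(size=n)
Y = X / U ** (1.0 / q)                     # heavier-tailed "slashed" version

# The quotient should show greater sample kurtosis than the GHN itself.
print("kurtosis GHN     :", stats.kurtosis(X))
print("kurtosis quotient:", stats.kurtosis(Y))
```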

6.
Control charts of various designs are widely used tools in statistical process control. They keep process parameters (e.g., the mean \(\mu\), standard deviation \(\sigma\), or percent defective \(p\)) under surveillance so that a certain level of process quality can be assured. Well-established schemes such as exponentially weighted moving average (EWMA) charts, cumulative sum charts, and the classical Shewhart charts are frequently treated in theory and practice. Since Shewhart introduced the \(p\) chart for attribute data, controlling the percent defective has rarely been analyzed in its own right, although several extensions using more advanced schemes (e.g., EWMA) have been proposed to monitor parameter deterioration. Here, we compare the performance of a newly designed EWMA \(p\) control chart for continuous data, with \(p=f(\mu,\sigma)\), against popular EWMA designs (\(\bar{X}\) and \(\bar{X}\)-\(S^2\)). Isolines of the average run length are introduced for each scheme, taking changes in both mean and standard deviation into account, and suitable extensions of the classical EWMA designs are used to make these comparisons feasible. The results are computed by numerical methods.
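As a hedged illustration of the EWMA machinery referred to above (not the EWMA \(p\) chart of the paper), the following Python sketch implements a classical EWMA chart for the mean of individual normal observations; the smoothing constant, control-limit width, and in-control parameters are invented for the example.

```python
# A minimal sketch of a classical EWMA chart for individual, normally
# distributed observations, with the usual time-varying control limits.
import numpy as np

def ewma_chart(x, mu0, sigma0, lam=0.2, L=2.86):
    """Return EWMA statistics, control limits, and the first out-of-control index."""
    z = np.empty(len(x))
    ucl = np.empty(len(x))
    lcl = np.empty(len(x))
    prev = mu0
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev                       # EWMA recursion
        z[t] = prev
        w = sigma0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        ucl[t], lcl[t] = mu0 + L * w, mu0 - L * w                # time-varying limits
    out = np.flatnonzero((z > ucl) | (z < lcl))
    return z, lcl, ucl, (out[0] if out.size else None)

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(0.8, 1, 30)])  # mean shift after t = 50
z, lcl, ucl, signal = ewma_chart(x, mu0=0.0, sigma0=1.0)
print("first signal at index:", signal)
```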

7.
Yo Sheena, Statistics, 2013, 47(5): 387–399
We consider the orthogonally invariant estimation of the inverse of the scale matrix of a Wishart distribution under Stein's loss (entropy loss). For this problem, Krishnamoorthy and Gupta (1989) proposed an estimator and showed its good performance in a Monte Carlo simulation, conjecturing that the estimator is minimax. Perron (1997) proved its minimaxity for p = 2. In this paper we prove it for p = 3 using a new method.

8.
Testing the fractionally integrated order of seasonal and nonseasonal unit roots is important in economic and financial time series modeling. In this article, the widely used test of Robinson (1994) is applied to various well-known long memory models. Via Monte Carlo experiments, we study and compare the performance of this test for several sample sizes.

9.
The empirical likelihood (EL) technique has been well addressed in both the theoretical and applied literature as a powerful nonparametric method for testing and interval estimation. A nonparametric version of Wilks' theorem (Wilks, 1938) usually provides an asymptotic evaluation of the Type I error of EL ratio-type tests. In this article, we examine the performance of this asymptotic result when the EL is based on finite samples from various distributions. In the context of Type I error control, we show that the classical EL procedure and Student's t-test have asymptotically similar structures, and we therefore conclude that modifications of t-type tests can be adopted to improve the EL ratio test. We propose applying the t-test modification of Chen (1995) to the EL ratio test and show that the Chen approach amounts to a location change of the observed data, whereas the classical Bartlett method is a scale correction of the data distribution. Finally, we modify the EL ratio test via both the Chen and Bartlett corrections. We support our argument with theoretical proofs as well as a Monte Carlo study, and a real data example illustrates the proposed approach in practice.
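For context on the EL ratio test being corrected, the sketch below implements the standard empirical likelihood ratio statistic for a scalar mean in Python; it is a generic textbook version, not the article's Chen- or Bartlett-corrected procedure, and the skewed data are simulated for illustration.

```python
# A minimal sketch of the empirical likelihood ratio statistic for a mean:
# maximize prod(n*p_i) subject to sum(p_i) = 1 and sum(p_i*(x_i - mu0)) = 0.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_ratio_stat(x, mu0):
    """Return -2 log R(mu0) for H0: E[X] = mu0 (mu0 must lie inside the data range)."""
    d = x - mu0
    if d.min() >= 0 or d.max() <= 0:
        return np.inf                       # mu0 outside the convex hull of the data
    # The Lagrange multiplier lam solves sum d_i / (1 + lam*d_i) = 0
    # subject to 1 + lam*d_i > 0 for all i.
    eps = 1e-10
    lo = (-1.0 + eps) / d.max()
    hi = (-1.0 + eps) / d.min()
    g = lambda lam: np.sum(d / (1.0 + lam * d))
    lam = brentq(g, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

rng = np.random.default_rng(4)
x = rng.exponential(scale=1.0, size=40)      # skewed sample with true mean 1
stat = el_ratio_stat(x, mu0=1.0)
print("-2 log R =", stat, "  asymptotic p-value =", 1 - chi2.cdf(stat, df=1))
```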

10.
The search for optimal non-parametric estimates of the cumulative distribution and hazard functions under order constraints inspired at least two earlier classic papers in mathematical statistics: those of Kiefer and Wolfowitz (1976) and Grenander (1956), respectively. In both cases, either the greatest convex minorant or the least concave majorant played a fundamental role. Building on Kiefer and Wolfowitz's work, Wang (1986, 1987) found asymptotically minimax estimates of the distribution function F and its cumulative hazard function Λ in the classes of all increasing failure rate (IFR) and all increasing failure rate average (IFRA) distributions. In this paper, we prove limit theorems that extend Wang's asymptotic results to the mixed censorship/truncation model and provide some other relevant results. The methods are illustrated on the Channing House data, originally analyzed by Hyde (1977, 1980).
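To make the geometric object concrete, here is a minimal Python sketch that computes the least concave majorant of an empirical cdf for uncensored positive data; it ignores the censoring/truncation structure treated in the paper, and the sample is simulated for illustration.

```python
# A minimal sketch of the least concave majorant (LCM) of an empirical cdf,
# the object underlying the estimators discussed above.
import numpy as np

def least_concave_majorant(xs, ys):
    """Vertices of the LCM of the points (xs, ys); xs must be strictly increasing."""
    hull = []
    for px, py in zip(xs, ys):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle vertex if it lies on or below the chord,
            # i.e. the slopes fail to be strictly decreasing
            if (y2 - y1) * (px - x2) <= (py - y2) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((px, py))
    return hull

rng = np.random.default_rng(5)
t = np.sort(rng.gamma(shape=2.0, scale=1.0, size=200))   # positive "lifetimes"
ecdf = np.arange(1, t.size + 1) / t.size
vertices = least_concave_majorant(np.concatenate(([0.0], t)),
                                  np.concatenate(([0.0], ecdf)))
print("LCM has", len(vertices), "vertices out of", t.size + 1, "ECDF points")
```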

11.
In this article, we consider progressive Type II right-censored samples from the Pareto distribution. We introduce a new approach for constructing a simultaneous confidence region for the unknown parameters of this distribution under progressive censoring, and a Monte Carlo study is presented for illustration. It is shown that this confidence region has a smaller area than that introduced by Kuş and Kaya (2007).

12.
13.
This article investigates the properties of the likelihood function of Spanos' (1994) conditional t heteroskedastic model. It is shown that the degrees of freedom of the t distribution are estimable and that the information matrix of the joint likelihood function is block-diagonal with respect to the conditional mean parameters and the remaining parameters. The joint maximum likelihood estimator and inference based on the t-statistic and χ²-statistic are examined in finite samples by simulation, for both known and unknown degrees of freedom.

14.
Economic selection of process parameters has been an important topic in modern statistical process control: the optimal setting of the process parameters has a major effect on the expected profit or cost per item. Boucher and Jafari (1991) first considered an attribute single sampling plan in the selection of the process target. Pulak and Al-Sultan (1996) extended Boucher and Jafari's model and presented a rectifying inspection plan for determining the optimum process mean. In this article, we propose a modified Pulak and Al-Sultan model for determining the optimum process mean and standard deviation under a rectifying inspection plan with average outgoing quality limit (AOQL) protection. Taguchi's (1986) symmetric quadratic quality loss function is adopted for evaluating product quality. By solving the modified model, we obtain the optimum process parameters that maximize the expected profit per item while reaching the specified quality level.
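As a small illustration of the quadratic loss component only (not the full rectifying-inspection/AOQL model), the sketch below checks the identity \(E[k(Y-T)^2]=k((\mu-T)^2+\sigma^2)\) for a normal process; all constants are invented.

```python
# A minimal sketch of the symmetric quadratic (Taguchi) quality loss used above:
# L(y) = k*(y - T)**2, whose expectation under a normal process is
# k*((mu - T)**2 + sigma**2).  Constants are illustrative only.
import numpy as np

k, T = 2.0, 10.0                      # loss coefficient and target value
mu, sigma = 10.3, 0.5                 # candidate process mean and standard deviation

expected_loss = k * ((mu - T) ** 2 + sigma ** 2)

rng = np.random.default_rng(6)
y = rng.normal(mu, sigma, size=1_000_000)
print("analytic E[L] :", expected_loss)
print("Monte Carlo   :", np.mean(k * (y - T) ** 2))
```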

15.
In this paper, the focus is on sequential analysis of multivariate financial time series with heavy tails. The mean vector and the covariance matrix of multivariate nonlinear models are monitored simultaneously by modifying conventional control charts to identify structural changes in the data. The target process considered is a constant conditional correlation model (Bollerslev, 1990), an extended constant conditional correlation model (He and Teräsvirta, 2004), a dynamic conditional correlation model (Engle, 2002), or a generalized dynamic conditional correlation model (Capiello et al., 2006). For statistical surveillance we use control charts based on residuals, and the procedures are constructed for the t-distribution. The detection speed of these charts is compared via Monte Carlo simulation. In the empirical study, the procedure with the best performance is applied to log-returns of the stock market indices FTSE and CAC.

16.
Coppi et al. (2012) applied the idea of Yang and Wu (2006) to propose a possibilistic k-means (PkM) clustering algorithm for LR-type fuzzy numbers. The memberships in the objective function of PkM no longer need to satisfy the fuzzy k-means constraint that the memberships of a data point across classes sum to one. However, the clustering performance of PkM depends on the initialization and the weighting exponent. In this paper, we propose a robust clustering method based on a self-updating procedure. The proposed algorithm not only solves the initialization problem but also yields good clustering results. Several numerical examples demonstrate the effectiveness and accuracy of the proposed method, especially its robustness to initial values and noise. Finally, three real fuzzy data sets are used to illustrate the superiority of the proposed algorithm.
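For orientation, the sketch below implements the classical fuzzy k-means (fuzzy c-means) baseline on crisp data in Python; it is not the LR-fuzzy-number PkM algorithm or the proposed self-updating procedure, and the two-cluster data are simulated for illustration.

```python
# A minimal sketch of fuzzy c-means: memberships of each point across clusters
# sum to one, which is exactly the constraint relaxed by possibilistic variants.
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Return cluster centers and the fuzzy membership matrix (n x c)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.dirichlet(np.ones(c), size=n)                 # memberships sum to one per point
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]    # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # standard membership update: u_ik proportional to d_ik**(-2/(m-1))
        U = 1.0 / (d ** (2.0 / (m - 1))
                   * np.sum(d ** (-2.0 / (m - 1)), axis=1, keepdims=True))
    return centers, U

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(3, 0.5, (100, 2))])
centers, U = fuzzy_c_means(X, c=2)
print("centers:\n", centers)
```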

17.
Mudholkar and Srivastava (1999) adapted Mudholkar and Subbaiah's (1980) modified stepwise procedure, using trimmed means in place of means with appropriate Studentization, to construct robust tests for the significance of a mean vector. They concluded that the robust alternatives provide excellent Type I error control and a substantial gain in power over Hotelling's T² test for heavy-tailed populations, without significant loss of power when the population is normal. In this paper we adapt the modified stepwise approach to construct simple tests for the significance of the orthant-constrained mean vector of a p-variate normal population with unknown covariance matrix, as well as robust tests that do not assume normality. The simple normal-theory tests have exact Type I error, whereas the robust tests provide reasonable Type I error control and a substantial power advantage over Perlman's (1969) likelihood ratio test.

18.
Censored data arise naturally in a number of fields, particularly in reliability and survival analysis. There are several types of censoring; in this article we confine ourselves to random right censoring. Recently, Ahmadi et al. (2010) considered the problem of estimating unknown parameters in a general framework based on randomly right-censored data, assuming that the survival function of the censoring time is free of the unknown parameter. This assumption is sometimes inappropriate, and in such cases a proportional odds (PO) model may be more suitable (Lam and Leung, 2001). Under this model, point and interval estimators for the unknown parameters are obtained in this article. Since it is important to check the adequacy of the models upon which inferences are based (Lawless, 2003, p. 465), two new goodness-of-fit tests for the PO model based on randomly right-censored data are proposed. The proposed procedures are applied to two real data sets from Smith (2002), and a Monte Carlo simulation study is conducted to examine the behavior of the estimators obtained.

19.
The concept of inclusion probability proportional to size sampling plans excluding adjacent units separated by at most a distance of m (≥ 1) units (IPPSEA plans) is introduced. IPPSEA plans ensure that the first-order inclusion probabilities of units are proportional to their size measures, while the second-order inclusion probabilities are zero for pairs of units separated by a distance of m units or less. IPPSEA plans are obtained using binary, proper, unequireplicated block designs and a linear programming approach. The performance of IPPSEA plans with the Horvitz–Thompson estimator of the population total is compared with existing plans such as simple random sampling without replacement (SRSWOR), balanced sampling plans excluding adjacent units (BSA(m) plans), probability proportional to size with replacement, Hartley and Rao's plan (1962), Rao et al.'s strategy (1962), and Sampford's IPPS plan (1967), using a real-life population. Unbiased variance estimation for the Horvitz–Thompson estimator of the population total is not possible under these plans because some of the second-order inclusion probabilities are zero; to resolve this problem, an approximate variance estimation technique is suggested.
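A minimal sketch of the Horvitz–Thompson estimator used in these comparisons is given below; the sample values and inclusion probabilities are invented for the example.

```python
# A minimal sketch of the Horvitz-Thompson estimator of a population total:
# each sampled y-value is weighted by the reciprocal of its first-order
# inclusion probability.
import numpy as np

y_sample = np.array([12.0, 7.5, 20.1, 3.3])          # observed study variable
pi_sample = np.array([0.20, 0.10, 0.35, 0.05])       # first-order inclusion probabilities

ht_total = np.sum(y_sample / pi_sample)
print("Horvitz-Thompson estimate of the population total:", ht_total)
```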

20.
In this note, it is shown that the finite-sample distributions of the Wald, likelihood ratio, and Lagrange multiplier statistics in the classical linear regression model are members of the generalized beta family introduced by McDonald and Xu (1995a). This is useful for examining the properties of these test statistics. For example, the characterization makes it easy to find distribution, quantile, and density functions for each test statistic, makes it clear why Wald tests may overreject the null hypothesis when asymptotic critical values are used, and formalizes the fact that the Lagrange multiplier statistic follows a distribution with bounded support.
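As a hedged illustration using the standard residual-sum-of-squares formulas for testing linear restrictions under normality (not anything specific to the note), the sketch below computes the three statistics for a zero restriction in a simulated regression and displays the classical ordering \(W \ge LR \ge LM\) together with the bound \(LM \le n\).

```python
# A minimal sketch of the Wald, LR, and LM statistics for zero restrictions in
# a classical linear regression, computed from restricted/unrestricted SSRs.
# The data-generating design is invented for illustration.
import numpy as np

rng = np.random.default_rng(8)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.0, 0.0]) + rng.normal(size=n)   # H0: both slopes are zero

def ssr(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return e @ e

ssr_u = ssr(y, X)                 # unrestricted model
ssr_r = ssr(y, X[:, :1])          # restricted model (intercept only)

W  = n * (ssr_r - ssr_u) / ssr_u
LR = n * np.log(ssr_r / ssr_u)
LM = n * (ssr_r - ssr_u) / ssr_r  # bounded above by n
print(f"W = {W:.3f} >= LR = {LR:.3f} >= LM = {LM:.3f} <= n = {n}")
```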
