Similar Articles
20 similar articles found.
1.
We re-examine the criteria of “hyper-admissibility” and “necessary bestness” for the choice of estimator from the point of view of their relevance to the design of actual surveys. Both criteria lead to a unique choice of estimator (viz. the Horvitz-Thompson estimator Ŷ_HT), whatever the character under investigation or the sample design. However, we show here that the “principal hyper-surfaces” (or “domains”) of dimension one, which are practically uninteresting, play the key role in arriving at this unique choice. A variance estimator v1(Ŷ_HT) (due to Horvitz and Thompson), which “often” takes negative values, is shown to be uniquely “hyper-admissible” in a wide class of unbiased estimators of the variance of Ŷ_HT. Extensive empirical evidence on the superiority of the Sen-Yates-Grundy variance estimator v2(Ŷ_HT) over v1(Ŷ_HT) is presented.
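A minimal numerical sketch of the two variance estimators contrasted in this abstract, using their standard textbook forms; the toy design (simple random sampling without replacement) and the sample values are invented purely to exercise the formulas.

```python
import numpy as np
from itertools import combinations

def ht_total(y, pi):
    """Horvitz-Thompson estimator of the population total."""
    return np.sum(y / pi)

def v1_ht(y, pi, pij):
    """Horvitz-Thompson variance estimator v1 (can be negative for some designs/samples)."""
    v = np.sum((1.0 - pi) * (y / pi) ** 2)
    for i, j in combinations(range(len(y)), 2):
        v += 2.0 * (pij[i, j] - pi[i] * pi[j]) / (pi[i] * pi[j] * pij[i, j]) * y[i] * y[j]
    return v

def v2_syg(y, pi, pij):
    """Sen-Yates-Grundy variance estimator v2 (non-negative whenever pi_i*pi_j >= pi_ij)."""
    v = 0.0
    for i, j in combinations(range(len(y)), 2):
        v += (pi[i] * pi[j] - pij[i, j]) / pij[i, j] * (y[i] / pi[i] - y[j] / pi[j]) ** 2
    return v

# Toy design: simple random sampling without replacement, n = 3 from N = 10,
# so pi_i = n/N and pi_ij = n(n-1)/(N(N-1)).  In practice these come from the design.
N, n = 10, 3
y = np.array([12.0, 30.0, 45.0])                   # observed sample values (illustrative)
pi = np.full(n, n / N)                             # first-order inclusion probabilities
pij = np.full((n, n), n * (n - 1) / (N * (N - 1))) # second-order inclusion probabilities

print("Y_HT =", ht_total(y, pi))
print("v1   =", v1_ht(y, pi, pij))
print("v2   =", v2_syg(y, pi, pij))
```

For unequal-probability designs, v1 can come out negative even though it is unbiased, which is the practical objection the abstract raises against it.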

2.
In a 1965 Decision Theory course at Stanford University, Charles Stein began a digression with “an amusing problem”: is there a proper confidence interval for the mean based on a single observation from a normal distribution with both mean and variance unknown? Stein introduced the interval with endpoints ±c|X| and showed that for c large enough, the minimum coverage probability (over all values of the mean and variance) could be made arbitrarily near one. While the problem and coverage calculation were in the author’s hand-written notes from the course, there was no development of any optimality result for the interval. Here, the Hunt–Stein construction plus analysis based on special features of the problem provides a “minimax” rule in the sense that it minimizes the maximum expected length among all procedures with fixed coverage (or, equivalently, maximizes the minimal coverage among all procedures with a fixed expected length). The minimax rule is a mixture of two confidence procedures that are equivariant under scale and sign changes, and are uniformly better than the classroom example or the natural interval X ± c|X|.
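A quick check of the coverage claim, assuming the interval with endpoints ±c|X| from a single X ~ N(μ, σ²): the coverage depends on (μ, σ) only through μ/σ, so the sketch scans that ratio over a grid and reports the approximate minimum coverage for a few values of c.

```python
import numpy as np
from scipy.stats import norm

def coverage(c, ratio):
    """Coverage P(mu in [-c|X|, c|X|]) for X ~ N(mu, sigma^2), as a function of mu/sigma.

    mu is covered iff |X| >= |mu|/c; standardizing with Z = (X - mu)/sigma gives
    P(Z >= r/c - r) + P(Z <= -r/c - r) where r = |mu|/sigma.
    """
    r = np.abs(ratio)
    t = r / c
    return norm.sf(t - r) + norm.cdf(-t - r)

ratios = np.linspace(0.0, 50.0, 2001)
for c in (1.0, 5.0, 20.0, 100.0):
    print(f"c = {c:6.1f}   approximate minimum coverage = {coverage(c, ratios).min():.4f}")
```

As c grows, the printed minimum coverage approaches one, which is the point of Stein's digression.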

3.
This article presents the “centered” method for establishing cell boundaries in the X² goodness-of-fit test, which when applied to common stock returns significantly reduces the high bias of the test statistic associated with the traditional Mann–Wald equiprobable approach. A modified null hypothesis is proposed to incorporate explicitly the usually implicit assumption that the observed discrete returns are “approximated” by the hypothesized continuous density. Simulation results indicate extremely biased X² values resulting from the traditional approach, particularly for low-priced and low-volatility stocks. Daily stock returns for 114 firms are tested to determine whether they are approximated by a normal or one of several normal mixture densities. Results indicate a significantly higher degree of fit than that reported elsewhere to date.
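The “centered” boundary method itself is not described in the abstract, so the sketch below only reproduces the traditional Mann–Wald equiprobable-cell X² test that it is contrasted with, applied to simulated returns rounded to a coarse tick; all parameter choices are illustrative.

```python
import numpy as np
from scipy.stats import norm, chi2

rng = np.random.default_rng(0)

# Simulated "returns": normal values rounded to a coarse tick, mimicking the
# discreteness of low-priced stock prices (parameters are illustrative only).
returns = np.round(rng.normal(0.0, 0.02, size=1000), 3)

k = 20                                   # number of equiprobable cells
mu, sigma = returns.mean(), returns.std(ddof=1)

# Mann-Wald style boundaries: each cell has probability 1/k under the fitted normal.
interior = norm.ppf(np.arange(1, k) / k, loc=mu, scale=sigma)
observed = np.bincount(np.searchsorted(interior, returns), minlength=k)
expected = len(returns) / k

x2 = np.sum((observed - expected) ** 2 / expected)
df = k - 1 - 2                           # two parameters estimated from the data
print(f"X^2 = {x2:.2f},  df = {df},  p-value = {chi2.sf(x2, df):.4f}")
```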

4.
The mean vector associated with several independent variates from the exponential subclass of Hudson (1978) is estimated under weighted squared error loss. In particular, the formal Bayes and “Stein-like” estimators of the mean vector are given. Conditions are also given under which these estimators dominate any of the “natural estimators”. Our conditions for dominance are motivated by a result of Stein (1981), who treated the N_p(θ, I) case with p ≥ 3. Stein showed that formal Bayes estimators dominate the usual estimator if the marginal density of the data is superharmonic. Our present exponential-class generalization entails an elliptic differential inequality in some natural variables. Actually, we assume that each component of the data vector has a probability density function which satisfies a certain differential equation. While the densities of Hudson (1978) are particular solutions of this equation, other solutions are not of the exponential class if certain parameters are unknown. Our approach allows for the possibility of extending the parametric Stein theory to useful nonexponential cases, but the problem of nuisance parameters is not treated here.
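The paper's exponential-family construction is not reproduced here; as background for the Stein (1981) result it builds on, this is a sketch of the classical positive-part James–Stein estimator for the N_p(θ, I), p ≥ 3 case, comparing its Monte Carlo risk with that of the usual estimator X.

```python
import numpy as np

rng = np.random.default_rng(1)

def james_stein(x):
    """Positive-part James-Stein estimator of theta for X ~ N_p(theta, I), p >= 3."""
    p = len(x)
    shrink = max(0.0, 1.0 - (p - 2) / np.dot(x, x))
    return shrink * x

# Compare Monte Carlo risks (expected squared error) of X and the James-Stein estimator.
p, n_rep = 10, 20000
theta = np.full(p, 0.5)
x = theta + rng.standard_normal((n_rep, p))

risk_mle = np.mean(np.sum((x - theta) ** 2, axis=1))
risk_js = np.mean([np.sum((james_stein(row) - theta) ** 2) for row in x])
print(f"risk of X (usual estimator): {risk_mle:.3f}")
print(f"risk of James-Stein:         {risk_js:.3f}")
```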

5.
In this article, tests are constructed for the hypotheses that p ≥ 2 independent samples have the same distribution density (homogeneity hypothesis) or have the same well-defined distribution density (goodness-of-fit test). The limiting power of the constructed tests is found for some local “close” alternatives.

6.
We consider the specific transformation of a Wiener process {X(t), t ≥ 0} in the presence of an absorbing barrier a that results when this process is “time-locked” with respect to its first passage time T_a through a criterion level a, and the evolution of X(t) is considered backwards (retrospectively) from T_a. Formally, we study the random variables defined by Y(t) ≡ X(T_a − t) and derive explicit results for their density and mean, and also for their asymptotic forms. We discuss how our results can aid interpretations of time series “response-locked” to their times of crossing a criterion level.
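A simulation sketch of the time-locked process Y(t) = X(T_a − t): simulate discretized standard Wiener paths, stop each at its first passage of level a, read the paths backwards from T_a, and average. The step size, truncation, and backward window are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

a, dt, max_steps = 1.0, 0.001, 50000   # absorbing level, time step, truncation horizon
look_back = 500                        # backward window: 0.5 time units

def first_passage_path(a, dt, max_steps):
    """One discretized standard Wiener path, truncated at its first passage of level a."""
    increments = np.sqrt(dt) * rng.standard_normal(max_steps)
    path = np.concatenate(([0.0], np.cumsum(increments)))
    hit = int(np.argmax(path >= a))    # first index at or above a (0 if never reached)
    return path[:hit + 1] if path[hit] >= a else None

backward = []
while len(backward) < 500:
    path = first_passage_path(a, dt, max_steps)
    if path is not None and len(path) > look_back:
        # Y(t) = X(T_a - t): read the path backwards from the crossing time T_a.
        backward.append(path[-1 - np.arange(look_back)])

mean_Y = np.mean(backward, axis=0)
print("time-locked mean E[Y(t)] at t = 0.0, 0.1, 0.2, 0.3, 0.4:")
print(np.round(mean_Y[::100], 3))
```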

7.
We consider automatic data-driven density, regression and autoregression estimates, based on any random bandwidth selector ĥ_T. We show that, in a first-order asymptotic approximation, they behave as well as the related estimates obtained with the “optimal” bandwidth h_T, as long as ĥ_T/h_T → 1 in probability. The results are obtained for dependent observations; some of them are also new for independent observations.
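An illustrative check of this equivalence, assuming a Gaussian kernel density estimate and taking a Silverman-type plug-in rule as the data-driven (random) bandwidth selector; the paper's estimator classes and conditions are more general.

```python
import numpy as np

rng = np.random.default_rng(3)

def kde(x_grid, data, h):
    """Gaussian kernel density estimate with bandwidth h."""
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

T = 2000
data = rng.standard_normal(T)
grid = np.linspace(-3, 3, 121)

# Deterministic reference bandwidth (true sigma = 1) and a data-driven random selector.
h_T = 1.06 * T ** (-1 / 5)
h_hat = 1.06 * data.std(ddof=1) * T ** (-1 / 5)    # Silverman-type plug-in (random)

print("h_hat / h_T =", round(h_hat / h_T, 4))       # close to 1 in probability
diff = np.max(np.abs(kde(grid, data, h_hat) - kde(grid, data, h_T)))
print("max difference between the two density estimates on the grid =", round(diff, 5))
```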

8.
Consider k independent observations Y_i (i = 1, …, k) from two-parameter exponential populations Π_i with location parameters μ_i and a common scale parameter. If the μ_i are ranked as μ_p(1) ≤ … ≤ μ_p(k), consider population Π_p(1) as the “worst” population and Π_p(k) as the “best” population (with some tagging so that p(1) and p(k) are well defined in the case of equalities). If the Y_i are ranked as Y_R(1) ≤ … ≤ Y_R(k), we consider the procedure “Select Π_R(k) provided Y_R(k) − Y_R(k−1) is sufficiently large, so that Π_R(k) is demonstrably better than the other populations.” A similar procedure is studied for selecting the “demonstrably worst” population.

9.
“So the last shall be first, and the first last; for many be called, but few chosen.” Matthew 20:16 The “random” draw for positions on the Senate ballot papers in the 1975 election resulted in an apparently non-random ordering, to the possible advantage of one particular party. This paper assesses the statistical significance of the 1975 draw and looks at possible causes of the evident non-randomness. A simplified yet realistic mathematical model is used to describe conditions under which the so-called donkey vote can affect the final outcome of the election, thereby confirming the widely held belief that the order of parties on the Senate ballot paper is relevant. We examine other Senate elections between 1949 and 1983 for relevant non-randomness similar to the 1975 result. Finally, we report briefly on our submission to the 1983 Joint Select Committee on Electoral Reform, which led to an improvement in the randomisation procedure.

10.
P. Reimnitz, Statistics, 2013, 47(2): 245-263
The classical “Two-Armed Bandit” problem with Bernoulli-distributed outcomes is considered. First the terms “asymptotic nearly admissibility” and “asymptotic nearly optimality” are defined. A nontrivial asymptotic nearly admissible and (with respect to a certain Bayes risk) asymptotic nearly optimal strategy is presented, and these properties are then established. Finally, it is discussed how these results generalize to the non-Bernoulli cases and the “k-Armed Bandit” problem (k ≥ 2).
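The paper's asymptotically nearly optimal strategy is not spelled out in the abstract; the sketch below only sets up the Bernoulli two-armed bandit and runs a simple epsilon-greedy strategy as a stand-in, to make the exploration/exploitation trade-off concrete.

```python
import numpy as np

rng = np.random.default_rng(4)

def epsilon_greedy(p, n_pulls, eps=0.05):
    """Play a two-armed Bernoulli bandit with a simple epsilon-greedy strategy.

    p is the pair of true success probabilities (unknown to the player);
    returns the total reward collected over n_pulls.
    """
    successes = np.zeros(2)
    pulls = np.zeros(2)
    reward = 0
    for _ in range(n_pulls):
        if pulls.min() == 0 or rng.random() < eps:
            arm = int(rng.integers(2))                  # explore
        else:
            arm = int(np.argmax(successes / pulls))     # exploit the current estimate
        outcome = rng.random() < p[arm]
        successes[arm] += outcome
        pulls[arm] += 1
        reward += outcome
    return reward

p = (0.4, 0.6)
n_pulls, n_rep = 1000, 200
avg = np.mean([epsilon_greedy(p, n_pulls) for _ in range(n_rep)])
print(f"average reward over {n_rep} runs: {avg:.1f} "
      f"(always playing the better arm would give about {n_pulls * max(p):.0f})")
```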

11.
12.
A class of “optimal” U-statistics type nonparametric test statistics is proposed for the one-sample location problem by considering a kernel depending on a constant a and all possible (distinct) subsamples of size two from a sample of n independent and identically distributed observations. The “optimal” choice of a is determined by the underlying distribution. The proposed class includes the Sign and the modified Wilcoxon signed-rank statistics as special cases. It is shown that any “optimal” member of the class performs better, in terms of Pitman efficiency, than the Sign and Wilcoxon signed-rank statistics. The effect of deviations of the chosen a from the “optimal” a on Pitman efficiency is also examined. A Hodges-Lehmann type point estimator of the location parameter corresponding to the proposed “optimal” test statistics is also defined and studied in this paper.
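The abstract does not display the kernel, so the sketch assumes, purely for illustration, the symmetrized kernel h_a(x_i, x_j) = 0.5·(1{x_i + a·x_j > 0} + 1{x_j + a·x_i > 0}) over all distinct pairs; a = 0 then gives a sign-type statistic and a = 1 the Wilcoxon signed-rank kernel, matching the special cases mentioned above, but the paper's actual kernel may differ.

```python
import numpy as np
from itertools import combinations

def u_stat(x, a):
    """U-statistic over all distinct pairs with a symmetrized kernel indexed by a.

    Hypothetical kernel for illustration only:
        h_a(xi, xj) = 0.5 * (1{xi + a*xj > 0} + 1{xj + a*xi > 0})
    a = 0 recovers a sign-type statistic, a = 1 the Wilcoxon signed-rank kernel.
    """
    total = 0.0
    for i, j in combinations(range(len(x)), 2):
        total += 0.5 * ((x[i] + a * x[j] > 0) + (x[j] + a * x[i] > 0))
    return total / (len(x) * (len(x) - 1) / 2)

rng = np.random.default_rng(5)
x = rng.standard_normal(50) + 0.3              # location-shifted sample
for a in (0.0, 0.5, 1.0):
    print(f"a = {a}:  U = {u_stat(x, a):.3f}")  # under a symmetric null at 0, E[U] = 0.5
```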

13.
“Precision” may be thought of either as the closeness with which a reported value approximates a “true” value, or as the number of digits carried in computations, depending on context. With suitable formal definitions, it is shown that the precision of a reported value is the difference between the precision with which computations are performed and the “loss” in precision due to the computations. Loss in precision is a function of the quantity computed and of the algorithm used to compute it; in the case of the usual “computing formula” for variances and covariances, it is shown that the expected loss of precision is log(k_i k_j), where k_i, the reciprocal of the coefficient of variation, is the ratio of the mean to the standard deviation of the ith variable. When the precision of a reported value, the precision of computations, and the loss of precision due to the computations are expressed to the same base, all three quantities have the units of significant digits in the corresponding number system. Using this metric for “precision,” the expected precision of a computed (co)variance may be estimated in advance of the computation; for data reported in the paper, the estimates agree closely with observed precision. Implications are drawn for the programming of general-purpose statistical programs, as well as for users of existing programs, in order to minimize the loss of precision resulting from characteristics of the data. A nomograph is provided to facilitate the estimation of precision in binary, decimal, and hexadecimal digits.
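A small numerical illustration of the precision loss being quantified: the one-pass “computing formula” for the variance suffers cancellation when the mean is large relative to the standard deviation (k large), while the two-pass formula does not; the data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 100000
sd = 1.0
for mean in (1.0, 1e4, 1e8):                 # k = mean/sd grows, so does the precision loss
    x = rng.normal(mean, sd, n)

    # Two-pass formula: subtract the mean first (numerically stable).
    v_two_pass = np.sum((x - x.mean()) ** 2) / (n - 1)

    # One-pass "computing formula": sum of squares minus (sum)^2/n (cancellation-prone).
    v_one_pass = (np.sum(x ** 2) - np.sum(x) ** 2 / n) / (n - 1)

    k = mean / sd
    print(f"k = {k:>12.0f}   two-pass: {v_two_pass:.6f}   one-pass: {v_one_pass:.6f}")
```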

14.
New generalized correlation measures of 2012, GMC(Y|X), use kernel regressions to overcome the linearity of Pearson's correlation coefficients. A new matrix of generalized correlation coefficients is such that when |r*_ij| > |r*_ji|, it is more likely that the column variable X_j is what Granger called the “instantaneous cause,” or what we call the “kernel cause,” of the row variable X_i. New partial correlations ameliorate confounding. Various examples and simulations support the robustness of the new causality criteria. We include bootstrap inference and robustness checks based on the dependence between regressor and error and on out-of-sample forecasts. Data for 198 countries on nine development variables support growth policy over redistribution and Deaton's criticism of foreign aid. Potential applications include Big Data, since our R code is available in the online supplementary material.
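A sketch of the underlying idea, taking GMC(Y|X) to be the R² of a Nadaraya-Watson kernel regression of Y on X with a Silverman-type bandwidth; the exact estimator, bandwidth rule, and inference in the 2012 definition and in the accompanying R code may differ.

```python
import numpy as np

rng = np.random.default_rng(7)

def nw_r2(y, x):
    """R^2 of a Nadaraya-Watson kernel regression of y on x (Gaussian kernel)."""
    h = 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)      # Silverman-type bandwidth
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    y_hat = (w * y[None, :]).sum(axis=1) / w.sum(axis=1)
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

# Nonlinear dependence: Y is a noisy function of X, so GMC(Y|X) should exceed GMC(X|Y).
n = 400
x = rng.uniform(-2, 2, n)
y = np.sin(2 * x) + 0.3 * rng.standard_normal(n)

print("Pearson r^2 :", round(np.corrcoef(x, y)[0, 1] ** 2, 3))
print("GMC(Y|X)    :", round(nw_r2(y, x), 3))   # X as the "kernel cause" of Y
print("GMC(X|Y)    :", round(nw_r2(x, y), 3))
```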

15.
We extend a diagnostic plot for the frailty distribution in proportional hazards models to the case of shared frailty. The plot is based on a closure property of exponential family failure distributions with canonical statistics z and g(z), namely that the frailty distribution among survivors at time t has the same form, with the same values of the parameters associated with g(z). We extend this property to shared frailty, considering various definitions of a “surviving” cluster at time t. We illustrate the effectiveness of the method in the case where the “death” of the cluster is defined by the first death among its members.

16.
It is well known that the inverse-square-root rule of Abramson (1982) for the bandwidth h of a variable-kernel density estimator achieves a reduction in bias from the fixed-bandwidth estimator, even when a nonnegative kernel is used. Without some form of “clipping” device similar to that of Abramson, the asymptotic bias can be much greater than O(h^4) for target densities like the normal (Terrell and Scott 1992) or even compactly supported densities. However, Abramson used a nonsmooth clipping procedure intended for pointwise estimation. Instead, we propose a smoothly clipped estimator and establish a globally valid, uniformly convergent bias expansion for densities with uniformly continuous fourth derivatives. The main result extends Hall's (1990) formula (see also Terrell and Scott 1992) to several dimensions, and actually to a very general class of estimators. By allowing a clipping parameter to vary with the bandwidth, the usual O(h^4) bias expression holds uniformly on any set where the target density is bounded away from zero.
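A sketch of Abramson's inverse-square-root rule: local bandwidths proportional to the inverse square root of a fixed-bandwidth pilot estimate at each data point, with a geometric-mean normalization (a common implementation convention) and a crude floor on the pilot values standing in for the smooth clipping proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

def gauss_kde(grid, data, h):
    """Gaussian KDE; h may be a scalar or one bandwidth per data point."""
    h = np.broadcast_to(np.asarray(h, dtype=float), (len(data),))
    u = (grid[:, None] - data[None, :]) / h[None, :]
    return (np.exp(-0.5 * u ** 2) / (h[None, :] * np.sqrt(2 * np.pi))).mean(axis=1)

n = 1000
data = rng.standard_normal(n)
grid = np.linspace(-3, 3, 121)
h0 = 1.06 * data.std(ddof=1) * n ** (-1 / 5)

# Pilot estimate at the data points, crude clipping, then Abramson bandwidths
# h_i = h0 * sqrt(g / pilot_i) with g the geometric mean of the pilot values.
pilot = gauss_kde(data, data, h0)
pilot = np.maximum(pilot, 0.1 * pilot.max())        # clipping device (illustrative)
g = np.exp(np.mean(np.log(pilot)))
h_i = h0 * np.sqrt(g / pilot)

fixed = gauss_kde(grid, data, h0)
adaptive = gauss_kde(grid, data, h_i)
true_f = np.exp(-0.5 * grid ** 2) / np.sqrt(2 * np.pi)

print("max error, fixed bandwidth   :", round(np.max(np.abs(fixed - true_f)), 4))
print("max error, Abramson adaptive :", round(np.max(np.abs(adaptive - true_f)), 4))
```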

17.
This article considers the nonparametric estimation of absolutely continuous distribution functions of independent lifetimes of non-identical components in k-out-of-n systems, 2 ≤ k ≤ n, from the observed “autopsy” data. In economics, ascending “button” or “clock” auctions with n heterogeneous bidders with independent private values present 2-out-of-n systems. Classical competing risks models are examples of n-out-of-n systems. Under weak conditions on the underlying distributions, the estimation problem is shown to be well-posed and the suggested extremum sieve estimator is proven to be consistent. This article considers sieve spaces of Bernstein polynomials, which make it easy to impose constraints on the monotonicity of the estimated distribution functions.
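The extremum sieve estimator itself is not reproduced; as a minimal illustration of a Bernstein-polynomial sieve, the sketch smooths an empirical distribution function on [0, 1] with Bernstein weights, which keeps the estimate monotone by construction. The Beta-distributed “lifetimes” are synthetic.

```python
import numpy as np
from scipy.stats import beta, binom

rng = np.random.default_rng(9)

def bernstein_cdf(t, data, m):
    """Bernstein-polynomial smoothing of the empirical CDF on [0, 1], degree m."""
    data = np.asarray(data)
    knots = np.arange(m + 1) / m
    # Empirical CDF evaluated at the knots k/m.
    F_n = np.searchsorted(np.sort(data), knots, side="right") / len(data)
    # F_hat(t) = sum_k F_n(k/m) * C(m, k) t^k (1-t)^(m-k); monotone because F_n is.
    weights = binom.pmf(np.arange(m + 1)[None, :], m, np.asarray(t)[:, None])
    return weights @ F_n

data = rng.beta(2.0, 5.0, size=300)       # synthetic lifetimes already on [0, 1]
t = np.linspace(0, 1, 11)
print("Bernstein estimate :", np.round(bernstein_cdf(t, data, m=20), 3))
print("true Beta(2,5) CDF :", np.round(beta.cdf(t, 2, 5), 3))
```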

18.
We propose a simple method for evaluating the model that has been chosen by an adaptive regression procedure, our main focus being the lasso. This procedure deletes each chosen predictor and refits the lasso to get a set of models that are “close” to the chosen “base model,” and compares the error rates of the base model with those of the nearby models. If the deletion of a predictor leads to significant deterioration in the model's predictive power, the predictor is called indispensable; otherwise, the nearby model is called acceptable and can serve as a good alternative to the base model. This provides both an assessment of the predictive contribution of each variable and a set of alternative models that may be used in place of the chosen model. We call this procedure “Next-Door analysis” since it examines models “next” to the base model. It can be applied to supervised learning problems with ℓ1 penalization and stepwise procedures. We have implemented it in the R language as a library to accompany the well-known glmnet library. The Canadian Journal of Statistics 48: 447–470; 2020 © 2020 Statistical Society of Canada
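A rough sketch of the Next-Door idea using scikit-learn's LassoCV rather than the authors' glmnet companion library: fit a base lasso model, then delete each selected predictor, refit, and compare cross-validated errors. The 10% deterioration threshold used below to flag a predictor as “indispensable” is an arbitrary illustrative choice, not the paper's criterion.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)

# Synthetic data: only the first three predictors matter.
n, p = 200, 10
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.2 * X[:, 2] + rng.standard_normal(n)

base = LassoCV(cv=5).fit(X, y)
chosen = np.flatnonzero(base.coef_ != 0)
base_err = -cross_val_score(LassoCV(cv=5), X, y,
                            scoring="neg_mean_squared_error", cv=5).mean()
print("base model predictors:", chosen, " CV error:", round(base_err, 3))

# "Next-Door" style check: drop each chosen predictor, refit, compare CV error.
for j in chosen:
    keep = [k for k in range(p) if k != j]
    err_j = -cross_val_score(LassoCV(cv=5), X[:, keep], y,
                             scoring="neg_mean_squared_error", cv=5).mean()
    verdict = "indispensable?" if err_j > base_err * 1.1 else "acceptable alternative"
    print(f"drop x{j}: CV error {err_j:.3f}  ({verdict})")
```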

19.

The company now known as DC began as National Periodicals, publishing anthology series such as Adventure Comics, More Fun Comics, and Detective Comics. Superman, the first true “super-hero,” appeared on the scene in a brief story in Action Comics no. 1 (June 1938). Batman appeared not long after, in the pages of Detective Comics no. 27 (May 1939), and the world was never the same. In the late 1940s, National absorbed its competitor All-American Comics (which published such series as Green Lantern, Aquaman, and Green Arrow) and changed the company's name to Detective Comics, “DC” for short. The merger made DC the largest comic book company until the 1950s, when interest in the medium dried up, and Dell, who at that time published Walt Disney's comic books, took over the top spot.

20.

Various approaches can be used to construct a model from a null distribution and a test statistic. I prove that one such approach, originating with D. R. Cox, has the property that the p-value is never greater than the Generalized Likelihood Ratio (GLR). When combined with the general result that the GLR is never greater than any Bayes factor, we conclude that, under Cox's model, the p-value is never greater than any Bayes factor. I also provide a generalization, illustrations for the canonical Normal model, and an alternative approach based on sufficiency. This result is relevant for the ongoing discussion about the evidential value of small p-values, and the movement among statisticians to “redefine statistical significance.”
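A quick numerical check in the canonical Normal model, reading the GLR as L(μ = 0)/sup_μ L(μ) = exp(−z²/2) for a single standardized observation z (my reading of the intended direction of the ratio); the two-sided p-value is then never greater than this quantity.

```python
import numpy as np
from scipy.stats import norm

for z in (0.5, 1.0, 1.96, 2.5, 3.0):
    p_value = 2 * norm.sf(abs(z))        # two-sided p-value for H0: mu = 0, sigma = 1
    glr = np.exp(-0.5 * z ** 2)          # L(mu = 0) / sup_mu L(mu) for one observation z
    print(f"z = {z:4.2f}   p-value = {p_value:.4f}   GLR = {glr:.4f}   p <= GLR: {p_value <= glr}")
```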

