Similar Articles
20 similar articles found (search time: 625 ms)
1.
The change from the z of “Student's” 1908 paper to the t of present-day statistical theory and practice is traced and documented. It is shown that the change was brought about by the extension of “Student's” approach, by R.A. Fisher, to a broader class of problems, in response to a direct appeal from “Student” for a solution to one of these problems.

2.
A class of “optimal” U-statistics-type nonparametric test statistics is proposed for the one-sample location problem by considering a kernel that depends on a constant a and all possible (distinct) subsamples of size two from a sample of n independent and identically distributed observations. The “optimal” choice of a is determined by the underlying distribution. The proposed class includes the Sign and the modified Wilcoxon signed-rank statistics as special cases. It is shown that any “optimal” member of the class performs better, in terms of Pitman efficiency, than the Sign and Wilcoxon signed-rank statistics. The effect on Pitman efficiency of deviations of the chosen a from the “optimal” a is also examined. A Hodges–Lehmann type point estimator of the location parameter corresponding to the proposed “optimal” test statistics is also defined and studied in this paper.

3.
Selection from k independent populations of the t (< k) populations with the smallest scale parameters has been considered under the Indifference Zone approach by Bechhofer & Sobel (1954). The same problem has been considered under the Subset Selection approach by Gupta & Sobel (1962a) for the normal variances case and by Carroll, Gupta & Huang (1975) for the more general case of stochastically increasing distributions. This paper uses the Subset Selection approach to place confidence bounds on the probability of selecting all “good” populations, or only “good” populations, for the case of scale parameters, where a “good” population is defined as one having one of the t smallest scale parameters. This is an extension of the location parameter results obtained by Bofinger & Mengersen (1986). Special results are obtained for the case of selecting normal populations on the basis of variances, and the necessary tables are presented.

4.
“Precision” may be thought of either as the closeness with which a reported value approximates a “true” value, or as the number of digits carried in computations, depending on context. With suitable formal definitions, it is shown that the precision of a reported value is the difference between the precision with which computations are performed and the “loss” in precision due to the computations. Loss in precision is a function of the quantity computed and of the algorithm used to compute it; in the case of the usual “computing formula” for variances and covariances, it is shown that the expected loss of precision is log k_i k_j, where k_i, the reciprocal of the coefficient of variation, is the ratio of the mean to the standard deviation of the ith variable. When the precision of a reported value, the precision of computations, and the loss of precision due to the computations are expressed to the same base, all three quantities have the units of significant digits in the corresponding number system. Using this metric for “precision,” the expected precision of a computed (co)variance may be estimated in advance of the computation; for data reported in the paper, the estimates agree closely with observed precision. Implications are drawn for the programming of general-purpose statistical programs, as well as for users of existing programs, in order to minimize the loss of precision resulting from characteristics of the data. A nomograph is provided to facilitate the estimation of precision in binary, decimal, and hexadecimal digits.
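The cancellation behind that loss is easy to reproduce. Below is a minimal numerical sketch (not from the paper; the magnitudes are hypothetical) contrasting the one-pass “computing formula” for a variance with the stable two-pass formula in single precision; with k = mean/standard deviation ≈ 1000, the predicted loss is about 2·log10(1000) = 6 decimal digits.

```python
import numpy as np

rng = np.random.default_rng(0)
mean, sd, n = 1000.0, 1.0, 10_000             # k = mean/sd = 1000
x = rng.normal(mean, sd, n).astype(np.float32)

# One-pass "computing formula": sum of squares minus n times the squared mean.
# The two terms agree to roughly 2*log10(k) = 6 digits, which cancel away.
one_pass = (np.sum(x * x) - n * np.float32(x.mean()) ** 2) / (n - 1)

# Two-pass formula: sum of squared deviations from the mean (numerically stable).
two_pass = np.sum((x - x.mean()) ** 2) / (n - 1)

reference = np.var(x.astype(np.float64), ddof=1)   # double-precision benchmark
print(one_pass, two_pass, reference)               # one_pass retains only a digit or so
```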

5.
We re-examine the criteria of “hyper-admissibility” and “necessary bestness”, for the choice of estimator, from the point of view of their relevance to the design of actual surveys. Both criteria give rise to a unique choice of estimator (viz. the Horvitz–Thompson estimator Ŷ_HT), whatever the character under investigation or the sampling design. However, we show here that the “principal hyper-surfaces” (or “domains”) of dimension one (which are practically uninteresting) play the key role in arriving at the unique choice. A variance estimator v1(Ŷ_HT) (due to Horvitz and Thompson), which takes negative values “often”, is shown to be uniquely “hyper-admissible” in a wide class of unbiased estimators of the variance of Ŷ_HT. Extensive empirical evidence on the superiority of the Sen–Yates–Grundy variance estimator v2(Ŷ_HT) over v1(Ŷ_HT) is presented.
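Both variance estimators have simple closed forms. The sketch below (illustrative code, not from the paper) computes the Horvitz–Thompson total estimate together with v1 and the Sen–Yates–Grundy form v2, assuming a fixed-size without-replacement design with known first-order inclusion probabilities pi and second-order inclusion probabilities pi2.

```python
import numpy as np
from itertools import combinations

def ht_total(y, pi):
    """Horvitz-Thompson estimator of the population total."""
    return np.sum(y / pi)

def v1_ht(y, pi, pi2):
    """Horvitz-Thompson variance estimator v1; it can take negative values."""
    v = np.sum((1 - pi) / pi**2 * y**2)
    for i, j in combinations(range(len(y)), 2):
        v += 2 * (pi2[i, j] - pi[i] * pi[j]) / (pi[i] * pi[j] * pi2[i, j]) * y[i] * y[j]
    return v

def v2_syg(y, pi, pi2):
    """Sen-Yates-Grundy variance estimator v2; non-negative whenever
    pi_i * pi_j >= pi_ij, as in many fixed-size designs."""
    v = 0.0
    for i, j in combinations(range(len(y)), 2):
        v += (pi[i] * pi[j] - pi2[i, j]) / pi2[i, j] * (y[i] / pi[i] - y[j] / pi[j])**2
    return v
```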

6.
Dedicated to the late Professor J. B. S. Haldane, who brought to my attention the following very significant story from the ancient Indian epic Mahabharat (Nala–Damayanti Akhyān): The king lost his way in a jungle and was required to spend the night in a tree. The next day he told a fellow traveller that the total number of leaves on the tree was “so many”. On being challenged as to whether he had counted all the leaves, he replied: “No, but I counted the leaves on a few branches of the tree and I know the science of die throwing”. (I can vouch for the accuracy of the reproduction only in the essential respects.)

7.
P. Reimnitz, Statistics, 2013, 47(2): 245–263
The classical “Two-Armed Bandit” problem with Bernoulli-distributed outcomes is considered. First the terms “asymptotic nearly admissibility” and “asymptotic nearly optimality” are defined. A nontrivial asymptotic nearly admissible and (with respect to a certain Bayes risk) asymptotic nearly optimal strategy is presented, and these properties are then established. Finally, it is discussed how these results generalize to the non-Bernoulli case and to the “k-Armed Bandit” problem (k ≥ 2).

8.
The so-called “principal formulae” of planar integral geometry are conventionally couched in terms of the “kinematic density” dx dy dθ. Here a corresponding theory with respect to the “Lebesgue density” dx dy, that is, with rotations suppressed, is developed. The only real difference is that the new “fundamental formula of Blaschke” contains a term depending upon the relative orientations of the two domains involved. In particular, the remarkable iteration property of these formulae carries over. The usual principal formulae follow as a corollary of the formulae given here, upon averaging over orientations.

9.
This paper describes a computer program GTEST for designing group testing experiments for classifying each member of a population of items as “good” or “defective”. The outcome of a test on a group of items is either “negative” (if all items in the group are good) or “positive” (if at least one of the items is defective, but it is not known which). GTEST is based on a Bayesian approach. At each stage, it attempts to maximize (nearly) the expected reduction in the “entropy”, which is a quantitative measure of the amount of uncertainty about the state of the items. The user controls the procedure through specification of the prior probabilities of being defective, restrictions on the construction of the test group, and priorities that are assigned to the items. The nominal prior probabilities can be modified adaptively, to reduce the sensitivity of the procedure to the proportion of defectives in the population.
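The entropy criterion has a convenient form when the items are a priori independent: the test outcome is a deterministic function of the item states, so the expected reduction in entropy from one test equals the binary entropy of the probability that the group tests negative. The sketch below illustrates that criterion only; it is not the GTEST program, and the function names and prior probabilities are hypothetical.

```python
import numpy as np
from itertools import combinations

def group_negative_prob(p, group):
    """P(the test on `group` is negative) under independent defect priors p_i."""
    return float(np.prod([1.0 - p[i] for i in group]))

def expected_entropy_reduction(p, group):
    """Expected drop in entropy about the items' states from one group test.
    The outcome is a deterministic function of the item states, so the mutual
    information equals the binary entropy of P(negative)."""
    q = group_negative_prob(p, group)
    if q <= 0.0 or q >= 1.0:
        return 0.0
    return -(q * np.log2(q) + (1 - q) * np.log2(1 - q))

def most_informative_group(p, max_size):
    """Exhaustive search over all groups of size <= max_size for the first test."""
    items = range(len(p))
    candidates = [g for r in range(1, max_size + 1) for g in combinations(items, r)]
    return max(candidates, key=lambda g: expected_entropy_reduction(p, g))

p = [0.02, 0.05, 0.10, 0.20]          # hypothetical prior defect probabilities
print(most_informative_group(p, max_size=3))
```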

10.
11.
This paper provides D-optimal spring balance designs for estimating individual weights when the number of objects to be weighed in each weighing, B, is fixed. D-optimal chemical balance designs for estimating total weight under both homogeneous and nonhomogeneous error variances are found when the number of objects weighed in each weighing is ≥ B, a fixed number.

We indicate that the restriction used in Chacko & Dey (1978) and Kageyama (1988), namely that chemical designs X be restricted to designs in which exactly “a” objects are placed on the left pan and exactly “b” on the right pan in each of the weighings for a, b > 0, is unnecessary.

12.
Consider k independent observations Y_i (i = 1, …, k) from two-parameter exponential populations Π_i with location parameters μ_i and the same scale parameter σ. If the μ_i are ranked as μ_p(1) ≤ … ≤ μ_p(k), consider population Π_p(1) as the “worst” population and Π_p(k) as the “best” population (with some tagging so that p(1) and p(k) are well defined in the case of equalities). If the Y_i are ranked as Y_r(1) ≤ … ≤ Y_r(k), we consider the procedure, “Select Π_r(k) provided Y_r(k) exceeds the remaining Y_r(i) by a sufficiently large margin, so that Π_r(k) is demonstrably better than the other populations.” A similar procedure is studied for selecting the “demonstrably worst” population.

13.
A “spurious regression” is one in which the time-series variables are nonstationary and independent. It is well known that in this context the OLS parameter estimates and the R² converge to functionals of Brownian motions, the “t-ratios” diverge in distribution, and the Durbin–Watson statistic converges in probability to zero. We derive corresponding results for some common tests for the normality and homoskedasticity of the errors in a spurious regression.
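The phenomenon is easy to reproduce by regressing one random walk on an independent one. The sketch below is a standard illustration of the spurious-regression setting (not of the paper's new normality and homoskedasticity tests): it shows an inflated R², a diverging t-ratio, and a Durbin–Watson statistic near zero.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(1)
n = 1000
x = np.cumsum(rng.standard_normal(n))    # two independent random walks
y = np.cumsum(rng.standard_normal(n))

fit = sm.OLS(y, sm.add_constant(x)).fit()
print(f"R^2 = {fit.rsquared:.3f}, "
      f"t(slope) = {fit.tvalues[1]:.1f}, "
      f"DW = {durbin_watson(fit.resid):.3f}")
```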

14.
Teresa Ledwina, Statistics, 2013, 47(1): 105–118
We state some necessary and sufficient conditions for the admissibility of tests of a simple and of a composite null hypothesis against “one-sided” alternatives for multivariate exponential distributions with discrete support.

The admissibility of the maximum likelihood test for “one-sided” alternatives and of the χ² test for the independence hypothesis in r × s contingency tables is deduced, among others.

15.
This paper presents some further results related to a queueing model originally suggested by the author in “Two queues in parallel” (J. Roy. Statist. Soc., Ser. B, 31, 432–445, 1969). A detailed study of the model under the imposition of finite waiting rooms, when each queue can accommodate at most two customers, illustrates the complexity of the problems involved. However, this study leads to some results concerning the nature of the equilibrium probabilities in the more general finite waiting room cases. A study of the correlation between the input distributions and the correlation of the queue size distributions gives rise to some general conjectures concerning these relationships.

16.

In categorical repeated audit controls, fallible auditors classify sample elements in order to estimate the population fraction of elements in certain categories. To take possible misclassifications into account, subsequent checks are performed with a decreasing number of observations. In this paper a model is presented for a general repeated audit control system in which k subsequent auditors classify elements into r categories. Two different subsampling procedures are discussed, named “stratified” and “random” sampling. Although these two sampling methods lead to different probability distributions, it is shown that the likelihood inferences are identical. The MLEs are derived and the situations with an undefined MLE are examined in detail; it is shown that an unbiased MLE can be obtained under stratified sampling. Three different methods for constructing upper confidence limits are discussed; the Bayesian upper limit seems to be the most satisfactory. Our theoretical results are applied to two cases with r = 2 and k = 2 or 3, respectively.

17.
“So the last shall be first, and the first last; for many be called, but few chosen.” (Matthew 20:16.) The “random” draw for positions on the Senate ballot papers in the 1975 election resulted in an apparently non-random ordering, to the possible advantage of one particular party. This paper assesses the statistical significance of the 1975 draw and looks at possible causes of the evident non-randomness. A simplified yet realistic mathematical model is used to describe conditions under which the so-called donkey vote can affect the final outcome of the election, thereby confirming the widely held belief that the order of parties on the Senate ballot paper is relevant. We examine other Senate elections between 1949 and 1983 for the existence of relevant non-randomness similar to the 1975 result. Finally, we report briefly on our submission to the 1983 Joint Select Committee on Electoral Reform, which led to an improvement in the randomisation procedure.

18.

The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on what treatments get to patients, when they get them and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and a p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points the blame at deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials, so that synthesized evidence across trials can be used to compute probability statements that are valuable for understanding the magnitude of the treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
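As a stylized example of the kind of probability statement being advocated, the sketch below (all numbers are hypothetical, and this is not the authors' proposed analysis) combines a normal “prior” for the treatment effect, synthesized from earlier trials, with a Phase 3 estimate via a conjugate normal–normal update.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical inputs: a "prior" for the treatment effect synthesized from earlier
# trials, and the estimate and standard error from the Phase 3 trial itself.
prior_mean, prior_sd = 0.30, 0.20     # assumed meta-analytic prior from earlier trials
obs_effect, obs_se = 0.25, 0.12       # assumed Phase 3 estimate and standard error

# Conjugate normal-normal update for the treatment effect delta.
post_prec = 1 / prior_sd**2 + 1 / obs_se**2
post_sd = np.sqrt(1 / post_prec)
post_mean = (prior_mean / prior_sd**2 + obs_effect / obs_se**2) / post_prec

# Probability statements about the magnitude of the treatment effect.
print(f"P(delta > 0)    = {1 - norm.cdf(0.0, post_mean, post_sd):.3f}")
print(f"P(delta > 0.15) = {1 - norm.cdf(0.15, post_mean, post_sd):.3f}")
```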

19.
We propose a simple method for evaluating the model that has been chosen by an adaptive regression procedure, our main focus being the lasso. This procedure deletes each chosen predictor and refits the lasso to get a set of models that are “close” to the chosen “base model,” and compares the error rates of the base model with those of the nearby models. If the deletion of a predictor leads to significant deterioration in the model's predictive power, the predictor is called indispensable; otherwise, the nearby model is called acceptable and can serve as a good alternative to the base model. This provides both an assessment of the predictive contribution of each variable and a set of alternative models that may be used in place of the chosen model. We call this procedure “Next-Door analysis” since it examines models “next” to the base model. It can be applied to supervised learning problems with ℓ1 penalization and to stepwise procedures. We have implemented it in the R language as a library to accompany the well-known glmnet library. The Canadian Journal of Statistics 48: 447–470; 2020 © 2020 Statistical Society of Canada
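A rough outline of the idea, written here in Python with scikit-learn rather than the glmnet companion library and without the paper's formal assessment of significant deterioration, might look like this: refit the lasso with each selected predictor deleted and compare cross-validated error with that of the base model. Predictors whose deletion barely changes the error define acceptable nearby models; large increases flag predictors that look indispensable.

```python
import numpy as np
from sklearn.linear_model import Lasso, LassoCV
from sklearn.model_selection import cross_val_score

def next_door_sketch(X, y, cv=5, seed=0):
    """Illustrative sketch: refit the lasso with each selected predictor deleted
    and compare cross-validated error with that of the base model."""
    base = LassoCV(cv=cv, random_state=seed).fit(X, y)
    base_err = -cross_val_score(Lasso(alpha=base.alpha_), X, y, cv=cv,
                                scoring="neg_mean_squared_error").mean()
    deltas = {}
    for j in np.flatnonzero(base.coef_):          # predictors chosen by the base model
        X_minus = np.delete(X, j, axis=1)
        refit = LassoCV(cv=cv, random_state=seed).fit(X_minus, y)
        err = -cross_val_score(Lasso(alpha=refit.alpha_), X_minus, y, cv=cv,
                               scoring="neg_mean_squared_error").mean()
        deltas[j] = err - base_err                # large increase -> looks indispensable
    return base_err, deltas
```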

20.
This article considers the nonparametric estimation of absolutely continuous distribution functions of independent lifetimes of non-identical components in k-out-of-n systems, 2 ≤ k ≤ n, from the observed “autopsy” data. In economics, ascending “button” or “clock” auctions with n heterogeneous bidders with independent private values are 2-out-of-n systems. Classical competing risks models are examples of n-out-of-n systems. Under weak conditions on the underlying distributions, the estimation problem is shown to be well-posed and the suggested extremum sieve estimator is proven to be consistent. The article uses sieve spaces of Bernstein polynomials, which make it easy to impose monotonicity constraints on the estimated distribution functions.
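To illustrate why Bernstein polynomials make monotonicity easy to enforce, here is a generic Bernstein CDF smoother (not the paper's constrained extremum sieve estimator; the data are hypothetical): if the coefficients are the empirical CDF evaluated at j/m, they are automatically nondecreasing, and a Bernstein polynomial with nondecreasing coefficients is itself a nondecreasing distribution function.

```python
import numpy as np
from scipy.stats import binom

def bernstein_cdf(data, m=20):
    """Bernstein-polynomial smoother of the empirical CDF on [0, 1].
    The coefficients F_n(j/m) are nondecreasing in j, so the fitted
    function is itself a nondecreasing (genuine) distribution function."""
    data = np.asarray(data, dtype=float)
    coefs = np.array([np.mean(data <= j / m) for j in range(m + 1)])    # F_n(j/m)
    def F(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        basis = binom.pmf(np.arange(m + 1)[:, None], m, x[None, :])     # C(m,j) x^j (1-x)^(m-j)
        return coefs @ basis
    return F

u = np.random.default_rng(2).uniform(size=200) ** 2   # hypothetical data on [0, 1]
F_hat = bernstein_cdf(u, m=30)
print(F_hat([0.1, 0.5, 0.9]))
```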
