Similar Articles
20 similar articles found.
1.
2.
Sample Size     
Conventionally, sample size calculations are viewed as calculations determining the right number of subjects needed for a study. Such calculations follow the classical paradigm: “for a difference X, I need sample size Y.” We argue that the paradigm “for a sample size Y, I get information Z” is more appropriate for many studies and reflects the information needed by scientists when planning a study. This approach applies to both physiological studies and Phase I and II interventional studies. We provide actual examples from our own consulting work to demonstrate this. We conclude that sample size should be viewed not as a unique right number, but rather as a factor needed to assess the utility of a study.

3.
In the prospective study of a finely stratified population, one individual from each stratum is chosen at random for the “treatment” group and one for the “non-treatment” group. For each individual the probability of failure is a logistic function of parameters designating the stratum, the treatment and a covariate. Uniformly most powerful unbiased tests for the treatment effect are given. These tests are generally cumbersome but, if the covariate is dichotomous, the tests and confidence intervals are simple. Readily usable (but non-optimal) tests are also proposed for polytomous covariates and factorial designs. These are then adapted to retrospective studies (in which one “success” and one “failure” per stratum are sampled). Tests for retrospective studies with a continuous “treatment” score are also proposed.

4.
A class of “optimal” U-statistics type nonparametric test statistics is proposed for the one-sample location problem by considering a kernel depending on a constant a and all possible (distinct) subsamples of size two from a sample of n independent and identically distributed observations. The “optimal” choice of a is determined by the underlying distribution. The proposed class includes the Sign and the modified Wilcoxon signed-rank statistics as special cases. It is shown that any “optimal” member of the class performs better in terms of Pitman efficiency relative to the Sign and Wilcoxon signed-rank statistics. The effect on Pitman efficiency of deviating from the “optimal” a is also examined. A Hodges–Lehmann type point estimator of the location parameter corresponding to the proposed “optimal” test statistics is also defined and studied in this paper.

5.
We re-examine the criteria of “hyper-admissibility” and “necessary bestness”, for the choice of estimator, from the point of view of their relevance to the design of actual surveys. Both these criteria give rise to a unique choice of estimator (viz. the Horvitz–Thompson estimator ŶHT) whatever be the character under investigation or sample design. However, we show here that the “principal hyper-surfaces” (or “domains”) of dimension one (which are practically uninteresting) play the key role in arriving at the unique choice. A variance estimator v1(ŶHT) (due to Horvitz and Thompson), which takes negative values “often”, is shown to be uniquely “hyper-admissible” in a wide class of unbiased estimators of the variance of ŶHT. Extensive empirical evidence on the superiority of the Sen–Yates–Grundy variance estimator v2(ŶHT) over v1(ŶHT) is presented.
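For concreteness, the two variance estimators being compared have standard closed forms; the sketch below (not taken from the paper) computes the Horvitz–Thompson point estimator together with v1 and v2 for a fixed-size design, given first- and second-order inclusion probabilities. The function and variable names (y, pi, pi2) are illustrative assumptions.

```python
import numpy as np

def ht_total(y, pi):
    """Horvitz-Thompson estimator of the population total: sum of y_i / pi_i."""
    return float(np.sum(y / pi))

def v1_ht(y, pi, pi2):
    """Horvitz-Thompson variance estimator v1 (can take negative values)."""
    n = len(y)
    v = float(np.sum((1.0 - pi) / pi**2 * y**2))
    for i in range(n):
        for j in range(n):
            if i != j:
                v += ((pi2[i, j] - pi[i] * pi[j])
                      / (pi[i] * pi[j] * pi2[i, j]) * y[i] * y[j])
    return v

def v2_syg(y, pi, pi2):
    """Sen-Yates-Grundy variance estimator v2 (fixed sample size designs)."""
    n = len(y)
    v = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            v += ((pi[i] * pi[j] - pi2[i, j]) / pi2[i, j]
                  * (y[i] / pi[i] - y[j] / pi[j]) ** 2)
    return v
```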

6.
P. Reimnitz, Statistics, 2013, 47(2): 245–263
The classical “Two-Armed Bandit” problem with Bernoulli-distributed outcomes is considered. First, the terms “asymptotic nearly admissibility” and “asymptotic nearly optimality” are defined. A nontrivial asymptotically nearly admissible and (with respect to a certain Bayes risk) asymptotically nearly optimal strategy is presented, and these properties are then established. Finally, it is discussed how these results generalize to the non-Bernoulli cases and the “k-Armed Bandit” problem (k ≥ 2).

7.
“So the last shall be first, and the first last; for many be called, but few chosen.” Matthew 20:16 The “random” draw for positions on the Senate ballot papers in the 1975 election resulted in an apparently non-random ordering, to the possible advantage of one particular party. This paper assesses the statistical significance of the 1975 draw and looks at possible causes of the evident non-randomness. A simplified yet realistic mathematical model is used to describe conditions under which the so-called donkey vote can have an effect on the final outcome of the election, thereby confirming the widely-held belief that the order of parties on the Senate ballot paper is relevant. We examine other Senate elections between 1949 and 1983 for the existence of relevant non-randomness similar to the 1975 result. Finally, we report briefly on our submission to the 1983 Joint Select Committee on Electoral Reform, which led to an improvement in the randomisation procedure.

8.
ABSTRACT

We study estimation and inference when there are multiple values (“matches”) for the explanatory variables and only one of the matches is the correct one. This problem arises often when two datasets are linked together on the basis of information that does not uniquely identify regressor values. We offer a set of two intuitive conditions that ensure consistent inference using the average of the possible matches in a linear framework. The first condition is the exogeneity of the false match with respect to the regression error. The second condition is a notion of exchangeability between the true and false matches. Conditioning on the observed data, the probability that each match is correct is completely unrestricted. We perform a Monte Carlo study to investigate the estimator’s finite-sample performance relative to others proposed in the literature. Finally, we provide an empirical example revisiting a main area of application: the measurement of intergenerational elasticities in income. Supplementary materials for this article are available online.

9.
Galton's first work on regression probably led him to think of it as a unidirectional, genetic process, which he called “reversion.” A subsequent experiment on family heights made him realize that the phenomenon was symmetric and nongenetic. Galton then abandoned “reversion” in favor of “regression.” Final confirmation was provided through Dickson's mathematical analysis and Galton's examination of height data on brothers.

10.
The exponentially weighted moving average (EWMA) chart is often designed assuming the process parameters are known. In practice, the parameters are rarely known and need to be estimated from Phase I samples. Different practitioners use different Phase I samples when constructing their control chart limits, which leads to “Phase I between-practitioners” variability in the in-control average run length (ARL) of control charts. The standard deviation of the ARL (SDARL) is a good metric for quantifying this variability. Based on the SDARL metric, the performance of the EWMA median chart with estimated parameters is investigated in this paper, and some recommendations are given. The results show that the EWMA median chart requires a much larger amount of Phase I data in order to reduce the variation in the in-control ARL to a reasonable level. Because the amount of Phase I data is limited in practice, the suggested EWMA median chart is designed with the bootstrap method, which provides a good balance between the in-control and out-of-control ARL values.
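The recursion behind an EWMA chart on subgroup medians is standard, even though the paper's own design details (SDARL-based calibration, bootstrap limits) are not reproduced here. The minimal sketch below, with illustrative choices of the smoothing constant lam, the multiplier L, and a normal-theory standard error for the median, shows where the Phase I estimates enter and hence where the between-practitioners variability comes from.

```python
import numpy as np

def ewma_median_chart(phase1, phase2, lam=0.1, L=2.7):
    """Run an EWMA chart on subgroup medians with parameters estimated from Phase I.

    phase1, phase2: 2-D arrays, one subgroup per row.
    The in-control center and spread are estimated from Phase I data only,
    which is exactly where the between-practitioners variability enters.
    Returns a boolean list: True where the chart signals.
    """
    n = phase1.shape[1]
    mu_hat = float(np.mean(np.median(phase1, axis=1)))          # estimated in-control center
    sigma_hat = float(np.mean(np.std(phase1, axis=1, ddof=1)))  # crude pooled spread estimate
    se_med = sigma_hat * np.sqrt(np.pi / (2.0 * n))             # normal-theory s.e. of a subgroup median
    z = mu_hat
    signals = []
    for i, subgroup in enumerate(phase2, start=1):
        z = lam * float(np.median(subgroup)) + (1.0 - lam) * z
        half_width = L * se_med * np.sqrt(lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * i)))
        signals.append(abs(z - mu_hat) > half_width)
    return signals
```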

11.
This paper describes a computer program GTEST for designing group testing experiments for classifying each member of a population of items as “good” or “defective”. The outcome of a test on a group of items is either “negative” (if all items in the group are good) or “positive” (if at least one of the items is defective, but it is not known which). GTEST is based on a Bayesian approach. At each stage, it attempts to (approximately) maximize the expected reduction in “entropy”, which is a quantitative measure of the amount of uncertainty about the state of the items. The user controls the procedure through specification of the prior probabilities of being defective, restrictions on the construction of the test group, and priorities that are assigned to the items. The nominal prior probabilities can be modified adaptively, to reduce the sensitivity of the procedure to the proportion of defectives in the population.
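GTEST itself is not reproduced here, but the entropy criterion it approximates is easy to illustrate: for a noiseless group test with independent priors, the expected reduction in entropy from one test equals the entropy of the binary test outcome, so the most informative admissible group is the one whose probability of a negative result is closest to one half. The sketch below is a hypothetical greedy selector built on that identity; the function names and the exhaustive search over small groups are illustrative, not part of the program described.

```python
import math
from itertools import combinations

def outcome_entropy(group, p_defective):
    """Entropy (in bits) of the outcome of a noiseless test on `group`.

    With independent priors, P(negative) = prod_i (1 - p_i).  For a noiseless
    test, the expected reduction in uncertainty about the items equals this
    outcome entropy, which is maximal when P(negative) is close to 1/2.
    """
    p_neg = math.prod(1.0 - p_defective[i] for i in group)
    if p_neg <= 0.0 or p_neg >= 1.0:
        return 0.0
    return -(p_neg * math.log2(p_neg) + (1.0 - p_neg) * math.log2(1.0 - p_neg))

def most_informative_group(items, p_defective, max_size):
    """Exhaustively pick the group (up to max_size items) with the largest outcome entropy."""
    candidates = (g for r in range(1, max_size + 1) for g in combinations(items, r))
    return max(candidates, key=lambda g: outcome_entropy(g, p_defective))

# Example: four items with unequal prior probabilities of being defective.
p = {0: 0.05, 1: 0.10, 2: 0.30, 3: 0.50}
print(most_informative_group(list(p), p, max_size=3))
```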

12.
The so-called “principal formulae” of planar integral geometry are conventionally couched in terms of the “kinematic density” dx dy dθ. Here a corresponding theory with respect to the “Lebesgue density” dx dy, that is, with rotations suppressed, is developed. The only real difference is that the new “fundamental formula of Blaschke” contains a term depending upon the relative orientations of the two domains involved. In particular, the remarkable iteration property of these formulae carries over. The usual principal formulae follow as a corollary of the formulae given here, upon averaging over orientations.

13.
In this article, we consider the distributions of the number of success runs of specified length and of scans on a higher-order Markov tree under three different enumeration schemes (the “non-overlapping”, the “at least”, and the “overlapping” schemes). Recursive formulae for the evaluation of their probability generating functions are established. We provide a proper framework for extending the exact distribution theory of runs and scans from sequences to directed trees. Some numerical results for the run and scan statistics are given in order to illustrate the computational aspects and the feasibility of our theoretical results. Finally, two special reliability systems are considered, which are closely related to our general results.
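The three enumeration schemes are easiest to see on a plain binary sequence rather than a Markov tree; the sketch below is an illustration of the scheme definitions only, not the article's tree-based recursions, and counts success runs of length k under each scheme.

```python
def count_success_runs(seq, k):
    """Count success runs of length k in a 0/1 sequence under three schemes.

    non_overlapping: restart counting after each completed run of k successes;
    at_least:        count each maximal success run of length >= k once;
    overlapping:     count every window of k consecutive successes.
    """
    non_overlapping = at_least = overlapping = 0
    run = 0
    for x in seq:
        run = run + 1 if x == 1 else 0
        if run >= k:
            overlapping += 1          # every new success extends a window of length k
        if run == k:
            at_least += 1             # first time this maximal run reaches length k
        if run > 0 and run % k == 0:
            non_overlapping += 1      # completed another disjoint block of k successes
    return non_overlapping, at_least, overlapping

# Example: a single run of four successes with k = 2 gives (2, 1, 3).
print(count_success_runs([1, 1, 1, 1, 0, 1, 1], 2))
```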

14.
Selection from k independent populations of the t (< k) populations with the smallest scale parameters has been considered under the Indifference Zone approach by Bechhofer & Sobel (1954). The same problem has been considered under the Subset Selection approach by Gupta & Sobel (1962a) for the normal variances case and by Carroll, Gupta & Huang (1975) for the more general case of stochastically increasing distributions. This paper uses the Subset Selection approach to place confidence bounds on the probability of selecting all “good” populations, or only “good” populations, for the case of scale parameters, where a “good” population is defined to have one of the t smallest scale parameters. This is an extension of the location parameter results obtained by Bofinger & Mengersen (1986). Special results are obtained for the case of selecting normal populations based on variances, and the necessary tables are presented.

15.
The gist of the quickest change-point detection problem is to detect the presence of a change in the statistical behavior of a series of sequentially made observations, and to do so in an optimal detection-speed-versus-“false-positive”-risk manner. When optimality is understood either in the generalized Bayesian sense or as defined in Shiryaev's multi-cyclic setup, the so-called Shiryaev–Roberts (SR) detection procedure is known to be the “best one can do”, provided, however, that the observations’ pre- and post-change distributions are both fully specified. We consider a more realistic setup, viz. one where the post-change distribution is assumed known only up to a parameter, so that the latter may be misspecified. The question of interest is the sensitivity (or robustness) of the otherwise “best” SR procedure with respect to a possible misspecification of the post-change distribution parameter. To answer this question, we provide a case study where, in a specific Gaussian scenario, we allow the SR procedure to be “out of tune” in the way of the post-change distribution parameter, and numerically assess the effect of the “mistuning” on Shiryaev's (multi-cyclic) Stationary Average Detection Delay delivered by the SR procedure. The comprehensive quantitative robustness characterization of the SR procedure obtained in the study can be used to develop the respective theory as well as to provide a rationale for practical design of the SR procedure. The overall qualitative conclusion of the study is an expected one: the SR procedure is less (more) robust for changes of lower (higher) contrast and for lower (higher) levels of the false alarm risk.
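The Shiryaev–Roberts statistic itself follows a simple multiplicative recursion, R_n = (1 + R_{n-1})·Λ_n with R_0 = 0, where Λ_n is the post- vs pre-change likelihood ratio; in a Gaussian setting like the one considered, mistuning amounts to computing Λ_n at a design value of the post-change mean that differs from the true one. The sketch below simulates one run of such a mistuned procedure; the parameter names and the simulation setup are illustrative, not the paper's numerical study.

```python
import math
import random

def sr_stopping_time(theta_true, theta_design, threshold, change_point=0, seed=0):
    """Simulate one run of the Shiryaev-Roberts procedure on Gaussian data.

    Observations are N(0, 1) before the change point and N(theta_true, 1) after.
    The statistic uses the likelihood ratio tuned to theta_design, so
    theta_design != theta_true corresponds to the 'mistuned' case discussed above.
    Returns the first n with R_n >= threshold.
    """
    rng = random.Random(seed)
    r, n = 0.0, 0
    while r < threshold:
        n += 1
        mean = theta_true if n > change_point else 0.0
        x = rng.gauss(mean, 1.0)
        # likelihood ratio f_{theta_design}(x) / f_0(x) for unit-variance normals
        lam = math.exp(theta_design * x - theta_design ** 2 / 2.0)
        r = (1.0 + r) * lam      # Shiryaev-Roberts recursion, R_0 = 0
    return n

# Example: a change of size 1.0 from time 0, detected with a statistic tuned to 0.5.
print(sr_stopping_time(theta_true=1.0, theta_design=0.5, threshold=100.0))
```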

16.
The change from the z of “Student's” 1908 paper to the t of present-day statistical theory and practice is traced and documented. It is shown that the change was brought about by the extension of “Student's” approach, by R.A. Fisher, to a broader class of problems, in response to a direct appeal from “Student” for a solution to one of these problems.

17.
ABSTRACT

Elsewhere, I have promoted (univariate continuous) “transformation of scale” (ToS) distributions having densities of the form 2g(Π⁻¹(x)) where g is a symmetric distribution and Π is a transformation function with a special property. Here, I develop bivariate (readily multivariate) ToS distributions. Univariate ToS distributions have a transformation of random variable relationship with Azzalini-type skew-symmetric distributions; the bivariate ToS distribution here arises from marginal variable transformation of a particular form of bivariate skew-symmetric distribution. Examples are given, as are basic properties—unimodality, a covariance property, random variate generation—and connections with a bivariate inverse Gaussian distribution are pointed out.

18.
ABSTRACT

Various approaches can be used to construct a model from a null distribution and a test statistic. I prove that one such approach, originating with D. R. Cox, has the property that the p-value is never greater than the Generalized Likelihood Ratio (GLR). When combined with the general result that the GLR is never greater than any Bayes factor, we conclude that, under Cox’s model, the p-value is never greater than any Bayes factor. I also provide a generalization, illustrations for the canonical Normal model, and an alternative approach based on sufficiency. This result is relevant for the ongoing discussion about the evidential value of small p-values, and the movement among statisticians to “redefine statistical significance.”

19.
This article considers the nonparametric estimation of absolutely continuous distribution functions of independent lifetimes of non-identical components in k-out-of-n systems, 2 ≤ k ≤ n, from the observed “autopsy” data. In economics, ascending “button” or “clock” auctions with n heterogeneous bidders with independent private values present 2-out-of-n systems. Classical competing risks models are examples of n-out-of-n systems. Under weak conditions on the underlying distributions, the estimation problem is shown to be well-posed and the suggested extremum sieve estimator is proven to be consistent. The article considers sieve spaces of Bernstein polynomials, which make it easy to impose constraints on the monotonicity of the estimated distribution functions.
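One reason Bernstein polynomial sieves are convenient here is that monotonicity is easy to impose: a Bernstein polynomial on [0, 1] with nondecreasing coefficients is itself nondecreasing. The sketch below evaluates such a constrained distribution-function approximation; it only illustrates the parametrization, not the article's extremum estimation criterion, and the coefficient values are made up.

```python
import math

def bernstein_cdf(x, theta):
    """Evaluate F(x) = sum_j theta_j * C(m, j) * x**j * (1 - x)**(m - j) on [0, 1].

    If the coefficients satisfy 0 <= theta_0 <= ... <= theta_m, the function is
    nondecreasing in x (its derivative is a nonnegative combination of Bernstein
    basis terms), so the monotonicity constraint reduces to ordering theta.
    """
    m = len(theta) - 1
    return sum(t * math.comb(m, j) * x ** j * (1.0 - x) ** (m - j)
               for j, t in enumerate(theta))

# Example: five ordered coefficients give a monotone CDF-like approximation.
theta = [0.0, 0.1, 0.4, 0.8, 1.0]
print([round(bernstein_cdf(u, theta), 3) for u in (0.0, 0.25, 0.5, 0.75, 1.0)])
```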

20.