Similar Articles
A total of 20 similar articles were found (search time: 31 ms).
1.
Selection from k independent populations of the t (< k) populations with the smallest scale parameters has been considered under the Indifference Zone approach by Bechhofer & Sobel (1954). The same problem has been considered under the Subset Selection approach by Gupta & Sobel (1962a) for the normal variances case and by Carroll, Gupta & Huang (1975) for the more general case of stochastically increasing distributions. This paper uses the Subset Selection approach to place confidence bounds on the probability of selecting all “good” populations, or only “good” populations, for the case of scale parameters, where a “good” population is defined to have one of the t smallest scale parameters. This is an extension of the location parameter results obtained by Bofinger & Mengersen (1986). Special results are obtained for the case of selecting normal populations based on variances and the necessary tables are presented.

2.
In May 2007, Scotland went to the polls to elect both local constituency and regional members to the Scottish Parliament. Astonishingly, 100,000 votes were rejected as "spoiled". Voters had misunderstood the new form of ballot paper—3% had marked a single cross despite having two votes. Parties contesting the election described it as "a debacle", "a shambles", "totally unsatisfactory". One American called it more flawed than Florida's notorious "hanging chad" ballot that gave the 2000 Presidential election to George Bush. What went wrong? Sheila Bird looks into the Scottish ballot paper.

3.
Children represent a large underserved population of “therapeutic orphans,” as an estimated 80% of children are treated off‐label. However, pediatric drug development often faces substantial challenges, including economic, logistical, technical, and ethical barriers, among others. Among many efforts trying to remove these barriers, increased recent attention has been paid to extrapolation; that is, the leveraging of available data from adults or older age groups to draw conclusions for the pediatric population. The Bayesian statistical paradigm is natural in this setting, as it permits the combining (or “borrowing”) of information across disparate sources, such as the adult and pediatric data. In this paper, authored by the pediatric subteam of the Drug Information Association Bayesian Scientific Working Group and Adaptive Design Working Group, we develop, illustrate, and provide suggestions on Bayesian statistical methods that could be used to design improved pediatric development programs that use all available information in the most efficient manner. A variety of relevant Bayesian approaches are described, several of which are illustrated through 2 case studies: extrapolating adult efficacy data to expand the labeling for Remicade to include pediatric ulcerative colitis and extrapolating adult exposure‐response information for antiepileptic drugs to pediatrics.
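As a minimal sketch of the borrowing idea described in this abstract (not the working group's actual method), the following Python snippet uses a normal power prior with known sampling variance: a discount factor a0 controls how much of the adult information is carried into the pediatric analysis. All data and numbers are illustrative.

```python
import numpy as np

# Hypothetical illustration: borrowing adult data via a normal power prior.
# a0 = 0 means no borrowing; a0 = 1 means full pooling of adult and pediatric data.

def power_prior_posterior(y_adult, y_ped, sigma, a0):
    """Posterior mean and sd of the pediatric effect under a normal power prior
    built from the adult data (vague initial prior, known sigma)."""
    nA, nP = len(y_adult), len(y_ped)
    prec_prior = a0 * nA / sigma**2          # precision contributed by adult data
    prec_lik = nP / sigma**2                 # precision from pediatric data
    post_prec = prec_prior + prec_lik
    post_mean = (prec_prior * np.mean(y_adult) + prec_lik * np.mean(y_ped)) / post_prec
    return post_mean, np.sqrt(1.0 / post_prec)

rng = np.random.default_rng(1)
adult = rng.normal(0.5, 1.0, size=200)       # large adult trial (simulated)
peds = rng.normal(0.4, 1.0, size=30)         # small pediatric trial (simulated)
for a0 in (0.0, 0.3, 1.0):
    m, s = power_prior_posterior(adult, peds, sigma=1.0, a0=a0)
    print(f"a0={a0:.1f}: posterior mean {m:.3f}, sd {s:.3f}")
```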

4.
A class of “optimal” U-statistics type nonparametric test statistics is proposed for the one-sample location problem by considering a kernel depending on a constant a and all possible (distinct) subsamples of size two from a sample of n independent and identically distributed observations. The “optimal” choice of a is determined by the underlying distribution. The proposed class includes the Sign and the modified Wilcoxon signed-rank statistics as special cases. It is shown that any “optimal” member of the class performs better in terms of Pitman efficiency relative to the Sign and Wilcoxon signed-rank statistics. The effect of deviation of the chosen a from the “optimal” a on Pitman efficiency is also examined. A Hodges–Lehmann type point estimator of the location parameter corresponding to the proposed “optimal” test statistics is also defined and studied in this paper.
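The paper's kernel and its constant a are not reproduced here; as a minimal illustration of the size-two-subsample U-statistic construction, the sketch below computes the two special cases named in the abstract: the Sign statistic and a modified Wilcoxon signed-rank statistic that counts positive pair sums over all distinct pairs.

```python
import numpy as np
from itertools import combinations

# Illustration only: pair-based (size-2 subsample) U-statistics for the
# one-sample location problem.

def sign_statistic(x):
    """Proportion of positive observations (Sign test statistic, up to scaling)."""
    return np.mean(x > 0)

def modified_wilcoxon(x):
    """Proportion of distinct pairs (i < j) with x_i + x_j > 0, i.e. a
    U-statistic with kernel I(x_i + x_j > 0) over all size-2 subsamples."""
    pairs = combinations(range(len(x)), 2)
    return np.mean([x[i] + x[j] > 0 for i, j in pairs])

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=40)   # simulated data shifted away from 0
print("Sign statistic:", sign_statistic(x))
print("Modified Wilcoxon (pairwise) statistic:", modified_wilcoxon(x))
```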

5.

In this paper two innovative procedures for the decomposition of the Pietra index are proposed. The first one allows the decomposition by sources, while the second one provides the decomposition by subpopulations. As a special case of the latter procedure, the “classical” decomposition in two components (within and between) can be easily obtained. A remarkable feature of both the proposed procedures is that they permit the assessment of the contribution to the Pietra index at the smallest possible level: each source for the first one and each subpopulation for the second one. To highlight the usefulness of these procedures, two applications are provided regarding Italian professional football (soccer) teams.

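The paper's source and subpopulation decompositions are not reproduced here; as a small reference point, the sketch below simply computes the overall Pietra index, P = sum(|x_i - mean|) / (2 * n * mean), for two hypothetical groups and for the pooled data. The data are illustrative placeholders.

```python
import numpy as np

# Minimal sketch: the Pietra index for hypothetical subpopulations and overall.

def pietra_index(x):
    x = np.asarray(x, dtype=float)
    return np.abs(x - x.mean()).sum() / (2 * len(x) * x.mean())

# Illustrative payroll-like data for two hypothetical teams (arbitrary units).
team_a = np.array([1.0, 1.5, 2.0, 8.0, 12.0])
team_b = np.array([0.8, 1.0, 1.2, 1.5, 2.5])
overall = np.concatenate([team_a, team_b])

print("Pietra, team A:", round(pietra_index(team_a), 4))
print("Pietra, team B:", round(pietra_index(team_b), 4))
print("Pietra, overall:", round(pietra_index(overall), 4))
```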

6.
Abstract

Experiments in various countries with “last week” and “last month” reference periods for reporting of households’ food consumption have generally found that “week”-based estimates are higher. In India the National Sample Survey (NSS) has consistently found that “week”-based estimates are higher than month-based estimates for a majority of food item groups. But why are week-based estimates higher than month-based estimates? It has long been believed that the reason must be recall lapse, inherent in a long reporting period such as a month. But is household consumption of a habitually consumed item “recalled” in the same way as that of an item of infrequent consumption? And why doesn’t memory lapse cause over-reporting (over-assessment) as often as under-reporting? In this paper, we provide an alternative hypothesis, involving a “quantity floor effect” in reporting behavior, under which “week” may cause over-reporting for many items. We design a test to detect the effect postulated by this hypothesis and carry it out on NSS 68th round HCES data. The test results strongly suggest that our hypothesis provides a better explanation of the difference between week-based and month-based estimates than the recall lapse theory.

7.
The Australian Senate is elected using a form of proportional representation in each of the six States and two Territories. Those candidates who receive more than a "quota" of votes are elected. In those cases where not enough candidates receive a quota of votes, the surplus votes of each elected candidate (that is, votes over and above the quota) are transferred to the remaining candidates, according to preferences expressed by the voters. These surplus votes are chosen at random and although constraints are placed in order to reduce the sampling error, there is nevertheless a small error introduced in most elections due to sampling. The method of calculating this error has several unusual features and is therefore given in detail for one case, that of the 1974 Senate election for Victoria. Results are tabulated for all States for the elections of 1970, 1974 and 1975. An estimate of how often the wrong candidate will be elected by chance is given. Finally, a simple method for avoiding the sampling problem is suggested.
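As a rough illustration of the sampling error referred to above (not the paper's detailed calculation, and not the 1974 Victorian figures), the sketch below treats the surplus as a simple random sample of the elected candidate's ballots, so the number of surplus ballots transferred to a given candidate is hypergeometric.

```python
import numpy as np
from scipy.stats import hypergeom

# Hypothetical numbers: an elected candidate holds V ballots, B of which show
# candidate X as the next preference; a surplus of s ballots is drawn at random.

V, B, s = 300_000, 120_000, 40_000
rv = hypergeom(M=V, n=B, N=s)        # scipy parameterisation: population, successes, draws

expected = s * B / V
print(f"Expected transfers to X: {expected:.0f}")
print(f"Sampling standard error: {rv.std():.1f} votes")

# Monte Carlo check of the same standard error.
draws = rv.rvs(size=10_000, random_state=0)
print(f"Simulated sd: {draws.std():.1f} votes")
```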

8.
Abstract

For non-negative integer-valued random variables, the concept of “damaged” observations was introduced, for the first time, by Rao and Rubin [Rao, C. R., Rubin, H. (1964). On a characterization of the Poisson distribution. Sankhya 26:295–298] in 1964 in a paper concerning the characterization of the Poisson distribution. In 1965, Rao [Rao, C. R. (1965). On discrete distributions arising out of methods of ascertainment. Sankhya Ser. A. 27:311–324] discussed some results related to inference for the parameters of a Poisson model when partial destruction of observations has occurred. A random variable is said to be damaged if it is unobservable, due to a damage mechanism which randomly reduces its magnitude. In subsequent years, considerable attention has been given to characterizations of distributions of such random variables that satisfy the “Rao–Rubin” condition. This article presents some inference aspects of a damaged Poisson distribution, under the reasonable assumption that, when an observation on the random variable is made, it is also possible to determine whether or not some damage has occurred. In other words, we do not know how many items are damaged, but we can identify the existence of damage. In particular, the situation is illustrated in which it is possible to identify the occurrence of some damage although it is not possible to determine the number of damaged items. Maximum likelihood estimators of the underlying parameters and their asymptotic covariance matrix are obtained. Convergence of the parameter estimates to their asymptotic values is studied through Monte Carlo simulations.
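The sketch below is a minimal simulation of this estimation setting, assuming a binomial "survival" damage mechanism (the true count N is Poisson, each item is independently damaged with probability p, and only the undamaged count and a damage indicator are observed); it is an illustration under that assumption, not the paper's exact derivation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_loglik(params, y, d):
    """Negative log-likelihood of (Y, D) under Poisson(lam) counts with
    independent per-item damage probability p; D = 1 iff some damage occurred."""
    lam, p = params
    if lam <= 0 or not (0 < p < 1):
        return np.inf
    mu = lam * (1 - p)                                  # mean of the undamaged count
    base = y * np.log(mu) - gammaln(y + 1)              # log(mu^y / y!)
    ll_no_damage = base - lam                           # log P(Y=y, D=0)
    ll_damage = base + np.log(np.exp(-mu) - np.exp(-lam))  # log P(Y=y, D=1)
    return -np.sum(np.where(d == 0, ll_no_damage, ll_damage))

rng = np.random.default_rng(42)
lam_true, p_true, n = 6.0, 0.25, 2000
N = rng.poisson(lam_true, size=n)
Y = rng.binomial(N, 1 - p_true)                         # undamaged items (observed)
D = (Y < N).astype(int)                                 # damage indicator (observed)

fit = minimize(neg_loglik, x0=[3.0, 0.5], args=(Y, D), method="Nelder-Mead")
print("MLE (lambda, p):", np.round(fit.x, 3))
```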

9.
In profile monitoring, some methods have been developed to detect unspecified changes in the profiles. However, detecting changes away from the “normal” profile toward one of several prespecified “bad” profiles is one possible and challenging purpose. In this article, control charts with supplementary runs rules are developed to detect prespecified changes in linear profiles. A control chart is first developed based on the Student's t-statistic, and two runs rules are then supplemented to this chart, respectively. Simulation studies show that the proposed control schemes are effective and stable. Moreover, the control schemes are better than existing alternative charts when the number of observations per sample profile is large. Finally, two illustrative examples indicate that our proposed schemes are effective and easy to implement.
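The following is a generic sketch of the idea of supplementing a t-based chart with a runs rule, not the paper's exact chart design: each sample profile's slope is monitored with a Student's t statistic, and a "point beyond the control limit" rule is combined with an "r consecutive warnings on the same side" rule. The limits are illustrative placeholders, not calibrated ARL values.

```python
import numpy as np

BETA0, BETA1 = 3.0, 2.0        # assumed in-control intercept and slope
CONTROL_LIMIT = 3.5            # |t| beyond this signals immediately
WARNING_LIMIT = 1.5            # runs rule watches excursions beyond this
RUN_LENGTH = 3                 # r consecutive same-side warnings signal

def slope_t_statistic(x, y):
    """t statistic for the fitted slope minus the in-control slope BETA1."""
    n = len(x)
    xc = x - x.mean()
    b1 = np.sum(xc * y) / np.sum(xc**2)
    b0 = y.mean() - b1 * x.mean()
    resid = y - (b0 + b1 * x)
    s2 = np.sum(resid**2) / (n - 2)
    return (b1 - BETA1) / np.sqrt(s2 / np.sum(xc**2))

def monitor(profiles, x):
    run = 0
    for j, y in enumerate(profiles, start=1):
        t = slope_t_statistic(x, y)
        if abs(t) > CONTROL_LIMIT:
            return j, "control-limit signal"
        if t > WARNING_LIMIT:
            run = run + 1 if run >= 0 else 1      # extend a positive run
        elif t < -WARNING_LIMIT:
            run = run - 1 if run <= 0 else -1     # extend a negative run
        else:
            run = 0
        if abs(run) >= RUN_LENGTH:
            return j, "runs-rule signal"
    return None, "no signal"

rng = np.random.default_rng(7)
x = np.linspace(0, 1, 10)
profiles = [BETA0 + (BETA1 + (0.8 if j >= 20 else 0.0)) * x + rng.normal(0, 0.2, x.size)
            for j in range(40)]                   # slope shifts at sample 20
print(monitor(profiles, x))
```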

10.
The symposium was held on July 26 and 27, 1983 at the scenic top floor Faculty Lounge of the Leon Lowenstein Center, a part of the Lincoln Center campus of Fordham University in New York. It was attended by about forty people from all over, as represented by the affiliations of the authors. This issue of Communications in Statisticsis devoted to the Fordham symposium. This introduction is limited to an overview with highlights, since abstracts accompany the papers.

The “call for papers” issued in November 1982 indicated that ridge methods and multicollinearity problems would be the main theme, and that both methodological and applied papers would be included.

11.
Abstract

In categorical repeated audit controls, fallible auditors classify sample elements in order to estimate the population fraction of elements in certain categories. To take possible misclassifications into account, subsequent checks are performed with a decreasing number of observations. In this paper a model is presented for a general repeated audit control system, where k subsequent auditors classify elements into r categories. Two different subsampling procedures are discussed, named “stratified” and “random” sampling. Although these two sampling methods lead to different probability distributions, it is shown that the likelihood inferences are identical. The MLEs are derived and the situations with undefined MLEs are examined in detail; it is shown that an unbiased MLE can be obtained by stratified sampling. Three different methods for constructing confidence upper limits are discussed; the Bayesian upper limit seems to be the most satisfactory. Our theoretical results are applied to two cases with r = 2 and k = 2 or 3, respectively.

12.
The generalized negative exponential disparity, discussed in Bhandari et al. (Robust inference in parametric models using the family of generalized negative exponential disparities, 2006, ANZJS, 48, 95–114), represents an important class of disparity measures that generates efficient estimators and tests with strong robustness properties. In their paper, however, Bhandari et al. failed to provide a sharp lower bound for the power breakdown point of the corresponding tests. This was acknowledged by the authors, who indicated the possible existence of a sharper bound, but noted that they did not “have a proof at this point”. In this paper we provide an improved bound for this power breakdown point, and show with an example how this can enhance the existing results.

13.
Abstract

“They Might Be Giants” gets a facelift, tackles a new medium and welcomes a co-editor. Michael Brown and Jessica Teeter bring you this first installment of “From Picas to Pixels: Life in the Trenches of Print and Web Publishing.” This installment features an interview with the publishers of a Web magazine called FILE Magazine, A Collection of Unexpected Photography.

14.
The gist of the quickest change-point detection problem is to detect the presence of a change in the statistical behavior of a series of sequentially made observations, and do so in an optimal detection-speed-versus-“false-positive”-risk manner. When optimality is understood either in the generalized Bayesian sense or as defined in Shiryaev's multi-cyclic setup, the so-called Shiryaev–Roberts (SR) detection procedure is known to be the “best one can do”, provided, however, that the observations’ pre- and post-change distributions are both fully specified. We consider a more realistic setup, viz. one where the post-change distribution is assumed known only up to a parameter, so that the latter may be misspecified. The question of interest is the sensitivity (or robustness) of the otherwise “best” SR procedure with respect to a possible misspecification of the post-change distribution parameter. To answer this question, we provide a case study where, in a specific Gaussian scenario, we allow the SR procedure to be “out of tune” in the way of the post-change distribution parameter, and numerically assess the effect of the “mistuning” on Shiryaev's (multi-cyclic) Stationary Average Detection Delay delivered by the SR procedure. The comprehensive quantitative robustness characterization of the SR procedure obtained in the study can be used to develop the respective theory as well as to provide a rationale for practical design of the SR procedure. The overall qualitative conclusion of the study is an expected one: the SR procedure is less (more) robust for less (more) contrast changes and for lower (higher) levels of the false alarm risk.
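For readers unfamiliar with the SR recursion, the sketch below runs it in the Gaussian mean-shift setting described above (pre-change N(0,1), post-change N(theta,1)) and shows how "mistuning" the post-change parameter affects the detection delay on one simulated path. The threshold and shift values are illustrative, not calibrated to a specific false-alarm level.

```python
import numpy as np

def sr_stopping_time(x, theta_tuned, threshold):
    """First n with R_n >= threshold, where R_n = (1 + R_{n-1}) * LR_n."""
    r = 0.0
    for n, xn in enumerate(x, start=1):
        lr = np.exp(theta_tuned * xn - theta_tuned**2 / 2.0)   # N(theta,1) vs N(0,1) likelihood ratio
        r = (1.0 + r) * lr
        if r >= threshold:
            return n
    return None

rng = np.random.default_rng(3)
change_point, theta_true, horizon = 200, 0.5, 2000
x = np.concatenate([rng.normal(0, 1, change_point),
                    rng.normal(theta_true, 1, horizon - change_point)])

for theta_tuned in (0.25, 0.5, 1.0):                           # under-, exactly, over-tuned
    stop = sr_stopping_time(x, theta_tuned, threshold=5000.0)
    delay = None if stop is None else max(0, stop - change_point)
    print(f"theta_tuned={theta_tuned}: stop at n={stop}, detection delay={delay}")
```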

15.
We extend a diagnostic plot for the frailty distribution in proportional hazards models to the case of shared frailty. The plot is based on a closure property of exponential family failure distributions with canonical statistics z and g(z), namely that the frailty distribution among survivors at time t has the same form, with the same values of the parameters associated with g(z). We extend this property to shared frailty, considering various definitions of a “surviving” cluster at time t. We illustrate the effectiveness of the method in the case where the “death” of the cluster is defined by the first death among its members.

16.
The change from the z of “Student's” 1908 paper to the t of present day statistical theory and practice is traced and documented. It is shown that the change was brought about by the extension of “Student's” approach, by R.A. Fisher, to a broader class of problems, in response to a direct appeal from “Student” for a solution to one of these problems.

17.
Box's paper helicopter has been used to teach experimental design for more than a decade. It is simple, inexpensive, and provides real data for an involved, multifactor experiment. Unfortunately it can also further an all-too-common practice that Professor Box himself has repeatedly cautioned against, namely ignoring the fundamental science while rushing to solve problems that may not be sufficiently understood. Often this slighting of the science so as to get on with the statistics is justified by referring to Box's oft-quoted maxim that “All models are wrong, however some are useful.” Nevertheless, what is equally true, to paraphrase both Professor Box and George Orwell, is that “All models are wrong, but some are more wrong than others.” To experiment effectively it is necessary to understand the relevant science so as to distinguish between what is usefully wrong, and what is dangerously wrong.

This article presents an improved analysis of Box's helicopter problem relying on statistical and engineering knowledge and shows that this leads to an enhanced paper helicopter, requiring fewer experimental trials and achieving superior performance. In fact, of the 20 experimental trials run for validation—10 each of the proposed aerodynamic design and the conventional full factorial optimum—the longest 10 flight times all belong to the aerodynamic optimum, while the shortest 10 all belong to the conventional full factorial optimum. I further discuss how ancillary engineering knowledge can be incorporated into thinking about—and teaching—experimental design.

18.
In this paper we study the procedures of Dudewicz and Dalal (1975), and the modifications suggested by Rinott (1978), for selecting the largest mean from k normal populations with unknown variances. We look at the case k = 2 in detail, because there is an optimal allocation scheme here. We do not really allocate the total number of samples into two groups, but we estimate this optimal sample size as well, so as to guarantee that the probability of correct selection (written as P(CS)) is at least P*, 1/2 < P* < 1. We prove that the procedure of Rinott is “asymptotically inefficient” (to be defined below) in the sense of Chow and Robbins (1965) for any k ≥ 2. Next, we propose two-stage procedures having all the properties of Rinott's procedure, together with the property of “asymptotic efficiency”, which is highly desirable.
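To make the two-stage mechanics concrete, here is a minimal sketch of a Rinott-type procedure for selecting the largest mean with unknown, unequal variances. The constant h below is a placeholder (in practice it comes from Rinott's tables for the chosen k, first-stage size n0, and P*), and the second-stage sample-size rule is written in its commonly stated form; this illustrates the mechanics only, not the paper's asymptotic-efficiency results.

```python
import numpy as np

def rinott_two_stage(sample_fns, n0, delta, h, rng):
    """sample_fns: one callable per population, each returning one observation per call."""
    means, totals = [], []
    for draw in sample_fns:
        stage1 = np.array([draw(rng) for _ in range(n0)])
        s2 = stage1.var(ddof=1)
        n_total = max(n0 + 1, int(np.ceil((h * np.sqrt(s2) / delta) ** 2)))  # stage-2 size rule
        stage2 = np.array([draw(rng) for _ in range(n_total - n0)])
        means.append(np.concatenate([stage1, stage2]).mean())
        totals.append(n_total)
    return int(np.argmax(means)), totals

rng = np.random.default_rng(11)
populations = [lambda g: g.normal(0.0, 2.0),      # population 0
               lambda g: g.normal(0.5, 1.0)]      # population 1 has the largest mean
best, sizes = rinott_two_stage(populations, n0=20, delta=0.5, h=2.5, rng=rng)
print("Selected population:", best, "with sample sizes", sizes)
```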

19.
“Precision” may be thought of either as the closeness with which a reported value approximates a “true” value, or as the number of digits carried in computations, depending on context. With suitable formal definitions, it is shown that the precision of a reported value is the difference between the precision with which computations are performed and the “loss” in precision due to the computations. Loss in precision is a function of the quantity computed and of the algorithm used to compute it; in the case of the usual “computing formula” for variances and covariances, it is shown that the loss of precision is expected to be log(k_i k_j), where k_i, the reciprocal of the coefficient of variation, is the ratio of the mean to the standard deviation of the i-th variable. When the precision of a reported value, the precision of computations, and the loss of precision due to the computations are expressed to the same base, all three quantities have the units of significant digits in the corresponding number system. Using this metric for “precision,” the expected precision of a computed (co)variance may be estimated in advance of the computation; for data reported in the paper, the estimates agree closely with observed precision. Implications are drawn for the programming of general-purpose statistical programs, as well as for users of existing programs, in order to minimize the loss of precision resulting from characteristics of the data. A nomograph is provided to facilitate the estimation of precision in binary, decimal, and hexadecimal digits.
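A quick numerical illustration of this loss (not the paper's own data): in single precision, the one-pass “computing formula” sum(x^2) - n*mean^2 loses roughly 2*log10(k) decimal digits for a variance when k = mean/sd is large, because of cancellation. The example values below are arbitrary.

```python
import numpy as np

def one_pass_var(x):
    """Textbook computing formula in float32, prone to cancellation."""
    n = np.float32(len(x))
    s, s2 = np.float32(0), np.float32(0)
    for v in x:
        s += v
        s2 += v * v
    return (s2 - s * s / n) / (n - np.float32(1))

rng = np.random.default_rng(0)
sd = 1.0
for mean in (1e2, 1e4, 1e6):                           # k = mean/sd grows
    x = rng.normal(mean, sd, size=10_000).astype(np.float32)
    reference = np.var(x.astype(np.float64), ddof=1)   # two-pass, double precision
    naive = one_pass_var(x)
    rel_err = abs(naive - reference) / reference
    print(f"k={mean/sd:.0e}: one-pass var={naive:.4g}, "
          f"relative error={rel_err:.2e} (~{2*np.log10(mean/sd):.0f} digits lost)")
```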

20.
We re-examine the criteria of “hyper-admissibility” and “necessary bestness”, for the choice of estimator, from the point of view of their relevance to the design of actual surveys. Both these criteria give rise to a unique choice of estimator (viz. the Horvitz–Thompson estimator ŶHT) whatever be the character under investigation or sample design. However, we show here that the “principal hyper-surfaces” (or “domains”) of dimension one (which are practically uninteresting) play the key role in arriving at the unique choice. A variance estimator v1(ŶHT) (due to Horvitz–Thompson), which takes negative values “often”, is shown to be uniquely “hyper-admissible” in a wide class of unbiased estimators of the variance of ŶHT. Extensive empirical evidence on the superiority of the Sen–Yates–Grundy variance estimator v2(ŶHT) over v1(ŶHT) is presented.
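For reference, the sketch below computes the two variance estimators contrasted in this abstract for a fixed-size-two design with known first- and second-order inclusion probabilities: the Horvitz–Thompson form v1 and the Sen–Yates–Grundy form v2. The design and y-values are illustrative; as the abstract notes, v1 can go negative for some samples, whereas v2 is nonnegative whenever pi_i*pi_j >= pi_ij.

```python
import numpy as np
from itertools import permutations

def ht_total(y, pi, sample):
    return sum(y[i] / pi[i] for i in sample)

def v1_ht(y, pi, pij, sample):
    """Horvitz-Thompson variance estimator for a fixed-size design."""
    single = sum((1 - pi[i]) / pi[i]**2 * y[i]**2 for i in sample)
    cross = sum((pij[i, j] - pi[i] * pi[j]) / (pi[i] * pi[j] * pij[i, j]) * y[i] * y[j]
                for i, j in permutations(sample, 2))
    return single + cross

def v2_syg(y, pi, pij, sample):
    """Sen-Yates-Grundy variance estimator for a fixed-size design."""
    return 0.5 * sum((pi[i] * pi[j] - pij[i, j]) / pij[i, j]
                     * (y[i] / pi[i] - y[j] / pi[j])**2
                     for i, j in permutations(sample, 2))

# Design on N = 3 units: samples {0,1}, {0,2}, {1,2} drawn with probabilities
# 0.5, 0.3, 0.2, giving the inclusion probabilities below (illustrative values).
y = np.array([10.0, 12.0, 30.0])
pi = np.array([0.8, 0.7, 0.5])
pij = np.array([[0.0, 0.5, 0.3],
                [0.5, 0.0, 0.2],
                [0.3, 0.2, 0.0]])

sample = (0, 2)
print("HT estimate of the total:", ht_total(y, pi, sample))
print("v1 (Horvitz-Thompson):", v1_ht(y, pi, pij, sample))
print("v2 (Sen-Yates-Grundy):", v2_syg(y, pi, pij, sample))
```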
