Similar Documents
20 similar documents found (search time: 0 ms)
2.
William Stanley Jevons published his statistical analysis of the climate of Australia and New Zealand in 1858. Florence Nightingale advised Sir George Grey to collect statistics on Māori health. Frederick William Frankland published a significant study of mortality in New Zealand in 1882, and in 1890 George Hogben pioneered the application of statistics to seismology. These people all contributed to statistical knowledge in New Zealand, but were not New Zealanders. Ernest Rutherford, Leslie John Comrie and Alexander Craig Aitken were born and educated in New Zealand, but worked mainly in the UK. In 1911 Rutherford made very effective use of statistics in discovering the nuclear structure of atoms; in 1937 Comrie pioneered the use of punched-card machinery for large-scale statistical analysis; and Aitken did very important work in mathematical statistics.

3.
The aim of this paper is to propose a pedagogical explanation of the Le Cam theorem and to illustrate its use, through a practical application, for temporal cluster detection. The theorem concerns the division of an interval by randomly chosen points: it characterizes the asymptotic behavior of a certain class of sums of functions applied to the lengths of the successive intervals between points. The result is not very intuitive, and understanding it requires some care. After stating the theorem, its different aspects are explained and detailed in as pedagogical a way as possible. Theoretical applications are proposed through the proofs of two propositions. A very concrete application of the theorem to temporal cluster detection is then presented, tested in a power study, and compared with other global cluster detection tests. Finally, the approach is applied to the well-known Knox temporal data set.
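As a purely illustrative aid (not taken from the paper), the following Python sketch generates the objects the theorem is about: it draws n − 1 uniform points on [0, 1], forms the n successive spacings, and evaluates a sum of a function of the normalized spacings; the choice g(x) = x² is arbitrary and only serves the example.

import numpy as np

rng = np.random.default_rng(0)

def spacing_statistic(n, g):
    # n - 1 uniform points on [0, 1] induce n successive spacings summing to 1
    points = np.sort(rng.uniform(size=n - 1))
    spacings = np.diff(np.concatenate(([0.0], points, [1.0])))
    # sum of g applied to the normalized spacings -- the type of statistic
    # whose asymptotic behavior the Le Cam theorem describes
    return np.sum(g(n * spacings))

# Repeat the simulation to get a feel for the limiting behavior
values = [spacing_statistic(n=500, g=lambda x: x ** 2) for _ in range(200)]
print(np.mean(values), np.std(values))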

5.
In recent years an increase in nonresponse rates has been observed in major government and social surveys. Decreasing response rates and changes in nonresponse bias are thought to affect, potentially severely, the quality of survey data. This paper discusses the problem of unit and item nonresponse in government surveys from an applied perspective and highlights some newer developments in this field, with a focus on official statistics in the United Kingdom (UK). The main focus is on post-survey adjustment methods, in particular adjustment for item nonresponse. The use of various imputation and weighting methods is illustrated with an example, which also shows the close relationship between missing data and measurement error. JEL classification: C42, C81.
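As a hedged illustration of the kind of post-survey adjustments referred to above (the data, variable names and methods below are invented for the example and are not taken from any UK survey), one can combine a weighting-class adjustment for unit nonresponse with a simple regression imputation for item nonresponse:

import numpy as np
import pandas as pd

# Toy sample: 'responded' marks unit response; 'income' suffers item nonresponse
df = pd.DataFrame({
    "region":    ["N", "N", "S", "S", "S", "N"],
    "age":       [25, 40, 31, 58, 45, 37],
    "responded": [1, 0, 1, 1, 0, 1],
    "income":    [32.0, np.nan, 28.0, np.nan, np.nan, 41.0],
})

# Weighting-class adjustment: respondents are weighted by the inverse of the
# response rate within their class (here, region); nonrespondents get weight 0
response_rate = df.groupby("region")["responded"].transform("mean")
df["weight"] = df["responded"] / response_rate

# Simple deterministic regression imputation of income from age,
# fitted on the cases with observed income
obs = df.dropna(subset=["income"])
slope, intercept = np.polyfit(obs["age"], obs["income"], 1)
df["income_imputed"] = df["income"].fillna(intercept + slope * df["age"])
print(df)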

6.
In biostatistical applications interest often focuses on the estimation of the distribution of the time between two consecutive events. If the initial event time is observed and the subsequent event time is only known to be larger or smaller than an observed point in time, then the data are described by the well-understood singly censored current status model, also known as interval censored data, case I. Jewell et al. (1994) extended this current status model by allowing the initial time to be unobserved, with its distribution over an observed interval [A, B] known; such data are referred to as doubly censored current status data. This model has applications in AIDS partner studies. If the initial time is known to be uniformly distributed, the model reduces to a submodel of the current status model with the same asymptotic information bounds as in the current status model, but the distribution of interest is essentially the derivative of the distribution of interest in the current status model. As a consequence the non-parametric maximum likelihood estimator is inconsistent. Moreover, this submodel contains only smooth heavy-tailed distributions for which no moments exist. In this paper, we discuss in detail the connection between the singly censored and the doubly censored current status model (for the uniform initial time) and explain the difficulties in estimation which arise in the doubly censored case. We propose a regularized MLE corresponding to the current status model. We prove rate results and efficiency of smooth functionals of the regularized MLE, and present a generally applicable efficient method for estimation of regression parameters which does not rely on the existence of moments. We also discuss extending these ideas to a non-uniform distribution for the initial time.
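For background on the singly censored current status model referred to above: its NPMLE of the distribution function at the monitoring times is the isotonic regression of the censoring indicators ordered by monitoring time, computable by pool-adjacent-violators. The sketch below shows only this standard estimator, not the regularized MLE proposed in the paper.

import numpy as np

def current_status_npmle(c, delta):
    # c[i]: monitoring time, delta[i] = 1 if the event occurred by c[i]
    # (current status data); returns the NPMLE of F at the sorted times,
    # i.e. the isotonic regression of delta ordered by c (PAVA)
    order = np.argsort(c)
    y = np.asarray(delta, dtype=float)[order]
    blocks = []  # list of [block mean, block weight]
    for yi in y:
        blocks.append([yi, 1.0])
        # merge adjacent blocks while monotonicity is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            y2, w2 = blocks.pop()
            y1, w1 = blocks.pop()
            blocks.append([(w1 * y1 + w2 * y2) / (w1 + w2), w1 + w2])
    fhat = np.concatenate([np.full(int(w), m) for m, w in blocks])
    return np.sort(c), fhat

c = np.array([0.3, 1.1, 0.7, 2.0, 1.5])
delta = np.array([0, 1, 0, 1, 1])
print(current_status_npmle(c, delta))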

7.
This paper deals with the construction of optimum partitions of the data for a clustering criterion which is based on a convex function of the class centroids, as a generalization of the classical SSQ clustering criterion for n data points. We formulate a dual optimality problem involving two sets of variables and derive a maximum-support-plane (MSP) algorithm for constructing a (sub-)optimum partition as a generalized k-means algorithm. We present various modifications of the basic criterion and describe the corresponding MSP algorithm. It is shown that the method can also be used for solving optimality problems in classical statistics (maximizing Csiszár's φ-divergence) and for the simultaneous classification of the rows and columns of a contingency table.
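For orientation only: the classical SSQ criterion that the paper generalizes is (locally) minimized by the familiar k-means (Lloyd) iteration, alternating nearest-centroid assignment with centroid updates. The sketch below shows that special case; it is not the dual MSP algorithm of the paper.

import numpy as np

def kmeans_ssq(X, k, n_iter=100, seed=0):
    # Plain Lloyd iteration for the within-class sum-of-squares (SSQ) criterion
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest class centroid
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # recompute each centroid as the mean of its class (keep the old one if empty)
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

labels, centroids = kmeans_ssq(np.random.default_rng(1).normal(size=(100, 2)), k=3)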

11.
The order of growth of the Fisher information contained in a finite number k of additive statistics or sample quantiles, constructed from a sample of size n, is investigated as n → ∞. It is shown that the Fisher information in additive statistics grows asymptotically linearly in n if moments of order 2 + δ of the additive statistics exist for some δ > 0. If this condition does not hold, the order of growth is non-linear and the information may even decrease. The problem of asymptotic sufficiency of sample quantiles is also investigated, and some linear analogues of the maximum likelihood equations are constructed.
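As a worked special case of the linear-growth statement (a standard textbook calculation, not a result of the paper): for X_1, ..., X_n i.i.d. N(θ, σ²), the additive statistic T_n = Σ_i X_i has T_n ~ N(nθ, nσ²), and its Fisher information about θ is

I_{T_n}(\theta)
  = \mathbb{E}_\theta\!\left[\left(\frac{\partial}{\partial\theta}\,\log p_{T_n}(T_n;\theta)\right)^{\!2}\right]
  = \mathbb{E}_\theta\!\left[\left(\frac{T_n - n\theta}{\sigma^{2}}\right)^{\!2}\right]
  = \frac{n\sigma^{2}}{\sigma^{4}}
  = \frac{n}{\sigma^{2}},

which grows linearly in n, consistent with the regime described above in which the required moments exist.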

12.
We demonstrate how Bayes linear methods, based on partial prior specifications, bring us quickly to the heart of otherwise complex problems, giving us natural and systematic tools for evaluating our analyses that are not readily available in the usual Bayes formalism. We illustrate the approach with an example concerning problems of prediction in a large brewery. We describe the computer language [B/D] (an acronym for beliefs adjusted by data), which implements the approach. [B/D] incorporates a natural graphical representation of the analysis, providing a powerful way of thinking about the process of knowledge formulation and criticism which is also accessible to non-technical users.
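The central operation of the Bayes linear approach, adjusting beliefs about a vector of quantities B by observed data D, uses only prior means, variances and covariances via the standard adjusted-expectation and adjusted-variance formulas. The numpy sketch below implements those formulas with made-up prior specifications; it is not the [B/D] language itself.

import numpy as np

def bayes_linear_adjust(EB, ED, VarB, VarD, CovBD, d):
    # Adjusted expectation: E_D(B)   = E(B) + Cov(B, D) Var(D)^{-1} (d - E(D))
    # Adjusted variance:    Var_D(B) = Var(B) - Cov(B, D) Var(D)^{-1} Cov(D, B)
    VD_inv = np.linalg.inv(VarD)  # a generalized inverse would be used if Var(D) were singular
    EB_adj = EB + CovBD @ VD_inv @ (d - ED)
    VarB_adj = VarB - CovBD @ VD_inv @ CovBD.T
    return EB_adj, VarB_adj

# Made-up prior specification for one quantity B and two observables D
EB, ED = np.array([10.0]), np.array([5.0, 5.0])
VarB = np.array([[4.0]])
VarD = np.array([[2.0, 1.0], [1.0, 2.0]])
CovBD = np.array([[1.5, 1.0]])
print(bayes_linear_adjust(EB, ED, VarB, VarD, CovBD, d=np.array([6.0, 4.5])))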

19.
One of the pivotal devices B. Traven employs in his short story 'The Cattle Drive' is a contract between the cattle owner and the trail boss who brings the livestock to market. By specifying a per-diem rate, the contract appears to encourage a wage-maximizing trail boss to delay delivery of the cattle. However, a statistical model of the contract demonstrates that a rational trail boss has an incentive to maintain a rapid rate of travel. The article concludes that statistics can be applied in non-traditional ways, such as the analysis of the plot of a fictional story. The statistical model suggests plausible alternative endings to the story under various parameter assumptions. Finally, it demonstrates that a well-crafted story can provide an excellent case study of how contracts create incentives and influence decision-making.
