Similar Articles (20 results)
1.
Benn, A. & Kulperger, R. Statistics and Computing, 1998, 8(4): 309-318
Massively parallel computing is a computing environment with thousands of subprocessors. It requires some special programming methods, but is well suited to certain imaging problems. One such statistical example is discussed in this paper, and there are other natural statistical problems for which this technology is well suited. This paper describes our experience, as statisticians, with a massively parallel computer in a problem of image correlation spectroscopy. Even with this computing environment, some direct computations would still take on the order of a year to finish. It is shown that some of the algorithms of interest can be made parallel.
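As an illustration of the core computation, here is a minimal Python sketch (not the authors' massively parallel code) of the spatial autocorrelation on which image correlation spectroscopy is built; the FFT formulation tames the direct cost that motivated parallel hardware in the first place:

```python
import numpy as np

def image_autocorrelation(img):
    """Normalized spatial autocorrelation of one image via FFT.

    Direct computation over all lags of an N x N image is O(N^4), the kind
    of cost that made parallel hardware attractive; the FFT route is
    O(N^2 log N). The normalization below is one common ICS convention
    (an assumption, not necessarily the paper's).
    """
    f = img - img.mean()
    F = np.fft.fft2(f)
    corr = np.fft.ifft2(F * np.conj(F)).real
    return np.fft.fftshift(corr) / (img.size * img.mean() ** 2)

# Example: a Poisson "photon-count" image; the autocorrelation peaks at zero lag.
rng = np.random.default_rng(1)
img = rng.poisson(lam=10.0, size=(64, 64)).astype(float)
g = image_autocorrelation(img)
print(g.shape, g[32, 32])  # zero-lag peak sits at the centre after fftshift
```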

2.
ABSTRACT

Such is the grip of formal methods of statistical inference—that is, frequentist methods for generalizing from sample to population in enumerative studies—in the drawing of scientific inferences that the two are routinely deemed equivalent in the social, management, and biomedical sciences. This is so despite the fact that legitimate employment of said methods is difficult to implement on practical grounds alone. But even if the adoption of these procedures were simple, that would not get us far; crucially, methods of formal statistical inference are ill-suited to the analysis of much scientific data. Even findings from randomized controlled trials, the claimed gold standard for such analysis, can be problematic.

Scientific inference is a far broader concept than statistical inference. Its authority derives from the accumulation, over an extensive period of time, of both theoretical and empirical knowledge that has won the (provisional) acceptance of the scholarly community. A major focus of scientific inference can be viewed as the pursuit of significant sameness, meaning replicable and empirically generalizable results among phenomena. Regrettably, the obsession of users of statistical inference with reporting significant differences in data sets actively thwarts cumulative knowledge development.

The manifold problems surrounding the implementation and usefulness of formal methods of statistical inference in advancing science do not speak well of much of the teaching in methods/statistics classes. Serious reflection on statistics' role in producing viable knowledge is needed. Commendably, the American Statistical Association is committed to addressing this challenge, as further witnessed in this special online, open access issue of The American Statistician.

3.
Since 1943, numerous papers have discussed the problem of the distribution of the distance between random points in rectangles, considering special cases such as two points in the same square, points in adjacent squares, two rectangles sharing a side, and others. The problems arise in a variety of settings: operations research, population studies, urban planning, physical chemistry, chemical physics, and materials science. Reported results all concern special cases, with formulas specific to each case. It is possible to put such problems in a general setting with a single formula that handles all the particular cases. The method is well suited to computing and the use of graphics. Now that computers and graphic output are commonplace, it seems worthwhile to describe the general method and provide program outlines for computing and plotting the resulting distributions. We do that in this article.
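The article's single general formula is not reproduced here, but the distributions in question are easy to approximate by Monte Carlo; a minimal Python sketch covering two of the classical special cases (the rectangles are chosen purely for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

def random_points(rect, n, rng):
    """Uniform points in rect = (x0, y0, width, height)."""
    x0, y0, w, h = rect
    return np.column_stack([x0 + w * rng.random(n), y0 + h * rng.random(n)])

def distance_sample(rect_a, rect_b, n=200_000, seed=0):
    """Monte Carlo sample of |P - Q| with P uniform in rect_a, Q in rect_b."""
    rng = np.random.default_rng(seed)
    return np.linalg.norm(random_points(rect_a, n, rng)
                          - random_points(rect_b, n, rng), axis=1)

# Two classical special cases: same unit square, and two adjacent squares.
d_same = distance_sample((0, 0, 1, 1), (0, 0, 1, 1))
d_adj  = distance_sample((0, 0, 1, 1), (1, 0, 1, 1))
plt.hist(d_same, bins=200, density=True, alpha=0.5, label="same square")
plt.hist(d_adj,  bins=200, density=True, alpha=0.5, label="adjacent squares")
plt.xlabel("distance"); plt.ylabel("density"); plt.legend(); plt.show()
```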

4.
Stereology typically concerns estimation of properties of a geometric structure from plane section information. This paper provides a brief review of some statistical aspects of this rapidly developing field, with some reference to applications in the earth sciences. After an introductory discussion of the scope of stereology, section 2 briefly mentions results applicable when no assumptions can be made about the stochastic nature of the sampled matrix, statistical considerations then arising solely from the ‘randomness’ of the plane section. The next two sections postulate embedded particles of specific shapes, the particular case of spheres being discussed in some detail.

References are made to results for ‘thin slices’ and other probing mechanisms. Randomly located convex particles, of otherwise arbitrary shape, are discussed in section 5, and the review concludes with a specific application of stereological ideas to some data on neolithic mining.
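As a concrete illustration of the sphere case, a minimal simulation of the classical plane-section setup; the radius distribution is an illustrative assumption, not taken from the paper:

```python
import numpy as np

# Spheres of random radius R are cut by a plane; a sphere whose centre lies
# at distance d < R from the plane shows a circular profile of radius
# sqrt(R^2 - d^2). Larger spheres are hit more often, which is exactly the
# sampling bias stereological estimators must undo.
rng = np.random.default_rng(2)
n = 100_000
R = rng.gamma(shape=4.0, scale=0.5, size=n)    # assumed radius law (illustrative)
d = rng.uniform(0.0, R.max(), size=n)          # centre-to-plane distances
hit = d < R                                    # spheres actually intersected
profile = np.sqrt(R[hit] ** 2 - d[hit] ** 2)   # observed section radii
print(f"mean sphere radius {R.mean():.3f}, mean profile radius {profile.mean():.3f}")
```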

5.
ABSTRACT

The current concerns about reproducibility have focused attention on proper use of statistics across the sciences. This gives statisticians an extraordinary opportunity to change what are widely regarded as statistical practices detrimental to the cause of good science. However, how that should be done is enormously complex, made more difficult by the balkanization of research methods and statistical traditions across scientific subdisciplines. Working within those sciences while also allying with science reform movements—operating simultaneously on the micro and macro levels—is the key to making lasting change in applied science.

6.
Conducting a clinical trial at multiple study centres raises the issue of whether and how to adjust for centre heterogeneity in the statistical analysis. In this paper, we address this issue for multicentre clinical trials with a time-to-event outcome. Based on simulations, we show that the current practice of ignoring centre heterogeneity can be seriously misleading, and we illustrate the performance of the frailty modelling approach over competing methods. Particular attention is paid to the problem of misspecification of the frailty distribution. The appendix provides sample code in R and in SAS to perform the analyses in this paper.
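The appendix's R and SAS code is not reproduced here, but a hedged Python sketch (assuming numpy, pandas, and the lifelines package) shows the basic phenomenon: under a shared gamma frailty, a Cox fit that ignores centres attenuates the treatment effect, while a fit stratified by centre recovers it:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Illustrative simulation (not the paper's design): exponential event times
# with a gamma frailty shared within each centre; true log hazard ratio log 2.
rng = np.random.default_rng(3)
n_centres, per_centre, theta = 50, 40, 1.0
frailty = np.repeat(rng.gamma(1 / theta, theta, n_centres), per_centre)
centre = np.repeat(np.arange(n_centres), per_centre)
trt = rng.integers(0, 2, n_centres * per_centre)
time = rng.exponential(1.0 / (frailty * np.exp(np.log(2.0) * trt)))
df = pd.DataFrame({"time": time, "event": 1, "trt": trt, "centre": centre})

naive = CoxPHFitter().fit(df.drop(columns="centre"),
                          duration_col="time", event_col="event")
strat = CoxPHFitter().fit(df, duration_col="time", event_col="event",
                          strata=["centre"])
print(f"true log HR        {np.log(2.0):.3f}")
print(f"ignoring centres   {naive.params_['trt']:.3f}")   # attenuated toward 0
print(f"stratified fit     {strat.params_['trt']:.3f}")   # close to log 2
```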

7.
He Keng (贺铿). Statistical Research (统计研究), 1998, 15(5): 3-6
The National Statistical Research Work Conference has been held five times to date, two of which were especially important: the Guangzhou conference and the Beijing Pinggu conference. The Guangzhou conference resolved the question of a national statistical research network: statistical research institutions in the provinces, autonomous regions, and centrally administered municipalities were strengthened, securing organizational support for research nationwide. The Beijing Pinggu conference put forward a development programme for statistical science research spanning the turn of the century and for strengthening…

8.

Regression spline smoothing is a popular approach for conducting nonparametric regression. An important issue associated with it is the choice of a "theoretically best" set of knots. Different statistical model selection methods, such as Akaike's information criterion and generalized cross-validation, have been applied to derive different "theoretically best" sets of knots. Typically these best knot sets are defined implicitly as the optimizers of some objective functions. Hence another equally important issue concerning regression spline smoothing is how to optimize such objective functions. In this article different numerical algorithms that are designed for carrying out such optimization problems are compared by means of a simulation study. Both the univariate and bivariate smoothing settings will be considered. Based on the simulation results, recommendations for choosing a suitable optimization algorithm under various settings will be provided.
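A minimal sketch of one such objective, the GCV criterion for a cubic regression spline, with a crude search over equally spaced knot sets (illustrative only; the article compares far more sophisticated optimizers):

```python
import numpy as np

def spline_basis(x, knots):
    """Cubic truncated-power basis: 1, x, x^2, x^3, (x - k)_+^3."""
    cols = [x ** j for j in range(4)]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def gcv(x, y, knots):
    """Generalized cross-validation score for a given knot set."""
    B = spline_basis(x, knots)
    H = B @ np.linalg.pinv(B)          # hat matrix (pinv for stability)
    resid = y - H @ y
    n = len(y)
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

rng = np.random.default_rng(4)
x = np.sort(rng.random(200))
y = np.sin(4 * np.pi * x) + 0.3 * rng.normal(size=200)

# Crude search over the number of equally spaced interior knots.
scores = {m: gcv(x, y, np.linspace(0, 1, m + 2)[1:-1]) for m in range(1, 21)}
print("GCV-selected number of knots:", min(scores, key=scores.get))
```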

9.
In this work it is shown how the k-means method for clustering objects can be applied in the context of statistical shape analysis. Because the choice of a suitable distance measure is a key issue for shape analysis, the Hartigan and Wong k-means algorithm is adapted to this situation. Simulations on controlled artificial data sets demonstrate that distances on the pre-shape spaces are more appropriate than the Euclidean distance on the tangent space. Finally, results are presented from an application to a real problem in oceanography, which in fact motivated the current work.
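A hedged sketch of the idea: Lloyd-style k-means under the full Procrustes distance on pre-shapes (the paper adapts the more incremental Hartigan and Wong algorithm, which is not reproduced here):

```python
import numpy as np

def preshape(X):
    """Centre a (landmarks x dims) configuration and scale to unit size."""
    Z = X - X.mean(axis=0)
    return Z / np.linalg.norm(Z)

def full_procrustes_dist(Z1, Z2):
    """Full Procrustes distance between pre-shapes (rotation removed).

    cos(rho) is the sum of singular values of Z2'Z1; d_F = sin(rho).
    Reflections are ignored here to keep the sketch short.
    """
    s = np.linalg.svd(Z2.T @ Z1, compute_uv=False).sum()
    return np.sqrt(max(1.0 - min(s, 1.0) ** 2, 0.0))

def align(Z, ref):
    """Rotate pre-shape Z to best match ref (orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(Z.T @ ref)
    return Z @ U @ Vt

def shape_kmeans(shapes, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    Z = [preshape(s) for s in shapes]
    centres = [Z[i] for i in rng.choice(len(Z), k, replace=False)]
    for _ in range(iters):
        labels = np.array([np.argmin([full_procrustes_dist(z, c)
                                      for c in centres]) for z in Z])
        for j in range(k):
            members = [align(z, centres[j]) for z, l in zip(Z, labels) if l == j]
            if members:  # mean of aligned members, projected back to pre-shape
                centres[j] = preshape(np.mean(members, axis=0))
    return labels

# Two noisy triangle populations with different shapes.
rng = np.random.default_rng(5)
base = np.array([[0., 0.], [1., 0.], [0.5, 1.]])
tall = np.array([[0., 0.], [1., 0.], [0.5, 2.]])
shapes = [b + 0.05 * rng.normal(size=(3, 2)) for b in [base] * 20 + [tall] * 20]
print(shape_kmeans(shapes, k=2))
```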

10.
Azzalini and Dalla Valle have recently discussed the multivariate skew normal distribution which extends the class of normal distributions by the addition of a shape parameter. The first part of the present paper examines further probabilistic properties of the distribution, with special emphasis on aspects of statistical relevance. Inferential and other statistical issues are discussed in the following part, with applications to some multivariate statistics problems, illustrated by numerical examples. Finally, a further extension is described which introduces a skewing factor of an elliptical density.
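For reference, the Azzalini and Dalla Valle density is f(z) = 2 φ_d(z; Ω) Φ(αᵀz), where φ_d is the d-dimensional normal density with correlation matrix Ω and Φ the standard normal distribution function; a minimal evaluation sketch (scipy assumed, location fixed at zero):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def skew_normal_pdf(z, Omega, alpha):
    """Density 2 * phi_d(z; Omega) * Phi(alpha' z) of the multivariate
    skew normal (location 0; a sketch, not a full implementation)."""
    z = np.atleast_2d(z)
    return 2.0 * multivariate_normal(cov=Omega).pdf(z) * norm.cdf(z @ alpha)

Omega = np.array([[1.0, 0.5], [0.5, 1.0]])
alpha = np.array([3.0, -1.0])
print(skew_normal_pdf([0.3, -0.2], Omega, alpha))
# alpha = 0 recovers the ordinary bivariate normal density:
print(skew_normal_pdf([0.3, -0.2], Omega, np.zeros(2)),
      multivariate_normal(cov=Omega).pdf([0.3, -0.2]))
```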

11.
Sensitivity analysis provides a way to mitigate traditional criticisms of Bayesian statistical decision theory concerning dependence on subjective inputs. We suggest a general framework for sensitivity analysis allowing for perturbations in both the utility function and the prior distribution. Perturbations are constrained to classes modelling imprecision in judgements. The framework first discards definitely bad alternatives; then identifies alternatives that may share optimality with the current one; and finally detects the least changes in the inputs that lead to changes in ranking. The associated computational problems and their implementation are discussed.
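A minimal sketch of the prior-perturbation side of such a framework, using an epsilon-contamination class (the class choice and all numbers are illustrative assumptions; the utility side is analogous):

```python
import numpy as np

# Prior perturbations in the class {(1-eps)*base + eps*q : q any prior};
# the extreme expected utilities over the class occur at point masses.
states = np.arange(3)
base_prior = np.array([0.5, 0.3, 0.2])
utility = np.array([[10.0, 2.0, -5.0],    # rows: actions, cols: states
                    [ 4.0, 4.0,  3.0]])
eps = 0.1                                  # contamination level (assumed)

def utility_range(u_row):
    """Min/max expected utility of one action over the contamination class."""
    base = (1 - eps) * base_prior @ u_row
    return base + eps * u_row.min(), base + eps * u_row.max()

for a, u_row in enumerate(utility):
    lo, hi = utility_range(u_row)
    print(f"action {a}: expected utility in [{lo:.2f}, {hi:.2f}]")
# An action whose upper bound falls below another action's lower bound is
# definitely bad and can be discarded: the framework's first step.
```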

12.
In November 2014, the American Statistical Association, responding to the demands of the big-data era, released curriculum guidelines for undergraduate programs in statistics. China's undergraduate statistics majors had just undergone a major adjustment in 2013, and reform of course design and teaching content is still at an exploratory stage. The ASA guidelines therefore offer an important reference for advancing the reform of undergraduate statistics education in China. This paper first outlines the core content of the ASA guidelines, including the basic skills undergraduate statistics majors should master and the main courses they should take; it then analyzes the problems in China's undergraduate statistics education and offers several suggestions for improving it in the big-data era.

13.
Ma Shixiao & Zheng Wenfan (马世骁, 郑文范). Statistical Research (统计研究), 1999, 16(10): 17-25
I. The necessity of improving China's current annual science and technology (S&T) statistical reporting system. (1) Problems in the current system. S&T statistics refers to statistics on the inputs, activities, and outputs of S&T work and is an important component of statistical activity. China established its annual S&T statistical reporting system in 1986; after more than a decade of operation the system has accomplished a great deal, but it has also exposed some problems. First, the statistical scope of the current S&T indicators is too narrow to reflect the reality of S&T activity in China. Internationally, the most important S&T indicators measure R&D activity, which is characterized by novelty and innovation; in practice, however, many S&T activities lie between R&D and production, and these clearly do not count as R&D…

14.
Generalized Leverage and its Applications
The generalized leverage of an estimator is defined in regression models as a measure of the importance of individual observations. We derive a simple but powerful result, developing an explicit expression for leverage in a general M-estimation problem, of which maximum likelihood problems are special cases. A variety of applications are considered, most notably to exponential family non-linear models. The relationship between leverage and local influence is also discussed. Numerical examples are given to illustrate our results.
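In the linear-model special case, generalized leverage reduces to the familiar diagonal of the hat matrix H = X(XᵀX)⁻¹Xᵀ, the sensitivity of the i-th fitted value to the i-th observation; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
X[0, 1] = 6.0                        # one point far out in covariate space
H = X @ np.linalg.solve(X.T @ X, X.T)  # hat matrix
leverage = np.diag(H)
print(f"max leverage {leverage.max():.3f} at obs {leverage.argmax()}, "
      f"mean {leverage.mean():.3f} (= p/n = {X.shape[1] / n:.3f})")
```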

15.
Preface     
The appearance of this issue will mark almost two years since its inception by the late Editor, Professor Don B. Owen. Professor Malay Ghosh suggested a special issue on Pitman's Measure of Closeness (PMC) to Professor Owen in the autumn of 1989. After a thorough review process the issue was finalized in June 1991. It is with sorrow that we publish this special issue in memory of Professor Owen.

The completion of the issue coincided with a special conference, “Pitman's Measure of Closeness: Celebrating a Decade of Renaissance,” held on June 15, 1991 at the University of Texas at San Antonio (UTSA). The papers in this special issue, except those of Professor Kubokawa and Drs. Bertuzzi and Gandolfi, were presented and discussed. The conference participants included C.R. Rao (Pennsylvania State University), Colin Blyth (Queen's University), and H.T. David (Iowa State University). Further, P.K. Sen (University of North Carolina) and Malay Ghosh (University of Florida) gave keynote addresses that respectively set the themes for the morning and afternoon sessions.

The conference banquet, held in the Regent's Room at the University of Texas at San Antonio, featured stimulating addresses by C.R. Rao and Colin Blyth on some of the major controversies of PMC, such as intransitiveness and Berkson's conjecture. We are grateful to the University of Texas at San Antonio for making this conference a reality. In particular, we thank Professor Shair Ahmad, Director of the Division of Mathematics, Computer Science, and Statistics at UTSA, for funding travel expenses not only for the invited speakers but also for many of the younger researchers. We also acknowledge University President Samuel Kirkpatrick, who made the KIVA room available for the technical sessions and the Regent's Room for the banquet. Finally, we acknowledge the financial support of Bell Helicopter Textron, Inc. as a cosponsor of the conference.

16.
The paper considers vector ARMA processes with nonstationary innovations. It is suggested that this class of models provides a very efficient framework for nonstationary problems. A generalization of the Yule-Walker equations for the underlying process is obtained. Identification procedures are discussed. The associated prediction problem is solved using the Hilbert space approach.
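For orientation, the classical stationary Yule-Walker equations that the paper generalizes can be sketched for a VAR(1), where Γ(1) = A Γ(0) gives A = Γ(1) Γ(0)⁻¹ (the nonstationary-innovations extension is not attempted here):

```python
import numpy as np

def var1_yule_walker(X):
    """Yule-Walker estimate of A in the stationary VAR(1) x_t = A x_{t-1} + e_t."""
    Xc = X - X.mean(axis=0)
    n = len(Xc)
    g0 = Xc.T @ Xc / n            # Gamma(0) = E[x_t x_t']
    g1 = Xc[1:].T @ Xc[:-1] / n   # Gamma(1) = E[x_t x_{t-1}']
    return g1 @ np.linalg.inv(g0)

# Simulate a bivariate VAR(1) and recover its coefficient matrix.
rng = np.random.default_rng(7)
A = np.array([[0.6, 0.2], [-0.1, 0.4]])
x, rows = np.zeros(2), []
for _ in range(20_000):
    x = A @ x + rng.normal(size=2)
    rows.append(x)
print(np.round(var1_yule_walker(np.array(rows)), 2))
```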

17.
Wavelet Regression Technique for Streamflow Prediction
Küçük, M. & Ağıralioğlu, N. Journal of Applied Statistics, 2006, 33(9): 943-960
Analyzing non-stationary series in order to uncover hidden features of natural phenomena is an attractive problem in many research areas. The wavelet transform technique, which has been widely used over the last two decades, gives better results than earlier techniques for the analysis of earth science phenomena and for feature detection in real measurements. In this study, a new technique is offered for streamflow modeling using the discrete wavelet transform. This new technique depends on the feature-detection characteristic of the wavelet transform. The model was applied to two geographical locations with different climates, and the results were compared using the energy variation and error values of the models. The new technique offers the added advantage of a physical interpretation. It is applied here to streamflow regression models because they are simple and widely used in practical applications, but one can apply the technique to other models as well.
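A hedged sketch of the general recipe (assuming the PyWavelets package; the data and model below are illustrative, not the paper's): decompose the series into wavelet sub-series, then use their lagged values as regressors:

```python
import numpy as np
import pywt

rng = np.random.default_rng(8)
t = np.arange(512)
# A streamflow-like series: seasonal cycle plus a slowly drifting component.
flow = 10 + 3 * np.sin(2 * np.pi * t / 64) + np.cumsum(rng.normal(0, 0.2, 512))

def subseries(x, wavelet="db4", level=3):
    """One reconstructed component per DWT coefficient block (A3, D3, D2, D1)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(kept, wavelet)[:len(x)])
    return np.column_stack(comps)

S = subseries(flow)
X = np.column_stack([np.ones(len(flow) - 1), S[:-1]])  # lag-1 sub-series
beta, *_ = np.linalg.lstsq(X, flow[1:], rcond=None)    # one-step-ahead fit
pred = X @ beta
print(f"in-sample RMSE {np.sqrt(np.mean((flow[1:] - pred) ** 2)):.3f}")
```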

18.
In this paper, a semi-parametric single-index model is investigated. The link function is allowed to be unbounded and to have unbounded support, which answers a pending issue in the literature. Meanwhile, the link function is treated as a point in an infinite-dimensional function space, which enables us to derive estimates for the index parameter and the link function simultaneously. This approach differs from the profile method commonly used in the literature. The estimator is derived from an optimisation with the constraint of an identification condition for the index parameter, which addresses an important problem in the literature on single-index models. In addition, making use of a property of Hermite orthogonal polynomials, an explicit estimator for the index parameter is obtained. Asymptotic properties of the two estimators of the index parameter are established, and their efficiency is discussed in some special cases. The finite-sample properties of the two estimators are demonstrated through an extensive Monte Carlo study and an empirical example.
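A hedged sketch of the flavour of such an estimator (illustrative only; the paper's estimator and identification condition are more refined): expand the link in Hermite polynomials and profile it out by least squares at each candidate index direction:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from scipy.optimize import minimize

# Single-index model y = g(x'beta) + e with ||beta|| = 1 for identification.
rng = np.random.default_rng(9)
n, p, deg = 400, 3, 6
beta0 = np.array([2.0, 1.0, -1.0]); beta0 /= np.linalg.norm(beta0)
X = rng.normal(size=(n, p))
y = np.sin(2 * X @ beta0) + 0.2 * rng.normal(size=n)

def profiled_rss(theta):
    b = theta / np.linalg.norm(theta)     # enforce the norm constraint
    B = hermevander(X @ b, deg)           # Hermite basis evaluated at the index
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)  # profile out the link
    return np.sum((y - B @ coef) ** 2)

res = minimize(profiled_rss, x0=np.ones(p), method="Nelder-Mead")
beta_hat = res.x / np.linalg.norm(res.x)
if beta_hat[0] < 0:                       # fix the sign convention
    beta_hat = -beta_hat
print(np.round(beta_hat, 3), np.round(beta0, 3))
```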

19.
The restrictive properties of compositional data, that is, multivariate data with positive parts that carry only relative information in their components, call for special care while performing standard statistical methods, for example, regression analysis. Among the special methods suitable for handling this problem is the total least squares procedure (TLS; orthogonal regression, regression with errors in variables, the calibration problem), performed after an appropriate log-ratio transformation. The difficulty, or even impossibility, of deeper statistical analysis (confidence regions, hypothesis testing) using the standard TLS techniques can be overcome by a calibration solution based on linear regression. This approach can be combined with standard statistical inference, for example, confidence and prediction regions and bounds, hypothesis testing, etc., suitable for interpretation of results. Here, we deal with the simplest TLS problem, where we assume a linear relationship between two errorless measurements of the same object (substance, quantity). We propose an iterative algorithm for estimating the calibration line and also give confidence ellipses for the location of unknown errorless results of measurement. Moreover, illustrative examples from the fields of geology, geochemistry and medicine are included. It is shown that the iterative algorithm converges to the same values as those obtained using the standard TLS techniques. Fitted lines and confidence regions are presented for both original and transformed compositional data. The paper contains basic principles of linear models and addresses many related problems.
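A minimal sketch of the standard TLS line after a log-ratio transform, for two-part compositions via the additive log-ratio (the data are simulated assumptions; the paper's iterative calibration algorithm and confidence ellipses are not reproduced):

```python
import numpy as np

# Two measurements of the same composition, each a proportion in (0, 1).
rng = np.random.default_rng(10)
x1 = rng.beta(4, 2, size=100)
x2 = np.clip(x1 + rng.normal(0, 0.03, 100), 0.01, 0.99)

alr = lambda x: np.log(x / (1 - x))       # additive log-ratio transform
Z = np.column_stack([alr(x1), alr(x2)])

# Orthogonal (TLS) line: direction of the leading right singular vector
# of the centred data matrix, which minimizes orthogonal distances.
Zc = Z - Z.mean(axis=0)
_, _, Vt = np.linalg.svd(Zc, full_matrices=False)
direction = Vt[0]
slope = direction[1] / direction[0]
intercept = Z.mean(axis=0)[1] - slope * Z.mean(axis=0)[0]
print(f"TLS line in alr coordinates: y = {slope:.3f} x {intercept:+.3f}")
```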

20.
Lyell, a founder of the science of geology, used statistical models to describe the changes that had occurred in the earth and its environment. From this model he attempted to establish a time frame for each epoch. This article shows that Lyell's model is equivalent to the classic coupon problem included in many probability texts. Furthermore, it is shown that the time frame deduced by Lyell is inconsistent with the model he was using. The proper time frame consistent with the model is provided. A second model that was considered by Lyell is also investigated.
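The coupon-problem equivalence has a concrete consequence: with n equally likely "coupons" (here, fossil species), the expected number of draws needed to observe all of them is n·H_n, with H_n the n-th harmonic number; a quick check:

```python
import numpy as np

n = 50
expected = n * np.sum(1.0 / np.arange(1, n + 1))  # n * H_n

def draws_to_collect_all(n, rng):
    """Simulate one run of the coupon collector with n coupon types."""
    seen, count = set(), 0
    while len(seen) < n:
        seen.add(rng.integers(n))
        count += 1
    return count

rng = np.random.default_rng(11)
sim = np.mean([draws_to_collect_all(n, rng) for _ in range(2000)])
print(f"n*H_n = {expected:.1f}, simulated mean = {sim:.1f}")
```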
