Similar Documents
 20 similar documents found (search time: 647 ms)
1.
Earthquakes, evolution, avalanches—even something as simple as grains of sand tumbling down a pile: so many phenomena of nature seem to resist prediction. We still cannot tell when geological stresses will release themselves in a major quake, or when one last extra grain of sand will bring half the heap avalanching down. But have statisticians been looking in the wrong place? Or are these critical phenomena really inherently unpredictable? They are close to the border, but still in predictable territory, says Osvanny Ramos, one of the authors of an experiment to prove it.

2.
Dr Linda Mountain, Significance, 2006, 3(3): 111-113
There cannot be a road user in the UK who does not have an opinion on speed cameras—in most cases a rather strongly held opinion. Indeed many of us will hold them responsible for the first blemish on an otherwise clean license. Their proper name is not speed but safety cameras—but do they really save lives, or are they just one more way of extracting money from the hapless motorist? Linda Mountain knows the answers.

3.
What is mentoring? Is it just a buzzword, or is it really valuable? How can mentoring help one grow and advance personally and professionally? How and where does one even begin? Many of us have these questions. In this article, I share my perspective and offer some reflections on these questions, based on my own personal and professional journey.

4.
The financial world is in meltdown—and it is the fault of modern financial theory. How did the theorists manage to believe so many impossible things for so long? William Janeway marvels and explains.

5.
Four stabbings to death in a single day. Ninety murders in seven months. Shocking figures—or are they? Knife crime makes the headlines almost daily, but are Londoners really at increased risk of being murdered? David Spiegelhalter and Arthur Barnett investigate—and find a predictable pattern of murder.

6.
More Than Things     
Abstract

The often invisible labor of serials, technical services, metadata, and electronic resources workers sits in the space between required and preferred, assessment and surveillance. Although libraries and information workers did not explicitly create the systems many of us live in, we are responsible for their everyday functioning. In many ways the narratives from technical services to the library are centered in objects: item counts, COUNTER stats, door counts, discovery, and other transactional data. And yet, we are stewards and maintainers, innovators and storytellers of the countless ways these objects are experienced. How can we help our colleagues understand the outreach component of this work? How do we responsibly confront power in our systems—which often miscalculates the necessity of care in favor of the shiny? What does it mean to honor expertise behind the scenes, and how might we gain agency in our systems once more?

7.
Abstract

Two problems need to be solved before proper advice can be given to couples undergoing in vitro fertilization therapy. Firstly, does the long-run success rate really converge to 100%? Secondly, what success rate can be expected within a reasonable, finite number of cycles? We propose a model based on a Weibull distribution. Data on 23,520 couples were used to calculate the cumulative pregnancy rate.
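The shape of such a cumulative success curve is easy to sketch. Below is a minimal Python illustration, assuming a Weibull-shaped cumulative rate with a hypothetical long-run ceiling below 100%; all parameter values are invented for illustration, not taken from the paper:

```python
import math

def cumulative_success(n_cycles, scale=6.0, shape=0.9, ceiling=0.85):
    """Cumulative probability of a pregnancy within n_cycles under a
    Weibull-type model. A `ceiling` below 1 encodes a long-run rate
    that does not converge to 100%. Parameter values are illustrative."""
    return ceiling * (1.0 - math.exp(-((n_cycles / scale) ** shape)))

curve = [cumulative_success(n) for n in (1, 3, 6, 12)]
```

With a ceiling parameter the abstract's two questions separate cleanly: the ceiling answers the long-run question, while the curve at small n answers what can be expected within a finite number of cycles.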

8.
Yan Wong, Significance, 2010, 7(1): 45-48
Do cows really sense the earth's magnetic field and point north? Yan Wong had five minutes of film time to explain the statistics of finding out.

9.
In a headline-hitting trial at the Old Bailey last year, top jockey Kieren Fallon was accused of "throwing" races. Did he really ride to lose? John Haigh comes to a clear statistical conclusion.

10.
Thinner and ever more bizarrely shaped models strut the catwalks; outside the fashion shows real women get bigger as dress sizes get smaller, and in high-street shops size 18 customers squeeze themselves into dresses that claim to be size 12. What shape are we in as a nation—and what size are we really? Philip Treleaven explains how the UK National Sizing Survey can restore a sense of proportion—and may even be able to answer the question every woman asks in a dress shop: "Does my bum look big in this?"

11.
We congratulate the authors on an interesting paper; reading it has been a real pleasure and highly instructive. We briefly discuss only some of the interesting results given in Devroye and James (Stat Methods Appl 2014), with particular attention to evolution problems. The results collected in the paper are useful in a wider class of applications across many areas of applied mathematics.

12.
This paper reviews five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main questions are: when should which type of analysis be applied, and which statistical techniques may then be used? The paper argues that the proper sequence to follow in the evaluation of simulation models is as follows. 1) Validation, in which the availability of data on the real system determines which type of statistical technique to use. 2) Screening: in the simulation's pilot phase, the really important inputs can be identified through a novel technique called sequential bifurcation, which uses aggregation and sequential experimentation. 3) Sensitivity analysis: the really important inputs should be subjected to a more detailed analysis, including interactions between these inputs; relevant statistical techniques are design of experiments (DOE) and regression analysis. 4) Uncertainty analysis: the important environmental inputs may have values that are not precisely known, so the uncertainties in the model outputs that result from the uncertainties in these inputs should be quantified; relevant techniques are the Monte Carlo method and Latin hypercube sampling. 5) Optimization: the policy variables should be controlled; a relevant technique is Response Surface Methodology (RSM), which combines DOE, regression analysis, and steepest-ascent hill-climbing. The recommended sequence implies that sensitivity analysis should precede uncertainty analysis. Several case studies for each phase are briefly discussed in this paper.
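As a concrete illustration of step 4, here is a minimal Latin hypercube sampler in Python with NumPy (a generic sketch of the technique, not the authors' implementation):

```python
import numpy as np

def latin_hypercube(n_samples, n_inputs, seed=None):
    """Latin hypercube sample on [0, 1]^d: each input's range is cut
    into n_samples equal strata, and each stratum is used exactly once."""
    rng = np.random.default_rng(seed)
    # stratum indices: an independent permutation of 0..n-1 per input
    strata = np.array([rng.permutation(n_samples)
                       for _ in range(n_inputs)]).T
    # one uniform draw inside each chosen stratum
    return (strata + rng.random((n_samples, n_inputs))) / n_samples

sample = latin_hypercube(10, 3, seed=0)
```

Compared with plain Monte Carlo, each input's marginal distribution is stratified evenly, which typically reduces the variance of the estimated output uncertainty for the same number of simulation runs.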

13.
XML     
Abstract

XML offers some significant enhancements and functionality over what can be achieved with HTML. Among them:

- XML is a language for defining markup languages. It is not restricted to the fixed set of tags defined in HTML: document types can be created that are customized to the application, the type of data, and the user community for which they are intended.

- Information content can be richer and easier to use, both because tags are more flexible and because XML's hypertext linking abilities are greater than those offered in HTML.

- Information should be more usable and accessible. Because of the nature of XML, communities will be able to create applications customized to their needs, rather than being restricted to software provided by major vendors, as has become the case with HTML.

- XML files are valid SGML, so they can be used in SGML environments outside of the Web. At the same time, XML has removed many of the complexities inherent in SGML, providing a simpler model and thus making it easier to develop XML applications than would be the case for SGML.
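A small, hypothetical example of such a custom document type, parsed with Python's standard library; the tag names below are invented for illustration, which is exactly the point: they are not drawn from any fixed set, as they would have to be in HTML.

```python
import xml.etree.ElementTree as ET

# A made-up, application-specific vocabulary for a bibliographic catalog.
doc = """<catalog>
  <journal>
    <title>Significance</title>
    <article year="2006">Speed cameras: do they really save lives?</article>
  </journal>
</catalog>"""

root = ET.fromstring(doc)
titles = [j.findtext("title") for j in root.iter("journal")]
years = [a.get("year") for a in root.iter("article")]
```

Because the structure mirrors the data, queries like "all journal titles" or "all article years" fall straight out of the tree, with no screen-scraping of presentation markup.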

14.
Modelling and simulation are buzz words in clinical drug development. But is clinical trial simulation (CTS) really a revolutionary technique? There is not much more to CTS than applying standard methods of modelling, statistics and decision theory. However, doing this in a systematic way can mean a significant improvement in pharmaceutical research. This paper describes in simple examples how modelling could be used in clinical development. Four steps are identified: gathering relevant information about a drug and the disease; building a mathematical model; predicting the results of potential future trials; and optimizing clinical trials and the entire clinical programme. We discuss these steps and give a number of examples of model components, demonstrating that relatively unsophisticated models may also prove useful. We stress that modelling and simulation are decision tools and point out the benefits of integrating them with decision analysis. Copyright © 2005 John Wiley & Sons, Ltd.

15.
ABSTRACT

When a binary dependent variable is misclassified, that is, recorded in a category other than the one to which it really belongs, probit and logit estimates are biased and inconsistent. In some cases, the probability of misclassification may vary systematically with covariates, and thus be endogenous. In this paper, we develop an estimation approach that corrects for endogenous misclassification, validate our approach using a simulation study, and apply it to the analysis of a treatment program designed to improve family dynamics. Our results show that endogenous misclassification can lead to incorrect conclusions unless corrected using an appropriate technique.
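The bias from misclassification is easy to see in a toy simulation. The sketch below uses a constant (exogenous) misclassification probability for simplicity; the paper's contribution is precisely the harder case where this probability varies with covariates. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
p_true = 0.30   # true prevalence of the outcome (illustrative value)
flip = 0.10     # constant misclassification probability (illustrative)

y = rng.random(n) < p_true
wrong = rng.random(n) < flip
y_obs = np.where(wrong, ~y, y)        # recorded in the wrong category

naive = y_obs.mean()                  # E[naive] = p(1-flip) + (1-p)flip
corrected = (naive - flip) / (1 - 2 * flip)
```

With a known, constant flip rate the bias can be inverted algebraically, as above; when the flip rate depends on covariates, no such closed-form fix exists, which is why a dedicated estimation approach is needed.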

16.
Do you sincerely want to be cited? Prestige depends on the number of times your academic paper gets cited. But that need not be a measure of how good it is, nor even of how many times it is actually read. Mikhail Simkin and Vwani Roychowdhury explain their theory of the unread citation.

17.
In this paper we study the procedures of Dudewicz and Dalal (1975), and the modifications suggested by Rinott (1978), for selecting the largest mean from k normal populations with unknown variances. We look at the case k = 2 in detail, because an optimal allocation scheme exists there. Rather than actually allocating the total number of samples between the two groups, we estimate this optimal sample size as well, so as to guarantee a probability of correct selection (written P(CS)) of at least P*, where 1/2 < P* < 1. We prove that the procedure of Rinott is "asymptotically inefficient" (to be defined below) in the sense of Chow and Robbins (1965) for any k ≥ 2. Next, we propose two-stage procedures having all the properties of Rinott's procedure, together with the property of "asymptotic efficiency", which is highly desirable.

18.
Elliott and Müller (2006) considered the problem of testing for general types of parameter variations, including infrequent breaks. They developed a framework that yields optimal tests, in the sense that they nearly attain some local Gaussian power envelope. The main ingredient in their setup is that the variance of the process generating the changes in the parameters must go to zero at a fast rate. They recommended the so-called qLL test, a partial-sums-type test based on the residuals obtained from the restricted model. We show that for breaks that are very small, its power is indeed higher than that of other tests, including the popular sup-Wald (SW) test, though the differences are very minor. When the magnitude of change is moderate to large, the power of the qLL test is very low in the context of a regression with lagged dependent variables or when a correction is applied to account for serial correlation in the errors. In many cases, its power goes to zero as the magnitude of change increases. The power of the SW test does not show this non-monotonicity, and it is far superior to the qLL test when the break is not very small. We claim that the optimality of the qLL test comes not from the properties of the test statistic but from the criterion adopted, which is not useful for analyzing structural change tests. Instead, we use fixed-break-size asymptotic approximations to assess the relative efficiency, or power, of the two tests. When doing so, it is shown that the SW test indeed dominates the qLL test and, in many cases, the latter has zero relative asymptotic efficiency.

19.
The criterion of admissibility has been considered one of the most important criteria in decision theory, and many important results have been contributed in this direction. In this article, we propose a more flexible criterion, so-called ε-admissibility (which can be considered a weak form of admissibility), which generates a monotone sequence of classes of estimators. The limit of this sequence, the class of 0+-admissible estimators, is the smallest class including the class of usual admissible estimators, which also belongs to the monotone sequence. Some sufficient and necessary conditions are proposed for ε-admissibility and 0+-admissibility. Under some weighted squared loss, it can be shown that the usual MLE is 0+-admissible for the multivariate normal distribution and the multivariate Poisson distribution.

20.
A tutorial on spectral clustering   (cited by 33)
In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance spectral clustering appears slightly mysterious, and it is not obvious why it works at all and what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
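A minimal sketch of the unnormalized variant in Python/NumPy (a generic illustration of the technique, not code from the tutorial): build the graph Laplacian L = D - W from the similarity matrix W, take its first k eigenvectors, and cluster the resulting rows (e.g. with k-means, omitted here).

```python
import numpy as np

def spectral_embedding(W, k):
    """Unnormalized spectral clustering, step one: embed the n graph
    vertices via the first k eigenvectors of the graph Laplacian L = D - W."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    # eigh returns eigenvalues in ascending order for symmetric matrices
    _, vecs = np.linalg.eigh(L)
    return vecs[:, :k]  # rows are the new k-dimensional representations

# Two obvious clusters: vertices {0,1} and {2,3}, connected within only.
W = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
U = spectral_embedding(W, 2)
```

In this disconnected example the embedding collapses each connected component to a single point, so any clustering of the rows recovers the components; for connected graphs the separation is approximate rather than exact.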


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)