Similar Documents
20 similar documents found (search time: 625 ms)
1.
A survey of participants in a large-scale business plan competition experiment, in which winners received an average of US$50,000 each, is used to elicit ex-post beliefs about what the outcomes would have been under the alternative treatment status. Participants are asked the percent chance they would be operating a firm, and the number of employees and monthly sales they would have, had their treatment status been reversed. The study finds the control group to have reasonably accurate expectations of the large treatment effect they would experience on the likelihood of operating a firm, although this may reflect the treatment effect being close to an upper bound. The control group dramatically overestimates how much winning would help them grow the size of their firm. The treatment group overestimates how much winning improves the chance of their business surviving, and also overestimates how much winning helps them grow their firms. In addition, these counterfactual expectations appear unable to generate accurate relative rankings of which groups of participants benefit most from treatment.

2.
The role of statistics in science today goes well beyond the analysis of experimental results: it extends into the fabric of the science itself. Nature seems to operate as much by statistical laws as physical ones. Yet before the mid-19th century no one had an inkling that this was so. The enfolding of statistical concepts is one of the most profound changes in the history of science. Basil Mahon tells how two remarkable men began this transformation.

3.
Carbon trading has been claimed as the most effective and practical way to limit greenhouse gas emissions, and the first full year of trading in the European Union has recently been completed. But how much has it actually reduced emissions—and can it work any better in future? Kirsty Clough examines the statistics.

4.
To explore how individual consumers form trust in individual sellers in C2C transactions, a structural equation model of consumer trust is proposed and tested with data collected from a scenario-based simulation experiment. The study concludes that consumers' perceived website quality is the most important factor, that trust-building mechanisms are crucial, that the C2C platform market plays a larger role than the individual seller, that reputation has a stronger influence than size, and that buyer-seller interaction is key.

5.
Summary.  This is a response to Stone's criticisms of the Spottiswoode report to the UK Treasury, which was responding to the Treasury's request for improved methods to evaluate the efficiency and productivity of the 43 police districts in England and Wales. The Spottiswoode report recommended uses of data envelopment analysis (DEA) and stochastic frontier analysis (SFA), which Stone critiqued en route to proposing an alternative approach. Here we note some of the most serious errors in his criticism and his inaccurate portrayals of DEA and SFA. Most of our attention is devoted to DEA and to Stone's recommended alternative approach, with less attention to SFA, partly because of his abbreviated discussion of the latter. In our response we attempt to be constructive as well as critical by showing how Stone's proposed approach can be joined to DEA to expand his proposal beyond the limitations in his formulations.
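For context, a minimal sketch (a standard textbook formulation, not taken from this abstract) of the input-oriented CCR envelopment program that DEA solves for each evaluated unit, with m inputs and s outputs observed for n units:

```latex
% Input-oriented CCR (constant returns to scale) envelopment model, solved once
% for each evaluated unit (x_0, y_0); the optimal theta <= 1 is its efficiency score.
\begin{aligned}
\min_{\theta,\;\lambda}\quad & \theta \\
\text{s.t.}\quad & \sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{i0}, && i = 1,\dots,m, \\
& \sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{r0}, && r = 1,\dots,s, \\
& \lambda_j \ge 0, && j = 1,\dots,n.
\end{aligned}
```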

6.
One of the most popular methods and algorithms to partition data into k clusters is the k-means clustering algorithm. Since this method relies on basic conditions such as the existence of a mean and finite variance, it is unsuitable for data whose variance is infinite, such as data from heavy-tailed distributions. The Pitman Measure of Closeness (PMC) is a criterion for how close an estimator is to its parameter relative to another estimator. In this article, using PMC and building on k-means clustering, a new distance and clustering algorithm is developed for heavy-tailed data.
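Since the abstract does not reproduce the PMC-based distance itself, the sketch below only illustrates the underlying k-means (Lloyd) iteration with a pluggable distance function; the squared Euclidean placeholder and the helper name kmeans are illustrative assumptions, not the article's method.

```python
# Minimal sketch of Lloyd's k-means iteration with a pluggable distance function.
# The PMC-based distance from the article is not given in the abstract, so a
# squared Euclidean placeholder is used here.
import numpy as np

def kmeans(X, k, distance=None, n_iter=100, seed=0):
    """X: (n, d) array; returns (labels, centers)."""
    if distance is None:
        distance = lambda a, b: np.sum((a - b) ** 2, axis=-1)  # placeholder metric
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assignment step: nearest center under the chosen distance
        d = np.stack([distance(X, c) for c in centers], axis=1)  # (n, k)
        labels = d.argmin(axis=1)
        # update step: recompute centers (the mean; a robust location estimate
        # such as the componentwise median would suit heavy-tailed data better)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```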

7.
Food safety scares make headlines. Fish in particular causes worry: should we eat it for our health, or avoid it for the pollutants that it accumulates? David Mortimer explains how the Food Standards Agency decides how much fish it is safe for us to eat.

8.
Currently there is much interest in using microarray gene-expression data to form prediction rules for the diagnosis of patient outcomes. A process of gene selection is usually carried out first to find those genes that are most useful according to some criterion for distinguishing between the given classes of tissue samples. However, there is a bias (selection bias) introduced in the estimate of the final version of a prediction rule that has been formed from a smaller subset of the genes that have been selected according to some optimality criterion. In this paper, we focus on the bias that arises when a full data set is not available in the first instance and the prediction rule is formed subsequently by working with the top-ranked genes from the full set. We demonstrate how large the subset of top genes must be before this selection bias is not of practical consequence.
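The phenomenon is easy to reproduce on simulated pure-noise data; the sketch below (not from the paper, and assuming scikit-learn is available) contrasts ranking genes on the full data before cross-validation with ranking them inside each training fold.

```python
# Sketch of selection bias with top-ranked features: on pure-noise data,
# selecting genes using ALL the data before cross-validation inflates the
# estimated accuracy, whereas selecting inside each training fold does not.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2000))      # 60 samples, 2000 "genes", no real signal
y = rng.integers(0, 2, size=60)      # random class labels

# Biased: pick the top 20 genes using all the data, then cross-validate.
top = SelectKBest(f_classif, k=20).fit(X, y).get_support()
biased = cross_val_score(LogisticRegression(max_iter=1000), X[:, top], y, cv=5).mean()

# Unbiased: redo the gene ranking inside every training fold via a pipeline.
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression(max_iter=1000))
unbiased = cross_val_score(pipe, X, y, cv=5).mean()

print(f"selection outside CV: {biased:.2f}   selection inside CV: {unbiased:.2f}")
```

The first score typically lands well above 0.5 while the second stays near 0.5, even though the labels carry no signal, which illustrates the selection bias at issue.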

9.
In a rank-order choice-based conjoint experiment, the respondent is asked to rank a number of alternatives in a number of choice sets. In this paper, we study the efficiency of such experiments and propose a D-optimality criterion for rank-order experiments to find designs yielding the most precise parameter estimators. For that purpose, an expression for the Fisher information matrix of the rank-ordered conditional logit model is derived, which clearly shows how much additional information is provided by each extra ranking step. A simulation study shows that, besides the Bayesian D-optimal ranking design, the Bayesian D-optimal choice design is also an appropriate design for this type of experiment. Finally, it is shown that considerable improvements in estimation and prediction accuracy are obtained by including extra ranking steps in an experiment.
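For reference, the rank-ordered ("exploded") conditional logit probability of a full ranking $r_1 \succ \cdots \succ r_J$ of the J alternatives in a choice set, written with linear utilities $x^{\top}\beta$ (a standard result quoted here as background, not taken from the abstract):

```latex
% The ranking decomposes into a sequence of best-choices from the remaining alternatives.
P(r_1 \succ r_2 \succ \cdots \succ r_J \mid \beta)
  = \prod_{j=1}^{J-1}
    \frac{\exp\!\left(x_{r_j}^{\top}\beta\right)}
         {\sum_{k=j}^{J} \exp\!\left(x_{r_k}^{\top}\beta\right)}
```

Each additional ranking step contributes one more factor to this product, which is the source of the extra Fisher information referred to in the abstract.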

10.
Summary.  The concept of reliability denotes one of the most important psychometric properties of a measurement scale. Reliability refers to the capacity of the scale to discriminate between subjects in a given population. In classical test theory, it is often estimated by using the intraclass correlation coefficient based on two replicate measurements. However, the modelling framework that is used in this theory is often too narrow when applied in practical situations. Generalizability theory has extended reliability theory to a much broader framework but is confronted with some limitations when applied in a longitudinal setting. We explore how the definition of reliability can be generalized to a setting where subjects are measured repeatedly over time. On the basis of four defining properties for the concept of reliability, we propose a family of reliability measures which circumscribes the area in which reliability measures should be sought. It is shown how different members assess different aspects of the problem and that the reliability of the instrument can depend on the way that it is used. The methodology is motivated by and illustrated on data from a clinical study on schizophrenia. On the basis of this study, we estimate and compare the reliabilities of two different rating scales to evaluate the severity of the disorder.
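As background (the classical-test-theory definition, not taken from the paper), the intraclass correlation for replicate measurements under a one-way random-effects model is:

```latex
% Y_{ij} = \mu + b_i + \varepsilon_{ij}, \quad
% b_i \sim N(0,\sigma_b^2), \quad \varepsilon_{ij} \sim N(0,\sigma_e^2)
\rho \;=\; \frac{\sigma_b^{2}}{\sigma_b^{2} + \sigma_e^{2}}
```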

11.
The two-parameter Inverse Gaussian (IG) distribution is often appropriate for modeling nonnegative right-skewed data because of its striking similarities with the Gaussian distribution in its basic properties and inference methods. About 40 such G-IG analogies have been developed in the literature and were most recently tabulated by Mudholkar and Wang. Of these, the earliest and most commonly noted similarities are the significance tests based on Student's t and F distributions for the homogeneity of one, two or several means of IG populations. However, unlike the corresponding tests in Gaussian theory, little is known about the power functions of these basic tests. In this article, we employ the IG-related root-reciprocal IG distribution and a notion of Reciprocal Symmetry to establish the monotonicity of the power function of the test of significance for the IG mean.
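For reference, the two-parameter IG density with mean $\mu$ and shape $\lambda$ (a standard definition, included only as background):

```latex
f(x;\mu,\lambda) \;=\; \sqrt{\frac{\lambda}{2\pi x^{3}}}\;
  \exp\!\left(-\frac{\lambda (x-\mu)^{2}}{2\mu^{2}x}\right),
\qquad x > 0,\ \mu > 0,\ \lambda > 0
```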

12.
Few creatures carry more emotion on their broad backs than whales; and few issues arouse as much passion as whaling. Each year around this time the International Whaling Commission comes under pressure to allow the resumption of commercial catching and killing of whales and the Save the Whale lobbyists protest. But how many whales are there? Can the scientists and statisticians tell us—and how much influence do they wield in the real world of whale-politik? Philip Hammond, a former Chairman of the Scientific Committee of the IWC, explains.

13.
14.
Assessment of efficacy in important subgroups – such as those defined by sex, age, race and region – in confirmatory trials is typically performed using a separate analysis of the specific subgroup. This ignores relevant information from the complementary subgroup. Bayesian dynamic borrowing uses an informative prior based on an analysis of the complementary subgroup, combined with a weak prior distribution centred on a mean of zero, to construct a robust mixture prior. This combination of priors allows for dynamic borrowing of prior information; the analysis learns how much of the complementary subgroup's prior information to borrow based on the consistency between the subgroup of interest and the complementary subgroup. A tipping point analysis can be carried out to identify how much prior weight needs to be placed on the complementary subgroup component of the robust mixture prior to establish efficacy in the subgroup of interest. An attractive feature of the tipping point analysis is that it enables the evidence from the source subgroup, the evidence from the target subgroup, and the combined evidence to be displayed alongside each other. This method is illustrated with an example trial in severe asthma, where efficacy in the adolescent subgroup was assessed using a mixture prior combining an informative prior from the adult data in the same trial with a non-informative prior.
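A minimal numerical sketch of the robust-mixture-prior mechanics under normal approximations is given below; all numbers, the prior settings and the helper name mixture_posterior are illustrative assumptions, not the asthma trial data. The posterior weight on the informative component adapts to how consistent the target-subgroup estimate is with the complementary subgroup, and scanning the prior weight gives a simple tipping-point display.

```python
# Sketch of Bayesian dynamic borrowing with a robust mixture prior
# (normal approximations; all numbers are illustrative, not trial data).
import numpy as np
from scipy.stats import norm

def mixture_posterior(y, se, m_inf, s_inf, w, m_vague=0.0, s_vague=10.0):
    """Posterior for theta given estimate y (standard error se) under the prior
    w * N(m_inf, s_inf^2) + (1 - w) * N(m_vague, s_vague^2)."""
    comps = [(m_inf, s_inf, w), (m_vague, s_vague, 1.0 - w)]
    post = []
    for m, s, wk in comps:
        marg = norm.pdf(y, loc=m, scale=np.sqrt(s**2 + se**2))  # prior predictive of y
        v = 1.0 / (1.0 / s**2 + 1.0 / se**2)                    # conjugate normal update
        mu = v * (m / s**2 + y / se**2)
        post.append((wk * marg, mu, np.sqrt(v)))
    norm_const = sum(p[0] for p in post)
    return [(wk / norm_const, mu, sd) for wk, mu, sd in post]

# Tipping-point scan: how much prior weight on the complementary (adult) component
# is needed before the posterior in the target subgroup crosses a chosen threshold.
y_target, se_target = 0.15, 0.20      # hypothetical subgroup estimate
m_adult, s_adult = 0.40, 0.10         # hypothetical complementary-subgroup prior
for w in np.linspace(0.0, 1.0, 6):
    comps = mixture_posterior(y_target, se_target, m_adult, s_adult, w)
    post_mean = sum(wk * mu for wk, mu, _ in comps)
    print(f"prior weight {w:.1f} -> posterior weight on adult prior "
          f"{comps[0][0]:.2f}, posterior mean {post_mean:.3f}")
```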

15.
In statistics, Fourier series have been used extensively in such areas as time series and stochastic processes. These series, however, have to a large degree been neglected with regard to their use in statistical distribution theory. This omission appears quite striking when one considers that, after the elementary functions, the trigonometric functions are the most important functions in applied mathematics. In this paper a procedure is developed for representing the distribution functions of finite-range random variables as Fourier series whose coefficients are easily expressible (using Chebyshev polynomials) in terms of the moments of the distribution. This method allows the evaluation of probabilities for a wide class of distributions. It is applied to the …
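One standard construction along these lines (a sketch of the idea under the assumption that X has been rescaled to [-1, 1]; not necessarily the paper's exact formulation) expands the density against the Chebyshev polynomials $T_n$, whose expectations are polynomials in the moments, e.g. $E[T_2(X)] = 2E[X^2] - 1$:

```latex
% Expansion of a density f on [-1,1] in the system T_n(x)/\sqrt{1-x^2};
% by Chebyshev orthogonality the coefficients are the expectations E[T_n(X)].
f(x) \;=\; \frac{1}{\pi\sqrt{1-x^{2}}}
  \left[\,1 + 2\sum_{n=1}^{\infty} E\!\left[T_n(X)\right] T_n(x)\right],
\qquad F(x) = \int_{-1}^{x} f(t)\,dt
```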

16.
AStA Advances in Statistical Analysis - In the Corona pandemic, it became abundantly clear how much good-quality statistics are needed, and at the same time how unsuccessful we are at...

17.
The question of how much information is contained in an ordered observation was studied by Tukey (1964) in terms of a linear sensitivity measure. This paper deals with exact Fisher information for censored data. The concept of the hazard rate function is extended, and some fundamental moment relations are established between these functions and the score functions. Some new moment equalities are obtained for the normal and gamma distributions.
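For reference, the hazard rate, score function and Fisher information that these moment relations connect are (standard definitions, not specific to the paper):

```latex
h(x) \;=\; \frac{f(x)}{1 - F(x)}, \qquad
u(x;\theta) \;=\; \frac{\partial}{\partial\theta}\log f(x;\theta), \qquad
I(\theta) \;=\; E\!\left[\,u(X;\theta)^{2}\right]
```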

18.
Dose-finding in clinical studies is typically formulated as a quantile estimation problem, for which a correct specification of the variance function of the outcomes is important. This is especially true for sequential studies, where the variance assumption is directly involved in the generation of the design points and hence sensitivity analysis cannot be performed after the data are collected. In this light, there is a strong reason for avoiding parametric assumptions on the variance function, although this may incur a loss of efficiency. In this paper, we investigate how much information one may retrieve by making additional parametric assumptions on the variance in the context of a sequential least squares recursion. By asymptotic comparison, we demonstrate that assuming homoscedasticity achieves only a modest efficiency gain compared with nonparametric variance estimation: when homoscedasticity in truth holds, the latter is at worst 88% as efficient as the former in the limiting case, and often achieves well over 90% efficiency in most practical situations. Extensive simulation studies concur with this observation under a wide range of scenarios.

19.
This article extends the standard regression discontinuity (RD) design to allow for sample selection or missing outcomes. We deal with both treatment endogeneity and sample selection. Identification in this article does not require any exclusion restrictions in the selection equation, nor does it require specifying any selection mechanism. The results can therefore be applied broadly, regardless of how sample selection is incurred. Identification instead relies on smoothness conditions. Smoothness conditions are empirically plausible, have readily testable implications, and are typically assumed even in the standard RD design. We first provide identification of the “extensive margin” and “intensive margin” effects. Then, based on these identification results and principal stratification, sharp bounds are constructed for the treatment effects among the group of individuals that may be of particular policy interest, that is, the always-participating compliers. These results are applied to evaluate the impacts of academic probation on college completion and final GPAs. Our analysis reveals striking gender differences at the extensive versus the intensive margin in response to this negative signal on performance.

20.
For much of 1970 and during the early part of 1971 some forty members of the American Statistical Association, working as seven task force groups under the guidance of a special Steering Committee, gave much time and thought to an examination of the objectives and activities of ASA. Their purpose was to try to determine what those objectives should be and how best to accomplish them. After careful review and study by the Board of ASA, the task force and steering committee findings and recommendations were adopted, modified or supplemented and issued as an integrated Board report under the title of A Study of Future Goals of ASA.
