Similar documents
 20 similar documents found (search time: 62 ms)
1.
This article introduces principles of learning, based on research in cognitive science, that help explain how learning works. We adapt these principles to the teaching of statistical practice and illustrate their application to the curricular design of a new master's degree program in applied statistics. We emphasize how these principles can be used to improve instruction not only at the course level but also at the program level.

2.
Beginning statistics at the undergraduate level can be taught by using a few geometric principles of linear vector space theory. Even formulas for simple sample means and variances can be derived with these principles by assuming a univariate linear statistical model. The least squares estimator of the sample mean is found by a perpendicular projection. The analogy of the bivariate model to the univariate model is indicated, and an analogous perpendicular projection solution for it is shown. Vector geometric diagrams illustrate the basic concepts.

Once the basic technique is understood, the appropriate application of perpendicular projections can be used to illustrate the problems of multicollinearity and tests of hypotheses in regression models. The translation of the geometric concepts into concrete algebraic equations is shown. The emphasis is on geometric thinking as a means of visualizing, and thereby improving, an understanding of methods of data analysis.
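The projection view of the sample mean described above is easy to check numerically. The following sketch is a hypothetical illustration (not taken from the article): it projects a data vector onto the span of the ones vector and recovers the sample mean as the projection coefficient.

```python
import numpy as np

# Least squares view of the sample mean: project the data vector y
# perpendicularly onto the subspace spanned by the ones vector.
y = np.array([2.0, 4.0, 6.0, 8.0])
ones = np.ones_like(y)

# The projection coefficient (1'y)/(1'1) is exactly the sample mean.
mean_hat = (ones @ y) / (ones @ ones)
proj = mean_hat * ones
print(mean_hat)  # 5.0

# The residual vector is orthogonal to the ones vector.
resid = y - proj
print(abs(resid @ ones) < 1e-12)  # True
```

The same picture extends to regression: replacing the ones vector with a design matrix turns the projection coefficient into the least squares coefficient vector.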

3.
ABSTRACT

Expert opinion and judgment enter into the practice of statistical inference and decision-making in numerous ways. Indeed, there is essentially no aspect of scientific investigation in which judgment is not required. Judgment is necessarily subjective, but should be made as carefully, as objectively, and as scientifically as possible.

Elicitation of expert knowledge concerning an uncertain quantity expresses that knowledge in the form of a (subjective) probability distribution for the quantity. Such distributions play an important role in statistical inference (for example as prior distributions in a Bayesian analysis) and in evidence-based decision-making (for example as expressions of uncertainty regarding inputs to a decision model). This article sets out a number of practices through which elicitation can be made as rigorous and scientific as possible.

One such practice is to follow a recognized protocol that is designed to address and minimize the cognitive biases that experts are prone to when making probabilistic judgments. We review the leading protocols in the field, and contrast their different approaches to dealing with these biases through the medium of a detailed case study employing the SHELF protocol.

The article ends with discussion of how to elicit a joint probability distribution for multiple uncertain quantities, which is a challenge for all the leading protocols. Supplementary materials for this article are available online.
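As a minimal illustration of turning elicited judgments into a distribution, the sketch below fits a normal prior to two quantiles stated by an expert. The numbers are invented, and real protocols such as SHELF support many distributional families and a much richer fitting process; this only shows the basic quantile-matching idea.

```python
from statistics import NormalDist

# Invented expert judgments: the elicited 25th and 75th percentiles
# of an uncertain quantity.
q25, q75 = 10.0, 18.0

# Match the two quantiles to a normal distribution.
z25 = NormalDist().inv_cdf(0.25)
z75 = NormalDist().inv_cdf(0.75)
sigma = (q75 - q25) / (z75 - z25)
mu = q25 - sigma * z25

prior = NormalDist(mu, sigma)
print(round(mu, 6))              # 14.0
print(round(prior.cdf(q75), 2))  # 0.75 -- the fit reproduces the judgment
```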

4.
Summary.  In recent years, advances in Markov chain Monte Carlo techniques have had a major influence on the practice of Bayesian statistics. An interesting but hitherto largely underexplored corollary of this fact is that Markov chain Monte Carlo techniques make it practical to consider broader classes of informative priors than have been used previously. Conjugate priors, long the workhorse of classic methods for eliciting informative priors, have their roots in a time when modern computational methods were unavailable. In the current environment more attractive alternatives are practicable. A reappraisal of these classic approaches is undertaken, and principles for generating modern elicitation methods are described. A new prior elicitation methodology in accord with these principles is then presented.

5.
For the interaction between the biostatistician and the clinician or research investigator to be successful, it is important not only for the investigator to be able to explain biological and medical principles in a way the biostatistician can understand, but also for the biostatistician to have tools that help the investigator understand both the practice of statistics and specific statistical methods. In our practice, we have found it useful to draw analogies between statistical concepts and familiar medical or everyday ideas. These analogies help to stress a point or provide understanding on the part of the investigator. For example, explaining the reason for using a nonparametric procedure (a general procedure used when the underlying distribution of the data is not known or cannot be assumed) by comparing it to using broad-spectrum antibiotics (general antibiotics used when the specific bacteria causing an infection are unknown) can be an effective teaching tool. We present a variety of useful (and hopefully amusing) analogies that statisticians can adopt to help investigators at all levels of experience better understand the principles and practice of statistics.

6.
Traditionally, experience ratemaking has been based on Bühlmann's credibility theory, which, apart from net premiums, has rarely been applied to other premium calculation principles. This article uses Bühlmann's credibility procedure to estimate the moment-generating functions (MGFs) of risks and then deduces estimates of the moments of those risks. For premium calculation principles that can be expressed as functions of certain moments, or more directly of the MGFs, the article develops a new type of experience ratemaking method by means of the estimated MGFs and discusses its consistency and asymptotic normality. Numerical simulation shows that, under the Esscher and exponential premium principles, the new credibility estimates are better than existing credibility estimates in the literature.

7.
R, s and s² charts with estimated control limits are widely used in practice. Common practice in control-chart theory is to estimate the control limits using data from the process and, once the process is determined to be in control, to treat the resulting control limits as though fixed. While there are empirical rules for setting up the control charts using past or trial data, little is known about the run length distributions of these charts when the fact that the control limits are estimated is taken into account. In this paper, we derive and evaluate the run length distributions associated with the R, s and s² charts when the process standard deviation σ is estimated. The results are then used to discuss the appropriateness of the widely followed empirical rules for choosing the number m of samples and the sample size n.
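The effect of estimated limits on run length can be seen in a small Monte Carlo sketch. This is a hedged illustration, not the paper's derivation: it uses an X-bar chart with 3-sigma limits rather than the R, s or s² charts, and omits refinements such as the c4 bias correction for the standard deviation estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

def average_run_length(m=25, n=5, reps=200, cap=20000):
    """Monte Carlo in-control ARL of an X-bar chart whose 3-sigma
    limits are computed from m preliminary samples of size n."""
    lengths = []
    for _ in range(reps):
        # Phase I: estimate the mean and (pooled) standard deviation.
        phase1 = rng.normal(size=(m, n))
        mu_hat = phase1.mean()
        sigma_hat = np.sqrt(phase1.var(axis=1, ddof=1).mean())
        lcl = mu_hat - 3 * sigma_hat / np.sqrt(n)
        ucl = mu_hat + 3 * sigma_hat / np.sqrt(n)
        # Phase II: count in-control samples until a (false) signal.
        for t in range(1, cap + 1):
            xbar = rng.normal(size=n).mean()
            if xbar < lcl or xbar > ucl:
                lengths.append(t)
                break
        else:
            lengths.append(cap)
    return float(np.mean(lengths))

# With known parameters the in-control ARL would be about 370;
# estimating the limits changes the whole run length distribution.
print(average_run_length())
```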

8.
Separability assumptions on functional structure have received a great deal of attention from econometricians and economic theorists because (a) separability provides the fundamental linkage between aggregation over goods and the maximization principles in economic theory, (b) separability provides the theoretical basis for partitioning the economy's structure into sectors, and (c) separability provides a theoretical hypothesis, which can produce parameter restrictions, permitting great simplification in estimation of large demand systems. The power of the various available tests for separability has never been determined, however. We conduct Monte Carlo studies to examine the capability of currently available methods to provide correct inferences about separability.

9.
Developments in the theory of frequentist parametric inference in recent decades have been driven largely by the desire to achieve higher-order accuracy, in particular distributional approximations that improve on first-order asymptotic theory by one or two orders of magnitude. At the same time, much methodology is specifically designed to respect key principles of parametric inference, in particular conditionality principles. Two main routes to higher-order accuracy have emerged: analytic methods based on 'small-sample asymptotics', and simulation, or 'bootstrap', approaches. It is argued here that, of these, the simulation methodology provides a simple and effective approach, which nevertheless retains finer inferential components of theory. The paper seeks to track likely developments of parametric inference, in an era dominated by the emergence of methodological problems involving complex dependences and/or high-dimensional parameters that typically exceed available data sample sizes.
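As a concrete, hedged example of the simulation route (a generic parametric bootstrap on invented data, not a method from the paper): a percentile interval for an exponential rate is obtained by resampling from the fitted model rather than relying on the first-order normal approximation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented data: 30 exponential observations with true rate 0.5.
x = rng.exponential(scale=2.0, size=30)
rate_hat = 1.0 / x.mean()  # maximum likelihood estimate of the rate

# Parametric bootstrap: resample from the *fitted* exponential model.
B = 4000
boot = np.empty(B)
for b in range(B):
    xb = rng.exponential(scale=1.0 / rate_hat, size=x.size)
    boot[b] = 1.0 / xb.mean()

# 95% percentile interval for the rate.
lo, hi = np.percentile(boot, [2.5, 97.5])
print(rate_hat, lo, hi)
```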

10.
Modelling and simulation (M&S) is increasingly being applied in (clinical) drug development. It provides an opportune area for the community of pharmaceutical statisticians to pursue. In this article, we highlight useful principles behind the application of M&S. We claim that M&S should be focussed on decisions, tailored to its purpose and based in applied sciences, not relying entirely on data-driven statistical analysis. Further, M&S should be a continuous process making use of diverse information sources and applying Bayesian and frequentist methodology, as appropriate. In addition to forming a basis for analysing decision options, M&S provides a framework that can facilitate communication between stakeholders. Besides the discussion on modelling philosophy, we also describe how standard simulation practice can be ineffective and how simulation efficiency can often be greatly improved.  相似文献   

11.
Non-parametric Estimation of Tail Dependence
Abstract.  Dependencies between extreme events (extremal dependencies) are attracting increasing attention in modern risk management. In practice, the concept of tail dependence represents the current standard for describing the amount of extremal dependence. In theory, multivariate extreme-value theory turns out to be the natural choice for modelling these dependencies. The present paper embeds tail dependence into the concept of tail copulae, which describes the dependence structure in the tail of multivariate distributions but works more generally. Various non-parametric estimators for tail copulae and tail dependence are discussed, and weak convergence, asymptotic normality, and strong consistency of these estimators are shown by means of a functional delta method. Further, weak convergence of general upper-order rank statistics for extreme events is investigated and the relationship to tail dependence is provided. A simulation study compares the introduced estimators, and two financial data sets are analysed using our methods.
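A minimal version of one such rank-based estimator can be sketched directly. This hypothetical illustration (not the paper's exact estimator) counts how often the k largest values of one margin coincide with the k largest of the other, which estimates the upper tail dependence coefficient.

```python
import numpy as np

def upper_tail_dependence(x, y, k):
    """Rank-based estimate of upper tail dependence: the fraction of
    the k largest x-observations whose paired y-value is also among
    the k largest y-observations."""
    n = len(x)
    rx = np.argsort(np.argsort(x))  # ranks 0..n-1
    ry = np.argsort(np.argsort(y))
    joint = np.sum((rx >= n - k) & (ry >= n - k))
    return joint / k

rng = np.random.default_rng(0)
n, k = 5000, 100

# Independent margins: estimated tail dependence should be near 0.
x, y = rng.normal(size=n), rng.normal(size=n)
print(upper_tail_dependence(x, y, k))

# Comonotone margins: ranks coincide, so the estimate is exactly 1.
z = rng.normal(size=n)
print(upper_tail_dependence(z, 2 * z + 1, k))  # 1.0
```

The choice of k trades variance against bias, which is exactly the kind of issue the paper's asymptotic results address.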

12.
ABSTRACT

Relying on effect size as a measure of practical significance is turning out to be just as misleading as using p-values to determine the effectiveness of interventions for improving clinical practice in complex organizations such as schools. This article explains how effect sizes have misdirected practice in education and other disciplines. Even when effect size is incorporated into RCT research, the recommendations as to whether interventions are effective are misleading and generally useless to practitioners. As a result, a new criterion of practical benefit is recommended for evaluating research findings about the effectiveness of interventions in complex organizations where benchmarks of existing performance exist. Practical benefit exists when the unadjusted performance of an experimental group provides a noticeable advantage over an existing benchmark. Some basic principles for determining practical benefit are provided. Practical benefit is more intuitive and is expected to enable leaders to make more accurate assessments as to whether published research findings are likely to produce noticeable improvements in their organizations. In addition, practical benefit is used routinely as the research criterion in the alternative scientific methodology of improvement science, which has an established track record of developing practice-improving interventions more efficiently than RCT research. Finally, the problems with practical significance suggest that the research community should seek different inferential methods for research designed to improve clinical performance in complex organizations, as compared to methods for testing theories and medicines.

13.
ABSTRACT

Longstanding concerns with the role and interpretation of p-values in statistical practice prompted the American Statistical Association (ASA) to make a statement on p-values. The ASA statement spurred a flurry of responses and discussions by statisticians, with many wondering about the steps necessary to expand the adoption of these principles. Introductory statistics classrooms are key locations to introduce and emphasize the nuance related to p-values, in part because they ingrain appropriate analysis choices at the earliest stages of statistics education, and also because they reach the broadest group of students. We propose a framework for statistics departments to conduct a content audit for p-value principles in their introductory curriculum. We then discuss the process and results of applying this course audit framework within our own statistics department. We also recommend meeting with client departments as a complement to the course audit. Discussions about analyses and practices common to particular fields can help to evaluate whether our service courses are meeting the needs of client departments and to identify what is needed in our introductory courses to combat the misunderstanding and future misuse of p-values.

14.
Performance Evaluation of University Staff Based on Cloud Theory
Wu Zhengyang (吴正洋), 《统计教育》, 2010, (12): 31-34, 41
Staff performance appraisal is an important component of personnel assessment in enterprises and public institutions. The evaluation system for university staff contains a large number of qualitatively described indicators, and converting these qualitative indicators into quantitative ones in a scientific way is an important factor in making the appraisal more accurate and scientific. Cloud theory, built on the cross-fertilization of traditional fuzzy set theory and probability theory, provides a conversion model between a qualitative concept expressed in natural-language values and its quantitative representation. This article introduces the cloud centre-of-gravity evaluation method into the performance appraisal of university staff, establishes a performance evaluation system for university staff, and demonstrates through a worked example that the method conveniently and effectively solves the indicator-conversion problem in university staff performance evaluation.

15.
The change from the z of “Student's” 1908 paper to the t of present day statistical theory and practice is traced and documented. It is shown that the change was brought about by the extension of “Student's” approach, by R.A. Fisher, to a broader class of problems, in response to a direct appeal from “Student” for a solution to one of these problems.

16.
Lu Weichun (路维春) and Wu Yimin (吴诣民), 《统计研究》, 1999, 16(11): 26-29
I. Proposing a methodological system for statistical theory research. For a long time, China's statistical community has, consciously or not, employed various research methods in statistical theory research, but these methods have remained largely scattered and unsystematized, which may be an important reason why statistical theory research in China still lags behind. As the market economy continues to mature and statistical theory research deepens, it has become clear that, as the field moves into new areas and reaches for new heights, the existing methods can no longer meet the needs of research; objective reality requires us to continually take stock of past research methods and create new ones. Research on the methods of statistical theory research itself has therefore been put on the agenda. In fact, with today's rapid advances in science and technology, …

17.
Recent contributions to kernel smoothing show that the performance of cross-validated bandwidth selectors improves significantly from indirectness and that the recent do-validated method seems to provide the most practical alternative among these methods. In this paper we show step by step how classical cross-validation improves in theory, as well as in practice, from indirectness and that do-validated estimators improve in theory, but not in practice, from further indirectness. This paper therefore provides a strong support for the practical and theoretical properties of do-validated bandwidth selection. Do-validation is currently being introduced to survival analysis in a number of contexts and this paper provides evidence that this might be the immediate step forward.
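For readers unfamiliar with the baseline being improved upon, the sketch below implements classical least squares cross-validation for a Gaussian-kernel density estimator on simulated data; do-validation itself is not shown.

```python
import numpy as np

def lscv_score(x, h):
    """Least squares cross-validation criterion for a Gaussian-kernel
    density estimate with bandwidth h (up to the constant integral of
    the squared true density)."""
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    # Integral of fhat^2: the Gaussian kernel convolved with itself
    # is a N(0, 2) density.
    conv = np.exp(-d**2 / 4) / np.sqrt(4 * np.pi)
    term1 = conv.sum() / (n**2 * h)
    # Leave-one-out term: drop the diagonal (i == j) contributions.
    ker = np.exp(-d**2 / 2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(ker, 0.0)
    term2 = 2 * ker.sum() / (n * (n - 1) * h)
    return term1 - term2

rng = np.random.default_rng(2)
x = rng.normal(size=200)
grid = np.linspace(0.05, 1.5, 60)
h_cv = grid[np.argmin([lscv_score(x, h) for h in grid])]
print(h_cv)  # an interior bandwidth, broadly comparable to rule-of-thumb values
```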

18.
Searching for Chaos in Economic Data
Liu Hong (刘洪), 《统计研究》, 1997, 14(6): 61-63
ABSTRACT: It is a challenge to traditional economic theory and methodology that economic systems can create chaotic behavior, and chao...

19.
The struggle to find and use indicators of sustainable development is intimately bound up with the process of deciding what we mean by sustainable development and what we shall do about it. In this field at least, indicators are intrinsically and unavoidably normative and political. The paper proposes an approach to indicators which reflects, and can further clarify and help to achieve, an important aspect of sustainable development. The paper is written from a practical, instrumental interest in indicators as a tool to put sustainable development principles into practice in public policy. The author is not a statistician and makes no claim to technical expertise, but hopes that this 'barefoot' practitioner perspective may be of some interest to the professionals. The main argument is introduced by a discussion of some of the pitfalls and limitations of sustainability indicators to date.

20.
In applied statistical data analysis, overdispersion is a common feature. It can be addressed using both multiplicative and additive random effects. A multiplicative model for count data incorporates a gamma random effect as a multiplicative factor into the mean, whereas an additive model assumes a normally distributed random effect, entered into the linear predictor. Using Bayesian principles, these ideas are applied to longitudinal count data, based on the so-called combined model. The performance of the additive and multiplicative approaches is compared using a simulation study.
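The two random-effect constructions can be illustrated by simulation. This is a hedged sketch with invented parameter values, not the paper's combined model for longitudinal data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, mu, alpha = 20000, 5.0, 0.5

# Multiplicative route: a mean-one gamma effect multiplies the Poisson
# mean, giving a negative binomial marginal with Var = mu + alpha*mu^2.
theta = rng.gamma(shape=1 / alpha, scale=alpha, size=n)
y_mult = rng.poisson(mu * theta)
print(y_mult.mean(), y_mult.var())  # roughly 5 and 5 + 0.5*25 = 17.5

# Additive route: a normal effect enters the linear predictor on the
# log scale, again inflating the variance beyond the mean.
s2 = 0.25
b = rng.normal(0.0, np.sqrt(s2), size=n)
y_add = rng.poisson(np.exp(np.log(mu) + b))
print(y_add.var() > y_add.mean())  # True: overdispersed
```

The combined model in the paper uses both kinds of effect at once; these marginal checks only show why each one generates overdispersion.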
