Similar Documents
1.
Linear controls are a well-known, simple technique for achieving variance reduction in computer simulation. Unfortunately, the effectiveness of a linear control depends upon the correlation between the statistic of interest and the control, which is often low. Since statistics often have a nonlinear relationship with the potential control variables, nonlinear controls offer a means for improvement over linear controls. This paper focuses on the use of nonlinear controls for reducing the variance of quantile estimates in simulation. It is shown that one can substantially reduce the analytic effort required to develop a nonlinear control from a quantile estimator by using a strictly monotone transformation to create the nonlinear control. It is also shown that as one increases the sample size for the quantile estimator, the asymptotic multivariate normal distribution of the quantile of interest and the control reduces the effectiveness of the nonlinear control to that of the linear control. However, the data have to be sectioned to obtain an estimate of the variance of the controlled quantile estimate. Graphical methods are suggested for selecting the section size that maximizes the effectiveness of the nonlinear control.
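The baseline linear-control idea the paper builds on can be sketched in a few lines. This toy example (the target E[exp(Z)] and the control Z are invented for illustration, not taken from the paper) shows the variance drop when the control is correlated with the statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy target: estimate E[exp(Z)] for Z ~ N(0, 1); the control is Z itself,
# whose mean is known to be 0. Only the baseline linear control is shown;
# the paper's nonlinear/quantile construction is not reproduced here.
z = rng.standard_normal(n)
y = np.exp(z)

# Optimal linear coefficient: beta = Cov(Y, C) / Var(C)
beta = np.cov(y, z)[0, 1] / z.var()

controlled = y - beta * (z - 0.0)   # subtract beta * (control - known mean)

print(y.var(), controlled.var())    # variance drops when correlation is high
```

The controlled estimator keeps the same expectation but a smaller variance, by the factor 1 - corr(Y, C)^2.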

2.
The t-test of an individual coefficient is used widely in models of qualitative choice. However, it is well known that the t-test can yield misleading results when the sample size is small. This paper provides some experimental evidence on the finite sample properties of the t-test in models with sample selection biases, through a comparison of the t-test with the likelihood ratio and Lagrange multiplier tests, which are asymptotically equivalent to the squared t-test. The finite sample problems with the t-test are shown to be alarming, and much more serious than in models such as binary choice models. An empirical example is also presented to highlight the differences in the calculated test statistics.

3.
Regression plays a central role in the discipline of statistics and is the primary analytic technique in many research areas. Variable selection is a classical and major problem for regression. This article emphasizes the economic aspect of variable selection. The problem is formulated in terms of the cost of predictors to be purchased for future use: only the subset of covariates used in the model will need to be purchased. This leads to a decision-theoretic formulation of the variable selection problem, which includes the cost of predictors as well as their effect. We adopt a Bayesian perspective and propose two approaches to address uncertainty about the model and model parameters. These approaches, termed the restricted and extended approaches, lead us to rethink model averaging. From an objective or robust Bayes point of view, the former is preferred. The proposed method is applied to three popular datasets for illustration.
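The decision-theoretic flavor of cost-aware selection can be sketched as follows. This is not the paper's Bayesian machinery: the per-predictor cost vector, the in-sample loss, and the exhaustive search are all invented here for illustration:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, p = 200, 4
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, 0.0, 1.0, 0.0])   # predictors 1 and 3 are useless
y = X @ beta_true + rng.standard_normal(n)

cost = np.array([0.1, 0.1, 0.5, 2.0])        # hypothetical purchase prices

def score(subset):
    """In-sample MSE plus total purchase cost of the chosen predictors."""
    if not subset:
        return y.var()
    Xs = X[:, list(subset)]
    beta_hat, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta_hat
    return resid.var() + cost[list(subset)].sum()

subsets = [s for k in range(p + 1) for s in combinations(range(p), k)]
best = min(subsets, key=score)
print(best)   # trades prediction error against predictor cost
```

With these costs the useful cheap predictors are kept while useless or expensive ones are dropped, which is exactly the trade-off the formulation prices in.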

4.
Quantile regression is a technique to estimate conditional quantile curves. It provides a comprehensive picture of a response contingent on explanatory variables. In a flexible modeling framework, a specific form of the conditional quantile curve is not fixed a priori. This motivates a local parametric rather than a global fixed model fitting approach. A nonparametric smoothing estimator of the conditional quantile curve requires balancing local curvature against stochastic variability. In this paper, we suggest a local model selection technique that provides an adaptive estimator of the conditional quantile regression curve at each design point. Theoretical results show that the proposed adaptive procedure performs as well as an oracle which would minimize the local estimation risk for the problem at hand. We illustrate the performance of the procedure by an extensive simulation study and consider a couple of applications: to tail dependence analysis for the Hong Kong stock market and to analysis of the distributions of the risk factors of temperature dynamics.
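The simplest local estimator of a conditional quantile curve is a local-constant fit: take the sample quantile of the responses in a window around the design point. The sketch below uses a fixed bandwidth, whereas the paper's contribution is to choose it adaptively at each point; the data-generating curve is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.standard_normal(n) * 0.3

def local_quantile(x0, x, y, tau, h):
    """Local-constant conditional quantile: tau-quantile of the responses in
    a window of half-width h around x0 (fixed bandwidth; the paper's adaptive
    procedure selects the window size at each design point)."""
    mask = np.abs(x - x0) < h
    return np.quantile(y[mask], tau)

est = local_quantile(0.25, x, y, tau=0.5, h=0.05)
print(est)   # conditional median near sin(pi/2) = 1
```

A wider window lowers variance but raises curvature bias; the balance between the two is exactly what the adaptive bandwidth choice in the paper addresses.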

5.
Role-plays in which students act as clients and statistical consultants to each other in pairs have proved to be an effective class exercise. As well as helping to teach statistical methodology, they are effective at encouraging statistical thinking, problem solving, and the use of context in applied statistical problems, and at improving attitudes towards statistics and the statistics profession. Furthermore, they are fun. This paper explores the advantages of using role-plays and provides some empirical evidence supporting their success. The paper argues that there is a place for teaching statistical consulting skills well before the traditional post-graduate qualification in statistics, including to school students with no knowledge of techniques in statistical inference.

6.
The field of nonparametric function estimation has broadened its appeal in recent years with an array of new tools for statistical analysis. In particular, theoretical and applied research on wavelets has had a noticeable influence on statistical topics such as nonparametric regression, nonparametric density estimation, nonparametric discrimination and many other related topics. This is a survey article that attempts to synthesize a broad variety of work on wavelets in statistics and includes some recent developments in nonparametric curve estimation that have been omitted from review articles and books on the subject. After a short introduction to wavelet theory, wavelets are treated in the familiar context of estimation of "smooth" functions. Both "linear" and "nonlinear" wavelet estimation methods are discussed, and cross-validation methods for choosing the smoothing parameters are addressed. Finally, some areas of related research are mentioned, such as hypothesis testing, model selection, hazard rate estimation for censored data, and nonparametric change-point problems. The closing section formulates some promising research directions relating to wavelets in statistics.
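The "nonlinear" wavelet estimation the survey discusses amounts to transforming, thresholding the detail coefficients, and inverting. A single-level Haar version can be hand-rolled in a few lines (no wavelet library is assumed; the piecewise-constant signal and threshold value are invented for illustration):

```python
import numpy as np

def haar_level(x):
    """One level of the Haar transform: approximation and detail coefficients."""
    s = np.sqrt(2.0)
    return (x[0::2] + x[1::2]) / s, (x[0::2] - x[1::2]) / s

def haar_level_inv(a, d):
    """Inverse of one Haar level; exact reconstruction when d is unchanged."""
    s = np.sqrt(2.0)
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / s
    x[1::2] = (a - d) / s
    return x

def soft(c, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(3)
signal = np.repeat([0.0, 4.0, 1.0, 3.0], 64)        # piecewise-constant truth
noisy = signal + rng.standard_normal(signal.size) * 0.5

a, d = haar_level(noisy)
denoised = haar_level_inv(a, soft(d, 0.5))          # threshold details only
print(np.mean((denoised - signal) ** 2))            # below the noisy MSE
```

Because the truth is piecewise constant, its detail coefficients are essentially zero, so thresholding removes mostly noise; that sparsity is what makes nonlinear wavelet estimators effective.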

7.
This paper outlines and discusses the advantages of an ‘Introduction to Statistical Consulting’ course (ISC) that exposes students to statistical consulting early in their studies. The course is intended for students before, or while, they study their units in statistical techniques, and assumes only a first‐year introductory statistics unit. The course exposes undergraduate students to the application of statistics and helps develop statistical thinking. An important goal is to introduce students to work as a statistician early in their studies because this motivates some students to study statistics further and provides a framework to motivate the learning of further statistical techniques. The ISC has proved popular with students, and this paper discusses the reasons for this popularity and the benefits of an ISC to statistical education and the statistics profession.

8.
Xia Binsheng (夏滨生), 《统计研究》 (Statistical Research), 2008, 25(5): 9-18
Starting from an analysis of the essential attributes of statistics and its breadth as a discipline, this paper distinguishes the concepts of statistics in the broad and narrow senses and constructs an overall system of statistical concepts, intended to encompass the full scope of statistics and present a complete picture of the field, thereby offering a new perspective for understanding statistics comprehensively. The paper focuses on analyzing and interpreting the vertical structure of government statistics, untangling knots in the traditional understanding; this is the key to correctly understanding the composition of government statistics, and only on that basis can a system of statistical concepts be established.

9.
This paper provides a simple methodology for approximating the distribution of indefinite quadratic forms in normal random variables. It is shown that the density function of a positive definite quadratic form can be approximated in terms of the product of a gamma density function and a polynomial. An extension which makes use of a generalized gamma density function is also considered. Such representations are based on the moments of a quadratic form, which can be determined from its cumulants by means of a recursive formula. After expressing an indefinite quadratic form as the difference of two positive definite quadratic forms, one can obtain an approximation to its density function by means of the transformation of variable technique. An explicit representation of the resulting density approximant is given in terms of a degenerate hypergeometric function. An easily implementable algorithm is provided. The proposed approximants produce very accurate percentiles over the entire range of the distribution. Several numerical examples illustrate the results. In particular, the methodology is applied to the Durbin–Watson statistic, which is expressible as the ratio of two quadratic forms in normal random variables. Quadratic forms being ubiquitous in statistics, the approximating technique introduced here has numerous potential applications. Some relevant computational considerations are also discussed.
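The cumulants the method starts from have a closed form: for z ~ N(0, I), the r-th cumulant of z'Az is 2^(r-1) (r-1)! tr(A^r), so the mean is tr(A) and the variance is 2 tr(A^2). A quick Monte Carlo check (the matrix A below is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 0.5]])       # symmetric matrix of the form z'Az

# Cumulant formula for z ~ N(0, I): kappa_r = 2^(r-1) * (r-1)! * tr(A^r)
mean_exact = np.trace(A)              # r = 1
var_exact = 2.0 * np.trace(A @ A)     # r = 2

z = rng.standard_normal((200_000, A.shape[0]))
q = np.einsum('ij,jk,ik->i', z, A, z)  # quadratic form, one value per row
print(q.mean(), mean_exact)            # Monte Carlo agrees with the formula
```

Matching a gamma (or generalized gamma) density to these moments is then the approximation step the paper develops.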

10.
This paper studies penalized quantile regression for dynamic panel data with fixed effects, where the penalty involves l1 shrinkage of the fixed effects. Using extensive Monte Carlo simulations, we present evidence that the penalty term reduces the dynamic panel bias and increases the efficiency of the estimators. The underlying intuition is that there is no need to use instrumental variables for the lagged dependent variable in the dynamic panel data model without fixed effects. This provides an additional use for the shrinkage models, other than model selection and efficiency gains. We propose a Bayesian information criterion based estimator for the parameter that controls the degree of shrinkage. We illustrate the usefulness of the novel econometric technique by estimating a “target leverage” model that includes a speed of capital structure adjustment. Using the proposed penalized quantile regression model the estimates of the adjustment speeds lie between 3% and 44% across the quantiles, showing strong evidence that there is substantial heterogeneity in the speed of adjustment among firms.
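The objective being minimized combines the quantile check loss with an l1 penalty on the unit fixed effects. A minimal sketch of that objective (the variable names, the toy data, and the single-regressor layout are invented here; the paper chooses the shrinkage weight by a BIC-type criterion):

```python
import numpy as np

def check_loss(u, tau):
    """Koenker-Bassett check function rho_tau(u)."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def penalized_objective(alpha, beta, y, x, ids, tau, lam):
    """Check loss of the panel fit plus l1 shrinkage of fixed effects alpha.

    y, x : stacked observations; ids maps each row to its unit's alpha.
    lam  : shrinkage weight on the fixed effects.
    """
    resid = y - alpha[ids] - x * beta
    return check_loss(resid, tau).sum() + lam * np.abs(alpha).sum()

# tiny smoke example: 2 units, 3 observations each, no regressor effect
y = np.array([1.0, 1.2, 0.9, 2.0, 2.1, 1.8])
x = np.zeros(6)
ids = np.array([0, 0, 0, 1, 1, 1])
val = penalized_objective(np.array([1.0, 2.0]), 0.0, y, x, ids, tau=0.5, lam=0.1)
print(val)
```

Larger lam shrinks the alpha estimates toward zero, which is the mechanism the simulations credit with reducing the dynamic panel bias.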

11.
Apart from having intrinsic mathematical interest, order statistics are also useful in the solution of many applied sampling and analysis problems. For a general review of the properties and uses of order statistics, see David (1981). This paper provides tabulations of means and variances of certain order statistics from the gamma distribution, for parameter values not previously available. The work was motivated by a particular quota sampling problem, for which existing tables are not adequate. The solution to this sampling problem actually requires the moments of the highest order statistic within a given set; however, the calculation algorithm used involves a recurrence relation, which causes all the lower order statistics to be calculated first. Therefore we took the opportunity to develop more extensive tables for the gamma order statistic moments in general. Our tables provide values for the order statistic moments which were not available in previous tables, notably those for higher values of m, the gamma distribution shape parameter. However, we have also retained the corresponding statistics for lower values of m, first to allow for checking accuracy of the computations against previous tables, and second to provide an integrated presentation of our new results with the previously known values in a consistent format.
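The tabulated quantities can be approximated by straightforward Monte Carlo, which is a useful sanity check on any recurrence-based table (the shape parameter and sample size below are invented for illustration, not taken from the paper's tables):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n_units, reps = 2.0, 5, 100_000   # gamma shape m; samples of size 5

x = rng.gamma(shape=m, size=(reps, n_units))
x.sort(axis=1)             # each row: order statistics X(1) <= ... <= X(5)
os_means = x.mean(axis=0)  # Monte Carlo means of the five order statistics
print(os_means)            # increasing in the order index; average is E[X] = m
```

The largest order statistic's mean exceeds the population mean m, and the means average back to m across positions, both of which are easy consistency checks against tabled values.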

12.
Statistics PhDs are among the core forces of statistical research, and their dissertations reflect, to some extent, the hot topics and frontiers of statistical research in China at the time, representing the advanced level of Chinese statistical education. This paper surveys the topics and content of 509 statistics doctoral dissertations from 1987-2009, analyzes the patterns and characteristics of how topic selection has evolved, and summarizes its successes, shortcomings and lessons, providing a reference for future topic selection and for further deepening statistical research.

13.
International Experience in Revising Official Statistics
Wen Jianwu (文兼武) and Wang Shaoping (王少平), 《统计研究》 (Statistical Research), 2010, 27(10): 13-17
As official statistics attract growing public attention, the revision of official statistical data has become increasingly important. Scientific revision of statistical data is essential for ensuring data quality and maintaining the credibility of official statistics. This paper analyzes the main reasons why official statistics are revised abroad, describes the types, scope, conditions and procedures of data revision used by relevant international organizations and national statistical agencies, details the specific practices and experience of the United States and Australia, and draws several useful lessons, with a view to informing the design of a data revision system for China's official statistics.

14.
Scanner data offer a new technical paradigm for modernizing the source data of government statistics and for measuring the macroeconomy. Based on a review of how countries around the world use scanner data to compile their CPIs, and in light of the current state of scanner data in China and the characteristics of Chinese government price statistics, this paper proposes an approach to compiling China's CPI from scanner data, aiming to provide theoretical and practical guidance for the "big data"-driven reform of the source data of government statistics.

15.
The objective of this paper is to investigate through simulation the possible presence of the incidental parameters problem when performing frequentist model discrimination with stratified data. In this context, model discrimination amounts to considering a structural parameter taking values in a finite space, with k points, k≥2. This setting seems not to have been considered previously in the literature about the Neyman–Scott phenomenon. Here we provide Monte Carlo evidence of the severity of the incidental parameters problem also in the model discrimination setting and propose a remedy for a special class of models. In particular, we focus on models that are scale families in each stratum. We consider traditional model selection procedures, such as the Akaike and Takeuchi information criteria, together with the best frequentist selection procedure based on maximization of the marginal likelihood induced by the maximal invariant, or of its Laplace approximation. Results of two Monte Carlo experiments indicate that when the sample size in each stratum is fixed and the number of strata increases, correct selection probabilities for traditional model selection criteria may approach zero, unlike what happens for model discrimination based on exact or approximate marginal likelihoods. Finally, two examples with real data sets are given.
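The underlying Neyman–Scott phenomenon is easy to reproduce in its classical form: with strata of size 2, each with its own nuisance mean, the MLE of the common variance converges to half the true value rather than to the truth (a textbook illustration, not the paper's discrimination setting):

```python
import numpy as np

rng = np.random.default_rng(6)
n_strata, m, sigma = 5000, 2, 1.0

mu = rng.uniform(-5, 5, n_strata)        # incidental stratum means
x = mu[:, None] + rng.standard_normal((n_strata, m)) * sigma

# Profiling out each mu gives the MLE of sigma^2 as the pooled
# within-stratum sum of squares divided by m (not m - 1):
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
sigma2_mle = ss.sum() / (n_strata * m)
print(sigma2_mle)   # near sigma^2 * (m - 1)/m = 0.5, not 1.0
```

The bias does not vanish as the number of strata grows, only as the within-stratum size m grows; the paper's Monte Carlo evidence shows an analogous failure for model discrimination criteria.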

16.
Goodman and Kish (1950) introduced the problem of controlled selection in the sense of decreasing the selection probability of non-preferred combinations, such as, for example, combinations which present organisational difficulties involving additional cost. The authors (Avadhani and Sukhatme (1965), Sukhatme and Avadhani (1965)) have evolved certain techniques of controlled selection which eliminate altogether some non-preferred combinations and reduce the probability of selection of the remaining combinations, if any, to the minimum possible extent without deviating from the fundamental principles of random sampling. However, it is often felt convenient in practice to draw sampling units one after another from the population rather than combinations of units. But, at present, no technique by which one can select the units one after another and at the same time reduce the probability of selection of non-preferred units is available in the literature.
In this paper we have suggested a solution to this problem which not only minimizes the selection probability of non-preferred units (and of samples containing predominantly large numbers of non-preferred units) but also provides more efficient estimators than the usual probability proportional to size (P.P.S.) sampling scheme.

17.
Logistic discrimination is a well-documented method for classifying observations into two or more groups. However, estimation of the discriminant rule can be seriously affected by outliers. To overcome this, Cox and Ferry produced a robust logistic discrimination technique. Although their method worked in practice, parameter estimation was sometimes prone to convergence problems. This paper proposes a simplified robust logistic model which does not have any such problems and which takes a generalized linear model form. Misclassification rates calculated in a simulation exercise are used to compare the new method with ordinary logistic discrimination. Model diagnostics are also presented. The newly proposed model is then used on data collected from pregnant women at two district general hospitals. A robust logistic discriminant is calculated which can be used to predict accurately which method of feeding a woman will eventually use: breast feeding or bottle feeding.
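Ordinary (non-robust) logistic discrimination, the baseline being compared against, can be sketched with plain gradient ascent on the logistic log-likelihood (the two-Gaussian toy data and the optimization settings are invented for illustration; real fits would use IRLS or a library):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400
# two Gaussian groups separated along the first coordinate
x0 = rng.standard_normal((n // 2, 2)) + [-1.5, 0.0]
x1 = rng.standard_normal((n // 2, 2)) + [1.5, 0.0]
X = np.column_stack([np.ones(n), np.vstack([x0, x1])])   # intercept column
y = np.repeat([0.0, 1.0], n // 2)

w = np.zeros(3)
for _ in range(2000):                     # gradient ascent on the logistic
    p = 1.0 / (1.0 + np.exp(-X @ w))      # log-likelihood
    w += 0.01 * X.T @ (y - p) / n

pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(float)
acc = (pred == y).mean()
print(acc)   # well above chance for well-separated groups
```

Because every observation contributes to the gradient, a few gross outliers can pull w badly; that sensitivity is what the robust variants in the paper are designed to limit.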

18.
A substantial fraction of statistical analyses, and in particular of statistical computing, is done under the heading of multiple linear regression: the fitting of equations to multivariate data using the least squares technique for estimating parameters. The optimality properties of these estimates are described in an ideal setting which is not often realized in practice.

Frequently, we do not have "good" data, in the sense that the errors are non-normal or the variance is non-homogeneous. The data may contain outliers or extremes which are not easily detectable; the variables may not enter in the proper functional form; and the assumed linearity may not hold.

Prior to the mid-sixties, regression programs provided just the basic least squares computations, plus possibly a step-wise algorithm for variable selection. The increased interest in regression, prompted by dramatic improvements in computers, has led to a vast literature describing alternatives to least squares, improved variable selection methods, and extensive diagnostic procedures.

The purpose of this paper is to summarize and illustrate some of these recent developments. In particular, we review some of the potential problems with regression data, discuss the statistics and techniques used to detect these problems, and consider some of the proposed solutions. An example is presented to illustrate the effectiveness of these diagnostic methods in revealing such problems and the potential consequences of employing the proposed methods.
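Two of the standard diagnostics alluded to are leverages (the diagonal of the hat matrix) and studentized residuals. A minimal sketch on toy data with one planted outlier (the design, coefficients, and outlier size are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(8)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.standard_normal(n)
y[0] += 8.0                              # plant one gross outlier

H = X @ np.linalg.inv(X.T @ X) @ X.T     # hat (projection) matrix
h = np.diag(H)                           # leverages; they sum to p
resid = y - H @ y                        # least squares residuals
s2 = resid @ resid / (n - p)
student = resid / np.sqrt(s2 * (1 - h))  # internally studentized residuals
print(np.argmax(np.abs(student)))        # flags the planted outlier
```

Raw residuals understate outliers at high-leverage points because their variance is s^2 (1 - h_i); dividing by that standard error puts all observations on a comparable scale.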

19.
This paper describes the one‐day introduction to experimental design training course at GlaxoSmithKline. In particular, the use of paper helicopter experiments has been an effective and efficient method for teaching experimental design techniques to scientific and other staff. A good supporting strategy by which the statistics department provides back‐up following the course is essential. Copyright © 2002 John Wiley & Sons, Ltd.

20.
Let X1,…,Xn be exchangeable normal variables with a common correlation ρ, and let X(1) ≥ … ≥ X(n) denote their order statistics. The random variable ∑i=1..k X(i), the sum of the k largest order statistics, called the selection differential by geneticists, is of particular interest in genetic selection and related areas. In this paper we give results concerning a conjecture of Tong (1982) on the distribution of this random variable as a function of ρ. The same technique used can be applied to yield more general results for linear combinations of order statistics from elliptical distributions.
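Exchangeable normals with common correlation ρ can be simulated through a shared factor, which makes the selection differential easy to explore numerically (the values of n, k and ρ below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(9)
n, k, rho, reps = 10, 3, 0.5, 50_000

# Exchangeable construction: X_i = sqrt(rho)*Z0 + sqrt(1-rho)*Z_i, so that
# Corr(X_i, X_j) = rho for all i != j.
z0 = rng.standard_normal((reps, 1))
z = rng.standard_normal((reps, n))
x = np.sqrt(rho) * z0 + np.sqrt(1.0 - rho) * z

top_k_sum = np.sort(x, axis=1)[:, -k:].sum(axis=1)   # selection differential
print(top_k_sum.mean())
```

The common factor cancels out of the comparisons that determine the ordering, so the mean selection differential is sqrt(1 - rho) times its value in the independent case; varying rho in this simulation shows the kind of monotone behavior Tong's conjecture concerns.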
