Similar Documents
20 similar documents retrieved.
1.
In this article, we investigate a new procedure for estimating a linear quantile regression with possibly right-censored responses. Contrary to the main literature on the subject, we propose to circumvent the formulation of conditional quantiles through the so-called "check" loss function that stems from the influential work of Koenker and Bassett (1978). Instead, we suggest estimating the quantile coefficients by minimizing an alternative measure of distance. Our approach can be viewed as a generalization, to a parametric regression framework, of the technique of inverting the conditional distribution of the response given the covariates. This is motivated by the fact that the main literature for censored data already relies on nonparametric estimation of the conditional distribution. Ideas from effective dimension reduction are then exploited to accommodate higher-dimensional settings. Extensive numerical results suggest that this approach is strongly competitive with the classical check-function approaches, for both complete and censored observations. From a theoretical perspective, both consistency and asymptotic normality of the proposed estimator for linear regression are obtained under classical regularity conditions. As a by-product, several asymptotic results on a "double-kernel" version of the conditional Kaplan–Meier distribution estimator based on effective dimension reduction, and on its corresponding density estimator, are also obtained and may be of interest in their own right. A brief application of the procedure to quasar data further highlights its relevance for quantile regression estimation with censored data.
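For readers unfamiliar with the "check" loss that the article sets aside, a minimal sketch of the classical Koenker–Bassett linear quantile regression it is compared against (complete data only; all names and tuning choices here are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    # Koenker-Bassett "check" function: rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (u < 0))

def quantile_regression(X, y, tau):
    # Estimate beta by minimizing the empirical check loss (with intercept).
    X1 = np.column_stack([np.ones(len(y)), X])
    obj = lambda b: np.sum(check_loss(y - X1 @ b, tau))
    return minimize(obj, np.zeros(X1.shape[1]), method="Nelder-Mead").x

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = 1.0 + X @ np.array([2.0, -1.0]) + rng.standard_t(df=3, size=500)
print(quantile_regression(X, y, tau=0.5))  # roughly [1, 2, -1] at the median
```

The article's alternative replaces this loss minimization with a distance built from an (inverted) estimate of the conditional distribution function.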

2.
When a large amount of spatial data is available, computational and modeling challenges arise; these are often labeled the "big n problem." In this work we present a brief review of the literature and then focus on two approaches: one based on stochastic partial differential equations and the integrated nested Laplace approximation, and one based on tapering of the spatial covariance matrix. The fitting and predictive abilities of the two methods, used in conjunction with kriging interpolation, are compared in a simulation study.
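As a rough illustration of the tapering idea, a sketch in which an exponential covariance is multiplied elementwise by a compactly supported Wendland-type taper, so that distant pairs become exact zeros and sparse linear algebra applies (parameter values, and the dense construction of the matrix, are purely illustrative; a genuine big-n implementation would assemble the sparse matrix directly):

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

def wendland1(d, r):
    # Compactly supported taper: exactly zero for distances beyond the range r.
    h = np.minimum(d / r, 1.0)
    return (1.0 - h) ** 4 * (4.0 * h + 1.0)

rng = np.random.default_rng(1)
sites = rng.uniform(0, 1, size=(400, 2))
d = cdist(sites, sites)

sigma2, phi, r = 1.0, 0.2, 0.15      # r = taper range (a tuning choice)
C_tap = csc_matrix(sigma2 * np.exp(-d / phi) * wendland1(d, r))  # sparse tapered covariance

# Simple-kriging weights at a new site, solved with the sparse tapered system.
d0 = cdist(sites, np.array([[0.5, 0.5]])).ravel()
c0_tap = sigma2 * np.exp(-d0 / phi) * wendland1(d0, r)
weights = spsolve(C_tap, c0_tap)
print("sum of kriging weights:", round(weights.sum(), 3))
```

The elementwise product of two valid covariance functions remains positive definite (Schur product theorem), which is what makes tapering legitimate.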

3.
Estimating conditional covariance matrices is important in statistics and finance. In this paper, we propose an averaging estimator for the conditional covariance, which combines estimates of marginal conditional covariance matrices via the Model Averaging MArginal Regression (MAMAR) approach of Li, Linton, and Lu. This estimator avoids the "curse of dimensionality" problem that the local constant estimator of Yin et al. suffers from. We establish the asymptotic properties of the averaging weights and of the proposed conditional covariance estimator. The finite-sample performance is assessed by simulation. An application to portfolio allocation illustrates the practical superiority of the averaging estimator.
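To fix ideas, a sketch of the averaging construction: kernel estimates of the marginal conditional covariances, one per covariate, combined with simplex weights. Equal weights are used below as a placeholder; the article's weights are chosen by a model-averaging criterion, and all names here are illustrative:

```python
import numpy as np

def nw_weights(x, x0, h):
    # Nadaraya-Watson kernel weights in a single covariate (Gaussian kernel).
    k = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return k / k.sum()

def marginal_cond_cov(Y, x, x0, h):
    # Kernel estimate of Cov(Y | x_j = x0), smoothing over one covariate only,
    # so each marginal estimate faces a one-dimensional smoothing problem.
    w = nw_weights(x, x0, h)
    Yc = Y - w @ Y
    return (w[:, None] * Yc).T @ Yc

def averaged_cond_cov(Y, X, x0, h=0.3):
    p = X.shape[1]
    mats = [marginal_cond_cov(Y, X[:, j], x0[j], h) for j in range(p)]
    wts = np.full(p, 1.0 / p)          # placeholder simplex weights
    return sum(wj * M for wj, M in zip(wts, mats))

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4))
Y = np.column_stack([X[:, 0] + rng.normal(size=1000),
                     0.5 * X[:, 1] + rng.normal(size=1000)])
print(averaged_cond_cov(Y, X, x0=np.zeros(4)))
```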

4.
We introduce a matrix operator, which we call the "vecd" operator. This operator stacks up the "diagonals" of a symmetric matrix, and is more convenient for some statistical analyses than the commonly used "vech" operator. We show an explicit relationship between the vecd and vech operators, and use it to derive various properties of the vecd operator. As applications, we derive concise and explicit expressions of the Wald and score tests for equal variances of a multivariate normal distribution and for the diagonality of variance coefficient matrices in a multivariate generalized autoregressive conditional heteroscedastic (GARCH) model.
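The two operators are easy to contrast in code. A sketch under one plausible ordering convention (main diagonal first, then successive superdiagonals; the paper's exact convention may differ):

```python
import numpy as np

def vech(A):
    # Stack the lower triangle (including the diagonal) column by column.
    return np.concatenate([A[j:, j] for j in range(A.shape[0])])

def vecd(A):
    # Stack the diagonals of a symmetric matrix: main diagonal first,
    # then the first superdiagonal, and so on.
    return np.concatenate([np.diag(A, k) for k in range(A.shape[0])])

A = np.array([[1., 2., 3.],
              [2., 4., 5.],
              [3., 5., 6.]])
print(vech(A))  # [1. 2. 3. 4. 5. 6.]
print(vecd(A))  # [1. 4. 6. 2. 5. 3.]
```

Both vectors contain the same p(p+1)/2 distinct elements of the symmetric matrix, so the two operators are related by a fixed permutation matrix, which is the kind of explicit relationship the article exploits.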

5.
This paper provides an extension of the "sequential order statistics" (SOS) introduced by Kamps, called "developed sequential order statistics" (DSOS), which is useful for describing the lifetimes of engineering systems with dependent component lifetimes. Explicit expressions for the joint density function, the marginal distributions, and the means of DSOS are derived. Under the well-known "conditional proportional hazard rate" (CPHR) model, with Gumbel families of copulas modeling the dependence among component lifetimes, several findings are reported. For example, it is proved that the joint density functions of DSOS and SOS have the same structure. Various illustrative examples are also given.

6.
In many economic models, theory restricts the shape of functions through, for example, monotonicity or curvature conditions. This article reviews and presents a framework for constrained estimation and inference for testing shape conditions in parametric models. We show that "regional" shape-restricting estimators have important advantages in terms of model fit and flexibility over standard "local" or "global" shape-restricting estimators. Our empirical illustration is the first to simultaneously impose and test all shape restrictions required by economic theory in the "Berndt and Wood" data. We find that this dataset is consistent with "duality theory," whereas previous studies have found violations of economic theory. We discuss policy consequences for key parameters, such as whether energy and capital are complements or substitutes.

7.
After a brief review of social applications of Markov chains, the paper discusses nonlinear ("interactive") Markov models in discrete and continuous time. The rather subtle relationship between the deterministic and stochastic versions of such models is explored by means of examples. It is shown that the behaviour of nonlinear systems over time periods of practical interest depends critically on the total size as well as on the system parameters. Particular attention is paid to strong and weak forms of quasi-stationarity exhibited by stochastic systems.
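A toy illustration of the size dependence: a two-state chain whose switching probability depends on the current fraction in state 1, simulated for a small and a large population and compared with the deterministic mean-field recursion (the transition rule is invented purely for illustration):

```python
import numpy as np

def switch_prob(frac):
    # Probability that a state-0 individual moves to state 1, depending on
    # the fraction already in state 1: the "interactive" ingredient.
    return 0.05 + 0.4 * frac

def simulate(N, T, rng):
    n1 = N // 10                                    # start with 10% in state 1
    for _ in range(T):
        up = rng.binomial(N - n1, switch_prob(n1 / N))  # 0 -> 1 moves
        down = rng.binomial(n1, 0.1)                    # 1 -> 0 moves
        n1 += up - down
    return n1 / N

def mean_field(T):
    # Deterministic recursion for the expected fraction in state 1.
    f = 0.1
    for _ in range(T):
        f += (1 - f) * switch_prob(f) - 0.1 * f
    return f

rng = np.random.default_rng(3)
for N in (20, 20000):
    print(f"N={N:5d}: final fraction in state 1 = {simulate(N, 200, rng):.3f}")
print(f"mean-field recursion:                  {mean_field(200):.3f}")
```

For small N the stochastic runs fluctuate around (and can sit far from) the deterministic value, while for large N they track it closely, echoing the paper's point that behaviour depends on total size.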

8.
The framework for a unified statistical theory of spline regression with fixed knots, using the truncated polynomial or "+" function representation, is presented. In particular, a partial ordering of some spline models is introduced to clarify their relationships and to indicate the hypotheses that can be tested using either standard multiple regression procedures or a little-used conditional test developed by Hotelling (1940). The construction of spline models with polynomial pieces of different degrees is illustrated. A numerical example from a chemical experiment is given using the GLM procedure of the SAS statistical software package (Barr et al. 1976).
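The "+" representation makes such hypotheses ordinary linear restrictions on regression coefficients, which is why standard multiple-regression tests apply. A minimal sketch of the design-matrix construction (knot locations and data are illustrative):

```python
import numpy as np

def plus_power(x, knot, degree):
    # Truncated polynomial ("+" function): (x - knot)_+^degree
    return np.where(x > knot, (x - knot) ** degree, 0.0)

def spline_design(x, knots, degree=3):
    # Columns: 1, x, ..., x^degree, then one (x - k)_+^degree term per knot.
    cols = [x ** d for d in range(degree + 1)]
    cols += [plus_power(x, k, degree) for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(scale=0.2, size=200)

X = spline_design(x, knots=[2.5, 5.0, 7.5])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares fit
print("fitted coefficients:", np.round(beta, 3))
```

Testing whether a knot is needed, for example, amounts to testing whether its "+"-term coefficient is zero.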

9.
The "traditional" approach to the estimation of count-panel-data models with fixed effects is the conditional maximum likelihood estimator. The pseudo maximum likelihood principle can be used in these models to obtain orthogonality conditions that generate a robust estimator. This estimator is inconsistent, however, when the instruments are not strictly exogenous. This article proposes a generalized method of moments estimator for count-panel-data models with fixed effects, based on a transformation of the conditional mean specification, that is consistent even when the explanatory variables are only predetermined. Two applications are discussed: the relationship between patents and research and development expenditures, and the explanation of technology transfer.

10.
The Metropolis–Hastings algorithm is one of the most basic and well-studied Markov chain Monte Carlo methods. It generates a Markov chain whose limiting distribution is the target distribution by simulating observations from a different proposal distribution. A proposed value is accepted with a particular probability; otherwise the previous value is repeated. As a consequence, the accepted values are repeated a positive number of times, and any resulting ergodic mean is, in fact, a weighted average. It turns out that this weighted average is an importance-sampling-type estimator with random weights. By the standard theory of importance sampling, replacing these random weights by their (conditional) expectations leads to more efficient estimators. In this paper we study the estimator arising from replacing the random weights with certain estimators of their conditional expectations. We illustrate by simulations that it is often more efficient than the original estimator, while for the independence Metropolis–Hastings algorithm and for distributions with finite support we formally prove that it is even better than the "optimal" importance sampling estimator.
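To make the weighted-average observation concrete, a minimal independence Metropolis–Hastings sampler in which the ergodic mean is recomputed as a weighted average over the distinct accepted values, with repeat counts as the random weights (target and proposal are toy choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
log_target = stats.norm(loc=1.0, scale=1.0).logpdf   # target density (toy)
proposal = stats.norm(loc=0.0, scale=2.0)            # independence proposal

x = 0.0
accepted, counts = [x], [1]                          # distinct states, repeat counts
for _ in range(50_000):
    y = proposal.rvs(random_state=rng)
    log_alpha = (log_target(y) - log_target(x)
                 + proposal.logpdf(x) - proposal.logpdf(y))
    if np.log(rng.uniform()) < log_alpha:
        x = y
        accepted.append(x)
        counts.append(1)
    else:
        counts[-1] += 1                              # previous value repeated

vals, w = np.array(accepted), np.array(counts, dtype=float)
print("ergodic mean as weighted average:", (w * vals).sum() / w.sum())  # ~ 1.0
```

The article's estimator replaces the random weights w by estimates of their conditional expectations given the accepted values, which is where the variance reduction comes from.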

11.
In this article, the Brier score is used to investigate the importance of clustering for the frailty survival model. For this purpose, two versions of the Brier score are constructed: a "conditional Brier score" and a "marginal Brier score." The two versions show separately how the clustering effects and the covariate effects influence the predictive ability of the frailty model. Using a Bayesian and a likelihood approach, point estimates and 95% credible/confidence intervals are computed. The estimation properties of both procedures are evaluated in an extensive simulation study for both versions of the Brier score. Further, a validation strategy is developed to calculate an internally validated point estimate and credible/confidence interval. The combined developments are applied to a dental dataset.
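For orientation, the survival Brier score at a horizon t compares predicted survival probabilities with observed status. A simplified sketch that ignores censoring (with censored data, inverse-probability-of-censoring weights are needed, and the split into conditional and marginal versions follows the article, not this sketch):

```python
import numpy as np

def brier_score(t, event_times, surv_prob_at_t):
    # BS(t) = mean over subjects of (1{T_i > t} - S_hat(t | x_i))^2
    alive = (event_times > t).astype(float)
    return np.mean((alive - surv_prob_at_t) ** 2)

rng = np.random.default_rng(6)
T = rng.exponential(scale=5.0, size=1000)        # toy event times
t0 = 3.0
s_model = np.full(1000, np.exp(-t0 / 5.0))       # a model's S(t0 | x_i)
print("Brier score at t = 3:", round(brier_score(t0, T, s_model), 4))
```

Lower values indicate better predictive ability, which is the yardstick both versions of the score share.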

12.
The role of Wikipedia in learning has been debated because it does not conform to the usual academic standards. Despite this, people use it, owing to the ubiquity of Wikipedia entries in the results returned by popular search engines. It is important for academic disciplines, including statistics, to ensure they are correctly represented in a medium where anyone can assume the role of discipline expert. In this context, we first develop a tool for evaluating Wikipedia articles on topics with a procedural component. Using this tool, five Wikipedia articles on basic statistical concepts are critiqued from the point of view of a self-learner: "arithmetic mean," "standard deviation," "standard error," "confidence interval," and "histogram." We find that the articles are, in general, poor, and that some contain inaccuracies. We propose that Wikipedia be actively discouraged for self-learning (using, for example, a classroom activity) except to give a brief overview; that in more formal learning environments, teachers be explicit about not using Wikipedia as a learning resource for course content; and, because Wikipedia is used regardless of considered advice or the organizational protocols in place, that teachers move away from minimal contact with Wikipedia towards more constructive engagement.

13.
Conventional approaches to inference about efficiency in parametric stochastic frontier (PSF) models are based on percentiles of the estimated distribution of the one-sided error term, conditional on the composite error. When used as prediction intervals, their coverage is poor when the signal-to-noise ratio is low and improves only slowly as sample size increases. We show that prediction intervals estimated by bagging yield much better coverage than the conventional approach, even with low signal-to-noise ratios. We also present a bootstrap method that gives confidence-interval estimates for (conditional) expectations of efficiency, with good coverage properties that improve with sample size. In addition, researchers who estimate PSF models typically reject models, samples, or both when residuals have skewness in the "wrong" direction, i.e., in a direction that would seem to indicate an absence of inefficiency. We show that correctly specified models can generate samples with "wrongly" skewed residuals, even when the variance of the inefficiency process is nonzero. Both our bagging and bootstrap methods provide useful information about inefficiency and model parameters irrespective of whether residuals have skewness in the desired direction.
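As a reminder of the mechanics, bagging recomputes an estimator on bootstrap resamples and aggregates the results. A generic template of that aggregation step only (not the article's PSF-specific procedure; the toy interval estimator is invented):

```python
import numpy as np

def bagged_endpoints(y, endpoint_fn, B=1000, rng=None):
    # Recompute the interval endpoints on B bootstrap resamples and average.
    rng = rng or np.random.default_rng()
    n = len(y)
    ends = np.array([endpoint_fn(y[rng.integers(0, n, n)]) for _ in range(B)])
    return ends.mean(axis=0)

# Toy estimator: a naive 90% interval from the 5th and 95th sample percentiles.
naive_interval = lambda y: np.percentile(y, [5, 95])

rng = np.random.default_rng(8)
y = rng.lognormal(sigma=0.8, size=100)
print("naive :", np.round(naive_interval(y), 3))
print("bagged:", np.round(bagged_endpoints(y, naive_interval, rng=rng), 3))
```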

14.
Some Views on the AHP Method for Constructing Statistical Weights
苏为华 《统计研究》1998,15(4):57-60
In multi-indicator comprehensive statistical evaluation, the weights are an important factor influencing the evaluation conclusions: different weighting schemes may lead to different conclusions. In recent years, many methods for constructing statistical weights have been proposed, among which the most effective are the AHP and Delphi weighting methods. This paper offers some views on the AHP weighting method.
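For context, the core AHP computation discussed here derives the weights as the normalized principal (Perron) eigenvector of a pairwise comparison matrix, together with a consistency check. A standard sketch (the comparison matrix is made up):

```python
import numpy as np

def ahp_weights(A):
    # Weights = normalized principal eigenvector of the pairwise comparison
    # matrix A; the principal eigenvalue drives the consistency index.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)   # consistency index; 0 = fully consistent
    return w, ci

# Illustrative 3-indicator comparison matrix on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, ci = ahp_weights(A)
print("weights:", np.round(w, 3), " CI:", round(ci, 4))
```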

15.
Latent class analysis (LCA) has important applications in the social and behavioral sciences for modeling categorical response variables, and nonresponse is typical when collecting data. In this study, the nonresponse mainly comprised "contingency questions" and genuine "missing data." The primary objective of this research was to evaluate the effects of some potential factors on model selection indices in LCA with nonresponse data.

We simulated missing data with contingency questions and evaluated the accuracy rates of eight information criteria in selecting the correct models. The results showed that the main factors are the latent class proportions, the conditional probabilities, the sample size, the number of items, the missing data rate, and the contingency data rate. Interactions of the conditional probabilities with the class proportions, the sample size, and the number of items are also significant. Our simulation results indicate that the impact of missing data and contingency questions can be mitigated by increasing the sample size or the number of items.


16.
The so-called "principal formulae" of planar integral geometry are conventionally couched in terms of the "kinematic density" dx dy dθ. Here a corresponding theory with respect to the "Lebesgue density" dx dy, that is, with rotations suppressed, is developed. The only real difference is that the new "fundamental formula of Blaschke" contains a term depending upon the relative orientations of the two domains involved. In particular, the remarkable iteration property of these formulae carries over. The usual principal formulae follow as a corollary of the formulae given here, upon averaging over orientations.

17.
Tail estimates are developed for power-law probability distributions with exponential tempering, using a conditional maximum likelihood approach based on the upper order statistics. Tempered power-law distributions are intermediate between heavy power-law tails and Laplace or exponential tails, and are sometimes called "semi-heavy"-tailed distributions. The estimation method is demonstrated on simulated data from a tempered stable distribution, and on several data sets from geophysics and finance that show a power-law probability tail with some tempering.
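A rough sketch of the estimation idea, using a tempered Pareto tail rather than the tempered stable case demonstrated in the article: above a threshold u, the density is taken proportional to x^(-alpha-1) * exp(-lambda*x), and (alpha, lambda) are fit by conditional maximum likelihood on the exceedances (all tuning choices here are illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def neg_loglik(params, exc, u):
    # Conditional likelihood of exceedances over u for a tempered power law.
    alpha, lam = params
    if alpha <= 0 or lam < 0:
        return np.inf
    norm, _ = quad(lambda x: x ** (-alpha - 1) * np.exp(-lam * x), u, np.inf)
    return -(np.sum(-(alpha + 1) * np.log(exc) - lam * exc) - exc.size * np.log(norm))

rng = np.random.default_rng(7)
raw = rng.pareto(1.5, size=20000) + 1.0          # Pareto tail, alpha = 1.5
keep = rng.uniform(size=raw.size) < np.exp(-0.05 * raw)
data = raw[keep]                                 # exponential tempering, lambda = 0.05

u = np.quantile(data, 0.90)                      # keep only the upper order statistics
exc = data[data > u]
res = minimize(neg_loglik, x0=np.array([1.0, 0.01]), args=(exc, u),
               method="Nelder-Mead")
print("alpha_hat, lambda_hat:", np.round(res.x, 3))
```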

18.
林晨等 《统计研究》2020,37(6):93-105
Against the background of the national strategy of developing strategic emerging industries, this paper combines "fundamentality" in the sense of technological structure with the potential for technological progress to derive the internal logic of key-industry selection, and demonstrates theoretically its significance for economic growth. The paper further provides a numerical method, based on input-output table data, for identifying key industries. A numerical analysis based on matrix triangularization finds that China's key industries include the manufacture of communication equipment, computers and other electronic equipment; general- and special-purpose equipment manufacturing; and transport equipment manufacturing. The paper also measures the competitiveness of China's key industries and carries out an international comparison. The results show that general- and special-purpose equipment manufacturing and transport equipment manufacturing are relatively competitive, whereas the productive inputs within the communication equipment, computer and other electronic equipment manufacturing industry are relatively weak.
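A toy sketch of the triangularization step in the paper's numerical method: permute the sectors of an inter-industry flow matrix so that as much flow as possible lies on one side of the diagonal (the 4-sector matrix and the brute-force search are illustrative only; real input-output tables require heuristics):

```python
import numpy as np
from itertools import permutations

def below_diag_share(F, order):
    # Share of total flow lying strictly below the diagonal under this ordering.
    P = F[np.ix_(list(order), list(order))]
    return np.tril(P, k=-1).sum() / F.sum()

def best_order(F):
    # Exhaustive search over orderings; fine for a handful of sectors.
    return max(permutations(range(F.shape[0])),
               key=lambda o: below_diag_share(F, o))

# Illustrative 4-sector flow matrix (row sector delivers to column sector).
F = np.array([[0., 8., 6., 4.],
              [1., 0., 7., 5.],
              [0., 2., 0., 6.],
              [1., 0., 1., 0.]])
order = best_order(F)
print("near-triangular order:", order,
      " below-diagonal share:", round(below_diag_share(F, order), 3))
```

The closer the reordered matrix is to triangular, the clearer the hierarchy of upstream and downstream sectors on which the key-industry identification rests.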

19.
Determining whether per capita output can be characterized by a stochastic trend is complicated by the fact that infrequent breaks in trend can bias standard unit root tests towards nonrejection of the unit root hypothesis. The bulk of the existing literature has focused on unit root tests that allow for structural breaks in the trend function under the trend-stationary alternative but not under the unit root null. These tests, however, provide little information regarding the existence and number of trend breaks, and they suffer from serious power and size distortions due to the asymmetric treatment of breaks under the null and alternative hypotheses. This article estimates the number of breaks in trend employing procedures that are robust to the unit root/stationarity properties of the data. Our analysis of per capita gross domestic product (GDP) for Organization for Economic Cooperation and Development (OECD) countries thereby permits a robust classification of countries according to the "growth shift," "level shift," and "linear trend" hypotheses. In contrast to the extant literature, unit root tests conditional on the presence or absence of breaks do not provide evidence against the unit root hypothesis.

20.
Experiments in various countries with "last week" and "last month" reference periods for reporting households' food consumption have generally found that "week"-based estimates are higher. In India, the National Sample Survey (NSS) has consistently found that week-based estimates are higher than month-based estimates for a majority of food item groups. But why are week-based estimates higher than month-based estimates? It has long been believed that the reason must be recall lapse, inherent in a reporting period as long as a month. But is household consumption of a habitually consumed item "recalled" in the same way as that of an item of infrequent consumption? And why doesn't memory lapse cause over-reporting (over-assessment) as often as under-reporting? In this paper, we provide an alternative hypothesis, involving a "quantity floor effect" in reporting behavior, under which the "week" reference period may cause over-reporting for many items. We design a test to detect the effect postulated by this hypothesis and carry it out on NSS 68th round HCES data. The test results strongly suggest that our hypothesis explains the difference between week-based and month-based estimates better than the recall-lapse theory.
