Search results: 34 records in total; the first 10 are shown below.
1.
Frailty models can be fit as mixed-effects Poisson models after transforming the time-to-event data to the Poisson model framework. We assess, through simulations, the robustness of Poisson likelihood estimation for Cox proportional hazards models with log-normal frailties under a misspecified frailty distribution. The log-gamma and Laplace distributions were used as the true distributions for the frailties on a natural log scale. Factors such as the magnitude of heterogeneity, the censoring rate, and the number and sizes of groups were explored. In the simulations, the Poisson modeling approach that assumes log-normally distributed frailties provided accurate estimates of within- and between-group fixed effects even under a misspecified frailty distribution. Non-robust estimation of the variance components was observed in situations of substantial heterogeneity, large event rates, or high data dimensions.
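The transformation the first sentence refers to can be sketched in a few lines: under a piecewise-constant baseline hazard, the event indicator of each subject in each follow-up interval behaves like a Poisson count with the log of the time at risk as an offset, after which any mixed-effects Poisson (GLMM) routine with a normally distributed group-level random intercept, i.e., a log-normal frailty, can be applied. A minimal Python sketch; the column names and cut points are hypothetical, and the cut points are assumed to cover the longest follow-up time.

    import numpy as np
    import pandas as pd

    def poissonize(df, cut_points):
        """Split each subject's follow-up at cut_points; return one row per
        subject-interval with an event indicator and a log-exposure offset."""
        rows = []
        for _, r in df.iterrows():
            start = 0.0
            for k, cut in enumerate(cut_points):
                if start >= r["time"]:
                    break
                end = min(cut, r["time"])
                rows.append({
                    "id": r["id"], "group": r["group"], "interval": k,
                    # the event is counted in the interval where follow-up ends
                    "event": int(r["status"] == 1 and end == r["time"]),
                    "offset": np.log(end - start),
                })
                start = cut
        return pd.DataFrame(rows)

    # Toy usage: `long` has one row per subject-interval; a Poisson GLMM with a
    # random intercept per group, fit to `event` with `offset`, mimics the
    # frailty model described above.
    df = pd.DataFrame({"id": [1, 2], "group": ["a", "b"],
                       "time": [2.5, 7.0], "status": [1, 0]})
    long = poissonize(df, cut_points=[3.0, 6.0, 10.0])
    print(long)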
2.
Functional data analysis (FDA)—the analysis of data that can be considered a set of observed continuous functions—is an increasingly common class of statistical analysis. One of the most widely used FDA methods is the cluster analysis of functional data; however, little work has been done to compare the performance of clustering methods on functional data. In this article, a simulation study compares the performance of four major hierarchical methods for clustering functional data. The simulated data varied in three ways: the nature of the signal functions (periodic, nonperiodic, or mixed), the amount of noise added to the signal functions, and the pattern of the true cluster sizes. The Rand index was used to compare the performance of the clustering methods. As a secondary goal, the methods were also compared when the number of clusters is misspecified. To illustrate the results, a real set of functional data, for which the true clustering structure is believed to be known, was also clustered; the comparison on the real data confirmed the findings of the simulation. The study yields concrete guidance for researchers choosing a method for clustering their functional data.
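The comparison design is easy to reproduce in outline. A runnable Python sketch, assuming Ward, complete, average, and single linkage as the "four major hierarchical methods" (the abstract does not name them) and the adjusted Rand index as the agreement score; the signal functions, noise level, and unequal cluster sizes are illustrative choices:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 100)
    signals = [np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), np.sin(4 * np.pi * t)]
    true_labels = np.repeat([0, 1, 2], [20, 20, 10])   # unequal true cluster sizes
    curves = np.vstack([signals[g] + rng.normal(0, 0.3, t.size) for g in true_labels])

    for method in ["ward", "complete", "average", "single"]:
        Z = linkage(curves, method=method)              # Euclidean distance between
        labels = fcluster(Z, t=3, criterion="maxclust") # the discretized curves
        print(method, round(adjusted_rand_score(true_labels, labels), 3))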
3.
The problem of the choice of coordinates in Stein-type estimators, when simultaneously estimating normal means, is considered. The question of whether to use all coordinates in one combined shrinkage estimator, or to separate the coordinates into groups and use a separate shrinkage estimator on each group, is examined in the situation in which part of the prior information may be "misspecified". It is observed that the amount of misspecification determines whether to use the combined shrinkage estimator or the separate shrinkage estimators.
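For reference, the standard James–Stein algebra behind this choice (textbook notation, not necessarily the paper's): shrinking all $p \ge 3$ coordinates of the observation vector $X \sim N_p(\theta, \sigma^2 I)$ toward zero at once uses

$$ \hat{\theta}^{\mathrm{JS}} = \left( 1 - \frac{(p-2)\,\sigma^2}{\lVert X \rVert^2} \right) X , $$

while the separate alternative applies the same rule within each group $g$ of $p_g \ge 3$ coordinates,

$$ \hat{\theta}_g^{\mathrm{JS}} = \left( 1 - \frac{(p_g - 2)\,\sigma^2}{\lVert X_g \rVert^2} \right) X_g . $$

Because the combined shrinkage factor depends on every coordinate, badly misspecified prior information about one group contaminates the shrinkage applied to all of them; with separate estimators the damage stays within the affected group.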
4.
This article develops nonparametric tests of independence between two stochastic processes satisfying β-mixing conditions. The testing strategy boils down to gauging the closeness between the joint stationary density and the product of the marginal stationary densities. For that purpose, we take advantage of a generalized entropic measure so as to build a whole family of nonparametric tests of independence. We derive asymptotic normality and local power using the functional delta method for kernels. As a corollary, we also develop a class of entropy-based tests for serial independence. The latter are nuisance-parameter-free, and hence also qualify for dynamic misspecification analyses. We then investigate the finite-sample properties of our serial independence tests through Monte Carlo simulations. They perform quite well, entailing more power against some nonlinear AR alternatives than two popular nonparametric serial-independence tests.
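One standard member of such a family (a Cressie–Read/Tsallis-type power divergence, shown for orientation; the paper's exact functional may differ) gauges the gap between the joint stationary density and the product of the marginals by

$$ I_{\gamma} = \frac{1}{\gamma(\gamma - 1)} \int \left[ \left( \frac{f_{XY}(x, y)}{f_X(x)\, f_Y(y)} \right)^{\gamma - 1} - 1 \right] f_{XY}(x, y) \, dx \, dy , $$

which is nonnegative, vanishes exactly under independence, and tends to the Kullback–Leibler divergence as $\gamma \to 1$; test statistics follow by plugging in kernel density estimates.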
5.
We address the problem of optimally forecasting a binary variable for a heterogeneous group of decision makers facing various (binary) decision problems that are tied together only by the unknown outcome. A typical example is a weather forecaster who needs to estimate the probability of rain tomorrow and then report it to the public. Given a conditional probability model for the outcome of interest (e.g., logit or probit), we introduce the idea of maximum welfare estimation and derive conditions under which traditional estimators, such as maximum likelihood or (nonlinear) least squares, are asymptotically socially optimal even when the underlying model is misspecified.
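A stylized rendering of the estimator the abstract names (a schematic under assumed notation, not the authors' exact formulation): suppose a decision maker with cost-benefit threshold $c \in (0,1)$ acts exactly when the reported probability exceeds $c$, and thresholds are distributed across the group according to a weight function $W$. A maximum welfare estimator of the parameters $\beta$ of the conditional probability model $F(x'\beta)$ then maximizes the sample analogue of aggregate welfare,

$$ \hat{\beta}_{\mathrm{MW}} = \arg\max_{\beta} \; \frac{1}{n} \sum_{i=1}^{n} \int_{0}^{1} \left( y_i - c \right) \mathbf{1}\{ F(x_i'\beta) > c \} \, dW(c) , $$

rewarding parameter values whose implied forecasts cross each threshold exactly when acting pays off on average; the paper's result concerns when maximum likelihood or (nonlinear) least squares reaches the same limit even under misspecification of $F$.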
6.
王霞 (Wang Xia), 洪永淼 (Hong Yongmiao). 《统计研究》 (Statistical Research), 2014, 31(12): 75–81.
Existing tests for conditional heteroskedasticity built on parametric models often suffer from model misspecification. To shield the test results from such misspecification while capturing several forms of conditional heteroskedasticity at once, this paper constructs, via nonparametric regression, a test statistic for conditional heteroskedasticity that does not depend on any particular model form. The statistic can be viewed as a weighted average of the differences between the conditional variance and the unconditional variance, and it is asymptotically standard normal under the null hypothesis. Simulation results show, first, that the statistic has good finite-sample properties and, second, that misspecification of the conditional mean model leads to spurious rejection of the null of conditional homoskedasticity, underscoring the need for the nonparametric construction adopted here. In an empirical analysis, the statistic is applied to the conditional heteroskedasticity of the returns of major international stock indices; the test results differ from those of Engle (1982), possibly indicating nonlinear dynamic features in index returns.
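A minimal sketch of the idea behind that statistic (not the paper's construction or its standard-normal normalization; a permutation calibration stands in for the asymptotic theory, and the Gaussian-kernel bandwidth and lag-1 conditioning variable are illustrative choices):

    import numpy as np

    def nw_cond_var(e, x, h):
        # Nadaraya-Watson estimate of E[e^2 | x] at each observed x
        w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        return (w @ (e ** 2)) / w.sum(axis=1)

    def het_stat(e, x, h=0.5):
        # average gap between conditional and unconditional variance
        return np.mean(nw_cond_var(e, x, h) - np.var(e))

    rng = np.random.default_rng(1)
    y = rng.standard_normal(500)          # homoskedastic placeholder series
    e, x = y[1:] - y[1:].mean(), y[:-1]   # residuals, lag-1 conditioning variable
    obs = het_stat(e, x)
    null = [het_stat(rng.permutation(e), x) for _ in range(499)]
    print("p-value:", (1 + sum(abs(s) >= abs(obs) for s in null)) / 500)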
7.
Econometrics textbooks rely on the assumption of a fixed regressor matrix, although it is nearly always unrealistic for economic data. For developing the properties of the general linear model, this is a convenient simplification. But the assumption is then retained when examining model specification, and this practice has generated results that, if not erroneous, are certainly incomplete and thus misleading. In particular, the omission of a variable or the imposition of an incorrect restriction may well increase the variance of the estimated parameters rather than reduce it, as the textbooks claim.
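The variance claim is easy to check by simulation with stochastic regressors. A small Monte Carlo sketch: when x2 is independent of x1 but omitted, the term beta2*x2 is pushed into the error, so the short regression's coefficient on x1 stays unbiased yet becomes noisier than in the full regression. The sample size and coefficient values are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n, beta1, beta2, reps = 100, 1.0, 3.0, 2000
    b_full, b_short = [], []
    for _ in range(reps):
        x1, x2 = rng.standard_normal(n), rng.standard_normal(n)  # independent
        y = beta1 * x1 + beta2 * x2 + rng.standard_normal(n)
        X_full = np.column_stack([np.ones(n), x1, x2])
        X_short = np.column_stack([np.ones(n), x1])
        b_full.append(np.linalg.lstsq(X_full, y, rcond=None)[0][1])
        b_short.append(np.linalg.lstsq(X_short, y, rcond=None)[0][1])
    print("var(b1 | full) :", np.var(b_full))   # about 1/n
    print("var(b1 | short):", np.var(b_short))  # about (1 + beta2**2)/n -- larger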
8.
This work is motivated by the need to find experimental designs that are robust under different model assumptions. We measure robustness by calculating a measure of design efficiency with respect to a design optimality criterion, and we call a design robust if it is reasonably efficient under the different model scenarios. We discuss two design criteria and an algorithm that can be used to obtain robust designs. The first criterion employs a Bayesian-type approach, putting a prior or weight on each candidate model and possibly priors on the corresponding model parameters; it is defined as the expected value of the design efficiency over these priors. The second criterion is a minimax criterion, which guards against the worst value of the design efficiency over all candidate models. We establish conditions under which the two criteria are equivalent when there are two candidate models. We apply our findings to the area of accelerated life testing and perform a sensitivity analysis of the designs with respect to the priors and to misspecification of the planning values.
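In schematic form (consistent with the description above, not the authors' notation), with candidate models $m = 1, \dots, M$, prior weights $w_m$, and $\mathrm{eff}_m(\xi)$ the efficiency of a design $\xi$ under model $m$ relative to the locally optimal design for that model, the two criteria select

$$ \xi_{\mathrm{B}}^{*} = \arg\max_{\xi} \sum_{m=1}^{M} w_m \, \mathrm{eff}_m(\xi) , \qquad \xi_{\mathrm{MM}}^{*} = \arg\max_{\xi} \, \min_{1 \le m \le M} \mathrm{eff}_m(\xi) : $$

the Bayesian-type design maximizes the prior-weighted average efficiency, the minimax design the worst-case efficiency.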
9.
Violation of correct specification may cause undesirable results such as biased logistic regression coefficients and less efficient test statistics. In this paper, the asymptotic relative efficiency (ARE) of various coefficients of determination in misspecified binary logistic regression models is investigated, covering seven types of misspecification. The ARE of test statistics for the exponential and Weibull distributions, used as a method of calculating optimal cutpoints, is derived to demonstrate the effect of misspecification. Theoretical relationships between the coefficients of determination are also analyzed. Extensive simulations using the bootstrap, together with a real-data application, reveal which coefficient is more efficient under various modeling scenarios.
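Here ARE can be read in the standard Pitman sense (a textbook definition, not a claim about the paper's exact formulation): if procedures $T_1$ and $T_2$ require sample sizes $n_1$ and $n_2$ to reach the same power at the same significance level against the same sequence of local alternatives, then

$$ \mathrm{ARE}(T_1, T_2) = \lim_{n \to \infty} \frac{n_2}{n_1} , $$

so a value above one means $T_1$ needs proportionally fewer observations.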
10.
The potential observational equivalence between various types of nonlinearity and long memory has been recognized by the econometrics community since at least the contribution of Diebold and Inoue (2001, Journal of Econometrics 105: 131–159). A large literature has developed in an attempt to ascertain whether or not the long memory finding in many economic series is spurious. Yet to date, no study has analyzed the consequences of using long memory methods to test for unit roots when the "truth" derives from regime switching, structural breaks, or other types of mean-reverting nonlinearity. In this article, I conduct a comprehensive Monte Carlo analysis to investigate the consequences of using tests designed to have power against fractional integration when the actual data generating process is unknown. I additionally consider tests designed to have power against breaks and against threshold nonlinearity. The findings are compelling and demonstrate that using long memory as an approximation to nonlinearity yields tests with relatively high power. In contrast, misspecification has severe consequences for tests designed to have power against threshold nonlinearity, and especially for tests designed to have power against breaks.
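A compact illustration of the kind of experiment described (illustrative design choices, not the article's): generate a short-memory series whose mean switches regime rarely, then estimate the memory parameter $d$ with the Geweke–Porter-Hudak (GPH) log-periodogram regression; the estimate tends to land well above zero even though the true process is I(0). The switching probability and the bandwidth $m = \sqrt{n}$ are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2048
    state = np.cumsum(rng.random(n) < 0.005) % 2     # rare regime switches
    y = 2.0 * state + rng.standard_normal(n)         # switching mean + white noise

    # GPH: regress log I(lambda_j) on -log(4 sin^2(lambda_j / 2)); slope = d
    m = int(np.sqrt(n))
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(y - y.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    X = -np.log(4 * np.sin(lam / 2) ** 2)
    d_hat = np.polyfit(X, np.log(I), 1)[0]
    print("estimated d:", round(d_hat, 2))           # spuriously positive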