Full-text access type
Paid full text | 91 articles
Free | 0 articles

Subject classification
Management | 3 articles
Ethnology | 1 article
Demography | 1 article
Book series and collected works | 3 articles
Theory and methodology | 1 article
General | 11 articles
Sociology | 1 article
Statistics | 70 articles

Publication year
2021 | 1 article
2020 | 3 articles
2019 | 9 articles
2018 | 2 articles
2017 | 2 articles
2016 | 2 articles
2014 | 2 articles
2013 | 30 articles
2012 | 10 articles
2011 | 2 articles
2010 | 3 articles
2007 | 3 articles
2006 | 2 articles
2005 | 1 article
2004 | 3 articles
2003 | 3 articles
2002 | 1 article
2001 | 3 articles
1998 | 1 article
1997 | 2 articles
1994 | 2 articles
1993 | 1 article
1988 | 1 article
1984 | 1 article
1982 | 1 article
91 search results in total.
81.
Donald R. Hoover 《统计学通讯:理论与方法》2013,42(5):1623-1637
The recent literature contains theorems improving on both the standard Bonferroni inequality (Hoover (1990)) and the Sidak/Slepian inequalities (Glaz and Johnson (1984)). The application of these improved theorems to upper bounds for non-coverage of simultaneous confidence intervals on multivariate normal variables is explored. The improved Bonferroni upper bounds always hold, while improved Sidak/Slepian bounds only apply to special cases. It is shown that improved Sidak/Slepian bounds will always hold for Normal Markov Processes, a commonly occurring and easily identifiable class of multivariate normal variables. The improved Sidak/Slepian upper bound, if it applies, is proven to be superior to the computationally equivalent improved Bonferroni bound. This improvement, however, is not great when both methods are used to determine upper bounds for Type I error in the range of .01 to .10.
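For orientation, the sketch below (my own, not from the paper) computes the two classical upper bounds that the improved theorems refine: the Bonferroni bound k·α and the Šidák-type bound 1 − (1 − α)^k on the probability that at least one of k simultaneous intervals fails to cover.

```python
# Minimal numeric sketch (not the paper's improved bounds): classical
# Bonferroni vs. Sidak upper bounds on overall non-coverage of k
# simultaneous intervals, each with per-interval error alpha.
def bonferroni_bound(alpha, k):
    # Bonferroni: P(at least one interval misses) <= k * alpha
    return min(1.0, k * alpha)

def sidak_bound(alpha, k):
    # Sidak (valid under independence / suitable positive dependence):
    # P(at least one interval misses) <= 1 - (1 - alpha)^k
    return 1.0 - (1.0 - alpha) ** k

for alpha in (0.01, 0.05, 0.10):
    k = 5
    print(f"alpha={alpha:.2f}, k={k}: "
          f"Bonferroni={bonferroni_bound(alpha, k):.4f}, "
          f"Sidak={sidak_bound(alpha, k):.4f}")
```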
82.
We consider likelihood and Bayesian inferences for seemingly unrelated (linear) regressions for the joint multivariate t-error (e.g. Zellner, 1976) and the independent t-error (e.g. Maronna, 1976) models. For likelihood inference, the scale matrix and the shape parameter for the joint t-error model cannot be consistently estimated because of the lack of adequate information to identify the latter. The joint t-error model also yields the same MLEs for the regression coefficients and the scale matrix as for the independent normal error model, which are not robust against outliers. Further, linear hypotheses with respect to the regression coefficients also give rise to the same null distributions as for the independent normal error model, though the MLE has a non-normal limiting distribution. In contrast to the striking similarities between the joint t-error and the independent normal error models, the independent t-error model yields MLEs that are robust against outliers. Since the MLE of the shape parameter reflects the tails of the data distributions, this model extends the independent normal error model for modeling data distributions with relatively thicker tails. These differences are also discussed with respect to the posterior and predictive distributions for Bayesian inference.
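As a loose illustration of why the independent t-error model is robust (a single-equation sketch under my own assumptions, not the paper's seemingly-unrelated-regressions setting), the following fits a linear regression with independent Student-t errors by maximum likelihood, estimating the shape (degrees-of-freedom) parameter jointly with the coefficients.

```python
# Minimal sketch: MLE for a linear regression with independent t errors.
# The estimated degrees of freedom adapt to the tail heaviness of the data,
# which is what makes the fit robust to outliers.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + stats.t.rvs(df=3, size=n, random_state=rng)

def negloglik(theta):
    b0, b1, log_scale, log_df = theta
    resid = y - (b0 + b1 * x)
    scale, df = np.exp(log_scale), np.exp(log_df)
    return -np.sum(stats.t.logpdf(resid, df=df, scale=scale))

res = optimize.minimize(negloglik, x0=[0.0, 0.0, 0.0, np.log(5.0)],
                        method="Nelder-Mead")
b0, b1 = res.x[0], res.x[1]
scale, df = np.exp(res.x[2]), np.exp(res.x[3])
print("coefficients:", b0, b1, "scale:", scale, "df:", df)
```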
83.
朱迎春 《浙江师范大学学报(社会科学版)》2003,28(5):91-93
Determining the referent of a third-person pronoun in deep anaphora is a complex, dynamic cognitive-psychological process; identifying the referent is a process of cognitive inference in which the reader draws on the linguistic knowledge, world knowledge, and contextual knowledge stored in his or her cognitive structures.
84.
李素英 《聊城大学学报(社会科学版)》2012,(1):68-71
This article examines several special interrogative adverbs in the Liaozhai liqu (聊斋俚曲): first, it argues that the interrogative adverbs "难道", "每哩", and "没哩" express conjecture rather than rhetorical questioning, and traces their historical origins; second, it gives a detailed analysis of the "可VP(么)" construction, showing that "可" carries three pragmatic functions: conjecture, rhetorical questioning, and emphasis.
85.
Hall et al. (2007) propose a method for moment selection based on an information criterion that is a function of the entropy of the limiting distribution of the Generalized Method of Moments (GMM) estimator. They establish the consistency of the method subject to certain conditions that include the identification of the parameter vector by at least one of the moment conditions being considered. In this article, we examine the limiting behavior of this moment selection method when the parameter vector is weakly identified by all the moment conditions being considered. It is shown that the selected moment condition is random and hence not consistent in any meaningful sense. As a result, we propose a two-step procedure for moment selection in which identification is first tested using a statistic proposed by Stock and Yogo (2003), and then only if this statistic indicates identification does the researcher proceed to the second step, in which the aforementioned information criterion is used to select moments. The properties of this two-step procedure are contrasted with those of strategies based on either using all available moments or using the information criterion without the identification pre-test. The performances of these strategies are compared via an evaluation of the finite sample behavior of various methods for inference about the parameter vector. The inference methods considered are based on the Wald statistic, Anderson and Rubin's (1949) statistic, Kleibergen's (2002) K statistic, and combinations thereof in which the choice is based on the outcome of the test for weak identification.
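A rough sketch of the pre-test idea follows (my own construction in a linear instrumental-variables analogue; the authors use the Stock and Yogo (2003) statistic with its tabulated critical values): compute a first-stage F-statistic for instrument strength and only proceed to standard inference when it indicates sufficient identification.

```python
# Minimal sketch: first-stage F-statistic for a single endogenous regressor,
# of the kind compared against weak-identification critical values before
# choosing between robust and standard (Wald-type) inference.
import numpy as np

def first_stage_F(endog, instruments, exog=None):
    """Regress the endogenous regressor on instruments (plus exogenous
    controls) and return the F-statistic for the instruments' joint
    significance."""
    n = endog.shape[0]
    Z = instruments if exog is None else np.column_stack([exog, instruments])
    Z = np.column_stack([np.ones(n), Z])
    k_instr = instruments.shape[1]
    # Unrestricted fit (with instruments)
    beta, *_ = np.linalg.lstsq(Z, endog, rcond=None)
    rss_u = np.sum((endog - Z @ beta) ** 2)
    # Restricted fit (instruments excluded)
    Zr = Z[:, : Z.shape[1] - k_instr]
    beta_r, *_ = np.linalg.lstsq(Zr, endog, rcond=None)
    rss_r = np.sum((endog - Zr @ beta_r) ** 2)
    return ((rss_r - rss_u) / k_instr) / (rss_u / (n - Z.shape[1]))

rng = np.random.default_rng(1)
n = 500
z = rng.normal(size=(n, 2))
x = 0.2 * z[:, 0] + rng.normal(size=n)   # weakly relevant instruments
print("first-stage F:", first_stage_F(x, z))
```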
86.
《商业与经济统计学杂志》2012,30(1):183-200
This article investigates the finite sample properties of a range of inference methods for propensity score-based matching and weighting estimators frequently applied to evaluate the average treatment effect on the treated. We analyze both asymptotic approximations and bootstrap methods for computing variances and confidence intervals in our simulation designs, which are based on German register data and U.S. survey data. We vary the design with respect to treatment selectivity, effect heterogeneity, share of treated, and sample size. The results suggest that, in general, theoretically justified bootstrap procedures (i.e., wild bootstrapping for pair matching and standard bootstrapping for "smoother" treatment effect estimators) dominate the asymptotic approximations in terms of coverage rates for both matching and weighting estimators. Most findings are robust across simulation designs and estimators.
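To make the "standard bootstrapping for weighting estimators" idea concrete, here is a minimal sketch (my own assumptions, not the article's simulation design): a nonparametric bootstrap percentile interval for an inverse-probability-weighting estimator of the average treatment effect on the treated, taking an estimated propensity score vector as given.

```python
# Minimal sketch: standard bootstrap CI for an IPW estimator of the ATT.
# In a full implementation the propensity score would be re-estimated inside
# each bootstrap replication; here pscore is held fixed for brevity.
import numpy as np

def att_ipw(y, d, pscore):
    # ATT weighting: treated units get weight 1, controls get e(x)/(1-e(x))
    w = np.where(d == 1, 1.0, pscore / (1.0 - pscore))
    treated_mean = np.average(y[d == 1])
    control_mean = np.average(y[d == 0], weights=w[d == 0])
    return treated_mean - control_mean

def bootstrap_ci(y, d, pscore, n_boot=999, level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        draws.append(att_ipw(y[idx], d[idx], pscore[idx]))
    lo, hi = np.quantile(draws, [(1 - level) / 2, 1 - (1 - level) / 2])
    return lo, hi

# Tiny synthetic usage
rng = np.random.default_rng(3)
d = rng.integers(0, 2, size=300)
ps = np.clip(rng.beta(2, 2, size=300), 0.05, 0.95)
y = d * 0.5 + rng.normal(size=300)
print(att_ipw(y, d, ps), bootstrap_ci(y, d, ps))
```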
87.
A well-known difficulty in survey research is that respondents’ answers to questions can depend on arbitrary features of a survey’s design, such as the wording of questions or the ordering of answer choices. In this paper, we describe a novel set of tools for analyzing survey data characterized by such framing effects. We show that the conventional approach to analyzing data with framing effects—randomizing survey-takers across frames and pooling the responses—generally does not identify a useful parameter. In its place, we propose an alternative approach and provide conditions under which it identifies the responses that are unaffected by framing. We also present several results for shedding light on the population distribution of the individual characteristic the survey is designed to measure.
88.
The generalized extreme value (GEV) distribution is known as the limiting distribution of block maxima (blocks of size n) and is used in the modeling of extreme events. However, extreme-value data may contain an excessive number of zeros, which makes it difficult to analyze and estimate these events with the usual GEV distribution. Zero-inflated distributions are widely used in the literature for modeling data with excess zeros by introducing an inflation parameter w. The present work develops a new approach for analyzing zero-inflated extreme values and applies it to monthly maximum precipitation data, in which months without precipitation are recorded as zeros. Inference is carried out in the Bayesian paradigm, with parameters estimated by numerical approximation of the posterior distribution using Markov chain Monte Carlo (MCMC) methods. Time series from several cities in the northeastern region of Brazil, some dominated by rainless months, are analyzed. The results show that this approach yields more accurate estimates and better goodness-of-fit measures than the standard extreme-value distribution.
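A minimal sketch of the likelihood being targeted (my own notation, not the authors' code): each observation is zero with probability w and otherwise follows a GEV(μ, σ, ξ) law; a Bayesian analysis would place priors on (w, μ, σ, ξ) and explore the resulting posterior by MCMC.

```python
# Minimal sketch: log-likelihood of a zero-inflated GEV model.
import numpy as np
from scipy import stats

def zigev_loglik(params, y):
    w, mu, sigma, xi = params
    if not (0.0 < w < 1.0) or sigma <= 0.0:
        return -np.inf
    zeros = (y == 0)
    # Zero observations contribute log(w) each
    ll_zero = np.sum(zeros) * np.log(w)
    # Positive observations contribute log(1-w) plus the GEV log-density;
    # scipy's genextreme uses shape c = -xi relative to the usual GEV notation
    ll_pos = np.sum(np.log1p(-w) +
                    stats.genextreme.logpdf(y[~zeros], c=-xi, loc=mu, scale=sigma))
    return ll_zero + ll_pos

# Tiny synthetic usage: 30% zeros, GEV otherwise
rng = np.random.default_rng(5)
y = np.where(rng.random(500) < 0.3, 0.0,
             stats.genextreme.rvs(c=-0.1, loc=50, scale=10, size=500, random_state=rng))
print(zigev_loglik((0.3, 50.0, 10.0, 0.1), y))
```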
89.
The propensity score is an important tool for estimating average treatment effects. In observational studies, however, imbalance in the covariate distributions of the treatment and control groups often produces extreme propensity scores, i.e., scores very close to 0 or 1, which brings the strong ignorability assumption of causal inference close to violation and leads to large bias and variance in the estimated average treatment effect. Li et al. (2018a) proposed covariate balancing weighting, which addresses the impact of extreme propensity scores by achieving weighted balance of the covariate distributions under the unconfoundedness assumption. Building on this, this paper proposes a robust and efficient estimation method based on covariate balancing weighting and improves its robustness in empirical applications by introducing the super learner algorithm; furthermore, it extends the former method to a robust and efficient covariate-balancing-weighted estimator that, in theory, does not rely on the assumptions of the outcome regression model or the propensity score model. Monte Carlo simulations show that the two proposed methods retain very small bias and variance even when both the outcome regression model and the propensity score model are misspecified. In the empirical section, the two methods are applied to right heart catheterization data, and it is found that right heart catheterization increases patient mortality by approximately 6.3%.
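To make the weighting step concrete (a minimal sketch under my own assumptions, not the paper's robust-efficient estimator or its super learner step), the overlap weights of Li et al. (2018) weight treated units by 1 − e(x) and controls by e(x), which mechanically removes the influence of propensity scores near 0 or 1; the weighted difference in means then targets the overlap-population average effect.

```python
# Minimal sketch: overlap weights and the resulting weighted mean difference.
# pscore is assumed to be an estimated propensity score vector in (0, 1).
import numpy as np

def overlap_weight_effect(y, d, pscore):
    # Overlap weights: treated weighted by 1 - e(x), controls by e(x),
    # so units with extreme scores receive negligible weight.
    w_treated = 1.0 - pscore[d == 1]
    w_control = pscore[d == 0]
    mu1 = np.average(y[d == 1], weights=w_treated)
    mu0 = np.average(y[d == 0], weights=w_control)
    return mu1 - mu0
```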
90.
Thiago do Rêgo Sousa, Stephan Haug, Claudia Klüppelberg 《Scandinavian Journal of Statistics》2019,46(3):765-801
We advocate the use of an Indirect Inference method to estimate the parameter of a COGARCH(1,1) process for equally spaced observations. This requires that the true model can be simulated and that a reasonable estimation method is available for an approximate auxiliary model. We follow previous approaches and use linear projections leading to an auxiliary autoregressive model for the squared COGARCH returns. The asymptotic theory of the Indirect Inference estimator relies on a uniform strong law of large numbers and asymptotic normality of the parameter estimates of the auxiliary model, which require continuity and differentiability of the COGARCH process with respect to its parameter and which we prove via Kolmogorov's continuity criterion. This leads to consistent and asymptotically normal Indirect Inference estimates under moment conditions on the driving Lévy process. A simulation study shows that the method yields a substantial finite sample bias reduction compared with previous estimators.
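The skeleton below illustrates the indirect-inference loop described in the abstract (my own sketch, not the authors' code): fit an auxiliary AR model to the squared observed returns, then choose the structural parameter so that the same auxiliary fit on simulated data matches. The structural simulator here is a hypothetical stand-in (a discrete-time GARCH(1,1)) used only to keep the example self-contained; for the paper's method it would be replaced by a COGARCH(1,1) simulator driven by a Lévy process.

```python
# Generic indirect-inference skeleton with an AR(p) auxiliary model on
# squared returns and a stand-in GARCH(1,1) structural simulator.
import numpy as np
from scipy import optimize

def fit_ar_on_squares(returns, p=3):
    """Ordinary least squares AR(p) fit to the squared return series
    (the auxiliary model)."""
    s = returns ** 2
    n = len(s)
    X = np.column_stack([np.ones(n - p)] +
                        [s[p - j - 1 : n - j - 1] for j in range(p)])
    beta, *_ = np.linalg.lstsq(X, s[p:], rcond=None)
    return beta

def simulate_stand_in(theta, n, seed=0):
    """Hypothetical stand-in simulator: discrete-time GARCH(1,1).
    Replace with a COGARCH(1,1) simulator for the actual method."""
    omega, alpha, beta = theta
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    sigma2 = omega / max(1e-8, 1.0 - alpha - beta)  # start near stationary variance
    for t in range(n):
        r[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2
    return r

def indirect_inference(observed, theta0, n_sim=4000):
    """Minimize the distance between auxiliary parameters estimated on
    observed and on simulated data."""
    target = fit_ar_on_squares(observed)
    def distance(theta):
        if np.any(np.asarray(theta) <= 0) or theta[1] + theta[2] >= 1:
            return 1e10
        sim = simulate_stand_in(theta, n_sim, seed=42)  # fixed seed: common random numbers
        return float(np.sum((fit_ar_on_squares(sim) - target) ** 2))
    return optimize.minimize(distance, theta0, method="Nelder-Mead")

obs = simulate_stand_in((0.2, 0.10, 0.80), 4000, seed=7)
print(indirect_inference(obs, theta0=np.array([0.1, 0.05, 0.70])).x)
```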