Full-text access type
Paid full text | 91 articles
Free | 0 articles
Subject classification
Management science | 3 articles
Ethnology | 1 article
Demography | 1 article
Book series and collected works | 3 articles
Theory and methodology | 1 article
General | 11 articles
Sociology | 1 article
Statistics | 70 articles
Publication year
2021 | 1 article
2020 | 3 articles
2019 | 9 articles
2018 | 2 articles
2017 | 2 articles
2016 | 2 articles
2014 | 2 articles
2013 | 30 articles
2012 | 10 articles
2011 | 2 articles
2010 | 3 articles
2007 | 3 articles
2006 | 2 articles
2005 | 1 article
2004 | 3 articles
2003 | 3 articles
2002 | 1 article
2001 | 3 articles
1998 | 1 article
1997 | 2 articles
1994 | 2 articles
1993 | 1 article
1988 | 1 article
1984 | 1 article
1982 | 1 article
A total of 91 results were returned for the query.
71.
Ronald D. Fricker Jr., Katherine Burke, Xiaoyan Han, William H. Woodall. The American Statistician, 2019, 73(1): 374-384
In this article, we assess the 31 articles published in Basic and Applied Social Psychology (BASP) in 2016, which is one full year after the BASP editors banned the use of inferential statistics. We discuss how the authors collected their data, how they reported and summarized their data, and how they used their data to reach conclusions. We found multiple instances of authors overstating conclusions beyond what the data would support if statistical significance had been considered. Readers would be largely unable to recognize this because the necessary information to do so was not readily available.
72.
In the framework of null hypothesis significance testing for functional data, we propose a procedure able to select the intervals of the domain responsible for the rejection of a null hypothesis. The procedure, called interval-wise testing, outputs an unadjusted p-value function and an adjusted one. Depending on the type and level α of type-I error control required, significant intervals can be selected by thresholding the two p-value functions at level α. We prove that the unadjusted (adjusted) p-value function controls the probability of type-I error point-wise (interval-wise) and is point-wise (interval-wise) consistent. To illustrate the gain in interpretability of the phenomenon under study, we apply interval-wise testing to the analysis of a benchmark functional data set, the Canadian daily temperatures. The new procedure provides insights that current state-of-the-art procedures do not, suggesting similar advantages in the analysis of functional data for which less prior knowledge is available.
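The selection step amounts to thresholding the two p-value functions. Below is a minimal Python sketch, assuming the unadjusted and adjusted p-value functions have already been evaluated on a discrete grid of the domain; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def select_intervals(p_unadjusted, p_adjusted, alpha=0.05):
    """Threshold the two p-value functions at level alpha.

    Returns boolean masks over the grid: points flagged as significant
    under point-wise control (unadjusted p-values) and under
    interval-wise control (adjusted p-values)."""
    pointwise_sig = p_unadjusted < alpha
    intervalwise_sig = p_adjusted < alpha
    return pointwise_sig, intervalwise_sig

# Hypothetical p-value functions on a grid of 365 days.
rng = np.random.default_rng(0)
p_unadj = rng.uniform(size=365)
p_adj = np.clip(3 * p_unadj, 0.0, 1.0)  # adjusted p-values dominate the unadjusted ones
pointwise, intervalwise = select_intervals(p_unadj, p_adj, alpha=0.05)
print(pointwise.sum(), intervalwise.sum())
```

Since the adjusted p-value function is never smaller than the unadjusted one, the interval-wise selection is the more conservative of the two.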
73.
As one of the most rapidly developing scientific research methods of our time, the Bayesian approach has penetrated every corner of scientific inquiry, yet its inherent labels of "prior" and "subjectivity" have drawn it into unnecessary controversy. Through a Bayesian examination of the principle of indifference and the invariance principle, we find that the lack of constraints on prior probabilities, that is, their subjectivity, is in fact a strength of Bayesianism; more importantly, this subjectivity is itself defined by a rigorous "inference engine", namely inductive logic.
74.
Jinwen Chen. Communications in Statistics: Theory and Methods, 2013, 42(7): 1247-1257
In this article, we consider the problem of estimating certain “parameters” in a mixture of probability measures. We show that a single sample is typically suitable for estimating the component measures, but not suitable for estimating the mixing measures, especially when consistency is required. To have consistent estimators of the mixing measure, several samples with increasing size are needed in general.
75.
This paper defines strategy entropy, proves that maximum strategy entropy is a necessary and sufficient condition for Nash equilibrium, and explores a method of estimating players' mixed strategies by the entropy-maximization criterion. The method yields the same Nash equilibrium solutions as traditional approaches, showing that a Nash equilibrium strategy is the maximum-entropy strategy under the given payoff constraints. This provides an information-theoretic interpretation of Nash equilibrium and, at the same time, a maximum-entropy estimation method for computing it; a numerical example verifies the feasibility and effectiveness of the method.
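As a toy illustration of the entropy-maximization criterion, the sketch below recovers the row player's mixed Nash strategy in matching pennies by maximizing strategy entropy subject to the opponent's indifference (payoff) constraint. This is a hypothetical example, not the article's algorithm or numerical case; the payoff matrices and function names are my own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Matching pennies (zero-sum): row player's payoffs A, column player's payoffs B = -A.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
B = -A

def neg_entropy(p):
    """Negative strategy entropy; minimizing it maximizes entropy."""
    p = np.clip(p, 1e-12, 1.0)
    return float(np.sum(p * np.log(p)))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},          # p is a probability vector
    {"type": "eq", "fun": lambda p: (p @ B)[0] - (p @ B)[1]},  # column player indifferent against p
]
res = minimize(neg_entropy, x0=[0.6, 0.4],
               bounds=[(0.0, 1.0), (0.0, 1.0)], constraints=constraints)
print(res.x)  # approximately [0.5, 0.5], the row player's mixed Nash strategy
```

The indifference equation plays the role of the payoff constraint in the abstract; among all strategies satisfying it, the entropy-maximizing one coincides with the Nash strategy in this toy case.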
76.
Wang Zuoli. Nankai Journal (Philosophy and Social Sciences Edition), 2006, (6): 106-113
Like inductive logic, deductive logic also faces the problem of its own rationality, and the justification of deduction is an important issue in the philosophy of logic. Deduction cannot be justified by induction, and attempts to justify deduction by deduction also fail. Past justifications of deduction fail for reasons beyond circularity: justifying an object-language theory within a metalanguage leads to an infinite regress and never escapes the bounds of language. Deductive logic is a reasoning tool that people have invented, and at the same time a game played strictly according to rules; it admits of no ontological or epistemological justification.
77.
Li Suying. Journal of Liaocheng University (Social Sciences Edition), 2012, (1): 68-71
This article examines several special interrogative adverbs in the Liaozhai liqu (vernacular ballads). First, it argues that the interrogative adverbs "难道", "每哩", and "没哩" express conjecture rather than rhetorical questioning, and it traces their historical origins. Second, it analyzes the construction "可VP(么)" in detail, pointing out that "可" carries three pragmatic functions: conjecture, rhetorical questioning, and emphasis.
78.
The generalized extreme value (GEV) distribution arises as the limiting distribution of block maxima of size n and is used in the modeling of extreme events. However, extreme data may contain an excessive number of zeros, which makes it difficult to analyze and estimate such events with the usual GEV distribution. Zero-inflated distributions (ZID) are widely used in the literature for modeling data with excess zeros by introducing an inflation parameter w. The present work develops a new approach to the analysis of zero-inflated extreme values and applies it to monthly maximum precipitation data, in which months with no precipitation are recorded as zeros. Inference is carried out under the Bayesian paradigm, with parameters estimated from numerical approximations of the posterior distribution obtained by Markov Chain Monte Carlo (MCMC) methods. Time series from several cities in the northeastern region of Brazil are analyzed, some of them dominated by non-rainy months. The results show that this approach yields more accurate estimates and better goodness-of-fit measures than the standard extreme-value distribution.
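A rough Python sketch of the zero-inflated GEV model described above, using the abstract's notation w for the inflation parameter. The function names, the simulation setup, and the use of scipy's GEV parameterization (shape c = -xi) are my own assumptions; the Bayesian MCMC fitting step is not shown.

```python
import numpy as np
from scipy.stats import genextreme

def rzigev(n, w, mu, sigma, xi, seed=None):
    """Simulate n monthly maxima from a zero-inflated GEV: with probability w
    the month is dry (maximum recorded as 0), otherwise the maximum follows
    GEV(mu, sigma, xi). Note that scipy's genextreme uses shape c = -xi."""
    rng = np.random.default_rng(seed)
    dry = rng.random(n) < w
    gev = genextreme.rvs(c=-xi, loc=mu, scale=sigma, size=n, random_state=rng)
    return np.where(dry, 0.0, gev)

def zigev_loglik(x, w, mu, sigma, xi):
    """Log-likelihood: point mass at zero (weight w) mixed with a GEV (weight 1 - w)."""
    x = np.asarray(x)
    ll_zero = np.sum(x == 0) * np.log(w)
    ll_pos = np.sum(np.log(1.0 - w) +
                    genextreme.logpdf(x[x > 0], c=-xi, loc=mu, scale=sigma))
    return ll_zero + ll_pos

sample = rzigev(240, w=0.3, mu=50.0, sigma=20.0, xi=0.1, seed=1)
print(zigev_loglik(sample, w=0.3, mu=50.0, sigma=20.0, xi=0.1))
```

A Bayesian fit would place priors on (w, mu, sigma, xi) and sample the posterior with MCMC, for example a random-walk Metropolis update built on this log-likelihood plus the log-priors.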
79.
A well-known difficulty in survey research is that respondents’ answers to questions can depend on arbitrary features of a survey’s design, such as the wording of questions or the ordering of answer choices. In this paper, we describe a novel set of tools for analyzing survey data characterized by such framing effects. We show that the conventional approach to analyzing data with framing effects—randomizing survey-takers across frames and pooling the responses—generally does not identify a useful parameter. In its place, we propose an alternative approach and provide conditions under which it identifies the responses that are unaffected by framing. We also present several results for shedding light on the population distribution of the individual characteristic the survey is designed to measure.
80.
Hall et al. (2007) propose a method for moment selection based on an information criterion that is a function of the entropy of the limiting distribution of the Generalized Method of Moments (GMM) estimator. They establish the consistency of the method subject to certain conditions, including that the parameter vector be identified by at least one of the moment conditions under consideration. In this article, we examine the limiting behavior of this moment selection method when the parameter vector is weakly identified by all the moment conditions being considered. It is shown that the selected moment condition is random and hence not consistent in any meaningful sense. As a result, we propose a two-step procedure for moment selection in which identification is first tested using a statistic proposed by Stock and Yogo (2003), and only if this statistic indicates identification does the researcher proceed to the second step, in which the aforementioned information criterion is used to select moments. The properties of this two-step procedure are contrasted with those of strategies based on either using all available moments or using the information criterion without the identification pre-test. The performance of these strategies is compared via an evaluation of the finite-sample behavior of various methods for inference about the parameter vector. The inference methods considered are based on the Wald statistic, Anderson and Rubin's (1949) statistic, Kleibergen's (2002) K statistic, and combinations thereof in which the choice depends on the outcome of the test for weak identification.
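A minimal sketch of the two-step selection logic for a single endogenous regressor in a linear instrumental-variables model. Everything here is a simplification introduced for illustration: the pre-test is the ordinary first-stage F statistic with the common rule-of-thumb cutoff of 10 rather than the exact Stock and Yogo procedure, and the selection criterion is a crude log-variance plus BIC-type penalty stand-in rather than the entropy-based criterion of Hall et al. (2007); all names are illustrative.

```python
import numpy as np
from itertools import combinations

def first_stage_F(x, Z):
    """First-stage F statistic for regressing the endogenous x on the
    instruments Z (plus an intercept); a crude identification pre-test."""
    n, k = Z.shape
    Zc = np.column_stack([np.ones(n), Z])
    coef, *_ = np.linalg.lstsq(Zc, x, rcond=None)
    rss = np.sum((x - Zc @ coef) ** 2)
    tss = np.sum((x - x.mean()) ** 2)
    return ((tss - rss) / k) / (rss / (n - k - 1))

def selection_criterion(y, x, Z):
    """Stand-in information criterion: log of the estimated asymptotic
    variance of the 2SLS slope plus a BIC-type penalty on the number of
    moment conditions (not Hall et al.'s entropy-based criterion)."""
    n, k = Z.shape
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instruments
    beta = (x @ Pz @ y) / (x @ Pz @ x)
    u = y - beta * x
    avar = (u @ u / n) / (x @ Pz @ x)
    return np.log(avar) + k * np.log(n) / n

def two_step_selection(y, x, Z_all, f_cutoff=10.0):
    """Step 1: pre-test identification; step 2: select the instrument subset
    minimizing the criterion, but only if the pre-test passes."""
    if first_stage_F(x, Z_all) < f_cutoff:
        return None  # weak identification flagged; selection would be unreliable
    m = Z_all.shape[1]
    subsets = [list(s) for r in range(1, m + 1) for s in combinations(range(m), r)]
    return min(subsets, key=lambda s: selection_criterion(y, x, Z_all[:, s]))
```

The point of the sketch is only the control flow: moments are selected with the information criterion solely when the identification pre-test passes; the subsequent Wald, Anderson-Rubin, or K-statistic inference is not shown.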