Full-text access type
Paid full text | 1156 articles |
Free | 32 articles |
Free (domestic) | 6 articles |
Subject category
Management | 56 articles |
Ethnology | 4 articles |
Demography | 5 articles |
Collected works | 24 articles |
Theory and methodology | 13 articles |
General | 195 articles |
Sociology | 26 articles |
Statistics | 871 articles |
Publication year
2024 | 2 articles |
2023 | 13 articles |
2022 | 15 articles |
2021 | 19 articles |
2020 | 27 articles |
2019 | 36 articles |
2018 | 32 articles |
2017 | 66 articles |
2016 | 22 articles |
2015 | 30 articles |
2014 | 41 articles |
2013 | 352 articles |
2012 | 122 articles |
2011 | 27 articles |
2010 | 37 articles |
2009 | 41 articles |
2008 | 32 articles |
2007 | 22 articles |
2006 | 27 articles |
2005 | 24 articles |
2004 | 18 articles |
2003 | 23 articles |
2002 | 26 articles |
2001 | 21 articles |
2000 | 18 articles |
1999 | 10 articles |
1998 | 8 articles |
1997 | 6 articles |
1996 | 4 articles |
1995 | 4 articles |
1994 | 7 articles |
1993 | 7 articles |
1992 | 4 articles |
1991 | 8 articles |
1990 | 8 articles |
1989 | 5 articles |
1988 | 4 articles |
1987 | 3 articles |
1986 | 4 articles |
1984 | 5 articles |
1983 | 3 articles |
1982 | 4 articles |
1981 | 4 articles |
1979 | 2 articles |
1977 | 1 article |
Sort order: 1,194 results found (search time: 187 ms)
1.
Partially linear models are an important class of semiparametric regression models: because they contain both a parametric part and a nonparametric part, they offer greater flexibility and explanatory power than conventional linear models. This article studies statistical inference for fixed-effects partially linear panel data models with locally stationary covariates. We first propose a two-stage estimation method to obtain estimates of the unknown parameters and the nonparametric function, and establish the asymptotic properties of these estimators. We then use an invariance principle to construct uniform confidence bands for the nonparametric function. Finally, numerical simulations and an empirical analysis verify the effectiveness of the method.
2.
Cai Zhibing (蔡之兵), 《吉首大学学报(社会科学版)》 (Journal of Jishou University, Social Sciences Edition), 2020, 41(3): 67-76
Whenever an advanced development system begins to replace a backward one, the world order enters a period of major transition. In China's five thousand years of history, two great historical transitions fundamentally altered the trajectory of Chinese history and profoundly shaped China's current development model. China in the new era, as the object of those two earlier transitions, has both benefited when an advanced development system replaced a backward one, and experienced the shock of holding a backward development system confronted by a more advanced one. The world is now in a third historical transition in which advanced and backward development systems are again changing places. As a principal actor in this third great transition, and against the background of these three overlapping millennial transitions, whether China can effectively understand, adapt to, and make use of the experience and lessons of the first two transitions, and build a systematic, scientific, feasible, and leading socialist system with Chinese characteristics, will determine whether China can emerge from this transition as the holder of an advanced development system and thereby achieve national rejuvenation.
3.
4.
David R. Bickel 《统计学通讯:理论与方法》2020,49(11):2703-2712
Abstract. Confidence sets, p values, maximum likelihood estimates, and other results of non-Bayesian statistical methods may be adjusted to favor sampling distributions that are simple compared to others in the parametric family. The adjustments are derived from a prior likelihood function previously used to adjust posterior distributions.
5.
Common Problems in Japanese-Chinese Simultaneous Interpretation and Their Countermeasures. Cited by: 1 (self-citations: 0; other citations: 1)
Liang Lijuan (梁丽娟), 《广西师范学院学报(哲学社会科学版)》 (Journal of Guangxi Teachers Education University, Philosophy and Social Sciences Edition), 2008, 29(2): 124-127
The differing characteristics of Japanese and Chinese create many difficulties for simultaneous interpretation and affect its quality. In Japanese-Chinese simultaneous interpretation, strategies such as listening and translating early, lengthening or shortening the waiting time, anticipation, compensation, close tracking, segmenting and summarizing, and adjusting speed, intonation, and tone can be used to overcome these difficulties.
6.
Low dose risk estimation via simultaneous statistical inferences. Cited by: 2 (self-citations: 0; other citations: 2)
Walter W. Piegorsch, R. Webster West, Wei Pan, Ralph L. Kodell 《Journal of the Royal Statistical Society. Series C, Applied statistics》2005,54(1):245-258
Summary. The paper develops and studies simultaneous confidence bounds that are useful for making low dose inferences in quantitative risk analysis. Application is intended for risk assessment studies where human, animal or ecological data are used to set safe low dose levels of a toxic agent, but where study information is limited to high dose levels of the agent. Methods are derived for estimating simultaneous, one-sided, upper confidence limits on risk for end points measured on a continuous scale. From the simultaneous confidence bounds, lower confidence limits on the dose that is associated with a particular risk (often referred to as a bench-mark dose) are calculated. An important feature of the simultaneous construction is that any inferences that are based on inverting the simultaneous confidence bounds apply automatically to inverse bounds on the bench-mark dose.
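The inversion step described above can be illustrated with a deliberately simplified sketch: assume a linear excess-risk model r(d) = β·d (the paper's simultaneous construction for continuous endpoints is more involved), so that an upper confidence limit on the slope translates directly into a lower confidence limit on the benchmark dose. The function name and numbers are illustrative, not from the paper.

```python
def bmd_lower_limit(beta_hat, se_beta, z, target_risk):
    """Lower confidence limit on the dose producing `target_risk` excess risk.

    Assumes a linear excess-risk model r(d) = beta * d.
    beta_hat    : estimated slope
    se_beta     : standard error of the slope estimate
    z           : one-sided critical value (e.g. 1.645 for 95% confidence)
    target_risk : benchmark response level (e.g. 0.10)
    """
    # Upper confidence limit on the slope gives an upper limit on risk at
    # every dose; solving beta_upper * d = target_risk inverts that bound
    # into a lower limit on the benchmark dose.
    beta_upper = beta_hat + z * se_beta
    return target_risk / beta_upper

# Example: slope 0.02 per unit dose, SE 0.005, one-sided 95% bound
bmdl = bmd_lower_limit(0.02, 0.005, 1.645, 0.10)
```

Because the risk bound is conservative (high), the resulting dose bound is conservative (low): `bmdl` is necessarily below the point-estimate benchmark dose `0.10 / 0.02 = 5.0`.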
7.
Kepher Henry Makambi 《Statistical Methods and Applications》2002,11(1):127-138
The standard hypothesis testing procedure in meta-analysis (or multi-center clinical trials) in the absence of treatment-by-center interaction relies on approximating the null distribution of the standard test statistic by a standard normal distribution. For relatively small sample sizes, the standard procedure has been shown by various authors to have poor control of the type I error probability, leading to too many liberal decisions. In this article, two test procedures are proposed, which rely on the t-distribution as the reference distribution. A simulation study indicates that the proposed procedures attain significance levels closer to the nominal level compared with the standard procedure.
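The small-sample inflation driving this proposal is easy to reproduce. The following sketch (not the paper's procedures, just the underlying phenomenon) simulates one-sample t statistics under the null with n = 10 and compares rejection rates using the standard normal cutoff 1.960 against the t(9) cutoff 2.262, the usual two-sided 5% critical values.

```python
import math
import random

random.seed(0)
n, reps = 10, 20000
reject_normal = reject_t = 0

for _ in range(reps):
    # Draw a null sample (true mean 0) and form the one-sample t statistic
    x = [random.gauss(0.0, 1.0) for _ in range(n)]
    m = sum(x) / n
    s2 = sum((xi - m) ** 2 for xi in x) / (n - 1)
    t_stat = m / math.sqrt(s2 / n)
    if abs(t_stat) > 1.960:   # normal reference distribution
        reject_normal += 1
    if abs(t_stat) > 2.262:   # t(9) reference distribution
        reject_t += 1

rate_normal = reject_normal / reps  # inflated above the nominal 0.05
rate_t = reject_t / reps            # close to the nominal 0.05
```

With the normal cutoff the empirical type I error comes out around 0.08 rather than 0.05, which is the "too many liberal decisions" pattern the abstract describes.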
8.
Annual concentrations of toxic air contaminants are of primary concern from the perspective of chronic human exposure assessment and risk analysis. Despite recent advances in air quality monitoring technology, resource and technical constraints often impose limitations on the availability of a sufficient number of ambient concentration measurements for performing environmental risk analysis. Therefore, sample size limitations, representativeness of data, and uncertainties in the estimated annual mean concentration must be examined before performing quantitative risk analysis. In this paper, we discuss several factors that need to be considered in designing field-sampling programs for toxic air contaminants and in verifying compliance with environmental regulations. Specifically, we examine the behavior of SO2, TSP, and CO data as surrogates for toxic air contaminants and as examples of point source, area source, and line source-dominated pollutants, respectively, from the standpoint of sampling design. We demonstrate the use of the bootstrap resampling method and normal theory in estimating the annual mean concentration and its 95% confidence bounds from limited sampling data, and illustrate the application of operating characteristic (OC) curves to determine optimum sample size and other sampling strategies. We also outline a statistical procedure, based on a one-sided t-test, that utilizes the sampled concentration data for evaluating whether a sampling site is in compliance with relevant ambient guideline concentrations for toxic air contaminants.
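The bootstrap step mentioned in this abstract, estimating an annual mean and its 95% bounds from a limited sample, can be sketched with the percentile method. The measurements below are made-up illustrative values, not data from the paper.

```python
import random
import statistics

random.seed(1)
# Hypothetical limited sample of ambient concentration measurements (ug/m3)
conc = [12.1, 9.4, 15.2, 8.7, 11.0, 13.5, 10.2, 14.8, 9.9, 12.6]

# Resample with replacement many times and record each resample's mean
boot_means = []
for _ in range(5000):
    resample = [random.choice(conc) for _ in conc]
    boot_means.append(statistics.mean(resample))
boot_means.sort()

mean_est = statistics.mean(conc)
lo = boot_means[int(0.025 * len(boot_means))]  # 2.5th percentile
hi = boot_means[int(0.975 * len(boot_means))]  # 97.5th percentile
```

The interval (lo, hi) is the percentile bootstrap 95% confidence bound on the annual mean; with so few measurements it is noticeably wide, which is exactly the sample-size concern the paper raises.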
9.
EVE BOFINGER 《Australian & New Zealand Journal of Statistics》1994,36(1):59-66
Various authors, given k location parameters, have considered lower confidence bounds on (standardized) differences between the largest and each of the other k - 1 parameters. They have then used these bounds to put lower confidence bounds on the probability of correct selection (PCS) in the same experiment (as was used for finding the lower bounds on differences). It is pointed out that this is an inappropriate inference procedure. Moreover, if the PCS refers to some later experiment it is shown that if a non-trivial confidence bound is possible then it is already possible to conclude, with greater confidence, that correct selection has occurred in the first experiment. The short answer to the question in the title is therefore 'No', but this should be qualified in the case of a Bayesian analysis.
10.
Peter B. Gilbert 《Journal of the Royal Statistical Society. Series C, Applied statistics》2005,54(1):143-158
Summary. To help to design vaccines for acquired immune deficiency syndrome that protect broadly against many genetic variants of the human immunodeficiency virus, the mutation rates at 118 positions in HIV amino-acid sequences of subtype C versus those of subtype B were compared. The false discovery rate (FDR) multiple-comparisons procedure can be used to determine statistical significance. When the test statistics have discrete distributions, the FDR procedure can be made more powerful by a simple modification. The paper develops a modified FDR procedure for discrete data and applies it to the human immunodeficiency virus data. The new procedure detects 15 positions with significantly different mutation rates compared with 11 that are detected by the original FDR method. Simulations delineate conditions under which the modified FDR procedure confers large gains in power over the original technique. In general FDR adjustment methods can be improved for discrete data by incorporating the modification proposed.
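For reference, the original (unmodified) FDR procedure that this paper improves upon is the Benjamini-Hochberg step-up rule: sort the p values, find the largest rank k with p(k) ≤ q·k/m, and reject the k smallest. The sketch below implements that baseline; the p values are illustrative, not the HIV mutation data.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return the indices of hypotheses rejected at FDR level q
    by the original Benjamini-Hochberg step-up procedure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by p value
    k = 0  # largest rank whose p value clears the BH threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    return sorted(order[:k])  # reject the k smallest p values

# Illustrative p values for m = 10 hypotheses
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]
rejected = benjamini_hochberg(pvals, q=0.05)  # → [0, 1]
```

The discrete-data modification in the paper sharpens the thresholds using the attainable p values of each discrete test, which is how it rejects at more positions (15 vs 11) on the same data.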