Full-text access type
Paid full text | 1987 articles |
Free | 131 articles |
Free (domestic) | 4 articles |
Subject classification
Management | 188 articles |
Ethnology | 11 articles |
Talent studies | 1 article |
Demography | 90 articles |
Collected works and series | 139 articles |
Theory and methodology | 81 articles |
General | 791 articles |
Sociology | 105 articles |
Statistics | 716 articles |
Publication year
2024 | 1 article |
2023 | 3 articles |
2022 | 24 articles |
2021 | 30 articles |
2020 | 52 articles |
2019 | 56 articles |
2018 | 67 articles |
2017 | 35 articles |
2016 | 52 articles |
2015 | 68 articles |
2014 | 126 articles |
2013 | 286 articles |
2012 | 196 articles |
2011 | 179 articles |
2010 | 141 articles |
2009 | 115 articles |
2008 | 92 articles |
2007 | 85 articles |
2006 | 94 articles |
2005 | 72 articles |
2004 | 73 articles |
2003 | 56 articles |
2002 | 54 articles |
2001 | 46 articles |
2000 | 29 articles |
1999 | 12 articles |
1998 | 10 articles |
1997 | 19 articles |
1996 | 7 articles |
1995 | 5 articles |
1994 | 2 articles |
1993 | 3 articles |
1992 | 2 articles |
1991 | 5 articles |
1990 | 5 articles |
1989 | 2 articles |
1988 | 2 articles |
1987 | 1 article |
1985 | 1 article |
1984 | 2 articles |
1983 | 1 article |
1982 | 5 articles |
1981 | 3 articles |
1979 | 2 articles |
1966 | 1 article |
2122 results in total.
152.
To assess the sustainability of their social security systems, many countries have built long-term financial projection models for their social security funds. Compared with deterministic projection models, stochastic projection models are better at conveying the uncertainty surrounding projection results. The United States leads the world in using stochastic models to project the financial condition of its social security fund, while research on stochastic projection models for the social security fund in China remains essentially blank. This paper compares the long-term stochastic projection models used by the U.S. Social Security Administration and the Congressional Budget Office, offers advice on choosing between the two, and concludes with suggestions for building a long-term projection model for China's social security fund.
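As a rough illustration of the deterministic-versus-stochastic contrast the abstract draws, the following sketch projects a toy fund balance both ways; it is not either agency's model, and every parameter value in it is a hypothetical placeholder.

```python
# Minimal sketch: a deterministic fund projection versus a stochastic one
# that reports a distribution of outcomes. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
years, n_paths = 75, 5000
balance0 = 100.0             # initial fund balance (arbitrary units)
income0, outgo0 = 10.0, 9.0  # first-year contributions and benefits
g_income, g_outgo, rate = 0.03, 0.045, 0.02

def project(income_growth, outgo_growth, interest):
    bal, inc, out = balance0, income0, outgo0
    path = []
    for t in range(years):
        bal = bal * (1 + interest[t]) + inc - out
        inc *= 1 + income_growth[t]
        out *= 1 + outgo_growth[t]
        path.append(bal)
    return np.array(path)

# Deterministic projection: fixed growth and interest rates.
det = project(np.full(years, g_income), np.full(years, g_outgo), np.full(years, rate))

# Stochastic projection: the same recursion, but the rates are drawn each year,
# so the output is a fan of paths rather than a single number.
paths = np.array([
    project(rng.normal(g_income, 0.01, years),
            rng.normal(g_outgo, 0.01, years),
            rng.normal(rate, 0.01, years))
    for _ in range(n_paths)
])
lo, med, hi = np.percentile(paths[:, -1], [10, 50, 90])
print(f"deterministic year-{years} balance: {det[-1]:.1f}")
print(f"stochastic 10/50/90 percentiles:   {lo:.1f} / {med:.1f} / {hi:.1f}")
```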
153.
By the middle of the last century, the development and refinement of factor analysis and canonical correlation analysis had solved the problems of measuring latent variables and quantifying the relationships among them, laying the methodological foundation for causal models with latent variables. Latent variable models were subsequently introduced into econometrics, where they passed through three stages, namely the common-structure paradigm, classical latent variable models, and non-classical latent variable models, and gradually became an important component of modern econometric modelling. This paper examines the development of latent variable models in econometrics from a methodological perspective, compares the characteristics of the modelling methodology at each stage, summarizes the patterns of its evolution, and identifies priority areas for future research.
154.
Journal of Statistical Computation and Simulation, 2012, 82(4): 233-259
Finite sample properties of ML and REML estimators in time series regression models with fractional ARIMA noise are examined. In particular, theoretical approximations for the bias of ML and REML estimators of the noise parameters are developed and their accuracy is assessed through simulations. The impact of noise parameter estimation on the performance of t-statistics and likelihood ratio statistics for testing regression parameters is also investigated.
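The abstract's theme, checking an analytical bias approximation against simulation, can be sketched for a much simpler stand-in model. The code below uses an AR(1) with estimated mean and a textbook first-order bias approximation of the form -(1+3*phi)/T, not the fractional ARIMA regression setting of the paper.

```python
# Sketch: compare a first-order bias approximation with simulated bias for a
# simple AR(1) least-squares estimator (a stand-in for the paper's setting).
import numpy as np

rng = np.random.default_rng(1)
phi, T, reps = 0.6, 50, 5000

def ls_phi(y):
    """Least-squares AR(1) coefficient after mean correction."""
    yc = y - y.mean()
    return np.sum(yc[1:] * yc[:-1]) / np.sum(yc[:-1] ** 2)

est = []
for _ in range(reps):
    e = rng.standard_normal(T + 100)
    y = np.empty(T + 100)
    y[0] = e[0]
    for t in range(1, T + 100):
        y[t] = phi * y[t - 1] + e[t]
    est.append(ls_phi(y[100:]))        # drop the burn-in portion

sim_bias = np.mean(est) - phi
approx_bias = -(1 + 3 * phi) / T       # first-order approximation for the
                                       # mean-corrected least-squares estimator
print(f"simulated bias: {sim_bias:.4f}   approximation: {approx_bias:.4f}")
```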
155.
Journal of Statistical Computation and Simulation, 2012, 82(3): 217-232
Item response models are essential tools for analyzing results from many educational and psychological tests. Such models are used to quantify the probability of correct response as a function of unobserved examinee ability and other parameters explaining the difficulty and the discriminatory power of the questions in the test. Some of these models also incorporate a threshold parameter for the probability of the correct response to account for the effect of guessing the correct answer in multiple choice type tests. In this article we consider fitting such models using the Gibbs sampler. A data augmentation method to analyze a normal-ogive model incorporating a threshold guessing parameter is introduced and compared with a Metropolis-Hastings sampling method. The proposed method is an order of magnitude more efficient than the existing method. Another objective of this paper is to develop Bayesian model choice techniques for model discrimination. A predictive approach based on a variant of the Bayes factor is used and compared with another decision theoretic method which minimizes an expected loss function on the predictive space. A classical model choice technique based on a modified likelihood ratio test statistic is shown as one component of the second criterion. As a consequence, the Bayesian methods proposed in this paper are contrasted with the classical approach based on the likelihood ratio test. Several examples are given to illustrate the methods.
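For readers unfamiliar with the model class, a minimal sketch of a three-parameter normal-ogive response probability (guessing parameter c, discrimination a, difficulty b) and its Bernoulli log-likelihood is given below; the Gibbs sampler and data-augmentation scheme themselves are not reproduced, and all numerical values are hypothetical.

```python
# Sketch of the normal-ogive item response model with a guessing parameter.
import numpy as np
from scipy.stats import norm

def p_correct(theta, a, b, c):
    """P(correct) = c + (1 - c) * Phi(a * (theta - b))."""
    return c + (1.0 - c) * norm.cdf(a * (theta - b))

def log_likelihood(responses, theta, a, b, c):
    """Bernoulli log-likelihood of a 0/1 response matrix (persons x items)."""
    p = p_correct(theta[:, None], a[None, :], b[None, :], c[None, :])
    return np.sum(responses * np.log(p) + (1 - responses) * np.log1p(-p))

# Hypothetical example: 4 examinees, 3 items.
theta = np.array([-1.0, 0.0, 0.5, 1.5])
a = np.array([1.0, 1.2, 0.8])
b = np.array([-0.5, 0.0, 1.0])
c = np.array([0.20, 0.25, 0.20])
rng = np.random.default_rng(2)
responses = (rng.uniform(size=(4, 3)) < p_correct(theta[:, None], a, b, c)).astype(int)
print(log_likelihood(responses, theta, a, b, c))
```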
156.
Journal of Statistical Computation and Simulation, 2012, 82(3): 187-203
When analyzing categorical data using loglinear models in sparse contingency tables, asymptotic results may fail. In this paper the empirical properties of three commonly used asymptotic tests of independence, based on the uniform association model for ordinal data, are investigated by means of Monte Carlo simulation. Five different bootstrapped tests of independence are presented and compared to the asymptotic tests. The comparisons are made with respect to both size and power properties of the tests. Results indicate that the asymptotic tests have poor size control. The test based on the estimated association parameter is severely conservative and the two chi-squared tests (Pearson, likelihood-ratio) are both liberal. The bootstrap tests that either use a parametric assumption or are based on non-pivotal test statistics do not perform better than the asymptotic tests in all situations. The bootstrap tests that are based on approximately pivotal statistics provide both adjustment of size and enhancement of power. These tests are therefore recommended for use in situations similar to those included in the simulation study.
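A minimal sketch of one ingredient, a parametric bootstrap test of independence that resamples tables from the product of estimated margins, is given below. It uses the plain Pearson statistic rather than the uniform association model or the pivotal statistics studied in the paper, and the table is a made-up example.

```python
# Sketch of a parametric bootstrap test of independence for a sparse table.
import numpy as np

rng = np.random.default_rng(3)
table = np.array([[3, 1, 0, 2],
                  [0, 2, 1, 0],
                  [1, 0, 3, 1]], dtype=float)   # hypothetical sparse table

def pearson_stat(t):
    expected = np.outer(t.sum(axis=1), t.sum(axis=0)) / t.sum()
    mask = expected > 0                 # ignore cells in empty rows/columns
    return np.sum((t[mask] - expected[mask]) ** 2 / expected[mask])

obs = pearson_stat(table)

# Resample tables under H0 from the product of estimated marginal probabilities.
n = int(table.sum())
probs = np.outer(table.sum(axis=1), table.sum(axis=0)).ravel()
probs /= probs.sum()
B = 2000
boot = np.array([
    pearson_stat(rng.multinomial(n, probs).reshape(table.shape).astype(float))
    for _ in range(B)
])
print("bootstrap p-value:", (np.sum(boot >= obs) + 1) / (B + 1))
```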
157.
Journal of Statistical Computation and Simulation, 2012, 82(11): 881-894
In time series analysis, a signal extraction model (SEM) is used to estimate an unobserved signal component from observed time series data. Since the parameters of the components in an SEM are often unknown in practice, a commonly used method is to estimate the unobserved signal component using the maximum likelihood estimates (MLEs) of those parameters. This paper explores an alternative way to estimate the unobserved signal component when the parameters of the components are unknown. The suggested method makes use of importance sampling (IS) with Bayesian inference. The basic idea is to treat the parameters of the components in the SEM as a random vector and compute a posterior probability density function of the parameters using Bayesian inference. The IS method is then applied to integrate out the parameters, so that estimates of the unobserved signal component, unconditional on the parameters, can be obtained. The method is illustrated with real time series data. A Monte Carlo study with four different types of time series models is then carried out to compare the performance of this method with that of a commonly used method. The study shows that the IS method with Bayesian inference is computationally feasible and robust, and more efficient in terms of mean square errors (MSEs) than the commonly used method.
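The weighting mechanics can be sketched in a toy one-observation signal-plus-noise setting; this is not the SEM, priors, or data of the paper, only an illustration of treating the parameter as random and integrating it out with importance sampling.

```python
# Toy model: y = s + e, e ~ N(0, 1), s ~ N(0, tau^2), tau unknown
# (half-normal prior). Conditional on tau, E[s | y, tau] = y * tau^2/(tau^2+1);
# IS averages these conditional estimates with posterior weights on tau.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
y = 2.3                                       # hypothetical observation

# Proposal = the half-normal prior on tau, so the IS weight reduces to the
# marginal likelihood p(y | tau) = N(y; 0, tau^2 + 1).
tau = np.abs(rng.normal(0.0, 2.0, size=20000))
weights = norm.pdf(y, loc=0.0, scale=np.sqrt(tau**2 + 1.0))
weights /= weights.sum()

cond_signal = y * tau**2 / (tau**2 + 1.0)     # signal estimate given tau
print("IS estimate of E[s | y]:", np.sum(weights * cond_signal))
```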
158.
Journal of Statistical Computation and Simulation, 2012, 82(10): 831-840
In this article, we propose a new empirical information criterion (EIC) for model selection which penalizes the likelihood of the data by a non-linear function of the number of parameters in the model. It is designed to be used where there are a large number of time series to be forecast. However, a bootstrap version of the EIC can be used where there is a single time series to be forecast. The EIC provides a data-driven model selection tool that can be tuned to the particular forecasting task. We compare the EIC with other model selection criteria including Akaike’s information criterion (AIC) and Schwarz’s Bayesian information criterion (BIC). The comparisons show that for the M3 forecasting competition data, the EIC outperforms both the AIC and BIC, particularly for longer forecast horizons. We also compare the criteria on simulated data and find that the EIC does better than existing criteria in that case also.
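The general shape of such criteria, -2 times the log-likelihood plus a penalty in the number of parameters k, can be sketched as follows. The non-linear "EIC" penalty below is only a placeholder, since the paper's penalty is estimated from the data rather than fixed in advance.

```python
# Sketch: comparing AIC, BIC, and a placeholder non-linear-penalty criterion
# for AR(p) models fitted to a simulated series.
import numpy as np

rng = np.random.default_rng(5)
n = 200
y = np.zeros(n)
for t in range(2, n):                          # simulate an AR(2) series
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

def ar_loglik(y, p):
    """Conditional Gaussian log-likelihood of an AR(p) fitted by least squares."""
    m = len(y)
    X = np.column_stack([np.ones(m - p)] + [y[p - j - 1:m - j - 1] for j in range(p)])
    Y = y[p:]
    resid = Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]
    sigma2 = resid @ resid / len(Y)
    ll = -0.5 * len(Y) * (np.log(2 * np.pi * sigma2) + 1)
    return ll, p + 2                           # p AR coefficients + intercept + variance

for p in range(1, 6):
    ll, k = ar_loglik(y, p)
    aic = -2 * ll + 2 * k
    bic = -2 * ll + k * np.log(n)
    eic = -2 * ll + 2 * k + 0.5 * k**2         # placeholder penalty, NOT the paper's EIC
    print(f"p={p}  AIC={aic:7.1f}  BIC={bic:7.1f}  placeholder-EIC={eic:7.1f}")
```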
159.
Journal of Statistical Computation and Simulation, 2012, 82(2): 149-165
Bayesian estimation via MCMC methods opens up new possibilities in estimating complex models. However, there is still considerable debate about how selection among a set of candidate models, or averaging over closely competing models, might be undertaken. This article considers simple approaches for model averaging and choice using predictive and likelihood criteria and associated model weights on the basis of output for models that run in parallel. The operation of such procedures is illustrated with real data sets and a linear regression with simulated data where the true model is known.
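One simple way to turn fit criteria from parallel model runs into averaging weights is the exponential weighting sketched below; it uses AIC-type weights on simulated regression data as a stand-in, not the predictive and likelihood criteria developed in the article.

```python
# Sketch: exponential criterion weights and model-averaged predictions for
# three candidate polynomial regressions (the true model is linear).
import numpy as np

rng = np.random.default_rng(6)
n = 100
x = rng.uniform(-2, 2, n)
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)

candidates = {"deg1": 1, "deg2": 2, "deg3": 3}
crit, preds = {}, {}
x_new = np.linspace(-2, 2, 5)
for name, deg in candidates.items():
    coefs = np.polyfit(x, y, deg)
    resid = y - np.polyval(coefs, x)
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = deg + 2                                # deg+1 coefficients plus error variance
    crit[name] = -2 * loglik + 2 * k           # AIC as the stand-in criterion
    preds[name] = np.polyval(coefs, x_new)

delta = np.array([crit[m] for m in candidates]) - min(crit.values())
w = np.exp(-0.5 * delta)
w /= w.sum()
avg_pred = sum(wi * preds[m] for wi, m in zip(w, candidates))
print(dict(zip(candidates, np.round(w, 3))))
print("model-averaged predictions:", np.round(avg_pred, 2))
```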
160.
Journal of Statistical Computation and Simulation, 2012, 82(2): 155-164
The sampling distribution of Kendall's partial rank correlation coefficient, τ_{xy·z}, is not known for N > 4, where N is the number of subjects. Moran (1951) used a direct combinatorial method to obtain the distribution of τ_{xy·z} for N = 4; however, ten minor computational errors in his Table 2 apparently resulted in erroneous entries in his frequency table. Since the practical limits of the direct combinatorial approach are reached once N > 4, the first main objective of this paper was to obtain the exact distribution of τ_{xy·z} for N = 5, 6, and 7 using an electronic computer. The second was to use the Monte Carlo method to obtain reliable estimates of the quantiles of τ_{xy·z} for N = 8, 9, ..., 30.
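The Monte Carlo part of the study can be sketched as follows, using the standard partial-tau formula built from the three pairwise Kendall coefficients; the sample size, replication count, and random rankings below are illustrative only.

```python
# Sketch: Monte Carlo null distribution of Kendall's partial rank correlation
# tau_{xy.z} = (tau_xy - tau_xz * tau_yz) / sqrt((1 - tau_xz^2)(1 - tau_yz^2)).
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(7)

def partial_tau(x, y, z):
    txy = kendalltau(x, y)[0]          # pairwise Kendall tau statistics
    txz = kendalltau(x, z)[0]
    tyz = kendalltau(y, z)[0]
    return (txy - txz * tyz) / np.sqrt((1 - txz**2) * (1 - tyz**2))

N, reps = 10, 20000                    # N subjects per replicate
stats = np.array([
    partial_tau(rng.permutation(N), rng.permutation(N), rng.permutation(N))
    for _ in range(reps)
])
print("estimated 90th/95th/99th percentiles under independence:",
      np.round(np.percentile(stats, [90, 95, 99]), 3))
```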