Full-text access type

| Type | Articles |
| --- | --- |
| Paid full text | 2426 |
| Free | 69 |
| Domestic free | 9 |
Subject classification

| Subject | Articles |
| --- | --- |
| Management | 266 |
| Demography | 11 |
| Collected works | 11 |
| Theory and methodology | 32 |
| General | 66 |
| Sociology | 24 |
| Statistics | 2094 |
Publication year

| Year | Articles |
| --- | --- |
| 2023 | 36 |
| 2022 | 28 |
| 2021 | 40 |
| 2020 | 39 |
| 2019 | 95 |
| 2018 | 113 |
| 2017 | 200 |
| 2016 | 90 |
| 2015 | 84 |
| 2014 | 118 |
| 2013 | 536 |
| 2012 | 201 |
| 2011 | 85 |
| 2010 | 84 |
| 2009 | 102 |
| 2008 | 74 |
| 2007 | 78 |
| 2006 | 73 |
| 2005 | 58 |
| 2004 | 58 |
| 2003 | 38 |
| 2002 | 33 |
| 2001 | 29 |
| 2000 | 35 |
| 1999 | 26 |
| 1998 | 28 |
| 1997 | 21 |
| 1996 | 9 |
| 1995 | 8 |
| 1994 | 14 |
| 1993 | 7 |
| 1992 | 11 |
| 1991 | 13 |
| 1990 | 3 |
| 1989 | 5 |
| 1988 | 7 |
| 1987 | 3 |
| 1986 | 3 |
| 1985 | 4 |
| 1984 | 2 |
| 1983 | 3 |
| 1982 | 5 |
| 1981 | 1 |
| 1980 | 2 |
| 1979 | 1 |
| 1975 | 1 |
Sort order: 2,504 results found (search time: 0 ms)
31.
This paper is the first to apply Elastic Net, a penalization method designed for highly correlated variables, to Bayesian quantile regression for panel data. Based on the asymmetric Laplace prior distribution, the posterior distributions of all parameters are derived and a Gibbs sampler is constructed. To validate the model, the Bayesian Elastic Net quantile regression for panel data (BQR.EN) is compared across a range of scenarios with Bayesian quantile regression (BQR), Bayesian Lasso quantile regression (BLQR), and Bayesian adaptive Lasso quantile regression (BALQR) for panel data. The results show that BQR.EN is well suited to data that are highly correlated, high-dimensional, and heavy-tailed with sharp peaks. Further simulations under different disturbance-term assumptions and different sample sizes confirm the robustness and small-sample properties of the new method. Finally, the economic value added (EVA) of listed internet-finance companies is taken as an empirical application to test the new method's parameter estimation and variable selection in practice; the empirical results are as expected.
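The asymmetric Laplace working likelihood behind Bayesian quantile regression corresponds to the classical check (pinball) loss: maximizing the asymmetric Laplace likelihood at quantile level tau is equivalent to minimizing the check loss. A minimal sketch of that equivalence on synthetic data (this is not the paper's Gibbs sampler; all values are illustrative):

```python
import numpy as np

def check_loss(u, tau):
    """Quantile (check) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

rng = np.random.default_rng(0)
y = rng.standard_normal(10_000)
tau = 0.25

# Minimizing total check loss over a constant recovers the tau-quantile,
# i.e. the mode of the asymmetric-Laplace posterior under a flat prior.
grid = np.linspace(-2.0, 2.0, 801)
total_loss = [check_loss(y - q, tau).sum() for q in grid]
q_hat = grid[int(np.argmin(total_loss))]
```

The grid minimizer `q_hat` agrees with the empirical 25% quantile of `y` up to the grid spacing.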
32.
李章吕 《重庆理工大学学报(社会科学版)》2016,(2):20-24
经济逻辑学是经济学和逻辑学的交叉学科,它的产生有其必然性,我们可以从经济学的基本假设、经济学的研究方法、经济学研究的确定性以及逻辑学的发展等几个角度给予论证。从贝叶斯决策理论的视角看,它是研究经济活动中理性决策和策略推理的科学。在理论驱动力和现实驱动力的双重作用下,经济逻辑的研究已开始了由形式逻辑范式向科学逻辑范式的转向。 相似文献
33.
Ayman Baklizi 《Communications in Statistics - Simulation and Computation》2016,45(8):2937-2946
We consider confidence intervals for the stress–strength reliability Pr(X < Y) in the two-parameter exponential distribution. We derive the Bayesian highest posterior density interval using non-informative prior distributions and compare its performance with intervals based on the generalized pivot variable in terms of coverage probability and expected length. Our simulation study shows that the Bayesian interval performs better according to these criteria, especially when the sample sizes are very small. An example is given.
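For intuition, the stress–strength reliability can be checked by Monte Carlo. In the simpler one-parameter exponential case (the paper treats the two-parameter version), Pr(X < Y) has the closed form rate_x / (rate_x + rate_y); a short sketch with illustrative rate values:

```python
import numpy as np

rng = np.random.default_rng(42)
rate_x, rate_y = 2.0, 1.0                   # illustrative exponential rates
n = 200_000

x = rng.exponential(1.0 / rate_x, size=n)   # "stress" sample
y = rng.exponential(1.0 / rate_y, size=n)   # "strength" sample

r_hat = np.mean(x < y)                      # Monte Carlo estimate of Pr(X < Y)
r_true = rate_x / (rate_x + rate_y)         # closed form: 2/3
```

With 200,000 draws the Monte Carlo estimate lands within about ±0.003 of the closed form.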
34.
After initiation of treatment, HIV viral load has multiphasic changes, which indicates that the viral decay rate is a time-varying process. Mixed-effects models with different time-varying decay rate functions have been proposed in the literature. However, there are two unresolved critical issues: (i) it is not clear which model is more appropriate for practical use, and (ii) the model random errors are commonly assumed to follow a normal distribution, which may be unrealistic and can obscure important features of within- and among-subject variations. Because asymmetry of HIV viral load data is still noticeable even after transformation, it is important to use a more general distribution family that enables the unrealistic normal assumption to be relaxed. We developed skew-elliptical (SE) Bayesian mixed-effects models by considering the model random errors to have an SE distribution. We compared the performance among five SE models that have different time-varying decay rate functions. For each model, we also contrasted the performance under different model random error assumptions such as the normal, Student-t, skew-normal, or skew-t distribution. Two AIDS clinical trial datasets were used to illustrate the proposed models and methods. The results indicate that the model with a time-varying viral decay rate that has two exponential components is preferred. Among the four distribution assumptions, the skew-t and skew-normal models provided better fits to the data than the normal or Student-t models, suggesting that it is important to assume a model with a skewed distribution in order to achieve reasonable results when the data exhibit skewness.
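The skew-normal building block of the SE family can be simulated through Azzalini's convolution representation, Z = delta*|U0| + sqrt(1 - delta^2)*U1 with delta = alpha/sqrt(1 + alpha^2). A brief sketch (the shape parameter value is illustrative, and this stands in for, not reproduces, the paper's models):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 5.0                                    # skewness (shape) parameter
delta = alpha / np.sqrt(1.0 + alpha**2)

n = 100_000
u0 = np.abs(rng.standard_normal(n))            # half-normal component
u1 = rng.standard_normal(n)
z = delta * u0 + np.sqrt(1.0 - delta**2) * u1  # skew-normal(alpha) draws

mean_true = delta * np.sqrt(2.0 / np.pi)       # E[Z] for SN(alpha)
```

A positive alpha yields right-skewed draws, the kind of residual asymmetry the SE error models are designed to absorb.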
35.
36.
Most existing reduced-form macroeconomic multivariate time series models employ elliptical disturbances, so that the forecast densities produced are symmetric. In this article, we use a copula model with asymmetric margins to produce forecast densities with the scope for severe departures from symmetry. Empirical and skew-t distributions are employed for the margins, and a high-dimensional Gaussian copula is used to jointly capture cross-sectional and (multivariate) serial dependence. The copula parameter matrix is given by the correlation matrix of a latent stationary and Markov vector autoregression (VAR). We show that the likelihood can be evaluated efficiently using the unique partial correlations, and estimate the copula using Bayesian methods. We examine the forecasting performance of the model for four U.S. macroeconomic variables between 1975:Q1 and 2011:Q2 using quarterly real-time data. We find that the point and density forecasts from the copula model are competitive with those from a Bayesian VAR. During the recent recession the forecast densities exhibit substantial asymmetry, avoiding some of the pitfalls of the symmetric forecast densities from the Bayesian VAR. We show that the asymmetries in the predictive distributions of GDP growth and inflation are similar to those found in the probabilistic forecasts from the Survey of Professional Forecasters. Last, we find that unlike the linear VAR model, our fitted Gaussian copula models exhibit nonlinear dependencies between some macroeconomic variables. This article has online supplementary material.
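The key property the copula construction relies on is that dependence is separated from the margins: applying any monotone margin transform leaves the rank dependence of a Gaussian copula untouched, with Spearman's rho equal to (6/pi)*arcsin(rho/2). A small self-contained sketch (correlation and margin choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
rho = 0.8
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=50_000)

# Monotone transforms to two different right-skewed margins.
x = np.exp(z[:, 0])
y = np.exp(z[:, 1] / 2.0)

def spearman(a, b):
    """Spearman rank correlation via rank transforms (no ties expected)."""
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    return np.corrcoef(ra, rb)[0, 1]

# Spearman's rho depends only on the copula, not the margins.
rho_s_true = 6.0 / np.pi * np.arcsin(rho / 2.0)
rho_s_hat = spearman(x, y)
```

Despite the skewed, differently scaled margins, the sample Spearman correlation matches the Gaussian-copula value of about 0.786.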
37.
38.
In this paper, we develop Bayes factor based testing procedures for the presence of a correlation or a partial correlation. The proposed Bayesian tests are obtained by restricting the class of alternative hypotheses to maximize the probability of rejecting the null hypothesis when the Bayes factor exceeds a specified threshold. It turns out that the tests depend simply on the frequentist t-statistics and their associated critical values, and can thus be calculated easily in a spreadsheet; in fact, this amounts to adding just one more step after performing the frequentist correlation tests. In addition, they yield a decision identical to that of the frequentist paradigm, provided the evidence threshold of the Bayesian tests is determined by the significance level of the frequentist tests. We illustrate the performance of the proposed procedures through simulated and real-data examples.
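Since the proposed Bayes factors reduce to functions of the usual correlation t-statistic, the frequentist ingredient is easy to reproduce. A sketch of that statistic, t = r*sqrt(n-2)/sqrt(1-r^2), on synthetic data (the data-generating values are illustrative, not from the paper):

```python
import numpy as np

def correlation_t(x, y):
    """Sample correlation and its t-statistic for H0: correlation = 0."""
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt(n - 2) / np.sqrt(1.0 - r**2)
    return r, t

rng = np.random.default_rng(3)
x = rng.standard_normal(100)
y = 0.5 * x + rng.standard_normal(100)   # true correlation about 0.45

r, t = correlation_t(x, y)
```

Comparing `t` against a critical value is the "one more step" a spreadsheet user would take before mapping it to the Bayes factor threshold.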
39.
In this study, the components of extra-Poisson variability are estimated assuming random effect models under a Bayesian approach. A standard existing methodology estimates extra-Poisson variability by assuming a negative binomial distribution, which can capture only one component of that variability. The results show that the proposed random effect model yields more accurate estimates of the extra-Poisson variability components than the negative binomial approach. Some illustrative examples based on real data sets are presented.
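The negative binomial benchmark arises as a Poisson–gamma mixture, whose variance mu + mu^2/shape exceeds the Poisson variance mu. A quick simulation sketch of that overdispersion (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)
mu, shape = 5.0, 2.0

n = 100_000
# Gamma random effect with mean 1 and variance 1/shape scales the Poisson rate.
lam = mu * rng.gamma(shape, 1.0 / shape, size=n)
y = rng.poisson(lam)

# Marginally y is negative binomial: mean mu, variance mu + mu^2/shape = 17.5.
var_true = mu + mu**2 / shape
```

The sample variance is far above the sample mean, which is exactly the extra-Poisson variability the random effect models decompose.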
40.
Fernando Ferraz do Nascimento, Andreson Almeida Azevedo, Valmaria Rocha da Silva Ferraz 《Journal of Applied Statistics》2021,48(16):3048
Extreme Value Theory (EVT) aims to study the tails of probability distributions in order to measure and quantify extreme maxima and minima. In river flow data, an extreme level of a river may be related to the level of a neighboring river that flows into it: flooding at one location is often caused by a very large flow from a tributary tens or hundreds of kilometers away. In this sense, an interesting approach is to consider a conditional model for the estimation of a multivariate model. Inspired by this idea, we propose a Bayesian model with a conditionally independent structure to describe the dependence of exceedances between rivers, in which the dependence is captured by modeling the marginal excesses of one river as linear functions of the excesses of the other rivers. The results show a strong and positive connection between excesses in one river and the excesses of the other rivers.
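The exceedance structure such models build on is the standard peaks-over-threshold decomposition: fix a high threshold and keep only the excesses above it. A minimal sketch on synthetic flow data (the distribution and threshold level are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(5)
flow = rng.gamma(2.0, 50.0, size=10_000)   # synthetic daily river-flow series

u = np.quantile(flow, 0.95)                # high threshold (95th percentile)
excess = flow[flow > u] - u                # peaks-over-threshold excesses
rate = excess.size / flow.size             # empirical exceedance rate
```

These per-river excess series are the quantities whose cross-river dependence the conditional Bayesian model then describes.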