Full-text access type
Paid full text | 5263 articles |
Free | 122 articles |
Free (domestic) | 30 articles |
Subject classification
Management | 357 articles |
Ethnology | 13 articles |
Demography | 67 articles |
Collected series | 102 articles |
Theory and methodology | 67 articles |
General | 1060 articles |
Sociology | 145 articles |
Statistics | 3604 articles |
Publication year
2024 | 12 articles |
2023 | 33 articles |
2022 | 65 articles |
2021 | 77 articles |
2020 | 99 articles |
2019 | 189 articles |
2018 | 220 articles |
2017 | 330 articles |
2016 | 214 articles |
2015 | 175 articles |
2014 | 227 articles |
2013 | 1225 articles |
2012 | 423 articles |
2011 | 206 articles |
2010 | 189 articles |
2009 | 188 articles |
2008 | 200 articles |
2007 | 161 articles |
2006 | 149 articles |
2005 | 147 articles |
2004 | 135 articles |
2003 | 99 articles |
2002 | 96 articles |
2001 | 71 articles |
2000 | 86 articles |
1999 | 65 articles |
1998 | 66 articles |
1997 | 48 articles |
1996 | 28 articles |
1995 | 34 articles |
1994 | 21 articles |
1993 | 22 articles |
1992 | 23 articles |
1991 | 12 articles |
1990 | 11 articles |
1989 | 7 articles |
1988 | 8 articles |
1987 | 6 articles |
1986 | 3 articles |
1985 | 9 articles |
1984 | 7 articles |
1983 | 7 articles |
1982 | 5 articles |
1981 | 3 articles |
1980 | 6 articles |
1979 | 1 article |
1978 | 1 article |
1977 | 5 articles |
1975 | 1 article |
5415 results in total.
161.
A Study of Factors Affecting Stock Returns in the Shanghai Stock Market. Total citations: 14 (self-citations: 3; citations by others: 14)
Using the monthly returns, prices, market capitalizations, and financial statement data of all A-share stocks on the Shanghai stock market from July 1995 to June 2000, this study applies the Fama-MacBeth regression method and dynamically constructed portfolios to analyse how total market capitalization, tradable market capitalization, price, book-to-market ratio, price-earnings ratio, book debt-to-asset ratio, and related factors affect stock returns. The Shanghai stock market is found to exhibit significant size, book-to-market, price-earnings, and price effects, and these effects cannot be explained by the stocks' beta values. It is also found that the Fama-French three-factor model cannot fully explain these effects, whereas augmenting the three-factor model with an additional price-earnings factor explains them well.
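The two-stage Fama-MacBeth procedure described above can be sketched as follows. This is an illustration on synthetic data, not the paper's sample: the two characteristics (size and book-to-market) and all simulated premia are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 60, 100  # months, stocks (synthetic illustration)

# Hypothetical firm characteristics: log market cap and book-to-market ratio
size = rng.normal(10, 1, size=(T, N))
bm = rng.normal(0.8, 0.3, size=(T, N))
# Simulated monthly returns with a negative size premium and a positive B/M premium
ret = -0.002 * size + 0.01 * bm + rng.normal(0, 0.05, size=(T, N))

def cross_sectional_ols(y, X):
    """OLS coefficients (with intercept) for one month's cross-section."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta

# Stage 1: one cross-sectional regression of returns on characteristics per month
gammas = np.array([
    cross_sectional_ols(ret[t], np.column_stack([size[t], bm[t]]))
    for t in range(T)
])

# Stage 2: Fama-MacBeth estimates are the time-series means of the monthly
# slopes; standard errors come from the time-series variation of those slopes
fm_coef = gammas.mean(axis=0)
fm_se = gammas.std(axis=0, ddof=1) / np.sqrt(T)
t_stat = fm_coef / fm_se
print(fm_coef, t_stat)
```

The time-series standard errors in stage 2 are what make the procedure robust to cross-sectional correlation in returns, which is why it remains standard for characteristic-premium studies like this one.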
162.
Frank Kleibergen 《Econometrica : journal of the Econometric Society》2002,70(5):1781-1803
We propose a novel statistic for conducting joint tests on all the structural parameters in instrumental variables regression. The statistic is straightforward to compute and equals a quadratic form of the score of the concentrated log-likelihood. It therefore attains its minimal value of zero at the maximum likelihood estimator. The statistic has a χ2 limiting distribution with a degrees-of-freedom parameter equal to the number of structural parameters. The limiting distribution does not depend on nuisance parameters. The statistic overcomes the deficiencies of the Anderson–Rubin statistic, whose limiting distribution has a degrees-of-freedom parameter equal to the number of instruments, and of the likelihood-based Wald, likelihood ratio, and Lagrange multiplier statistics, whose limiting distributions depend on nuisance parameters. Size and power comparisons reveal that the statistic is an (asymptotic) size-corrected likelihood ratio statistic. We apply the statistic to the Angrist–Krueger (1991) data and find results similar to those in Staiger and Stock (1997).
163.
《Journal of Statistical Computation and Simulation》2012,82(9):1782-1792
An empirical likelihood ratio test is developed for testing for or against inequality constraints on regression parameters in linear regression analysis. The proposed approach imposes neither a parametric model nor an identical-distribution assumption on the random errors. The asymptotic distribution of the proposed test statistic under the null hypothesis is shown to be of chi-bar-squared type. The asymptotic power under contiguous alternatives is also briefly discussed. Moreover, an adjusted empirical likelihood method is adopted to improve the small-sample behaviour of the proposed test. Several simulation studies are carried out to assess the finite-sample performance of the proposed tests. The results reveal that the proposed tests could be valuable for improving inference efficiency. A real-life example is discussed to illustrate the theoretical results.
164.
《Journal of Statistical Computation and Simulation》2012,82(9):1902-1916
Statistical methods for variable selection and prediction can be challenging when covariates are missing. Although multiple imputation (MI) is a universally accepted technique for handling the missing-data problem, how to combine MI results for variable selection is not entirely clear, because different imputations may result in different selections. Widely applied variable selection methods include the sparse partial least-squares (SPLS) method and penalized least-squares methods, e.g. the elastic net (ENet). In this paper, we propose an MI-based weighted elastic net (MI-WENet) method that is based on stacked MI data and a weighting scheme for each observation in the stacked data set. In the MI-WENet method, MI accounts for sampling and imputation uncertainty in the missing values, and the weight accounts for the observed information. Extensive numerical simulations are carried out to compare the proposed MI-WENet method with competing alternatives, such as SPLS and ENet. In addition, we apply the MI-WENet method to examine the predictor variables for endothelial function, characterized by median effective dose (ED50) and maximum effect (Emax) in an ex-vivo phenylephrine-induced extension and acetylcholine-induced relaxation experiment.
165.
《Journal of Statistical Computation and Simulation》2012,82(14):2793-2807
In this article, we investigate quantile regression analysis for semi-competing risks data, in which a non-terminal event may be dependently censored by a terminal event. Owing to the dependent censoring, estimating quantile regression coefficients for the non-terminal event is difficult. To handle this problem, we assume an Archimedean copula to specify the dependence between the non-terminal and terminal events. Portnoy [Censored regression quantiles. J Amer Statist Assoc. 2003;98:1001–1012] considered the quantile regression model under right-censored data. We extend his approach to construct a weight function, and then use that weight function to estimate the quantile regression parameters for the non-terminal event under semi-competing risks data. We also prove consistency and asymptotic properties of the proposed estimator. Simulation studies show that the proposed method performs well. We also apply our approach to analyse a real data set.
166.
《Journal of Statistical Computation and Simulation》2012,82(14):2808-2822
High-throughput data analyses are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. The false discovery rate (FDR) has been considered a proper type I error rate to control in discovery-based high-throughput data analysis, and various multiple testing procedures have been proposed to control it. However, the power and stability properties of commonly used multiple testing procedures have not yet been extensively investigated. Simulation studies were conducted to compare the power and stability properties of five widely used multiple testing procedures at different proportions of true discoveries, for various sample sizes, and for both independent and dependent test statistics. Storey's two linear step-up procedures showed the best performance among all tested procedures in terms of FDR control, power, and variance of true discoveries. Leukaemia and ovarian cancer microarray studies are used to illustrate the power and stability characteristics of these five multiple testing procedures with FDR control.
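The linear step-up idea behind these procedures can be sketched with the Benjamini-Hochberg version (Storey's variants additionally replace the total test count m with an estimate of the number of true nulls, which the abstract's comparison covers). The p-values below are made up for illustration:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg linear step-up procedure.

    Returns a boolean array marking the hypotheses rejected at FDR level alpha:
    find the largest i with p_(i) <= alpha * i / m and reject all smaller p-values.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest index passing the step-up line
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.21, 0.5]
reject = benjamini_hochberg(pvals)
print(reject)
```

Note that rejection is decided by the largest passing index, not the first failure: a p-value above its own threshold can still be rejected if some larger p-value falls below its line.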
167.
《Journal of Statistical Computation and Simulation》2012,82(14):2874-2902
We propose tests for parameter constancy in the time-series direction in panel data models. We construct a locally best invariant test based on Tanaka [Time series analysis: nonstationary and noninvertible distribution theory. New York: Wiley; 1996] and an asymptotically point optimal test based on Elliott and Müller [Efficient tests for general persistent time variation in regression coefficients. Rev Econ Stud. 2006;73:907–940]. We derive the limiting distributions of the test statistics as T→∞ with N fixed, and calculate the critical values by numerical integration and response surface regression. Simulation results show that the proposed tests perform well when applied appropriately.
168.
《Journal of Statistical Computation and Simulation》2012,82(18):3744-3754
One advantage of quantile regression, relative to ordinary least-squares (OLS) regression, is that quantile regression estimates are more robust against outliers and non-normal errors in the response measurements. However, the relative efficiency of the quantile regression estimator with respect to the OLS estimator can be arbitrarily small. To overcome this problem, composite quantile regression methods have been proposed in the literature; they are resistant to heavy-tailed errors and outliers in the response while being more efficient than the traditional single-quantile-based method. This paper studies composite quantile regression from a Bayesian perspective. The advantage of the Bayesian hierarchical framework is that the weight of each component in the composite model can be treated as an open parameter and estimated automatically through a Markov chain Monte Carlo sampling procedure. Moreover, lasso regularization can be naturally incorporated into the model to perform variable selection. The performance of the proposed method over the single-quantile-based method is demonstrated via extensive simulations and real data analysis.
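A minimal sketch of the frequentist composite quantile regression objective may help fix ideas: one intercept per quantile level, a single shared slope, and (here) equal, fixed weights; in the paper's Bayesian version those weights would instead be estimated via MCMC, and a lasso penalty could be added to the objective. The data and quantile grid below are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(-2, 2, n)
# Heavy-tailed t(2) errors; true intercept 0.5, true slope 1.5
y = 0.5 + 1.5 * x + rng.standard_t(df=2, size=n)

taus = np.array([0.25, 0.5, 0.75])  # equally weighted quantile levels

def check_loss(u, tau):
    """Quantile (pinball) loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def cqr_objective(params):
    """Composite quantile loss: intercepts b_1..b_K plus one common slope."""
    b, beta = params[:-1], params[-1]
    return sum(check_loss(y - bk - beta * x, tau).sum()
               for bk, tau in zip(b, taus))

res = minimize(cqr_objective, x0=np.zeros(len(taus) + 1),
               method="Nelder-Mead",
               options={"maxiter": 20000, "maxfev": 20000,
                        "xatol": 1e-6, "fatol": 1e-6})
beta_hat = res.x[-1]
print(beta_hat)
```

Pooling several quantile levels through one slope is what buys the efficiency gain over a single median regression under heavy-tailed errors.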
169.
《Journal of Statistical Computation and Simulation》2012,82(17):3456-3481
In this paper we introduce the exponentiated Fréchet regression for modelling positive responses with long-tailed distributions, which are common in actuarial statistics. We propose two parameterizations, each of which links the regression parameters with the explanatory variables. We then discuss maximum likelihood estimation of the parameters, both theoretically and empirically. To meet the needs of an actuary, closed-form expressions for certain risk measures of the exponentiated Fréchet distribution are also derived. We apply the proposed model to a motorcycle claim-size data set.
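Under the common Nadarajah-Kotz parameterization F(x) = 1 − [1 − exp(−(σ/x)^λ)]^α (an assumption here; the paper's own two parameterizations may differ), the quantile function inverts in closed form, which is convenient both for inverse-transform simulation and for quantile-based risk measures such as value-at-risk:

```python
import numpy as np

def ef_cdf(x, alpha, sigma, lam):
    """CDF of the exponentiated Fréchet distribution:
    F(x) = 1 - [1 - exp(-(sigma/x)^lam)]^alpha, for x > 0."""
    return 1.0 - (1.0 - np.exp(-((sigma / x) ** lam))) ** alpha

def ef_quantile(u, alpha, sigma, lam):
    """Closed-form inverse of ef_cdf; Q(u) doubles as the level-u VaR
    and as an inverse-transform sampler when fed uniform variates."""
    return sigma * (-np.log(1.0 - (1.0 - u) ** (1.0 / alpha))) ** (-1.0 / lam)

# Inverse-transform sampling with illustrative (made-up) parameter values
rng = np.random.default_rng(2)
u = rng.uniform(size=10000)
sample = ef_quantile(u, alpha=1.8, sigma=2.0, lam=2.5)
print(sample.mean())
```

The α exponent acts on the survival function of the ordinary Fréchet, so α = 1 recovers the base distribution and α ≠ 1 tilts the tail weight.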
170.
《Journal of Statistical Computation and Simulation》2012,82(2):155-164
The sampling distribution of Kendall's partial rank correlation coefficient, τxy·z, is not known for N>4, where N is the number of subjects. Moran (1951) used a direct combinatorial method to obtain the distribution of τxy·z for N=4; however, minor computational errors in his Table 2 apparently resulted in erroneous entries in his frequency table. Since the practical limits of the direct combinatorial approach are reached once N>4, the first main objective of this paper was to obtain the exact distribution of τxy·z for N=5, 6, and 7 using an electronic computer. The second was to use the Monte Carlo method to obtain reliable estimates of the quantiles of τxy·z for N=8,9,...,30.
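While the paper's contribution is tabulating the exact and Monte Carlo null distributions, the statistic itself is easy to compute from the three pairwise Kendall taus via the standard partial-correlation identity. A sketch (not Moran's combinatorial algorithm):

```python
import numpy as np
from scipy.stats import kendalltau

def kendall_partial_tau(x, y, z):
    """Kendall partial rank correlation tau_{xy.z}: the rank association
    between x and y with the influence of z partialled out, computed as
    (t_xy - t_xz * t_yz) / sqrt((1 - t_xz^2) * (1 - t_yz^2))."""
    t_xy, _ = kendalltau(x, y)
    t_xz, _ = kendalltau(x, z)
    t_yz, _ = kendalltau(y, z)
    return (t_xy - t_xz * t_yz) / np.sqrt((1.0 - t_xz ** 2) * (1.0 - t_yz ** 2))

# Mutually independent variables: the partial tau should be near zero
rng = np.random.default_rng(3)
x, y, z = rng.normal(size=(3, 500))
print(kendall_partial_tau(x, y, z))
```

For the small N the paper studies, significance would be read from its exact or Monte Carlo tables rather than from an asymptotic approximation.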