71.
The article shows that in panel data models the Hausman test (HT) statistic can be considerably refined using the bootstrap technique. An Edgeworth expansion shows that the coverage of the bootstrapped HT is second-order correct.

The asymptotic and the bootstrapped HT are also compared by Monte Carlo simulation. Under the null hypothesis and at a nominal size of 0.05, the bootstrapped HT reduces the coverage error of the asymptotic HT by 10–40% of nominal size; for nominal sizes of 0.025 or less, the coverage error reduction is between 30% and 80% of nominal size. Under nonnull alternatives, the power of the asymptotic HT is spuriously inflated by more than 70% of the correct power for nominal sizes of 0.025 or less; the bootstrapped HT reduces this overrejection to less than one fourth of its value. The advantages of the bootstrapped HT increase with the number of explanatory variables.

Heteroscedasticity or serial correlation in the idiosyncratic part of the error does not undermine the advantages of the bootstrapped HT, provided a heteroscedasticity-robust version of the HT and the wild bootstrap are used. However, the power penalty is not negligible when a heteroscedasticity-robust approach is applied to a homoscedastic panel data model.
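As a point of reference for the statistic being bootstrapped, the sketch below computes the classical Hausman contrast between a fixed-effects and a random-effects estimate from their coefficient vectors and covariance matrices, together with the usual asymptotic chi-square p-value. It is a minimal numpy sketch, not the paper's bootstrap refinement; the names `b_fe`, `V_fe`, `b_re`, `V_re` are illustrative, and the two estimates are assumed to have been computed elsewhere.

```python
import numpy as np
from scipy import stats

def hausman_statistic(b_fe, V_fe, b_re, V_re):
    """Classical Hausman contrast H = q' (V_fe - V_re)^{-1} q with q = b_fe - b_re.

    b_fe, b_re : coefficient vectors of the fixed- and random-effects estimators.
    V_fe, V_re : their estimated covariance matrices.
    Returns the statistic and its asymptotic chi-square p-value.
    """
    q = np.asarray(b_fe, dtype=float) - np.asarray(b_re, dtype=float)
    V_diff = np.asarray(V_fe, dtype=float) - np.asarray(V_re, dtype=float)
    H = float(q @ np.linalg.solve(V_diff, q))  # quadratic form in the contrast
    df = q.size
    p_value = 1.0 - stats.chi2.cdf(H, df)
    return H, p_value
```

In a bootstrap implementation of the kind the abstract describes, one would compare the observed statistic with quantiles of the same statistic recomputed on resampled panels (for example via the wild bootstrap) rather than with the asymptotic chi-square critical value.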
72.
In this paper we provide a comprehensive Bayesian posterior analysis of trend determination in general autoregressive models. Multiple lag autoregressive models with fitted drifts and time trends, as well as models that allow for certain types of structural change in the deterministic components, are considered. We utilize a modified information matrix-based prior that accommodates stochastic nonstationarity, takes into account the interactions between long-run and short-run dynamics, and controls the degree of stochastic nonstationarity permitted. We derive analytic posterior densities for all of the trend determining parameters via the Laplace approximation to multivariate integrals. We also address the sampling properties of our posteriors under alternative data generating processes by simulation methods. We apply our Bayesian techniques to the Nelson-Plosser macroeconomic data and various stock price and dividend data. Contrary to DeJong and Whiteman (1989a,b,c), we do not find that the data overwhelmingly favor the existence of deterministic trends over stochastic trends. In addition, we find evidence supporting Perron's (1989) view that some of the Nelson and Plosser data are best construed as trend stationary with a change in the trend function occurring in 1929.
73.
We explore the application of dynamic graphics to the exploratory analysis of spatial data. We introduce a number of new tools and illustrate their use with prototype software developed at Trinity College, Dublin. These tools are used to examine local variability (anomalies) through plots of the data that display their marginal and multivariate distributions, through interactive smoothers, and through plots motivated by the spatial auto-covariance ideas implicit in the variogram. We regard these as alternative and linked views of the data. We conclude that the most important single view of the data is the Map View: all other views must be cross-referred to this, and the software must encourage this. This view can be enriched by overlaying other pertinent spatial information. We draw attention to the possibilities of one-to-many linking, to the use of line-objects to link pairs of data points, and to the parallels with work on Geographical Information Systems.
74.
The literature appears to lack a formula that relates moments to cumulants (and vice versa) and is useful in computational work rather than in an algebraic approach. I therefore present four very simple recursive formulas that translate moments into cumulants, and vice versa, in both the univariate and multivariate situations.
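For the univariate case, one standard recursion of this kind is k_n = m_n - sum_{j=1}^{n-1} C(n-1, j-1) k_j m_{n-j}, where m_n are raw moments (m_0 = 1) and k_n are cumulants, with the inverse relation m_n = sum_{j=1}^{n} C(n-1, j-1) k_j m_{n-j}. The sketch below implements this pair of recursions in Python as an illustration of the kind of computational translation the abstract refers to; it is not a transcription of the article's four formulas.

```python
from math import comb

def moments_to_cumulants(m):
    """Translate raw moments m[1..n] (with m[0] == 1) into cumulants k[1..n].

    Recursion: k_r = m_r - sum_{j=1}^{r-1} C(r-1, j-1) * k_j * m_{r-j}.
    k[0] is an unused placeholder so that list indices match moment orders.
    """
    n = len(m) - 1
    k = [0.0] * (n + 1)
    for r in range(1, n + 1):
        k[r] = m[r] - sum(comb(r - 1, j - 1) * k[j] * m[r - j] for j in range(1, r))
    return k

def cumulants_to_moments(k):
    """Inverse translation: m_r = sum_{j=1}^{r} C(r-1, j-1) * k_j * m_{r-j}."""
    n = len(k) - 1
    m = [1.0] + [0.0] * n
    for r in range(1, n + 1):
        m[r] = sum(comb(r - 1, j - 1) * k[j] * m[r - j] for j in range(1, r + 1))
    return m

# Standard normal check: raw moments 1, 0, 1, 0, 3 give cumulants 0, 1, 0, 0.
print(moments_to_cumulants([1.0, 0.0, 1.0, 0.0, 3.0]))  # [0.0, 0.0, 1.0, 0.0, 0.0]
```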
75.
The quadratic discriminant function (QDF) is commonly used for the two-group classification problem when the covariance matrices of the two populations are substantially unequal. This procedure is optimal when both populations are multivariate normal with known means and covariance matrices. This study examined the robustness of the QDF to non-normality. Sampling experiments were conducted to estimate expected actual error rates for the QDF when sampling from a variety of non-normal distributions. Results indicated that the QDF was robust to non-normality except when the distributions were highly skewed, in which case relatively large deviations from optimal were observed. In all cases studied, the average probabilities of misclassification were relatively stable, while the individual population error rates exhibited considerable variability.
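As a reminder of the rule whose robustness is being examined, the sketch below evaluates the quadratic discriminant score for two multivariate normal populations with known means and covariance matrices and assigns an observation to the group with the larger score. It is a minimal numpy sketch assuming equal prior probabilities; the parameter names are illustrative.

```python
import numpy as np

def qdf_score(x, mu, Sigma):
    """Quadratic discriminant score for N(mu, Sigma) under equal priors:
    -0.5 * log|Sigma| - 0.5 * (x - mu)' Sigma^{-1} (x - mu)."""
    diff = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * logdet - 0.5 * diff @ np.linalg.solve(Sigma, diff)

def classify(x, mu1, Sigma1, mu2, Sigma2):
    """Assign x to population 1 or 2, whichever yields the larger QDF score."""
    return 1 if qdf_score(x, mu1, Sigma1) >= qdf_score(x, mu2, Sigma2) else 2
```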
76.
Kageyama and Mohan (1984) have presented three methods of constructing new incomplete block designs from balanced incomplete block designs. They raise questions about the designs that come from each of their methods; these questions are answered here. Another series of group divisible designs is derived as a special case of their second method.
77.
Simplified proofs are given of a standard result that establishes the positive semi-definiteness of the difference of the inverses of two non-singular matrices, and of the extension of this result by Milliken and Akdeniz (1977) to the difference of the Moore-Penrose inverses of two singular matrices.
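A quick numerical illustration of the nonsingular case (if A and B are positive definite and A - B is positive semi-definite, then B^{-1} - A^{-1} is positive semi-definite) can be obtained by inspecting eigenvalues, as in the sketch below. This is only a sanity check on randomly generated matrices, not a substitute for the proofs in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pd(p):
    """Random p x p positive definite matrix."""
    M = rng.standard_normal((p, p))
    return M @ M.T + p * np.eye(p)

B = random_pd(4)
A = B + random_pd(4)                        # A - B is positive definite by construction
diff = np.linalg.inv(B) - np.linalg.inv(A)
print(np.linalg.eigvalsh(diff).min() >= -1e-10)   # True: B^{-1} - A^{-1} is PSD
```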
78.
A distribution function is estimated by a kernel method with a pointwise mean squared error criterion at a point x. Relationships between the mean squared error, the point x, the sample size, and the required kernel smoothing parameter are investigated for several distributions treated by Azzalini (1981). In particular, it is noted that at a centre of symmetry or near a mode of the distribution the kernel method breaks down. Pointwise estimation of a distribution function is motivated as a more useful technique than a reference range for preliminary medical diagnosis.
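For concreteness, a kernel estimator of a distribution function at a point x smooths the empirical step function by averaging integrated kernels, F_hat(x) = n^{-1} * sum_i K((x - X_i)/h), with K a kernel distribution function such as the standard normal. The sketch below is a minimal Gaussian-kernel version; the rule-of-thumb bandwidth is purely illustrative and is not the mean-squared-error-optimal smoothing parameter discussed in the abstract.

```python
import numpy as np
from scipy.stats import norm

def kernel_cdf(x, data, h=None):
    """Gaussian-kernel estimate of F(x): the mean of Phi((x - X_i) / h)."""
    data = np.asarray(data, dtype=float)
    if h is None:
        # Crude rule-of-thumb bandwidth, for illustration only.
        h = 1.06 * data.std(ddof=1) * data.size ** (-1 / 5)
    return norm.cdf((x - data) / h).mean()

# Example: estimate F(0) from a standard normal sample (true value 0.5).
sample = np.random.default_rng(1).standard_normal(500)
print(kernel_cdf(0.0, sample))
```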
79.
There is an emerging consensus in empirical finance that realized volatility series typically display long range dependence with a memory parameter (d) around 0.4 (Andersen et al., 2001; Martens et al., 2004). The present article provides some illustrative analysis of how long memory may arise from the accumulative process underlying realized volatility. The article also uses results in Lieberman and Phillips (2004, 2005) to refine statistical inference about d by higher order theory. Standard asymptotic theory has an O(n^{-1/2}) error rate for error rejection probabilities, and the theory used here refines the approximation to an error rate of o(n^{-1/2}). The new formula is independent of unknown parameters, is simple to calculate, and is user-friendly. The method is applied to test whether the reported long memory parameter estimates of Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory, and generally confirms earlier findings.
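For readers who want a baseline estimate of d to which such a test could be applied, the sketch below implements the standard log-periodogram (GPH) regression estimator. This is not the higher-order expansion of Lieberman and Phillips used in the article; it is only a conventional first-order estimator, and the bandwidth choice m = sqrt(n) is an arbitrary illustration.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimate of the long-memory parameter d.

    Regresses log I(lambda_j) on -2*log(2*sin(lambda_j/2)) over the first m
    Fourier frequencies; the regression slope estimates d.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    if m is None:
        m = int(np.sqrt(n))                     # illustrative bandwidth choice
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    periodogram = np.abs(dft) ** 2 / (2.0 * np.pi * n)
    regressor = -2.0 * np.log(2.0 * np.sin(lam / 2.0))
    slope, _intercept = np.polyfit(regressor, np.log(periodogram), 1)
    return slope                                # estimate of d
```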
80.
A number of recent papers have focused on the problem of testing for a unit root in the case where the driving shocks may be unconditionally heteroskedastic. These papers have, however, taken the lag length in the unit root test regression to be a deterministic function of the sample size, rather than data-determined, the latter being standard empirical practice. We investigate the finite sample impact of unconditional heteroskedasticity on conventional data-dependent lag selection methods in augmented Dickey-Fuller type regressions and propose new lag selection criteria which allow for unconditional heteroskedasticity. Standard lag selection methods are shown to have a tendency to over-fit the lag order under heteroskedasticity, resulting in significant power losses in the (wild bootstrap implementation of the) augmented Dickey-Fuller tests under the alternative. The proposed new lag selection criteria are shown to avoid this problem yet deliver unit root tests with almost identical finite sample properties to the corresponding tests based on conventional lag selection when the shocks are homoskedastic.
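As a point of reference for the conventional practice the article modifies, the snippet below runs an augmented Dickey-Fuller test with the usual data-dependent (AIC-based) lag selection via statsmodels. The heteroskedasticity-adjusted lag selection criteria proposed in the article are not implemented here; this is only the standard selection that the authors compare against, applied to a simulated random walk.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# A simulated random walk (the unit-root null), for illustration only.
rng = np.random.default_rng(42)
y = np.cumsum(rng.standard_normal(500))

# Conventional data-dependent lag selection: search up to maxlag, choose by AIC.
stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y, maxlag=12, autolag="AIC")
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}, lags used = {usedlag}")
```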