91.
The author proposes a new method for flexible regression modeling of multi-dimensional data, where the regression function is approximated by a linear combination of logistic basis functions. The method is adaptive, selecting simple or more complex models as appropriate. The number, location, and (to some extent) shape of the basis functions are automatically determined from the data. The method is also affine invariant, so accuracy of the fit is not affected by rotation or scaling of the covariates. Squared error and absolute error criteria are both available for estimation. The latter provides a robust estimator of the conditional median function. Computation is relatively fast, particularly for large data sets, so the method is well suited for data mining applications.
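As a rough illustration of the idea (not the author's adaptive fitting procedure), the sketch below approximates a regression function by a linear combination of logistic basis functions whose location and shape parameters are fixed at randomly chosen values; only the linear coefficients are estimated, here by least squares. The data-generating function and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D regression data: y = f(x) + noise.
n, d, m = 500, 2, 8                      # samples, covariate dimension, number of basis functions
X = rng.uniform(-2, 2, size=(n, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=n)

# Logistic basis functions b_j(x) = 1 / (1 + exp(-(w_j . x + c_j))).
W = rng.normal(size=(d, m))              # directions (fixed here; chosen adaptively in the paper)
c = rng.normal(size=m)                   # offsets
B = 1.0 / (1.0 + np.exp(-(X @ W + c)))   # n x m matrix of basis evaluations

# Linear coefficients by least squares (the squared-error criterion).
design = np.column_stack([np.ones(n), B])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)

fitted = design @ beta
print("training RMSE:", np.sqrt(np.mean((y - fitted) ** 2)))
```

The absolute-error (conditional median) variant would replace the least-squares step with an L1 fit, for example via linear programming or iteratively reweighted least squares.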
92.
Stein's method is used to prove the Lindeberg-Feller theorem and a generalization of the Berry-Esséen theorem. The arguments involve only manipulation of probability inequalities, and form an attractive alternative to the less direct Fourier-analytic methods which are traditionally employed.
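For readers unfamiliar with the technique, the core identity behind Stein's method for normal approximation (a standard fact, not quoted from this paper) can be sketched as:

```latex
% Stein characterization of the standard normal:
% Z ~ N(0,1) iff E[f'(Z) - Z f(Z)] = 0 for all sufficiently smooth f.
\[
  \mathbb{E}\bigl[f'(Z) - Z f(Z)\bigr] = 0
  \quad\text{for all absolutely continuous } f \text{ with } \mathbb{E}\lvert f'(Z)\rvert < \infty .
\]
% For a test function h, one solves the Stein equation
\[
  f'(x) - x f(x) = h(x) - \mathbb{E}\,h(Z),
\]
% so that bounding |E h(W) - E h(Z)| reduces to bounding E[f'(W) - W f(W)],
% which can be handled with elementary probability inequalities.
```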
93.
“One method of error analysis (not the one we will use) is based upon the principles of mathematical statistics. Unfortunately, statistical methods can only be meaningfully applied when one has large amounts of data for a given system. In many cases … these large quantities of data are not available … then statistical methods are not applicable, and some other methods must be devised.”
94.
The mathematical problems of the principle described in an earlier communication [3] for the calculation of individual thermodynamic activity coefficients of single ionic species in concentrated electrolyte solutions are specified. The Newtonian approximation method makes it possible to evaluate the constants b1, …, b4 in the concentration function (0.1) for the product of the activity coefficients.

The efficiency of the method is demonstrated with the example of the activity coefficients of pure NaClO4 solutions and of NaClO4 solutions mixed with other electrolytes. The individual activity coefficients of the single ionic species are evaluated for several electrolytes over the concentration range from m = 0 to m = 10 mol/kg and are published elsewhere [3, 17, 18].
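The abstract does not reproduce the concentration function (0.1), so only the generic form of the Newton iteration used to solve for the parameter vector b = (b1, …, b4) can be sketched here; F denotes the system of fitting equations and J its Jacobian, both unspecified in the abstract:

```latex
% Generic Newton iteration for a parameter vector b = (b_1, ..., b_4):
\[
  b^{(k+1)} = b^{(k)} - J\!\left(b^{(k)}\right)^{-1} F\!\left(b^{(k)}\right),
  \qquad
  J_{ij}(b) = \frac{\partial F_i}{\partial b_j}(b),
\]
% iterated until \|F(b^{(k)})\| falls below a chosen tolerance.
```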
95.
In this paper, we study the robustness properties of several procedures for the joint estimation of shape and scale in a generalized Pareto model. The estimators we focus on primarily, the most bias-robust estimator (MBRE) and the optimal MSE-robust estimator (OMSE), are one-step estimators that are optimally robust in the shrinking-neighbourhood setting; that is, on such a neighbourhood they minimize the maximal bias and the maximal mean squared error (MSE), respectively. For their initialization, we propose a particular location–dispersion estimator, MedkMAD, which matches the population median and kMAD (an asymmetric variant of the median of absolute deviations) against their empirical counterparts. These optimally robust estimators are compared to the maximum-likelihood, skipped maximum-likelihood, Cramér–von Mises minimum-distance, method-of-medians, and Pickands estimators. To quantify their deviation from robust optimality, for each of these suboptimal estimators we determine the finite-sample breakdown point and the influence function, as well as the statistical accuracy measured by asymptotic bias, variance, and MSE, all evaluated uniformly on shrinking neighbourhoods. These asymptotic findings are complemented by an extensive simulation study assessing the finite-sample behaviour of the considered procedures. The applicability of the procedures and their stability against outliers are illustrated on the Danish fire insurance data set from the R package evir.
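As a point of reference for one of the simpler comparators mentioned above (not the MBRE/OMSE procedures themselves), a minimal sketch of the classical Pickands estimator of the GPD shape parameter might look as follows; the data and the choice of k are hypothetical:

```python
import numpy as np

def pickands_shape(x, k):
    """Classical Pickands estimator of the extreme-value (GPD shape) index.

    Uses three upper order statistics; requires 4*k <= len(x).
    """
    xs = np.sort(x)                      # ascending order statistics
    n = len(xs)
    if 4 * k > n:
        raise ValueError("need 4*k <= n")
    x1 = xs[n - k]                       # X_(n-k+1)
    x2 = xs[n - 2 * k]                   # X_(n-2k+1)
    x4 = xs[n - 4 * k]                   # X_(n-4k+1)
    return np.log((x1 - x2) / (x2 - x4)) / np.log(2.0)

# Hypothetical heavy-tailed sample: Pareto-type tail with index 2, i.e. GPD shape about 0.5.
rng = np.random.default_rng(1)
sample = rng.pareto(2.0, size=2000)
print("Pickands shape estimate:", pickands_shape(sample, k=50))
```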
96.
Researchers are increasingly using the standardized difference to compare the distribution of baseline covariates between treatment groups in observational studies. Standardized differences were initially developed for comparing the mean of a continuous variable between two groups. However, in medical research many baseline covariates are dichotomous. In this article, we explore the utility and interpretation of the standardized difference for comparing the prevalence of dichotomous variables between two groups. We examined how the standardized difference relates to the maximal difference in the prevalence of the binary variable between the two groups, to the relative risk comparing the prevalence of the binary variable in one group with that in the other, and to the phi coefficient measuring the correlation between treatment group and the binary variable. We found that a standardized difference of 10% (or 0.1) is equivalent to a phi coefficient of 0.05 (indicating negligible correlation) between treatment group and the binary variable.
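The usual formula for the standardized difference of a binary covariate with prevalences p1 and p2 in the two groups, together with the phi coefficient computed from the 2x2 table, can be sketched as follows; this is a standard formulation and not reproduced from the article, and the example counts are hypothetical:

```python
import math

def standardized_difference(p1, p2):
    """Standardized difference for a binary covariate with group prevalences p1, p2."""
    pooled_var = (p1 * (1.0 - p1) + p2 * (1.0 - p2)) / 2.0
    return (p1 - p2) / math.sqrt(pooled_var)

def phi_coefficient(a, b, c, d):
    """Phi coefficient for the 2x2 group-by-covariate table [[a, b], [c, d]]."""
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# Hypothetical example chosen so the standardized difference is about 0.1;
# phi comes out near 0.05, matching the equivalence quoted in the abstract.
n = 1000
p1, p2 = 0.55, 0.50
print("standardized difference:", standardized_difference(p1, p2))
print("phi:", phi_coefficient(int(p1 * n), n - int(p1 * n),
                              int(p2 * n), n - int(p2 * n)))
```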
97.
In this article, it is shown that in panel data models the Hausman test (HT) statistic can be considerably refined using the bootstrap technique. An Edgeworth expansion shows that the coverage of the bootstrapped HT is second-order correct.

The asymptotic and the bootstrapped HT are also compared by Monte Carlo simulations. Under the null hypothesis and at a nominal size of 0.05, the bootstrapped HT reduces the coverage error of the asymptotic HT by 10–40% of nominal size; for nominal sizes less than or equal to 0.025, the coverage-error reduction is between 30% and 80% of nominal size. Under non-null alternatives, the power of the asymptotic HT is spuriously inflated by over 70% of the correct power for nominal sizes less than or equal to 0.025; the bootstrapped HT reduces this overrejection to less than one fourth of its value. The advantages of the bootstrapped HT increase with the number of explanatory variables.

Heteroscedasticity or serial correlation in the idiosyncratic part of the error does not diminish the advantages of the bootstrapped HT, provided a heteroscedasticity-robust version of the HT and the wild bootstrap are used. However, the power penalty is not negligible if the heteroscedasticity-robust approach is used in a homoscedastic panel data model.
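For context, the standard form of the Hausman statistic (not reproduced from the article) contrasts the fixed-effects and random-effects estimators of the slope vector and is asymptotically chi-squared under the null of no correlation between the individual effects and the regressors:

```latex
% Hausman test statistic comparing fixed-effects (FE) and random-effects (RE) estimates:
\[
  H \;=\;
  \bigl(\hat{\beta}_{FE} - \hat{\beta}_{RE}\bigr)^{\!\top}
  \Bigl[\widehat{\operatorname{Var}}\bigl(\hat{\beta}_{FE}\bigr)
        - \widehat{\operatorname{Var}}\bigl(\hat{\beta}_{RE}\bigr)\Bigr]^{-1}
  \bigl(\hat{\beta}_{FE} - \hat{\beta}_{RE}\bigr)
  \;\xrightarrow{\,d\,}\; \chi^2_k
  \quad\text{under } H_0 ,
\]
% where k is the number of time-varying regressors; bootstrapping refines the
% critical values used for this statistic.
```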
98.
In this paper we provide a comprehensive Bayesian posterior analysis of trend determination in general autoregressive models. Multiple lag autoregressive models with fitted drifts and time trends as well as models that allow for certain types of structural change in the deterministic components are considered. We utilize a modified information matrix-based prior that accommodates stochastic nonstationarity, takes into account the interactions between long-run and short-run dynamics and controls the degree of stochastic nonstationarity permitted. We derive analytic posterior densities for all of the trend determining parameters via the Laplace approximation to multivariate integrals. We also address the sampling properties of our posteriors under alternative data generating processes by simulation methods. We apply our Bayesian techniques to the Nelson-Plosser macroeconomic data and various stock price and dividend data. Contrary to DeJong and Whiteman (1989a,b,c), we do not find that the data overwhelmingly favor the existence of deterministic trends over stochastic trends. In addition, we find evidence supporting Perron's (1989) view that some of the Nelson and Plosser data are best construed as trend stationary with a change in the trend function occurring at 1929.
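The Laplace approximation mentioned above is, in its standard form (not quoted from the paper), the following expansion of a multivariate integral around the mode of the integrand:

```latex
% Laplace approximation to a d-dimensional integral with a smooth, peaked integrand:
\[
  \int_{\mathbb{R}^d} e^{\,n\,h(\theta)} \, d\theta
  \;\approx\;
  \left(\frac{2\pi}{n}\right)^{d/2}
  \bigl\lvert -H(\hat{\theta}) \bigr\rvert^{-1/2}
  e^{\,n\,h(\hat{\theta})},
\]
% where \hat{\theta} maximizes h and H(\hat{\theta}) is the Hessian of h at the mode.
% Applied to the numerator and denominator of a posterior expectation, this yields
% analytic approximations to marginal posterior densities.
```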
99.
We explore the application of dynamic graphics to the exploratory analysis of spatial data. We introduce a number of new tools and illustrate their use with prototype software developed at Trinity College, Dublin. These tools are used to examine local variability (anomalies) through plots of the data that display its marginal and multivariate distributions, through interactive smoothers, and through plots motivated by the spatial autocovariance ideas implicit in the variogram. We regard these as alternative and linked views of the data. We conclude that the single most important view of the data is the Map View: all other views must be cross-referred to it, and the software must encourage this. The Map View can be enriched by overlaying other pertinent spatial information. We draw attention to the possibilities of one-to-many linking, to the use of line objects to link pairs of data points, and to the parallels with work on Geographical Information Systems.
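The variogram idea referred to above is typically based on the empirical semivariogram; the standard definition (not taken from this paper) is:

```latex
% Empirical semivariogram at lag (distance class) h:
\[
  \hat{\gamma}(h) \;=\;
  \frac{1}{2\,\lvert N(h)\rvert}
  \sum_{(i,j)\in N(h)} \bigl(z(s_i) - z(s_j)\bigr)^2 ,
\]
% where N(h) is the set of location pairs (s_i, s_j) whose separation falls in
% lag class h, and z(s) is the observed value at location s.
```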
100.
It seems difficult to find a formula in the literature that relates moments to cumulants (and vice versa) and is useful in computational work rather than in an algebraic approach. Hence I present four very simple recursive formulas that translate moments to cumulants and vice versa in the univariate and multivariate situations.
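One recursion of the kind the author has in mind, in the univariate case, is the standard identity between raw moments and cumulants; the sketch below is not necessarily the paper's exact formulation:

```python
from math import comb

def moments_from_cumulants(kappa):
    """Raw moments m_1..m_n from cumulants k_1..k_n via
    m_n = sum_{j=1}^{n} C(n-1, j-1) * k_j * m_{n-j}, with m_0 = 1."""
    n = len(kappa)
    m = [1.0]                                   # m_0
    for r in range(1, n + 1):
        m.append(sum(comb(r - 1, j - 1) * kappa[j - 1] * m[r - j]
                     for j in range(1, r + 1)))
    return m[1:]

def cumulants_from_moments(m):
    """Inverse recursion: k_n = m_n - sum_{j=1}^{n-1} C(n-1, j-1) * k_j * m_{n-j}."""
    n = len(m)
    mom = [1.0] + list(m)                       # prepend m_0 = 1
    kappa = []
    for r in range(1, n + 1):
        s = sum(comb(r - 1, j - 1) * kappa[j - 1] * mom[r - j]
                for j in range(1, r))
        kappa.append(mom[r] - s)
    return kappa

# Check on N(mean=2, variance=3): cumulants (2, 3, 0, 0) give raw moments (2, 7, 26, 115).
print(moments_from_cumulants([2.0, 3.0, 0.0, 0.0]))
print(cumulants_from_moments(moments_from_cumulants([2.0, 3.0, 0.0, 0.0])))
```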