Search results: 3,565 matches in total (search time: 15 ms). Results 61–70 follow.
61.
Contamination of a sampled distribution, for example by a heavy-tailed distribution, can degrade the performance of a statistical estimator. We suggest a general approach to alleviating this problem, using a version of the weighted bootstrap. The idea is to 'tilt' away from the contaminated distribution by a given (but arbitrary) amount, in a direction that minimizes a measure of the new distribution's dispersion. This theoretical proposal has a simple empirical version, which results in each data value being assigned a weight according to an assessment of its influence on dispersion. Importantly, distance can be measured directly in terms of the likely level of contamination, without reference to an empirical measure of scale. This makes the procedure particularly attractive for use in multivariate problems. It has several forms, depending on the definitions taken for dispersion and for distance between distributions. Examples of dispersion measures include the variance and generalizations based on higher-order moments. Practicable measures of the distance between distributions may be based on power divergence, which includes the Hellinger and Kullback–Leibler distances. The resulting location estimator has a smooth, redescending influence curve and appears to avoid the computational difficulties typically associated with redescending estimators. Its breakdown point can be located at any desired value ε ∈ (0, 1/2) simply by 'trimming' to a known distance (depending only on ε and the choice of distance measure) from the empirical distribution. The estimator has an affine-equivariant multivariate form. Further, the general method is applicable to a range of statistical problems, including regression.
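A minimal numerical sketch of the empirical tilting idea (the function name, the Kullback–Leibler divergence choice, and the optimizer interface are illustrative assumptions, not the paper's implementation): choose observation weights that minimize the weighted sample variance subject to the weights lying within a fixed KL distance of the uniform weights, then estimate location with the weighted mean.

```python
import numpy as np
from scipy.optimize import minimize

def tilted_location(x, rho=0.1):
    """Sketch: minimize the weighted variance subject to
    KL(w || uniform) <= rho, then return the weighted mean.
    rho plays the role of the known 'trimming' distance."""
    n = len(x)

    def weighted_var(w):
        mu = np.dot(w, x)
        return np.dot(w, (x - mu) ** 2)

    def kl_from_uniform(w):
        # KL(w || uniform) = sum_i w_i * log(n * w_i)
        return np.sum(w * np.log(np.clip(n * w, 1e-12, None)))

    cons = [
        {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},            # weights sum to 1
        {"type": "ineq", "fun": lambda w: rho - kl_from_uniform(w)},  # KL budget
    ]
    res = minimize(weighted_var, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons, method="SLSQP")
    w = res.x / res.x.sum()
    return float(np.dot(w, x)), w

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 90),
                       rng.standard_cauchy(10)])  # 10% heavy-tailed contamination
location, weights = tilted_location(data, rho=0.1)
```

Observations that inflate the dispersion receive small weights, which is the down-weighting behaviour described above; the paper's other divergence choices (e.g. Hellinger) would replace `kl_from_uniform`.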
62.
Methods are suggested for improving the coverage accuracy of intervals for predicting future values of a random variable drawn from a sampled distribution. It is shown that properties of solutions to such problems may be quite unexpected. For example, the bootstrap and the jackknife perform very poorly when used to calibrate coverage, although the jackknife estimator of the true coverage is virtually unbiased. A version of the smoothed bootstrap can be employed for successful calibration, however. Interpolation among adjacent order statistics can also be an effective way of calibrating, although even there the results are unexpected. In particular, whereas the coverage error can be reduced from O(n^{-1}) to orders O(n^{-2}) and O(n^{-3}) (where n denotes the sample size) by interpolating among two and three order statistics respectively, the next two orders of reduction require interpolation among five and eight order statistics respectively.
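As a concrete illustration of the interpolation idea (the naive linear rule below is an assumption for illustration only; the paper's calibrated interpolation weights, which deliver the higher-order coverage reductions, are not reproduced):

```python
import numpy as np

def interpolated_upper_bound(x, coverage=0.9):
    """Sketch: a one-sided prediction bound for a future draw,
    formed by linear interpolation between two adjacent order
    statistics at the target coverage level."""
    xs = np.sort(x)
    n = len(xs)
    pos = coverage * (n + 1) - 1               # zero-based fractional index
    j = int(np.clip(np.floor(pos), 0, n - 2))
    t = float(np.clip(pos - j, 0.0, 1.0))      # interpolation weight
    return (1.0 - t) * xs[j] + t * xs[j + 1]

rng = np.random.default_rng(1)
sample = rng.normal(size=50)
bound = interpolated_upper_bound(sample, coverage=0.9)
```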
63.
Spatiotemporal prediction for log-Gaussian Cox processes
Space–time point pattern data have become more widely available as a result of technological developments in areas such as geographic information systems. We describe a flexible class of space–time point processes. Our models are Cox processes whose stochastic intensity is a space–time Ornstein–Uhlenbeck process. We develop moment-based methods of parameter estimation, show how to predict the underlying intensity by using a Markov chain Monte Carlo approach and illustrate the performance of our methods on a synthetic data set.
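A toy, purely temporal sketch of the simulation side of such a model (one-dimensional, with an exponentiated Ornstein–Uhlenbeck log-intensity and thinning; all parameter values are illustrative, and the paper's space–time structure and MCMC prediction step are not reproduced):

```python
import numpy as np

def simulate_cox_ou_1d(T=10.0, dt=0.01, mu=1.0, theta=1.0, sigma=0.5, seed=2):
    """Toy sketch: the log-intensity follows an Ornstein-Uhlenbeck process
    (Euler-Maruyama), and events come from thinning a dominating
    homogeneous Poisson process."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    z = np.empty(n_steps)
    z[0] = mu
    for i in range(1, n_steps):
        z[i] = z[i - 1] + theta * (mu - z[i - 1]) * dt \
               + sigma * np.sqrt(dt) * rng.normal()
    lam = np.exp(z)                          # stochastic intensity
    lam_max = lam.max()
    n_cand = rng.poisson(lam_max * T)        # candidate points of dominating process
    times = np.sort(rng.uniform(0.0, T, n_cand))
    idx = np.minimum((times / dt).astype(int), n_steps - 1)
    keep = rng.uniform(size=n_cand) < lam[idx] / lam_max   # thinning step
    return times[keep], lam

events, intensity = simulate_cox_ou_1d()
```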
64.
The author proposes a new method for flexible regression modeling of multi-dimensional data, where the regression function is approximated by a linear combination of logistic basis functions. The method is adaptive, selecting simple or more complex models as appropriate. The number, location, and (to some extent) shape of the basis functions are automatically determined from the data. The method is also affine invariant, so accuracy of the fit is not affected by rotation or scaling of the covariates. Squared error and absolute error criteria are both available for estimation. The latter provides a robust estimator of the conditional median function. Computation is relatively fast, particularly for large data sets, so the method is well suited for data mining applications.
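A minimal sketch of the non-adaptive core of such a model (a fixed number K of logistic bases fitted by least squares; the paper's automatic selection of the number and location of basis functions, and its absolute-error criterion, are not reproduced, and all names are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def fit_logistic_basis(X, y, K=4, seed=3):
    """Sketch: regression function approximated by a linear
    combination of K logistic basis functions, fit by least squares."""
    rng = np.random.default_rng(seed)
    n, d = X.shape

    def unpack(p):
        A = p[:K * d].reshape(K, d)            # slope vectors of the bases
        b = p[K * d:K * d + K]                 # offsets of the bases
        beta = p[K * d + K:K * d + 2 * K]      # linear coefficients
        c = p[-1]                              # intercept
        return A, b, beta, c

    def predict(p, X):
        A, b, beta, c = unpack(p)
        phi = 1.0 / (1.0 + np.exp(-(X @ A.T + b)))   # n x K logistic bases
        return phi @ beta + c

    def sse(p):
        r = predict(p, X) - y
        return np.dot(r, r)

    p0 = rng.normal(scale=0.5, size=K * d + 2 * K + 1)
    res = minimize(sse, p0, method="BFGS")
    return lambda Xnew: predict(res.x, Xnew)

rng = np.random.default_rng(4)
X = rng.uniform(-2, 2, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)
f_hat = fit_logistic_basis(X, y, K=4)
```

Because the bases depend on X only through the linear forms X @ A.T + b, an affine change of the covariates can be absorbed into A and b, which is the source of the affine invariance noted above.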
65.
Stein's method is used to prove the Lindeberg–Feller theorem and a generalization of the Berry–Esséen theorem. The arguments involve only manipulation of probability inequalities, and form an attractive alternative to the less direct Fourier-analytic methods which are traditionally employed.
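For reference, a standard statement of the identity underlying the method (a textbook formulation, not quoted from the paper):

```latex
% Stein characterization of N(0,1): Z ~ N(0,1) iff, for all suitable f,
%     E[ f'(Z) - Z f(Z) ] = 0.
% For a test function h, the Stein equation
f'(x) - x\,f(x) = h(x) - \mathbb{E}[h(Z)]
% converts the comparison E[h(W)] - E[h(Z)] into E[f'(W) - W f(W)],
% which can be bounded by direct probabilistic manipulations -- the
% alternative to Fourier-analytic methods that the abstract describes.
```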
66.
The mathematical problems of the principle described in an earlier communication [3] for calculating individual thermodynamic activity coefficients of single ionic species in concentrated electrolyte solutions are specified. It is the Newtonian approximation method that makes possible the evaluation of the constants b_1, …, b_4 in the concentration function (0.1) for the product of the activity coefficients.

The efficiency of the method is illustrated by the activity coefficients of pure solutions of NaClO4 and of its solutions mixed with other electrolytes. The individual activity coefficients of the single ionic species are evaluated for several electrolytes over the concentration range from m = 0 to m = 10 mol/kg and are published elsewhere [3, 17, 18].
67.
In this paper, we study the robustness properties of several procedures for the joint estimation of shape and scale in a generalized Pareto model. The estimators that we primarily focus upon, the most bias-robust estimator (MBRE) and the optimal MSE-robust estimator (OMSE), are one-step estimators distinguished as optimally robust in the shrinking-neighbourhood setting; that is, they minimize, respectively, the maximal bias and the maximal mean squared error (MSE) over such a neighbourhood. For their initialization, we propose a particular location–dispersion estimator, MedkMAD, which matches the population median and kMAD (an asymmetric variant of the median of absolute deviations) against their empirical counterparts. These optimally robust estimators are compared to the maximum-likelihood, skipped maximum-likelihood, Cramér–von-Mises minimum-distance, method-of-medians, and Pickands estimators. To quantify their deviation from robust optimality, for each of these suboptimal estimators we determine the finite-sample breakdown point and the influence function, as well as the statistical accuracy measured by asymptotic bias, variance, and MSE, all evaluated uniformly on shrinking neighbourhoods. These asymptotic findings are complemented by an extensive simulation study assessing the finite-sample behaviour of the considered procedures. The applicability of the procedures and their stability against outliers are illustrated for the Danish fire insurance data set from the package evir.
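A sketch of one of the comparator estimators named above, the Pickands estimator of the GPD shape parameter (the default choice k = n // 4 and all names are illustrative; the MBRE/OMSE constructions are substantially more involved and are not reproduced here):

```python
import numpy as np

def pickands_shape(x, k=None):
    """Sketch of the Pickands estimator for the GPD shape parameter,
    based on three order statistics; requires 4k <= n."""
    xs = np.sort(x)
    n = len(xs)
    if k is None:
        k = n // 4          # a naive default, not a recommended tuning rule
    a = xs[n - k]           # X_(n-k+1) in one-based notation
    b = xs[n - 2 * k]       # X_(n-2k+1)
    c = xs[n - 4 * k]       # X_(n-4k+1)
    return np.log((a - b) / (b - c)) / np.log(2.0)

rng = np.random.default_rng(5)
u = rng.uniform(size=2000)
xi_true, sigma = 0.3, 1.0
gpd_sample = sigma * (u ** (-xi_true) - 1.0) / xi_true   # GPD via inverse CDF
xi_hat = pickands_shape(gpd_sample, k=100)
```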
68.
Researchers are increasingly using the standardized difference to compare the distribution of baseline covariates between treatment groups in observational studies. Standardized differences were initially developed in the context of comparing the means of continuous variables between two groups. However, in medical research many baseline covariates are dichotomous. In this article, we explore the utility and interpretation of the standardized difference for comparing the prevalence of dichotomous variables between two groups. We examined the relationship between the standardized difference and three other quantities: the maximal difference in the prevalence of the binary variable between the two groups; the relative risk relating the prevalence of the binary variable in one group to that in the other; and the phi coefficient measuring the correlation between treatment group and the binary variable. We found that a standardized difference of 10% (or 0.1) is equivalent to a phi coefficient of 0.05 (indicating negligible correlation) between treatment group and the binary variable.
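In code, the two quantities being related look as follows (a sketch using the standard formula for the standardized difference of a binary covariate and the 2×2-table phi coefficient; the group sizes and prevalences are illustrative):

```python
import numpy as np

def standardized_difference(p1, p2):
    """Standardized difference for a binary covariate with
    prevalences p1 and p2 in the two treatment groups."""
    return (p1 - p2) / np.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / 2.0)

def phi_coefficient(p1, p2, n1, n2):
    """Phi coefficient between treatment group and the binary
    covariate, computed from the implied 2x2 table of counts."""
    a, b = n1 * p1, n1 * (1 - p1)   # treated: covariate present / absent
    c, d = n2 * p2, n2 * (1 - p2)   # control: covariate present / absent
    return (a * d - b * c) / np.sqrt((a + b) * (c + d) * (a + c) * (b + d))

# Illustration of the 2:1 relationship reported in the article:
sd = standardized_difference(0.55, 0.45)           # roughly 0.2
phi = phi_coefficient(0.55, 0.45, 1000, 1000)      # roughly 0.1
```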
69.
In this article, it is shown that in panel data models the Hausman test (HT) statistic can be considerably refined using the bootstrap technique. An Edgeworth expansion shows that the coverage of the bootstrapped HT is second-order correct.

The asymptotic and bootstrapped versions of the HT are also compared by Monte Carlo simulations. Under the null hypothesis and at a nominal size of 0.05, the bootstrapped HT reduces the coverage error of the asymptotic HT by 10–40% of nominal size; for nominal sizes of 0.025 or less, the coverage-error reduction is between 30% and 80% of nominal size. Under non-null alternatives, the power of the asymptotic HT is spuriously inflated by over 70% of the correct power for nominal sizes of 0.025 or less; the bootstrapped HT reduces this overrejection to less than one quarter of its value. The advantages of the bootstrapped HT increase with the number of explanatory variables.

Heteroscedasticity or serial correlation in the idiosyncratic part of the error does not diminish the advantages of the bootstrapped version of the HT, provided a heteroscedasticity-robust version of the HT and the wild bootstrap are used. However, the power penalty is not negligible if the heteroscedasticity-robust approach is used in a homoscedastic panel data model.
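The calibration idea can be sketched generically as follows (resampling whole cross-sectional units; the `panels`/`statistic` interface is a placeholder, and the recentring and wild-bootstrap details required for the article's heteroscedastic case are omitted):

```python
import numpy as np

def bootstrap_critical_value(panels, statistic, alpha=0.05, B=999, seed=6):
    """Generic sketch: resample cross-sectional units with replacement
    and use the empirical (1 - alpha) quantile of the bootstrapped
    statistic as the critical value, in place of the asymptotic
    chi-squared quantile. `panels` is a list of per-unit data arrays;
    `statistic` maps such a list to the Hausman-type statistic."""
    rng = np.random.default_rng(seed)
    n = len(panels)
    stats = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)          # resample whole units
        stats[b] = statistic([panels[i] for i in idx])
    return np.quantile(stats, 1 - alpha)
```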
70.
In this paper we provide a comprehensive Bayesian posterior analysis of trend determination in general autoregressive models. Multiple-lag autoregressive models with fitted drifts and time trends, as well as models that allow for certain types of structural change in the deterministic components, are considered. We utilize a modified information-matrix-based prior that accommodates stochastic nonstationarity, takes into account the interactions between long-run and short-run dynamics, and controls the degree of stochastic nonstationarity permitted. We derive analytic posterior densities for all of the trend-determining parameters via the Laplace approximation to multivariate integrals. We also address the sampling properties of our posteriors under alternative data-generating processes by simulation methods. We apply our Bayesian techniques to the Nelson–Plosser macroeconomic data and various stock price and dividend data. Contrary to DeJong and Whiteman (1989a,b,c), we do not find that the data overwhelmingly favor the existence of deterministic trends over stochastic trends. In addition, we find evidence supporting Perron's (1989) view that some of the Nelson and Plosser data are best construed as trend stationary with a change in the trend function occurring at 1929.
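A generic numerical sketch of the Laplace approximation step used to obtain analytic posterior densities (the `neg_log_post` interface is a placeholder, and BFGS's inverse-Hessian estimate stands in for the exact Hessian at the mode):

```python
import numpy as np
from scipy.optimize import minimize

def laplace_log_marginal(neg_log_post, theta0):
    """Sketch: approximate log INT exp(-f(theta)) dtheta by expanding
    f to second order around its mode, giving
    -f(mode) + (d/2) log(2*pi) + (1/2) log|H^{-1}|."""
    res = minimize(neg_log_post, theta0, method="BFGS")
    d = np.atleast_1d(res.x).size
    H_inv = np.atleast_2d(res.hess_inv)        # BFGS inverse-Hessian estimate
    sign, logdet = np.linalg.slogdet(H_inv)
    return -res.fun + 0.5 * d * np.log(2.0 * np.pi) + 0.5 * logdet

# Check on a Gaussian, where the Laplace approximation is exact:
# INT exp(-(t - 1)^2 / (2 * 0.25)) dt = sqrt(2*pi*0.25), log ~ 0.2258
neg_lp = lambda th: float(0.5 * (th[0] - 1.0) ** 2 / 0.25)
log_Z = laplace_log_marginal(neg_lp, np.array([0.0]))
```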