71.
Summary.  We develop a general non-parametric approach to the analysis of clustered data via random effects. Assuming only that the link function is known, the regression functions and the distributions of both cluster means and observation errors are treated non-parametrically. Our argument proceeds by viewing the observation error at the cluster mean level as though it were a measurement error in an errors-in-variables problem, and using a deconvolution argument to access the distribution of the cluster mean. A Fourier deconvolution approach could be used if the distribution of the error-in-variables were known. In practice it is unknown, of course, but it can be estimated from repeated measurements, and in this way deconvolution can be achieved in an approximate sense. This argument might be interpreted as implying that large numbers of replicates are necessary for each cluster mean distribution, but that is not so; we avoid this requirement by incorporating statistical smoothing over values of nearby explanatory variables. Empirical rules are developed for the choice of smoothing parameter. Numerical simulations, and an application to real data, demonstrate the small-sample performance of this methodology. We also develop theory establishing statistical consistency.
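A minimal sketch of the Fourier deconvolution step, assuming symmetric observation errors whose characteristic function is estimated from within-cluster replicate differences; the paper's smoothing over nearby explanatory variables is omitted, and the function name, damping kernel, and defaults are illustrative rather than the authors' implementation.

```python
import numpy as np

def deconvolution_density(y_bar, diffs, x_grid, h=0.4, t_max=20.0, n_t=2001):
    """Estimate the density of cluster means from noisy cluster averages.

    y_bar  : observed cluster means (signal + error)
    diffs  : within-cluster replicate differences, whose characteristic
             function is |phi_eps(t)|^2, so the error cf is estimable
    x_grid : points at which to evaluate the estimated density
    h      : smoothing parameter of the Gaussian damping kernel
    """
    t = np.linspace(-t_max, t_max, n_t)
    # Empirical characteristic function of the observed means
    phi_y = np.mean(np.exp(1j * np.outer(t, y_bar)), axis=1)
    # |phi_eps(t)|^2 from replicate differences (real by symmetry);
    # clip away from zero to keep the division stable
    phi_diff = np.mean(np.cos(np.outer(t, diffs)), axis=1)
    phi_eps = np.sqrt(np.clip(phi_diff, 1e-3, None))
    # Gaussian kernel cf damps the high frequencies
    phi_k = np.exp(-0.5 * (h * t) ** 2)
    integrand = phi_y * phi_k / phi_eps
    # Inverse Fourier transform on the evaluation grid
    dens = np.trapz(np.exp(-1j * np.outer(x_grid, t)) * integrand,
                    t, axis=1).real / (2 * np.pi)
    return np.clip(dens, 0.0, None)
```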
72.
A Bayesian discovery procedure
Summary.  We discuss a Bayesian discovery procedure for multiple-comparison problems. We show that, under a coherent decision theoretic framework, a loss function combining true positive and false positive counts leads to a decision rule that is based on a threshold of the posterior probability of the alternative. Under a semiparametric model for the data, we show that the Bayes rule can be approximated by the optimal discovery procedure, which was recently introduced by Storey. Improving the approximation leads us to a Bayesian discovery procedure, which exploits the multiple shrinkage in clusters that are implied by the assumed non-parametric model. We compare the Bayesian discovery procedure and the optimal discovery procedure estimates in a simple simulation study and in an assessment of differential gene expression based on microarray data from tumour samples. We extend the setting of the optimal discovery procedure by discussing modifications of the loss function that lead to different single-thresholding statistics. Finally, we provide an application of the previous arguments to dependent (spatial) data.
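The thresholding structure of such a Bayes rule is easy to make concrete. A minimal sketch, assuming for illustration a two-group normal mixture for the test statistics (the mixture parameters and function names are hypothetical, not from the paper): with a loss of the form c*FP - TP, declaring case i a discovery is optimal exactly when the posterior probability of the alternative exceeds c/(1 + c).

```python
import numpy as np
from scipy import stats

def posterior_alt_prob(z, pi0=0.9, mu1=2.0, s1=1.0):
    """Posterior P(alternative | z_i) under a simple two-group normal mixture."""
    f0 = stats.norm.pdf(z)                 # null density N(0, 1)
    f1 = stats.norm.pdf(z, mu1, s1)        # alternative density (assumed)
    return (1 - pi0) * f1 / (pi0 * f0 + (1 - pi0) * f1)

def bayes_discoveries(z, c=1.0):
    """Flag cases with v_i >= c/(1+c), the threshold implied by the loss
    c*FP - TP, where c is the relative cost of a false positive."""
    v = posterior_alt_prob(z)
    return v >= c / (1 + c)
```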
73.
74.
The responses obtained from response surface designs that are run sequentially often exhibit serial correlation or time trends. The order in which the runs of the design are performed then has an impact on the precision of the parameter estimators. This article proposes the use of a variable-neighbourhood search algorithm to compute run orders that guarantee a precise estimation of the effects of the experimental factors. The importance of using good run orders is demonstrated by seeking D-optimal run orders for a central composite design in the presence of an AR(1) autocorrelation pattern.
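A sketch of the underlying computation: under an AR(1) error pattern with correlation rho, the information matrix is X'V^{-1}X, so permuting the rows of the model matrix X changes the D-criterion. The pairwise-swap search below is a simplified stand-in for the variable-neighbourhood search algorithm of the article; names and defaults are illustrative.

```python
import numpy as np
from itertools import combinations

def d_criterion(X, rho):
    """D-criterion det(X' V^{-1} X) with AR(1) error covariance V."""
    n = X.shape[0]
    V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    M = X.T @ np.linalg.solve(V, X)
    return np.linalg.det(M)

def improve_run_order(X, rho, sweeps=20):
    """Greedy pairwise-swap search over run orders (a simple local search)."""
    order = np.arange(X.shape[0])
    best = d_criterion(X[order], rho)
    for _ in range(sweeps):
        improved = False
        for i, j in combinations(range(len(order)), 2):
            trial = order.copy()
            trial[i], trial[j] = trial[j], trial[i]   # swap two runs
            val = d_criterion(X[trial], rho)
            if val > best:
                best, order, improved = val, trial, True
        if not improved:
            break
    return order, best
```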
75.
In some statistical problems a degree of explicit, prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, ‘θ > θ1’ or ‘θ < θ1’, where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
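A minimal sketch of the bagging device, assuming only a generic estimator: the constraint is imposed within each bootstrap resample and the constrained values are averaged, so the bagged estimate exceeds θ1 strictly unless essentially every resample sits at the bound, and it varies smoothly with the data. Function names are illustrative.

```python
import numpy as np

def bagged_constrained_estimate(x, estimator, theta1, n_boot=500, rng=None):
    """Average of the constrained estimator max(theta_hat, theta1) over
    bootstrap resamples of the data x."""
    rng = np.random.default_rng(rng)
    n = len(x)
    boots = np.empty(n_boot)
    for b in range(n_boot):
        xb = x[rng.integers(0, n, size=n)]       # bootstrap resample
        boots[b] = max(estimator(xb), theta1)    # constrain within each resample
    return boots.mean()                          # bagged estimate

# Example: a variance known a priori to exceed theta1 = 0.5
# est = bagged_constrained_estimate(data, np.var, 0.5)
```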
76.
Summary.  We propose a flexible generalized auto-regressive conditional heteroscedasticity type of model for the prediction of volatility in financial time series. The approach relies on the idea of using multivariate B-splines of lagged observations and volatilities. Estimation of such a B-spline basis expansion is constructed within the likelihood framework for non-Gaussian observations. As the dimension of the B-spline basis is large, i.e. many parameters, we use regularized and sparse model fitting with a boosting algorithm. Our method is computationally attractive and feasible for large dimensions. We demonstrate its strong predictive potential for financial volatility on simulated and real data, and also in comparison with other approaches, and we present some supporting asymptotic arguments.
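As a rough illustration of the basis-expansion-plus-boosting idea, the sketch below fits next-step squared returns on a cubic B-spline basis of the lagged squared return by componentwise L2 boosting. This replaces the paper's likelihood-based fitting for non-Gaussian observations and its multivariate bases with a much simpler squared-error, single-lag version; knot placement, step count, and step size are arbitrary choices.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, n_knots=8, degree=3):
    """Cubic B-spline design matrix for one covariate (knots at quantiles)."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots))
    t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]
    x_in = np.clip(x, knots[0], knots[-1])
    return BSpline.design_matrix(x_in, t, degree).toarray()

def boosted_volatility(y, n_steps=200, nu=0.1):
    """Componentwise L2 boosting of next-step squared returns on a
    B-spline basis of the lagged squared return."""
    target, z = y[1:] ** 2, y[:-1] ** 2
    B = bspline_basis(z)
    fit = np.full(target.shape, target.mean())
    for _ in range(n_steps):
        resid = target - fit
        coef = B.T @ resid / ((B ** 2).sum(axis=0) + 1e-12)  # per-column LS fit
        sse = ((resid[:, None] - B * coef) ** 2).sum(axis=0)
        j = np.argmin(sse)                     # best single basis function
        fit = fit + nu * coef[j] * B[:, j]     # shrunken update
    return np.sqrt(np.clip(fit, 1e-12, None))  # fitted volatility path
```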
77.
We explore the application of dynamic graphics to the exploratory analysis of spatial data. We introduce a number of new tools and illustrate their use with prototype software developed at Trinity College, Dublin. These tools are used to examine local variability (anomalies) through plots of the data that display its marginal and multivariate distributions, through interactive smoothers, and through plots motivated by the spatial auto-covariance ideas implicit in the variogram. We regard these as alternative and linked views of the data. We conclude that the most important single view of the data is the Map View: all other views must be cross-referred to it, and the software must encourage this. This view can be enriched by overlaying other pertinent spatial information. We draw attention to the possibilities of one-to-many linking, and to the use of line objects to link pairs of data points. We also draw attention to the parallels with work on Geographical Information Systems.
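The variogram-motivated view can be sketched non-interactively: each pair of sites contributes a squared half-difference at its separation distance, and pairs are then binned by distance. A minimal version, assuming isotropy and Euclidean distances (function name and binning are illustrative):

```python
import numpy as np

def empirical_variogram(coords, values, n_bins=15):
    """Binned empirical semivariogram from site coordinates and values."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    g = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # count each pair once
    dist, gamma = d[iu], g[iu]
    edges = np.linspace(0, dist.max(), n_bins + 1)
    idx = np.digitize(dist, edges[1:-1])            # bin index 0..n_bins-1
    centers = 0.5 * (edges[:-1] + edges[1:])
    means = np.array([gamma[idx == b].mean() if np.any(idx == b) else np.nan
                      for b in range(n_bins)])
    return centers, means
```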
78.
It seems difficult to find a formula in the literature that relates moments to cumulants (and vice versa) and is useful in computational work rather than in an algebraic approach. Hence I present four very simple recursive formulas that translate moments to cumulants and vice versa in the univariate and multivariate situations.
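For the univariate case, the standard recursion m_n = sum_{i=0}^{n-1} C(n-1, i) k_{n-i} m_i (with m_0 = 1) and its inversion give exactly this kind of computationally convenient translation; the sketch below implements both directions and is in the spirit of, though not necessarily identical to, the formulas of the note.

```python
from math import comb

def moments_from_cumulants(k):
    """k[0] = kappa_1, k[1] = kappa_2, ...  Returns [m_1, ..., m_n]."""
    m = [1.0]                                    # m_0 = 1
    for n in range(1, len(k) + 1):
        m.append(sum(comb(n - 1, i) * k[n - i - 1] * m[i] for i in range(n)))
    return m[1:]

def cumulants_from_moments(m):
    """Inverse recursion: kappa_n = m_n - sum_{i=1}^{n-1} C(n-1,i) kappa_{n-i} m_i."""
    m_full = [1.0] + list(m)
    k = []
    for n in range(1, len(m) + 1):
        k.append(m_full[n] - sum(comb(n - 1, i) * k[n - i - 1] * m_full[i]
                                 for i in range(1, n)))
    return k

# Check on N(1, 2): cumulants (1, 2, 0, 0) give moments 1, 3, 7, ...
assert moments_from_cumulants([1.0, 2.0, 0.0, 0.0])[:2] == [1.0, 3.0]
assert cumulants_from_moments([1.0, 3.0, 7.0])[2] == 0.0
```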
79.
The mathematical problems of the principle described in an earlier communication [3] for the calculation of individual thermodynamic activity coefficients of single ionic species in concentrated electrolyte solutions are specified. It is the Newtonian approximation method that makes possible the evaluation of the constants b1, …, b4 in the concentration function (0.1) for the product of the activity coefficients.

The efficiency of the method is demonstrated using the example of the activity coefficients of pure solutions of NaClO4 and of its solutions mixed with other electrolytes. The individual activity coefficients of the single ionic species are evaluated for several electrolytes over the concentration range from m = 0 to m = 10 mol/kg and are published elsewhere [3, 17, 18].
80.
In this paper, we study the robustness properties of several procedures for the joint estimation of shape and scale in a generalized Pareto model. The estimators that we primarily focus upon, the most bias-robust estimator (MBRE) and the optimal MSE-robust estimator (OMSE), are one-step estimators distinguished as optimally robust in the shrinking-neighbourhood setting; that is, they minimize, respectively, the maximal bias and the maximal mean squared error (MSE) on such a neighbourhood. For their initialization, we propose a particular location–dispersion estimator, MedkMAD, which matches the population median and kMAD (an asymmetric variant of the median of absolute deviations) against their empirical counterparts. These optimally robust estimators are compared to the maximum-likelihood, skipped maximum-likelihood, Cramér–von Mises minimum-distance, method-of-medians, and Pickands estimators. To quantify their deviation from robust optimality, for each of these suboptimal estimators we determine the finite-sample breakdown point and the influence function, as well as the statistical accuracy measured by asymptotic bias, variance, and MSE, all evaluated uniformly on shrinking neighbourhoods. These asymptotic findings are complemented by an extensive simulation study assessing the finite-sample behaviour of the considered procedures. The applicability of the procedures and their stability against outliers are illustrated for the Danish fire insurance data set from the R package evir.
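A minimal sketch contrasting two of the estimators in the comparison: plain maximum likelihood for the generalized Pareto model and a "skipped" variant that discards the largest observations before fitting, which buys some resistance to upper-tail outliers. scipy's genpareto parametrization (c = shape, scale = beta) stands in for the paper's setup, and the skipping fraction is an arbitrary illustration, not the paper's rule.

```python
import numpy as np
from scipy.stats import genpareto

def gpd_mle(x):
    """Maximum-likelihood fit of (shape, scale); loc fixed at 0 for excesses."""
    shape, loc, scale = genpareto.fit(x, floc=0.0)
    return shape, scale

def gpd_skipped_mle(x, skip_frac=0.02):
    """Drop the top skip_frac of the sample, then fit by ML."""
    cutoff = np.quantile(x, 1.0 - skip_frac)
    return gpd_mle(x[x <= cutoff])

rng = np.random.default_rng(1)
x = genpareto.rvs(0.4, scale=1.0, size=2000, random_state=rng)
x_contam = np.r_[x, [50.0, 80.0, 120.0]]          # a few gross outliers
print(gpd_mle(x_contam), gpd_skipped_mle(x_contam))
```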