Sorted results: 10,000 matches in total; search took 15 ms.
941.
Estimating a data transformation is very useful for obtaining response variables that closely satisfy a normal linear model. Generalized linear models enable the fitting of models to a wide range of data types. These models are based on exponential dispersion models. We propose a new class of transformed generalized linear models to extend the Box and Cox models and the generalized linear models. We use the generalized linear model framework to fit these models and discuss maximum likelihood estimation and inference. We give a simple formula to estimate the parameter that indexes the transformation of the response variable for a subclass of models. We also give a simple formula to estimate the rth moment of the original dependent variable. We explore the possibility of applying these models to time series data to extend the generalized autoregressive moving average models discussed by Benjamin et al. [2003. Generalized autoregressive moving average models. J. Amer. Statist. Assoc. 98, 214–223]. The usefulness of these models is illustrated in a simulation study and in applications to three real data sets.
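As a rough illustration of combining a Box–Cox-type transformation with GLM fitting machinery of the kind described above, the sketch below profiles the transformation parameter over a grid. It is not the authors' estimator; the simulated data, the grid, and the use of statsmodels are illustrative assumptions.

```python
# Minimal sketch (not the paper's estimator): profile a Box-Cox
# transformation parameter lambda by fitting a Gaussian GLM to the
# transformed response over a grid and keeping the lambda with the
# largest profile log-likelihood.  Data are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 1, n)
y = np.exp(1.0 + 2.0 * x + rng.normal(scale=0.3, size=n))   # positive response

X = sm.add_constant(x)
best = None
for lam in np.linspace(-1.0, 2.0, 61):
    # Box-Cox transform of the response
    yt = np.log(y) if abs(lam) < 1e-12 else (y**lam - 1.0) / lam
    fit = sm.GLM(yt, X, family=sm.families.Gaussian()).fit()
    # profile log-likelihood on the original scale = Gaussian log-likelihood
    # of the transformed data + Jacobian term (lam - 1) * sum(log y)
    llf = fit.llf + (lam - 1.0) * np.sum(np.log(y))
    if best is None or llf > best[0]:
        best = (llf, lam, fit)

print("profile-likelihood estimate of lambda:", round(best[1], 2))
print(best[2].params)                                        # regression coefficients
```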
942.
Studying the right tail of a distribution, one can classify the distributions into three classes based on the extreme value index γ. The class γ&gt;0 corresponds to Pareto-type or heavy tailed distributions, while γ&lt;0 indicates that the underlying distribution has a finite endpoint. The Weibull-type distributions form an important subgroup within the Gumbel class with γ=0. The tail behaviour can then be specified using the Weibull tail index. Classical estimators of this index show severe bias. In this paper we present a new estimation approach based on the mean excess function, which exhibits improved bias and mean squared error. The asserted properties are supported by simulation experiments and asymptotic results. Illustrations with real life data sets are provided.
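The estimator itself is not reproduced here, but the quantity it builds on, the empirical mean excess function e(u) = E[X − u | X &gt; u], is simple to compute. The sketch below does only that, on simulated Weibull-type data; the sample size and threshold grid are arbitrary.

```python
# Generic sketch: empirical mean excess function e(u) = E[X - u | X > u],
# the building block the Weibull-tail approach above is based on.
# This is not the paper's estimator itself; data are simulated.
import numpy as np

rng = np.random.default_rng(1)
x = rng.weibull(0.7, size=5000) * 2.0     # Weibull-type sample

def mean_excess(sample, u):
    """Average exceedance above the threshold u (nan if nothing exceeds)."""
    exc = sample[sample > u] - u
    return exc.mean() if exc.size else np.nan

thresholds = np.quantile(x, np.linspace(0.5, 0.99, 20))
for u in thresholds:
    print(f"u = {u:8.3f}   e(u) = {mean_excess(x, u):8.3f}")
```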
943.
We consider in this paper the regularization by projection of a linear inverse problem Y=Af+εξ where ξ denotes a Gaussian white noise, A a compact operator and ε&gt;0 a noise level. Compared to the standard unbiased risk estimation (URE) method, the risk hull minimization (RHM) procedure presents a very interesting numerical behavior. However, the regularization in the singular value decomposition setting requires the knowledge of the eigenvalues of A. Here, we deal with noisy eigenvalues: only observations on this sequence are available. We study the efficiency of the RHM method in this situation. More generally, we shed light on some properties usually related to the regularization with a noisy operator.
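A toy sequence-space version of regularization by projection may help fix ideas. The sketch below selects the projection dimension with a plain unbiased-risk-estimate rule (the URE benchmark mentioned in the abstract), not with risk hull minimization, and the operator eigenvalues b_k and target coefficients f_k are invented mildly ill-posed sequences.

```python
# Toy sketch of regularization by projection for Y = A f + eps*xi, written
# in sequence space as y_k = b_k f_k + eps xi_k.  The truncation level is
# chosen by a simple unbiased risk estimate (URE), i.e. the benchmark the
# paper compares against, not by risk hull minimization; b_k and f_k are
# made-up sequences.
import numpy as np

rng = np.random.default_rng(2)
n, eps = 1000, 0.01
k = np.arange(1, n + 1)
b = k**-1.0                        # "eigenvalues" of the compact operator A
f = k**-2.0                        # unknown coefficients to recover
y = b * f + eps * rng.standard_normal(n)

# URE(m) = sum_{k<=m} [ 2*eps^2/b_k^2 - (y_k/b_k)^2 ]  (up to a constant in m)
terms = 2.0 * eps**2 / b**2 - (y / b)**2
ure = np.cumsum(terms)
m_star = int(np.argmin(ure)) + 1   # projection dimension minimizing URE

fhat = np.zeros(n)
fhat[:m_star] = y[:m_star] / b[:m_star]
print("chosen dimension m*:", m_star)
print("L2 error of the projection estimate:", np.linalg.norm(fhat - f))
```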
944.
Under appropriate long range dependence conditions, the point process of exceedances of a stationary sequence converges weakly to a homogeneous compound Poisson point process. This limiting point process can be characterized by the extremal index and the cluster-size probabilities. In this paper we address the problem of estimating these quantities and consider the intervals estimators introduced in Ferro and Segers [2003. Inference for clusters of extreme values. J. Roy. Statist. Soc. Ser. B 65, 545–556] and in Ferro [2004. Statistical methods for clusters of extreme values. Ph.D. Thesis, Lancaster University]. We establish asymptotic weak convergence to Gaussian random variables and give their asymptotic variance.
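For concreteness, a sketch of the intervals estimator, in the form in which it is usually quoted from Ferro and Segers (2003), is given below. The moving-maximum test series (whose extremal index is 1/2) and the threshold choice are illustrative, and the constants should be checked against the original paper before serious use.

```python
# Sketch of the intervals estimator of the extremal index, in the form
# usually quoted from Ferro and Segers (2003): it is computed from the
# interexceedance times of a high threshold.  Illustrative only; check
# the exact constants against the original paper.
import numpy as np

def intervals_estimator(x, u):
    """Extremal index estimate from the exceedances of threshold u."""
    s = np.flatnonzero(np.asarray(x) > u)         # exceedance times
    if s.size < 2:
        raise ValueError("need at least two exceedances")
    t = np.diff(s).astype(float)                  # interexceedance times
    n = t.size
    if t.max() <= 2:
        theta = 2.0 * t.sum()**2 / (n * np.sum(t**2))
    else:
        theta = 2.0 * np.sum(t - 1.0)**2 / (n * np.sum((t - 1.0) * (t - 2.0)))
    return min(1.0, theta)

rng = np.random.default_rng(3)
z = rng.pareto(1.0, 20001) + 1.0                  # iid heavy-tailed noise
x = np.maximum(z[1:], z[:-1])                     # moving maximum: extremal index 1/2
u = np.quantile(x, 0.95)
print("estimated extremal index:", round(intervals_estimator(x, u), 3))
```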
945.
Dynamic programming (DP) is a fast, elegant method for solving many one-dimensional optimisation problems but, unfortunately, most problems in image analysis, such as restoration and warping, are two-dimensional. We consider three generalisations of DP. The first is iterated dynamic programming (IDP), where DP is used to recursively solve each of a sequence of one-dimensional problems in turn, to find a local optimum. A second algorithm is an empirical, stochastic optimiser, which is implemented by adding progressively less noise to IDP. The final approach replaces DP by a more computationally intensive Forward-Backward Gibbs Sampler, and uses a simulated annealing cooling schedule. Results are compared with existing pixel-by-pixel methods of iterated conditional modes (ICM) and simulated annealing in two applications: to restore a synthetic aperture radar (SAR) image, and to warp a pulsed-field electrophoresis gel into alignment with a reference image. We find that IDP and its stochastic variant outperform the remaining algorithms.
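To make the first of these generalisations concrete, here is a self-contained sketch of iterated dynamic programming for a two-label restoration problem: exact 1D DP (Viterbi) is run on one row at a time with the neighbouring rows fixed, and sweeps are repeated. The energy, labels and noisy test image are invented, so this illustrates the idea rather than the authors' implementation.

```python
# Sketch of iterated dynamic programming (IDP) for image restoration:
# exact 1D dynamic programming is run on one row at a time, with the rows
# above and below held fixed, and row sweeps are repeated.  Generic
# illustration only; labels, penalties and the test image are made up.
import numpy as np

def dp_row(y_row, up, down, labels, lam):
    """Exact minimiser of a 1D chain energy for one image row."""
    n, L = y_row.size, labels.size
    # unary cost: data term plus coupling to the fixed rows above/below
    unary = (y_row[:, None] - labels[None, :])**2
    if up is not None:
        unary += lam * (labels[None, :] - up[:, None])**2
    if down is not None:
        unary += lam * (labels[None, :] - down[:, None])**2
    pair = lam * (labels[:, None] - labels[None, :])**2    # horizontal smoothness
    cost = unary[0].copy()
    back = np.zeros((n, L), dtype=int)
    for i in range(1, n):
        total = cost[:, None] + pair                       # previous label x new label
        back[i] = np.argmin(total, axis=0)
        cost = total[back[i], np.arange(L)] + unary[i]
    seq = np.empty(n, dtype=int)                           # backtrack optimal labels
    seq[-1] = int(np.argmin(cost))
    for i in range(n - 1, 0, -1):
        seq[i - 1] = back[i, seq[i]]
    return labels[seq]

def idp_restore(y, labels, lam=1.0, sweeps=5):
    x = labels[np.argmin(np.abs(y[..., None] - labels), axis=-1)]  # nearest-label init
    for _ in range(sweeps):
        for r in range(y.shape[0]):                        # row-by-row sweep
            up = x[r - 1] if r > 0 else None
            down = x[r + 1] if r + 1 < y.shape[0] else None
            x[r] = dp_row(y[r], up, down, labels, lam)
    return x

rng = np.random.default_rng(4)
truth = np.zeros((32, 32))
truth[8:24, 8:24] = 1.0                                    # simple square object
y = truth + 0.6 * rng.standard_normal(truth.shape)         # noisy observation
xhat = idp_restore(y, np.array([0.0, 1.0]), lam=0.8)
print("misclassified pixels:", int(np.sum(xhat != truth)))
```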
946.
A broad spectrum of flexible univariate and multivariate models can be constructed by using a hidden truncation paradigm. Such models can be viewed as being characterized by a basic marginal density, a family of conditional densities and a specified hidden truncation point, or points. The resulting class of distributions includes the basic marginal density as a special case (or as a limiting case), but also includes an array of models that may unexpectedly include many well known densities. Most of the well known skew-normal models (developed from the seed distribution popularized by Azzalini [(1985). A class of distributions which includes the normal ones. Scand. J. Statist. 12(2), 171–178]) can be viewed as being products of such a hidden truncation construction. However, the many hidden truncation models with non-normal component densities undoubtedly deserve further attention.
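The construction is easy to demonstrate for the best-known case: conditioning one component of a bivariate standard normal on the other being positive yields Azzalini's skew-normal with shape parameter α = ρ/√(1−ρ²). The short sketch below checks this by simulation; the correlation and sample size are arbitrary.

```python
# Sketch of the hidden truncation construction behind the skew-normal:
# draw (X, W) bivariate standard normal with correlation rho and keep X
# only when W > 0.  The retained values follow Azzalini's skew-normal
# with alpha = rho / sqrt(1 - rho^2), i.e. density 2*phi(x)*Phi(alpha*x).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
rho, n = 0.8, 200000
x = rng.standard_normal(n)
w = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)  # corr(x, w) = rho
kept = x[w > 0]                                               # hidden truncation at 0

alpha = rho / np.sqrt(1.0 - rho**2)
for t in (-1.0, 0.0, 1.0):
    emp = np.mean(np.abs(kept - t) < 0.05) / 0.10             # crude density estimate
    sn = 2.0 * norm.pdf(t) * norm.cdf(alpha * t)              # skew-normal density
    print(f"x = {t:+.1f}   empirical {emp:.3f}   skew-normal {sn:.3f}")
```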
947.
This paper is concerned with semiparametric discrete kernel estimators when the unknown count distribution can be considered to have a general weighted Poisson form. The estimator is constructed by multiplying the Poisson estimate with a nonparametric discrete kernel-type estimate of the Poisson weight function. Comparisons are then carried out with the ordinary discrete kernel probability mass function estimators. The Poisson weight function is thus a local multiplicative correction factor, and is considered as the uniform measure to detect departures from the equidispersed Poisson distribution. In this way, the effects of dispersion and zero-proportion with respect to the standard Poisson distribution are also minimized. This method of estimation is also applied to the weighted binomial form for the count distribution having a finite support. The proposed estimators, in addition to being simple, easy-to-implement and effective, also outperform the competing nonparametric and parametric estimators in finite-sample situations. Two examples illustrate this new semiparametric estimation.
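Reading the abstract operationally, the estimator multiplies a fitted Poisson pmf by a smoothed multiplicative weight. The sketch below follows that recipe with a simple three-point discrete kernel chosen only for illustration (it is not one of the kernels studied in the paper), on simulated overdispersed counts.

```python
# Loose sketch of the idea: multiply the fitted Poisson pmf by a smoothed
# multiplicative correction ("weight") estimated from the data.  The
# three-point kernel below is an illustrative stand-in, not one of the
# discrete kernels studied in the paper.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(6)
y = rng.negative_binomial(3, 0.4, size=2000)     # overdispersed counts

lam = y.mean()                                   # Poisson MLE
support = np.arange(y.max() + 1)
emp = np.bincount(y, minlength=support.size) / y.size
base = poisson.pmf(support, lam)

w_raw = np.where(base > 0, emp / base, 0.0)      # raw weight function
kernel = np.array([0.25, 0.5, 0.25])             # simple symmetric discrete kernel
w_smooth = np.convolve(w_raw, kernel, mode="same")

p_hat = base * w_smooth
p_hat /= p_hat.sum()                             # renormalise to a pmf
for k in support[:10]:
    print(f"y={k}:  empirical {emp[k]:.3f}  Poisson {base[k]:.3f}  semiparametric {p_hat[k]:.3f}")
```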
948.
The variational approach to Bayesian inference enables simultaneous estimation of model parameters and model complexity. An interesting feature of this approach is that it also leads to an automatic choice of model complexity. Empirical results from the analysis of hidden Markov models with Gaussian observation densities illustrate this. If the variational algorithm is initialized with a large number of hidden states, redundant states are eliminated as the method converges to a solution, thereby leading to a selection of the number of hidden states. In addition, through the use of a variational approximation, the deviance information criterion for Bayesian model selection can be extended to the hidden Markov model framework. Calculation of the deviance information criterion provides a further tool for model selection, which can be used in conjunction with the variational approach.
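For readers unfamiliar with it, the deviance information criterion referred to above has the standard form below (this is the general definition, not anything specific to the hidden Markov setting of the paper); in the variational treatment, the posterior expectations are replaced by expectations under the variational approximation.

```latex
% Standard definition of the deviance information criterion, with
% D(\theta) = -2 \log p(y \mid \theta) the deviance:
\[
  \mathrm{DIC} \;=\; \overline{D} + p_D ,
  \qquad
  p_D \;=\; \overline{D} - D(\bar{\theta}),
  \qquad
  \overline{D} \;=\; \mathbb{E}_{\theta \mid y}\!\left[ D(\theta) \right],
\]
% where \bar{\theta} is the posterior mean of the parameters and p_D is
% the effective number of parameters.
```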
949.
In a sample of censored survival times, the presence of an immune proportion of individuals who are not subject to death, failure or relapse, may be indicated by a relatively high number of individuals with large censored survival times. In this paper the generalized log-gamma model is modified for the possibility that long-term survivors may be present in the data. The model attempts to separately estimate the effects of covariates on the surviving fraction, that is, the proportion of the population for which the event never occurs. The logistic function is used for the regression model of the surviving fraction. Inference for the model parameters is considered via maximum likelihood. Some influence methods, such as the local influence and total local influence of an individual are derived, analyzed and discussed. Finally, a data set from the medical area is analyzed under the log-gamma generalized mixture model. A residual analysis is performed in order to select an appropriate model.
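A compact way to see the structure of such a mixture cure model is through its likelihood: with π(x) the logistic model for the immune (surviving) fraction and S_u, f_u the survival and density functions of the latency distribution, censored observations contribute π + (1−π)S_u and events contribute (1−π)f_u. The sketch below codes that log-likelihood with a Weibull latency in place of the generalized log-gamma used in the paper; the toy data and the optimizer call are illustrative.

```python
# Sketch of a mixture cure model log-likelihood with a logistic model for
# the cured (long-term survivor) fraction.  For brevity the latency
# distribution is Weibull rather than the generalized log-gamma used in
# the paper; data and optimizer settings are illustrative only.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, delta, X):
    """Negative log-likelihood of the logistic/Weibull mixture cure model."""
    p = X.shape[1]
    beta = params[:p]                                # logistic coefficients (cure fraction)
    k, lam = np.exp(params[p]), np.exp(params[p + 1])
    pi_cure = 1.0 / (1.0 + np.exp(-(X @ beta)))      # P(cured | x)
    S_u = np.exp(-(t / lam)**k)                      # Weibull survival
    f_u = (k / lam) * (t / lam)**(k - 1) * S_u       # Weibull density
    ll_event = np.log(np.clip((1 - pi_cure) * f_u, 1e-300, None))
    ll_cens = np.log(np.clip(pi_cure + (1 - pi_cure) * S_u, 1e-300, None))
    return -np.sum(np.where(delta == 1, ll_event, ll_cens))

# toy data: one covariate, a cured subgroup, administrative censoring at t = 10
rng = np.random.default_rng(7)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
cured = rng.random(n) < 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * X[:, 1])))
latent = rng.weibull(1.5, n) * 2.0
t = np.where(cured, 10.0, np.minimum(latent, 10.0))
delta = (~cured & (latent < 10.0)).astype(int)

fit = minimize(neg_loglik, np.zeros(X.shape[1] + 2),
               args=(t, delta, X), method="Nelder-Mead")
print("converged:", fit.success, " estimates:", np.round(fit.x, 3))
```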
950.
Population forecasts entail a significant amount of uncertainty, especially for long-range horizons and for places with small or rapidly changing populations. This uncertainty can be dealt with by presenting a range of projections or by developing statistical prediction intervals. The latter can be based on models that incorporate the stochastic nature of the forecasting process, on empirical analyses of past forecast errors, or on a combination of the two. In this article, we develop and test prediction intervals based on empirical analyses of past forecast errors for counties in the United States. Using decennial census data from 1900 to 2000, we apply trend extrapolation techniques to develop a set of county population forecasts; calculate forecast errors by comparing forecasts to subsequent census counts; and use the distribution of errors to construct empirical prediction intervals. We find that empirically-based prediction intervals provide reasonably accurate predictions of the precision of population forecasts, but provide little guidance regarding their tendency to be too high or too low. We believe the construction of empirically-based prediction intervals will help users of small-area population forecasts measure and evaluate the uncertainty inherent in population forecasts and plan more effectively for the future.
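The construction of an empirically-based interval is simple enough to show in a few lines: collect the percentage errors of past forecasts, take their empirical percentiles, and invert them around a new point forecast. The numbers below are invented, not the article's county data.

```python
# Sketch of empirically-based prediction intervals: take the empirical
# percentiles of past forecast errors (in percent) and apply them to a
# new point forecast.  The error archive and the forecast are made up.
import numpy as np

# algebraic percentage errors of past forecasts: (forecast - actual) / actual
past_errors = np.array([-0.12, -0.05, 0.02, 0.04, 0.07, 0.10, 0.15,
                        -0.08, 0.01, 0.06, -0.03, 0.09, 0.18, -0.15])

new_forecast = 25_000                                   # point forecast for a county

lo_err, hi_err = np.percentile(past_errors, [5, 95])    # central 90% of errors
# if a forecast was high by e percent, the actual value was forecast / (1 + e)
interval = (new_forecast / (1 + hi_err), new_forecast / (1 + lo_err))
print(f"90% empirical prediction interval: {interval[0]:,.0f} - {interval[1]:,.0f}")
```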