911.
Finite mixture models, that is, weighted averages of parametric distributions, provide a powerful way to extend parametric families of distributions to fit data sets not adequately fit by a single parametric distribution. First-order finite mixture models have been widely used in the physical, chemical, biological, and social sciences for over 100 years. Using maximum likelihood estimation, we demonstrate how a first-order finite mixture model can represent the large variability in data collected by the U.S. Environmental Protection Agency for the concentration of radon-222 in drinking water supplied from ground water, even when 28% of the data fall at or below the minimum reporting level. Extending the use of maximum likelihood, we also illustrate how a second-order finite mixture model can separate and represent both the variability and the uncertainty in the data set.
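As a rough illustration of the first-order approach, the sketch below fits a two-component normal mixture by EM on the log scale. The synthetic data, component count, and all parameter values are hypothetical stand-ins, and the censoring at the minimum reporting level discussed in the abstract is not handled here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in data: a 60/40 mixture of two lognormal components.
x = np.concatenate([rng.lognormal(0.0, 0.5, 600),
                    rng.lognormal(2.0, 0.5, 400)])
z = np.log(x)  # on the log scale each component is normal

# EM for a two-component normal mixture (a first-order finite mixture).
w = np.array([0.5, 0.5])
mu = np.array([z.min(), z.max()])
sd = np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior probability that each point came from each component
    dens = (w[:, None] / (sd[:, None] * np.sqrt(2 * np.pi))
            * np.exp(-0.5 * ((z - mu[:, None]) / sd[:, None]) ** 2))
    resp = dens / dens.sum(axis=0)
    # M-step: responsibility-weighted updates of weights, means, and sds
    n_k = resp.sum(axis=1)
    w = n_k / z.size
    mu = (resp * z).sum(axis=1) / n_k
    sd = np.sqrt((resp * (z - mu[:, None]) ** 2).sum(axis=1) / n_k)

print(np.round(np.sort(mu), 2))  # estimated component means on the log scale
```

A second-order analysis would additionally place a distribution over the mixture parameters themselves; the sketch covers only the first-order (variability) layer.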
912.
In economics and government statistics, aggregated data rather than individual-level data are usually reported, both for data confidentiality and for simplicity. In this paper we develop a method of flexibly estimating the probability density function of the population from aggregated data obtained as group averages when individual-level data are grouped according to quantile limits. The kernel density estimator has commonly been applied to such data without taking the aggregation process into account and has been shown to perform poorly. Our method models the quantile function as an integral of the exponential of a spline function and deduces the density function from the quantile function. We match the aggregated data to their theoretical counterparts using least squares, and regularize the estimation by using the squared second derivatives of the density function as the penalty function. A computational algorithm is developed to implement the method. Applications to simulated data and US household income survey data show that our penalized spline estimator can accurately recover the density function of the underlying population, while the common use of kernel density estimation is severely biased. The method is applied to study the dynamics of China's urban income distribution using published interval-aggregated data from 1985–2010.
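The sketch below illustrates only the matching step: observed group (decile) means are matched to their theoretical counterparts by least squares. For brevity it substitutes a parametric lognormal quantile function for the paper's penalized exponential-spline model; the data and starting values are hypothetical.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
# Hypothetical individual incomes, observed by us only as decile group means.
incomes = rng.lognormal(mean=10.0, sigma=0.6, size=50_000)
u = np.linspace(0.0, 1.0, 11)                    # decile probability limits
edges = np.quantile(incomes, u)
group_means = np.array([incomes[(incomes >= a) & (incomes <= b)].mean()
                        for a, b in zip(edges[:-1], edges[1:])])

def theoretical_group_means(params):
    """Closed-form conditional means of a lognormal between its own deciles."""
    mu, sigma = params
    z = stats.norm.ppf(u)                        # standard normal quantiles
    mass = stats.norm.cdf(z[1:] - sigma) - stats.norm.cdf(z[:-1] - sigma)
    return np.exp(mu + sigma ** 2 / 2) * mass / np.diff(u)

# Match observed group means to their theoretical counterparts by least squares.
fit = optimize.least_squares(
    lambda p: theoretical_group_means(p) - group_means,
    x0=[9.0, 1.0], bounds=([0.0, 1e-3], [20.0, 5.0]))
print(np.round(fit.x, 2))  # recovered (mu, sigma)
```

With the flexible spline model of the paper, a roughness penalty on the density's second derivative replaces the parametric assumption made here.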
913.
Maximum likelihood, goodness-of-fit, and symmetric percentile estimators of the power transformation parameter p are considered. The comparative robustness of each estimation procedure is evaluated when the transformed data can be made symmetric but may not necessarily be normal. Seven types of symmetric distributions are considered, as well as four contaminated normal distributions, over a range of six p values for samples of size 25, 50, and 100. The results indicate that the maximum likelihood estimator was slightly better than the goodness-of-fit estimator, but both were greatly superior to the percentile estimator. In general, the procedures were robust to symmetric distributional departures from normality, but increasing kurtosis caused appreciable increases in the variation of estimated p values. The variability of p was found to decrease more than exponentially with decreases in the coefficient of variation of the underlying normal distribution. The standard likelihood ratio confidence interval procedure was found not to be generally useful.
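A quick way to see a maximum likelihood estimator of the power parameter in action is SciPy's Box-Cox utilities. The lognormal sample below is a hypothetical example for which the true parameter is p = 0 (the log transform); the `pearsonr` criterion is shown only as a contrasting non-likelihood estimator, not the paper's goodness-of-fit procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical data: exp(normal), so the true power parameter is p = 0.
x = rng.lognormal(mean=2.0, sigma=0.25, size=500)

lam_mle = stats.boxcox_normmax(x, method="mle")        # maximum likelihood
lam_corr = stats.boxcox_normmax(x, method="pearsonr")  # non-likelihood contrast
print(round(float(lam_mle), 2), round(float(lam_corr), 2))
```

Both estimates should land near zero here; their sampling variability across repeated draws is what the abstract's robustness comparison quantifies.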
914.
This paper examines a number of methods of handling missing outcomes in regressive logistic regression modelling of familial binary data, and compares them with an EM algorithm approach via a simulation study. The results indicate that a strategy based on imputation of missing values leads to biased estimates, and that a strategy of excluding incomplete families has a substantial effect on the variability of the parameter estimates. Recommendations are made which depend, amongst other factors, on the amount of missing data and on the availability of software.
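A minimal simulation in the same spirit (though with a generic logistic model rather than the paper's regressive familial model) shows why naive imputation of missing binary outcomes biases the estimates while complete-case analysis stays unbiased under MCAR; all parameter values are hypothetical.

```python
import numpy as np
from scipy import optimize, special

rng = np.random.default_rng(3)
n = 5_000
x = rng.normal(size=n)
y = rng.binomial(1, special.expit(1.0 * x))        # true slope = 1.0

def logit_mle(xv, yv):
    """Logistic regression MLE by direct minimization of the negative LLF."""
    def nll(b):
        eta = b[0] + b[1] * xv
        return -np.sum(yv * eta - np.logaddexp(0.0, eta))
    return optimize.minimize(nll, x0=[0.0, 0.0]).x

miss = rng.random(n) < 0.30                        # 30% of outcomes missing, MCAR
slope_cc = logit_mle(x[~miss], y[~miss])[1]        # complete-case analysis
y_naive = np.where(miss, 0, y)                     # naive: impute missing y as 0
slope_imp = logit_mle(x, y_naive)[1]
print(round(slope_cc, 2), round(slope_imp, 2))     # imputation attenuates the slope
```

The complete-case slope stays near the truth while its variance grows with the fraction dropped, matching the trade-off the abstract describes.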
915.
Consider a linear regression model with unknown regression parameters β0 and independent errors of unknown distribution. Block the observations into q groups whose independent variables have a common value, and measure the homogeneity of the blocks of residuals by a Cramér-von Mises q-sample statistic Tq(β). This statistic is designed so that its expected value, as a function of the chosen regression parameter β, has a minimum value of zero precisely at the true value β0. The minimizer β̂ of Tq(β) over all β is shown to be a consistent estimate of β0. It is also shown that the bootstrap distribution of Tq(β̂) can be used to perform a lack-of-fit test of the regression model and to construct a confidence region for β0.
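A small sketch of the idea, with a hypothetical one-parameter design: residuals are blocked by the common x value, a Cramér-von Mises-type q-sample statistic compares each block's empirical CDF with the pooled one, and a fine grid search stands in for the minimization over β.

```python
import numpy as np

rng = np.random.default_rng(4)
q, m, beta0 = 5, 40, 2.0                       # 5 blocks, 40 obs each, true slope
x = np.repeat(np.array([1.0, 2.0, 3.0, 4.0, 5.0]), m)  # common x within a block
labels = np.repeat(np.arange(q), m)
y = beta0 * x + rng.standard_normal(q * m)     # iid errors, distribution unknown

def T(beta):
    """Cramér-von Mises-type q-sample homogeneity of the blocked residuals."""
    e = y - beta * x
    pooled = np.sort(e)
    F_pool = np.arange(1, e.size + 1) / e.size
    stat = 0.0
    for g in range(q):
        eg = np.sort(e[labels == g])
        F_g = np.searchsorted(eg, pooled, side="right") / eg.size
        stat += eg.size * np.mean((F_g - F_pool) ** 2)
    return stat

# T(beta) is piecewise constant in beta (it changes only when residual
# orderings cross), so a fine grid search stands in for the minimization.
grid = np.linspace(0.0, 4.0, 801)
beta_hat = grid[np.argmin([T(b) for b in grid])]
print(round(float(beta_hat), 2))
```

At the true slope the residual blocks share one distribution, so T is driven to its minimum there; away from it, each block's residuals shift by a different amount and homogeneity breaks.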
916.
This article gives a unified account of nonparametric statistics, covering testing, estimation, multiple comparisons, analysis of variance, and regression, for rounded-off data, with ties handled by the average scores method. The theory is illustrated by means of numerous applications to the biomedical, engineering, and behavioral sciences.
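The average scores (midrank) treatment of ties is what `scipy.stats.rankdata` computes with `method="average"`; a tiny example with rounded-off data:

```python
from scipy import stats

# Rounded-off data produce ties; average scores assign each tied value the
# mean of the ranks the tie occupies (e.g. the two 1.5s share ranks 2 and 3).
data = [1.2, 1.5, 1.5, 2.0, 2.0, 2.0, 3.1]
scores = stats.rankdata(data, method="average")
print(scores)  # 1.0, 2.5, 2.5, 5.0, 5.0, 5.0, 7.0
```

These midranks then feed directly into rank tests such as the Wilcoxon or Kruskal-Wallis procedures in the presence of ties.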
917.
The log-likelihood function (LLF) of the single (location) parameter Cauchy distribution can exhibit up to n relative maxima, where n is the sample size. To compute the maximum likelihood estimate of the location parameter, previously published methods have advocated scanning the LLF over a sufficiently large portion of the real line to locate the absolute maximum. This note shows that, given an easily derived upper bound on the second derivative of the negative LLF, Brent's univariate numerical global optimization method can be used to locate the absolute maximum among several relative maxima of the LLF without performing an exhaustive search over the real line.
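Since Brent's global method is not available in common scientific libraries, the sketch below substitutes a coarse scan plus a bounded local refinement to locate the absolute maximum; the five-point sample is hypothetical and chosen so the location LLF has a spurious relative maximum near the outlier.

```python
import numpy as np
from scipy import optimize

# Hypothetical sample (known scale 1); the outlier at 20 creates a relative
# maximum of the location log-likelihood away from the main cluster.
x = np.array([-5.0, -1.0, 0.0, 1.0, 20.0])

def negll(theta):
    """Negative Cauchy location log-likelihood (additive constant dropped)."""
    return np.sum(np.log1p((x - theta) ** 2))

# Stand-in for Brent's global method: a coarse scan picks the best basin,
# then a bounded local search refines the absolute maximum within it.
grid = np.linspace(x.min(), x.max(), 200)
theta0 = grid[np.argmin([negll(t) for t in grid])]
theta_hat = optimize.minimize_scalar(
    negll, bounds=(theta0 - 1.0, theta0 + 1.0), method="bounded").x
print(round(float(theta_hat), 2))
```

A purely local optimizer started near 20 would settle on the spurious maximum; the global bound exploited in the note makes the basin selection rigorous rather than grid-based.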
918.
Ridge regression is re-examined and ridge estimators based on prior information are introduced. A necessary and sufficient condition is given for such ridge estimators to yield estimators of every nonnull linear combination of the regression coefficients with smaller mean square error than that of the Gauss-Markov best linear unbiased estimator.
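A minimal NumPy sketch of a ridge estimator shrinking toward prior information b0 rather than the origin; the design, prior, and penalty constant k are hypothetical, and the paper's necessary-and-sufficient MSE condition itself is not checked here.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 60, 4
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5, 3.0])        # hypothetical true coefficients
y = X @ beta + rng.normal(scale=2.0, size=n)

def ridge(X, y, k, b0=None):
    """Ridge estimator (X'X + kI)^{-1}(X'y + k b0), shrinking toward prior b0."""
    b0 = np.zeros(X.shape[1]) if b0 is None else b0
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y + k * b0)

ols = ridge(X, y, 0.0)                        # k = 0 recovers Gauss-Markov OLS
prior = ridge(X, y, 5.0, b0=beta)             # prior centred at the truth
print(np.round(ols, 2), np.round(prior, 2))
```

When the prior center is close to the true coefficients, the estimate is pulled from the OLS solution toward the truth; whether this beats OLS in MSE for every linear combination is exactly what the paper's condition characterizes.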
919.
The uniformly minimum variance unbiased estimator (UMVUE) of reliability in the stress-strength model (known stress) is obtained for a multicomponent survival model based on exponential distributions for a parallel system. The variance of this estimator is compared with the Cramér-Rao lower bound (CRB) for the variance of an unbiased estimator of reliability, and with the mean square error (MSE) of the maximum likelihood estimator of reliability in the case of a two-component system.
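For intuition, the single-component exponential case has the closed form R = P(Y < X) = b/(a + b) for strength X ~ Exp(rate a) and stress Y ~ Exp(rate b); the sketch below checks the plug-in MLE against a Monte Carlo estimate. The paper's UMVUE for the parallel multicomponent system is more involved and is not reproduced here; the rates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
a, b = 1.0, 2.0                      # hypothetical rates: X ~ Exp(a), Y ~ Exp(b)
R_true = b / (a + b)                 # closed form: R = P(Y < X) = b/(a+b)

x = rng.exponential(1.0 / a, 10_000)             # strengths
y = rng.exponential(1.0 / b, 10_000)             # stresses
a_hat, b_hat = 1.0 / x.mean(), 1.0 / y.mean()    # exponential rate MLEs
R_mle = b_hat / (a_hat + b_hat)                  # plug-in MLE of reliability
R_mc = np.mean(y < x)                            # direct Monte Carlo check
print(round(R_true, 3), round(float(R_mle), 3), round(float(R_mc), 3))
```

The MLE here is biased in small samples, which is the gap the UMVUE closes, at the cost of a more complicated estimator.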
920.
Continuous data are often measured or used in binned or rounded form. In this paper we follow up on Hall's work analyzing the effect of using equally spaced binned data in a kernel density estimator. It is shown that a surprisingly large amount of binning does not adversely affect the integrated mean squared error of a kernel estimate.
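A quick numerical check of the claim, assuming hypothetical normal data and a bin width comparable to the kernel bandwidth: the kernel estimates computed from raw and binned data differ very little in approximate integrated squared error.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.standard_normal(2_000)

# Round the data to an equally spaced grid of width comparable to the
# kernel bandwidth, then fit ordinary kernel estimates to both versions.
h_bin = 0.25
x_binned = np.round(x / h_bin) * h_bin

kde_raw = stats.gaussian_kde(x)
kde_bin = stats.gaussian_kde(x_binned)

grid = np.linspace(-3.0, 3.0, 121)
diff = kde_raw(grid) - kde_bin(grid)
ise = float(np.sum(diff ** 2) * (grid[1] - grid[0]))  # approximate ISE
print(round(ise, 5))
```

Only once the bin width grows well past the bandwidth does the binned estimate start to degrade noticeably, consistent with the abstract's conclusion.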