Search results 81–90 (of 4,659 total):
81.
Summary. We propose covariance-regularized regression, a family of methods for prediction in high-dimensional settings that uses a shrunken estimate of the inverse covariance matrix of the features to achieve superior prediction. An estimate of the inverse covariance matrix is obtained by maximizing the log-likelihood of the data, under a multivariate normal model, subject to a penalty; it is then used to estimate coefficients for the regression of the response onto the features. We show that ridge regression, the lasso and the elastic net are special cases of covariance-regularized regression, and we demonstrate that certain previously unexplored forms of covariance-regularized regression can outperform existing methods in a range of situations. The covariance-regularized regression framework is extended to generalized linear models and linear discriminant analysis, and is used to analyse gene expression data sets with multiple class and survival outcomes.
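A minimal sketch of the idea, assuming an L1 penalty on the inverse covariance (one member of the family described above); the penalty choice and rescaling are illustrative, not the paper's exact procedure:

```python
# Sketch: plug a shrunken (penalized-ML) precision matrix of the features
# into the least-squares formula beta = Sigma^{-1} Cov(X, y).
import numpy as np
from sklearn.covariance import GraphicalLasso

def cov_regularized_regression(X, y, alpha=0.1):
    """Regress y on X using a shrunken inverse covariance of the features."""
    Xc = X - X.mean(axis=0)          # centre features
    yc = y - y.mean()                # centre response
    n = X.shape[0]

    # Penalized ML estimate of the precision matrix under a multivariate
    # normal model with an L1 penalty (graphical lasso).
    gl = GraphicalLasso(alpha=alpha).fit(Xc)
    theta = gl.precision_

    s_xy = Xc.T @ yc / n             # sample Cov(X, y)
    return theta @ s_xy              # regression coefficients

# Toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=50)
print(cov_regularized_regression(X, y, alpha=0.05))
```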
82.
A fast and accurate method of confidence interval construction for the smoothing parameter in penalised spline and partially linear models is proposed. The method is akin to a parametric percentile bootstrap in which Monte Carlo simulation is replaced by saddlepoint approximation, and can therefore be viewed as an approximate bootstrap. It is applicable in a quite general setting, requiring only that the underlying estimator be the root of an estimating equation that is a quadratic form in normal random variables. This is the case under a variety of optimality criteria, such as those commonly denoted by maximum likelihood (ML), restricted ML (REML), generalized cross-validation (GCV) and Akaike's information criterion (AIC). Simulation studies reveal that under the ML and REML criteria the method delivers near-exact performance, with computational speeds an order of magnitude faster than existing exact methods and two orders of magnitude faster than a classical bootstrap. Perhaps most importantly, the proposed method offers a computationally feasible alternative when no known exact or asymptotic methods exist, e.g. under GCV and AIC. The methodology is illustrated with an application to the well-known fossil data; giving a range of plausible smoothing-parameter values in this instance can help answer questions about the statistical significance of apparent features in the data.
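The key building block is the distribution of a quadratic form in normal variables, which a saddlepoint approximation can evaluate in place of Monte Carlo resampling. Below is a sketch of the standard Lugannani–Rice tail approximation for Q = Σᵢ λᵢ·Zᵢ² with Zᵢ iid N(0,1); this is the kind of computation the abstract refers to, not the authors' full confidence-interval procedure:

```python
# Lugannani-Rice saddlepoint approximation to P(Q >= q) for a quadratic
# form Q = sum_i lambda_i * Z_i^2, Z_i iid N(0,1).
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def qform_tail(q, lam):
    lam = np.asarray(lam, dtype=float)
    K   = lambda s: -0.5 * np.sum(np.log1p(-2 * s * lam))      # CGF
    Kp  = lambda s: np.sum(lam / (1 - 2 * s * lam))            # K'(s)
    Kpp = lambda s: np.sum(2 * lam**2 / (1 - 2 * s * lam)**2)  # K''(s)

    upper = 1.0 / (2 * lam.max()) - 1e-8   # CGF exists for s < 1/(2*max lam)
    s_hat = brentq(lambda s: Kp(s) - q, -1e6, upper)           # saddlepoint

    if abs(s_hat) < 1e-8:                  # q near the mean: normal limit
        return 1 - norm.cdf((q - lam.sum()) / np.sqrt(2 * np.sum(lam**2)))

    w = np.sign(s_hat) * np.sqrt(2 * (s_hat * q - K(s_hat)))
    u = s_hat * np.sqrt(Kpp(s_hat))
    return 1 - norm.cdf(w) - norm.pdf(w) * (1 / w - 1 / u)     # Lugannani-Rice

# Sanity check against Monte Carlo
lam = [3.0, 1.0, 0.5]
z = np.random.default_rng(1).normal(size=(200000, 3))
print(qform_tail(6.0, lam), np.mean((z**2 @ np.array(lam)) >= 6.0))
```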
83.
Bayesian quantile regression for single-index models
Using an asymmetric Laplace distribution, which provides a mechanism for Bayesian inference of quantile regression models, we develop a fully Bayesian approach to fitting single-index models in conditional quantile regression. In this work, we use a Gaussian process prior for the unknown nonparametric link function and a Laplace distribution on the index vector, with the latter motivated by the recent popularity of the Bayesian lasso idea. We design a Markov chain Monte Carlo algorithm for posterior inference. Careful consideration of the singularity of the kernel matrix and the tractability of some of the full conditional distributions leads to a partially collapsed approach in which the nonparametric link function is integrated out in some of the sampling steps. Our simulations demonstrate the superior performance of the Bayesian method versus the frequentist approach. The method is further illustrated by an application to the hurricane data.
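The asymmetric-Laplace device works because the ALD log-likelihood at quantile level τ is, up to a constant, minus the usual check loss, so likelihood-based estimation of a quantile model reduces to quantile regression. A minimal sketch of that equivalence for a linear model (the paper's single-index and Gaussian-process machinery is omitted):

```python
# MAP/ML fit of a linear tau-quantile model via the asymmetric Laplace
# likelihood, which is equivalent to minimizing the check loss.
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """rho_tau(u) = u * (tau - 1{u < 0})"""
    return u * (tau - (u < 0))

def ald_neg_loglik(beta, X, y, tau, sigma=1.0):
    # ALD(mu, sigma, tau) density: f(y) proportional to
    # exp(-rho_tau((y - mu) / sigma)), so the negative log-likelihood is
    # the summed check loss of the residuals.
    u = (y - X @ beta) / sigma
    return np.sum(check_loss(u, tau))

# Toy data: estimate the conditional 0.9 quantile of y given x.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.uniform(0, 2, 200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=200)

fit = minimize(ald_neg_loglik, x0=np.zeros(2), args=(X, y, 0.9),
               method="Nelder-Mead")
print(fit.x)   # slope near 2; intercept shifted up toward the 0.9 quantile
```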
84.
Analytical methods for interval estimation of differences between variances have not previously been described. A simple analytical method is given for interval estimation of the difference between the variances of two independent samples. It is shown, using simulations, that confidence intervals generated with this method have close to nominal coverage even when sample sizes are small and unequal and observations are highly skewed and leptokurtic, provided the difference in variances is not very large. The method is also adapted for testing the hypothesis of no difference between variances. The test is robust but slightly less powerful than Bonett's test with small samples.
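A sketch of the kind of coverage simulation the abstract describes. The interval below is a generic large-sample one built from the estimated variance of each sample variance, Var(s²) ≈ (m₄ − s⁴)/n; it is a stand-in for illustration, not the paper's specific analytical interval:

```python
# Coverage check for an interval on sigma1^2 - sigma2^2.
import numpy as np
from scipy.stats import norm

def var_diff_ci(x, y, level=0.95):
    z = norm.ppf(0.5 + level / 2)
    def s2_and_var(a):
        n, s2 = len(a), np.var(a, ddof=1)
        m4 = np.mean((a - a.mean())**4)       # fourth central moment
        return s2, (m4 - s2**2) / n           # est. variance of s^2
    s2x, vx = s2_and_var(x)
    s2y, vy = s2_and_var(y)
    d, se = s2x - s2y, np.sqrt(vx + vy)
    return d - z * se, d + z * se

# Simulation: true difference of variances is 4 - 1 = 3.
rng = np.random.default_rng(3)
hits = 0
for _ in range(2000):
    x = rng.normal(0, 2, size=40)
    y = rng.normal(0, 1, size=25)
    lo, hi = var_diff_ci(x, y)
    hits += (lo <= 3.0 <= hi)
print("empirical coverage:", hits / 2000)
```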
85.
In reliability studies, one typically assumes a lifetime distribution for the units under study and then carries out the required analysis. One popular choice for the lifetime distribution is the family of two-parameter Weibull distributions (with scale and shape parameters) which, through a logarithmic transformation, can be transformed to the family of two-parameter extreme value distributions (with location and scale parameters). In carrying out a parametric analysis of this type, it is highly desirable to be able to test the validity of such a model assumption. A basic tool that is useful for this purpose is a quantile–quantile (QQ) plot, but in its use the issue of the choice of plotting position arises. Here, by adopting the optimal plotting points based on the Pitman closeness criterion proposed recently by Balakrishnan et al. (2010b), referred to as simultaneous closeness probability (SCP) plotting points, we propose a correlation-type goodness-of-fit test for the extreme value distribution. We compute the SCP plotting points for various sample sizes and use them to determine the mean, standard deviation and critical values for the proposed correlation-type test statistic. Using these critical values, we carry out a power study, similar to the one carried out by Kinnison (1989), through which we demonstrate that the use of SCP plotting points results in better power than the use of mean ranks as plotting points and nearly the same power as the use of median ranks. We then demonstrate the use of the SCP plotting points and the associated correlation-type test for Weibull analysis with an illustrative example. Finally, for the sake of comparison, we also adapt two statistics proposed by Gan and Koehler (1990), in the context of probability–probability (PP) plots, based on SCP plotting points and compare their performance to those based on mean ranks. The empirical study also reveals that the tests from the QQ plot have better power than those from the PP plot.
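A sketch of a correlation-type goodness-of-fit test of this kind for the extreme value (Gumbel-for-minima) distribution. Median-rank plotting points stand in for the SCP points, which require the authors' tables; critical values are obtained by Monte Carlo under the null:

```python
# Correlation-type GoF test: low correlation between the order statistics
# and the fitted-family quantiles on a QQ plot indicates poor fit.
import numpy as np
from scipy.stats import gumbel_l   # smallest extreme value distribution

def qq_correlation(x, p):
    """Correlation between order statistics and EV quantiles at points p."""
    return np.corrcoef(np.sort(x), gumbel_l.ppf(p))[0, 1]

def ev_corr_test(x, n_sim=5000, alpha=0.05, seed=0):
    n = len(x)
    p = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median ranks (Benard)
    r_obs = qq_correlation(x, p)
    rng = np.random.default_rng(seed)
    r_null = np.array([qq_correlation(gumbel_l.rvs(size=n, random_state=rng), p)
                       for _ in range(n_sim)])
    crit = np.quantile(r_null, alpha)             # small r => poor fit
    return r_obs, crit, r_obs < crit              # True => reject EV model

# Log-transformed Weibull data follow the extreme value distribution,
# so the test should NOT reject here:
rng = np.random.default_rng(4)
x = np.log(5.0 * rng.weibull(2.0, size=30))
print(ev_corr_test(x))
```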
86.
Probability plots are often used to estimate the parameters of distributions. Using large-sample properties of the empirical distribution function and of order statistics, variance-stabilizing weights for weighted least squares regression are derived. Weighted least squares regression is then applied to the estimation of the parameters of the Weibull and Gumbel distributions. The weights are independent of the parameters of the distributions considered. Monte Carlo simulation shows that the weighted least-squares estimators consistently outperform the usual least-squares estimators, especially in small samples.
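A sketch of weighted least-squares Weibull estimation from a probability plot. The weights used here follow the standard delta-method variance of the plotting ordinate, w ∝ [(1−p)ln(1−p)]²/(p(1−p)), which is consistent with the abstract's claim that the weights are parameter-free; the paper's exact weights may differ in detail:

```python
# Weibull probability plot: ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta),
# so a WLS line through the plotting points recovers shape and scale.
import numpy as np

def weibull_wls(t):
    n = len(t)
    i = np.arange(1, n + 1)
    p = (i - 0.3) / (n + 0.4)                 # median-rank plotting points
    x = np.log(np.sort(t))                    # ln t_(i)
    y = np.log(-np.log(1 - p))                # ln(-ln(1 - p_i))
    w = ((1 - p) * np.log(1 - p))**2 / (p * (1 - p))   # delta-method weights

    # WLS fit of y = slope * x + intercept
    W = np.diag(w)
    A = np.column_stack([x, np.ones(n)])
    slope, intercept = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    beta = slope                              # shape parameter
    eta = np.exp(-intercept / slope)          # scale parameter
    return beta, eta

rng = np.random.default_rng(5)
t = 10.0 * rng.weibull(1.5, size=20)          # true shape 1.5, scale 10
print(weibull_wls(t))
```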
87.
Equivalence tests are used when the objective is to show that two or more groups are nearly equivalent on some outcome, such that any difference is inconsequential. Equivalence tests are available for several research designs; however, paired-samples equivalence tests that are accessible and relevant to the research performed by psychologists have been understudied. This study evaluated parametric and nonparametric two one-sided paired-samples equivalence tests and a standardized paired-samples equivalence test developed by Wellek (2003). The two one-sided procedures had better Type I error control and greater power than Wellek's test, with the nonparametric procedure having increased power under non-normal distributions.
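A sketch of the parametric two one-sided test (TOST) for paired samples: both one-sided t tests on the difference scores must reject for the pair to be declared equivalent within (−δ, +δ). The equivalence bound δ is a substantive judgment by the researcher, not something the procedure supplies:

```python
# Paired-samples TOST equivalence test.
import numpy as np
from scipy.stats import t as t_dist

def paired_tost(x, y, delta, alpha=0.05):
    d = np.asarray(x) - np.asarray(y)          # paired differences
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    t_lower = (d.mean() + delta) / se          # H0: mean diff <= -delta
    t_upper = (d.mean() - delta) / se          # H0: mean diff >= +delta
    p_lower = 1 - t_dist.cdf(t_lower, df=n - 1)
    p_upper = t_dist.cdf(t_upper, df=n - 1)
    p = max(p_lower, p_upper)                  # TOST p-value
    return p, p < alpha                        # True => conclude equivalence

rng = np.random.default_rng(6)
pre = rng.normal(100, 15, size=40)
post = pre + rng.normal(0.5, 3, size=40)       # nearly identical conditions
print(paired_tost(pre, post, delta=5.0))
```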
88.
A method for combining forecasts may or may not account for dependence and differing precision among forecasts. In this article we test a variety of such methods in the context of combining forecasts of GNP from four major econometric models. The methods include one in which forecasting errors are jointly normally distributed, several variants of this model, some simpler procedures, and a Bayesian approach with a prior distribution based on exchangeability of forecasters. The results indicate that a simple average, the normal model with an independence assumption, and the Bayesian model perform better than the other approaches studied here.
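A sketch of two of the schemes mentioned: the simple average, and the minimum-variance weights w = S⁻¹1 / (1ᵀS⁻¹1) implied by a jointly normal model of past forecast errors (using a diagonal S recovers the independence variant). The estimation details here are generic, not the article's exact specifications:

```python
# Combining k forecasts using weights estimated from past forecast errors.
import numpy as np

def combination_weights(errors, independent=False):
    """errors: (T periods) x (k forecasters) matrix of past forecast errors."""
    S = np.cov(errors, rowvar=False)          # error covariance estimate
    if independent:
        S = np.diag(np.diag(S))               # ignore error correlations
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)
    return w / w.sum()                        # weights sum to one

rng = np.random.default_rng(7)
errors = rng.normal(size=(60, 4)) @ np.diag([1.0, 1.5, 2.0, 2.5])
forecasts = np.array([2.1, 2.4, 1.8, 2.6])    # current-period GNP forecasts

print("simple average:", forecasts.mean())
print("normal model:", forecasts @ combination_weights(errors))
print("independence variant:", forecasts @ combination_weights(errors, True))
```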
89.
In this article, maximum likelihood techniques for estimating consumer demand functions when budget constraints are piecewise linear are exposited and surveyed. Consumer demand functions are formally derived under such constraints, and it is shown that the functions are themselves nonlinear as a result. The econometric problems in estimating such functions are set out, and the importance of the stochastic specification is stressed, in particular the specification of both unobserved heterogeneity of preferences and measurement error. Econometric issues in estimation and testing are discussed, and the results of the studies conducted to date are surveyed.
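A toy illustration of why demand is nonlinear under a piecewise-linear budget set: a worker facing two tax brackets maximizes utility segment by segment and at the kink, and the best candidate wins. The utility form, wage, and bracket numbers are all invented for illustration; the surveyed ML estimators additionally model preference heterogeneity and measurement error:

```python
# Labor supply with a two-bracket (piecewise-linear) tax schedule.
import numpy as np
from scipy.optimize import minimize_scalar

wage, bracket, t1, t2 = 20.0, 400.0, 0.2, 0.5   # hypothetical tax schedule

def consumption(hours):
    gross = wage * hours
    if gross <= bracket:
        return gross * (1 - t1)
    return bracket * (1 - t1) + (gross - bracket) * (1 - t2)

def utility(hours, alpha=0.6, total_time=60.0):
    c, leisure = consumption(hours), total_time - hours
    return alpha * np.log(c) + (1 - alpha) * np.log(leisure)

def optimal_hours():
    # Search each linear segment separately, then compare with the kink
    # point itself (optimal choices often bunch at the kink).
    kink = bracket / wage
    candidates = [kink]
    for lo, hi in [(1e-6, kink), (kink, 59.0)]:
        res = minimize_scalar(lambda h: -utility(h), bounds=(lo, hi),
                              method="bounded")
        candidates.append(res.x)
    return max(candidates, key=utility)

print("optimal hours:", round(optimal_hours(), 2))
```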
90.
In this article, we analyze the indexation of federal taxes, using an approach based on cost-of-living measurement. We use our Tax and Price Index methodology and database to study an indexed system historically, comparing indexation with the Consumer Price Index (CPI) to actual tax policy, to a tax system with constant parameters, and to an "exact" indexing scheme. We reach three main conclusions: (a) the sequence of tax reductions implemented between 1967 and 1985 has fallen short of mimicking indexation, (b) wealthier households would have benefited relatively more than lower-income households from indexation, and (c) CPI indexation would not have completely eliminated bracket creep.