812.
Traditional multiple hypothesis testing procedures fix an error rate and determine the corresponding rejection region. In 2002 Storey proposed a fixed rejection region procedure and showed numerically that it can gain more power than the fixed error rate procedure of Benjamini and Hochberg while controlling the same false discovery rate (FDR). In this paper it is proved that when the number of alternatives is small compared to the total number of hypotheses, Storey's method can be less powerful than that of Benjamini and Hochberg. Moreover, the two procedures are compared by setting them to produce the same FDR. The difference in power between Storey's procedure and that of Benjamini and Hochberg is near zero when the distance between the null and alternative distributions is large, but Benjamini and Hochberg's procedure becomes more powerful as the distance decreases. It is shown that modifying the Benjamini and Hochberg procedure to incorporate an estimate of the proportion of true null hypotheses as proposed by Black gives a procedure with superior power.
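The fixed error rate procedure compared here is the standard Benjamini–Hochberg step-up rule; a minimal NumPy sketch is given below (function name and the `q` argument are our own notation, not from the paper):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: with ordered p-values
    p_(1) <= ... <= p_(m), reject the k smallest, where
    k = max{i : p_(i) <= i*q/m}.  Controls FDR at level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    sorted_p = p[order]
    thresh = q * np.arange(1, m + 1) / m     # step-up boundaries i*q/m
    below = sorted_p <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])     # largest index meeting the bound
        reject[order[: k + 1]] = True        # reject all p-values up to p_(k)
    return reject
```

Storey's alternative instead fixes the rejection region (e.g. reject all p-values below a threshold t) and estimates the resulting FDR using an estimate of the proportion of true nulls.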
813.
Using relative utility curves to evaluate risk prediction
Summary.  Because many medical decisions are based on risk prediction models that are constructed from medical history and results of tests, the evaluation of these prediction models is important. This paper makes five contributions to this evaluation: (a) the relative utility curve, which gauges the potential for better prediction in terms of utilities, without the need for a reference level for one utility, while providing a sensitivity analysis for misspecification of utilities; (b) the relevant region, which is the set of values of prediction performance that are consistent with the recommended treatment status in the absence of prediction; (c) the test threshold, which is the minimum number of tests that would be traded for a true positive prediction in order for the expected utility to be non-negative; (d) the evaluation of two-stage predictions that reduce test costs; and (e) connections between various measures of performance of prediction. An application involving the risk of cardiovascular disease is discussed.
814.
Summary.  Meta-analysis in the presence of unexplained heterogeneity is frequently undertaken by using a random-effects model, in which the effects underlying different studies are assumed to be drawn from a normal distribution. Here we discuss the justification and interpretation of such models, by addressing in turn the aims of estimation, prediction and hypothesis testing. A particular issue that we consider is the distinction between inference on the mean of the random-effects distribution and inference on the whole distribution. We suggest that random-effects meta-analyses as currently conducted often fail to provide the key results, and we investigate the extent to which distribution-free, classical and Bayesian approaches can provide satisfactory methods. We conclude that the Bayesian approach has the advantage of naturally allowing for full uncertainty, especially for prediction. However, it is not without problems, including computational intensity and sensitivity to a priori judgements. We propose a simple prediction interval for classical meta-analysis and offer extensions to standard practice of Bayesian meta-analysis, making use of an example of studies of 'set shifting' ability in people with eating disorders.
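A minimal sketch of a simple classical prediction interval of the kind the abstract proposes: estimate the between-study variance τ², then widen the interval for the mean effect by τ² and use a t quantile with k−2 degrees of freedom. The DerSimonian–Laird τ² estimator is our choice for illustration (the abstract does not fix the heterogeneity estimator), and the function name is hypothetical:

```python
import numpy as np
from scipy import stats

def random_effects_prediction_interval(y, v, level=0.95):
    """Random-effects meta-analysis: DerSimonian-Laird tau^2, then a
    prediction interval mu_hat +/- t_{k-2} * sqrt(tau2 + se(mu_hat)^2)
    for the effect in a new study."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    w = 1.0 / v                                  # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)             # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)           # DL estimate, truncated at 0
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    t = stats.t.ppf(0.5 + level / 2, df=k - 2)
    half = t * np.sqrt(tau2 + se_mu ** 2)        # extra width from tau^2
    return mu, (mu - half, mu + half)
```

Because the half-width includes τ² as well as the standard error of the mean, the prediction interval is never narrower than the usual confidence interval for the mean effect.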
815.
The reversible jump Markov chain Monte Carlo (MCMC) sampler (Green in Biometrika 82:711–732, 1995) has become an invaluable device for Bayesian practitioners. However, the primary difficulty with the sampler lies with the efficient construction of transitions between competing models of possibly differing dimensionality and interpretation. We propose the use of a marginal density estimator to construct between-model proposal distributions. This provides both a step towards black-box simulation for reversible jump samplers, and a tool to examine the utility of common between-model mapping strategies. We compare the performance of our approach to well established alternatives in both time series and mixture model examples.
816.
The estimation of data transformation is very useful to yield response variables satisfying closely a normal linear model. Generalized linear models enable the fitting of models to a wide range of data types. These models are based on exponential dispersion models. We propose a new class of transformed generalized linear models to extend the Box and Cox models and the generalized linear models. We use the generalized linear model framework to fit these models and discuss maximum likelihood estimation and inference. We give a simple formula to estimate the parameter that indexes the transformation of the response variable for a subclass of models. We also give a simple formula to estimate the rth moment of the original dependent variable. We explore the possibility of applying these models to time series data to extend the generalized autoregressive moving average models discussed by Benjamin et al. [Generalized autoregressive moving average models. J. Amer. Statist. Assoc. 98, 214–223]. The usefulness of these models is illustrated in a simulation study and in applications to three real data sets.
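For the classical Box and Cox case that this class of models extends, the transformation parameter λ is routinely estimated by profile likelihood; a small sketch using SciPy (this illustrates only the Box–Cox building block, not the authors' transformed GLM machinery):

```python
import numpy as np
from scipy import stats

# Positive, right-skewed data; for log-normal responses the Box-Cox
# profile-likelihood estimate of lambda should be near 0 (the log case).
rng = np.random.default_rng(0)
y = rng.lognormal(mean=1.0, sigma=0.5, size=500)

# lmbda=None asks boxcox to maximise the profile log-likelihood over lambda
z, lam = stats.boxcox(y)
```

After the transformation the sample skewness of `z` is much closer to zero than that of `y`, which is exactly the "closely normal" behaviour the abstract's opening sentence refers to.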
817.
Studying the right tail of a distribution, one can classify the distributions into three classes based on the extreme value index γ. The class γ>0 corresponds to Pareto-type or heavy tailed distributions, while γ<0 indicates that the underlying distribution has a finite endpoint. The Weibull-type distributions form an important subgroup within the Gumbel class with γ=0. The tail behaviour can then be specified using the Weibull tail index. Classical estimators of this index show severe bias. In this paper we present a new estimation approach based on the mean excess function, which exhibits improved bias and mean squared error. The asserted properties are supported by simulation experiments and asymptotic results. Illustrations with real life data sets are provided.
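The building block of the approach is the empirical mean excess function e(u) = E[X − u | X > u]; a minimal sketch (the estimator below is just the empirical mean excess, not the authors' Weibull tail index estimator built from it):

```python
import numpy as np

def mean_excess(x, u):
    """Empirical mean excess function: average exceedance over the
    threshold u among observations that exceed u."""
    x = np.asarray(x, dtype=float)
    exc = x[x > u] - u
    return exc.mean() if exc.size else np.nan
```

As a sanity check, the exponential distribution (a Weibull-type law with tail index 1) has constant mean excess equal to its mean, so the empirical curve should be roughly flat at 1 for unit-rate exponential data.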
818.
Summary.  A general method for exploring multivariate data by comparing different estimates of multivariate scatter is presented. The method is based on the eigenvalue–eigenvector decomposition of one scatter matrix relative to another. In particular, it is shown that the eigenvectors can be used to generate an affine invariant co-ordinate system for the multivariate data. Consequently, we view this method as a method for invariant co-ordinate selection. By plotting the data with respect to this new invariant co-ordinate system, various data structures can be revealed. For example, under certain independent components models, it is shown that the invariant co-ordinates correspond to the independent components. Another example pertains to mixtures of elliptical distributions. In this case, it is shown that a subset of the invariant co-ordinates corresponds to Fisher's linear discriminant subspace, even though the class identifications of the data points are unknown. Some illustrative examples are given.
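A sketch of the core computation: eigen-decompose one scatter matrix relative to another by first whitening with the covariance and then decomposing a second scatter. The kurtosis-weighted fourth-moment scatter used as the second matrix is our choice for illustration; the paper allows any pair of scatter estimates:

```python
import numpy as np

def ics_coordinates(X):
    """Invariant co-ordinate selection sketch: decompose a
    fourth-moment scatter S2 relative to the covariance S1."""
    Xc = X - X.mean(axis=0)
    S1 = np.cov(Xc, rowvar=False)
    L = np.linalg.cholesky(np.linalg.inv(S1))    # S1^{-1} = L L^T
    Z = Xc @ L                                   # whitened data, cov(Z) = I
    r2 = np.sum(Z ** 2, axis=1)                  # squared Mahalanobis radii
    S2 = (Z * r2[:, None]).T @ Z / len(Z)        # kurtosis-weighted scatter
    vals, vecs = np.linalg.eigh(S2)
    Y = Z @ vecs                                 # invariant co-ordinates
    return Y, vals
```

By construction the returned co-ordinates are uncorrelated with unit variances under S1, while the eigenvalues of S2 order the directions by fourth-moment structure, which is what reveals mixtures and independent components in the examples the abstract mentions.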
819.
We propose a modification to the variant of link-tracing sampling suggested by Félix-Medina and Thompson [M.H. Félix-Medina, S.K. Thompson, Combining cluster sampling and link-tracing sampling to estimate the size of hidden populations, Journal of Official Statistics 20 (2004) 19–38] that allows the researcher to have certain control of the final sample size, precision of the estimates or other characteristics of the sample that the researcher is interested in controlling. We achieve this goal by selecting an initial sequential sample of sites instead of an initial simple random sample of sites as those authors suggested. We estimate the population size by means of the maximum likelihood estimators suggested by the above-mentioned authors or by the Bayesian estimators proposed by Félix-Medina and Monjardin [M.H. Félix-Medina, P.E. Monjardin, Combining link-tracing sampling and cluster sampling to estimate the size of hidden populations: A Bayesian-assisted approach, Survey Methodology 32 (2006) 187–195]. Variances are estimated by means of jackknife and bootstrap estimators as well as by the delta estimators proposed in the two above-mentioned papers. Interval estimates of the population size are obtained by means of Wald and bootstrap confidence intervals. The results of an exploratory simulation study indicate good performance of the proposed sampling strategy.
820.
In a sample of censored survival times, the presence of an immune proportion of individuals who are not subject to death, failure or relapse, may be indicated by a relatively high number of individuals with large censored survival times. In this paper the generalized log-gamma model is modified for the possibility that long-term survivors may be present in the data. The model attempts to separately estimate the effects of covariates on the surviving fraction, that is, the proportion of the population for which the event never occurs. The logistic function is used for the regression model of the surviving fraction. Inference for the model parameters is considered via maximum likelihood. Some influence methods, such as the local influence and total local influence of an individual, are derived, analyzed and discussed. Finally, a data set from the medical area is analyzed under the generalized log-gamma mixture model. A residual analysis is performed in order to select an appropriate model. The authors would like to thank the editor and referees for their helpful comments. This work was supported by CNPq, Brazil.
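The mixture cure structure described above can be sketched in a few lines: the population survivor function is S(t) = p + (1 − p)·S0(t), with a logistic model for the cured (surviving) fraction p. For brevity the latency distribution S0 below is Weibull rather than the generalized log-gamma used in the paper, and all names are ours:

```python
import numpy as np

def mixture_cure_survival(t, x_beta, shape, scale):
    """Population survivor function of a mixture cure model:
    S(t) = p + (1 - p) * S0(t), where p = logistic(x'beta) is the
    cured fraction and S0 is a Weibull survivor function (stand-in
    for the generalized log-gamma latency of the paper)."""
    p = 1.0 / (1.0 + np.exp(-x_beta))        # logistic cure fraction
    s0 = np.exp(-(t / scale) ** shape)       # Weibull survivor function
    return p + (1.0 - p) * s0
```

The key qualitative feature is the plateau: S(0) = 1 as usual, but S(t) tends to p rather than 0 as t grows, which is what a relatively high number of large censored survival times suggests in the data.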