  Fee-based full text   3,807 articles
  Free   85 articles
Management   666 articles
Ethnology   28 articles
Talent studies   4 articles
Demography   297 articles
Collected works   24 articles
Theory and methodology   395 articles
General   23 articles
Sociology   1,893 articles
Statistics   562 articles
  2023   24 articles
  2022   16 articles
  2021   16 articles
  2020   64 articles
  2019   103 articles
  2018   91 articles
  2017   129 articles
  2016   106 articles
  2015   87 articles
  2014   102 articles
  2013   606 articles
  2012   132 articles
  2011   108 articles
  2010   107 articles
  2009   108 articles
  2008   136 articles
  2007   110 articles
  2006   126 articles
  2005   150 articles
  2004   117 articles
  2003   109 articles
  2002   100 articles
  2001   89 articles
  2000   64 articles
  1999   81 articles
  1998   66 articles
  1997   44 articles
  1996   47 articles
  1995   48 articles
  1994   69 articles
  1993   55 articles
  1992   48 articles
  1991   51 articles
  1990   45 articles
  1989   35 articles
  1988   40 articles
  1987   42 articles
  1986   29 articles
  1985   55 articles
  1984   49 articles
  1983   37 articles
  1982   35 articles
  1981   23 articles
  1980   18 articles
  1979   13 articles
  1978   19 articles
  1977   20 articles
  1976   20 articles
  1975   21 articles
  1974   20 articles
Sort order: 3,892 results found (search time: 15 ms)
281.
Group testing has its origin in the identification of syphilis in the U.S. Army during World War II. Much of the theoretical framework of group testing was developed starting in the late 1950s, with continued work into the 1990s. Recently, with the advent of new laboratory and genetic technologies, there has been increasing interest in group testing designs for cost-saving purposes. In this article, we compare different nested designs, including Dorfman, Sterrett, and an optimal nested procedure obtained through dynamic programming. To elucidate these comparisons, we develop closed-form expressions for the optimal Sterrett procedure and provide a concise review of the prior literature on other commonly used procedures. We consider designs where the prevalence of disease is known, and we investigate the robustness of these procedures when the prevalence is misspecified. The article provides a technical presentation that will be of interest to researchers and is also valuable from a pedagogical perspective. Supplementary material for this article is available online.
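To make the cost-saving trade-off concrete, here is a minimal sketch of the classical Dorfman two-stage calculation (not the article's optimal nested procedure): assuming independent infections with known prevalence p and pools of size k, the expected number of tests per individual is 1/k + 1 − (1 − p)^k, which can be minimized over k. The function names below are illustrative.

```python
# Sketch: expected tests per person under classical Dorfman two-stage
# group testing, assuming independent infections with known prevalence p.
# This illustrates the cost-saving idea only; it is not the optimal
# nested procedure derived in the article.

def dorfman_tests_per_person(p: float, k: int) -> float:
    """Expected tests per individual for pool size k and prevalence p.

    One pooled test per group of k, plus k individual retests whenever
    the pool is positive (probability 1 - (1 - p)**k).
    """
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def best_pool_size(p: float, k_max: int = 100) -> tuple[int, float]:
    """Search pool sizes 2..k_max for the minimum expected cost."""
    costs = {k: dorfman_tests_per_person(p, k) for k in range(2, k_max + 1)}
    k_opt = min(costs, key=costs.get)
    return k_opt, costs[k_opt]

if __name__ == "__main__":
    for p in (0.01, 0.05, 0.10):
        k, c = best_pool_size(p)
        print(f"p={p:.2f}: optimal pool size {k}, {c:.3f} tests/person")
```

For p = 0.01 this gives a pool size of about 11 and roughly 0.2 tests per person, the kind of savings that motivate the designs compared above.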
282.
283.
Consider a two-by-two factorial experiment with more than one replicate. Suppose that we have uncertain prior information that the two-factor interaction is zero. We describe new simultaneous frequentist confidence intervals for the four population cell means, with simultaneous confidence coefficient 1 − α, that utilize this prior information in the following sense. These simultaneous confidence intervals define a cube with expected volume that (a) is relatively small when the two-factor interaction is zero and (b) has maximum value that is not too large. Also, these intervals coincide with the standard simultaneous confidence intervals obtained by Tukey's method, with simultaneous confidence coefficient 1 − α, when the data strongly contradict the prior information that the two-factor interaction is zero. We illustrate the application of these new simultaneous confidence intervals to a real data set.
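For reference, a sketch of the standard Tukey intervals that the new intervals coincide with when the data strongly contradict the prior information, applied to all pairwise differences of the four cell means. The data, n, and α below are simulated placeholders; the critical value comes from SciPy's studentized_range distribution.

```python
# Sketch: standard Tukey simultaneous confidence intervals for the
# pairwise differences of the four cell means in a replicated 2x2
# factorial. The data are simulated placeholders; the article's new
# intervals reduce to these when the interaction appears non-zero.
import itertools
import numpy as np
from scipy.stats import studentized_range

rng = np.random.default_rng(0)
n, alpha = 5, 0.05                          # replicates per cell, 1 - confidence
cells = {(i, j): rng.normal(10 + i + 2 * j, 1.0, size=n)
         for i in (0, 1) for j in (0, 1)}   # four treatment cells

means = {c: y.mean() for c, y in cells.items()}
df = 4 * (n - 1)                            # error degrees of freedom
mse = sum(((y - y.mean()) ** 2).sum() for y in cells.values()) / df

# Studentized-range critical value for k = 4 means.
q = studentized_range.ppf(1 - alpha, 4, df)
half_width = q * np.sqrt(mse / n)           # Tukey HSD half-width, equal n

for a, b in itertools.combinations(sorted(cells), 2):
    d = means[a] - means[b]
    print(f"mu{a} - mu{b}: {d - half_width:+.2f} .. {d + half_width:+.2f}")
```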
284.
285.
Implementation of a full Bayesian non-parametric analysis involving neutral to the right processes (apart from the special case of the Dirichlet process) has been difficult for two reasons: first, the posterior distributions are complex, so only Bayes estimates (posterior expectations) have previously been presented; second, it is difficult to interpret the parameters of a neutral to the right process. In this paper we extend Ferguson & Phadia (1979) by presenting a general method for specifying the prior mean and variance of a neutral to the right process, which provides the interpretation of the parameters. Additionally, we provide the basis for a full Bayesian analysis, via simulation from the posterior process, using a hybrid of new algorithms applicable to a large class of neutral to the right processes (Ferguson & Phadia provide only posterior means). The ideas are exemplified through illustrative analyses.
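The paper's general algorithms are not reproduced here, but the Dirichlet process special case mentioned above can be simulated in a few lines via Sethuraman's stick-breaking construction. In this sketch the Exp(1) base measure, concentration c, and truncation at K atoms are all illustrative assumptions.

```python
# Sketch: draws from a Dirichlet process prior (the tractable special
# case mentioned above) via Sethuraman's stick-breaking construction.
# Base measure G0 = Exp(1), concentration c; truncated at K atoms.
# The paper's general neutral-to-the-right algorithms are more involved.
import numpy as np

def dp_survival_draw(c: float, K: int = 500, rng=None):
    """Return atoms and weights of one truncated DP draw."""
    rng = rng or np.random.default_rng()
    v = rng.beta(1.0, c, size=K)                              # stick fractions
    w = v * np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))   # atom weights
    atoms = rng.exponential(1.0, size=K)                      # draws from G0
    return atoms, w

def survival_at(t, atoms, w):
    """Random survival function S(t) = P(X > t) for this DP draw."""
    return float(w[atoms > t].sum())

rng = np.random.default_rng(1)
atoms, w = dp_survival_draw(c=5.0, rng=rng)
for t in (0.5, 1.0, 2.0):
    print(f"S({t}) = {survival_at(t, atoms, w):.3f}")
```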
286.
The non-parametric maximum likelihood estimators (MLEs) are derived for survival functions associated with individual risks or system components in a reliability framework. Lifetimes are observed for systems that contain one or more of these components. Analogous to a competing risks model, the system is assumed to fail upon the first instance of any component failure; i.e. the system is configured in series. For any given risk or component type, the asymptotic distribution is shown to depend explicitly on the unknown survival function of the other risks, as well as on the censoring distribution. Survival functions with increasing failure rate are investigated as a special case. The order-restricted MLE is shown to be consistent under mild assumptions on the underlying component lifetime distributions.
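Under the additional assumption of independent risks, the nonparametric MLE of a given component's survival function reduces to the Kaplan–Meier estimator that treats failures of the other components as right-censoring. A hand-rolled sketch with simulated series-system data (the lifetime distributions are illustrative):

```python
# Sketch: nonparametric MLE of one component's survival function in a
# series system, assuming independent risks with observed failure cause.
# Failures due to other components are treated as right-censoring, so
# the estimator is the familiar Kaplan-Meier product-limit estimator.
import numpy as np

def kaplan_meier(times, events):
    """Return (distinct failure times, S_hat at those times)."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    n_at_risk = len(times)
    s, out_t, out_s = 1.0, [], []
    for t in np.unique(times):
        d = int(((times == t) & (events == 1)).sum())   # failures at t
        if d > 0:
            s *= 1.0 - d / n_at_risk
            out_t.append(float(t)); out_s.append(s)
        n_at_risk -= int((times == t).sum())            # leave the risk set
    return out_t, out_s

rng = np.random.default_rng(2)
comp_a = rng.exponential(2.0, 200)       # latent lifetime, component A
comp_b = rng.weibull(1.5, 200) * 3.0     # latent lifetime, component B
sys_t = np.minimum(comp_a, comp_b)       # series system fails at the minimum
cause_a = comp_a <= comp_b               # event indicator for component A

t_grid, s_hat = kaplan_meier(sys_t, cause_a.astype(int))
print("S_A(1.0) ~", next(s for t, s in zip(t_grid, s_hat) if t >= 1.0))
```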
287.
The author is concerned with log-linear estimators of the size N of a population in a capture-recapture experiment featuring heterogeneity in the individual capture probabilities and a time effect. He also considers models where the first capture influences the probability of subsequent captures. He derives several results from a new inequality associated with a dispersive ordering for discrete random variables. He shows that in a log-linear model with inter-individual heterogeneity, the estimator of N is an increasing function of the heterogeneity parameter. He also shows that the inclusion of a time effect in the capture probabilities decreases the estimator of N in models without heterogeneity. He further argues that a model featuring heterogeneity can accommodate a time effect through a small change in the heterogeneity parameter. He demonstrates these results using an inequality for the estimators of the heterogeneity parameters and illustrates them in a Monte Carlo experiment.
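A sketch of the basic log-linear machinery these results concern: tabulate the 2^t − 1 observable capture histories, fit a Poisson regression, and estimate N by adding the fitted count of the unobservable all-zero history to the number of animals seen. The sketch fits the time-effect model (often called M_t) to simulated data; heterogeneity and behavioural-response terms would enter as extra design columns.

```python
# Sketch: log-linear (time-effect) abundance estimation from capture
# histories. Fit a Poisson GLM to the 2^t - 1 observable histories and
# estimate N as n_observed + exp(intercept), the fitted count of the
# unobservable all-zero history. Data below are simulated placeholders.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
N_true, t = 500, 4
p_occ = np.array([0.3, 0.2, 0.25, 0.35])           # time-varying capture prob.
hist = (rng.random((N_true, t)) < p_occ).astype(int)
hist = hist[hist.sum(axis=1) > 0]                  # never-captured are unseen
n_obs = len(hist)

# Tabulate counts for each observable capture history.
patterns = [h for h in itertools.product((0, 1), repeat=t) if any(h)]
counts = np.array([(hist == h).all(axis=1).sum() for h in patterns])

X = sm.add_constant(np.array(patterns, dtype=float))  # occasion main effects
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
N_hat = n_obs + np.exp(fit.params[0])                 # add fitted empty cell
print(f"observed {n_obs}, N_hat = {N_hat:.1f} (true {N_true})")
```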
288.
The authors propose a semiparametric approach to modeling and forecasting age-specific mortality in the United States. Their method is based on an extension of a class of semiparametric models to time series. It combines information from several time series and estimates the predictive distribution conditional on past data. The conditional expectation, which is the most commonly used predictor in practice, is the first moment of this distribution. The authors compare their method to that of Lee and Carter.
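For context, a sketch of the Lee–Carter benchmark the authors compare against: log m_{x,t} = a_x + b_x k_t, with a_x the age-specific means of the log-rates and (b_x, k_t) the leading SVD term of the centered matrix, followed by the usual random-walk-with-drift forecast of k_t. The mortality surface below is simulated, not the U.S. data.

```python
# Sketch: the classical Lee-Carter benchmark, log m[x,t] = a[x] + b[x]*k[t],
# fitted by SVD of the age-centered log mortality matrix. Rates here are
# simulated placeholders for real age-specific mortality data.
import numpy as np

rng = np.random.default_rng(4)
ages, years = 20, 40
true_k = -0.03 * np.arange(years)                        # drifting mortality index
log_m = (-6 + 0.08 * np.arange(ages))[:, None] \
        + np.linspace(0.5, 1.5, ages)[:, None] * true_k \
        + rng.normal(0, 0.02, (ages, years))

a = log_m.mean(axis=1)                                   # age pattern a_x
U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
b, k = U[:, 0], s[0] * Vt[0]                             # leading SVD term
b, k = b / b.sum(), k * b.sum()                          # usual normalisation

# Forecast k_t with a random walk with drift, the standard Lee-Carter step.
drift = (k[-1] - k[0]) / (len(k) - 1)
k_fcst = k[-1] + drift * np.arange(1, 11)
print("drift per year:", round(drift, 4))
```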
289.
Quantifying uncertainty in the biospheric carbon flux for England and Wales   (cited 1 time: 0 self-citations, 1 citation by others)
Summary.  A crucial issue in the current global warming debate is the effect of vegetation and soils on carbon dioxide (CO2) concentrations in the atmosphere. Vegetation can extract CO2 through photosynthesis, but respiration, decay of soil organic matter and disturbance effects such as fire return it to the atmosphere. The balance of these processes is the net carbon flux. To estimate the biospheric carbon flux for England and Wales, we address the statistical problem of inference for the sum of multiple outputs from a complex deterministic computer code whose input parameters are uncertain. The code is a process model which simulates the carbon dynamics of vegetation and soils, including the amount of carbon that is stored as a result of photosynthesis and the amount that is returned to the atmosphere through respiration. The aggregation of outputs corresponding to multiple sites and types of vegetation in a region gives an estimate of the total carbon flux for that region over a period of time. Expert prior opinions are elicited for marginal uncertainty about the relevant input parameters and for correlations of inputs between sites. A Gaussian process model is used to build emulators of the multiple code outputs and Bayesian uncertainty analysis is then used to propagate uncertainty in the input parameters through to uncertainty on the aggregated output. Numerical results are presented for England and Wales in the year 2000. It is estimated that vegetation and soils in England and Wales constituted a net sink of 7.55 Mt C (1 Mt C = 10^12 g of carbon) in 2000, with standard deviation 0.56 Mt C resulting from the sources of uncertainty that are considered.
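A toy sketch of the emulation-and-propagation pattern described above: fit a Gaussian process to a few runs of an expensive simulator, then push a prior over the uncertain input through the emulator by Monte Carlo. The simulator, design, kernel, and input prior below are all placeholders, not the paper's vegetation model or elicited expert priors.

```python
# Toy sketch of the pattern above: emulate an expensive simulator with a
# Gaussian process fitted to a few runs, then propagate input uncertainty
# through the emulator by Monte Carlo. The "simulator", design points and
# input prior are placeholders, not the paper's vegetation model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(x):                        # stand-in for the expensive code
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(5)
X_design = np.linspace(0, 2, 8).reshape(-1, 1)    # small training design
y_design = simulator(X_design).ravel()

gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_design, y_design)

# Prior on the uncertain input (placeholder for an elicited prior).
x_prior = rng.normal(1.0, 0.3, size=5000).reshape(-1, 1)
mean_pred, sd_pred = gp.predict(x_prior, return_std=True)

# Total output uncertainty: input variation plus emulator uncertainty.
output_var = mean_pred.var() + np.mean(sd_pred ** 2)
print(f"output mean {mean_pred.mean():.3f}, sd {np.sqrt(output_var):.3f}")
```

Aggregating such emulator outputs over sites and vegetation types, as the summary describes, then gives the regional flux estimate and its uncertainty.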
290.
We consider analysis of complex stochastic models based upon partial information. MCMC and reversible jump MCMC are often the methods of choice for such problems, but in some situations they can be difficult to implement and can suffer from problems such as poor mixing and the difficulty of diagnosing convergence. Here we review three alternatives to MCMC methods: importance sampling, the forward-backward algorithm, and sequential Monte Carlo (SMC). We discuss how to design good proposal densities for importance sampling, show some of the range of models for which the forward-backward algorithm can be applied, and show how resampling ideas from SMC can be used to improve the efficiency of the other two methods. We demonstrate these methods on a range of examples, including estimating the transition density of a diffusion and of a discrete-state continuous-time Markov chain; inferring structure in population genetics; and segmenting genetic divergence data.
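A minimal sketch of the importance-sampling and resampling ideas the review builds on, for a toy tail-probability target: the effective sample size diagnoses weight degeneracy, and multinomial resampling (as in SMC) converts weighted particles into unweighted ones. The target and proposal are illustrative choices, far simpler than the paper's diffusion and genetics examples.

```python
# Sketch: importance sampling with the SMC-style resampling step that the
# review uses to improve efficiency. Toy target and proposal only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
M = 10_000

# Target: standard normal restricted to x > 2 (a tail event).
# Proposal: shifted exponential on (2, inf), a sensible tail proposal.
x = 2.0 + rng.exponential(1.0, size=M)
log_w = stats.norm.logpdf(x) - stats.expon.logpdf(x - 2.0)
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Effective sample size diagnoses weight degeneracy.
ess = 1.0 / np.sum(w ** 2)
print(f"ESS = {ess:.0f} of {M}")

# Tail probability: mean of the raw weights (computed stably).
p_tail = np.exp(log_w.max()) * np.mean(np.exp(log_w - log_w.max()))
print("P(X > 2) estimate:", p_tail)

# Multinomial resampling (as in SMC): equal-weight particles afterwards.
x_resampled = x[rng.choice(M, size=M, p=w)]
print("E[X | X > 2] ~", x_resampled.mean())
```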