961.
There has been much recent interest in supersaturated designs and their application in factor screening experiments. Supersaturated designs have mainly been constructed by using the E(s²)-optimality criterion originally proposed by Booth and Cox in 1962. However, until now E(s²)-optimal designs have only been established with certainty for n experimental runs when the number of factors m is a multiple of n - 1, and in adjacent cases where m = q(n - 1) + r with |r| ≤ 2 and q an integer. A method of constructing E(s²)-optimal designs is presented which allows a reasonably complete solution to be found for various numbers of runs n, including n = 8, 12, 16, 20, 24, 32, 40, 48 and 64.
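As a concrete illustration of the criterion (not of the construction method the abstract describes), E(s²) is the average squared off-diagonal entry of X'X for a design matrix X whose factor columns are coded ±1; a minimal sketch:

```python
import numpy as np

def e_s2(X):
    """E(s^2) criterion for a supersaturated design: the average of the
    squared off-diagonal entries s_ij of X'X, where X is the n x m matrix
    of +/-1 factor columns. Smaller is better; 0 means fully orthogonal."""
    m = X.shape[1]
    S = X.T @ X
    off_diag = S[np.triu_indices(m, k=1)]
    return float(np.mean(off_diag.astype(float) ** 2))

# A 2^2 full factorial (n = 4 runs, m = 2 factors) is orthogonal, so E(s^2) = 0.
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
print(e_s2(X))  # → 0.0
```

A supersaturated design (m > n - 1) cannot make every s_ij zero, which is why the criterion minimises their average square instead.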
962.
A log-linear modelling approach is proposed for dealing with polytomous, unordered exposure variables in case-control epidemiological studies with matched pairs. Hypotheses concerning epidemiological parameters are shown to be expressible in terms of log-linear models for the expected frequencies of the case-by-control square concordance table representation of the matched data; relevant maximum likelihood estimates and goodness-of-fit statistics are presented. Possible extensions to account for ordered categorical risk factors and multiple controls are illustrated, and comparisons with previous work are discussed. Finally, the possibility of implementing the proposed method with GLIM is illustrated within the context of a data set already analyzed by other authors.
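To illustrate the data representation (not the log-linear fitting itself, which the paper carries out in GLIM), the square concordance table can be built directly from the matched pairs; the exposure codes and counts below are made up:

```python
import numpy as np

def concordance_table(case_exposure, control_exposure, k):
    """Case-by-control concordance table for 1:1 matched pairs with a
    polytomous exposure taking values 0..k-1: entry (i, j) counts the
    pairs whose case has exposure level i and whose control has level j."""
    table = np.zeros((k, k), dtype=int)
    for i, j in zip(case_exposure, control_exposure):
        table[i, j] += 1
    return table

# Four hypothetical matched pairs, three unordered exposure levels.
T = concordance_table([0, 1, 2, 1], [1, 1, 0, 2], k=3)
print(T)
```

Off-diagonal cells are the discordant pairs that carry the information about exposure effects; the diagonal holds the concordant pairs.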
965.
Bayesian palaeoclimate reconstruction
Summary. We consider the problem of reconstructing prehistoric climates by using fossil data that have been extracted from lake sediment cores. Such reconstructions promise to provide one of the few ways to validate modern models of climate change. A hierarchical Bayesian modelling approach is presented and its use, inversely, is demonstrated in a relatively small but statistically challenging exercise: the reconstruction of prehistoric climate at Glendalough in Ireland from fossil pollen. This computationally intensive method extends current approaches by explicitly modelling uncertainty and reconstructing entire climate histories. The statistical issues that are raised relate to the use of compositional data (pollen) with covariates (climate) which are available at many modern sites but are missing for the fossil data. The compositional data arise as mixtures and the missing covariates have a temporal structure. Novel aspects of the analysis include a spatial process model for compositional data, local modelling of lattice data, the use, as a prior, of a random walk with long-tailed increments, a two-stage implementation of the Markov chain Monte Carlo approach and a fast approximate procedure for cross-validation in inverse problems. We present some details, contrasting the method's reconstructions with those generated by an approach in current use in the palaeoclimatology literature, and suggest that it provides a basis for resolving several challenging statistical issues in palaeoclimate research.
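One of the novel ingredients above, a random-walk prior with long-tailed increments, is easy to sketch. Student-t increments are one common choice of long-tailed step; the degrees of freedom and scale below are illustrative, not the paper's values:

```python
import numpy as np

def heavy_tailed_walk(n_steps, df=3.0, scale=1.0, seed=0):
    """Random walk whose increments are Student-t rather than Gaussian:
    the long tails let the prior accommodate occasional abrupt shifts in
    the climate history without inflating the step size everywhere."""
    rng = np.random.default_rng(seed)
    increments = scale * rng.standard_t(df, size=n_steps)
    return np.cumsum(increments)

path = heavy_tailed_walk(1000)
print(path.shape)  # → (1000,)
```

With Gaussian increments a single large jump is heavily penalised; with t increments the same jump costs far less prior probability, which is the point of the long tails.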
966.
We introduce a general Monte Carlo method based on Nested Sampling (NS) for sampling complex probability distributions and estimating the normalising constant. The method uses one or more particles, which explore a mixture of nested probability distributions, each successive distribution occupying approximately e⁻¹ times the enclosed prior mass of the previous distribution. While NS technically requires independent generation of particles, Markov chain Monte Carlo (MCMC) exploration fits naturally into this technique. We illustrate the new method on a test problem and find that it can achieve four times the accuracy of classic MCMC-based Nested Sampling for the same computational effort, equivalent to a factor of 16 speedup. An additional benefit is that more samples, and a more accurate evidence value, can be obtained simply by continuing the run for longer, as in standard MCMC.
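To make the baseline concrete, here is a minimal sketch of classic Nested Sampling with live points, assuming the constrained draws can be done by rejection from the prior (a toy device; the MCMC exploration discussed above replaces it in realistic problems):

```python
import math
import random

def logaddexp(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == -math.inf:
        return b
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def nested_sampling(loglike, prior_sample, n_live=50, n_iter=300, seed=1):
    """Toy Nested Sampling estimate of the log-evidence log Z = log ∫ L(θ)π(θ)dθ.
    At step i the enclosed prior mass shrinks to X_i ≈ exp(-i / n_live).
    New points are drawn by rejection from the prior under the hard
    likelihood constraint, which is only feasible for simple problems."""
    rng = random.Random(seed)
    live = [prior_sample(rng) for _ in range(n_live)]
    logL = [loglike(t) for t in live]
    logZ, logX = -math.inf, 0.0
    for i in range(n_iter):
        worst = min(range(n_live), key=lambda j: logL[j])
        logX_new = -(i + 1) / n_live
        logw = math.log(math.exp(logX) - math.exp(logX_new))  # shell of prior mass
        logZ = logaddexp(logZ, logw + logL[worst])
        threshold = logL[worst]
        while True:  # rejection sampling: draw from the prior until L > threshold
            theta = prior_sample(rng)
            if loglike(theta) > threshold:
                break
        live[worst], logL[worst] = theta, loglike(theta)
        logX = logX_new
    return logZ

# Toy problem: uniform prior on [0, 1], Gaussian likelihood centred at 0.5.
# The evidence is ≈ 1 (nearly all the Gaussian mass lies in [0, 1]), so log Z ≈ 0.
ll = lambda t: -0.5 * ((t - 0.5) / 0.1) ** 2 - 0.5 * math.log(2 * math.pi * 0.01)
print(nested_sampling(ll, lambda rng: rng.random()))
```

For readability this sketch omits the usual termination correction (adding the remaining live points' contribution), which biases log Z down by only about 0.01 at this depth.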
967.
We develop exact inference for the location and scale parameters of the Laplace (double exponential) distribution based on their maximum likelihood estimators from a Type-II censored sample. Based on some pivotal quantities, exact confidence intervals and tests of hypotheses are constructed. Upon conditioning first on the number of observations that are below the population median, exact distributions of the pivotal quantities are expressed as mixtures of linear combinations and of ratios of linear combinations of standard exponential random variables, which facilitates the computation of quantiles of these pivotal quantities. Tables of quantiles are presented for the complete sample case.
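For the complete-sample case the MLEs and the pivotal construction are simple to sketch. The abstract's exact mixture representation of the pivot's distribution is replaced here by Monte Carlo simulation of its quantiles, purely for illustration:

```python
import numpy as np

def laplace_mle(x):
    """Complete-sample MLEs of the Laplace location and scale:
    mu_hat is the sample median and b_hat the mean absolute deviation from it."""
    mu = float(np.median(x))
    b = float(np.mean(np.abs(x - mu)))
    return mu, b

def pivot_ci_location(x, level=0.95, n_sim=5000, seed=0):
    """Equal-tailed confidence interval for the location mu based on the
    pivot (mu_hat - mu) / b_hat, whose distribution does not depend on
    the unknown parameters. The paper derives that distribution exactly;
    here its quantiles are simulated instead."""
    rng = np.random.default_rng(seed)
    n = len(x)
    pivots = np.empty(n_sim)
    for i in range(n_sim):
        m, b = laplace_mle(rng.laplace(0.0, 1.0, n))  # standard Laplace sample
        pivots[i] = m / b
    lo, hi = np.quantile(pivots, [(1 - level) / 2, (1 + level) / 2])
    mu_hat, b_hat = laplace_mle(x)
    return mu_hat - hi * b_hat, mu_hat - lo * b_hat

data = np.random.default_rng(1).laplace(5.0, 2.0, 40)
print(pivot_ci_location(data))
```

Because the pivot's distribution is parameter-free, the same simulated quantiles serve for any data set of the same size, which is exactly what makes tabulation worthwhile.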
968.
We consider the development of Bayesian nonparametric methods for product partition models such as hidden Markov models and change-point models. Our approach uses a mixture of Dirichlet processes (MDP) model for the unknown sampling distribution (likelihood) of the observations arising in each state, and a computationally efficient data augmentation scheme to aid inference. The method uses novel MCMC methodology which combines recent retrospective sampling methods with the use of slice sampler variables. The methodology is computationally efficient, both in terms of MCMC mixing properties and in robustness to the length of the time series being investigated. Moreover, the method is easy to implement, requiring little or no user interaction. We apply our methodology to the analysis of genomic copy number variation.
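The Dirichlet process at the heart of the MDP prior is often handled through its stick-breaking representation, which is what retrospective and slice-sampling schemes manipulate; a truncated draw of the weights (the concentration parameter and truncation level below are illustrative):

```python
import numpy as np

def stick_breaking_weights(alpha, n_atoms, rng):
    """Truncated stick-breaking weights of a Dirichlet process DP(alpha, G0):
    w_k = v_k * prod_{j<k} (1 - v_j), with v_k ~ Beta(1, alpha). Each v_k
    breaks off a fraction of the stick that remains after earlier breaks."""
    v = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

rng = np.random.default_rng(0)
w = stick_breaking_weights(alpha=2.0, n_atoms=50, rng=rng)
print(w.sum())  # close to, but strictly below, 1: the rest of the stick is truncated
```

Slice and retrospective samplers exploit the fact that only finitely many of these weights are ever needed for any given data configuration, avoiding a fixed truncation.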
969.
Time series of daily mean temperature obtained from the European Climate Assessment data set are analyzed with respect to their extremal properties. A time-series clustering approach which combines Bayesian methodology, extreme value theory and classification techniques is adopted for the analysis of the regional variability of temperature extremes. The daily mean temperature records are clustered on the basis of their corresponding predictive distributions for 25-, 50- and 100-year return values. The results of the cluster analysis show a clear distinction between the highest-altitude stations, for which the return values are lowest, and the remaining stations. Furthermore, a clear distinction is also found between the northernmost stations in Scandinavia and the stations in central and southern Europe. This spatial structure of the 25-, 50- and 100-year return value distributions seems to be consistent with projected changes in the variability of temperature extremes over Europe, pointing to different behaviour in central Europe than in northern Europe and the Mediterranean area, possibly related to the effect of soil moisture and land-atmosphere coupling.
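For orientation, a GEV distribution fitted to annual maxima is one standard route to such return values (the abstract's method works with Bayesian predictive distributions rather than plug-in estimates); the return-level formula, with purely illustrative parameters:

```python
import math

def gev_return_level(mu, sigma, xi, r):
    """r-year return level of a GEV(mu, sigma, xi) distribution fitted to
    annual maxima: the level exceeded on average once every r years,
    i.e. the (1 - 1/r) quantile of the annual-maximum distribution."""
    y = -math.log(1.0 - 1.0 / r)
    if abs(xi) < 1e-12:  # Gumbel (xi -> 0) limit
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# Illustrative parameters only: return levels grow with the return period.
for r in (25, 50, 100):
    print(r, round(gev_return_level(30.0, 2.0, -0.1, r), 2))
```

A negative shape parameter xi, common for temperature maxima, bounds the return levels above, while the Bayesian predictive approach additionally propagates parameter uncertainty into them.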
970.
Abstract. In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator involves sequential fitting by univariate local polynomial quantile regressions for each additive component, with the other additive components replaced by the corresponding estimates from the first estimator. The purpose of the extra local averaging is to reduce the variance of the first estimator. We show that the second estimator achieves oracle efficiency in the sense that each estimated additive component has the same variance as in the case when all other additive components were known. Asymptotic properties are derived for both estimators under dependent processes that are strictly stationary and absolutely regular. We also provide a demonstrative empirical application of additive quantile models to ambulance travel times.
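The building block of both estimators is quantile fitting under the check loss. A crude local constant version by grid search, a stand-in for the local polynomial fits described above, with made-up data:

```python
import numpy as np

def pinball(u, tau):
    """Check (pinball) loss rho_tau(u); its expectation is minimised by
    the tau-th conditional quantile."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def local_constant_quantile(x, y, x0, tau, bandwidth):
    """Kernel-weighted quantile fit at x0 by grid search over candidate
    values: a crude local constant stand-in for the univariate local
    polynomial quantile regressions fitted component by component."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)  # Gaussian kernel weights
    grid = np.linspace(y.min(), y.max(), 400)
    losses = [np.sum(w * pinball(y - g, tau)) for g in grid]
    return float(grid[int(np.argmin(losses))])

# With equal weights, the tau = 0.5 fit is simply the sample median.
y = np.arange(11.0)
x = np.zeros(11)
print(local_constant_quantile(x, y, x0=0.0, tau=0.5, bandwidth=1.0))
```

The two-stage idea is then to refit each component with such a univariate local regression while plugging in first-stage estimates of the other components, which is what yields the oracle variance.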
Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号