31.
The Dirichlet process prior allows flexible nonparametric mixture modeling. The number of mixture components is not specified in advance and can grow as new data arrive. However, analyses based on the Dirichlet process prior are sensitive to the choice of the parameters, including an infinite-dimensional distributional parameter G0. Most previous applications have either fixed G0 as a member of a parametric family or treated G0 in a Bayesian fashion, using parametric prior specifications. In contrast, we have developed an adaptive nonparametric method for constructing smooth estimates of G0. We combine this method with a technique for estimating α, the other Dirichlet process parameter, that is inspired by an existing characterization of its maximum-likelihood estimator. Together, these estimation procedures yield a flexible empirical Bayes treatment of Dirichlet process mixtures. Such a treatment is useful in situations where smooth point estimates of G0 are of intrinsic interest, or where the structure of G0 cannot be conveniently modeled with the usual parametric prior families. Analysis of simulated and real-world datasets illustrates the robustness of this approach.
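The construction above rests on the standard Dirichlet process mixture, in which the concentration parameter α and the base measure G0 jointly govern how many components the data realise. The following minimal sketch (plain NumPy; the normal kernel, the N(0, 9) base measure and α = 2 are illustrative choices, not the authors' adaptive estimator) simulates from such a mixture via the Chinese restaurant process to show how components accumulate as data arrive.

```python
import numpy as np

def sample_dp_mixture(n, alpha, base_sampler, rng):
    """Draw n observations from a Dirichlet process mixture of normals
    via the Chinese restaurant process (illustrative sketch only)."""
    atoms = []      # component means drawn from the base measure G0
    counts = []     # number of observations assigned to each component
    y = np.empty(n)
    for i in range(n):
        # probability of joining an existing component vs. opening a new one
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(atoms):               # open a new component
            atoms.append(base_sampler(rng))
            counts.append(0)
        counts[k] += 1
        y[i] = rng.normal(atoms[k], 1.0)  # unit-variance normal kernel
    return y, np.array(atoms), np.array(counts)

rng = np.random.default_rng(0)
# G0 taken here as N(0, 3^2) purely for illustration
y, atoms, counts = sample_dp_mixture(
    500, alpha=2.0, base_sampler=lambda r: r.normal(0, 3), rng=rng)
print(len(atoms), "components realised; counts per component:", counts)
```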
32.
This paper considers the stratified proportional hazards model with a focus on the assessment of stratum effects. The assessment of such effects is often of interest, for example, in clinical trials. In this case, two relevant tests are the test of stratum interaction with covariates and the test of stratum interaction with baseline hazard functions. For the test of stratum interaction with covariates, one can use the partial likelihood method (Kalbfleisch and Prentice, 1980; Lin, 1994). For the test of stratum interaction with baseline hazard functions, however, there seems to be no formal test available. We consider this problem and propose a class of nonparametric tests. The asymptotic distributions of the tests are derived using the martingale theory. The proposed tests can also be used for survival comparisons which need to be adjusted for covariate effects. The method is illustrated with data from a lung cancer clinical trial.
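For the first of the two tests, stratum interaction with covariates, the partial-likelihood route can be sketched as a likelihood-ratio comparison of stratified Cox fits with and without an interaction term. The sketch below assumes the lifelines package and entirely hypothetical simulated data and column names; the paper's own contribution, the test of stratum interaction with the baseline hazards, is not reproduced here.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 400
stratum = rng.integers(0, 2, n)                  # two strata (hypothetical)
x = rng.normal(size=n)                           # a single covariate
# simulate exponential survival times whose rate depends on x only
t = rng.exponential(1.0 / np.exp(0.5 * x))
c = rng.exponential(2.0, n)                      # independent censoring times
df = pd.DataFrame({
    "time": np.minimum(t, c),
    "event": (t <= c).astype(int),
    "x": x,
    "stratum": stratum,
    "x_by_stratum": x * stratum,                 # stratum-by-covariate interaction
})

# stratified Cox fits with and without the interaction term
m0 = CoxPHFitter().fit(df.drop(columns="x_by_stratum"),
                       duration_col="time", event_col="event", strata=["stratum"])
m1 = CoxPHFitter().fit(df, duration_col="time", event_col="event", strata=["stratum"])

# partial-likelihood ratio test of the interaction (1 degree of freedom here)
lr = 2 * (m1.log_likelihood_ - m0.log_likelihood_)
print("LR statistic:", lr, "p-value:", chi2.sf(lr, df=1))
```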
33.
Kontkanen P., Myllymäki P., Silander T., Tirri H., Grünwald P. Statistics and Computing, 2000, 10(1): 39-54.
In this paper we are interested in discrete prediction problems in a decision-theoretic setting, where the task is to compute the predictive distribution for a finite set of possible alternatives. This question is first addressed in a general Bayesian framework, where we consider a set of probability distributions defined by some parametric model class. Given a prior distribution on the model parameters and a set of sample data, one possible approach for determining a predictive distribution is to fix the parameters to the instantiation with the maximum a posteriori probability. A more accurate predictive distribution can be obtained by computing the evidence (marginal likelihood), i.e., the integral over all the individual parameter instantiations. As an alternative to these two approaches, we demonstrate how to use Rissanen's new definition of stochastic complexity for determining predictive distributions, and show how the evidence predictive distribution with the Jeffreys prior approaches the new stochastic complexity predictive distribution in the limit with an increasing amount of sample data. To compare the alternative approaches in practice, each of the predictive distributions discussed is instantiated in the Bayesian network model family case. In particular, to determine the Jeffreys prior for this model family, we show how to compute the (expected) Fisher information matrix for a fixed but arbitrary Bayesian network structure. In the empirical part of the paper the predictive distributions are compared by using the simple tree-structured Naive Bayes model, which is used in the experiments for computational reasons. The experimentation with several public domain classification datasets suggests that the evidence approach produces the most accurate predictions in the log-score sense. The evidence-based methods are also quite robust in the sense that they predict surprisingly well even when only a small fraction of the full training set is used.
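The contrast between the MAP plug-in and the evidence (marginal-likelihood) predictive distributions can be made concrete in the simplest conjugate case. The sketch below uses a Bernoulli model with a Beta prior (a = b = 0.5, i.e. the Jeffreys prior) rather than the Bayesian network families studied in the paper; it only illustrates how the two predictive distributions differ for small samples and agree as the sample grows.

```python
import numpy as np

def predictive_probs(heads, n, a=0.5, b=0.5):
    """Predictive P(next = 1) for a Bernoulli model with a Beta(a, b) prior
    (a = b = 0.5 is the Jeffreys prior). Returns MAP plug-in and evidence-based
    values; the MAP formula assumes an interior posterior mode."""
    # MAP plug-in: fix theta at its posterior mode, then predict
    theta_map = (heads + a - 1) / (n + a + b - 2)
    p_map = theta_map
    # Evidence (marginal likelihood): integrate theta out analytically
    p_evidence = (heads + a) / (n + a + b)
    return p_map, p_evidence

for n, heads in [(4, 3), (40, 30), (4000, 3000)]:
    p_map, p_ev = predictive_probs(heads, n)
    print(f"n={n:5d}: MAP plug-in {p_map:.4f}   evidence {p_ev:.4f}")
```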
34.
Networks of ambient monitoring stations are used to monitor environmental pollution fields such as those for acid rain and air pollution. Such stations provide regular measurements of pollutant concentrations. The networks are established for a variety of purposes at various times, so several stations measuring different subsets of pollutant concentrations can often be found in compact geographical regions. The problem of statistically combining these disparate information sources into a single 'network' then arises. Capitalizing on the efficiencies so achieved can then lead to the secondary problem of extending this network. The subject of this paper is a set of 31 air pollution monitoring stations in southern Ontario. Each of these regularly measures a particular subset of ionic sulphate, sulphite, nitrite and ozone. However, this subset varies from station to station. For example, only two stations measure all four. Some measure just one. We describe a Bayesian framework for integrating the measurements of these stations to yield a spatial predictive distribution for unmonitored sites and unmeasured concentrations at existing stations. Furthermore we show how this network can be extended by using an entropy maximization criterion. The methods assume that the multivariate response field being measured has a joint Gaussian distribution conditional on its mean and covariance function. A conjugate prior is used for these parameters, some of its hyperparameters being fitted empirically.
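The two ingredients described, spatial prediction at unmonitored sites under a joint Gaussian model and entropy-based extension of the network, can be sketched in a simplified known-covariance setting. The toy exponential covariance over six sites on a line, and the use of the log-determinant of the candidate-augmented covariance as the entropy score, are illustrative simplifications of the hierarchical Bayesian treatment in the paper.

```python
import numpy as np

def conditional_gaussian(Sigma, mu, obs_idx, y_obs):
    """Mean and covariance of unobserved coordinates given observed ones
    for a joint Gaussian field (a simplified, known-covariance sketch)."""
    all_idx = np.arange(len(mu))
    un_idx = np.setdiff1d(all_idx, obs_idx)
    S_oo = Sigma[np.ix_(obs_idx, obs_idx)]
    S_uo = Sigma[np.ix_(un_idx, obs_idx)]
    S_uu = Sigma[np.ix_(un_idx, un_idx)]
    w = np.linalg.solve(S_oo, y_obs - mu[obs_idx])
    cond_mean = mu[un_idx] + S_uo @ w
    cond_cov = S_uu - S_uo @ np.linalg.solve(S_oo, S_uo.T)
    return un_idx, cond_mean, cond_cov

def entropy_best_addition(Sigma, obs_idx, candidates):
    """Pick the candidate site whose addition to the monitored set maximizes
    its Gaussian entropy (proportional to the log-determinant covariance)."""
    scores = {}
    for c in candidates:
        idx = np.append(obs_idx, c)
        _, logdet = np.linalg.slogdet(Sigma[np.ix_(idx, idx)])
        scores[c] = logdet
    return max(scores, key=scores.get), scores

# toy exponential covariance over 6 sites on a line (hypothetical layout)
d = np.abs(np.subtract.outer(np.arange(6.0), np.arange(6.0)))
Sigma = np.exp(-d / 2.0)
mu = np.zeros(6)
obs = np.array([0, 2])
un, m, C = conditional_gaussian(Sigma, mu, obs, y_obs=np.array([1.0, -0.5]))
best, scores = entropy_best_addition(Sigma, obs, candidates=[1, 3, 4, 5])
print("predict at sites", un, "with mean", np.round(m, 3))
print("best site to add under the entropy score:", best)
```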
35.
Model checking with discrete data regressions can be difficult because the usual methods such as residual plots have complicated reference distributions that depend on the parameters in the model. Posterior predictive checks have been proposed as a Bayesian way to average the results of goodness-of-fit tests in the presence of uncertainty in estimation of the parameters. We try this approach using a variety of discrepancy variables for generalized linear models fitted to a historical data set on behavioural learning. We then discuss the general applicability of our findings in the context of a recent applied example on which we have worked. We find that the following discrepancy variables work well, in the sense of being easy to interpret and sensitive to important model failures: structured displays of the entire data set, general discrepancy variables based on plots of binned or smoothed residuals versus predictors and specific discrepancy variables created on the basis of the particular concerns arising in an application. Plots of binned residuals are especially easy to use because their predictive distributions under the model are sufficiently simple that model checks can often be made implicitly. The following discrepancy variables did not work well: scatterplots of latent residuals defined from an underlying continuous model and quantile–quantile plots of these residuals.
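A binned-residual check of the kind recommended can be sketched for an ordinary logistic regression. The example below assumes the statsmodels package and simulated data; the bin count and the ±2 standard-error band are conventional choices rather than prescriptions from the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(-0.5 + 1.2 * x)))
y = rng.binomial(1, p_true)

X = sm.add_constant(x)
fit = sm.Logit(y, X).fit(disp=0)
p_hat = fit.predict(X)
resid = y - p_hat                      # raw residuals on the probability scale

# binned residuals: average residual within bins of the fitted probability
order = np.argsort(p_hat)
bins = np.array_split(order, 20)       # ~20 equal-size bins
bin_mean_fit = np.array([p_hat[b].mean() for b in bins])
bin_mean_res = np.array([resid[b].mean() for b in bins])
# under a well-specified model each bin mean should lie within roughly
# +/- 2 * sqrt(mean(p(1-p)) / bin_size) of zero
se = np.array([np.sqrt((p_hat[b] * (1 - p_hat[b])).mean() / len(b)) for b in bins])
flagged = np.abs(bin_mean_res) > 2 * se
print(f"{flagged.sum()} of {len(bins)} bins fall outside the +/-2 s.e. band")
```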
36.
In recent years there has been a rapid growth in the amount of DNA being sequenced and in its availability through genetic databases. Statistical techniques which identify structure within these sequences can be of considerable assistance to molecular biologists particularly when they incorporate the discrete nature of changes caused by evolutionary processes. This paper focuses on the detection of homogeneous segments within heterogeneous DNA sequences. In particular, we study an intron from the chimpanzee α-fetoprotein gene; this protein plays an important role in the embryonic development of mammals. We present a Bayesian solution to this segmentation problem using a hidden Markov model implemented by Markov chain Monte Carlo methods. We consider the important practical problem of specifying informative prior knowledge about sequences of this type. Two Gibbs sampling algorithms are contrasted and the sensitivity of the analysis to the prior specification is investigated. Model selection and possible ways to overcome the label switching problem are also addressed. Our analysis of intron 7 identifies three distinct homogeneous segment types, two of which occur in more than one region, and one of which is reversible.
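The hidden Markov segmentation step can be illustrated with a forward-filtering, backward-sampling draw of the segment labels for a two-state model with fixed transition and emission matrices. In a full Gibbs sampler of the kind used in the paper this draw would alternate with updates of those matrices; the toy sequence and parameter values below are hypothetical and unrelated to intron 7.

```python
import numpy as np

def sample_states(obs, pi, A, E, rng):
    """One draw of the hidden segment labels by forward filtering /
    backward sampling, with transition matrix A and emission matrix E fixed.
    In a full Gibbs sampler this step alternates with updates of A and E."""
    n, K = len(obs), len(pi)
    alpha = np.zeros((n, K))
    alpha[0] = pi * E[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, n):
        alpha[t] = (alpha[t - 1] @ A) * E[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    z = np.empty(n, dtype=int)
    z[-1] = rng.choice(K, p=alpha[-1])
    for t in range(n - 2, -1, -1):
        w = alpha[t] * A[:, z[t + 1]]
        z[t] = rng.choice(K, p=w / w.sum())
    return z

rng = np.random.default_rng(3)
# toy DNA sequence coded A=0, C=1, G=2, T=3 (hypothetical, not intron 7)
obs = rng.choice(4, size=200, p=[0.4, 0.1, 0.1, 0.4])
pi = np.array([0.5, 0.5])
A = np.array([[0.98, 0.02], [0.02, 0.98]])        # sticky segments
E = np.array([[0.4, 0.1, 0.1, 0.4],               # AT-rich state
              [0.1, 0.4, 0.4, 0.1]])              # GC-rich state
z = sample_states(obs, pi, A, E, rng)
print("sampled segment labels (first 40 positions):", z[:40])
```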
37.
The Finnish common toad data of Heikkinen and Hogmander are reanalysed using an alternative fully Bayesian model that does not require a pseudolikelihood approximation and an alternative prior distribution for the true presence or absence status of toads in each 10 km×10 km square. Markov chain Monte Carlo methods are used to obtain posterior probability estimates of the square-specific presences of the common toad and these are presented as a map. The results are different from those of Heikkinen and Hogmander and we offer an explanation in terms of the prior used for square-specific presence of the toads. We suggest that our approach is more faithful to the data and avoids unnecessary confounding of effects. We demonstrate how to extend our model efficiently with square-specific covariates and illustrate this by introducing deterministic spatial changes.
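One way to picture a fully Bayesian treatment of square-specific presence is a single-site Gibbs sampler for a binary field with an autologistic (Ising-type) neighbour prior and imperfect detection. The sketch below is a generic illustration with made-up sighting records, detection probability and interaction strength; it is not the prior or likelihood used for the Finnish toad data.

```python
import numpy as np

def gibbs_sweep(z, obs, beta, p_detect, rng):
    """One Gibbs sweep over a binary presence/absence grid z with an
    autologistic neighbour prior (strength beta) and imperfect detection:
    an occupied square yields a sighting with probability p_detect,
    an unoccupied one never does. All parameters are illustrative."""
    n_rows, n_cols = z.shape
    for i in range(n_rows):
        for j in range(n_cols):
            nbrs = []
            if i > 0: nbrs.append(z[i - 1, j])
            if i < n_rows - 1: nbrs.append(z[i + 1, j])
            if j > 0: nbrs.append(z[i, j - 1])
            if j < n_cols - 1: nbrs.append(z[i, j + 1])
            # prior log-odds from (present - absent) neighbours
            log_odds = beta * (2 * sum(nbrs) - len(nbrs))
            if obs[i, j] == 1:
                z[i, j] = 1                   # a sighting implies presence
                continue
            log_odds += np.log(1 - p_detect)  # no sighting despite presence
            p1 = 1 / (1 + np.exp(-log_odds))
            z[i, j] = rng.binomial(1, p1)
    return z

rng = np.random.default_rng(4)
obs = (rng.random((20, 20)) < 0.15).astype(int)   # toy sighting records
z = obs.copy()
freq = np.zeros_like(z, dtype=float)
for it in range(500):
    z = gibbs_sweep(z, obs, beta=0.8, p_detect=0.5, rng=rng)
    if it >= 100:                                 # discard 100 sweeps as burn-in
        freq += z
print("posterior presence probabilities (top-left corner):")
print(np.round(freq / 400, 2)[:4, :4])
```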
38.
When Shannon entropy is used as a criterion in the optimal design of experiments, advantage can be taken of the classical identity representing the joint entropy of parameters and observations as the sum of the marginal entropy of the observations and the preposterior conditional entropy of the parameters. Following previous work in which this idea was used in spatial sampling, the method is applied to standard parameterized Bayesian optimal experimental design. Under suitable conditions, which include non-linear as well as linear regression models, it is shown in a few steps that maximizing the marginal entropy of the sample is equivalent to minimizing the preposterior entropy, the usual Bayesian criterion, thus avoiding the use of conditional distributions. It is shown using this marginal formulation that under normality assumptions every standard model which has a two-point prior distribution on the parameters gives an optimal design supported on a single point. Other results include a new asymptotic formula which applies as the error variance is large and bounds on support size.
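The identity referred to can be written out as follows; the step that makes the marginal formulation usable is the assumption that the entropy of the observations given the parameters does not depend on the design, as with additive errors of fixed entropy.

```latex
% Joint-entropy decomposition underlying the marginal-entropy criterion.
% The design-independence of E_theta[H(Y | theta)] is an assumption that
% holds, for example, for additive errors with fixed entropy.
\begin{align*}
H(\theta, Y) &= H(Y) + \mathbb{E}_{Y}\!\left[ H(\theta \mid Y) \right]
             = H(\theta) + \mathbb{E}_{\theta}\!\left[ H(Y \mid \theta) \right].\\
\intertext{If $H(\theta)$ and $\mathbb{E}_{\theta}[H(Y \mid \theta)]$ do not depend on the design $\xi$, then}
\arg\min_{\xi}\; \mathbb{E}_{Y}\!\left[ H(\theta \mid Y) \right]
  &= \arg\max_{\xi}\; H(Y).
\end{align*}
```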
39.
This paper considers a class of summary measures of the dependence between a pair of failure time variables over a finite follow-up region. The class consists of measures that are weighted averages of local dependence measures, and includes the cross-ratio measure and the finite-region version of Kendall's τ recently proposed by the authors. Two new special cases are identified that can avoid the need to estimate the bivariate survivor function and that admit explicit variance estimators. Nonparametric estimators of such dependence measures are proposed and are shown to be consistent and asymptotically normal with variances that can be consistently estimated. Properties of selected estimators are evaluated in a simulation study, and the method is illustrated through an analysis of Australian Twin Study data.
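A simplified, censoring-free version of a finite-region Kendall's τ conveys the idea of averaging local dependence over a bounded follow-up window: only pairs whose componentwise minima fall inside the region contribute to the concordance count. The sketch below is not the authors' estimator (which accommodates right censoring through the bivariate survivor function); the shared-frailty data are illustrative.

```python
import numpy as np

def finite_region_kendalls_tau(t1, t2, tau1, tau2):
    """Kendall's tau restricted to a finite follow-up region [0,tau1]x[0,tau2]
    for *uncensored* bivariate failure times: a pair contributes only if its
    componentwise minima fall inside the region (simplified sketch; the
    paper's estimators additionally handle right censoring)."""
    n = len(t1)
    num, den = 0.0, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if min(t1[i], t1[j]) <= tau1 and min(t2[i], t2[j]) <= tau2:
                num += np.sign((t1[i] - t1[j]) * (t2[i] - t2[j]))
                den += 1
    return num / den if den else np.nan

rng = np.random.default_rng(5)
# positively dependent toy failure times via a shared gamma frailty
w = rng.gamma(2.0, 1.0, size=300)
t1 = rng.exponential(1.0 / w)
t2 = rng.exponential(1.0 / w)
print("finite-region tau:", round(finite_region_kendalls_tau(t1, t2, 2.0, 2.0), 3))
```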
40.
In a searching analysis of the fiducial argument, Hacking (1965) proposed the Principle of Irrelevance as a condition under which the argument is valid. His statement of the Principle was essentially non-mathematical, and this paper presents a mathematical development of the Principle. The relationship with likelihood inference is explored and some of the proposed counter-examples to fiducial theory are considered. It is shown that, even with the Principle of Irrelevance, examples of non-uniqueness of fiducial distributions exist.
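For readers unfamiliar with the fiducial argument, the standard single-observation normal example (a textbook illustration, not drawn from Hacking's paper) shows the kind of inversion at stake:

```latex
% Textbook fiducial inversion for a single normal observation.
X \sim N(\mu, 1), \qquad U = X - \mu \sim N(0, 1) \ \text{(pivotal quantity)},
\qquad \Longrightarrow \qquad \mu \ \text{has fiducial distribution} \ N(x, 1) \ \text{given } X = x.
```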