51.
In many clinical trials, biological, pharmacological, or clinical information is used to define candidate subgroups of patients that might have a differential treatment effect. Once the trial results are available, interest focuses on subgroups with an increased treatment effect. Estimating a treatment effect for these groups, together with an adequate uncertainty statement, is challenging owing to the resulting "random high" / selection bias. In this paper, we investigate Bayesian model averaging to address this problem. The motivation for model averaging is that subgroup selection can be viewed as model selection, so methods for dealing with model-selection uncertainty, such as model averaging, can also be used in this setting. Simulations are used to evaluate the performance of the proposed approach, and we illustrate it with an example from an early-phase clinical trial.
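The core idea, that subgroup selection is a form of model selection, can be sketched with BIC-based model weights, a common approximation to Bayesian model averaging. All numbers and the two-model setup below are illustrative, not taken from the paper:

```python
import math

# Illustrative numbers only; not data from the paper.
effects_sub = [1.8, 2.1, 2.5, 1.9]    # observed effects in the selected subgroup
effects_rest = [0.2, -0.1, 0.4, 0.1]  # observed effects in the remaining patients

def mean(xs):
    return sum(xs) / len(xs)

def gauss_loglik(xs, mu, sigma=1.0):
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (x - mu)**2 / (2 * sigma**2) for x in xs)

all_x = effects_sub + effects_rest

# Model 1: one common treatment effect (1 mean parameter)
bic1 = -2 * gauss_loglik(all_x, mean(all_x)) + 1 * math.log(len(all_x))

# Model 2: a separate effect in the subgroup (2 mean parameters)
ll2 = (gauss_loglik(effects_sub, mean(effects_sub))
       + gauss_loglik(effects_rest, mean(effects_rest)))
bic2 = -2 * ll2 + 2 * math.log(len(all_x))

# BIC-based approximate posterior model weights
raw = [math.exp(-0.5 * bic1), math.exp(-0.5 * bic2)]
w = [r / sum(raw) for r in raw]

# Model-averaged subgroup effect: shrunk toward the overall mean,
# which tempers the "random high" from selecting the best subgroup
avg_effect = w[0] * mean(all_x) + w[1] * mean(effects_sub)
print(round(avg_effect, 3))
```

The averaged estimate always sits between the naive subgroup estimate and the overall mean, which is the shrinkage behaviour the abstract alludes to.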
52.
This paper is concerned with interval estimation for the breakpoint parameter in segmented regression. We present score-type confidence intervals derived from the score statistic itself and from the recently proposed gradient statistic. Because the score lacks the usual regularity conditions, being non-smooth and non-monotone, naive application of the score-based statistics is infeasible, so we propose exploiting the smoothed score obtained via induced smoothing. We compare our proposals with the traditional methods based on the Wald and likelihood ratio statistics via simulations and an analysis of a real dataset; the results show that the smoothed score-like statistics perform somewhat better in practice than their competitors, even when the model is not correctly specified.
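For orientation, the breakpoint can be profiled by a crude grid search over candidate locations. This is a naive least-squares sketch on simulated data, not the score-based interval machinery the paper develops:

```python
import random

random.seed(1)

# Simulated data with a slope change at x = 5 (illustrative only)
xs = [i / 10 for i in range(100)]
true_bp = 5.0
ys = [1.0 + 0.5 * x + (1.5 * (x - true_bp) if x > true_bp else 0.0)
      + random.gauss(0, 0.2) for x in xs]

def fit_line(xv, yv):
    # Ordinary least squares for y = a + b*x
    n = len(xv)
    mx, my = sum(xv) / n, sum(yv) / n
    sxx = sum((x - mx) ** 2 for x in xv)
    b = sum((x - mx) * (y - my) for x, y in zip(xv, yv)) / sxx
    return my - b * mx, b

def rss_at(bp):
    # Total residual sum of squares from fitting separate lines
    # on each side of the candidate breakpoint
    total = 0.0
    for side in ([(x, y) for x, y in zip(xs, ys) if x <= bp],
                 [(x, y) for x, y in zip(xs, ys) if x > bp]):
        a, b = fit_line([p[0] for p in side], [p[1] for p in side])
        total += sum((y - (a + b * x)) ** 2 for x, y in side)
    return total

grid = [i / 10 for i in range(20, 81)]   # candidate breakpoints 2.0 .. 8.0
bp_hat = min(grid, key=rss_at)
print(bp_hat)
```

The RSS profile over the grid is exactly the kind of non-smooth objective the abstract mentions; induced smoothing replaces it with a differentiable surrogate so score-type statistics become usable.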
53.
In the Bayesian analysis of a multiple-recapture census, different diffuse prior distributions can lead to markedly different inferences about the population size N. Through consideration of the Fisher information matrix, it is shown that the number of captures in each sample typically provides little information about N. This suggests that if there is no prior information about capture probabilities, then knowledge of just the sample sizes, and not the number of recaptures, should leave the distribution of N unchanged. A prior model that has this property is identified and the posterior distribution is examined. In particular, asymptotic estimates of the posterior mean and variance are derived. Differences between Bayesian and classical point and interval estimators are illustrated through examples.
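As a baseline for the classical estimators the abstract compares against, the standard two-sample point estimates look like this (hypothetical counts, not from the paper):

```python
# Hypothetical counts for a two-sample recapture census
n1 = 200   # animals captured and marked in the first sample
n2 = 150   # animals captured in the second sample
m2 = 30    # marked animals recaptured in the second sample

N_hat = n1 * n2 / m2                             # Lincoln-Petersen estimate
N_chapman = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1   # Chapman's bias-corrected version
print(N_hat, round(N_chapman, 1))
```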
54.
This paper reviews difficulties with the interpretation and use of the prior parameter u required in the Dirichlet approach to nonparametric Bayesian statistics. Two subjective prior distributions are introduced and studied. These priors are obtained computationally by requiring that the experimenter specify certain constraints.
55.
A new class of weighted signed-rank-based estimates for estimating the parameter vector of an autoregressive time series is considered. The Wilcoxon signed-rank estimate and the GR-estimates of Terpstra et al. (GR-Estimates for an Autoregressive Time Series. Statistics and Probability Letters 2001, 51, 165–172; Generalized Rank Estimates for an Autoregressive Time Series: A U-Statistic Approach. Statistical Inference for Stochastic Processes 2001, 4, 155–179) can be viewed as special cases of the so-called GSR-estimates. Asymptotic linearity properties are derived for the GSR-estimates. Based on these properties and a symmetry assumption, the GSR-estimates are shown to be asymptotically normal at rate n^(1/2). The theory of U-statistics, along with a characterization of the weak dependence inherent in stationary AR(p) models, provides the primary tools used to obtain the results. Tests of hypotheses as well as standard errors for confidence interval procedures can be based on these results. An efficiency study indicates that, for an appropriately chosen set of weights, the GSR-estimate is more efficient than the GR-estimate. Furthermore, the GSR-estimate has the added advantage that an intercept term can be estimated simultaneously, unlike with the GR-estimate. Two examples and a small simulation study illustrate the computational and robustness properties of the GSR-estimates.
56.
In the fields of internet financial transactions and reliability engineering, data often contain excess zero and one observations simultaneously. Since such data fall outside what conventional models can fit, this paper proposes a zero-and-one-inflated geometric regression model. By introducing Pólya-Gamma latent variables into the Bayesian inference, posterior sampling over the high-dimensional parameters is converted into latent-variable sampling plus posterior sampling over lower-dimensional parameters. This circumvents the need for Metropolis-Hastings sampling and yields samples with higher sampling efficiency. A simulation study assesses the performance of the proposed estimation for various sample sizes. Finally, a doctoral dissertation data set is analyzed to illustrate the practicability of the proposed method; the analysis shows that the zero-and-one-inflated geometric regression model with Pólya-Gamma latent variables achieves a better fit.
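The sampling model itself can be sketched as a three-part mixture: extra point masses at 0 and at 1, with a geometric component otherwise. Parameter names here are mine, not the paper's:

```python
import random

random.seed(0)

# Mixture parameters (my notation): extra mass at 0, extra mass at 1,
# and the success probability of the geometric component on {0, 1, 2, ...}
p0, p1, theta = 0.3, 0.2, 0.4

def rzoig():
    u = random.random()
    if u < p0:
        return 0
    if u < p0 + p1:
        return 1
    k = 0                                  # failures before the first success
    while random.random() > theta:
        k += 1
    return k

sample = [rzoig() for _ in range(20000)]
frac0 = sample.count(0) / len(sample)
# Under this model P(Y = 0) = p0 + (1 - p0 - p1) * theta = 0.5
print(round(frac0, 3))
```

The inflation means P(Y = 0) and P(Y = 1) exceed what any single geometric law allows, which is why the conventional model cannot fit such data.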
57.
Linear increments (LI) are used to analyse repeated outcome data with missing values. Previously, two LI methods have been proposed, one allowing non‐monotone missingness but not independent measurement error and one allowing independent measurement error but only monotone missingness. In both, it was suggested that the expected increment could depend on current outcome. We show that LI can allow non‐monotone missingness and either independent measurement error of unknown variance or dependence of expected increment on current outcome but not both. A popular alternative to LI is a multivariate normal model ignoring the missingness pattern. This gives consistent estimation when data are normally distributed and missing at random (MAR). We clarify the relation between MAR and the assumptions of LI and show that for continuous outcomes multivariate normal estimators are also consistent under (non‐MAR and non‐normal) assumptions not much stronger than those of LI. Moreover, when missingness is non‐monotone, they are typically more efficient.
58.
We consider confidence intervals for the stress-strength reliability Pr(X < Y) in the two-parameter exponential distribution. We derive the Bayesian highest posterior density interval using non-informative prior distributions and compare its performance with intervals based on the generalized pivot variable in terms of coverage probabilities and expected lengths. Our simulation study shows that the Bayesian interval performs better according to the criteria used, especially when the sample sizes are very small. An example is given.
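When the two exponentials share a common location (taken as zero below), the reliability Pr(X < Y) has a simple closed form in the rates, which a quick Monte Carlo check confirms. This is only a sketch of the target quantity, not the paper's interval procedures:

```python
import random

random.seed(42)

# X ~ Exp(rate a), Y ~ Exp(rate b), common location taken as zero:
# Pr(X < Y) = a / (a + b)
a, b = 2.0, 1.0
closed_form = a / (a + b)

# Quick Monte Carlo confirmation
n = 100_000
hits = sum(random.expovariate(a) < random.expovariate(b) for _ in range(n))
print(closed_form, hits / n)
```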
59.
Assignment of individuals to the correct species or population of origin based on a comparison of allele profiles has in recent years become more accurate owing to improvements in DNA marker technology. A method of assessing the error in such assignment problems is presented. The method is based on the exact hypergeometric distributions of contingency tables conditioned on marginal totals. The result is a confidence region of fixed confidence level. This confidence level is calculable exactly in principle, and can be estimated very accurately by simulation, without knowledge of the true population allele frequencies. Various properties of these techniques are examined through application to several examples of actual DNA marker data and through simulation studies. Methods that may reduce computation time are discussed and illustrated.
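The conditional distribution underlying the method is the hypergeometric law of a contingency table given its margins; in the 2x2 case its exact probability is easy to write down (toy counts, my own notation):

```python
from math import comb

def table_prob(a, b, c, d):
    # Exact probability of the 2x2 table [[a, b], [c, d]] under the
    # hypergeometric distribution conditioned on its marginal totals
    n = a + b + c + d
    return comb(a + b, a) * comb(c + d, c) / comb(n, a + c)

p = table_prob(5, 1, 2, 6)   # toy counts
print(round(p, 6))
```

Summing such probabilities over all tables with the same margins gives exactly 1, which is what makes exact confidence levels computable in principle.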
60.
Kontkanen P., Myllymäki P., Silander T., Tirri H., Grünwald P. Statistics and Computing, 2000, 10(1): 39-54
In this paper we are interested in discrete prediction problems in a decision-theoretic setting, where the task is to compute the predictive distribution for a finite set of possible alternatives. This question is first addressed in a general Bayesian framework, where we consider a set of probability distributions defined by some parametric model class. Given a prior distribution on the model parameters and a set of sample data, one possible approach for determining a predictive distribution is to fix the parameters to the instantiation with the maximum a posteriori probability. A more accurate predictive distribution can be obtained by computing the evidence (marginal likelihood), i.e., the integral over all the individual parameter instantiations. As an alternative to these two approaches, we demonstrate how to use Rissanen's new definition of stochastic complexity for determining predictive distributions, and show how the evidence predictive distribution with the Jeffreys prior approaches the new stochastic-complexity predictive distribution in the limit of increasing sample data. To compare the alternative approaches in practice, each of the predictive distributions discussed is instantiated in the Bayesian network model family. In particular, to determine the Jeffreys prior for this model family, we show how to compute the (expected) Fisher information matrix for a fixed but arbitrary Bayesian network structure. In the empirical part of the paper the predictive distributions are compared using the simple tree-structured Naive Bayes model, chosen in the experiments for computational reasons. Experimentation with several public-domain classification datasets suggests that the evidence approach produces the most accurate predictions in the log-score sense. The evidence-based methods are also quite robust in the sense that they predict surprisingly well even when only a small fraction of the full training set is used.
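The MAP-versus-evidence contrast is visible already in a one-parameter Bernoulli model with the Jeffreys Beta(1/2, 1/2) prior. This toy illustration is mine, not the paper's Bayesian-network setting:

```python
# 3 successes in 4 Bernoulli trials, Jeffreys prior Beta(1/2, 1/2)
a0 = b0 = 0.5
s, n = 3, 4

# Posterior is Beta(a0 + s, b0 + n - s) = Beta(3.5, 1.5)
# MAP plug-in predictive: evaluate the model at the posterior mode
theta_map = (a0 + s - 1) / (a0 + b0 + n - 2)
p_next_map = theta_map

# Evidence-based (posterior predictive): integrate over the posterior,
# which for Beta-Bernoulli reduces to the posterior mean
p_next_evidence = (a0 + s) / (a0 + b0 + n)
print(p_next_map, p_next_evidence)
```

The plug-in predictive (about 0.83) is more extreme than the evidence-based one (0.7); averaging over parameter uncertainty moderates the prediction, which is one intuition for why the evidence approach scores better on log-loss.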