2,349 search results found.
62.
In recent years, dynamical modelling has been provided with a range of breakthrough methods for exact Bayesian inference. However, it is often computationally infeasible to apply exact statistical methodologies to large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors, with an application to protein-folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm uses simulations of ‘subsamples’ from the assumed data-generating model, as well as a so-called ‘early-rejection’ strategy, to speed up computations in the ABC-MCMC sampler. Using a considerable number of subsamples does not seem to degrade the quality of the inferential results for the applications considered. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter proving two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein data set. The suggested methodology is fairly general and not limited to the exemplified model and data.
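The early-rejection idea described in this abstract can be sketched on a toy model: test the (cheap) prior Metropolis ratio before running the (expensive) forward simulation, so hopeless proposals are discarded without simulating. The Gaussian model, prior, summary statistic, and tolerance below are illustrative assumptions, not the paper's SDE/protein-folding setup.

```python
import numpy as np

rng = np.random.default_rng(0)
y_obs = rng.normal(2.0, 1.0, size=50)   # pretend "observed" data
s_obs = y_obs.mean()                     # summary statistic

def log_prior(theta):
    # N(0, 10^2) prior on the mean parameter
    return -0.5 * (theta / 10.0) ** 2

def simulate_summary(theta, n=50):
    # forward-simulate a data set and reduce it to the summary
    return rng.normal(theta, 1.0, size=n).mean()

def abc_mcmc(n_iter=2000, eps=0.3, step=0.5):
    theta = s_obs                        # start at a point with nonzero ABC likelihood
    kept = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        # early rejection: check the prior Metropolis ratio BEFORE the
        # expensive forward simulation; if it already fails, skip simulating
        u = rng.uniform()
        if np.log(u) > log_prior(prop) - log_prior(theta):
            kept.append(theta)
            continue
        s_prop = simulate_summary(prop)
        if abs(s_prop - s_obs) < eps:    # uniform-kernel ABC acceptance
            theta = prop
        kept.append(theta)
    return np.array(kept)

draws = abc_mcmc()
print(round(draws[500:].mean(), 2))
```

With a uniform kernel this is the standard Marjoram-style ABC-MCMC acceptance split into two stages; the speed-up grows with the cost of the forward simulator.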
63.
We introduce and study general mathematical properties of a new generator of continuous distributions, with one extra parameter, called the generalized odd half-Cauchy family. We present some special models and investigate the asymptotics and shapes. The new density function can be expressed as a linear mixture of exponentiated densities based on the same baseline distribution. We derive a power series for the quantile function. We discuss estimation of the model parameters by maximum likelihood and demonstrate empirically the flexibility of the new family by means of two real data sets.
64.
We develop a novel computational methodology for Bayesian optimal sequential design for nonparametric regression. This computational methodology, which we call inhomogeneous evolutionary Markov chain Monte Carlo, combines ideas from simulated annealing, genetic and evolutionary algorithms, and Markov chain Monte Carlo. Our framework allows optimality criteria with general utility functions and general classes of priors for the underlying regression function. We illustrate the usefulness of our novel methodology with applications to experimental design for nonparametric function estimation using Gaussian process priors and free-knot cubic spline priors.
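A minimal sketch of the flavour of algorithm this abstract describes: a population of chains on a temperature ladder (the annealing ingredient), Metropolis mutation moves, and an exchange move between adjacent-temperature chains (the evolutionary ingredient). The design utility below, which rewards evenly spaced design points on [0,1], is a stand-in assumption, not the paper's nonparametric-regression utility.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pts, n_iter = 6, 3000
temps = np.array([1.0, 2.0, 4.0, 8.0]) * 1e-4   # temperature ladder

def utility(d):
    # toy criterion: negative variance of the gaps the design induces on [0,1]
    gaps = np.diff(np.sort(np.concatenate(([0.0], d, [1.0]))))
    return -np.var(gaps)

chains = [rng.uniform(0, 1, n_pts) for _ in range(len(temps))]
best = max(chains, key=utility).copy()
for _ in range(n_iter):
    # mutation: Gaussian perturbation, Metropolis-accepted at each temperature
    for c, T in enumerate(temps):
        prop = np.clip(chains[c] + 0.05 * rng.normal(size=n_pts), 0.0, 1.0)
        if np.log(rng.uniform()) < (utility(prop) - utility(chains[c])) / T:
            chains[c] = prop
    # exchange: propose swapping the states of two adjacent-temperature chains
    c = rng.integers(0, len(temps) - 1)
    du = utility(chains[c + 1]) - utility(chains[c])
    if np.log(rng.uniform()) < du * (1.0 / temps[c] - 1.0 / temps[c + 1]):
        chains[c], chains[c + 1] = chains[c + 1], chains[c]
    if utility(chains[0]) > utility(best):
        best = chains[0].copy()

print(np.sort(best).round(2))
```

The hot chains explore freely while the cold chain refines; exchanges let good candidate designs migrate down the ladder.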
65.
Small area statistics obtained from sample survey data provide a critical source of information used to study health, economic, and sociological trends. However, most large-scale sample surveys are not designed for the purpose of producing small area statistics. Moreover, data disseminators are prevented from releasing public-use microdata for small geographic areas for disclosure reasons, thus limiting the utility of the data they collect. This research evaluates a synthetic data method, intended for data disseminators, for releasing public-use microdata for small geographic areas based on complex sample survey data. The method replaces all observed survey values with synthetic (or imputed) values generated from a hierarchical Bayesian model that explicitly accounts for complex sample design features, including stratification, clustering, and sampling weights. The method is applied to restricted microdata from the National Health Interview Survey, and synthetic data are generated for both sampled and non-sampled small areas. The analytic validity of the resulting small area inferences is assessed by direct comparison with the actual data, a simulation study, and a cross-validation study.
66.
In many clinical trials, biological, pharmacological, or clinical information is used to define candidate subgroups of patients that might have a differential treatment effect. Once the trial results are available, interest focuses on subgroups with an increased treatment effect. Estimating a treatment effect for these groups, together with an adequate uncertainty statement, is challenging owing to the resulting “random high” / selection bias. In this paper, we investigate Bayesian model averaging to address this problem. The general motivation for model averaging is that subgroup selection can be viewed as model selection, so methods for dealing with model-selection uncertainty, such as model averaging, can also be used in this setting. Simulations are used to evaluate the performance of the proposed approach. We illustrate it with an example from an early-phase clinical trial.
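The "subgroup selection as model selection" idea can be illustrated in a simple normal-outcome setting: fit a no-subgroup model and a subgroup model, weight them, and report a model-averaged subgroup effect rather than the cherry-picked one. This sketch uses BIC-based approximate posterior model probabilities as a stand-in for the paper's fully Bayesian weights, and the data-generating numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
sub = rng.integers(0, 2, n)           # candidate subgroup indicator
trt = rng.integers(0, 2, n)           # treatment arm
# true subgroup-specific effect: 0.5 overall plus 0.4 extra in the subgroup
y = 0.5 * trt + 0.4 * trt * sub + rng.normal(0, 1, n)

def fit_ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return beta, rss

def bic(rss, n, k):
    # Gaussian-likelihood BIC up to a constant
    return n * np.log(rss / n) + k * np.log(n)

X0 = np.column_stack([np.ones(n), trt])             # M0: one overall effect
X1 = np.column_stack([np.ones(n), trt, trt * sub])  # M1: extra subgroup effect
b0, r0 = fit_ols(X0, y)
b1, r1 = fit_ols(X1, y)
bics = np.array([bic(r0, n, 2), bic(r1, n, 3)])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                                        # approximate model weights

# model-averaged treatment effect for subgroup members (trt=1, sub=1)
eff = w[0] * b0[1] + w[1] * (b1[1] + b1[2])
print(w.round(3), round(eff, 2))
```

Because the averaged estimate borrows from the no-subgroup model, it is pulled toward the overall effect, which is exactly the shrinkage that counteracts the "random high" bias.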
67.
Understanding how wood develops has become an important problem in plant science. However, studying wood formation requires the acquisition of count data that are difficult to interpret. Here, the annual wood-formation dynamics of a conifer tree species were modelled using generalized linear and additive models (GLM and GAM); GAM for location, scale, and shape (GAMLSS); and a discrete semiparametric kernel regression for count data. The performance of the models is evaluated using bootstrap methods. The GLM was useful for describing the general pattern of wood formation but showed a lack of fit, whereas the GAM, GAMLSS, and kernel regression were more sensitive to short-term variations.
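To make the GLM baseline concrete, here is a bare-bones Poisson regression with a log link fitted by iteratively reweighted least squares (IRLS), the kind of count model the abstract contrasts with GAM and GAMLSS. The simulated "count versus season" data and the specific coefficients are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
day = np.linspace(0.0, 1.0, 120)         # rescaled day of year
mu_true = np.exp(0.5 + 2.0 * day)        # true log-linear mean
counts = rng.poisson(mu_true)            # observed counts

def poisson_irls(X, y, n_iter=25):
    # Fisher scoring / IRLS for a Poisson GLM with log link
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu          # working response
        W = mu                            # IRLS weights for the log link
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

X = np.column_stack([np.ones_like(day), day])
beta = poisson_irls(X, counts)
print(beta.round(2))
```

A GAM would replace the linear term in `day` with a smooth, which is what lets it track the short-term variations the GLM misses.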
68.
In this paper, two control charts based on the generalized linear test (GLT) and the contingency table are proposed for Phase-II monitoring of multivariate categorical processes. The performance of the proposed methods is compared with the exponentially weighted moving average-generalized likelihood ratio test (EWMA-GLRT) control chart proposed in the literature. The results show better performance of the proposed control charts under moderate and large shifts. Moreover, a new scheme is proposed to identify the parameter responsible for an out-of-control signal. The performance of the proposed diagnostic procedure is evaluated through simulation experiments.
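A generic contingency-table chart for a categorical process can be sketched as follows: each monitoring sample is cross-classified, a Pearson chi-square statistic is computed against the in-control proportions, and points above a chi-square upper control limit signal. The in-control distribution, sample size, and shift below are invented for the example; this is the textbook chi-square chart, not the paper's specific GLT scheme.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
p0 = np.array([0.5, 0.3, 0.2])          # in-control category probabilities
m = 200                                  # sample size per monitoring period
ucl = chi2.ppf(0.995, df=len(p0) - 1)    # upper control limit

def chart_stat(counts):
    # Pearson chi-square statistic against the in-control proportions
    expected = m * p0
    return np.sum((counts - expected) ** 2 / expected)

# 20 in-control samples, then 10 samples after a moderate shift
stats = []
for t in range(30):
    p = p0 if t < 20 else np.array([0.35, 0.4, 0.25])
    counts = rng.multinomial(m, p)
    stats.append(chart_stat(counts))
signals = [t for t, s in enumerate(stats) if s > ucl]
print(signals)
```

The diagnosis step in the abstract would then inspect the per-cell contributions `(counts - expected)**2 / expected` of a signalling sample to attribute the shift to a category.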
69.
This paper is concerned with interval estimation for the breakpoint parameter in segmented regression. We present score-type confidence intervals derived from the score statistic itself and from the recently proposed gradient statistic. Because the score lacks the usual regularity conditions (it is non-smooth and non-monotone), naive application of score-based statistics is infeasible, so we propose exploiting the smoothed score obtained via induced smoothing. We compare our proposals with the traditional methods based on the Wald and likelihood ratio statistics via simulations and an analysis of a real dataset: the results show that the smoothed score-like statistics perform somewhat better in practice than their competitors, even when the model is not correctly specified.