1.
Before unblinding, information about the success of a confirmatory clinical trial is highly uncertain. Current techniques that use point estimates of auxiliary parameters for blinded sample size estimation (i) fail to describe the range of likely sample sizes obtained once the anticipated data are observed, and (ii) fail to adjust to a changing patient population. Sequential MCMC-based algorithms are implemented for the purpose of sample size adjustment. The uncertainty arising from clinical trials is characterized by filtering later auxiliary parameters through their earlier counterparts and by employing posterior distributions to estimate sample size and power. The use of approximate expected power estimates to determine the required additional sample size is closely related to techniques employing simple adjustments or the EM algorithm; by contrast, the proposed methodology provides intervals for the expected sample size using the posterior distribution of the auxiliary parameters. Future decisions about additional subjects are better informed by the ability to account for heterogeneity in subject response over time. The proposed methodologies are applied to a depression trial; because they are blinded and easy to implement, these procedures merit consideration for most studies.
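A minimal sketch of the interval idea can be given in a few lines. This is not the authors' procedure: it assumes a two-arm z-test, a conjugate inverse-gamma posterior for the blinded pooled variance, and illustrative values for the effect size, prior, and interim data. Each posterior draw of the variance is pushed through the standard formula n = 2σ²(z₁₋α/₂ + z₁₋β)²/δ², yielding an interval for the required sample size rather than a single point.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

# Hypothetical blinded interim data: pooled responses, arm labels unknown.
interim = rng.normal(loc=0.5, scale=2.0, size=120)

# Conjugate inverse-gamma posterior for sigma^2 (vague prior a0 = b0 = 0.01).
n = interim.size
a0, b0 = 0.01, 0.01
a_post = a0 + n / 2
b_post = b0 + 0.5 * np.sum((interim - interim.mean()) ** 2)
sigma2_draws = b_post / rng.gamma(a_post, 1.0, size=5000)  # inverse-gamma draws

# Per-draw sample size per arm for a two-arm z-test (alpha=0.05, power=0.9, delta=1).
z = NormalDist().inv_cdf
delta, z_a, z_b = 1.0, z(0.975), z(0.90)
n_draws = np.ceil(2 * sigma2_draws * (z_a + z_b) ** 2 / delta ** 2)

lo, hi = np.percentile(n_draws, [2.5, 97.5])
point = np.ceil(2 * (b_post / (a_post - 1)) * (z_a + z_b) ** 2 / delta ** 2)
print(f"point estimate n/arm = {point:.0f}, 95% interval = [{lo:.0f}, {hi:.0f}]")
```

The interval conveys what a single point estimate hides: how much the required sample size could still move once the anticipated data arrive.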
2.
We study the problem of estimating the intensity function of an inhomogeneous Poisson process with a change point using non-parametric Bayesian methods. Recent work on point processes includes posterior convergence rates for estimating a continuous intensity function; in this article, convergence rates for estimating the intensity function and the change point are derived for the more general case of a piecewise continuous intensity function. A Markov chain Monte Carlo (MCMC) algorithm is proposed to obtain estimates of the intensity function and the change point, which is illustrated using simulation studies and applications. The Canadian Journal of Statistics 47: 604–618; 2019 © 2019 Statistical Society of Canada
3.
The theory of higher-order asymptotics provides accurate approximations to posterior distributions, and to the corresponding tail areas, for a scalar parameter of interest, for practical use in Bayesian analysis. The aim of this article is to extend these approximations to pseudo-posterior distributions, i.e., posterior distributions based on a pseudo-likelihood function and a suitable prior, which are particularly useful when the full likelihood is analytically or computationally intractable. From a theoretical point of view, we derive the Laplace approximation to a pseudo-posterior distribution, and to the corresponding tail area, for a scalar parameter of interest, also in the presence of nuisance parameters. From a computational point of view, starting from these higher-order approximations, we discuss the higher-order tail area (HOTA) algorithm, which is useful for approximating marginal posterior distributions and related quantities. Compared with standard Markov chain Monte Carlo methods, the main advantage of the HOTA algorithm is that it gives independent samples at negligible computational cost. The relevant computations are illustrated by two examples.
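To fix ideas, here is the plain first-order Laplace tail-area approximation that the higher-order theory refines (this sketch is not the HOTA algorithm itself). A conjugate Poisson-Gamma model is used so the exact answer is available as a check; the counts and threshold are invented for the example.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2)

# Posterior for a Poisson rate theta with Gamma(2, 1) prior and observed counts:
# exactly Gamma(a, rate b) with the values below.
counts = np.array([4, 6, 5, 7, 3, 5, 4, 6, 5, 5, 6, 4, 5, 5, 6, 4, 5, 5, 6, 4])
a, b = 2 + counts.sum(), 1 + counts.size

def logpost(th):  # unnormalised log-posterior
    return (a - 1) * np.log(th) - b * th

# Laplace approximation: mode plus curvature of the log-posterior at the mode.
mode = (a - 1) / b
h = 1e-4
curv = (logpost(mode + h) - 2 * logpost(mode) + logpost(mode - h)) / h ** 2
se = np.sqrt(-1 / curv)

theta0 = 5.5
tail_laplace = 1 - NormalDist(mode, se).cdf(theta0)          # approximate Pr(theta > theta0)
tail_exact = np.mean(rng.gamma(a, 1 / b, 200000) > theta0)   # Monte Carlo reference
print("Laplace:", tail_laplace, " exact:", tail_exact)
```

Even this first-order version is close for moderate samples; the higher-order corrections discussed in the paper shrink the remaining gap, particularly far in the tails.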
4.
Ordinary differential equations (ODEs) are widely used to model dynamic processes in applied sciences such as biology, engineering, and physics. The parameters of these models are usually unknown and are therefore often specified artificially or empirically; a more principled alternative is to estimate them from observed data. In this study, we propose a Bayesian penalized B-spline approach to estimate the parameters and initial values of ODEs used in epidemiology. We evaluate the efficiency of the proposed method in simulations, using a Markov chain Monte Carlo algorithm for the Kermack–McKendrick model. The approach is also illustrated with a real application to the transmission dynamics of the hepatitis C virus in mainland China.
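A stripped-down stand-in for this kind of inference (not the penalized B-spline method of the paper) is a random-walk Metropolis sampler over the two Kermack–McKendrick parameters, with the ODE solved by Euler steps and Gaussian observation error; the parameter values, noise level, and step sizes below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def sir(beta, gamma, s0=0.99, i0=0.01, days=40, dt=0.2):
    # Euler integration of the Kermack-McKendrick SIR equations,
    # recording the infective fraction once per day.
    s, i, out = s0, i0, []
    steps, per = int(round(days / dt)), int(round(1 / dt))
    for step in range(steps):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s, i = s + dt * ds, i + dt * di
        if (step + 1) % per == 0:
            out.append(i)
    return np.array(out)

true_beta, true_gamma = 0.5, 0.2
obs = sir(true_beta, true_gamma) + rng.normal(0, 0.005, 40)

def loglik(theta):
    beta, gamma = theta
    if beta <= 0 or gamma <= 0:
        return -np.inf
    return -np.sum((obs - sir(beta, gamma)) ** 2) / (2 * 0.005 ** 2)

# Random-walk Metropolis over (beta, gamma) with flat priors on (0, inf).
theta, chain = np.array([0.4, 0.3]), []
for it in range(3000):
    prop = theta + rng.normal(0, 0.01, 2)
    if np.log(rng.uniform()) < loglik(prop) - loglik(theta):
        theta = prop
    if it >= 1000:
        chain.append(theta)
chain = np.array(chain)
print("posterior means (beta, gamma):", chain.mean(axis=0))
```

The B-spline device in the paper avoids repeatedly solving the ODE inside the sampler, which is exactly the expensive step in this naive version.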
5.
This paper is concerned with Bayesian estimation of the parameters of the stochastic SIR (Susceptible-Infective-Removed) epidemic model from trajectory data. Specifically, counts of both infectives and susceptibles are assumed to be available on some time grid as the epidemic progresses. A diffusion approximation of the underlying jump process is then used to impute the missing data between each pair of observation times. Provided the imputation time step is small enough, we derive the posterior distributions of the infection and recovery rates using the Milstein scheme. The paper also presents Markov chain Monte Carlo (MCMC) simulations demonstrating that the method provides accurate estimates, as illustrated on synthetic data from the SIR epidemic model and on real data.
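The building block underneath such samplers is the complete-data posterior: once the trajectory between observations is filled in, the infection and recovery rates have conjugate Gamma posteriors. The sketch below illustrates only that step, generating a full event history by Gillespie simulation rather than by Milstein imputation; population size, rates, and priors are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(4)

# Gillespie simulation of a stochastic SIR epidemic (complete event history).
N, beta, gamma = 200, 0.8, 0.3
s, i, t = N - 10, 10, 0.0
inf_events = rec_events = 0
int_si = int_i = 0.0            # integrals of S*I/N and of I over time
while i > 0 and t < 50:
    r_inf = beta * s * i / N
    r_rec = gamma * i
    dt = rng.exponential(1 / (r_inf + r_rec))
    int_si += s * i / N * dt
    int_i += i * dt
    t += dt
    if rng.uniform() < r_inf / (r_inf + r_rec):
        s, i = s - 1, i + 1
        inf_events += 1
    else:
        i -= 1
        rec_events += 1

# With the full trajectory observed, Gamma(0.01, 0.01) priors are conjugate:
# beta | data ~ Gamma(a + #infections, b + int S*I/N dt), similarly for gamma.
beta_draws = rng.gamma(0.01 + inf_events, 1 / (0.01 + int_si), 5000)
gamma_draws = rng.gamma(0.01 + rec_events, 1 / (0.01 + int_i), 5000)
print("beta:", beta_draws.mean(), " gamma:", gamma_draws.mean())
```

The paper's contribution is precisely the hard part omitted here: imputing the unobserved path between grid observations so that this complete-data update can be applied.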
6.
Frailty models are used in survival analysis to account for unobserved heterogeneity in individual risks of disease and death. To analyze bivariate data on related survival times (e.g., matched-pairs experiments, twin or family data), shared frailty models have been suggested. In this article, we introduce shared gamma frailty models with the reversed hazard rate. We develop a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters of the model, present a simulation study comparing the true parameter values with their estimates, and apply the model to a real-life bivariate survival dataset.
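The dependence a shared gamma frailty induces is easy to see by simulation. The sketch below uses the ordinary-hazard formulation with exponential margins (not the reversed-hazard model of the article): pairs share a Gamma frailty with mean 1 and variance θ, and for this model Kendall's τ has the closed form θ/(θ + 2).

```python
import numpy as np

rng = np.random.default_rng(5)

# Shared gamma frailty: each pair shares Z ~ Gamma(1/theta, theta)
# (mean 1, variance theta); given Z the two lifetimes are independent
# exponentials with rate Z.
theta, n = 1.0, 4000
z = rng.gamma(1 / theta, theta, n)
t1 = rng.exponential(1 / z)
t2 = rng.exponential(1 / z)

def kendall_tau(x, y):
    # O(n^2) pair enumeration is slow; subsample random pairs instead.
    idx = rng.choice(n, (20000, 2))
    a, b = idx[:, 0], idx[:, 1]
    a, b = a[a != b], b[a != b]
    return np.mean(np.sign((x[a] - x[b]) * (y[a] - y[b])))

tau_hat = kendall_tau(t1, t2)
print("empirical tau:", tau_hat, " theory:", theta / (theta + 2))
```

This closed-form link between the frailty variance and a rank correlation is one reason the gamma frailty is so convenient in practice.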
7.
Bayesian item response theory models have been widely used in different research fields. They support measuring constructs and modeling relationships between constructs while accounting for complex test situations (e.g., complex sampling designs, missing data, heterogeneous populations). Advantages of this flexible modeling framework, together with powerful simulation-based estimation techniques, are discussed. Furthermore, it is shown how the Bayes factor can be used to test relevant hypotheses in assessment, using data from the College Basic Academic Subjects Examination (CBASE).
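The mechanics of a Bayes factor can be shown in miniature, well short of a full IRT model or the CBASE analysis. In this toy version, H0 fixes a success probability at 0.5 and H1 places a uniform prior on it; both marginal likelihoods are available in closed form, and their ratio is the Bayes factor.

```python
from math import comb

# Bayes factor BF01 for H0: p = 0.5 versus H1: p ~ Uniform(0, 1),
# for k correct responses out of n items (a toy stand-in for an
# assessment hypothesis; all numbers are illustrative).
def bayes_factor_01(k, n):
    marg_h0 = comb(n, k) * 0.5 ** n   # P(data | H0), the binomial pmf at p = 0.5
    marg_h1 = 1 / (n + 1)             # integral of the binomial pmf over uniform p
    return marg_h0 / marg_h1

# Near-chance performance supports H0; clearly above-chance performance does not.
print(bayes_factor_01(52, 100), bayes_factor_01(80, 100))
```

Unlike a p-value, the same quantity can quantify evidence *for* the null, which is what makes Bayes factors attractive for confirmatory hypotheses in assessment.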
8.
Abrupt changes often occur in environmental and financial time series, most often due to human intervention. Change-point analysis is a statistical tool for analyzing sudden changes in observations along a time series. In this paper, we propose a Bayesian extreme-value model for environmental and economic datasets that exhibit change-point behavior, addressing the situation in which more than one change point can occur. Since maxima are analyzed, the distribution within each regime is a generalized extreme value distribution. The change points are unknown and are treated as parameters to be estimated. Simulations of extremes with two change points show that the proposed algorithm recovers the true parameter values and detects the true change points under different configurations; the number of change points must also be estimated, and the Bayesian procedure correctly identifies it in each application. Analyses of environmental and financial data show the importance of accounting for change points: the regime changes brought an increase in return levels, increasing the number of floods in cities along the rivers, and the stock market series required a model with three distinct regimes.
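A heavily simplified version of the detection problem can be sketched without the full Bayesian machinery: one change point instead of several, the Gumbel special case of the GEV, moment-based parameter fits, and a profile scan over candidate split points; the location shift and sample sizes are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(6)

# Annual maxima with one change point: the location shifts from 10 to 14 at
# index 60 (a Gumbel regime change, i.e. GEV with shape parameter 0).
data = np.concatenate([rng.gumbel(10, 2, 60), rng.gumbel(14, 2, 40)])

def gumbel_nll(x):
    # Crude moment estimators: sd = sigma * pi / sqrt(6), mean = mu + 0.5772 * sigma.
    sigma = np.sqrt(6) * x.std() / np.pi
    mu = x.mean() - 0.5772 * sigma
    z = (x - mu) / sigma
    return np.sum(np.log(sigma) + z + np.exp(-z))

# Scan candidate change points; keep the split minimising the total NLL.
cands = list(range(10, 91))
nll = [gumbel_nll(data[:k]) + gumbel_nll(data[k:]) for k in cands]
k_hat = cands[int(np.argmin(nll))]
print("estimated change point:", k_hat)
```

The paper's Bayesian treatment goes well beyond this: full GEV regimes, multiple unknown change points, and posterior uncertainty about both their number and their locations.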
9.
With China's rapid economic growth, the gap between rich and poor regions has been widening; against this background, it is necessary to test whether urban household income in China converges and to explore the underlying drivers. Based on a neoclassical growth model that includes both physical and human capital inputs, and using 1987–2013 data for 31 Chinese provinces, we study the convergence of urban household income with a spatial Durbin model (SDM) estimated by Bayesian MCMC methods. The results show that urban per capita income exhibits significant spatial differences; incomes diverged over 1987–2008 but display β-convergence over 2008–2013 and over the full 1987–2013 period. Physical capital promotes β-convergence of urban income growth, whereas human capital works against it; increasing physical capital investment helps narrow regional income gaps, and the mismatch between the two may explain the gap in China's income growth.
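The core of a β-convergence test is a regression of income growth on initial income, with a negative coefficient indicating that poorer regions catch up. The sketch below is only that cross-sectional OLS core on simulated data; the paper's actual analysis adds spatial spillover terms via the SDM and estimates it with Bayesian MCMC, and the numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated incomes for 31 regions that truly converge: growth declines
# in initial log income with slope (beta) equal to -0.08.
log_y0 = rng.uniform(8, 10, 31)
growth = 0.9 - 0.08 * log_y0 + rng.normal(0, 0.01, 31)

# Beta-convergence regression: growth = alpha + beta * log(y0) + error.
X = np.column_stack([np.ones_like(log_y0), log_y0])
coef, *_ = np.linalg.lstsq(X, growth, rcond=None)
print("beta estimate:", coef[1])  # negative => convergence
```

Ignoring the spatial dependence that the SDM captures can bias exactly this coefficient, which is why the paper does not stop at plain OLS.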
10.
Linear mixed models are widely used to analyze the repeated-measures data that arise in many studies. In most applications, both the random effects and the within-subject errors are assumed to be normally distributed. This can be extremely restrictive, obscuring important features of within- and among-subject variation. Here, quantile regression for linear mixed models is described in the Bayesian framework to carry out robust inference. We also relax the normality assumption on the random effects by using a multivariate skew-normal distribution, which includes the normal distribution as a special case and provides robust estimation in linear mixed models. For posterior inference, we propose a Gibbs sampling algorithm based on a mixture representation of the asymmetric Laplace distribution and the multivariate skew-normal distribution. The procedures are demonstrated with both simulated and real data examples.
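The mixture representation that makes such a Gibbs sampler possible is the Kozumi-Kobayashi decomposition of the asymmetric Laplace error: an ALD(0, 1, p) variable can be written as θv + τ√v·u with v exponential and u standard normal. The sketch below only verifies this identity numerically (its p-quantile must sit at zero); it is not the paper's sampler, and the quantile level is chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(8)

# Kozumi-Kobayashi mixture: for quantile level p, an ALD(0, 1, p) error
# equals theta*v + sqrt(tau2*v)*u with v ~ Exp(1) and u ~ N(0, 1), where
# theta = (1 - 2p) / (p(1 - p)) and tau2 = 2 / (p(1 - p)).
p = 0.25
theta = (1 - 2 * p) / (p * (1 - p))
tau2 = 2 / (p * (1 - p))
v = rng.exponential(1.0, 200000)
u = rng.normal(0, 1, 200000)
e = theta * v + np.sqrt(tau2 * v) * u

# By construction the p-quantile of an ALD(0, 1, p) variable is 0.
q = np.quantile(e, p)
print("empirical 0.25-quantile:", q)
```

Conditioning on v turns the awkward ALD likelihood into a normal one, which is exactly why the full conditionals in the quantile mixed model become tractable Gibbs updates.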