31.
  Total citations: 1, self-citations: 0, citations by others: 1
Using a nonparametric method, we identify intraday price jumps in China's stock index futures market and extend the HAR-CJ volatility model to analyze the underlying mechanism and spillover effects of opening jumps. The findings are as follows: opening jumps on futures trading days comprise overnight uncertainty returns and their continuation into intraday volatility; ordinary opening jumps can be explained by price behavior reflecting market expectations; opening jumps are absorbed by the market faster than the densely distributed intraday jumps; and opening jumps have pronounced spillover effects on trading-day price volatility and on the spot stock market. On this basis, recommendations are offered on how institutional investors and market regulators can strengthen risk management.
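The nonparametric jump identification behind HAR-CJ-style models typically contrasts realized variance with jump-robust bipower variation. A minimal sketch, with simulated intraday returns and a hypothetical injected jump (the series length, noise scale, and jump size below are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated intraday returns: diffusion noise plus one injected jump.
r = rng.normal(0.0, 0.001, size=240)
r[120] += 0.02  # hypothetical opening-style price jump

# Realized variance picks up both diffusion and jumps ...
rv = np.sum(r ** 2)
# ... while bipower variation is robust to jumps.
bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
# Nonparametric jump component, as in HAR-CJ-style decompositions.
jump = max(rv - bv, 0.0)
```

With the injected jump, `rv` exceeds `bv` and the difference isolates the discontinuous component.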
32.
Concordance correlation coefficient (CCC) is one of the most popular scaled indices used to evaluate agreement. Most commonly, it is used under the assumption that the data are normally distributed. This assumption, however, does not apply to skewed data sets. While methods for the estimation of the CCC of skewed data sets have been introduced and studied, the Bayesian approach and its comparison with the previous methods have been lacking. In this study, we propose a Bayesian method for the estimation of the CCC of skewed data sets and compare it with the best method previously investigated. The proposed method has certain advantages. It tends to outperform the best method studied before when the variation of the data comes mainly from the random subject effect rather than from error. Furthermore, it allows for greater flexibility in application by enabling the incorporation of missing data, confounding covariates, and replications, which was not considered previously. The superiority of the new approach is demonstrated using simulation as well as real-life biomarker data sets from an electroencephalography clinical study. The implementation of the Bayesian method is available through the Comprehensive R Archive Network. Copyright © 2015 John Wiley & Sons, Ltd.
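For reference, the sample version of the quantity being estimated, Lin's CCC, can be sketched in a few lines (the function name and toy data are ours, not from the paper):

```python
import numpy as np

def ccc(x, y):
    """Sample concordance correlation coefficient (Lin, 1989)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # biased (1/n) variances
    sxy = ((x - mx) * (y - my)).mean()   # biased covariance
    return 2.0 * sxy / (vx + vy + (mx - my) ** 2)

x = np.array([1.0, 2.0, 3.0, 4.0])
agreement = ccc(x, x)  # perfect agreement gives a CCC of 1
```

The squared mean difference in the denominator is what penalizes location shifts, distinguishing the CCC from the Pearson correlation.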
33.
In recent years, there has been considerable interest in regression models based on zero-inflated distributions. These models are commonly encountered in many disciplines, such as medicine, public health, and environmental sciences, among others. The zero-inflated Poisson (ZIP) model has typically been considered for these types of problems. However, the ZIP model can fail if the non-zero counts are overdispersed relative to the Poisson distribution, in which case the zero-inflated negative binomial (ZINB) model may be more appropriate. In this paper, we present a Bayesian approach for fitting the ZINB regression model. This model assumes that an observed zero may come from a point mass distribution at zero or from the negative binomial component. The likelihood function is used not only to compute Bayesian model selection measures, but also to develop Bayesian case-deletion influence diagnostics based on q-divergence measures. The approach can be easily implemented using standard Bayesian software, such as WinBUGS. The performance of the proposed method is evaluated in a simulation study. Further, a real data set is analyzed, where we show that ZINB regression models seem to fit the data better than the Poisson counterpart.
34.
In this article, to reduce the computational load of Bayesian variable selection, we used a variant of reversible jump Markov chain Monte Carlo methods, the Holmes and Held (HH) algorithm, to sample model index variables in logistic mixed models involving a large number of explanatory variables. Furthermore, we proposed a simple proposal distribution for the model index variables, and used a simulation study and a real example to compare the performance of the HH algorithm under our proposed and existing proposal distributions. The results show that the HH algorithm with our proposed proposal distribution is a computationally efficient and reliable selection method.
35.
Information available before unblinding regarding the success of confirmatory clinical trials is highly uncertain. Current techniques that use point estimates of auxiliary parameters to estimate the expected blinded sample size (i) fail to describe the range of likely sample sizes obtained after the anticipated data are observed, and (ii) fail to adjust to a changing patient population. Sequential MCMC-based algorithms are implemented for the purpose of sample size adjustment. The uncertainty arising from clinical trials is characterized by filtering later auxiliary parameters through their earlier counterparts and employing posterior distributions to estimate sample size and power. The use of approximate expected power estimates to determine the required additional sample size is closely related to techniques employing Simple Adjustments or the EM algorithm. In contrast, our proposed methodology provides intervals for the expected sample size using the posterior distribution of the auxiliary parameters. Future decisions about additional subjects are better informed by our ability to account for subject response heterogeneity over time. We apply the proposed methodologies to a depression trial. Our proposed blinded procedures should be considered for most studies owing to their ease of implementation.
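The idea of reporting an interval for the required sample size, rather than a point estimate, can be sketched by pushing posterior draws of the nuisance variance through a standard two-sample normal sample-size formula. Everything below (the inverse-gamma posterior, the effect size, and the operating characteristics) is a hypothetical illustration, not the paper's depression-trial setup:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def n_per_arm(sigma2, delta, alpha=0.05, power=0.9):
    """Classical two-sample normal sample-size formula, applied per draw."""
    z = stats.norm.ppf(1.0 - alpha / 2.0) + stats.norm.ppf(power)
    return np.ceil(2.0 * z ** 2 * sigma2 / delta ** 2)

# Hypothetical posterior draws of the nuisance variance (inverse-gamma).
sigma2_draws = 1.0 / rng.gamma(shape=50.0, scale=1.0 / 45.0, size=5000)
sizes = n_per_arm(sigma2_draws, delta=0.5)

# An interval for the required sample size, not just a point estimate.
lo, hi = np.percentile(sizes, [2.5, 97.5])
```

Each posterior draw of the variance maps to a required size, so the spread of `sizes` directly quantifies the uncertainty that a point estimate of the variance would hide.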
36.
In this study, we propose a prior on restricted Vector Autoregressive (VAR) models. The prior setting permits efficient Markov Chain Monte Carlo (MCMC) sampling from the posterior of the VAR parameters and estimation of the Bayes factor. Numerical simulations show that when the sample size is small, the Bayes factor is more effective in selecting the correct model than the commonly used Schwarz criterion. We conduct Bayesian hypothesis testing of VAR models on the macroeconomic, state-, and sector-specific effects of employment growth.
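As a point of comparison for the Bayes factor, the Schwarz criterion mentioned above can be computed for candidate VAR lag orders. The OLS fit below (no intercept, simulated bivariate VAR(1)) is a minimal sketch, not the paper's restricted specification:

```python
import numpy as np

def var_bic(Y, p):
    """Schwarz criterion for a VAR(p) fitted by OLS (no intercept, for brevity)."""
    T, k = Y.shape
    X = np.hstack([Y[p - i - 1:T - i - 1] for i in range(p)])  # lags 1..p
    Z = Y[p:]
    B, *_ = np.linalg.lstsq(X, Z, rcond=None)
    resid = Z - X @ B
    n = T - p
    Sigma = resid.T @ resid / n
    return n * np.log(np.linalg.det(Sigma)) + (k * k * p) * np.log(n)

# Simulated bivariate VAR(1): y_t = 0.5 * y_{t-1} + e_t
rng = np.random.default_rng(1)
Y = np.zeros((400, 2))
for t in range(1, 400):
    Y[t] = 0.5 * Y[t - 1] + rng.standard_normal(2)

best_p = min(range(1, 4), key=lambda p: var_bic(Y, p))  # minimize BIC
```

With 400 observations the criterion recovers the true lag order; the paper's point is that for small samples the Bayes factor tends to do better.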
37.
The marginal likelihood can be notoriously difficult to compute, particularly in high-dimensional problems. Chib and Jeliazkov employed the local reversibility of the Metropolis–Hastings algorithm to construct an estimator in models where full conditional densities are not available analytically. The estimator is free of distributional assumptions and is directly linked to the simulation algorithm. However, it generally requires a sequence of reduced Markov chain Monte Carlo runs, which makes the method computationally demanding, especially when the parameter space is large. In this article, we study the implementation of this estimator for latent variable models in which the responses are independent given the latent variables (conditional or local independence). This property is employed in the construction of a multi-block Metropolis-within-Gibbs algorithm that allows the estimator to be computed in a single run, regardless of the dimensionality of the parameter space. The counterpart one-block algorithm is also considered, and the difference between the two approaches is pointed out. The paper closes with illustrations of the estimator on simulated and real-life data sets.
38.
In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally infeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors, with an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for the model parameters within reasonable time constraints. The ABC algorithm uses simulations of ‘subsamples’ from the assumed data-generating model, as well as a so-called ‘early-rejection’ strategy, to speed up computations in the ABC-MCMC sampler. Subsampling does not seem to degrade the quality of the inferential results for the considered applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein data set. The suggested methodology is fairly general and not limited to the exemplified model and data.
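The paper's sampler is ABC-MCMC with early rejection; the core accept/reject idea is easiest to see in the simpler rejection-ABC variant. The Gaussian toy model, uniform prior, and sample-mean summary below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data-generating model: observations ~ N(theta, 1), true theta = 2.
obs = rng.normal(2.0, 1.0, size=100)

def abc_rejection(obs, n_draws=20000, eps=0.05):
    """Rejection ABC: keep prior draws whose simulated summary statistic
    lands within eps of the observed one (here, the sample mean)."""
    s_obs = obs.mean()
    theta = rng.uniform(-5.0, 5.0, size=n_draws)   # prior draws
    # Simulate the sample mean of len(obs) points directly, for speed.
    s_sim = rng.normal(theta, 1.0 / np.sqrt(len(obs)))
    return theta[np.abs(s_sim - s_obs) < eps]

post = abc_rejection(obs)  # approximate posterior sample for theta
```

ABC-MCMC replaces the blind prior draws with a Metropolis chain over theta, and early rejection aborts a simulation as soon as the proposal is sure to be rejected; both are efficiency refinements of this same comparison of simulated and observed summaries.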
39.
To effectively reveal the relationship between returns and volatility of the Shanghai Composite Index over different trading horizons, this paper applies the maximal overlap discrete wavelet transform (MODWT) to decompose the return series across trading horizons and fits an SV-M model at each scale, examining the return-volatility relationship and how the other parameters vary with the trading horizon. The empirical results show that the return-volatility relationship differs markedly across trading horizons, and that as the horizon lengthens, both the autocorrelation of returns and the persistence of volatility gradually strengthen.
40.
Any continuous bivariate distribution can be expressed in terms of its margins and a unique copula. In the case of extreme‐value distributions, the copula is characterized by a dependence function while each margin depends on three parameters. The authors propose a Bayesian approach for the simultaneous estimation of the dependence function and the parameters defining the margins. They describe a nonparametric model for the dependence function and a reversible jump Markov chain Monte Carlo algorithm for the computation of the Bayesian estimator. They show through simulations that their estimator has a smaller mean integrated squared error than classical nonparametric estimators, especially in small samples. They illustrate their approach on a hydrological data set.
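The structure the authors exploit, a bivariate extreme-value copula characterized by a Pickands dependence function A, can be illustrated with the parametric Gumbel (logistic) family; the nonparametric A of the paper is replaced here by a known parametric form purely for the sake of a runnable example:

```python
import numpy as np

def pickands_gumbel(t, theta):
    """Pickands dependence function of the Gumbel (logistic) EV copula."""
    return (t ** theta + (1.0 - t) ** theta) ** (1.0 / theta)

def ev_copula(u, v, A, *args):
    """Extreme-value copula C(u, v) = exp(log(uv) * A(log(v) / log(uv)))."""
    s = np.log(u) + np.log(v)
    return np.exp(s * A(np.log(v) / s, *args))

# theta = 1 gives A(t) = 1, i.e. the independence copula C(u, v) = u * v.
c_indep = ev_copula(0.3, 0.7, pickands_gumbel, 1.0)
```

Any valid A is convex and satisfies max(t, 1 - t) <= A(t) <= 1, which is the constraint a nonparametric Bayesian model for the dependence function has to respect.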