41.
In an online prediction context, the authors introduce a new class of mongrel criteria that allow candidate models to be weighted and their predictions combined on the basis of both model-based and empirical measures of their performance. They present simulation results showing that model averaging using the mongrel-derived weights leads, in small samples, to predictions that are more accurate than those obtained by Bayesian weight updating, provided that none of the candidate models is too distant from the data generator.
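As a rough illustration of weighting candidate models by both model-based and empirical measures of performance, the Python sketch below updates log weights for two toy one-step predictors using a half-and-half blend of predictive log density and negative squared error. The blend, the predictors, and the assumed forecast standard deviation are illustrative assumptions; the paper's mongrel criteria themselves are not reproduced.

```python
# A heavily simplified sketch: two toy one-step predictors are combined online, and
# each model's log weight is updated with a blend of a model-based score (predictive
# log density under an assumed Gaussian forecast error) and an empirical score
# (negative squared error). The half-and-half blend, the predictors, and sigma are
# illustrative assumptions, not the paper's mongrel criteria.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
y = np.cumsum(rng.normal(size=200))          # observed series (toy random walk)

def one_step_forecasts(history):
    # Candidate predictors: a random walk and a mean-reverting (damped) model.
    return np.array([history[-1], 0.8 * history[-1]])

log_w = np.zeros(2)                          # log weights of the two models
sigma = 1.0                                  # assumed predictive standard deviation
sq_errors = []
for t in range(1, len(y)):
    f = one_step_forecasts(y[:t])
    w = np.exp(log_w - log_w.max()); w /= w.sum()
    sq_errors.append((y[t] - w @ f) ** 2)    # error of the model-averaged forecast
    model_score = stats.norm.logpdf(y[t], loc=f, scale=sigma)   # model-based measure
    empirical_score = -(y[t] - f) ** 2                          # empirical measure
    log_w += 0.5 * model_score + 0.5 * empirical_score          # blended update

w = np.exp(log_w - log_w.max()); w /= w.sum()
print("final weights (random walk, damped):", w.round(3))
print("mean squared one-step error of the combination:", round(float(np.mean(sq_errors)), 3))
```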
42.
A two-degree-of-freedom suspension system containing fractional-order terms is studied. Analytical solutions for the system response under harmonic excitation are obtained using an improved averaging method, the Laplace transform method, the harmonic balance method and a complex frequency-domain method. The analytical and numerical solutions are compared and agree to high accuracy, confirming the validity of the analytical solutions. The influence of the fractional-order parameters on the dynamic behaviour of the suspension system is analysed; it is found that the steady-state response amplitude of the suspension system with fractional-order damping can be reduced substantially, greatly improving its dynamic performance.
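For context, the sketch below numerically integrates a single-degree-of-freedom oscillator with a fractional-order damping term, m x'' + c D^alpha(x) + k x = F cos(wt), using the Gruenwald-Letnikov approximation of the fractional derivative. It is a simplified stand-in (a single mass rather than the paper's two-degree-of-freedom suspension, with all parameter values assumed) meant only to illustrate the kind of numerical solution against which the analytical results are checked.

```python
# A single-degree-of-freedom stand-in for a suspension with fractional-order damping:
#   m*x'' + c*D^alpha(x) + k*x = F*cos(w*t)
# The fractional derivative is approximated with Gruenwald-Letnikov weights and the
# motion is integrated with a semi-implicit Euler scheme. All parameter values are
# assumed for illustration; the paper's system has two degrees of freedom.
import numpy as np

m, c, k = 1.0, 0.4, 4.0          # mass, fractional damping coefficient, stiffness
alpha = 0.5                      # fractional order of the damping term
F, w = 1.0, 1.5                  # harmonic excitation amplitude and frequency
h, steps = 0.01, 6000            # time step and number of steps

# Gruenwald-Letnikov weights: g_0 = 1, g_j = g_{j-1} * (1 - (alpha + 1) / j).
g = np.empty(steps + 1)
g[0] = 1.0
for j in range(1, steps + 1):
    g[j] = g[j - 1] * (1.0 - (alpha + 1.0) / j)

x = np.zeros(steps + 1)
v = 0.0
for n in range(steps):
    t = n * h
    frac = h ** (-alpha) * np.dot(g[:n + 1], x[n::-1])   # D^alpha x at step n
    a = (F * np.cos(w * t) - k * x[n] - c * frac) / m
    v += a * h                   # semi-implicit Euler: update velocity, then position
    x[n + 1] = x[n] + v * h

print("approximate steady-state amplitude:", round(float(np.max(np.abs(x[-steps // 4:]))), 4))
```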
43.
This paper presents an extension of mean-squared forecast error (MSFE) model averaging for combining linear regression models estimated on samples of different lengths. The proposed method is a preferable alternative both to selecting a single best model by criteria such as the Bayesian information criterion (BIC), the Akaike information criterion (AIC), F-statistics and mean-squared error (MSE), and to Bayesian model averaging (BMA) and the naïve simple forecast average. It is designed to handle possibly non-nested models with different numbers of observations and selects forecast weights by minimizing an unbiased estimator of the MSFE. The method also yields forecast confidence intervals at a given significance level, which is not possible with other model averaging methods. In addition, out-of-sample simulation and empirical testing demonstrate the efficiency of this kind of averaging when forecasting economic processes.
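A minimal sketch of choosing combination weights by minimizing an estimated MSFE is given below: two candidate regressions are fitted on samples of different lengths and their forecasts are combined with simplex-constrained weights that minimize squared forecast error over a pseudo-out-of-sample window. The hold-out construction and the specific MSFE estimate are assumptions for illustration, not the unbiased estimator derived in the paper.

```python
# A minimal sketch of MSFE-minimizing forecast combination (an illustration, not the
# paper's unbiased MSFE estimator): two OLS models estimated on samples of different
# lengths produce forecasts on a pseudo-out-of-sample window, and simplex-constrained
# combination weights are chosen to minimize the squared forecast error there.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.8 * x1 - 0.5 * x2 + rng.normal(scale=0.7, size=n)

def ols_forecast(X_train, y_train, X_eval):
    beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    return X_eval @ beta

split = 150                                   # estimation sample / evaluation window
X_full = np.column_stack([np.ones(n), x1, x2])
X_small = np.column_stack([np.ones(n), x1])

# Model 1 uses the full estimation sample; model 2 uses only the last 60 observations
# and a smaller regressor set, mimicking models fitted on data of different lengths.
f1 = ols_forecast(X_full[:split], y[:split], X_full[split:])
f2 = ols_forecast(X_small[split - 60:split], y[split - 60:split], X_small[split:])
F = np.column_stack([f1, f2])                 # candidate forecasts on the window
y_eval = y[split:]

def estimated_msfe(w):
    return np.mean((y_eval - F @ w) ** 2)

res = minimize(estimated_msfe, x0=np.array([0.5, 0.5]),
               bounds=[(0.0, 1.0)] * 2,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print("combination weights:", res.x.round(3), " estimated MSFE:", round(res.fun, 4))
```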
44.
We address the task of choosing prior weights for models that are to be used for weighted model averaging. Models that are very similar should usually be given smaller weights than models that are quite distinct. Otherwise, the importance of a model in the weighted average could be increased by augmenting the set of models with duplicates of the model or virtual duplicates of it. Similarly, the importance of a particular model feature (a certain covariate, say) could be exaggerated by including many models with that feature. Ways of forming a correlation matrix that reflects the similarity between models are suggested. Then, weighting schemes are proposed that assign prior weights to models on the basis of this matrix. The weighting schemes give smaller weights to models that are more highly correlated. Other desirable properties of a weighting scheme are identified, and we examine the extent to which these properties are held by the proposed methods. The weighting schemes are applied to real data, and prior weights, posterior weights and Bayesian model averages are determined. For these data, empirical Bayes methods were used to form the correlation matrices that yield the prior weights. Predictive variances are examined, as empirical Bayes methods can result in unrealistically small variances.
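The sketch below illustrates the underlying idea of deriving prior model weights from a between-model correlation matrix, so that near-duplicate models share weight. The particular rule used (weights inversely proportional to each model's total correlation with the others) is an illustrative assumption and not one of the schemes proposed in the paper.

```python
# A minimal sketch of deriving prior model weights from a between-model correlation
# matrix. The rule below (weights inversely proportional to each model's total
# correlation with the other models) is an illustrative assumption, not one of the
# weighting schemes proposed in the paper.
import numpy as np

def prior_weights_from_correlation(R):
    """R: (m, m) correlation matrix between model predictions; returns prior weights."""
    total_similarity = R.sum(axis=1)          # includes each model's self-correlation of 1
    w = 1.0 / total_similarity                # more correlated models get smaller weight
    return w / w.sum()

# Three candidate models: models 0 and 1 are near-duplicates, model 2 is distinct.
R = np.array([[1.00, 0.95, 0.20],
              [0.95, 1.00, 0.25],
              [0.20, 0.25, 1.00]])
print(prior_weights_from_correlation(R).round(3))
# The two near-duplicates split weight between them instead of doubling their influence.
```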
45.
46.
We develop a hierarchical Gaussian process model for forecasting and inference of functional time series data. Unlike existing methods, our approach is especially suited for sparsely or irregularly sampled curves and for curves sampled with nonnegligible measurement error. The latent process is dynamically modeled as a functional autoregression (FAR) with Gaussian process innovations. We propose a fully nonparametric dynamic functional factor model for the dynamic innovation process, with broader applicability and improved computational efficiency over standard Gaussian process models. We prove finite-sample forecasting and interpolation optimality properties of the proposed model, which remain valid with the Gaussian assumption relaxed. An efficient Gibbs sampling algorithm is developed for estimation, inference, and forecasting, with extensions for FAR(p) models with model averaging over the lag p. Extensive simulations demonstrate substantial improvements in forecasting performance and recovery of the autoregressive surface over competing methods, especially under sparse designs. We apply the proposed methods to forecast nominal and real yield curves using daily U.S. data. Real yields are observed more sparsely than nominal yields, yet the proposed methods are highly competitive in both settings. Supplementary materials, including R code and the yield curve data, are available online.
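As a simplified illustration of the functional autoregression structure (without the hierarchical Gaussian process prior, measurement-error model, or Gibbs sampler), the sketch below expands each curve in a small Fourier basis, fits a least-squares VAR(1) to the basis coefficients, and produces a one-step-ahead forecast of the next curve. The data, basis, and noise levels are assumptions for illustration.

```python
# A simplified FAR(1) sketch: expand each curve in a small Fourier basis, fit a
# least-squares VAR(1) to the basis coefficients, and forecast the next curve. The
# hierarchical Gaussian process prior, measurement-error model, and Gibbs sampler of
# the paper are omitted; data, basis, and noise levels are assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 50)                       # common observation grid
B = np.column_stack([np.ones_like(grid),
                     np.sin(2 * np.pi * grid),
                     np.cos(2 * np.pi * grid)])        # (50, 3) Fourier basis

# Simulate T noisy curves whose basis coefficients follow a stationary VAR(1).
T, k = 120, B.shape[1]
A_true = 0.6 * np.eye(k)
coefs = np.zeros((T, k))
for t in range(1, T):
    coefs[t] = coefs[t - 1] @ A_true.T + rng.normal(scale=0.3, size=k)
curves = coefs @ B.T + rng.normal(scale=0.05, size=(T, grid.size))

# 1. Project each observed curve onto the basis (least squares).
C_hat = np.linalg.lstsq(B, curves.T, rcond=None)[0].T  # (T, k) estimated coefficients

# 2. Fit the autoregressive operator on the coefficients: C_t ~ C_{t-1} A'.
A_hat = np.linalg.lstsq(C_hat[:-1], C_hat[1:], rcond=None)[0].T

# 3. One-step-ahead forecast of the next curve on the grid.
forecast_curve = B @ (A_hat @ C_hat[-1])
print("forecast at the grid midpoint:", round(float(forecast_curve[grid.size // 2]), 4))
```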
47.
Wavelet shrinkage estimation is an increasingly popular method for signal denoising and compression. Although Bayes estimators can provide excellent mean-squared error (MSE) properties, the selection of an effective prior is a difficult task. To address this problem, we propose empirical Bayes (EB) prior selection methods for various error distributions including the normal and the heavier-tailed Student t-distributions. Under such EB prior distributions, we obtain threshold shrinkage estimators based on model selection, and multiple-shrinkage estimators based on model averaging. These EB estimators are seen to be computationally competitive with standard classical thresholding methods, and to be robust to outliers in both the data and wavelet domains. Simulated and real examples are used to illustrate the flexibility and improved MSE performance of these methods in a wide variety of settings.
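A minimal wavelet-shrinkage sketch using PyWavelets is given below. It uses the classical universal threshold with a MAD-based noise estimate rather than the empirical Bayes prior selection described above; the threshold rule, wavelet family, and test signal are assumptions used only to illustrate the shrink-then-reconstruct workflow.

```python
# A wavelet shrinkage sketch using PyWavelets (pip install pywavelets). The threshold
# here is the classical universal threshold with a MAD-based noise estimate, standing
# in for the empirical Bayes prior selection described above; the wavelet family and
# the test signal are likewise assumptions for illustration.
import numpy as np
import pywt

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 1024)
signal = np.piecewise(t, [t < 0.3, (t >= 0.3) & (t < 0.7), t >= 0.7], [0.0, 1.0, -0.5])
noisy = signal + rng.normal(scale=0.15, size=t.size)

coeffs = pywt.wavedec(noisy, 'db4', level=5)

# Noise level from the finest-scale detail coefficients (median absolute deviation).
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thresh = sigma * np.sqrt(2.0 * np.log(noisy.size))         # universal threshold

shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(shrunk, 'db4')[:noisy.size]

print("MSE before shrinkage:", round(float(np.mean((noisy - signal) ** 2)), 5))
print("MSE after shrinkage: ", round(float(np.mean((denoised - signal) ** 2)), 5))
```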
48.
The multivariate regression model is considered with p regressors. A latent vector with p binary entries serves to identify one of two types of regression coefficients: those close to 0 and those not. Specializing our general distributional setting to the linear model with Gaussian errors and using natural conjugate prior distributions, we derive the marginal posterior distribution of the binary latent vector. Fast algorithms aid its direct computation, and in high dimensions these are supplemented by a Markov chain Monte Carlo approach to sampling from the known posterior distribution. Problems with hundreds of regressor variables become quite feasible. We give a simple method of assigning the hyperparameters of the prior distribution. The posterior predictive distribution is derived and the approach illustrated on compositional analysis of data involving three sugars with 160 near infrared absorbances as regressors.
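For small p, the posterior over the latent binary inclusion vector can be computed by brute force, which the sketch below illustrates: every inclusion vector is scored with a Zellner g-prior Bayes factor against the intercept-only model, giving posterior model and inclusion probabilities. The g-prior marginal likelihood is an assumption standing in for the paper's natural conjugate prior, and the exhaustive enumeration replaces the fast algorithms and MCMC needed in high dimensions.

```python
# A brute-force sketch of the latent binary inclusion vector for small p: every
# gamma in {0,1}^p is scored with a Zellner g-prior Bayes factor against the
# intercept-only model (an assumed prior standing in for the paper's natural conjugate
# prior), giving posterior model and inclusion probabilities under a uniform model prior.
# The paper's fast algorithms and MCMC sampler for hundreds of regressors are not shown.
import itertools
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 4
X = rng.normal(size=(n, p))
y = 2.0 + 1.5 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(size=n)
g = float(n)                                    # unit-information g-prior

def r_squared(X_active, y):
    Xd = np.column_stack([np.ones(len(y)), X_active]) if X_active.size else np.ones((len(y), 1))
    resid = y - Xd @ np.linalg.lstsq(Xd, y, rcond=None)[0]
    return 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)

models, log_bf = [], []
for gamma in itertools.product([0, 1], repeat=p):
    idx = [j for j in range(p) if gamma[j]]
    r2 = r_squared(X[:, idx], y)
    models.append(gamma)
    log_bf.append(0.5 * (n - 1 - len(idx)) * np.log1p(g)
                  - 0.5 * (n - 1) * np.log1p(g * (1.0 - r2)))

log_bf = np.array(log_bf)
post = np.exp(log_bf - log_bf.max()); post /= post.sum()
inclusion = np.array([post[[m[j] == 1 for m in models]].sum() for j in range(p)])
print("highest-probability model:", models[int(np.argmax(post))])
print("posterior inclusion probabilities:", inclusion.round(3))
```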
49.
The theoretical price of a financial option is given by the expectation of its discounted expiry time payoff. The computation of this expectation depends on the density of the value of the underlying instrument at expiry time. This density depends on both the parametric model assumed for the behaviour of the underlying and the values of parameters within the model, such as volatility. However, neither the model nor the parameter values are known. Common practice when pricing options is to assume a specific model, such as geometric Brownian motion, and to use point estimates of the model parameters, thereby precisely defining a density function. We explicitly acknowledge the uncertainty of model and parameters by constructing the predictive density of the underlying as an average of model predictive densities, weighted by each model's posterior probability. A model's predictive density is constructed by integrating its transition density function over the posterior distribution of its parameters. This is an extension of Bayesian model averaging. Sampling importance-resampling and Monte Carlo algorithms implement the computation. The advantage of this method is that rather than falsely assuming the model and parameter values are known, inherent ignorance is acknowledged and dealt with in a mathematically logical manner, which utilises all information from past and current observations to generate and update option prices. Moreover, point estimates for parameters are unnecessary. We use this method to price a European call option on a share index.
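The sketch below illustrates the parameter-uncertainty part of this idea for a single model: a European call is priced as the Monte Carlo average of discounted payoffs over posterior draws of the volatility under geometric Brownian motion. The full approach would also average over competing models weighted by their posterior probabilities; the posterior for the volatility and all contract parameters here are assumed for illustration.

```python
# A sketch of the parameter-uncertainty part of the approach for a single model:
# the call price is the Monte Carlo average of discounted payoffs over posterior draws
# of the volatility under geometric Brownian motion. The full method would also average
# over competing models weighted by their posterior probabilities. The posterior for
# sigma and all contract parameters below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(4)
S0, K, r, T = 100.0, 105.0, 0.02, 0.5            # spot, strike, risk-free rate, maturity

# Assumed posterior for annualised volatility: lognormal centred near 20%.
sigma_draws = rng.lognormal(mean=np.log(0.20), sigma=0.10, size=2000)

prices = []
for sigma in sigma_draws:
    z = rng.normal(size=500)                     # terminal-price paths for this draw
    ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
    prices.append(np.exp(-r * T) * np.maximum(ST - K, 0.0).mean())

print("predictive call price:", round(float(np.mean(prices)), 3))
```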
50.
Summary. Tumour multiplicity is a frequently measured phenotype in animal studies of cancer biology. Poisson variation of this measurement represents a biological and statistical reference point that is usually violated, even in highly controlled experiments, owing to sources of variation in the stochastic process of tumour formation. A recent experiment on murine intestinal tumours presented conditions which seem to generate Poisson-distributed tumour counts. If valid, this would support a claim about mechanisms by which the adenomatous polyposis coli gene is inactivated during tumour initiation. In considering hypothesis testing strategies, model choice and Bayesian approaches, we quantify the positive evidence favouring Poisson variation in this experiment. Statistical techniques used include likelihood ratio testing, the Bayes and Akaike information criteria, negative binomial modelling, reversible jump Markov chain Monte Carlo methods and posterior predictive checking. The posterior approximation that is based on the Bayes information criterion is found to be quite accurate in this small n case-study.
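As a small illustration of one of the comparisons mentioned above, the sketch below fits Poisson and negative binomial models to a vector of counts by maximum likelihood and compares their BIC values. The simulated counts and the optimization setup are assumptions; the likelihood ratio test, reversible jump MCMC, and posterior predictive checks used in the paper are not reproduced.

```python
# A sketch of one of the model comparisons described above: Poisson and negative
# binomial models are fitted to a vector of counts by maximum likelihood and compared
# by BIC. The counts are simulated for illustration; the likelihood ratio test,
# reversible jump MCMC, and posterior predictive checks of the paper are not reproduced.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(5)
counts = rng.poisson(lam=6.0, size=80)           # illustrative tumour-count-like data

# Poisson: the MLE of the rate is the sample mean (one parameter).
lam = counts.mean()
bic_pois = -2.0 * stats.poisson.logpmf(counts, lam).sum() + 1.0 * np.log(counts.size)

# Negative binomial parameterised by (size r, mean mu) with p = r / (r + mu) (two parameters).
def nb_negloglik(params):
    r, mu = np.exp(params)                       # optimize on the log scale
    return -stats.nbinom.logpmf(counts, r, r / (r + mu)).sum()

res = optimize.minimize(nb_negloglik, x0=[np.log(10.0), np.log(counts.mean())],
                        bounds=[(-5.0, 10.0), (-5.0, 5.0)])
bic_nb = 2.0 * res.fun + 2.0 * np.log(counts.size)

print("BIC Poisson:", round(float(bic_pois), 2), " BIC negative binomial:", round(float(bic_nb), 2))
print("Preferred by BIC:", "Poisson" if bic_pois < bic_nb else "negative binomial")
```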