1.
Bayesian networks for imputation   (total citations: 1; self-citations: 0; citations by others: 1)
Summary. Bayesian networks are particularly useful for dealing with high-dimensional statistical problems. They allow a reduction in the complexity of the phenomenon under study by representing joint relationships between a set of variables through conditional relationships between subsets of these variables. Following Thibaudeau and Winkler, we use Bayesian networks for imputing missing values. This method is introduced to deal with the problem of the consistency of imputed values: preservation of statistical relationships between variables (statistical consistency) and preservation of logical constraints in the data (logical consistency). We perform some experiments on a subset of anonymous individual records from the 1991 UK population census.
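A minimal sketch of the core idea, assuming a toy two-variable network A → B with binary variables (the variable names, the network, and the fitting by complete-case frequencies are illustrative assumptions, not the authors' census application):

```python
import numpy as np

def fit_cpt(a, b):
    # Estimate the conditional probability table P(B=1 | A) from the
    # complete cases, i.e. rows where b is observed.
    cpt = {}
    for av in (0, 1):
        mask = (a == av) & ~np.isnan(b)
        cpt[av] = b[mask].mean()  # P(B=1 | A=av)
    return cpt

def impute(a, b, cpt, rng):
    # Draw each missing B from its conditional distribution given A, so
    # imputed values respect the A-B relationship (statistical consistency)
    # rather than, say, a marginal mean fill.
    b = b.copy()
    for i in np.where(np.isnan(b))[0]:
        b[i] = float(rng.random() < cpt[a[i]])
    return b
```

In a real network the same draw-from-the-conditional step is applied at each node given its parents, which is what lets the imputation preserve the joint relationships the abstract describes.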
2.
3.
Many areas of statistical modeling are plagued by the “curse of dimensionality,” in which there are more variables than observations. This is especially true when developing functional regression models where the independent dataset is some type of spectral decomposition, such as data from near-infrared spectroscopy. While we could develop a very complex model by simply taking enough samples (such that n > p), this could prove impossible or prohibitively expensive. In addition, a regression model developed like this could turn out to be highly inefficient, as spectral data usually exhibit high multicollinearity. In this article, we propose a two-part algorithm for selecting an effective and efficient functional regression model. Our algorithm begins by evaluating a subset of discrete wavelet transformations, allowing for variation in both wavelet and filter number. Next, we perform an intermediate processing step to remove variables with low correlation to the response data. Finally, we use the genetic algorithm to perform a stochastic search through the subset regression model space, driven by an information-theoretic objective function. We allow our algorithm to develop the regression model for each response variable independently, so as to optimally model each variable. We demonstrate our method on the familiar biscuit dough dataset, which has been used in a similar context by several researchers. Our results demonstrate both the flexibility and the power of our algorithm. For each response variable, a different subset model is selected, and different wavelet transformations are used. The models developed by our algorithm show an improvement, as measured by lower mean error, over results in the published literature.
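The screening and search steps can be sketched as follows. This is a simplified illustration under stated assumptions: the wavelet-decomposition stage is omitted (the columns of `X` stand for wavelet coefficients), AIC stands in for the information-theoretic objective, and a random subset search replaces the genetic algorithm; all function names and thresholds are illustrative.

```python
import numpy as np

def filter_by_correlation(X, y, threshold=0.1):
    # Intermediate screening step: keep only columns whose absolute
    # Pearson correlation with the response exceeds the threshold.
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                     for j in range(X.shape[1])])
    keep = np.where(corr > threshold)[0]
    return X[:, keep], keep

def aic_of_subset(X, y, subset):
    # Ordinary least squares on the chosen columns; AIC plays the role
    # of the information-theoretic objective driving the search.
    Xs = np.column_stack([np.ones(len(y)), X[:, subset]])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    n, k = len(y), Xs.shape[1]
    return n * np.log(float(resid @ resid) / n) + 2 * k

def random_subset_search(X, y, n_iter=200, max_size=5, seed=0):
    # Simplified stand-in for the genetic algorithm: a seeded random
    # search over small subsets, minimising AIC.
    rng = np.random.default_rng(seed)
    best_subset, best_aic = None, np.inf
    for _ in range(n_iter):
        size = int(rng.integers(1, max_size + 1))
        subset = rng.choice(X.shape[1], size=size, replace=False)
        score = aic_of_subset(X, y, subset)
        if score < best_aic:
            best_subset, best_aic = sorted(subset), score
    return best_subset, best_aic
```

Running the search once per response variable mirrors the article's choice of modeling each response independently, at the cost of refitting the whole pipeline for every response.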
4.
Understanding patterns in the frequency of extreme natural events, such as earthquakes, is important as it helps in the prediction of their future occurrence and hence provides better civil protection. Distributions describing these events are known to be heavy-tailed and positively skewed, making standard distributions unsuitable for modelling the frequency of such events. The Birnbaum–Saunders distribution and its extreme value version have been widely studied and applied due to their attractive properties. We derive L-moment equations for these distributions and propose novel methods for parameter estimation, goodness-of-fit assessment and model selection. A simulation study is conducted to evaluate the performance of the L-moment estimators, which is compared to that of the maximum likelihood estimators, demonstrating the superiority of the proposed methods. To illustrate these methods in a practical application, a data analysis of real-world earthquake magnitudes, obtained from the global centroid moment tensor catalogue during 1962–2015, is carried out. This application identifies the extreme value Birnbaum–Saunders distribution as a better model than classic extreme value distributions for describing seismic events.
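L-moment estimation starts from the sample L-moments, which any such method matches against the distribution's theoretical L-moments. A minimal sketch of the standard sample computation via unbiased probability-weighted moments (this is the generic textbook recipe, not the authors' specific Birnbaum–Saunders equations):

```python
import numpy as np
from math import comb

def sample_l_moments(x):
    # Unbiased sample probability-weighted moments b_r (Hosking-style),
    # then the first three L-moments via the standard linear combinations:
    #   l1 = b0,  l2 = 2*b1 - b0,  l3 = 6*b2 - 6*b1 + b0
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b = [x.mean()]
    for r in (1, 2):
        b.append(sum(x[i] * comb(i, r) for i in range(r, n))
                 / (n * comb(n - 1, r)))
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    return l1, l2, l3
```

Parameter estimates then follow by equating these sample values to the theoretical L-moments of the fitted distribution and solving for its parameters; because L-moments are linear in the order statistics, they are less sensitive to heavy tails than ordinary moments, which is what makes them attractive for earthquake-magnitude data.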