Sort order: 141 results found; search time 15 ms.
1.
Bayesian networks for imputation   (Total citations: 1; self-citations: 0; citations by others: 1)
Summary. Bayesian networks are particularly useful for dealing with high-dimensional statistical problems. They allow a reduction in the complexity of the phenomenon under study by representing joint relationships between a set of variables through conditional relationships between subsets of these variables. Following Thibaudeau and Winkler, we use Bayesian networks for imputing missing values. This method is introduced to deal with the problem of the consistency of imputed values: preservation of statistical relationships between variables (statistical consistency) and preservation of logical constraints in the data (logical consistency). We perform some experiments on a subset of anonymous individual records from the 1991 UK population census.
2.
The evaluation of DNA evidence in pedigrees requiring population inference   (Total citations: 1; self-citations: 0; citations by others: 1)
Summary. The evaluation of nuclear DNA evidence for identification purposes is performed here taking into account the uncertainty about population parameters. Graphical models are used to detail the hypotheses being debated in a trial, with the aim of obtaining a directed acyclic graph. The graphs also clarify which evidence contributes to population inferences and describe the conditional independence structure of the DNA evidence. Numerical illustrations are provided by re-examining three case studies taken from the literature. Our calculations of the weight of evidence differ from those given by the authors of the case studies in that they yield more conservative values.
4.
Covariance matrices play an important role in many multivariate techniques, so good covariance estimation is crucial in this kind of analysis. In many applications a sparse covariance matrix is expected, either because of the nature of the data or for ease of interpretation. Hard thresholding, soft thresholding, and generalized thresholding were developed to this end. However, these estimators do not always yield well-conditioned covariance estimates. To obtain estimates that are both sparse and well conditioned, we propose double-shrinkage estimators: small covariances are shrunk towards zero, and the covariance matrix is then shrunk towards a diagonal matrix. Additionally, a richness index is defined to evaluate how rich a covariance matrix is. According to our simulations, the richness index serves as a good indicator for choosing the relevant covariance estimator.
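The thresholding-plus-shrinkage idea described in this abstract can be sketched in a few lines. This is an illustrative reading of double shrinkage, not the authors' estimator: the soft-thresholding rule, the function names, and the tuning values `lam` and `alpha` are all assumptions.

```python
import numpy as np

def soft_threshold(S, lam):
    # Shrink every entry towards zero by lam, then restore the
    # variances so only off-diagonal covariances are thresholded.
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

def double_shrinkage(S, lam, alpha):
    # Step 1: shrink small covariances towards zero (sparsity).
    T = soft_threshold(S, lam)
    # Step 2: shrink the thresholded matrix towards its diagonal,
    # which improves conditioning while preserving the variances.
    D = np.diag(np.diag(T))
    return (1.0 - alpha) * T + alpha * D

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
S = np.cov(X, rowvar=False)
S_hat = double_shrinkage(S, lam=0.1, alpha=0.3)
```

By construction the estimate stays symmetric, keeps the sample variances, and has off-diagonal entries no larger in magnitude than those of the sample covariance matrix.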
5.
A fully nonparametric model may not perform well; alternatively, the researcher may want to use a parametric model, but the functional form with respect to a subset of the regressors or the density of the errors is not known. This becomes even more challenging when the data contain gross outliers or unusual observations. Moreover, in practice the true covariates are not known in advance, nor is the smoothness of the functional form. A robust model selection approach through which we can choose the relevant covariates and estimate the smooth function is therefore an appealing solution. This paper considers weighted signed-rank estimation and variable selection under the adaptive lasso for semiparametric partial additive models. B-splines are used to estimate the unknown additive nonparametric function. We show that, despite the use of B-splines, the proposed estimator has an oracle property. The robustness of the weighted signed-rank approach for data with heavy-tailed or contaminated errors, and for data containing high-leverage points, is validated via finite-sample simulations. A practical application to an economic study is provided using updated Canadian household gasoline consumption data.
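For intuition about the adaptive lasso penalty this abstract builds on, the orthonormal-design case has a closed form: each least-squares coefficient is soft-thresholded with a data-driven weight lam / |b|^gamma, so large coefficients are penalized lightly and small ones heavily. This is a minimal sketch of the penalty only; the paper's weighted signed-rank loss and B-spline components are not reproduced, and `lam` and `gamma` are illustrative choices.

```python
import numpy as np

def adaptive_lasso_orthonormal(b_ols, lam, gamma=1.0):
    # Closed form for an orthonormal design: soft-threshold each OLS
    # coefficient with the data-driven weight lam / |b_ols|**gamma.
    w = lam / np.maximum(np.abs(b_ols), 1e-12) ** gamma
    return np.sign(b_ols) * np.maximum(np.abs(b_ols) - w, 0.0)

# Two strong signals and two near-zero coefficients (toy values).
b_ols = np.array([3.0, -2.0, 0.05, 0.01])
b_hat = adaptive_lasso_orthonormal(b_ols, lam=0.1)
```

The weights produce the oracle-like behavior the paper proves in a more general setting: the near-zero coefficients are set exactly to zero, while the large ones are only slightly shrunk.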
6.
In many practical applications, high-dimensional regression analyses have to take into account measurement error in the covariates. It is thus necessary to extend regularization methods that can handle the situation where the number of covariates p largely exceeds the sample size n to the case in which the covariates are also mismeasured. A variety of methods are available in this context, but many of them rely on knowledge of the measurement error and the structure of its covariance matrix. In this paper we compare some of these methods, focusing on situations relevant for practical applications. In particular, we evaluate these methods in setups in which the measurement error distribution and dependence structure are not known and have to be estimated from data. Our focus is on variable selection, and the evaluation is based on extensive simulations.
7.
In this paper we consider minimisation of U-statistics with the weighted Lasso penalty and investigate the asymptotic properties of the resulting estimators in model selection and estimation. We prove that the use of appropriate weights in the penalty leads to a procedure that behaves like an oracle that knows the true model in advance, i.e. it is model selection consistent and estimates the nonzero parameters at the standard rate. For the unweighted Lasso penalty, we obtain necessary and sufficient conditions for model selection consistency of the estimators. The results rely strongly on the convexity of the loss function, which is the main assumption of the paper. Our theorems can be applied to the ranking problem as well as to generalised regression models. Thus, using U-statistics we can study more complex models (which better describe real problems) than the linear or generalised linear models usually investigated.
8.
Bayesian model building techniques are developed for data with a strong time series structure and possibly exogenous explanatory variables that have strong explanatory and predictive power. The emphasis is on finding whether, when the data have a strong time series structure, there are any explanatory variables that should also be included in the model. We use a time series model that is linear in past observations and that can capture both stochastic and deterministic trend, seasonality and serial correlation. We propose plotting absolute predictive error against predictive standard deviation. A series of such plots is used to determine which of several nested and non-nested models is optimal in terms of minimizing the dispersion of the predictive distribution and restricting predictive outliers. We apply the techniques to modelling monthly counts of fatal road crashes in Australia, where economic, consumption and weather variables are available, and we find that three such variables should be included in addition to the time series filter. The approach leads to graphical techniques for determining the strength of relationships between the dependent variable and covariates and for detecting model inadequacy, as well as providing useful numerical summaries.
9.
Because no single classification method outperforms all others under all circumstances, decision-makers may solve a classification problem using several classification methods and examine their performance on the learning set. Based on this performance, better classification methods can be adopted and poor ones avoided. However, which single method best predicts the classification of new observations is still not clear, especially when several methods offer similar performance on the learning set. In this article we present various regression and classical methods that combine several classification methods to predict the classification of new observations. The quality of the combined classifiers is examined on real data. Nonparametric regression proves to be the best method of combining classifiers.
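A minimal sketch of combining classifiers by regression, in the spirit of this abstract: base-classifier scores on the learning set become regressors, the class label is the response, and new observations are classified by thresholding the fitted combination. The toy data and the two hand-rolled base classifiers (`clf_threshold`, `clf_logistic`) are illustrative assumptions, not the article's methods or data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy two-class learning set with one informative feature.
n = 300
y = rng.integers(0, 2, n)
x = y + rng.normal(0.0, 1.0, n)

def clf_threshold(x):
    # Base classifier 1: hard score from a fixed cutoff.
    return (x > 0.5).astype(float)

def clf_logistic(x):
    # Base classifier 2: hand-rolled logistic score.
    return 1.0 / (1.0 + np.exp(-2.0 * (x - 0.5)))

# Combine via least-squares regression of y on the base scores.
Z = np.column_stack([np.ones(n), clf_threshold(x), clf_logistic(x)])
w, *_ = np.linalg.lstsq(Z, y.astype(float), rcond=None)

def combined_predict(x_new):
    z = np.column_stack([np.ones_like(x_new),
                         clf_threshold(x_new), clf_logistic(x_new)])
    return (z @ w > 0.5).astype(int)

acc = np.mean(combined_predict(x) == y)
```

The article's best-performing combiner is nonparametric rather than linear regression; the linear version above is just the simplest instance of the same idea.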
10.
When presented as graphical illustrations, regression surface confidence bands for linear statistical models quickly convey detailed information about analysis results. A taut confidence band is a compact set of curves that are estimation candidates for the unobservable, fixed regression curve. The bounds of the band are usually plotted with the estimated regression curve and may be overlaid with a scatterplot of the data to provide an integrated visual impression. Finite-interval confidence bands offer the advantages of clearer interpretation and improved efficiency, and they avoid visual ambiguities inherent in infinite-interval bands. The defining characteristic of a finite-interval confidence band is that it need only be plotted over a finite interval to communicate all of its information visually. In contrast, visual representations of infinite-interval bands are not fully informative and can be misleading. When an infinite-interval band is plotted, and therefore truncated, substantial information given by its asymptotic behavior is lost: many curves that lie entirely within the plotted portion of an infinite-interval confidence band eventually cross a boundary. In practice, a finite-interval band can always be obtained easily from any infinite-interval band. This article focuses on interpretational considerations of symmetric confidence bands as graphical devices.
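As a hedged sketch of computing band limits over a finite interval only, the simple-linear-regression case is shown below. The interval [2, 8], the toy data, and the constant 2 (standing in for an appropriate critical value) are illustrative assumptions, not any band construction from the article.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = rng.uniform(0.0, 10.0, n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, n)

# Least-squares fit and residual variance estimate.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (n - 2)

# Band limits over the finite interval [2, 8] only; the multiplier 2
# is a placeholder for the appropriate critical value.
grid = np.linspace(2.0, 8.0, 61)
xbar, Sxx = x.mean(), ((x - x.mean()) ** 2).sum()
se = np.sqrt(s2 * (1.0 / n + (grid - xbar) ** 2 / Sxx))
fit = beta[0] + beta[1] * grid
lower, upper = fit - 2.0 * se, fit + 2.0 * se
```

Plotting `lower` and `upper` against `grid`, with the fitted line and a scatterplot overlaid, gives the integrated visual impression described above; the band is narrowest near the mean of the design points and widens as the interval endpoints are approached.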
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号