22 search results found.
1.
By relaxing the unbiasedness condition, we often obtain more accurate estimators due to the bias–variance trade-off. In this paper, we propose a class of shrinkage proportion estimators that outperform the sample proportion. We derive the "optimal" amount of shrinkage. The advantage of the proposed estimators is established theoretically and explored empirically through simulation studies and real data analyses.
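The bias–variance trade-off behind shrinkage can be seen in a small simulation. The sketch below shrinks the sample proportion toward 1/2 by adding pseudo-counts; the fixed shrinkage amount `a=2` is an illustrative choice, not the paper's "optimal" value.

```python
import random

def sample_proportion(x, n):
    # The unbiased estimator: observed successes over trials.
    return x / n

def shrinkage_proportion(x, n, a=2.0):
    # Shrinks toward 1/2 by adding `a` pseudo-successes and `a`
    # pseudo-failures. Illustrative fixed amount; the paper derives
    # an "optimal" amount of shrinkage instead.
    return (x + a) / (n + 2 * a)

def mse(estimator, p, n, reps=20000, seed=1):
    # Monte Carlo mean-squared error of an estimator of p.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        x = sum(rng.random() < p for _ in range(n))
        total += (estimator(x, n) - p) ** 2
    return total / reps

if __name__ == "__main__":
    p, n = 0.5, 20
    print("sample proportion MSE:", mse(sample_proportion, p, n))
    print("shrinkage MSE        :", mse(shrinkage_proportion, p, n))
```

Near p = 1/2 the shrinkage estimator trades a little bias for a large variance reduction, so its MSE is smaller than that of the sample proportion.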
2.
The proportional odds model (POM) is commonly used in regression analysis to predict the outcome of an ordinal response variable. Maximum likelihood estimation (MLE) is typically used to obtain the parameter estimates. The likelihood estimates do not exist when the number of parameters, p, exceeds the number of observations, n. The MLE also does not exist if there are no overlapping observations in the data. When p is less than n but approaches it, the likelihood estimates may not exist, and if they do exist they may have very large standard errors. An estimation method is proposed to address the last two issues, i.e. complete separation and the case when p approaches n, but not the case p > n. The proposed method does not use a penalty term; instead, it uses pseudo-observations to regularize the observed responses by downgrading their effect so that they become close to the underlying probabilities. The estimates can be computed easily with any statistical package that supports fitting POMs with weights. The estimates are compared with the MLE in a simulation study and an application to real data.
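For readers unfamiliar with the POM itself, the sketch below computes category probabilities from its defining relation P(Y ≤ j | x) = logistic(θ_j − x′β) with ordered intercepts θ_1 < … < θ_{J−1}. The intercepts, covariates and coefficients are illustrative; the paper's pseudo-observation estimator is not shown.

```python
import math

def pom_category_probs(thetas, x, beta):
    """Category probabilities under a proportional odds model:
    P(Y <= j | x) = logistic(theta_j - x'beta), thetas strictly increasing.
    Returns the probabilities of the J = len(thetas) + 1 categories."""
    eta = sum(xi * bi for xi, bi in zip(x, beta))
    cum = [1.0 / (1.0 + math.exp(-(t - eta))) for t in thetas] + [1.0]
    # differencing the cumulative probabilities gives category probabilities
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# illustrative values: 4 categories, 2 covariates
probs = pom_category_probs([-1.0, 0.5, 2.0], [1.0, 0.3], [0.8, -0.4])
```

Because the intercepts are ordered, the cumulative probabilities are increasing, so every category probability is non-negative and they sum to one.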
3.
We propose a non-linear density estimator that is locally adaptive, like wavelet estimators, yet positive everywhere, without a log- or root-transform. The estimator is based on maximizing a non-parametric log-likelihood regularized by a total variation penalty. The smoothness is driven by a single penalty parameter, and to avoid cross-validation we derive an information criterion based on the idea of universal penalty. The penalized log-likelihood maximization is reformulated as an ℓ1-penalized strictly convex programme whose unique solution is the density estimate. A Newton-type method cannot be applied to calculate the estimate because the ℓ1-penalty is non-differentiable; instead, we use a dual block coordinate relaxation method that exploits the problem structure. By comparing with kernel, spline and taut string estimators in a Monte Carlo simulation, and by investigating the sensitivity to ties on two real data sets, we observe that the new estimator achieves good L1 and L2 risk for densities with sharp features and behaves well in the presence of ties.
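The total variation penalty at the heart of this estimator is simple to state for a piecewise-constant (histogram-type) candidate density: it is the sum of absolute jumps between adjacent bins. The sketch below shows only that penalty term; the paper's dual block coordinate relaxation solver is not reproduced.

```python
def total_variation(heights):
    """Total variation of a piecewise-constant density: the sum of
    absolute jumps between adjacent bin heights. Added to the negative
    log-likelihood, this is the term that penalizes roughness while
    leaving sharp features (a few large jumps) relatively cheap."""
    return sum(abs(heights[k + 1] - heights[k])
               for k in range(len(heights) - 1))

flat  = total_variation([1, 1, 1, 1])   # no jumps
spiky = total_variation([0, 2, 0, 0])   # one spike, two jumps of size 2
```

A flat estimate pays no penalty, while wiggly estimates pay in proportion to their total jump size, which is why the solution adapts locally.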
4.
Functional logistic regression is becoming more popular, as there are many situations where we are interested in the relation between functional covariates (as input) and a binary response (as output). Several approaches have been advocated, and this paper goes into detail about three of them: dimension reduction via functional principal component analysis, penalized functional regression, and wavelet expansions combined with Least Absolute Shrinkage and Selection Operator (LASSO) penalization. We discuss the performance of the three methods on simulated data and also apply them to data regarding lameness detection in horses. The emphasis is on classification performance, but we also discuss estimation of the unknown parameter function.
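All three approaches share the same functional logistic model, logit P(Y = 1 | x) = α + ∫ x(t)β(t) dt; they differ in how the coefficient function β(t) is represented and penalized. The sketch below evaluates that linear predictor on a grid by trapezoidal integration; the grid, α and β(t) are illustrative, not fitted values.

```python
import math

def trapezoid(values, grid):
    # Numerical integral of a function sampled on `grid`.
    return sum((values[i] + values[i + 1]) / 2 * (grid[i + 1] - grid[i])
               for i in range(len(grid) - 1))

def functional_logit_prob(x_vals, beta_vals, grid, alpha=0.0):
    """P(Y=1 | x) = logistic(alpha + integral of x(t) * beta(t) dt),
    the linear predictor common to all three estimation approaches."""
    eta = alpha + trapezoid([xv * bv for xv, bv in zip(x_vals, beta_vals)],
                            grid)
    return 1.0 / (1.0 + math.exp(-eta))

grid = [i / 100 for i in range(101)]
x = [math.sin(2 * math.pi * t) for t in grid]            # one sample curve
beta = [1.5 * math.cos(2 * math.pi * t) for t in grid]   # coefficient curve
p = functional_logit_prob(x, beta, grid)
```

Here x(t) and β(t) are orthogonal over [0, 1], so the integral vanishes and the predicted probability is 1/2; a non-zero intercept shifts it away from 1/2.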
5.
In modern football, various variables, such as the distance a team runs or its percentage of ball possession, are collected throughout a match. However, methods that use these on-field variables simultaneously and connect them with the final result of the match are lacking. This paper considers data from the 2015/2016 German Bundesliga season. The objective is to identify the on-field variables that are connected to the sporting success or failure of individual teams. An extended Bradley–Terry model for football matches is proposed that can take on-field covariates into account. Penalty terms are used to reduce the complexity of the model and to find clusters of teams with equal covariate effects. The model identifies running distance as the on-field covariate most strongly connected to the match outcome.
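The Bradley–Terry core of the model is the pairwise win probability P(i beats j) = exp(λ_i) / (exp(λ_i) + exp(λ_j)); the extension lets each strength λ depend on on-field covariates. In the sketch below the covariates (running distance, possession share) and coefficients are hypothetical, and ties are ignored.

```python
import math

def bt_win_prob(strength_i, strength_j):
    """P(team i beats team j) in a Bradley-Terry model without ties."""
    return math.exp(strength_i) / (math.exp(strength_i) + math.exp(strength_j))

def strength(covariates, gamma):
    # Linear predictor linking on-field covariates to team strength,
    # in the spirit of the extended model; values below are illustrative.
    return sum(c * g for c, g in zip(covariates, gamma))

gamma = [0.05, 1.2]                     # hypothetical covariate effects
lam_a = strength([115.0, 0.55], gamma)  # (running distance km, possession)
lam_b = strength([110.0, 0.45], gamma)
p_ab = bt_win_prob(lam_a, lam_b)
```

The probabilities of the two possible outcomes sum to one, and the team with the larger covariate-driven strength is favoured.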
6.
This paper proposes a Bayesian integrative analysis method for linking multi-fidelity computer experiments. Instead of assuming covariance structures for multivariate Gaussian process models, we treat the outputs from different levels of accuracy as independent processes and link them via a penalization method that controls the distance between their overall trends. Based on the priors induced by the penalty, we build Bayesian prediction models for the output at the highest level of accuracy. Simulated and real examples show that the proposed method outperforms existing methods in prediction accuracy in many cases.
7.
In this paper, we consider a mixed compound Poisson process, that is, a random sum of independent and identically distributed (i.i.d.) random variables where the number of terms is a Poisson process with random intensity. We study nonparametric estimators of the jump density by specific deconvolution methods. First, assuming that the random intensity has an exponential distribution with unknown expectation, we propose two types of estimators based on the observation of an i.i.d. sample. Risk bounds and adaptive procedures are provided. Then, with no assumption on the distribution of the random intensity, we propose two non-parametric estimators of the jump density based on the joint observation of the number of jumps and the random sum of jumps. Risk bounds are provided, leading to unusual rates for one of the two estimators. The methods are implemented and compared via simulations.
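A mixed compound Poisson draw is generated hierarchically: first the random intensity Λ, then N | Λ ~ Poisson(Λt), then the sum of N i.i.d. jumps. The sketch below follows that recipe with an exponential intensity, as in the paper's first setting; the mean intensity, the standard normal jump law and the horizon t are illustrative choices.

```python
import math
import random

def poisson_draw(rng, mean):
    # Poisson sampler by CDF inversion (the stdlib has no built-in one);
    # adequate for the moderate means used here.
    u, n = rng.random(), 0
    pmf = math.exp(-mean)
    cdf = pmf
    while u > cdf:
        n += 1
        pmf *= mean / n
        cdf += pmf
    return n

def mixed_compound_poisson(rng, t=1.0, mean_intensity=2.0):
    """One draw of a mixed compound Poisson process at time t:
    Lambda ~ exponential, N | Lambda ~ Poisson(Lambda * t), value =
    sum of N i.i.d. jumps (standard normal here, illustratively)."""
    lam = rng.expovariate(1.0 / mean_intensity)  # random intensity
    n = poisson_draw(rng, lam * t)
    return n, sum(rng.gauss(0.0, 1.0) for _ in range(n))

rng = random.Random(42)
n_jumps, total = mixed_compound_poisson(rng)
```

The pair `(n_jumps, total)` corresponds to the joint observation of the number of jumps and the random sum of jumps used by the paper's second pair of estimators.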
8.
In cancer diagnosis studies, high-throughput gene profiling has been extensively conducted, searching for genes whose expressions may serve as markers. Data generated from such studies have the 'large d, small n' feature, with the number of genes profiled much larger than the sample size. Penalization has been extensively adopted for simultaneous estimation and marker selection. Because of small sample sizes, markers identified from the analysis of single data sets can be unsatisfactory. A cost-effective remedy is to conduct integrative analysis of multiple heterogeneous data sets. In this article, we investigate composite penalization methods for estimation and marker selection in integrative analysis. The proposed methods use the minimax concave penalty (MCP) as the outer penalty. Under the homogeneity model, the ridge penalty is adopted as the inner penalty; under the heterogeneity model, the Lasso penalty and MCP are adopted as the inner penalty. Effective computational algorithms based on coordinate descent are developed. Numerical studies, including simulation and analysis of practical cancer data sets, show satisfactory performance of the proposed methods.
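The outer MCP used by these composite methods has a standard closed form (Zhang, 2010): it grows like the Lasso near zero and flattens to a constant beyond γλ, which is what reduces the bias on large coefficients. A direct transcription, with illustrative λ and γ:

```python
def mcp_penalty(t, lam, gamma=3.0):
    """Minimax concave penalty (MCP):
    rho(t) = lam*|t| - t^2 / (2*gamma)  if |t| <= gamma*lam,
           = gamma*lam^2 / 2            otherwise.
    Near zero it behaves like the Lasso (slope lam); beyond gamma*lam it
    is constant, so large coefficients are not shrunk further."""
    a = abs(t)
    if a <= gamma * lam:
        return lam * a - a * a / (2.0 * gamma)
    return gamma * lam * lam / 2.0
```

The two branches meet continuously at |t| = γλ, where both equal γλ²/2; γ controls how quickly the penalty levels off.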
9.
Drawing on the methods of modern archaeology, this paper uses specific bronze vessel types as fixed reference points to date the formation of "Juan Er" (《周南·卷耳》), the third poem in the Shijing (《诗经》, Book of Songs), to the period between the late Shang and early Zhou. Together with recent archaeological discoveries and the records of transmitted texts, this result forms a solid chain of evidence confirming that China's first anthology of poetry, the Shijing, originated in the late Shang to early Zhou period.
10.
In multi-category response models, the categories are often ordered. For ordinal response models, the usual likelihood approach becomes unstable when the predictor space is ill-conditioned or when the number of parameters to be estimated is large relative to the sample size. The likelihood estimates do not exist when the number of observations is less than the number of parameters. The same problem arises if the order constraint on the intercept values is violated during the iterative procedure. Proportional odds models (POMs) are the most commonly used models for ordinal responses. In this paper, penalized likelihood with a quadratic penalty is used to address these issues, with a special focus on POMs. To avoid large differences between the parameter values of consecutive categories of an ordinal predictor, the differences between the parameters of adjacent categories are penalized. The considered penalized-likelihood function penalizes either the parameter estimates themselves or the differences between them, according to the type of predictor. Mean-squared error of the parameter estimates, deviance of the fitted probabilities and prediction error of the ridge regression are compared with those of the usual likelihood estimates in a simulation study and an application.
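The predictor-dependent quadratic penalty described above can be written down directly: plain coefficients contribute their squares (ridge), while the dummy coefficients of an ordinal predictor contribute squared differences between adjacent categories. A minimal sketch, with illustrative coefficient values and tuning parameter `lam`:

```python
def quadratic_penalty(beta_by_predictor, lam=1.0):
    """Quadratic penalty that depends on predictor type, as described
    above. `beta_by_predictor` is a list of (coeffs, is_ordinal) pairs:
    ordinal predictors are penalized through squared differences of
    adjacent category effects, others through squared coefficients."""
    total = 0.0
    for coeffs, is_ordinal in beta_by_predictor:
        if is_ordinal:
            # smooth adjacent category effects toward each other
            total += sum((coeffs[j + 1] - coeffs[j]) ** 2
                         for j in range(len(coeffs) - 1))
        else:
            total += sum(b * b for b in coeffs)  # plain ridge term
    return lam * total

pen = quadratic_penalty([([0.5, -0.2], False),      # two plain covariates
                         ([0.0, 0.4, 0.5], True)])  # one ordinal predictor
```

Note that an ordinal predictor with equal category effects incurs no penalty at all, whereas under a plain ridge penalty it would; this is exactly the adjacent-difference behaviour the paper motivates.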