Similar Articles
20 similar articles found (search time: 15 ms)
1.
In this paper, we study a discrete interaction risk model with delayed claims and stochastic incomes in the framework of the compound binomial model. A generalized Gerber-Shiu discounted penalty function is proposed to analyse this risk model in which the interest rates follow a Markov chain with finite state space. We derive an explicit expression for the generating function of this Gerber-Shiu discounted penalty function. Furthermore, we derive a recursive formula and a defective renewal equation for the original Gerber-Shiu discounted penalty function. As an application, the joint distributions of the surplus one period prior to ruin and the deficit at ruin, as well as the probabilities of ruin, are obtained. Finally, some numerical illustrations from a specific example are also given.

2.
3.
Accurate estimation of an underlying function and its derivatives is one of the central problems in statistics. Parametric forms are often proposed based on expert opinion or prior knowledge of the underlying function. However, these strict parametric assumptions may result in biased estimates when they are not completely accurate. Meanwhile, nonparametric smoothing methods, which do not impose any parametric form, are quite flexible. We propose a parametric penalized spline smoothing method, which has the same flexibility as the nonparametric smoothing methods. It also uses the prior knowledge of the underlying function by defining an additional penalty term based on the distance of the fitted function from the assumed parametric function. Our simulation studies show that the parametric penalized spline smoothing method can obtain more accurate estimates of the function and its derivatives than the penalized spline smoothing method. The parametric penalized spline smoothing method is also demonstrated by estimating the human height function and its derivatives from real data.
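The core idea described above, a roughness penalty combined with an extra penalty that pulls the fit toward an assumed parametric function, can be sketched as a discretized smoother on an equally spaced grid. This is a simplified illustration, not the authors' spline implementation; the function name and the two penalty weights are illustrative:

```python
import numpy as np

def parametric_penalized_smoother(y, g, lam_rough=10.0, lam_param=1.0):
    """Sketch of a parametric penalized smoother on an equally spaced grid.

    Minimizes  ||y - f||^2 + lam_rough * ||D2 f||^2 + lam_param * ||f - g||^2,
    where D2 is the second-difference operator (a discrete roughness measure)
    and g is the assumed parametric fit evaluated on the same grid.
    The minimizer solves a linear system.
    """
    n = len(y)
    # Rows of D2 encode f[i] - 2 f[i+1] + f[i+2] (discrete second derivative).
    D2 = np.diff(np.eye(n), n=2, axis=0)
    A = (1.0 + lam_param) * np.eye(n) + lam_rough * (D2.T @ D2)
    return np.linalg.solve(A, y + lam_param * g)
```

With `lam_param = 0` this reduces to an ordinary discrete penalized smoother; larger `lam_param` shrinks the fit toward the parametric guess `g`.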

4.
Spatially-adaptive Penalties for Spline Fitting
The paper studies spline fitting with a roughness penalty that adapts to spatial heterogeneity in the regression function. The estimates are pth degree piecewise polynomials with p − 1 continuous derivatives. A large and fixed number of knots is used and smoothing is achieved by putting a quadratic penalty on the jumps of the pth derivative at the knots. To be spatially adaptive, the logarithm of the penalty is itself a linear spline but with relatively few knots and with values at the knots chosen to minimize the generalized cross validation (GCV) criterion. This locally-adaptive spline estimator is compared with other spline estimators in the literature such as cubic smoothing splines and knot-selection techniques for least squares regression. Our estimator can be interpreted as an empirical Bayes estimate for a prior allowing spatial heterogeneity. In cases of spatially heterogeneous regression functions, empirical Bayes confidence intervals using this prior achieve better pointwise coverage probabilities than confidence intervals based on a global-penalty parameter. The method is developed first for univariate models and then extended to additive models.
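A spatially varying roughness penalty of this kind can be illustrated with a discretized sketch, using second differences in place of jumps of the pth derivative and one log-penalty value per interior grid point. This is a simplified stand-in for the paper's spline construction, with illustrative names:

```python
import numpy as np

def adaptive_penalty_smoother(y, log_pen):
    """Sketch of spatially adaptive penalized smoothing on a grid:

    minimize  ||y - f||^2 + sum_k exp(log_pen[k]) * (D2 f)[k]^2,

    so the penalty (on the log scale) varies over the domain, as in the
    paper's locally-adaptive estimator.  log_pen has length len(y) - 2.
    """
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # discrete second derivative
    W = np.diag(np.exp(log_pen))           # one weight per interior point
    return np.linalg.solve(np.eye(n) + D2.T @ W @ D2, y)
```

Regions where `log_pen` is large are forced toward local linearity, while regions with small `log_pen` can follow the data closely.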

5.
The compound Poisson Omega model is considered in the presence of a three-step premium rate. Firstly, the integral equations and the integro-differential equations for the Gerber-Shiu expected discounted penalty function are derived. Secondly, the integro-differential equations for the Gerber-Shiu expected discounted penalty function are determined under three different initial conditions. The results are then used to find the bankruptcy probability. Finally, the special case where the claim size distribution is exponential is discussed in some detail in order to illustrate the effect of the three-step premium rate.

6.
Continuous non-Gaussian stationary processes of the OU-type are becoming increasingly popular given their flexibility in modelling stylized features of financial series such as asymmetry, heavy tails and jumps. The use of non-Gaussian marginal distributions makes likelihood analysis of these processes unfeasible for virtually all cases of interest. This paper exploits the self-decomposability of the marginal laws of OU processes to provide explicit expressions of the characteristic function which can be applied to several models as well as to develop efficient estimation techniques based on the empirical characteristic function. Extensions to OU-based stochastic volatility models are provided.

7.
The paper presents a new method for flexible fitting of D-vines. Pair-copulas are estimated semi-parametrically using penalized Bernstein polynomials or constant and linear B-splines, respectively, as spline bases in each node of the D-vine throughout each level. A penalty induces smoothness of the fit, while the high-dimensional spline basis guarantees flexibility. To ensure uniform univariate margins of each pair-copula, linear constraints are placed on the spline coefficients and quadratic programming is used to fit the model. The amount of penalization for each pair-copula is driven by a penalty parameter, which is selected in a numerically efficient way. Simulations and practical examples accompany the presentation.

8.
We propose the Laplace Error Penalty (LEP) function for variable selection in high‐dimensional regression. Unlike penalty functions using piecewise spline construction, the LEP is constructed as an exponential function with two tuning parameters and is infinitely differentiable everywhere except at the origin. With this construction, the LEP‐based procedure acquires extra flexibility in variable selection, admits a unified derivative formula in optimization and is able to approximate the L0 penalty as closely as possible. We show that the LEP procedure can identify relevant predictors in exponentially high‐dimensional regression with normal errors. We also establish the oracle property for the LEP estimator. Although not convex, the LEP yields a convex penalized least squares function under mild conditions if p is no greater than n. A coordinate descent majorization‐minimization algorithm is introduced to implement the LEP procedure. In simulations and a real data analysis, the LEP methodology performs favorably among competitive procedures.  
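The abstract does not give the exact LEP formula, but a hypothetical exponential-type penalty with the stated properties (smooth everywhere except at the origin, two tuning parameters, approaching the L0 penalty in a limit) can be sketched as follows; the functional form here is an illustrative assumption, not necessarily the paper's LEP:

```python
import numpy as np

def lep_like_penalty(beta, lam=1.0, kappa=0.1):
    """Illustrative exponential-type penalty with two tuning parameters.

    NOTE: a hypothetical form matching the abstract's qualitative
    description: infinitely differentiable except at the origin (via
    |beta|), and approaching the L0 penalty lam * 1{beta != 0} as
    kappa -> 0+.  Not claimed to be the paper's exact LEP.
    """
    return lam * (1.0 - np.exp(-np.abs(beta) / kappa))
```

For fixed `lam`, shrinking `kappa` makes the penalty rise almost immediately to `lam` for any nonzero coefficient, mimicking an L0 count of nonzeros.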

9.
Due to the irregularity of finite mixture models, the commonly used likelihood-ratio statistics often have complicated limiting distributions. We propose to add a particular type of penalty function to the log-likelihood function. The resulting penalized likelihood-ratio statistics have simple limiting distributions when applied to finite mixture models with multinomial observations. The method is especially effective in addressing the problems discussed by Chernoff and Lander (1995). The theory developed and simulations conducted show that the penalized likelihood method can give very good results, better than the well-known C(α) procedure, for example. The paper does not, however, fully explore the choice of penalty function and weight. The full potential of the new procedure is to be explored in the future.

10.
By introducing the idea of thresholding function matching, it is illustrated that both the bridge penalty and the log penalty can be transformed so as to circumvent certain difficulties in numerical computation and in the definition of local minimality. Both the bridge penalty and the log penalty have derivatives that tend to infinity at zero; this hinders their application in statistics, although it is reported in the literature that they allow recovery of sparse structure in the data under some conditions. Simulation studies illustrate that, in variable selection problems, penalized likelihood estimation based on the transformed penalties obtained by the proposed thresholding function matching method outperforms estimation based on many other state-of-the-art penalties, particularly when the covariates are strongly correlated. The one-to-one correspondence between the transformed penalties and their thresholding functions is also established.
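The penalty-thresholding correspondence mentioned above is most easily seen with the canonical example: the soft-thresholding function is exactly the thresholding function of the L1 penalty. This sketch is the textbook case, not the paper's transformed bridge or log penalties:

```python
import numpy as np

def soft_threshold(z, lam):
    """Thresholding function of the L1 penalty:

        argmin_b  0.5 * (z - b)**2 + lam * |b|

    Each penalty p induces such a thresholding map; the paper exploits
    this correspondence in the other direction, matching a well-behaved
    thresholding function to obtain a transformed penalty.
    """
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
```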

11.
Ordinary differential equations (ODEs) are popular tools for modeling complicated dynamic systems in many areas. When multiple replicates of measurements are available for the dynamic process, it is of great interest to estimate mixed-effects in the ODE model for the process. We propose a semiparametric method to estimate mixed-effects ODE models. Rather than using the ODE numeric solution directly, which requires providing initial conditions, this method estimates a spline function to approximate the dynamic process using smoothing splines. A roughness penalty term is defined using the ODEs, which measures the fidelity of the spline function to the ODEs. The smoothing parameter, which controls the trade-off between fitting the data and maintaining fidelity to the ODEs, can be specified by users or selected objectively by generalized cross validation. The spline coefficients, the ODE random effects, and the ODE fixed effects are estimated in three nested levels of optimization. Two simulation studies show that the proposed method obtains good estimates for mixed-effects ODE models. The semiparametric method is demonstrated with an application of a pharmacokinetic model in a study of HIV combination therapy.

12.
Detecting the number of signals and estimating the parameters of the signals is an important problem in signal processing. Quite a number of papers have appeared in the last twenty years on estimating the parameters of the sinusoidal components, but much less attention has been given to estimating the number of terms present in a sinusoidal signal. Fuchs developed a criterion based on a perturbation analysis of the data autocorrelation matrix to estimate the number of sinusoids, which is in some sense a subjective method. Recently, Reddy and Biradar proposed two criteria based on AIC and MDL and developed an analytical framework for analyzing the performance of these criteria. In this paper we develop a method using extended order modelling and the singular value decomposition technique, similar to that of Reddy and Biradar. We use the penalty function technique, but instead of any fixed penalty function such as AIC or MDL, a class of penalty functions satisfying some special properties is used. We prove that any penalty function from this class gives a consistent estimate under the assumption that the errors are independent and identically distributed with mean zero and finite variance. We also obtain the probabilities of wrong detection for any particular penalty function under somewhat weaker assumptions than those of Reddy and Biradar or of Kaveh et al. This gives some idea of how to choose a proper penalty function for a particular model. Simulations are performed to verify the usefulness of the analysis and to compare our method with existing ones.

13.
Huang J, Ma S, Li H, Zhang CH. Annals of Statistics 2011, 39(4):2021–2046
We propose a new penalized method for variable selection and estimation that explicitly incorporates the correlation patterns among predictors. This method is based on a combination of the minimax concave penalty and the Laplacian quadratic associated with a graph as the penalty function. We call it the sparse Laplacian shrinkage (SLS) method. The SLS uses the minimax concave penalty for encouraging sparsity and the Laplacian quadratic penalty for promoting smoothness among coefficients associated with the correlated predictors. The SLS has a generalized grouping property with respect to the graph represented by the Laplacian quadratic. We show that the SLS possesses an oracle property in the sense that it is selection consistent and equal to the oracle Laplacian shrinkage estimator with high probability. This result holds in sparse, high-dimensional settings with p ≫ n under reasonable conditions. We derive a coordinate descent algorithm for computing the SLS estimates. Simulation studies are conducted to evaluate the performance of the SLS method and a real data example is used to illustrate its application.
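The SLS penalty itself is easy to write down: a sum of minimax concave penalty (MCP) terms plus a Laplacian quadratic over a graph. The sketch below evaluates this penalty (the MCP is the standard form from the literature; function names and parameters are illustrative, and the fitting algorithm is not shown):

```python
import numpy as np

def mcp(t, lam, gamma):
    """Minimax concave penalty (standard form), applied elementwise:
    lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam, else gamma*lam^2/2."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    lam * a - a ** 2 / (2.0 * gamma),
                    0.5 * gamma * lam ** 2)

def sls_penalty(beta, L, lam1, gamma, lam2):
    """Sketch of the sparse Laplacian shrinkage penalty:
    MCP terms for sparsity + lam2 * beta' L beta for smoothness over
    the graph with Laplacian matrix L."""
    return mcp(beta, lam1, gamma).sum() + lam2 * beta @ L @ beta
```

The Laplacian quadratic vanishes when connected predictors share equal coefficients, which is the source of the grouping behavior described in the abstract.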

14.
We aim to analyse geostatistical and areal data observed over irregularly shaped spatial domains and having a distribution within the exponential family. We propose a generalized additive model that accounts for spatially varying covariate information. The model is fitted by maximizing a penalized log-likelihood function, with a roughness penalty term that involves a differential quantity of the spatial field, computed over the domain of interest. Efficient estimation of the spatial field is achieved by resorting to the finite element method, which provides a basis for piecewise polynomial surfaces. The proposed model is illustrated by an application to the study of criminality in the city of Portland, OR, USA.

15.
Cox’s proportional hazards model is the most common way to analyze survival data. The model can be extended to include a ridge penalty in the presence of collinearity, or in cases where a very large number of coefficients (e.g. with microarray data) has to be estimated. To maximize the penalized likelihood, optimal weights of the ridge penalty have to be obtained. However, there is no definite rule for choosing the penalty weight. One approach suggests choosing the weights by maximizing the leave-one-out cross-validated partial likelihood; however, this is time consuming and computationally expensive, especially in large datasets. We suggest modelling survival data through a Poisson model. Using this approach, the log-likelihood of the Poisson model is maximized by standard iteratively weighted least squares. We illustrate this simple approach, which includes smoothing of the hazard function, and move on to include a ridge term in the likelihood. We then maximize the likelihood using tools from generalized linear mixed models. We show that the optimal value of the penalty is found simply by computing the hat matrix of the system of linear equations and dividing its trace by a product of the estimated coefficients.
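The hat-matrix trace that appears in this penalty-selection recipe is the standard effective-degrees-of-freedom quantity for a ridge fit. A minimal sketch for the plain ridge case (illustrative names; the paper works with the Poisson working model, not raw least squares):

```python
import numpy as np

def ridge_hat_trace(X, lam):
    """Trace of the ridge hat matrix H = X (X'X + lam*I)^{-1} X',
    i.e. the effective degrees of freedom of the penalized fit.
    At lam = 0 (full-rank X) this equals the number of columns of X;
    it decreases monotonically as the penalty grows."""
    p = X.shape[1]
    return np.trace(X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T))
```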

16.
The purpose of this study is to highlight the application of sparse logistic regression models in predicting tumour pathological subtypes based on lung cancer patients' genomic information. We consider sparse logistic regression models to deal with the high dimensionality of, and correlation between, genomic regions. In the hierarchical likelihood (HL) method, the random effects are assumed to follow a normal distribution whose variance is in turn assumed to follow a gamma distribution. This formulation includes the ridge and lasso penalties as special cases. We extend the HL penalty to include a ridge penalty (called ‘HLnet’), following the same principle as the elastic net penalty, which is constructed from the lasso penalty. The results indicate that the HL penalty creates sparser estimates than the lasso penalty with comparable prediction performance, while the HLnet and elastic net penalties have the best prediction performance on real data. We illustrate the methods in a lung cancer study.

17.
As a flexible alternative to the Cox model, the accelerated failure time (AFT) model assumes that the event time of interest depends on the covariates through a regression function. The AFT model with non‐parametric covariate effects is investigated, when variable selection is desired along with estimation. Formulated in the framework of the smoothing spline analysis of variance model, the proposed method, based on the Stute estimate (Stute, 1993 [Consistent estimation under random censorship when covariables are present, J. Multivariate Anal. 45, 89–103]), can achieve a sparse representation of the functional decomposition by utilizing a reproducing kernel Hilbert norm penalty. Computational algorithms and theoretical properties of the proposed method are investigated. The finite sample performance of the proposed approach is assessed via simulation studies. The primary biliary cirrhosis data is analyzed for demonstration.

18.
The smooth integration of counting and absolute deviation (SICA) penalty has been demonstrated theoretically and practically to be effective for non-convex penalized variable selection. However, solving the non-convex optimization problem associated with the SICA penalty when the number of variables exceeds the sample size remains challenging due to the singularity at the origin and the non-convexity of the SICA penalty function. In this paper, we develop an efficient and accurate alternating direction method of multipliers with continuation algorithm for solving the SICA-penalized least squares problem in high dimensions. We establish the convergence property of the proposed algorithm under some mild regularity conditions and study the corresponding Karush–Kuhn–Tucker optimality condition. A high-dimensional Bayesian information criterion is developed to select the optimal tuning parameters. We conduct extensive simulation studies to evaluate the efficiency and accuracy of the proposed algorithm, while its practical usefulness is further illustrated with a high-dimensional microarray study.
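The SICA penalty referred to above is commonly written in the literature as a transformed L1 penalty that interpolates between the L0 and L1 penalties; the sketch below uses that common form (assumed here, since the abstract does not state the formula):

```python
import numpy as np

def sica_penalty(t, lam, a):
    """SICA penalty in its commonly cited form:

        p(t; lam, a) = lam * (a + 1) * |t| / (a + |t|)

    As a -> 0+ it approaches the L0 penalty lam * 1{t != 0}
    (the "counting" part); as a -> infinity it approaches the L1
    penalty lam * |t| (the "absolute deviation" part).  The kink at
    the origin is the singularity mentioned in the abstract.
    """
    at = np.abs(t)
    return lam * (a + 1.0) * at / (a + at)
```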

19.
In Wu and Zen (1999), a linear model selection procedure based on M-estimation is proposed, which includes many classical model selection criteria as special cases, and the selection procedure is shown to be strongly consistent for a variety of penalty functions. In this paper, we investigate its small-sample performance for some choices of fixed penalty functions. The performance varies with the choice of penalty; hence, a randomized penalty based on the observed data is proposed, which preserves the consistency property and provides improved performance over any fixed choice of penalty function.

20.
Order selection is an important step in the application of finite mixture models. Classical methods such as AIC and BIC discourage complex models with a penalty directly proportional to the number of mixing components. In contrast, Chen and Khalili propose to link the penalty to two types of overfitting. In particular, they introduce a regularization penalty to merge similar subpopulations in a mixture model, where the shrinkage idea of regularized regression is seamlessly employed. However, the new method requires an effective and efficient algorithm. When the popular expectation-maximization (EM) algorithm is used, we need to maximize a nonsmooth and nonconcave objective function in the M-step, which is computationally challenging. In this article, we show that such an objective function can be transformed into a sum of univariate auxiliary functions. We then design an iterative thresholding descent algorithm (ITD) to efficiently solve the associated optimization problem. Unlike many existing numerical approaches, the new algorithm leads to sparse solutions and thereby avoids undesirable ad hoc steps. We establish the convergence of the ITD and further assess its empirical performance using both simulations and real data examples.
