Similar Documents
A total of 20 similar documents were found (search time: 31 ms).
1.
We construct a specific form of piecewise distortion function that distorts a random risk to its expectile. After analyzing this kind of distortion function, we define a class of distortion functions generated from random variables. A consistent estimator of the expectile distortion parameter is obtained by the maximum empirical likelihood method. The expectile distortion not only inherits the good properties of concave distortion measures but also has advantages of its own. Accordingly, we discuss the potential uses of this measure and propose a new premium principle based on its non-self form.
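As a concrete anchor for the quantity the distortion is built to reproduce, here is a minimal Python sketch (our own, not the paper's construction) computing a sample tau-expectile from its first-order condition; the function name and parameter values are illustrative.

```python
# Minimal sketch: the tau-expectile solves
#   tau * E[(X - e)_+] = (1 - tau) * E[(e - X)_+],
# found here by root-finding on a sample. Not the paper's
# distortion-function construction; names and values are ours.
import numpy as np
from scipy.optimize import brentq

def expectile(x, tau=0.9):
    x = np.asarray(x, dtype=float)
    f = lambda e: tau * np.mean(np.maximum(x - e, 0)) \
        - (1 - tau) * np.mean(np.maximum(e - x, 0))
    return brentq(f, x.min(), x.max())  # f is decreasing, so the root is unique

losses = np.random.default_rng(0).lognormal(size=10_000)
print(expectile(losses, tau=0.9))  # a candidate premium under this measure
```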

2.
During follow-up, patients with cancer can experience several types of recurrent events and can also die. Over the last decades, several joint models have been proposed to handle recurrent events with a dependent terminal event. Most of them require the proportional hazards assumption, which may be violated over a long follow-up. We propose a joint frailty model for two types of recurrent events and a dependent terminal event that accounts for dependencies between events and allows time-varying coefficients. To this end, regression splines are used to model the time-varying coefficients. Baseline hazard functions (BHF) are estimated with piecewise constant functions or with cubic M-spline functions. Parameters are estimated by maximum likelihood, and likelihood ratio tests are performed to test the time dependency and the statistical association of the covariates. The model is motivated by breast cancer data with a maximum follow-up of nearly 20 years.
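To make the regression-spline device concrete, here is a minimal sketch (assuming SciPy >= 1.8 for BSpline.design_matrix; the knot placement and coefficients are illustrative, not fitted) of a time-varying coefficient beta(t) over a 20-year follow-up.

```python
# Sketch of the time-varying coefficient device only (model fitting omitted):
# beta(t) = B(t) @ gamma is linear in the spline coefficients gamma.
import numpy as np
from scipy.interpolate import BSpline

# clamped cubic knots over a ~20-year follow-up (illustrative placement)
knots = np.concatenate(([0.0] * 4, [5.0, 10.0, 15.0], [20.0] * 4))
gamma = np.array([0.8, 0.6, 0.2, -0.1, -0.3, -0.4, -0.4])  # 7 = len(knots) - 3 - 1
times = np.linspace(0.0, 19.9, 5)
B = BSpline.design_matrix(times, knots, 3).toarray()  # SciPy >= 1.8
print(B @ gamma)  # a covariate effect that decays over follow-up
```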

3.
Non-parametric Bayesian Estimation of a Spatial Poisson Intensity
A method introduced by Arjas & Gasbarra (1994) and later modified by Arjas & Heikkinen (1997) for the non-parametric Bayesian estimation of an intensity on the real line is generalized to cover spatial processes. The method is based on a model approximation in which the approximating intensities have the structure of a piecewise constant function. Random step functions on the plane are generated using Voronoi tessellations of random point patterns. Smoothing between nearby intensity values is applied by means of a Markov random field prior, in the spirit of Bayesian image analysis. The performance of the method is illustrated in examples with both real and simulated data.
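A minimal sketch of the piecewise constant approximation (our toy, with illustrative levels): intensity values are constant on the Voronoi cells of a random point pattern, and cell membership reduces to a nearest-generator lookup.

```python
# Piecewise-constant intensity on Voronoi cells: the cell containing a
# location is found by nearest-generator search, which is equivalent to
# the Voronoi tessellation. Gamma levels stand in for a prior draw.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
generators = rng.uniform(0.0, 1.0, size=(30, 2))  # random point pattern
levels = rng.gamma(2.0, 1.0, size=30)             # one intensity level per cell
tree = cKDTree(generators)

def intensity(xy):
    _, cell = tree.query(xy)   # index of the Voronoi cell containing xy
    return levels[cell]

print(intensity(rng.uniform(0.0, 1.0, size=(5, 2))))
```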

4.
In this work, we derive the copulas of vectors obtained from so-called chaotic stochastic processes, which are defined by iterating certain piecewise monotone functions of the interval [0, 1] on some initial random variable. We study some of their properties and present examples. Since these copulas often lack closed-form expressions, we provide a general approximation method that converges uniformly to the true copula. Our results cover a wide class of processes, including the Manneville–Pomeau processes. The general theory is applied to parametric estimation in certain chaotic processes, and a Monte Carlo simulation study is also presented.
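For intuition, here is a toy sketch (ours) with the tent map, a piecewise monotone map of [0, 1] that preserves the uniform distribution, so the empirical joint CDF of the pair (initial value, iterated value) directly approximates the copula.

```python
# Empirical copula of (X_0, T^3 X_0) for the tent map. Because the uniform
# law is invariant under the map, both marginals are uniform and the joint
# CDF is itself the copula; this mimics the paper's uniform approximation.
import numpy as np

def tent(x):
    return np.where(x < 0.5, 2.0 * x, 2.0 - 2.0 * x)

rng = np.random.default_rng(2)
x0 = rng.uniform(size=100_000)
xn = x0.copy()
for _ in range(3):  # three iterations of the piecewise monotone map
    xn = tent(xn)

def copula(a, b):
    return np.mean((x0 <= a) & (xn <= b))

print(copula(0.3, 0.7))
```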

5.
This article surveys and exposits maximum likelihood techniques for estimating consumer demand functions when budget constraints are piecewise linear. Consumer demand functions are formally derived under such constraints, and it is shown that the functions are themselves nonlinear as a result. The econometric problems in estimating such functions are laid out, and the importance of the stochastic specification is stressed, in particular the specification of both unobserved heterogeneity of preferences and measurement error. Econometric issues in estimation and testing are discussed, and the results of the studies conducted to date are surveyed.
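A toy illustration (our own, not one of the surveyed estimators) of why such demand functions are nonlinear: with a two-bracket tax the budget set is piecewise linear, and optimal hours either lie on a segment or bunch at the kink.

```python
# Toy labor-supply rule under a two-bracket tax: desired hours are linear
# in the net wage on each segment, but the observed demand function is
# nonlinear because of segment selection and bunching at the kink.
def hours_demanded(wage, kink, t_low, t_high, beta):
    h_low = beta * wage * (1.0 - t_low)    # desired hours at the lower rate
    if h_low <= kink:
        return h_low                        # interior on the first segment
    h_high = beta * wage * (1.0 - t_high)  # desired hours at the higher rate
    if h_high >= kink:
        return h_high                       # interior on the second segment
    return kink                             # bunching at the kink point

print(hours_demanded(wage=20.0, kink=30.0, t_low=0.2, t_high=0.4, beta=2.0))  # 30.0
```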

6.
We establish weak and strong posterior consistency of the Gaussian process priors studied by Lenk [1988. The logistic normal distribution for Bayesian, nonparametric, predictive densities. J. Amer. Statist. Assoc. 83 (402), 509–516] for density estimation. Weak consistency is related to the support of a Gaussian process in the sup-norm topology, which is explicitly identified for many covariance kernels. In fact, we show that this support is the space of all continuous functions when the usual covariance kernels are chosen and an appropriate prior is used on the smoothing parameters of the covariance kernel. We then show that a large class of Gaussian process priors achieve weak as well as strong posterior consistency (under some regularity conditions) at true densities that are either continuous or piecewise continuous.
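A minimal sketch of Lenk's logistic-normal construction on a grid (the kernel, length-scale and grid are illustrative): a Gaussian process draw is exponentiated and normalized into a density, the object whose posterior consistency is studied.

```python
# One draw from a logistic-normal-type density prior: f(x) proportional to
# exp(W(x)) for a Gaussian process W with a squared-exponential kernel.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 200)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2)
w = rng.multivariate_normal(np.zeros(200), K + 1e-8 * np.eye(200))
f = np.exp(w)
f /= f.sum() * (x[1] - x[0])        # normalize to a density on [0, 1]
print(f.sum() * (x[1] - x[0]))      # ~1.0
```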

7.
In clinical trials, missing data commonly arise through nonadherence to the randomized treatment or to study procedures. For trials in which recurrent event endpoints are of interest, conventional analyses using the proportional intensity model or the count model assume that the data are missing at random, an assumption that cannot be tested using the observed data alone. Thus, sensitivity analyses are recommended. We implement control-based multiple imputation as a sensitivity analysis for recurrent event data. We model the recurrent events using a piecewise exponential proportional intensity model with frailty and sample the parameters from the posterior distribution. We impute the number of events after dropout and correct the variance estimation using a bootstrap procedure. We apply the method to data from a sitagliptin study.
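A minimal sketch of the control-based (copy-reference) imputation step (interval cut points, rates and the frailty draw are illustrative): post-dropout event counts are drawn from a piecewise exponential intensity evaluated with control-arm rates.

```python
# Impute the number of events between dropout and study end from a
# piecewise-constant (piecewise exponential) control-arm intensity,
# scaled by a subject-level frailty draw.
import numpy as np

rng = np.random.default_rng(4)
cuts = np.array([0.0, 0.5, 1.0, 2.0])   # interval boundaries in years
rates = np.array([1.2, 0.8, 0.5])       # control-arm rate in each interval
frailty = rng.gamma(2.0, 0.5)           # subject-level frailty

def impute_events(dropout, end):
    lo = np.clip(cuts[:-1], dropout, end)
    hi = np.clip(cuts[1:], dropout, end)
    exposure = np.maximum(hi - lo, 0.0)            # time at risk per interval
    return rng.poisson(frailty * np.sum(rates * exposure))

print(impute_events(dropout=0.7, end=2.0))
```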

8.
Summary.  Hansen, Kooperberg and Sardy introduced a family of continuous, piecewise linear functions defined over adaptively selected triangulations of the plane as a general approach to statistical modelling of bivariate densities and of regression and hazard functions. These triograms enjoy a natural affine equivariance that offers distinct advantages over the competing tensor product methods more commonly used in statistical applications. Triograms employ basis functions consisting of linear 'tent functions' defined with respect to a triangulation of a given planar domain. As in knot selection for univariate splines, Hansen and colleagues adopted the regression spline approach of Stone: vertices of the triangulation are introduced or removed sequentially in an effort to balance fidelity to the data and parsimony. We explore a smoothing spline variant of the triogram model based on a roughness penalty adapted to its piecewise linear structure. We show that the proposed roughness penalty may be interpreted as a total variation penalty on the gradient of the fitted function. The methods are illustrated with real and artificial examples, including an application to estimating quantile surfaces of land value in the Chicago metropolitan area.
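A minimal sketch of the triogram building block (ours): SciPy's LinearNDInterpolator is exactly a continuous piecewise linear surface on a Delaunay triangulation, i.e. the tent-function expansion with the data points as vertices.

```python
# Continuous piecewise-linear surface over a Delaunay triangulation of the
# plane: the tent-function basis expansion, with illustrative vertex heights.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(5)
vertices = rng.uniform(0.0, 1.0, size=(40, 2))
heights = np.sin(4.0 * vertices[:, 0]) + vertices[:, 1]
triogram = LinearNDInterpolator(vertices, heights)
print(triogram(0.5, 0.5))   # evaluate the surface at an interior point
```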

9.
In this paper, we consider partially linear additive models with an unknown link function, which include single-index models and additive models as special cases. We use the polynomial spline method to estimate the unknown link function as well as the component functions in the additive part. We establish that the convergence rates for all nonparametric functions are the same as in one-dimensional nonparametric regression. To obtain a faster rate for the parametric part, we need to define an appropriate 'projection' that is more complicated than the one defined previously for partially linear additive models. Compared to previous approaches, a distinct advantage of our approach in implementation is that estimation directly reduces to estimation in a single-index model, so it can handle much larger dimensional problems than previous approaches for additive models with unknown link functions. Simulations and a real dataset are used to illustrate the proposed model.
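As a small illustration of the polynomial spline ingredient (our toy fit, not the paper's estimator; SciPy >= 1.8 assumed), one component function can be estimated by least squares on a cubic B-spline basis.

```python
# Fit one nonparametric component by regressing on a cubic B-spline basis.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(size=300))
y = np.sin(2.0 * np.pi * x) + rng.normal(scale=0.3, size=300)
knots = np.concatenate(([0.0] * 4, np.linspace(0.1, 0.9, 8), [1.0] * 4))
B = BSpline.design_matrix(x, knots, 3).toarray()   # SciPy >= 1.8
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
print((B @ coef)[:5])   # spline estimate of the component function
```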

10.
ABSTRACT

We study the estimation of a hazard rate function based on censored data by a non-linear wavelet method. We provide an asymptotic formula for the mean integrated squared error (MISE) of nonlinear wavelet-based hazard rate estimators under randomly censored data. We show that, when the underlying hazard rate function and censoring distribution function are only piecewise smooth, this MISE formula retains the same expansion as that available for analogous kernel estimators under full smoothness, a property the kernel estimators themselves lose in the piecewise smooth setting. In addition, we establish asymptotic normality of the nonlinear wavelet estimator.
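For flavour, here is a sketch of the nonlinear (thresholded) wavelet step using PyWavelets (assumed installed; the signal, wavelet and threshold are illustrative, and the paper's estimator targets a hazard rate under censoring).

```python
# Hard-threshold the detail coefficients of a noisy piecewise-smooth signal;
# thresholding is what makes the wavelet estimator "nonlinear".
import numpy as np
import pywt

rng = np.random.default_rng(7)
truth = np.where(np.linspace(0.0, 1.0, 256) < 0.5, 1.0, 2.5)  # a jump, as in
noisy = truth + rng.normal(scale=0.3, size=256)               # piecewise smoothness
coeffs = pywt.wavedec(noisy, 'db4', level=4)
coeffs = [coeffs[0]] + [pywt.threshold(c, value=0.5, mode='hard')
                        for c in coeffs[1:]]
estimate = pywt.waverec(coeffs, 'db4')
print(estimate[:5])
```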

11.
This paper focuses on efficient estimation, optimal rates of convergence and effective algorithms in the partly linear additive hazards regression model with current status data. We use polynomial splines to estimate both the cumulative baseline hazard function, subject to a monotonicity constraint, and the nonparametric regression functions, with no such constraint. We propose simultaneous sieve maximum likelihood estimation of the regression and nuisance parameters and show that the resulting estimator of the regression parameter vector is asymptotically normal and achieves the semiparametric information bound. In addition, we show that the rates of convergence for the estimators of the nonparametric functions are optimal. We implement the proposed estimation through a backfitting algorithm on generalized linear models. We conduct simulation studies to examine the finite-sample performance of the proposed method and present an analysis of renal function recovery data for illustration.
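A toy sketch of the monotone sieve idea (ours): represent the cumulative baseline hazard as a nonnegative combination of monotone ramp functions, so monotonicity holds by construction; nonnegative least squares stands in for the sieve MLE here.

```python
# Monotone fit of a cumulative hazard: nonnegative coefficients on
# increasing (ramp) basis functions guarantee a nondecreasing estimate.
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0.0, 1.0, 100)
ramps = np.maximum(t[:, None] - np.linspace(0.0, 0.8, 6)[None, :], 0.0)
target = t ** 2                      # "true" cumulative baseline hazard
coef, _ = nnls(ramps, target)        # coef >= 0  =>  fit is monotone
Lambda_hat = ramps @ coef
print(Lambda_hat[::25])
```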

12.
The hazard function plays an important role in reliability and survival studies since it describes the instantaneous risk of failure of items at a time point, given that they have not failed before. In some real-life applications, abrupt changes in the hazard function are observed due to overhauls, major operations or specific maintenance activities. In such situations it is of interest to detect the location where such a change occurs and to estimate its size. In this paper we consider the problem of estimating a single change point in a piecewise constant hazard function when the observed variables are subject to random censoring. We suggest an estimation procedure based on certain structural properties and on least squares ideas. A simulation study compares the performance of this estimator with two estimators available in the literature: an estimator based on a functional of the Nelson-Aalen estimator and a maximum likelihood estimator. The proposed least squares estimator turns out to be less biased than the other two estimators, but has a larger variance. We illustrate the estimation method on some real data sets.
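A toy sketch of change-point estimation in a piecewise constant hazard (ours: uncensored data and a profile-likelihood scan, whereas the paper's proposal is least-squares based and handles censoring).

```python
# Simulate a hazard that drops from 2.0 to 0.5 at t = 0.5, then scan
# candidate change points, profiling out the two hazard levels.
import numpy as np

rng = np.random.default_rng(8)
u = rng.uniform(size=2000)
t = np.where(u > np.exp(-1.0),                 # S(0.5) = exp(-2 * 0.5)
             -np.log(u) / 2.0,                 # failure before the change
             0.5 + (-np.log(u) - 1.0) / 0.5)   # failure after the change

def profile_loglik(tau):
    d1, d2 = np.sum(t <= tau), np.sum(t > tau)
    e1 = np.sum(np.minimum(t, tau))            # exposure before tau
    e2 = np.sum(np.maximum(t - tau, 0.0))      # exposure after tau
    return d1 * np.log(d1 / e1) + d2 * np.log(d2 / e2)

grid = np.linspace(0.1, 1.5, 141)
print(grid[np.argmax([profile_loglik(g) for g in grid])])  # close to 0.5
```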

13.
A method of estimating a variety of curves by a sequence of piecewise polynomials is proposed, motivated by a Bayesian model and an appropriate summary of the resulting posterior distribution. A joint distribution is set up over both the number and the positions of the knots defining the piecewise polynomials. Throughout, we use reversible jump Markov chain Monte Carlo methods to compute the posteriors. The methodology has been successful in giving good estimates for 'smooth' functions (i.e. continuous and differentiable) as well as for functions which are not differentiable, and perhaps not even continuous, at a finite number of points. The methodology is extended to deal with generalized additive models.
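A minimal sketch of the prior object only (ours, in the piecewise constant special case; the reversible jump sampler itself is beyond a few lines): draw the number of knots, their positions and the piece levels, and evaluate the resulting function.

```python
# One draw from a simple prior over piecewise-constant functions with a
# random number of random knots (the degree-0 case of piecewise polynomials).
import numpy as np

rng = np.random.default_rng(9)
k = rng.poisson(4) + 1                         # prior on the number of knots
knots = np.sort(rng.uniform(0.0, 1.0, size=k))
levels = rng.normal(size=k + 1)                # one level per piece

def f(x):
    return levels[np.searchsorted(knots, x)]   # locate the piece containing x

print(f(np.linspace(0.0, 1.0, 5)))
```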

14.
The Barrodale and Roberts algorithm for least absolute value (LAV) regression and the algorithm proposed by Bartels and Conn both have the advantage that they are often able to skip across points at which the conventional simplex-method algorithms for LAV regression would be required to carry out an (expensive) pivot operation.

We indicate here that this advantage extends, in the Bartels-Conn approach, to a wider class of problems: the minimization of piecewise linear functions. We show how LAV regression, restricted LAV regression, general linear programming and least maximum absolute value regression can all be easily expressed as piecewise linear minimization problems.
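As one instance of the reductions listed above, here is a hedged sketch (ours) expressing LAV regression as a linear program, solved with SciPy's generic linprog rather than the specialized pivoting algorithms.

```python
# LAV regression min_b sum_i |y_i - x_i'b| as an LP: introduce residual
# bounds u_i >= 0 with -u_i <= y_i - x_i'b <= u_i and minimize sum(u).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(10)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(2, size=50)   # heavy-tailed noise

n, p = X.shape
c = np.concatenate([np.zeros(p), np.ones(n)])           # objective: sum of u
A = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])       #  Xb - u <= y
b = np.concatenate([y, -y])                             # -Xb - u <= -y
res = linprog(c, A_ub=A, b_ub=b,
              bounds=[(None, None)] * p + [(0, None)] * n)
print(res.x[:p])   # LAV coefficient estimates, close to (1, 2)
```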

15.
ABSTRACT

We aim at analysing geostatistical and areal data observed over irregularly shaped spatial domains and having a distribution within the exponential family. We propose a generalized additive model that accounts for spatially varying covariate information. The model is fitted by maximizing a penalized log-likelihood function, with a roughness penalty term that involves a differential quantity of the spatial field, computed over the domain of interest. Efficient estimation of the spatial field is achieved by resorting to the finite element method, which provides a basis of piecewise polynomial surfaces. The proposed model is illustrated by an application to the study of criminality in the city of Portland, OR, USA.
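To convey the penalized log-likelihood idea without the finite element machinery, here is a one-dimensional stand-in (ours): a Gaussian fit where the roughness penalty is a discrete second-difference operator, playing the role of the differential penalty on the spatial field.

```python
# Penalized least squares with a discrete Laplacian penalty: minimize
# ||y - f||^2 + lam * ||D f||^2, solved in closed form.
import numpy as np

rng = np.random.default_rng(11)
x = np.linspace(0.0, 1.0, 100)
y = np.sin(3.0 * x) + rng.normal(scale=0.2, size=100)
D = np.diff(np.eye(100), n=2, axis=0)     # second-difference (roughness) operator
lam = 5.0
f_hat = np.linalg.solve(np.eye(100) + lam * D.T @ D, y)
print(f_hat[:5])
```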

16.
This paper considers the problem of selecting optimal bandwidths for variable (sample-point adaptive) kernel density estimation. A data-driven variable bandwidth selector is proposed, based on the idea of approximating the log-bandwidth function by a cubic spline that is optimized with respect to a cross-validation criterion. The proposed method can be interpreted as a selector for either integrated squared error (ISE) or mean integrated squared error (MISE) optimal bandwidths, which prompts reflection on some of the differences between ISE and MISE as error criteria for variable kernel estimation. Results from simulation studies indicate that the proposed method outperforms a fixed kernel estimator (in terms of ISE) when the target density has a combination of sharp modes and regions of smooth undulation. Moreover, detailed data analyses suggest that the gains in ISE may understate the improvements in visual appeal obtained with the proposed variable kernel estimator. These numerical studies also show that the proposed estimator outperforms existing variable kernel density estimators implemented with piecewise constant bandwidth functions.
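A minimal sketch of the sample-point estimator itself (ours; the spline values below are fixed by hand, whereas the paper optimizes them by cross-validation): bandwidths at each observation come from a cubic spline on the log-bandwidth scale.

```python
# Variable (sample-point) kernel density estimate with bandwidths given by
# exp of a cubic spline through a few anchor log-bandwidths.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(12)
data = np.concatenate([rng.normal(0.0, 0.1, 300), rng.normal(3.0, 1.0, 300)])
anchors = np.linspace(data.min(), data.max(), 5)
log_h = CubicSpline(anchors, np.log([0.05, 0.1, 0.3, 0.5, 0.5]))
h = np.exp(log_h(data))                     # one bandwidth per sample point

def density(x):
    z = (x[:, None] - data[None, :]) / h[None, :]
    return np.mean(np.exp(-0.5 * z ** 2) / (h * np.sqrt(2.0 * np.pi)), axis=1)

print(density(np.array([0.0, 3.0])))   # sharper mode at 0, smoother at 3
```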

17.
We study estimation and hypothesis testing in single-index panel data models with individual effects. By regressing the individual effects on the covariates linearly, we convert the estimation problem in single-index panel data models into that in partially linear single-index models. The conversion is valid regardless of whether the individual effects are random or fixed. We propose an estimating equation approach, which has a desirable double robustness property. We show that our method is applicable in single-index panel data models with heterogeneous link functions. We further design a chi-squared test to evaluate whether the individual effects are random or fixed. We conduct simulations to demonstrate the finite sample performance of the method and a data analysis to illustrate its usefulness.
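For intuition about the single-index ingredient, here is a crude toy (ours, not the paper's estimating-equation method): recover a two-dimensional index direction by scanning angles and minimizing the residual sum of squares of a moving-average smoother along the index.

```python
# Recover theta in y = g(X @ theta) + eps by a grid search over directions.
import numpy as np

rng = np.random.default_rng(13)
X = rng.normal(size=(500, 2))
theta0 = np.array([0.6, 0.8])                       # true unit-norm index
y = np.sin(X @ theta0) + rng.normal(scale=0.1, size=500)

def rss(angle):
    idx = X @ np.array([np.cos(angle), np.sin(angle)])
    order = np.argsort(idx)
    ys = y[order]
    smooth = np.convolve(ys, np.ones(15) / 15, mode='same')  # crude smoother
    return np.sum((ys - smooth) ** 2)

angles = np.linspace(0.0, np.pi, 180)
best = angles[np.argmin([rss(a) for a in angles])]
print(np.cos(best), np.sin(best))   # close to (0.6, 0.8) up to sign
```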

18.
We propose an estimation method that incorporates the correlation/covariance structure between repeated measurements in covariate-adjusted regression models for distorted longitudinal data. In this distorted data setting, neither the longitudinal response nor the (possibly time-varying) predictors are directly observable; the unobserved response and predictors are assumed to be distorted/contaminated by unknown functions of a common observable confounder. The proposed methodology adjusts for the distortion effects both in the estimation of the covariance structure and in the regression parameters, using generalized least squares. The finite-sample performance of the proposed estimators is studied numerically by means of simulations, and their consistency and convergence rates are also established. The method is illustrated with an application to data from a longitudinal study of cognitive and social development in children.
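A minimal sketch of the distortion-adjustment idea alone (ours; the paper additionally models the covariance structure and uses generalized least squares): a bin-average over the confounder estimates the multiplicative distortion, which is then divided out.

```python
# Observed response = psi(U) * latent response, with E[psi(U)] = 1.
# Estimate psi by binned means over U and undo the distortion.
import numpy as np

rng = np.random.default_rng(14)
u = rng.uniform(size=2000)                    # observable confounder
y_latent = rng.normal(2.0, 1.0, size=2000)
y_obs = (0.7 + 0.6 * u) * y_latent            # unknown distortion psi(U)

bins = np.digitize(u, np.linspace(0.0, 1.0, 11)) - 1   # 10 bins on [0, 1]
psi_hat = np.array([y_obs[bins == b].mean() for b in range(10)]) / y_obs.mean()
y_adj = y_obs / psi_hat[bins]
print(y_adj.mean())   # close to 2.0, the latent mean
```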

19.
This article introduces a fast cross-validation algorithm that performs wavelet shrinkage on data sets of arbitrary size and irregular design and simultaneously selects good values of the primary resolution and the number of vanishing moments. We demonstrate the utility of our method by suggesting alternative estimates of the conditional mean of the well-known Ethanol data set. Our alternative estimates outperform the Kovac-Silverman method with a global variance estimate by 25% because of the careful selection of the number of vanishing moments and the primary resolution, and they are simpler than, and competitive with, results based on the Kovac-Silverman algorithm equipped with a local variance estimate. We include a detailed simulation study illustrating how our cross-validation method successfully picks good values of the primary resolution and number of vanishing moments for unknown functions, based on Walsh functions (to test the response to changing primary resolution) and on piecewise polynomials with zero or one derivative (to test the response to function smoothness).
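A toy of the cross-validation flavour only (ours: two-fold odd/even cross-validation over a soft threshold with PyWavelets, whereas the article also tunes primary resolution and vanishing moments on irregular designs).

```python
# Pick a wavelet shrinkage threshold by predicting the even-indexed
# observations from the denoised odd-indexed ones, and vice versa.
import numpy as np
import pywt

rng = np.random.default_rng(15)
x = np.linspace(0.0, 1.0, 512)
y = np.sign(np.sin(4.0 * np.pi * x)) + rng.normal(scale=0.4, size=512)

def denoise(sig, thr):
    c = pywt.wavedec(sig, 'db6', level=4)
    c = [c[0]] + [pywt.threshold(d, thr, mode='soft') for d in c[1:]]
    return pywt.waverec(c, 'db6')

def cv_score(thr):
    odd, even = y[1::2], y[0::2]
    return (np.mean((denoise(odd, thr) - even) ** 2)
            + np.mean((denoise(even, thr) - odd) ** 2))

grid = np.linspace(0.05, 1.5, 30)
print(grid[np.argmin([cv_score(t) for t in grid])])
```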

20.
We consider a semi-parametric approach to the joint segmentation of multiple series sharing a common functional part. We propose an iterative procedure based on dynamic programming for the segmentation part and Lasso estimators for the functional part. Our Lasso procedure, based on the dictionary approach, allows us to estimate both smooth functions and functions with local irregularities, which permits more flexibility than previously proposed methods. This yields a better estimation of the functional part and improvements in the segmentation. The performance of our method is assessed using simulated data and real data from agricultural and geodetic studies. Our estimation procedure proves to be a reliable tool for detecting changes and for obtaining an interpretable estimate of the functional part of the model in terms of known functions.
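A compact sketch of the dynamic programming half (ours; the Lasso step over a function dictionary is omitted): exact least-squares segmentation of one series into K segments.

```python
# Best K-segment piecewise-constant fit by dynamic programming over
# cumulative sums; cost(i, j) is the SSE of y[i:j] around its mean.
import numpy as np

def dp_segment(y, K):
    n = len(y)
    csum, csum2 = np.cumsum(np.r_[0.0, y]), np.cumsum(np.r_[0.0, y ** 2])

    def cost(i, j):
        s, s2, m = csum[j] - csum[i], csum2[j] - csum2[i], j - i
        return s2 - s * s / m

    C = np.full((K + 1, n + 1), np.inf)
    C[0, 0] = 0.0
    back = np.zeros((K + 1, n + 1), dtype=int)
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            cands = [C[k - 1, i] + cost(i, j) for i in range(k - 1, j)]
            i_best = int(np.argmin(cands))
            back[k, j] = i_best + (k - 1)
            C[k, j] = cands[i_best]
    cps, j = [], n          # backtrack the segment start points
    for k in range(K, 0, -1):
        j = back[k, j]
        cps.append(j)
    return sorted(cps)[1:]  # drop the leading 0

y = np.r_[np.zeros(50), 2.0 * np.ones(50)] + np.random.default_rng(16).normal(0, 0.3, 100)
print(dp_segment(y, K=2))   # change point close to 50
```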
