Similar Articles
20 similar articles found (search time: 46 ms)
1.
The Kaplan–Meier (KM) estimator is ubiquitously used for estimating survival functions, but it provides only a discrete approximation at the observation times and does not deliver a proper distribution if the largest observation is censored. Using KM as a starting point, we devise an empirical saddlepoint approximation‐based method for producing a smooth survival function that is unencumbered by choice of tuning parameters. The procedure inverts the moment generating function (MGF) defined through a Riemann–Stieltjes integral with respect to an underlying mixed probability measure consisting of the discrete KM mass function weights and an absolutely continuous exponential right‐tail completion. Uniform consistency and weak and strong convergence results are established for the resulting MGF and its derivatives, thus validating their usage as inputs into the saddlepoint routines. Relevant asymptotic results are also derived for the density and distribution function estimates. The performance of the resulting survival approximations is examined in simulation studies, which demonstrate a favourable comparison with the log spline method (Kooperberg & Stone, 1992) in small sample settings. For smoothing survival functions we argue that the methodology has no immediate competitors in its class, and we illustrate its application on several real data sets. The Canadian Journal of Statistics 47: 238–261; 2019 © 2019 Statistical Society of Canada
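The discrete product-limit estimator that this paper takes as its starting point can be sketched in a few lines. This is a generic illustration of the KM estimator only, not the paper's saddlepoint smoothing procedure, and the data are invented.

```python
# Minimal sketch of the Kaplan-Meier product-limit estimator; data are illustrative.
import numpy as np

def kaplan_meier(times, events):
    """Return distinct event times and the KM survival estimate at each."""
    order = np.argsort(times)
    times = np.asarray(times)[order]
    events = np.asarray(events)[order]
    event_times, surv = [], []
    s = 1.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)               # units still at risk at t
        d = np.sum((times == t) & (events == 1))   # observed events at t
        s *= 1.0 - d / at_risk                     # product-limit update
        event_times.append(t)
        surv.append(s)
    return np.array(event_times), np.array(surv)

# small censored sample: event indicator 1 = observed, 0 = censored
t = [2, 3, 3, 5, 8, 10]
e = [1, 1, 0, 1, 0, 1]
event_times, S = kaplan_meier(t, e)
```

Note that when the largest observation is censored, the final value of `S` stays above zero, which is exactly the "not a proper distribution" defect the abstract refers to.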

2.
We propose a new summary statistic for inhomogeneous intensity‐reweighted moment stationary spatio‐temporal point processes. The statistic is defined in terms of the n‐point correlation functions of the point process, and it generalizes the J‐function when stationarity is assumed. We show that our statistic can be represented in terms of the generating functional and that it is related to the spatio‐temporal K‐function. We further discuss its explicit form under some specific model assumptions and derive ratio‐unbiased estimators. We finally illustrate the use of our statistic in practice. © 2014 Board of the Foundation of the Scandinavian Journal of Statistics

3.
Abstract

A method for obtaining bootstrap replicates for one-dimensional point processes is presented. The method involves estimating the conditional intensity of the process and computing residuals. The residuals are bootstrapped using a block bootstrap and used, together with the conditional intensity, to define the bootstrap realizations. The method is applied to the estimation of the cross-intensity function for data arising from a reaction time experiment.
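The block-bootstrap step described above, resampling a residual series in contiguous blocks to preserve local dependence, can be sketched as follows. The block length and residual series are illustrative, not taken from the paper.

```python
# Hedged sketch of a (moving-)block bootstrap for a 1-D residual series.
import numpy as np

def block_bootstrap(residuals, block_len, rng):
    """Resample contiguous blocks of the residual series, trimmed to length n."""
    n = len(residuals)
    n_blocks = int(np.ceil(n / block_len))
    # random block start positions; each block stays inside the series
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [residuals[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]

rng = np.random.default_rng(0)
resid = rng.normal(size=100)                      # illustrative residuals
boot = block_bootstrap(resid, block_len=10, rng=rng)
```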

4.
Estimators derived from the expectation‐maximization (EM) algorithm are not robust since they are based on the maximization of the likelihood function. We propose an iterative proximal‐point algorithm based on the EM algorithm to minimize a divergence criterion between a mixture model and the unknown distribution that generates the data. The algorithm estimates in each iteration the proportions and the parameters of the mixture components in two separate steps. Resulting estimators are generally robust against outliers and misspecification of the model. Convergence properties of our algorithm are studied. The convergence of the introduced algorithm is discussed on a two‐component Weibull mixture entailing a condition on the initialization of the EM algorithm in order for the latter to converge. Simulations on Gaussian and Weibull mixture models using different statistical divergences are provided to confirm the validity of our work and the robustness of the resulting estimators against outliers in comparison to the EM algorithm. An application to a dataset of velocities of galaxies is also presented. The Canadian Journal of Statistics 47: 392–408; 2019 © 2019 Statistical Society of Canada
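For context, the classical two-component Gaussian-mixture EM that the proposed divergence-based algorithm modifies can be sketched as below. This is the standard baseline only, not the paper's robust proximal-point procedure; initialization and data are illustrative.

```python
# Sketch of plain EM for a two-component Gaussian mixture (the non-robust baseline).
import numpy as np

def em_two_gaussians(x, iters=200):
    x = np.asarray(x, float)
    # crude initialization from sample quantiles and the overall spread
    mu = np.quantile(x, [0.25, 0.75])
    sd = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibilities of each component for each point
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted proportions, means and standard deviations
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sd

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)])
pi, mu, sd = em_two_gaussians(x)
```

A single gross outlier can drag one of these component means arbitrarily far, which is the lack of robustness the abstract addresses.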

5.
Two-step estimation for inhomogeneous spatial point processes
Summary. The paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second-order properties (K-function). Regression parameters are estimated using a Poisson likelihood score estimating function; in the second step, minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of the parameter estimates is established under certain mixing conditions, and we exemplify how the results may be applied in ecological studies of rainforests.

6.
Regression functions may have a change or discontinuity point in the νth derivative at an unknown location. This paper considers a method of estimating the location and the jump size of the change point based on local polynomial fits with one‐sided kernels when the design points are random. It shows that the estimator of the location of the change point achieves the rate n^{-1/(2ν+1)} when ν is even. On the other hand, when ν is odd, it converges faster than the rate n^{-1/(2ν+1)} owing to a property of one‐sided kernels. Computer simulation demonstrates the improved performance of the method over existing ones.
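The core idea is easiest to see for ν = 0, a jump in the regression function itself: scan candidate locations and compare one-sided local averages on either side. The sketch below uses a flat one-sided window rather than the paper's one-sided kernel polynomial fits, and all constants and data are illustrative.

```python
# Illustrative change-point scan for nu = 0 using one-sided local averages.
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 500))                          # random design
y = np.where(x < 0.6, 1.0, 3.0) + rng.normal(0, 0.2, 500)    # jump of size 2 at 0.6

h = 0.05                          # one-sided bandwidth
grid = np.linspace(0.1, 0.9, 161)
diffs = []
for c in grid:
    right = y[(x >= c) & (x < c + h)].mean()   # average just to the right of c
    left = y[(x < c) & (x >= c - h)].mean()    # average just to the left of c
    diffs.append(right - left)
diffs = np.array(diffs)

k = np.argmax(np.abs(diffs))
c_hat = grid[k]        # estimated change-point location
jump_hat = diffs[k]    # estimated jump size
```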

7.
In environmetrics, interest often centres around the development of models and methods for making inference on observed point patterns assumed to be generated by latent spatial or spatio‐temporal processes, which may have a hierarchical structure. In this research, motivated by the analysis of spatio‐temporal storm cell data, we generalize the Neyman–Scott parent–child process to account for hierarchical clustering. This is accomplished by allowing the parents to follow a log‐Gaussian Cox process thereby incorporating correlation and facilitating inference at all levels of the hierarchy. This approach is applied to monthly storm cell data from the Bismarck, North Dakota radar station from April through August 2003 and we compare these results to simpler cluster processes to demonstrate the advantages of accounting for both levels of correlation present in these hierarchically clustered point patterns. The Canadian Journal of Statistics 47: 46–64; 2019 © 2019 Statistical Society of Canada
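A basic Neyman–Scott parent–child simulation on the unit square conveys the structure being generalized: homogeneous Poisson parents, each with a Poisson number of Gaussian-displaced offspring. The paper's extension instead draws the parents from a log-Gaussian Cox process; all parameter values here are illustrative.

```python
# Illustrative simulation of a basic Neyman-Scott cluster process on [0,1]^2.
import numpy as np

rng = np.random.default_rng(7)
kappa, mu, sigma = 20.0, 8.0, 0.02    # parent intensity, mean offspring, cluster spread

n_parents = rng.poisson(kappa)
parents = rng.uniform(0, 1, size=(n_parents, 2))

offspring = []
for p in parents:
    n_off = rng.poisson(mu)                                   # children per parent
    offspring.append(p + rng.normal(0, sigma, size=(n_off, 2)))  # Gaussian displacement
pts = np.concatenate(offspring)
pts = pts[np.all((pts >= 0) & (pts <= 1), axis=1)]            # keep points in the window
```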

8.
In this paper, we consider a statistical estimation problem known as atomic deconvolution. Introduced in reliability, this model has a direct application when considering biological data produced by flow cytometers. From a statistical point of view, we aim at inferring the percentage of cells expressing the selected molecule and the probability distribution function associated with its fluorescence emission. We propose here an adaptive estimation procedure based on a previous deconvolution procedure introduced by Es, Gugushvili, and Spreij [(2008), ‘Deconvolution for an atomic distribution’, Electronic Journal of Statistics, 2, 265–297] and Gugushvili, Es, and Spreij [(2011), ‘Deconvolution for an atomic distribution: rates of convergence’, Journal of Nonparametric Statistics, 23, 1003–1029]. For estimating both the mixing parameter and the mixing density automatically, we use the Lepskii method based on the optimal choice of a bandwidth using a bias-variance decomposition. We then derive some convergence rates that are shown to be minimax optimal (up to some log terms) in Sobolev classes. Finally, we apply our algorithm to simulated and real biological data.

9.
Missing observations due to non‐response are commonly encountered in data collected from sample surveys. The focus of this article is on item non‐response which is often handled by filling in (or imputing) missing values using the observed responses (donors). Random imputation (single or fractional) is used within homogeneous imputation classes that are formed on the basis of categorical auxiliary variables observed on all the sampled units. A uniform response rate within classes is assumed, but that rate is allowed to vary across classes. We construct confidence intervals (CIs) for a population parameter that is defined as the solution to a smooth estimating equation with data collected using stratified simple random sampling. The imputation classes are assumed to be formed across strata. Fractional imputation with a fixed number of random draws is used to obtain an imputed estimating function. An empirical likelihood inference method under the fractional imputation is proposed and its asymptotic properties are derived. Two asymptotically correct bootstrap methods are developed for constructing the desired CIs. In a simulation study, the proposed bootstrap methods are shown to outperform traditional bootstrap methods and some non‐bootstrap competitors under various simulation settings. The Canadian Journal of Statistics 47: 281–301; 2019 © 2019 Statistical Society of Canada

10.
In this article, we propose a novel approach to fit a functional linear regression in which both the response and the predictor are functions. We consider the case where the response and the predictor processes are both sparsely sampled at random time points and are contaminated with random errors. In addition, the random times are allowed to be different for the measurements of the predictor and the response functions. The aforementioned situation often occurs in longitudinal data settings. To estimate the covariance and the cross‐covariance functions, we use a regularization method over a reproducing kernel Hilbert space. The estimate of the cross‐covariance function is used to obtain estimates of the regression coefficient function and of the functional singular components. We derive the convergence rates of the proposed cross‐covariance, regression coefficient, and singular component function estimators. Furthermore, we show that, under some regularity conditions, the estimator of the coefficient function has a minimax optimal rate. We conduct a simulation study and demonstrate the merits of the proposed method by comparing it to other existing methods in the literature. We illustrate the method with an application to a real‐world air quality dataset. The Canadian Journal of Statistics 47: 524–559; 2019 © 2019 Statistical Society of Canada

11.
With reference to a specific dataset, we consider how to perform a flexible non‐parametric Bayesian analysis of an inhomogeneous point pattern modelled by a Markov point process, with a location‐dependent first‐order term and pairwise interaction only. A priori we assume that the first‐order term is a shot noise process, and that the interaction function for a pair of points depends only on the distance between the two points and is a piecewise linear function modelled by a marked Poisson process. Simulation of the resulting posterior distribution using a Metropolis–Hastings algorithm in the ‘conventional’ way involves evaluating ratios of unknown normalizing constants. We avoid this problem by applying a recently introduced auxiliary variable technique. In the present setting, the auxiliary variable used is an example of a partially ordered Markov point process model.

12.
In this paper, we consider the problem of adaptive density or survival function estimation in an additive model defined by Z=X+Y with X independent of Y, when both random variables are non‐negative. This model is relevant, for instance, in reliability fields where we are interested in the failure time of a certain material that cannot be isolated from the system it belongs to. Our goal is to recover the distribution of X (density or survival function) through n observations of Z, assuming that the distribution of Y is known. This issue can be seen as the classical statistical problem of deconvolution that has been tackled in many cases using Fourier‐type approaches. Nonetheless, in the present case, the random variables have the particularity of being supported on the non‐negative half‐line. Exploiting this, we propose a new angle of attack by building a projection estimator with an appropriate Laguerre basis. We present upper bounds on the mean squared integrated risk of our density and survival function estimators. We then describe a non‐parametric data‐driven strategy for selecting a relevant projection space. The procedures are illustrated with simulated data and compared with the performances of a more classical deconvolution setting using a Fourier approach. Our procedure achieves faster convergence rates than Fourier methods for estimating these functions.
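The Laguerre basis referred to above consists of the orthonormal functions φ_k(x) = √2 L_k(2x) e^{−x} on [0, ∞), with L_k the Laguerre polynomials. The sketch below only illustrates a direct projection density estimate from observations of X itself; the paper's estimator additionally inverts the convolution with Y, which is not shown. Data and truncation level are illustrative.

```python
# Sketch of the orthonormal Laguerre function basis and a direct projection estimate.
import numpy as np

def laguerre_functions(x, K):
    """Return the first K orthonormal Laguerre functions at x, shape (K, len(x))."""
    x = np.asarray(x, float)
    L = np.empty((K, len(x)))
    L[0] = 1.0
    if K > 1:
        L[1] = 1.0 - 2 * x
    # three-term recurrence (k+1) L_{k+1}(u) = (2k+1-u) L_k(u) - k L_{k-1}(u), u = 2x
    for k in range(1, K - 1):
        L[k + 1] = ((2 * k + 1 - 2 * x) * L[k] - k * L[k - 1]) / (k + 1)
    return np.sqrt(2.0) * L * np.exp(-x)

rng = np.random.default_rng(3)
z = rng.exponential(1.0, 2000)        # illustrative non-negative sample
K = 8                                 # truncation level of the projection space
phi = laguerre_functions(z, K)
coef = phi.mean(axis=1)               # empirical projection coefficients a_k
grid = np.linspace(0, 5, 200)
f_hat = coef @ laguerre_functions(grid, K)   # projection density estimate on a grid
```

For the Exp(1) sample used here the true density is exactly φ_0/√2, so the first coefficient should be close to 1/√2 and the rest close to zero.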

13.
DISTRIBUTIONAL CHARACTERIZATIONS THROUGH SCALING RELATIONS
Investigated here are aspects of the relation between the laws of X and Y where X is represented as a randomly scaled version of Y. In the case that the scaling has a beta law, the law of Y is expressed in terms of the law of X. Common continuous distributions are characterized using this beta scaling law, by choosing the distribution function of Y as a weighted version of the distribution function of X, where the weight is a power function. It is shown, without any restriction on the law of the scaling, but using a one‐parameter family of weights which includes the power weights, that characterizations can be expressed in terms of known results for the power weights. Characterizations in the case where the distribution function of Y is a positive power of the distribution function of X are examined in two special cases. Finally, conditions are given for existence of inverses of the length‐bias and stationary‐excess operators.

14.
In most software reliability models which utilize the nonhomogeneous Poisson process (NHPP), the intensity function for the counting process is usually assumed to be continuous and monotone. However, on account of various practical reasons, there may exist some change points in the intensity function, and thus the assumption of a continuous and monotone intensity function may be unrealistic in many real situations. In this article, a Bayesian change-point approach using beta-mixtures for modeling the intensity function with possible change points is proposed. A hidden Markov model with non-constant transition probabilities is applied to the beta-mixture for detecting the change points of the parameters. The estimation and interpretation of the model are illustrated using the Naval Tactical Data System (NTDS) data. The proposed change-point model is also compared with competing models via marginal likelihood; it attains the highest marginal likelihood and outperforms the competing models.

15.
The author considers estimation under a Gamma process model for degradation data. The setting for degradation data is one in which n independent units, each with a Gamma process with a common shape function and scale parameter, are observed at several possibly different times. Covariates can be incorporated into the model by taking the scale parameter as a function of the covariates. The author proposes using the maximum pseudo‐likelihood method to estimate the unknown parameters. The method requires use of the Pool Adjacent Violators Algorithm. Asymptotic properties, including consistency, convergence rate and asymptotic distribution, are established. Simulation studies are conducted to validate the method and its application is illustrated using bridge beams data and carbon‐film resistors data. The Canadian Journal of Statistics 37: 102‐118; 2009 © 2009 Statistical Society of Canada
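The Gamma process degradation model above can be simulated directly: each unit accumulates independent Gamma increments whose shape parameters come from a common shape function, so the paths are nondecreasing. The shape function, scale, and observation times below are illustrative, not from the paper.

```python
# Hedged simulation sketch of Gamma-process degradation paths for n units.
import numpy as np

rng = np.random.default_rng(4)
times = np.array([0.0, 1.0, 2.5, 4.0, 6.0])   # common observation times

def shape_fn(t):
    """Illustrative common shape function Lambda(t) of the Gamma process."""
    return 2.0 * t

scale = 0.5
n_units = 50

# increments X(t_{j+1}) - X(t_j) ~ Gamma(Lambda(t_{j+1}) - Lambda(t_j), scale), independent
d_shape = np.diff(shape_fn(times))
incr = rng.gamma(d_shape, scale, size=(n_units, len(d_shape)))
paths = np.concatenate([np.zeros((n_units, 1)), np.cumsum(incr, axis=1)], axis=1)
```

With these choices E[X(t)] = Lambda(t) × scale = t, so the average path is roughly linear in t; a covariate effect would enter by letting `scale` depend on the covariates, as the abstract describes.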

16.
We establish a central limit theorem for multivariate summary statistics of nonstationary α‐mixing spatial point processes and a subsampling estimator of the covariance matrix of such statistics. The central limit theorem is crucial for establishing asymptotic properties of estimators in statistics for spatial point processes. The covariance matrix subsampling estimator is flexible and model free. It is needed, for example, to construct confidence intervals and ellipsoids based on asymptotic normality of estimators. We also provide a simulation study investigating an application of our results to estimating functions.

17.
In longitudinal studies, observation times are often irregular and subject‐specific. Frequently they are related to the outcome measure or other variables that are associated with the outcome measure but undesirable to condition upon in the model for outcome. Regression analyses that are unadjusted for outcome‐dependent follow‐up then yield biased estimates. The authors propose a class of inverse‐intensity rate‐ratio weighted estimators in generalized linear models that adjust for outcome‐dependent follow‐up. The estimators, based on estimating equations, are very simple and easily computed; they can be used under mixtures of continuous and discrete observation times. The predictors of observation times can be past observed outcomes, cumulative values of outcome‐model covariates and other factors associated with the outcome. The authors validate their approach through simulations and they illustrate it using data from a supported housing program from the US federal government.
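The bias from outcome-dependent observation, and the inverse-weighting fix, can be seen in a stripped-down cross-sectional toy example: outcomes with higher values are observed more often, the naive mean is biased upward, and weighting each observed outcome by the reciprocal of its observation probability removes the bias. This is only a caricature of the paper's inverse-intensity rate-ratio weighting (which operates on observation intensities over time); all data and the observation model are invented.

```python
# Toy illustration of inverse-probability weighting under outcome-dependent observation.
import numpy as np

rng = np.random.default_rng(5)
n = 4000
y = rng.normal(2.0, 1.0, n)               # latent outcomes, true mean 2.0
# observation probability increases with the outcome (outcome-dependent follow-up)
p_obs = 1 / (1 + np.exp(-(y - 2.0)))
seen = rng.uniform(size=n) < p_obs

naive = y[seen].mean()                    # biased upward: high outcomes over-represented
# weight each observed outcome by 1 / p_obs (Hajek-style normalized estimator)
weighted = np.sum(y[seen] / p_obs[seen]) / np.sum(1 / p_obs[seen])
```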

18.
We treat the change point problem in ergodic diffusion processes from discrete observations. Tonaki et al. (2021a) proposed adaptive tests for detecting changes in the diffusion and drift parameters in ergodic diffusion process models. When any change in the diffusion or drift parameter is detected by this or any other method, the next question to consider is where the change point is located. Therefore, we propose a method to estimate the change point of the parameter for two cases: the case where there is a change in the diffusion parameter, and the case where there is no change in the diffusion parameter but a change in the drift parameter. Furthermore, we present rates of convergence and distributional results for the change point estimators. Some examples and simulation results are also given.

19.
Abstract. We study point patterns of events that occur on a network of lines, such as road accidents recorded on a road network. Okabe and Yamada developed a ‘network K function’, analogous to Ripley's K function, for analysis of such data. However, values of the network K‐function depend on the network geometry, making interpretation difficult. In this study we propose a correction of the network K‐function that intrinsically compensates for the network geometry. This geometrical correction restores many natural and desirable properties of K, including its direct relationship to the pair correlation function. For a completely random point pattern, on any network, the corrected network K‐function is the identity. The corrected estimator is intrinsically corrected for edge effects and has approximately constant variance. We obtain exact and asymptotic expressions for the bias and variance of the corrected estimator under complete randomness. We extend these results to an ‘inhomogeneous’ network K‐function which compensates for a spatially varying intensity of points. We demonstrate applications to ecology (webs of the urban wall spider Oecobius navus) and criminology (street crime in Chicago).

20.
The weighted log‐rank estimating function has become a standard estimation method for the censored linear regression model, or the accelerated failure time model. Well established statistically, the estimator defined as a consistent root has, however, rather poor computational properties because the estimating function is neither continuous nor, in general, monotone. We propose a computationally efficient estimator through an asymptotics‐guided Newton algorithm, in which censored quantile regression methods are tailored to yield an initial consistent estimate and a consistent derivative estimate of the limiting estimating function. We also develop fast interval estimation with a new proposal for sandwich variance estimation. The proposed estimator is asymptotically equivalent to the consistent root estimator and barely distinguishable in samples of practical size. However, computation time is typically reduced by two to three orders of magnitude for point estimation alone. Illustrations with clinical applications are provided.
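A concrete member of this estimating-function class is the Gehan-weighted version for a one-covariate accelerated failure time model: U(b) = Σᵢ Σⱼ δᵢ (zᵢ − zⱼ) 1{eᵢ(b) ≤ eⱼ(b)} with residuals eᵢ(b) = log tᵢ − b zᵢ. The sketch below evaluates this step function on a grid and takes the near-root as the estimate; it is a brute-force illustration of why root-finding here is awkward, not the paper's Newton algorithm, and the data are simulated.

```python
# Sketch of the Gehan-weighted log-rank estimating function for a 1-covariate AFT model.
import numpy as np

rng = np.random.default_rng(6)
n = 300
z = rng.normal(size=n)
beta_true = 1.0
log_t = beta_true * z + rng.normal(0, 0.5, n)   # AFT model on the log scale
c = rng.normal(1.5, 1.0, n)                     # censoring times on the log scale
obs = np.minimum(log_t, c)
delta = (log_t <= c).astype(float)              # 1 = event observed, 0 = censored

def gehan_U(b):
    """Gehan estimating function: a discontinuous step function of b."""
    e = obs - b * z
    comp = (e[:, None] <= e[None, :]).astype(float)    # indicator e_i <= e_j
    return np.sum(delta[:, None] * (z[:, None] - z[None, :]) * comp)

# the estimator is a (near-)root of U; scan a grid since U is neither
# continuous nor, in general, monotone for other weight choices
grid = np.linspace(0.0, 2.0, 81)
vals = np.array([gehan_U(b) for b in grid])
b_hat = grid[np.argmin(np.abs(vals))]
```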
