Similar Documents
20 similar documents retrieved.
1.
The authors propose a two-state continuous-time semi-Markov model for an unobservable alternating binary process. A second process, observed at discrete time points, may misclassify the true state of the process of interest. To estimate the model's parameters, the authors propose a minimum Pearson chi-square type estimating approach based on approximated joint probabilities when the true process is in equilibrium. Three consecutive observations are required to provide sufficient degrees of freedom for estimation. The methodology is demonstrated on parasitic infection data with exponential and gamma sojourn time distributions.
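A minimal sketch of the estimating idea, for the exponential-sojourn (Markov) special case: the joint probabilities of three consecutive observed states are computed under equilibrium and misclassification, and a Pearson chi-square distance to the observed triple counts is minimized. Equally spaced observation times, known misclassification probabilities, and every numerical value below are assumptions for illustration only, not taken from the paper.

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

def triple_probs(a, b, delta, eps01, eps10):
    """Joint probabilities of three consecutive observed states for a two-state
    continuous-time Markov chain (exponential sojourns; rate a for 0 -> 1 and
    rate b for 1 -> 0), observed every `delta` time units through a
    misclassification mechanism with eps01 = P(observe 1 | true 0), etc."""
    e = np.exp(-(a + b) * delta)
    P = np.array([[b + a * e, a * (1 - e)],      # transition probabilities over delta
                  [b * (1 - e), a + b * e]]) / (a + b)
    pi = np.array([b, a]) / (a + b)              # equilibrium distribution of the true state
    E = np.array([[1 - eps01, eps01],            # rows: true state, columns: observed state
                  [eps10, 1 - eps10]])
    probs = {}
    for o in product((0, 1), repeat=3):          # the 8 possible observed triples
        probs[o] = sum(pi[s0] * P[s0, s1] * P[s1, s2]
                       * E[s0, o[0]] * E[s1, o[1]] * E[s2, o[2]]
                       for s0, s1, s2 in product((0, 1), repeat=3))
    return probs

def pearson_chi2(theta, counts, delta, eps01, eps10):
    """Pearson chi-square distance between observed and expected triple counts."""
    a, b = np.exp(theta)                         # optimize on the log scale to keep rates positive
    probs = triple_probs(a, b, delta, eps01, eps10)
    n = sum(counts.values())
    return sum((counts.get(o, 0) - n * p) ** 2 / (n * p) for o, p in probs.items())

# toy counts of observed triples (o_t, o_{t+1}, o_{t+2}); values are made up
counts = {(0, 0, 0): 40, (0, 0, 1): 10, (0, 1, 1): 12, (1, 1, 1): 30,
          (1, 1, 0): 11, (1, 0, 0): 9, (0, 1, 0): 4, (1, 0, 1): 4}
fit = minimize(pearson_chi2, x0=np.log([0.5, 0.5]),
               args=(counts, 1.0, 0.05, 0.10), method="Nelder-Mead")
print("estimated transition rates (0->1, 1->0):", np.exp(fit.x))
```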

2.
In the analysis of semi-competing risks data, interest lies in estimation and inference with respect to a so-called non-terminal event, the observation of which is subject to a terminal event. Multi-state models are commonly used to analyse such data, with covariate effects on the transition/intensity functions typically specified via the Cox model and dependence between the non-terminal and terminal events specified, in part, by a unit-specific shared frailty term. To ensure identifiability, the frailties are typically assumed to arise from a parametric distribution, specifically a Gamma distribution with mean 1.0 and variance, say, σ². When the frailty distribution is misspecified, however, the resulting estimator is not guaranteed to be consistent, with the extent of asymptotic bias depending on the discrepancy between the assumed and true frailty distributions. In this paper, we propose a novel class of transformation models for semi-competing risks analysis that permit the non-parametric specification of the frailty distribution. To ensure identifiability, the class restricts to parametric specifications of the transformation and the error distribution; the latter are flexible, however, and cover a broad range of possible specifications. We also derive the semi-parametric efficient score under the complete-data setting and propose a non-parametric score imputation method to handle right censoring; consistency and asymptotic normality of the resulting estimators are derived and small-sample operating characteristics are evaluated via simulation. Although the proposed semi-parametric transformation model and non-parametric score imputation method are motivated by the analysis of semi-competing risks data, they are broadly applicable to any analysis of multivariate time-to-event outcomes in which a unit-specific shared frailty is used to account for correlation. Finally, the proposed model and estimation procedures are applied to a study of hospital readmission among patients diagnosed with pancreatic cancer.

3.
The study of differences among groups is an interesting statistical topic in many applied fields. It is very common in this context to have data that are subject to mechanisms of loss of information, such as censoring and truncation. In the setting of a two-sample problem with data subject to left truncation and right censoring, we develop an empirical likelihood method to do inference for the relative distribution. We obtain a nonparametric generalization of Wilks' theorem and construct nonparametric pointwise confidence intervals for the relative distribution. Finally, we analyse the coverage probability and length of these confidence intervals through a simulation study and illustrate their use with a real data set on gastric cancer. The Canadian Journal of Statistics 38: 453–473; 2010 © 2010 Statistical Society of Canada

4.
This paper is concerned with the analysis of a time series comprising the eruption inter-arrival times of the Old Faithful geyser in 2009. The series is much longer than other well-documented ones and thus gives a more comprehensive insight into the dynamics of the geyser. Basic hidden Markov models with gamma state-dependent distributions and several extensions are implemented. In order to better capture the stochastic dynamics exhibited by Old Faithful, the different non-standard models under consideration seek to increase the flexibility of the basic models in various ways: (i) by allowing non-geometric distributions for the times spent in the different states; (ii) by increasing the memory of the underlying Markov chain, with or without assuming additional structure implied by mixture transition distribution models; and (iii) by incorporating feedback from the observation process on the latent process. In each case it is shown how the likelihood can be formulated as a matrix product which can be conveniently maximized numerically.
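The matrix-product form of the likelihood mentioned above can be evaluated with a scaled forward recursion. The sketch below assumes a basic two-state chain with gamma state-dependent distributions; the parameter values and the simulated series are purely illustrative and are not taken from the Old Faithful data.

```python
import numpy as np
from scipy.stats import gamma

def hmm_gamma_loglik(x, tpm, delta, shapes, rates):
    """Log-likelihood of a basic HMM with gamma state-dependent distributions,
    i.e. the matrix product delta P(x_1) Gamma P(x_2) ... Gamma P(x_T) 1',
    evaluated by a forward recursion with rescaling to avoid underflow."""
    phi = delta * gamma.pdf(x[0], a=shapes, scale=1.0 / rates)
    loglik = np.log(phi.sum())
    phi /= phi.sum()
    for xt in x[1:]:
        phi = (phi @ tpm) * gamma.pdf(xt, a=shapes, scale=1.0 / rates)
        loglik += np.log(phi.sum())
        phi /= phi.sum()
    return loglik

# toy usage: simulate inter-arrival times from a two-state gamma HMM
rng = np.random.default_rng(1)
tpm = np.array([[0.1, 0.9], [0.4, 0.6]])         # transition probability matrix
delta = np.array([0.3, 0.7])                     # initial distribution
shapes, rates = np.array([30.0, 80.0]), np.array([0.55, 0.90])
states = [rng.choice(2, p=delta)]
for _ in range(499):
    states.append(rng.choice(2, p=tpm[states[-1]]))
x = rng.gamma(shape=shapes[states], scale=1.0 / rates[states])
print("log-likelihood at the true parameters:",
      hmm_gamma_loglik(x, tpm, delta, shapes, rates))
```

In practice this function would be handed to a numerical optimizer on suitably transformed parameters, which mirrors the numerical maximization mentioned in the abstract.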

5.
Motivated by applications of Poisson processes for modelling periodic time-varying phenomena, we study a semi-parametric estimator of the period of the cyclic intensity function of a non-homogeneous Poisson process. There are no parametric assumptions on the intensity function, which is treated as an infinite-dimensional nuisance parameter. We propose a new family of estimators for the period of the intensity function, address the identifiability and consistency issues, and present simulations which demonstrate good performance of the proposed estimation procedure in practice. We compare our method to competing methods on synthetic data and apply it to a real data set from a call center.
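As a simple baseline for this period-estimation problem (not the authors' estimator family), one can score candidate periods with a periodogram-type criterion that is large when the event times concentrate at a common phase. Everything below, including the simulated cyclic intensity, is an illustrative assumption.

```python
import numpy as np

def periodogram_period(event_times, periods):
    """Grid search for the period of a cyclic Poisson intensity: each candidate
    period p is scored by |sum_j exp(2*pi*i*t_j/p)|^2, which is large when the
    event times share a common phase modulo p."""
    t = np.asarray(event_times)
    scores = np.array([np.abs(np.exp(2j * np.pi * t / p).sum()) ** 2 for p in periods])
    return periods[np.argmax(scores)], scores

# toy usage: simulate (by thinning) a cyclic Poisson process with true period 7
rng = np.random.default_rng(0)
T, true_period, lam_max = 500.0, 7.0, 4.0
cand = np.sort(rng.uniform(0, T, rng.poisson(lam_max * T)))      # homogeneous candidates
lam = 2.0 + 1.8 * np.sin(2 * np.pi * cand / true_period)         # cyclic intensity at candidates
events = cand[rng.uniform(0, lam_max, cand.size) < lam]          # thinning step
grid = np.linspace(3.0, 15.0, 2000)
print("estimated period:", periodogram_period(events, grid)[0])
```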

6.
We use the two-state Markov regime-switching model to explain the behaviour of the WTI crude-oil spot prices from January 1986 to February 2012. We investigated methods based on the composite likelihood and on the full likelihood, and found that the composite-likelihood approach better captures the general structural changes in world oil prices. The two-state Markov regime-switching model based on the composite-likelihood approach closely depicts the cycles of the two postulated states: fall and rise. These two states persist, on average, for 8 and 15 months respectively, which matches the cycles observed during the period. According to the fitted model, drops in oil prices are more volatile than rises. We believe that this information can be useful for financial officers working in related areas. The model based on the full-likelihood approach was less satisfactory, which we attribute to the fact that the two-state Markov regime-switching model is too rigid and overly simplistic. In comparison, the composite likelihood requires only that the model correctly specify the joint distribution of two adjacent price changes, so model violations in other areas do not invalidate the results. The Canadian Journal of Statistics 41: 353–367; 2013 © 2013 Statistical Society of Canada
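The pairwise composite likelihood referred to above needs only the joint law of two adjacent observations, f(y_t, y_{t+1}) = sum over states i, j of pi_i Gamma_ij f_i(y_t) f_j(y_{t+1}), where pi is the stationary distribution of the switching chain. A minimal sketch with Gaussian state-dependent densities follows; the simulated "price changes" and all starting values are invented for illustration and have nothing to do with the WTI data.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_pairwise_cl(theta, y):
    """Negative pairwise composite log-likelihood for a two-state Markov
    regime-switching model with Gaussian state-dependent distributions."""
    mu = theta[0:2]
    sig = np.exp(theta[2:4])                          # state standard deviations > 0
    p12, p21 = 1.0 / (1.0 + np.exp(-theta[4:6]))      # switching probabilities in (0, 1)
    tpm = np.array([[1 - p12, p12], [p21, 1 - p21]])
    pi = np.array([p21, p12]) / (p12 + p21)           # stationary distribution
    d = norm.pdf(y[:, None], loc=mu, scale=sig)       # T x 2 matrix of state densities
    pair = np.einsum("i,ij,ti,tj->t", pi, tpm, d[:-1], d[1:])   # f(y_t, y_{t+1})
    return -np.log(pair).sum()

# toy usage: simulate monthly changes from a "fall" state and a "rise" state
rng = np.random.default_rng(2)
states = [0]
for _ in range(299):
    states.append(rng.choice(2, p=[[0.88, 0.12], [0.07, 0.93]][states[-1]]))
states = np.array(states)
y = rng.normal(np.where(states == 0, -0.03, 0.02),
               np.where(states == 0, 0.09, 0.04))
theta0 = np.array([-0.05, 0.05, np.log(0.10), np.log(0.05), 0.0, 0.0])
fit = minimize(neg_pairwise_cl, theta0, args=(y,), method="Nelder-Mead",
               options={"maxiter": 5000})
print("state means:", fit.x[:2], "state sds:", np.exp(fit.x[2:4]))
```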

7.
With reference to a specific dataset, we consider how to perform a flexible non-parametric Bayesian analysis of an inhomogeneous point pattern modelled by a Markov point process, with a location-dependent first-order term and pairwise interaction only. A priori we assume that the first-order term is a shot noise process, and that the interaction function for a pair of points depends only on the distance between the two points and is a piecewise linear function modelled by a marked Poisson process. Simulation of the resulting posterior distribution using a Metropolis–Hastings algorithm in the 'conventional' way involves evaluating ratios of unknown normalizing constants. We avoid this problem by applying a recently introduced auxiliary variable technique. In the present setting, the auxiliary variable used is an example of a partially ordered Markov point process model.

8.
Occasionally, investigators collect auxiliary marks at the time of failure in a clinical study. Because the failure event may be censored at the end of the follow-up period, these marked endpoints are subject to induced censoring. We propose two new families of two-sample tests for the null hypothesis of no difference in mark-scale distribution that allow for arbitrary associations between mark and time. One family of proposed tests is a nonparametric extension of an existing semi-parametric linear test of the same null hypothesis, while a second family of tests is based on novel marked rank processes. Simulation studies indicate that the proposed tests have the desired size and possess adequate statistical power to reject the null hypothesis under a simple change of location in the marginal mark distribution. When the marginal mark distribution has heavy tails, the proposed rank-based tests can be nearly twice as powerful as linear tests.

9.
Suppose p + 1 experimental groups correspond to increasing dose levels of a treatment and all groups are subject to right censoring. In such instances, permutation tests for trend can be performed based on statistics derived from the weighted log-rank class. This article uses saddlepoint methods to determine the mid-P-values for such permutation tests for any test statistic in the weighted log-rank class. Permutation simulations are replaced by analytical saddlepoint computations which provide extremely accurate mid-P-values that are exact for most practical purposes and almost always more accurate than normal approximations. The speed of mid-P-value computation allows for the inversion of such tests to determine confidence intervals for the percentage increase in mean (or median) survival time per unit increase in dosage. The Canadian Journal of Statistics 37: 5–16; 2009 © 2009 Statistical Society of Canada
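As a concrete reference point, the sketch below computes a weighted log-rank trend statistic and its permutation mid-P-value by brute-force Monte Carlo; the saddlepoint method in the abstract replaces exactly this simulation step with an accurate analytical approximation. The dose scores, weight function, and simulated data are illustrative assumptions.

```python
import numpy as np

def logrank_trend_stat(time, event, group, scores, weight=lambda n_at_risk: 1.0):
    """Weighted log-rank statistic for trend across ordered dose groups:
    sum over event times of w(t) * sum_g c_g * (observed_g - expected_g)."""
    stat = 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        deaths = (time == t) & (event == 1)
        n, d_total = at_risk.sum(), deaths.sum()
        w = weight(n)
        for g, c in zip(np.unique(group), scores):
            observed = (deaths & (group == g)).sum()
            expected = d_total * (at_risk & (group == g)).sum() / n
            stat += w * c * (observed - expected)
    return stat

def permutation_mid_p(time, event, group, scores, n_perm=2000, seed=0):
    """Monte Carlo permutation mid-P-value for the trend statistic."""
    rng = np.random.default_rng(seed)
    obs = logrank_trend_stat(time, event, group, scores)
    perm = np.array([logrank_trend_stat(time, event, rng.permutation(group), scores)
                     for _ in range(n_perm)])
    return np.mean(np.abs(perm) > abs(obs)) + 0.5 * np.mean(np.abs(perm) == abs(obs))

# toy usage: three dose groups with increasing hazards (data are simulated)
rng = np.random.default_rng(3)
group = np.repeat([0, 1, 2], 30)
time = rng.exponential(1.0 / (0.5 + 0.4 * group))
cens = rng.exponential(2.0, size=time.size)
event = (time <= cens).astype(int)
time = np.minimum(time, cens)
print("mid-P-value:", permutation_mid_p(time, event, group, scores=[0, 1, 2]))
```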

10.
The hidden Markov model regression (HMMR) has been widely used in many fields such as gene expression and activity recognition. However, the traditional HMMR requires a strong linearity assumption for the emission model. In this article, we propose a hidden Markov model with non-parametric regression (HMM-NR), in which the mean and variance of the emission model are unknown smooth functions. The new semiparametric model can greatly reduce the modeling bias and thus enhance the applicability of traditional hidden Markov model regression. We propose an estimation procedure for the transition probability matrix and the non-parametric mean and variance functions that combines the ideas of the EM algorithm and kernel regression. Simulation studies and a real data set application are used to demonstrate the effectiveness of the new estimation procedure.
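The core of such a procedure is a smoothing step inside each EM iteration: the state-specific mean (and, analogously, variance) function is updated by a kernel regression in which every observation is weighted by its posterior probability of belonging to that state. The sketch below shows only this posterior-weighted Nadaraya-Watson update; in a real run the posteriors come from a forward-backward pass, and here they are replaced by the true state indicators on simulated data purely for illustration.

```python
import numpy as np

def weighted_nw(x_grid, x, y, post_k, bandwidth):
    """Posterior-weighted Nadaraya-Watson estimate of the state-k mean function:
    observation t contributes with weight post_k[t], its posterior probability
    (from the E-step) that the hidden chain is in state k at time t."""
    K = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / bandwidth) ** 2)
    w = K * post_k[None, :]
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

# toy usage: two regimes with different smooth mean functions
rng = np.random.default_rng(4)
n = 400
x = rng.uniform(0.0, 1.0, n)
states = (rng.uniform(size=n) < 0.5).astype(int)          # stand-in for the hidden chain
y = np.where(states == 0, np.sin(2 * np.pi * x), 1.5 * x) + rng.normal(0.0, 0.1, n)
grid = np.linspace(0.0, 1.0, 50)
m0_hat = weighted_nw(grid, x, y, post_k=(states == 0).astype(float), bandwidth=0.05)
m1_hat = weighted_nw(grid, x, y, post_k=(states == 1).astype(float), bandwidth=0.05)
print(np.round(m0_hat[:5], 2), np.round(m1_hat[:5], 2))
```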

11.
Let (X, Y) be a random vector, where Y denotes the variable of interest, possibly subject to random right censoring, and X is a covariate. We construct confidence intervals and bands for the conditional survival and quantile function of Y given X using a non-parametric likelihood ratio approach. This approach was introduced by Thomas & Grunkemeier (1975), who estimated confidence intervals of survival probabilities based on right-censored data. The method is appealing for several reasons: it always produces intervals inside [0, 1], it does not involve variance estimation, and it can produce asymmetric intervals. Asymptotic results for the confidence intervals and bands are obtained, as well as simulation results, in which the performance of the likelihood ratio intervals and bands is compared with that of the normal approximation method. We also propose a bandwidth selection procedure based on the bootstrap and apply the technique on a real data set.

12.
A right-censored version of a U-statistic with a kernel of degree m ≥ 1 is introduced via the principle of a mean-preserving reweighting scheme, which is also applicable when the dependence between failure times and the censoring variable is explainable through observable covariates. Its asymptotic normality and an expression for its standard error are obtained through a martingale argument. We study the performance of our U-statistic by simulation and compare it with theoretical results. A doubly robust version of this reweighted U-statistic is also introduced to gain efficiency under correct models while preserving consistency in the face of model misspecification. Using a Kendall's kernel, we obtain a test statistic for testing homogeneity of failure times for multiple failure causes in a multiple decrement model. The performance of the proposed test is studied through simulations. Its usefulness is also illustrated by applying it to a real data set on graft-versus-host disease.
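The reweighting principle can be illustrated with a Kendall-type kernel: only pairs in which both failure times are observed enter the U-statistic, and each pair is inversely weighted by its estimated probability of escaping censoring, computed from a Kaplan-Meier estimate of the censoring distribution. The sketch below is the plain inverse-probability-of-censoring-weighted version with a self-normalizing denominator, not the doubly robust variant from the abstract; the data and the fully observed variable z are simulated assumptions.

```python
import numpy as np

def km_censoring_survival(time, event):
    """Kaplan-Meier estimate of the censoring survival G(t-) = P(C >= t),
    evaluated at each subject's own observation time; censorings (event == 0)
    play the role of 'events' here."""
    order = np.argsort(time)
    cens = 1 - event[order]
    G = np.empty(len(time))
    surv, at_risk = 1.0, len(time)
    for k, idx in enumerate(order):
        G[idx] = surv                      # left limit: value just before time[idx]
        surv *= 1.0 - cens[k] / at_risk
        at_risk -= 1
    return G

def ipcw_kendall(time, event, z):
    """IPCW-reweighted Kendall-type U-statistic between a right-censored failure
    time and a fully observed variable z: only pairs with both failure times
    observed contribute, each weighted by 1 / (G(X_i-) * G(X_j-))."""
    G = km_censoring_survival(time, event)
    num = den = 0.0
    for i in range(len(time)):
        for j in range(i + 1, len(time)):
            if event[i] and event[j]:
                w = 1.0 / (G[i] * G[j])
                num += w * np.sign((time[i] - time[j]) * (z[i] - z[j]))
                den += w
    return num / den

# toy usage: z is positively associated with survival time (simulated data)
rng = np.random.default_rng(5)
n = 300
z = rng.normal(size=n)
t_true = rng.exponential(np.exp(0.7 * z))
c = rng.exponential(2.0, n)
time, event = np.minimum(t_true, c), (t_true <= c).astype(int)
print("IPCW Kendall-type statistic:", round(ipcw_kendall(time, event, z), 3))
```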

13.
In this paper, we use a particular piecewise deterministic Markov process (PDMP) to model the evolution of a degradation mechanism that may arise in various structural components, namely, fatigue crack growth. We first derive some probability results on the stochastic dynamics with the help of Markov renewal theory: a closed-form solution for the transition function of the PDMP is given. Then, we investigate some methods to estimate the parameters of the dynamical system, involving Bogolyubov's averaging principle and maximum likelihood estimation for the infinitesimal generator of the underlying jump Markov process. Numerical applications on a real crack data set are given.

14.
The Ising model is one of the simplest and most famous models of interacting systems. It was originally proposed to model ferromagnetic interactions in statistical physics and is now widely used to model spatial processes in many areas such as ecology, sociology, and genetics, usually without testing its goodness of fit. Here, we propose various test statistics and an exact goodness-of-fit test for the finite-lattice Ising model. The theory of Markov bases has been developed in algebraic statistics for exact goodness-of-fit testing using a Monte Carlo approach. However, finding a Markov basis is often computationally intractable. Thus, we develop a Monte Carlo method for exact goodness-of-fit testing for the Ising model that avoids computing a Markov basis and also leads to better connectivity of the Markov chain and hence to faster convergence. We show how this method can be applied to analyze the spatial organization of receptors on the cell membrane.
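A crude Monte Carlo check in a similar spirit, though much simpler than the exact conditional test developed in the paper, is a parametric-bootstrap comparison of an observed statistic with the same statistic on lattices simulated at fixed parameters. In the sketch below, the interaction and field parameters are assumed known or pre-estimated, and all values are illustrative.

```python
import numpy as np

def gibbs_ising(shape, beta, h, n_sweeps, rng):
    """Single-site Gibbs sampler for the finite-lattice Ising model with
    nearest-neighbour interaction beta and external field h; spins are in
    {-1, +1} and the boundary is free."""
    s = rng.choice([-1, 1], size=shape)
    rows, cols = shape
    for _ in range(n_sweeps):
        for i in range(rows):
            for j in range(cols):
                nb = sum(s[a, b]
                         for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= a < rows and 0 <= b < cols)
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * (beta * nb + h)))
                s[i, j] = 1 if rng.uniform() < p_plus else -1
    return s

def diag_agreement(s):
    """Statistic not used in fitting: fraction of agreeing diagonal neighbours."""
    return np.mean(s[:-1, :-1] == s[1:, 1:])

def mc_gof_pvalue(observed, beta, h, n_sim=99, seed=6):
    """Monte Carlo goodness-of-fit p-value: compare the observed statistic with
    the same statistic on lattices simulated from the fitted model."""
    rng = np.random.default_rng(seed)
    obs = diag_agreement(observed)
    sims = np.array([diag_agreement(gibbs_ising(observed.shape, beta, h, 50, rng))
                     for _ in range(n_sim)])
    return (1 + np.sum(sims >= obs)) / (n_sim + 1)

# toy usage: data generated from the model itself, so no lack of fit is expected
rng = np.random.default_rng(7)
data = gibbs_ising((15, 15), beta=0.3, h=0.0, n_sweeps=100, rng=rng)
print("Monte Carlo p-value:", mc_gof_pvalue(data, beta=0.3, h=0.0))
```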

15.
In a recent study, dynamic mixed-effects regression models for count data were extended to a semi-parametric context. However, when one deals with other discrete data such as binary responses, the results based on count data models are not directly applicable. In this paper, we therefore begin with existing binary dynamic mixed models and generalise them to the semi-parametric context. For inference, we use a new semi-parametric conditional quasi-likelihood (SCQL) approach for the estimation of the non-parametric function involved in the semi-parametric model, and a semi-parametric generalised quasi-likelihood (SGQL) approach for the estimation of the main regression, dynamic dependence and random effects variance parameters. A semi-parametric maximum likelihood (SML) approach is also used as a comparison to the SGQL approach. The properties of the estimators are examined both asymptotically and empirically. More specifically, the consistency of the estimators is established and finite-sample performances of the estimators are examined through an intensive simulation study.

16.
The quantile residual lifetime function provides comprehensive quantitative measures for residual life, especially when the distribution of the latter is skewed or heavy-tailed and/or when the data contain outliers. In this paper, we propose a general class of semiparametric quantile residual life models for length-biased right-censored data. We use the inverse probability weighted method to correct the bias due to length-biased sampling and informative censoring. Two estimating equations corresponding to the quantile regressions are constructed in two separate steps to obtain an efficient estimator. Consistency and asymptotic normality of the estimator are established. The main difficulty in implementing our proposed method is that the estimating equations associated with the quantiles are nondifferentiable, and we apply the majorize–minimize algorithm and estimate the asymptotic covariance using an efficient resampling method. We use simulation studies to evaluate the proposed method and illustrate its application by a real-data example.

17.
We propose a new summary statistic for inhomogeneous intensity-reweighted moment stationarity spatio-temporal point processes. The statistic is defined in terms of the n-point correlation functions of the point process, and it generalizes the J-function when stationarity is assumed. We show that our statistic can be represented in terms of the generating functional and that it is related to the spatio-temporal K-function. We further discuss its explicit form under some specific model assumptions and derive ratio-unbiased estimators. We finally illustrate the use of our statistic in practice. © 2014 Board of the Foundation of the Scandinavian Journal of Statistics

18.
We consider the problem of parameter estimation for inhomogeneous space-time shot-noise Cox point processes. We explore the possibility of using a stepwise estimation method and dimensionality-reducing techniques to estimate different parts of the model separately. We discuss the estimation method using projection processes and propose a refined method that avoids projection to the temporal domain. This remedies the main flaw of the method based on projection processes: possible overlapping, in the projection process, of clusters that are clearly separated in the original space-time process. This issue is more prominent in the temporal projection process, where the amount of information lost by projection is higher than in the spatial projection process. For the refined method, we derive consistency and asymptotic normality results under increasing-domain asymptotics and appropriate moment and mixing assumptions. We also present a simulation study suggesting that cluster overlapping is successfully overcome by the refined method.

19.
Over the past years, significant progress has been made in developing statistically rigorous methods to implement clinically interpretable sensitivity analyses for assumptions about the missingness mechanism in clinical trials for continuous and (to a lesser extent) for binary or categorical endpoints. Studies with time-to-event outcomes have received much less attention. However, such studies can be similarly challenged with respect to the robustness and integrity of primary analysis conclusions when a substantial number of subjects withdraw from treatment prematurely prior to experiencing an event of interest. We discuss how the methods that are widely used for primary analyses of time-to-event outcomes could be extended in a clinically meaningful and interpretable way to stress-test the assumption of ignorable censoring. We focus on a 'tipping point' approach, the objective of which is to postulate sensitivity parameters with a clear clinical interpretation and to identify a setting of these parameters unfavorable enough towards the experimental treatment to nullify a conclusion that was favorable to that treatment. Robustness of primary analysis results can then be assessed based on the clinical plausibility of the scenario represented by the tipping point. We study several approaches for conducting such analyses based on multiple imputation using parametric, semi-parametric, and non-parametric imputation models and evaluate their operating characteristics via simulation. We argue that these methods are valuable tools for sensitivity analyses of time-to-event data and conclude that the method based on a piecewise exponential imputation model of survival has some advantages over the other methods studied here. Copyright © 2016 John Wiley & Sons, Ltd.
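A bare-bones version of the tipping-point idea is sketched below, simplified to an exponential rather than piecewise exponential imputation model and to averaged Wald statistics rather than Rubin's combination rules. The trial data, the withdrawal mechanism, the analysis horizon, and the grid of hazard multipliers are all invented for illustration.

```python
import numpy as np

def exp_rate(time, event):
    """Maximum likelihood estimate of a constant hazard: events / total exposure."""
    return event.sum() / time.sum()

def wald_z(time, event, arm):
    """Wald statistic for the log hazard ratio (treated vs control) under
    exponential survival; large negative values favour the treated arm."""
    lam = [exp_rate(time[arm == a], event[arm == a]) for a in (0, 1)]
    se = np.sqrt(sum(1.0 / event[arm == a].sum() for a in (0, 1)))
    return (np.log(lam[1]) - np.log(lam[0])) / se

def tipping_point(time, event, arm, dropout, deltas, horizon, n_imp=50, seed=8):
    """For each hazard multiplier delta, impute post-withdrawal event times for
    treated subjects who dropped out, assuming their hazard after withdrawal is
    delta times the observed treated-arm hazard, and average the resulting Wald
    statistics over the imputations (a simplification of Rubin's rules)."""
    rng = np.random.default_rng(seed)
    base = exp_rate(time[arm == 1], event[arm == 1])
    idx = np.where(dropout & (arm == 1))[0]
    results = {}
    for delta in deltas:
        zs = []
        for _ in range(n_imp):
            t, e = time.copy(), event.copy()
            extra = rng.exponential(1.0 / (delta * base), size=idx.size)
            t[idx] = np.minimum(time[idx] + extra, horizon)        # administrative cut-off
            e[idx] = (time[idx] + extra <= horizon).astype(int)
            zs.append(wald_z(t, e, arm))
        results[delta] = np.mean(zs)
    return results

# toy usage: treated arm truly better; delta = 1 corresponds to ignorable withdrawal
rng = np.random.default_rng(9)
n = 400
arm = np.repeat([0, 1], n // 2)
t_true = rng.exponential(np.where(arm == 1, 2.0, 1.3))
dropout = (rng.uniform(size=n) < 0.15) & (t_true > 0.5) & (arm == 1)
time = np.where(dropout, 0.5, np.minimum(t_true, 5.0))
event = np.where(dropout, 0, (t_true <= 5.0).astype(int))
for d, z in tipping_point(time, event, arm, dropout, deltas=[1, 2, 4, 8], horizon=5.0).items():
    print(f"hazard multiplier {d}: mean Wald z = {z:.2f}")
```

The tipping point is the smallest multiplier for which the averaged statistic no longer favours the treatment; its clinical plausibility then determines how robust the primary conclusion is.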

20.
This paper presents a non-parametric method for estimating the conditional density associated with the jump rate of a piecewise-deterministic Markov process. In our framework, the estimation requires only one observation of the process within a long time interval. Our method relies on a generalization of Aalen's multiplicative intensity model. We prove the uniform consistency of our estimator under some reasonable assumptions related to the primitive characteristics of the process. A simulation study illustrates the behaviour of our estimator.
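As a toy illustration of the multiplicative-intensity idea, the sketch below reduces the problem to estimating a jump rate that depends only on the time elapsed since the last jump, from the jump times of a single long trajectory; the dependence of a genuine PDMP jump rate on the full state (e.g. the degradation level) is deliberately ignored, and the Weibull-type example is invented.

```python
import numpy as np

def smoothed_jump_rate(jump_times, grid, bandwidth):
    """Kernel-smoothed Nelson-Aalen estimate of the jump rate as a function of
    the age since the last jump, from one long trajectory. Every inter-jump
    duration ends in an observed 'event'; the risk set at age s is the number
    of durations still ongoing at s (Aalen's multiplicative intensity model)."""
    dur = np.diff(np.asarray(jump_times))              # sojourn durations
    rate = np.zeros_like(grid)
    for s in dur:                                      # each duration ends with a jump at age s
        at_risk = np.sum(dur >= s)                     # durations whose age reached s
        # spread the Nelson-Aalen increment 1/at_risk with a Gaussian kernel
        rate += (1.0 / at_risk) * np.exp(-0.5 * ((grid - s) / bandwidth) ** 2) \
                / (bandwidth * np.sqrt(2 * np.pi))
    return rate

# toy usage: jump rate increasing with the age since the last jump (Weibull-type)
rng = np.random.default_rng(10)
sojourns = rng.weibull(2.0, size=2000)                 # true hazard lambda(s) = 2 s
jumps = np.concatenate([[0.0], np.cumsum(sojourns)])
grid = np.linspace(0.05, 1.5, 60)
est = smoothed_jump_rate(jumps, grid, bandwidth=0.08)
print("estimated rate:", np.round(est[::12], 2))
print("true rate 2s  :", np.round(2 * grid[::12], 2))
```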
