Similar literature: 20 records found
1.
Multi-stage time-evolving models are common statistical models for biological systems, especially insect populations. In stage-duration distribution models, parameter estimation typically uses the Laplace transform method. This method involves assumptions such as known constant shapes, known constant rates or the same overall hazard rate for all stages. These assumptions are strong and restrictive. The main aim of this paper is to weaken these assumptions by using a Bayesian approach. In particular, a Metropolis-Hastings algorithm based on deterministic transformations is used to estimate parameters. We consider two models: one with no hazard rates and one with stage-wise constant hazard rates. These methods are validated in simulation studies followed by a case study of cattle parasites. The results show that the proposed methods estimate the parameters well in comparison with the Laplace transform methods.
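Below is a minimal random-walk Metropolis sketch for a toy single-stage duration posterior. It is not the deterministic-transformation sampler described above: the Gamma duration model, the flat priors on the log scale, the proposal scale and the synthetic data are all illustrative assumptions.

```python
# Minimal random-walk Metropolis sketch (illustrative only, not the
# deterministic-transformation sampler of the paper): posterior of the
# shape and rate of a Gamma stage-duration distribution under flat priors
# on the log scale, using synthetic durations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
durations = rng.gamma(shape=3.0, scale=1.0 / 0.8, size=200)  # synthetic stage durations

def log_post(theta):
    # theta = (log shape, log rate); flat prior on theta, so log-posterior = log-likelihood
    shape, rate = np.exp(theta)
    return stats.gamma.logpdf(durations, a=shape, scale=1.0 / rate).sum()

theta = np.log([1.0, 1.0])                     # starting value
lp = log_post(theta)
samples = []
for _ in range(5000):
    proposal = theta + rng.normal(scale=0.1, size=2)   # random-walk proposal
    lp_prop = log_post(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:           # Metropolis accept/reject
        theta, lp = proposal, lp_prop
    samples.append(np.exp(theta))

print("posterior means of (shape, rate):", np.mean(samples[1000:], axis=0))
```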

2.
Seamless phase II/III clinical trials are conducted in two stages with treatment selection at the first stage. In the first stage, patients are randomized to a control or one of k > 1 experimental treatments. At the end of this stage, interim data are analysed, and a decision is made concerning which experimental treatment should continue to the second stage. If the primary endpoint is observable only after some period of follow-up, data on an early outcome may be available at the interim analysis for a larger number of patients than those for whom the primary endpoint is available. These early endpoint data can thus be used for treatment selection. For two previously proposed approaches, the power has been shown to be greater for one or the other method depending on the true treatment effects and correlations. We propose a new approach that builds on the previously proposed approaches and uses data available at the interim analysis to estimate these parameters and then, on the basis of these estimates, chooses the treatment selection method with the highest probability of correctly selecting the most effective treatment. This method is shown to perform well compared with the two previously described methods for a wide range of true parameter values. The performance of the new method is similar to, or in some cases better than, that of either of the two previously proposed methods. © 2014 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.

3.
We propose a spline-based semiparametric maximum likelihood approach to analysing the Cox model with interval-censored data. With this approach, the baseline cumulative hazard function is approximated by a monotone B-spline function. We extend the generalized Rosen algorithm to compute the maximum likelihood estimate. We show that the estimator of the regression parameter is asymptotically normal and semiparametrically efficient, although the estimator of the baseline cumulative hazard function converges at a rate slower than root-n. We also develop an easy-to-implement method for consistently estimating the standard error of the estimated regression parameter, which facilitates the proposed inference procedure for the Cox model with interval-censored data. The proposed method is evaluated by simulation studies regarding its finite sample performance and is illustrated using data from a breast cosmesis study.

4.
The competing risks model is useful in settings in which individuals/units may die/fail for different reasons. The cause-specific hazard rates are taken to be piecewise constant functions. A complication arises when some of the failures are masked within a group of possible causes. Traditionally, statistical inference is performed under the assumption that the failure causes act independently on each item. In this paper we propose an EM-based approach which allows for dependent competing risks and produces estimators for the sub-distribution functions. We also discuss identifiability of parameters if none of the masked items have their cause of failure clarified in a second-stage analysis (e.g. autopsy). The procedures proposed are illustrated with two datasets.
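As a toy illustration of how an EM step can fractionally allocate masked failure causes, the sketch below fits two constant cause-specific hazards under an independence assumption and with no censoring. This is a deliberate simplification of the dependent-risks, piecewise-constant model described above; the data, the masking rate and the starting values are fabricated for the example.

```python
# Toy EM for masked failure causes (illustrative simplification: two
# independent causes, constant cause-specific hazards, no censoring).
import numpy as np

rng = np.random.default_rng(3)
n = 500
t1 = rng.exponential(scale=1 / 0.4, size=n)      # latent cause-1 failure times
t2 = rng.exponential(scale=1 / 0.2, size=n)      # latent cause-2 failure times
time = np.minimum(t1, t2)
cause = np.where(t1 < t2, 1, 2)
masked = rng.uniform(size=n) < 0.3               # 30% of items have the cause masked
obs_cause = np.where(masked, 0, cause)           # 0 encodes "cause unknown"

lam1, lam2 = 0.5, 0.5                            # starting values
for _ in range(100):
    # E-step: posterior probability that a masked failure was due to cause 1
    p1 = lam1 / (lam1 + lam2)
    d1 = (obs_cause == 1).sum() + p1 * masked.sum()
    d2 = (obs_cause == 2).sum() + (1 - p1) * masked.sum()
    # M-step: hazard = expected cause-specific failures / total exposure
    lam1, lam2 = d1 / time.sum(), d2 / time.sum()

print("estimated cause-specific hazards:", round(lam1, 3), round(lam2, 3))
```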

5.
Right-censored and length-biased failure time data arise in many fields including cross-sectional prevalent cohort studies, and their analysis has recently attracted a great deal of attention. It is well-known that for regression analysis of failure time data, two commonly used approaches are hazard-based and quantile-based procedures, and most of the existing methods are the hazard-based ones. In this paper, we consider quantile regression analysis of right-censored and length-biased data and present a semiparametric varying-coefficient partially linear model. For estimation of regression parameters, a three-stage procedure that makes use of the inverse probability weighted technique is developed, and the asymptotic properties of the resulting estimators are established. In addition, the approach allows the dependence of the censoring variable on covariates, while most of the existing methods assume the independence between censoring variables and covariates. A simulation study is conducted and suggests that the proposed approach works well in practical situations. Also, an illustrative example is provided.

6.
In early clinical development of new medicines, a single-arm study with a limited number of patients is often used to provide a preliminary assessment of a response rate. A multi-stage design may be indicated, especially when the first stage should only include very few patients so as to enable rapid identification of an ineffective drug. We used decision rules based on several types of nominal confidence intervals to evaluate a three-stage design for a study that includes at most 30 patients. For each decision rule, we used exact binomial calculations to determine the probability of continuing to further stages as well as to evaluate Type I and Type II error rates. Examples are provided to illustrate the methods for evaluating alternative decision rules and to provide guidance on how to extend the methods to situations with modifications to the number of stages or number of patients per stage in the study design. Copyright © 2004 John Wiley & Sons, Ltd.
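The sketch below shows the kind of exact binomial calculation described above, applied to a simple three-stage futility rule with 10 patients per stage (at most 30 in total). The cut-offs and the null and target response rates are made up for illustration and are not the decision rules evaluated in the paper.

```python
# Exact binomial operating characteristics of an illustrative three-stage
# futility rule (10 patients per stage, at most 30 in total).  The cut-offs
# below are made up for the example, not the paper's decision rules.
import numpy as np
from scipy.stats import binom

n_stage = 10
r1, r2, r_final = 1, 4, 9   # continue if responses > r1 after stage 1 and > r2 (cumulative)
                            # after stage 2; declare activity if the total is >= r_final

def operating_characteristics(p):
    pmf = binom.pmf(np.arange(n_stage + 1), n_stage, p)
    # sub-probabilities of cumulative response counts on paths that keep continuing
    cont1 = np.where(np.arange(n_stage + 1) > r1, pmf, 0.0)
    cum2 = np.convolve(cont1, pmf)               # cumulative count after stage 2
    cont2 = np.where(np.arange(len(cum2)) > r2, cum2, 0.0)
    cum3 = np.convolve(cont2, pmf)               # cumulative count after stage 3
    return {
        "P(continue past stage 1)": cont1.sum(),
        "P(continue past stage 2)": cont2.sum(),
        "P(declare activity)": cum3[np.arange(len(cum3)) >= r_final].sum(),
    }

print(operating_characteristics(0.10))   # Type I error rate if 0.10 is the null response rate
print(operating_characteristics(0.30))   # power if 0.30 is the target response rate
```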

7.
For estimating area-specific parameters (quantities) in a finite population, a mixed-model prediction approach is attractive. However, this approach depends strongly on the normality assumption for the response values, although non-normal cases are often encountered in practice. In such cases, transforming the observations so that the normality assumption becomes plausible is a useful tool, but the problem of selecting a suitable transformation remains open. To overcome the difficulty, we propose a new empirical best prediction method that uses a parametric family of transformations to estimate a suitable transformation from the data. We suggest a simple estimation method for the transformation parameters based on the profile likelihood function, which achieves consistency under some conditions on the transformation functions. For measuring the variability of point prediction, we construct an empirical Bayes confidence interval for the population parameter of interest. Through simulation studies, we investigate the numerical performance of the proposed methods. Finally, we apply the proposed method to synthetic income data for Spanish provinces, in which the resulting estimates indicate that the commonly used log transformation would not be appropriate.
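As a small illustration of estimating a transformation parameter by profile likelihood, the sketch below uses the classical Box-Cox family with an i.i.d. normal working model rather than the mixed-model setting described above; the data and the grid of candidate values are illustrative assumptions.

```python
# Box-Cox profile likelihood for the transformation parameter (illustrative
# i.i.d. normal working model, not the paper's mixed-model setting).
import numpy as np

rng = np.random.default_rng(0)
y = rng.lognormal(mean=2.0, sigma=0.5, size=500)   # skewed positive responses

def boxcox(y, lam):
    return np.log(y) if np.isclose(lam, 0.0) else (y ** lam - 1.0) / lam

def profile_loglik(lam):
    z = boxcox(y, lam)
    n = len(y)
    sigma2 = z.var()                               # MLE of the variance given lam
    # normal profile log-likelihood plus the Jacobian of the transformation
    return -0.5 * n * np.log(sigma2) + (lam - 1.0) * np.log(y).sum()

grid = np.linspace(-1.0, 1.5, 101)
lam_hat = grid[np.argmax([profile_loglik(l) for l in grid])]
print("profile-likelihood estimate of the transformation parameter:", lam_hat)
```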

8.
For many diseases, logistic constraints render large incidence studies difficult to carry out. This becomes a drawback, particularly when a new study is needed each time the incidence rate is investigated in a new population. By carrying out a prevalent cohort study with follow-up it is possible to estimate the incidence rate if it is constant. The authors derive the maximum likelihood estimator (MLE) of the overall incidence rate, λ, as well as age-specific incidence rates, by exploiting the epidemiologic relationship (prevalence odds) = (incidence rate) × (mean duration), that is, P/[1 − P] = λ × µ. The authors establish the asymptotic distributions of the MLEs and provide approximate confidence intervals for the parameters. Moreover, the MLE of λ is asymptotically most efficient and is the natural estimator obtained by substituting the marginal maximum likelihood estimators for P and µ into P/[1 − P] = λ × µ. Following up the subjects allows the authors to develop these widely applicable procedures. The authors apply their methods to data collected as part of the Canadian Study of Health and Ageing to estimate the incidence rate of dementia amongst elderly Canadians. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
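A small worked example of the plug-in relationship above, λ = P/((1 − P)µ); the prevalence and mean-duration figures are invented for illustration.

```python
# Plug-in estimate lambda = P / ((1 - P) * mu); the prevalence and mean
# duration below are invented for illustration.
p_hat = 0.08    # estimated prevalence
mu_hat = 5.2    # estimated mean duration (years)
lam_hat = p_hat / ((1.0 - p_hat) * mu_hat)
print(f"estimated incidence rate: {lam_hat:.4f} per person-year")   # about 0.0167
```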

9.
The counting process formulation (Aalen, 1978) for the analysis of lifetime data is briefly reviewed. This formulation is used to arrive at a regression-type model and a smooth estimate of the hazard function. In the regression model, the error terms are martingales and Nelson's estimator is the dependent variable. An optimal approach for estimating the parameters of the polynomial is considered. Asymptotic normality of the optimal estimate is proved, and an illustrative example is given.
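The sketch below computes the Nelson (Nelson-Aalen) cumulative-hazard estimator that serves as the dependent variable in the regression formulation described above; the toy survival data are invented, and the regression and smoothing steps of the paper are not reproduced.

```python
# Nelson-Aalen cumulative-hazard estimator on toy right-censored data
# (1 = death, 0 = censored); data are invented for illustration.
import numpy as np

times = np.array([2.0, 3.0, 4.0, 5.0, 8.0, 9.0, 12.0])
event = np.array([1,   1,   0,   1,   1,   0,   1])

order = np.argsort(times)
times, event = times[order], event[order]

cum_haz = 0.0
n_at_risk = len(times)
for t, d in zip(times, event):
    if d == 1:
        cum_haz += 1.0 / n_at_risk          # increment by (events) / (number at risk)
    print(f"t = {t:5.1f}   cumulative hazard = {cum_haz:.3f}")
    n_at_risk -= 1
```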

10.
In survey sampling, policymaking regarding the allocation of resources to subgroups (called small areas) or the determination of subgroups with specific properties in a population should be based on reliable estimates. Information, however, is often collected at a different scale than that of these subgroups; hence, the estimation can only be obtained on finer scale data. Parametric mixed models are commonly used in small-area estimation. The relationship between predictors and response, however, may not be linear in some real situations. Recently, small-area estimation using a generalised linear mixed model (GLMM) with a penalised spline (P-spline) regression model, for the fixed part of the model, has been proposed to analyse cross-sectional responses, both normal and non-normal. However, there are many situations in which the responses in small areas are serially dependent over time. Such a situation is exemplified by a data set on the annual number of visits to physicians by patients seeking treatment for asthma, in different areas of Manitoba, Canada. In cases where covariates that can possibly predict physician visits by asthma patients (e.g. age and genetic and environmental factors) may not have a linear relationship with the response, new models for analysing such data sets are required. In the current work, using both time-series and cross-sectional data methods, we propose P-spline regression models for small-area estimation under GLMMs. Our proposed model covers both normal and non-normal responses. In particular, the empirical best predictors of small-area parameters and their corresponding prediction intervals are studied with the maximum likelihood estimation approach being used to estimate the model parameters. The performance of the proposed approach is evaluated using some simulations and also by analysing two real data sets (precipitation and asthma).

11.
Data from the National Cancer Institute (NCI) suggest a sudden reduction in prostate cancer mortality rates, likely due to highly successful treatments and screening methods for early diagnosis. We are interested in understanding the impact of medical breakthroughs, treatments, or interventions on the survival experience of a population. For this purpose, estimating the underlying hazard function, with possible time change points, is of substantial interest, as it provides a general picture of the survival trend and when this trend is disrupted. Increasing attention has been given to testing the assumption of a constant failure rate against a failure rate that changes at a single point in time. We expand the set of alternatives to allow for the consideration of multiple change points, and propose a model selection algorithm using sequential testing for the piecewise constant hazard model. These methods are data driven and allow us to estimate not only the number of change points in the hazard function but also where those changes occur. Such an analysis allows for a better understanding of how changing medical practice affects the survival experience of a patient population. We test for change points in prostate cancer mortality rates using the NCI Surveillance, Epidemiology, and End Results dataset.
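As a simplified illustration of fitting a piecewise constant hazard with a change point, the sketch below profiles the likelihood of a single-change-point exponential model over candidate change points and compares it with a constant-hazard fit. The sequential multiple-change-point selection algorithm of the paper is not implemented; the simulated data and the grid of candidates are illustrative assumptions.

```python
# Single-change-point piecewise-constant hazard fitted by profiling the
# likelihood over candidate change points, compared with a constant hazard.
# Illustrative only; the paper's sequential multiple-change-point algorithm
# is not implemented.  Data are simulated with a change point at t = 10.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
u = rng.uniform(size=n)
# inverse-transform sampling: hazard 0.05 before t = 10, 0.15 afterwards
t = np.where(u >= np.exp(-0.5),
             -np.log(u) / 0.05,
             10.0 + (-np.log(u) - 0.5) / 0.15)
time = np.minimum(t, 30.0)                 # administrative censoring at t = 30
delta = (t <= 30.0).astype(int)

def loglik_piecewise(tau):
    e1 = np.minimum(time, tau).sum()                      # exposure before tau
    d1 = ((time <= tau) & (delta == 1)).sum()             # events before tau
    e2 = np.maximum(time - tau, 0.0).sum()                # exposure after tau
    d2 = ((time > tau) & (delta == 1)).sum()              # events after tau
    lam1, lam2 = d1 / e1, d2 / e2
    return d1 * np.log(lam1) - lam1 * e1 + d2 * np.log(lam2) - lam2 * e2

grid = np.linspace(2.0, 25.0, 100)
ll = np.array([loglik_piecewise(tau) for tau in grid])
tau_hat = grid[np.argmax(ll)]

lam0 = delta.sum() / time.sum()                           # constant-hazard MLE
ll0 = delta.sum() * np.log(lam0) - lam0 * time.sum()
print("estimated change point:", round(tau_hat, 2))
print("likelihood-ratio statistic vs constant hazard:", round(2 * (ll.max() - ll0), 2))
```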

12.
Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder which is most often diagnosed in childhood, with symptoms often persisting into adulthood. Elevated rates of substance use disorders have been evidenced among those with ADHD, but recent research focusing on the relationship between subtypes of ADHD and specific drugs is inconsistent. We propose a latent transition model (LTM) to guide our understanding of how drug use progresses, in particular marijuana use, while accounting for the measurement error that is often found in self-reported substance use data. We extend the LTM to include a latent class predictor to represent empirically derived ADHD subtypes that do not rely on meeting specific diagnostic criteria. We begin by fitting two separate latent class analysis (LCA) models by using second-order estimating equations: a longitudinal LCA model to define stages of marijuana use, and a cross-sectional LCA model to define ADHD subtypes. The LTM model parameters describing the probability of transitioning between the LCA-defined stages of marijuana use and the influence of the LCA-defined ADHD subtypes on these transition rates are then estimated by using a set of first-order estimating equations given the LCA parameter estimates. A robust estimate of the LTM parameter variance that accounts for the variation due to the estimation of the two sets of LCA parameters is proposed. Solving three sets of estimating equations enables us to determine the underlying latent class structures independently of the model for the transition rates, and simplifying assumptions about the correlation structure at each stage reduce the computational complexity.

13.
The hazard function plays an important role in cancer patient survival studies, as it quantifies the instantaneous risk of death of a patient at any given time. Often in cancer clinical trials, unimodal hazard functions are observed, and it is of interest to detect (estimate) the turning point (mode) of the hazard function, as this may be an important measure in treatment strategies for cancer patients. Moreover, when patient cure is a possibility, estimating cure rates at different stages of cancer, in addition to their proportions, may provide a better summary of the effects of stages on survival rates. Therefore, the main objective of this paper is to consider the problem of estimating the mode of the hazard function of patients at different stages of cervical cancer in the presence of long-term survivors. To this end, a mixture cure rate model is proposed using the log-logistic distribution. The model is conveniently parameterized through the mode of the hazard function, in which cancer stages can affect both the cured fraction and the mode. In addition, we discuss aspects of model inference through the maximum likelihood estimation method. A Monte Carlo simulation study assesses the coverage probability of asymptotic confidence intervals.
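The sketch below evaluates the log-logistic hazard, its mode, and a simple mixture cure survival function of the form π + (1 − π)S(t). The (scale, shape) parameterization and all parameter values are illustrative; the paper's mode-based parameterization and covariate effects on the cured fraction are not reproduced.

```python
# Log-logistic hazard, its mode, and a mixture cure survival function
# pi + (1 - pi) * S(t); parameter values are illustrative.
import numpy as np

alpha, beta, cure_fraction = 4.0, 2.5, 0.3   # scale, shape (> 1), cured proportion

def loglogistic_hazard(t):
    x = (t / alpha) ** beta
    return (beta / t) * x / (1.0 + x)

def population_survival(t):
    susceptible_survival = 1.0 / (1.0 + (t / alpha) ** beta)
    return cure_fraction + (1.0 - cure_fraction) * susceptible_survival

t_mode = alpha * (beta - 1.0) ** (1.0 / beta)   # hazard is unimodal with mode here when beta > 1
print("mode of the hazard:", round(t_mode, 3))
print("hazard at the mode:", round(loglogistic_hazard(t_mode), 3))
print("population survival at t = 10:", round(population_survival(10.0), 3))
```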

14.
Frailty models with a non-parametric baseline hazard are widely used for the analysis of survival data. However, their maximum likelihood estimators can be substantially biased in finite samples, because the number of nuisance parameters associated with the baseline hazard increases with the sample size. The penalized partial likelihood based on a first-order Laplace approximation still has non-negligible bias. However, the second-order Laplace approximation to a modified marginal likelihood for a bias reduction is infeasible because of the presence of too many complicated terms. In this article, we find adequate modifications of these likelihood-based methods by using the hierarchical likelihood.

15.
Incomplete data subject to non-ignorable non-response are often encountered in practice and have a non-identifiability problem. A follow-up sample is randomly selected from the set of non-respondents to avoid the non-identifiability problem and obtain complete responses. Glynn, Laird, & Rubin analyzed non-ignorable missing data with a follow-up sample under a pattern mixture model. In this article, maximum likelihood estimation of the parameters of categorical missing data is considered with a follow-up sample under a selection model. To estimate the parameters with non-ignorable missing data, the EM algorithm with weighting, proposed by Ibrahim, is used. That is, in the E-step, the weighted mean is calculated using fractional weights for the imputed data. Variances are estimated using the approximate jackknife method. Simulation results are presented to compare the proposed method with previously presented methods.

16.
In this paper, we consider the estimation of both the parameters and the nonparametric link function in partially linear single-index models for longitudinal data that may be unbalanced. In particular, a new three-stage approach is proposed to estimate the nonparametric link function using marginal kernel regression and the parametric components with generalized estimating equations. The resulting estimators properly account for the within-subject correlation. We show that the parameter estimators are asymptotically semiparametrically efficient. We also show that the asymptotic variance of the link function estimator is minimized when the working error covariance matrices are correctly specified. The new estimators are more efficient than estimators in the existing literature. These asymptotic results are obtained without assuming normality. The finite-sample performance of the proposed method is demonstrated by simulation studies. In addition, two real-data examples are analyzed to illustrate the methodology.

17.
This paper deals with the problem of predicting a real-valued response variable using explanatory variables containing both a multivariate random variable and a random curve. The proposed functional partial linear single-index model treats the multivariate random variable as the linear part and the random curve as the functional single-index part, respectively. To estimate the non-parametric link function, the functional single-index and the parameters in the linear part, a two-stage estimation procedure is proposed. Compared with existing semi-parametric methods, the proposed approach requires no initial estimation and iteration. Asymptotic properties are established for both the parameters in the linear part and the functional single-index. The convergence rate for the non-parametric link function is also given. In addition, asymptotic normality of the error variance estimator is obtained, which facilitates the construction of a confidence region and hypothesis testing for the unknown parameter. Numerical experiments including simulation studies and a real-data analysis are conducted to evaluate the empirical performance of the proposed method.

18.
Time-to-event data have been extensively studied in many areas. Although multiple time scales are often observed, commonly used methods are based on a single time scale. Analysing time-to-event data on two time scales can offer a more extensive insight into the phenomenon. We introduce a non-parametric Bayesian intensity model to analyse a two-dimensional point process on Lexis diagrams. After a simple discretization of the two-dimensional process, we model the intensity by one-dimensional piecewise constant hazard functions parametrized by the change points and corresponding hazard levels. Our prior distribution incorporates a built-in smoothing feature in two dimensions. We implement posterior simulation using the reversible jump Metropolis–Hastings algorithm and demonstrate the applicability of the method using both simulated and empirical survival data. Our approach outperforms commonly applied models by borrowing strength in two dimensions.

19.
Longitudinal data frequently occur in many studies, and longitudinal responses may be correlated with observation times. In this paper, we propose a new joint modelling for the analysis of longitudinal data with time-dependent covariates and possibly informative observation times via two latent variables. For inference about regression parameters, estimating equation approaches are developed and asymptotic properties of the proposed estimators are established. In addition, a lack-of-fit test is presented for assessing the adequacy of the model. The proposed method performs well in finite-sample simulation studies, and an application to a bladder tumour study is provided.

20.
The authors propose a two-state continuous-time semi-Markov model for an unobservable alternating binary process. Another process is observed at discrete time points that may misclassify the true state of the process of interest. To estimate the model's parameters, the authors propose a minimum Pearson chi-square type estimating approach based on approximated joint probabilities when the true process is in equilibrium. Three consecutive observations are required to have sufficient degrees of freedom to perform estimation. The methodology is demonstrated on parasitic infection data with exponential and gamma sojourn time distributions.
