Similar Literature
Found 20 similar records (search time: 31 ms)
1.
For noninformative nonparametric estimation of finite population quantiles under simple random sampling, estimation based on the Polya posterior is similar to estimation based on the Bayesian approach developed by Ericson (J. Roy. Statist. Soc. Ser. B 31 (1969) 195) in that the Polya posterior distribution is the limit of Ericson's posterior distributions as the weight placed on the prior distribution diminishes. Furthermore, Polya posterior quantile estimates can be shown to be admissible under certain conditions. We demonstrate the admissibility of the sample median as an estimate of the population median under such a set of conditions. As with Ericson's Bayesian approach, Polya posterior-based interval estimates for population quantiles are asymptotically equivalent to the interval estimates obtained from standard frequentist approaches. In addition, for small to moderate sized populations, Polya posterior-based interval estimates for quantiles of a continuous characteristic of interest tend to agree with the standard frequentist interval estimates.
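The Polya posterior lends itself to direct simulation. A minimal sketch (with a hypothetical sample and a plain Polya urn, not the paper's full machinery) of approximating the posterior distribution of a finite-population median under simple random sampling:

```python
import random
import statistics

def polya_posterior_medians(sample, population_size, draws=1000, seed=1):
    """Simulate the Polya posterior for a finite-population median.

    Starting from the observed simple random sample, the unobserved
    units are filled in one at a time by drawing from a Polya urn:
    each draw is a uniform pick from the current urn, and a copy of
    the drawn value is returned to the urn.
    """
    rng = random.Random(seed)
    medians = []
    for _ in range(draws):
        urn = list(sample)
        while len(urn) < population_size:
            urn.append(rng.choice(urn))
        medians.append(statistics.median(urn))
    return medians

# Hypothetical sample of n = 10 from a population of N = 50
sample = [3, 5, 7, 8, 10, 12, 15, 18, 21, 30]
meds = polya_posterior_medians(sample, population_size=50)
point_estimate = statistics.median(meds)  # tends to lie near the sample median
```

Quantiles of `meds` then give an interval estimate of the kind the abstract compares against frequentist intervals.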

2.
With the advent of ever more effective second- and third-line cancer treatments and the growing use of 'crossover' trial designs in oncology, in which patients switch to the alternate randomized treatment upon disease progression, progression-free survival (PFS) is an increasingly important endpoint in oncologic drug development. However, several concerns exist regarding the use of PFS as a basis to compare treatments. Unlike survival, the exact time of progression is unknown, so progression times might be over-estimated and, consequently, bias may be introduced when comparing treatments. Further, it is not uncommon for randomized therapy to be stopped prior to progression being documented due to toxicity or the initiation of additional anti-cancer therapy; in such cases patients are frequently not followed further for progression and, consequently, are right-censored in the analysis. This article reviews these issues and concludes that concerns relating to the exact timing of progression are generally overstated, with analysis techniques and simple alternative endpoints available to either remove bias entirely or at least provide reassurance via supportive analyses that bias is not present. Further, it is concluded that the regularly recommended manoeuvre to censor PFS time at dropout due to toxicity or upon the initiation of additional anti-cancer therapy is likely to favour the more toxic, less efficacious treatment and so should be avoided whenever possible.

3.
We propose a profile conditional likelihood approach to handle missing covariates in the general semiparametric transformation regression model. The method estimates the marginal survival function by the Kaplan-Meier estimator, and then estimates the parameters of the survival model and the covariate distribution from a conditional likelihood, substituting the Kaplan-Meier estimator for the marginal survival function in the conditional likelihood. This method is simpler than full maximum likelihood approaches, and yields a consistent and asymptotically normally distributed estimator of the regression parameter when censoring is independent of the covariates. The estimator demonstrates very high relative efficiency in simulations. When compared with complete-case analysis, the proposed estimator can be more efficient when the missing data are missing completely at random and can correct bias when the missing data are missing at random. The potential application of the proposed method to the generalized probit model with missing continuous covariates is also outlined.

4.
The Barker model provides researchers with an opportunity to use three types of data for mark-recapture analyses: recaptures, recoveries, and resightings. This model structure maximizes use of encounter data and increases the precision of parameter estimates, provided the researcher has large amounts of resighting data. However, to our knowledge, this model has not been used for any published ringing studies. Our objective here is to report our use of the Barker model in covariate-dependent analyses that we conducted in Program MARK. In particular, we wanted to describe our experimental study design and discuss our analytical approach plus some logistical constraints we encountered while conducting a study of the effects of growth and parasites on survival of juvenile Ross's Geese. Birds were marked just before fledging, alternately injected with antiparasite drugs or a control, and then were re-encountered during migration and breeding in following years. Although the Barker model estimates seven parameters, our objectives focused on annual survival only; thus, we considered all other parameters as nuisance terms. Therefore, we simplified our model structures by maintaining biological complexity on survival, while retaining a very basic structure on nuisance parameters. These analyses were conducted in a two-step approach where we used the most parsimonious model from nuisance parameter analyses as our starting model for analyses of covariate effects. This analytical approach also allowed us to minimize the long CPU times associated with the use of covariates in earlier versions of Program MARK. Resightings made up about 80% of our encounter history data, and simulations demonstrated that precision and bias of parameter estimates were minimally affected by this distribution. Overall, the main source of bias was that smaller goslings were too small to retain neckbands, yet were the birds that we predicted would have the lowest survival probability and highest probability for parasite effects. Consequently, we considered our results conservative. The largest constraint of our study design was the inability to partition survival into biologically meaningful periods to provide insight into the timing and mechanisms of mortality.

5.
The gamma frailty model is a natural extension of the Cox proportional hazards model in survival analysis. Because the frailties are unobserved, an E-M approach is often used for estimation. Such an approach is shown to lead to finite sample underestimation of the frailty variance, with the corresponding regression parameters also being underestimated as a result. For the univariate case, we investigate the source of the bias with simulation studies and a complete enumeration. The rank-based E-M approach, we note, only identifies frailty through the order in which failures occur; additional frailty which is evident in the survival times is ignored, and as a result the frailty variance is underestimated. An adaptation of the standard E-M approach is suggested, whereby the non-parametric Breslow estimate is replaced by a local likelihood formulation for the baseline hazard which allows the survival times themselves to enter the model. Simulations demonstrate that this approach substantially reduces the bias, even at small sample sizes. The method developed is applied to survival data from the North West Regional Leukaemia Register.

6.
Quasi-random sequences are known to give efficient numerical integration rules in many Bayesian statistical problems where the posterior distribution can be transformed into periodic functions on the n-dimensional hypercube. From this idea we develop a quasi-random approach to the generation of resamples used for Monte Carlo approximations to bootstrap estimates of bias, variance and distribution functions. We demonstrate a major difference between quasi-random bootstrap resamples, which are generated by deterministic algorithms and have no true randomness, and the usual pseudo-random bootstrap resamples generated by the classical bootstrap approach. Various quasi-random approaches are considered and are shown via a simulation study to result in approximants that are competitive in terms of efficiency when compared with other bootstrap Monte Carlo procedures such as balanced and antithetic resampling.
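One way to see the idea is to drive the resampling with a deterministic low-discrepancy sequence instead of a pseudo-random generator. A toy sketch (Halton points built from a hand-rolled radical inverse, one prime base per observation, limited here to ten observations; the balanced, antithetic, and periodizing variants discussed in the abstract are not attempted):

```python
import statistics

def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    f, result = 1.0, 0.0
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def halton_bootstrap_means(data, n_resamples=128):
    """Bootstrap means whose resample indices come from a Halton sequence:
    resample b uses point b of the sequence, with one prime base per
    observation slot (deterministic, no true randomness)."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:len(data)]
    n = len(data)
    means = []
    for b in range(1, n_resamples + 1):
        resample = [data[int(radical_inverse(b, p) * n)] for p in primes]
        means.append(statistics.mean(resample))
    return means

data = [2.1, 3.4, 1.7, 4.0, 2.8, 3.1]   # hypothetical observations
boot_means = halton_bootstrap_means(data)
variance_estimate = statistics.pvariance(boot_means)
```

Rerunning the function reproduces the same resamples exactly, which is the qualitative difference from pseudo-random bootstrapping that the abstract highlights.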

7.
Estimation in Semiparametric Marginal Shared Gamma Frailty Models (total citations: 1; self-citations: 0; cited by others: 1)
The semiparametric marginal shared frailty models in survival analysis have the non-parametric hazard functions multiplied by a random frailty in each cluster, and the survival times conditional on frailties are assumed to be independent. In addition, the marginal hazard functions have the same form as in the usual Cox proportional hazards models. In this paper, an approach based on maximum likelihood and expectation-maximization is applied to semiparametric marginal shared gamma frailty models, where the frailties are assumed to be gamma distributed with mean 1 and variance θ. The estimates of the fixed-effect parameters and their standard errors obtained using this approach are compared in terms of both bias and efficiency with those obtained using the extended marginal approach. Similarly, the standard errors of our frailty variance estimates are found to compare favourably with those obtained using other methods. The asymptotic distribution of the frailty variance estimates is shown to be a 50-50 mixture of a point mass at zero and a truncated normal random variable on the positive axis for θ0 = 0. Simulations demonstrate that, for θ0 > 0, it is approximately an x%-(100 − x)% mixture, 0 ≤ x ≤ 50, between a point mass at zero and a truncated normal random variable on the positive axis for small samples and small values of θ0; otherwise, it is approximately normal.

8.
In dental implant research studies, events such as implant complications including pain or infection may be observed recurrently before failure events, i.e. the death of implants. It is natural to assume that recurrent events and failure events are correlated to each other, since they happen on the same implant (subject) and complication times have strong effects on the implant survival time. On the other hand, each patient may have more than one implant. Therefore these recurrent events or failure events are clustered since implant complication times or failure times within the same patient (cluster) are likely to be correlated. The overall implant survival times and recurrent complication times are both of interest. In this paper, a joint modelling approach is proposed for modelling complication events and dental implant survival times simultaneously. The proposed method uses a frailty process to model the correlation within cluster and the correlation within subjects. We use Bayesian methods to obtain estimates of the parameters. Performance of the joint models is shown via simulation studies and data analysis.

9.
In cost-effectiveness analyses of drugs or health technologies, estimates of life years saved or quality-adjusted life years saved are required. Randomised controlled trials can provide an estimate of the average treatment effect; for survival data, the treatment effect is the difference in mean survival. However, typically not all patients will have reached the endpoint of interest at the close-out of a trial, making it difficult to estimate the difference in mean survival. In this situation, it is common to report the more readily estimable difference in median survival. Alternative approaches to estimating the mean have also been proposed. We conducted a simulation study to investigate the bias and precision of the three most commonly used sample measures of absolute survival gain - difference in median, restricted mean and extended mean survival - when used as estimates of the true mean difference, under different censoring proportions, while assuming a range of survival patterns, represented by Weibull survival distributions with constant, increasing and decreasing hazards. Our study showed that the three commonly used methods tended to underestimate the true treatment effect; consequently, the incremental cost-effectiveness ratio (ICER) would be overestimated. Of the three methods, the least biased is the extended mean survival, which perhaps should be used as the point estimate of the treatment effect to be input into the ICER, while the other two approaches could be used in sensitivity analyses. More work on the trade-offs between simple extrapolation using the exponential distribution and more complicated extrapolation using other methods would be valuable. Copyright © 2015 John Wiley & Sons, Ltd.
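A restricted-mean estimate can be read directly off the Kaplan-Meier curve as the area under it up to a truncation time. A minimal from-scratch sketch (toy data, not the simulation settings of the study):

```python
def km_curve(times, events):
    """Kaplan-Meier product-limit estimate; returns (event_time, S(t)) steps.
    events: 1 for an observed event, 0 for right-censoring."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, steps = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = censored = 0
        while i < len(data) and data[i][0] == t:   # group tied times
            if data[i][1]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_at_risk
            steps.append((t, s))
        n_at_risk -= deaths + censored
    return steps

def restricted_mean(times, events, tau):
    """Restricted mean survival time: area under the KM curve up to tau."""
    area, last_t, s = 0.0, 0.0, 1.0
    for t, s_next in km_curve(times, events):
        if t >= tau:
            break
        area += s * (t - last_t)
        last_t, s = t, s_next
    area += s * (tau - last_t)
    return area

# With complete follow-up the restricted mean at the last event time
# is just the sample mean:
rmst = restricted_mean([1, 2, 3, 4], [1, 1, 1, 1], tau=4)   # 2.5
```

The "extended mean" favoured by the study would additionally extrapolate the curve beyond the last observed time with a fitted parametric tail.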

10.
In this paper, the Bayesian approach is applied to the estimation problem in the case of step-stress partially accelerated life tests with two stress levels and type-I censoring. The Gompertz distribution is considered as a lifetime model. The posterior means and posterior variances are derived using the squared-error loss function. The Bayes estimates cannot be obtained in explicit forms. Approximate Bayes estimates are computed using the method of Lindley [D.V. Lindley, Approximate Bayesian methods, Trabajos Estadistica 31 (1980), pp. 223–237]. The advantages of the proposed method are illustrated. The approximate Bayes estimates obtained under the assumption of non-informative priors are compared with their maximum likelihood counterparts using Monte Carlo simulation.
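Lindley's method replaces the posterior integral with log-likelihood derivatives evaluated at the MLE. A one-parameter sketch for an exponential lifetime with a flat prior (deliberately simpler than the Gompertz step-stress setting, chosen because the exact posterior mean is available for checking):

```python
def lindley_posterior_mean_exponential(data):
    """Lindley's approximation to the posterior mean of an exponential
    rate theta under a flat prior:

        E[theta | x] ~ theta_hat + 0.5 * L''' * sigma**4,  sigma**2 = -1/L'',

    with all derivatives of the log-likelihood L evaluated at the MLE."""
    n, total = len(data), sum(data)
    theta_hat = n / total                  # MLE of the rate
    sigma2 = theta_hat ** 2 / n            # -1/L'' at the MLE
    l3 = 2 * n / theta_hat ** 3            # third derivative of L at the MLE
    return theta_hat + 0.5 * l3 * sigma2 ** 2

data = [0.8, 1.1, 0.5, 2.0, 1.4]           # hypothetical lifetimes
approx = lindley_posterior_mean_exponential(data)
exact = (len(data) + 1) / sum(data)         # Gamma(n+1, sum x) posterior mean
```

For this model the approximation reproduces the exact posterior mean, theta_hat * (1 + 1/n); in the Gompertz case of the paper no such closed form exists, which is why the approximation is needed.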

11.
Bayesian analysis often concerns an evaluation of models with different dimensionality as is necessary in, for example, model selection or mixture models. To facilitate this evaluation, transdimensional Markov chain Monte Carlo (MCMC) relies on sampling a discrete indexing variable to estimate the posterior model probabilities. However, little attention has been paid to the precision of these estimates. If only few switches occur between the models in the transdimensional MCMC output, precision may be low and assessment based on the assumption of independent samples misleading. Here, we propose a new method to estimate the precision based on the observed transition matrix of the model-indexing variable. Assuming a first-order Markov model, the method samples from the posterior of the stationary distribution. This allows assessment of the uncertainty in the estimated posterior model probabilities, model ranks, and Bayes factors. Moreover, the method provides an estimate for the effective sample size of the MCMC output. In two model selection examples, we show that the proposed approach provides a good assessment of the uncertainty associated with the estimated posterior model probabilities.
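The transition-matrix idea can be sketched for a two-model chain: count the observed switches, put Dirichlet posteriors on the transition rows, and propagate to the stationary model probability. A minimal stdlib sketch under these assumptions, not the authors' implementation:

```python
import random

def stationary_two_state(p12, p21):
    """Stationary probability of state 1 for a two-state Markov chain with
    switch probabilities p12 (1 -> 2) and p21 (2 -> 1)."""
    return p21 / (p12 + p21)

def posterior_model_prob(chain, draws=2000, seed=7):
    """Sample the posterior of the stationary probability of model 1 from
    the observed transitions of a two-model indexing chain, assuming a
    first-order Markov model and Dirichlet(1, 1) priors on the rows."""
    rng = random.Random(seed)
    counts = {(a, b): 0 for a in (1, 2) for b in (1, 2)}
    for a, b in zip(chain, chain[1:]):
        counts[(a, b)] += 1
    samples = []
    for _ in range(draws):
        # Dirichlet draws for each transition row via gamma variates
        g11 = rng.gammavariate(counts[(1, 1)] + 1, 1.0)
        g12 = rng.gammavariate(counts[(1, 2)] + 1, 1.0)
        g21 = rng.gammavariate(counts[(2, 1)] + 1, 1.0)
        g22 = rng.gammavariate(counts[(2, 2)] + 1, 1.0)
        p12 = g12 / (g11 + g12)
        p21 = g21 / (g21 + g22)
        samples.append(stationary_two_state(p12, p21))
    return samples

# A sticky chain with only two switches in 100 iterations: the spread of
# `probs` reveals the low precision that an i.i.d. assumption would hide.
chain = [1] * 50 + [2] * 30 + [1] * 20
probs = posterior_model_prob(chain)
```

The width of the sampled distribution, rather than a binomial standard error on the raw visit proportions, is what the abstract proposes as the honest uncertainty measure.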

12.
Advances in understanding the biological underpinnings of many cancers have led increasingly to the use of molecularly targeted anticancer therapies. Because the platelet-derived growth factor receptor (PDGFR) has been implicated in the progression of prostate cancer bone metastases, it is of great interest to examine possible relationships between PDGFR inhibition and therapeutic outcomes. We analyse the association between change in activated PDGFR (phosphorylated PDGFR) and progression-free survival time based on large within-patient samples of cell-specific phosphorylated PDGFR values taken before and after treatment from each of 88 prostate cancer patients. To utilize these paired samples as covariate data in a regression model for progression-free survival time, and because the phosphorylated PDGFR distributions are bimodal, we first employ a Bayesian hierarchical mixture model to obtain a deconvolution of the pretreatment and post-treatment within-patient phosphorylated PDGFR distributions. We evaluate fits of the mixture model and a non-mixture model that ignores the bimodality by using a supnorm metric to compare the empirical distribution of each phosphorylated PDGFR data set with the corresponding fitted distribution under each model. Our results show that first using the mixture model to account for the bimodality of the within-patient phosphorylated PDGFR distributions, and then using the posterior within-patient component mean changes in phosphorylated PDGFR so obtained as covariates in the regression model for progression-free survival time, provides improved estimation.

13.
For normally distributed data analyzed with linear models, it is well known that measurement error on an independent variable leads to attenuation of the effect of the independent variable on the dependent variable. However, for time-to-event variables such as progression-free survival (PFS), the effect of the measurement variability in the underlying measurements defining the event is less well understood. We conducted a simulation study to evaluate the impact of measurement variability in tumor assessment on the treatment effect hazard ratio for PFS and on the median PFS time, for different tumor assessment frequencies. Our results show that scan measurement variability can cause attenuation of the treatment effect (i.e. the hazard ratio is closer to one) and that the extent of attenuation may be increased with more frequent scan assessments. This attenuation leads to inflation of the type II error rate. Therefore, scan measurement variability should be minimized as far as possible in order to reveal a treatment effect that is closest to the truth. In disease settings where the measurement variability is shown to be large, consideration may be given to inflating the sample size of the study to maintain statistical power. Copyright © 2012 John Wiley & Sons, Ltd.
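The scheduled-assessment effect that underlies such studies is easy to reproduce: progression is only detected at the first scan after it occurs, so recorded times are pushed onto the scan grid. A small simulation sketch (exponential progression times and an assumed scan interval, not the paper's design):

```python
import random
import statistics

def simulate_median_pfs(rate, scan_interval, n=20000, seed=3):
    """Compare the true median progression time under an exponential model
    with the median that would be recorded when progression can only be
    observed at scans spaced `scan_interval` time units apart (progression
    is recorded at the first scan after it actually happens)."""
    rng = random.Random(seed)
    true_times = [rng.expovariate(rate) for _ in range(n)]
    recorded = [scan_interval * (int(t / scan_interval) + 1)
                for t in true_times]
    return statistics.median(true_times), statistics.median(recorded)

# Hypothetical settings: median ~6.9 time units, scans every 2 units
true_med, recorded_med = simulate_median_pfs(rate=0.1, scan_interval=2.0)
# recorded_med exceeds true_med: progression dates snap to later scan dates
```

Adding per-scan measurement noise to the tumor size that triggers "progression" is the further step the paper studies, which is what produces the hazard-ratio attenuation.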

14.
Storage reliability, which measures the ability of products in a dormant state to retain their required functions, is studied in this paper. Unlike operational reliability, storage reliability for certain types of products may not be 100% at the beginning of storage, since possible initial failures exist that are normally neglected in storage reliability models. In this paper, a new combined approach is proposed for estimating and predicting storage reliability with possible initial failures: a nonparametric measure for estimating the number of failed products and the current reliability at each testing time in storage, and a parametric measure for estimating the initial reliability and the failure rate based on the exponential reliability function. The proposed method takes into consideration that initial-failure and reliability-testing data, before and during the storage process, are available to provide more accurate estimates of both the initial failure probability and the probability of storage failures. For storage reliability prediction, which is the main concern in this field, the nonparametric estimates of failure numbers can be used in the parametric models for the failure process in storage. In the case of exponential models, an assessment and prediction method for storage reliability is provided. Finally, numerical examples are given to illustrate the method, and a detailed comparison between the proposed method and the traditional method examines the rationality of the assessment and prediction of storage reliability. The results should be useful for planning a storage environment, making decisions concerning the maximum length of storage, and identifying production quality.
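Under the exponential model the idea reduces to a log-linear fit in which the intercept captures initial failures: R(t) = R0 * exp(-lam * t) with R0 below 1. A minimal sketch with fabricated illustrative numbers (not data from the paper):

```python
import math

def fit_storage_reliability(times, surviving_fractions):
    """Least-squares fit of R(t) = R0 * exp(-lam * t) on the log scale.
    An intercept R0 < 1 captures possible initial (time-zero) failures
    that a model forced through R(0) = 1 would miss."""
    xs = times
    ys = [math.log(r) for r in surviving_fractions]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope    # (R0, failure rate lam)

# Hypothetical testing data: 2% of units already failed at the start of
# storage, then a constant failure rate of 0.05 per time unit
times = [1.0, 2.0, 3.0, 4.0]
fracs = [0.98 * math.exp(-0.05 * t) for t in times]
r0, lam = fit_storage_reliability(times, fracs)   # r0 ~ 0.98, lam ~ 0.05
```

In practice the surviving fractions would come from the paper's nonparametric estimates of failure numbers at each testing time rather than from a formula.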

15.
The product limit or Kaplan-Meier (KM) estimator is commonly used to estimate the survival function in the presence of incomplete time-to-event data. Application of this method assumes inherently that the occurrence of an event is known with certainty. However, the clinical diagnosis of an event is often subject to misclassification due to assay error or adjudication error, by which the event is assessed with some uncertainty. In the presence of such errors, the true distribution of the time to first event would not be estimated accurately using the KM method. We develop a method to estimate the true survival distribution by incorporating negative predictive values and positive predictive values into a KM-like method of estimation. This allows us to quantify the bias in the KM survival estimates due to the presence of misclassified events in the observed data. We present an unbiased estimator of the true survival function and its variance. Asymptotic properties of the proposed estimators are provided, and these properties are examined through simulations. We demonstrate our methods using data from the Viral Resistance to Antiviral Therapy of Hepatitis C study.
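The direction of the bias is visible in a stripped-down product-limit calculation in which each observed event is down-weighted by the positive predictive value. This is only a crude illustration of the correction idea, not the authors' NPV/PPV estimator:

```python
def km_with_ppv(times, events, ppv=1.0):
    """Final product-limit survival estimate with each observed event
    down-weighted by the positive predictive value (assumes distinct
    times for simplicity; ppv=1 recovers the usual Kaplan-Meier value).
    events: 1 for an observed (possibly misclassified) event, 0 censored."""
    s, n_risk = 1.0, len(times)
    for t, e in sorted(zip(times, events)):
        if e:
            s *= 1.0 - ppv / n_risk   # expected true events = ppv * observed
        n_risk -= 1
    return s

naive = km_with_ppv([1, 2, 3, 4], [1, 1, 1, 0])           # all events taken as real
adjusted = km_with_ppv([1, 2, 3, 4], [1, 1, 1, 0], 0.8)   # allow 20% false positives
# adjusted > naive: false-positive events drag the naive curve down
```

The full method in the abstract also uses the negative predictive value, since misclassification can run in both directions.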

16.
Skewness, like kurtosis, is a qualitative property of a distribution. A comparison of several measures of skewness of univariate distributions is carried out. Hampel's influence function is used to clarify the differences and similarities among these measures. A general concept of skewness as a location- and scale-free deformation of the probability mass of a symmetric distribution emerges. Positive skewness can be thought of as resulting from movement of mass at the right of the median from the center to the right tail of the distribution together with movement of mass at the left of the median from the left tail to the center of the distribution.
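The contrast among such measures is easy to compute. A short sketch comparing the classical moment-based coefficient with the quartile-based (Bowley) measure, on hypothetical samples:

```python
import statistics

def moment_skewness(xs):
    """Classical third standardized moment (sensitive to tail outliers)."""
    m = statistics.fmean(xs)
    n = len(xs)
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def bowley_skewness(xs):
    """Quartile-based (Bowley) skewness: location- and scale-free by
    construction, and bounded in [-1, 1]."""
    q1, q2, q3 = statistics.quantiles(xs, n=4)
    return (q3 + q1 - 2 * q2) / (q3 - q1)

symmetric = [1, 2, 3, 4, 5]
right_skewed = [1, 1, 2, 2, 3, 10]
# Both measures are zero for the symmetric sample and positive for the
# right-skewed one, but they weight the tail mass very differently.
```

Hampel's influence function, as used in the abstract, formalizes exactly this difference in how the two measures respond to mass moved into a tail.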

17.
In oncology, progression-free survival time, which is defined as the minimum of the times to disease progression or death, often is used to characterize treatment and covariate effects. We are motivated by the desire to estimate the progression time distribution on the basis of data from 780 paediatric patients with choroid plexus tumours, which are a rare brain cancer where disease progression always precedes death. In retrospective data on 674 patients, the times to death or censoring were recorded but progression times were missing. In a prospective study of 106 patients, both times were recorded but there were only 20 non-censored progression times and 10 non-censored survival times. Consequently, estimating the progression time distribution is complicated by the problems that, for most of the patients, either the survival time is known but the progression time is not known, or the survival time is right censored and it is not known whether the patient's disease progressed before censoring. For data with these missingness structures, we formulate a family of Bayesian parametric likelihoods and present methods for estimating the progression time distribution. The underlying idea is that estimating the association between the time to progression and subsequent survival time from patients having complete data provides a basis for utilizing covariates and partial event time data of other patients to infer their missing progression times. We illustrate the methodology by analysing the brain tumour data, and we also present a simulation study.

18.
Censoring frequently occurs in survival analysis, but the observed lifetimes are often not of large sample size. Thus, inferences based on the popular maximum likelihood (ML) estimation, which often gives biased estimates, should be bias-corrected. Here, we investigate the biases of ML estimates under the progressive type-II censoring scheme (pIIcs). We use a method proposed by Efron and Johnstone [Fisher's information in terms of the hazard rate. Technical Report No. 264, Stanford University, Stanford, California, 1987] to derive general expressions for bias-corrected ML estimates under the pIIcs. This requires derivation of the Fisher information matrix under the pIIcs. As an application, exact expressions are given for bias-corrected ML estimates of the Weibull distribution under the pIIcs. The performance of the bias-corrected ML estimates and the ML estimates is compared by simulations and a real data application.

19.
Survival data may include two different sources of variation, namely variation over time and variation over units. If both of these variations are present, neglecting one of them can cause serious bias in the estimations. Here we present an approach for discrete duration data that includes both time-varying and unit-specific effects to model these two variations simultaneously. The approach is a combination of a dynamic survival model with dynamic time-varying baseline and covariate effects and a frailty model measuring unobserved heterogeneity with random effects varying independently over units. Estimation is based on posterior modes, i.e., we maximize the joint posterior distribution of the unknown parameters to avoid numerical integration and simulation techniques that are necessary in a full Bayesian analysis. Estimation of unknown hyperparameters is achieved by an EM-type algorithm. Finally, the proposed method is applied to data of the Veteran's Administration Lung Cancer Trial.

20.
Transition probabilities can be estimated when capture-recapture data are available from each stratum on every capture occasion using a conditional likelihood approach with the Arnason-Schwarz model. To decompose the fundamental transition probabilities into derived parameters, all movement probabilities must sum to 1 and all individuals in stratum r at time i must have the same probability of survival regardless of which stratum the individual is in at time i + 1. If movement occurs among strata at the end of a sampling interval, survival rates of individuals from the same stratum are likely to be equal. However, if movement occurs between sampling periods and survival rates of individuals from the same stratum are not the same, estimates of stratum survival can be confounded with estimates of movement causing both estimates to be biased. Monte Carlo simulations were made of a three-sample model for a population with two strata using SURVIV. When differences were created in transition-specific survival rates for survival rates from the same stratum, relative bias was <2% in estimates of stratum survival and capture rates but relative bias in movement rates was much higher and varied. The magnitude of the relative bias in the movement estimate depended on the relative difference between the transition-specific survival rates and the corresponding stratum survival rate. The direction of the bias in movement rate estimates was opposite to the direction of this difference. Increases in relative bias due to increasing heterogeneity in probabilities of survival, movement and capture were small except when survival and capture probabilities were positively correlated within individuals.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号