Similar Articles (20 results)
1.
Maria Kateri. Statistics, 2013, 47(5): 443–455
In this paper, we examine the relationships between the log odds rate and various reliability measures, such as the hazard rate and the reversed hazard rate, in the context of repairable systems. We also prove characterization theorems for several families of distributions, namely the Burr, Pearson, and log exponential models. We discuss the properties and applications of the log odds rate in weighted models. Finally, we extend the concept to the bivariate setup and study its properties.
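
A minimal numerical sketch (not the paper's derivation) of how the log odds rate relates to the hazard rate and the reversed hazard rate, illustrated for a Weibull distribution; the shape parameter and grid are arbitrary choices for illustration.

```python
import numpy as np
from scipy.stats import weibull_min

shape, t = 1.5, np.linspace(0.1, 3.0, 50)
f = weibull_min.pdf(t, shape)
F = weibull_min.cdf(t, shape)
S = weibull_min.sf(t, shape)

hazard = f / S                 # h(t) = f(t) / S(t)
rev_hazard = f / F             # r(t) = f(t) / F(t)
log_odds_rate = f / (F * S)    # d/dt log(F(t) / S(t))

# The log odds rate decomposes as the sum of the two hazard measures:
assert np.allclose(log_odds_rate, hazard + rev_hazard)
```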

2.
The authors study the empirical likelihood method for linear regression models. They show that when missing responses are imputed using least squares predictors, the empirical log-likelihood ratio is asymptotically a weighted sum of chi-square variables with unknown weights. They obtain an adjusted empirical log-likelihood ratio that is asymptotically standard chi-square and hence can be used to construct confidence regions. They also obtain a bootstrap empirical log-likelihood ratio and use its distribution to approximate that of the empirical log-likelihood ratio. A simulation study indicates that the proposed methods are comparable in terms of coverage probabilities and average lengths of confidence intervals, and perform better than a method based on the normal approximation.
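
A hedged, minimal sketch of the empirical log-likelihood ratio for a mean, the basic building block behind adjusted ratios of this kind; the paper's regression and imputation machinery is not reproduced here.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_ratio_mean(x, mu):
    """-2 * empirical log-likelihood ratio for H0: E[X] = mu."""
    d = np.asarray(x, dtype=float) - mu
    if d.min() >= 0 or d.max() <= 0:           # mu outside the convex hull
        return np.inf
    g = lambda lam: np.sum(d / (1.0 + lam * d))
    eps = 1e-10
    lam = brentq(g, -1.0 / d.max() + eps, -1.0 / d.min() - eps)
    return 2.0 * np.sum(np.log1p(lam * d))     # Owen's log EL ratio

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=100)
stat = el_ratio_mean(x, mu=1.0)
print(stat, chi2.sf(stat, df=1))               # asymptotically chi-square(1)
```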

3.
The odds ratio (OR) is a measure of association used for analysing an I × J contingency table. The total number of ORs to check grows with I and J, and several statistical methods have been developed for summarising them. These methods begin from two different starting points: the I × J contingency table itself and the two-way table composed of the ORs. In this paper we focus on the relationship between these methods and point out that, for an exhaustive analysis of association through log ORs, it is necessary to consider all the outcomes of these methods. We also introduce some new methodological and graphical features. To illustrate the methodologies, we consider a cross-classification of the eye and hair colour of 5387 children from Scotland. We show how, through the log OR analysis, it is possible to extract useful information about the association between the variables.
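
A small sketch of the (I-1) × (J-1) two-way table of local log odds ratios for an I × J table; the counts below are made up for illustration and are not the Scottish eye and hair colour data analysed in the paper.

```python
import numpy as np

counts = np.array([[326, 38, 241],      # hypothetical 3 x 3 counts
                   [688, 116, 584],
                   [343, 84, 909]], dtype=float)

# Local log OR for each adjacent 2x2 subtable:
# log( n[i,j] * n[i+1,j+1] / (n[i,j+1] * n[i+1,j]) )
log_or = (np.log(counts[:-1, :-1]) + np.log(counts[1:, 1:])
          - np.log(counts[:-1, 1:]) - np.log(counts[1:, :-1]))
print(log_or)   # the (I-1) x (J-1) table of local log ORs
```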

4.
The win ratio has been studied methodologically and applied in data analysis and in the design of clinical trials. Researchers have pointed out that the results depend on follow-up time and censoring time, which are sometimes used interchangeably. In this article, we distinguish between follow-up time and censoring time, show theoretically the impact of censoring on the win ratio, and illustrate the impact of follow-up time. We then point out that, if the treatment has a long-term benefit on a more important but less frequent endpoint (e.g., death), the win ratio can reveal that benefit by following patients longer, avoiding the masking by more frequent but less important outcomes that occurs in conventional time-to-first-event analyses. For the situation of nonproportional hazards, we demonstrate that the win ratio can be a good alternative to methods such as the landmark survival rate, restricted mean survival time, and weighted log-rank tests.
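
A minimal sketch of the win ratio for a two-level hierarchical endpoint (death, then hospitalization), under the simplifying assumption of a common fixed follow-up time; the paper's analysis of censoring versus follow-up time is not reproduced, and all rates are invented.

```python
import numpy as np

def win_ratio(trt, ctl):
    """Rows of trt/ctl: (death_time, death_event, hosp_time, hosp_event)."""
    wins = losses = 0
    for dt, de, ht, he in trt:
        for dc, ce, hc, hce in ctl:
            # Level 1: death decides the pair if the earlier death is observed
            if de and (not ce or dt < dc):
                losses += 1            # treatment patient died first: loss
            elif ce and (not de or dc < dt):
                wins += 1              # control patient died first: win
            # Level 2: hospitalization breaks ties on death
            elif he and (not hce or ht < hc):
                losses += 1
            elif hce and (not he or hc < ht):
                wins += 1
            # otherwise the pair is a tie and contributes nothing
    return wins / losses

rng = np.random.default_rng(1)
n = 80
def arm(death_rate, fup=3.0):
    d = rng.exponential(1 / death_rate, n); h = rng.exponential(0.8, n)
    return np.column_stack([np.minimum(d, fup), d <= fup,
                            np.minimum(h, fup), h <= fup])
print(win_ratio(arm(0.2), arm(0.4)))   # > 1 favours treatment here
```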

5.
In this paper, we review available methods for determining the functional form of the relation between a covariate and the log hazard ratio in a Cox model. We pay special attention to the detection of observations that are influential specifically on the estimated functional form of this relation. Our paper is motivated by a data set from a cohort study of lung cancer and silica exposure, in which the nonlinear shape of the estimated log hazard ratio for silica exposure plotted against cumulative exposure (hereafter referred to as the exposure–response curve) was greatly affected by whether or not the two individuals with the highest exposures were included in the analysis. Formal influence diagnostics did not identify these two individuals but did identify the three highest exposed cases. Removal of these three cases resulted in a biologically plausible exposure–response curve.
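
A hedged sketch of a brute-force leave-one-out influence check on an estimated log hazard ratio, using a simple linear term rather than the paper's flexible exposure-response curve; it assumes the `lifelines` package, and the simulated data are purely illustrative.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 200
exposure = rng.lognormal(sigma=1.0, size=n)
time = rng.exponential(1 / (0.1 * np.exp(0.3 * np.log1p(exposure))))
df = pd.DataFrame({"time": np.minimum(time, 10.0),   # administrative censoring
                   "event": (time < 10).astype(int),
                   "exposure": exposure})

full = CoxPHFitter().fit(df, "time", "event").params_["exposure"]
influence = []
for i in range(n):                        # refit without observation i
    fit_i = CoxPHFitter().fit(df.drop(index=i), "time", "event")
    influence.append(fit_i.params_["exposure"] - full)

# Flag the observations whose removal moves the log HR the most
print(np.argsort(np.abs(influence))[-5:])
```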

6.
By studying the deviations between uniform empirical and quantile processes (the so-called Bahadur–Kiefer representations) of a stationary sequence in properly weighted sup-norm metrics, we develop a general approach to obtaining weighted results for uniform quantile processes of stationary sequences. Consequently, we obtain weak convergence for weighted uniform quantile processes of stationary mixing and associated sequences. Further, by studying the sup-norm distance of a general quantile process from its corresponding uniform quantile process, we show that information at the two end points of the uniform quantile process can be utilized so that this weighted sup-norm distance converges in probability to zero under the so-called Csörgő–Révész conditions. This enables us to obtain weak convergence for weighted general quantile processes of stationary mixing and associated sequences.

7.
In recent years, immunological science has evolved, and cancer vaccines are now approved and available for treating existing cancers. Because cancer vaccines require time to elicit an immune response, a delayed treatment effect is expected and is in fact observed in drug approval studies. Accordingly, we propose evaluating survival endpoints by weighted log-rank tests with the Fleming–Harrington class of weights. We consider group sequential monitoring, which allows early stopping for efficacy, and determine a semiparametric information fraction for the Fleming–Harrington family of weights, which is necessary for the error spending function. Moreover, we give a flexible survival model for cancer vaccine studies that accounts not only for the delayed treatment effect but also for long-term survivors. In a Monte Carlo simulation study, we illustrate that when the primary analysis is a weighted log-rank test emphasizing late differences, the proposed information fraction can be a useful alternative to the surrogate information fraction, which is proportional to the number of events.
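
A self-contained sketch of the Fleming–Harrington G(p, q) weighted log-rank statistic, with weights based on the left-continuous pooled Kaplan–Meier curve; this illustrates the class of weights only, not the paper's group sequential information-fraction machinery, and the simulated rates are arbitrary.

```python
import numpy as np

def fh_logrank(t1, e1, t2, e2, p=0.0, q=1.0):
    t = np.concatenate([t1, t2]); e = np.concatenate([e1, e2])
    grp = np.concatenate([np.zeros(len(t1)), np.ones(len(t2))])

    s_pool, U, V = 1.0, 0.0, 0.0
    for tau in np.unique(t[e == 1]):
        risk = t >= tau
        n, n1 = risk.sum(), (risk & (grp == 0)).sum()
        died = (t == tau) & (e == 1)
        d, d1 = died.sum(), (died & (grp == 0)).sum()
        w = s_pool ** p * (1.0 - s_pool) ** q     # weight uses pooled KM at t-
        U += w * (d1 - d * n1 / n)                # observed minus expected
        if n > 1:
            V += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        s_pool *= 1.0 - d / n                     # update pooled KM
    return U / np.sqrt(V)                         # ~ N(0,1) under H0

rng = np.random.default_rng(3)
t1, t2 = rng.exponential(1.0, 150), rng.exponential(1.4, 150)
c1, c2 = rng.exponential(2.0, 150), rng.exponential(2.0, 150)
z = fh_logrank(np.minimum(t1, c1), (t1 <= c1).astype(int),
               np.minimum(t2, c2), (t2 <= c2).astype(int), p=0, q=1)
print(z)   # G(0,1) emphasizes late differences
```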

8.
A challenge arising in the design of cancer immunotherapy trials is the presence of a delayed treatment effect, under which the proportional hazards assumption no longer holds. As a result, a traditional survival trial design based on the standard log-rank test, which ignores the delayed treatment effect, leads to a substantial loss of statistical power. Recently, a piecewise weighted log-rank test was proposed to incorporate the delayed treatment effect into the trial design. However, because its sample size formula was derived under a sequence of local alternative hypotheses, it underestimates the sample size when the hazard ratio is relatively small for a balanced design and gives inaccurate sample size estimates for unbalanced designs. In this article, we derive a new sample size formula under a fixed alternative hypothesis for the delayed treatment effect model. Simulation results show that the new formula provides accurate sample size estimates for both balanced and unbalanced designs.
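
For orientation, a hedged sketch of the classical Schoenfeld event-count formula under proportional hazards, the baseline that formulas of the kind derived in this article correct when the treatment effect is delayed; the numbers are illustrative only.

```python
import numpy as np
from scipy.stats import norm

def schoenfeld_events(hr, alpha=0.05, power=0.9, allocation=0.5):
    """Required number of events for a two-sided level-alpha log-rank test."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p = allocation                      # proportion randomized to treatment
    return (za + zb) ** 2 / (p * (1 - p) * np.log(hr) ** 2)

print(round(schoenfeld_events(hr=0.7)))   # ~330 events for HR = 0.7
```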

9.
The statistical problems associated with estimating the mean responding cell density in the limiting dilution assay (LDA) have largely been ignored. We evaluate techniques for analysing LDA data from multiple biological samples, assumed to follow either a normal or gamma distribution. Simulated data are used to evaluate the performance of an unweighted mean, a log transform, and the weighted mean procedure described by Taswell (1987). In general, an unweighted mean with jackknife estimates produces satisfactory results. In some cases, a log transform is more appropriate. Taswell's weighted mean algorithm is unable to estimate an accurate variance. We also show that methods which pool samples, or LDA data, are invalid. In addition, we show that optimization of the variance in multiple-sample LDAs depends on the estimator, the between-organism variance, the replicate well size, and the number of biological samples. This optimization, however, is generally achieved by maximizing the number of biological samples at the expense of well replicates.
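
A minimal sketch of the single-hit Poisson model that underlies the LDA: the frequency f of responding cells is estimated from the fraction of negative wells at each cell dose. The counts are made up, and only a single biological sample is used, so the multi-sample weighting issue studied in the paper does not arise here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

doses = np.array([1000., 3000., 10000., 30000.])   # cells per well
wells = np.array([24, 24, 24, 24])                  # replicate wells per dose
neg = np.array([22, 18, 9, 2])                      # wells with no response

def neg_log_lik(log_f):
    p_neg = np.exp(-np.exp(log_f) * doses)          # P(well negative) = e^{-f x}
    p_neg = np.clip(p_neg, 1e-12, 1 - 1e-12)
    return -np.sum(neg * np.log(p_neg) + (wells - neg) * np.log(1 - p_neg))

res = minimize_scalar(neg_log_lik, bounds=(-15, -2), method="bounded")
print(np.exp(res.x))    # estimated responding-cell frequency, ~1e-4 here
```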

10.
In this paper we outline a class of fully parametric proportional hazards models in which the baseline hazard is assumed to be a power transform of the time scale, corresponding to the assumption that survival times follow a Weibull distribution. Such a class of models allows for time-varying hazard rates but assumes a constant hazard ratio. We outline how Bayesian inference proceeds for this class of models using asymptotic approximations that require only the ability to maximize the joint log posterior density. We apply these models to a clinical trial assessing the efficacy of neutron therapy compared with conventional treatment for patients with tumors of the pelvic region. In this trial there was prior information about the log hazard ratio, both in terms of elicited clinical beliefs and the results of previous studies. Finally, we consider a number of extensions to this class of models, in particular the use of alternative baseline functions and the extension to multi-state data.
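
A hedged sketch of Bayesian inference for a two-arm Weibull proportional hazards model via a Laplace (normal) approximation to the posterior, using only maximization of the joint log posterior; the normal prior on the log hazard ratio (mean 0, sd 0.5) is an illustrative stand-in, not the paper's elicited prior.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 100
x = rng.integers(0, 2, 2 * n)                       # 0 = control, 1 = neutron
t = rng.weibull(1.3, 2 * n) * np.exp(-0.5 * x / 1.3)  # true log HR = 0.5
e = (t < 2.0).astype(int); t = np.minimum(t, 2.0)   # censor at 2 time units

def neg_log_post(theta):
    log_lam, log_k, beta = theta
    lam, k = np.exp(log_lam), np.exp(log_k)
    log_h = log_lam + log_k + (k - 1) * np.log(t) + beta * x   # log hazard
    cum_h = lam * t ** k * np.exp(beta * x)                    # cumulative hazard
    log_lik = np.sum(e * log_h - cum_h)
    log_prior = -0.5 * (beta / 0.5) ** 2                       # beta ~ N(0, 0.5^2)
    return -(log_lik + log_prior)

res = minimize(neg_log_post, x0=np.zeros(3), method="BFGS")
post_mean, post_sd = res.x[2], np.sqrt(res.hess_inv[2, 2])
print(post_mean, post_sd)   # approximate posterior of the log hazard ratio
```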

11.
The stratified Cox model is commonly used in stratified clinical trials with time-to-event endpoints. The estimated log hazard ratio is approximately a weighted average of the stratum-specific Cox model estimates using inverse-variance weights; the latter are optimal only under the (often implausible) assumption of a constant hazard ratio across strata. Focusing on trials with limited sample sizes (50–200 subjects per treatment), we propose an alternative approach in which stratum-specific estimates are obtained using a refined generalized logrank (RGLR) approach and then combined using either sample size or minimum risk weights for overall inference. Our proposal extends the work of Mehrotra et al. to incorporate the RGLR statistic, which outperforms the Cox model under proportional hazards with small samples. This work also entails the development of a remarkably accurate plug-in formula for the variance of RGLR-based estimated log hazard ratios. We demonstrate using simulations that the proposed two-step RGLR analysis delivers notably better results, through smaller estimation bias and mean squared error and larger power, than the stratified Cox model analysis when there is a treatment-by-stratum interaction, with similar performance when there is no interaction. Additionally, our method controls the type I error rate in small samples, while the stratified Cox model does not. We illustrate our method using data from a clinical trial comparing two treatments for colon cancer.
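
A small numeric sketch of the combination step: pooling stratum-specific log hazard ratio estimates with sample-size weights versus inverse-variance weights. The stratum estimates and standard errors are made up, and the RGLR first step is not reproduced.

```python
import numpy as np

log_hr = np.array([-0.45, -0.10, -0.62])    # per-stratum estimates
se = np.array([0.30, 0.22, 0.41])           # per-stratum standard errors
n_strata = np.array([60, 110, 40])          # per-stratum sample sizes

w_ss = n_strata / n_strata.sum()            # sample-size weights
w_iv = (1 / se**2) / np.sum(1 / se**2)      # inverse-variance weights

for name, w in [("sample-size", w_ss), ("inverse-variance", w_iv)]:
    est = np.sum(w * log_hr)                # combined log HR
    se_comb = np.sqrt(np.sum(w**2 * se**2))
    print(f"{name:>17}: logHR = {est:.3f}, SE = {se_comb:.3f}")
```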

12.
In biomedical research, weighted logrank tests are frequently applied to compare two samples of randomly right-censored survival times. We address the question of how to combine a number of weighted logrank statistics to achieve good power of the corresponding survival test for a whole linear space or cone of alternatives, which are specified via hazard rates. This leads to a new class of semiparametric projection tests, motivated by likelihood ratio tests for an asymptotic model. We show that these tests can be carried out as permutation tests and discuss their asymptotic properties. A simulation study, together with the analysis of a classical data set, illustrates the advantages.
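
A hedged sketch of one simple way to combine two Fleming–Harrington weighted logrank statistics into a maximum-type test calibrated by permuting group labels; this is a simplified stand-in for the paper's projection tests, not their construction, and it reuses a compact version of the weighted logrank statistic from the sketch under item 7 so the example stays self-contained.

```python
import numpy as np

def fh_z(t, e, grp, p, q):
    s, U, V = 1.0, 0.0, 0.0
    for tau in np.unique(t[e == 1]):
        risk = t >= tau
        n, n1 = risk.sum(), (risk & (grp == 0)).sum()
        died = (t == tau) & (e == 1)
        d, d1 = died.sum(), (died & (grp == 0)).sum()
        w = s**p * (1 - s) ** q
        U += w * (d1 - d * n1 / n)
        if n > 1:
            V += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        s *= 1 - d / n
    return abs(U) / np.sqrt(V)

def max_combo_perm(t, e, grp, n_perm=500, seed=0):
    rng = np.random.default_rng(seed)
    stat = max(fh_z(t, e, grp, 0, 0), fh_z(t, e, grp, 0, 1))
    null = []
    for _ in range(n_perm):                 # permute the group labels
        g = rng.permutation(grp)
        null.append(max(fh_z(t, e, g, 0, 0), fh_z(t, e, g, 0, 1)))
    return np.mean(np.asarray(null) >= stat)   # permutation p-value

rng = np.random.default_rng(5)
t = np.concatenate([rng.exponential(1.0, 60), rng.exponential(1.5, 60)])
e = np.ones(120, dtype=int)
grp = np.repeat([0, 1], 60)
print(max_combo_perm(t, e, grp))
```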

13.
The authors consider the empirical likelihood method for the regression model of mean quality-adjusted lifetime with right censoring. They show that an empirical log-likelihood ratio for the vector of regression parameters is asymptotically a weighted sum of independent chi-squared random variables. They adjust this empirical log-likelihood ratio so that the limiting distribution is a standard chi-square and construct corresponding confidence regions. Simulation studies lead them to conclude that empirical likelihood methods outperform the normal approximation methods in terms of coverage probability. They illustrate their methods with data from a breast cancer clinical trial study.

14.
Tonguç Çağın. Statistics, 2017, 51(6): 1259–1279
We study the almost sure convergence and convergence rates of weighted sums of associated random variables under the classical assumption of the existence of Laplace transforms. This assumption implies the existence of every moment, so we address the same problem assuming a suitable decrease rate on tail joint probabilities, which implies the existence of only finitely many moments, and prove the analogous characterizations of convergence and rates. Relaxing the moment assumptions still further, we also prove a Marcinkiewicz–Zygmund strong law for associated variables without means, complementing existing results for this dependence structure.

15.
Proportional hazards are a common assumption when designing confirmatory clinical trials in oncology. This assumption affects not only the analysis but also the sample size calculation. The presence of delayed effects causes the hazard ratio to change while the trial is ongoing: at the beginning no difference between treatment arms is observed, and only after some unknown time point do differences between the arms start to appear. Hence, the proportional hazards assumption no longer holds, and both the sample size calculation and the analysis methods should be reconsidered. The weighted log-rank test allows weighting of early, middle, and late differences through the Fleming and Harrington class of weights and is proven to be more efficient when the proportional hazards assumption does not hold. The Fleming and Harrington class of weights, along with the estimated delay, can be incorporated into the sample size calculation in order to maintain the desired power once the treatment arm differences start to appear. In this article, we explore the impact of delayed effects in group sequential and adaptive group sequential designs and make an empirical evaluation, in terms of power and type I error rate, of the weighted log-rank test in a simulated scenario with fixed values of the Fleming and Harrington class of weights. We also give some practical recommendations regarding which methodology should be used in the presence of delayed effects, depending on certain characteristics of the trial.
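
A minimal sketch of simulating survival times with a delayed treatment effect: hazards are identical before a changepoint and proportional after it (piecewise exponential), the setting in which late-difference weights such as G(0, 1) pay off. All rates and the changepoint are arbitrary illustrative values.

```python
import numpy as np

def delayed_effect_times(n, base_rate=0.5, hr_after=0.6, delay=1.0, seed=0):
    rng = np.random.default_rng(seed)
    t = rng.exponential(1 / base_rate, n)        # no effect before `delay`
    late = t > delay                             # survivors of the early phase
    # beyond the changepoint the hazard becomes base_rate * hr_after;
    # by memorylessness we can redraw the residual time from that rate
    t[late] = delay + rng.exponential(1 / (base_rate * hr_after), late.sum())
    return t

control = delayed_effect_times(200, hr_after=1.0, seed=6)   # HR = 1 throughout
treated = delayed_effect_times(200, hr_after=0.6, seed=7)
print(control.mean(), treated.mean())   # treated mean is larger (late benefit)
```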

16.
Likelihood ratios (LRs) are used to characterize the efficiency of diagnostic tests. In this paper, we use the classical weighted least squares (CWLS) test procedure, originally used for testing the homogeneity of relative risks, to compare the LRs of two or more binary diagnostic tests. We compare the performance of this method with the relative diagnostic likelihood ratio (rDLR) method and the diagnostic likelihood ratio regression (DLRReg) approach in terms of size and power, and we observe that the performances of CWLS and rDLR are the same when comparing two diagnostic tests, while the DLRReg method has higher type I error rates and higher power. We also examine the performance of the CWLS and DLRReg methods for comparing three diagnostic tests under various sample size and prevalence combinations. On the basis of Monte Carlo simulations, we conclude that all of the tests are generally conservative and have low power, especially in settings of small sample size and low prevalence.
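
A hedged sketch of a CWLS-style homogeneity test for the positive likelihood ratios of k diagnostic tests, using made-up counts and treating the tests as independent; the paper's paired-design variances are not reproduced, and the delta-method variance below is a standard Simel-type approximation.

```python
import numpy as np
from scipy.stats import chi2

# Per test: true positives among diseased, false positives among non-diseased
tp = np.array([85, 78, 90]); n_dis = np.array([100, 100, 100])
fp = np.array([10, 18, 12]); n_non = np.array([100, 100, 100])

se, sp = tp / n_dis, 1 - fp / n_non
log_lr = np.log(se / (1 - sp))                     # log positive LR
# delta-method variance of the log positive LR
var = (1 - se) / (n_dis * se) + sp / (n_non * (1 - sp))

w = 1 / var
pooled = np.sum(w * log_lr) / w.sum()              # CWLS common log LR
Q = np.sum(w * (log_lr - pooled) ** 2)             # homogeneity statistic
print(Q, chi2.sf(Q, df=len(tp) - 1))               # ~ chi-square(k-1) under H0
```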

17.
We consider variance estimation for the weighted likelihood estimator (WLE) under two-phase stratified sampling without replacement. The asymptotic variance of the WLE in many semiparametric models contains unknown functions or has no closed form, so the standard method of taking inverse probability weighted (IPW) sample variances of an estimated influence function is not available in these models. To address this issue, we develop a variance estimation procedure for the WLE in a general semiparametric model. The phase I variance is estimated by taking a numerical derivative of the IPW log likelihood. The phase II variance is estimated by a bootstrap for a stratified sample from a finite population. Despite the theoretical difficulty posed by dependent observations due to sampling without replacement, we establish the (bootstrap) consistency of our estimators. Finite sample properties of our method are illustrated in a simulation study.
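
A much-simplified sketch of the phase II idea, a stratified bootstrap for an IPW estimator of a mean under two-phase stratified sampling; an ordinary with-replacement bootstrap within strata is used here as a simplification of the paper's finite-population, without-replacement scheme, and the semiparametric WLE and phase I numerical derivative are beyond a short example.

```python
import numpy as np

rng = np.random.default_rng(8)
# Phase II data: observed values per stratum; N gives phase I stratum sizes
strata = {0: rng.normal(0.0, 1.0, 40), 1: rng.normal(2.0, 1.5, 25)}
N = {0: 400, 1: 250}
pi = {s: len(v) / N[s] for s, v in strata.items()}   # sampling fractions

def ipw_mean(samples):
    num = sum(np.sum(v) / pi[s] for s, v in samples.items())
    den = sum(len(v) / pi[s] for s, v in samples.items())
    return num / den

boot = []
for _ in range(1000):                  # resample within each stratum
    res = {s: rng.choice(v, size=len(v), replace=True)
           for s, v in strata.items()}
    boot.append(ipw_mean(res))
print(ipw_mean(strata), np.var(boot, ddof=1))   # estimate, bootstrap variance
```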

18.
We derive the large sample distribution of the weighted log rank statistic under a general class of local alternatives in which both the cure rates and the conditional distribution of time to failure among those who fail are allowed to vary between the two treatment arms. The analytic result presented here is important to data analysts designing clinical trials for diseases such as non-Hodgkin's lymphoma, leukemia, and melanoma, where a significant proportion of patients are cured. We present a numerical illustration comparing powers obtained from the analytic result to those obtained from simulations.

19.
In randomized clinical trials, the log rank test is often used to test the null hypothesis of equality of treatment-specific survival distributions. In observational studies, however, the ordinary log rank test is no longer guaranteed to be valid, and we must be cautious about potential confounders, that is, covariates that affect both the treatment assignment and the survival distribution. In this paper, two cases are considered: first, when all potential confounders are believed to be captured in the primary database, and second, when a substudy is conducted to capture additional confounding covariates. We generalize the augmented inverse probability weighted complete case estimators for the treatment-specific survival distribution proposed in Bai et al. (Biometrics 69:830–839, 2013) and develop log rank type tests in both cases. The consistency and double robustness of the proposed test statistics are shown in simulation studies. The statistics are then applied to data from the observational study that motivated this research.
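
A hedged sketch of the first ingredient of an IPW-adjusted log rank comparison in an observational study: inverse-probability-of-treatment weights from a propensity model (scikit-learn's logistic regression is assumed). The weighted risk-set statistic and the paper's augmented, doubly robust version are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 500
x = rng.normal(size=(n, 2))                       # measured confounders
p_trt = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
z = rng.binomial(1, p_trt)                        # confounded treatment

e_hat = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]
w = z / e_hat + (1 - z) / (1 - e_hat)             # IPTW weights

# Weighted counts like these replace the raw ones in the log rank risk sets:
print(w[z == 1].sum(), w[z == 0].sum())           # both estimate n = 500
```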

20.
As new diagnostic tests are developed and marketed, it is very important to be able to compare the accuracy of two continuous-scale diagnostic tests. An effective way to evaluate the difference between the diagnostic accuracies of two tests is to compare partial areas under their receiver operating characteristic curves (partial AUCs). In this paper, we review existing parametric methods, and then propose a new semiparametric method and a new nonparametric method for investigating the difference between two partial AUCs. For the difference under each method, we derive a normal approximation, define an empirical log-likelihood ratio, and show that the empirical log-likelihood ratio follows a scaled chi-square distribution. We construct five confidence intervals for the difference based on normal approximation, bootstrap, and empirical likelihood methods. Finally, extensive simulation studies are conducted to compare the finite-sample performances of these intervals, and a real example is used as an application of our recommended intervals. The simulation results indicate that the proposed hybrid bootstrap and empirical likelihood intervals outperform the existing intervals in most cases.
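
A hedged nonparametric sketch of comparing two partial AUCs over the false-positive range [0, 0.3] with a simple percentile bootstrap interval; scikit-learn's roc_curve is assumed, the data are simulated, and the empirical likelihood intervals recommended in the paper are not reproduced.

```python
import numpy as np
from sklearn.metrics import roc_curve

def pauc(y, score, fpr_max=0.3):
    fpr, tpr, _ = roc_curve(y, score)
    grid = np.linspace(0, fpr_max, 200)
    return np.trapz(np.interp(grid, fpr, tpr), grid)   # area over [0, fpr_max]

rng = np.random.default_rng(10)
n = 150
y = np.repeat([0, 1], n)
s1 = np.concatenate([rng.normal(0, 1, n), rng.normal(1.2, 1, n)])  # test 1
s2 = np.concatenate([rng.normal(0, 1, n), rng.normal(0.8, 1, n)])  # test 2

diff = pauc(y, s1) - pauc(y, s2)
boot = []
for _ in range(1000):                   # paired bootstrap over subjects
    idx = rng.integers(0, 2 * n, 2 * n)
    boot.append(pauc(y[idx], s1[idx]) - pauc(y[idx], s2[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(diff, (lo, hi))                   # point estimate and 95% interval
```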
