Similar Articles
 20 similar articles found.
1.
Heavily right-censored time to event, or survival, data arise frequently in research areas such as medicine and industrial reliability. Recently, there have been suggestions that auxiliary outcomes which are more fully observed may be used to “enhance” or increase the efficiency of inferences for a primary survival time variable. However, efficiency gains from this approach have mostly been very small. Most of the situations considered have involved semiparametric models, so in this note we consider two very simple fully parametric models. In the first case, involving a correlated auxiliary variable that is always observed, we find that efficiency gains are small unless the response and auxiliary variable are very highly correlated and the response is heavily censored. In the second case, which involves an intermediate stage in a three-stage model of failure, the efficiency gains can be more substantial. We suggest that careful study of specific situations is needed to identify opportunities for “enhanced” inferences, but that substantial gains seem more likely when auxiliary information involves structural information about the failure process.

2.
Clinical studies aimed at identifying effective treatments to reduce the risk of disease or death often require long-term follow-up of participants in order to observe a sufficient number of events to precisely estimate the treatment effect. In such studies, observing the outcome of interest during follow-up may be difficult, and high rates of censoring may be observed, which often leads to reduced power when applying straightforward statistical methods developed for time-to-event data. Alternative methods have been proposed to take advantage of auxiliary information that may potentially improve efficiency when estimating marginal survival and improve power when testing for a treatment effect. Recently, Parast et al. (J Am Stat Assoc 109(505):384–394, 2014) proposed a landmark estimation procedure for the estimation of survival and treatment effects in a randomized clinical trial setting and demonstrated that significant gains in efficiency and power could be obtained by incorporating intermediate event information as well as baseline covariates. However, the procedure requires the assumption that the potential outcomes for each individual under treatment and control are independent of treatment group assignment, which is unlikely to hold in an observational study setting. In this paper we develop the landmark estimation procedure for use in an observational setting. In particular, we incorporate inverse probability of treatment weights (IPTW) in the landmark estimation procedure to account for selection bias on observed baseline (pretreatment) covariates. We demonstrate that consistent estimates of survival and treatment effects can be obtained by using IPTW and that there is improved efficiency by using auxiliary intermediate event and baseline information. We compare our proposed estimates to those obtained using the Kaplan–Meier estimator, the original landmark estimation procedure, and the IPTW Kaplan–Meier estimator. We illustrate our resulting reduction in bias and gains in efficiency through a simulation study and apply our procedure to an AIDS dataset to examine the effect of previous antiretroviral therapy on survival.
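
As a point of reference for the IPTW Kaplan–Meier comparator mentioned above, the sketch below shows a weighted Kaplan–Meier estimator in Python. It is a minimal illustration, not the authors' landmark procedure; the construction of the weights from an estimated propensity score is an assumption of the example.

import numpy as np

def iptw_kaplan_meier(time, event, weights, eval_times):
    """Weighted Kaplan-Meier estimate of S(t).

    time    : observed follow-up times (event or censoring)
    event   : 1 if the event was observed, 0 if censored
    weights : inverse probability of treatment weights, e.g. 1/e(x) for
              treated subjects and 1/(1 - e(x)) for controls, where e(x)
              is an estimated propensity score
    """
    time, event, weights = map(np.asarray, (time, event, weights))
    uniq = np.unique(time[event == 1])       # distinct event times, sorted
    surv, step = 1.0, {}
    for t in uniq:
        at_risk = weights[time >= t].sum()               # weighted risk set
        d = weights[(time == t) & (event == 1)].sum()    # weighted events
        if at_risk > 0:
            surv *= 1.0 - d / at_risk
        step[t] = surv
    # evaluate the resulting step function at the requested times
    out = np.ones(len(eval_times))
    for i, s in enumerate(eval_times):
        past = [v for t, v in step.items() if t <= s]
        out[i] = past[-1] if past else 1.0
    return out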

3.
Numerical methods are needed to obtain maximum-likelihood estimates (MLEs) in many problems. Computation time can be an issue for some likelihoods even with modern computing power. We consider one such problem where the assumed model is a random-clumped multinomial distribution. We compute MLEs for this model in parallel using the Toolkit for Advanced Optimization software library. The computations are performed on a distributed-memory cluster with a low-latency interconnect. We demonstrate that for larger problems, scaling the number of processes improves wall-clock time significantly. An illustrative example shows how parallel MLE computation can be useful in a large data analysis. Our experience with a direct numerical approach indicates that more substantial gains may be obtained by making use of the specific structure of the random-clumped model.
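
The parallelization idea, splitting a log-likelihood sum over worker processes, can be sketched in a few lines of Python. This is only illustrative: it uses a simple Poisson log-likelihood as a stand-in for the random-clumped multinomial and multiprocessing in place of the MPI-based Toolkit for Advanced Optimization used in the paper.

import numpy as np
from multiprocessing import Pool
from scipy.special import gammaln

def chunk_negloglik(args):
    """Negative log-likelihood contribution of one data chunk
    (a Poisson model stands in for the random-clumped multinomial)."""
    x, lam = args
    return np.sum(lam - x * np.log(lam) + gammaln(x + 1))

def negloglik_parallel(x, lam, n_workers=4):
    """Split the data into chunks and sum the chunk contributions in parallel."""
    chunks = np.array_split(x, n_workers)
    with Pool(n_workers) as pool:
        parts = pool.map(chunk_negloglik, [(c, lam) for c in chunks])
    return sum(parts)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.poisson(3.0, size=1_000_000)
    print(negloglik_parallel(data, lam=3.0))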

4.
We consider the bootstrap method for the covariate-augmented Dickey–Fuller (CADF) unit root test suggested in Hansen (1995), which uses related variables to improve the power of univariate unit root tests. It is shown that there are substantial power gains from including correlated covariates. The limit distribution of the CADF test, however, depends on the nuisance parameter that represents the correlation between the equation error and the covariates. Hence, inference based directly on the CADF test is not possible. To provide a valid inferential basis for the CADF test, we propose to use the parametric bootstrap procedure to obtain critical values, and establish the asymptotic validity of the bootstrap CADF test. Simulations show that the bootstrap CADF test significantly improves the asymptotic and finite-sample size performance of the CADF test, especially when the covariates are highly correlated with the error. Indeed, the bootstrap CADF test offers drastic power gains over the conventional unit root tests. Our testing procedures are applied to the extended Nelson and Plosser data set.
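
The mechanics of bootstrapping critical values for a unit-root test with a stationary covariate can be illustrated as follows. This is a deliberately stripped-down sketch (one covariate, no lag augmentation, Gaussian redrawn errors, covariate held fixed); the actual CADF regression and the bootstrap design established in the article are richer.

import numpy as np

def cadf_tstat(y, x):
    """t-statistic on y_{t-1} in the regression
    dy_t = const + rho*y_{t-1} + gamma*x_t + e_t
    (a minimal, unaugmented version of a covariate-augmented DF regression)."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag, x[1:]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

def bootstrap_critical_value(y, x, level=0.05, B=999, seed=0):
    """Parametric bootstrap: re-generate y under the unit-root null,
    keeping x fixed, and take the lower quantile of the bootstrap t-stats."""
    rng = np.random.default_rng(seed)
    dy = np.diff(y)
    # null model: dy_t = const + gamma*x_t + e_t  (rho = 0)
    X0 = np.column_stack([np.ones(len(dy)), x[1:]])
    b0, *_ = np.linalg.lstsq(X0, dy, rcond=None)
    sigma = np.std(dy - X0 @ b0, ddof=X0.shape[1])
    stats = np.empty(B)
    for b in range(B):
        e = rng.normal(0.0, sigma, size=len(dy))
        y_star = np.concatenate([[y[0]], y[0] + np.cumsum(X0 @ b0 + e)])
        stats[b] = cadf_tstat(y_star, x)
    return np.quantile(stats, level)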

5.
In an influential article, Hansen showed that covariate augmentation can lead to substantial power gains when compared to univariate tests. In this article, we ask whether this result extends to the panel data context. The answer turns out to be yes, which is perhaps not that surprising. What is surprising, however, is the extent of the power gain, which is shown to more than outweigh the well-known power loss in the presence of incidental trends. That is, the covariates have an order effect on the neighborhood around unity for which local asymptotic power is negligible.

6.
It is well known that more powerful variants of Dickey–Fuller unit root tests are available. We apply two of these modifications, on the basis of simple maximum statistics and weighted symmetric estimation, to Perron tests allowing for structural change in trend of the additive outlier type. Local alternative asymptotic distributions of the modified test statistics are derived, and it is shown that their implementation can lead to appreciable finite sample and asymptotic gains in power over the standard tests. Also, these gains are largely comparable with those from GLS-based modifications to Perron tests, though some interesting differences do arise. This is the case for both exogenously and endogenously chosen break dates. For the latter choice, the new tests are applied to the Nelson–Plosser data.

7.
In survival analysis, it is routine to test equality of two survival curves, which is often conducted using the log-rank test. Although it is optimal under the proportional hazards assumption, the log-rank test is known to have little power when the survival or hazard functions cross. To test the overall homogeneity of hazard rate functions, we propose a group of partitioned log-rank tests. By partitioning the time axis and taking the supremum of the sum of two partitioned log-rank statistics over different partitioning points, the proposed test gains enormous power for cases with crossing hazards. On the other hand, when the hazards are indeed proportional, our test still maintains high power close to that of the optimal log-rank test. Extensive simulation studies are conducted to compare the proposed test with existing methods, and three real data examples are used to illustrate the commonality of crossing hazards and the advantages of the partitioned log-rank tests.
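
The construction described above can be sketched in Python. The sketch computes per-event-time log-rank increments, splits them at a candidate partition point, sums the two segment chi-square statistics, and maximizes over partition points; the exact statistic and its reference distribution follow the paper, so this is only an illustration of the idea.

import numpy as np

def logrank_increments(time, event, group):
    """Per-event-time (O - E) and variance increments of the two-sample
    log-rank statistic; group is coded 0/1."""
    time, event, group = map(np.asarray, (time, event, group))
    times = np.unique(time[event == 1])
    oe, var = np.zeros(len(times)), np.zeros(len(times))
    for i, t in enumerate(times):
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        dead = (time == t) & (event == 1)
        d, d1 = dead.sum(), (dead & (group == 1)).sum()
        oe[i] = d1 - d * n1 / n
        if n > 1:
            var[i] = d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return times, oe, var

def partitioned_logrank(time, event, group):
    """Sup over candidate partition points of the sum of the two
    segment-wise log-rank chi-square statistics."""
    times, oe, var = logrank_increments(time, event, group)
    best = 0.0
    for cut in times[1:-1]:
        left, right = times <= cut, times > cut
        chi = 0.0
        for seg in (left, right):
            if var[seg].sum() > 0:
                chi += oe[seg].sum() ** 2 / var[seg].sum()
        best = max(best, chi)
    return best  # compare to a simulated or permutation reference distribution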

8.
In some applications, clustered survival data are arranged spatially, for example by clinical center or geographical region. Incorporating spatial variation in these data not only can improve the accuracy and efficiency of the parameter estimation, but also reveals spatial patterns of survivorship useful for identifying high-risk areas. Competing risks in survival data concern a situation where there is more than one cause of failure, but only the occurrence of the first one is observable. In this paper, we considered Bayesian subdistribution hazard regression models with spatial random effects for clustered HIV/AIDS data. An intrinsic conditional autoregressive (ICAR) distribution was employed to model the areal spatial random effects. Comparison among competing models was performed by the deviance information criterion. We illustrated the gains of our model through application to the HIV/AIDS data and simulation studies.

Keywords: competing risks, subdistribution hazard, cumulative incidence function, spatial random effect, Markov chain Monte Carlo

9.
While analyzing 2 × 2 contingency tables, the log odds ratio for measuring the strength of association is often approximated by a normal distribution with some variance. We show that the expression of that variance needs to be modified in the presence of correlation between the two binomial distributions of the contingency table. In the present paper, we derive a correlation-adjusted variance of the limiting normal distribution of the log odds ratio. We also propose a correlation-adjusted test based on the standard odds ratio for analyzing matched-pair studies and any other study settings that induce correlated binary outcomes. We demonstrate that our proposed test outperforms the classical McNemar’s test. Simulation studies show that the gains in power are especially evident when the sample size is small and strong correlation is present. Two examples of real data sets are used to demonstrate that the proposed method may lead to conclusions significantly different from those reached using McNemar’s test.
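
For context, the quantities being adjusted are the classical Wald test for the log odds ratio and McNemar's test, sketched below in Python. The correlation-adjusted variance itself is derived in the paper and is not reproduced here; this sketch shows only the standard independence-based versions.

import numpy as np
from scipy.stats import norm, chi2

def log_odds_ratio_test(a, b, c, d):
    """Wald test for the log odds ratio in a 2x2 table [[a, b], [c, d]],
    using the classical variance 1/a + 1/b + 1/c + 1/d, which assumes
    independent binomial margins (the quantity the paper adjusts when
    the two binomials are correlated)."""
    log_or = np.log((a * d) / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = log_or / se
    return log_or, se, 2 * norm.sf(abs(z))

def mcnemar_test(b, c):
    """Classical McNemar chi-square test for a matched-pair table with
    discordant counts b and c."""
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)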

10.
In this article, we use cumulative residual Kullback–Leibler information (CRKL) and cumulative Kullback–Leibler information (CKL) to construct two goodness-of-fit test statistics for testing exponentiality with progressively Type-II censored data. The power of the proposed tests is compared with that of the goodness-of-fit test for exponentiality introduced by Balakrishnan et al. (2007). We show that when the hazard function of the alternative is monotone decreasing, the test based on CRKL has higher power, and when the hazard function of the alternative is non-monotone, the test based on CKL has higher power. When the hazard function is monotone increasing, however, the power difference between the test based on CKL and their test is not substantial. The use of the proposed tests is shown in an illustrative example.

11.
A standard two-arm randomised controlled trial usually compares an intervention to a control treatment with equal numbers of patients randomised to each treatment arm, and only data from within the current trial are used to assess the treatment effect. Historical data are used when designing new trials and have recently been considered for use in the analysis when the required number of patients under a standard trial design cannot be achieved. Incorporating historical control data could lead to more efficient trials, reducing the number of controls required in the current study when the historical and current control data agree. However, when the data are inconsistent, there is potential for biased treatment effect estimates, inflated type I error and reduced power. We introduce two novel approaches for binary data which discount historical data based on the agreement with the current trial controls: an equivalence approach and an approach based on tail area probabilities. An adaptive design is used where the allocation ratio is adapted at the interim analysis, randomising fewer patients to control when there is agreement. The historical data are down-weighted in the analysis using the power prior approach with a fixed power. We compare operating characteristics of the proposed design to historical data methods in the literature: the modified power prior, the commensurate prior, and the robust mixture prior. The equivalence probability weight approach is intuitive and its operating characteristics can be calculated exactly. Furthermore, the equivalence bounds can be chosen to control the maximum possible inflation in type I error.
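
The fixed-power power prior mentioned above has a simple closed form for binary data under a conjugate Beta prior, sketched below. The particular rules the paper uses to choose the weight (the equivalence and tail-area approaches) are not reproduced, and the numbers in the usage lines are made up for illustration.

from scipy.stats import beta

def power_prior_posterior(x_c, n_c, x_h, n_h, a0, a=1.0, b=1.0):
    """Posterior for the control response rate when the historical control
    likelihood is raised to a fixed power a0 in [0, 1]
    (a0 = 0 ignores the historical data, a0 = 1 pools it fully).

    x_c, n_c : current-trial control responders / sample size
    x_h, n_h : historical control responders / sample size
    a, b     : Beta(a, b) initial prior
    """
    post_a = a + x_c + a0 * x_h
    post_b = b + (n_c - x_c) + a0 * (n_h - x_h)
    return beta(post_a, post_b)

# e.g. historical controls down-weighted to 30% of their nominal information
post = power_prior_posterior(x_c=12, n_c=40, x_h=30, n_h=100, a0=0.3)
print(post.mean(), post.interval(0.95))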

12.
By developing a bargaining-theoretic model, this paper shows that the party with the lower cost of making concessions obtains a smaller share of the gains from negotiation. Because of soft budget constraints and relatively severe principal–agent problems, concessions made by state-owned enterprises impose no corresponding loss on their agents, and state-owned enterprises accordingly have weaker bargaining power. Using 2006 customs import and export transaction data and two-tier stochastic frontier analysis (two-tier SFA) to measure negotiation reservation values, the paper estimates the international bargaining power of Chinese state-owned enterprises. The results show that: (1) the bargaining power of state-owned enterprises is lower than that of private and foreign-funded enterprises; (2) the bargaining power of state-owned enterprises is also lower than that of their import and export trading partners, with import prices 3.69% above fair prices and export prices 6.17% below fair prices. Only by continuing to advance market-oriented reform can China obtain fair trade gains in international markets.

13.
What population does the sample represent? The answer to this question is of crucial importance when estimating a survivor function in duration studies. As is well known, in a stationary population, survival data obtained from a cross-sectional sample taken from the population at time $t_0$ represent not the target density $f(t)$ but its length-biased version, proportional to $t f(t)$ for $t > 0$. The problem of estimating the survivor function from such length-biased samples becomes more complex, and interesting, in the presence of competing risks and censoring. This paper lays out a sampling scheme related to a mixed Poisson process and develops nonparametric estimators of the survivor function of the target population, assuming that the two independent competing risks have proportional hazards. Two cases are considered: with and without independent censoring before length-biased sampling. In each case, the weak convergence of the process generated by the proposed estimator is proved. A well-known study of the duration in power for political leaders is used to illustrate our results. Finally, a simulation study is carried out in order to assess the finite-sample behaviour of our estimators.
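
The $t f(t)$ length bias can be checked with a short simulation: drawing durations with probability proportional to their length shifts the mean from $E[T]$ to $E[T^2]/E[T]$. The Gamma example below illustrates that fact only; it is not the paper's estimator.

import numpy as np

rng = np.random.default_rng(1)

# target duration distribution: Gamma(shape=2, scale=1), so E[T] = 2
t = rng.gamma(shape=2.0, scale=1.0, size=200_000)

# length-biased draw: select durations with probability proportional to t
p = t / t.sum()
lb = rng.choice(t, size=50_000, replace=True, p=p)

print(t.mean())                    # close to E[T] = 2
print(lb.mean())                   # close to E[T^2]/E[T] = 3 for this Gamma
print((t ** 2).mean() / t.mean())  # empirical length-biased mean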

14.
We propose 'Dunnett-type' test procedures to test for simple tree order restrictions on the means of p independent normal populations. The new tests are based on the estimation procedures that were introduced by Hwang and Peddada and later by Dunbar, Conaway and Peddada. The procedures proposed are also extended to test for 'two-sided' simple tree order restrictions. For non-normal data, nonparametric versions based on ranked data are also suggested. Using computer simulations, we compare the proposed test procedures with some existing test procedures in terms of size and power. Our simulation study suggests that the procedures compete well with the existing procedures for both one-sided and two-sided simple tree alternatives. In some instances, especially in the case of two-sided alternatives or for non-normally distributed data, the gains in power due to the procedures proposed can be substantial.

15.
Spectral domain tests for time series linearity typically suffer from a lack of power compared to time domain tests. We present two tests for Gaussianity and linearity of a stationary time series. The tests are two-stage procedures applying goodness-of-fit techniques to the estimated normalized bispectrum. We illustrate that the performance of the tests is competitive with time domain tests. The new tests typically outperform Hinich's (1982) bispectrum-based test, especially when the length of the time series is not large.

16.
In many parametric problems the use of order restrictions among the parameters can lead to improved precision. Our interest is in the study of several multinomial populations under the stochastic order restriction (SOR) for univariate situations. We use Bayesian methods to show that the SOR can lead to larger gains in precision than the method without the SOR when the SOR is reasonable. Unlike frequentist order-restricted inference, our methodology permits analysis even when there is uncertainty about the SOR. Our method is sampling based, and we use simple and efficient rejection sampling. The Bayes factor in favor of the SOR is computed in a simple manner, and samples from the requisite posterior distributions are easily obtained. We use real data to illustrate the procedure, and we show that there are likely to be larger gains in precision under the SOR.
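
A minimal version of the sampling scheme described above, for two multinomial populations with independent Dirichlet posteriors, might look like the sketch below. The constraint check, the default Dirichlet(1, ..., 1) prior, and the Monte Carlo Bayes factor (posterior over prior probability of the constraint) are illustrative assumptions rather than the paper's exact implementation.

import numpy as np

def sor_holds(p1, p2):
    """Stochastic order restriction: population 1 is stochastically smaller
    than population 2, i.e. the cdf of population 1 dominates that of
    population 2 at every category."""
    return np.all(np.cumsum(p1) >= np.cumsum(p2) - 1e-12)

def sor_analysis(x1, x2, alpha=None, draws=100_000, seed=0):
    """Rejection sampling from independent Dirichlet posteriors of two
    multinomial populations, keeping draws that satisfy the SOR, plus a
    Monte Carlo Bayes factor in favour of the SOR (posterior probability
    of the constraint divided by its prior probability)."""
    rng = np.random.default_rng(seed)
    x1, x2 = np.asarray(x1), np.asarray(x2)
    alpha = np.ones_like(x1, dtype=float) if alpha is None else alpha
    post1 = rng.dirichlet(alpha + x1, draws)
    post2 = rng.dirichlet(alpha + x2, draws)
    prior1 = rng.dirichlet(alpha, draws)
    prior2 = rng.dirichlet(alpha, draws)
    keep = np.array([sor_holds(a, b) for a, b in zip(post1, post2)])
    prior_keep = np.array([sor_holds(a, b) for a, b in zip(prior1, prior2)])
    bayes_factor = keep.mean() / prior_keep.mean()
    return post1[keep], post2[keep], bayes_factor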

17.
The Hosmer–Lemeshow test is a widely used method for evaluating the goodness of fit of logistic regression models, but its power is strongly influenced by the sample size, like that of other chi-square tests. Paul, Pennell, and Lemeshow (2013) considered using a large number of groups for large data sets to standardize the power, but simulations show that their method performs poorly for some models, and it does not work when the sample size is larger than 25,000. In the present paper, we propose a modified Hosmer–Lemeshow test that is based on estimation and standardization of the distribution parameter of the Hosmer–Lemeshow statistic. We provide a mathematical derivation for obtaining the critical value and power of our test. Through simulations, we show that our method satisfactorily standardizes the power of the Hosmer–Lemeshow test. It is especially recommended for sufficiently large data sets, as the power is then rather stable. A bank marketing data set is also analyzed for comparison with existing methods.
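
For reference, the classical statistic that the paper modifies is computed as below. The grouping into equal-size bins of predicted risk and the g - 2 degrees of freedom are the textbook defaults; the standardization proposed in the paper is not shown.

import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, g=10):
    """Classical Hosmer-Lemeshow statistic: group observations into g bins
    by predicted probability p and compare observed and expected event
    counts in each bin; y is the 0/1 outcome."""
    y, p = np.asarray(y), np.asarray(p)
    order = np.argsort(p)
    groups = np.array_split(order, g)
    stat = 0.0
    for idx in groups:
        obs = y[idx].sum()         # observed events in the bin
        exp = p[idx].sum()         # expected events in the bin
        n = len(idx)
        pbar = exp / n
        stat += (obs - exp) ** 2 / (n * pbar * (1 - pbar))
    return stat, chi2.sf(stat, df=g - 2)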

18.
It is known that the analysis of short panel time series data is very important in many practical problems. This paper calculates the exact moments up to order 4 under the null hypothesis of no serial correlation when there are many independent replications of size 3. We further calculate the tail probabilities under the null hypothesis using the Edgeworth approximation for $n = 3, 4$ and $5$, where the structure of the probability density function of the test statistic is in essence different. Finally, we compare the three types of tail probabilities, namely the Edgeworth approximation, the normal approximation and the exact probabilities, through a large-scale simulation study.

19.
Cross-classified data are often obtained in controlled experimental situations and in epidemiologic studies. As an example of the latter, occupational health studies sometimes require personal exposure measurements on a random sample of workers from one or more job groups, in one or more plant locations, on several different sampling dates. Because the marginal distributions of exposure data from such studies are generally right-skewed and well approximated as lognormal, researchers in this area often consider the use of ANOVA models after a logarithmic transformation. While it is then of interest to estimate original-scale population parameters (e.g., the overall mean and variance), standard candidates such as maximum likelihood estimators (MLEs) can be unstable and highly biased. Uniformly minimum variance unbiased (UMVU) estimators offer a viable alternative, and are adaptable to sampling schemes that are typical of experimental or epidemiologic studies. In this paper, we provide UMVU estimators for the mean and variance under two random effects ANOVA models for log-transformed data. We illustrate substantial mean squared error gains relative to the MLE when estimating the mean under a one-way classification. We illustrate that the results can readily be extended to encompass a useful class of purely random effects models, provided that the study data are balanced.

20.
One of the most well-known facts about unit root testing in time series is that the Dickey–Fuller (DF) test based on ordinary least squares (OLS) demeaned data suffers from low power, and that the use of generalized least squares (GLS) demeaning can lead to substantial power gains. Of course, this development has not gone unnoticed in the panel unit root literature. However, while the potential of using GLS demeaning is widely recognized, oddly enough, there are still no theoretical results available to facilitate a formal analysis of such demeaning in the panel data context. The present article can be seen as a reaction to this. The purpose is to evaluate the effect of GLS demeaning when used in conjunction with the pooled OLS t-test for a unit root, resulting in a panel analog of the time series DF–GLS test. A key finding is that the success of GLS depends critically on the order in which the dependent variable is demeaned and first-differenced. If the variable is demeaned prior to taking first differences, power is maximized by using GLS demeaning, whereas if the differencing is done first, then OLS demeaning is preferred. Furthermore, even if the former demeaning approach is used, such that GLS is preferred, the asymptotic distribution of the resulting test is independent of the tuning parameters that characterize the local alternative under which the demeaning is performed. Hence, the demeaning can just as well be performed under the unit root null hypothesis. In this sense, GLS demeaning under the local alternative is redundant.
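
For readers unfamiliar with the time-series building block, the sketch below shows standard DF–GLS-style local-to-unity demeaning (intercept-only case, with the conventional choice cbar = -7) followed by a plain DF regression on the demeaned series. How demeaning and first-differencing are ordered in the pooled panel test, which is the article's subject, is not addressed by this sketch.

import numpy as np

def gls_demean(y, cbar=-7.0):
    """GLS (local-to-unity) demeaning as in the univariate DF-GLS test with
    an intercept only: quasi-difference both the data and the constant at
    alpha = 1 + cbar/T, regress one on the other, and subtract the fitted mean."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    alpha = 1.0 + cbar / T
    z = np.concatenate([[y[0]], y[1:] - alpha * y[:-1]])
    d = np.concatenate([[1.0], np.full(T - 1, 1.0 - alpha)])
    beta = (d @ z) / (d @ d)
    return y - beta

def df_tstat(y):
    """Dickey-Fuller t-statistic (no deterministics) on a demeaned series:
    regress dy_t on y_{t-1} and return the t-ratio of the slope."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    s2 = resid @ resid / (len(dy) - 1)
    return rho / np.sqrt(s2 / (ylag @ ylag))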
