Similar Articles
20 similar articles found (search time: 62 ms)
1.
Ghosh and Lahiri (1987a, b) considered simultaneous estimation of several strata means and variances, where each stratum contains a finite number of elements, under the assumption that the posterior expectation of any stratum mean is a linear function of the sample observations, the so-called “posterior linearity” property. In this paper we extend their result by retaining the “posterior linearity” property of each stratum mean while allowing a superpopulation model whose mean and variance-covariance structure change from stratum to stratum. The performance of the proposed empirical Bayes estimators is found to be satisfactory both in terms of “asymptotic optimality” (Robbins (1955)) and “relative savings loss” (Efron and Morris (1973)).

2.
The power of randomized controlled clinical trials to demonstrate the efficacy of a drug compared with a control group depends not just on how efficacious the drug is, but also on the variation in patients' outcomes. Adjusting for prognostic covariates during trial analysis can reduce this variation. For this reason, the primary statistical analysis of a clinical trial is often based on regression models that, besides terms for treatment and some further terms (e.g., stratification factors used in the randomization scheme of the trial), also include a baseline (pre-treatment) assessment of the primary outcome. We suggest including a “super-covariate”—that is, a patient-specific prediction of the control group outcome—as a further covariate (but not as an offset). We train a prognostic model, or an ensemble of such models, on the individual patient (or aggregate) data of other studies in similar patients, but not on the new trial under analysis. This has the potential to use historical data to increase the power of clinical trials and avoids the concern of type I error inflation raised by Bayesian approaches; in contrast to them, its benefit grows with sample size. It is important for the prognostic models behind “super-covariates” to generalize well across different patient populations, so that they reduce unexplained variability similarly whether or not the trials used to develop the model are identical to the new trial. In an example in neovascular age-related macular degeneration, we saw efficiency gains from the use of a “super-covariate”.
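The abstract above proposes adding a patient-specific prediction of the control-group outcome as an extra regression covariate (not an offset) to reduce residual variance. The following is a minimal toy sketch of that idea, not the authors' implementation: the simulation setup, the `ols` helper, and all names are hypothetical assumptions.

```python
import random

def ols(X, y):
    """Least squares via normal equations (Gaussian elimination, no pivoting)."""
    k, n = len(X[0]), len(X)
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for p in range(k):                       # forward elimination
        for q in range(p + 1, k):
            f = A[q][p] / A[p][p]
            for r in range(p, k):
                A[q][r] -= f * A[p][r]
            b[q] -= f * b[p]
    beta = [0.0] * k
    for p in reversed(range(k)):             # back substitution
        beta[p] = (b[p] - sum(A[p][r] * beta[r] for r in range(p + 1, k))) / A[p][p]
    return beta

random.seed(1)
n, true_effect = 400, 1.0
prognosis = [random.gauss(0, 2) for _ in range(n)]        # latent patient risk
treat = [i % 2 for i in range(n)]                         # 1:1 allocation
outcome = [pr + true_effect * t + random.gauss(0, 1)
           for pr, t in zip(prognosis, treat)]
# "super-covariate": a historical model's (imperfect) prediction of prognosis
super_cov = [pr + random.gauss(0, 0.3) for pr in prognosis]

b_unadj = ols([[1.0, float(t)] for t in treat], outcome)
b_adj = ols([[1.0, float(t), s] for t, s in zip(treat, super_cov)], outcome)
print(b_unadj[1], b_adj[1])                               # treatment-effect estimates
```

Because the super-covariate explains most of the outcome variance, the adjusted treatment-effect estimate is far more precise than the unadjusted difference in means.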

3.
In situations where individuals are screened for an infectious disease or other binary characteristic and where resources for testing are limited, group testing can offer substantial benefits. Group testing, where subjects are tested in groups (pools) initially, has been successfully applied to problems in blood bank screening, public health, drug discovery, genetics, and many other areas. In these applications, often the goal is to identify each individual as positive or negative using initial group tests and subsequent retests of individuals within positive groups. Many group testing identification procedures have been proposed; however, the vast majority of them fail to incorporate heterogeneity among the individuals being screened. In this paper, we present a new approach to identify positive individuals when covariate information is available on each. This covariate information is used to structure how retesting is implemented within positive groups; therefore, we call this new approach "informative retesting." We derive closed-form expressions and implementation algorithms for the probability mass functions for the number of tests needed to decode positive groups. These informative retesting procedures are illustrated through a number of examples and are applied to chlamydia and gonorrhea testing in Nebraska for the Infertility Prevention Project. Overall, our work shows compelling evidence that informative retesting can dramatically decrease the number of tests while providing accuracy similar to established non-informative retesting procedures.
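The "informative retesting" idea above uses covariate information to order retests within a positive pool. A minimal sketch, assuming a Sterrett-style retesting scheme in which individuals are retested in decreasing order of estimated risk (the paper's actual procedures may differ):

```python
import random

def sterrett_tests(statuses):
    """Tests used to decode one pool with Sterrett-style retesting.

    `statuses` lists true infection indicators in retesting order;
    informative retesting puts the highest-risk subjects first.
    """
    tests = 1                                  # initial pool test
    remaining = list(statuses)
    while any(remaining):
        for i, s in enumerate(remaining):      # retest one by one
            tests += 1
            if s:                              # first positive found
                remaining = remaining[i + 1:]
                if remaining:
                    tests += 1                 # pool test on the remainder
                    if not any(remaining):
                        return tests
                break
    return tests

random.seed(3)
risks = [0.40, 0.15, 0.05, 0.02, 0.01]         # heterogeneous per-subject risks
informative = randomized = 0
for _ in range(2000):
    status = [random.random() < p for p in risks]   # decreasing-risk order
    informative += sterrett_tests(status)
    shuffled = status[:]
    random.shuffle(shuffled)                   # non-informative order
    randomized += sterrett_tests(shuffled)
print(informative / 2000, randomized / 2000)   # mean tests per pool
```

Ordering by risk finds the positive individual sooner on average, so the informative order uses fewer tests than a random order on the same pools.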

4.
This paper considers the stratified proportional hazards model with a focus on the assessment of stratum effects. The assessment of such effects is often of interest, for example, in clinical trials. In this case, two relevant tests are the test of stratum interaction with covariates and the test of stratum interaction with baseline hazard functions. For the test of stratum interaction with covariates, one can use the partial likelihood method (Kalbfleisch and Prentice, 1980; Lin, 1994). For the test of stratum interaction with baseline hazard functions, however, there seems to be no formal test available. We consider this problem and propose a class of nonparametric tests. The asymptotic distributions of the tests are derived using martingale theory. The proposed tests can also be used for survival comparisons that need to be adjusted for covariate effects. The method is illustrated with data from a lung cancer clinical trial.

5.
With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate‐dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate‐dependent censoring. We consider a covariate‐adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate‐adjusted weight estimator is approximately unbiased when the censoring time depends on the covariates, and the covariate‐adjusted weight approach works well for the variance estimator as well. We illustrate our methods with bone marrow transplant data from the Center for International Blood and Marrow Transplant Research. Here, cancer relapse and death in complete remission are two competing risks.

6.
Semiparametric maximum likelihood estimators have recently been proposed for a class of two‐phase, outcome‐dependent sampling models. All of them were “restricted” maximum likelihood estimators, in the sense that the maximization is carried out only over distributions concentrated on the observed values of the covariate vectors. In this paper, the authors give conditions for consistency of these restricted maximum likelihood estimators. They also consider the corresponding unrestricted maximization problems, in which the “absolute” maximum likelihood estimators may then have support on additional points in the covariate space. Their main consistency result also covers these unrestricted maximum likelihood estimators, when they exist for all sample sizes.

7.

Continuous-time multi-state models are commonly used to study diseases with multiple stages. Potential risk factors associated with the disease are added to the transition intensities of the model as covariates, but missing covariate measurements arise frequently in practice. We propose a likelihood-based method that deals efficiently with a missing covariate in these models. Our simulation study showed that the method performs well for both “missing completely at random” and “missing at random” mechanisms. We also applied our method to a real dataset, the Einstein Aging Study.

8.

One main challenge for statistical prediction with data from multiple sources is that not all the associated covariate data are available for many sampled subjects. Consequently, we need new statistical methodology to handle this type of “fragmentary data”, which has become increasingly common in recent years. In this article, we propose a novel method based on frequentist model averaging that fits candidate models using all available covariate data. The weights in model averaging are selected by delete-one cross-validation based on the data from complete cases. The optimality of the selected weights is rigorously proved under some conditions. The finite sample performance of the proposed method is confirmed by simulation studies. An example of personal income prediction based on real data from a leading e-community of wealth management in China is also presented for illustration.

9.
A major objective in many clinical trials is to compare several competing treatments in a randomized experiment. In such studies, it is often necessary to adjust for some other important factor that affects the event rates in the treatment groups. When this factor is discrete, one usual approach uses a stratified version of the logrank test. In this article, we consider the problem that arises when the factor giving rise to the strata is missing at random for some of the study subjects. We propose a modified version of the stratified logrank test, in which the unobserved stratum indicators are replaced by an estimate of their conditional expectation given available auxiliary covariate measurements. The null asymptotic distribution of the proposed test statistic is investigated. Simulation experiments are also conducted to examine the finite-sample behavior of this test under both null and alternative hypotheses. Simulations indicate that the proposed test performs well, even under some moderate deviations from the at-random missingness assumption.

10.
It is shown that if a binary regression function is increasing then retrospective sampling induces a stochastic ordering of the covariate distributions among the responders, which we call cases, and the non-responders, which we call controls. We also show that if the covariate distributions are stochastically ordered then the regression function must be increasing. This means that testing whether the regression function is monotone is equivalent to testing whether the covariate distributions are stochastically ordered. Capitalizing on these new probabilistic observations, we proceed to develop two new non-parametric tests for stochastic order. The new tests are based on either the maximally selected, or integrated, chi-bar statistic of order one. The tests are easy to compute and interpret, and their large-sample distributions are easily found. Numerical comparisons show that they compare favorably with existing methods in both small and large samples. We emphasize that the new tests are applicable to any testing problem involving two stochastically ordered distributions.

11.
Recurrent event data are commonly encountered in longitudinal studies when events occur repeatedly over time for each study subject. An accelerated failure time (AFT) model on the sojourn time between recurrent events is considered in this article. This model assumes that the covariate effect and the subject-specific frailty are additive on the logarithm of sojourn time, and that the covariate effect remains the same over distinct episodes, while the distributions of the frailty and the random error in the model are unspecified. With the ordinal nature of recurrent events, two scale transformations of the sojourn times are derived to construct semiparametric methods of log-rank type for estimating the marginal covariate effects in the model. The proposed estimation approaches and inference procedures can also be extended to bivariate events, which alternate over time. Examples and comparisons are presented to illustrate the performance of the proposed methods.

12.
When the subjects in a study possess different demographic and disease characteristics and are exposed to more than one type of failure, a practical problem is to assess the covariate effects on each type of failure as well as on all-cause failure. The most widely used method is to employ Cox models on each cause-specific hazard and the all-cause hazard. It has been pointed out that this method causes the problem of internal inconsistency. To solve such a problem, additive hazards models have been advocated. In this paper, we model each cause-specific hazard with the additive hazards model that includes both constant and time-varying covariate effects. We illustrate that the covariate effect on all-cause failure can be estimated by the sum of the effects on all competing risks. Using data from a longitudinal study on breast cancer patients, we show that the proposed method gives a simple interpretation of the final results, when the primary covariate effect is constant in the additive manner on each cause-specific hazard. Based on the given additive models on the cause-specific hazards, we derive the inferences for the adjusted survival and cumulative incidence functions.

13.
Shi, Y., Laud, P., & Neuner, J. (2021). Lifetime Data Analysis, 27(1), 156–176.

In this paper, we first propose a dependent Dirichlet process (DDP) model using a mixture of Weibull models with each mixture component resembling a Cox model for survival data. We then build a Dirichlet process mixture model for competing risks data without regression covariates. Next we extend this model to a DDP model for competing risks regression data by using a multiplicative covariate effect on subdistribution hazards in the mixture components. Though built on proportional hazards (or subdistribution hazards) models, the proposed nonparametric Bayesian regression models do not require the assumption of constant hazard (or subdistribution hazard) ratio. An external time-dependent covariate is also considered in the survival model. After describing the model, we discuss how both cause-specific and subdistribution hazard ratios can be estimated from the same nonparametric Bayesian model for competing risks regression. For use with the regression models proposed, we introduce an omnibus prior that is suitable when little external information is available about covariate effects. Finally we compare the models’ performance with existing methods through simulations. We also illustrate the proposed competing risks regression model with data from a breast cancer study. An R package “DPWeibull” implementing all of the proposed methods is available at CRAN.


14.
Medical and epidemiological studies often involve groups of subjects associated with increasing levels of exposure to a risk factor. Survival of the groups is expected to follow the same order as the level of exposure. Formal tests for this trend fall into the regression framework if one knows what function of exposure to use as a covariate. When unknown, a linear function of exposure level is often used. Jonckheere-type tests for trend have generated continued interest largely because they do not require specification of a covariate. This paper shows that the Jonckheere-type test statistics are special cases of a generalized linear rank statistic with time-dependent covariates which unfortunately depend on the initial group sizes and censoring distributions. Using asymptotic relative efficiency calculations, the Jonckheere tests are compared to standard linear rank tests based on a linear covariate over a spectrum of shapes for the true trend.
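The Jonckheere-type statistic discussed above counts, for every ordered pair of exposure groups, how often an observation from the lower-exposure group falls below one from the higher-exposure group; no covariate function needs to be specified. A minimal (uncensored) version, with ties scored 1/2:

```python
from itertools import combinations

def jonckheere(groups):
    """Jonckheere-Terpstra statistic for groups ordered by exposure level.

    J sums, over every ordered pair of groups, the number of cross-group
    pairs (x, y) with x < y; ties count 1/2.
    """
    J = 0.0
    for g1, g2 in combinations(groups, 2):
        for x in g1:
            for y in g2:
                J += 1.0 if x < y else (0.5 if x == y else 0.0)
    return J

groups = [[2, 3, 5], [4, 6, 7], [8, 9, 12]]   # outcomes by increasing exposure
J = jonckheere(groups)
# Under no trend, E[J] = (N**2 - sum n_i**2) / 4 = (81 - 27) / 4 = 13.5 here,
# so J = 26 of a possible 27 indicates a strong increasing trend.
print(J)
```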

15.
This paper describes a computer program GTEST for designing group testing experiments for classifying each member of a population of items as “good” or “defective”. The outcome of a test on a group of items is either “negative” (if all items in the group are good) or “positive” (if at least one of the items is defective, but it is not known which). GTEST is based on a Bayesian approach. At each stage, it attempts to maximize (nearly) the expected reduction in the “entropy”, which is a quantitative measure of the amount of uncertainty about the state of the items. The user controls the procedure through specification of the prior probabilities of being defective, restrictions on the construction of the test group, and priorities that are assigned to the items. The nominal prior probabilities can be modified adaptively, to reduce the sensitivity of the procedure to the proportion of defectives in the population.
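The entropy criterion above can be sketched as follows: because a group test's outcome is a deterministic function of the item states, the expected reduction in entropy equals the entropy of the outcome itself, so a greedy step picks the group whose outcome is most uncertain. The priors and the exhaustive search below are illustrative assumptions, not GTEST's actual implementation.

```python
import math
from itertools import combinations

def outcome_entropy(priors):
    """Entropy (bits) of a group-test outcome under independent priors.

    The outcome is a deterministic function of the item states, so this
    equals the expected reduction in uncertainty from performing the test.
    """
    q = math.prod(1 - p for p in priors)       # P(negative): all items good
    if q in (0.0, 1.0):
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

priors = [0.02, 0.05, 0.10, 0.30]              # hypothetical defect priors
# greedy step: choose the candidate group with the most uncertain outcome
best = max((c for r in range(1, len(priors) + 1)
            for c in combinations(priors, r)),
           key=outcome_entropy)
print(best, round(outcome_entropy(best), 3))
```

Here pooling all four items drives the negative-outcome probability closest to 1/2, so that group's test is the most informative single test.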

16.
In longitudinal studies, an individual may potentially experience a series of recurrent events. The gap times, which are the times between successive recurrent events, are typically the outcome variables of interest. Various regression models have been developed in order to evaluate covariate effects on gap times based on recurrent event data. The proportional hazards model, additive hazards model, and the accelerated failure time model are all notable examples. Quantile regression is a useful alternative to the aforementioned models for survival analysis since it can provide great flexibility to assess covariate effects on the entire distribution of the gap time. In order to analyze recurrent gap time data, we must overcome the problem that the last gap time is subject to induced dependent censoring when a subject experiences more than one recurrent event. In this paper, we adopt the Buckley–James-type estimation method in order to construct a weighted estimating equation for regression coefficients under the quantile model, and develop an iterative procedure to obtain the estimates. We use extensive simulation studies to evaluate the finite-sample performance of the proposed estimator. Finally, analysis of bladder cancer data is presented as an illustration of our proposed methodology.

17.
Response adaptive randomization (RAR) methods for clinical trials are susceptible to imbalance in the distribution of influential covariates across treatment arms. This can make the interpretation of trial results difficult, because observed differences between treatment groups may be a function of the covariates and not necessarily because of the treatments themselves. We propose a method for balancing the distribution of covariate strata across treatment arms within RAR. The method uses odds ratios to modify global RAR probabilities to obtain stratum‐specific modified RAR probabilities. We provide illustrative examples and a simple simulation study to demonstrate the effectiveness of the strategy for maintaining covariate balance. The proposed method is straightforward to implement and applicable to any type of RAR method or outcome.
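The abstract above modifies global RAR probabilities by odds ratios to obtain stratum-specific probabilities. A hedged sketch of the mechanics, using an imbalance-driven odds ratio of our own choosing (the paper's exact odds-ratio construction may differ):

```python
def stratum_prob(p_global, n_a, n_b, strength=1.0):
    """Stratum-specific probability of assigning arm A.

    Multiplies the global assignment odds by an imbalance-driven odds
    ratio (> 1 when A is under-represented in this covariate stratum).
    The (n + 1) smoothing and the power `strength` are our own choices,
    not the paper's formula.
    """
    odds = (p_global / (1 - p_global)) * ((n_b + 1) / (n_a + 1)) ** strength
    return odds / (1 + odds)

# global RAR favors A (p = 0.7), but A already dominates this stratum 12:4,
# so the stratum-specific probability is pulled back toward balance
print(round(stratum_prob(0.7, 12, 4), 3))
```

With a balanced stratum the global probability is returned unchanged; when A is under-represented the modified probability exceeds the global one.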

18.
This paper is about the validity of established panel unit root tests applied to panels in which the individual time series are of different lengths, a case often encountered in practice. Most of the tests considered work well under various types of cross-correlation, both when applied to balanced and to unbalanced panels. A Monte Carlo study reveals that in unbalanced panels, procedures involving the computation of individual $p$-values for each cross-section unit (or the combination thereof) are mostly superior to those relying on a pooled Dickey–Fuller regression framework. As the former are able to consider each unit separately, they do not require cutting back the “longer” time series so as to obtain the smallest “balanced” quadrangle, which in turn means that no potentially valuable information is lost.
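A standard way to combine unit-level $p$-values of the kind the abstract above finds superior for unbalanced panels is Fisher's statistic; each series contributes at its full length, so nothing is trimmed. The $p$-values below are purely hypothetical.

```python
import math

def fisher_combine(p_values):
    """Fisher's combination: -2 * sum(log p_i) ~ chi-square(2N) under H0."""
    stat = -2.0 * sum(math.log(p) for p in p_values)
    return stat, 2 * len(p_values)

# hypothetical unit-level Dickey-Fuller p-values from series of unequal length
stat, df = fisher_combine([0.03, 0.20, 0.08, 0.45, 0.01])
print(round(stat, 2), df)   # compare with the chi-square(df) critical value
```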

19.
Various methods to control the influence of a covariate on a response variable are compared. These methods are ANOVA with or without homogeneity of variances (HOV) of errors and Kruskal–Wallis (K–W) tests on (covariate-adjusted) residuals, and analysis of covariance (ANCOVA). Covariate-adjusted residuals are obtained from the overall regression line fit to the entire data set ignoring the treatment levels or factors. It is demonstrated that the methods on covariate-adjusted residuals are only appropriate when the regression lines are parallel and covariate means are equal for all treatments. Empirical size and power performance of the methods are compared by extensive Monte Carlo simulations. We manipulated conditions such as the assumptions of normality and HOV, sample size, and clustering of the covariates. The parametric methods on residuals and ANCOVA exhibited similar size and power when error terms have symmetric distributions with variances having the same functional form for each treatment, and covariates have uniform distributions within the same interval for each treatment. In such cases, parametric tests have higher power compared to the K–W test on residuals. When error terms have asymmetric distributions or have variances that are heterogeneous with different functional forms for each treatment, the tests are liberal, with the K–W test having higher power than the others. The methods on covariate-adjusted residuals are severely affected by the clustering of the covariates relative to the treatment factors when covariate means are very different for treatments. For clustered data, the ANCOVA method exhibits the appropriate level. However, such clustering might suggest dependence between the covariates and the treatment factors, which makes ANCOVA less reliable as well.

20.
When observational data are used to compare treatment-specific survivals, regular two-sample tests, such as the log-rank test, need to be adjusted for the imbalance between treatments with respect to baseline covariate distributions. In addition, the standard assumption that survival time and censoring time are conditionally independent given the treatment, required for the regular two-sample tests, may not be realistic in observational studies. Moreover, treatment-specific hazards are often non-proportional, resulting in small power for the log-rank test. In this paper, we propose a set of adjusted weighted log-rank tests and their supremum versions by inverse probability of treatment and censoring weighting to compare treatment-specific survivals based on data from observational studies. These tests are proven to be asymptotically correct. Simulation studies show that with realistic sample sizes and censoring rates, the proposed tests have the desired Type I error probabilities and are more powerful than the adjusted log-rank test when the treatment-specific hazards differ in non-proportional ways. A real data example illustrates the practical utility of the new methods.
