Similar Literature
19 similar records found
1.
We study the design problem for the optimal classification of functional data. The goal is to select sampling time points so that functional data observed at these time points can be classified accurately. We propose optimal designs that are applicable to either dense or sparse functional data. Using linear discriminant analysis, we formulate our design objectives as explicit functions of the sampling points. We study the theoretical properties of the proposed design objectives and provide a practical implementation. The performance of the proposed design is evaluated through simulations and real data applications. The Canadian Journal of Statistics 48: 285–307; 2020 © 2019 Statistical Society of Canada
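The paper's explicit design objectives are not reproduced here. As a rough, hypothetical illustration of the underlying idea — picking sampling time points that make two classes separable under linear discriminant analysis — the sketch below greedily adds grid points that maximize the Mahalanobis distance between class means; the function name, greedy heuristic, and toy data are all assumptions for illustration.

```python
import numpy as np

def greedy_design_points(X0, X1, k):
    """Greedily select k sampling points maximizing the squared Mahalanobis
    distance between the two class means on the selected points, i.e. the
    quantity driving LDA classification accuracy. X0, X1: samples x grid."""
    p = X0.shape[1]
    selected = []
    for _ in range(k):
        best, best_val = None, -np.inf
        for j in range(p):
            if j in selected:
                continue
            idx = selected + [j]
            diff = X0[:, idx].mean(axis=0) - X1[:, idx].mean(axis=0)
            pooled = (np.cov(X0[:, idx], rowvar=False) +
                      np.cov(X1[:, idx], rowvar=False)) / 2
            pooled = np.atleast_2d(pooled)
            val = diff @ np.linalg.solve(pooled, diff)
            if val > best_val:
                best, best_val = j, val
        selected.append(best)
    return sorted(selected)

# Toy functional data on a grid of 50 time points, two classes
rng = np.random.default_rng(4)
grid = np.linspace(0, 1, 50)
X0 = np.sin(2 * np.pi * grid) + rng.normal(0, 0.3, size=(40, 50))
X1 = np.sin(2 * np.pi * grid + 0.4) + rng.normal(0, 0.3, size=(40, 50))
print("selected time indices:", greedy_design_points(X0, X1, k=3))
```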

2.
Identifying an optimal cutoff value for a continuous biomarker is often useful in medical applications. For a binary outcome, commonly used cutoff-finding criteria include Youden's index, classification accuracy, and the Euclidean distance to the upper-left corner of the ROC curve. We extend these three criteria to accommodate censored survival times subject to competing risks. We provide various definitions of the time-dependent true positive rate and false positive rate and estimate those quantities using nonparametric methods. In simulation studies, the Euclidean distance to the upper-left corner of the ROC curve shows the best overall performance.
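For the uncensored binary-outcome case that these three criteria start from, a minimal sketch might look as follows; the competing-risks extension with time-dependent true/false positive rates described in the abstract is not attempted here, and the function name and simulated data are hypothetical.

```python
import numpy as np

def cutoff_criteria(x, y):
    """Evaluate candidate cutoffs c (classify 'positive' if x >= c) for a
    binary outcome y in {0, 1}; return the cutoff chosen by each criterion."""
    cutoffs = np.unique(x)
    tpr = np.array([np.mean(x[y == 1] >= c) for c in cutoffs])  # sensitivity
    fpr = np.array([np.mean(x[y == 0] >= c) for c in cutoffs])  # 1 - specificity
    prev = np.mean(y)

    youden = tpr - fpr                              # Youden's J = sens + spec - 1
    accuracy = prev * tpr + (1 - prev) * (1 - fpr)  # classification accuracy
    distance = np.sqrt((1 - tpr) ** 2 + fpr ** 2)   # distance to (0, 1) on the ROC

    return {
        "youden": cutoffs[np.argmax(youden)],
        "accuracy": cutoffs[np.argmax(accuracy)],
        "euclidean": cutoffs[np.argmin(distance)],
    }

# Example with simulated biomarker data (higher values in cases)
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=500)
x = rng.normal(loc=y, scale=1.0)
print(cutoff_criteria(x, y))
```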

3.
A common problem in randomized controlled clinical trials is the optimal assignment of patients to treatment protocols. The traditional optimal design assumes a single criterion, although in reality there is usually more than one objective in a clinical trial. In this paper, optimal treatment allocation schemes are found for a dual-objective clinical trial with a binary response. A graphical method for finding the optimal strategy is proposed and illustrative examples are discussed.
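The paper's graphical method is not reproduced here. As a generic illustration of trading off two allocation objectives for a binary response, the sketch below combines the well-known Neyman (variance-minimizing) allocation with a Rosenberger-style "ethical" (failure-minimizing) allocation through a convex weight; the weighting scheme and the success probabilities are assumptions, not the paper's design.

```python
import numpy as np

def neyman_allocation(p1, p2):
    """Proportion to arm 1 minimizing Var(p1_hat - p2_hat) for fixed total n."""
    s1, s2 = np.sqrt(p1 * (1 - p1)), np.sqrt(p2 * (1 - p2))
    return s1 / (s1 + s2)

def ethical_allocation(p1, p2):
    """Proportion to arm 1 minimizing expected failures for a fixed variance
    of p1_hat - p2_hat (Rosenberger et al. style allocation)."""
    return np.sqrt(p1) / (np.sqrt(p1) + np.sqrt(p2))

def compromise_allocation(p1, p2, lam=0.5):
    """Simple convex-combination compromise between the two objectives."""
    return lam * neyman_allocation(p1, p2) + (1 - lam) * ethical_allocation(p1, p2)

p1, p2 = 0.7, 0.4  # hypothetical success probabilities
for lam in (0.0, 0.5, 1.0):
    print(f"lambda={lam:.1f}: proportion to arm 1 = {compromise_allocation(p1, p2, lam):.3f}")
```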

4.
In clinical trials with survival data, investigators may wish to re-estimate the sample size based on the observed effect size while the trial is ongoing. Besides the inflation of the type-I error rate due to sample size re-estimation, the method for calculating the sample size in an interim analysis should be chosen carefully because the data in each stage are mutually dependent in trials with survival data. Although the interim hazard estimate is commonly used to re-estimate the sample size, that estimate can by chance be considerably higher or lower than the hypothesized hazard. We propose an interim hazard ratio estimate that can be used to re-estimate the sample size under those circumstances. The proposed method is demonstrated through a simulation study and, as an example, an actual clinical trial. The effect of the shape parameter of the Weibull survival distribution on the sample size re-estimation is also presented.
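As a point of reference for how a hazard ratio estimate feeds into sample size re-estimation, the sketch below uses the standard Schoenfeld approximation for the number of events required by a two-sided log-rank test. The paper's proposed interim estimator and the Weibull shape effect are not reproduced, and the hazard ratios shown are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def required_events(hr, alpha=0.05, power=0.8, allocation=0.5):
    """Schoenfeld approximation: events needed to detect hazard ratio `hr`
    with a two-sided level-`alpha` log-rank test at the given power."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (allocation * (1 - allocation) * np.log(hr) ** 2)

# Planning assumption vs. interim re-estimation (hypothetical numbers)
print(round(required_events(hr=0.70)))  # events assumed at the design stage
print(round(required_events(hr=0.75)))  # events implied by an interim HR estimate
```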

5.
The Zernike polynomials arise in several applications such as optical metrology or image analysis on a circular domain. In the present paper, we determine optimal designs for regression models which are represented by expansions in terms of Zernike polynomials. We consider two estimation methods for the coefficients in these models and determine the corresponding optimal designs. The first one is the classical least squares method, and Φp-optimal designs in the sense of Kiefer [Kiefer, J., 1974, General equivalence theory for optimum designs (approximate theory). Annals of Statistics, 2, 849–879] are derived, which minimize an appropriate functional of the covariance matrix of the least squares estimator. It is demonstrated that optimal designs with respect to Kiefer's Φp-criteria (p > −∞) are essentially unique and concentrate observations on certain circles in the experimental domain. E-optimal designs have the same structure, but it is shown in several examples that these optimal designs are not necessarily uniquely determined. The second method is based on the direct estimation of the Fourier coefficients in the expansion of the expected response in terms of Zernike polynomials, and optimal designs minimizing the trace of the covariance matrix of the corresponding estimator are determined. The designs are also compared with uniform designs on a grid, which are commonly used in this context.
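To give a feel for why designs concentrated on circles can beat a uniform grid, the following sketch evaluates the D-criterion (log-determinant of the per-observation information matrix) for a low-order, un-normalized Zernike expansion under two candidate designs. The specific radii and grid spacing are illustrative choices, not the optimal designs derived in the paper.

```python
import numpy as np

def zernike_basis(rho, theta):
    """Low-order (un-normalized) Zernike terms: piston, tilts, defocus, astigmatism."""
    return np.column_stack([
        np.ones_like(rho),
        rho * np.cos(theta), rho * np.sin(theta),
        2 * rho**2 - 1,
        rho**2 * np.cos(2 * theta), rho**2 * np.sin(2 * theta),
    ])

def log_d_criterion(rho, theta):
    """Log-determinant of the per-observation information matrix X'X/n."""
    X = zernike_basis(rho, theta)
    M = X.T @ X / len(rho)
    return np.linalg.slogdet(M)[1]

# Design A: observations concentrated on two circles in the unit disk
angles = np.linspace(0, 2 * np.pi, 12, endpoint=False)
rho_ring = np.repeat([0.55, 1.0], 12)
theta_ring = np.tile(angles, 2)

# Design B: a uniform grid over the unit disk (the common default)
g = np.linspace(-1, 1, 9)
xx, yy = np.meshgrid(g, g)
inside = xx**2 + yy**2 <= 1
rho_grid = np.sqrt(xx[inside]**2 + yy[inside]**2)
theta_grid = np.arctan2(yy[inside], xx[inside])

print("ring design :", log_d_criterion(rho_ring, theta_ring))
print("uniform grid:", log_d_criterion(rho_grid, theta_grid))
```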

6.
For clinical trials with time-to-event as the primary endpoint, the clinical cutoff is often event-driven, and the log-rank test is the most commonly used statistical method for evaluating treatment effect. However, this method relies on the proportional hazards assumption, under which it attains maximal power. In certain disease areas or populations, some patients may be cured and never experience the event despite long follow-up. Event accumulation may dry up after a certain period of follow-up, and the treatment effect may appear as a combination of an improved cure rate and delayed events among uncured patients. Study power depends on both the cure rate improvement and the hazard reduction. In this paper, we illustrate these practical issues using simulation studies and explore sample size recommendations, alternative ways of defining clinical cutoffs, and efficient testing methods with the highest possible study power.
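A minimal simulation of the situation described, assuming a mixture cure model in which a cured fraction never experiences the event and uncured patients have exponential event times (the cure rates, hazards, and follow-up below are hypothetical), shows how event accumulation dries up and how the treatment effect mixes a cure-rate improvement with a hazard reduction.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_arm(n, cure_rate, hazard, followup):
    """Mixture cure model: cured patients never have the event; uncured
    patients have exponential event times. Returns (time, event) arrays."""
    cured = rng.random(n) < cure_rate
    t = rng.exponential(1 / hazard, size=n)
    t[cured] = np.inf                      # cured patients never fail
    event = t <= followup
    return np.minimum(t, followup), event

followup = 36  # months
t_ctrl, e_ctrl = simulate_arm(300, cure_rate=0.20, hazard=0.06, followup=followup)
t_trt,  e_trt  = simulate_arm(300, cure_rate=0.35, hazard=0.04, followup=followup)

# Event accumulation dries up once most uncured patients have failed
for m in (12, 24, 36):
    print(m, "months:", int(np.sum(e_ctrl & (t_ctrl <= m))), "control events,",
          int(np.sum(e_trt & (t_trt <= m))), "treatment events")
```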

7.
Randomly right-censored data often arise in industrial life testing and clinical trials. Several authors have proposed asymptotic confidence bands for the survival function when data are randomly censored on the right, all of which are based on the empirical estimator of the survival function. In this paper, families of asymptotic (1 − α)100% level confidence bands are developed from a smoothed estimate of the survival function under the general random censorship model. The new bands are compared to the empirical bands, and it is shown that for small sample sizes the smooth bands have higher coverage probability than their empirical counterparts.
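For context, the empirical baseline that the smoothed bands are compared against can be sketched as a Kaplan-Meier estimate with Greenwood pointwise limits; a simultaneous band would replace the normal quantile with an appropriate band coefficient, and the smoothing step itself is not shown. The data and function name below are illustrative.

```python
import numpy as np
from scipy.stats import norm

def km_with_greenwood(time, event, conf=0.95):
    """Kaplan-Meier estimate with Greenwood pointwise limits at each event time."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    t_ev = np.unique(time[event == 1])
    z = norm.ppf(0.5 + conf / 2)
    s, gw, out = 1.0, 0.0, []
    for t in t_ev:
        n = (time >= t).sum()
        d = ((time == t) & (event == 1)).sum()
        s *= 1 - d / n
        if n > d:
            gw += d / (n * (n - d))        # Greenwood variance accumulator
        se = s * np.sqrt(gw)
        out.append((t, s, max(s - z * se, 0.0), min(s + z * se, 1.0)))
    return out

rng = np.random.default_rng(5)
t0 = rng.exponential(10, size=80)
c = rng.uniform(0, 20, size=80)
obs, ev = np.minimum(t0, c), (t0 <= c).astype(int)
for row in km_with_greenwood(obs, ev)[:5]:
    print([round(float(v), 3) for v in row])
```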

8.
Cost assessment is an essential part of the economic evaluation of medical interventions. In many studies, costs as well as survival times are censored. Standard survival analysis techniques are often invalid for censored costs because of the induced dependent censoring problem. Since cost data are frequently highly skewed, it is desirable to estimate the median cost, which can be obtained from an estimated survival function of costs. We propose a kernel-based survival estimator for costs that is monotone, consistent, and more efficient than several existing estimators. We conduct numerical studies to examine the finite-sample performance of the proposed estimator.
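Once a survival function of costs has been estimated, the median cost is simply the smallest cost at which that curve drops to one half. The sketch below reads the median off a purely empirical (uncensored) curve for illustration; it does not implement the paper's kernel-based estimator or its handling of induced dependent censoring.

```python
import numpy as np

def median_from_survival(cost_grid, surv):
    """Median cost = smallest cost at which the estimated survival function
    of costs drops to 0.5 or below (surv assumed non-increasing)."""
    idx = np.argmax(surv <= 0.5)           # first index where S(c) <= 0.5
    return cost_grid[idx]

# Illustration with a purely empirical (uncensored) survival curve of costs
rng = np.random.default_rng(2)
costs = np.sort(rng.lognormal(mean=9.0, sigma=1.0, size=200))  # skewed costs
surv = 1.0 - np.arange(1, len(costs) + 1) / len(costs)
print("estimated median cost:", round(float(median_from_survival(costs, surv)), 2))
```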

9.
Although the statistical methods enabling efficient adaptive seamless designs are increasingly well established, it is important to continue to use the endpoints and specifications that best suit the therapy area and stage of development concerned when conducting such a trial. Approaches exist that allow adaptive designs to continue seamlessly either in a subpopulation of patients or in the whole population on the basis of data obtained from the first stage of a phase II/III design; our proposed design adds extra flexibility by also allowing the trial to continue in all patients, but with both the subgroup and the full population as co-primary populations. Further, methodology is presented which controls the Type-I error rate at less than 2.5% when the phase II and III endpoints are different but correlated time-to-event endpoints. The operating characteristics of the design are described, along with a discussion of the practical aspects in an oncology setting.

10.
We discuss the optimal allocation problem in a multi-level stress test under Type-II censoring and a Weibull (extreme value) regression model. We derive the maximum-likelihood estimators and their asymptotic variance–covariance matrix through the Fisher information. Four optimality criteria are used to discuss the optimal allocation problem. The optimal allocation of units, both exactly for small sample sizes and asymptotically for large sample sizes, is determined numerically for two- and four-stress-level situations. Conclusions and discussion are provided based on the numerical studies.

11.
In this paper we perform inference on the effect of a treatment on survival times in studies where the treatment assignment is not randomized and the assignment time is not known in advance. Two such studies are discussed: a heart transplant program and a study of Swedish unemployed workers eligible for an employment subsidy. We estimate survival functions in a treated and a control group which are made comparable through matching on observed covariates. The inference is performed by conditioning on waiting time to treatment, that is, the time between entry into the study and treatment. This can be done only when sufficient data are available. In other cases, averaging over waiting times is a possibility, although the classical interpretation of the estimated survival functions is then lost unless the hazards do not depend on waiting time. To show unbiasedness and to obtain an estimator of the variance, we build on the potential outcome framework, introduced by J. Neyman in the context of randomized experiments and adapted to observational studies by D.B. Rubin. Our approach makes no parametric or distributional assumptions; in particular, we do not assume proportionality of the hazards being compared. The small-sample performance of the estimator and of a derived test of no treatment effect is studied in a Monte Carlo study.

12.
We implement a joint model for mixed multivariate longitudinal measurements, applied to the prediction of time until lung transplant or death in idiopathic pulmonary fibrosis. Specifically, we formulate a unified Bayesian joint model for the mixed longitudinal responses and time-to-event outcomes. For the longitudinal model of continuous and binary responses, we investigate multivariate generalized linear mixed models using shared random effects. Longitudinal and time-to-event data are assumed to be independent conditional on available covariates and shared parameters. A Markov chain Monte Carlo algorithm, implemented in OpenBUGS, is used for parameter estimation. To illustrate practical considerations in choosing a final model, we fit 37 different candidate models using all possible combinations of random effects and employ a deviance information criterion to select a best-fitting model. We demonstrate the prediction of future event probabilities within a fixed time interval for patients utilizing baseline data, post-baseline longitudinal responses, and the time-to-event outcome. The performance of our joint model is also evaluated in simulation studies.

13.
The standard log-rank test has been extended by adopting various weight functions. Cancer vaccine or immunotherapy trials have shown a delayed onset of effect for the experimental therapy. This is manifested as a delayed separation of the survival curves. This work proposes new weighted log-rank tests to account for such delay. The weight function is motivated by the time-varying hazard ratio between the experimental and the control therapies. We implement a numerical evaluation of the Schoenfeld approximation (NESA) for the mean of the test statistic. The NESA enables us to assess the power and to calculate the sample size for detecting such delayed treatment effect and also for a more general specification of the non-proportional hazards in a trial. We further show a connection between our proposed test and the weighted Cox regression. Then the average hazard ratio using the same weight is obtained as an estimand of the treatment effect. Extensive simulation studies are conducted to compare the performance of the proposed tests with the standard log-rank test and to assess their robustness to model mis-specifications. Our tests outperform the G(ρ, γ) class in general and have performance close to the optimal test. We demonstrate our methods on two cancer immunotherapy trials.
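A generic weighted log-rank statistic of the kind this family of tests builds on can be sketched as below, here with Fleming-Harrington G(ρ, γ) weights: ρ = γ = 0 recovers the standard log-rank test, while γ > 0 up-weights late differences, as under a delayed effect. The paper's specific weight based on a time-varying hazard ratio and the NESA calculation are not reproduced, and the simulated delayed-effect data are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def weighted_logrank(time, event, group, rho=0.0, gamma=0.0):
    """Two-sample weighted log-rank test with Fleming-Harrington G(rho, gamma)
    weights w(t) = S(t-)^rho * (1 - S(t-))^gamma; rho = gamma = 0 is standard."""
    order = np.argsort(time)
    time, event, group = time[order], event[order], group[order]
    event_times = np.unique(time[event == 1])

    num, var, s_left = 0.0, 0.0, 1.0
    for t in event_times:
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()

        w = s_left ** rho * (1 - s_left) ** gamma      # weight uses S(t-)
        num += w * (d1 - d * n1 / n)                   # observed minus expected
        if n > 1:
            var += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        s_left *= 1 - d / n                            # Kaplan-Meier update
    z = num / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))

# Simulated delayed treatment effect: no benefit before t=6, benefit afterwards
rng = np.random.default_rng(3)
n = 200
grp = np.repeat([0, 1], n)
t0 = rng.exponential(10, size=2 * n)
late = (grp == 1) & (t0 > 6)
t0[late] = 6 + (t0[late] - 6) * 1.6        # stretched tail in the treated arm
cens = rng.uniform(5, 25, size=2 * n)
obs, ev = np.minimum(t0, cens), (t0 <= cens).astype(int)

print("standard log-rank   :", weighted_logrank(obs, ev, grp, rho=0, gamma=0))
print("late-emphasis G(0,1):", weighted_logrank(obs, ev, grp, rho=0, gamma=1))
```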

14.
Longitudinal data often contain missing observations, and it is generally difficult to justify a particular missing-data mechanism, whether random or not, because competing mechanisms can be hard to distinguish. The authors describe a likelihood-based approach to estimating both the mean response and the association parameters for longitudinal binary data with drop-outs. They specify the marginal and dependence structures as regression models which link the responses to the covariates. They illustrate their approach using a data set from the Waterloo Smoking Prevention Project. They also report the results of simulation studies carried out to assess the performance of their technique under various circumstances.

15.
In many toxicological assays, interactions between primary and secondary effects may cause a downturn in mean responses at high doses. In this situation, the typical monotonicity assumption is invalid and may be quite misleading. Prior literature addresses the analysis of response functions with a downturn, but so far as we know, this paper initiates the study of experimental design for this situation. A growth model is combined with a death model to allow for the downturn in mean responses. Several different objective functions are studied. When the number of treatments equals the number of parameters, the Fisher information is found to be independent of the model for the treatment means and of the magnitudes of the treatments. In general, A- and DA-optimal weights for estimating adjacent mean differences are found analytically for a simple model and numerically for a biologically motivated model. Results on c-optimality are also obtained for estimating the peak dose and the EC50 (the treatment with response halfway between the control and the peak response on the increasing portion of the response function). Finally, when interest lies only in the increasing portion of the response function, we propose composite D-optimal designs.
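The paper's growth-death model is not reproduced here. As a hypothetical stand-in, the sketch below takes an Emax "growth" term multiplied by an exponential "death" term, locates the peak dose numerically, and solves for the EC50 on the increasing portion; the functional form and all parameter values are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize_scalar, brentq

def mean_response(d, e0=1.0, emax=4.0, ed50=2.0, lam=0.15):
    """Hypothetical downturn model: Emax 'growth' term times exponential 'death' term."""
    return e0 + emax * d / (d + ed50) * np.exp(-lam * d)

# Peak dose: maximize the mean response numerically over a dose range
res = minimize_scalar(lambda d: -mean_response(d), bounds=(0.0, 50.0), method="bounded")
peak_dose = res.x
peak_resp = mean_response(peak_dose)

# EC50: dose on the increasing portion with response halfway between control and peak
control = mean_response(0.0)
target = control + 0.5 * (peak_resp - control)
ec50 = brentq(lambda d: mean_response(d) - target, 1e-8, peak_dose)

print(f"peak dose ~ {peak_dose:.2f}, peak response ~ {peak_resp:.2f}, EC50 ~ {ec50:.2f}")
```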

16.
17.
Multivariate failure time data are common in medical research; commonly used statistical models for such correlated failure-time data include frailty and marginal models. Both types of models most often assume proportional hazards (Cox, 1972), but the Cox model may not fit the data well. This article presents a class of linear transformation frailty models that includes, as a special case, the proportional hazards model with frailty. We then propose approximate procedures to derive the best linear unbiased estimates and predictors of the regression parameters and frailties. We apply the proposed methods to analyze results of a clinical trial of different dose levels of didanosine (ddI) among HIV-infected patients who were intolerant of zidovudine (ZDV). These methods yield estimates of treatment effects and of frailties corresponding to patient groups defined by clinical history prior to entry into the trial.

18.
Missing data in clinical trials are inevitable. We highlight the ICH guidelines and the CPMP points to consider on missing data. Specifically, we outline how missing data issues should be considered when designing, planning, and conducting studies in order to minimize their impact. We also go beyond the coverage of these two documents by providing a more detailed review of the basic concepts of missing data and frequently used terminology, giving examples of typical missing data mechanisms, and discussing technical details and the literature for several frequently used statistical methods and associated software. Finally, we provide a case study in which the principles outlined in this paper are applied to one clinical program at the protocol design, data analysis plan, and other stages of a clinical trial.

19.
In many case-control studies, it is common to use paired data when treatments are being evaluated. In this article, we propose and examine an efficient distribution-free test to compare two independent samples, where each is based on paired observations. We extend and modify the density-based empirical likelihood ratio test presented by Gurevich and Vexler [7] to formulate an appropriate parametric likelihood ratio test statistic corresponding to the hypothesis of interest and then approximate the test statistic nonparametrically. We conduct an extensive Monte Carlo study to evaluate the proposed test. The results of the simulation study demonstrate the robustness of the proposed test with respect to the values of its test parameters. Furthermore, an extensive power analysis via Monte Carlo simulations confirms that the proposed method outperforms the classical and general procedures in most cases across a wide class of alternatives. An application to a real paired data study illustrates that the proposed test can be implemented efficiently in practice.
