Similar Articles
 Found 20 similar articles (search time: 31 ms)
1.
Adaptive designs are effective mechanisms for flexibly allocating experimental resources. In clinical trials particularly, such designs allow researchers to balance short- and long-term goals. Unfortunately, fully sequential strategies require outcomes from all previous allocations prior to the next allocation. This can prolong an experiment unduly. As a result, we seek designs for models that specifically incorporate delays. We utilize a delay model in which patients arrive according to a Poisson process and their response times are exponential. We examine three designs with an eye towards minimizing patient losses: a delayed two-armed bandit rule, which is optimal for the model and objective of interest; a newly proposed hyperopic rule; and a randomized play-the-winner rule. The results show that, except when the delay rate is several orders of magnitude different from the patient arrival rate, the delayed response bandit is nearly as efficient as the immediate response bandit. The delayed hyperopic design also performs extremely well throughout the range of delays, despite the fact that the rate of delay is not one of its design parameters. The delayed randomized play-the-winner rule is far less efficient than either of the other methods.
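The randomized play-the-winner rule compared in this abstract can be sketched as a simple urn scheme. The sketch below ignores the delay mechanism the paper actually models, and the success probabilities, urn parameters, and arm names are illustrative assumptions:

```python
import random

def rpw_trial(p_success, n_patients, u=1, beta=1, seed=2024):
    """Two-armed randomized play-the-winner RPW(u, beta) urn sketch.

    Start with u balls per arm; each patient is allocated by drawing a ball
    (with replacement). A success adds beta balls for the same arm, a
    failure adds beta balls for the other arm, so allocation drifts toward
    the arm that is responding better."""
    rng = random.Random(seed)
    arms = list(p_success)
    urn = {arm: u for arm in arms}
    counts = {arm: 0 for arm in arms}
    for _ in range(n_patients):
        arm = rng.choices(arms, weights=[urn[a] for a in arms])[0]
        counts[arm] += 1
        if rng.random() < p_success[arm]:
            urn[arm] += beta                        # reinforce the winning arm
        else:
            urn[arms[1 - arms.index(arm)]] += beta  # reinforce the other arm
    return counts

# hypothetical success probabilities; the better arm tends to be favoured
counts = rpw_trial({"A": 0.7, "B": 0.4}, n_patients=200)
```

Because every response is assumed observable before the next allocation, this corresponds to the immediate-response setting against which the delayed designs are benchmarked.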

2.
Longitudinal health-related quality-of-life (QOL) data are often collected as part of clinical studies. Here two analyses of QOL data from a prospective study of breast cancer patients evaluate how physical performance is related to factors such as age, menopausal status and type of adjuvant treatment. The first analysis uses summary statistic methods. The same questions are then addressed using a multilevel model. Because of the structure of the physical performance response, regression models for the analysis of ordinal data are used. The analyses of base-line and follow-up QOL data at four time points over two years from 257 women show that reported base-line physical performance was consistently associated with later performance and that women who had received chemotherapy in the month before the QOL assessment had a greater physical performance burden. There is a slight power gain of the multilevel model over the summary statistic analysis. The multilevel model also allows relationships with time-dependent covariates to be included, highlighting treatment-related factors affecting physical performance that could not be considered within the summary statistic analysis. Checking of the multilevel model assumptions is exemplified.

3.
Sargent et al (J Clin Oncol 23: 8664–8670, 2005) concluded that 3-year disease-free survival (DFS) can be considered a valid surrogate (replacement) endpoint for 5-year overall survival (OS) in clinical trials of adjuvant chemotherapy for colorectal cancer. We address the question of whether the conclusion holds for trials involving classes of treatments other than those considered by Sargent et al. Additionally, we assess whether the 3-year cutpoint is optimal. To this aim, we investigate whether the results reported by Sargent et al. could have been used to predict treatment effects in three centrally randomized adjuvant colorectal cancer trials performed by the Japanese Foundation for Multidisciplinary Treatment for Cancer (JFMTC) (Sakamoto et al. J Clin Oncol 22:484–492, 2004). Our analysis supports the conclusion of Sargent et al. and shows that using DFS at 2 or 3 years would be the best option for the prediction of OS at 5 years.

4.
5.
In many cancer trials patients are at risk of recurrence and death after the appearance and the successful treatment of the first diagnosed tumour. In this situation competing risks models, which model several competing causes of therapy or surgery failure, are a natural framework to describe the evolution of the disease. Typically, regression models for competing risks outcomes are based on a proportional hazards model for each of the cause-specific hazard rates. An immediate practical problem is then how to deal with the abundance of regression parameters. The aim of reduced rank proportional hazards models is to reduce the number of parameters that need to be estimated while at the same time keeping the distinction between different transitions. They describe the competing risks model in fewer parameters, cope with transitions in which few events are present, and facilitate the interpretation of the estimates. We illustrate the use of this technique on 2795 patients from a breast cancer trial (EORTC 10854).

6.
The concept of minimum aberration has been extended to choose blocked fractional factorial designs (FFDs). The minimum aberration criterion ranks blocked FFDs according to their treatment and block wordlength patterns, which are often obtained by counting words in the treatment defining contrast subgroups and alias sets. When the number of factors is large, there are a huge number of words to be counted, causing some difficulties in computation. Based on coding theory, the concept of minimum moment aberration, proposed by Xu [Statist. Sinica, 13 (2003) 691–708] for unblocked FFDs, is extended to blocked FFDs. A method is then proposed for constructing minimum aberration blocked FFDs without using defining contrast subgroups and alias sets. Minimum aberration blocked FFDs for all 32 runs, 64 runs up to 32 factors, and all 81 runs are given with respect to three combined wordlength patterns.

7.
For many forms of cancer, patients will receive the initial regimen of treatments, then experience cancer progression and eventually die of the disease. Understanding the disease process in patients with cancer is essential in clinical, epidemiological and translational research. One challenge in analyzing such data is that death dependently censors cancer progression (e.g., recurrence), whereas progression does not censor death. We deal with the informative censoring by first selecting a suitable copula model through an exploratory diagnostic approach and then developing an inference procedure to simultaneously estimate the marginal survival function of cancer relapse and an association parameter in the copula model. We show that the proposed estimators possess consistency and weak convergence. We use simulation studies to evaluate the finite sample performance of the proposed method, and illustrate it through an application to data from a study of early stage breast cancer.
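The abstract selects its copula by diagnostics rather than naming one; as an illustration only, a Clayton copula (a common candidate for positively associated event times) links the two marginal survival functions like this:

```python
def clayton_joint_survival(s1, s2, theta):
    """Joint survival of (progression, death) under a Clayton copula applied
    to the marginal survival probabilities s1 and s2:
        S(t1, t2) = (s1^-theta + s2^-theta - 1)^(-1/theta),  theta > 0.
    Larger theta means stronger positive dependence between the two times."""
    return (s1 ** (-theta) + s2 ** (-theta) - 1.0) ** (-1.0 / theta)

def clayton_tau(theta):
    """Kendall's tau implied by the Clayton association parameter:
    tau = theta / (theta + 2)."""
    return theta / (theta + 2.0)
```

With theta = 2 the implied Kendall's tau is 0.5, and the joint survival at marginals (0.8, 0.6) lies between the independence value 0.48 and the upper bound 0.6, reflecting positive dependence.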

8.
Unbalanced data classification has been a long-standing issue in the field of medical vision science. We introduce support vector machines (SVM) with active learning (AL) to improve prediction of unbalanced classes in the medical imaging field. A standard SVM algorithm is combined with four different AL approaches: (1) the first uses random sampling to select the initial pool for the AL algorithm; (2) the second doubles the training instances of the rare category to reduce the unbalanced ratio before the AL algorithm; (3) the third uses a balanced pool with an equal number from each category; and (4) the fourth uses a balanced pool and implements balanced sampling throughout the AL algorithm. Grid pixel data of two scleroderma lung disease patterns, lung fibrosis (LF) and honeycomb (HC), were extracted from computed tomography images of 71 patients to produce a training set of 348 HC and 3009 LF instances and a test set of 291 HC and 2665 LF instances. In our experiments, SVM with AL using balanced sampling, compared to random sampling, increased the test sensitivity for HC by 56 percentage points (17.5% vs. 73.5%) on the original dataset and by 47 percentage points (23% vs. 70%) on the denoised dataset. SVM with AL using balanced sampling can improve the classification performance on unbalanced data.
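The balanced-pool idea (approach 3 above) amounts to drawing an equal number of labelled instances per class before the AL loop starts. A minimal sketch, with the pool size and seed as arbitrary choices:

```python
import random

def balanced_pool(labels, pool_size, seed=0):
    """Return indices for an initial AL pool with an (approximately) equal
    number of instances from each class, instead of sampling uniformly from
    the heavily unbalanced data."""
    rng = random.Random(seed)
    by_class = {}
    for idx, lab in enumerate(labels):
        by_class.setdefault(lab, []).append(idx)
    per_class = pool_size // len(by_class)
    pool = []
    for idxs in by_class.values():
        pool.extend(rng.sample(idxs, min(per_class, len(idxs))))
    return pool

# class sizes mirror the training set described above: 348 HC vs. 3009 LF
labels = ["HC"] * 348 + ["LF"] * 3009
pool = balanced_pool(labels, pool_size=40)
```

A random draw of 40 instances from these data would contain about 4 HC cases on average; the balanced pool guarantees 20, which is what drives the sensitivity gain for the rare class.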

9.
Sequential administration of immunotherapy following radiotherapy (immunoRT) has attracted much attention in cancer research. Due to its unique feature that radiotherapy upregulates the expression of a predictive biomarker for immunotherapy, novel clinical trial designs are needed for immunoRT to identify patient subgroups and the optimal dose for each subgroup. In this article, we propose a Bayesian phase I/II design for immunotherapy administered after standard-dose radiotherapy for this purpose. We construct a latent subgroup membership variable and model it as a function of the baseline and pre-post radiotherapy change in the predictive biomarker measurements. Conditional on the latent subgroup membership of each patient, we jointly model the continuous immune response and the binary efficacy outcome using plateau models, and model toxicity using the equivalent toxicity score approach to account for toxicity grades. During the trial, based on accumulating data, we continuously update model estimates and adaptively randomize patients to admissible doses. Simulation studies and an illustrative trial application show that our design has good operating characteristics in terms of identifying both patient subgroups and the optimal dose for each subgroup.

10.
A latent Markov model for detecting patterns of criminal activity
Summary.  The paper investigates the problem of determining patterns of criminal behaviour from official criminal histories, concentrating on the variety and type of offending convictions. The analysis is carried out on the basis of a multivariate latent Markov model which allows for discrete covariates affecting the initial and the transition probabilities of the latent process. We also show some simplifications which reduce the number of parameters substantially; we include a Rasch-like parameterization of the conditional distribution of the response variables given the latent process and a constraint of partial homogeneity of the latent Markov chain. For the maximum likelihood estimation of the model we outline an EM algorithm based on recursions known in the hidden Markov literature, which make the estimation feasible even when the number of time occasions is large. Through this model, we analyse the conviction histories of a cohort of offenders who were born in England and Wales in 1953. The final model identifies five latent classes and specifies common transition probabilities for males and females between 5-year age periods, but with different initial probabilities.
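The "recursions known in the hidden Markov literature" that make the EM algorithm feasible are forward-type recursions. A minimal sketch of the scaled forward recursion for a generic hidden Markov model (the parameter values below are invented for illustration):

```python
import math

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of an observation sequence under a hidden Markov model.

    pi[i]: initial probability of latent state i
    A[i][j]: transition probability from state i to state j
    B[i][o]: probability of observing symbol o in state i

    The running rescaling of alpha keeps the recursion numerically stable
    even when the number of time occasions is large."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    loglik = 0.0
    for t, o in enumerate(obs):
        if t > 0:
            alpha = [B[j][o] * sum(alpha[i] * A[i][j] for i in range(n))
                     for j in range(n)]
        c = sum(alpha)            # scaling constant; log-likelihood accumulates
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
    return loglik

ll = forward_loglik([0.6, 0.4],
                    [[0.7, 0.3], [0.4, 0.6]],
                    [[0.9, 0.1], [0.2, 0.8]],
                    [0, 1])
```

For this two-state, two-symbol example the likelihood can be checked by direct enumeration over the four latent paths, which gives 0.209.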

11.
A full likelihood method is proposed to analyse continuous longitudinal data with non-ignorable (informative) missing values and non-monotone patterns. The problem arose in a breast cancer clinical trial where repeated assessments of quality of life were collected: patients rated their coping ability during and after treatment. We allow the missingness probabilities to depend on unobserved responses, and we use a multivariate normal model for the outcomes. A first-order Markov dependence structure for the responses is a natural choice and facilitates the construction of the likelihood; estimates are obtained via the Nelder–Mead simplex algorithm. Computations are difficult and become intractable with more than three or four assessments. Applying the method to the quality-of-life data results in easily interpretable estimates, confirms the suspicion that the data are non-ignorably missing and highlights the likely bias of standard methods. Although treatment comparisons are not affected here, the methods are useful for obtaining unbiased means and estimating trends over time.

12.
Using several variables known to be related to prostate cancer, a multivariate classification method is developed to predict the onset of clinical prostate cancer. A multivariate mixed-effects model is used to describe longitudinal changes in prostate specific antigen (PSA), a free testosterone index (FTI), and body mass index (BMI) before any clinical evidence of prostate cancer. The patterns of change in these three variables are allowed to vary depending on whether the subject develops prostate cancer or not and the severity of the prostate cancer at diagnosis. An application of Bayes' theorem provides posterior probabilities that we use to predict whether an individual will develop prostate cancer and, if so, whether it is a high-risk or a low-risk cancer. The classification rule is applied sequentially one multivariate observation at a time until the subject is classified as a cancer case or until the last observation has been used. We perform the analyses using each of the three variables individually, combined together in pairs, and all three variables together in one analysis. We compare the classification results among the various analyses and a simulation study demonstrates how the sensitivity of prediction changes with respect to the number and type of variables used in the prediction process.
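The Bayes' theorem step described above reduces to normalizing prior-times-likelihood over the candidate classes. A minimal sketch; the class names, prior prevalences, and likelihood values are hypothetical, standing in for the likelihood of a subject's observed (PSA, FTI, BMI) trajectory under each class:

```python
def posterior(priors, likelihoods):
    """Posterior class probabilities via Bayes' theorem:
    p(class | data) is proportional to prior(class) * likelihood(data | class).
    In the sequential rule, a subject is classified once the posterior for a
    class crosses a chosen threshold; otherwise the next observation is used."""
    unnorm = {c: priors[c] * likelihoods[c] for c in priors}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

post = posterior({"cancer": 0.1, "no cancer": 0.9},
                 {"cancer": 0.5, "no cancer": 0.1})
```

Here a 10% prior combined with a 5:1 likelihood ratio yields a posterior cancer probability of 5/14, roughly 0.36, illustrating how weak priors temper even fairly diagnostic observations.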

13.
This study investigated the impact of spatial location on the effectiveness of population‐based breast screening in reducing breast cancer mortality compared to other detection methods among Queensland women. The analysis was based on linked population‐based datasets from BreastScreen Queensland and the Queensland Cancer Registry for the period of 1997–2008 for women aged less than 90 years at diagnosis. A Bayesian hierarchical regression modelling approach was adopted and posterior estimation was performed using Markov Chain Monte Carlo techniques. This approach accommodated sparse data resulting from rare outcomes in small geographic areas, while allowing for spatial correlation and demographic influences to be included. A relative survival model was chosen to evaluate the relative excess risk for each breast cancer related factor. Several models were fitted to examine the influence of demographic information, cancer stage, geographic information and detection method on women's relative survival. Overall, the study demonstrated that including the detection method and geographic information when assessing the relative survival of breast cancer patients helped capture unexplained and spatial variability. The study also found evidence of better survival among women with breast cancer diagnosed in a screening program than those detected otherwise, as well as lower risk for those residing in a more urban or socio‐economically advantaged region, even after adjusting for tumour stage, environmental factors and demographics. However, no evidence of dependency between method of detection and geographic location was found. This project provides a sophisticated approach to examining the benefit of a screening program while considering the influence of geographic factors.

14.
A note on using the F-measure for evaluating record linkage algorithms
Record linkage is the process of identifying and linking records about the same entities from one or more databases. Record linkage can be viewed as a classification problem where the aim is to decide whether a pair of records is a match (i.e. two records refer to the same real-world entity) or a non-match (two records refer to two different entities). Various classification techniques, including supervised, unsupervised, semi-supervised and active learning based, have been employed for record linkage. If ground truth data in the form of known true matches and non-matches are available, the quality of classified links can be evaluated. Due to the generally high class imbalance in record linkage problems, standard accuracy or the misclassification rate is not meaningful for assessing the quality of a set of linked records. Instead, precision and recall, as commonly used in information retrieval and machine learning, are used. These are often combined into the popular F-measure, which is the harmonic mean of precision and recall. We show that the F-measure can also be expressed as a weighted sum of precision and recall, with weights that depend on the linkage method being used. This reformulation reveals that the F-measure has a major conceptual weakness: the relative importance assigned to precision and recall should be an aspect of the problem and the researcher or user, but not of the particular linkage method being used. We suggest alternative measures which do not suffer from this fundamental flaw.
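The reformulation described above can be checked numerically. With TP true-positive links among s predicted links and t true matches, precision is P = TP/s, recall is R = TP/t, and the harmonic-mean F equals the weighted sum wP + (1 − w)R with w = s/(s + t), so the weight is set by how many links the method declares rather than by the problem. The counts below are illustrative:

```python
def f_harmonic(tp, n_predicted, n_true):
    """F-measure as the harmonic mean of precision and recall."""
    p = tp / n_predicted   # precision
    r = tp / n_true        # recall
    return 2 * p * r / (p + r)

def f_weighted(tp, n_predicted, n_true):
    """The same F-measure written as a weighted arithmetic mean of precision
    and recall; the weight w depends on the linkage method's output size."""
    p = tp / n_predicted
    r = tp / n_true
    w = n_predicted / (n_predicted + n_true)
    return w * p + (1 - w) * r
```

Both forms reduce algebraically to 2·TP/(s + t), e.g. with TP = 80, s = 100, t = 120 both give 160/220 ≈ 0.727.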

15.
Summary.  In survival data that are collected from phase III clinical trials on breast cancer, a patient may experience more than one event, including recurrence of the original cancer, new primary cancer and death. Radiation oncologists are often interested in comparing patterns of local or regional recurrences alone as first events to identify a subgroup of patients who need to be treated by radiation therapy after surgery. The cumulative incidence function provides estimates of the cumulative probability of locoregional recurrences in the presence of other competing events. A simple version of the Gompertz distribution is proposed to parameterize the cumulative incidence function directly. The model interpretation for the cumulative incidence function is more natural than it is with the usual cause-specific hazard parameterization. Maximum likelihood analysis is used to estimate simultaneously parametric models for cumulative incidence functions of all causes. The parametric cumulative incidence approach is applied to a data set from the National Surgical Adjuvant Breast and Bowel Project and compared with analyses that are based on parametric cause-specific hazard models and nonparametric cumulative incidence estimation.
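One way to see why a Gompertz form suits a cumulative incidence function directly: an improper Gompertz distribution plateaus below 1, matching the fact that locoregional recurrence never occurs for some patients because a competing event happens first. The exact parameterization below, and the parameter values, are an assumption for illustration rather than a quotation of the paper's model:

```python
import math

def gompertz_cif(t, alpha, beta):
    """Improper-Gompertz parameterization of a cumulative incidence function:
        F(t) = 1 - exp((alpha / beta) * (1 - exp(beta * t))).
    With alpha > 0 and beta < 0, F(t) increases from F(0) = 0 and plateaus at
    1 - exp(alpha / beta) < 1, the long-run probability of the cause of
    interest occurring first."""
    return 1.0 - math.exp((alpha / beta) * (1.0 - math.exp(beta * t)))
```

The plateau is itself a model parameter (a function of alpha and beta), which is what makes this parameterization easier to interpret than one routed through cause-specific hazards.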

16.
Mixture cure models are widely used when a proportion of patients are cured. The proportional hazards mixture cure model and the accelerated failure time mixture cure model are the most popular models in practice. Usually the expectation–maximisation (EM) algorithm is applied to both models for parameter estimation. Bootstrap methods are used for variance estimation. In this paper we propose a smooth semi‐nonparametric (SNP) approach in which maximum likelihood is applied directly to mixture cure models for parameter estimation. The variance can be estimated by the inverse of the second derivative of the SNP likelihood. A comprehensive simulation study indicates good performance of the proposed method. We investigate stage effects in breast cancer by applying the proposed method to breast cancer data from the South Carolina Cancer Registry.
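The mixture structure common to the models above is S(t) = pi + (1 − pi)·S_u(t): a cured fraction pi that never experiences the event, mixed with the survival S_u of the uncured. A minimal sketch; the exponential form for S_u and the parameter values are illustrative assumptions, not the paper's SNP specification:

```python
import math

def mixture_cure_survival(t, cure_prob, scale):
    """Population survival in a mixture cure model:
        S(t) = pi + (1 - pi) * S_u(t),
    where pi = cure_prob is the cured fraction and the uncured survival is
    taken here as exponential, S_u(t) = exp(-t / scale), for illustration.
    S(t) starts at 1 and plateaus at pi instead of decaying to 0."""
    return cure_prob + (1.0 - cure_prob) * math.exp(-t / scale)
```

The plateau at pi is the visual signature of cure in a Kaplan–Meier curve, and it is the feature that ordinary (proper) survival models cannot capture.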

17.
Abstract.  A Markov property associates a set of conditional independencies to a graph. Two alternative Markov properties are available for chain graphs (CGs), the Lauritzen–Wermuth–Frydenberg (LWF) and the Andersson–Madigan–Perlman (AMP) Markov properties, which are different in general but coincide for the subclass of CGs with no flags. Markov equivalence induces a partition of the class of CGs into equivalence classes, and every equivalence class contains a, possibly empty, subclass of CGs with no flags, itself containing a, possibly empty, subclass of directed acyclic graphs (DAGs). LWF-Markov equivalence classes of CGs can be naturally characterized by means of the so-called largest CGs, whereas a graphical characterization of equivalence classes of DAGs is provided by the essential graphs. In this paper, we show the existence of largest CGs with no flags that provide a natural characterization of equivalence classes of CGs of this kind, with respect to both the LWF- and the AMP-Markov properties. We propose a procedure for the construction of the largest CGs, the largest CGs with no flags and the essential graphs, thereby providing a unified approach to the problem. As by-products we obtain a characterization of graphs that are largest CGs with no flags and an alternative characterization of graphs which are largest CGs. Furthermore, a known characterization of the essential graphs is shown to be a special case of our more general framework. The three graphical characterizations have a common structure: they use two versions of a locally verifiable graphical rule. Moreover, in the case of DAGs, an immediate comparison of the three characterizing graphs is possible.

18.
With rapid improvements in medical treatment and health care, many datasets dealing with time to relapse or death now reveal a substantial portion of patients who are cured (i.e., who never experience the event). Extended survival models called cure rate models account for the probability of a subject being cured and can be broadly classified into the classical mixture models of Berkson and Gage (BG type) or the stochastic tumor models pioneered by Yakovlev and extended to a hierarchical framework by Chen, Ibrahim, and Sinha (YCIS type). Recent developments in Bayesian hierarchical cure models have evoked significant interest regarding relationships and preferences between these two classes of models. Our present work proposes a unifying class of cure rate models that facilitates flexible hierarchical model-building while including both existing cure model classes as special cases. This unifying class enables robust modeling by accounting for uncertainty in underlying mechanisms leading to cure. Issues such as regressing on the cure fraction and propriety of the associated posterior distributions under different modeling assumptions are also discussed. Finally, we offer a simulation study and also illustrate with two datasets (on melanoma and breast cancer) that reveal our framework's ability to distinguish among underlying mechanisms that lead to relapse and cure.

19.
Summary.  In longitudinal studies, we are often interested in modelling repeated assessments of volume over time. Our motivating example is an acupuncture clinical trial in which we compare the effects of active acupuncture, sham acupuncture and standard medical care on chemotherapy-induced nausea in patients being treated for advanced stage breast cancer. An important end point for this study was the daily measurement of the volume of emesis over a 14-day follow-up period. The repeated volume data contained many 0s, had apparent serial correlation and had missing observations, making analysis challenging. The paper proposes a two-part latent process model for analysing the emesis volume data which addresses these challenges. We propose a Monte Carlo EM algorithm for parameter estimation and we use this methodology to show the beneficial effects of acupuncture on reducing the volume of emesis in women being treated for breast cancer with chemotherapy. Through simulations, we demonstrate the importance of correctly modelling the serial correlation for making conditional inference. Further, we show that the correct model for the correlation structure is less important for making correct inference on marginal means.

20.
T-cell engagers are a class of oncology drugs which engage T-cells to initiate an immune response against malignant cells. T-cell engagers have features that are unlike prior classes of oncology drugs (e.g., chemotherapies or targeted therapies), because (1) the starting dose level often must be conservative due to immune-related side effects such as cytokine release syndrome (CRS); (2) the dose level can usually be safely titrated higher as a result of the subject's immune system adapting after first exposure to a lower dose; and (3) due to preventive management of CRS, these safety events rarely worsen to become dose-limiting toxicities (DLTs). It is generally believed that for T-cell engagers the dose intensity of the starting dose and the peak dose intensity both correlate with improved efficacy. Existing dose-finding methodologies are not designed to efficiently identify both the initial starting dose and the peak dose intensity in a single trial. In this study, we propose a new trial design, the dose intra-subject escalation to an event (DIETE) design, that can (1) estimate the maximum tolerated initial dose level (MTD1) and (2) incorporate systematic intra-subject dose escalation to estimate the maximum tolerated dose level subsequent to the adaptation induced by the initial dose level (MTD2), using a survival analysis approach. We compare our framework to similar methodologies and evaluate their key operating characteristics.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号