15 similar documents found; search time: 0 ms
1.
Petros Pechlivanoglou, Fentaw Abegaz, Maarten J. Postma, Ernst Wit 《Pharmaceutical statistics》2015,14(4):322-331
Mixed treatment comparison (MTC) models rely on estimates of relative effectiveness from randomized clinical trials so as to respect randomization across treatment arms. This approach could potentially be simplified by an alternative parameterization of the way effectiveness is modeled. We introduce a treatment‐based parameterization of the MTC model that estimates outcomes on both the study and treatment levels. We compare the proposed model to the commonly used MTC models using a simulation study as well as three randomized clinical trial datasets from published systematic reviews comparing (i) treatments on bleeding after cirrhosis, (ii) the impact of antihypertensive drugs in diabetes mellitus, and (iii) smoking cessation strategies. The simulation results suggest similar or sometimes better performance of the treatment‐based MTC model. Moreover, in the real-data analyses, little difference was observed between the inferences drawn from the two models. Overall, our proposed MTC approach performed as well as, or better than, the commonly applied indirect and MTC models, and it is simpler, faster, and easier to implement in standard statistical software. Copyright © 2015 John Wiley & Sons, Ltd.
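The anchored indirect comparison that MTC models generalize can be sketched in a few lines. The log odds ratios and standard errors below are hypothetical, and the subtraction identity is the standard Bucher adjusted indirect comparison, not the paper's treatment-based parameterization:

```python
import math

# Bucher indirect comparison: with trial estimates of B vs A and C vs A
# on a consistent contrast scale (here, log odds ratios), the B-vs-C
# contrast follows by subtraction and the variances add.
# All numbers are hypothetical.

d_ab, se_ab = -0.50, 0.15   # log OR, treatment B vs A
d_ac, se_ac = -0.80, 0.20   # log OR, treatment C vs A

d_bc = d_ac - d_ab                       # indirect C-vs-B contrast
se_bc = math.sqrt(se_ab**2 + se_ac**2)   # variances add

lo, hi = d_bc - 1.96 * se_bc, d_bc + 1.96 * se_bc
print(round(d_bc, 2), round(se_bc, 2), (round(lo, 2), round(hi, 2)))
```

The wide interval relative to either direct estimate illustrates why indirect evidence alone is weak, and why network models that pool direct and indirect evidence are attractive.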
2.
Experience has shown us that when data are pooled from multiple studies to create an integrated summary, an analysis based on naïvely‐pooled data is vulnerable to the mischief of Simpson's Paradox. Using the proportions of patients with a target adverse event (AE) as an example, we demonstrate the Paradox's effect on both the comparison and the estimation of the proportions. While meta-analytic approaches have been recommended and increasingly used for comparing safety data between treatments, reporting proportions of subjects experiencing a target AE based on data from multiple studies has received little attention. In this paper, we suggest two possible approaches to report these cumulative proportions. In addition, we urge that regulatory guidelines on reporting such proportions be established so that risks can be communicated in a scientifically defensible and balanced manner. Copyright © 2010 John Wiley & Sons, Ltd.
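A minimal numeric sketch of the Paradox with hypothetical adverse-event counts: treatment A has the lower AE proportion within each study, yet naive pooling reverses the ordering because the arms are unevenly sized across studies:

```python
# Illustrative (hypothetical) adverse-event counts from two studies,
# stored as (events, subjects) per treatment arm.
study1 = {"A": (1, 100), "B": (5, 400)}
study2 = {"A": (40, 400), "B": (12, 100)}

def prop(events, n):
    return events / n

# Within each study, A has the LOWER AE proportion
for study in (study1, study2):
    assert prop(*study["A"]) < prop(*study["B"])

# Naive pooling across studies reverses the comparison
pooled = {}
for arm in ("A", "B"):
    e = study1[arm][0] + study2[arm][0]
    n = study1[arm][1] + study2[arm][1]
    pooled[arm] = prop(e, n)

print(pooled)   # pooled A proportion now exceeds pooled B proportion
```

Here pooled A is 41/500 = 0.082 versus pooled B at 17/500 = 0.034, despite A being favored in both studies, which is exactly the hazard a stratified or meta-analytic summary avoids.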
3.
The authors consider the empirical likelihood method for the regression model of mean quality‐adjusted lifetime with right censoring. They show that an empirical log‐likelihood ratio for the vector of the regression parameters is asymptotically a weighted sum of independent chi‐squared random variables. They adjust this empirical log‐likelihood ratio so that the limiting distribution is a standard chi‐square and construct corresponding confidence regions. Simulation studies lead them to conclude that empirical likelihood methods outperform the normal approximation methods in terms of coverage probability. They illustrate their methods with a data example from a breast cancer clinical trial.
4.
We propose a novel method to quantify the similarity between an impression (Q) from an unknown source and a test impression (K) from a known source. Using the property of geometrical congruence in the impressions, the degree of correspondence is quantified using ideas from graph theory and maximum clique (MC). The algorithm uses the x and y coordinates of the edges in the images as the data. We focus on local areas in Q and the corresponding regions in K and extract features for comparison. Using pairs of images with known origin, we train a random forest to classify pairs into mates and non-mates. We collected impressions from 60 pairs of shoes of the same brand and model, worn over six months. Using a different set of very similar shoes, we evaluated the performance of the algorithm in terms of the accuracy with which it correctly classified images into source classes. Using classification error rates and ROC curves, we compare the proposed method to other algorithms in the literature and show that for these data, our method shows good classification performance relative to other methods. The algorithm can be implemented with the R package shoeprintr.
5.
An important part of the evaluation of a therapy is an investigation of the assumption of homogeneity of its effect across pre-defined subpopulations. In this paper we describe simple graphical presentations that could be used to assess the homogeneity of treatment effect and identify outliers. The emphasis in the paper is on meta-analysis but the methods described can be generalized to other investigations.
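A common numeric companion to such graphical displays is Cochran's Q statistic with standardized residuals to flag outlying subgroups. The subgroup estimates below are hypothetical:

```python
# Hypothetical subgroup treatment effects (log odds ratios) and their
# standard errors; the last subgroup is constructed to be an outlier.
effects = [0.30, 0.25, 0.35, 0.90]
ses     = [0.10, 0.12, 0.11, 0.15]

weights = [1 / se**2 for se in ses]                      # inverse-variance weights
pooled  = sum(w * y for w, y in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled effect
Q = sum(w * (y - pooled)**2 for w, y in zip(weights, effects))
df = len(effects) - 1

# I^2: percentage of total variability attributed to heterogeneity
I2 = max(0.0, (Q - df) / Q) * 100

# Standardized residuals identify which subgroup drives the heterogeneity
residuals = [(y - pooled) / se for y, se in zip(effects, ses)]
print(round(Q, 2), round(I2, 1), [round(r, 2) for r in residuals])
```

With these numbers Q ≈ 13.9 exceeds the 5% chi-square critical value of 7.81 on 3 degrees of freedom, so homogeneity would be rejected, and the last subgroup's residual (above 3) marks it as the outlier a forest-type plot would show visually.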
6.
Helmut Petto, Ulrich Mrowietz, Stefan Wilhelm, Alexander Schacht 《Pharmaceutical statistics》2019,18(1):4-21
Assessment of severity is essential for the management of chronic diseases. Continuous variables like scores obtained from the Hamilton Rating Scale for Depression or the Psoriasis Area and Severity Index (PASI) are standard measures used in clinical trials of depression and psoriasis. In clinical trials of psoriasis, for example, the reduction of PASI from baseline in response to therapy, in particular the proportion of patients achieving at least 75%, 90%, or 100% improvement of disease (PASI 75, PASI 90, or PASI 100), is typically used to evaluate treatment efficacy. However, evaluation of the proportions of patients reaching absolute PASI values (eg, ≤1, ≤2, ≤3, or ≤5) has recently gained greater clinical interest and is increasingly being reported. When relative versus absolute scores are standard, as is the case with the PASI in psoriasis, it is difficult to compare absolute changes using existing published data. Thus, we developed a method to estimate absolute PASI levels from aggregated relative levels. This conversion method is based on a latent 2‐dimensional normal distribution for the absolute score at baseline and at a specific endpoint, with a truncation to allow for the baseline inclusion criterion. The model was fitted to aggregated results from simulations and from 3 phase III studies that had known absolute PASI proportions. The predictions represented the actual results quite precisely. This model might be applied to other conditions, such as depression, to estimate proportions of patients achieving an absolute low level of disease activity, given absolute values at baseline and proportions of patients achieving relative improvements at a subsequent time point.
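A Monte-Carlo sketch of the latent-bivariate-normal idea, with all distribution parameters invented rather than fitted: simulate correlated baseline and post-treatment scores, truncate at the inclusion criterion, then read both relative (PASI 75) and absolute (PASI ≤ 2) response proportions off the same latent model:

```python
import random, math

# Hypothetical latent bivariate-normal model for (baseline, post) PASI.
# Parameters are illustrative only, not values fitted in the paper.
random.seed(7)
mu0, sd0 = 20.0, 6.0       # baseline PASI mean / sd
mu1, sd1 = 4.0, 3.0        # on-treatment PASI mean / sd
rho = 0.5                  # latent correlation
inclusion = 12.0           # inclusion criterion: baseline PASI >= 12

samples = []
while len(samples) < 20000:
    z0, e = random.gauss(0, 1), random.gauss(0, 1)
    base = mu0 + sd0 * z0
    if base < inclusion:                      # truncation at baseline
        continue
    post = mu1 + sd1 * (rho * z0 + math.sqrt(1 - rho**2) * e)
    samples.append((base, max(post, 0.0)))    # PASI cannot be negative

# Relative responder definition (>= 75% improvement) and absolute one
pasi75 = sum(post <= 0.25 * base for base, post in samples) / len(samples)
abs2   = sum(post <= 2.0 for base, post in samples) / len(samples)
print(round(pasi75, 2), round(abs2, 2))
```

Because every included patient has baseline ≥ 12, achieving an absolute PASI ≤ 2 implies at least 75% improvement, so the absolute proportion is necessarily the smaller of the two; fitting such a model to published relative proportions is what lets the paper back out the unpublished absolute ones.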
7.
Sylvia Richardson & Peter J. Green 《Journal of the Royal Statistical Society. Series B, Statistical methodology》1997,59(4):731-792
New methodology for fully Bayesian mixture analysis is developed, making use of reversible jump Markov chain Monte Carlo methods that are capable of jumping between the parameter subspaces corresponding to different numbers of components in the mixture. A sample from the full joint distribution of all unknown variables is thereby generated, and this can be used as a basis for a thorough presentation of many aspects of the posterior distribution. The methodology is applied here to the analysis of univariate normal mixtures, using a hierarchical prior model that offers an approach to dealing with weak prior information while avoiding the mathematical pitfalls of using improper priors in the mixture context.
8.
T-cell engagers are a class of oncology drugs that engage T-cells to initiate an immune response against malignant cells. T-cell engagers have features that are unlike prior classes of oncology drugs (e.g., chemotherapies or targeted therapies), because (1) the starting dose level often must be conservative due to immune-related side effects such as cytokine release syndrome (CRS); (2) the dose level can usually be safely titrated higher as a result of the subject's immune system adapting after first exposure to a lower dose; and (3) due to preventive management of CRS, these safety events rarely worsen to become dose limiting toxicities (DLTs). It is generally believed that for T-cell engagers the dose intensity of the starting dose and the peak dose intensity both correlate with improved efficacy. Existing dose finding methodologies are not designed to efficiently identify both the initial starting dose and peak dose intensity in a single trial. In this study, we propose a new trial design, the dose intra-subject escalation to an event (DIETE) design, that can (1) estimate the maximum tolerated initial dose level (MTD1); and (2) incorporate systematic intra-subject dose-escalation to estimate the maximum tolerated dose level subsequent to adaptation induced by the initial dose level (MTD2) with a survival analysis approach. We compare our framework to similar methodologies and evaluate their key operating characteristics.
9.
In medical studies, there is interest in inferring the marginal distribution of a survival time subject to competing risks. The Kyushu Lipid Intervention Study (KLIS) was a clinical study for hypercholesterolemia, where pravastatin treatment was compared with conventional treatment. The primary endpoint was time to events of coronary heart disease (CHD). In this study, however, some subjects died from causes other than CHD or were censored due to loss to follow-up. Because the treatments were targeted to reduce CHD events, the investigators were interested in the effect of the treatment on CHD events in the absence of causes of death or events other than CHD. In this paper, we present a method for estimating treatment group-specific marginal survival curves of time-to-event data in the presence of dependent competing risks. The proposed method is a straightforward extension of the Inverse Probability of Censoring Weighted (IPCW) method to settings with more than one reason for censoring. The results of our analysis showed that the IPCW marginal incidence for CHD was almost the same as the lower bound for which subjects with competing events were assumed to be censored at the end of all follow-up. This result provided reassurance that the results in KLIS were robust to competing risks.
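The weighting principle behind IPCW can be shown in a toy missing-data version (not the paper's survival setting, and not KLIS data): when the probability of being observed given a covariate is known, weighting each observed outcome by its inverse removes the bias of the naive complete-case estimate:

```python
import random

# Toy illustration of inverse-probability weighting. Outcome prevalence
# depends on a binary covariate x, and so does the chance of being
# observed, so the complete-case mean is biased; reweighting by
# 1 / P(observed | x) recovers the true marginal mean.
random.seed(3)
n = 100_000
num_naive = den_naive = num_ipcw = den_ipcw = 0.0
for _ in range(n):
    x = random.random() < 0.5           # covariate (e.g., high-risk group)
    y = 1.0 if random.random() < (0.8 if x else 0.2) else 0.0
    p_obs = 0.3 if x else 0.9           # observation probability depends on x
    if random.random() < p_obs:
        w = 1.0 / p_obs                 # inverse-probability weight
        num_naive += y; den_naive += 1
        num_ipcw += w * y; den_ipcw += w

true_mean = 0.5 * 0.8 + 0.5 * 0.2       # marginal E[y] = 0.5
print(round(num_naive / den_naive, 2), round(num_ipcw / den_ipcw, 2), true_mean)
```

In the survival version, the weight is the inverse of the estimated probability of remaining uncensored up to each event time, and the paper's extension simply allows that probability to reflect more than one reason for censoring.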
10.
Receiver operating characteristic (ROC) curves are useful for studying the performance of diagnostic tests. ROC curves occur in many fields of application including psychophysics, quality control and medical diagnostics. In practical situations, often the responses to a diagnostic test are classified into a number of ordered categories. Such data are referred to as ratings data. It is typically assumed that the underlying model is based on a continuous probability distribution. The ROC curve is then constructed from such data using this probability model. Properties of the ROC curve are inherited from the model. Therefore, understanding the role of different probability distributions in ROC modeling is an interesting and important area of research. In this paper the Lomax distribution is considered as a model for ratings data and the corresponding ROC curve is derived. The maximum likelihood estimation procedure for the related parameters is discussed. This procedure is then illustrated in the analysis of a neurological data example. 
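Under the simplifying assumption of a common scale parameter (an assumption made here for illustration, not necessarily the paper's parameterization), the Lomax ROC curve has a convenient closed form that a short script can verify numerically; the shape and scale values are illustrative:

```python
# Scores in the two groups follow Lomax distributions with common scale
# lam and shapes a0 (non-diseased) and a1 (diseased), a1 < a0 so that
# diseased scores tend to be larger.
# Lomax survival: S(x) = (1 + x/lam) ** (-a), so with a shared scale
# TPR = FPR ** (a1/a0), which integrates to AUC = a0 / (a0 + a1).
a0, a1, lam = 3.0, 1.0, 2.0

def survival(x, a):
    return (1 + x / lam) ** (-a)

# ROC points over a grid of decision thresholds
thresholds = [0.1 * k for k in range(200)]
roc = [(survival(t, a0), survival(t, a1)) for t in thresholds]

# Every point satisfies the closed-form curve TPR = FPR ** (a1/a0)
for fpr, tpr in roc:
    assert abs(tpr - fpr ** (a1 / a0)) < 1e-9

# AUC by the trapezoidal rule agrees with a0 / (a0 + a1) = 0.75
roc_sorted = sorted(roc)
auc = sum((x2 - x1) * (y1 + y2) / 2
          for (x1, y1), (x2, y2) in zip(roc_sorted, roc_sorted[1:]))
print(round(auc, 3), a0 / (a0 + a1))
```

The heavy Lomax tail shows up as an ROC curve that rises steeply near the origin, which is one reason the choice of underlying distribution matters when modeling ratings data.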
11.
12.
For the restricted parameter space (0,1), we propose Zhang’s loss function, which satisfies all seven properties of a good loss function on (0,1). We then calculate the Bayes rule (estimator), the posterior expectation, the integrated risk, and the Bayes risk of the parameter in (0,1) under Zhang’s loss function. We also calculate the usual Bayes estimator under the squared error loss function, which is shown to underestimate the Bayes estimator under Zhang’s loss function. Finally, numerical simulations and a real data example on monthly magazine exposure illustrate our theoretical results on the two ordering relationships between the Bayes estimators and the posterior expected Zhang’s losses (PEZLs).
13.
In any epidemic, there may exist an unidentified subpopulation which might be naturally immune or isolated and who will not be involved in the transmission of the disease. Estimation of key parameters, for example, the basic reproductive number, without accounting for this possibility would underestimate the severity of the epidemic. Here, we propose a procedure to estimate the basic reproductive number (R0) in an epidemic model with an unknown initial number of susceptibles. The infection process is usually not completely observed, but is reconstructed by a kernel‐smoothing method under a counting process framework. Simulation is used to evaluate the performance of the estimators for major epidemics. We illustrate the procedure using the Abakaliki smallpox data.
14.
Patrick Borges 《Journal of Statistical Computation and Simulation》2017,87(9):1712-1722
In this paper we develop a regression model for survival data in the presence of long-term survivors based on the generalized Gompertz distribution introduced by El-Gohary et al. [The generalized Gompertz distribution. Appl Math Model. 2013;37:13–24] in a defective version. This model includes as a special case the Gompertz cure rate model proposed by Gieser et al. [Modelling cure rates using the Gompertz model with covariate information. Stat Med. 1998;17:831–839]. An expectation-maximization (EM) algorithm is then developed for determining the maximum likelihood estimates (MLEs) of the parameters of the model. In addition, we discuss the construction of confidence intervals for the parameters using the asymptotic distributions of the MLEs and the parametric bootstrap method, and assess their performance through a Monte Carlo simulation study. Finally, the proposed methodology is applied to a database on uterine cervical cancer.
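The cure-fraction mechanism of a defective Gompertz model can be sketched directly from the survival function: with a negative shape parameter, survival does not decay to zero but plateaus at exp(a/b), the long-term survivor (cure) fraction. The parameter values below are illustrative:

```python
import math

# Defective Gompertz survival: S(t) = exp(-(a/b) * (exp(b*t) - 1)).
# With rate a > 0 and a NEGATIVE shape b, exp(b*t) -> 0 as t grows,
# so S(t) plateaus at the cure fraction p = exp(a/b) in (0, 1).
# Illustrative parameters, not estimates from the paper.
a, b = 0.5, -0.4

def survival(t):
    return math.exp(-(a / b) * (math.exp(b * t) - 1))

cure_fraction = math.exp(a / b)

# S(0) = 1, and S(t) approaches the plateau for large t
print(round(survival(0.0), 3), round(survival(50.0), 3), round(cure_fraction, 3))
```

In the regression version, covariates enter through the parameters, so the implied cure fraction varies across patients, which is what distinguishes this defective formulation from a standard proper survival model.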
15.
V. Dupač 《Statistics》2013,47(1):107-117
Usually, the dependence in stationary processes is described by a set of coefficients. In this paper, a measure of dependence is proposed which can be used instead of the autocorrelation function, and another measure for the dependence between two processes instead of the cross-correlation function and coherence coefficients. Finally, we investigate the improvement in extrapolation of a process that results from knowledge of another related process.
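For reference, the classical summary that the proposed measure is meant to replace is the sample autocorrelation function, sketched here on a simulated AR(1) series (an illustration, not data from the paper):

```python
import random

# Simulate an AR(1) process x_t = phi * x_{t-1} + noise, whose
# theoretical autocorrelation at lag k is phi ** k.
random.seed(1)
phi = 0.8
x = [0.0]
for _ in range(5000):
    x.append(phi * x[-1] + random.gauss(0, 1))

def acf(series, lag):
    """Sample autocorrelation at the given lag."""
    n = len(series)
    m = sum(series) / n
    c0 = sum((v - m) ** 2 for v in series) / n
    ck = sum((series[i] - m) * (series[i + lag] - m)
             for i in range(n - lag)) / n
    return ck / c0

print([round(acf(x, k), 2) for k in (1, 2, 3)])   # roughly 0.8, 0.64, 0.51
```

The geometric decay of these coefficients is the whole-sequence description that a single scalar measure of dependence would compress.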