Similar Documents
20 similar documents found
1.
Minimization is an alternative method to stratified permuted block randomization, which may be more effective at balancing treatments when there are many strata. However, its use in the regulatory setting for industry trials remains controversial, primarily due to the difficulty in interpreting conventional asymptotic statistical tests under restricted methods of treatment allocation. We argue that the use of minimization should be critically evaluated when designing the study for which it is proposed. We demonstrate by example how simulation can be used to investigate whether minimization improves treatment balance compared with stratified randomization, and how much randomness can be incorporated into the minimization before any balance advantage is no longer retained. We also illustrate by example how the performance of the traditional model-based analysis can be assessed, by comparing the nominal test size with the observed test size over a large number of simulations. We recommend that the assignment probability for the minimization be selected using such simulations.
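As a companion to abstract 1, here is a minimal simulation sketch (not the authors' code) of Pocock–Simon minimization with a tunable assignment probability. The trial size, the stratification factors, and `p_assign` are illustrative assumptions; treatment imbalance is summarized over repeated simulated trials, which is the kind of check the abstract recommends before choosing the assignment probability.

```python
import numpy as np

rng = np.random.default_rng(2024)
n_patients, p_assign = 200, 0.8            # assumed trial size and minimization probability
factor_levels = [2, 3, 4]                  # three hypothetical stratification factors

def simulate_minimization():
    covs = np.column_stack([rng.integers(0, k, n_patients) for k in factor_levels])
    counts = [np.zeros((k, 2)) for k in factor_levels]   # per-factor, per-arm counts
    arms = np.empty(n_patients, dtype=int)
    for i in range(n_patients):
        imbalance = []
        for a in (0, 1):                   # hypothetically assign patient i to arm a
            d = 0.0
            for f, c in enumerate(counts):
                row = c[covs[i, f]].copy()
                row[a] += 1
                d += abs(row[0] - row[1])  # Pocock-Simon range criterion, summed over factors
            imbalance.append(d)
        preferred = int(np.argmin(imbalance))
        arm = preferred if rng.random() < p_assign else 1 - preferred
        for f, c in enumerate(counts):
            c[covs[i, f], arm] += 1
        arms[i] = arm
    return abs(2 * arms.sum() - n_patients)              # overall treatment-group imbalance

imbalances = [simulate_minimization() for _ in range(1000)]
print("mean |treatment imbalance| under minimization:", np.mean(imbalances))
```

Lowering `p_assign` toward 0.5 injects more randomness; comparing the resulting imbalance distribution with that of stratified block randomization mirrors the simulation exercise the abstract describes.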

2.
In this paper, we investigate the effect of tuberculosis pericarditis (TBP) treatment on CD4 count changes over time and draw inferences in the presence of missing data. We accounted for missing data and conducted sensitivity analyses to assess whether inferences under missing at random (MAR) assumption are sensitive to not missing at random (NMAR) assumptions using the selection model (SeM) framework. We conducted sensitivity analysis using the local influence approach and stress-testing analysis. Our analyses showed that the inferences from the MAR are robust to the NMAR assumption and influential subjects do not overturn the study conclusions about treatment effects and the dropout mechanism. Therefore, the missing CD4 count measurements are likely to be MAR. The results also revealed that TBP treatment does not interact with HIV/AIDS treatment and that TBP treatment has no significant effect on CD4 count changes over time. Although the methods considered were applied to data in the IMPI trial setting, the methods can also be applied to clinical trials with similar settings.

3.
Summary.  We attempt to clarify, and suggest how to avoid, several serious misunderstandings about and fallacies of causal inference. These issues concern some of the most fundamental advantages and disadvantages of each basic research design. Problems include improper use of hypothesis tests for covariate balance between the treated and control groups, and the consequences of using randomization, blocking before randomization and matching after assignment of treatment to achieve covariate balance. Applied researchers in a wide range of scientific disciplines seem to fall prey to one or more of these fallacies and as a result make suboptimal design or analysis choices. To clarify these points, we derive a new four-part decomposition of the key estimation errors in making causal inferences. We then show how this decomposition can help scholars from different experimental and observational research traditions to understand better each other's inferential problems and attempted solutions.

4.
Inverse probability weighting (IPW) can deal with confounding in non-randomized studies. The inverse weights are probabilities of treatment assignment (propensity scores), estimated by regressing assignment on predictors. Problems arise if predictors can be missing. Solutions previously proposed include assuming assignment depends only on observed predictors and multiple imputation (MI) of missing predictors. For the MI approach, it was recommended that missingness indicators be used with the other predictors. We determine when the two MI approaches (with/without missingness indicators) yield consistent estimators and compare their efficiencies. We find that, although including indicators can reduce bias when predictors are missing not at random, it can induce bias when they are missing at random. We propose a consistent variance estimator and investigate performance of the simpler Rubin’s Rules variance estimator. In simulations we find both estimators perform well. IPW is also used to correct bias when an analysis model is fitted to incomplete data by restricting to complete cases. Here, weights are inverse probabilities of being a complete case. We explain how the same MI methods can be used in this situation to deal with missing predictors in the weight model, and illustrate this approach using data from the National Child Development Survey.
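For abstract 4, a minimal sketch, under simulated data and a crude hot-deck imputation step (both assumptions, not the paper's method), of IPW estimation with a multiply imputed propensity-score predictor and Rubin's rules pooling.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, m = 2000, 10                                     # sample size and number of imputations (assumed)
x1, x2 = rng.normal(size=n), rng.normal(size=n)
treat = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x1 + 0.5 * x2))))
y = 1.0 * treat + x1 + x2 + rng.normal(size=n)      # true treatment effect = 1
x2_obs = np.where(rng.random(n) < 0.3, np.nan, x2)  # predictor x2 missing for ~30% of subjects

def ipw_estimate(x2_filled):
    """Hajek IPW estimate of the treatment effect plus a crude variance approximation."""
    X = np.column_stack([x1, x2_filled])
    ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]   # propensity scores
    w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))
    means, variances = [], []
    for a in (1, 0):
        wa, ya = w[treat == a], y[treat == a]
        mu = np.sum(wa * ya) / np.sum(wa)
        means.append(mu)
        variances.append(np.sum(wa**2 * (ya - mu) ** 2) / np.sum(wa) ** 2)
    return means[0] - means[1], variances[0] + variances[1]

estimates, within_vars = [], []
for _ in range(m):
    # crude hot-deck fill-in of the missing predictor (a stand-in for a proper MI model)
    fill = rng.choice(x2_obs[~np.isnan(x2_obs)], size=n)
    est, var = ipw_estimate(np.where(np.isnan(x2_obs), fill, x2_obs))
    estimates.append(est)
    within_vars.append(var)

# Rubin's rules: pooled estimate, within- and between-imputation variance
q_bar, w_bar = np.mean(estimates), np.mean(within_vars)
b = np.var(estimates, ddof=1)
print("IPW effect:", round(q_bar, 3), "SE:", round(float(np.sqrt(w_bar + (1 + 1 / m) * b)), 3))
```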

5.
In an experiment to compare K (≥ 2) treatments, suppose that eligible subjects arrive at an experimental site sequentially and must be treated immediately. In this paper, we assume that the size of the experiment cannot be predetermined and propose and analyze a class of treatment assignment rules which offer compromises between the complete randomization and the perfect balance schemes. A special case of these assignment rules is thoroughly investigated and is featured in the numerical computations. For practical use, a method of implementation of this special rule is provided.

6.
Missing data are present in almost all statistical analyses. In simple paired-design tests, when a subject is missing one of the involved variables (the so-called partially overlapping samples scheme), that subject is usually discarded from the analysis. The resulting lack of consistency between the information reported in the univariate and multivariate analyses is, perhaps, the main consequence. Although randomness of the missing mechanism (missingness completely at random) is a usual and necessary assumption in this situation, the presence of missing data can lead to serious inconsistencies in the reported conclusions. In this paper, the authors develop a simple and direct procedure which uses all of the available information to perform paired tests. In particular, the proposed methodology is applied to test the equality of the means of two paired samples. In addition, the use of two different resampling techniques is also explored. Finally, real-world data are analysed.

7.
Recurrent events involve the occurrences of the same type of event repeatedly over time and are commonly encountered in longitudinal studies. Examples include seizures in epileptic studies or occurrence of cancer tumors. In such studies, interest lies in the number of events that occur over a fixed period of time. One considerable challenge in analyzing such data arises when a large proportion of patients discontinues before the end of the study, for example, because of adverse events, leading to partially observed data. In this situation, data are often modeled using a negative binomial distribution with time‐in‐study as offset. Such an analysis assumes that data are missing at random (MAR). As we cannot test the adequacy of MAR, sensitivity analyses that assess the robustness of conclusions across a range of different assumptions need to be performed. Sophisticated sensitivity analyses for continuous data are being frequently performed. However, this is less the case for recurrent event or count data. We will present a flexible approach to perform clinically interpretable sensitivity analyses for recurrent event data. Our approach fits into the framework of reference‐based imputations, where information from reference arms can be borrowed to impute post‐discontinuation data. Different assumptions about the future behavior of dropouts dependent on reasons for dropout and received treatment can be made. The imputation model is based on a flexible model that allows for time‐varying baseline intensities. We assess the performance in a simulation study and provide an illustration with a clinical trial in patients who suffer from bladder cancer. Copyright © 2015 John Wiley & Sons, Ltd.
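For abstract 7, a minimal sketch (on simulated data; the reference-based imputation machinery is not shown) of the primary analysis described there: a negative binomial model for event counts with log time-in-study as an offset, fitted with statsmodels.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
treat = rng.binomial(1, 0.5, n)
time_in_study = rng.uniform(0.5, 1.0, n)            # years on study before dropout or study end
frailty = rng.gamma(shape=2.0, scale=0.5, size=n)   # subject-level heterogeneity -> overdispersion
rate = np.exp(0.8 - 0.4 * treat)                    # events per year; treatment lowers the rate
counts = rng.poisson(rate * frailty * time_in_study)

X = sm.add_constant(treat)
# alpha is fixed here purely for illustration; in practice it would be estimated
model = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.5),
               offset=np.log(time_in_study))
res = model.fit()
print(res.summary())                                # the coefficient on `treat` is the log rate ratio
```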

8.
Cross-classified data are often obtained in controlled experimental situations and in epidemiologic studies. As an example of the latter, occupational health studies sometimes require personal exposure measurements on a random sample of workers from one or more job groups, in one or more plant locations, on several different sampling dates. Because the marginal distributions of exposure data from such studies are generally right-skewed and well-approximated as lognormal, researchers in this area often consider the use of ANOVA models after a logarithmic transformation. While it is then of interest to estimate original-scale population parameters (e.g., the overall mean and variance), standard candidates such as maximum likelihood estimators (MLEs) can be unstable and highly biased. Uniformly minimum variance unbiased (UMVU) estimators offer a viable alternative, and are adaptable to sampling schemes that are typical of experimental or epidemiologic studies. In this paper, we provide UMVU estimators for the mean and variance under two random effects ANOVA models for log-transformed data. We illustrate substantial mean squared error gains relative to the MLE when estimating the mean under a one-way classification. We illustrate that the results can readily be extended to encompass a useful class of purely random effects models, provided that the study data are balanced.

9.
Over the past years, significant progress has been made in developing statistically rigorous methods to implement clinically interpretable sensitivity analyses for assumptions about the missingness mechanism in clinical trials for continuous and (to a lesser extent) for binary or categorical endpoints. Studies with time‐to‐event outcomes have received much less attention. However, such studies can be similarly challenged with respect to the robustness and integrity of primary analysis conclusions when a substantial number of subjects withdraw from treatment prematurely prior to experiencing an event of interest. We discuss how the methods that are widely used for primary analyses of time‐to‐event outcomes could be extended in a clinically meaningful and interpretable way to stress‐test the assumption of ignorable censoring. We focus on a ‘tipping point’ approach, the objective of which is to postulate sensitivity parameters with a clear clinical interpretation and to identify a setting of these parameters unfavorable enough towards the experimental treatment to nullify a conclusion that was favorable to that treatment. Robustness of primary analysis results can then be assessed based on clinical plausibility of the scenario represented by the tipping point. We study several approaches for conducting such analyses based on multiple imputation using parametric, semi‐parametric, and non‐parametric imputation models and evaluate their operating characteristics via simulation. We argue that these methods are valuable tools for sensitivity analyses of time‐to‐event data and conclude that the method based on a piecewise exponential imputation model of survival has some advantages over the other methods studied here. Copyright © 2016 John Wiley & Sons, Ltd.
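For abstract 9, a minimal sketch of a tipping-point analysis under stated assumptions: event times for censored experimental-arm subjects are multiply imputed from an exponential model whose hazard is inflated by a sensitivity factor delta, and the log-rank test is repeated as delta grows. The simulated data, the plain exponential (rather than piecewise exponential) imputation model, the crude pooling of p-values, and the availability of the lifelines package are all assumptions.

```python
import numpy as np
from lifelines.statistics import logrank_test        # lifelines is assumed to be available

rng = np.random.default_rng(11)
n = 400
arm = rng.binomial(1, 0.5, n)                         # 1 = experimental, 0 = control
true_t = rng.exponential(scale=np.where(arm == 1, 14.0, 10.0))
censor = rng.exponential(scale=20.0, size=n)
time = np.minimum(true_t, censor)
event = (true_t <= censor).astype(int)

# crude exponential hazard on the experimental arm, used as the imputation baseline
base_hazard = event[arm == 1].sum() / time[arm == 1].sum()

for delta in np.arange(1.0, 4.1, 0.5):                # hazard-inflation sensitivity parameter
    pvals = []
    for _ in range(20):                                # multiple imputations per delta
        t_imp, e_imp = time.copy(), event.copy()
        idx = (arm == 1) & (event == 0)                # censored experimental-arm subjects
        # memoryless exponential: add an imputed residual time with inflated hazard
        t_imp[idx] += rng.exponential(scale=1 / (delta * base_hazard), size=idx.sum())
        e_imp[idx] = 1
        r = logrank_test(t_imp[arm == 1], t_imp[arm == 0],
                         event_observed_A=e_imp[arm == 1], event_observed_B=e_imp[arm == 0])
        pvals.append(r.p_value)
    # the tipping point is the smallest delta at which significance is lost
    print(f"delta = {delta:.1f}, median p-value over imputations = {np.median(pvals):.3f}")
```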

10.
In this article, we develop a formal goodness-of-fit testing procedure for one-shot device testing data, in which each observation in the sample is either left censored or right censored. Such data are also called current status data. We provide an algorithm for calculating the nonparametric maximum likelihood estimate (NPMLE) of the unknown lifetime distribution based on such data. Then, we consider four different test statistics that can be used for testing the goodness-of-fit of accelerated failure time (AFT) model by the use of samples of residuals: a chi-square-type statistic based on the difference between the empirical and expected numbers of failures at each inspection time; two other statistics based on the difference between the NPMLE of the lifetime distribution obtained from one-shot device testing data and the distribution specified under the null hypothesis; as a final statistic, we use White's idea of comparing two estimators of the Fisher Information (FI) to propose a test statistic. We then compare these tests in terms of power, and draw some conclusions. Finally, we present an example to illustrate the proposed tests.

11.
Over 60 years ago Ronald Fisher demonstrated a number of potential pitfalls with statistical analyses using ratio variables. Nonetheless, these pitfalls are largely overlooked in contemporary clinical and epidemiological research, which routinely uses ratio variables in statistical analyses. This article aims to demonstrate how very different findings can be generated as a result of less than perfect correlations among the data used to generate ratio variables. These imperfect correlations result from measurement error and random biological variation. While the former can often be reduced by improvements in measurement, random biological variation is difficult to estimate and eliminate in observational studies. Moreover, wherever the underlying biological relationships among epidemiological variables are unclear, and hence the choice of statistical model is also unclear, the different findings generated by different analytical strategies can lead to contradictory conclusions. Caution is therefore required when interpreting analyses of ratio variables whenever the underlying biological relationships among the variables involved are unspecified or unclear. Copyright © 2009 John Wiley & Sons, Ltd.

12.
13.
Detecting dependence between marks and locations of marked point processes
Summary.  We introduce two characteristics for stationary and isotropic marked point processes, E(h) and V(h), and describe their use in investigating mark–point interactions. These quantities are functions of the interpoint distance h and denote the conditional expectation and the conditional variance of a mark respectively, given that there is a further point of the process a distance h away. We present tests based on E and V for the hypothesis that the values of the marks can be modelled by a random field which is independent of the unmarked point process. We apply the methods to two data sets in forestry.

14.
Summary. Models for multiple-test screening data generally require the assumption that the tests are independent conditional on disease state. This assumption may be unreasonable, especially when the biological basis of the tests is the same. We propose a model that allows for correlation between two diagnostic test results. Since models that incorporate test correlation involve more parameters than can be estimated with the available data, posterior inferences will depend more heavily on prior distributions, even with large sample sizes. If we have reasonably accurate information about one of the two screening tests (perhaps the standard currently used test) or the prevalences of the populations tested, accurate inferences about all the parameters, including the test correlation, are possible. We present a model for evaluating dependent diagnostic tests and analyse real and simulated data sets. Our analysis shows that, when the tests are correlated, a model that assumes conditional independence can perform very poorly. We recommend that, if the tests are only moderately accurate and measure the same biological responses, researchers use the dependence model for their analyses.

15.
Judging scholarly posters poses the challenge of assigning judges efficiently. If there are many posters and few reviews per judge, the commonly used balanced incomplete block design is not a feasible option. An additional challenge is that the number of judges is unknown before the event. We propose two connected near-balanced incomplete block designs that both satisfy the requirements of our setting: one that generates a connected assignment and balances the treatments, and another that further balances pairs of treatments. We describe both fixed and random effects models to estimate the population marginal means of the poster scores and rationalize the use of the random effects model. We evaluate the estimation accuracy and efficiency, especially the winning chance of the truly best posters, of the two designs in comparison with a random assignment via simulation studies. The two proposed designs both demonstrate accuracy and efficiency gains over the random assignment.

16.
Randomization in industrial and scientific experiments on equipment has meant randomizing the order of application of levels of treatments to units. This definition is inadequate because it does not render independent error terms. Randomization also requires independent resettings of treatment levels when the levels for the preceding run are the same. We review how the literature incorrectly explains how randomization is to be carried out. The need to reset levels of a treatment from one run to the next is never emphasized. Using a simple example we show why statistical tests are biased for all treatments even when levels for just one treatment are not independently reset. Even if the expected mean squares recognize the restrictions on randomization, the usual F test will not give predictable results because its numerator and denominator are correlated. Experimental design on equipment includes experiments from the chemical, automobile, pharmaceutical, and aeronautical industries. The statistical interpretation of data from such experiments will be misleading. Books on experimental design must emphasize the independent resetting of levels just as carefully as they emphasize the random assignment of treatment levels.

17.
In this article, we consider the efficiency of three experimental designs for comparing two treatments with correlated binary outcomes. This work is motivated in large part by the use of toxicological bioassays with laboratory animals used to identify agents capable of causing adverse health effects in humans, but has much broader research design implications. From the toxicological perspective, the completely randomized (CR), litter-matched (LM) and nested (NE) designs correspond to the random assignment of individual animals, littermates, or entire litters to either a control or test group. The randomization schemes underlying these three designs provide a framework for the construction of exact randomization tests for comparing the two treatment groups, and for the development of the asymptotic properties of these tests. The computed Pitman asymptotic relative efficiencies demonstrate that the LM design is the most powerful in the presence of positive intra-litter correlation, followed by the CR and NE designs, respectively. The relevance of these asymptotic results for the finite sample case is confirmed by computer simulation. The more detailed results presented in this paper will be of value in informing the design of experiments with two treatment groups involving correlated binary outcomes.

18.
We considered binomially distributed random variables whose parameters are unknown, some of which need to be estimated. We studied the maximum likelihood ratio test and the maximally selected χ2-test to detect whether there is a change in the distributions among the random variables. Their limit distributions under the null hypothesis and their asymptotic distributions under the alternative hypothesis were obtained when the number of observations is fixed. We discussed the properties of the limit distribution and found an efficient way to calculate the probability of multivariate normal random variables. Finally, the results for both tests were applied to examples: the Lindisfarne data and the Talipes data. Our conclusions are consistent with other researchers' findings.

19.
Permutation tests are often used to analyze data since they do not require assumptions regarding the form of the distribution, although they do require a random and independent sample selection. We initially considered a permutation test to assess the treatment effect on computed tomography lesion volume in the National Institute of Neurological Disorders and Stroke (NINDS) t-PA Stroke Trial, which has highly skewed data. However, we encountered difficulties in summarizing the permutation test results on the lesion volume. In this paper, we discuss some aspects of permutation tests and illustrate our findings. This experience with the NINDS t-PA Stroke Trial data emphasizes that permutation tests are useful for clinical trials and can be used to validate assumptions of an observed test statistic. The permutation test places fewer restrictions on the underlying distribution but is not always distribution-free or an exact test, especially for ill-behaved data. Quasi-likelihood estimation using the generalized estimating equation (GEE) approach on transformed data seems to be a good choice for analyzing CT lesion data, based on both its corresponding permutation test and its clinical interpretation.
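For abstract 19, a minimal sketch (not the NINDS analysis itself) of a two-sample permutation test for a mean difference on skewed, lognormal-like data; the sample sizes and the number of permutations are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
control = rng.lognormal(mean=3.0, sigma=1.0, size=150)    # skewed "lesion volume"-like data
treated = rng.lognormal(mean=2.8, sigma=1.0, size=150)

observed = treated.mean() - control.mean()
pooled = np.concatenate([treated, control])
n_perm, count = 10000, 0
for _ in range(n_perm):
    rng.shuffle(pooled)                                    # re-label groups under the null of no effect
    diff = pooled[:treated.size].mean() - pooled[treated.size:].mean()
    count += abs(diff) >= abs(observed)
print("two-sided permutation p-value:", (count + 1) / (n_perm + 1))
```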

20.
In count data models, overdispersion of the dependent variable can be incorporated into the model if a heterogeneity term is added to the mean parameter of the Poisson distribution. We use a nonparametric estimate of the heterogeneity density based on a squared Kth-order polynomial expansion, which we generalize to panel data. A numerical illustration using an insurance dataset is discussed. Although some statistical analyses showed no clear differences between these new models and the standard Poisson model with gamma random effects, we show that the choice of the random-effects distribution has a significant influence on the interpretation of the results.
