Similar Articles
20 similar articles found (search time: 67 ms)
1.
We present models for the combined analysis of evidence from randomized controlled trials categorized as being at either low or high risk of bias due to a flaw in their conduct. We formulate a bias model that incorporates between-study and between-meta-analysis heterogeneity in bias, and uncertainty in overall mean bias. We obtain algebraic expressions for the posterior distribution of the bias-adjusted treatment effect, which provide limiting values for the information that can be obtained from studies at high risk of bias. The parameters of the bias model can be estimated from collections of previously published meta-analyses. We explore alternative models for such data, and alternative methods for introducing prior information on the bias parameters into a new meta-analysis. Results from an illustrative example show that the bias-adjusted treatment effect estimates are sensitive to the way in which the meta-epidemiological data are modelled, but that using point estimates for bias parameters provides an adequate approximation to using a full joint prior distribution. A sensitivity analysis shows that the gain in precision from including studies at high risk of bias is likely to be low, however numerous or large those studies, and that little is gained by incorporating them unless the information from studies at low risk of bias is limited. We discuss approaches that might increase the value of including studies at high risk of bias, and the acceptability of the methods in the evaluation of health care interventions.
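As an illustration of the kind of adjustment this abstract describes, the sketch below pools study estimates after shifting high-risk studies by an assumed mean bias and inflating their variances by an assumed bias variance. `bias_adjusted_pool` and its arguments are hypothetical names, and this is a simplified stand-in for the authors' model, not their exact formulation.

```python
import numpy as np

def bias_adjusted_pool(y, v, high_risk, mean_bias=0.0, var_bias=0.0):
    """Inverse-variance pooling in which studies flagged as high risk of bias
    are shifted by an assumed mean bias and down-weighted by an assumed
    between-study bias variance (a simplified sketch, not the paper's model)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    hr = np.asarray(high_risk, bool)
    y_adj = np.where(hr, y - mean_bias, y)   # correct for the expected bias
    v_adj = np.where(hr, v + var_bias, v)    # inflate variance for bias uncertainty
    w = 1.0 / v_adj
    est = float(np.sum(w * y_adj) / np.sum(w))
    se = float(np.sqrt(1.0 / np.sum(w)))
    return est, se

# As var_bias grows, the high-risk study's weight vanishes and the pooled
# estimate approaches the low-risk-only estimate.
est, se = bias_adjusted_pool([0.2, 0.3, 1.0], [0.04, 0.04, 0.04],
                             [False, False, True], var_bias=1e9)
```

This mirrors the limiting behaviour the abstract mentions: as the assumed bias variance increases, the information contributed by high-risk studies tends to zero.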

2.
Bias in meta-analysis due to outcome variable selection within studies
Although bias in meta-analysis arising from selective publication has been studied, within-study selection has received little attention. Chronic diseases often have several possible outcome variables. Methods based on the size of the effect allow results from studies with different outcomes to be combined. However, the possibility of selective reporting of outcomes must be considered. The effect of selective reporting on estimates of the size of the effect and significance levels is presented, and sensitivity analyses are suggested. Substantial publication bias could arise from multiple testing of outcomes in a study, followed by selective reporting. Two meta-analyses, on anthelminth therapy and a treatment for incontinence, are reassessed allowing for within-study selection, as it is clear that more outcomes had been measured than were reported. The sensitivity analyses show that the robustness of the anthelminth results is dependent on what assumption one makes about the reporting strategy for the largest trial. The possible influence of correlation between within-child measurements was such that the conclusions could easily be reversed. The effect of a mild assumption on within-trial selection alone could alter general recommendations about the treatment for incontinence.
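A small Monte Carlo sketch (not drawn from the paper) makes the selection effect concrete: if every true effect is zero but each study reports only its most significant of five outcomes, the reported statistics are biased upward.

```python
import numpy as np

# Monte Carlo sketch of within-study outcome selection: every true effect is
# zero, each study measures five independent outcomes, and only the most
# significant one is reported.
rng = np.random.default_rng(42)
z = rng.standard_normal((5000, 5))   # 5000 studies, 5 null outcomes each
reported = z.max(axis=1)             # selective reporting: keep the largest
selection_bias = reported.mean()     # around 1.16, the mean of the max of 5 N(0,1)s
```

Even under the null, the average reported z-statistic sits far above zero, which is the mechanism behind the publication bias the abstract warns about.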

3.
This article provides a unified methodology of meta-analysis that synthesizes medical evidence by using both available individual patient data (IPD) and published summary statistics within the framework of the likelihood principle. Most up-to-date scientific evidence on medicine is crucial information not only to consumers but also to decision makers, and can only be obtained when existing evidence from the literature and the most recent individual patient data are optimally synthesized. We propose a general linear mixed effects model to conduct meta-analyses when individual patient data are available for only some of the studies and summary statistics have to be used for the rest. Our approach includes as special cases both traditional meta-analysis, in which only summary statistics are available for all studies, and the other extreme, in which individual patient data are available for all studies. We implement the proposed model with statistical procedures from standard computing packages. We provide measures of heterogeneity based on the proposed model. Finally, we demonstrate the proposed methodology through a real-life example studying cerebrospinal fluid biomarkers to identify individuals at high risk of developing Alzheimer's disease while they are still cognitively normal.

4.
This paper focusses mainly on three crucial choices identified in recent meta-analyses, namely (a) the effect of using approximate statistical techniques rather than exact methods, (b) the effect of using fixed or random effect models, and (c) the effect of publication bias on the meta-analysis result. The paper considers their impact on a set of over thirty studies of passive smoking and lung cancer in non-smokers, and addresses other issues such as the role of study comparability, the choice of raw or adjusted data when using published summary statistics, and the effect of biases such as misclassification of subjects and study quality. The paper concludes that, at least in this example, different conclusions might be drawn from meta-analyses based on fixed or random effect models; that exact methods might increase estimated confidence interval widths by 5–20% over standard approximate (logit and Mantel-Haenszel) methods, and that these approximate methods themselves differ by this order of magnitude; that taking study quality into account changes some results, and also improves homogeneity; that the use of unadjusted or author-adjusted data makes limited difference; that there appears to be obvious publication bias favouring observed raised relative risks; and that the choice of studies for inclusion is the single most critical choice made by the modeller.

5.
In this article, we propose an estimation procedure to estimate parameters of a joint model when there exists a relationship between cluster size and clustered failure times of subunits within a cluster. We use a joint random effect model of clustered failure times and cluster size. To investigate the possible association, two submodels are connected by a common latent variable. The EM algorithm is applied for the estimation of parameters in the models. Simulation studies are performed to assess the finite sample properties of the estimators. Also, sensitivity tests show the influence of the misspecification of random effect distributions. The methods are applied to a lymphatic filariasis study for adult worm nests.

6.
Because of its simplicity, the Q statistic is frequently used to test the heterogeneity of the estimated intervention effect in meta-analyses of individually randomized trials. However, it is inappropriate to apply it directly to meta-analyses of cluster randomized trials without taking clustering effects into account. We consider the properties of the adjusted Q statistic for testing heterogeneity in meta-analyses of cluster randomized trials with binary outcomes. We also derive an analytic expression for the power of this statistic to detect heterogeneity, which can be useful when planning a meta-analysis. A simulation study is used to assess the performance of the adjusted Q statistic in terms of its Type I error rate and power, and the simulation results are compared to those obtained from the proposed formula. The adjusted Q statistic is found to have a Type I error rate close to the nominal level of 5%, whereas the unadjusted Q statistic commonly used to test for heterogeneity in meta-analyses of individually randomized trials has an inflated Type I error rate. Data from a meta-analysis of four cluster randomized trials are used to illustrate the procedures.
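A minimal sketch of the clustering adjustment, assuming the common variance-inflation form of the design effect 1 + (m − 1)·ICC; `adjusted_q` is a hypothetical helper, and the paper's exact statistic may differ in detail.

```python
import numpy as np

def adjusted_q(y, v, m, icc):
    """Cochran-style Q with each study's variance inflated by the design
    effect 1 + (m - 1)*icc for mean cluster size m, before inverse-variance
    weighting. A sketch of the clustering adjustment, not the paper's exact
    statistic."""
    y, v, m = (np.asarray(a, float) for a in (y, v, m))
    deff = 1.0 + (m - 1.0) * icc          # design effect per study
    w = 1.0 / (v * deff)                  # clustering-adjusted weights
    pooled = np.sum(w * y) / np.sum(w)
    return float(np.sum(w * (y - pooled) ** 2))

# With icc = 0 this reduces to the unadjusted Q; a positive ICC shrinks Q,
# which is why the unadjusted statistic over-rejects for clustered trials.
q_unadj = adjusted_q([0.1, 0.4, 0.3, 0.6], [0.02] * 4, [20] * 4, 0.0)
q_adj = adjusted_q([0.1, 0.4, 0.3, 0.6], [0.02] * 4, [20] * 4, 0.05)
```

The adjusted value would then be referred to a chi-square distribution with (number of studies − 1) degrees of freedom, as for the ordinary Q test.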

7.
Decisions to undertake biomedical studies might depend on the results of previous similar studies. So too might the timing of meta-analyses. We show how temporal dependence among the studies analyzed in a meta-analysis, as well as the timing of the meta-analysis itself, can bias its results. We show analytically and numerically that a “toy” meta-analysis is biased. We then study bias in a more realistic stochastic process model of meta-analysis. We conclude that in meta-analysis it is difficult to avoid bias caused by statistical dependence among studies.
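The timing effect can be illustrated with a toy simulation in the spirit of (but not identical to) the paper's model: rerunning a meta-analysis after each new null study, and declaring the literature positive the first time the pooled z-statistic is significant, inflates the false-positive rate far above the nominal 5%.

```python
import numpy as np

# Toy simulation of data-dependent meta-analysis timing (an illustration,
# not the paper's exact model): each study has true effect 0 and standard
# error 1; the meta-analysis is rerun after every new study and is "positive"
# the first time the pooled |z| exceeds 1.96.
rng = np.random.default_rng(1)
reps, max_studies = 4000, 10
effects = rng.standard_normal((reps, max_studies))
k = np.arange(1, max_studies + 1)
z = effects.cumsum(axis=1) / np.sqrt(k)   # pooled z after k equal-variance studies
false_positive_rate = (np.abs(z) > 1.96).any(axis=1).mean()   # well above 0.05
```

Each individual look has the nominal 5% error rate; the inflation comes entirely from choosing when to synthesize, which is the dependence mechanism the abstract analyzes.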

8.
Many areas of statistical modeling are plagued by the “curse of dimensionality,” in which there are more variables than observations. This is especially true when developing functional regression models where the independent dataset is some type of spectral decomposition, such as data from near-infrared spectroscopy. While we could develop a very complex model by simply taking enough samples (such that n > p), this could prove impossible or prohibitively expensive. In addition, a regression model developed like this could turn out to be highly inefficient, as spectral data usually exhibit high multicollinearity. In this article, we propose a two-part algorithm for selecting an effective and efficient functional regression model. Our algorithm begins by evaluating a subset of discrete wavelet transformations, allowing for variation in both wavelet and filter number. Next, we perform an intermediate processing step to remove variables with low correlation to the response data. Finally, we use the genetic algorithm to perform a stochastic search through the subset regression model space, driven by an information-theoretic objective function. We allow our algorithm to develop the regression model for each response variable independently, so as to optimally model each variable. We demonstrate our method on the familiar biscuit dough dataset, which has been used in a similar context by several researchers. Our results demonstrate both the flexibility and the power of our algorithm. For each response variable, a different subset model is selected, and different wavelet transformations are used. The models developed by our algorithm show an improvement, as measured by lower mean error, over results in the published literature.

9.
Genome-wide association studies are effective in investigating loci associated with complex diseases. Sometimes the genotype is not exactly decoded and only a genotype probability is obtained. In this case, an F-test based on the imputed genotype is usually used for the association analysis. Simulations show that existing methods such as the dosage test have poor performance when the genetic model is misspecified. In this study, we develop a robust test to detect the association between a disease and genetic loci when the genotype is uncertain and the genetic model is unknown.

10.
Estimation of the allele frequency at genetic markers is a key ingredient in biological and biomedical research, such as studies of human genetic variation or of the genetic etiology of heritable traits. As genetic data becomes increasingly available, investigators face a dilemma: when should data from other studies and population subgroups be pooled with the primary data? Pooling additional samples will generally reduce the variance of the frequency estimates; however, used inappropriately, pooled estimates can be severely biased due to population stratification. Because of this potential bias, most investigators avoid pooling, even for samples with the same ethnic background and residing on the same continent. Here, we propose an empirical Bayes approach for estimating allele frequencies of single nucleotide polymorphisms. This procedure adaptively incorporates genotypes from related samples, so that more similar samples have a greater influence on the estimates. In every example we have considered, our estimator achieves a mean squared error (MSE) that is smaller than either pooling or not, and sometimes substantially improves over both extremes. The bias introduced is small, as is shown by a simulation study that is carefully matched to a real data example. Our method is particularly useful when small groups of individuals are genotyped at a large number of markers, a situation we are likely to encounter in a genome-wide association study.
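One simple way to realize this kind of shrinkage between "pool" and "don't pool" is a Beta-prior (pseudo-count) estimator, sketched below. `eb_allele_freq` and the fixed prior weight `k` are illustrative assumptions: the paper's procedure estimates the degree of pooling adaptively from the similarity of the samples, which a fixed `k` does not do.

```python
def eb_allele_freq(x, n, x_ext, n_ext, k=50.0):
    """Shrink the primary-sample allele frequency x/n toward an external
    pooled frequency x_ext/n_ext via a Beta prior carrying k pseudo-counts.
    A simplified stand-in for adaptive empirical Bayes pooling; here k is
    fixed rather than learned from sample relatedness."""
    p_ext = x_ext / n_ext
    return (x + k * p_ext) / (n + k)

# Small primary sample: the estimate moves noticeably toward the pool.
p_small = eb_allele_freq(10, 40, 35, 100)
# Large primary sample: the pool's influence fades.
p_large = eb_allele_freq(2500, 10000, 35, 100)
```

This captures the qualitative behaviour the abstract describes: the estimate always lies between the unpooled and pooled extremes, and the primary data dominate as its sample size grows.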

11.
Meta-analysis in the presence of unexplained heterogeneity is frequently undertaken by using a random-effects model, in which the effects underlying different studies are assumed to be drawn from a normal distribution. Here we discuss the justification and interpretation of such models, by addressing in turn the aims of estimation, prediction and hypothesis testing. A particular issue that we consider is the distinction between inference on the mean of the random-effects distribution and inference on the whole distribution. We suggest that random-effects meta-analyses as currently conducted often fail to provide the key results, and we investigate the extent to which distribution-free, classical and Bayesian approaches can provide satisfactory methods. We conclude that the Bayesian approach has the advantage of naturally allowing for full uncertainty, especially for prediction. However, it is not without problems, including computational intensity and sensitivity to a priori judgements. We propose a simple prediction interval for classical meta-analysis and offer extensions to standard practice of Bayesian meta-analysis, making use of an example of studies of 'set shifting' ability in people with eating disorders.
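The distinction between inference on the mean and on the whole random-effects distribution can be made concrete with a DerSimonian-Laird fit and the simple prediction interval mu ± q·sqrt(tau² + se²). Note one deliberate substitution: the paper's proposal uses a t quantile with k − 2 degrees of freedom, whereas this sketch uses a normal quantile to stay within the standard library, so its intervals are slightly narrower.

```python
import numpy as np
from statistics import NormalDist

def re_meta_prediction_interval(y, v, level=0.95):
    """DerSimonian-Laird random-effects fit plus a simple prediction interval
    for the effect in a new study. Sketch only: a normal quantile replaces
    the t quantile (k - 2 df) used in the paper's proposal."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)                     # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                 # DL heterogeneity
    w_re = 1.0 / (v + tau2)
    mu = float(np.sum(w_re * y) / np.sum(w_re))
    se = float(np.sqrt(1.0 / np.sum(w_re)))
    zq = NormalDist().inv_cdf(0.5 + level / 2.0)
    ci = (mu - zq * se, mu + zq * se)                       # mean of the distribution
    half = zq * float(np.sqrt(tau2 + se ** 2))
    pi = (mu - half, mu + half)                             # effect in a new study
    return mu, ci, pi
```

With any heterogeneity (tau² > 0) the prediction interval is strictly wider than the confidence interval for the mean, which is exactly the gap the abstract argues standard practice overlooks.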

12.
Many seemingly different problems in machine learning, artificial intelligence, and symbolic processing can be viewed as requiring the discovery of a computer program that produces some desired output for particular inputs. When viewed in this way, the process of solving these problems becomes equivalent to searching a space of possible computer programs for a highly fit individual computer program. The recently developed genetic programming paradigm described herein provides a way to search the space of possible computer programs for a highly fit individual computer program to solve (or approximately solve) a surprising variety of different problems from different fields. In genetic programming, populations of computer programs are genetically bred using the Darwinian principle of survival of the fittest and using a genetic crossover (sexual recombination) operator appropriate for genetically mating computer programs. Genetic programming is illustrated via an example of machine learning of the Boolean 11-multiplexer function and symbolic regression of the econometric exchange equation from noisy empirical data.

Hierarchical automatic function definition enables genetic programming to define potentially useful functions automatically and dynamically during a run, much as a human programmer writing a complex computer program creates subroutines (procedures, functions) to perform groups of steps which must be performed with different instantiations of the dummy variables (formal parameters) in more than one place in the main program. Hierarchical automatic function definition is illustrated via the machine learning of the Boolean 11-parity function.

13.

Meta-analysis refers to quantitative methods for combining results from independent studies in order to draw overall conclusions. Hierarchical models, including selection models, are introduced and shown to be useful in Bayesian meta-analysis. Semiparametric hierarchical models are proposed using the Dirichlet process prior. This rich class of models combines the information of independent studies, allowing investigation of variability both between and within studies, as well as of the weight function. Here we investigate the sensitivity of results to unobserved studies by considering a hierarchical selection model with an unknown weight function, and use Markov chain Monte Carlo methods to develop inference for the parameters of interest. Using Bayesian methods, this model is applied to a meta-analysis of twelve studies comparing the effectiveness of two different types of fluoride in preventing cavities. A clinically informative prior is assumed. Summaries and plots of model parameters are analyzed to address questions of interest.

14.
A problem which occurs in the practice of meta-analysis is that one or more component studies may have sparse data, such as zero events in the treatment and control groups. Two possible approaches were explored using simulations. The corrected method, in which one half was added to each cell, was compared to the uncorrected method. These methods were compared over a range of sparse-data situations in terms of coverage rates using three summary statistics: the Mantel-Haenszel odds ratio and the DerSimonian and Laird odds ratio and rate difference. The uncorrected method performed better only when using the Mantel-Haenszel odds ratio with very little heterogeneity present. For all other sparse-data applications, the continuity correction performed better and is recommended for use in meta-analyses of similar scope.
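A sketch of the two approaches compared in the abstract, using the Mantel-Haenszel pooled odds ratio with an optional one-half continuity correction applied to any table containing a zero cell; `mh_odds_ratio` is a hypothetical helper, not code from the paper.

```python
def mh_odds_ratio(tables, correction=0.5):
    """Mantel-Haenszel pooled odds ratio over 2x2 tables given as
    [a, b, c, d] = [treatment events, treatment non-events,
    control events, control non-events]. `correction` is added to every
    cell of any table containing a zero (the 'corrected method');
    pass correction=0 for the uncorrected method."""
    num = den = 0.0
    for a, b, c, d in tables:
        if correction and 0 in (a, b, c, d):
            a, b, c, d = (x + correction for x in (a, b, c, d))
        n = a + b + c + d
        num += a * d / n   # MH numerator contribution
        den += b * c / n   # MH denominator contribution
    return num / den

# A zero-event table yields a degenerate odds ratio of 0 without the
# correction, but a finite positive estimate with it.
uncorrected = mh_odds_ratio([[0, 20, 2, 18]], correction=0)
corrected = mh_odds_ratio([[0, 20, 2, 18]])
```

This shows mechanically why the correction matters for sparse data: without it, zero cells force degenerate contributions that distort both the point estimate and its coverage.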

15.
Analysis of familial aggregation in the presence of varying family sizes
Family studies are frequently undertaken as the first step in the search for genetic and/or environmental determinants of disease. Significant familial aggregation of disease is suggestive of a genetic aetiology for the disease and may lead to more focused genetic analysis. Of course, it may also be due to shared environmental factors. Many methods have been proposed in the literature for the analysis of family studies. One model that is appealing for the simplicity of its computation and the conditional interpretation of its parameters is the quadratic exponential model. However, a limiting factor in its application is that it is not reproducible, meaning that all families must be of the same size. To increase the applicability of this model, we propose a hybrid approach in which analysis is based on the assumption of the quadratic exponential model for a selected family size and combines a missing data approach for smaller families with a marginalization approach for larger families. We apply our approach to a family study of colorectal cancer that was sponsored by the Cancer Genetics Network of the National Institutes of Health. We investigate the properties of our approach in simulation studies. Our approach applies more generally to clustered binary data.

16.
Recently, researchers have tried to design the T2 chart economically to achieve the minimum possible quality cost; however, when the T2 chart is designed, it is important to consider multiple scenarios. This research presents robust economic designs of the T2 chart for situations where there is more than one scenario. An illustrative example is used to demonstrate the effect of the model parameters on the optimal designs. The genetic algorithm optimization method is employed to obtain the optimal designs. Simulation studies show that the robust economic designs of the T2 chart are more effective in practice than the traditional economic design.

17.
In studies that involve censored time-to-event data, stratification is frequently encountered due to different reasons, such as stratified sampling or model adjustment due to violation of model assumptions. Often, the main interest is not in the clustering variables, and the cluster-related parameters are treated as nuisance. When inference is about a parameter of interest in presence of many nuisance parameters, standard likelihood methods often perform very poorly and may lead to severe bias. This problem is particularly evident in models for clustered data with cluster-specific nuisance parameters, when the number of clusters is relatively high with respect to the within-cluster size. However, it is still unclear how the presence of censoring would affect this issue. We consider clustered failure time data with independent censoring, and propose frequentist inference based on an integrated likelihood. We then apply the proposed approach to a stratified Weibull model. Simulation studies show that appropriately defined integrated likelihoods provide very accurate inferential results in all circumstances, such as for highly clustered data or heavy censoring, even in extreme settings where standard likelihood procedures lead to strongly misleading results. We show that the proposed method performs generally as well as the frailty model, but it is superior when the frailty distribution is seriously misspecified. An application, which concerns treatments for a frequent disease in late-stage HIV-infected people, illustrates the proposed inferential method in Weibull regression models, and compares different inferential conclusions from alternative methods.

18.
The generalized odds ratio (ORG) is a novel model-free approach to testing association in genetic studies by estimating the overall risk effect based on the complete genotype distribution. However, the power of ORG has not been explored, particularly in a setting where the mode of inheritance is known. A population genetics model was simulated in order to define the mode of inheritance of a pertinent gene–disease association in advance. Then, the power of ORG was explored based on this model and compared with the chi-square test for trend. The model considered bi- and tri-allelic gene–disease associations, and deviations from the Hardy–Weinberg equilibrium (HWE). The simulations showed that bi- and tri-allelic variants have the same pattern of power results. The power of ORG increases with the frequency of the mutant allele, the coefficient of selection and, of course, the degree of dominance of the mutant allele. Deviation from HWE has a considerable impact on power only for small values of the above parameters. The ORG showed superior power compared with the chi-square test for trend when there is deviation from HWE; otherwise, the pattern of results was similar in both approaches.

19.
Responses of two groups, measured on the same ordinal scale, are compared through the column effect association model, applied on the corresponding 2 × J contingency table. Monotonic or umbrella shaped ordering for the scores of the model are related to stochastic or umbrella ordering of the underlying response distributions, respectively. An algorithm for testing all possible hypotheses of stochastic ordering and deciding on an appropriate one is proposed.

20.
Although efficiency robust tests are preferred for genetic association studies when the genetic model is unknown, their statistical properties have been studied for different study designs separately under special situations. We study some statistical properties of the maximin efficiency robust test and a maximum-type robust test (MAX3) under a general setting and obtain unified results. The results can also be applied to testing hypotheses with a constrained two-dimensional parameter space. The results are applied to genetic association studies using case–parents trio data.

