Similar Literature
20 similar documents found (search time: 31 ms)
1.
Missing variances in summary-level data can be a problem when an inverse variance weighted meta-analysis is undertaken. A wide range of approaches exist for dealing with this issue, such as excluding data without a variance measure, using a function of sample size as a weight, and imputing the missing standard errors/deviations. A non-linear mixed effects modelling approach was taken to describe the time-course of standard deviations across 14 studies. The model was then used to predict the missing standard deviations, thus enabling a precision weighted model-based meta-analysis of a mean pain endpoint over time. Maximum likelihood and Bayesian approaches were implemented with example code to illustrate how this imputation can be carried out and to compare the output from each method. The resultant imputations were nearly identical for the two approaches. This modelling approach acknowledges the fact that standard deviations are not necessarily constant over time and can differ between treatments and across studies in a predictable way.
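A minimal base-R sketch of the imputation step, with invented data and a simple log-linear fit standing in for the paper's non-linear mixed effects model:

```r
# Model log(SD) as a function of time across studies, then impute missing SDs
# from the fitted curve (all data are hypothetical).
set.seed(1)
dat <- data.frame(
  study = rep(1:5, each = 4),
  time  = rep(c(1, 4, 8, 12), 5),
  sd    = exp(1 + 0.03 * rep(c(1, 4, 8, 12), 5) + rnorm(20, 0, 0.1)),
  n     = 50
)
dat$sd[c(3, 11, 18)] <- NA            # pretend these SDs were not reported

fit  <- lm(log(sd) ~ time, data = dat)  # simple stand-in for the NLME model
miss <- is.na(dat$sd)
dat$sd[miss] <- exp(predict(fit, newdata = dat[miss, ]))

dat$weight <- dat$n / dat$sd^2        # precision weights for the meta-analysis
head(dat)
```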

2.
This article proposes the use of optimization techniques and tools to maximize the likelihood when maximization cannot easily be accomplished with standard statistical software. In such situations, the use of the programming language AMPL with the freely available optimization solvers under the NEOS Server is an attractive alternative to algorithms developed for specific optimization problems in statistics. This article is meant to be a short tutorial introducing statisticians to these methods and tools. We provide an example to illustrate these methods. The necessary files for maximization are included in the Appendix so that the reader can carry out the optimization procedure described.
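For comparison, a hedged base-R analogue of the same idea (direct likelihood maximization with a general-purpose optimizer), using a simulated gamma sample; the article itself works in AMPL with NEOS solvers, not R:

```r
# Maximum likelihood for a two-parameter gamma model via optim().
set.seed(1)
x <- rgamma(100, shape = 2, rate = 0.5)

negloglik <- function(par) {
  -sum(dgamma(x, shape = exp(par[1]), rate = exp(par[2]), log = TRUE))
}
# Log-parameterization keeps shape and rate positive without explicit constraints.
fit <- optim(c(0, 0), negloglik, method = "BFGS", hessian = TRUE)
exp(fit$par)          # MLEs of shape and rate
solve(fit$hessian)    # approximate covariance matrix on the log scale
```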

3.
Meta-analysis in the presence of unexplained heterogeneity is frequently undertaken by using a random-effects model, in which the effects underlying different studies are assumed to be drawn from a normal distribution. Here we discuss the justification and interpretation of such models, by addressing in turn the aims of estimation, prediction and hypothesis testing. A particular issue that we consider is the distinction between inference on the mean of the random-effects distribution and inference on the whole distribution. We suggest that random-effects meta-analyses as currently conducted often fail to provide the key results, and we investigate the extent to which distribution-free, classical and Bayesian approaches can provide satisfactory methods. We conclude that the Bayesian approach has the advantage of naturally allowing for full uncertainty, especially for prediction. However, it is not without problems, including computational intensity and sensitivity to a priori judgements. We propose a simple prediction interval for classical meta-analysis and offer extensions to standard practice of Bayesian meta-analysis, making use of an example of studies of 'set shifting' ability in people with eating disorders.
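A short base-R sketch of the kind of prediction interval proposed, using hypothetical effect estimates and the DerSimonian-Laird heterogeneity estimate:

```r
# Random-effects summary plus an approximate 95% prediction interval for the
# effect in a new study (invented estimates yi with variances vi).
yi <- c(0.30, 0.11, 0.45, 0.21, 0.60)
vi <- c(0.04, 0.09, 0.05, 0.07, 0.06)
k  <- length(yi)

w    <- 1 / vi
Q    <- sum(w * (yi - sum(w * yi) / sum(w))^2)
tau2 <- max(0, (Q - (k - 1)) / (sum(w) - sum(w^2) / sum(w)))  # DL estimator

wstar <- 1 / (vi + tau2)
mu    <- sum(wstar * yi) / sum(wstar)
se_mu <- sqrt(1 / sum(wstar))

# Prediction interval on a t distribution with k - 2 degrees of freedom
pi <- mu + c(-1, 1) * qt(0.975, df = k - 2) * sqrt(tau2 + se_mu^2)
c(mean = mu, lower = pi[1], upper = pi[2])
```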

4.
Applied statisticians and pharmaceutical researchers are frequently involved in the design and analysis of clinical trials where at least one of the outcomes is binary. Treatments are judged by the probability of a positive binary response. A typical example is the noninferiority trial, where it is tested whether a new experimental treatment is practically not inferior to an active comparator with a prespecified margin δ. Except for the special case of δ = 0, no exact conditional test is available although approximate conditional methods (also called second‐order methods) can be applied. However, in some situations, the approximation can be poor and the logical argument for approximate conditioning is not compelling. The alternative is to consider an unconditional approach. Standard methods like the pooled z‐test are already unconditional although approximate. In this article, we review and illustrate unconditional methods with a heavy emphasis on modern methods that can deliver exact, or near exact, results. For noninferiority trials based on either rate difference or rate ratio, our recommendation is to use the so‐called E‐procedure, based on either the score or likelihood ratio statistic. This test is effectively exact, computationally efficient, and respects monotonicity constraints in practice. We support our assertions with a numerical study, and we illustrate the concepts developed in theory with a clinical example in pulmonary oncology; R code to conduct all these analyses is available from the authors.
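A hedged sketch of the approximate unconditional baseline the article mentions, a Wald-type z-test for noninferiority on the rate difference with invented counts; the recommended E-procedure is considerably more involved:

```r
# Approximate unconditional z-test of H0: p1 - p2 <= -delta (noninferiority).
noninf_z <- function(x1, n1, x2, n2, delta) {
  p1 <- x1 / n1; p2 <- x2 / n2      # experimental and comparator response rates
  se <- sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
  z  <- (p1 - p2 + delta) / se
  c(estimate = p1 - p2, z = z, p.value = pnorm(z, lower.tail = FALSE))
}
noninf_z(x1 = 84, n1 = 100, x2 = 86, n2 = 100, delta = 0.10)
```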

5.
Regression methods for common data types such as measured, count and categorical variables are well understood but increasingly statisticians need ways to model relationships between variable types such as shapes, curves, trees, correlation matrices and images that do not fit into the standard framework. Data types that lie in metric spaces but not in vector spaces are difficult to use within the usual regression setting, either as the response and/or a predictor. We represent the information in these variables using distance matrices, which requires only the specification of a distance function. A low-dimensional representation of such distance matrices can be obtained using methods such as multidimensional scaling. Once these variables have been represented as scores, an internal model linking the predictors and the responses can be developed using standard methods. We use the term scoring for the transformation from a new observation to a score, and backscoring for a method to represent a score as an observation in the data space. Both methods are essential for prediction and explanation. We illustrate the methodology for shape data, unregistered curve data and correlation matrices using motion capture data from an experiment to study the motion of children with cleft lip.
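A minimal base-R sketch of the scoring step, with random matrices standing in for the shape or curve objects:

```r
# Represent non-vector data by a distance matrix, reduce it with classical
# multidimensional scaling, and regress the response on the resulting scores.
set.seed(1)
shapes <- matrix(rnorm(30 * 10), nrow = 30)   # stand-in for 30 complex objects
y      <- rnorm(30, mean = rowMeans(shapes))  # a response linked to the objects

d      <- dist(shapes)                        # only a distance function is needed
scores <- cmdscale(d, k = 3)                  # low-dimensional MDS scores

summary(lm(y ~ scores))                       # internal model on the scores
```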

6.
Genetic data are in widespread use in ecological research, and an understanding of this type of data and its uses and interpretations will soon be an imperative for ecological statisticians. Here, we provide an introduction to the subject, intended for statisticians who have no previous knowledge of genetics. Although there are numerous types of genetic data, we restrict attention to multilocus genotype data from microsatellite loci. We look at two application areas in wide use: investigating population structure using genetic assignment and related techniques; and using genotype data in capture–recapture studies for estimating population size and demographic parameters. In each case, we outline the conceptual framework and draw attention to both the strengths and weaknesses of existing approaches to analysis and interpretation.
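An illustrative toy sketch, not the authors' method: likelihood-based assignment of one genotype to two candidate populations, assuming Hardy-Weinberg equilibrium, independent loci, and invented allele frequencies:

```r
# Assign an individual to the population with the higher genotype likelihood.
freqA <- c(loc1 = 0.7, loc2 = 0.4)   # frequency of allele "1" in population A
freqB <- c(loc1 = 0.3, loc2 = 0.6)   # and in population B
geno  <- c(loc1 = 1, loc2 = 2)       # copies of allele "1" carried at each locus

# Under HWE, the genotype at a locus is binomial(2, p) in the allele count.
hw_loglik <- function(p, g) sum(dbinom(g, size = 2, prob = p, log = TRUE))
c(popA = hw_loglik(freqA, geno), popB = hw_loglik(freqB, geno))  # take the max
```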

7.
The conventional random effects model for meta-analysis of proportions approximates within-study variation using a normal distribution. Due to potential approximation bias, particularly for the estimation of rare events such as some adverse drug reactions, the conventional method is considered inferior to the exact methods based on binomial distributions. In this article, we compare two existing exact approaches—beta binomial (B-B) and normal-binomial (N-B)—through an extensive simulation study with focus on the case of rare events that are commonly encountered in medical research. In addition, we incorporate the empirical (“sandwich”) estimator of variance into the two models to improve the robustness of the statistical inferences. To our knowledge, this is the first application of the sandwich estimator of variance to meta-analysis of proportions. The simulation study shows that the B-B approach tends to have substantially smaller bias and mean squared error than N-B for rare events with occurrences under 5%, while N-B outperforms B-B for relatively common events. Use of the sandwich estimator of variance improves the precision of estimation for both models. We illustrate the two approaches by applying them to two published meta-analyses from the fields of orthopedic surgery and prevention of adverse drug reactions.
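A minimal base-R sketch of the beta-binomial (B-B) fit by maximum likelihood, with invented rare-event counts:

```r
# Beta-binomial log-likelihood via the beta function; overall proportion is
# estimated by a/(a + b).
x <- c(1, 0, 2, 1, 0, 3)            # events per study (hypothetical)
n <- c(120, 95, 210, 150, 80, 300)  # study sizes

negloglik <- function(par) {
  a <- exp(par[1]); b <- exp(par[2])  # keep shape parameters positive
  -sum(lchoose(n, x) + lbeta(x + a, n - x + b) - lbeta(a, b))
}
fit <- optim(c(0, 4), negloglik)
ab  <- exp(fit$par)
ab[1] / sum(ab)                     # estimated overall event proportion
```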

8.
While randomized controlled trials (RCTs) are the gold standard for estimating treatment effects in medical research, there is increasing use of and interest in using real-world data for drug development. One such use case is the construction of external control arms for evaluation of efficacy in single-arm trials, particularly in cases where randomization is either infeasible or unethical. However, it is well known that treated patients in non-randomized studies may not be comparable to control patients—on either measured or unmeasured variables—and that the underlying population differences between the two groups may result in biased treatment effect estimates as well as increased variability in estimation. To address these challenges for analyses of time-to-event outcomes, we developed a meta-analytic framework that uses historical reference studies to adjust a log hazard ratio estimate in a new external control study for its additional bias and variability. The set of historical studies is formed by constructing external control arms for historical RCTs, and a meta-analysis compares the trial controls to the external control arms. Importantly, a prospective external control study can be performed independently of the meta-analysis using standard causal inference techniques for observational data. We illustrate our approach with a simulation study and an empirical example based on reference studies for advanced non-small cell lung cancer. In our empirical analysis, external control patients had lower survival than trial controls (hazard ratio: 0.907), but our methodology is able to correct for this bias. An implementation of our approach is available in the R package ecmeta.
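A heavily simplified sketch of the adjustment idea, not the ecmeta implementation (which uses a hierarchical model): historical trial-control versus external-control comparisons supply a bias distribution for the log hazard ratio, which is then propagated into a new study. All numbers are invented:

```r
# Fixed-effect summary of historical log-HR "bias" estimates, then a corrected
# estimate for a new external control study with inflated variance.
loghr_hist <- c(-0.12, -0.05, -0.15, -0.08)  # log HRs, trial vs external controls
se_hist    <- c(0.10, 0.12, 0.09, 0.11)

w      <- 1 / se_hist^2
bias   <- sum(w * loghr_hist) / sum(w)       # mean bias
v_bias <- 1 / sum(w)

loghr_new <- log(0.80); se_new <- 0.15       # unadjusted estimate, new study
adj    <- loghr_new - bias                   # bias-corrected log HR
se_adj <- sqrt(se_new^2 + v_bias)            # extra variability from correction
exp(adj + c(-1.96, 0, 1.96) * se_adj)        # adjusted HR with 95% CI
```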

9.
This paper provides an introduction to utilities for statisticians working mainly in clinical research who have not had experience of health technology assessment work. Utility is the numeric valuation applied to a health state based on the preference of being in that state relative to perfect health. Utilities are often combined with survival data in health economic modelling to obtain quality‐adjusted life years. There are several methods available for deriving the preference weights and the health states to which they are applied, and combining them to estimate utilities, and the clinical statistician has valuable skills that can be applied in ensuring the robustness of the trial design, data collection and analyses to obtain and handle this data. In addition to raising awareness of the subject and providing source references, the paper outlines the concepts and approaches around utilities using examples, discusses some of the key issues, and proposes areas where statisticians can collaborate with health economic colleagues to improve the quality of this important element of health technology assessment.
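A toy illustration of the utility-to-QALY calculation described, with hypothetical health states, preference weights, and durations:

```r
# Quality-adjusted life years as utility-weighted survival time.
states  <- c("progression-free", "progressed")
utility <- c(0.78, 0.52)   # preference weight for each state (1 = perfect health)
years   <- c(2.1, 0.9)     # time spent in each state
sum(utility * years)       # QALYs for this patient profile
```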

10.
Subgroup analysis is an integral part of access and reimbursement dossiers, in particular health technology assessments (HTAs), and HTA recommendations are often limited to subpopulations. HTA recommendations for subpopulations are not always clear and are not without controversy. In this paper, we review several HTA guidelines regarding subgroup analyses. We describe good statistical principles for subgroup analyses of clinical effectiveness to support HTAs and include case examples where HTA recommendations were given for subpopulations only. Unlike for regulatory submissions, pharmaceutical statisticians in most companies have had limited involvement in the planning, design and preparation of HTA/payer submissions. We hope to change this by highlighting how pharmaceutical statisticians should contribute to payers' submissions. This includes early engagement in reimbursement strategy discussions to influence the design, analysis and interpretation of phase III randomized clinical trials as well as meta-analyses/network meta-analyses. The focus of this paper is on subgroup analyses relating to clinical effectiveness, as we believe this is the first key step of statistical involvement and influence in the preparation of HTA and reimbursement submissions.

11.
The generalization of the Behrens–Fisher problem to comparing more than two means from nonhomogeneous populations has attracted the attention of statisticians for many decades. Several approaches offer different approximations to the distribution of the test statistic. The question of statistical properties of these approximations is still alive. Here, we present a brief overview of several approaches suggested in the literature and implemented in software with a focus on investigating the accuracy of p values as well as their dependence on nuisance parameters and on the underlying assumption of normality. We illustrate by simulation the behavior of p values. In addition to the Satterthwaite–Fai–Cornelius test, the Kenward–Roger test, the simple ANOVA F test, the parametric bootstrap test, and the generalized F test will be briefly discussed.
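A quick base-R sketch comparing two of the approaches discussed on simulated heteroscedastic groups, the classical F test versus a Welch-type approximation:

```r
# Three groups with very unequal variances and unequal sizes.
set.seed(1)
g <- factor(rep(1:3, times = c(10, 15, 40)))
y <- rnorm(65, mean = 0, sd = rep(c(1, 2, 6), times = c(10, 15, 40)))

anova(lm(y ~ g))[1, "Pr(>F)"]          # classical F test, equal variances assumed
oneway.test(y ~ g, var.equal = FALSE)  # Welch-type approximation
```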

12.
This article proposes a Bayesian approach for meta-analysis of correlation coefficients through a power prior. The primary purpose of this method is to allow meta-analytic researchers to evaluate the contribution and influence of each individual study on the estimated overall effect size through the power prior. We use the relationship between high-performance work systems and financial performance as an example to illustrate how to apply this method. We also introduce free online software that can be used to conduct the Bayesian meta-analysis proposed in this study. Implications and future directions are also discussed.
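A hedged normal-approximation sketch of the power prior idea on Fisher-z transformed correlations, where each study's likelihood contribution is discounted by a power a0 in [0, 1]; all data are invented and the paper's software is not reproduced here:

```r
# With normal likelihoods, raising a study's likelihood to the power a0 simply
# scales its precision by a0.
r  <- c(0.32, 0.25, 0.41, 0.18)   # observed correlations
n  <- c(80, 120, 60, 200)
a0 <- c(1.0, 0.5, 0.5, 0.2)       # study-specific discounting weights

z    <- atanh(r)                  # Fisher z transform, var(z) ~ 1/(n - 3)
prec <- a0 * (n - 3)              # powered precisions

post_mean <- sum(prec * z) / sum(prec)         # flat initial prior assumed
post_sd   <- sqrt(1 / sum(prec))
tanh(post_mean + c(-1.96, 0, 1.96) * post_sd)  # back-transformed 95% interval
```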

13.
Data from complex surveys are being used increasingly to build the same sort of explanatory and predictive models as those used in the rest of statistics. Unfortunately the assumptions underlying standard statistical methods are not even approximately valid for most survey data. The problem of parameter estimation has been largely solved, at least for routine data analysis, through the use of weighted estimating equations, and software for most standard analytical procedures is now available in the major statistical packages. One notable omission from standard software is an analogue of the likelihood ratio test. An exception is the Rao–Scott test for loglinear models in contingency tables. In this paper we show how the Rao–Scott test can be extended to handle arbitrary regression models. We illustrate the process of fitting a model to survey data with an example from NHANES.
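A sketch using the survey package, assuming it is installed; regTermTest with method = "LRT" gives a Rao–Scott style working likelihood ratio test (design and data are invented):

```r
library(survey)

# Hypothetical clustered design with sampling weights.
set.seed(1)
d <- data.frame(
  psu = rep(1:20, each = 10),
  w   = runif(200, 1, 3),
  x   = rnorm(200),
  z   = rbinom(200, 1, 0.4)
)
d$y <- rbinom(200, 1, plogis(-0.5 + 0.8 * d$x))

des <- svydesign(ids = ~psu, weights = ~w, data = d)
fit <- svyglm(y ~ x + z, design = des, family = quasibinomial())
regTermTest(fit, ~z, method = "LRT")  # working LRT for dropping z
```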

14.
This is an expository article. Here we show how the successfully used Kalman filter, popular with control engineers and other scientists, can be easily understood by statisticians if we use a Bayesian formulation and some well-known results in multivariate statistics. We also give a simple example illustrating the use of the Kalman filter for quality control work.
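A minimal univariate Kalman filter written as the Bayesian predict/update recursion the article describes, applied to a simulated local-level series:

```r
set.seed(1)
T_len <- 50
x <- cumsum(rnorm(T_len, 0, 0.5))   # latent state (random walk)
y <- x + rnorm(T_len, 0, 1)         # noisy observations

q <- 0.25; r <- 1                   # state and observation noise variances
m <- 0; v <- 10                     # diffuse prior on the initial state
est <- numeric(T_len)
for (t in seq_len(T_len)) {
  v_pred <- v + q                   # predict: prior variance for x_t
  k      <- v_pred / (v_pred + r)   # Kalman gain = posterior weight on the data
  m      <- m + k * (y[t] - m)      # update: posterior mean
  v      <- (1 - k) * v_pred        # posterior variance
  est[t] <- m
}
head(cbind(truth = x, observed = y, filtered = est))
```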

15.
On-line auctions pose many challenges for the empirical researcher, one of which is the effective and reliable modelling of price paths. We propose a novel way of modelling price paths in eBay's on-line auctions by using functional data analysis. One of the practical challenges is that the functional objects are sampled only very sparsely and unevenly. Most approaches rely on smoothing to recover the underlying functional object from the data, which can be difficult if the data are irregularly distributed. We present a new approach that can overcome this challenge. The approach is based on the ideas of mixed models. Specifically, we propose a semiparametric mixed model with boosting to recover the functional object. As well as being able to handle sparse and unevenly distributed data, the model also results in conceptually more meaningful functional objects. In particular, we motivate our method within the framework of eBay's on-line auctions. On-line auctions produce monotonic increasing price curves that are often correlated across auctions. The semiparametric mixed model accounts for this correlation in a parsimonious way. It also manages to capture the underlying monotonic trend in the data without imposing model constraints. Our application shows that the resulting functional objects are conceptually more appealing. Moreover, when used to forecast the outcome of an on-line auction, our approach also results in more accurate price predictions compared with standard approaches. We illustrate our model on a set of 183 closed auctions for Palm M515 personal digital assistants.

16.
Meta-analytical approaches have been extensively used to analyze medical data. In most cases, the data come from different studies or independent trials with similar characteristics. However, these methods can be applied in a broader sense. In this paper, we show how existing meta-analytic techniques can also be used when dealing with parameters estimated from individual hierarchical data. Specifically, we propose to apply statistical methods that account for the variances (and possibly covariances) of such measures. The estimated parameters together with their estimated variances can be incorporated into a general linear mixed model framework. We illustrate the methodology by using data from a first-in-man study and a simulated data set. The analysis was implemented with the SAS procedure MIXED and example code is offered.

17.
A meta-analysis of a continuous outcome measure may involve missing standard errors. Whether this is a problem depends on the assumptions made about the population standard deviation. Multiple imputation can be used to impute missing values while allowing for uncertainty in the imputation. Markov chain Monte Carlo simulation is a multiple imputation technique for generating posterior predictive distributions for missing data. We present an example of imputing missing variances using WinBUGS. The example highlights the importance of checking model assumptions, whether for missing or observed data.
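A base-R sketch of the same idea (the article itself uses WinBUGS): draw the missing log-standard deviations from a posterior predictive distribution fitted to the observed ones, yielding multiple imputed data sets that carry imputation uncertainty:

```r
set.seed(1)
sd_obs <- c(4.2, 5.1, 3.8, 4.9, NA, 4.4, NA)  # study SDs, two missing
ls <- log(sd_obs[!is.na(sd_obs)])
mu <- mean(ls); s <- sd(ls); k <- length(ls)

M <- 5
imputations <- replicate(M, {
  # Propagate parameter uncertainty: draw (sigma^2, mu) before drawing the SDs
  # (conjugate posterior under a noninformative prior).
  sig2 <- (k - 1) * s^2 / rchisq(1, k - 1)
  mu_d <- rnorm(1, mu, sqrt(sig2 / k))
  out  <- sd_obs
  out[is.na(out)] <- exp(rnorm(sum(is.na(sd_obs)), mu_d, sqrt(sig2)))
  out
})
imputations   # one column per imputed data set
```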

18.
Organizations tailor their mentoring strategies to accommodate internal resources and preferences, producing different approaches in academic, government, and corporate environments. Across these settings, three common barriers impede effective mentoring of statisticians: overspecialization, time constraints, and geographic dispersion. The authors share mentoring strategies that have emerged at their organization, Mathematica Policy Research, to overcome these obstacles. Practices include creating a methodology working group to unite researchers with diverse backgrounds, integrating mentoring into existing workflows, and harnessing modern technological infrastructure to facilitate virtual mentoring. Although these strategies emerged within a specific professional context, they suggest opportunities for statisticians to expand the channels through which mentorship can occur.

19.
The density function is a fundamental concept in data analysis. Non-parametric methods, including the kernel smoothing estimate, are available if the data are completely observed. However, in studies such as diagnostic studies following a two-stage design, the membership of some of the subjects may be missing. Simply ignoring those subjects with unknown membership is valid only in the missing-completely-at-random (MCAR) situation. In this paper, we consider kernel smoothing estimates of the density functions, using inverse probability approaches to address the missing values. We illustrate the approaches with simulation studies and real data from a mental health study.
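A minimal sketch of an inverse-probability-weighted kernel density estimate, with an invented verification model (missing at random given x):

```r
# Subjects with known membership are up-weighted by 1 over their verification
# probability, then fed to a weighted kernel density estimate.
set.seed(1)
x        <- rnorm(500, mean = 2)
p_verify <- plogis(0.5 * x)              # verification depends on x
verified <- rbinom(500, 1, p_verify) == 1

xv <- x[verified]
w  <- 1 / p_verify[verified]             # inverse probability weights
plot(density(xv, weights = w / sum(w)),  # weights must sum to 1 in density()
     main = "IPW kernel density")
lines(density(x), lty = 2)               # complete-data estimate for reference
```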

20.
Benefit-risk assessment is a fundamental element of drug development with the aim to strengthen decision making for the benefit of public health. Appropriate benefit-risk assessment can provide useful information for proactive intervention in health care settings, which could save lives, reduce litigation, improve patient safety and health care outcomes, and furthermore, lower overall health care costs. Recent development in this area presents challenges and opportunities to statisticians in the pharmaceutical industry. We review the development and examine statistical issues in comparative benefit-risk assessment. We argue that a structured benefit-risk assessment should be a multi-disciplinary effort involving experts in clinical science, safety assessment, decision science, health economics, epidemiology and statistics. Well planned and conducted analyses with clear consideration on benefit and risk are critical for appropriate benefit-risk assessment. Pharmaceutical statisticians should extend their knowledge to relevant areas such as pharmaco-epidemiology, decision analysis, modeling, and simulation to play an increasingly important role in comparative benefit-risk assessment.
