Similar documents (20 results)
1.
Summary.  Alongside the development of meta-analysis as a tool for summarizing research literature, there is renewed interest in broader forms of quantitative synthesis that are aimed at combining evidence from different study designs or evidence on multiple parameters. These have been proposed under various headings: the confidence profile method, cross-design synthesis, hierarchical models and generalized evidence synthesis. Models that are used in health technology assessment are also referred to as representing a synthesis of evidence in a mathematical structure. Here we review alternative approaches to statistical evidence synthesis, and their implications for epidemiology and medical decision-making. The methods include hierarchical models, models informed by evidence on different functions of several parameters and models incorporating both of these features. The need to check for consistency of evidence when using these powerful methods is emphasized. We develop a rationale for evidence synthesis that is based on Bayesian decision modelling and expected value of information theory, which stresses not only the need for a lack of bias in estimates of treatment effects but also a lack of bias in assessments of uncertainty. The increasing reliance of governmental bodies like the UK National Institute for Clinical Excellence on complex evidence synthesis in decision modelling is discussed.

2.
Recently, many articles have obtained analytical expressions for the biases of various maximum likelihood estimators, despite their lack of closed-form solution. These bias expressions have provided an attractive alternative to the bootstrap. Unless the bias function is “flat,” however, the expressions are being evaluated at the wrong point(s). We propose an “improved” analytical bias-adjusted estimator, in which the bias expression is evaluated at a more appropriate point (at the bias adjusted estimator itself). Simulations illustrate that the improved analytical bias-adjusted estimator can eliminate significantly more bias than the simple estimator, which has been well established in the literature.
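Evaluating the bias expression at the adjusted estimate itself amounts to solving a fixed-point equation. A minimal sketch, using the exponential-rate MLE (whose first-order bias λ/(n−1) is a standard textbook result) purely as an illustrative bias function:

```python
def bias_corrected(theta_hat, bias, tol=1e-12, max_iter=100):
    """Solve theta = theta_hat - bias(theta) by fixed-point iteration,
    i.e. evaluate the bias expression at the adjusted estimate itself."""
    theta = theta_hat
    for _ in range(max_iter):
        new = theta_hat - bias(theta)
        if abs(new - theta) < tol:
            return new
        theta = new
    return theta

# Illustration: the exponential-rate MLE lam_hat = 1/xbar has first-order
# bias lam/(n-1), so the fixed point solves lam = lam_hat - lam/(n-1).
n = 10
lam_hat = 2.0
bias = lambda lam: lam / (n - 1)

simple = lam_hat - bias(lam_hat)          # bias evaluated at the raw MLE
improved = bias_corrected(lam_hat, bias)  # bias evaluated at the fixed point
```

In this particular model the fixed point has the closed form lam_hat·(n−1)/n = 1.8 and removes the bias exactly; in general the iteration is run numerically against the derived bias expression.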

3.
In many situations it is necessary to test the equality of the means of two normal populations when the variances are unknown and unequal. This paper studies the celebrated and controversial Behrens-Fisher problem via an adjusted likelihood-ratio test using the maximum likelihood estimates of the parameters under both the null and the alternative models. This procedure allows the significance level to be adjusted in accordance with the degrees of freedom, balancing the risk due to the bias in using the maximum likelihood estimates against the risk due to the increase in variance. A large-scale Monte Carlo investigation shows that −2 ln Λ has an empirical chi-square distribution with fractional degrees of freedom rather than a chi-square distribution with one degree of freedom. Monte Carlo power curves are also examined under several different conditions to compare the performance of several conventional procedures with that of this procedure with respect to control of Type I error and power.
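The moment-matching idea behind a fractional degrees of freedom can be sketched by simulation: generate the likelihood-ratio statistic under the null and fit a chi-square by equating its mean to the empirical mean. Sample sizes, variances, and the fixed-point profile maximisation below are illustrative assumptions, not the paper's exact procedure:

```python
import math
import random

def neg2_log_lambda(x, y, iters=50):
    """-2 ln(likelihood ratio) for H0: equal means, variances unrestricted.
    The restricted MLE of the common mean is found by fixed-point iteration."""
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    v1 = sum((xi - m1) ** 2 for xi in x) / n1   # unrestricted MLE variances
    v2 = sum((yi - m2) ** 2 for yi in y) / n2
    mu = (m1 + m2) / 2
    for _ in range(iters):
        s1 = sum((xi - mu) ** 2 for xi in x) / n1
        s2 = sum((yi - mu) ** 2 for yi in y) / n2
        w1, w2 = n1 / s1, n2 / s2               # precision-weighted update
        mu = (w1 * m1 + w2 * m2) / (w1 + w2)
    s1 = sum((xi - mu) ** 2 for xi in x) / n1
    s2 = sum((yi - mu) ** 2 for yi in y) / n2
    return n1 * math.log(s1 / v1) + n2 * math.log(s2 / v2)

rng = random.Random(1)
stats = []
for _ in range(2000):
    x = [rng.gauss(0.0, 1.0) for _ in range(5)]
    y = [rng.gauss(0.0, 3.0) for _ in range(7)]  # same mean, unequal variances
    stats.append(neg2_log_lambda(x, y))

# A chi-square's mean equals its degrees of freedom, so moment matching gives
nu = sum(stats) / len(stats)   # fractional df, somewhat above 1 in small samples
```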

4.
In many situations information from a sample of individuals can be supplemented by population level information on the relationship between a dependent variable and explanatory variables. Inclusion of the population level information can reduce bias and increase the efficiency of the parameter estimates. Population level information can be incorporated via constraints on functions of the model parameters. In general the constraints are nonlinear, making the task of maximum likelihood estimation harder. In this paper we develop an alternative approach exploiting the notion of an empirical likelihood. It is shown that within the framework of generalised linear models, the population level information corresponds to linear constraints, which are comparatively easy to handle. We provide a two-step algorithm that produces parameter estimates using only unconstrained estimation. We also provide computable expressions for the standard errors. We give an application to demographic hazard modelling by combining panel survey data with birth registration data to estimate annual birth probabilities by parity.

5.
Summary.  Policy decisions often require synthesis of evidence from multiple sources, and the source studies typically vary in rigour and in relevance to the target question. We present simple methods of allowing for differences in rigour (or lack of internal bias) and relevance (or lack of external bias) in evidence synthesis. The methods are developed in the context of reanalysing a UK National Institute for Clinical Excellence technology appraisal in antenatal care, which includes eight comparative studies. Many were historically controlled, only one was a randomized trial and doses, populations and outcomes varied between studies and differed from the target UK setting. Using elicited opinion, we construct prior distributions to represent the biases in each study and perform a bias-adjusted meta-analysis. Adjustment had the effect of shifting the combined estimate away from the null by approximately 10%, and the variance of the combined estimate was almost tripled. Our generic bias modelling approach allows decisions to be based on all available evidence, with less rigorous or less relevant studies downweighted by using computationally simple methods.
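The pooling step can be sketched minimally with hypothetical numbers: each study's estimate is shifted by the elicited mean bias and its variance inflated by the elicited bias variance before inverse-variance weighting (the paper's actual elicitation and bias modelling are richer than this):

```python
def bias_adjusted_meta(estimates, variances, bias_means, bias_vars):
    """Inverse-variance meta-analysis after subtracting each study's elicited
    mean bias and adding the uncertainty about that bias to its variance."""
    adj_est = [e - b for e, b in zip(estimates, bias_means)]
    adj_var = [v + bv for v, bv in zip(variances, bias_vars)]
    weights = [1.0 / v for v in adj_var]
    pooled = sum(w * e for w, e in zip(weights, adj_est)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Hypothetical two-study example
pooled, pooled_var = bias_adjusted_meta(
    estimates=[0.5, 0.8], variances=[0.04, 0.09],
    bias_means=[0.1, 0.2], bias_vars=[0.02, 0.05])
```

Downweighting of less rigorous or less relevant studies happens automatically: a large elicited bias variance shrinks that study's weight, and the adjusted pooled variance exceeds the unadjusted one, as in the reanalysis.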

6.
We consider the first-order Poisson autoregressive model proposed by McKenzie [Some simple models for discrete variate time series. Water Resour Bull. 1985;21:645–650] and Al-Osh and Alzaid [First-order integer valued autoregressive (INAR(1)) process. J Time Ser Anal. 1987;8:261–275], which may be suitable in situations where the time series data are non-negative and integer valued. We derive the second-order bias of the squared difference estimator [Weiß. Process capability analysis for serially dependent processes of Poisson counts. J Stat Comput Simul. 2012;82:383–404] for one of the parameters and show that this bias can be used to define a bias-reduced estimator. The behaviour of a modified conditional least-squares estimator is also studied. Furthermore, we assess the asymptotic properties of the estimators discussed here. We present numerical evidence, based upon Monte Carlo simulation studies, showing that the proposed bias-adjusted estimator outperforms the other estimators in small samples. We also present an application to a real data set.
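A sketch of the INAR(1) model and the plain conditional least-squares estimator of the thinning parameter (parameter values and the Poisson sampler are illustrative choices; the paper's bias-corrected variants are not reproduced here):

```python
import math
import random

def rpois(lam, rng):
    """Poisson draw via Knuth's multiplication method (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_inar1(alpha, lam, T, rng):
    """X_t = alpha o X_{t-1} + eps_t, with binomial thinning 'o' and
    Poisson(lam) innovations; non-negative integer-valued by construction."""
    x = [rpois(lam / (1 - alpha), rng)]      # start near the stationary mean
    for _ in range(T - 1):
        thinned = sum(rng.random() < alpha for _ in range(x[-1]))
        x.append(thinned + rpois(lam, rng))
    return x

def cls_alpha(x):
    """Conditional least-squares estimate of alpha: regress X_t on X_{t-1}."""
    y, z = x[1:], x[:-1]
    my, mz = sum(y) / len(y), sum(z) / len(z)
    num = sum((a - my) * (b - mz) for a, b in zip(y, z))
    den = sum((b - mz) ** 2 for b in z)
    return num / den

rng = random.Random(42)
x = simulate_inar1(alpha=0.5, lam=2.0, T=5000, rng=rng)
alpha_hat = cls_alpha(x)   # consistent, but biased downward in short series
```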

7.
When making decisions regarding the investment in and design of a Phase 3 programme in the development of a new drug, the results from preceding Phase 2 trials are an important source of information. However, only projects in which the Phase 2 results show promising treatment effects will typically be considered for a Phase 3 investment decision. This implies that, for those projects where Phase 3 is pursued, the underlying Phase 2 estimates are subject to selection bias. In this article we investigate the nature of this selection bias for a selection of distributions of the treatment effect. We illustrate some properties of Bayesian estimates, whose shrinkage of the Phase 2 estimate counteracts the selection bias. We further give some empirical guidance regarding the choice of prior distribution and comment on the consequences for decision-making in investment and planning for Phase 3 programmes.
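Under a normal prior and a normal likelihood the shrinkage has a simple closed form; a minimal sketch with hypothetical Phase 2 numbers (the article's substantive question is the choice of prior, not this algebra):

```python
def shrunk_estimate(obs, se, prior_mean, prior_sd):
    """Posterior mean/sd for a normal prior and a normal likelihood: the raw
    Phase 2 estimate is pulled toward the prior mean, counteracting the
    optimistic selection bias of a 'promising' result."""
    w = prior_sd ** 2 / (prior_sd ** 2 + se ** 2)   # weight on the data
    post_mean = prior_mean + w * (obs - prior_mean)
    post_sd = (w * se ** 2) ** 0.5
    return post_mean, post_sd

# Hypothetical promising Phase 2 result, sceptical prior
post_mean, post_sd = shrunk_estimate(obs=0.6, se=0.2,
                                     prior_mean=0.2, prior_sd=0.3)
```

The posterior mean lands strictly between the optimistic observed effect and the sceptical prior mean, which is the shrinkage behaviour the article exploits for Phase 3 planning.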

8.
This paper addresses the problem of obtaining maximum likelihood estimates for the parameters of the Pearson Type I distribution (beta distribution with unknown end points and shape parameters). Since they do not seem to have appeared in the literature, the likelihood equations and the information matrix are derived. The regularity conditions which ensure asymptotic normality and efficiency are examined, and some apparent conflicts in the literature are noted. To ensure regularity, the shape parameters must be greater than two, giving an (asymmetrical) bell-shaped distribution with high contact in the tails. A numerical investigation was carried out to explore the bias and variance of the maximum likelihood estimates and their dependence on sample size. The numerical study indicated that only for large samples (n ≥ 1000) does the bias in the estimates become small and the Cramér-Rao bound give a good approximation for their variance. The likelihood function has a global maximum which corresponds to parameter estimates that are inadmissible. Useful parameter estimates can be obtained at a local maximum, which is sometimes difficult to locate when the sample size is small.

9.
The Finnish common toad data of Heikkinen and Hogmander are reanalysed using an alternative fully Bayesian model that does not require a pseudolikelihood approximation and an alternative prior distribution for the true presence or absence status of toads in each 10 km×10 km square. Markov chain Monte Carlo methods are used to obtain posterior probability estimates of the square-specific presences of the common toad and these are presented as a map. The results are different from those of Heikkinen and Hogmander and we offer an explanation in terms of the prior used for square-specific presence of the toads. We suggest that our approach is more faithful to the data and avoids unnecessary confounding of effects. We demonstrate how to extend our model efficiently with square-specific covariates and illustrate this by introducing deterministic spatial changes.

10.
In this article, we develop an empirical Bayesian approach for the Bayesian estimation of parameters in four bivariate exponential (BVE) distributions. We adopt a gamma prior for the parameters of the model, estimating the hyperparameters by the method of moments and by maximum likelihood estimates (MLEs). A simulation study was conducted to compute empirical Bayesian estimates of the parameters and their standard errors. We compare the posterior modes of the parameters obtained under different prior distributions; the Bayesian estimates based on gamma priors are much closer to the true values than those based on improper priors. We also use an MCMC method to obtain the posterior means and compare them with the results under improper priors and with the classical estimates, the MLEs.
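The method-of-moments step for the gamma hyperparameters can be sketched as follows (the per-group MLE values are made up for illustration):

```python
def gamma_hyperparams_mom(mles):
    """Fit a Gamma(shape a, rate b) prior to a collection of per-group
    estimates by matching moments: mean = a/b, variance = a/b**2."""
    n = len(mles)
    m = sum(mles) / n
    v = sum((x - m) ** 2 for x in mles) / (n - 1)
    b = m / v          # rate
    a = m * b          # shape
    return a, b

# Hypothetical per-group MLEs of a positive parameter
a, b = gamma_hyperparams_mom([1.2, 0.8, 1.5, 1.0, 0.9])
```

The fitted Gamma(a, b) then serves as the empirical prior, with a/b reproducing the observed mean of the estimates and a/b² their spread.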

11.
Olaf Bunke. Statistics, 2013, 47(6): 467–481
Bayes estimates are derived in multivariate linear models with unknown distribution. The prior distribution is defined using a Dirichlet prior for the unknown error distribution and a normal-Wishart distribution for the parameters. The posterior distribution is determined and explicit expressions are given in the special cases of location-scale and two-sample models. The calculation of self-informative limits of Bayes estimates yields standard estimates.  相似文献   

12.
Quantile regression models, as an important tool in practice, can describe the effects of risk factors on the entire conditional distribution of the response variable, with estimates that are robust to outliers. However, there has been little discussion of quantile regression for longitudinal data with both missing responses and measurement errors, which are commonly seen in practice. We develop a weighted and bias-corrected quantile loss function for quantile regression with longitudinal data that accommodates both missingness and measurement error. Additionally, we establish the asymptotic properties of the proposed estimator. Simulation studies demonstrate the expected performance in correcting the bias resulting from missingness and measurement errors. Finally, we investigate the Lifestyle Education for Activity and Nutrition study and confirm the effectiveness of the intervention in producing weight loss after nine months at the upper quantiles.
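The building block is the check (quantile) loss with observation weights, e.g. inverse probabilities of being observed; a minimal sketch (the paper's bias correction for measurement error is a further layer not shown here):

```python
def weighted_check_loss(residuals, tau, weights):
    """Weighted quantile loss: sum_i w_i * rho_tau(r_i), where
    rho_tau(r) = r * (tau - 1{r < 0}) penalises positive residuals at
    rate tau and negative residuals at rate 1 - tau."""
    return sum(w * r * (tau - (1.0 if r < 0 else 0.0))
               for r, w in zip(residuals, weights))

# tau = 0.25: positive residuals cost 0.25 per unit, negative ones 0.75
loss = weighted_check_loss([2.0, -1.0], tau=0.25, weights=[1.0, 2.0])
```

Minimising this weighted loss over regression coefficients recovers the tau-th conditional quantile while the weights re-balance for responses missing at random.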

13.
By running Monte Carlo simulations, we compare different estimation strategies of ordered response models in the presence of non-random unobserved heterogeneity. We find that very simple binary recoding schemes deliver parameter estimates with very low bias and high efficiency. Furthermore, if the researcher is interested in the relative size of parameters, the simple linear fixed effects model is the method of choice.

14.
We conducted confirmatory factor analysis (CFA) of responses (N=803) to a self‐reported measure of optimism, using full‐information estimation via adaptive quadrature (AQ), an alternative estimation method for ordinal data. We evaluated AQ results in terms of the number of iterations required to achieve convergence, model fit, parameter estimates, standard errors (SE), and statistical significance, across four link‐functions (logit, probit, log‐log, complementary log‐log) using 3–10 and 20 quadrature points. We compared AQ results with those obtained using maximum likelihood, robust maximum likelihood, and robust diagonally weighted least‐squares estimation. Compared to the other two link‐functions, logit and probit not only produced fit statistics, parameter estimates, SEs, and levels of significance that varied less across numbers of quadrature points, but also fitted the data better and provided larger completely standardised loadings than did maximum likelihood and diagonally weighted least‐squares. Our findings demonstrate the viability of using full‐information AQ to estimate CFA models with real‐world ordinal data.

15.
A common assumption in fitting panel data models is normality of the stochastic subject effects. This can be extremely restrictive, obscuring potential features of the true distribution. The objective of this article is to propose a modeling strategy, from a semi-parametric Bayesian perspective, to specify a flexible distribution for the random effects in dynamic panel data models. This is addressed here by assuming a Dirichlet process mixture model, introducing a Dirichlet process prior for the random-effects distribution. We address the role of initial conditions in dynamic processes, emphasizing joint modeling of start-up and subsequent responses. We adopt Gibbs sampling techniques to approximate posterior estimates. These topics are illustrated by a simulation study and also by testing hypothetical models in two empirical contexts drawn from economic studies. We use modified versions of information criteria to compare the fitted models.

16.
The authors discuss the bias of the estimate of the variance of the overall effect synthesized from individual studies by the variance-weighted method. This bias is proven to be negative. Furthermore, the conditions for, likelihood of, and magnitude of underestimation from this conventional estimate are studied under the assumption that the estimates of the effect follow normal distributions with a common mean. The likelihood of underestimation is very high (e.g. greater than 85% when the sample sizes in the two combined studies are less than 120). Alternative, less biased estimates for the cases with and without homogeneity of the variances are given, adjusting for the sample size and the variation of the population variance. In addition, the sample-size-weighted method is suggested if the consistency of the sample variances is violated. Finally, a real example is presented to show the differences among the three estimation methods.
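The underestimation is easy to reproduce by simulation; a sketch with two small hypothetical studies, comparing the average of the conventional variance estimate with the actual sampling variance of the pooled effect:

```python
import random

rng = random.Random(7)

def study(n, mu, sigma):
    """One simulated study: sample mean and its estimated variance s^2/n."""
    xs = [rng.gauss(mu, sigma) for _ in range(n)]
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m, s2 / n

pooled, est_var = [], []
for _ in range(5000):
    m1, v1 = study(8, 0.0, 1.0)
    m2, v2 = study(12, 0.0, 1.5)
    w1, w2 = 1.0 / v1, 1.0 / v2          # weights use *estimated* variances
    pooled.append((w1 * m1 + w2 * m2) / (w1 + w2))
    est_var.append(1.0 / (w1 + w2))      # conventional variance estimate

mean_est_var = sum(est_var) / len(est_var)
mp = sum(pooled) / len(pooled)
true_var = sum((p - mp) ** 2 for p in pooled) / (len(pooled) - 1)
# mean_est_var falls short of true_var: the conventional estimate is biased low
```

The gap arises because the weights are correlated with the estimation errors in the study variances, which is exactly the negative bias the authors analyse.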

17.
Statistical models are sometimes incorporated into computer software for making predictions about future observations. When the computer model consists of a single statistical model this corresponds to estimation of a function of the model parameters. This paper is concerned with the case that the computer model implements multiple, individually-estimated statistical sub-models. This case frequently arises, for example, in models for medical decision making that derive parameter information from multiple clinical studies. We develop a method for calculating the posterior mean of a function of the parameter vectors of multiple statistical models that is easy to implement in computer software, has high asymptotic accuracy, and has a computational cost linear in the total number of model parameters. The formula is then used to derive a general result about posterior estimation across multiple models. The utility of the results is illustrated by application to clinical software that estimates the risk of fatal coronary disease in people with diabetes.
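When the sub-model posteriors are independent, the same quantity can be approximated by brute-force Monte Carlo, which serves as a check on a fast analytic formula; a sketch with hypothetical normal posteriors:

```python
import random

rng = random.Random(3)

def posterior_mean_of_function(g, samplers, n_draws=20000):
    """Approximate E[g(theta_1,...,theta_k)] under independent sub-model
    posteriors: draw each parameter from its own posterior and average g."""
    total = 0.0
    for _ in range(n_draws):
        total += g(*(s() for s in samplers))
    return total / n_draws

# Hypothetical: overall risk as the product of two independently estimated
# quantities, each with a normal posterior
s1 = lambda: rng.gauss(0.3, 0.05)
s2 = lambda: rng.gauss(0.5, 0.10)
risk = posterior_mean_of_function(lambda a, b: a * b, [s1, s2])
```

Unlike an analytic approximation with cost linear in the number of parameters, this needs many function evaluations; but under independence E[ab] = 0.15 here, giving a known target to validate against.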

18.
Semiparametric methods provide estimates of finite parameter vectors without requiring that the complete data-generation process be assumed to lie in a finite-dimensional family. By avoiding bias from incorrect specification, such estimators gain robustness, although usually at the cost of decreased precision. The most familiar semiparametric method in econometrics is ordinary least squares, which estimates the parameters of a linear regression model without requiring that the distribution of the disturbances be in a finite-parameter family. The recent literature in econometric theory has extended semiparametric methods to a variety of non-linear models, including models appropriate for the analysis of censored duration data. Horowitz and Newman make perhaps the first empirical application of these methods, to data on employment duration. Their analysis provides insights into the practical problems of implementing these methods, and limited information on performance. Their data set, containing 226 male controls from the Denver income maintenance experiment in 1971-74, does not show any significant covariates (except race), even when a fully parametric model is assumed. Consequently, the authors are unable to reject the fully parametric model in a test against the alternative semiparametric estimators. This provides some negative, but tenuous, evidence that in practical applications the reduction in bias from semiparametric estimators is insufficient to offset the loss in precision. Larger samples, and data sets with strongly significant covariates, will be needed before firmer conclusions can be drawn.

19.
Measurement error is a commonly addressed problem in psychometrics and the behavioral sciences, particularly where gold-standard data either do not exist or are too expensive to obtain. The Bayesian approach can be utilized to adjust for the bias that results from measurement error in tests. Bayesian methods offer other practical advantages for the analysis of epidemiological data, including the possibility of incorporating relevant prior scientific information and the ability to make inferences that do not rely on large sample assumptions. In this paper we consider a logistic regression model where both the response and a binary covariate are subject to misclassification. We assume both a continuous measure and a binary diagnostic test are available for the response variable, but no gold-standard test is assumed available. We consider a fully Bayesian analysis that affords such adjustments, accounting for the sources of error and correcting estimates of the regression parameters. Based on the results from our example and simulations, the models that account for misclassification produce more statistically significant results than the models that ignore misclassification. A real data example on math disorders is considered.

20.
Evaluation of the impact of nosocomial infection on the duration of hospital stay usually relies on estimates obtained in prospective cohort studies. However, the statistical methods used to estimate the extra length of stay are often inadequate: a naive comparison of the duration of stay in infected and non-infected patients does not properly estimate the extra hospitalisation time due to nosocomial infections. Matching on the duration of stay prior to infection can compensate in part for the bias of such ad hoc methods. New model-based approaches have been developed to estimate the excess length of stay. It will be demonstrated that statistical models based on multivariate counting processes provide an appropriate framework to analyse the occurrence and impact of nosocomial infections. We propose and investigate new approaches to estimate the extra time spent in hospital attributable to nosocomial infections, based on functionals of the transition probabilities in multistate models. Additionally, within the class of structural nested failure time models an alternative approach to estimate the extra stay due to nosocomial infections is derived. The methods are illustrated using data from a cohort study on 756 patients admitted to intensive care units at the University Hospital in Freiburg.
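The time-dependent bias of the naive comparison can be demonstrated with a toy discrete-time simulation in which infection has no effect at all on the daily discharge hazard (all hazards are hypothetical):

```python
import random

rng = random.Random(11)

# Daily hazards: infection 0.05 while in hospital, discharge 0.10 regardless
# of infection status, so the true extra stay caused by infection is zero.
los_infected, los_uninfected = [], []
for _ in range(20000):
    day, infected = 0, False
    while True:
        day += 1
        if not infected and rng.random() < 0.05:
            infected = True
        if rng.random() < 0.10:
            break                       # discharged on this day
    (los_infected if infected else los_uninfected).append(day)

naive_extra = (sum(los_infected) / len(los_infected)
               - sum(los_uninfected) / len(los_uninfected))
# naive_extra is several days > 0 even though infection changes nothing:
# patients must survive in hospital long enough to become infected, so
# conditioning on infection status at discharge selects the long stays
```

This is precisely the bias that multistate models avoid by treating infection as a time-dependent intermediate state rather than a baseline covariate.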
