Similar Literature
20 similar documents retrieved.
1.
The rise over recent years in the use of network meta-analyses (NMAs) in clinical research and health economic analysis has been little short of meteoric, driven in part by a desire from decision makers to extend inferences beyond direct comparisons in controlled clinical trials. But is the increased use of, and reliance on, NMAs justified? Do such analyses provide a reliable basis for the relative effectiveness assessment of medicines and, in turn, for critical decisions relating to healthcare access and provisioning? And can such analyses also be used earlier, as part of the evidence base for licensure? Despite several important publications highlighting the inherently unverifiable assumptions underpinning NMAs, these assumptions, and the associated potential for serious bias, are often overlooked in the reporting and interpretation of NMAs. A more cautious, and better informed, approach to the use and interpretation of NMAs in clinical research is warranted given the assumptions that sit behind such analyses. Copyright © 2015 John Wiley & Sons, Ltd.

2.
The t-test of an individual coefficient is used widely in models of qualitative choice. However, it is well known that the t-test can yield misleading results when the sample size is small. This paper provides some experimental evidence on the finite sample properties of the t-test in models with sample selection biases, through a comparison of the t-test with the likelihood ratio and Lagrange multiplier tests, which are asymptotically equivalent to the squared t-test. The finite sample problems with the t-test are shown to be alarming, and much more serious than in models such as binary choice models. An empirical example is also presented to highlight the differences in the calculated test statistics.
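The comparison at issue is between the squared t (Wald) statistic and its asymptotically equivalent likelihood-ratio counterpart in small samples. The sketch below is a minimal Monte Carlo illustration of that kind of comparison; it uses a plain probit model as a simplified stand-in rather than the paper's sample-selection model, omits the Lagrange multiplier test, and all sample sizes and parameter values are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

def wald_vs_lr(n=30, n_sims=2000, seed=3):
    """Monte Carlo size of the squared-t (Wald) and likelihood-ratio tests of
    H0: beta2 = 0 in a small-sample probit model.  This is a simplified
    stand-in, not the paper's selection model; all values are assumptions."""
    rng = np.random.default_rng(seed)
    crit = 3.841  # chi-squared(1) critical value at the 5% level
    rej_wald = rej_lr = done = 0
    for _ in range(n_sims):
        x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
        y = (0.5 + x1 + rng.standard_normal(n) > 0).astype(float)  # beta2 = 0 under H0
        X_full = sm.add_constant(np.column_stack([x1, x2]))
        X_rest = sm.add_constant(x1)
        try:
            full = sm.Probit(y, X_full).fit(disp=0)
            rest = sm.Probit(y, X_rest).fit(disp=0)
        except Exception:
            continue  # skip samples with separation or convergence failure
        done += 1
        rej_wald += (full.params[2] / full.bse[2]) ** 2 > crit  # squared t-test
        rej_lr += 2.0 * (full.llf - rest.llf) > crit            # likelihood ratio test
    return rej_wald / done, rej_lr / done

print(wald_vs_lr())
```

Comparing the two empirical rejection rates against the nominal 5% level is the kind of evidence the paper reports, although its setting (sample selection) makes the divergence far more severe.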

3.
In this paper, we study a working sub-model of a partially linear model determined by variable selection. Such a sub-model is more feasible and practical in application, but it is usually biased. As a result, the common parameter estimators are inconsistent and the corresponding confidence regions are invalid. To deal with the problems arising from the model bias, a nonparametric adjustment procedure is provided to construct a partially unbiased sub-model. It is proved that both the adjusted restricted-model estimator and the adjusted preliminary test estimator are partially consistent, meaning that the estimators are consistent when the samples fall into certain given subspaces. Fortunately, such subspaces are large enough in a certain sense, so this partial consistency is close to global consistency. Furthermore, we build a valid confidence region for the parameters in the sub-model by the corresponding empirical likelihood.

4.
This article considers identification and estimation of social network models in a system of simultaneous equations. We show that, with or without row-normalization of the social adjacency matrix, the network model has different equilibrium implications, needs different identification conditions, and requires different estimation strategies. When the adjacency matrix is not row-normalized, the variation in the Bonacich centrality across nodes in a network can be used as an IV to identify social interaction effects and improve estimation efficiency. The number of such IVs depends on the number of networks. When there are many networks in the data, the proposed estimators may have an asymptotic bias due to the presence of many IVs. We propose a bias-correction procedure for the many-instrument bias. Simulation experiments show that the bias-corrected estimators perform well in finite samples. We also provide an empirical example to illustrate the proposed estimation procedure.
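For readers unfamiliar with the instrument, the following sketch (not the authors' code) shows how Bonacich centrality can be computed from an unnormalized adjacency matrix via b = (I - beta*G)^{-1} G 1; the toy network and the decay parameter beta are illustrative assumptions.

```python
import numpy as np

def bonacich_centrality(G, beta):
    """Bonacich centrality b = (I - beta*G)^{-1} G 1 for an unnormalized
    adjacency matrix G; beta must be below 1 / (spectral radius of G)
    for the underlying series to converge."""
    n = G.shape[0]
    ones = np.ones(n)
    return np.linalg.solve(np.eye(n) - beta * G, G @ ones)

# Toy 5-node undirected network (illustrative only).
G = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

beta = 0.2  # assumed decay parameter, below 1 / spectral radius of G
print(bonacich_centrality(G, beta))
```

Because these centralities differ across nodes when the adjacency matrix is not row-normalized, they supply instruments whose number grows with the number of networks, which is what gives rise to the many-instrument bias the article corrects for.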

5.
In two-sample comparison problems it is often of interest to examine whether one distribution function majorises the other, that is, for the presence of stochastic ordering. This paper develops a nonparametric test for stochastic ordering from size-biased data, allowing the pattern of the size bias to differ between the two samples. The test is formulated in terms of a maximally selected local empirical likelihood statistic. A Gaussian multiplier bootstrap is devised to calibrate the test. Simulation results show that the proposed test outperforms an analogous Wald-type test, and that it provides substantially greater power over ignoring the size bias. The approach is illustrated using data on blood alcohol concentration of drivers involved in car accidents, where the size bias is due to drunker drivers being more likely to be involved in accidents. Further, younger drivers tend to be more affected by alcohol, so in making comparisons with older drivers the analysis is adjusted for differences in the patterns of size bias.

6.
Against an academic background in which domestic research on whether the CPI is biased, and on how such bias should be measured, remains scarce, this paper reviews recent foreign studies centred on whether the CPI price index is biased, the benchmarks against which bias is assessed, the sources of bias, how bias can be measured, and the main methods for reducing it, in the hope of informing CPI research in China.

7.
Summary.  It is perhaps underappreciated that ruptured abdominal aortic aneurysm is a significant cause of mortality in the UK. The only curative treatment is an emergency operation and quantifying the success of this presents many difficulties. In particular, there is empirical evidence of reporting bias, suggesting that studies failing to report operating theatre mortality may be those where death in theatre is more common. We suggest a procedure for correcting for this bias and re-examine a recent meta-analysis of the available data. This casts considerable doubt on some conclusions from naïve analyses that do not take into account the potential bias. Perhaps most importantly, our procedure indicates a modest improvement in operating theatre mortality over the last 50 years, which is a trend that is not evident from the usual naïve analyses.

8.
ApEn, approximate entropy, is a recently developed family of parameters and statistics quantifying regularity (complexity) in data, providing an information-theoretic quantity for continuous-state processes. We provide the motivation for ApEn development, and indicate the superiority of ApEn to the K-S entropy for statistical application, and for discrimination of both correlated stochastic and noisy deterministic processes. We study the variation of ApEn with input parameter choices, reemphasizing that ApEn is a relative measure of regularity. We study the bias in the ApEn statistic, and present evidence for asymptotic normality in the ApEn distributions, assuming weak dependence. We provide a new test for the hypothesis that an underlying time-series is generated by i.i.d. variables, which does not require distribution specification. We introduce randomized ApEn, which derives an empirical significance probability that two processes differ, based on one data set from each process.
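As a concrete illustration of the definition ApEn(m, r) = Phi^m(r) - Phi^{m+1}(r), here is a minimal sketch; the conventional choices m = 2 and r = 0.2 times the sample standard deviation, and the test series, are assumptions rather than values from the article.

```python
import numpy as np

def apen(u, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) = Phi^m(r) - Phi^{m+1}(r),
    with r given on the scale of the data."""
    u = np.asarray(u, dtype=float)
    N = len(u)

    def phi(m):
        # Embedded vectors x(i) = (u[i], ..., u[i+m-1]).
        x = np.array([u[i:i + m] for i in range(N - m + 1)])
        # Chebyshev (max-norm) distances between all pairs of vectors.
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        # C_i^m(r): fraction of vectors within tolerance r (self-matches included).
        C = np.mean(d <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

# Illustrative use: a regular series should score lower than pure noise.
rng = np.random.default_rng(0)
t = np.arange(300)
regular = np.sin(0.3 * t)
noisy = rng.standard_normal(300)
print(apen(regular, m=2, r=0.2 * regular.std()))
print(apen(noisy, m=2, r=0.2 * noisy.std()))
```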

9.
The theoretical and empirical implications of omitted variables, particularly dynamic adjustment effects, are studied. In particular, the attempt to model for such omissions by including possibly irrelevant variables is investigated. This extends the existing knowledge of misspecification analysis in several directions. Ordinary least squares is the estimation technique under study, as has been the case in several recent and related studies. In our empirical example, the question of seasonal variation in interest rates is addressed. We deal with the related issue of deterministic versus stochastic detrending and demonstrate that it can be usefully cast in the context of “misspecification analysis” in dynamic models developed in this article.

10.
We consider the problem of supplementing survey data with additional information from a population. The framework we use is very general; examples are missing data problems, measurement error models and combining data from multiple surveys. We do not require the survey data to be a simple random sample of the population of interest. The key assumption we make is that there exists a set of common variables between the survey and the supplementary data. Thus, the supplementary data serve the dual role of providing adjustments to the survey data for model consistency and also enriching the survey data for improved efficiency. We propose a semi-parametric approach using empirical likelihood to combine data from the two sources. The method possesses favourable large and moderate sample properties. We use the method to investigate wage regression using data from the National Longitudinal Survey of Youth Study.

11.
Sample size planning is an important design consideration for a phase 3 trial. In this paper, we consider how to improve this planning when using data from phase 2 trials. We use an approach based on the concept of assurance. We consider adjusting phase 2 results because of two possible sources of bias. The first source arises from selecting compounds with pre-specified favourable phase 2 results and using these favourable results as the basis of treatment effect for phase 3 sample size planning. The next source arises from projecting phase 2 treatment effect to the phase 3 population when this projection is optimistic because of a generally more heterogeneous patient population at the confirmatory stage. In an attempt to reduce the impact of these two sources of bias, we adjust (discount) the phase 2 estimate of treatment effect. We consider multiplicative and additive adjustment. Following a previously proposed concept, we consider the properties of several criteria, termed launch criteria, for deciding whether or not to progress development to phase 3. We use simulations to investigate launch criteria with or without bias adjustment for the sample size calculation under various scenarios. The simulation results are supplemented with empirical evidence to support the need to discount phase 2 results when the latter are used in phase 3 planning. Finally, we offer some recommendations based on both the simulations and the empirical investigations. Copyright © 2012 John Wiley & Sons, Ltd.
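A minimal sketch of how a discounted phase 2 estimate can feed an assurance calculation for a normally distributed phase 3 endpoint is given below; it is not the authors' procedure, and the effect size, standard errors, discount factors, significance level and sample size are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def assurance(d2, se2, n3_per_arm, sigma, mult=1.0, add=0.0,
              alpha=0.025, n_sims=100_000, seed=1):
    """Monte Carlo assurance for a two-arm phase 3 trial with a normal
    endpoint, using a (possibly discounted) phase 2 estimate d2 as the
    prior mean for the true effect.  mult/add give multiplicative or
    additive discounting; all numerical inputs are illustrative."""
    rng = np.random.default_rng(seed)
    prior_mean = mult * d2 - add                   # discounted treatment effect
    delta = rng.normal(prior_mean, se2, n_sims)    # draws of the true effect
    se3 = sigma * np.sqrt(2.0 / n3_per_arm)        # SE of the phase 3 estimate
    # Probability of one-sided significance at level alpha, averaged over delta.
    power_given_delta = norm.sf(norm.ppf(1 - alpha) - delta / se3)
    return power_given_delta.mean()

# Assumed inputs: phase 2 effect 0.30 (SE 0.12), unit outcome SD.
print(assurance(d2=0.30, se2=0.12, n3_per_arm=250, sigma=1.0))            # no discount
print(assurance(d2=0.30, se2=0.12, n3_per_arm=250, sigma=1.0, mult=0.8))  # 20% multiplicative discount
```

Comparing assurance with and without the discount mirrors the kind of launch-criterion comparisons the simulations in the paper examine.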

12.
Finite Sample Properties of the Two-Step Empirical Likelihood Estimator
We investigate the finite sample properties of two-step empirical likelihood (EL) estimators. These estimators are shown to have the same third-order bias properties as EL itself. The Monte Carlo study provides evidence that (i) higher order asymptotics fails to provide a good approximation in the sense that the bias of the two-step EL estimators can be substantial and sensitive to the number of moment restrictions and (ii) the two-step EL estimators may have heavy tails.

14.
A wide class of block designs admitting a simple analysis has been considered. The statistical properties of such designs have been indicated and the problems relating to their characterization and construction have been investigated.

15.
While randomized controlled trials (RCTs) are the gold standard for estimating treatment effects in medical research, there is increasing use of and interest in using real-world data for drug development. One such use case is the construction of external control arms for evaluation of efficacy in single-arm trials, particularly in cases where randomization is either infeasible or unethical. However, it is well known that treated patients in non-randomized studies may not be comparable to control patients, on either measured or unmeasured variables, and that the underlying population differences between the two groups may result in biased treatment effect estimates as well as increased variability in estimation. To address these challenges for analyses of time-to-event outcomes, we developed a meta-analytic framework that uses historical reference studies to adjust a log hazard ratio estimate in a new external control study for its additional bias and variability. The set of historical studies is formed by constructing external control arms for historical RCTs, and a meta-analysis compares the trial controls to the external control arms. Importantly, a prospective external control study can be performed independently of the meta-analysis using standard causal inference techniques for observational data. We illustrate our approach with a simulation study and an empirical example based on reference studies for advanced non-small cell lung cancer. In our empirical analysis, external control patients had lower survival than trial controls (hazard ratio: 0.907), but our methodology is able to correct for this bias. An implementation of our approach is available in the R package ecmeta.
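The sketch below is not the ecmeta implementation; it illustrates one way such an adjustment could work, using a DerSimonian-Laird random-effects meta-analysis of historical trial-control versus external-control log hazard-ratio discrepancies to debias, and inflate the variance of, a new external-control estimate. All function names and numbers are hypothetical.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects meta-analysis of effect estimates y with within-study
    variances v.  Returns the pooled mean, its variance, and the
    between-study variance tau^2."""
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fixed) ** 2)
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    return mu, 1.0 / np.sum(w_re), tau2

def adjust_log_hr(log_hr_new, var_new, bias_y, bias_v):
    """Debias a new external-control log HR using historical discrepancies
    bias_y (trial control vs external control) with variances bias_v."""
    mu, var_mu, tau2 = dersimonian_laird(np.asarray(bias_y), np.asarray(bias_v))
    adj = log_hr_new - mu                 # remove the estimated systematic bias
    adj_var = var_new + var_mu + tau2     # inflate variance for the extra uncertainty
    return adj, adj_var

# Illustrative numbers only (not taken from the paper).
bias_y = [0.08, 0.12, -0.02, 0.10, 0.05]   # historical log HR discrepancies
bias_v = [0.01, 0.02, 0.015, 0.01, 0.02]
print(adjust_log_hr(log_hr_new=np.log(0.75), var_new=0.02,
                    bias_y=bias_y, bias_v=bias_v))
```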

16.
This article analyzes scores given by judges of figure skating at the 1980 Winter Olympics. Judges' scores are found to be highly correlated, with little evidence of scoring along political lines. However, an analysis of variance shows a small but consistent “patriotic” bias; judges tend to give higher scores to contestants from their own country. The influence of such effects on final placings is estimated.
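The 1980 scores are not reproduced in the abstract, so the sketch below fits the kind of model such an analysis implies, judge and skater fixed effects plus a same-country indicator, to simulated data; the numbers of judges, skaters and countries, and the size of the injected bias, are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulated scores: 9 judges rate 12 skaters, with a small assumed
# "patriotic" bump of 0.2 points when judge and skater share a country.
judges = [f"J{i}" for i in range(9)]
skaters = [f"S{i}" for i in range(12)]
judge_country = {j: rng.integers(0, 6) for j in judges}
skater_country = {s: rng.integers(0, 6) for s in skaters}

rows = []
for j in judges:
    for s in skaters:
        same = int(judge_country[j] == skater_country[s])
        score = 5.0 + 0.1 * int(s[1:]) + 0.2 * same + rng.normal(0, 0.15)
        rows.append({"judge": j, "skater": s, "same_country": same, "score": score})
df = pd.DataFrame(rows)

# Two-way fixed effects for judge and skater plus the bias indicator.
fit = smf.ols("score ~ C(judge) + C(skater) + same_country", data=df).fit()
print(fit.params["same_country"], fit.bse["same_country"])
```

The coefficient on same_country estimates the average extra points a judge awards a compatriot after judge and skater effects are removed, which is the quantity the article's analysis of variance targets.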

17.
Observational data analysis is often based on tacit assumptions of ignorability or randomness. The paper develops a general approach to local sensitivity analysis for selectivity bias, which aims to study the sensitivity of inference to small departures from such assumptions. If M is a model assuming ignorability, we surround M by a small neighbourhood N defined in the sense of Kullback–Leibler divergence and then compare the inference for models in N with that for M. Interpretable bounds for such differences are developed. Applications to missing data and to observational comparisons are discussed. Local approximations to sensitivity analysis are model robust and can be applied to a wide range of statistical problems.

18.
This article reviews the exciting and rapidly expanding literature on realized volatility. After presenting a general univariate framework for estimating realized volatilities, a simple discrete time model is presented in order to motivate the main results. A continuous time specification provides the theoretical foundation for the main results in this literature. Cases with and without microstructure noise are considered, and it is shown how microstructure noise can cause severe problems in terms of consistent estimation of the daily realized volatility. Independent and dependent noise processes are examined. The most important methods for providing consistent estimators are presented, and a critical exposition of different techniques is given. The finite sample properties are discussed in comparison with their asymptotic properties. A multivariate model is presented to discuss estimation of the realized covariances. Various issues relating to modelling and forecasting realized volatilities are considered. The main empirical findings using univariate and multivariate methods are summarized.
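To fix ideas, the sketch below computes the basic noise-free estimator this literature builds on: daily realized variance as the sum of squared intraday log returns, applied to simulated prices. The sampling frequency and volatility level are assumptions.

```python
import numpy as np

def realized_variance(intraday_prices):
    """Daily realized variance: sum of squared intraday log returns.
    Consistent for the integrated variance in the absence of
    microstructure noise."""
    log_p = np.log(np.asarray(intraday_prices, dtype=float))
    returns = np.diff(log_p)
    return np.sum(returns ** 2)

# Simulate one day of 5-minute prices (78 intervals) from a diffusion
# with roughly 20% annualized volatility (illustrative numbers).
rng = np.random.default_rng(7)
n, daily_vol = 78, 0.20 / np.sqrt(252)
log_prices = np.log(100.0) + np.cumsum(rng.normal(0, daily_vol / np.sqrt(n), n + 1))
rv = realized_variance(np.exp(log_prices))
print(rv, np.sqrt(rv))   # realized variance and realized volatility
```

Under microstructure noise this simple sum of squared returns diverges as the sampling interval shrinks, which is what motivates the noise-robust estimators the review surveys.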

19.
Various methodologies proposed for some inference problems associated with two-arm trials are known to suffer from difficulties, as documented in Senn (2001). We propose an alternative Bayesian approach to these problems that deals with these difficulties through providing an explicit measure of statistical evidence and the strength of this evidence. Bayesian methods are often criticized for their intrinsic subjectivity. We show how these concerns can be dealt with through assessing the bias induced by a prior, model checking, and checking for prior-data conflict. Copyright © 2015 John Wiley & Sons, Ltd.

20.
Summary.  The paper compares current and 1-year retrospective data on unemployment in the German Socio-Economic Panel study. 13% of all unemployment spells are not reported 1 year later, and another 7% are misreported. The ratio of retrospective to current unemployment has increased in recent years and is related to salience of unemployment measures such as the loss of life satisfaction that is associated with unemployment. Individuals with weak labour force attachment, e.g. women with children or individuals who are close to retirement, have the greatest propensity to under-report unemployment retrospectively. The data are consistent with evidence on retrospective bias found by cognitive psychologists and survey methodologists.
