Similar Literature
20 similar documents found.
1.
Mendelian randomization (MR) uses genetic variants as instrumental variables to infer whether a risk factor causally affects a health outcome. Meta‐analysis has been used historically in MR to combine results from separate epidemiological studies, with each study using a small but select group of genetic variants. In recent years, it has been used to combine genome‐wide association study (GWAS) summary data for large numbers of genetic variants. Heterogeneity among the causal estimates obtained from multiple genetic variants points to a possible violation of the necessary instrumental variable assumptions. In this article, we provide a basic introduction to MR and the instrumental variable theory that it relies upon. We then describe how random effects models, meta‐regression, and robust regression are being used to test and adjust for heterogeneity in order to improve the rigor of the MR approach.
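
As a hedged illustration of the machinery this abstract describes (not the authors' own code), the sketch below computes the standard inverse‐variance‐weighted (IVW) MR estimate from GWAS summary statistics together with Cochran's Q statistic, whose excess over its degrees of freedom is the usual signal of heterogeneity among per‐variant causal estimates; the function name and the toy summary data are hypothetical.

```python
import numpy as np
from scipy import stats

def ivw_mr(beta_exp, beta_out, se_out):
    """Fixed-effect inverse-variance-weighted MR estimate from GWAS summary
    data, plus Cochran's Q statistic for heterogeneity among the per-variant
    causal estimates (Q well above its df suggests an instrumental variable
    assumption is violated for at least one variant)."""
    ratio = beta_out / beta_exp               # per-variant Wald ratio estimates
    weights = (beta_exp / se_out) ** 2        # first-order IVW weights
    est = np.sum(weights * ratio) / np.sum(weights)
    se = 1.0 / np.sqrt(np.sum(weights))
    q = np.sum(weights * (ratio - est) ** 2)  # Cochran's Q, approx. chi2(k - 1)
    return est, se, q, stats.chi2.sf(q, df=len(ratio) - 1)

# Hypothetical summary statistics for five genetic variants
rng = np.random.default_rng(0)
bx = rng.normal(0.10, 0.02, 5)                # SNP-exposure effects
by = 0.5 * bx + rng.normal(0.0, 0.02, 5)      # SNP-outcome effects
print(ivw_mr(bx, by, se_out=np.full(5, 0.02)))
```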

2.
This paper presents a procedure for testing the hypothesis that the underlying distribution of the data is elliptical when robust location and scatter estimators are used instead of the sample mean and covariance matrix. Under mild assumptions that include elliptical distributions without first moments, we derive the asymptotic behavior of the test statistic under the null hypothesis and under special alternatives. Numerical experiments allow us to compare the behavior of the tests based on the sample mean and covariance matrix with that of the tests based on robust estimators, under various elliptical distributions and different alternatives. We also provide a numerical comparison with other competing tests.

3.
Proschan, Brittain, and Kammerman made the very interesting observation that, for some examples of unequal‐allocation minimization, the mean of the unconditional randomization distribution is shifted away from 0. Kuznetsova and Tymofyeyev linked this phenomenon to variations in the allocation ratio from allocation to allocation in the examples considered by Proschan et al. and advocated the use of unequal‐allocation procedures that preserve the allocation ratio at every step. In this paper, we show that the shift phenomenon extends to very common settings: the use of a conditional randomization test in a study with equal allocation. The phenomenon has the same, previously unnoted cause: variations in the allocation ratio among the allocation sequences in the conditional reference set. We consider two kinds of conditional randomization tests. The first is the often‐used randomization test that conditions on the treatment group totals; we describe the variations in the conditional allocation ratio under this test with examples of permuted block randomization and biased coin randomization. The second is the randomization test proposed by Zheng and Zelen for a multicenter trial with permuted block central allocation, which conditions on the within‐center treatment totals. On the basis of the sequence of conditional allocation ratios, we derive the value of the shift in the conditional randomization distribution for a specific vector of responses and the expected value of the shift when responses are independent, identically distributed random variables. We discuss the asymptotic behavior of the shift for the two types of tests. Copyright © 2013 John Wiley & Sons, Ltd.
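
The variation in conditional allocation ratios can be made concrete with a small enumeration, sketched below under stated assumptions: Efron's biased coin design with p = 2/3, six patients, and a conditional reference set defined by an observed 4:2 treatment split. All numbers are hypothetical; the per‐position treatment probabilities vary across the reference set, and the conditional randomization distribution of the mean difference acquires a nonzero mean (the shift) for a fixed response vector.

```python
import itertools
import numpy as np

P = 2 / 3  # Efron's biased-coin probability for the under-represented arm

def sequence_prob(seq):
    """Probability of an allocation sequence (1 = treatment, 0 = control)
    under Efron's biased coin design."""
    prob, n_t, n_c = 1.0, 0, 0
    for a in seq:
        p_t = 0.5 if n_t == n_c else (P if n_t < n_c else 1 - P)
        prob *= p_t if a == 1 else 1 - p_t
        n_t, n_c = n_t + a, n_c + 1 - a
    return prob

n = 6
y = np.array([3.1, 4.0, 4.2, 5.3, 5.9, 6.4])   # fixed hypothetical responses
# Conditional reference set: every sequence with the observed 4:2 split
S = np.array([s for s in itertools.product([0, 1], repeat=n) if sum(s) == 4])
w = np.array([sequence_prob(s) for s in S])
w /= w.sum()                                    # conditional probabilities

print("P(treatment | 4:2 totals) by position:", w @ S)  # varies by position
diffs = np.array([y[s == 1].mean() - y[s == 0].mean() for s in S])
print("mean of conditional randomization distribution:", w @ diffs)
```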

4.
The re‐randomization test has been considered a robust alternative to traditional population model‐based methods for analyzing randomized clinical trials, especially when trials are randomized according to minimization, a popular covariate‐adaptive randomization method for ensuring balance among prognostic factors. Among the various re‐randomization tests, fixed‐entry‐order re‐randomization is advocated as an effective strategy when a temporal trend is suspected. Yet when minimization is applied to trials with unequal allocation, the fixed‐entry‐order re‐randomization test is biased and thus compromised in power. We find that the bias is due to the non‐uniform re‐allocation probabilities incurred by re‐randomization in this case. We therefore propose a weighted fixed‐entry‐order re‐randomization test to overcome the bias. The performance of the new test was investigated in simulation studies that mimic the settings of a real clinical trial. The weighted re‐randomization test was found to work well in the scenarios investigated, including in the presence of a strong temporal trend. Copyright © 2013 John Wiley & Sons, Ltd.

5.
Change detection is one of the most important tasks in time series analysis. When the series is very long, or when it is rapidly updated, it has to be treated as a stream, meaning that the change detection algorithm must process each sample in O(1) time and memory. A good algorithm must be generic in terms of the types of changes it can detect. Above all, a good algorithm must offer a favorable and controlled ratio of the number of samples needed to detect a change to the rate of false positives. We present a change‐point detection algorithm called ProTO, which dynamically manages a set of candidate change‐points whose expected size is a controllable constant. In terms of sample processing, ProTO is comparable with the fastest known algorithm, the Page‐Hinkley Test (PHT). Yet, because PHT is limited to a single candidate, ProTO outperforms it in terms of the ratio of detection delay to false positive rate, as well as in robustness. We provide variants of ProTO for detecting changes in the mean or the variance of the stream, and experiment with two realistic applications as well as with synthetic data. On real problems, ProTO compares favorably with state‐of‐the‐art algorithms implemented in R packages, which require more than O(1) time per sample.
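
For orientation, here is a minimal sketch of the Page‐Hinkley Test that the abstract uses as its baseline, detecting an upward mean shift with O(1) work per sample; the parameter values and the synthetic stream are illustrative, not taken from the paper.

```python
import numpy as np

def page_hinkley(stream, delta=0.05, lam=5.0):
    """Page-Hinkley test for an upward shift in the mean of a stream.
    Processes each sample in O(1) time and memory; returns the index of the
    first alarm, or None. delta = tolerated drift, lam = alarm threshold."""
    mean, cum, cum_min = 0.0, 0.0, 0.0
    for t, x in enumerate(stream, start=1):
        mean += (x - mean) / t              # running mean, updated in O(1)
        cum += x - mean - delta             # cumulative deviation statistic
        cum_min = min(cum_min, cum)
        if cum - cum_min > lam:             # deviation exceeds the threshold
            return t
    return None

rng = np.random.default_rng(1)
stream = np.concatenate([rng.normal(0, 1, 500), rng.normal(1, 1, 500)])
print(page_hinkley(stream))  # typically fires shortly after sample 500
```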

6.
This paper deals with the analysis of randomization effects in multi‐centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre‐stratified block‐permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period, accounting for the stochastic nature of recruitment and the effects of multiple centres, is investigated. A new analytic approach using a Poisson‐gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma‐distributed population) and its further extensions is proposed. Closed‐form expressions for the corresponding distributions of the predicted number of patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on the treatment arms caused by centre‐stratified randomization are investigated, and for a large number of centres a normal approximation of the imbalance is proved. The impact of the imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated for by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed. Copyright © 2010 John Wiley & Sons, Ltd.
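
A minimal simulation of the Poisson‐gamma recruitment model is easy to write down and is sketched below (the function and parameter names are hypothetical): each centre's rate is drawn from a gamma population and its count over the recruitment horizon is Poisson given that rate, so each centre's count is marginally negative binomial, which is what makes closed‐form predictions tractable.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_recruitment(n_centres, shape, rate, horizon, n_sims=10_000):
    """Simulate total recruitment under the Poisson-gamma model: each centre's
    rate is drawn from a Gamma(shape, 1/rate) population, and its count over
    `horizon` is Poisson given that rate."""
    lam = rng.gamma(shape, 1.0 / rate, size=(n_sims, n_centres))  # centre rates
    counts = rng.poisson(lam * horizon)        # recruits per centre
    return counts.sum(axis=1)                  # total recruited across centres

totals = simulate_recruitment(n_centres=50, shape=2.0, rate=10.0, horizon=12.0)
print(totals.mean(), np.quantile(totals, [0.05, 0.95]))
```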

7.
Clinical trials involving multiple time‐to‐event outcomes are increasingly common. In this paper, permutation tests for group differences in multivariate time‐to‐event data are proposed. Unlike other two‐sample tests for multivariate survival data, the proposed tests attain the nominal type I error rate. A simulation study shows that the proposed tests outperform their competitors when the degree of censoring is sufficiently high; when it is low, naive tests such as Hotelling's T² outperform tests tailored to survival data. Computational and practical aspects of the proposed tests are discussed, and their use is illustrated by analyses of three publicly available datasets. Implementations of the proposed tests are available in an accompanying R package.
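
The validity argument for permutation tests is generic, as the sketch below shows for an arbitrary two‐sample statistic (a simplified stand‐in, not the multivariate survival statistics proposed in the paper): under exchangeability the permutation p-value is valid by construction, which is why such tests attain the nominal type I error rate.

```python
import numpy as np

rng = np.random.default_rng(7)

def permutation_test(x, y, stat, n_perm=9999):
    """Generic two-sample permutation test. `stat` maps (x, y) to a scalar;
    under the null of exchangeability the p-value is valid by construction."""
    observed = stat(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if abs(stat(perm[:len(x)], perm[len(x):])) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one correction keeps the test valid

x = rng.normal(0.0, 1.0, 30)
y = rng.normal(0.5, 1.0, 30)
print(permutation_test(x, y, lambda a, b: a.mean() - b.mean()))
```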

8.
Politis & Romano (1994) proposed a general subsampling methodology for the construction of large‐sample confidence regions for an arbitrary parameter under minimal conditions. Nevertheless, subsampling distribution estimators may sometimes be inefficient (in the case of the sample mean of i.i.d. data, for instance) compared with alternative estimators such as the bootstrap and/or the asymptotic normal distribution (with estimated variance). The authors investigate here the extent to which the performance of subsampling distribution estimators can be improved by interpolation and extrapolation techniques, while at the same time retaining the robustness property of consistent distribution estimation even in nonregular cases; both i.i.d. and weakly dependent (mixing) observations are considered.
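
As a concrete reference point, a minimal subsampling confidence interval for the sample mean might look as follows (a sketch under simple assumptions, not the authors' interpolation/extrapolation refinements): the law of the root √n(θ̂ − θ) is approximated by the empirical law of the rescaled subsample roots over all size-b blocks.

```python
import numpy as np

rng = np.random.default_rng(3)

def subsampling_ci(x, b, alpha=0.05):
    """Subsampling confidence interval for the mean: approximate the law of
    sqrt(n)*(mean - theta) by the empirical law of sqrt(b)*(subsample mean -
    full-sample mean) over all overlapping blocks of size b (overlapping
    blocks remain sensible for weakly dependent, mixing data)."""
    n, theta = len(x), x.mean()
    roots = np.array([np.sqrt(b) * (x[i:i + b].mean() - theta)
                      for i in range(n - b + 1)])
    lo, hi = np.quantile(roots, [alpha / 2, 1 - alpha / 2])
    return theta - hi / np.sqrt(n), theta - lo / np.sqrt(n)  # invert the root

x = rng.exponential(size=2000)          # hypothetical i.i.d. data, true mean 1
print(subsampling_ci(x, b=100))
```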

9.
The Cochran-Armitage test is the most frequently used test for trend among binomial proportions. This test can be performed based on the asymptotic normality of its test statistic or based on an exact null distribution. As an alternative, a recently introduced modification of the Baumgartner-Weiß-Schindler statistic, a novel nonparametric statistic, can be used. Simulation results indicate that the exact test based on this modification is preferable to the Cochran-Armitage test. This exact test is less conservative and more powerful than the exact Cochran-Armitage test. The power comparison to the asymptotic Cochran-Armitage test does not show a clear winner, but the difference in power is usually small. The exact test based on the modification is recommended here because, in contrast to the asymptotic Cochran-Armitage test, it guarantees a type I error rate less than or equal to the significance level. Moreover, an exact test is often more appropriate than an asymptotic test because randomization rather than random sampling is the norm, for example in biomedical research. The methods are illustrated with an example data set.
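
For concreteness, the asymptotic version of the Cochran-Armitage test can be written in a few lines. This sketch shows only the normal-approximation form (the exact tests discussed above instead work with a conditional null distribution), and the dose-response counts are hypothetical.

```python
import numpy as np
from scipy import stats

def cochran_armitage(successes, totals, scores=None):
    """Asymptotic Cochran-Armitage test for trend among binomial proportions.
    Returns the Z statistic and the two-sided p-value from its normal limit."""
    r, n = np.asarray(successes, float), np.asarray(totals, float)
    t = np.arange(len(r), dtype=float) if scores is None else np.asarray(scores, float)
    p_bar = r.sum() / n.sum()                 # pooled success probability
    num = np.sum(t * (r - n * p_bar))         # trend statistic
    var = p_bar * (1 - p_bar) * (np.sum(n * t**2) - np.sum(n * t)**2 / n.sum())
    z = num / np.sqrt(var)
    return z, 2 * stats.norm.sf(abs(z))

# Hypothetical dose-response data: successes out of totals at four dose levels
print(cochran_armitage([2, 6, 10, 16], [50, 50, 50, 50]))
```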

10.
Consider the distribution of Z = ∑ d_i (i = 1, …, n), where the d_i's are differences independently, identically, and symmetrically distributed with mean zero. The problem is to determine properties of the distribution of Z (the sdd) given the distribution of the d_i's and the sample size n. The standardized moments of the sdd are developed as functions of the moments of the d_i's. A variance reduction technique for estimating the quantiles of the sdd using Monte Carlo methods is developed, based on using the randomization sample consisting of the 2^n values of ∑ ±d_i rather than the single observation ∑ d_i corresponding to each sample d_1, …, d_n. The randomization sample is shown to produce unbiased and consistent estimators.
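
A minimal sketch of the variance reduction idea, with a hypothetical N(0,1) generator for the d_i: for the same number of simulated samples, expanding each sample into its 2^n sign-flipped sums (valid because the d_i are symmetric about zero) yields a far larger, if correlated, pool of unbiased draws for quantile estimation.

```python
import itertools
import numpy as np

rng = np.random.default_rng(5)
n = 8
signs = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))  # 2^n patterns

def quantile_single_sum(q, n_mc):
    """One value of Z = sum(d_i) per simulated sample of d's."""
    return np.quantile(rng.normal(size=(n_mc, n)).sum(axis=1), q)

def quantile_randomization(q, n_mc):
    """Expand each simulated sample into its randomization sample: all 2^n
    values of sum(+/- d_i), each an unbiased draw by symmetry of the d_i."""
    d = rng.normal(size=(n_mc, n))
    return np.quantile((d @ signs.T).ravel(), q)

# Same number of simulated samples; the randomization estimate is much smoother
print(quantile_single_sum(0.95, 4000), quantile_randomization(0.95, 4000))
```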

11.
We investigate the properties of several statistical tests for comparing treatment groups with respect to multivariate survival data, based on the marginal analysis approach introduced by Wei, Lin, and Weissfeld [Regression analysis of multivariate incomplete failure time data by modeling marginal distributions, JASA, vol. 84, pp. 1065–1073]. We consider two types of directional tests, based on a constrained maximization and on linear combinations of the unconstrained maximizer of the working likelihood function, and the omnibus test arising from the same working likelihood. The directional tests are members of a larger class of tests from which an asymptotically optimal test can be found. We compare the asymptotic powers of the tests under general contiguous alternatives for a variety of settings, and also consider the choice of the number of survival times to include in the multivariate outcome. We illustrate the results with simulations and with the results of a clinical trial examining recurring opportunistic infections in persons with HIV.

12.
Robust classification algorithms have been developed in recent years with great success. We take advantage of this development and recast the classical two‐sample test problem in the framework of classification. A test statistic is proposed based on the estimated classification probabilities from a classifier trained on the samples. We explain why such a test can be powerful, and compare its performance, in terms of power and efficiency, with that of some other recently proposed tests using simulations and real‐life data. The proposed test is nonparametric and can be applied to complex and high‐dimensional data wherever there is a classifier that provides a consistent estimate of the classification probability for such data.
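
A generic version of such a classification-based test is sketched below (an illustrative stand-in, not the paper's exact statistic): a logistic-regression classifier is trained to distinguish the two samples, and its in-sample accuracy is calibrated by permuting the sample labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)

def classifier_two_sample_test(x, y, n_perm=500):
    """Classification-based two-sample test: if the samples come from the same
    distribution, no classifier should separate them better than chance, so
    the observed accuracy is compared with its label-permutation distribution."""
    data = np.vstack([x, y])
    labels = np.r_[np.zeros(len(x)), np.ones(len(y))]

    def acc(lab):
        clf = LogisticRegression(max_iter=1000).fit(data, lab)
        return clf.score(data, lab)

    observed = acc(labels)
    perm_acc = np.array([acc(rng.permutation(labels)) for _ in range(n_perm)])
    return (np.sum(perm_acc >= observed) + 1) / (n_perm + 1)

x = rng.normal(0.0, 1.0, size=(100, 5))   # hypothetical 5-dimensional samples
y = rng.normal(0.3, 1.0, size=(100, 5))
print(classifier_two_sample_test(x, y))
```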

13.
We discuss three classes of bivariate symmetry models and study the estimation of their distribution functions (DFs). Under radial symmetry, an estimator based on the mean of the empirical and survival DFs is considered. For exchangeable symmetry, an estimator based on the mean of the empirical DF and its exchangeable image is presented. At their intersection, we define radial exchangeability and study estimation of its DF. The symmetrized estimators coincide with the nonparametric maximum likelihood estimators of the DF under each model. We obtain their means and variances and state their asymptotic normality. The relative efficiency of the estimators is obtained for the bivariate normal distribution.
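
Under exchangeable symmetry, the estimator described above is simply the average of the empirical DF and its coordinate-swapped image, as in this short sketch (the equicorrelated bivariate normal data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(13)

def ecdf2(u, v, x, y):
    """Bivariate empirical distribution function evaluated at (x, y)."""
    return np.mean((u <= x) & (v <= y))

def exchangeable_df(u, v, x, y):
    """Symmetrized DF estimator under exchangeable symmetry: the mean of the
    empirical DF and its exchangeable (coordinate-swapped) image."""
    return 0.5 * (ecdf2(u, v, x, y) + ecdf2(v, u, x, y))

# Hypothetical exchangeable data: equicorrelated bivariate normal
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=500)
print(exchangeable_df(z[:, 0], z[:, 1], 0.0, 1.0))
```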

14.
The short‐term and long‐term hazard ratio model includes the proportional hazards model and the proportional odds model as submodels, and allows a wider range of hazard ratio patterns than some of the more traditional models. We propose two omnibus tests for checking this model, based, respectively, on the martingale residuals and on the contrast between the non‐parametric and model‐based estimators of the survival function. These tests are shown to be consistent against any departure from the model. The empirical behaviour of the tests is studied in simulations, and the tests are illustrated with some real data examples.

15.
We present a bootstrap Monte Carlo algorithm for computing the power function of the generalized correlation coefficient. The proposed method makes no assumptions about the form of the underlying probability distribution and may be used with observed data to approximate the power function, or with pilot data for sample size determination. In particular, the bootstrap power functions of the Pearson product moment correlation and the Spearman rank correlation are examined. Monte Carlo experiments indicate that the proposed algorithm is reliable and compares well with the asymptotic values. An example demonstrating how this method can be used for sample size determination and power calculations is provided.
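
A minimal version of such a bootstrap power calculation for the Pearson product moment correlation might look as follows (the function name and pilot data are hypothetical; the paper's algorithm covers the generalized correlation coefficient): pairs are resampled from the pilot data at the target sample size, and the rejection rate of the correlation test estimates the power.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(17)

def bootstrap_power(x, y, n_target, alpha=0.05, n_boot=2000):
    """Bootstrap Monte Carlo estimate of the power of the Pearson correlation
    test at sample size n_target, resampling pairs from pilot data (x, y).
    No parametric form is assumed for the underlying distribution."""
    pairs = np.column_stack([x, y])
    rejections = 0
    for _ in range(n_boot):
        idx = rng.integers(0, len(pairs), size=n_target)  # resample pairs
        _, p = stats.pearsonr(pairs[idx, 0], pairs[idx, 1])
        rejections += p < alpha
    return rejections / n_boot

# Hypothetical pilot data with a modest true correlation
x = rng.normal(size=40)
y = 0.4 * x + rng.normal(scale=0.9, size=40)
print(bootstrap_power(x, y, n_target=100))
```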

16.
This paper proposes a consistent parametric test of Granger-causality in quantiles. Although the concept of Granger-causality is defined in terms of the conditional distribution, most articles have tested Granger-causality using conditional mean regression models in which the causal relations are linear. Rather than focusing on a single part of the conditional distribution, we develop a test that evaluates nonlinear causalities and possible causal relations in all conditional quantiles, which provides a sufficient condition for Granger-causality when all quantiles are considered. The proposed test statistic has correct asymptotic size, is consistent against fixed alternatives, and has power against local (Pitman) deviations from the null hypothesis. As the proposed test statistic is asymptotically nonpivotal, we tabulate critical values via a subsampling approach. We present Monte Carlo evidence and an application examining the causal relations between the gold price, the USD/GBP exchange rate, and the oil price.

17.
In this article, it is shown that in panel data models the Hausman test (HT) statistic can be considerably refined using the bootstrap technique. An Edgeworth expansion shows that the coverage of the bootstrapped HT is second-order correct.

The asymptotic and bootstrapped HT are also compared by Monte Carlo simulation. Under the null hypothesis, at a nominal size of 0.05 the bootstrapped HT reduces the coverage error of the asymptotic HT by 10–40% of nominal size; for nominal sizes of 0.025 or less, the coverage error reduction is between 30% and 80% of nominal size. Under nonnull alternatives, the power of the asymptotic HT is spuriously inflated by over 70% of the correct power for nominal sizes of 0.025 or less; the bootstrapped HT reduces this overrejection to less than a quarter of its value. The advantages of the bootstrapped HT increase with the number of explanatory variables.

Heteroscedasticity or serial correlation in the idiosyncratic part of the error does not diminish the advantages of the bootstrapped HT, provided a heteroscedasticity‐robust version of the HT and the wild bootstrap are used. However, the power penalty is not negligible when a heteroscedasticity‐robust approach is used in a homoscedastic panel data model.

18.
We propose a unified approach that is flexibly applicable to various types of grouped data for estimating and testing parametric income distributions. To simplify the use of our approach, we also provide a parametric bootstrap method and show its asymptotic validity. We compare this approach with existing methods for grouped income data and assess their finite-sample performance by Monte Carlo simulation. For empirical demonstrations, we apply our approach to recovering China's income/consumption distributions from a sequence of income/consumption share tables, and the U.S. income distributions from a combination of income shares and sample quantiles. Supplementary materials for this article are available online.

19.
In this article, we consider the three-factor unbalanced nested design model without the assumption of equal error variances. For the problem of testing the "main effects" of the three factors, we propose a parametric bootstrap (PB) approach and compare it with the existing generalized F (GF) test. The Type I error rates of the tests are evaluated using Monte Carlo simulation. Our studies show that the PB test performs better than the GF test: it performs very satisfactorily even for small samples, while the GF test exhibits poor Type I error properties as the number of factorial combinations or treatments increases. It is also noted that the same tests can be used to test the significance of the random-effect variance component in a three-factor mixed-effects nested model under unequal error variances.
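
To convey the flavor of the PB approach in a simpler setting, the sketch below tests equality of group means under unequal variances in a one-way layout (a hypothetical simplification of the three-factor nested model, not the paper's procedure): the null distribution of a precision-weighted between-group statistic is approximated by resampling both the group means and the variance estimates from their estimated normal and chi-squared models.

```python
import numpy as np

rng = np.random.default_rng(19)

def pb_test_equal_means(groups, n_boot=5000):
    """Parametric bootstrap (PB) test of equal group means under unequal
    error variances in a one-way layout. The statistic is a precision-
    weighted between-group sum of squares; its null distribution is
    approximated by resampling means and variances under the fitted models."""
    ns = np.array([len(g) for g in groups], dtype=float)
    means = np.array([g.mean() for g in groups])
    vars_ = np.array([g.var(ddof=1) for g in groups])

    def statistic(m, v):
        w = ns / v                                  # precision weights
        grand = np.sum(w * m) / np.sum(w)
        return np.sum(w * (m - grand) ** 2)

    observed = statistic(means, vars_)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        m = rng.normal(0.0, np.sqrt(vars_ / ns))        # group means under H0
        v = vars_ * rng.chisquare(ns - 1) / (ns - 1)    # resampled variances
        boot[b] = statistic(m, v)
    return np.mean(boot >= observed)                    # PB p-value

groups = [rng.normal(0, 1, 10), rng.normal(0, 3, 8), rng.normal(1, 2, 12)]
print(pb_test_equal_means(groups))
```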

20.
Nonparametric estimation of the survival function for either incident or prevalent cohort failure time data, considered separately, has been well studied in the literature: the Kaplan‐Meier (KM) estimator is routinely used for right‐censored incident cohort failure time data, whereas a modified form of the KM estimator, sometimes referred to as the Tsai–Jewell–Wang (TJW) estimator, is the default estimator for prevalent cohort data with follow‐up. Often, failure time data comprise observations from a combination of incident and prevalent cohorts. In this note, we justify the use of the TJW estimator for a combined sample of incident and prevalent cohort data with follow‐up, and suggest how it forms the basis for density estimation and hypothesis testing when the two cohorts are combined.
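
For reference, the product-limit construction underlying both estimators is short to state in code. This sketch implements the plain KM estimator for right-censored incident data only (the TJW modification additionally adjusts the risk sets for the left truncation present in prevalent cohorts); the data are hypothetical.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator for right-censored incident cohort
    data. events: 1 = observed failure, 0 = censored. Returns a list of
    (failure time, estimated survival probability) pairs."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    surv, s = [], 1.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)          # subjects still under observation
        deaths = np.sum((times == t) & (events == 1))
        s *= 1.0 - deaths / at_risk           # product-limit update
        surv.append((t, s))
    return surv

# Hypothetical right-censored failure time data
print(kaplan_meier([2, 3, 3, 5, 8, 9, 12], [1, 1, 0, 1, 0, 1, 1]))
```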
