Similar Documents
20 similar documents found.
1.
The asymptotic distributions of many classical test statistics are normal. The resulting approximations are often accurate for commonly used significance levels such as 0.05 or 0.01. In genome-wide association studies (GWAS), however, the significance level can be as low as 1×10⁻⁷, and the accuracy of such small p-values becomes a concern. We study the accuracy of these small p-values using two-term Edgeworth expansions for three commonly used test statistics in GWAS. These tests involve nuisance parameters that are not defined under the null hypothesis but remain estimable. We derive results for this general form of test statistic using Edgeworth expansions, and find that the commonly used score test, the maximin efficiency robust test, and the chi-squared test are second-order accurate in the presence of the nuisance parameter, justifying the use of the p-values obtained from these tests in genome-wide association studies.
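A minimal numerical sketch of the underlying idea: a two-term Edgeworth correction to a normal tail probability for a standardized mean, checked against Monte Carlo. This is a generic illustration (the standardized mean of Exponential(1) variables, not the paper's GWAS statistics), and the function name and all constants are assumptions for the example.

```python
import numpy as np
from scipy.stats import norm

def edgeworth_tail(x, n, skew, exkurt):
    """Two-term Edgeworth approximation to P(S_n > x) for the standardized
    mean of n i.i.d. variables with given skewness and excess kurtosis.
    Dropping both correction terms gives the plain normal tail."""
    phi, Phi = norm.pdf(x), norm.cdf(x)
    one = skew * (x**2 - 1) / (6 * np.sqrt(n))
    two = (exkurt * (x**3 - 3 * x) / 24
           + skew**2 * (x**5 - 10 * x**3 + 15 * x) / 72) / n
    return 1 - (Phi - phi * (one + two))

# Standardized mean of Exponential(1): skewness 2, excess kurtosis 6.
n, x = 50, 3.0
rng = np.random.default_rng(0)
sims = (rng.exponential(1, (200_000, n)).mean(axis=1) - 1) * np.sqrt(n)
print("normal tail    :", 1 - norm.cdf(x))
print("Edgeworth tail :", edgeworth_tail(x, n, 2.0, 6.0))
print("Monte Carlo    :", (sims > x).mean())
```

The Edgeworth-corrected tail tracks the Monte Carlo estimate much more closely than the plain normal tail, which is the accuracy question the paper studies at far smaller significance levels.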

2.
Multivariate control charts are used to monitor stochastic processes for changes and unusual observations. Hotelling's T² statistic is calculated for each new observation and an out-of-control signal is issued if it goes beyond the control limits. However, this classical approach becomes unreliable as the number of variables p approaches the number of observations n, and impossible when p exceeds n. In this paper, we devise an improvement to the monitoring procedure in high-dimensional settings. We regularise the covariance matrix to estimate the baseline parameter and incorporate a leave-one-out re-sampling approach to estimate the empirical distribution of future observations. An extensive simulation study demonstrates that the new method outperforms the classical Hotelling T² approach in power and maintains appropriate false positive rates. We demonstrate the utility of the method using a set of quality control samples collected to monitor a gas chromatography–mass spectrometry apparatus over a period of 67 days.
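One way such a scheme can be realized is sketched below, assuming a simple ridge penalty λI for the regularization and the empirical (1 − α) quantile of leave-one-out T² scores as the control limit. The paper's exact regularizer and resampling details may differ, and all parameter values here are illustrative.

```python
import numpy as np

def t2(x, mean, cov_inv):
    """Hotelling-type T^2 score of one observation."""
    d = x - mean
    return float(d @ cov_inv @ d)

def loo_control_limit(X, lam=0.1, alpha=0.01):
    """Leave-one-out T^2 scores against a ridge-regularized covariance;
    the empirical (1 - alpha) quantile serves as the control limit."""
    n, p = X.shape
    scores = []
    for i in range(n):
        rest = np.delete(X, i, axis=0)
        mu = rest.mean(axis=0)
        S = np.cov(rest, rowvar=False) + lam * np.eye(p)
        scores.append(t2(X[i], mu, np.linalg.inv(S)))
    return np.quantile(scores, 1 - alpha), np.array(scores)

rng = np.random.default_rng(1)
n, p = 40, 60                       # p > n: classical T^2 is undefined
baseline = rng.normal(size=(n, p))
limit, _ = loo_control_limit(baseline)
new_obs = rng.normal(size=p) + 0.8  # shifted observation
mu = baseline.mean(axis=0)
S = np.cov(baseline, rowvar=False) + 0.1 * np.eye(p)
print("limit:", round(limit, 1), " new T2:", round(t2(new_obs, mu, np.linalg.inv(S)), 1))
```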

3.
Case-cohort designs have been demonstrated to be an economical and efficient approach in large cohort studies when the measurement of some covariates on all individuals is expensive. Various methods have been proposed for case-cohort data when the dimension of covariates is smaller than the sample size. However, limited work has been done for high-dimensional case-cohort data, which are frequently collected in large epidemiological studies. In this paper, we propose a variable screening method for ultrahigh-dimensional case-cohort data under the framework of the proportional hazards model, which allows the covariate dimension to increase with the sample size at an exponential rate. Our procedure enjoys the sure screening property and ranking consistency under mild regularity conditions. We further extend this method to an iterative version to handle scenarios where some covariates are jointly important but are marginally unrelated or only weakly correlated with the response. The finite sample performance of the proposed procedure is evaluated via both simulation studies and an application to real data from a breast cancer study.
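The core mechanic, ranking covariates by a marginal utility and keeping the top few, can be sketched generically. The snippet below uses plain marginal correlation as a stand-in utility (the paper's utility is survival-specific and tailored to case-cohort data) together with the common cutoff d = n/log n; every choice here is an illustrative assumption.

```python
import numpy as np

def sis_rank(X, y, d):
    """Marginal (sure independence) screening: rank covariates by
    absolute marginal correlation with the response, keep the top d."""
    r = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(r)[::-1][:d]

rng = np.random.default_rng(12)
n, p = 200, 5000                    # ultrahigh-dimensional: p >> n
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)   # sparse truth
keep = sis_rank(X, y, d=int(n / np.log(n)))
print("active set {0, 3} retained:", {0, 3} <= set(keep.tolist()))
```

The sure screening property says the retained set contains the truly active covariates with probability tending to one; the iterative extension re-screens after adjusting for already selected covariates, to recover marginally weak but jointly important ones.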

4.
Clinical trials involving multiple time-to-event outcomes are increasingly common. In this paper, permutation tests for group differences in multivariate time-to-event data are proposed. Unlike other two-sample tests for multivariate survival data, the proposed tests attain the nominal type I error rate. A simulation study shows that the proposed tests outperform their competitors when the proportion of censored observations is sufficiently high. When censoring is low, naive tests such as Hotelling's T² are seen to outperform tests tailored to survival data. Computational and practical aspects of the proposed tests are discussed, and their use is illustrated by analyses of three publicly available datasets. Implementations of the proposed tests are available in an accompanying R package.
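The permutation mechanics behind such tests can be sketched in a few lines: permute the group labels, recompute the statistic, and take the exceedance proportion as the p-value. The statistic below (difference in mean observed time) is a deliberately simple placeholder, not the paper's multivariate survival statistic.

```python
import numpy as np

def perm_test(stat, x_group, y_group, n_perm=5000, seed=0):
    """Generic two-sample permutation test: permuting group labels
    gives an exact-level p-value for any choice of statistic."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x_group, y_group])
    n_x = len(x_group)
    observed = stat(x_group, y_group)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(stat(pooled[:n_x], pooled[n_x:])) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)

diff_mean = lambda a, b: a.mean() - b.mean()   # placeholder statistic
rng = np.random.default_rng(2)
x = rng.exponential(1.0, 30)     # group 1 event times
y = rng.exponential(1.5, 30)     # group 2 event times
print("p-value:", perm_test(diff_mean, x, y))
```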

5.
In this paper, we propose a smoothed Q-learning algorithm for estimating optimal dynamic treatment regimes. In contrast to the Q-learning algorithm, in which nonregular inference is involved, we show that, under the assumptions adopted in this paper, the proposed smoothed Q-learning estimator is asymptotically normally distributed even when the Q-learning estimator is not, and that its asymptotic variance can be consistently estimated. As a result, inference based on the smoothed Q-learning estimator is standard. We derive the optimal smoothing parameter and propose a data-driven method for estimating it. The finite sample properties of the smoothed Q-learning estimator are studied and compared with several existing estimators, including the Q-learning estimator, via an extensive simulation study. We illustrate the new method by analyzing data from the Clinical Antipsychotic Trials of Intervention Effectiveness–Alzheimer's Disease (CATIE-AD) study.
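The source of the nonregularity is the hard max over treatments inside the Q-learning pseudo-outcome. The sketch below fits a stage-2 Q-function by least squares and replaces the max with a generic log-sum-exp smoothing at temperature h; the paper derives its own smoothing and an optimal h, so this is only a stand-in, and the simulated model is hypothetical.

```python
import numpy as np

def soft_max(q, h):
    """Smooth surrogate for max_a q_a (log-sum-exp at temperature h);
    the gap to the hard max is bounded by h * log(#actions)."""
    return h * np.log(np.exp(q / h).sum(axis=-1))

rng = np.random.default_rng(11)
n = 500
s = rng.normal(size=n)                        # stage-2 state
a = rng.choice([0.0, 1.0], size=n)            # stage-2 treatment
y = 1.0 + 0.5 * s + a * (0.8 * s) + rng.normal(size=n)   # outcome

# Stage-2 Q-function by least squares: Q2(s,a) = b0 + b1*s + a*(c0 + c1*s)
X = np.column_stack([np.ones(n), s, a, a * s])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
q = np.column_stack([X[:, :2] @ beta[:2],                         # a = 0
                     X[:, :2] @ beta[:2] + beta[2] + beta[3] * s])  # a = 1

hard = q.max(axis=1)          # non-smooth pseudo-outcome: nonregular inference
smooth = soft_max(q, h=0.1)   # smooth pseudo-outcome: standard inference
print("max |hard - smooth| gap:", round(np.abs(hard - smooth).max(), 3))
```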

6.
With the rapid growth of modern technology, many biomedical studies are being conducted to collect massive datasets with volumes of multi-modality imaging, genetic, neurocognitive and clinical information from increasingly large cohorts. Simultaneously extracting and integrating rich and diverse heterogeneous information in neuroimaging and/or genomics from these big datasets could transform our understanding of how genetic variants impact brain structure and function, cognitive function and brain-related disease risk across the lifespan. Such understanding is critical for diagnosis, prevention and treatment of numerous complex brain-related disorders (e.g., schizophrenia and Alzheimer's disease). However, the development of analytical methods for the joint analysis of both high-dimensional imaging phenotypes and high-dimensional genetic data, a big data squared (BD²) problem, presents major computational and theoretical challenges for existing analytical methods. Besides the high-dimensional nature of BD², various neuroimaging measures often exhibit strong spatial smoothness and dependence and genetic markers may have a natural dependence structure arising from linkage disequilibrium. We review some recent developments of various statistical techniques for imaging genetics, including massive univariate and voxel-wise approaches, reduced rank regression, mixture models and group sparse multi-task regression. By doing so, we hope that this review may encourage others in the statistical community to enter into this new and exciting field of research. The Canadian Journal of Statistics 47: 108–131; 2019 © 2019 Statistical Society of Canada

7.
A consistent approach to the problem of testing non-correlation between two univariate infinite-order autoregressive models was proposed by Hong (1996). His test is based on a weighted sum of squares of residual cross-correlations, with weights depending on a kernel function. In this paper, the author follows Hong's approach to test non-correlation of two cointegrated (or partially non-stationary) ARMA time series. The test of Pham, Roy & Cédras (2003) may be seen as a special case of his approach, as it corresponds to the choice of a truncated uniform kernel. The proposed procedure remains valid for testing non-correlation between two stationary invertible multivariate ARMA time series. The author derives the asymptotic distribution of his test statistics under the null hypothesis and proves that his procedures are consistent. He also studies the level and power of his proposed tests in finite samples through simulation. Finally, he presents an illustration based on real data.
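The shape of such a statistic, a kernel-weighted sum of squared cross-correlations Q = n Σⱼ k(j/p)² r²(j), is easy to sketch. The code below uses a Bartlett kernel on two white-noise series; it omits the centering and standardization needed for the asymptotic null distribution, and the kernel, bandwidth and data are all illustrative choices.

```python
import numpy as np

def bartlett(z):
    """Bartlett kernel k(z) = (1 - |z|)_+ ."""
    return np.maximum(1 - np.abs(z), 0.0)

def weighted_cross_corr_stat(a, b, bandwidth):
    """Kernel-weighted sum of squared cross-correlations:
    Q = n * sum_j k(j / bandwidth)^2 * r_ab(j)^2 over lags |j| < n."""
    n = len(a)
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    q = 0.0
    for j in range(-(n - 1), n):
        if j >= 0:
            r = np.dot(a[j:], b[:n - j]) / n   # r(j), j >= 0
        else:
            r = np.dot(a[:n + j], b[-j:]) / n  # r(j), j < 0
        q += bartlett(j / bandwidth) ** 2 * r ** 2
    return n * q

rng = np.random.default_rng(3)
e1, e2 = rng.normal(size=500), rng.normal(size=500)
print("independent series Q:", round(weighted_cross_corr_stat(e1, e2, 10), 2))
print("correlated series  Q:", round(weighted_cross_corr_stat(e1, 0.6 * e1 + 0.8 * e2, 10), 2))
```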

8.
This paper presents a non-parametric method for estimating the conditional density associated with the jump rate of a piecewise-deterministic Markov process. In our framework, the estimation requires only a single observation of the process over a long time interval. Our method relies on a generalization of Aalen's multiplicative intensity model. We prove the uniform consistency of our estimator under some reasonable assumptions related to the primitive characteristics of the process. A simulation study illustrates the behaviour of our estimator.

9.
We consider hypothesis testing problems for low-dimensional coefficients in a high-dimensional additive hazards model. A variance reduced partial profiling estimator (VRPPE) is proposed and its asymptotic normality is established, which enables us to test the significance of each single coefficient when the data dimension is much larger than the sample size. Based on the p-values obtained from the proposed test statistics, we then apply a multiple testing procedure to identify significant coefficients and show that the false discovery rate can be controlled at the desired level. The proposed method is also extended to testing a low-dimensional sub-vector of coefficients. The finite sample performance of the proposed testing procedure is evaluated via simulation studies. We also apply it to two real data sets, one focusing on testing low-dimensional coefficients and the other on identifying significant coefficients through the proposed multiple testing procedure.

10.
The stratified Cox model is commonly used for stratified clinical trials with time-to-event endpoints. The estimated log hazard ratio is approximately a weighted average of the corresponding stratum-specific Cox model estimates using inverse-variance weights; the latter are optimal only under the (often implausible) assumption of a constant hazard ratio across strata. Focusing on trials with limited sample sizes (50–200 subjects per treatment), we propose an alternative approach in which stratum-specific estimates are obtained using a refined generalized logrank (RGLR) approach and then combined using either sample size or minimum risk weights for overall inference. Our proposal extends the work of Mehrotra et al. to incorporate the RGLR statistic, which outperforms the Cox model in the setting of proportional hazards and small samples. This work also entails the development of a remarkably accurate plug-in formula for the variance of RGLR-based estimated log hazard ratios. We demonstrate using simulations that, when there is a treatment-by-stratum interaction, our proposed two-step RGLR analysis delivers notably better results than the stratified Cox model analysis, with smaller estimation bias, smaller mean squared error and larger power, and that the two perform similarly when there is no interaction. Additionally, our method controls the type I error rate in small samples, while the stratified Cox model does not. We illustrate our method using data from a clinical trial comparing two treatments for colon cancer.
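The combining step itself is simple to show: given stratum-specific log hazard ratio estimates, inverse-variance weights (optimal only under a common hazard ratio) and sample-size weights generally give different overall estimates. The numbers below are purely hypothetical, and in the paper's approach the stratum estimates would come from the RGLR procedure rather than Cox fits.

```python
import numpy as np

# Hypothetical stratum-specific log hazard ratio estimates, their
# variances, and stratum sample sizes (illustrative numbers only).
log_hr  = np.array([-0.50, -0.10,  0.20])
var     = np.array([ 0.09,  0.04,  0.16])
n_strat = np.array([   60,    90,    50], dtype=float)

def combine(estimates, weights):
    """Weighted average of stratum estimates with normalized weights."""
    w = weights / weights.sum()
    return np.sum(w * estimates), np.sum(w**2 * var)  # estimate, variance

iv_est, iv_var = combine(log_hr, 1 / var)   # inverse-variance weights
ss_est, ss_var = combine(log_hr, n_strat)   # sample-size weights
print(f"inverse-variance: {iv_est:.3f} (var {iv_var:.4f})")
print(f"sample-size     : {ss_est:.3f} (var {ss_var:.4f})")
```

When the hazard ratio truly varies across strata, the inverse-variance combination chases precision rather than the estimand, which is the motivation for the sample-size and minimum-risk weights.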

11.
We propose a new model for regression and dependence analysis for spatial data with possibly heavy tails and an asymmetric marginal distribution. We first propose a stationary process with t marginals obtained by scale mixing a Gaussian process with an inverse square root process with Gamma marginals. We then generalize this construction by considering a skew-Gaussian process, thus obtaining a process with skew-t marginal distributions. For the proposed (skew) t process, we study the second-order and geometrical properties and, in the t case, provide analytic expressions for the bivariate distribution. In an extensive simulation study, we investigate the use of the weighted pairwise likelihood as a method of estimation for the t process. Moreover, we compare the performance of the optimal linear predictor of the t process with that of the optimal Gaussian predictor. Finally, the effectiveness of our methodology is illustrated by analyzing a georeferenced dataset on maximum temperatures in Australia.
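The scale-mixture construction is easy to simulate in simplified form: draw a Gaussian process Z, then divide by the square root of Gamma(ν/2, rate ν/2) variables to obtain t_ν marginals. For brevity, the mixing variables below are i.i.d. across sites, whereas the paper uses a correlated mixing process; the covariance function, ν, and site layout are all illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import kurtosis

rng = np.random.default_rng(4)
sites = rng.uniform(0, 10, size=(300, 2))        # spatial locations
C = np.exp(-cdist(sites, sites) / 2.0)           # exponential covariance
Z = rng.multivariate_normal(np.zeros(300), C)    # Gaussian process draw

nu = 5.0
G = rng.gamma(nu / 2, 2 / nu, size=300)          # Gamma(nu/2, rate nu/2)
Y = Z / np.sqrt(G)                               # t_nu marginals at each site

# Sample excess kurtosis (noisy for one draw, but the heavy tail shows):
print("excess kurtosis, Gaussian draw :", round(kurtosis(Z), 2))
print("excess kurtosis, t-process draw:", round(kurtosis(Y), 2))
```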

12.
Seamless phase II/III clinical trials are conducted in two stages, with treatment selection at the first stage. In the first stage, patients are randomized to a control or to one of k > 1 experimental treatments. At the end of this stage, interim data are analysed and a decision is made concerning which experimental treatment should continue to the second stage. If the primary endpoint is observable only after some period of follow-up, data may be available at the interim analysis on some early outcome for a larger number of patients than those for whom the primary endpoint is available. These early endpoint data can thus be used for treatment selection. For two previously proposed approaches, the power has been shown to be greater for one method or the other depending on the true treatment effects and correlations. We propose a new approach that builds on the previously proposed approaches and uses data available at the interim analysis to estimate these parameters and then, on the basis of these estimates, chooses the treatment selection method with the highest probability of correctly selecting the most effective treatment. This method is shown to perform well compared with the two previously described methods for a wide range of true parameter values. In most cases, its performance is similar to, and in some cases better than, that of either of the two previously proposed methods. © 2014 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.

13.
We discuss a class of difference-based estimators for the autocovariance in nonparametric regression when the signal is discontinuous and the errors form a stationary m-dependent process. These estimators circumvent the particularly challenging task of pre-estimating such an unknown regression function. We provide finite-sample expressions for their mean squared errors for piecewise constant signals and Gaussian errors. Based on this, we derive bias-optimized estimates that do not depend on the unknown autocovariance structure. Notably, for positively correlated errors, the part of the variance of our estimators that depends on the signal is minimal as well. Further, we provide sufficient conditions for $\sqrt{n}$-consistency; this result is extended to piecewise Hölder regression with non-Gaussian errors. We combine our bias-optimized autocovariance estimates with a projection-based approach and derive covariance matrix estimates, a method that is of independent interest. An R package, several simulations and an application to biophysical measurements complement this paper.
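The basic difference-based trick can be sketched directly: under a piecewise constant signal, squared differences of the observations estimate functions of the error autocovariance, while each jump contaminates only a handful of the n terms. The estimator below is a naive version (lag-L differences with L > m for γ(0)), not the paper's bias-optimized construction; the MA(1) setup is an illustrative assumption.

```python
import numpy as np

def diff_autocov(y, h, L):
    """Difference-based autocovariance under a piecewise constant signal
    with m-dependent errors.  For L > m, E[(y_{i+L} - y_i)^2]/2 = gamma(0);
    for 0 < h <= m, E[(y_{i+h} - y_i)^2]/2 = gamma(0) - gamma(h).
    Each signal jump perturbs only O(1) of the differences."""
    gamma0 = np.mean((y[L:] - y[:-L]) ** 2) / 2
    if h == 0:
        return gamma0
    return gamma0 - np.mean((y[h:] - y[:-h]) ** 2) / 2

# Piecewise constant signal with 1-dependent MA(1) errors.
rng = np.random.default_rng(5)
n, theta = 2000, 0.5
e = rng.normal(size=n + 1)
errors = e[1:] + theta * e[:-1]                  # gamma(0)=1.25, gamma(1)=0.5
signal = np.where(np.arange(n) < n // 2, 0.0, 3.0)   # one jump, no pre-estimation
y = signal + errors
print("gamma(0) est:", round(diff_autocov(y, 0, L=5), 3))
print("gamma(1) est:", round(diff_autocov(y, 1, L=5), 3))
```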

14.
In the analysis of semi-competing risks data, interest lies in estimation and inference with respect to a so-called non-terminal event, the observation of which is subject to a terminal event. Multi-state models are commonly used to analyse such data, with covariate effects on the transition/intensity functions typically specified via the Cox model and dependence between the non-terminal and terminal events specified, in part, by a unit-specific shared frailty term. To ensure identifiability, the frailties are typically assumed to arise from a parametric distribution, specifically a Gamma distribution with mean 1.0 and variance, say, σ². When the frailty distribution is misspecified, however, the resulting estimator is not guaranteed to be consistent, with the extent of asymptotic bias depending on the discrepancy between the assumed and true frailty distributions. In this paper, we propose a novel class of transformation models for semi-competing risks analysis that permits the non-parametric specification of the frailty distribution. To ensure identifiability, the class restricts to parametric specifications of the transformation and the error distribution; the latter are flexible, however, and cover a broad range of possible specifications. We also derive the semi-parametric efficient score under the complete data setting and propose a non-parametric score imputation method to handle right censoring; consistency and asymptotic normality of the resulting estimators are derived, and small-sample operating characteristics are evaluated via simulation. Although the proposed semi-parametric transformation model and non-parametric score imputation method are motivated by the analysis of semi-competing risks data, they are broadly applicable to any analysis of multivariate time-to-event outcomes in which a unit-specific shared frailty is used to account for correlation. Finally, the proposed model and estimation procedures are applied to a study of hospital readmission among patients diagnosed with pancreatic cancer.

15.
This paper describes an approach for calculating the sample size for population pharmacokinetic experiments that involve hypothesis testing based on multi-group comparisons detecting differences in parameters between groups under mixed-effects modelling. This approach extends what has been described for generalized linear models and for nonlinear population pharmacokinetic models involving only binary covariates to more complex nonlinear population pharmacokinetic models. The structural nonlinear model is linearized around the random effects to obtain the marginal model, and the hypothesis testing involving model parameters is based on Wald's test. This approach provides an efficient and fast method for calculating the sample size for hypothesis testing in population pharmacokinetic models. The approach can also handle different design problems, such as unequal allocation of subjects to groups and unbalanced sampling times between and within groups. Following application to a one-compartment intravenous bolus dose model involving three different hypotheses under different scenarios, the results showed good agreement between the power obtained from NONMEM simulations and the nominal power. Copyright © 2009 John Wiley & Sons, Ltd.
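Once a Wald test is adopted, the power calculation reduces to a noncentral chi-square tail probability, which the short sketch below computes. The effect size and the assumed SE scaling with n are hypothetical placeholders, not values from the paper, and the NONMEM linearization step is not reproduced here.

```python
import numpy as np
from scipy.stats import chi2, ncx2

def wald_power(theta, se, alpha=0.05, df=1):
    """Power of a Wald test of H0: theta = 0, given the anticipated
    effect and the standard error implied by the design."""
    crit = chi2.ppf(1 - alpha, df)       # rejection threshold
    ncp = (theta / se) ** 2              # noncentrality under H1
    return ncx2.sf(crit, df, ncp)

# Hypothetical example: a 0.2 difference in a PK parameter between
# groups, with an assumed SE scaling of 0.35 / sqrt(n) per group.
for n in (20, 40, 80):
    se = 0.35 / np.sqrt(n)
    print(f"n per group = {n:3d}: power = {wald_power(0.2, se):.3f}")
```

Inverting this relationship (finding the n at which the power first reaches the target) gives the sample size, which is the calculation the paper carries out for the linearized mixed-effects model.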

16.
Two-stage designs are very useful in clinical trials for evaluating the validity of a specific treatment regimen. When the trial is allowed to continue to the second stage, the method used to estimate the response rate based on the results of both stages is critical for the subsequent design. The often-used sample proportion has an evident upward bias, whereas the maximum likelihood estimator and the moment estimator tend to underestimate the response rate. A mean-square error weighted estimator is considered here; its performance is thoroughly investigated via Simon's optimal and minimax designs and Shuster's design. Compared with the sample proportion, the proposed method has a smaller bias, and compared with the maximum likelihood estimator, it has a smaller mean-square error. Copyright © 2010 John Wiley & Sons, Ltd.
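The upward bias of the sample proportion is easy to demonstrate by simulation: conditioning on passing the stage-1 continuation rule truncates away the low-response outcomes. The design constants below are illustrative (not an actual Simon design from the paper), and the MSE-weighted estimator itself is not implemented here.

```python
import numpy as np

# Simon-type two-stage rule (illustrative numbers): enroll n1 patients,
# continue to stage 2 only if responses exceed r1, then enroll n2 more.
n1, r1, n2, p_true = 15, 3, 20, 0.25
rng = np.random.default_rng(6)

x1 = rng.binomial(n1, p_true, size=200_000)   # stage-1 responses
go = x1 > r1                                  # trials that continue
x2 = rng.binomial(n2, p_true, size=200_000)   # stage-2 responses
p_hat = (x1[go] + x2[go]) / (n1 + n2)         # sample proportion, continuers
print("P(continue)        :", round(go.mean(), 3))
print("E[p_hat | continue]:", round(p_hat.mean(), 4), " vs true", p_true)
```

The conditional mean of the sample proportion lands well above the true response rate, which is the bias the MSE-weighted estimator is designed to trade off against variance.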

17.
Time-to-event data have been extensively studied in many areas. Although multiple time scales are often observed, commonly used methods are based on a single time scale. Analysing time-to-event data on two time scales can offer more extensive insight into the phenomenon. We introduce a non-parametric Bayesian intensity model to analyse two-dimensional point processes on Lexis diagrams. After a simple discretization of the two-dimensional process, we model the intensity by one-dimensional piecewise constant hazard functions parametrized by the change points and the corresponding hazard levels. Our prior distribution incorporates a built-in smoothing feature in two dimensions. We implement posterior simulation using the reversible jump Metropolis–Hastings algorithm and demonstrate the applicability of the method using both simulated and empirical survival data. Our approach outperforms commonly applied models by borrowing strength in two dimensions.

18.
Lachenbruch (1976, 2001) introduced two-part tests for the comparison of two means in zero-inflated continuous data. We extend this approach to compare k independent distributions (by comparing their means, either overall or via the departure from equal proportions of zeros and equal means of nonzero values) by introducing two tests: a two-part Wald test and a two-part likelihood ratio test. If the continuous part of the distributions is lognormal, then the two proposed test statistics have an asymptotic chi-square distribution with $2(k-1)$ degrees of freedom. A simulation study was conducted to compare the performance of the proposed tests with several well-known tests such as ANOVA, Welch (1951), Brown & Forsythe (1974), Kruskal–Wallis, and the one-part Wald test proposed by Tu & Zhou (1999). Results indicate that the proposed tests maintain the nominal type I error and consistently have the best power among all tests compared. An application to rainfall data is provided as an example. The Canadian Journal of Statistics 39: 690–702; 2011. © 2011 Statistical Society of Canada
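A simplified version of the two-part construction is sketched below: a Pearson chi-square on the zero/nonzero counts (k − 1 df) plus a Wald statistic on the means of the log nonzero values (k − 1 df), summed to give a 2(k − 1)-df reference distribution. The paper's exact Wald and likelihood ratio statistics differ in their details, and the simulated data are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def two_part_stat(groups):
    """Two-part Wald-type statistic for k zero-inflated samples:
    chi-square for equal zero proportions + chi-square for equal means
    of log nonzero values; ~ chi2 with 2(k-1) df under H0."""
    k = len(groups)
    # Part 1: Pearson chi-square on the 2 x k zero / nonzero table.
    n = np.array([len(g) for g in groups], float)
    z = np.array([(g == 0).sum() for g in groups], float)
    p0 = z.sum() / n.sum()
    ez, enz = n * p0, n * (1 - p0)
    w1 = (((z - ez) ** 2) / ez + ((n - z - enz) ** 2) / enz).sum()
    # Part 2: Wald statistic on the means of the log nonzero values.
    logs = [np.log(g[g > 0]) for g in groups]
    m = np.array([len(l) for l in logs], float)
    mu = np.array([l.mean() for l in logs])
    v = np.array([l.var(ddof=1) for l in logs])
    w = m / v
    mu_bar = (w * mu).sum() / w.sum()
    w2 = (w * (mu - mu_bar) ** 2).sum()
    stat = w1 + w2
    return stat, chi2.sf(stat, 2 * (k - 1))

rng = np.random.default_rng(7)
draw = lambda pz, mu, n: np.where(rng.random(n) < pz, 0.0,
                                  rng.lognormal(mu, 1.0, n))
samples = [draw(0.3, 0.0, 100), draw(0.3, 0.0, 100), draw(0.5, 0.5, 100)]
print("stat = %.2f, p = %.4f" % two_part_stat(samples))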

19.
This paper considers inference for both spatial lattice data, with a possibly irregularly shaped sampling region, and non-lattice data, by extending the recently proposed self-normalization (SN) approach from stationary time series to the spatial setting. A nice feature of the SN method is that it avoids the choice of tuning parameters, which are usually required for other non-parametric inference approaches. The extension is non-trivial, as spatial data have no natural one-directional time ordering. The SN-based inference is convenient to implement and is shown through simulation studies to provide more accurate coverage than the widely used subsampling approach. We also illustrate the idea of SN using a real data example.
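In its original time-series form, self-normalization replaces a bandwidth-dependent long-run variance estimator with a normalizer built from recursive partial sums, at the cost of a nonstandard limit distribution. The sketch below shows the one-dimensional construction and simulates the limit's critical value; the spatial extension in the paper is substantially more involved, and the MA(1) example is an assumption for illustration.

```python
import numpy as np

def sn_statistic(x, mu0):
    """Self-normalized statistic for the mean of a stationary series:
    W = n (xbar - mu0)^2 / V, with V built from partial sums of
    deviations.  No bandwidth or tuning parameter is needed."""
    n = len(x)
    xbar = x.mean()
    partial = np.cumsum(x - xbar)
    v = (partial ** 2).sum() / n ** 2
    return n * (xbar - mu0) ** 2 / v

def sn_critical(alpha=0.05, n_sim=10_000, m=1000, seed=8):
    """Simulate the nonstandard limit B(1)^2 / int_0^1 (B(r)-rB(1))^2 dr."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_sim):
        b = np.cumsum(rng.normal(size=m)) / np.sqrt(m)   # Brownian path
        bridge = b - (np.arange(1, m + 1) / m) * b[-1]
        vals.append(b[-1] ** 2 / (bridge ** 2).mean())
    return np.quantile(vals, 1 - alpha)

rng = np.random.default_rng(9)
e = rng.normal(size=1001)
x = 0.5 + (e[1:] + 0.6 * e[:-1])          # MA(1) series with mean 0.5
print("SN stat at mu0=0.5:", round(sn_statistic(x, 0.5), 2))
print("5% critical value :", round(sn_critical(), 2))
```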

20.
The Ising model is one of the simplest and most famous models of interacting systems. It was originally proposed to model ferromagnetic interactions in statistical physics and is now widely used to model spatial processes in many areas such as ecology, sociology and genetics, usually without testing its goodness of fit. Here, we propose various test statistics and an exact goodness-of-fit test for the finite-lattice Ising model. The theory of Markov bases has been developed in algebraic statistics for exact goodness-of-fit testing using a Monte Carlo approach. However, finding a Markov basis is often computationally intractable. Thus, we develop a Monte Carlo method for exact goodness-of-fit testing for the Ising model that avoids computing a Markov basis and also leads to better connectivity of the Markov chain and hence to faster convergence. We show how this method can be applied to analyze the spatial organization of receptors on the cell membrane.
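For orientation, the sketch below shows the simplest Monte Carlo ingredients: a sufficient statistic (the number of agreeing neighbour pairs), a single-spin Metropolis sampler, and a p-value comparing the observed statistic with sampled ones at a fixed β. The paper's exact test instead samples conditionally on sufficient statistics so that no parameter needs to be estimated; this unconditional version, with β assumed known, is only a simplification.

```python
import numpy as np

def neighbor_agreement(s):
    """Number of equal neighbouring pairs (horizontal + vertical),
    one possible test statistic for the lattice Ising model."""
    return (s[:, 1:] == s[:, :-1]).sum() + (s[1:, :] == s[:-1, :]).sum()

def ising_sample(L, beta, sweeps, rng):
    """Single-spin Metropolis sampler for an L x L Ising lattice with
    periodic boundaries; flip cost is beta * dH = 2*beta*s_ij*sum(nbrs)."""
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            if rng.random() < np.exp(-2 * beta * s[i, j] * nb):
                s[i, j] = -s[i, j]
    return s

rng = np.random.default_rng(10)
observed = ising_sample(12, 0.3, 50, rng)      # stand-in for real data
t_obs = neighbor_agreement(observed)
t_sim = [neighbor_agreement(ising_sample(12, 0.3, 50, rng)) for _ in range(99)]
p = (1 + sum(t >= t_obs for t in t_sim)) / 100
print("observed statistic:", t_obs, " Monte Carlo p:", round(p, 2))
```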
