Similar Articles
20 similar articles found (search time: 390 ms)
1.
A multi-sample test for equality of mean directions is developed for populations having Langevin-von Mises-Fisher distributions with a common unknown concentration. The proposed test statistic is a monotone transformation of the likelihood ratio. The high-concentration asymptotic null distribution of the test statistic is derived. In contrast to previously suggested high-concentration tests, the high-concentration asymptotic approximation to the null distribution of the proposed test statistic is also valid for large sample sizes with any fixed nonzero concentration parameter. Simulations of size and power show that the proposed test outperforms competing tests. An example with three-dimensional data from an anthropological study illustrates the practical application of the testing procedure.
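For a point of reference on this style of test, the classical high-concentration Watson-Williams-type F statistic for three-dimensional directional data can be sketched as below. This is not the paper's monotone-transformed likelihood ratio, only the standard resultant-length construction it improves on; the function name and interface are illustrative.

```python
import numpy as np

def watson_williams_sphere(samples):
    """High-concentration F-type test of equal mean directions for q
    samples of unit vectors in R^3 (classical Watson-Williams form).

    samples: list of (n_i, 3) arrays whose rows are unit vectors.
    Returns the F statistic, referred to F(2(q-1), 2(N-q)) under high
    concentration."""
    q = len(samples)
    N = sum(len(s) for s in samples)
    # sum of per-sample resultant lengths, and pooled resultant length
    sum_Ri = sum(np.linalg.norm(s.sum(axis=0)) for s in samples)
    R = np.linalg.norm(np.vstack(samples).sum(axis=0))
    return (N - q) * (sum_Ri - R) / ((q - 1) * (N - sum_Ri))
```

Under a common mean direction the per-sample resultants align and `sum_Ri - R` is near zero; under different mean directions the pooled resultant shrinks and the statistic grows.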

2.
We propose two new procedures based on multiple hypothesis testing for correct support estimation in high-dimensional sparse linear models. We prove that both procedures are powerful and do not require the sample size to be large. The first procedure tackles the atypical setting of ordered variable selection through an extension of a testing procedure previously developed in the context of a linear hypothesis. The second procedure is the main contribution of this paper. It enables data analysts to perform support estimation in the general high-dimensional framework of non-ordered variable selection. A thorough simulation study and applications to real datasets using the R package mht show that our non-ordered variable procedure produces excellent results in terms of correct support estimation as well as in terms of mean square error and false discovery rate, when compared to common methods such as the Lasso, the SCAD penalty, forward regression and the false discovery rate (FDR) procedure.
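The ordered-selection idea can be illustrated, very roughly, by sequential nested-model F-tests that stop at the first non-significant predictor. This sketch uses a fixed plug-in critical value rather than the paper's calibrated multiple-testing procedure, and all names are illustrative:

```python
import numpy as np

def ordered_selection(X, y, f_crit=10.0):
    """Sequential nested-model F-tests over pre-ordered predictors:
    add columns one at a time, stop at the first non-rejection.
    f_crit is an illustrative plug-in critical value, not a calibrated
    multiple-testing threshold."""
    n, p = X.shape

    def rss(k):
        # residual sum of squares of the no-intercept fit on the first k columns
        if k == 0:
            return float(y @ y)
        beta, *_ = np.linalg.lstsq(X[:, :k], y, rcond=None)
        r = y - X[:, :k] @ beta
        return float(r @ r)

    k = 0
    while k < p:
        rss0, rss1 = rss(k), rss(k + 1)
        if rss1 <= 0:          # perfect fit: accept the extra column
            k += 1
            continue
        F = (rss0 - rss1) / (rss1 / (n - k - 1))
        if F < f_crit:
            break
        k += 1
    return k  # estimated support size among the ordered predictors
```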

3.
We propose a multivariate functional response low-rank regression model with possible high-dimensional functional responses and scalar covariates. By expanding the slope functions on a set of sieve bases, we reconstruct the basis coefficients as a matrix. To estimate these coefficients, we propose an efficient procedure using nuclear norm regularization. We also derive error bounds for our estimates and evaluate our method using simulations. We further apply our method to the Human Connectome Project neuroimaging data to predict cortical surface motor task-evoked functional magnetic resonance imaging signals using various clinical covariates to illustrate the usefulness of our results.
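The nuclear-norm step can be illustrated by its proximal operator, singular-value soft-thresholding, which is the workhorse inside most nuclear-norm-regularized estimation procedures. This is a generic sketch of that one step, not the authors' full algorithm:

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: the proximal operator of the
    nuclear norm tau * ||.||_*. Shrinks every singular value by tau and
    truncates at zero, which encourages low-rank solutions."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt   # scale columns of U by the shrunken values
```

Applied inside a proximal-gradient loop, this operator drives small singular values of the coefficient matrix exactly to zero.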

4.
This paper is concerned with testing the equality of two high-dimensional spatial sign covariance matrices, with applications to testing the proportionality of two high-dimensional covariance matrices. Interestingly, these two testing problems are completely equivalent for the class of elliptically symmetric distributions. This paper develops a new test for the equality of two high-dimensional spatial sign covariance matrices based on the Frobenius norm of the difference between the two spatial sign covariance matrices. The asymptotic normality of the proposed test statistic is derived under the null and alternative hypotheses when the dimension and sample sizes both tend to infinity. Moreover, the asymptotic power function is also presented. Simulation studies show that the proposed test performs very well in a wide range of settings, including the case of large dimensions and small sample sizes.
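The basic object here is easy to compute: the spatial sign covariance matrix is the average outer product of direction vectors. A minimal sketch follows; the paper's statistic would then be the (bias-corrected) squared Frobenius norm of the difference of two such matrices, and the mean-centering used below is a simple plug-in (the spatial median is the more robust choice):

```python
import numpy as np

def spatial_sign_cov(X, center=None):
    """Spatial sign covariance matrix: average outer product of the
    unit-normalized centered observations. Rows of X are observations."""
    if center is None:
        center = X.mean(axis=0)  # plug-in; a spatial median is more robust
    Z = X - center
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    U = Z / np.where(norms == 0, 1.0, norms)  # guard exact-zero rows
    return U.T @ U / len(X)
```

A two-sample contrast in the spirit of the abstract is then `np.sum((spatial_sign_cov(X1) - spatial_sign_cov(X2))**2)`, suitably centered and scaled.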

5.
This paper proposes, implements and investigates a new non-parametric two-sample test for detecting stochastic dominance. We pose the question of detecting stochastic dominance in a non-standard way, motivated by existing evidence that standard formulations and the pertaining procedures may lead to serious errors in inference. The procedure that we introduce combines testing with model selection. More precisely, we reparametrize the testing problem in terms of Fourier coefficients of well-known comparison densities. The estimated Fourier coefficients are then used to form a signed smooth rank statistic. In this setting, the number of Fourier coefficients incorporated into the statistic is a smoothing parameter, which we determine via a flexible selection rule. We establish the asymptotic properties of the new test under the null and alternative hypotheses. The finite sample performance of the new solution is demonstrated through Monte Carlo studies and an application to a set of survival times.
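The building blocks (rank-based Fourier coefficients of a comparison density plus a data-driven truncation rule) can be sketched with a generic Ledwina-style data-driven smooth statistic. This is an unsigned illustration of the machinery, not the authors' signed statistic or their selection rule:

```python
import numpy as np

def smooth_test_stat(x, y, kmax=4):
    """Two-sample data-driven smooth test sketch: estimate the first kmax
    Fourier coefficients of the comparison density from pooled ranks of
    sample x, choose the truncation k by a Schwarz-type rule, and return
    the cumulative statistic N_k together with the selected k."""
    pooled = np.concatenate([x, y])
    # mid-ranks of x within the pooled sample, mapped into (0, 1)
    u = (np.searchsorted(np.sort(pooled), x, side="right") - 0.5) / len(pooled)
    # orthonormal Legendre system on (0, 1): phi_j(u) = sqrt(2j+1) P_j(2u-1)
    phi = [np.polynomial.legendre.Legendre.basis(j)(2 * u - 1) * np.sqrt(2 * j + 1)
           for j in range(1, kmax + 1)]
    b = np.array([f.mean() for f in phi])          # empirical Fourier coefficients
    n = len(x)
    cum = n * np.cumsum(b ** 2)                    # N_k statistics, k = 1..kmax
    k = int(np.argmax(cum - np.arange(1, kmax + 1) * np.log(n))) + 1
    return cum[k - 1], k
```

Under equal distributions the coefficients hover near zero; a location shift loads the first coefficient heavily and inflates the statistic.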

6.
Motivated by applications of Poisson processes for modelling periodic time-varying phenomena, we study a semi-parametric estimator of the period of the cyclic intensity function of a non-homogeneous Poisson process. No parametric assumptions are made on the intensity function, which is treated as an infinite-dimensional nuisance parameter. We propose a new family of estimators for the period of the intensity function, address the identifiability and consistency issues, and present simulations which demonstrate good performance of the proposed estimation procedure in practice. We compare our method to competing methods on synthetic data and apply it to a real data set from a call center.
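A crude baseline for this estimation problem is a grid search over candidate periods, scoring each by how non-uniform the phase-folded event histogram looks. This chi-square-type folding criterion is only an illustration of the target quantity, not the semi-parametric estimator of the paper:

```python
import numpy as np

def estimate_period(times, candidates, nbins=8):
    """Grid-search period estimator for a cyclic Poisson intensity:
    return the candidate period whose phase-folded event histogram is
    most non-uniform (chi-square-type criterion)."""
    times = np.asarray(times, float)
    best, best_score = None, -np.inf
    for p in candidates:
        phases = np.mod(times, p) / p              # fold events at period p
        counts, _ = np.histogram(phases, bins=nbins, range=(0.0, 1.0))
        expected = len(times) / nbins
        score = np.sum((counts - expected) ** 2) / expected
        if score > best_score:
            best, best_score = p, score
    return best
```

At the true period the events pile up in a few phase bins; at a wrong period the folding smears them out and the score collapses.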

7.
Sample covariance matrices play a central role in numerous popular statistical methodologies, for example principal component analysis, Kalman filtering and independent component analysis. However, modern random matrix theory indicates that, when the dimension of a random vector is not negligible with respect to the sample size, the sample covariance matrix deviates significantly from the underlying population covariance matrix. New estimation tools are therefore needed in such high-dimensional settings to recover the characteristics of the population covariance matrix from the observed sample covariance matrix. We propose a novel solution to this problem based on the method of moments. When the parametric dimension of the population spectrum is finite and known, we prove that the proposed estimator is strongly consistent and asymptotically Gaussian. Otherwise, we combine the first estimation method with a cross-validation procedure to select the unknown model dimension. Simulation experiments demonstrate the consistency of the proposed procedure. We also indicate possible extensions of the proposed estimator to the case where the population spectrum has a density.

8.
In this paper, we study the effects of noise on bipower variation, realized volatility (RV) and testing for co-jumps in high-frequency data under the small noise framework. We first establish asymptotic properties of bipower variation in this framework. In the presence of small noise, RV is asymptotically biased, and an additional asymptotic conditional variance term appears in its limit distribution. We also propose consistent estimators for the asymptotic variances of RV. Second, we derive the asymptotic distribution of the test statistic proposed in (Ann. Stat. 37, 1792-1838) under the presence of small noise for testing the presence of co-jumps in a two-dimensional Itô semimartingale. In contrast to the setting in (Ann. Stat. 37, 1792-1838), we show that additional asymptotic variance terms appear, and we propose consistent estimators for these variances in order to make the test feasible. Simulation experiments show that our asymptotic results give reasonable approximations in finite samples.
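The two basic estimators named in the abstract are one-liners; a noise-free sketch shows why their difference isolates jumps (RV picks up jumps, bipower variation is robust to them). This illustrates only the standard definitions, not the paper's small-noise corrections:

```python
import numpy as np

def realized_volatility(prices):
    """Sum of squared log-returns; consistent for integrated variance
    without jumps, and biased upward under microstructure noise."""
    r = np.diff(np.log(prices))
    return np.sum(r ** 2)

def bipower_variation(prices):
    """(pi/2) times the sum of products of adjacent absolute log-returns;
    robust to jumps, so RV - BPV estimates the jump contribution."""
    r = np.abs(np.diff(np.log(prices)))
    return (np.pi / 2) * np.sum(r[1:] * r[:-1])
```

A single large jump enters RV quadratically but touches only two cross-terms of BPV, so RV exceeds BPV on a jump path.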

9.
Case-cohort designs have been demonstrated to be an economical and efficient approach in large cohort studies when the measurement of some covariates on all individuals is expensive. Various methods have been proposed for case-cohort data when the dimension of covariates is smaller than the sample size. However, limited work has been done for high-dimensional case-cohort data, which are frequently collected in large epidemiological studies. In this paper, we propose a variable screening method for ultrahigh-dimensional case-cohort data under the framework of the proportional hazards model, which allows the covariate dimension to increase with the sample size at an exponential rate. Our procedure enjoys the sure screening property and ranking consistency under mild regularity conditions. We further extend this method to an iterative version to handle scenarios where some covariates are jointly important but are marginally unrelated or only weakly correlated with the response. The finite sample performance of the proposed procedure is evaluated via both simulation studies and an application to real data from a breast cancer study.
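The screening idea itself is simple to sketch: rank covariates by a marginal utility and keep the top few. For case-cohort survival data the utility would come from a weighted partial likelihood; the sketch below substitutes a plain marginal-correlation screen on a generic response purely to illustrate the ranking step:

```python
import numpy as np

def marginal_screen(X, y, d):
    """Sure-screening sketch: rank covariates by absolute marginal
    correlation with the response and keep the indices of the top d."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return np.argsort(w)[::-1][:d]
```

The iterative extension mentioned in the abstract would alternate this screen with fitting on the retained set, so that covariates masked by marginal weakness can re-enter.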

10.
Multiple testing procedures defined by directed, weighted graphs have recently been proposed as an intuitive visual tool for constructing multiple testing strategies that reflect the often complex contextual relations between hypotheses in clinical trials. Many well-known sequentially rejective tests, such as (parallel) gatekeeping tests or hierarchical testing procedures, are special cases of the graph-based tests. We generalize these graph-based multiple testing procedures to adaptive trial designs with an interim analysis. These designs permit mid-trial design modifications based on unblinded interim data as well as external information, while providing strong familywise error rate control. To maintain the familywise error rate, the adaptation rule need not be prespecified in detail. Because the adaptive test does not require knowledge of the multivariate distribution of the test statistics, it is applicable in a wide range of scenarios, including trials with multiple treatment comparisons, endpoints or subgroups, or combinations thereof. Examples of adaptations are dropping of treatment arms, selection of subpopulations, and sample size reassessment. If, at the interim analysis, it is decided to continue the trial as planned, the adaptive test reduces to the originally planned multiple testing procedure; an adjusted test needs to be applied only if adaptations are actually implemented. The procedure is illustrated with a case study, and its operating characteristics are investigated by simulations. Copyright © 2014 John Wiley & Sons, Ltd.
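The non-adaptive graph-based procedure that this work generalizes follows a well-known recipe: reject any hypothesis whose p-value clears its current local level, then redistribute its weight along the graph edges and update the edges. A sketch of that base algorithm (weights `w`, transition matrix `G`), without the adaptive interim-analysis layer, is:

```python
import numpy as np

def graph_test(p, w, G, alpha=0.05):
    """Sequentially rejective graph-based multiple test (sketch of the
    standard algorithm). w: initial local weights summing to <= 1;
    G[i, j]: fraction of H_i's weight passed to H_j when H_i is rejected.
    Returns the set of rejected hypothesis indices."""
    w = np.asarray(w, float)
    G = np.asarray(G, float)
    active = set(range(len(p)))
    rejected = set()
    while True:
        cand = [i for i in sorted(active) if p[i] <= w[i] * alpha]
        if not cand:
            return rejected
        i = cand[0]
        active.remove(i)
        rejected.add(i)
        w2, G2 = w.copy(), G.copy()
        for j in active:
            w2[j] = w[j] + w[i] * G[i, j]          # pass on the freed weight
            for k in active:
                if j != k:                          # rewire edges around node i
                    d = 1.0 - G[j, i] * G[i, j]
                    G2[j, k] = (G[j, k] + G[j, i] * G[i, k]) / d if d > 0 else 0.0
        w, G = w2, G2
```

With two hypotheses, equal weights and edges of weight 1 in both directions, this reduces to Holm's procedure.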

11.
Statistical analyses of crossover clinical trials have mainly focused on assessing the treatment effect, carryover effect, and period effect. When a treatment-by-period interaction is plausible, it is important to test such interaction first before making inferences on differences among individual treatments. Considerably less attention has been paid to the treatment-by-period interaction, which has historically been aliased with the carryover effect in two-period or three-period designs. In this article, using data from a newly developed four-period crossover design, we propose a statistical method to compare the effects of two active drugs with respect to two response variables. We study estimation and hypothesis testing considering the treatment-by-period interaction. Constrained least squares is used to estimate the treatment effect, period effect, and treatment-by-period interaction. For hypothesis testing, we extend a general multivariate method for analyzing the crossover design with multiple responses. Results from simulation studies show that this method performs very well. We also illustrate how to apply our method to a real data problem.

12.
The internal pilot study design allows the sample size to be modified during an ongoing study based on a blinded estimate of the variance, thus maintaining trial integrity. Various blinded sample size re-estimation procedures have been proposed in the literature. We compare blinded sample size re-estimation procedures based on the one-sample variance of the pooled data with a blinded procedure that uses the randomization block information, with respect to bias and variance of the variance estimators, and the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re-estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re-estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows higher variability than the simple one-sample estimator, and so, in turn, does the sample size resulting from the related re-estimation procedure. This higher variability can lead to lower power, as demonstrated in the setting of noninferiority trials. In summary, the one-sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application. Copyright © 2013 John Wiley & Sons, Ltd.
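The recommended one-sample approach is short enough to sketch directly: estimate the variance from the pooled (blinded) interim data and plug it into the usual two-group sample-size formula. Quantiles are hard-coded for a two-sided 5% level and 80% power; this is a sketch of the idea, not a validated trial tool:

```python
import numpy as np

def blinded_sample_size(pooled, delta, z_alpha=1.96, z_beta=0.84):
    """Internal-pilot re-estimation sketch: the one-sample variance of the
    pooled (blinded) interim data is plugged into the standard formula
    n per group = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2."""
    s2 = np.var(pooled, ddof=1)   # blinded one-sample variance estimate
    n = 2 * s2 * (z_alpha + z_beta) ** 2 / delta ** 2
    return int(np.ceil(n))
```

Note the blinded one-sample variance is biased upward by the (unknown) treatment difference, which is exactly the bias-versus-variability trade-off the comparison in the abstract is about.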

13.
We consider a recurrent event wherein the inter-event times are independent and identically distributed with a common absolutely continuous distribution function F. In this article, interest is in the problem of testing the null hypothesis that F belongs to some parametric family where the q-dimensional parameter is unknown. We propose a general chi-squared test in which the cell boundaries are data dependent. An estimator of the parameter obtained by minimizing a quadratic form resulting from a properly scaled vector of differences between observed and expected frequencies is used to construct the test; this estimator is known as the minimum chi-square estimator. Large sample properties of the proposed test statistic are established using empirical process tools. A simulation study is conducted to assess the performance of the test under parameter misspecification, and our procedures are applied to air conditioning system failures from a fleet of Boeing 720 jet planes.
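The data-dependent-cells idea can be sketched for a fully specified null distribution: place cell boundaries at sample quantiles (so cells carry roughly equal counts) and compare observed counts with the probabilities the hypothesized cdf assigns to those cells. The minimum chi-square estimation of unknown parameters is omitted here; this is only the cell-construction step:

```python
import numpy as np

def chisq_data_dependent(x, cdf, k=5):
    """Chi-squared GOF statistic with data-dependent cell boundaries:
    the k cells are delimited by sample quantiles, and expected
    frequencies come from the hypothesized cdf."""
    x = np.asarray(x, float)
    n = len(x)
    # interior boundaries at the 1/k, 2/k, ... sample quantiles
    inner = np.quantile(x, np.linspace(0, 1, k + 1)[1:-1])
    obs = np.bincount(np.searchsorted(inner, x, side="right"), minlength=k)
    probs = np.diff(np.concatenate([[0.0], cdf(inner), [1.0]]))
    expected = n * probs
    return np.sum((obs - expected) ** 2 / expected)
```

Because the cells adapt to the sample, no cell is ever empty, which is the practical motivation for data-dependent boundaries.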

14.
A cancer clinical trial with an immunotherapy often has two special features: patients may be cured of the cancer, and the immunotherapy may start to take clinical effect only after a certain delay time. Existing testing methods may be inadequate for immunotherapy clinical trials because they do not appropriately take these two features into consideration at the same time, and hence have low power to detect the true treatment effect. In this paper, we propose a piecewise proportional hazards cure rate model with a random delay time to fit the data, and a new weighted log-rank test to detect the treatment effect of an immunotherapy over a chemotherapy control. We show that the proposed weight is nearly optimal under mild conditions. Our simulation study shows a substantial gain in power of the proposed test over existing tests, and robustness of the test under a misspecified weight. We also introduce a sample size calculation formula for designing immunotherapy clinical trials with the proposed weighted log-rank test.
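The weighted log-rank statistic itself is mechanical to compute; the paper's contribution is the near-optimal weight for the delayed-effect cure-rate setting. The sketch below implements a generic weighted log-rank z-statistic without tie corrections, and the default weight is constant (a delay-targeting weight such as `lambda t: float(t >= t0)` for a hypothetical delay `t0` would be substituted in):

```python
import numpy as np

def weighted_logrank(time, event, group, weight=lambda t: 1.0):
    """Weighted log-rank z-statistic (sketch; no correction for tied
    event times). group is 0/1; event is 1 for an observed event,
    0 for censoring."""
    time = np.asarray(time, float)
    event = np.asarray(event, bool)
    group = np.asarray(group, int)
    order = np.argsort(time)
    time, event, group = time[order], event[order], group[order]
    U = V = 0.0
    for i in range(len(time)):
        if not event[i]:
            continue
        at_risk = time >= time[i]
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        w = weight(time[i])
        U += w * ((group[i] == 1) - n1 / n)          # observed minus expected
        V += w * w * (n1 / n) * (1 - n1 / n)         # hypergeometric variance
    return U / np.sqrt(V) if V > 0 else 0.0
```

A negative z here means group 1 accumulates fewer early events than expected, i.e. better survival under this sign convention.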

15.
A joint estimation approach for multiple high-dimensional Gaussian copula graphical models is proposed, which achieves estimation robustness by exploiting non-parametric rank-based correlation coefficient estimators. Although we focus on continuous data in this paper, the proposed method can be extended to deal with binary or mixed data. Based on a weighted minimisation problem, the estimators can be obtained by implementing second-order cone programming. Theoretical properties of the procedure are investigated. We show that the proposed joint estimation procedure leads to a faster convergence rate than estimating the graphs individually. It is also shown that the proposed procedure achieves exact graph structure recovery with probability tending to 1 under certain regularity conditions. Besides the theoretical analysis, we conduct numerical simulations to compare the estimation and graph recovery performance of some state-of-the-art methods, including both joint estimation methods and methods that estimate the graphs individually. The proposed method is then applied to a gene expression data set, which illustrates its practical usefulness.
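The rank-based ingredient that gives the robustness is standard and easy to sketch: estimate each pairwise Gaussian-copula correlation by the sine transform of Kendall's tau, which is invariant to monotone transformations of the margins. A minimal O(n^2) version:

```python
import numpy as np

def kendall_tau(x, y):
    """Kendall's tau for two samples (plain O(n^2) sketch)."""
    n = len(x)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign((x[i] - x[j]) * (y[i] - y[j]))
    return 2 * s / (n * (n - 1))

def rank_correlation(x, y):
    """Rank-based estimator of the Gaussian-copula correlation:
    sin(pi/2 * tau). Robust to monotone marginal transformations."""
    return np.sin(np.pi / 2 * kendall_tau(x, y))
```

The matrix of these pairwise estimates is the plug-in input to the weighted minimisation / cone-programming step described in the abstract.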

16.
This article considers the problem of cardinality estimation in data stream applications. We present a statistical analysis of probabilistic counting algorithms, focusing on two techniques that use pseudo-random variates to form low-dimensional data sketches. We apply conventional statistical methods to compare probabilistic algorithms based on storing either selected order statistics, or random projections. We derive estimators of the cardinality in both cases, and show that the maximal-term estimator is recursively computable and has exponentially decreasing error bounds. Furthermore, we show that the estimators have comparable asymptotic efficiency, and explain this result by demonstrating an unexpected connection between the two approaches.
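The order-statistics flavour of probabilistic counting is easy to sketch: hash every item to a pseudo-uniform point in (0, 1) and keep only the running maximum. For n distinct items the maximum M has mean n/(n+1), motivating the estimator M/(1-M). A single maximum is very noisy; practical sketches average many independent replicates, which this toy version omits:

```python
import hashlib

def cardinality_estimate(stream):
    """Order-statistics cardinality sketch: track the maximum of the
    hashed values of the stream; duplicates hash identically, so they
    cannot move the maximum. Returns the point estimate M / (1 - M)."""
    M = 0.0
    for item in stream:
        h = int(hashlib.sha1(str(item).encode()).hexdigest(), 16)
        M = max(M, h / 2.0 ** 160)   # map the 160-bit digest into (0, 1)
    return M / (1.0 - M)
```

Note the estimator is "recursively computable" in exactly the sense of the abstract: each new item updates a single stored number.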

17.
In this paper, we investigate the problem of testing semiparametric hypotheses in locally stationary processes. The proposed method is based on an empirical version of the L2-distance between the true time-varying spectral density and its best approximation under the null hypothesis. As this approach only requires estimation of integrals of the time-varying spectral density and its square, we do not have to choose a smoothing bandwidth for the local estimation of the spectral density, in contrast to most other procedures discussed in the literature. Asymptotic normality of the test statistic is derived both under the null hypothesis and under the alternative. We also propose a bootstrap procedure to obtain critical values for small sample sizes. Additionally, we investigate the finite sample properties of the new method and compare it with the currently available procedures by means of a simulation study. Finally, we illustrate the performance of the new test in two data examples, one regarding log returns of the S&P 500 and the other a well-known series of weekly egg prices.

18.
This article proposes a class of weighted differences of averages (WDA) statistics to test and estimate possible change-points in variance for time series with weakly dependent blocks and dependent panel data without specific distributional assumptions. We derive the asymptotic distributions of the test statistics for testing the existence of a single variance change-point under the null and under local alternatives. We also study the consistency of the change-point estimator. Within the proposed class of WDA test statistics, a standardized WDA test is shown to have the best consistency rate and is recommended for practical use. An iterative binary searching procedure is suggested for estimating the locations of possible multiple change-points in variance, whose consistency is also established. Simulation studies are conducted to compare the detection power and number of false rejections of the proposed procedure to those of a cumulative sum (CUSUM) based test and a likelihood ratio-based test. Finally, we apply the proposed method to a stock index dataset and an unemployment rate dataset. Supplementary materials for this article are available online.
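The core contrast behind such tests can be sketched in a few lines: for every candidate split, compare the average squared observation before and after, standardized by a weight that accounts for the two segment lengths. This toy version uses a single fixed weight and independent-data scaling, not the paper's weight class or its dependence-robust calibration:

```python
import numpy as np

def variance_changepoint(x):
    """Change-in-variance sketch: for each split k, the weighted
    difference of the averages of x^2 before and after the split
    (a WDA/CUSUM-type contrast). Returns (argmax split, contrast)."""
    x = np.asarray(x, float)
    n = len(x)
    s = x ** 2
    best_k, best_val = None, -np.inf
    for k in range(2, n - 1):
        diff = s[:k].mean() - s[k:].mean()
        w = np.sqrt(k * (n - k) / n)      # standardizing weight
        val = abs(w * diff)
        if val > best_val:
            best_k, best_val = k, val
    return best_k, best_val
```

For multiple change-points, the binary-search idea in the abstract amounts to recursing this scan on the two sub-segments around the detected split.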

19.
This article proposes a variable selection procedure for partially linear models with right-censored data via penalized least squares. We apply the SCAD penalty to select significant variables and estimate unknown parameters simultaneously. The sampling properties for the proposed procedure are investigated. The rate of convergence and the asymptotic normality of the proposed estimators are established. Furthermore, the SCAD-penalized estimators of the nonzero coefficients are shown to have the asymptotic oracle property. In addition, an iterative algorithm is proposed to find the solution of the penalized least squares. Simulation studies are conducted to examine the finite sample performance of the proposed method.
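For reference, the SCAD penalty itself (Fan and Li's piecewise form) is worth writing out, since its shape (linear like the Lasso near zero, then tapering, then flat) is what delivers the oracle property cited above:

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty value at beta: lam*|b| for |b| <= lam, a quadratic
    taper on (lam, a*lam], and the constant lam^2*(a+1)/2 beyond a*lam,
    so large coefficients are not shrunk at all."""
    b = np.abs(beta)
    p1 = lam * b
    p2 = (2 * a * lam * b - b ** 2 - lam ** 2) / (2 * (a - 1))
    p3 = lam ** 2 * (a + 1) / 2
    return np.where(b <= lam, p1, np.where(b <= a * lam, p2, p3))
```

The flat tail is the key design choice: beyond `a*lam` the penalty adds no shrinkage, so large true coefficients are estimated as if unpenalized.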

20.
We investigate resampling methodologies for testing the null hypothesis that two samples of labelled landmark data in three dimensions come from populations with a common mean reflection shape or mean reflection size-and-shape. The investigation includes comparisons between (i) two different test statistics that are functions of the projection of the data onto tangent space, namely the James statistic and an empirical likelihood statistic; (ii) bootstrap and permutation procedures; and (iii) three methods for resampling under the null hypothesis, namely translating in tangent space, resampling using weights determined by empirical likelihood, and using a novel method to transform the original sample entirely within reflection shape space. We present results of extensive numerical simulations, on the basis of which we recommend a bootstrap test procedure that we expect to work well in practice. We demonstrate the procedure using a data set of human faces, testing whether humans in different age groups have a common mean face shape.
