Similar Articles
20 similar articles found.
1.
The median is a commonly used parameter to characterize biomarker data. In particular, with two vastly different underlying distributions, comparing medians provides different information than comparing means; however, very few tests for medians are available. We propose a series of two-sample median-specific tests using empirical likelihood methodology and investigate their properties. We present the technical details of incorporating the relevant constraints into the empirical likelihood function for in-depth median testing. An extensive Monte Carlo study shows that the proposed tests have excellent operating characteristics even in unfavourable situations such as non-exchangeability under the null hypothesis. We apply the proposed methods to analyze biomarker data from Western blot analysis to compare normal cells with bronchial epithelial cells from a case–control study. The Canadian Journal of Statistics 39: 671–689; 2011. © 2011 Statistical Society of Canada
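A background note on the key ingredient: the authors' two-sample constraints are not reproduced in this abstract, but the standard one-sample profile empirical likelihood for a candidate median m has a simple closed form (an Owen-style result, assuming no observations tie with m), recorded below.

```latex
% One-sample profile empirical likelihood for a candidate median m:
% with a = #{ X_i <= m } among n observations and no ties at m,
\[
  -2\log R(m) \;=\; 2\left[\, a\log\frac{2a}{n} \;+\; (n-a)\log\frac{2(n-a)}{n} \,\right],
\]
% which is asymptotically chi-squared with 1 degree of freedom at the true median.
```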

2.
Statistical procedures for the detection of a change in the dependence structure of a series of multivariate observations are studied in this work. The test statistics that are proposed are $L_1$, $L_2$, and $L_{\infty}$ distances computed from vectors of differences of Kendall's tau; two multivariate extensions of Kendall's measure of association are used. Since the distributions of these statistics under the null hypothesis of no change depend on the unknown underlying copula of the vectors, a procedure based on the multiplier central limit theorem is used for the computation of p-values; the method is shown to be valid both asymptotically and for moderate sample sizes. Alternative versions of the tests that take into account possible breakpoints in the marginal distributions are also investigated. Monte Carlo simulations show that the tests are powerful under many change-point scenarios. In addition, two estimators of the time of change are proposed and their efficiency is carefully studied. The methodologies are illustrated on simulated series from the Canadian Regional Climate Model. The Canadian Journal of Statistics 41: 65–82; 2013 © 2012 Statistical Society of Canada
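To fix ideas, a minimal bivariate sketch of the underlying statistic is given below: compare Kendall's tau before and after each candidate breakpoint and take the largest weighted discrepancy. This is a simplified $L_{\infty}$ version for two-dimensional data only; the paper's multivariate extensions of tau and its multiplier-bootstrap p-values are not reproduced, and the function name and weighting are illustrative choices.

```python
# Simplified L-infinity change-point statistic for dependence in a bivariate series.
import numpy as np
from scipy.stats import kendalltau

def tau_changepoint_statistic(x, y, min_seg=20):
    """Largest weighted difference of Kendall's tau over candidate breakpoints."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    stats = []
    for k in range(min_seg, n - min_seg):
        tau_left, _ = kendalltau(x[:k], y[:k])       # dependence before the candidate break
        tau_right, _ = kendalltau(x[k:], y[k:])      # dependence after the candidate break
        weight = (k / n) * (1 - k / n) * np.sqrt(n)  # CUSUM-type weighting (illustrative)
        stats.append(weight * abs(tau_left - tau_right))
    k_hat = int(np.argmax(stats)) + min_seg          # crude estimate of the time of change
    return max(stats), k_hat
```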

3.
The class $G^{\rho,\lambda}$ of weighted log-rank tests proposed by Fleming & Harrington [Fleming & Harrington (1991) Counting Processes and Survival Analysis, Wiley, New York] has been widely used in survival analysis and is nowadays unquestionably the established method for comparing, nonparametrically, k different survival functions based on right-censored survival data. This paper extends the $G^{\rho,\lambda}$ class to interval-censored data. First we introduce a new general class of rank-based tests; then we show the analogy to the above proposal of Fleming & Harrington. The asymptotic behaviour of the proposed tests is derived using an observed Fisher information approach and a permutation approach. Aiming to make this family of tests interpretable and useful for practitioners, we explain how to interpret different choices of weights and apply the tests to data from a cohort of intravenous drug users at risk for HIV infection. The Canadian Journal of Statistics 40: 501–516; 2012 © 2012 Statistical Society of Canada
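For reference, the weight function that defines the $G^{\rho,\lambda}$ class in the right-censored case is standard and is recalled below (with $\hat S$ the pooled Kaplan–Meier estimator); the paper's contribution is the extension of this family to interval-censored data.

```latex
% Fleming--Harrington weights for right-censored data:
\[
  w_{\rho,\lambda}(t) \;=\; \bigl\{\hat S(t^-)\bigr\}^{\rho}\,\bigl\{1-\hat S(t^-)\bigr\}^{\lambda},
  \qquad \rho,\lambda \ge 0 .
\]
% rho = lambda = 0 gives the log-rank test; rho = 1, lambda = 0 down-weights late
% differences (Prentice--Wilcoxon type); lambda > 0 emphasizes late differences.
```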

4.
This article deals with testing inference in the class of beta regression models with varying dispersion. We focus on inference in small samples. We perform a numerical analysis in order to evaluate the sizes and powers of different tests. We consider the likelihood ratio test, two adjusted likelihood ratio tests proposed by Ferrari and Pinheiro [Improved likelihood inference in beta regression, J. Stat. Comput. Simul. 81 (2011), pp. 431–443], the score test, the Wald test and bootstrap versions of the likelihood ratio, score and Wald tests. We perform tests on the parameters that index the mean submodel and also on the parameters in the linear predictor of the precision submodel. Overall, the numerical evidence favours the bootstrap tests. It is also shown that the score test is considerably less size-distorted than the likelihood ratio and Wald tests. An application that uses real (not simulated) data is presented and discussed.
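The bootstrap versions of the tests compared here presumably follow the usual parametric-bootstrap recipe; the generic form of such a bootstrap p-value is recorded below (the number of resamples B and the resampling scheme are implementation choices not specified in the abstract).

```latex
% Generic parametric-bootstrap p-value for a test statistic T (e.g. the LR statistic):
% fit the model under H0, simulate B pseudo-samples from that fit, recompute T on each.
\[
  p^{*} \;=\; \frac{1}{B}\sum_{b=1}^{B} \mathbf{1}\bigl\{T^{*}_{b} \ge T_{\mathrm{obs}}\bigr\}.
\]
```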

5.
Liu and Singh (1993, 2006) introduced a depth-based d-variate extension of the nonparametric two sample scale test of Siegel and Tukey (1960). Liu and Singh (2006) generalized this depth-based test for scale homogeneity of k ≥ 2 multivariate populations. Motivated by the work of Gastwirth (1965), we propose k sample percentile modifications of Liu and Singh's proposals. The test statistic is shown to be asymptotically normal when k = 2, and compares favorably with Liu and Singh (2006) if the underlying distributions are either symmetric with light tails or asymmetric. In the case of skewed distributions considered in this paper the power of the proposed tests can attain twice the power of the Liu–Singh test for d ≥ 1. Finally, in the k-sample case, it is shown that the asymptotic distribution of the proposed percentile modified Kruskal–Wallis type test is χ² with k − 1 degrees of freedom. Power properties of this k-sample test are similar to those for the proposed two sample one. The Canadian Journal of Statistics 39: 356–369; 2011 © 2011 Statistical Society of Canada

6.
A new test is proposed for the hypothesis of uniformity on bi-dimensional supports. The procedure is an adaptation of the “distance to boundary test” (DB test) proposed in Berrendero, Cuevas, & Vázquez-Grande (2006). This new version of the DB test, called the DBU test, allows us (as a novel, interesting feature) to deal with the case where the support S of the underlying distribution is unknown. This means that S is not specified in the null hypothesis so that, in fact, we test the null hypothesis that the underlying distribution is uniform on some support S belonging to a given class ${\cal C}$. We pay special attention to the case where ${\cal C}$ is either the class of compact convex supports or the (broader) class of compact λ-convex supports (also called r-convex or α-convex in the literature). The basic idea is to apply the DB test in a sort of plug-in version, where the support S is approximated by using methods of set estimation. The DBU method is analysed from both the theoretical and practical points of view, via some asymptotic results and a simulation study, respectively. The Canadian Journal of Statistics 40: 378–395; 2012 © 2012 Statistical Society of Canada
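The reference law that the distance-to-boundary idea rests on can be stated directly; the identity below is an immediate consequence of the definition of the uniform distribution (the DBU test replaces the unknown support S by a set estimate before applying it).

```latex
% If X is uniform on a compact support S and S_{-t} = { x in S : d(x, bd S) >= t }
% denotes the erosion of S at depth t, then
\[
  \Pr\bigl\{\, d(X,\partial S) \ge t \,\bigr\} \;=\; \frac{\mu\bigl(S_{-t}\bigr)}{\mu(S)},
  \qquad t \ge 0,
\]
% where mu is the Lebesgue measure; the test compares the empirical distribution of
% the observed boundary distances with this reference distribution.
```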

7.
This paper deals with a bias correction of Akaike's information criterion (AIC) for selecting variables in multivariate normal linear regression models when the true distribution of the observations is an unknown non-normal distribution. It is well known that the bias of AIC is $O(1)$, and there are a number of first-order bias-corrected AICs which improve the bias to $O(n^{-1})$, where $n$ is the sample size. A new information criterion is proposed by slightly adjusting the first-order bias-corrected AIC. Although the adjustment is achieved by merely using constant coefficients, the bias of the new criterion is reduced to $O(n^{-2})$. The variance of the new criterion is also improved. Through numerical experiments, we verify that our criterion is superior to the others. The Canadian Journal of Statistics 39: 126–146; 2011 © 2011 Statistical Society of Canada
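For orientation, the uncorrected criterion and the classical first-order correction for normal linear regression are recalled below; the criterion proposed in the paper adjusts such a first-order correction so that the bias is reduced to $O(n^{-2})$ even when the true distribution is non-normal.

```latex
% AIC for a model with k estimated parameters, and the classical first-order
% bias-corrected version (Sugiura; Hurvich & Tsai) for normal linear regression:
\[
  \mathrm{AIC} \;=\; -2\log L(\hat\theta) + 2k,
  \qquad
  \mathrm{AIC}_c \;=\; \mathrm{AIC} + \frac{2k(k+1)}{n-k-1}.
\]
```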

8.
9.
In this study, we investigate the finite sample properties of the optimal generalized method of moments estimator (OGMME) for a spatial econometric model with a first-order spatial autoregressive process in the dependent variable and the disturbance term (SARAR(1, 1) for short). We show that the estimated asymptotic standard errors for spatial autoregressive parameters can be substantially smaller than their empirical counterparts. Hence, we extend the finite sample variance correction methodology of Windmeijer (2005) to the OGMME for the SARAR(1, 1) model. Results from simulation studies indicate that the correction method improves the variance estimates in small samples and leads to more accurate inference for the spatial autoregressive parameters. For the same model, we compare the finite sample properties of various test statistics for linear restrictions on autoregressive parameters. These tests include the standard asymptotic Wald test based on various GMMEs, a bootstrapped version of the Wald test, two versions of the C(α) test, the standard Lagrange multiplier (LM) test, the minimum chi-square test (MC), and two versions of the generalized method of moments (GMM) criterion test. Finally, we study the finite sample properties of effects estimators that show how changes in explanatory variables impact the dependent variable.

10.
We study estimation and feature selection problems in mixture-of-experts models. An $l_2$-penalized maximum likelihood estimator is proposed as an alternative to the ordinary maximum likelihood estimator. The estimator is particularly advantageous when fitting a mixture-of-experts model to data with many correlated features. It is shown that the proposed estimator is root-$n$ consistent, and simulations show its superior finite sample behaviour compared to that of the maximum likelihood estimator. For feature selection, two extra penalty functions are applied to the $l_2$-penalized log-likelihood function. The proposed feature selection method is computationally much more efficient than the popular all-subset selection methods. Theoretically it is shown that the method is consistent in feature selection, and simulations support our theoretical results. A real-data example is presented to demonstrate the method. The Canadian Journal of Statistics 38: 519–539; 2010 © 2010 Statistical Society of Canada
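In generic notation (the exact parameterization of the gating weights and expert densities is not given in the abstract, so the symbols below are illustrative), the ridge-type criterion has the following form; the feature-selection step adds further penalties to it.

```latex
% Generic l2-penalized log-likelihood for a K-component mixture-of-experts model:
% pi_k are gating weights with parameters alpha_k, f_k are expert densities with
% parameters beta_k, and lambda >= 0 is a tuning constant.
\[
  \ell_{\lambda}(\theta) \;=\; \sum_{i=1}^{n} \log
  \Bigl\{ \sum_{k=1}^{K} \pi_{k}(\mathbf{x}_i;\alpha)\, f_{k}(y_i \mid \mathbf{x}_i;\beta_k) \Bigr\}
  \;-\; \lambda \sum_{k=1}^{K} \bigl( \lVert \alpha_k \rVert_2^{2} + \lVert \beta_k \rVert_2^{2} \bigr).
\]
```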

11.
Starting from the characterization of extreme-value copulas based on max-stability, large-sample tests of extreme-value dependence for multivariate copulas are studied. The two key ingredients of the proposed tests are the empirical copula of the data and a multiplier technique for obtaining approximate p-values for the derived statistics. The asymptotic validity of the multiplier approach is established, and the finite-sample performance of a large number of candidate test statistics is studied through extensive Monte Carlo experiments for data sets of dimension two to five. In the bivariate case, the rejection rates of the best versions of the tests are compared with those of the test of Ghoudi et al. (1998) recently revisited by Ben Ghorbal et al. (2009). The proposed procedures are illustrated on bivariate financial data and trivariate geological data. The Canadian Journal of Statistics 39: 703–720; 2011. © 2011 Statistical Society of Canada
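The max-stability characterization that the tests start from is recorded below for completeness; the proposed statistics measure departures of the empirical copula from this identity.

```latex
% A copula C is an extreme-value copula if and only if it is max-stable:
\[
  C\bigl(u_1^{r},\dots,u_d^{r}\bigr) \;=\; \bigl\{C(u_1,\dots,u_d)\bigr\}^{r}
  \qquad \text{for all } r>0 \text{ and } (u_1,\dots,u_d)\in[0,1]^d .
\]
```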

12.
13.
Asymptotically, the Wald-type test for generalised estimating equations (GEE) models can control the type I error rate at the nominal level. However, in small-sample studies it may lead to inflated type I error rates. Even with currently available small-sample corrections for the GEE Wald-type test, the type I error rate inflation is still serious when the tested contrast is multidimensional. This paper extends the ANOVA-type test for heteroscedastic factorial designs to GEE and shows that the proposed ANOVA-type test can also control the type I error rate at the nominal level in small-sample studies while still maintaining robustness with respect to mis-specification of the working correlation matrix. Differences in inference between the Wald-type test and the proposed test are observed in a two-way repeated measures ANOVA model for a diet-induced obesity study and a two-way repeated measures logistic regression for a collagen-induced arthritis study. Simulation studies confirm that the proposed test has better control of the type I error rate than the Wald-type test in small-sample repeated measures models. Additional simulation studies further show that the proposed test can even achieve larger power than the Wald-type test in some of the large-sample repeated measures ANOVA models that were investigated.

14.
In a missing data setting, we have a sample in which a vector of explanatory variables ${\bf x}_i$ is observed for every subject $i$, while scalar responses $y_i$ are missing by happenstance on some individuals. In this work we propose robust estimators of the distribution of the responses assuming missing at random (MAR) data, under a semiparametric regression model. Our approach allows the consistent estimation of any weakly continuous functional of the response's distribution. In particular, strongly consistent estimators of any continuous location functional, such as the median, L-functionals and M-functionals, are proposed. A robust fit for the regression model combined with the robust properties of the location functional gives rise to a robust recipe for estimating the location parameter. Robustness is quantified through the breakdown point of the proposed procedure. The asymptotic distribution of the location estimators is also derived. The proofs of the theorems are presented in Supplementary Material available online. The Canadian Journal of Statistics 41: 111–132; 2013 © 2012 Statistical Society of Canada
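For readers unfamiliar with the terminology, the missing-at-random assumption in this response-missing setting can be written explicitly (with $\delta_i$ the indicator that $y_i$ is observed):

```latex
% Missing at random when only the responses can be missing: the probability of
% observing y_i depends on the covariates x_i but not on the value of y_i itself.
\[
  \Pr(\delta_i = 1 \mid \mathbf{x}_i, y_i) \;=\; \Pr(\delta_i = 1 \mid \mathbf{x}_i).
\]
```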

15.
We consider in this paper the semiparametric mixture of two unknown distributions that are equal up to a location parameter. The model is said to be semiparametric in the sense that the mixed distribution is not supposed to belong to a parametric family. To ensure the identifiability of the model, it is assumed that the mixed distribution is symmetric about zero; the model is then defined by the mixing proportion, two location parameters and the probability density function of the mixed distribution. We propose a new class of M-estimators of these parameters based on a Fourier approach and prove that they are $\sqrt{n}$-consistent under mild regularity conditions. Their finite sample properties are illustrated by a Monte Carlo study, and a benchmark real dataset is also studied with our method.
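In symbols, the model described above is a two-component location mixture of a single unknown density; writing it out makes the role of each parameter explicit.

```latex
% Semiparametric two-component location mixture: f is an unknown density assumed
% symmetric about 0 (for identifiability), p in (0,1) is the mixing proportion,
% and mu_1, mu_2 are the two location parameters.
\[
  g(x) \;=\; p\, f(x-\mu_1) \;+\; (1-p)\, f(x-\mu_2).
\]
```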

16.
This paper considers estimators of survivor functions subject to a stochastic ordering constraint based on right-censored data. We present the constrained nonparametric maximum likelihood estimator (C-NPMLE) of the survivor functions in one- and two-sample settings where the survivor distributions could be discrete or continuous, and discuss the non-uniqueness of the estimators. We also present a computationally efficient algorithm to obtain the C-NPMLE. To address the possibility of non-uniqueness of the C-NPMLE of $S_1(t)$ when $S_1(t)\le S_2(t)$, we consider the maximum C-NPMLE (MC-NPMLE) of $S_1(t)$. In the one-sample case with an arbitrary upper bound survivor function $S_2(t)$, we present a novel and efficient algorithm for finding the MC-NPMLE of $S_1(t)$. Dykstra (1982) also considered constrained nonparametric maximum likelihood estimation for such problems; however, as we show, Dykstra's method has an error and does not always give the C-NPMLE. We correct this error, and simulations show an improvement in efficiency compared to Dykstra's estimator. Confidence intervals based on bootstrap methods are proposed and consistency of the estimators is proved. Data from a study on larynx cancer are analysed to illustrate the method. The Canadian Journal of Statistics 40: 22–39; 2012 © 2012 Statistical Society of Canada
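The optimization problem behind the C-NPMLE can be written generically with the usual right-censored nonparametric likelihood; the notation below is illustrative, and the paper's algorithm and its treatment of non-uniqueness are not reproduced.

```latex
% Two-sample constrained NPMLE: t_{ji} are the observed times and delta_{ji} the
% event indicators in sample j; Delta S_j(t) = S_j(t^-) - S_j(t) is the jump at t.
\[
  \max_{S_1,\,S_2}\;
  \prod_{j=1}^{2}\prod_{i=1}^{n_j}
  \bigl\{\Delta S_j(t_{ji})\bigr\}^{\delta_{ji}}\,
  \bigl\{S_j(t_{ji})\bigr\}^{1-\delta_{ji}}
  \quad\text{subject to}\quad
  S_1(t) \le S_2(t)\ \text{for all } t .
\]
```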

17.
Accurate diagnosis of disease is a critical part of health care. New diagnostic and screening tests must be evaluated based on their abilities to discriminate diseased conditions from non-diseased conditions. For a continuous-scale diagnostic test, a popular summary index of the receiver operating characteristic (ROC) curve is the area under the curve (AUC). However, when our focus is on a certain region of false positive rates, we often use the partial AUC instead. In this paper we derive the asymptotic normal distribution for the non-parametric estimator of the partial AUC with an explicit variance formula. The empirical likelihood (EL) ratio for the partial AUC is defined, and it is shown that its limiting distribution is a scaled chi-square distribution. Hybrid bootstrap and EL confidence intervals for the partial AUC are proposed by using the newly developed EL theory. We also conduct extensive simulation studies to compare the relative performance of the proposed intervals and existing intervals for the partial AUC. A real example is used to illustrate the application of the recommended intervals. The Canadian Journal of Statistics 39: 17–33; 2011 © 2011 Statistical Society of Canada
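A direct way to obtain the non-parametric partial AUC itself is to integrate the empirical ROC curve over the chosen false-positive-rate range; the sketch below does exactly that with NumPy. It is an illustration only: the function name and arguments are placeholders, and the paper's variance formula and empirical-likelihood intervals are not implemented.

```python
# Empirical partial AUC: integrate the empirical ROC curve over a restricted
# false-positive-rate (FPR) range using the trapezoidal rule.
import numpy as np

def empirical_partial_auc(nondiseased_scores, diseased_scores,
                          fpr_range=(0.0, 0.2), grid=2001):
    x = np.asarray(nondiseased_scores, dtype=float)
    y = np.asarray(diseased_scores, dtype=float)
    fprs = np.linspace(fpr_range[0], fpr_range[1], grid)
    # Threshold giving FPR p: the (1 - p) empirical quantile of the non-diseased scores.
    thresholds = np.quantile(x, 1.0 - fprs)
    # Empirical TPR at each threshold: the fraction of diseased scores above it.
    tprs = (y[:, None] > thresholds[None, :]).mean(axis=0)
    # Trapezoidal integration of TPR against FPR.
    return float(np.sum(0.5 * (tprs[1:] + tprs[:-1]) * np.diff(fprs)))
```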

18.
19.
20.
Likelihood ratio tests for fixed model terms are proposed for the analysis of linear mixed models when using residual maximum likelihood estimation. Bartlett-type adjustments, using an approximate decomposition of the data, are developed for the test statistics. A simulation study is used to compare properties of the test statistics proposed, with or without adjustment, with a Wald test. A proposed test statistic constructed by dropping fixed terms from the full fixed model is shown to give a better approximation to the asymptotic χ²-distribution than the Wald test for small data sets. Bartlett adjustment is shown to improve the χ²-approximation for the proposed tests substantially.
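The general idea of a Bartlett-type adjustment is the usual mean-correcting rescaling, recalled below in generic form; the paper obtains the adjustment factor from an approximate decomposition of the data, which is not reproduced here.

```latex
% Bartlett-type adjustment of a likelihood ratio statistic T on d degrees of freedom:
% if E(T) = d(1 + b) + o(1) under H0, rescale so the mean matches that of chi^2_d,
\[
  T^{*} \;=\; \frac{T}{1+b}, \qquad T^{*} \;\approx\; \chi^{2}_{d},
\]
% which typically improves the chi-squared approximation in small samples.
```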
