Similar Articles
20 similar articles found (search time: 140 ms)
1.
Starting from the characterization of extreme-value copulas based on max-stability, large-sample tests of extreme-value dependence for multivariate copulas are studied. The two key ingredients of the proposed tests are the empirical copula of the data and a multiplier technique for obtaining approximate p-values for the derived statistics. The asymptotic validity of the multiplier approach is established, and the finite-sample performance of a large number of candidate test statistics is studied through extensive Monte Carlo experiments for data sets of dimension two to five. In the bivariate case, the rejection rates of the best versions of the tests are compared with those of the test of Ghoudi et al. (1998) recently revisited by Ben Ghorbal et al. (2009). The proposed procedures are illustrated on bivariate financial data and trivariate geological data. The Canadian Journal of Statistics 39: 703–720; 2011. © 2011 Statistical Society of Canada
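A rough illustration of the first ingredient only (not the authors' multiplier test): pseudo-observations are formed from ranks, the empirical copula is evaluated by counting, and the max-stability property that characterizes extreme-value copulas can then be checked on a few grid points. The function names and toy data below are invented for the sketch.

```python
import numpy as np

def pseudo_observations(x):
    """Rank-based pseudo-observations U_ij = R_ij / (n + 1) for an (n, d) sample."""
    n = x.shape[0]
    ranks = np.argsort(np.argsort(x, axis=0), axis=0) + 1
    return ranks / (n + 1.0)

def empirical_copula(u, points):
    """C_n(p) = (1/n) * sum_i 1{U_i1 <= p_1, ..., U_id <= p_d} for each row p of points."""
    return np.array([np.mean(np.all(u <= p, axis=1)) for p in points])

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))               # toy bivariate sample
u = pseudo_observations(x)
grid = np.array([[0.25, 0.25], [0.5, 0.5], [0.75, 0.75]])

# Max-stability underlying the tests: for an extreme-value copula,
# C(u1^r, u2^r) = C(u1, u2)^r for every r > 0.
r = 3.0
print(empirical_copula(u, grid ** r) - empirical_copula(u, grid) ** r)
# differences should be close to zero under extreme-value dependence
```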

2.
The penalized spline is a popular method for function estimation when the assumption of “smoothness” is valid. In this paper, methods for estimation and inference are proposed using penalized splines under additional constraints of shape, such as monotonicity or convexity. The constrained penalized spline estimator is shown to have the same convergence rates as the corresponding unconstrained penalized spline, although in practice the squared error loss is typically smaller for the constrained versions. The penalty parameter may be chosen with generalized cross-validation, which also provides a method for determining if the shape restrictions hold. The method is not a formal hypothesis test, but is shown to have nice large-sample properties, and simulations show that it compares well with existing tests for monotonicity. Extensions to the partial linear model, the generalized regression model, and the varying coefficient model are given, and examples demonstrate the utility of the methods. The Canadian Journal of Statistics 40: 190–206; 2012 © 2012 Statistical Society of Canada
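As a hedged sketch of the unconstrained building block only (the paper's shape constraints and inference are not reproduced), the following fits a penalized spline on a truncated-line basis and picks the penalty parameter by generalized cross-validation; the basis, knot grid, and function names are choices made for the illustration.

```python
import numpy as np

def pspline_gcv(x, y, knots, lambdas):
    """Penalized regression spline with a ridge penalty on the knot coefficients;
    the penalty parameter is selected by generalized cross-validation (GCV)."""
    # Truncated-line basis: [1, x, (x - k_1)_+, ..., (x - k_K)_+]
    B = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))   # penalize only the knot terms
    best = None
    for lam in lambdas:
        A = B @ np.linalg.solve(B.T @ B + lam * D, B.T)     # hat matrix
        yhat = A @ y
        n, tr = len(y), np.trace(A)
        gcv = n * np.sum((y - yhat) ** 2) / (n - tr) ** 2   # GCV score
        if best is None or gcv < best[0]:
            best = (gcv, lam, yhat)
    return best   # (gcv score, chosen lambda, fitted values)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 150))
y = np.exp(2 * x) + rng.normal(scale=0.3, size=150)     # monotone truth plus noise
knots = np.linspace(0.05, 0.95, 20)
gcv, lam, fit = pspline_gcv(x, y, knots, np.logspace(-4, 2, 30))
print("chosen lambda:", lam)
```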

3.
In this article, we address the testing problem for additivity in nonparametric regression models. We develop a kernel-based consistent test of a hypothesis of additivity in nonparametric regression, and establish its asymptotic distribution under a sequence of local alternatives. Compared to other existing kernel-based tests, the proposed test is shown to effectively reduce the influence of the estimation bias of the additive components of the nonparametric regression, and hence to increase its efficiency. Most importantly, it avoids tuning difficulties by using estimation-based optimality criteria, whereas other existing kernel-based testing methods offer no direct tuning strategy. We discuss the usage of the new test and give numerical examples to demonstrate its practical performance. The Canadian Journal of Statistics 39: 632–655; 2011. © 2011 Statistical Society of Canada

4.
The median is a commonly used parameter to characterize biomarker data. In particular, with two vastly different underlying distributions, comparing medians provides different information than comparing means; however, very few tests for medians are available. We propose a series of two-sample median-specific tests using empirical likelihood methodology and investigate their properties. We present the technical details of incorporating the relevant constraints into the empirical likelihood function for in-depth median testing. An extensive Monte Carlo study shows that the proposed tests have excellent operating characteristics even in unfavourable settings such as non-exchangeability under the null hypothesis. We apply the proposed methods to analyze biomarker data from Western blot analysis, comparing normal cells with bronchial epithelial cells from a case–control study. The Canadian Journal of Statistics 39: 671–689; 2011. © 2011 Statistical Society of Canada
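With an indicator-type constraint, the one-sample empirical likelihood ratio for a median reduces to a binomial-type expression. The sketch below is a simplified profile version of a two-sample equal-median test, not the authors' construction, and the chi-square calibration shown is only an approximation.

```python
import numpy as np
from scipy.stats import chi2

def el_median_stat(x, m):
    """-2 log empirical likelihood ratio for H0: median(x) = m (one sample).
    For the indicator constraint this reduces to a binomial-type statistic."""
    n = len(x)
    p = np.mean(x <= m)
    if p == 0.0 or p == 1.0:
        return np.inf
    return 2.0 * n * (p * np.log(p / 0.5) + (1 - p) * np.log((1 - p) / 0.5))

def el_two_sample_median_test(x, y):
    """Profile the common median over the pooled data points and keep the minimum."""
    candidates = np.unique(np.concatenate([x, y]))
    stat = min(el_median_stat(x, m) + el_median_stat(y, m) for m in candidates)
    return stat, chi2.sf(stat, df=1)    # approximate p-value only

rng = np.random.default_rng(2)
x = rng.lognormal(size=80)
y = rng.lognormal(mean=0.4, size=80)
print(el_two_sample_median_test(x, y))
```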

5.
In this article, we develop regression models with cross-classified responses. Conditional independence structures can be explored/exploited through the selective inclusion/exclusion of terms in a certain functional ANOVA decomposition, and the estimation is done nonparametrically via the penalized likelihood method. A cohort of computational and data-analytic tools is presented, including cross-validation for smoothing parameter selection, Kullback–Leibler projection for model selection, and Bayesian confidence intervals for odds ratios. Random effects are introduced to model possible correlations such as those found in longitudinal and clustered data. Empirical performance of the methods is explored in simulation studies of limited scale, and a real data example is presented using eye-tracking data from linguistic studies. The techniques are implemented in a suite of R functions, whose usage is briefly described in the appendix. The Canadian Journal of Statistics 39: 591–609; 2011. © 2011 Statistical Society of Canada

6.
In this paper, we consider the partial linear model with the covariables missing at random. Empirical likelihood ratios for the regression coefficients and the baseline function are investigated, the empirical log-likelihood ratios are proven to be asymptotically chi-squared and the corresponding confidence regions for the parameters of interest are then constructed. The finite sample behavior of the proposed method is evaluated with simulation and illustrated with an AIDS clinical trial dataset.

7.
This paper discusses asymptotically distribution-free tests for the lack of fit of a parametric regression model in the Berkson measurement error model. These tests are based on a martingale transform of a certain marked empirical process of calibrated residuals. A simulation study is included to assess the effect of measurement error on the proposed test. The empirical level is observed to be more stable across the chosen measurement-error variances when fitting a linear model than when fitting a nonlinear model, while in both cases the empirical power against all chosen alternatives decreases as this error variance increases.

8.
Liu and Singh (1993, 2006) introduced a depth-based d-variate extension of the nonparametric two sample scale test of Siegel and Tukey (1960). Liu and Singh (2006) generalized this depth-based test for scale homogeneity of k ≥ 2 multivariate populations. Motivated by the work of Gastwirth (1965), we propose k sample percentile modifications of Liu and Singh's proposals. The test statistic is shown to be asymptotically normal when k = 2, and compares favorably with Liu and Singh (2006) if the underlying distributions are either symmetric with light tails or asymmetric. In the case of skewed distributions considered in this paper the power of the proposed tests can attain twice the power of the Liu–Singh test for d ≥ 1. Finally, in the k-sample case, it is shown that the asymptotic distribution of the proposed percentile modified Kruskal–Wallis type test is χ2 with k − 1 degrees of freedom. Power properties of this k-sample test are similar to those for the proposed two sample one. The Canadian Journal of Statistics 39: 356–369; 2011 © 2011 Statistical Society of Canada

9.
Accurate diagnosis of disease is a critical part of health care. New diagnostic and screening tests must be evaluated based on their abilities to discriminate diseased conditions from non-diseased conditions. For a continuous-scale diagnostic test, a popular summary index of the receiver operating characteristic (ROC) curve is the area under the curve (AUC). However, when our focus is on a certain region of false positive rates, we often use the partial AUC instead. In this paper we have derived the asymptotic normal distribution for the non-parametric estimator of the partial AUC with an explicit variance formula. The empirical likelihood (EL) ratio for the partial AUC is defined and it is shown that its limiting distribution is a scaled chi-square distribution. Hybrid bootstrap and EL confidence intervals for the partial AUC are proposed by using the newly developed EL theory. We also conduct extensive simulation studies to compare the relative performance of the proposed intervals and existing intervals for the partial AUC. A real example is used to illustrate the application of the recommended intervals. The Canadian Journal of Statistics 39: 17–33; 2011 © 2011 Statistical Society of Canada
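The partial AUC over false positive rates in [0, p0] equals P(case score > control score, control score above its (1 − p0) quantile), so its nonparametric estimator is a restricted Mann–Whitney-type double sum. A minimal sketch with invented toy data (the paper's variance formula and EL intervals are not reproduced):

```python
import numpy as np

def partial_auc(controls, cases, fpr_max):
    """Empirical partial AUC over false positive rates in [0, fpr_max]."""
    # controls above this threshold correspond to false positive rates <= fpr_max
    threshold = np.quantile(controls, 1.0 - fpr_max)
    restricted = controls[controls >= threshold]
    m, n = len(controls), len(cases)
    # restricted Mann-Whitney count, divided by all m*n pairs so the maximum is fpr_max
    wins = sum(np.sum(cases > c) + 0.5 * np.sum(cases == c) for c in restricted)
    return wins / (m * n)

rng = np.random.default_rng(3)
controls = rng.normal(0.0, 1.0, 200)    # non-diseased test scores
cases = rng.normal(1.0, 1.0, 200)       # diseased test scores
print(partial_auc(controls, cases, fpr_max=0.2))   # a value in [0, 0.2]
```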

10.
We propose using the weighted likelihood method to fit a general relative risk regression model for current status data with missing data, as arise, for example, in case-cohort studies. The missingness probability is either known or can be reasonably estimated. Asymptotic properties of the weighted likelihood estimators are established. For the case of estimated weights, we establish a general theorem that guarantees the asymptotic normality of the M-estimator of a finite-dimensional parameter in a class of semiparametric models, where the infinite-dimensional parameter is allowed to converge at a slower-than-parametric rate, and some other parameters in the objective function are estimated a priori. The weighted bootstrap method is employed to estimate the variances. Simulations show that the proposed method works well for finite sample sizes. A motivating example of a case-cohort study from an HIV vaccine trial is used to demonstrate the proposed method. The Canadian Journal of Statistics 39: 557–577; 2011. © 2011 Statistical Society of Canada

11.
Covariate measurement error problems have been extensively studied in the context of right-censored data but less so for current status data. Motivated by the zebrafish basal cell carcinoma (BCC) study, where the occurrence time of BCC was only known to lie before or after a sacrifice time and where the covariate (Sonic hedgehog expression) was measured with error, the authors describe a semiparametric maximum likelihood method for analyzing current status data with mismeasured covariates under the proportional hazards model. They show that the estimator of the regression coefficient is asymptotically normal and efficient and that the profile likelihood ratio test is asymptotically chi-squared. They also provide an easily implemented algorithm for computing the estimators. They evaluate their method through simulation studies, and illustrate it with a real data example. The Canadian Journal of Statistics 39: 73–88; 2011 © 2011 Statistical Society of Canada

12.
We propose a new procedure for combining multiple tests in samples of right-censored observations. The new method is based on multiple constrained censored empirical likelihood where the constraints are formulated as linear functionals of the cumulative hazard functions. We prove a version of Wilks' theorem for the multiple constrained censored empirical likelihood ratio, which provides a simple reference distribution for the test statistic of our proposed method. A useful application of the proposed method is, for example, examining the survival experience of different populations by combining different weighted log-rank tests. Real data examples are given using the log-rank and Gehan–Wilcoxon tests. In a simulation study of two-sample survival data, we compare the proposed method of combining tests to previously developed procedures. The results demonstrate that, in addition to its computational simplicity, the combined test performs comparably to, and in some situations more reliably than, previously developed procedures. Statistical software is available in the R package 'emplik'.
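The ingredients being combined are weighted log-rank statistics. The sketch below computes the ordinary log-rank (weight 1) and Gehan–Wilcoxon (weight equal to the at-risk size) versions from scratch; it is not the constrained censored empirical likelihood combination itself, and the simulated survival data are invented for the example.

```python
import numpy as np

def weighted_logrank(time, event, group, weight="logrank"):
    """Weighted log-rank statistic for two right-censored samples.
    weight="logrank" uses w_k = 1; weight="gehan" uses w_k = n_k (pooled at-risk size)."""
    times = np.unique(time[event == 1])
    num, var = 0.0, 0.0
    for t in times:
        at_risk = time >= t
        n_k = at_risk.sum()
        if n_k < 2:
            continue
        n1_k = (at_risk & (group == 1)).sum()
        d_k = ((time == t) & (event == 1)).sum()
        d1_k = ((time == t) & (event == 1) & (group == 1)).sum()
        w = n_k if weight == "gehan" else 1.0
        e1_k = d_k * n1_k / n_k                                 # expected events in group 1
        v_k = d_k * (n1_k / n_k) * (1 - n1_k / n_k) * (n_k - d_k) / (n_k - 1)
        num += w * (d1_k - e1_k)
        var += w ** 2 * v_k
    return num / np.sqrt(var)    # approximately N(0, 1) under equal hazards

rng = np.random.default_rng(9)
n = 120
group = np.repeat([0, 1], n // 2)
t_true = rng.exponential(scale=np.where(group == 1, 1.4, 1.0))
censor = rng.exponential(scale=2.0, size=n)
time, event = np.minimum(t_true, censor), (t_true <= censor).astype(int)
print(weighted_logrank(time, event, group, "logrank"),
      weighted_logrank(time, event, group, "gehan"))
```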

13.
The authors present a consistent lack-of-fit test in nonlinear regression models. The proposed procedure possesses some nice properties of Zheng's test, such as consistency and the ability to detect local alternatives approaching the null at rates slower than the parametric rate. Moreover, for a predetermined kernel function, the proposed test is more powerful than Zheng's test; these findings are confirmed by simulation studies and a real data example. In addition, the authors identify a close connection between the choice of normal kernel function and the bandwidth. The Canadian Journal of Statistics 39: 108–125; 2011 © 2011 Statistical Society of Canada
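For orientation, Zheng-type lack-of-fit statistics are kernel-weighted U-statistics in the parametric residuals, standardized to be asymptotically standard normal under the null. A minimal sketch under a Gaussian kernel and a hand-picked bandwidth (the paper's improved test and its kernel/bandwidth connection are not reproduced):

```python
import numpy as np
from scipy.stats import norm

def zheng_statistic(x, resid, h):
    """Zheng-type U-statistic for lack of fit: large positive values indicate that the
    residuals from the parametric fit are still related to x (model misspecification)."""
    n = len(x)
    K = norm.pdf((x[:, None] - x[None, :]) / h) / h      # Gaussian kernel weights
    np.fill_diagonal(K, 0.0)                             # exclude i == j terms
    num = resid @ K @ resid / (n * (n - 1))
    # one common standardization choice; the (n(n-1)) factors cancel in the ratio
    var = 2.0 * np.sum(K ** 2 * np.outer(resid ** 2, resid ** 2)) / (n * (n - 1)) ** 2
    return num / np.sqrt(var)

rng = np.random.default_rng(4)
x = rng.uniform(-2, 2, 200)
y = np.sin(2 * x) + rng.normal(scale=0.3, size=200)
beta = np.polyfit(x, y, 1)                               # misspecified linear null model
resid = y - np.polyval(beta, x)
print(zheng_statistic(x, resid, h=0.3))                  # compare with N(0, 1) quantiles
```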

14.
The Lagrange Multiplier (LM) test is one of the principal tools to detect ARCH and GARCH effects in financial data analysis. However, when the underlying data are non-normal, which is often the case in practice, the asymptotic LM test, based on the χ2-approximation of critical values, is known to perform poorly, particularly for small and moderate sample sizes. In this paper we propose to employ two re-sampling techniques to find critical values of the LM test, namely permutation and bootstrap. We establish exactness of the permutation LM test and asymptotic correctness of the bootstrap LM test. Our numerical studies indicate that the proposed re-sampled algorithms significantly improve the size and power of the LM test in both skewed and heavy-tailed processes. We also illustrate our new approaches with an application to the analysis of the Euro/USD currency exchange rates and the German stock index. The Canadian Journal of Statistics 40: 405–426; 2012 © 2012 Statistical Society of Canada
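A hedged sketch of the idea: Engle's LM statistic is T·R² from regressing squared residuals on their lags, and a permutation reference distribution can be obtained by re-ordering the residuals, which destroys any ARCH dependence. The lag order, sample, and function names are chosen for the illustration; this is not the authors' exact algorithm.

```python
import numpy as np

def arch_lm(e, q):
    """Engle's LM statistic for ARCH(q): T * R^2 from regressing e_t^2 on its q lags."""
    e2 = e ** 2
    Y = e2[q:]
    X = np.column_stack([np.ones(len(Y))] + [e2[q - j: -j] for j in range(1, q + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    r2 = 1.0 - resid.var() / Y.var()
    return len(Y) * r2

def permutation_arch_lm(e, q, n_perm=999, seed=0):
    """Permutation p-value: re-ordering the series removes ARCH dependence,
    so the permuted statistics approximate the null distribution."""
    rng = np.random.default_rng(seed)
    obs = arch_lm(e, q)
    perm = np.array([arch_lm(rng.permutation(e), q) for _ in range(n_perm)])
    return obs, (1 + np.sum(perm >= obs)) / (n_perm + 1)

rng = np.random.default_rng(5)
e = rng.standard_t(df=5, size=500)      # heavy-tailed i.i.d. noise (no ARCH)
print(permutation_arch_lm(e, q=2))
```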

15.
Rényi divergences are used to propose some statistics for testing general hypotheses in mixed linear regression models. The asymptotic distributions of these test statistics, of the Kullback–Leibler statistic, and of the likelihood ratio statistic are provided, assuming that the sample size and the number of levels of the random factors tend to infinity. A simulation study is carried out to analyze and compare the behavior of the proposed tests when the sample size and number of levels are small.

16.
Biased sampling occurs often in observational studies. With one biased sample, the problem of nonparametrically estimating both a target density function and a selection bias function is unidentifiable. This paper studies the nonparametric estimation problem when there are two biased samples that have some overlapping observations (i.e. recaptures) from a finite population. Since an intelligent subject sampled previously may experience a memory effect if sampled again, two general 2-stage models that incorporate both a selection bias and a possible memory effect are proposed. Nonparametric estimators of the target density, selection bias, and memory functions, as well as the population size are developed. Asymptotic properties of these estimators are studied and confidence bands for the selection function and memory function are provided. Our procedures are compared with those ignoring the memory effect or the selection bias in finite sample situations. A nonparametric model selection procedure is also given for choosing a model from the two 2-stage models and a mixture of these two models. Our procedures work well with or without a memory effect, and with or without a selection bias. The paper concludes with an application to a real survey data set.
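For contrast with the paper's nonparametric 2-stage models, the simplest population-size estimator from two overlapping samples is the Chapman-corrected Lincoln–Petersen estimator, which ignores both selection bias and memory effects; a minimal sketch with made-up counts:

```python
import numpy as np

def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected Lincoln-Petersen estimator of population size
    from two samples of sizes n1 and n2 with m overlapping (recaptured) units.
    It assumes equal catchability: no selection bias and no memory effect."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = (n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m) / ((m + 1) ** 2 * (m + 2))
    return n_hat, np.sqrt(var)

print(chapman_estimate(n1=120, n2=150, m=30))   # point estimate and standard error
```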

17.
We consider robust permutation tests for a location shift in the two sample case based on estimating equations, comparing the test statistics based on a score function and an M-estimate. First we obtain a form for both tests so that the exact tests may be carried out using the same algorithms as used for permutation tests based on the mean. Then we obtain the Bahadur slopes of the tests in these two statistics, giving numerical results for two cases equivalent to a test based on Huber scores and a particular case of this related to a median test. We show that they have different Bahadur slopes with neither exceeding the other over the whole range. Finally, we give some numerical results illustrating the robustness properties of the tests and confirming the theoretical results on Bahadur slopes.
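A minimal sketch of a permutation test built on Huber scores (an illustration of the general idea, not the paper's estimating-equation statistics): pooled observations are centred and scaled robustly, scored with Huber's psi, and the group labels are permuted. The tuning constant 1.345 and the two-sided p-value convention are choices made for the example.

```python
import numpy as np

def huber_psi(z, k=1.345):
    """Huber's psi (score) function: linear near zero, clipped at +/- k."""
    return np.clip(z, -k, k)

def huber_score_permutation_test(x, y, n_perm=999, seed=0):
    """Two-sample permutation test for a location shift based on Huber scores:
    the statistic is the sum of scored, pooled-standardized observations in sample 1."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    center = np.median(pooled)
    scale = np.median(np.abs(pooled - center)) / 0.6745     # MAD scale estimate
    scores = huber_psi((pooled - center) / scale)
    n1 = len(x)
    obs = scores[:n1].sum()
    perm = np.array([rng.permutation(scores)[:n1].sum() for _ in range(n_perm)])
    p = (1 + np.sum(np.abs(perm - perm.mean()) >= np.abs(obs - perm.mean()))) / (n_perm + 1)
    return obs, p

rng = np.random.default_rng(6)
x = rng.standard_t(df=3, size=40) + 0.8     # shifted, heavy-tailed sample
y = rng.standard_t(df=3, size=40)
print(huber_score_permutation_test(x, y))
```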

18.
We study an autoregressive time series model with a possible change in the regression parameters. Approximations to the critical values for change-point tests are obtained through various bootstrapping methods. Theoretical results show that the bootstrapping procedures have the same limiting behavior as their asymptotic counterparts discussed in Hušková et al. [2007. On the detection of changes in autoregressive time series, I. Asymptotics. J. Statist. Plann. Inference 137, 1243–1259]. In fact, a small simulation study illustrates that the bootstrap tests behave better than the original asymptotic tests if performance is measured by the α- and β-errors, respectively.

19.
Outliers that commonly occur in business sample surveys can have large impacts on domain estimates. The authors consider an outlier-robust design and smooth estimation approach, which can be related to the so-called "Surprise stratum" technique [Kish, "Survey Sampling," Wiley, New York (1965)]. The sampling design utilizes a threshold sample consisting of previously observed outliers that are selected with probability one, together with stratified simple random sampling from the rest of the population. The domain predictor is an extension of the Winsorization-based estimator proposed by Rivest and Hidiroglou [Rivest and Hidiroglou, "Outlier Treatment for Disaggregated Estimates," in "Proceedings of the Section on Survey Research Methods," American Statistical Association (2004), pp. 4248–4256], and is similar to the estimator for skewed populations suggested by Fuller [Fuller, Statistica Sinica 1991;1:137–158]. It makes use of a domain Winsorized sample mean plus a domain-specific adjustment based on the estimated overall mean of the excess values above the threshold. The methods are studied in theory from a design-based perspective and by simulations based on the Norwegian Research and Development Survey data. Guidelines for choosing the threshold values are provided. The Canadian Journal of Statistics 39: 147–164; 2011 © 2010 Statistical Society of Canada
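A much-simplified sketch of the Winsorization idea only (threshold selection, the surprise-stratum design, and the exact form of the paper's domain adjustment are not reproduced; `excess_share` is an invented illustrative parameter):

```python
import numpy as np

def winsorized_domain_mean(y, threshold, excess_share):
    """Winsorization-based domain estimator (simplified sketch):
    values above the threshold are cut back to it, and a share of the
    estimated mean excess is added back, so large outliers are damped
    rather than discarded."""
    y = np.asarray(y, dtype=float)
    winsorized = np.minimum(y, threshold)
    excess = np.maximum(y - threshold, 0.0)
    return winsorized.mean() + excess_share * excess.mean()

rng = np.random.default_rng(7)
y = np.exp(rng.normal(2.0, 1.0, size=60))   # skewed, business-survey-like domain data
y[:2] *= 40                                 # a couple of extreme outliers
print(y.mean(),
      winsorized_domain_mean(y, threshold=np.quantile(y, 0.95), excess_share=0.5))
```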

20.
This article presents new nonparametric tests for heteroscedasticity in nonlinear and nonparametric regression models. The tests have an asymptotic standard normal distribution under the null hypothesis of homoscedasticity and are robust against any form of heteroscedasticity. A Monte Carlo simulation with critical values obtained from the wild bootstrap procedure is provided to assess the finite-sample performance of the tests. A real application to testing interest rate volatility functions illustrates the usefulness of the proposed tests. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
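The wild bootstrap used for the critical values multiplies each residual by an independent mean-zero, unit-variance weight, so any heteroscedasticity pattern is preserved across bootstrap draws. A minimal sketch using Mammen's two-point weights (the test statistics themselves are not reproduced):

```python
import numpy as np

def wild_bootstrap_residuals(resid, n_boot, seed=0):
    """Wild bootstrap: each residual is multiplied by an independent two-point
    (Mammen) weight with mean 0 and variance 1, so heteroscedasticity in the
    residuals is preserved in every bootstrap draw."""
    rng = np.random.default_rng(seed)
    golden = (1 + np.sqrt(5)) / 2
    vals = np.array([-(golden - 1), golden])            # Mammen's two support points
    probs = np.array([golden / np.sqrt(5), 1 - golden / np.sqrt(5)])
    w = rng.choice(vals, size=(n_boot, len(resid)), p=probs)
    return resid * w          # one row of bootstrap residuals per replicate

rng = np.random.default_rng(8)
x = rng.uniform(0, 1, 100)
resid = rng.normal(scale=0.2 + x, size=100)             # heteroscedastic residuals
print(wild_bootstrap_residuals(resid, n_boot=3).shape)  # (3, 100)
```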
