Similar Documents
20 similar documents found.
1.
To increase the predictive ability of several plasma biomarkers for coronary artery disease (CAD)-related vital status over time, we focus on seeking combinations of these biomarkers with the highest time-dependent receiver operating characteristic (ROC) curves. An extended generalized linear model (EGLM) with time-varying coefficients and an unknown bivariate link function is used to characterize the conditional distribution of time to CAD-related death. Based on censored survival data, two non-parametric procedures are proposed to estimate the optimal composite markers, i.e. the linear predictors in the EGLM. Estimation methods for the classification accuracies of the optimal composite markers are also proposed. We establish theoretical results for the estimators and examine the corresponding finite-sample properties through a series of simulations with different sample sizes, censoring rates and censoring mechanisms. The optimization procedures and estimators are further shown to be useful through an application to a prospective cohort study of patients undergoing angiography.

2.
The area under the receiver operating characteristic (ROC) curve (AUC) is one of the most commonly used measures for evaluating or comparing the predictive ability of markers for disease status. Motivated by an angiographic coronary artery disease (CAD) study, our objective is to evaluate and compare the performance of several baseline plasma levels in predicting CAD-related vital status over time. Based on censored survival data, non-parametric estimators are proposed for the time-dependent AUC. The limiting Gaussian processes of the estimators and the estimated asymptotic variance–covariance functions enable us to construct confidence bands and develop testing procedures. Applications and finite-sample properties of the proposed estimation methods and inference procedures are demonstrated through CAD-related death data from the British Columbia Vital Statistics Agency and Monte Carlo simulations.
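In the absence of censoring, the AUC at a fixed time point reduces to the ordinary empirical AUC, i.e. the Mann–Whitney proportion of diseased/healthy pairs that the marker orders correctly. The paper's time-dependent estimators adjust this quantity for censoring; the following is only a minimal uncensored sketch (function name and toy data are illustrative):

```python
def empirical_auc(diseased, healthy):
    """Empirical AUC: proportion of (diseased, healthy) pairs in which the
    diseased subject has the higher marker value (ties count as 1/2)."""
    pairs = len(diseased) * len(healthy)
    wins = sum(1.0 if d > h else 0.5 if d == h else 0.0
               for d in diseased for h in healthy)
    return wins / pairs

# Toy marker values: higher values should indicate disease.
# 8.5 of 9 pairs are correctly ordered, so the AUC is 8.5/9.
print(empirical_auc([3.1, 2.5, 4.0], [1.2, 2.5, 0.8]))
```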

3.
Accurate diagnosis of disease is a critical part of health care. New diagnostic and screening tests must be evaluated based on their ability to discriminate diseased from non-diseased conditions. For a continuous-scale diagnostic test, a popular summary index of the receiver operating characteristic (ROC) curve is the area under the curve (AUC). However, when the focus is on a certain region of false positive rates, the partial AUC is often used instead. In this paper we derive the asymptotic normal distribution for the non-parametric estimator of the partial AUC, with an explicit variance formula. The empirical likelihood (EL) ratio for the partial AUC is defined, and its limiting distribution is shown to be a scaled chi-square distribution. Hybrid bootstrap and EL confidence intervals for the partial AUC are proposed using the newly developed EL theory. We also conduct extensive simulation studies to compare the relative performance of the proposed intervals with existing intervals for the partial AUC. A real example is used to illustrate the application of the recommended intervals. The Canadian Journal of Statistics 39: 17–33; 2011 © 2011 Statistical Society of Canada
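The partial AUC restricts the integral of the ROC curve to a clinically relevant range of false positive rates. As a hedged sketch of the non-parametric (empirical) version the paper studies, the empirical ROC can be built as a step function and integrated only up to the chosen FPR bound (function name and step convention are illustrative choices):

```python
def empirical_partial_auc(diseased, healthy, fpr_max):
    """Empirical partial AUC: integrate the empirical ROC step function
    over false positive rates in [0, fpr_max]."""
    m, n = len(diseased), len(healthy)
    # ROC points (FPR, TPR) swept over every threshold in the pooled sample.
    points = [(0.0, 0.0)]
    for c in sorted(set(diseased) | set(healthy), reverse=True):
        fpr = sum(h >= c for h in healthy) / n
        tpr = sum(d >= c for d in diseased) / m
        points.append((fpr, tpr))
    area = 0.0
    for (f0, s0), (f1, _) in zip(points, points[1:]):
        hi = min(f1, fpr_max)
        if hi > f0:
            area += (hi - f0) * s0   # horizontal-then-vertical step convention
        if f1 >= fpr_max:
            break
    return area
```

With perfectly separated toy data, the partial AUC over FPR in [0, 0.5] attains its maximum possible value of 0.5.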

4.
The generalized semiparametric mixed varying‐coefficient effects model for longitudinal data can accommodate a variety of link functions and flexibly model different types of covariate effects, including time‐constant, time‐varying and covariate‐varying effects. The time‐varying effects are unspecified functions of time and the covariate‐varying effects are nonparametric functions of a possibly time‐dependent exposure variable. A semiparametric estimation procedure is developed that uses local linear smoothing and profile weighted least squares, which requires smoothing in the two different and yet connected domains of time and the time‐dependent exposure variable. The asymptotic properties of the estimators of both nonparametric and parametric effects are investigated. In addition, hypothesis testing procedures are developed to examine the covariate effects. The finite‐sample properties of the proposed estimators and testing procedures are examined through simulations, indicating satisfactory performances. The proposed methods are applied to analyze the AIDS Clinical Trial Group 244 clinical trial to investigate the effects of antiretroviral treatment switching in HIV‐infected patients before and after developing the T215Y antiretroviral drug resistance mutation. The Canadian Journal of Statistics 47: 352–373; 2019 © 2019 Statistical Society of Canada

5.
Starting from the characterization of extreme‐value copulas based on max‐stability, large‐sample tests of extreme‐value dependence for multivariate copulas are studied. The two key ingredients of the proposed tests are the empirical copula of the data and a multiplier technique for obtaining approximate p‐values for the derived statistics. The asymptotic validity of the multiplier approach is established, and the finite‐sample performance of a large number of candidate test statistics is studied through extensive Monte Carlo experiments for data sets of dimension two to five. In the bivariate case, the rejection rates of the best versions of the tests are compared with those of the test of Ghoudi et al. (1998) recently revisited by Ben Ghorbal et al. (2009). The proposed procedures are illustrated on bivariate financial data and trivariate geological data. The Canadian Journal of Statistics 39: 703–720; 2011 © 2011 Statistical Society of Canada

6.
The Receiver Operating Characteristic (ROC) curve and the Area Under the ROC Curve (AUC) are effective statistical tools for evaluating the accuracy of diagnostic tests for binary‐class medical data. However, many real‐world biomedical problems involve more than two categories. The Volume Under the ROC Surface (VUS) and Hypervolume Under the ROC Manifold (HUM) measures are extensions for the AUC under three‐class and multiple‐class models. Inference methods for such measures have been proposed recently. We develop a method of constructing a linear combination of markers for which the VUS or HUM of the combined markers is maximized. Asymptotic validity of the estimator is justified by extending the results for maximum rank correlation estimation that are well known in econometrics. A bootstrap resampling method is then applied to estimate the sampling variability. Simulations and examples are provided to demonstrate our methods.
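For three ordered classes, the empirical VUS is simply the proportion of cross-class triples that the marker orders correctly. A brute-force sketch (the O(n³) enumeration is for illustration only; the function name is not from the paper):

```python
from itertools import product

def empirical_vus(class1, class2, class3):
    """Empirical Volume Under the ROC Surface for three ordered classes:
    the proportion of triples (x, y, z), one value from each class,
    satisfying x < y < z."""
    total = len(class1) * len(class2) * len(class3)
    correct = sum(1 for x, y, z in product(class1, class2, class3)
                  if x < y < z)
    return correct / total
```

A marker that perfectly separates the three classes attains VUS = 1, while chance-level ordering of three classes gives 1/6.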

7.
Generalized autoregressive conditional heteroscedastic (GARCH) models have been widely used for analyzing financial time series with time-varying volatilities. To overcome the deficiency of the Gaussian quasi-maximum likelihood estimator (QMLE) when the innovations follow either heavy-tailed or skewed distributions, Berkes & Horváth (Ann. Statist., 32, 633, 2004) and Lee & Lee (Scand. J. Statist. 36, 157, 2009) considered likelihood methods that use two-sided exponential, Cauchy and normal mixture distributions. In this paper, we extend their methods to the Box–Cox transformed threshold GARCH model by allowing the distributions used in constructing the likelihood functions to include parameters, and by employing estimated quasi-likelihood estimators (QELE) to handle those parameters. We also demonstrate that the proposed QMLE and QELE are consistent and asymptotically normal under regularity conditions. Simulation results are provided for illustration.
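The Gaussian QMLE that the extensions above improve upon minimizes the negative Gaussian quasi-log-likelihood of the volatility recursion. A minimal sketch for a plain GARCH(1,1) model (not the Box–Cox threshold variant of the paper; the starting-variance convention is an illustrative assumption):

```python
import math

def garch11_neg_loglik(params, returns):
    """Negative Gaussian quasi-log-likelihood of a GARCH(1,1) model
        sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2,
    ready to hand to a numerical minimizer. The conditional variance
    recursion is initialized at the sample variance of the returns."""
    omega, alpha, beta = params
    var = sum(r * r for r in returns) / len(returns)
    nll = 0.0
    for r in returns:
        nll += 0.5 * (math.log(2.0 * math.pi * var) + r * r / var)
        var = omega + alpha * r * r + beta * var   # update for next period
    return nll
```

Minimizing this criterion over (omega, alpha, beta) with omega > 0 and alpha, beta ≥ 0 yields the Gaussian QMLE; the paper replaces the Gaussian density with heavier-tailed alternatives whose extra parameters are themselves estimated.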

8.
Motivated by applications of Poisson processes for modelling periodic time-varying phenomena, we study a semi-parametric estimator of the period of the cyclic intensity function of a non-homogeneous Poisson process. No parametric assumptions are made on the intensity function, which is treated as an infinite-dimensional nuisance parameter. We propose a new family of estimators for the period of the intensity function, address the identifiability and consistency issues, and present simulations which demonstrate the good performance of the proposed estimation procedure in practice. We compare our method to competing methods on synthetic data and apply it to a real data set from a call center.
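The paper's estimator family is not spelled out in this abstract, but the basic idea of period estimation without modelling the intensity can be sketched as a grid search: fold the event times modulo each candidate period into phase bins and pick the candidate whose bin counts deviate most from uniform (the chi-square criterion and bin count here are illustrative assumptions, not the paper's method):

```python
def estimate_period(event_times, period_grid, n_bins=8):
    """Grid-search period estimate for a cyclic Poisson intensity: fold the
    event times modulo each candidate period into n_bins phase bins and
    return the candidate maximizing the Pearson chi-square departure of
    the bin counts from uniformity."""
    def chi2(period):
        counts = [0] * n_bins
        for t in event_times:
            counts[int((t % period) / period * n_bins) % n_bins] += 1
        expected = len(event_times) / n_bins
        return sum((c - expected) ** 2 / expected for c in counts)
    return max(period_grid, key=chi2)
```

Events clustered at a fixed phase of a period-2 cycle concentrate in one bin only when folded by the true period, so the search recovers 2.0 from the grid below.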

9.
For many diseases, logistic constraints render large incidence studies difficult to carry out. This becomes a drawback, particularly when a new study is needed each time the incidence rate is investigated in a new population. By carrying out a prevalent cohort study with follow-up it is possible to estimate the incidence rate if it is constant. The authors derive the maximum likelihood estimator (MLE) of the overall incidence rate, λ, as well as age-specific incidence rates, by exploiting the epidemiologic relationship (prevalence odds) = (incidence rate) × (mean duration), that is, P/(1 − P) = λ × µ. The authors establish the asymptotic distributions of the MLEs and provide approximate confidence intervals for the parameters. Moreover, the MLE of λ is asymptotically most efficient and is the natural estimator obtained by substituting the marginal maximum likelihood estimators for P and µ into P/(1 − P) = λ × µ. Following up the subjects allows the authors to develop these widely applicable procedures. The authors apply their methods to data collected as part of the Canadian Study of Health and Ageing to estimate the incidence rate of dementia amongst elderly Canadians. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
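Rearranging the identity P/(1 − P) = λ × µ gives the plug-in estimator λ = P/((1 − P) × µ) that the abstract describes. A one-line sketch with illustrative numbers (the prevalence and duration values below are hypothetical):

```python
def incidence_rate(prevalence, mean_duration):
    """Plug-in estimator of the incidence rate from the epidemiologic
    identity  prevalence odds = incidence rate * mean duration:
        P / (1 - P) = lambda * mu   =>   lambda = P / ((1 - P) * mu)."""
    return prevalence / ((1.0 - prevalence) * mean_duration)

# Hypothetical example: prevalence 8% and mean disease duration 5 years
# give an incidence rate of 0.08 / (0.92 * 5) ≈ 0.0174 per person-year.
rate = incidence_rate(0.08, 5.0)
```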

10.
To enhance modeling flexibility, the authors propose a nonparametric hazard regression model, for which the ordinary and weighted least squares estimation and inference procedures are studied. The proposed model does not assume any parametric specifications on the covariate effects, which is suitable for exploring the nonlinear interactions between covariates, time and some exposure variable. The authors propose the local ordinary and weighted least squares estimators for the varying‐coefficient functions and establish the corresponding asymptotic normality properties. Simulation studies are conducted to empirically examine the finite‐sample performance of the new methods, and a real data example from a recent breast cancer study is used as an illustration. The Canadian Journal of Statistics 37: 659–674; 2009 © 2009 Statistical Society of Canada

11.
Aalen's nonparametric additive model, in which the regression coefficients are assumed to be unspecified functions of time, is a flexible alternative to Cox's proportional hazards model when the proportionality assumption is in doubt. In this paper, we incorporate a general linear hypothesis into the estimation of the time-varying regression coefficients. We combine unrestricted least squares estimators and estimators that are restricted by the linear hypothesis to produce James-Stein-type shrinkage estimators of the regression coefficients. We develop the asymptotic joint distribution of such restricted and unrestricted estimators and use this to study the relative performance of the proposed estimators via their integrated asymptotic distributional risks. We conduct Monte Carlo simulations to examine the relative performance of the estimators in terms of their integrated mean square errors. We also compare the performance of the proposed estimators with a recently devised LASSO estimator as well as with ridge-type estimators, both via simulations and via data on the survival of primary biliary cirrhosis patients.
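The generic form of a positive-part James-Stein-type shrinkage rule combines the restricted and unrestricted estimators, with the amount of shrinkage driven by a test statistic for the linear hypothesis. A minimal componentwise sketch (the constant c and the statistic T are inputs here; their paper-specific choices are not reproduced):

```python
def shrinkage_estimator(beta_unrestricted, beta_restricted, test_stat, c):
    """Positive-part James-Stein-type shrinkage: pull the unrestricted
    estimator toward the hypothesis-restricted one,
        beta_S = beta_R + (1 - c / T)_+ * (beta_U - beta_R),
    so a small test statistic T (hypothesis plausible) shrinks hard toward
    beta_R, while a large T leaves beta_U nearly untouched."""
    factor = max(0.0, 1.0 - c / test_stat)
    return [r + factor * (u - r)
            for u, r in zip(beta_unrestricted, beta_restricted)]
```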

12.
In this article the author investigates empirical-likelihood-based inference for the parameters of the varying-coefficient single-index model (VCSIM). Unlike in standard settings, without a bias correction the asymptotic distribution of the empirical likelihood ratio does not attain the standard chi-squared distribution. A bias-corrected empirical likelihood method is therefore employed to construct confidence regions (intervals) for the regression parameters. Compared with regions based on the normal approximation, these have two advantages: (1) they impose no prior constraints on the shape of the regions; (2) they require no construction of a pivotal quantity, and the regions are range-preserving and transformation-respecting. A simulation study is undertaken to compare the empirical likelihood with the normal approximation in terms of coverage accuracy and average areas/lengths of the confidence regions/intervals. A real data example is given to illustrate the proposed approach. The Canadian Journal of Statistics 38: 434–452; 2010 © 2010 Statistical Society of Canada

13.
A goodness-of-fit procedure is proposed for parametric families of copulas. The new test statistics are functionals of an empirical process based on the theoretical and sample versions of Spearman's dependence function. Conditions under which this empirical process converges weakly are seen to hold for many families, including the Gaussian, Frank, and generalized Farlie–Gumbel–Morgenstern systems of distributions, as well as the models with singular components described by Durante (2007, Comptes Rendus Mathématique. Académie des Sciences. Paris, 344, 195–198). Thanks to a parametric bootstrap method that allows one to compute valid P-values, it is shown empirically that tests based on Cramér–von Mises distances keep their size under the null hypothesis. Simulations attesting the power of the newly proposed tests, comparisons with competing procedures and complete analyses of real hydrological and financial data sets are presented. The Canadian Journal of Statistics 37: 80–101; 2009 © 2009 Statistical Society of Canada

14.
We propose a new procedure for combining multiple tests in samples of right-censored observations. The new method is based on multiple constrained censored empirical likelihood, where the constraints are formulated as linear functionals of the cumulative hazard functions. We prove a version of Wilks’ theorem for the multiple constrained censored empirical likelihood ratio, which provides a simple reference distribution for the test statistic of our proposed method. A useful application of the proposed method is, for example, examining the survival experience of different populations by combining different weighted log-rank tests. Real data examples are given using the log-rank and Gehan-Wilcoxon tests. In a simulation study of two-sample survival data, we compare the proposed method of combining tests to previously developed procedures. The results demonstrate that, in addition to its computational simplicity, the combined test performs comparably to, and in some situations more reliably than, previously developed procedures. Statistical software is available in the R package ‘emplik’.
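The ingredient tests being combined are weighted log-rank statistics: at each event time one accumulates a weighted observed-minus-expected count for one group, with weight 1 for the log-rank test and the number at risk for the Gehan-Wilcoxon test. A minimal uncensored sketch of these ingredients (not the paper's empirical-likelihood combination itself):

```python
import math

def weighted_logrank(times1, times2, weight="logrank"):
    """Two-sample weighted log-rank statistic (no censoring, for clarity).
    weight="logrank" uses w(t) = 1; weight="gehan" uses w(t) = n(t), the
    number at risk, giving the Gehan-Wilcoxon test. Returns the
    standardized statistic Z = sum w*(O1 - E1) / sqrt(sum w^2 * V)."""
    num = var = 0.0
    for t in sorted(set(times1) | set(times2)):
        n1 = sum(x >= t for x in times1)   # at risk in group 1
        n2 = sum(x >= t for x in times2)
        n = n1 + n2
        d1 = sum(x == t for x in times1)   # deaths at t in group 1
        d = d1 + sum(x == t for x in times2)
        w = 1.0 if weight == "logrank" else float(n)
        num += w * (d1 - d * n1 / n)
        if n > 1:  # hypergeometric variance of d1 given the margins
            var += w * w * d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    return num / math.sqrt(var)
```

Identical samples give Z = 0, while uniformly earlier deaths in group 1 give Z > 0 under either weighting.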

15.
In this paper, we investigate the problem of testing semiparametric hypotheses in locally stationary processes. The proposed method is based on an empirical version of the L2‐distance between the true time varying spectral density and its best approximation under the null hypothesis. As this approach only requires estimation of integrals of the time varying spectral density and its square, we do not have to choose a smoothing bandwidth for the local estimation of the spectral density – in contrast to most other procedures discussed in the literature. Asymptotic normality of the test statistic is derived both under the null hypothesis and the alternative. We also propose a bootstrap procedure to obtain critical values in the case of small sample sizes. Additionally, we investigate the finite sample properties of the new method and compare it with the currently available procedures by means of a simulation study. Finally, we illustrate the performance of the new test in two data examples, one regarding log returns of the S&P 500 and the other a well‐known series of weekly egg prices.

16.
Network meta-analysis can be implemented using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Our objective is therefore to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h-) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and two real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual-ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended.

17.
We consider a recurrent event wherein the inter-event times are independent and identically distributed with a common absolutely continuous distribution function F. In this article, interest is in the problem of testing the null hypothesis that F belongs to some parametric family whose q-dimensional parameter is unknown. We propose a general chi-squared test in which the cell boundaries are data dependent. An estimator of the parameter, obtained by minimizing a quadratic form resulting from a properly scaled vector of differences between observed and expected frequencies, is used to construct the test. This estimator is known as the minimum chi-square estimator. Large-sample properties of the proposed test statistic are established using empirical process tools. A simulation study is conducted to assess the performance of the test under parameter misspecification, and our procedures are applied to air conditioning system failures of a fleet of Boeing 720 jet planes.
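The core computation is a Pearson chi-square statistic whose cell boundaries depend on the data through the fitted parameter. A minimal sketch for an exponential null (here the rate is fitted by its MLE, 1/mean, rather than the paper's minimum chi-square estimator, and the cells are chosen equiprobable under the fitted distribution):

```python
import math

def chi_square_gof_exponential(data, n_cells=4):
    """Pearson chi-square goodness-of-fit statistic for an exponential null
    with data-dependent cells: the rate is fitted (MLE, 1/mean), and the
    cell boundaries are the equiprobable quantiles of the fitted
    exponential, so every cell has expected count n / n_cells."""
    n = len(data)
    rate = 1.0 / (sum(data) / n)
    # Quantile function of Exp(rate): F^{-1}(p) = -ln(1 - p) / rate
    bounds = [-math.log(1.0 - k / n_cells) / rate for k in range(1, n_cells)]
    counts = [0] * n_cells
    for x in data:
        counts[sum(x > b for b in bounds)] += 1   # cell containing x
    expected = n / n_cells
    return sum((c - expected) ** 2 / expected for c in counts)
```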

18.
The semi‐Markov process often provides a better framework than the classical Markov process for the analysis of events with multiple states. The purpose of this paper is twofold. First, we show that in the presence of right censoring, when the right end‐point of the support of the censoring time is strictly less than the right end‐point of the support of the semi‐Markov kernel, the transition probability of the semi‐Markov process is nonidentifiable, and the estimators proposed in the literature are inconsistent in general. We derive the set of all attainable values for the transition probability based on the censored data, and we propose a nonparametric inference procedure for the transition probability using this set. Second, the conventional approach to constructing confidence bands is not applicable for the semi‐Markov kernel and the sojourn time distribution. We propose new perturbation resampling methods to construct these confidence bands. Different weights and transformations are explored in the construction. We use simulation to examine our proposals and illustrate them with hospitalization data from a recent cancer survivor study. The Canadian Journal of Statistics 41: 237–256; 2013 © 2013 Statistical Society of Canada

19.
Time-varying coefficient models are widely used in longitudinal data analysis. These models allow the effects of predictors on the response to vary over time. In this article, we consider a mixed-effects time-varying coefficient model to account for the within-subject correlation in longitudinal data. We show that when kernel smoothing is used to estimate the smooth functions in time-varying coefficient models for sparse or dense longitudinal data, the asymptotic results in these two situations are essentially different, so a subjective choice between the sparse and dense cases might lead to erroneous conclusions for statistical inference. To solve this problem, we establish a unified self-normalized central limit theorem, based on which a unified inference is proposed without deciding whether the data are sparse or dense. The effectiveness of the proposed unified inference is demonstrated through a simulation study and an analysis of Baltimore MACS data.
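Kernel smoothing of a coefficient function means averaging nearby observations with weights that decay with distance in time. A minimal Nadaraya-Watson sketch (a simpler cousin of the local linear smoothers typically used for time-varying coefficient models; function name and Gaussian kernel are illustrative choices):

```python
import math

def kernel_smooth(times, values, t, bandwidth):
    """Nadaraya-Watson kernel estimate of a smooth function beta(t) from
    noisy observations (times[i], values[i]): a Gaussian-weighted local
    average, with the bandwidth controlling how local the average is."""
    weights = [math.exp(-0.5 * ((s - t) / bandwidth) ** 2) for s in times]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

A small bandwidth makes the estimate track the nearest observations; a large one averages over the whole time range.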

20.
The authors propose to estimate nonlinear small area population parameters by using the empirical Bayes (best) method, based on a nested error model. They focus on poverty indicators as particular nonlinear parameters of interest, but the proposed methodology is applicable to general nonlinear parameters. They use a parametric bootstrap method to estimate the mean squared error of the empirical best estimators. They also study the small-sample properties of these estimators through model-based and design-based simulation studies. Results show large reductions in mean squared error relative to direct area-specific estimators and other estimators obtained from “simulated” censuses. The authors also apply the proposed method to estimate poverty incidences and poverty gaps in Spanish provinces by gender, with mean squared errors estimated by the parametric bootstrap method. For the Spanish data, results show a significant reduction in the coefficient of variation of the proposed empirical best estimators over direct estimators for practically all domains. The Canadian Journal of Statistics 38: 369–385; 2010 © 2010 Statistical Society of Canada
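The parametric bootstrap MSE idea is generic: treat the fitted model as the truth, simulate fresh data from it, re-estimate, and average the squared deviations. A minimal sketch with a hypothetical normal-mean example (the `simulate`/`estimator` interfaces and the sample sizes are illustrative, not the paper's nested error model):

```python
import random
import statistics

def parametric_bootstrap_mse(theta_hat, simulate, estimator, n_boot=200, seed=1):
    """Parametric-bootstrap MSE estimate: treating the fitted parameter as
    the truth, repeatedly simulate fresh data from the fitted model,
    re-estimate, and average the squared deviations from theta_hat.
    `simulate(theta, rng)` draws one dataset; `estimator(data)` returns a
    scalar estimate (both are user-supplied model pieces)."""
    rng = random.Random(seed)
    reps = [estimator(simulate(theta_hat, rng)) for _ in range(n_boot)]
    return statistics.fmean((r - theta_hat) ** 2 for r in reps)

# Hypothetical example: the MSE of the sample mean of 100 N(theta, 1)
# draws is close to 1/100 = 0.01.
mse = parametric_bootstrap_mse(
    2.0,
    lambda theta, rng: [rng.gauss(theta, 1.0) for _ in range(100)],
    statistics.fmean,
)
```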

