Similar Articles
20 similar articles found (search time: 46 ms).
1.
We consider a nonlinear censored regression problem with a vector of predictors. With censoring, high-dimensional regression analysis becomes much more complicated. Since censoring can cause severe bias in estimation, an adjustment for this bias is needed. Based on a weight adjustment, we develop a modification of sliced average variance estimation for estimating the lifetime central subspace without requiring a prespecified parametric model. Our proposed method preserves as much regression information as possible. Simulation results are reported, and comparisons are made with the sliced inverse regression of Li et al. (1999).
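For orientation, classical sliced average variance estimation (SAVE) for uncensored data works with the standardized predictor Z = \Sigma_x^{-1/2}(X - \mu_x); a textbook sketch is given below as background only — the censoring-weighted modification proposed above alters the slice covariances and is not reproduced here.
\[ M_{\mathrm{SAVE}} = \sum_{h=1}^{H} p_h \bigl(I_p - \operatorname{Var}(Z \mid Y \in S_h)\bigr)^{2}, \]
with slice proportions p_h; the leading eigenvectors of an estimate \widehat{M}_{\mathrm{SAVE}}, back-transformed by \Sigma_x^{-1/2}, estimate a basis of the central subspace.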

2.
In this article we propose a modification of the recently introduced divergence information criterion (DIC; Mattheou et al., 2009) for determining the order of an autoregressive process, and we show that it is an asymptotically unbiased estimator of the expected overall discrepancy, a nonnegative quantity that measures the distance between the true unknown model and a fitted approximating model. Further, we use Monte Carlo methods and various data-generating processes for small, medium, and large sample sizes to explore the ability of the new criterion to select the optimal order of an autoregressive process and, more generally, in a time series context. The new criterion shows remarkably good results, choosing the correct model more frequently than traditional information criteria.
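For context, the BHHJ measure underlying the DIC is the density power divergence of Basu, Harris, Hjort, and Jones; a standard form, shown here as background (the exact penalty term of the DIC is not reproduced), is
\[ d_a(g, f) = \int \Bigl\{ f^{1+a}(z) - \bigl(1 + \tfrac{1}{a}\bigr) g(z)\, f^{a}(z) + \tfrac{1}{a}\, g^{1+a}(z) \Bigr\}\, dz, \qquad a > 0, \]
which reduces to the Kullback–Leibler divergence as a tends to 0.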

3.
Censored data arise naturally in a number of fields, particularly in reliability and survival analysis. There are several types of censoring; in this article, we confine ourselves to right random censoring. Recently, Ahmadi et al. (2010) considered the problem of estimating unknown parameters in a general framework based on right randomly censored data. They assumed that the survival function of the censoring time is free of the unknown parameter. This assumption is sometimes inappropriate; in such cases, a proportional odds (PO) model may be more appropriate (Lam and Leung, 2001). Under this model, point and interval estimates for the unknown parameters are obtained in this article. Since it is important to check the adequacy of the models upon which inferences are based (Lawless, 2003, p. 465), two new goodness-of-fit tests for the PO model based on right randomly censored data are proposed. The proposed procedures are applied to two real data sets from Smith (2002), and a Monte Carlo simulation study is conducted to examine the behavior of the estimators.
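As a reminder of the structure involved, two survival functions S_0 and S_1 are said to be in proportional odds if their event odds differ by a time-constant factor,
\[ \frac{1 - S_1(t)}{S_1(t)} = \theta\, \frac{1 - S_0(t)}{S_0(t)} \quad \text{for all } t > 0. \]
This is only the generic defining property; exactly how the article links the lifetime and censoring distributions through such a relation is specified in the paper itself.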

4.
Double censoring arises when T represents an outcome variable that can only be accurately measured within a certain range [L, U], where L and U are the left- and right-censoring variables, respectively. When L is always observed, we consider empirical likelihood inference for linear transformation models, based on the martingale-type estimating equation proposed by Chen et al. (2002). It is demonstrated that both the approach of Lu and Liang (2006) and that of Yu et al. (2011) can be extended to doubly censored data. Simulation studies are conducted to investigate the performance of the empirical likelihood ratio methods.
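For reference, the class of linear transformation models referred to here is conventionally written as
\[ H(T) = -\beta^{\top} Z + \varepsilon, \]
where H is an unknown monotone increasing function, Z is the covariate vector, and ε has a completely known distribution (an extreme-value error yields the proportional hazards model, a standard logistic error the proportional odds model). This is the standard formulation; the doubly censored estimating equations themselves are developed in the article.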

5.

In this article, we improve on the Singh and Grewal (2013) and Hussain et al. (2016) techniques by introducing a new two-stage randomized response procedure. Using the proposed technique, we achieve better efficiency and greater protection of respondent privacy than the Kuk (1990), Singh and Grewal (2013), and Hussain et al. (2016) models. The relative efficiency and respondent protection of the proposed two-stage randomization device are investigated through a simulation study, and the situations in which the proposed estimator performs better than its competitors are reported. The SAS code used to investigate the performance of the proposed strategy is also provided.

6.
In this article, we establish several recurrence relations for the single and product moments of progressively Type-II right censored order statistics from a log-logistic distribution. Used in a systematic recursive manner, these relations enable the computation of all means, variances, and covariances of progressively Type-II right censored order statistics from the log-logistic distribution for all sample sizes n, effective sample sizes m, and all progressive censoring schemes (R_1, …, R_m). The results established here generalize the corresponding results for the usual order statistics due to Balakrishnan and Malik (1987) and Balakrishnan et al. (1987). The moments so determined are then used to derive best linear unbiased estimators for the scale and location-scale log-logistic distributions. A comparison of these estimates with the maximum likelihood estimates is made through Monte Carlo simulation. The best linear unbiased predictors of progressively censored failure times are then discussed briefly. Finally, a numerical example is presented to illustrate all the methods of inference developed here.
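For reference, a common scale–shape parameterization of the log-logistic distribution (the article's scale and location-scale versions may use a different convention) is
\[ F(x) = \frac{1}{1 + (x/\alpha)^{-\beta}}, \qquad f(x) = \frac{(\beta/\alpha)\,(x/\alpha)^{\beta - 1}}{\bigl(1 + (x/\alpha)^{\beta}\bigr)^{2}}, \qquad x > 0,\ \alpha, \beta > 0. \]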

7.
Motivated by covariate-adjusted regression (CAR), proposed by Sentürk and Müller (2005), and by an application problem, in this article we introduce and investigate a covariate-adjusted partially linear regression model (CAPLM), in which both the response and the predictor vector can only be observed after being distorted by multiplicative factors, and an additional variable such as age or period is taken into account. Although our model appears to be a special case of the covariate-adjusted varying coefficient model (CAVCM) of Sentürk (2006), the data types of CAPLM and CAVCM are fundamentally different, and so are the corresponding inference methods. In this article, an estimation method motivated by Cui et al. (2008) is employed for the new model. Furthermore, under some mild conditions, the asymptotic normality of the estimator of the parametric component is obtained. Combined with a consistent estimate of the asymptotic covariance, this yields confidence intervals for the regression coefficients. Simulations and a real data analysis are also presented to illustrate the new model and methods.
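To fix ideas, a covariate-adjusted partially linear setup of the kind described can be sketched as follows; this is a generic form under the usual CAR-type distortion assumptions, not the article's exact specification:
\[ Y = X^{\top}\beta + g(T) + e, \qquad \widetilde{Y} = \psi(U)\, Y, \qquad \widetilde{X}_r = \phi_r(U)\, X_r, \]
where only the distorted (\widetilde{Y}, \widetilde{X}) together with T and the confounder U are observed, and the unknown distorting functions are typically normalized by E[\psi(U)] = E[\phi_r(U)] = 1 for identifiability.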

8.
9.
Difference-based estimators of the error variance are popular because they do not require estimation of the mean function. Unlike most existing difference-based estimators, the estimators proposed by Müller et al. (2003) and Tong and Wang (2005) achieve the asymptotically optimal rate of residual-based estimators. In this article, we study the relative errors of these difference-based estimators, which leads to a better understanding of how they differ from residual-based estimators. To compute the relative error of the covariate-matched U-statistic estimator proposed by Müller et al. (2003), we develop a modified version using simpler weights. We further investigate its asymptotic properties for both equidistant and random designs and show that our modified estimator is asymptotically efficient.
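For orientation, the simplest difference-based variance estimator, the classical first-order version shown here as background (not the covariate-matched U-statistic or least-squares estimators studied above), is
\[ \widehat{\sigma}^{2} = \frac{1}{2(n-1)} \sum_{i=2}^{n} \bigl(y_{i} - y_{i-1}\bigr)^{2}, \]
which avoids estimating the mean function because differencing adjacent observations removes the smooth trend up to higher-order terms.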

10.
In many genetic analyses of dichotomous twin data, odds ratios have been used to test hypotheses about heritability and shared common environment effects for a given disease (Lichtenstein et al., 2000; Ahlbom et al., 1997; Ramakrishnan et al., 1992). However, estimates of these two effects have not been dealt with in the literature. In epidemiology, the attributable fraction (AF), a function of the odds ratio and the prevalence of the risk factor, has been used to describe the contribution of a risk factor to a disease in a given population (Leviton, 1973). In this article, we adapt the AF to quantify heritability and the shared common environment. Twin data on cancer, gallstone disease, and phobia are used to illustrate the applicability of the AF estimate as a measure of heritability.
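As background, the population attributable fraction in its classical (Levin) form is
\[ \mathrm{AF} = \frac{p\,(\mathrm{RR} - 1)}{1 + p\,(\mathrm{RR} - 1)}, \]
where p is the prevalence of the risk factor and RR the relative risk; the odds ratio is commonly substituted for RR when the disease is rare. The twin-data adaptation used above to quantify heritability and shared environment is developed in the article itself.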

11.
In this article, several methods for making inferences about the parameters of a finite mixture of distributions in the context of centrally censored data with partial identification are revised. These methods adapt the work of Contreras-Cristán, Gutiérrez-Peña, and O'Reilly (2003) for the case of right censoring. The first method is based on an asymptotic approximation to a suitably simplified likelihood using some latent quantities; the second method is based on the expectation-maximization (EM) algorithm. Both methods make explicit use of latent variables and provide computationally efficient procedures compared with non-Bayesian methods that deal directly with the full likelihood of the mixture through its asymptotic approximation. The third method, from a Bayesian perspective, uses data augmentation to work with an uncensored sample; it is related to the Bayesian method recently proposed by Baker, Mengersen, and Davis (2005). The three adapted methods are shown to provide similar inferential answers, thus offering alternative analyses.

12.
This paper discusses the estimation of average treatment effects in observational causal inference. By employing a working propensity score and two working regression models for the treatment and control groups, Robins et al. (1994, 1995) introduced the augmented inverse probability weighting (AIPW) method for estimating average treatment effects, which extends the inverse probability weighting (IPW) method of Horvitz and Thompson (1952); the AIPW estimators are locally efficient and doubly robust. In this paper, we study a hybrid of the empirical likelihood method and the method of moments that employs three estimating functions and generates estimators of average treatment effects that are locally efficient and doubly robust. The proposed estimators are efficient for the given choice of three estimating functions when the working propensity score is correctly specified, and are thus more efficient than the AIPW estimators. In addition, we consider a regression method for estimating the average treatment effects when the working regression models for both the treatment and control groups are correctly specified; the asymptotic variance of the resulting estimator is no greater than the semiparametric variance bound characterized by the theory of Robins et al. (1994, 1995). Finally, we present a simulation study comparing the finite-sample performance of the various methods with respect to bias, efficiency, and robustness to model misspecification.
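For reference, with treatment indicator T_i, outcome Y_i, estimated propensity score \hat{e}(X_i), and working outcome regressions \hat{m}_1, \hat{m}_0 for the treatment and control groups, the textbook IPW and AIPW estimators of the average treatment effect are
\[ \widehat{\tau}_{\mathrm{IPW}} = \frac{1}{n} \sum_{i=1}^{n} \left[ \frac{T_i Y_i}{\hat{e}(X_i)} - \frac{(1 - T_i)\, Y_i}{1 - \hat{e}(X_i)} \right], \]
\[ \widehat{\tau}_{\mathrm{AIPW}} = \frac{1}{n} \sum_{i=1}^{n} \left[ \frac{T_i Y_i - \bigl(T_i - \hat{e}(X_i)\bigr)\, \hat{m}_1(X_i)}{\hat{e}(X_i)} - \frac{(1 - T_i)\, Y_i + \bigl(T_i - \hat{e}(X_i)\bigr)\, \hat{m}_0(X_i)}{1 - \hat{e}(X_i)} \right]. \]
These standard forms are shown for context; the hybrid empirical-likelihood estimator proposed above differs and is defined in the paper.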

13.
There is an emerging consensus in empirical finance that realized volatility series typically display long range dependence with a memory parameter (d) around 0.4 (Andersen et al., 2001; Martens et al., 2004). The present article provides some illustrative analysis of how long memory may arise from the accumulative process underlying realized volatility. The article also uses results in Lieberman and Phillips (2004, 2005) to refine statistical inference about d by higher-order theory. Standard asymptotic theory has an O(n^{-1/2}) error rate for error rejection probabilities, and the theory used here refines the approximation to an error rate of o(n^{-1/2}). The new formula is independent of unknown parameters, simple to calculate, and user-friendly. The method is applied to test whether the long memory parameter estimates reported by Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory, and it generally confirms earlier findings.
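As background, the memory parameter d is the fractional differencing parameter of an ARFIMA-type process,
\[ (1 - L)^{d} X_t = \varepsilon_t, \qquad (1 - L)^{d} = \sum_{k=0}^{\infty} \binom{d}{k} (-L)^{k}, \]
which is stationary for d < 1/2 and exhibits hyperbolically decaying autocorrelations, \rho(k) \sim C\, k^{2d - 1} as k \to \infty. This standard definition is shown for context; the aggregation argument and higher-order expansions used above are developed in the article.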

14.
We propose a method of including polynomial and interaction terms in Distance-Based Regression (Cuadras and Arenas, 1990), relying on properties of the semi-Hadamard, or Khatri-Rao, product of matrices. We demonstrate its application to real data examples.
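For concreteness, the Khatri-Rao (column-wise Kronecker) product can be computed as in the following minimal numpy sketch; the function name khatri_rao is ours, and the sketch covers only the matrix operation, not the distance-based regression procedure itself.

import numpy as np

def khatri_rao(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Column-wise Kronecker (Khatri-Rao) product of matrices with equal column counts."""
    if A.shape[1] != B.shape[1]:
        raise ValueError("A and B must have the same number of columns")
    # Each output column is the Kronecker product of the matching input columns.
    return np.column_stack([np.kron(A[:, j], B[:, j]) for j in range(A.shape[1])])

# Example: two 2 x 3 matrices give a 4 x 3 result.
A = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
B = np.array([[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]])
print(khatri_rao(A, B).shape)  # (4, 3)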

15.
In Bielecki et al. (2014a), the authors introduced a Markov copula model of portfolio credit risk in which pricing and hedging can be done in a sound theoretical and practical way. Further theoretical background and practical details are developed in Bielecki et al. (2014b,c), where the numerical illustrations assumed deterministic intensities and constant recoveries. In the present paper, we show how to incorporate stochastic default intensities and random recoveries into the bottom-up modeling framework of Bielecki et al. (2014a) while preserving numerical tractability. These two features are of primary importance for applications such as CVA computations on credit derivatives (Assefa et al., 2011; Bielecki et al., 2012), as CVA is sensitive to the stochastic nature of credit spreads, and random recoveries make it possible to achieve satisfactory calibration even for “badly behaved” data sets. This article is thus a complement to Bielecki et al. (2014a,b,c).

16.
This article is concerned with a sphericity test for the two-way error components panel data model. It is found that the John statistic and the bias-corrected LM statistic recently developed by Baltagi et al. (2011, 2012), which are based on the within residuals, are not helpful under the present circumstances, even though they are in the one-way fixed effects model. However, we prove that when the within residuals are properly transformed, the resulting residuals can serve to construct useful statistics similar to those of Baltagi et al. (2011, 2012). Simulation results show that the newly proposed statistics perform well under the null hypothesis and under several typical alternatives.
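For background, John's sphericity statistic in its classical form is
\[ U = \frac{1}{p}\, \operatorname{tr}\!\left[ \left( \frac{S}{\tfrac{1}{p} \operatorname{tr} S} - I_p \right)^{2} \right], \]
computed from a p × p sample covariance matrix S, with large values indicating departure from sphericity. The panel-data versions above apply statistics of this type to (transformed) within residuals with appropriate centering and scaling, which is not reproduced here.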

17.
Two types of estimates of process level, namely repeated median estimates (Siegel, 1982) and full online estimates based on repeated median filters (Gather et al., 2006), are used to develop control charts. The distributional properties of the estimates are studied by simulation and are found to closely follow a normal distribution. The repeated median, being robust against outliers with an asymptotic 50% breakdown value and having a small standard deviation, is found to be useful as a basis for monitoring process averages. Control charts using repeated median estimates are recommended for general use.
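For concreteness, Siegel's repeated median line can be computed as in the minimal O(n²) sketch below; this shows the classical estimator only, not the moving-window online filters used for the charts, and the function name is ours.

import numpy as np

def repeated_median_line(x: np.ndarray, y: np.ndarray) -> tuple[float, float]:
    """Siegel's repeated median regression: returns (slope, intercept)."""
    n = len(x)
    inner = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        # Median over j of the pairwise slopes from point i.
        inner[i] = np.median((y[mask] - y[i]) / (x[mask] - x[i]))
    slope = np.median(inner)                # median of the per-point medians
    intercept = np.median(y - slope * x)    # median residual level
    return slope, intercept

# Example: one gross outlier barely moves the fitted line.
rng = np.random.default_rng(0)
x = np.arange(20.0)
y = 2.0 + 0.5 * x + rng.normal(scale=0.1, size=20)
y[5] += 50.0
print(repeated_median_line(x, y))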

18.
By using the medical data analyzed by Kang et al. (2007), a Bayesian procedure is applied to obtain control limits for the coefficient of variation. Reference and probability-matching priors are derived for a common coefficient of variation across the range of sample values. By simulating the posterior predictive density function of a future coefficient of variation, it is shown that the control limits are effectively identical to those obtained by Kang et al. (2007) for the specific dataset they used. This article illustrates the flexibility and unique features of the Bayesian simulation method for obtaining posterior distributions, predictive intervals, and run-lengths in the case of the coefficient of variation. A simulation study shows that the 95% Bayesian confidence intervals for the coefficient of variation have the correct frequentist coverage.
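For reference, the monitored quantity is the sample coefficient of variation, and the Bayesian limits described are predictive quantiles (generic form; the reference and probability-matching priors are derived in the article):
\[ \widehat{\mathrm{CV}} = \frac{s}{\bar{x}}, \qquad \mathrm{LCL} = q_{\alpha/2}, \quad \mathrm{UCL} = q_{1 - \alpha/2}, \]
where q_{\alpha/2} and q_{1-\alpha/2} are quantiles of the simulated posterior predictive distribution of a future sample's coefficient of variation.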

19.
We introduce a score test to identify longitudinal biomarkers or surrogates for a time-to-event outcome. This method extends Henderson et al. (2000, 2002). In this article, the score test is based on a joint likelihood function that combines the likelihood functions of the longitudinal biomarkers and of the survival times. Henderson et al. (2000, 2002) assumed that the same random effect appears in the longitudinal component and in the Cox model, from which they derived a score test to determine whether a longitudinal biomarker is associated with the time to an event. We extend this work: our score test is based on a joint likelihood function that allows other random effects to be present in the survival function.

Allowing for heterogeneous baseline hazards across individuals, we use simulations to explore how various factors influence the power of the score test to detect the association between a longitudinal biomarker and the survival time. These factors include the functional form of the random effects of the longitudinal biomarkers, the number of individuals, and the number of time points per individual. We illustrate our method using the prothrombin index as a predictor of survival in liver cirrhosis patients.
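As background, joint models of the Henderson et al. type link a longitudinal trajectory and a hazard through correlated latent processes; a generic sketch (not the article's exact random-effects specification) is
\[ Y_{ij} = x_{ij}^{\top} \beta + W_{1i}(t_{ij}) + \varepsilon_{ij}, \qquad \lambda_i(t) = \lambda_{0}(t)\, \exp\bigl\{ z_i^{\top} \gamma + W_{2i}(t) \bigr\}, \]
where W_{1i} and W_{2i} are linked random processes (for example, shared or proportional random effects) and the score test evaluates the association between them at zero.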

20.
For Canada's boreal forest region, accurately modelling the timing of the appearance of aspen leaves is important for forest fire management, since it signifies the end of the spring fire season that follows snowmelt. This article compares two methods, a midpoint rule and a conditional expectation method, for estimating the true flush date from interval-censored data collected at a large set of fire-weather stations in Alberta, Canada. The conditional expectation method uses the interval-censored kernel density estimator of Braun et al. (2005). The methods are compared via simulation, where true flush dates were generated from a normal distribution and then converted into intervals by adding and subtracting exponential random variables. The simulation parameters were estimated from the data set and several scenarios were considered. The study reveals that the conditional expectation method is never worse than the midpoint method, and that it has a significant advantage when the intervals are large. An illustration of the methodology applied to the Alberta data set is also provided.
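To make the two estimators concrete: for a flush date known only to lie in the interval [L, U], the midpoint rule and the conditional expectation method give
\[ \widehat{T}_{\mathrm{mid}} = \frac{L + U}{2}, \qquad \widehat{T}_{\mathrm{ce}} = \mathrm{E}\bigl[ T \mid L \le T \le U \bigr] = \frac{\int_{L}^{U} t\, \widehat{f}(t)\, dt}{\int_{L}^{U} \widehat{f}(t)\, dt}, \]
where \widehat{f} is, in the article, the interval-censored kernel density estimator of Braun et al. (2005); the generic formulas are shown here for orientation.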

