Similar Articles
A total of 20 similar articles were found.
1.
The penalized likelihood approach of Fan and Li (2001, 2002) differs from traditional variable selection procedures in that it deletes non-significant variables by estimating their coefficients as zero. Nevertheless, the desirable performance of this shrinkage methodology relies heavily on an appropriate choice of the tuning parameter involved in the penalty functions. In this work, new estimates of the norm of the error are first derived through the use of Kantorovich inequalities and subsequently applied in the frailty-model framework. These estimates are used to derive a tuning parameter selection procedure for penalized frailty models and clustered data. In contrast with standard methods, the proposed approach does not depend on resampling and therefore yields a considerable gain in computational time. Moreover, it produces improved results. Simulation studies are presented to support the theoretical findings, and two real medical data sets are analyzed.

2.
Recently, ensemble learning approaches have proven quite effective for variable selection in linear regression models. In general, a good variable selection ensemble should consist of a diverse collection of strong members. Building on the parallel genetic algorithm (PGA) proposed in [41], we propose a novel method, RandGA, which injects randomness into PGA with the aim of increasing diversity among ensemble members. Using a number of simulated data sets, we show that the newly proposed RandGA compares favorably with other variable selection techniques. As a real example, the new method is applied to the diabetes data.
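RandGA itself is not specified in the abstract beyond injecting randomness into a parallel GA, so the following is only a minimal sketch of genetic-algorithm variable selection for linear regression: candidate solutions are binary inclusion vectors, fitness is the negative BIC of the corresponding OLS fit, and truncation selection, one-point crossover, and bit-flip mutation evolve the population. The function names and all tuning constants (population size, mutation rate) are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def bic_fitness(X, y, mask):
    """Negative BIC of the OLS fit using the columns selected by `mask` (higher is better)."""
    n = len(y)
    if mask.sum() == 0:
        resid, k = y - y.mean(), 1
    else:
        Xs = np.column_stack([np.ones(n), X[:, mask]])
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        resid, k = y - Xs @ beta, Xs.shape[1]
    rss = resid @ resid
    return -(n * np.log(rss / n) + k * np.log(n))

def ga_select(X, y, pop_size=40, n_gen=60, p_mut=0.05, rng=None):
    """Toy GA for variable selection; returns the best inclusion mask found."""
    rng = np.random.default_rng(rng)
    p = X.shape[1]
    pop = rng.random((pop_size, p)) < 0.5            # random initial population of masks
    for _ in range(n_gen):
        fit = np.array([bic_fitness(X, y, m) for m in pop])
        parents = pop[np.argsort(-fit)[: pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            i, j = rng.integers(len(parents), size=2)
            cut = rng.integers(1, p)                 # one-point crossover
            child = np.concatenate([parents[i][:cut], parents[j][cut:]])
            child ^= rng.random(p) < p_mut           # random bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    fit = np.array([bic_fitness(X, y, m) for m in pop])
    return pop[np.argmax(fit)]

# Small synthetic example: only the first 3 of 10 predictors are active.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2] + rng.normal(size=200)
print(ga_select(X, y, rng=0))
```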

3.
This article is concerned with the sphericity test for the two-way error components panel data model. It is found that the John statistic and the bias-corrected LM statistic recently developed by Baltagi et al. (2011, 2012), which are based on the within residuals, are not helpful under the present circumstances even though they are in the one-way fixed effects model. However, we prove that when the within residuals are properly transformed, the resulting residuals can serve to construct useful statistics that are similar to those of Baltagi et al. (2011, 2012). Simulation results show that the newly proposed statistics perform well under the null hypothesis and several typical alternatives.

4.
Competing models arise naturally in many research fields, such as survival analysis and economics, when the same phenomenon of interest is explained by different researchers using different theories or according to different experiences. The model selection problem is therefore important because of its bearing on the subsequent inference: inference under a misspecified or inappropriate model is risky. Existing model selection tests, such as Vuong's test [26] and Shi's non-degenerate test [21], suffer from difficulties with variance estimation and from departures of the likelihood ratios from normality. To circumvent these difficulties, we propose an empirical likelihood ratio (ELR) test for model selection. Following Shi [21], a bias correction method is proposed for the ELR test to enhance its performance. A simulation study and a real-data analysis are provided to illustrate the performance of the proposed ELR test.

5.
In this article, we consider investigating whether any of k treatments are better than a control under the assumption that each treatment mean is no less than the control mean. A classic problem is to find simultaneous confidence bounds for the difference between each treatment and the control. Compared with hypothesis testing, confidence bounds have the attractive advantage of conveying more information about the effective treatments. Generally, one-sided lower bounds are provided, since they are enough for detecting an effective treatment and are sharper than two-sided bounds. However, a two-sided procedure provides both upper and lower bounds on the differences. In this article, we develop a new procedure which combines the good aspects of both the one-sided and the two-sided procedures. This new procedure has the same inferential sensitivity as the one-sided procedure proposed by Zhao (2007) while also providing simultaneous two-sided bounds for the differences between treatments and the control. Our computational results indicate that the new procedure outperforms the procedure of Hayter, Miwa, and Liu (Hayter et al., 2000) when the sample size is balanced. We also illustrate the new procedure with an example.
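The abstract does not give the form of the new procedure, so the sketch below only illustrates the standard building block it refers to: computing, by Monte Carlo, the critical constant c such that the simultaneous one-sided lower bounds (ybar_i - ybar_0) - c * s * sqrt(2/n) cover all true differences with probability 1 - alpha in a balanced one-way layout. Everything here (balanced design, common unknown variance, the simulation settings) is an assumption made for illustration only.

```python
import numpy as np

def critical_constant(k, n, alpha=0.05, n_sim=100_000, rng=0):
    """Monte Carlo critical constant c for simultaneous one-sided lower bounds
    (ybar_i - ybar_0) - c * s * sqrt(2/n), i = 1..k, in a balanced one-way
    layout with k treatments and one control (common unknown variance)."""
    rng = np.random.default_rng(rng)
    df = (k + 1) * (n - 1)                                 # pooled error degrees of freedom
    # T_i = ((ybar_i - ybar_0) - (mu_i - mu_0)) / (s * sqrt(2/n)) has the joint
    # distribution simulated below (unit error variance without loss of generality).
    zbar = rng.normal(size=(n_sim, k + 1)) / np.sqrt(n)   # group sample means
    s = np.sqrt(rng.chisquare(df, size=n_sim) / df)       # pooled standard deviation
    t_max = ((zbar[:, 1:] - zbar[:, [0]]) / np.sqrt(2.0 / n)).max(axis=1) / s
    return np.quantile(t_max, 1 - alpha)

# Example: 4 treatments vs a control, 10 observations per group.
c = critical_constant(k=4, n=10)
print(round(c, 3))
# Lower bound for treatment i:  (ybar_i - ybar_0) - c * s_pooled * sqrt(2/10)
```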

6.
This paper presents a new variable-weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. [18]. Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when the data contain noise variables. Compared with k-means, FCM, and KCL with existing variable-weight methods, the proposed WFKCL algorithm with the proposed SVD weighting method provides better clustering performance in terms of the error rate criterion. Furthermore, the complexity of the proposed SVD approach is lower than those of Pal et al. [17], Wang et al. [19], and Hung et al. [9].
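The WFKCL algorithm and the SVD weighting scheme are not specified in the abstract; the sketch below only shows the plain weighted fuzzy c-means building block that the paper combines with KCL: squared Euclidean distances with per-feature weights, the usual membership update, and weighted-mean centroid updates. The feature weights are taken as given here (for example from any variable-importance scheme); their SVD-based derivation in the paper is not reproduced.

```python
import numpy as np

def weighted_fcm(X, c, w, m=2.0, n_iter=100, tol=1e-6, rng=0):
    """Fuzzy c-means with fixed, nonnegative per-feature weights `w`.
    Returns (cluster centers, membership matrix U). Minimal illustration only."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    U = rng.dirichlet(np.ones(c), size=n)                  # n x c initial memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]     # weighted-mean centroids
        # feature-weighted squared Euclidean distances, shape (n, c)
        d = ((X[:, None, :] - centers[None, :, :]) ** 2 * w).sum(axis=2)
        d = np.maximum(d, 1e-12)
        # standard FCM update: u_ik = 1 / sum_j (d_ik / d_ij)^(1/(m-1))
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (1.0 / (m - 1))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Example: two informative features plus one pure-noise feature given a low weight.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0, 0], 1.0, size=(100, 3)),
               rng.normal([4, 4, 0], 1.0, size=(100, 3))])
centers, U = weighted_fcm(X, c=2, w=np.array([0.45, 0.45, 0.10]))
labels = U.argmax(axis=1)
```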

7.
The problem of finding D-optimal designs in the presence of a number of covariates is considered in the one-way set-up. This is an extension of Dey and Mukerjee (2006) in the sense that, for fixed replication numbers of each treatment, an alternative upper bound on the determinant of the information matrix is found through completely symmetric C-matrices for the regression coefficients; this upper bound includes the one given in Dey and Mukerjee (2006), which was obtained through diagonal C-matrices. Because a smaller class of C-matrices was used at the intermediate stage where the replication numbers were fixed, some optimal designs remained unidentified there. These designs are identified here, and thereby the conjecture made in Dey and Mukerjee (2006) is settled.

8.
This article describes how diagnostic procedures were derived for symmetrical nonlinear regression models, continuing the work of Cysneiros and Vanegas (2008) and Vanegas and Cysneiros (2010), who showed that parameter estimates in nonlinear models are more robust with heavy-tailed than with normal errors. Here, we focus on assessing whether the robustness of this class of models is also observed in the inference process (i.e., the partial F-test). Symmetrical nonlinear regression models include all symmetric continuous error distributions, covering both light- and heavy-tailed distributions such as Student-t, logistic-I and -II, power exponential, generalized Student-t, generalized logistic, and contaminated normal. First, a statistical test is presented for evaluating the assumption that the error terms all have equal variance. The results of simulation studies describing the behavior of the proposed heteroscedasticity test in the presence of outliers are then given. To assess the robustness of the inference process, we present the results of a simulation study describing the behavior of the partial F-test in the presence of outliers. Some diagnostic procedures are also derived to identify observations that are influential on the partial F-test. As illustration, a dataset described in Venables and Ripley (2002) is analyzed.

9.
Noting that many economic variables display occasional shifts in their second-order moments, we investigate the performance of homogeneous panel unit root tests in the presence of permanent volatility shifts. It is shown that in this case the test statistic proposed by Herwartz and Siedenburg (2008) is asymptotically standard Gaussian. By means of a simulation study, we illustrate the performance of first- and second-generation panel unit root tests and undertake a more detailed comparison of the test of Herwartz and Siedenburg (2008) and its heteroskedasticity-consistent Cauchy counterpart introduced in Demetrescu and Hanck (2012a). As an empirical illustration, we reassess evidence on the Fisher hypothesis with data from nine countries over the period 1961Q2–2011Q2. The empirical evidence supports panel stationarity of the real interest rate over the entire sample period. With regard to the most recent two decades, however, the test results cast doubt on market integration, since the real interest rate is diagnosed as nonstationary.
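Neither the Herwartz–Siedenburg statistic nor its Cauchy counterpart is spelled out in the abstract, so the sketch below only illustrates a simple first-generation benchmark of the kind the simulation study compares against: a Fisher/Maddala–Wu-type panel unit root test that combines unit-specific ADF p-values, applied to a panel simulated with a permanent volatility shift halfway through the sample. The data-generating settings are arbitrary illustrative assumptions, and the Fisher combination relies on cross-sectional independence, which is exactly the kind of assumption the paper's setting stresses.

```python
import numpy as np
from scipy import stats
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
N, T = 10, 200

# Simulate N independent random walks whose innovation standard deviation
# permanently triples halfway through the sample (a volatility shift).
sigma = np.where(np.arange(T) < T // 2, 1.0, 3.0)
eps = rng.normal(size=(N, T)) * sigma
y = eps.cumsum(axis=1)

# Fisher / Maddala-Wu first-generation test: combine unit-specific ADF p-values.
pvals = np.array([adfuller(y[i], regression="c", autolag="AIC")[1] for i in range(N)])
fisher_stat = -2 * np.log(pvals).sum()
p_combined = stats.chi2.sf(fisher_stat, df=2 * N)
print(f"Fisher statistic = {fisher_stat:.2f}, combined p-value = {p_combined:.3f}")
```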

10.
This article proposes a new likelihood-based panel cointegration rank test which extends the test of Örsal and Droge (2014) (henceforth the panel SL test) to dependent panels. The dependence is modelled by unobserved common factors which affect the variables in each cross-section through heterogeneous loadings. The data are defactored following the panel analysis of nonstationarity in idiosyncratic and common components (PANIC) approach of Bai and Ng (2004), and the cointegrating rank of the defactored data is then tested with the panel SL test. A Monte Carlo study demonstrates that the proposed testing procedure has reasonable size and power properties in finite samples.
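As a rough illustration of the defactoring step only, the sketch below applies the core PANIC idea to a single variable observed across N units: extract a small number of common factors from the first-differenced panel by principal components, remove the estimated common component, and re-cumulate the residual differences to obtain defactored (idiosyncratic) series. This is a schematic of the Bai–Ng construction under assumptions made here for illustration (the number of factors is fixed rather than estimated by information criteria), and it is not the paper's full multivariate rank-testing procedure.

```python
import numpy as np

def panic_defactor(Y, r=1):
    """Schematic PANIC defactoring for a T x N panel Y of I(1) series.
    Factors are extracted from first differences by principal components;
    the defactored levels are the cumulated idiosyncratic differences."""
    dY = np.diff(Y, axis=0)                         # (T-1) x N first differences
    dY_c = dY - dY.mean(axis=0)                     # demean each unit
    T1, N = dY_c.shape
    U, s, Vt = np.linalg.svd(dY_c, full_matrices=False)
    F = np.sqrt(T1) * U[:, :r]                      # estimated factors, F'F/T1 = I
    Lam = dY_c.T @ F / T1                           # N x r estimated loadings
    e = dY_c - F @ Lam.T                            # idiosyncratic differences
    return np.vstack([np.zeros((1, N)), e.cumsum(axis=0)])   # defactored levels

# Example: 20 units driven by one common random-walk factor plus idiosyncratic walks.
rng = np.random.default_rng(0)
T, N = 200, 20
f = rng.normal(size=T).cumsum()
lam = rng.normal(size=N)
Y = np.outer(f, lam) + rng.normal(size=(T, N)).cumsum(axis=0)
Y_defac = panic_defactor(Y, r=1)
```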

11.
This article suggests random and fixed effects spatial two-stage least squares estimators for the generalized mixed regressive spatial autoregressive panel data model. This extends the generalized spatial panel model of Baltagi et al. (2013) by the inclusion of a spatial lag term. The estimation method utilizes the generalized moments method suggested by Kapoor et al. (2007) for a spatial autoregressive panel data model. We derive the asymptotic distributions of these estimators and suggest a Hausman test à la Mutl and Pfaffermayr (2011) based on the difference between these estimators. Monte Carlo experiments are performed to investigate the performance of these estimators as well as the corresponding Hausman test.

12.
This article proposes an asymptotic expansion for the Studentized linear discriminant function using two-step monotone missing samples under multivariate normality. Asymptotic expansions related to the discriminant function have previously been obtained for complete data under multivariate normality. The result derived by Anderson (1973) plays an important role in choosing the cut-off point that controls the probabilities of misclassification. This article extends Anderson's (1973) result to the case of two-step monotone missing samples under multivariate normality. Finally, numerical evaluations by Monte Carlo simulation are presented.

13.
A variety of statistical approaches have been suggested in the literature for the analysis of bounded outcome scores (BOS). In this paper, we suggest a statistical approach for when BOSs are repeatedly measured over time and used as predictors in a regression model. Instead of directly using the BOS as a predictor, we propose to extend the approaches suggested in [16, 21, 28] to a joint modeling setting. Our approach is illustrated on longitudinal profiles of multiple patient-reported outcomes to predict the current clinical status of rheumatoid arthritis patients by the disease activity score of 28 joints (DAS28). Both a maximum likelihood and a Bayesian approach are developed.

14.
The efficiency of penalized methods (Fan and Li, 2001) depends strongly on a tuning parameter, since it controls the extent of penalization. It is therefore important to select it appropriately. In general, tuning parameters are chosen by data-driven approaches, such as the commonly used generalized cross validation. In this article, we propose an alternative method for deriving the tuning parameter selector in the penalized least squares framework, which can lead to an improved estimate. Simulation studies are presented to support the theoretical findings, and a comparison of the Type I and Type II error rates is performed for the L1, hard thresholding, and smoothly clipped absolute deviation (SCAD) penalty functions. The results are given in tables and discussed.
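The paper's alternative selector is not described in the abstract; the sketch below only illustrates the baseline it is compared against, generalized cross validation, in the one case where GCV has a simple closed form, namely ridge (L2) penalized least squares. There the hat matrix is H(lam) = X (X'X + lam I)^{-1} X' and GCV(lam) = (1/n) ||y - H(lam) y||^2 / (1 - tr(H(lam))/n)^2. The SCAD, L1, and hard-thresholding penalties studied in the paper require locally approximated versions of this formula, which are not reproduced here.

```python
import numpy as np

def gcv_ridge(X, y, lambdas):
    """Generalized cross validation for ridge regression over a grid of lambdas.
    Returns (best lambda, GCV scores)."""
    n, p = X.shape
    scores = []
    for lam in lambdas:
        A = X.T @ X + lam * np.eye(p)
        H = X @ np.linalg.solve(A, X.T)          # hat matrix X (X'X + lam I)^{-1} X'
        resid = y - H @ y
        gcv = (resid @ resid / n) / (1 - np.trace(H) / n) ** 2
        scores.append(gcv)
    scores = np.array(scores)
    return lambdas[scores.argmin()], scores

# Example on synthetic data with a sparse true coefficient vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
beta = np.array([2.0, -1.5, 0, 0, 1.0, 0, 0, 0])
y = X @ beta + rng.normal(size=100)
grid = np.logspace(-3, 3, 50)
best_lam, _ = gcv_ridge(X, y, grid)
print(best_lam)
```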

15.
Karlis and Santourian [14] proposed a model-based clustering algorithm, based on the expectation-maximization (EM) algorithm, to fit mixtures of the multivariate normal-inverse Gaussian (NIG) distribution. However, the EM algorithm for the multivariate NIG mixture requires a set of initial values to begin the iterative process, and the number of components has to be given a priori. In this paper, we present a learning-based EM algorithm whose aim is to overcome these weaknesses of Karlis and Santourian's EM algorithm [14]. The proposed learning-based EM algorithm was inspired by Yang et al. [24], whose self-clustering process it emulates. Numerical experiments show promising results compared to Karlis and Santourian's EM algorithm. Moreover, the methodology is applicable to the analysis of extrasolar planets. Our analysis provides an understanding of the clustering results in the ln P–ln M and ln P–e spaces, where M is the planetary mass, P is the orbital period, and e is the orbital eccentricity. The identified groups suggest two interpretations: (1) the characteristics of the two clusters in ln P–ln M space might be related to tidal and disc interactions (see [9]); and (2) there are two clusters in ln P–e space.
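The NIG-mixture EM and its learning-based modification are not detailed in the abstract. As a rough, hedged illustration of the underlying issue (EM for a finite mixture needing the number of components and good starting values), the sketch below fits ordinary Gaussian mixtures rather than NIG mixtures over a range of component counts, with several random restarts, and selects the model by BIC using scikit-learn. Both the Gaussian substitution and the BIC-based selection are assumptions made only for illustration; they are not the paper's self-clustering mechanism.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for a (ln P, ln M)-style two-group point cloud.
X = np.vstack([
    rng.multivariate_normal([0.0, 0.0], [[1.0, 0.4], [0.4, 1.0]], size=150),
    rng.multivariate_normal([4.0, 3.0], [[1.0, -0.3], [-0.3, 1.0]], size=150),
])

best_model, best_bic = None, np.inf
for k in range(1, 7):                      # candidate numbers of components
    gm = GaussianMixture(n_components=k, n_init=10, random_state=0).fit(X)
    bic = gm.bic(X)
    if bic < best_bic:
        best_model, best_bic = gm, bic

print("selected components:", best_model.n_components)
labels = best_model.predict(X)
```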

16.
In this paper, we consider a model for repeated count data with within-subject correlation and/or overdispersion. It extends both the generalized linear mixed model and the negative binomial model. This model, proposed in a likelihood context in [17, 18], is placed here in a Bayesian inferential framework. An important contribution takes the form of Bayesian model assessment based on pivotal quantities, rather than the often less adequate DIC. By means of a real biological data set, we also discuss some Bayesian model selection aspects, using a pivotal quantity proposed by Johnson [12].
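The combined conjugate/normal random-effects model cannot be reconstructed from the abstract alone. As a small hedged illustration of the overdispersion building block only, the sketch below simulates gamma-mixed Poisson counts (which are marginally negative binomial) and fits a plain negative binomial regression with statsmodels; the within-subject random effects and the Bayesian pivotal-quantity assessment of the paper are not attempted here, and all simulation settings are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.8 * x)                       # log-linear mean
# Overdispersed counts: Poisson rates drawn from a gamma with mean mu and
# shape 2, so the marginal is negative binomial with dispersion alpha = 0.5.
rate = rng.gamma(shape=2.0, scale=mu / 2.0)
y = rng.poisson(rate)

X = sm.add_constant(x)
nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(nb.summary())
```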

17.
In this article, we establish the complete moment convergence of a moving-average process generated by a class of random variables satisfying the Rosenthal-type maximal inequality and the weak mean dominating condition. On the one hand, we give a correct proof for the case p = 1 in Ko (2015); on the other hand, we also consider the case αp = 1, which was not considered in Ko (2015). The results obtained in this article generalize corresponding ones for some dependent sequences.

18.
This paper proposes an intuitive clustering algorithm capable of automatically self-organizing data groups based on the original data structure. Comparisons between the proposed algorithm and the EM [1] and spherical k-means [7] algorithms are given. The numerical results show the effectiveness of the proposed algorithm, using the correct classification rate and the adjusted Rand index as evaluation criteria [5, 6]. In 1995, Mayor and Queloz announced the detection of the first extrasolar planet (exoplanet) around a Sun-like star. Since then, the observational efforts of astronomers have led to the detection of more than 1000 exoplanets. These discoveries may provide important information for understanding the formation and evolution of planetary systems. The proposed clustering algorithm is therefore used to study the data gathered on exoplanets. Two main implications are suggested: (1) there are three major clusters, which correspond to exoplanets in the regimes of disc, ongoing tidal, and tidal interactions, respectively, and (2) stellar metallicity does not play a key role in exoplanet migration.
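The proposed self-organizing algorithm itself is not specified in the abstract; the sketch below only implements one of the benchmarks it is compared against, spherical k-means [7]: observations are normalized to the unit sphere, points are assigned to the centroid with the largest cosine similarity, and each centroid is the re-normalized mean of its members. The toy data and the choice of k are illustrative assumptions.

```python
import numpy as np

def spherical_kmeans(X, k, n_iter=100, rng=0):
    """Spherical k-means: cluster unit-normalized rows by cosine similarity."""
    rng = np.random.default_rng(rng)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)       # project onto unit sphere
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = None
    for _ in range(n_iter):
        sims = X @ centers.T                                # cosine similarities
        new_labels = sims.argmax(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break
        labels = new_labels
        for j in range(k):
            members = X[labels == j]
            if len(members):
                m = members.sum(axis=0)
                centers[j] = m / np.linalg.norm(m)          # re-normalized mean direction
    return labels, centers

# Example: two directional groups in R^3.
rng = np.random.default_rng(1)
A = rng.normal([3, 0, 0], 0.5, size=(100, 3))
B = rng.normal([0, 3, 0], 0.5, size=(100, 3))
labels, centers = spherical_kmeans(np.vstack([A, B]), k=2)
```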

19.
By applying the recursion of Huffer (1988) repeatedly, we propose an algorithm for evaluating the null joint distribution of Dixon-type test statistics for testing the discordancy of k upper outliers in exponential samples. Using the critical values of the Dixon-type test statistics determined by the proposed algorithm, together with those of the Cochran-type test statistics presented earlier by Lin and Balakrishnan (2009), we carry out an extensive Monte Carlo study of the powers and the error probabilities associated with masking and swamping for k = 2 and 3 outliers. Based on our empirical findings, we recommend Rosner's (1975) sequential test procedure based on Dixon-type test statistics for testing multiple outliers from an exponential distribution.

20.
'Middle censoring' is a very general censoring scheme in which the actual value of an observation becomes unobservable if it falls inside a random interval (L, R); it includes both left and right censoring as special cases. In this paper, we consider discrete lifetime data that follow a geometric distribution and are subject to middle censoring. Two major innovations in this paper, compared to the earlier work of Davarzani and Parsian [3], are (i) an extension and generalization to the case where covariates are present along with the data, and (ii) an alternative approach and proofs which exploit the simple relationship between the geometric and the exponential distributions, so that the theory is more in line with the work of Iyer et al. [6]. It is also demonstrated that this kind of discretization of lifetimes gives results that are close to those for the original data with exponential lifetimes. Maximum likelihood estimation of the parameters is studied for this middle-censoring scheme with covariates, and the large-sample distributions of the estimators are discussed. Simulation results indicate how well the proposed estimation methods work, and an illustrative example using time-to-pregnancy data from Baird and Wilcox [1] is included.
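As a small hedged illustration of the likelihood construction (without covariates), the sketch below maximizes a middle-censored geometric log-likelihood numerically: an uncensored lifetime t contributes the pmf p(1 - p)^(t - 1), while a censored observation contributes the probability that the lifetime falls strictly inside its interval (L, R). The interval convention (strict inequalities, lifetimes starting at 1) and the simulation settings are assumptions for illustration and may differ from the paper's.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_loglik(p, t_obs, L_cens, R_cens):
    """Negative log-likelihood for geometric (support 1, 2, ...) middle-censored data.
    Uncensored t:      P(T = t) = p * (1 - p)**(t - 1)
    Censored (L, R):   P(L < T < R) = (1 - p)**L - (1 - p)**(R - 1)"""
    q = 1.0 - p
    ll = np.sum(np.log(p) + (t_obs - 1) * np.log(q))
    ll += np.sum(np.log(q**L_cens - q**(R_cens - 1)))
    return -ll

# Simulate geometric lifetimes; censor those falling strictly inside a random interval.
rng = np.random.default_rng(0)
p_true, n = 0.25, 400
T = rng.geometric(p_true, size=n)
L = rng.integers(1, 6, size=n)                 # random left endpoints
R = L + rng.integers(2, 8, size=n)             # random interval widths >= 2
censored = (L < T) & (T < R)

res = minimize_scalar(
    neg_loglik, bounds=(1e-6, 1 - 1e-6), method="bounded",
    args=(T[~censored], L[censored], R[censored]),
)
print("MLE of p:", round(res.x, 4), " (true p =", p_true, ")")
```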
