Similar Articles

20 similar articles found.
1.
The logistic distribution has been used to model growth curves in survival analysis and biological studies. In this article, we propose a goodness-of-fit test for the logistic distribution based on the empirical likelihood ratio. The test is constructed following the methodology introduced by Vexler and Gurevich [A. Vexler and G. Gurevich, Empirical likelihood ratios applied to goodness-of-fit tests based on sample entropy, Comput. Stat. Data Anal. 54 (2010), pp. 531–545; doi:10.1016/j.csda.2009.09.025]. To compute the test statistic, the parameters of the distribution are estimated by maximum likelihood. Power comparisons of the proposed test with several known competing tests are carried out via simulation. Finally, an illustrative example is presented and analyzed.
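For readers who want to experiment, the core of such a test is straightforward to prototype: a spacings-based nonparametric likelihood is played off against the parametric logistic likelihood at the ML estimates, minimizing over the spacing window. The sketch below follows the general Vexler–Gurevich construction; the window bound n^(1−δ), the choice δ = 0.5, and the parametric-bootstrap calibration are illustrative assumptions, not the authors' exact tuning.

```python
import numpy as np
from scipy import stats

def el_logistic_gof_stat(x, delta=0.5):
    """Density-based empirical likelihood ratio statistic for
    H0: X ~ logistic, a sketch in the spirit of Vexler and Gurevich
    (2010): a spacings-based nonparametric likelihood is compared with
    the parametric likelihood at the ML estimates, minimizing over the
    spacing window m.  The bound n**(1 - delta) and delta = 0.5 are
    illustrative assumptions, not the authors' exact tuning.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    loc, scale = stats.logistic.fit(x)                  # MLEs under H0
    log_f0 = stats.logistic.logpdf(x, loc, scale).sum()
    idx = np.arange(n)
    best = np.inf
    for m in range(1, max(2, int(n ** (1.0 - delta)))):
        hi = x[np.minimum(idx + m, n - 1)]              # m-spacing upper end
        lo = x[np.maximum(idx - m, 0)]                  # m-spacing lower end
        log_ratio = np.sum(np.log(2.0 * m / (n * (hi - lo)))) - log_f0
        best = min(best, log_ratio)
    return best

# Calibrate by parametric bootstrap under the fitted null (an assumption).
rng = np.random.default_rng(0)
x = rng.logistic(2.0, 1.5, size=100)
loc_hat, scale_hat = stats.logistic.fit(x)
null = [el_logistic_gof_stat(rng.logistic(loc_hat, scale_hat, len(x)))
        for _ in range(300)]
print(el_logistic_gof_stat(x) > np.quantile(null, 0.95))  # reject H0?
```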

2.
Coppi et al. [R. Coppi, P. D'Urso, and P. Giordani, Fuzzy and possibilistic clustering for fuzzy data, Comput. Stat. Data Anal. 56 (2012), pp. 915–927; doi:10.1016/j.csda.2010.09.013] applied the idea of Yang and Wu [M.-S. Yang and K.-L. Wu, Unsupervised possibilistic clustering, Pattern Recognit. 30 (2006), pp. 5–21; doi:10.1016/j.patcog.2005.07.005] to propose a possibilistic k-means (PkM) clustering algorithm for LR-type fuzzy numbers. The memberships in the objective function of PkM no longer need to satisfy the fuzzy k-means constraint that the memberships of a data point across classes sum to one. However, the clustering performance of PkM depends on the initialization and the weighting exponent. In this paper, we propose a robust clustering method based on a self-updating procedure. The proposed algorithm not only resolves the initialization problem but also yields good clustering results. Several numerical examples demonstrate the effectiveness and accuracy of the proposed method, in particular its robustness to initial values and noise. Finally, three real fuzzy data sets are used to illustrate the superiority of the proposed algorithm.
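To make the contrast with fuzzy k-means concrete, here is a minimal possibilistic c-means loop in the Krishnapuram–Keller style, in which typicalities across clusters need not sum to one. The PkM of Coppi et al. operates on LR-type fuzzy numbers with a fuzzy-data distance; the Euclidean distance and the bandwidth rule below are simplifying assumptions.

```python
import numpy as np

def possibilistic_cmeans(X, k, m=2.0, n_iter=100, seed=0):
    """Possibilistic c-means sketch (Krishnapuram-Keller style update):
    unlike fuzzy k-means, the typicalities u[i, j] of a point across
    clusters need not sum to one -- the property the abstract highlights.
    Plain Euclidean distance and the bandwidth choice are assumptions;
    the PkM of Coppi et al. uses a fuzzy-data distance instead.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        eta = d2.mean(axis=0) + 1e-12          # per-cluster bandwidth
        u = 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return centers, u

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ([0, 0], [3, 3])])
centers, u = possibilistic_cmeans(X, k=2)
print(np.round(centers, 2))  # PCM can return coincident centers; the paper's
print(u[0].sum())            # self-updating scheme targets that sensitivity
```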

3.
In this article, a generalized Lévy model is proposed and its parameters are estimated in high-frequency data settings. The infinitesimal generator of the Lévy process is used to study the asymptotic properties of the drift and volatility estimators. The estimators are consistent and do not depend on the other parameters, which makes them preferable to those of Chen et al. (2010) [S.X. Chen, A. Delaigle, and P. Hall, Nonparametric estimation for a class of Lévy processes, Journal of Econometrics 157 (2010), pp. 257–271]. The estimators proposed here also enjoy fast convergence rates and are simple to implement.
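The flavour of such high-frequency estimators can be conveyed in a few lines: the drift from the endpoints, and the volatility from a truncated realized variance that screens out jump-like increments. The threshold c·Δ^ϖ and all constants below are generic choices from the thresholding literature, not this article's estimators.

```python
import numpy as np

def drift_vol_estimates(x, dt, c=3.0, varpi=0.49):
    """Drift and volatility from high-frequency observations (a sketch).

    mu_hat is the endpoint estimator (X_T - X_0)/T; sigma2_hat is a
    truncated realized variance that discards jump-like increments, in
    the spirit of threshold estimators for Levy-driven models.  The
    threshold c * dt**varpi and its constants are generic assumptions.
    """
    dx = np.diff(x)
    T = dt * len(dx)
    mu_hat = (x[-1] - x[0]) / T
    keep = np.abs(dx) <= c * dt ** varpi       # drop jump-like increments
    sigma2_hat = float((dx[keep] ** 2).sum() / T)
    return mu_hat, sigma2_hat

# simulate dX = 0.5 dt + 0.2 dW + compound Poisson jumps
rng = np.random.default_rng(0)
n, dt = 100_000, 1e-3
dx = 0.5 * dt + 0.2 * np.sqrt(dt) * rng.standard_normal(n)
dx += rng.binomial(1, 2.0 * dt, n) * rng.normal(0.0, 0.5, n)   # rare jumps
x = np.concatenate([[0.0], np.cumsum(dx)])
print(drift_vol_estimates(x, dt))    # roughly (0.5, 0.04)
```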

4.
This article proposes a new likelihood-based panel cointegration rank test which extends the test of Örsal and Droge (2014) [D.D.K. Örsal and B. Droge, Panel cointegration testing in the presence of a time trend, Computational Statistics and Data Analysis 76 (2014), pp. 377–390] (henceforth the panel SL test) to dependent panels. The dependence is modelled by unobserved common factors which affect the variables in each cross-section through heterogeneous loadings. The data are defactored following the panel analysis of nonstationarity in idiosyncratic and common components (PANIC) approach of Bai and Ng (2004) [J. Bai and S. Ng, A PANIC attack on unit roots and cointegration, Econometrica 72(4) (2004), pp. 1127–1177], and the cointegrating rank of the defactored data is then tested by the panel SL test. A Monte Carlo study demonstrates that the proposed testing procedure has reasonable size and power properties in finite samples.
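The PANIC defactoring step is easy to sketch: estimate common factors by principal components on the first-differenced data, subtract their contribution, and cumulate the idiosyncratic residuals back to levels before applying the panel SL rank test. The standardization and the assumed factor number r below are illustrative choices.

```python
import numpy as np

def panic_defactor(Y, r):
    """PANIC-style defactoring sketch (after Bai and Ng, 2004): estimate
    common factors by principal components on first-differenced,
    standardized data, remove them, and cumulate the idiosyncratic
    differences back to levels; the panel SL rank test would then be run
    on the output.  Y is (T, N); the factor number r is assumed known.
    """
    dY = np.diff(Y, axis=0)
    dYs = (dY - dY.mean(axis=0)) / dY.std(axis=0)
    U, s, Vt = np.linalg.svd(dYs, full_matrices=False)
    F = U[:, :r] * np.sqrt(len(dYs))          # estimated factor differences
    load = dYs.T @ F / len(dYs)               # estimated loadings
    e = dYs - F @ load.T                      # idiosyncratic differences
    return np.cumsum(e, axis=0)               # defactored "levels"

rng = np.random.default_rng(0)
T, N = 200, 20
f = np.cumsum(rng.normal(size=T))             # one I(1) common factor
Y = np.outer(f, rng.normal(1.0, 0.3, N)) + np.cumsum(rng.normal(size=(T, N)), axis=0)
print(panic_defactor(Y, r=1).shape)           # (199, 20)
```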

5.
The density power divergence (DPD) measure, defined in terms of a single tuning parameter α, has proved to be a popular tool in robust estimation [A. Basu, I.R. Harris, N.L. Hjort, and M.C. Jones, Robust and efficient estimation by minimizing a density power divergence, Biometrika 85 (1998), pp. 549–559; doi:10.1093/biomet/85.3.549]. Recently, Ghosh and Basu [A. Ghosh and A. Basu, Robust estimation for independent non-homogeneous observations using density power divergence with applications to linear regression, Electron. J. Stat. 7 (2013), pp. 2420–2456; doi:10.1214/13-EJS847] rigorously established the asymptotic properties of the minimum DPD estimators (MDPDEs) for independent non-homogeneous observations. In this paper, we present an extensive numerical study of the method's performance in linear regression, the most common non-homogeneous setup. In addition, we extend existing methods for selecting the optimal robustness tuning parameter from the case of independent and identically distributed (i.i.d.) data to non-homogeneous observations; proper selection of the tuning parameter is critical to the appropriateness of the resulting analysis. The selection of the optimal tuning parameter is explored in the linear regression problem through an extensive numerical study involving real and simulated data.
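For the normal linear regression model the DPD objective has a closed form, so a minimum DPD estimator fits in a few lines. The sketch below uses the i.i.d.-type objective of Basu et al., which coincides with the Ghosh–Basu non-homogeneous formulation when all errors share one variance (an assumption here); α = 0.5 is an arbitrary illustrative choice, and choosing it well is precisely the tuning question the paper studies.

```python
import numpy as np
from scipy.optimize import minimize

def mdpde_linreg(X, y, alpha=0.5):
    """Minimum density power divergence estimation for normal linear
    regression (sketch).  The objective is the closed-form DPD of Basu
    et al.; it matches the Ghosh-Basu non-homogeneous setup when all
    errors share one variance (an assumption).  alpha trades efficiency
    for robustness.
    """
    n, p = X.shape

    def loss(theta):
        beta, log_s = theta[:p], theta[p]
        s2 = np.exp(2.0 * log_s)
        r2 = (y - X @ beta) ** 2
        k = (2.0 * np.pi * s2) ** (-alpha / 2.0)
        # integral term minus (1 + 1/alpha) * mean of f(y_i)^alpha
        return (k / np.sqrt(1.0 + alpha)
                - (1.0 + 1.0 / alpha) * k * np.mean(np.exp(-alpha * r2 / (2.0 * s2))))

    ols = np.linalg.lstsq(X, y, rcond=None)[0]
    x0 = np.concatenate([ols, [np.log(np.std(y - X @ ols) + 1e-9)]])
    res = minimize(loss, x0, method="Nelder-Mead")
    return res.x[:p], float(np.exp(res.x[p]))

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0.0, 0.5, 200)
y[:10] += 10.0                            # gross outliers
print(mdpde_linreg(X, y, alpha=0.5))      # near (1, 2) despite the outliers
```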

6.
This article derives diagnostic procedures for symmetrical nonlinear regression models, continuing the work of Cysneiros and Vanegas (2008) [F.J.A. Cysneiros and L.H. Vanegas, Residuals and their statistical properties in symmetrical nonlinear models, Statist. Probab. Lett. 78 (2008), pp. 3269–3273] and Vanegas and Cysneiros (2010) [L.H. Vanegas and F.J.A. Cysneiros, Assessment of diagnostic procedures in symmetrical nonlinear regression models, Comput. Statist. Data Anal. 54 (2010), pp. 1002–1016], who showed that parameter estimates in nonlinear models are more robust under heavy-tailed errors than under normal errors. Here we assess whether this robustness also holds in the inference process (i.e., the partial F-test). Symmetrical nonlinear regression models admit any symmetric continuous error distribution, covering both light- and heavy-tailed cases such as the Student-t, logistic-I and -II, power exponential, generalized Student-t, generalized logistic, and contaminated normal. First, a statistical test is presented for evaluating the assumption that the error terms all have equal variance, and simulation results describing the behavior of this heteroscedasticity test in the presence of outliers are given. To assess the robustness of the inference process, we then present a simulation study of the behavior of the partial F-test in the presence of outliers. Diagnostic procedures are also derived to identify observations that are influential on the partial F-test. As illustration, a dataset from Venables and Ripley (2002) [W.N. Venables and B.D. Ripley, Modern Applied Statistics with S, 4th ed., Springer, New York] is analyzed.

7.
Adaptive clinical trial designs can often improve drug-study efficiency by utilizing data obtained during the course of the trial. We present a novel Bayesian two-stage adaptive design for Phase II clinical trials with Poisson-distributed outcomes that allows for person-observation-time adjustments and early termination due to either futility or efficacy. Our design is motivated by the adaptive trial of Sambucini [V. Sambucini, A Bayesian predictive two-stage design for Phase II clinical trials, Stat. Med. 27 (2008), pp. 1199–1224; doi:10.1002/sim.3021], which uses binomial data. Although many frequentist and Bayesian two-stage adaptive designs for count data have been proposed in the literature, most do not allow person-time adjustments after the first stage, which limits flexibility in the study design. Our proposed design provides this flexibility by basing the second-stage person-time on the observed first-stage count data. We demonstrate the implementation of our Bayesian predictive adaptive two-stage design using a hypothetical Phase II trial of Immune Globulin (Intravenous).
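The predictive machinery behind such designs is conjugate and therefore compact: a Gamma prior on the event rate, a Gamma posterior after stage 1, posterior-predictive stage-2 counts, and a predictive probability of declaring efficacy at the end. The sketch below is a generic Gamma–Poisson version; the hyperprior, thresholds, and person-times are illustrative assumptions rather than the design's calibrated values.

```python
import numpy as np
from scipy import stats

def predictive_prob_success(y1, t1, t2, a=0.5, b=0.001, lam0=1.0,
                            post_thresh=0.95, n_sim=20_000, seed=0):
    """Bayesian predictive probability for a two-stage Poisson design
    (a generic Gamma-Poisson sketch of the Sambucini-style machinery the
    article adapts).  Rate lambda ~ Gamma(a, b) (shape, rate); stage 1
    gives y1 events in t1 person-time, so the posterior is
    Gamma(a + y1, b + t1); stage-2 counts over t2 person-time are drawn
    from the posterior predictive, and we report the chance the final
    posterior declares efficacy, P(lambda < lam0 | data) > post_thresh.
    All hyperparameters and thresholds here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    a1, b1 = a + y1, b + t1
    lam = rng.gamma(a1, 1.0 / b1, n_sim)        # posterior draws (stage 1)
    y2 = rng.poisson(lam * t2)                  # posterior-predictive counts
    post_eff = stats.gamma.cdf(lam0, a1 + y2, scale=1.0 / (b1 + t2))
    return float(np.mean(post_eff > post_thresh))

# e.g. 8 events in 10 person-years observed, 15 person-years planned next;
# a design would compare this against futility/efficacy cutoffs, and the
# abstract's point is that t2 itself can be chosen from the stage-1 data.
print(predictive_prob_success(y1=8, t1=10.0, t2=15.0))
```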

8.
In this paper, we consider a model for repeated count data with within-subject correlation and/or overdispersion, extending both the generalized linear mixed model and the negative-binomial model. This model, proposed in a likelihood context [G. Molenberghs, G. Verbeke, and C.G.B. Demétrio, An extended random-effects approach to modeling repeated, overdispersed count data, Lifetime Data Anal. 13 (2007), pp. 457–511; G. Molenberghs, G. Verbeke, C.G.B. Demétrio, and A. Vieira, A family of generalized linear models for repeated measures with normal and conjugate random effects, Statist. Sci. 25 (2010), pp. 325–347; doi:10.1214/10-STS328], is placed in a Bayesian inferential framework. An important contribution is Bayesian model assessment based on pivotal quantities, rather than the often less adequate DIC. Using a real biological data set, we also discuss some aspects of Bayesian model selection with a pivotal quantity proposed by Johnson [V.E. Johnson, Bayesian model assessment using pivotal quantities, Bayesian Anal. 2 (2007), pp. 719–734; doi:10.1214/07-BA229].

9.
'Middle censoring' is a very general censoring scheme in which the actual value of an observation becomes unobservable if it falls inside a random interval (L, R); it includes both left and right censoring as special cases. In this paper, we consider discrete lifetime data following a geometric distribution subject to middle censoring. Two major innovations, compared with the earlier work of Davarzani and Parsian [N. Davarzani and A. Parsian, Statistical inference for discrete middle-censored data, J. Statist. Plann. Inference 141 (2011), pp. 1455–1462; doi:10.1016/j.jspi.2010.10.012], are (i) an extension to the case where covariates are present along with the data, and (ii) an alternative approach and proofs that exploit the simple relationship between the geometric and exponential distributions, bringing the theory in line with the work of Iyer et al. [S.K. Iyer, S.R. Jammalamadaka, and D. Kundu, Analysis of middle censored data with exponential lifetime distributions, J. Statist. Plann. Inference 138 (2008), pp. 3550–3560; doi:10.1016/j.jspi.2007.03.062]. It is also demonstrated that this discretization of lifetimes gives results close to those for the original exponential lifetimes. Maximum likelihood estimation of the parameters is studied for this middle-censoring scheme with covariates, and the large-sample distributions of the estimators are discussed. Simulation results indicate how well the proposed estimation methods work, and an illustrative example using time-to-pregnancy data from Baird and Wilcox [D.D. Baird and A.J. Wilcox, Cigarette smoking associated with delayed conception, J. Am. Med. Assoc. 253 (1985), pp. 2979–2983; doi:10.1001/jama.1985.03350440057031] is included.
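The geometric middle-censored likelihood is simple enough to write down directly: an uncensored lifetime contributes its pmf, while an observation censored into (L, R) contributes P(L < X < R) = (1−p)^L − (1−p)^(R−1). A no-covariate MLE sketch follows; the paper's covariate extension would replace p by a link function p(x; β), which is omitted here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def geometric_middle_censored_mle(exact, intervals):
    """MLE of p for geometric lifetimes under middle censoring (sketch).

    `exact` holds fully observed lifetimes X in {1, 2, ...}; `intervals`
    holds (L, R) pairs for middle-censored observations, each contributing
    P(L < X < R) = (1-p)**L - (1-p)**(R-1) to the likelihood.
    """
    exact = np.asarray(exact, dtype=float)
    L = np.array([l for l, r in intervals], dtype=float)
    R = np.array([r for l, r in intervals], dtype=float)

    def negloglik(p):
        q = 1.0 - p
        ll = np.sum(np.log(p) + (exact - 1.0) * np.log(q))   # observed pmf
        ll += np.sum(np.log(q ** L - q ** (R - 1.0)))        # censored mass
        return -ll

    return minimize_scalar(negloglik, bounds=(1e-6, 1 - 1e-6),
                           method="bounded").x

rng = np.random.default_rng(0)
x = rng.geometric(0.2, size=500)                   # true p = 0.2
Ls = rng.integers(1, 10, size=500)
Rs = Ls + rng.integers(2, 8, size=500)             # interval width >= 2
mid = (x > Ls) & (x < Rs)                          # middle-censored cases
print(geometric_middle_censored_mle(x[~mid], list(zip(Ls[mid], Rs[mid]))))
```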

10.
This article considers estimation of panel vector autoregressive models of order 1 (PVAR(1)), focusing on fixed-T consistent estimation methods in first differences (FD) with additional strictly exogenous regressors. Additional results are provided for the panel FD ordinary least squares (OLS) estimator and the FDLS-type estimator of Han and Phillips (2010) [C. Han and P.C.B. Phillips, GMM estimation for dynamic panels with fixed effects and strong instruments at unity, Econometric Theory 26 (2010), pp. 119–151]. Furthermore, we simplify the analysis of Binder et al. (2005) [M. Binder, C. Hsiao, and M.H. Pesaran, Estimation and inference in short panel vector autoregressions with unit root and cointegration, Econometric Theory 21 (2005), pp. 795–837] by providing additional analytical results, and we extend the original model to account for possible cross-sectional heteroscedasticity and the presence of strictly exogenous regressors. We show that in the three-wave panel the log-likelihood function of the unrestricted transformed maximum likelihood (TML) estimator might violate the global identification assumption. The finite-sample performance of the analyzed methods is investigated in a Monte Carlo study.

11.
This article considers the construction of confidence intervals for the date of a structural break in linear regression models. Using extensive simulations, we compare the performance of various procedures in terms of exact coverage rates and lengths of the confidence intervals. These include the procedure of Bai (1997) [J. Bai, Estimation of a change point in multiple regressions, Review of Economics and Statistics 79 (1997), pp. 551–563], based on the asymptotic distribution under a shrinking-shift framework; that of Elliott and Müller (2007) [G. Elliott and U. Müller, Confidence sets for the date of a single break in linear time series regressions, Journal of Econometrics 141 (2007), pp. 1196–1218], based on inverting a test locally invariant to the magnitude of the break; that of Eo and Morley (2015) [Y. Eo and J. Morley, Likelihood-ratio-based confidence sets for the timing of structural breaks, Quantitative Economics 6 (2015), pp. 463–497], based on inverting a likelihood ratio test; and various bootstrap procedures. In terms of achieving an exact coverage rate closest to the nominal level, Elliott and Müller's (2007) approach is by far the best. However, this comes at a very high cost in the length of the confidence intervals: when the errors are serially correlated and one deals with a change in the intercept or in the coefficient of a stationary regressor with a high signal-to-noise ratio, the length of the confidence interval increases and approaches the whole sample as the magnitude of the change increases. The same problem occurs in models with a lagged dependent variable, a common case in practice. This drawback is not present for the other methods, which have similar properties. Theoretical results are provided to explain the drawbacks of Elliott and Müller's (2007) method.
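The point estimator underlying most of the compared procedures is the least-squares break date of Bai (1997): split the sample at each admissible candidate date, fit the regression on each side, and minimize the total sum of squared residuals. A minimal sketch follows (point estimate only; the confidence-set constructions compared in the article are not reproduced).

```python
import numpy as np

def estimate_break_date(y, X, trim=0.15):
    """Least-squares break-date estimator in the spirit of Bai (1997):
    fit the regression separately before and after each admissible
    candidate date and return the date minimizing the total SSR.
    """
    n = len(y)
    lo, hi = int(trim * n), int((1 - trim) * n)   # trimmed candidate range

    def ssr(Z, v):
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
        return float(((v - Z @ beta) ** 2).sum())

    total = [ssr(X[:k], y[:k]) + ssr(X[k:], y[k:]) for k in range(lo, hi)]
    return lo + int(np.argmin(total))

rng = np.random.default_rng(0)
n, k0 = 200, 120                                   # true break at t = 120
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = np.where(np.arange(n) < k0, X @ [0.0, 1.0], X @ [1.0, 1.0])  # mean shift
y = y + rng.normal(0.0, 0.5, n)
print(estimate_break_date(y, X))                   # close to 120
```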

12.
Time-trend-resistant fractional factorial experiments have often been based on regular fractionated designs, for which several algorithms exist for sequencing runs in the minimum number of factor-level changes (i.e., minimum cost) such that main effects and/or two-factor interactions are orthogonal to, and free from aliasing with, a time trend that may be present in the sequentially generated responses. By contrast, only one algorithm exists for sequencing runs of the more economical non-regular fractional factorial experiments, namely that of Angelopoulos et al. [P. Angelopoulos, H. Evangelaras, and C. Koukouvinos, Run orders for efficient two-level experimental plans with minimum factor level changes robust to time trends, J. Statist. Plann. Inference 139 (2009), pp. 3718–3724; doi:10.1016/j.jspi.2009.05.002]. This research studies sequential factorial experimentation under non-regular fractionated designs and constructs a catalog of 8 minimum-cost linear-trend-free 12-run designs (of resolution III) in 4 up to 11 two-level factors, by applying the interactions-main-effects assignment technique of Cheng and Jacroux [C.S. Cheng and M. Jacroux, The construction of trend-free run orders of two-level factorial designs, J. Amer. Statist. Assoc. 83 (1988), pp. 1152–1158; doi:10.1080/01621459.1988.10478713] to the standard 12-run Plackett–Burman design, so that factor-level changes between runs are minimal and main effects are orthogonal to the linear time trend. These eight 12-run designs are non-orthogonal but more economical than the linear-trend-free designs of Angelopoulos et al., accommodating a larger number of two-level factors in a smaller number of experimental runs. They are also more economical than many regular trend-free designs. The following are provided for each proposed systematic design:
  • (1) The run order in the minimum number of factor-level changes.

  • (2) The total number of factor-level changes between the 12 runs (i.e. the cost).

  • (3) The closed-form least-squares contrast estimates for all main effects, together with their closed-form variance–covariance structure.

In addition, for each of these 8 designs, combined designs generated by either complete or partial foldover allow for the estimation of two-factor interactions involving one of the factors (e.g., the most influential one). A small diagnostic sketch for checking such run orders follows.
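The two bookkeeping quantities in the catalog, cost and linear-trend-freeness, are easy to verify for any candidate run order. The sketch below builds a standard 12-run Plackett–Burman design and evaluates both; the cyclic generating row is the textbook variant (an assumption about which form the catalog starts from), and the standard order is, as expected, neither minimum-cost nor trend-free until its rows are re-sequenced.

```python
import numpy as np

def pb12():
    """Standard 12-run Plackett-Burman design: cyclic shifts of the usual
    generating row plus a closing row of -1's (11 two-level factors)."""
    g = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
    return np.vstack([np.roll(g, i) for i in range(11)] + [-np.ones(11, int)])

def cost(design):
    """Total number of factor-level changes between consecutive runs."""
    return int((np.abs(np.diff(design, axis=0)) // 2).sum())

def trend_alignment(design):
    """Inner product of each factor column with the centered linear trend;
    a zero entry means that main effect is linear-trend-free."""
    t = np.arange(1, len(design) + 1) - (len(design) + 1) / 2
    return design.T @ t

D = pb12()
print(cost(D))             # cost of the standard (unsequenced) run order
print(trend_alignment(D))  # generally nonzero: not yet trend-free
# The catalog's run orders are row permutations of D chosen so that
# trend_alignment() vanishes for all main effects at minimum cost().
```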

13.
This paper discusses the estimation of average treatment effects in observational causal inference. Employing a working propensity score and two working regression models for the treatment and control groups, Robins et al. (1994, 1995) [J.M. Robins, A. Rotnitzky, and L.P. Zhao, Estimation of regression coefficients when some regressors are not always observed, Journal of the American Statistical Association 89 (1994), pp. 846–866; J.M. Robins, A. Rotnitzky, and L.P. Zhao, Analysis of semiparametric regression models for repeated outcomes in the presence of missing data, Journal of the American Statistical Association 90 (1995), pp. 106–121] introduced the augmented inverse probability weighting (AIPW) method for estimating average treatment effects, which extends the inverse probability weighting (IPW) method of Horvitz and Thompson (1952) [D.G. Horvitz and D.J. Thompson, A generalization of sampling without replacement from a finite universe, Journal of the American Statistical Association 47 (1952), pp. 663–685]; the AIPW estimators are locally efficient and doubly robust. In this paper, we study a hybrid of the empirical likelihood method and the method of moments employing three estimating functions, which generates estimators of average treatment effects that are locally efficient and doubly robust. The proposed estimators are efficient for the given choice of three estimating functions when the working propensity score is correctly specified, and thus are more efficient than the AIPW estimators. In addition, we consider a regression method for estimating average treatment effects when the working regression models for both the treatment and control groups are correctly specified; the asymptotic variance of the resulting estimator is no greater than the semiparametric variance bound characterized by the theory of Robins et al. (1994, 1995). Finally, we present a simulation study comparing the finite-sample performance of the various methods with respect to bias, efficiency, and robustness to model misspecification.
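The AIPW construction the paper builds on is worth seeing in code: regression predictions are augmented by inverse-probability-weighted residuals, so the estimator stays consistent if either working model is correct. This is a plain plug-in sketch with simple working models; the paper's empirical-likelihood/method-of-moments hybrid, which adds the third estimating function, is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_ate(X, t, y):
    """Augmented IPW (doubly robust) estimate of the average treatment
    effect -- the Robins-Rotnitzky-Zhao construction.  Working models are
    deliberately simple here (logistic propensity, linear outcome
    regressions).
    """
    e = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    m1 = LinearRegression().fit(X[t == 1], y[t == 1]).predict(X)
    m0 = LinearRegression().fit(X[t == 0], y[t == 0]).predict(X)
    mu1 = m1 + t * (y - m1) / e                    # augmented treated mean
    mu0 = m0 + (1 - t) * (y - m0) / (1 - e)        # augmented control mean
    return float(np.mean(mu1 - mu0))

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))
ps = 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] - 0.5 * X[:, 1])))
t = rng.binomial(1, ps)
y = 2.0 * t + X @ np.array([1.0, -1.0]) + rng.normal(size=n)  # true ATE = 2
print(aipw_ate(X, t, y))                                      # near 2
```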

14.
This article is concerned with sphericity testing for the two-way error components panel data model. It is found that the John statistic and the bias-corrected LM statistic recently developed by Baltagi et al. (2011, 2012) [B.H. Baltagi, Q. Feng, and C. Kao, Testing for sphericity in a fixed effects panel data model, Econometrics Journal 14 (2011), pp. 25–47; B.H. Baltagi, Q. Feng, and C. Kao, A Lagrange multiplier test for cross-sectional dependence in a fixed effects panel data model, Journal of Econometrics 170 (2012), pp. 164–177], which are based on the within residuals, are not helpful under the present circumstances, even though they are in the one-way fixed effects model. However, we prove that when the within residuals are properly transformed, the resulting residuals can serve to construct useful statistics similar to those of Baltagi et al. (2011, 2012). Simulation results show that the newly proposed statistics perform well under the null hypothesis and several typical alternatives.

15.
Developing statistical methods to model hydrologic events is of lasting interest to both statisticians and hydrologists because of its importance in hydraulic structure design and water resource planning. To this end, a flexible 3-parameter generalization of the exponential distribution is introduced, based on the binomial exponential 2 (BE2) distribution [H.S. Bakouch, M. Aghababaei Jazi, S. Nadarajah, A. Dolati, and R. Roozegar, A lifetime model with increasing failure rate, Appl. Math. Model. 38 (2014), pp. 5392–5406; doi:10.1016/j.apm.2014.04.028]. The proposed distribution contains the exponential, gamma, and BE2 distributions as submodels, and it exhibits decreasing, increasing, and bathtub-shaped hazard rates, making it quite flexible for analyzing non-negative real-life data. Some statistical properties, parameter estimation, and the information matrix of the distribution are investigated. The proposed distribution, the Gumbel, the generalized logistic, and other distributions are used to fit two hydrologic data sets. The proposed distribution is shown to be more appropriate for the data than the compared distributions according to the selection criteria: average scaled absolute error, Akaike information criterion, Bayesian information criterion, and the Kolmogorov–Smirnov statistic. Some hydrologic parameters of the data are then obtained, such as the return level, conditional mean, mean deviation about the return level, and the rth moments of order statistics.
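The model-comparison step generalizes beyond the proposed distribution: fit each candidate by maximum likelihood and compare AIC, BIC, and the Kolmogorov–Smirnov statistic. Since the BE2-based model is not in standard libraries, the sketch below runs the same workflow with stock scipy distributions standing in; the simulated data are likewise only a stand-in for the article's hydrologic series.

```python
import numpy as np
from scipy import stats

def compare_fits(data, candidates):
    """Model-selection sketch: fit each candidate distribution by maximum
    likelihood and report AIC, BIC and the Kolmogorov-Smirnov statistic."""
    n, out = len(data), {}
    for name, dist in candidates.items():
        params = dist.fit(data)                       # ML estimates
        ll = dist.logpdf(data, *params).sum()
        k = len(params)
        ks = stats.kstest(data, dist.cdf, args=params).statistic
        out[name] = {"aic": 2 * k - 2 * ll, "bic": k * np.log(n) - 2 * ll,
                     "ks": ks}
    return out

data = np.random.default_rng(0).gamma(2.0, 3.0, size=300)  # stand-in data
cands = {"gumbel": stats.gumbel_r, "gamma": stats.gamma,
         "exponential": stats.expon, "gen. logistic": stats.genlogistic}
for name, crit in compare_fits(data, cands).items():
    print(name, {c: round(float(v), 2) for c, v in crit.items()})
```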

16.
The analysis of categorical response data through the multinomial model is very common in statistical, econometric, and biometric applications. A main problem, however, is precise estimation of the model parameters when the number of observations is very low. We propose a new Bayesian estimation approach in which the prior distribution is constructed through a transformation of the multivariate beta of Olkin and Liu (2003) [I. Olkin and R. Liu, A bivariate beta distribution, Statist. Probab. Lett. 62 (2003), pp. 407–412]. Moreover, applying the zero-variance principle allows us to estimate moments in Monte Carlo simulations with a dramatic reduction in their variances. We show the advantages of our approach through applications to some toy examples, where we obtain efficient parameter estimates.

17.
This paper presents a new variable-weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. [R. Varshavsky, A. Gottlieb, M. Linial, and D. Horn, Novel unsupervised feature filtering of biological data, Bioinformatics 22 (2006), pp. 507–513]. Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when the data contain noise variables. Compared with k-means, FCM, and KCL with existing variable-weight methods, the proposed WFKCL algorithm with the proposed SVD weighting provides better clustering performance in terms of the error rate criterion. Furthermore, the complexity of the proposed SVD approach is lower than that of Pal et al. [S.K. Pal, R.K. De, and J. Basak, Unsupervised feature evaluation: a neuro-fuzzy approach, IEEE Trans. Neural Netw. 11 (2000), pp. 366–376], Wang et al. [X.Z. Wang, Y.D. Wang, and L.J. Wang, Improving fuzzy c-means clustering based on feature-weight learning, Pattern Recognit. Lett. 25 (2004), pp. 1123–1132], and Hung et al. [W.-L. Hung, M.-S. Yang, and D.-H. Chen, Bootstrapping approach to feature-weight selection in fuzzy c-means algorithms with an application in color image segmentation, Pattern Recognit. Lett. 29 (2008), pp. 1317–1325].
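Varshavsky et al.'s SVD-entropy idea scores each variable by how much its removal changes the entropy of the singular-value spectrum. The sketch below computes those leave-one-out contributions; how WFKCL maps contributions to cluster weights is the paper's own contribution, so the shift-and-normalize step here is only an illustrative assumption.

```python
import numpy as np

def svd_entropy(A):
    """Normalized entropy of the squared singular-value spectrum
    (Varshavsky et al., 2006)."""
    s2 = np.linalg.svd(A, compute_uv=False) ** 2
    rho = s2 / s2.sum()
    rho = rho[rho > 1e-15]
    return float(-(rho * np.log(rho)).sum() / np.log(len(s2)))

def svd_feature_weights(X):
    """Leave-one-out contribution of each variable to the SVD entropy,
    shifted and normalized into weights.  The normalization is only an
    illustrative assumption; Varshavsky et al. use the raw contributions
    for feature ranking."""
    e_full = svd_entropy(X)
    ce = np.array([e_full - svd_entropy(np.delete(X, j, axis=1))
                   for j in range(X.shape[1])])
    w = ce - ce.min()
    return w / w.sum()

rng = np.random.default_rng(0)
clusters = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in ([0, 0], [3, 3])])
X = np.hstack([clusters, rng.normal(size=(100, 3))])  # 2 informative + 3 noise
print(np.round(svd_feature_weights(X), 3))
```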

18.
Breitung and Candelon (2006) [J. Breitung and B. Candelon, Testing for short- and long-run causality: A frequency-domain approach, Journal of Econometrics 132 (2006), pp. 363–378] proposed a simple statistical procedure for testing the noncausality hypothesis at a given frequency. In their paper, however, they reported theoretical results indicating that the test suffers from quite low power when the noncausality hypothesis is tested at a frequency close to 0 or π. This paper examines whether these results imply that their procedure is useless at such frequencies.
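The mechanics of frequency-domain noncausality testing, and the boundary issue the paper studies, can both be seen in a small sketch: at frequency ω the null imposes two linear restrictions on the y-lag coefficients, but the sine restriction vanishes identically as ω → 0 or π, mirroring the low-power region the abstract describes. Below is a Wald/F-type version, a simplification of Breitung and Candelon's reparametrized regression.

```python
import numpy as np

def bc_frequency_causality(x, y, p, omega):
    """Wald/F-type check that y does not Granger-cause x at frequency
    omega (a sketch of the Breitung-Candelon idea): in the regression of
    x_t on p lags of x and p lags of y, noncausality at omega imposes
    sum_k b_k cos(k*omega) = 0 and sum_k b_k sin(k*omega) = 0 on the
    y-lag coefficients b_k.  At omega = 0 or pi the sine row vanishes,
    one restriction degenerates, and the statistic below breaks down --
    the boundary behaviour the abstract is concerned with.
    """
    T = len(x)
    idx = np.arange(p, T)
    Z = np.column_stack([x[idx - k] for k in range(1, p + 1)]
                        + [y[idx - k] for k in range(1, p + 1)]
                        + [np.ones(len(idx))])
    v = x[idx]
    beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
    s2 = np.sum((v - Z @ beta) ** 2) / (len(v) - Z.shape[1])
    k = np.arange(1, p + 1)
    R = np.zeros((2, Z.shape[1]))
    R[0, p:2 * p] = np.cos(k * omega)       # restrictions on y-lag block
    R[1, p:2 * p] = np.sin(k * omega)
    V = s2 * np.linalg.inv(Z.T @ Z)
    r = R @ beta
    return float(r @ np.linalg.solve(R @ V @ R.T, r) / 2)  # ~ F(2, dof) under H0

rng = np.random.default_rng(0)
T, y = 500, rng.standard_normal(500)
x = np.zeros(T)
for t in range(2, T):   # y causes x at frequency pi/2 by construction
    x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] - 0.4 * y[t - 2] + rng.standard_normal()
print(bc_frequency_causality(x, y, p=4, omega=np.pi / 2))   # large F
```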

19.
The concept of generalized order statistics was introduced by Kamps (1995) [U. Kamps, A Concept of Generalized Order Statistics, B.G. Teubner, Stuttgart] to unify several concepts used in statistics, such as order statistics, record values, and sequential order statistics. Estimates of the parameters of the Burr type XII distribution are obtained based on generalized order statistics, using both maximum likelihood and Bayes methods of estimation. The Bayes estimates are derived using the approximation of Lindley (1980) [D.V. Lindley, Approximate Bayesian methods, Trabajos de Estadistica 31 (1980), pp. 223–237]. Estimation based on upper records from the Burr model is obtained and compared via a Monte Carlo simulation study. Our results specialize to those of AL-Hussaini and Jaheen (1992) [E.K. AL-Hussaini and Z.F. Jaheen, Bayesian estimation of the parameters, reliability and failure rate functions of the Burr type XII failure model, J. Statist. Comput. Simul. 41 (1992), pp. 31–40], which are based on ordinary order statistics.

20.
Analysis of covariance (ANCOVA) is the standard procedure for comparing several treatments when the response variable depends on one or more covariates. We consider the problem of testing the equality of treatment effects when the variances are not assumed equal. It is well known that the classical F test is not robust to violations of the equal-variance assumption and may lead to misleading conclusions when the variances differ. Ananda (1998) [M.M.A. Ananda, Bayesian and non-Bayesian solutions to analysis of covariance models under heteroscedasticity, J. Econometrics 86 (1998), pp. 177–192] developed a generalized F test for testing the equality of treatment effects. However, simulation studies show that the actual size of this test can be much higher than the nominal level when the sample sizes are small, particularly when the number of treatments is large. In this article, we develop a test using the parametric bootstrap (PB) approach of Krishnamoorthy et al. (2007) [K. Krishnamoorthy, F. Lu, and T. Mathew, A parametric bootstrap approach for ANOVA with unequal variances: Fixed and random models, Comput. Statist. Data Anal. 51 (2007), pp. 5731–5742]. Our simulations show that the actual size of the proposed test is close to the nominal level, irrespective of the number of treatments and sample sizes, and that the proposed PB test is more robust to departures from normality than the generalized F test. The proposed PB test therefore provides a satisfactory alternative to the generalized F test.
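The parametric bootstrap recipe is generic: compute a variance-weighted statistic from the heteroscedastic fit, then recompute it on samples simulated from the model fitted under the null. The sketch below adapts it to a one-way ANCOVA with a common slope; the weighting and the statistic are plausible choices in the spirit of Krishnamoorthy et al., not necessarily the article's exact test.

```python
import numpy as np

def pb_ancova_test(y, x, g, n_boot=2000, seed=0):
    """Parametric bootstrap test of equal treatment effects in a one-way
    ANCOVA with group-specific error variances (sketch).  Model for
    group i: y = a_i + b*x + e, e ~ N(0, s_i^2); H0: a_1 = ... = a_k.
    """
    rng = np.random.default_rng(seed)
    masks = [g == gi for gi in np.unique(g)]

    def stat(yv):
        # pooled within-group slope, covariate-adjusted means, variances
        num = sum(((x[m] - x[m].mean()) * (yv[m] - yv[m].mean())).sum()
                  for m in masks)
        den = sum(((x[m] - x[m].mean()) ** 2).sum() for m in masks)
        b = num / den
        adj = np.array([yv[m].mean() - b * (x[m].mean() - x.mean())
                        for m in masks])
        s2 = np.array([np.var(yv[m] - b * x[m], ddof=2) for m in masks])
        w = np.array([m.sum() for m in masks]) / s2
        return float((w * (adj - (w * adj).sum() / w.sum()) ** 2).sum())

    t_obs = stat(y)
    # null fit: common intercept and slope, group-specific variances
    A = np.column_stack([np.ones_like(x), x])
    a0, b0 = np.linalg.lstsq(A, y, rcond=None)[0]
    sig = [np.std(y[m] - a0 - b0 * x[m], ddof=1) for m in masks]
    hits = 0
    for _ in range(n_boot):
        eps = np.empty_like(y)
        for s, m in zip(sig, masks):
            eps[m] = rng.normal(0.0, s, m.sum())
        hits += stat(a0 + b0 * x + eps) >= t_obs
    return hits / n_boot

rng = np.random.default_rng(1)
g = np.repeat([0, 1, 2], 30)
x = rng.normal(size=90)
y = 0.5 + x + rng.normal(0.0, np.array([0.5, 1.0, 2.0])[g])   # H0 true
print(pb_ancova_test(y, x, g))        # p-value; should not be small
```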
