Similar Articles (20 results)
1.
This paper presents a new variable-weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. (2006). Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the WFKCL algorithm is to reduce the clustering error rate when the data contain noise variables. Compared with k-means, FCM, and KCL combined with existing variable-weight methods, the WFKCL algorithm with the proposed SVD weighting provides better clustering performance under the error-rate criterion. Furthermore, the complexity of the SVD approach is lower than that of the methods of Pal et al. (2000), Wang et al. (2004), and Hung et al. (2008).
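As a rough illustration of how SVD-derived variable weights can be obtained, consider the sketch below. The weighting rule is a simplified stand-in in the spirit of Varshavsky et al. (2006), not the exact formula proposed in the paper; all function and variable names are ours.

```python
import numpy as np

def svd_feature_weights(X, k=2):
    """Weight each feature by its contribution to the top-k singular
    subspace of the centered data matrix (illustrative rule only)."""
    Xc = X - X.mean(axis=0)                       # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    # contribution of feature j: variance-weighted squared loadings
    contrib = (s[:k, None] ** 2 * Vt[:k, :] ** 2).sum(axis=0)
    return contrib / contrib.sum()                # normalize to sum to 1

rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(size=200) * 3,    # high-variance feature
                     rng.normal(size=200) * 3,
                     rng.normal(size=200)])       # low-variance "noise" feature
print(svd_feature_weights(X))  # the noise feature should get a small weight
```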

2.
This article suggests random- and fixed-effects spatial two-stage least squares estimators for the generalized mixed regressive spatial autoregressive panel data model. This extends the generalized spatial panel model of Baltagi et al. (2013) by including a spatial lag term. The estimation method uses the generalized moments approach suggested by Kapoor et al. (2007) for a spatial autoregressive panel data model. We derive the asymptotic distributions of these estimators and suggest a Hausman test à la Mutl and Pfaffermayr (2011) based on the difference between the estimators. Monte Carlo experiments investigate the performance of the estimators as well as the corresponding Hausman test.
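For readers unfamiliar with the Hausman principle, here is a minimal sketch of the generic test based on the difference between two estimators; the spatial-panel version in the paper differs in how the estimators and their covariance matrices are constructed (the helper name is ours).

```python
import numpy as np
from scipy import stats

def hausman(b_fe, b_re, V_fe, V_re):
    """Generic Hausman statistic comparing a consistent (fixed-effects)
    estimator with an estimator that is efficient under H0 (random effects):
    H = d' (V_fe - V_re)^{-1} d with d = b_fe - b_re. Asymptotically
    chi^2 with len(d) degrees of freedom under H0; assumes V_fe - V_re
    is nonsingular."""
    d = np.asarray(b_fe) - np.asarray(b_re)
    Vd = np.asarray(V_fe) - np.asarray(V_re)
    H = float(d @ np.linalg.solve(Vd, d))
    return H, stats.chi2.sf(H, df=d.size)
```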

3.
Adaptive designs find an important application in the estimation of unknown percentiles of an underlying dose-response curve. A nonparametric adaptive design was suggested by Mugno et al. (2004) to estimate multiple percentiles of an unknown dose-response curve simultaneously via generalized Pólya urns. In this article, we examine the properties of that design when delays in observing responses are encountered. Using simulations, we evaluate a modification of the design under varying group sizes. Our results demonstrate unbiased estimation with minimal loss of efficiency compared to the original compound urn design.
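To convey the flavor of urn-based adaptive allocation, the toy scheme below draws doses with probability proportional to ball counts and reinforces the urn after each response. The reinforcement rule and the dose-toxicity curve are invented for illustration; the compound urn of Mugno et al. (2004) uses a different, percentile-targeting update.

```python
import numpy as np

def polya_urn_allocation(n_subjects, n_doses, update=1, rng=None):
    """Toy generalized Polya urn: sample a dose with probability
    proportional to its ball count, observe a binary response, and add
    balls to a neighbouring dose (illustrative update rule only)."""
    rng = rng or np.random.default_rng()
    urn = np.ones(n_doses)                    # start with one ball per dose
    p_true = np.linspace(0.1, 0.9, n_doses)   # assumed dose-toxicity curve
    doses = []
    for _ in range(n_subjects):
        k = rng.choice(n_doses, p=urn / urn.sum())
        doses.append(k)
        toxic = rng.random() < p_true[k]
        # reinforce a lower dose after toxicity, a higher dose otherwise
        target = max(k - 1, 0) if toxic else min(k + 1, n_doses - 1)
        urn[target] += update
    return np.bincount(doses, minlength=n_doses)

print(polya_urn_allocation(200, 5, rng=np.random.default_rng(1)))
```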

4.
Competing models arise naturally in many research fields, such as survival analysis and economics, when the same phenomenon of interest is explained by different researchers using different theories or drawing on different experience. The model selection problem is therefore of great importance for the subsequent inference: inference under a misspecified or inappropriate model is risky. Existing model selection tests such as Vuong's test (Vuong, 1989) and Shi's non-degenerate test (Shi, 2015) suffer from difficulties with variance estimation and from departures of the likelihood ratios from normality. To circumvent these problems, we propose empirical likelihood ratio (ELR) tests for model selection. Following Shi (2015), a bias correction method is proposed for the ELR tests to enhance their performance. A simulation study and a real-data analysis illustrate the performance of the proposed ELR tests.
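The paper's ELR tests compare competing models, but the empirical-likelihood machinery at their core is the same as in Owen's one-sample problem. A minimal sketch for a single mean constraint follows (function names are ours; the model-selection version replaces the mean constraint with moment conditions from the two candidate models).

```python
import numpy as np
from scipy.optimize import brentq

def el_logratio_mean(x, mu):
    """Owen's empirical log-likelihood ratio statistic for H0: E[X] = mu.
    Solves sum(z_i / (1 + lam * z_i)) = 0 for the Lagrange multiplier lam,
    where z_i = x_i - mu, then returns -2 log ELR = 2 * sum(log(1 + lam*z_i)),
    asymptotically chi^2(1) under H0. Requires mu inside the range of x."""
    z = np.asarray(x, dtype=float) - mu
    # lam must keep all implied weights positive: 1 + lam * z_i > 0
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, size=100)
print(el_logratio_mean(x, mu=0.0))   # compare with the chi^2(1) cutoff 3.84
```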

5.
This article proposes wild and independent and identically distributed (i.i.d.) parametric bootstrap implementations of the time-varying cointegration test of Bierens and Martins (2010). The bootstrap statistics and the original likelihood ratio test share the same first-order asymptotic null distribution. Monte Carlo results suggest that the bootstrap approximation to the finite-sample distribution is very accurate, particularly in the wild bootstrap case. The tests are applied to the purchasing power parity hypothesis for twelve Organisation for Economic Co-operation and Development (OECD) countries, and we find evidence of a constant long-term equilibrium only for the U.S.–U.K. relationship.
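The wild-bootstrap mechanics are easiest to see on the simplest possible statistic; the paper applies the same idea to the likelihood ratio test in a time-varying cointegration model. A sketch under that simplification (names ours):

```python
import numpy as np

def wild_bootstrap_mean_t(y, B=999, rng=None):
    """Wild bootstrap of a t-statistic for H0: E[y] = 0. Residuals are
    multiplied by Rademacher draws, which preserves each observation's
    variance in the resamples (the point of the wild scheme)."""
    rng = rng or np.random.default_rng()
    n = len(y)
    t_obs = y.mean() / (y.std(ddof=1) / np.sqrt(n))
    e = y - y.mean()                          # residuals under the fitted model
    t_star = np.empty(B)
    for b in range(B):
        w = rng.choice([-1.0, 1.0], size=n)   # Rademacher multipliers
        ystar = e * w                         # resample imposing H0
        t_star[b] = ystar.mean() / (ystar.std(ddof=1) / np.sqrt(n))
    return np.mean(np.abs(t_star) >= abs(t_obs))   # bootstrap p-value

rng = np.random.default_rng(0)
y = rng.normal(0.2, 1 + (np.arange(100) > 50), size=100)  # heteroskedastic data
print(wild_bootstrap_mean_t(y, rng=rng))
```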

6.
Noting that many economic variables display occasional shifts in their second-order moments, we investigate the performance of homogeneous panel unit root tests in the presence of permanent volatility shifts. It is shown that in this case the test statistic proposed by Herwartz and Siedenburg (2008) is asymptotically standard Gaussian. By means of a simulation study, we illustrate the performance of first- and second-generation panel unit root tests and undertake a more detailed comparison of the Herwartz and Siedenburg (2008) test and its heteroskedasticity-consistent Cauchy counterpart introduced in Demetrescu and Hanck (2012a). As an empirical illustration, we reassess the evidence on the Fisher hypothesis with data from nine countries over the period 1961Q2–2011Q2. The empirical evidence supports panel stationarity of the real interest rate over the full sample period. With regard to the most recent two decades, however, the test results cast doubt on market integration, since the real interest rate is diagnosed as nonstationary.

7.
A proposed method based on frailty models is used to identify longitudinal biomarkers or surrogates for multivariate survival times. The method extends earlier models of Wulfsohn and Tsiatis (1997) and Song et al. (2002). In this article, similar to Henderson et al. (2002), a joint likelihood function combines the likelihood functions of the longitudinal biomarkers and the multivariate survival times. We use simulations to explore how the number of individuals, the number of time points per individual, and the functional form of the random effects of the longitudinal biomarkers influence the power to detect an association between a longitudinal biomarker and the multivariate survival time. The proposed method is illustrated using gastric cancer data.

8.
A variety of statistical approaches have been suggested in the literature for the analysis of bounded outcome scores (BOS). In this paper, we suggest a statistical approach for the situation where BOS are repeatedly measured over time and used as predictors in a regression model. Instead of using the BOS directly as a predictor, we propose to extend the approaches suggested by Lesaffre et al. (2007), Molas and Lesaffre (2008), and Tsonaka et al. (2006) to a joint modeling setting. Our approach is illustrated on longitudinal profiles of multiple patient-reported outcomes used to predict the current clinical status of rheumatoid arthritis patients as measured by the disease activity score of 28 joints (DAS28). Both a maximum likelihood and a Bayesian approach are developed.
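A common way to move a discrete bounded score onto the real line, in the spirit of the logistic transform studied by Lesaffre et al. (2007), is a boundary-adjusted logit. The adjustment constant below is one widely used convention, not necessarily the paper's; the function name is ours.

```python
import numpy as np

def logit_bos(score, k_max, eps=0.5):
    """Map a discrete bounded outcome score in {0, ..., k_max} to the real
    line via a boundary-adjusted logit so it can enter a (mixed) regression
    model. The eps shift keeps the boundary categories finite."""
    p = (np.asarray(score, dtype=float) + eps) / (k_max + 2 * eps)
    return np.log(p / (1 - p))

print(logit_bos([0, 14, 28], k_max=28))   # boundary scores map to finite values
```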

9.
In this article, we propose a weighted simulated integrated conditional moment (WSICM) test of the validity of parametric specifications of conditional distribution models for stationary time series data, combining the weighted integrated conditional moment (ICM) test of Bierens (1984) for time series regression models with the simulated ICM test of Bierens and Wang (2012) for conditional distribution models for cross-section data. To the best of our knowledge, no other consistent test for parametric conditional time series distributions has yet been proposed in the literature, despite consistency claims made by some authors.

10.
Karlis and Santourian (2009) proposed a model-based clustering algorithm, based on the expectation–maximization (EM) algorithm, to fit mixtures of the multivariate normal-inverse Gaussian (NIG) distribution. However, the EM algorithm for the multivariate NIG mixture requires a set of initial values to begin the iterative process, and the number of components has to be given a priori. In this paper, we present a learning-based EM algorithm whose aim is to overcome these weaknesses. The proposed algorithm is inspired by Yang et al. (2012) and emulates their self-clustering mechanism. Numerical experiments show promising results compared to Karlis and Santourian's EM algorithm. Moreover, the methodology is applicable to the analysis of extrasolar planets. Our analysis provides an understanding of the clustering results in the ln P–ln M and ln P–e spaces, where M is the planetary mass, P is the orbital period, and e is the orbital eccentricity. The identified groups reflect two phenomena: (1) the characteristics of the two clusters in ln P–ln M space may be related to tidal and disc interactions (see Jiang et al., 2003); and (2) there are two clusters in ln P–e space.
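The E-step/M-step alternation that the learning-based variant builds on looks as follows for a plain one-dimensional Gaussian mixture, a deliberately simplified stand-in for the multivariate NIG mixture. Note the dependence on initial values and on a fixed number of components k, the two weaknesses the paper targets.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100, rng=None):
    """Plain EM for a 1-D Gaussian mixture (illustrative stand-in only)."""
    rng = rng or np.random.default_rng()
    n = len(x)
    mu = rng.choice(x, size=k, replace=False)   # initial values must be supplied
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)                    # k is fixed a priori
    for _ in range(iters):
        # E-step: posterior responsibilities of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted updates of the parameters
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / n
    return pi, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 0.5, 100)])
print(em_gmm_1d(x, k=2, rng=rng))
```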

11.
In this paper, we consider a model for repeated count data with within-subject correlation and/or overdispersion. It extends both the generalized linear mixed model and the negative-binomial model. This model, proposed in a likelihood context by Molenberghs et al. (2007, 2010), is placed in a Bayesian inferential framework. An important contribution is Bayesian model assessment based on pivotal quantities rather than the often less adequate DIC. Using a real biological data set, we also discuss some Bayesian model selection aspects, employing a pivotal quantity proposed by Johnson (2007).

12.
In this article, we discuss stochastic comparisons and the optimal allocation of policy limits and deductibles. We order the total retained losses of a policyholder in the usual stochastic order under more general conditions on X_i (i = 1,…, n), on the basis of which the optimal allocation of policy limits and deductibles is obtained in some special cases. Several results of Cheung (2007) and Lu and Meng (2011) are generalized here.
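The retained-loss arithmetic underlying the allocation problem is simple. Under the standard definitions, a policy limit l_i means the insurer pays min(x_i, l_i) so the holder retains (x_i - l_i)_+, while a deductible d_i means the insurer pays (x_i - d_i)_+ so the holder retains min(x_i, d_i). A sketch (function name ours):

```python
import numpy as np

def retained_loss(x, limits=None, deductibles=None):
    """Total retained loss of a policyholder facing losses x_1..x_n,
    under a vector of policy limits or a vector of deductibles."""
    x = np.asarray(x, dtype=float)
    if limits is not None:
        return np.maximum(x - np.asarray(limits), 0.0).sum()
    return np.minimum(x, np.asarray(deductibles)).sum()

x = [5.0, 12.0, 3.0]
print(retained_loss(x, limits=[4, 4, 4]))        # equal split of a total limit of 12 -> 9.0
print(retained_loss(x, deductibles=[4, 4, 4]))   # equal split of a total deductible of 12 -> 11.0
```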

13.
This article derives diagnostic procedures for symmetrical nonlinear regression models, continuing the work of Cysneiros and Vanegas (2008) and Vanegas and Cysneiros (2010), who showed that parameter estimates in nonlinear models are more robust under heavy-tailed errors than under normal errors. Here we assess whether the robustness of this class of models also carries over to the inference process (i.e., the partial F-test). Symmetrical nonlinear regression models admit all symmetric continuous error distributions, covering both light- and heavy-tailed cases such as the Student-t, logistic-I and -II, power exponential, generalized Student-t, generalized logistic, and contaminated normal distributions. First, a statistical test is presented for evaluating the assumption that the error terms all have equal variance, and simulation results describing the behavior of this heteroscedasticity test in the presence of outliers are given. To assess the robustness of the inference process, we then present a simulation study of the behavior of the partial F-test in the presence of outliers. Diagnostic procedures are also derived to identify observations influential on the partial F-test. As an illustration, a data set described in Venables and Ripley (2002) is analyzed.

14.
‘Middle censoring’ is a very general censoring scheme in which the actual value of an observation becomes unobservable if it falls inside a random interval (L, R); it includes both left and right censoring as special cases. In this paper, we consider discrete lifetime data that follow a geometric distribution subject to middle censoring. Two major innovations of this paper, compared to the earlier work of Davarzani and Parsian (2011), are (i) an extension and generalization to the case where covariates are present along with the data, and (ii) an alternative approach and proofs that exploit the simple relationship between the geometric and exponential distributions, so that the theory is more in line with the work of Iyer et al. (2008). It is also demonstrated that this kind of discretization of lifetimes gives results close to those obtained from the original exponential lifetimes. Maximum likelihood estimation of the parameters is studied for this middle-censoring scheme with covariates, and the large-sample distributions of the estimators are discussed. Simulation results indicate how well the proposed estimation methods work, and an illustrative example using time-to-pregnancy data from Baird and Wilcox (1985) is included.
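A minimal sketch of the middle-censored geometric likelihood, without covariates, is given below (function names ours; the paper additionally handles covariates and develops the large-sample theory). For a geometric lifetime on {1, 2, ...} with P(T = t) = (1-p)^(t-1) p, an interval-censored observation known only to lie in {l, ..., r} contributes P(l <= T <= r) = (1-p)^(l-1) - (1-p)^r.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def geom_middle_censored_mle(exact, intervals):
    """MLE of the geometric parameter p under middle censoring:
    'exact' holds fully observed lifetimes, 'intervals' holds (l, r)
    pairs for observations censored into {l, ..., r}."""
    exact = np.asarray(exact, dtype=float)
    def negloglik(p):
        ll = np.sum((exact - 1) * np.log1p(-p) + np.log(p))   # exact part
        for l, r in intervals:                                # censored part
            ll += np.log((1 - p) ** (l - 1) - (1 - p) ** r)
        return -ll
    res = minimize_scalar(negloglik, bounds=(1e-6, 1 - 1e-6), method="bounded")
    return res.x

print(geom_middle_censored_mle([2, 1, 4, 3], [(2, 5), (1, 3)]))
```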

15.
We extend Hansen's (2005) recentering method to a continuum of inequality constraints to construct new Kolmogorov–Smirnov tests for stochastic dominance of any pre-specified order. We show that our tests have correct asymptotic size, are consistent against fixed alternatives, and are unbiased against some N^{-1/2} local alternatives. By avoiding the least favorable configuration, our tests are shown to be less conservative and more powerful than those of Barrett and Donald (2003), and in some of the simulation examples we consider they are also more powerful than the subsampling test of Linton et al. (2005). We apply our method to test stochastic dominance relations between the Canadian income distributions of 1978 and 1986 considered in Barrett and Donald (2003) and find that some of the hypothesis-testing results differ under the new method.
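The raw one-sided KS statistic for first-order dominance is easy to state; the paper's contribution lies in the recentered critical values, which the sketch below does not attempt (function name ours).

```python
import numpy as np

def ks_sd1_statistic(x, y):
    """One-sided KS statistic for first-order stochastic dominance of X
    over Y: the normalized sup of F_X(t) - F_Y(t) over the pooled sample.
    Values near zero are consistent with F_X <= F_Y everywhere."""
    grid = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)  # ECDF of x
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)  # ECDF of y
    n, m = len(x), len(y)
    scale = np.sqrt(n * m / (n + m))          # usual two-sample normalization
    return scale * np.max(Fx - Fy)

rng = np.random.default_rng(0)
print(ks_sd1_statistic(rng.normal(0.5, 1, 300), rng.normal(0, 1, 300)))
```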

16.
Since the seminal paper of Cook and Weisberg (1982), local influence has, next to case deletion, gained popularity as a tool to detect influential subjects and measurements for a variety of statistical models. For the linear mixed model, the approach leads to easily interpretable and computationally convenient expressions, highlighting not only influential subjects but also which aspect of their profile exerts undue influence on the model fit (Lesaffre and Verbeke, 1998). Ouwens et al. (2001) applied the method to the Poisson-normal generalized linear mixed model (GLMM). Given the model's nonlinear structure, those authors did not derive interpretable components but focused instead on a graphical depiction of influence. In this paper, we consider GLMMs for binary, count, and time-to-event data, with the additional feature of accommodating overdispersion whenever necessary. For each situation, three approaches are considered, based on (1) purely numerical derivations, (2) a closed-form expression of the marginal likelihood function, and (3) an integral representation of this likelihood. Unlike with case deletion, this leads to interpretable components, allowing us not only to identify influential subjects but also to study the cause of their influence. The methodology is illustrated in case studies covering the three data types mentioned.

17.
In cancer research, study of the hazard function provides useful insights into disease dynamics, as it describes the way in which the (conditional) probability of death changes with time. The widely used Cox proportional hazards model employs a stepwise nonparametric estimator of the baseline hazard function and therefore has limited utility for this purpose. Parametric models and/or other approaches that enable direct estimation of the hazard function are often invoked instead. Recent work by Cox et al. (2007) has stimulated the use of a flexible parametric model based on the generalized gamma (GG) distribution, supported by the development of optimization software. The GG distribution allows estimation of different hazard shapes within a single framework. We use the GG model to investigate the shape of the hazard function in early breast cancer patients. A flexible approach based on a piecewise exponential model and the nonparametric additive hazards model are also considered.
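Because scipy exposes the generalized gamma family, the hazard h(t) = f(t)/S(t) can be evaluated directly; the brief sketch below uses arbitrary parameter values, not estimates from any breast cancer data.

```python
import numpy as np
from scipy.stats import gengamma

def gg_hazard(t, a, c, scale=1.0):
    """Hazard of the generalized gamma distribution: h(t) = f(t) / S(t).
    scipy's gengamma(a, c) nests the Weibull (a=1) and gamma (c=1)
    distributions, so one parametric family can produce monotone and
    non-monotone hazard shapes."""
    dist = gengamma(a, c, scale=scale)
    return dist.pdf(t) / dist.sf(t)

t = np.linspace(0.1, 5, 5)
print(gg_hazard(t, a=2.0, c=0.8))   # hazard evaluated on a small grid
```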

18.
This paper provides an efficient new unbiased estimator of the proportion of a potentially sensitive attribute in survey sampling. The suggested randomization device makes use of the means and variances of the scrambling variables together with two scalars lying between zero and one, so the same amount of information is used at the estimation stage. The variance formula of the suggested estimator is obtained, and the estimator is compared with those of Kuk (1990), Franklin (1989), and Singh and Chen (2009); conditions are derived under which the proposed estimator is more efficient. The optimum estimator (OE) in the proposed class is identified and depends only on moment ratios of the scrambling variables; its variance is obtained and compared with the variances of the Kuk (1990), Franklin (1989), and Singh and Chen (2009) estimators. Notably, the optimum estimator in the Singh and Chen (2009) class depends on the parameter π under investigation, which limits its use in practice, whereas the OE proposed here is free of this constraint. Numerical illustrations are given for the case where the scrambling variables follow a normal distribution, and both the theoretical and empirical results favor the proposed estimator.
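The scrambling devices compared here descend from Warner's (1965) randomized response design, which is simple enough to sketch in full. This is a classical illustration of the unbiased-estimation idea, not the paper's estimator.

```python
import numpy as np

def warner_estimate(responses, p):
    """Warner's classical randomized response estimator. Each respondent
    answers the sensitive question with probability p and its complement
    with probability 1-p, so lam = P('yes') = p*pi + (1-p)*(1-pi) and
    pi_hat = (lam_hat - (1-p)) / (2p - 1) is unbiased for pi (p != 1/2)."""
    lam_hat = np.mean(responses)
    return (lam_hat - (1 - p)) / (2 * p - 1)

rng = np.random.default_rng(0)
pi_true, p, n = 0.3, 0.7, 2000
sensitive = rng.random(n) < pi_true            # true (hidden) statuses
ask_direct = rng.random(n) < p                 # spinner outcome per respondent
responses = np.where(ask_direct, sensitive, ~sensitive)
print(warner_estimate(responses, p))           # should be near 0.3
```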

19.
Analysis of covariance (ANCOVA) is the standard procedure for comparing several treatments when the response variable depends on one or more covariates. We consider the problem of testing the equality of treatment effects when the variances are not assumed equal. It is well known that the classical F test is not robust to violations of the equal-variance assumption and may lead to misleading conclusions. Ananda (1998) developed a generalized F test for the equality of treatment effects; however, simulation studies show that the actual size of this test can be much higher than the nominal level when the sample sizes are small, particularly when the number of treatments is large. In this article, we develop a test using the parametric bootstrap (PB) approach of Krishnamoorthy et al. (2007). Our simulations show that the actual size of the proposed test is close to the nominal level, irrespective of the number of treatments and the sample sizes, and that the PB test is more robust to the normality assumption than the generalized F test. The proposed PB test therefore provides a satisfactory alternative to the generalized F test.
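Parametric bootstrap tests of this kind follow a generic recipe: estimate the model under the null, simulate data sets from the fitted null model, and compare the observed statistic with the simulated null distribution. A skeleton with placeholder callables follows (names ours; the paper's heteroscedastic ANCOVA statistic would be supplied as stat_fn).

```python
import numpy as np

def pb_pvalue(stat_fn, fit_null_fn, simulate_fn, data, B=2000, rng=None):
    """Generic parametric bootstrap p-value. stat_fn maps a data set to a
    scalar test statistic; fit_null_fn estimates the null-model parameters;
    simulate_fn draws one data set from the fitted null model."""
    rng = rng or np.random.default_rng()
    t_obs = stat_fn(data)
    theta0 = fit_null_fn(data)                # parameter estimates under H0
    t_star = np.array([stat_fn(simulate_fn(theta0, rng)) for _ in range(B)])
    return np.mean(t_star >= t_obs)           # upper-tail bootstrap p-value
```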

20.
In this article, we investigate whether any of k treatments are better than a control under the assumption that each treatment mean is no less than the control mean. A classic problem is to find simultaneous confidence bounds for the differences between each treatment and the control. Compared with hypothesis testing, confidence bounds have the attractive advantage of conveying more information about the effective treatments. One-sided lower bounds are generally provided, since they suffice for detecting effective treatments and are sharper than two-sided bounds; a two-sided procedure, however, provides both upper and lower bounds on the differences. In this article, we develop a new procedure that combines the good aspects of both the one-sided and two-sided procedures. The new procedure has the same inferential sensitivity as the one-sided procedure proposed by Zhao (2007) while also providing simultaneous two-sided bounds for the differences between the treatments and the control. Our computations show that the new procedure outperforms the procedure of Hayter et al. (2000) when the sample sizes are balanced. We also illustrate the new procedure with an example.

