Found 20 similar documents (search time: 15 ms)
1.
This paper presents a new variable-weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. [18]. Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when the data contain noise variables. Compared with k-means, FCM, and KCL with existing variable-weight methods, the proposed WFKCL algorithm with the SVD weight method provides better clustering performance under the error-rate criterion. Furthermore, the complexity of the proposed SVD approach is lower than that of Pal et al. [17], Wang et al. [19], and Hung et al. [9].
2.
Guangyu Mao, Econometric Reviews, 2018, 37(5): 491–506
This article is concerned with a sphericity test for the two-way error components panel data model. It is found that the John statistic and the bias-corrected LM statistic recently developed by Baltagi et al. (2011, 2012), which are based on the within residuals, are not helpful under the present circumstances, even though they are in the one-way fixed effects model. However, we prove that when the within residuals are properly transformed, the resulting residuals can serve to construct useful statistics similar to those of Baltagi et al. (2011, 2012). Simulation results show that the newly proposed statistics perform well under the null hypothesis and several typical alternatives.
3.
This article suggests random and fixed effects spatial two-stage least squares estimators for the generalized mixed regressive spatial autoregressive panel data model. This extends the generalized spatial panel model of Baltagi et al. (2013) by including a spatial lag term. The estimation method uses the Generalized Moments method suggested by Kapoor et al. (2007) for a spatial autoregressive panel data model. We derive the asymptotic distributions of these estimators and suggest a Hausman test à la Mutl and Pfaffermayr (2011) based on the difference between these estimators. Monte Carlo experiments are performed to investigate the performance of these estimators as well as the corresponding Hausman test.
4.
Kanti V. Mardia, Communications in Statistics - Theory and Methods, 2014, 43(6): 1132–1144
In application areas like bioinformatics, multivariate distributions on angles that show significant clustering are encountered. One approach to statistical modeling of such situations is to use mixtures of unimodal distributions. In the literature (Mardia et al., 2012), the multivariate von Mises distribution, also known as the multivariate sine distribution, has been suggested for the components of such models, but work in the area has been hampered by the fact that no good criteria for the von Mises distribution to be unimodal were available. In this article we study the question of when a multivariate von Mises distribution is unimodal. We give sufficient criteria for this to be the case and show examples of distributions with multiple modes when these criteria are violated. In addition, we propose a method to generate samples from the von Mises distribution in the case of high concentration.
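The high-concentration regime mentioned in the abstract admits a simple sampling shortcut. Below is a minimal sketch (not the authors' method, and univariate only): for large concentration kappa, the von Mises(mu, kappa) density is close to a wrapped normal with variance 1/kappa, so drawing from a normal and wrapping the angle gives approximate samples. The function name and parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch: approximate von Mises sampling at high concentration via a
# wrapped normal with variance 1/kappa.
def sample_von_mises_high_kappa(mu, kappa, size, rng):
    theta = rng.normal(loc=mu, scale=1.0 / np.sqrt(kappa), size=size)
    return np.angle(np.exp(1j * theta))  # wrap onto (-pi, pi]

rng = np.random.default_rng(0)
samples = sample_von_mises_high_kappa(mu=0.0, kappa=50.0, size=10_000, rng=rng)
circ_mean = np.angle(np.mean(np.exp(1j * samples)))  # circular mean
print(abs(circ_mean) < 0.05)
```

The approximation degrades for small kappa, where a rejection sampler (e.g., Best–Fisher) would be the usual choice instead.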
5.
Siti Haslinda Mohd Din, Marek Molas, Jolanda Luime, Emmanuel Lesaffre, Journal of Applied Statistics, 2014, 41(8): 1627–1644
A variety of statistical approaches have been suggested in the literature for the analysis of bounded outcome scores (BOS). In this paper, we suggest a statistical approach for the case where BOSs are repeatedly measured over time and used as predictors in a regression model. Instead of directly using the BOS as a predictor, we propose to extend the approaches suggested in [16,21,28] to a joint modeling setting. Our approach is illustrated on longitudinal profiles of multiple patient-reported outcomes to predict the current clinical status of rheumatoid arthritis patients by the disease activity score of 28 joints (DAS28). Both a maximum likelihood and a Bayesian approach are developed.
6.
In this paper we propose a new lifetime model for multivariate survival data in the presence of surviving fractions and examine some of its properties. Its genesis is based on situations in which there are m types of unobservable competing causes, where each cause is related to the time of occurrence of an event of interest. Our model is a multivariate extension of the univariate survival cure rate model proposed by Rodrigues et al. [37]. The inferential approach exploits maximum likelihood tools. We perform a simulation study to verify the asymptotic properties of the maximum likelihood estimators. The simulation study also focuses on the size and power of the likelihood ratio test. The methodology is illustrated on a real customer churn data set.
7.
Analysis of discrete lifetime data under middle-censoring and in the presence of covariates
S. Rao Jammalamadaka, Journal of Applied Statistics, 2015, 42(4): 905–913
‘Middle censoring’ is a very general censoring scheme in which the actual value of an observation becomes unobservable if it falls inside a random interval (L, R); it includes both left and right censoring. In this paper, we consider discrete lifetime data that follow a geometric distribution subject to middle censoring. Two major innovations in this paper, compared to the earlier work of Davarzani and Parsian [3], are (i) an extension and generalization to the case where covariates are present along with the data and (ii) an alternate approach and proofs that exploit the simple relationship between the geometric and the exponential distributions, so that the theory is more in line with the work of Iyer et al. [6]. It is also demonstrated that this kind of discretization of lifetimes gives results close to those for the original data involving exponential lifetimes. Maximum likelihood estimation of the parameters is studied for this middle-censoring scheme with covariates, and the large-sample distributions of the estimators are discussed. Simulation results indicate how well the proposed estimation methods work, and an illustrative example using time-to-pregnancy data from Baird and Wilcox [1] is included.
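The geometric–exponential relationship this abstract alludes to can be checked numerically: if X follows an exponential distribution with rate lam, then floor(X) is geometric on {0, 1, 2, ...} with success probability p = 1 - exp(-lam). The rate and sample size below are arbitrary assumptions for illustration.

```python
import numpy as np

# Sketch: discretizing exponential lifetimes yields geometric lifetimes.
rng = np.random.default_rng(42)
lam = 0.7
x = rng.exponential(scale=1.0 / lam, size=200_000)
discrete = np.floor(x)

p = 1.0 - np.exp(-lam)            # implied geometric success probability
emp_p0 = np.mean(discrete == 0)   # P(floor(X) = 0) should be close to p
print(round(p, 3), round(emp_p0, 3))
```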
8.
Analysis of covariance (ANCOVA) is the standard procedure for comparing several treatments when the response variable depends on one or more covariates. We consider the problem of testing the equality of treatment effects when the variances are not assumed to be equal. It is well known that the classical F test is not robust to violations of the equal-variance assumption and may lead to misleading conclusions when the variances are unequal. Ananda (1998) developed a generalized F test for testing the equality of treatment effects. However, simulation studies show that the actual size of this test can be much higher than the nominal level when the sample sizes are small, particularly when the number of treatments is large. In this article, we develop a test using the parametric bootstrap (PB) approach of Krishnamoorthy et al. (2007). Our simulations show that the actual size of the proposed test is close to the nominal level, irrespective of the number of treatments and the sample sizes. Our simulations also indicate that the proposed PB test is more robust to departures from normality than the generalized F test. Therefore, the proposed PB test provides a satisfactory alternative to the generalized F test.
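The parametric-bootstrap idea can be sketched in a simplified setting without covariates: compare several group means under unequal variances by resampling the test statistic's null distribution from the estimated per-group variances. The statistic and resampling scheme below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

# Sketch of a PB test for equality of group means under heteroscedasticity.
def pb_test(groups, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    ns = np.array([len(g) for g in groups])
    means = np.array([np.mean(g) for g in groups])
    variances = np.array([np.var(g, ddof=1) for g in groups])
    w = ns / variances                     # precision weights
    grand = np.sum(w * means) / np.sum(w)  # weighted grand mean
    stat = np.sum(w * (means - grand) ** 2)

    # Null distribution: redraw group means from N(0, s_i^2 / n_i)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        m = rng.normal(0.0, np.sqrt(variances / ns))
        g0 = np.sum(w * m) / np.sum(w)
        boot[b] = np.sum(w * (m - g0) ** 2)
    return float(np.mean(boot >= stat))    # PB p-value

rng = np.random.default_rng(1)
equal_means = [rng.normal(0, s, n) for s, n in [(1, 20), (3, 15), (0.5, 30)]]
shifted = [rng.normal(mu, 1, 25) for mu in (0, 0, 3)]
p_equal, p_shifted = pb_test(equal_means), pb_test(shifted)
print(p_shifted < 0.05)
```

Unlike the classical F test, the null distribution here adapts to the estimated group variances rather than pooling them.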
9.
This article derives diagnostic procedures for symmetrical nonlinear regression models, continuing the work of Cysneiros and Vanegas (2008) and Vanegas and Cysneiros (2010), who showed that the parameter estimates in nonlinear models are more robust with heavy-tailed than with normal errors. Here, we focus on assessing whether the robustness of this class of models also holds in the inference process (i.e., the partial F-test). Symmetrical nonlinear regression models include all symmetric continuous error distributions, covering both light- and heavy-tailed distributions such as the Student-t, logistic-I and -II, power exponential, generalized Student-t, generalized logistic, and contaminated normal. First, a statistical test is presented for evaluating the assumption that the error terms all have equal variance. Results of simulation studies describing the behavior of the proposed heteroscedasticity test in the presence of outliers are then given. To assess the robustness of the inference process, we present the results of a simulation study describing the behavior of the partial F-test in the presence of outliers. Some diagnostic procedures are also derived to identify observations that are influential on the partial F-test. As an illustration, a dataset described in Venables and Ripley (2002) is also analyzed.
10.
Biao Zhang, Econometric Reviews, 2016, 35(2): 201–231
This paper discusses the estimation of average treatment effects in observational causal inference. By employing a working propensity score and two working regression models for the treatment and control groups, Robins et al. (1994, 1995) introduced the augmented inverse probability weighting (AIPW) method for estimating average treatment effects, which extends the inverse probability weighting (IPW) method of Horvitz and Thompson (1952); the AIPW estimators are locally efficient and doubly robust. In this paper, we study a hybrid of the empirical likelihood method and the method of moments employing three estimating functions, which generates estimators of average treatment effects that are locally efficient and doubly robust. The proposed estimators of average treatment effects are efficient for the given choice of three estimating functions when the working propensity score is correctly specified, and thus are more efficient than the AIPW estimators. In addition, we consider a regression method for estimating the average treatment effects when the working regression models for both the treatment and control groups are correctly specified; the asymptotic variance of the resulting estimator is no greater than the semiparametric variance bound characterized by the theory of Robins et al. (1994, 1995). Finally, we present a simulation study comparing the finite-sample performance of the various methods with respect to bias, efficiency, and robustness to model misspecification.
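The AIPW estimator that serves as the benchmark in this abstract is straightforward to write down. The sketch below uses a simulated design with a true average treatment effect of 2 and plugs in the true propensity score and regression functions as the "working" models; all of these simulation choices are assumptions for illustration only.

```python
import numpy as np

# Sketch of the AIPW estimator of the average treatment effect.
def aipw(y, t, e_hat, m1_hat, m0_hat):
    aug1 = t * (y - m1_hat) / e_hat + m1_hat              # treated arm
    aug0 = (1 - t) * (y - m0_hat) / (1 - e_hat) + m0_hat  # control arm
    return float(np.mean(aug1 - aug0))

rng = np.random.default_rng(0)
n = 50_000
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-x))           # propensity score
t = rng.binomial(1, e)
y = 2.0 * t + x + rng.normal(size=n)   # additive effect: true ATE = 2
est = aipw(y, t, e_hat=e, m1_hat=2.0 + x, m0_hat=x)
print(round(est, 2))
```

Double robustness means the estimate remains consistent if either the propensity model or the outcome regressions (but not necessarily both) are correctly specified.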
11.
Developing statistical methods to model hydrologic events is of continuing interest to both statisticians and hydrologists because of its importance in hydraulic structure design and water resource planning. Motivated by this, a flexible three-parameter generalization of the exponential distribution is introduced based on the binomial exponential 2 (BE2) distribution [2]. The proposed distribution involves the exponential, gamma, and BE2 distributions as submodels, and it exhibits decreasing, increasing, and bathtub-shaped hazard rates, so it is quite flexible for analyzing non-negative real-life data. Some statistical properties, parameter estimation, and the information matrix of the distribution are investigated. The proposed distribution, the Gumbel, the generalized logistic, and other distributions are used to model and fit two hydrologic data sets. The proposed distribution is shown to fit the data better than the compared distributions under the selection criteria: average scaled absolute error, Akaike information criterion, Bayesian information criterion, and Kolmogorov–Smirnov statistics. As a result, some hydrologic parameters of the data are obtained, such as the return level, conditional mean, mean deviation about the return level, and the rth moments of order statistics.
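The return level mentioned at the end of the abstract is simply a quantile of the fitted distribution: the T-year return level solves F(x_T) = 1 - 1/T. The sketch below uses a plain exponential distribution as a stand-in for the paper's fitted model; the rate and return period are arbitrary assumptions.

```python
import math

# Sketch: T-year return level as the (1 - 1/T) quantile.
def return_level_exponential(rate, T):
    # For F(x) = 1 - exp(-rate * x), solving F(x_T) = 1 - 1/T gives
    # x_T = log(T) / rate.
    return math.log(T) / rate

level = return_level_exponential(rate=0.5, T=100)
print(round(level, 2))  # → 9.21
```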
12.
Ye Li, Econometric Reviews, 2017, 36(1-3): 289–353
We consider issues related to inference about locally ordered breaks in a system of equations, as originally proposed by Qu and Perron (2007). These apply when the break dates in different equations of the system are not separated by a positive fraction of the sample size, which allows constructing joint confidence intervals for all such locally ordered break dates. We extend the results of Qu and Perron (2007) in several directions. First, we allow the covariates to be any mix of trends and stationary or integrated regressors. Second, we allow for breaks in the variance-covariance matrix of the errors. Third, we allow for multiple locally ordered breaks, each occurring in a different equation within a subset of equations in the system. Via simulation experiments, we show first that the limit distributions derived provide good approximations to the finite-sample distributions. Second, we show that forming confidence intervals in this joint fashion yields more precise (tighter) intervals than the standard approach of forming confidence intervals using the method of Bai and Perron (1998) applied to a single equation. Simulations also indicate that the locally ordered break confidence intervals yield better coverage rates than the framework for globally distinct breaks when the break dates are separated by roughly 10% of the total sample size.
13.
Fernanda B. Rizzato, Roseli A. Leandro, Clarice G.B. Demétrio, Journal of Applied Statistics, 2016, 43(11): 2085–2109
In this paper, we consider a model for repeated count data with within-subject correlation and/or overdispersion. It extends both the generalized linear mixed model and the negative-binomial model. This model, proposed in a likelihood context [17,18], is placed in a Bayesian inferential framework. An important contribution takes the form of Bayesian model assessment based on pivotal quantities, rather than the often less adequate DIC. By means of a real biological data set, we also discuss some Bayesian model selection aspects, using a pivotal quantity proposed by Johnson [12].
14.
Artūras Juodis, Econometric Reviews, 2018, 37(6): 650–693
This article considers estimation of panel vector autoregressive models of order 1 (PVAR(1)), with a focus on fixed-T consistent estimation methods in first differences (FD) with additional strictly exogenous regressors. Additional results for the panel FD ordinary least squares (OLS) estimator and the FDLS-type estimator of Han and Phillips (2010) are provided. Furthermore, we simplify the analysis of Binder et al. (2005) by providing additional analytical results, and we extend the original model by taking into account possible cross-sectional heteroscedasticity and the presence of strictly exogenous regressors. We show that in a three-wave panel the log-likelihood function of the unrestricted transformed maximum likelihood (TML) estimator might violate the global identification assumption. The finite-sample performance of the analyzed methods is investigated in a Monte Carlo study.
15.
Stephen G. Donald, Econometric Reviews, 2016, 35(4): 553–585
We extend Hansen's (2005) recentering method to a continuum of inequality constraints to construct new Kolmogorov–Smirnov tests for stochastic dominance of any pre-specified order. We show that our tests have correct size asymptotically, are consistent against fixed alternatives, and are unbiased against some N^{-1/2} local alternatives. It is shown that by avoiding the use of the least favorable configuration, our tests are less conservative and more powerful than those of Barrett and Donald (2003), and in some of the simulation examples we consider, our tests are more powerful than the subsampling test of Linton et al. (2005). We apply our method to test stochastic dominance relations between the Canadian income distributions of 1978 and 1986, as considered in Barrett and Donald (2003), and find that some of the hypothesis-testing results differ under the new method.
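The basic Kolmogorov–Smirnov-type statistic underlying such tests can be sketched for first-order stochastic dominance: X dominates Y at first order when F_X(t) <= F_Y(t) for all t, so the statistic takes the scaled supremum of the empirical CDF difference F_X - F_Y over the pooled sample. The recentering refinement described in the abstract is not reproduced here; the simulated samples are illustrative assumptions.

```python
import numpy as np

# Sketch of a KS-type statistic for first-order stochastic dominance.
def sd1_stat(x, y):
    grid = np.sort(np.concatenate([x, y]))
    fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.sqrt(len(x)) * np.max(fx - fy)  # large values reject dominance

rng = np.random.default_rng(0)
low = rng.normal(0.0, 1.0, 1000)
high = rng.normal(1.0, 1.0, 1000)  # shifted up: dominates `low`
print(sd1_stat(high, low) < sd1_stat(low, high))
```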
16.
In this article, we propose a weighted simulated integrated conditional moment (WSICM) test of the validity of parametric specifications of conditional distribution models for stationary time series data, by combining the weighted integrated conditional moment (ICM) test of Bierens (1984) for time series regression models with the simulated ICM test of Bierens and Wang (2012) for conditional distribution models for cross-section data. To the best of our knowledge, no other consistent test for parametric conditional time series distributions has yet been proposed in the literature, despite consistency claims made by some authors.
17.
Classification and regression trees have been useful in medical research for constructing algorithms for disease diagnosis or prognostic prediction. Jin et al. [7] developed a robust and cost-saving tree (RACT) algorithm with an application to the classification of hip fracture risk after 5-year follow-up, based on data from the Study of Osteoporotic Fractures (SOF). Although conventional recursive partitioning algorithms are well developed, they still have some limitations. Binary splits may generate a big tree with many layers, while trinary splits may produce too many nodes. In this paper, we propose a classification approach combining trinary and binary splits to generate a trinary–binary tree. A new non-inferiority test of entropy is used to select between binary and trinary splits. We apply the modified method to the SOF data to construct a trinary–binary classification rule for predicting the risk of osteoporotic hip fracture. The new classification tree has good statistical utility: it is statistically non-inferior to the optimal binary tree and to the RACT on the testing sample, and it is also cost-saving. It may be useful in clinical applications: femoral neck bone mineral density, age, height loss, and weight gain since age 25 can identify subjects with elevated 5-year hip fracture risk without loss of statistical efficiency.
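The entropy quantity behind the binary-versus-trinary split choice can be sketched as follows: a split's impurity is the weighted average entropy of its child nodes, and the paper's non-inferiority test compares this quantity across candidate splits. The helper functions and toy label sets below are hypothetical, for illustration only.

```python
import math
from collections import Counter

# Sketch: entropy impurity of a candidate split (binary or trinary).
def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def split_impurity(children):
    n = sum(len(c) for c in children)
    return sum(len(c) / n * entropy(c) for c in children)

pure_binary = split_impurity([[0] * 8, [1] * 8])  # perfectly separated
mixed_trinary = split_impurity([[0, 0, 1], [0, 1, 1], [0, 1]])
print(round(mixed_trinary, 3))  # → 0.939
```

A trinary split would be preferred only when its impurity is not meaningfully worse than the best binary split, per the non-inferiority criterion.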
18.
Noting that many economic variables display occasional shifts in their second-order moments, we investigate the performance of homogeneous panel unit root tests in the presence of permanent volatility shifts. It is shown that in this case the test statistic proposed by Herwartz and Siedenburg (2008) is asymptotically standard Gaussian. By means of a simulation study, we illustrate the performance of first- and second-generation panel unit root tests and undertake a more detailed comparison of the test in Herwartz and Siedenburg (2008) and its heteroskedasticity-consistent Cauchy counterpart introduced in Demetrescu and Hanck (2012a). As an empirical illustration, we reassess the evidence on the Fisher hypothesis with data from nine countries over the period 1961Q2–2011Q2. The empirical evidence supports panel stationarity of the real interest rate for the entire period. With regard to the most recent two decades, however, the test results cast doubt on market integration, since the real interest rate is diagnosed as nonstationary.
19.
The present paper focuses attention on the sensitivity of measured technical inefficiency to the most commonly used one-sided distributions of the inefficiency error term, namely the truncated normal, the half-normal, and the exponential distributions. A generalized version of the half-normal, which does not embody the zero-mean restriction, is also explored. For each distribution, the likelihood function and the counterpart of the estimator of technical efficiency are explicitly stated (Jondrow, J., Lovell, C. A. K., Materov, I. S., Schmidt, P. (1982), On estimation of technical inefficiency in the stochastic frontier production function model, J. Econometrics 19: 233–238). Based on our panel data set, relating to Tunisian manufacturing firms over the period 1983–1993, formal tests lead to a strong rejection of the zero-mean restriction embodied in the half-normal distribution. Our main conclusion is that the degree of measured inefficiency is very sensitive to the postulated assumptions about the distribution of the one-sided error term. The estimated inefficiency indices are, however, unaffected by the choice of the functional form for the production function.
20.
Guillaume Chevillon, Econometric Reviews, 2017, 36(5): 514–545
Standard tests for the cointegration rank of a vector autoregressive process have distributions that are affected by the presence of deterministic trends. We consider the recent approach of Demetrescu et al. (2009), who recommend testing a composite null. We assess this methodology in the presence of trends (linear or broken) whose magnitude is small enough not to be always detectable at conventional significance levels. We model them using local asymptotics and derive the properties of the test statistics. We show that whether the trend is orthogonal to the cointegrating vector has a major impact on the distributions, but that the test combination approach remains valid. We apply the methodology to the study of cointegration properties between global temperatures and the radiative forcing of human gas emissions and find new evidence of Granger causality.