The query returned 20 similar documents (search took 31 ms)
1.
This article suggests random and fixed effects spatial two-stage least squares estimators for the generalized mixed regressive spatial autoregressive panel data model. This extends the generalized spatial panel model of Baltagi et al. (2013) by the inclusion of a spatial lag term. The estimation method utilizes the Generalized Moments method suggested by Kapoor et al. (2007) for a spatial autoregressive panel data model. We derive the asymptotic distributions of these estimators and suggest a Hausman test à la Mutl and Pfaffermayr (2011) based on the difference between these estimators. Monte Carlo experiments are performed to investigate the performance of these estimators as well as the corresponding Hausman test.
2.
Noting that many economic variables display occasional shifts in their second-order moments, we investigate the performance of homogeneous panel unit root tests in the presence of permanent volatility shifts. It is shown that in this case the test statistic proposed by Herwartz and Siedenburg (2008) is asymptotically standard Gaussian. By means of a simulation study we illustrate the performance of first- and second-generation panel unit root tests and undertake a more detailed comparison of the test in Herwartz and Siedenburg (2008) and its heteroskedasticity-consistent Cauchy counterpart introduced in Demetrescu and Hanck (2012a). As an empirical illustration, we reassess evidence on the Fisher hypothesis with data from nine countries over the period 1961Q2–2011Q2. Empirical evidence supports panel stationarity of the real interest rate for the entire sample period. With regard to the most recent two decades, however, the test results cast doubt on market integration, since the real interest rate is diagnosed as nonstationary.
3.
This paper presents a new variable-weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. [18]. Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when the data contain noise variables. Compared with k-means, FCM, and KCL with existing variable-weight methods, the proposed WFKCL algorithm with the SVD weight method provides better clustering performance under the error-rate criterion. Furthermore, the complexity of the proposed SVD approach is lower than that of Pal et al. [17], Wang et al. [19], and Hung et al. [9].
4.
Haibing Zhao, Communications in Statistics - Theory and Methods, 2014, 43(6): 1179-1191
In this article, we investigate whether any of k treatments is better than a control, under the assumption that each treatment mean is no less than the control mean. A classic problem is to find simultaneous confidence bounds for the difference between each treatment and the control. Compared with hypothesis testing, confidence bounds have the attractive advantage of conveying more information about the effective treatment. One-sided lower bounds are usually provided, since they suffice for detecting an effective treatment and are sharper than the lower bounds of two-sided intervals. However, a two-sided procedure provides both upper and lower bounds on the differences. In this article, we develop a new procedure which combines the good aspects of both the one-sided and the two-sided procedures. The new procedure has the same inferential sensitivity as the one-sided procedure proposed by Zhao (2007) while also providing simultaneous two-sided bounds for the differences between treatments and the control. Our computational results show that the new procedure outperforms the procedure of Hayter, Miwa, and Liu (Hayter et al., 2000) when the sample size is balanced. We also illustrate the new procedure with an example.
5.
Maher Kachour, Communications in Statistics - Theory and Methods, 2014, 43(2): 355-376
In recent years, there has been growing interest in modelling integer-valued time series. In this article, we propose a modified and generalized version of the first-order rounded integer-valued autoregressive RINAR(1) model, originally introduced by Kachour and Yao (2009). This class can be considered an alternative to classical models based on thinning operators. Using a Markov chain method, conditions for stationarity and the existence of moments are investigated. A least squares estimator of the model parameters is considered and its consistency is established. Finally, we describe price change data using a model of the new class.
6.
Fayçal Hamdi, Communications in Statistics - Theory and Methods, 2013, 42(22): 4182-4199
The purpose of this article is to develop algorithms for computing the exact Fisher information matrix of periodic time-varying state-space models. We first present a relatively simple recursive algorithm which computes the elements of the exact information matrix without involving numerical differentiation, since all required derivatives are analytically evaluated. The proposed algorithm extends the procedure due to Cavanaugh and Shumway (1996) to the periodic state-space framework. Exploiting the approach used in Klein et al. (2000), a second algorithm is proposed in order to obtain the exact information matrix as a whole instead of element by element. The algorithms are first developed in a general framework and then specialized to the case of a periodic Gaussian vector autoregressive moving-average (PVARMA) model.
7.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR via a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic. The fudge factor added to the test statistic aims at deflating statistics that are large only because of the small standard error of a gene's expression. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared to the SAM procedure without it. Motivated by the simulation results in Lin et al. (2008), in this article we compare several methods for choosing the fudge factor in modified t-type test statistics and use simulation studies to investigate the power and the FDR control of the considered methods.
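The deflating role of the fudge factor can be seen in a toy version of a SAM-like statistic. This is a hypothetical minimal form (gene-wise mean over standard error plus a constant s0); the real SAM statistic involves additional details not shown here.

```python
import numpy as np

def sam_like_t(x, s0):
    """Toy SAM-style statistic per gene: mean / (standard error + fudge factor s0).
    x has shape (genes, replicates). A positive s0 deflates statistics that are
    large only because a gene's variance is tiny."""
    m = x.mean(axis=1)
    se = x.std(axis=1, ddof=1) / np.sqrt(x.shape[1])
    return m / (se + s0)

rng = np.random.default_rng(0)
low_var = 0.05 + 0.001 * rng.standard_normal((1, 6))   # tiny effect, tiny variance
high_var = 1.0 + 1.0 * rng.standard_normal((1, 6))     # real effect, ordinary variance
x = np.vstack([low_var, high_var])
t_plain = sam_like_t(x, s0=0.0)
t_fudged = sam_like_t(x, s0=0.5)
```

With s0 = 0, the low-variance gene gets a huge statistic despite its negligible effect; the fudge factor shrinks it far more than it shrinks the genuinely differential gene.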
8.
Fernanda B. Rizzato, Roseli A. Leandro, Clarice G.B. Demétrio, Journal of Applied Statistics, 2016, 43(11): 2085-2109
In this paper, we consider a model for repeated count data, with within-subject correlation and/or overdispersion. It extends both the generalized linear mixed model and the negative-binomial model. This model, proposed in a likelihood context [17,18], is placed in a Bayesian inferential framework. An important contribution takes the form of Bayesian model assessment based on pivotal quantities, rather than the often less adequate DIC. By means of a real biological data set, we also discuss some Bayesian model selection aspects, using a pivotal quantity proposed by Johnson [12].
9.
Stephen G. Donald, Econometric Reviews, 2016, 35(4): 553-585
We extend Hansen's (2005) recentering method to a continuum of inequality constraints to construct new Kolmogorov–Smirnov tests for stochastic dominance of any pre-specified order. We show that our tests have correct size asymptotically, are consistent against fixed alternatives, and are unbiased against some N^(-1/2) local alternatives. By avoiding the use of the least favorable configuration, our tests are less conservative and more powerful than those of Barrett and Donald (2003), and in some of the simulation examples we consider, our tests are more powerful than the subsampling test of Linton et al. (2005). We apply our method to test stochastic dominance relations between Canadian income distributions in 1978 and 1986, as considered in Barrett and Donald (2003), and find that some of the hypothesis testing results differ under the new method.
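The uncentered one-sided Kolmogorov–Smirnov statistic that underlies first-order dominance tests of this kind is a short computation; the sketch below deliberately omits the recentering step and the simulation of critical values, which are the paper's actual contribution, and all names are illustrative.

```python
import numpy as np

def ks_dominance_stat(x, y):
    """One-sided KS statistic for H0: X first-order dominates Y
    (F_X(z) <= F_Y(z) for all z); large values are evidence against H0."""
    grid = np.sort(np.concatenate([x, y]))
    n, m = len(x), len(y)
    Fx = np.searchsorted(np.sort(x), grid, side="right") / n   # empirical CDF of x
    Fy = np.searchsorted(np.sort(y), grid, side="right") / m   # empirical CDF of y
    return np.sqrt(n * m / (n + m)) * np.max(Fx - Fy)

rng = np.random.default_rng(1)
x = rng.normal(1.0, 1.0, 500)        # stochastically dominates y
y = rng.normal(0.0, 1.0, 500)
stat_dom = ks_dominance_stat(x, y)   # small: dominance actually holds
stat_rev = ks_dominance_stat(y, x)   # large: y does not dominate x
```

Comparing this raw statistic to least-favorable-configuration critical values is what makes the Barrett-Donald approach conservative; recentering tightens it.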
10.
Heckman's (1976, 1979) sample selection model has been employed in many studies of linear and nonlinear regression applications. It is well known that ignoring the sample selectivity may result in inconsistency of the estimator due to the correlation between the statistical errors in the selection and main equations. In this article, we reconsider the maximum likelihood estimator for the panel sample selection model in Keane et al. (1988). Since the panel data model contains individual effects, such as fixed or random effects, the likelihood function is more complicated than that of the classical Heckman model. As an alternative to the existing derivation of the likelihood function in the literature, we show that the conditional distribution of the main equation follows a closed skew-normal (CSN) distribution, of which the linear transformation is still a CSN. Although the evaluation of the likelihood function involves high-dimensional integration, we show that the integration can be further simplified into a one-dimensional problem and can be evaluated by the simulated likelihood method. Moreover, we also conduct a Monte Carlo experiment to investigate the finite sample performance of the proposed estimator and find that our estimator provides reliable and quite satisfactory results.
11.
This article derives diagnostic procedures for symmetrical nonlinear regression models, continuing the work of Cysneiros and Vanegas (2008) and Vanegas and Cysneiros (2010), who showed that parameter estimates in nonlinear models are more robust with heavy-tailed than with normal errors. Here, we assess whether the robustness of this class of models also holds in the inference process (i.e., the partial F-test). Symmetrical nonlinear regression models include all symmetric continuous error distributions, covering both light- and heavy-tailed distributions such as the Student-t, logistic-I and -II, power exponential, generalized Student-t, generalized logistic, and contaminated normal. First, a statistical test is presented for evaluating the assumption that the error terms all have equal variance. Results of simulation studies describing the behavior of the proposed heteroscedasticity test in the presence of outliers are then given. To assess the robustness of the inference process, we present the results of a simulation study describing the behavior of the partial F-test in the presence of outliers. Diagnostic procedures are also derived to identify observations that are influential on the partial F-test. As an illustration, a dataset described in Venables and Ripley (2002) is analyzed.
12.
Classification and regression trees have been useful in medical research for constructing algorithms for disease diagnosis or prognostic prediction. Jin et al. [7] developed a robust and cost-saving tree (RACT) algorithm with an application to classifying hip fracture risk after 5-year follow-up, based on data from the Study of Osteoporotic Fractures (SOF). Although conventional recursive partitioning algorithms are well developed, they still have some limitations: binary splits may generate a big tree with many layers, while trinary splits may produce too many nodes. In this paper, we propose a classification approach combining trinary splits and binary splits to generate a trinary–binary tree. A new non-inferiority test of entropy is used to select between binary and trinary splits. We apply the modified method to the SOF data to construct a trinary–binary classification rule for predicting the risk of osteoporotic hip fracture. The new classification tree has good statistical utility: on the testing sample it is statistically non-inferior to the optimum binary tree and to the RACT, and it is also cost-saving. It may be useful in clinical applications: femoral neck bone mineral density, age, height loss, and weight gain since age 25 can identify subjects with elevated 5-year hip fracture risk without loss of statistical efficiency.
13.
Distribution-free tests have been proposed in the literature for comparing the hazard rates of two probability distributions when the available samples are complete. In this article, we generalize the test of Kochar (1981) to the case when the available sample is Type-II censored, and then examine its power properties.
14.
I. Ardoino, E. M. Biganzoli, C. Bajdik, P. J. Lisboa, P. Boracchi, F. Ambrogi, Journal of Applied Statistics, 2012, 39(7): 1409-1421
In cancer research, study of the hazard function provides useful insights into disease dynamics, as it describes the way in which the (conditional) probability of death changes with time. The widely used Cox proportional hazards model employs a stepwise nonparametric estimator of the baseline hazard function and therefore has limited utility here. Parametric models and other approaches that enable direct estimation of the hazard function are often invoked instead. Recent work by Cox et al. [6] has stimulated the use of a flexible parametric model based on the generalized gamma (GG) distribution, supported by the development of optimization software. The GG distribution allows estimation of different hazard shapes within a single framework. We use the GG model to investigate the shape of the hazard function in early breast cancer patients. A flexible approach based on a piecewise exponential model and the nonparametric additive hazards model are also considered.
15.
Hsiaw-Chan Yeh, Communications in Statistics - Theory and Methods, 2013, 42(1): 76-87
For studying and modeling the time to failure of a system or component, reliability practitioners have long used the hazard rate and its monotone behavior. Nowadays, however, there are two problems: first, modern components have high reliability and, second, their lifetime distributions usually have non-monotone hazard rates, as with the truncated normal, Burr XII, and inverse Gaussian distributions. Modeling such data with hazard-rate-based models therefore seems too stringent. Zimmer et al. (1998) and Wang et al. (2003, 2008) introduced and studied a new time-to-failure model for continuous distributions based on the log-odds rate (LOR), which is comparable to the model based on the hazard rate. Many components and devices in industry have discrete lifetime distributions with non-monotone hazard rates, so in this article we introduce the discrete log-odds rate, which differs from its analog in the continuous case. An alternative discrete reversed hazard rate, which we call the second reversed rate of failure in discrete time, is also defined. It is shown that failure time distributions can be characterized by the discrete LOR. Moreover, we show that the discrete logistic and discrete log-logistic distributions have a constant discrete LOR with respect to t and ln t, respectively. Furthermore, properties of some distributions with monotone discrete LOR, such as the discrete Burr XII, discrete Weibull, and discrete truncated normal, are obtained.
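The constant-LOR property of the discrete logistic is easy to verify numerically. The sketch below uses one natural discretization, the increment of the log-odds between consecutive integer times; the article's exact definition may differ, and the parameters are illustrative.

```python
import math

def log_odds(cdf, t):
    """Log-odds of failure by time t: ln(F(t) / (1 - F(t)))."""
    p = cdf(t)
    return math.log(p / (1.0 - p))

def discrete_lor(cdf, t):
    """Illustrative discrete log-odds rate: the log-odds increment from t-1 to t."""
    return log_odds(cdf, t) - log_odds(cdf, t - 1)

mu, s = 3.0, 2.0
logistic_cdf = lambda t: 1.0 / (1.0 + math.exp(-(t - mu) / s))

# For the logistic CDF the log-odds at t is exactly (t - mu) / s,
# so every increment equals 1 / s, a constant in t.
rates = [discrete_lor(logistic_cdf, t) for t in range(-5, 10)]
```

With s = 2, every entry of `rates` equals 0.5, matching the constant-LOR characterization stated in the abstract.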
16.
Karlis and Santourian [14] proposed a model-based clustering algorithm, the expectation–maximization (EM) algorithm, to fit mixtures of multivariate normal-inverse Gaussian (NIG) distributions. However, the EM algorithm for the multivariate NIG mixture requires a set of initial values to begin the iterative process, and the number of components has to be given a priori. In this paper, we present a learning-based EM algorithm whose aim is to overcome these weaknesses of Karlis and Santourian's EM algorithm [14]. The proposed algorithm was inspired by Yang et al. [24], whose self-clustering process it emulates. Numerical experiments showed promising results compared to Karlis and Santourian's EM algorithm. Moreover, the methodology is applicable to the analysis of extrasolar planets. Our analysis provides an understanding of the clustering results in the ln P–ln M and ln P–e spaces, where M is the planetary mass, P is the orbital period, and e is the orbital eccentricity. Our identified groups suggest two phenomena: (1) the characteristics of the two clusters in ln P–ln M space might be related to tidal and disc interactions (see [9]); and (2) there are two clusters in ln P–e space.
17.
Analysis of discrete lifetime data under middle-censoring and in the presence of covariates
S. Rao Jammalamadaka, Journal of Applied Statistics, 2015, 42(4): 905-913
‘Middle censoring’ is a very general censoring scheme in which the actual value of an observation becomes unobservable if it falls inside a random interval (L, R); it includes both left and right censoring. In this paper, we consider discrete lifetime data that follow a geometric distribution and are subject to middle censoring. Two major innovations over the earlier work of Davarzani and Parsian [3] are (i) an extension and generalization to the case where covariates are present along with the data, and (ii) an alternate approach and proofs which exploit the simple relationship between the geometric and exponential distributions, so that the theory is more in line with the work of Iyer et al. [6]. It is also demonstrated that this kind of discretization of lifetimes gives results close to those for the original exponential lifetimes. Maximum likelihood estimation of the parameters is studied for this middle-censoring scheme with covariates, and the large-sample distributions of the estimators are discussed. Simulation results indicate how well the proposed estimation methods work, and an illustrative example using time-to-pregnancy data from Baird and Wilcox [1] is included.
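A minimal simulation of the geometric middle-censoring setup (without covariates; the interval mechanism, parameter values, and names are illustrative, not taken from the paper) shows how the likelihood splits into observed and interval-censored parts, using P(T > k) = (1 - p)^k for a Geometric(p) lifetime.

```python
import numpy as np

def simulate_middle_censored(p, n, rng):
    """Geometric(p) lifetimes on {1, 2, ...}; each unit carries a random
    interval (L, R) and the lifetime is hidden whenever L < T < R."""
    t = rng.geometric(p, n)
    L = rng.integers(1, 6, n)          # illustrative censoring mechanism
    R = L + rng.integers(2, 5, n)
    censored = (t > L) & (t < R)
    return t, L, R, censored

def log_likelihood(p, t, L, R, censored):
    """Observed lifetimes contribute the pmf p*(1-p)^(t-1); censored units
    contribute P(L < T < R) = (1-p)^L - (1-p)^(R-1)."""
    q = 1.0 - p
    ll_obs = np.sum(np.log(p) + (t[~censored] - 1) * np.log(q))
    ll_cen = np.sum(np.log(q ** L[censored] - q ** (R[censored] - 1)))
    return ll_obs + ll_cen

rng = np.random.default_rng(2)
t, L, R, cen = simulate_middle_censored(0.3, n=2000, rng=rng)
grid = np.linspace(0.05, 0.95, 181)    # crude grid-search MLE
p_hat = grid[np.argmax([log_likelihood(p, t, L, R, cen) for p in grid])]
```

Even this crude grid search recovers p well; the paper's contribution is the covariate extension and the large-sample theory, not shown here.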
18.
In this article, a generalized Lévy model is proposed and its parameters are estimated in high-frequency data settings. An infinitesimal generator of Lévy processes is used to study the asymptotic properties of the drift and volatility estimators. They are asymptotically consistent and independent of the other parameters, making them preferable to those in Chen et al. (2010). The proposed estimators also have fast convergence rates and are simple to implement.
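For the baseline diffusive case (Brownian motion with drift, not the paper's generalized Lévy model), the standard high-frequency drift and volatility estimators are a one-line computation; the sketch below simulates increments over [0, T] and recovers both, with all parameter choices illustrative.

```python
import numpy as np

def drift_vol_estimates(increments, dt):
    """Baseline high-frequency estimators for dX_t = mu dt + sigma dW_t:
    mu_hat = (X_T - X_0) / T and sigma2_hat = realized variance / T."""
    T = len(increments) * dt
    mu_hat = increments.sum() / T
    sigma2_hat = np.sum(increments ** 2) / T
    return mu_hat, sigma2_hat

rng = np.random.default_rng(4)
mu, sigma, dt, n = 0.5, 0.2, 1e-3, 100_000          # horizon T = 100
inc = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
mu_hat, sigma2_hat = drift_vol_estimates(inc, dt)
```

The realized variance converges quickly as dt shrinks, while the drift estimator needs a long horizon T; the paper's generator-based estimators refine this picture for jumps.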
19.
Many articles that estimate models with forward-looking expectations report that the coefficients on the expectations term are very large compared with the effects coming from past dynamics. This has sometimes been regarded as implausible and has led to the suspicion that the expectations coefficient is biased upwards. A relatively general argument that has been advanced is that the bias could be due to structural changes in the means of the variables entering the structural equation. An alternative explanation is that the bias comes from weak instruments. In this article, we investigate the issue of upward bias in the estimated coefficients of the expectations variable using a model in which we can see what causes the breaks and how to control for them. We conclude that weak instruments are the most likely cause of any bias, and note that structural change can affect the quality of instruments. We also examine the empirical work of Castle et al. (2014) on the New Keynesian Phillips curve (NKPC) in the Euro Area and the U.S., assessing whether the smaller coefficient on expectations that Castle et al. (2014) highlight is due to structural change. We conclude that it is not; instead, it comes from their addition of variables to the NKPC. After allowing for the presence of weak instruments in the estimated re-specified model, the forward coefficient estimate appears to be quite high rather than low.
20.
This article proposes Hartley-Ross-type unbiased estimators of the finite population mean using information on known parameters of an auxiliary variate when the study variate and auxiliary variate are positively correlated. The variances of the proposed unbiased estimators are obtained. It is shown that the proposed estimators are more efficient than the simple mean estimator, the usual ratio estimator, and the estimators proposed by Sisodia and Dwivedi (1981), Kadilar and Cingi (2006), and Kadilar et al. (2007) under certain realistic conditions. Empirical studies are also carried out to demonstrate the merits of the proposed unbiased estimators over the other estimators considered in this article.
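The classical Hartley-Ross estimator that these variants build on can be sketched as follows (simulated population; names and parameters are illustrative). With r_i = y_i/x_i and rbar their sample mean, the estimator rbar*Xbar + [n(N-1)/((n-1)N)]*(ybar - rbar*xbar) is exactly unbiased for the population mean of y under simple random sampling.

```python
import numpy as np

def hartley_ross(y, x, X_bar, N):
    """Classical Hartley-Ross unbiased ratio-type estimator of the population
    mean of y, given the known population mean X_bar of the auxiliary variate
    and the population size N; y and x are the n sampled values."""
    n = len(y)
    r_bar = np.mean(y / x)
    correction = n * (N - 1) / ((n - 1) * N) * (np.mean(y) - r_bar * np.mean(x))
    return r_bar * X_bar + correction

rng = np.random.default_rng(3)
N = 10_000
x_pop = rng.uniform(1.0, 5.0, N)                  # auxiliary variate
y_pop = 2.0 * x_pop + rng.normal(0.0, 0.5, N)     # positively correlated study variate
sample = rng.choice(N, size=500, replace=False)
estimate = hartley_ross(y_pop[sample], x_pop[sample], x_pop.mean(), N)
```

The correction term removes the ratio estimator's O(1/n) bias, which is the property the proposed variants preserve while improving efficiency.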