Similar Documents
20 similar documents found
1.
Studies of development and change in partisan fortunes in the US emphasize epochs of partisan stability, separated by critical events or turning points. In this paper we study partisan electoral changes in the US Congress using the method of Markov switching. Our estimates are based on election changes from 1854, roughly the date of the establishment of the modern incarnation of the two-party system, to the present. For the Senate, we estimate partisan balance both from 1856 and from 1914, the period of direct elections. Our discrete-state method performs better than one based on smooth cycles and is more consistent with existing theory. We use the Markov switching method to estimate an underlying unobserved state parameter, the 'partisan regime'. The method allows direct estimation of critical transition points between Republican and Democratic partisan coalitions. Republican regimes characterized House elections during three periods: 1860 through 1872, 1894 through 1906, and 1918 through 1928. Senate results were roughly similar. For the Senate, the two-state model does not fit adequately; we estimate a three-state model in which a Republican regime dominated from 1914 through 1928, a Democratic regime characterized the period 1930-1934, and a Democratic-leaning regime characterized the period from 1938 to the present (1936 is a transition year). The driver of historical realignments up to 1930 seems to have been economic distress, at least for the House: each recession before 1930 is associated with a realignment, but no recession since has produced one. We speculate that better economic management since World War II, resulting in shallower recessions, is the cause. For the Senate, however, we detect only one fundamental regime shift: that during the Great Depression.
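The regime-inference step behind such an analysis can be illustrated with a small filter. The following is a minimal sketch, assuming a two-state Gaussian Markov-switching model with known regime means, standard deviations, and transition matrix; the series, parameter values, and regime labels are illustrative and are not the paper's estimates.

```python
# A minimal sketch of two-state Markov-switching regime inference via the
# Hamilton filter, assuming Gaussian observations with regime-specific means.
# Regime labels, data, and parameter values are illustrative, not the paper's.
import numpy as np
from scipy.stats import norm

def hamilton_filter(y, mu, sigma, P):
    """Filtered P(state_t = j | y_1..y_t) for a 2-state Gaussian switching model.

    y     : 1-D array of observations (e.g., seat-share changes)
    mu    : length-2 array of regime means
    sigma : length-2 array of regime standard deviations
    P     : 2x2 transition matrix, P[i, j] = Pr(state_{t+1} = j | state_t = i)
    """
    T = len(y)
    filt = np.zeros((T, 2))
    # start from the stationary distribution of the two-state chain
    prob = np.array([P[1, 0], P[0, 1]])
    prob = prob / prob.sum()
    for t in range(T):
        pred = prob @ P                      # one-step-ahead state probabilities
        lik = norm.pdf(y[t], mu, sigma)      # regime-specific densities
        joint = pred * lik
        prob = joint / joint.sum()           # Bayes update
        filt[t] = prob
    return filt

# Toy usage: regime 0 = "Republican-leaning", regime 1 = "Democratic-leaning"
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(-2, 1, 30), rng.normal(2, 1, 30)])
P = np.array([[0.95, 0.05], [0.05, 0.95]])
probs = hamilton_filter(y, mu=np.array([-2.0, 2.0]), sigma=np.array([1.0, 1.0]), P=P)
print(probs[:3].round(3), probs[-3:].round(3))
```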

2.
In this article, an alternative estimation approach is proposed for fitting linear mixed effects models in which the random effects follow a finite mixture of normal distributions. This heterogeneity linear mixed model is an attractive tool since it relaxes the classical normality assumption and is also well suited to classification based on longitudinal profiles. Instead of fitting the heterogeneity linear mixed model directly, we propose to fit an equivalent mixture of linear mixed models under some restrictions, which is computationally simpler. Unlike the former model, the latter can be maximized analytically using an EM algorithm, and the resulting parameter estimates can easily be used to compute the parameter estimates of interest.
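For the classification use mentioned above, the central computation is the posterior probability that a subject's longitudinal profile belongs to each mixture component, based on the marginal normal distribution implied by the linear mixed model. The sketch below shows only this classification (E-step-style) calculation under assumed, already-fitted component parameters; the design matrices, parameter values, and two-component setup are hypothetical.

```python
# Minimal sketch: posterior component probabilities for a longitudinal profile
# under a two-component mixture of linear mixed models with known parameters.
# Marginally, y_i | component k ~ N(X_i beta_k, Z_i D Z_i' + sigma^2 I).
# All parameter values below are illustrative assumptions, not fitted estimates.
import numpy as np
from scipy.stats import multivariate_normal

def component_posteriors(y_i, X_i, Z_i, betas, D, sigma2, weights):
    """Return P(component k | y_i) for one subject's profile y_i."""
    V = Z_i @ D @ Z_i.T + sigma2 * np.eye(len(y_i))      # marginal covariance
    dens = np.array([
        multivariate_normal.pdf(y_i, mean=X_i @ beta, cov=V) for beta in betas
    ])
    post = weights * dens
    return post / post.sum()

# Toy subject: 4 visits, random-intercept model, two component mean trends
t = np.array([0.0, 1.0, 2.0, 3.0])
X_i = np.column_stack([np.ones_like(t), t])              # fixed effects: intercept, slope
Z_i = np.ones((4, 1))                                     # random-intercept design
betas = [np.array([0.0, 1.0]), np.array([3.0, -0.5])]
D = np.array([[0.5]])                                     # random-effect variance
y_i = np.array([0.1, 1.2, 1.9, 3.2])
print(component_posteriors(y_i, X_i, Z_i, betas, D, sigma2=0.3,
                           weights=np.array([0.5, 0.5])))
```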

3.
We study alternative models for capturing abrupt structural changes (level shifts) in a time series. The problem is confounded by the presence of transient outliers. We compare the performance of non-Gaussian time-varying parameter models and multiprocess mixture models in a Monte Carlo experiment. Our findings suggest that once shocks with thick-tailed probability distributions are incorporated, the superiority of the multiprocess mixture models over the time-varying parameter models, reported in an earlier study, disappears. The behavior of the two models, both in adapting to level shifts and in reacting to transient outliers, is very similar.

4.
Based on the generalized inference idea, a new kind of generalized confidence interval is derived for the among-group variance component in the heteroscedastic one-way random effects model. We construct structural equations for all variance components in the model based on their minimal sufficient statistics, and the fiducial generalized pivotal quantity (FGPQ) is obtained by solving an implicit equation in the parameter of interest. The confidence interval then follows naturally from the FGPQ. Simulation results demonstrate that the new procedure performs very well in terms of both empirical coverage probability and average interval length.

5.
This article extends the generally weighted moving average (GWMA) technique to detecting changes in process variance. The proposed chart is called the generally weighted moving average variance (GWMAV) chart. Simulation is employed to evaluate the average run length (ARL) characteristics of the GWMAV and EWMA control charts. An extensive comparison reveals that the GWMAV chart is more sensitive than the EWMA chart for detecting small shifts in the process variance when the shifts are below 1.35 standard deviations. The GWMAV chart also performs slightly better when the variance shifts are between 1.35 and 1.5 standard deviations, and the two charts perform similarly when the shifts exceed 1.5 standard deviations. The design of the GWMAV chart is also discussed.
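The ARL comparison described above rests on Monte Carlo simulation of run lengths. As a minimal sketch, the code below estimates the ARL of a plain EWMA chart applied to subgroup sample variances (the baseline scheme in the comparison) rather than the GWMAV chart itself; the smoothing constant, subgroup size, control limit, and shift sizes are illustrative assumptions.

```python
# Minimal sketch: Monte Carlo estimate of the average run length (ARL) of an
# EWMA chart applied to subgroup sample variances. The GWMAV chart of the
# article replaces the EWMA weights with generally weighted moving average
# weights; only the baseline EWMA scheme is sketched here, with illustrative
# constants throughout.
import numpy as np

def ewma_variance_run_length(shift, lam=0.1, n=5, ucl=1.4, max_steps=5_000, rng=None):
    """Run length until the EWMA of subgroup variances exceeds ucl.

    shift : multiplier on the in-control standard deviation (1.0 = in control)
    lam   : EWMA smoothing constant
    n     : subgroup size
    """
    rng = np.random.default_rng() if rng is None else rng
    z = 1.0                                                # start at in-control variance
    for t in range(1, max_steps + 1):
        s2 = np.var(rng.normal(0.0, shift, size=n), ddof=1)  # subgroup sample variance
        z = lam * s2 + (1 - lam) * z
        if z > ucl:
            return t
    return max_steps

rng = np.random.default_rng(1)
for shift in (1.0, 1.2, 1.5):
    arl = np.mean([ewma_variance_run_length(shift, rng=rng) for _ in range(200)])
    print(f"shift = {shift:.1f}  estimated ARL ~ {arl:.1f}")
```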

6.
Using an MS-ARFIMA model that allows the long-memory parameter d to switch between regimes, this paper provides a new empirical study of the dynamic behavior of China's monthly inflation path. The results show that Chinese inflation not only exhibits a "low-inflation" regime and a "high-inflation" regime in its mean level and uncertainty but, more importantly, the stationarity of the inflation series also displays significant regime-switching dynamics. In the low-inflation regime the long-memory parameter is d1 = 0.361, indicating that inflation is a covariance-stationary series; in the high-inflation regime the long-memory parameter is d2 = 1.145, indicating that inflation is nonstationary. This new finding implies that the persistence of inflation shocks in China also shifts across regimes. In managing inflation, the central bank therefore needs to account for regime changes not only in the mean and uncertainty of inflation but also in its stationarity and persistence.

7.
Econometric Reviews, 2012, 31(1): 71-91
This paper proposes a Bayesian semiparametric dynamic Nelson-Siegel model for estimating the density of bond yields. Specifically, we model the distribution of the yield curve factors according to an infinite Markov mixture (iMM). The model allows for time variation in the mean and covariance matrix of the factors in a discrete manner, as opposed to the continuous parameter changes of Time-Varying Parameter (TVP) models. Estimating the number of regimes endogenously through the iMM structure leads to an adaptive process that can generate newly emerging regimes over time in response to changing economic conditions, in addition to existing regimes. The potential of the proposed framework is examined using US bond yield data. The semiparametric structure of the factors can handle various forms of non-normality, including fat tails and nonlinear dependence between factors, in a unified way by generating new clusters that capture these specific characteristics. We document that modeling parameter changes in a discrete manner improves model fit as well as forecasting performance at both short and long horizons relative to models with fixed parameters and to the TVP model with continuous parameter changes, mainly because discrete parameter changes better suit the low-frequency character of monthly bond yield data.

8.
Rao's score test normally replaces nuisance parameters by their maximum likelihood estimates under the null hypothesis about the parameter of interest. In some models, however, a nuisance parameter is not identified under the null, so this approach cannot be followed. This paper suggests replacing the nuisance parameter by its maximum likelihood estimate from the unrestricted model and making the appropriate adjustment to the variance of the estimated score. This leads to a rather natural modification of Rao's test, which is examined in detail for a regression-type model. It is compared with the approach that has featured most frequently in the literature on this problem, in which a test statistic appropriate to a known value of the nuisance parameter is treated as a function of that parameter and maximised over its range. It is argued that the modified score test has considerable advantages, including robustness to a crucial assumption required by the rival approach.

9.
The paper considers high dimensional Metropolis and Langevin algorithms in their initial transient phase. In stationarity, these algorithms are well understood and it is now well known how to scale their proposal distribution variances. For the random-walk Metropolis algorithm, convergence during the transient phase is extremely regular, to the extent that the algorithm's sample path actually resembles a deterministic trajectory. In contrast, the Langevin algorithm with variance scaled to be optimal for stationarity performs rather erratically. We give weak convergence results which explain both of these types of behaviour and practical guidance on implementation based on our theory.
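The scaling rule at issue can be made concrete with a toy sampler. Below is a minimal sketch of a random-walk Metropolis algorithm on a d-dimensional standard normal target with the proposal standard deviation scaled as 2.38/sqrt(d), the familiar stationary-phase rule; the target, dimension, and deliberately poor starting point are illustrative, and the sketch does not reproduce the paper's transient-phase theory.

```python
# Minimal sketch: random-walk Metropolis on a d-dimensional standard normal
# target, with proposal variance scaled as (2.38^2)/d, the usual rule derived
# for the stationary phase. Starting far from the mode illustrates the
# transient ("burn-in") phase discussed in the paper. Everything here is a toy.
import numpy as np

def rwm(d=50, n_iter=5000, start_scale=5.0, seed=0):
    rng = np.random.default_rng(seed)
    step = 2.38 / np.sqrt(d)              # proposal standard deviation
    x = np.full(d, start_scale)           # deliberately bad starting point
    logp = -0.5 * x @ x                   # log-density of N(0, I_d), up to a constant
    accept = 0
    norms = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.standard_normal(d)
        logp_prop = -0.5 * prop @ prop
        if np.log(rng.random()) < logp_prop - logp:
            x, logp = prop, logp_prop
            accept += 1
        norms[i] = np.linalg.norm(x)      # distance from the mode over time
    return norms, accept / n_iter

norms, acc_rate = rwm()
print(f"acceptance rate ~ {acc_rate:.2f}; ||x|| at the end ~ {norms[-1]:.2f}")
```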

10.
In this article I present a new approach to modeling more realistically the variability of financial time series. I develop a Markov-ARCH model that incorporates the features of both Hamilton's switching-regime model and Engle's autoregressive conditional heteroscedasticity (ARCH) model to examine the issue of volatility persistence in the monthly excess returns of the three-month Treasury bill. The issue can be resolved by taking into account occasional shifts in the asymptotic variance of the Markov-ARCH process that cause spurious persistence of the volatility process. I identify two periods during which there is a regime shift: the 1974:2-1974:8 period associated with the oil shock and the 1979:9-1982:8 period associated with the Federal Reserve's policy change. The variance approached asymptotically in these two episodes is more than 10 times the asymptotic variance for the remainder of the sample. I conclude that regime shifts have a greater impact on the properties of the data than ARCH effects, and I cannot reject the null hypothesis of no ARCH effects within the regimes. As a consequence of these striking findings, previous empirical results that adopt an ARCH approach to modeling monthly or lower frequency interest-rate dynamics are rendered questionable.

11.
The parameters of a finite mixture model cannot be consistently estimated when the data come from an embedded distribution with fewer components than the model being fitted, because the embedded distribution is represented by a subset of the parameter space rather than by a single point. Feng & McCulloch (1996) give conditions, not easily verified, under which the maximum likelihood (ML) estimator will converge to an arbitrary point in this subset. We show that these conditions can be considerably weakened. Even though embedded distributions may not be uniquely represented in the parameter space, estimators of quantities of interest, such as the mean or variance of the distribution, may nevertheless be consistent in the conventional sense. We give an example of practical interest in which the ML estimators are root-n consistent. Similarly, consistent statistics can usually be found for testing a simpler model against a full model. We suggest a test statistic suitable for a general class of models and propose a parameter-based bootstrap test, based on this statistic, of whether the simpler model is correct.
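The parameter-based bootstrap test suggested above can be illustrated in the simplest setting, testing one normal component against a two-component normal mixture. The sketch below uses scikit-learn's GaussianMixture and a likelihood-ratio statistic; the statistic, component counts, sample size, and bootstrap size are illustrative choices rather than the paper's general construction.

```python
# Minimal sketch: parametric bootstrap test of a 1-component model against a
# 2-component normal mixture using a likelihood-ratio statistic. This is an
# illustrative stand-in for the article's general parameter-based bootstrap
# test, not its exact statistic.
import numpy as np
from sklearn.mixture import GaussianMixture

def lr_stat(x):
    """2 * (loglik under 2 components - loglik under 1 component)."""
    x = x.reshape(-1, 1)
    g1 = GaussianMixture(n_components=1).fit(x)
    g2 = GaussianMixture(n_components=2, n_init=5).fit(x)
    return 2.0 * len(x) * (g2.score(x) - g1.score(x))   # score() is mean loglik

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=200)          # data generated under the simpler model
observed = lr_stat(x)

# Bootstrap the null distribution: simulate from the fitted 1-component model.
g1 = GaussianMixture(n_components=1).fit(x.reshape(-1, 1))
mu, sd = g1.means_[0, 0], np.sqrt(g1.covariances_[0, 0, 0])
boot = np.array([lr_stat(rng.normal(mu, sd, size=len(x))) for _ in range(200)])

p_value = np.mean(boot >= observed)
print(f"LR statistic = {observed:.2f}, bootstrap p-value = {p_value:.3f}")
```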

12.
We revisit the problem of estimating the proportion π of true null hypotheses when a large number of parallel hypothesis tests are performed independently. While the proportion is a quantity of interest in its own right in applications, the problem arises in assessing or controlling an overall false discovery rate. On the basis of a Bayes interpretation of the problem, the marginal distribution of the p-value is modeled as a mixture of the uniform distribution (null) and a non-uniform distribution (alternative), so that the parameter π of interest is characterized as the mixing proportion of the uniform component in the mixture. In this article, a nonparametric exponential mixture model is proposed to fit the p-values. As an alternative to the convex decreasing mixture model, the exponential mixture model has the advantages of identifiability, flexibility, and regularity. A computational algorithm is developed. The new approach is applied to a leukemia gene expression data set in which significance tests are performed for 3,051 genes. The new estimate of π for the leukemia data is about 10% lower than three existing estimates that are known to be conservative. Simulation results also show that the new estimate is usually lower and has smaller bias than the other three estimates.
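A simplified version of the mixture idea can be sketched with a fully parametric two-component model: p-values are Uniform(0,1) under the null and Beta(a,1) with a < 1 under the alternative, fitted by EM. This is a stand-in for the article's nonparametric exponential mixture, chosen because its M-step is closed form; the simulation settings are illustrative.

```python
# Minimal sketch: estimate the proportion pi0 of true nulls by fitting a
# two-component mixture to p-values: Uniform(0,1) for nulls and Beta(a,1)
# (density a * p**(a-1), a < 1) for alternatives, via EM. This parametric
# stand-in is simpler than the article's nonparametric exponential mixture.
import numpy as np

def estimate_pi0(p, n_iter=200):
    pi0, a = 0.8, 0.5                                     # starting values
    for _ in range(n_iter):
        f0 = np.ones_like(p)                              # Uniform(0,1) density
        f1 = a * p ** (a - 1.0)                           # Beta(a,1) density
        w0 = pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)       # E-step: P(null | p_i)
        pi0 = w0.mean()                                   # M-step: mixing proportion
        w1 = 1.0 - w0
        a = -w1.sum() / (w1 * np.log(p)).sum()            # closed-form MLE for a
    return pi0, a

# Toy data: 90% null p-values, 10% drawn from Beta(0.2, 1)
rng = np.random.default_rng(3)
p = np.concatenate([rng.uniform(size=2700), rng.beta(0.2, 1.0, size=300)])
pi0_hat, a_hat = estimate_pi0(p)
print(f"estimated pi0 = {pi0_hat:.3f}, estimated a = {a_hat:.3f}")
```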

13.
A multivariate normal mean-variance mixture based on a Birnbaum-Saunders (NMVMBS) distribution is introduced and several properties of this new distribution are discussed. A new robust non-Gaussian ARCH-type model is proposed in which the variance of the observations follows an ARCH-type relation and the marginal distributions are NMVMBS. A simple EM-based maximum likelihood estimation procedure for the parameters of this normal mean-variance mixture distribution is given. A simulation study and some real data are used to demonstrate the modelling strength of this new model.

14.
Empirical likelihood inference for the parametric component in an additive partially linear errors-in-variables model with longitudinal data is investigated in this article. A corrected-attenuation block empirical likelihood procedure is used to estimate the regression coefficients, a corrected-attenuation block empirical log-likelihood ratio statistic is suggested, and its asymptotic distribution is obtained. Compared with the method based on normal approximations, the proposed method does not require a consistent estimator of the asymptotic variance and bias. Simulation studies indicate that the proposed method performs better than the method based on normal approximations, giving higher coverage probabilities and smaller confidence regions. Furthermore, a data set on air pollution and health is used to illustrate the performance of the proposed method.

15.
The hazard function plays an important role in reliability and survival studies since it describes the instantaneous risk of failure of items at a time point, given that they have not failed before. In some real-life applications, abrupt changes in the hazard function are observed due to overhauls, major operations, or specific maintenance activities. In such situations it is of interest to detect the location where such a change occurs and to estimate the size of the change. In this paper we consider the problem of estimating a single change point in a piecewise constant hazard function when the observed variables are subject to random censoring. We suggest an estimation procedure that is based on certain structural properties and on least squares ideas. A simulation study compares the performance of this estimator with two estimators available in the literature: an estimator based on a functional of the Nelson-Aalen estimator and a maximum likelihood estimator. The proposed least squares estimator turns out to be less biased than the other two estimators, but has a larger variance. We illustrate the estimation method on some real data sets.
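The estimation problem can be illustrated with a simple grid search. The sketch below profiles the censored-data likelihood of a two-piece constant hazard over candidate change points, a standard alternative to the least squares construction used in the paper; the hazard values, change point, and censoring rate are illustrative.

```python
# Minimal sketch: estimate a single change point tau in a piecewise constant
# hazard from right-censored data by grid search over the profile likelihood.
# On each side of tau the hazard MLE is (number of events)/(time at risk).
# This is a standard profile-likelihood construction, not the article's
# least squares estimator; all simulation settings are illustrative.
import numpy as np

def loglik_at(tau, time, event):
    ll = 0.0
    for lo, hi in ((0.0, tau), (tau, np.inf)):
        exposure = np.clip(np.minimum(time, hi) - lo, 0.0, None).sum()  # time at risk
        deaths = np.sum(event & (time > lo) & (time <= hi))             # events in piece
        if exposure > 0 and deaths > 0:
            lam = deaths / exposure                                     # piecewise MLE
            ll += deaths * np.log(lam) - lam * exposure
    return ll

def estimate_changepoint(time, event, grid):
    lls = [loglik_at(tau, time, event) for tau in grid]
    return grid[int(np.argmax(lls))]

# Toy data: hazard 0.5 before tau = 2, hazard 2.0 after; independent censoring.
rng = np.random.default_rng(4)
n, tau_true, lam1, lam2 = 500, 2.0, 0.5, 2.0
u = rng.exponential(1.0, n)                                   # unit-rate cumulative hazard
t_event = np.where(u < lam1 * tau_true, u / lam1,
                   tau_true + (u - lam1 * tau_true) / lam2)   # invert H(t) = u
c = rng.exponential(5.0, n)                                   # censoring times
time, event = np.minimum(t_event, c), t_event <= c
grid = np.linspace(0.5, 4.0, 141)
print("estimated change point:", estimate_changepoint(time, event, grid))
```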

16.
In this paper we introduce the continuous tree mixture model, a mixture of undirected graphical models with tree-structured graphs, which provides a nonparametric approach to multivariate analysis. We estimate its parameters, the component edge sets, and the mixture proportions through a regularized maximum likelihood procedure. Our new algorithm, which combines the expectation-maximization algorithm with a modified version of Kruskal's algorithm, simultaneously estimates and prunes the mixture component trees. Simulation studies indicate that this method performs better than the alternative Gaussian graphical mixture model. The proposed method is also applied to a water-level data set and compared with the results of a Gaussian mixture model.
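A central ingredient of the algorithm above is finding, for a given set of edge weights, the best tree structure, which reduces to a maximum-weight spanning tree problem solvable by Kruskal's algorithm. The sketch below shows only this single-tree step, using absolute pairwise correlations as illustrative edge weights and SciPy's spanning-tree routine; it does not implement the full mixture EM or the regularized pruning.

```python
# Minimal sketch of the tree-structure step: build a maximum-weight spanning
# tree over variables, with absolute pairwise correlations as illustrative
# edge weights (a Chow-Liu-style choice). SciPy computes a *minimum* spanning
# tree, so weights are negated. The full mixture EM and pruning of the article
# are not reproduced here.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(5)
# Toy data: 6 variables with a chain-like dependence structure
x = rng.standard_normal((500, 6))
for j in range(1, 6):
    x[:, j] += 0.8 * x[:, j - 1]

weights = np.abs(np.corrcoef(x, rowvar=False))   # edge weight = |correlation|
np.fill_diagonal(weights, 0.0)

mst = minimum_spanning_tree(-weights)            # negate to get max-weight tree
rows, cols = mst.nonzero()
edges = sorted(zip(rows.tolist(), cols.tolist()))
print("selected tree edges:", edges)
```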

17.
Nonlinear mixed-effects (NLME) modeling is one of the most powerful tools for analyzing longitudinal data, especially under sparse sampling designs. The determinant of the Fisher information matrix is a commonly used global metric of the information that can be provided by the data under a given model. However, in clinical studies it is also important to measure how much information the data provide for a particular parameter of interest under the assumed model, for example, the clearance in population pharmacokinetic models. This paper proposes a new, easy-to-interpret information metric, the "relative information" (RI), which is designed for specific parameters of a model and takes a value between 0% and 100%. We establish the relationship between the interindividual variability for a specific parameter and the variance of the associated parameter estimator, demonstrating that, under a "perfect" experiment (e.g., infinite samples and/or minimal experimental error), the RI converges to 100% and the variance of the parameter estimator converges to the ratio of the interindividual variability for that parameter to the number of subjects. Extensive simulation experiments and analyses of three real datasets show that the proposed RI metric accurately characterizes the information for parameters of interest in NLME models. The new information metric can readily be used to facilitate study design and model diagnosis.

18.
In this article, a robust multistage parameter estimator is proposed for nonlinear regression with heteroscedastic variance, where the residual variances are modeled as a general parametric function of the predictors. The motivation comes from considering the chi-square distribution of the calculated sample variance of the data. It is shown that outliers that are influential for the nonlinear regression parameter estimates are not necessarily influential for the sample variance. This observation persuades us not only to robustify the parameter estimates of the models for both the regression function and the variance, but also to replace the sample variance of the data by a robust scale estimate.

19.
In this paper, we consider the choice of pilot estimators for the single-index varying-coefficient model, which may result in radically different estimators, and develop a method for estimating the unknown parameters of this model. To estimate the unknown parameters efficiently, we use the outer product of gradients method to find consistent initial estimators of the parameters of interest and then adopt a refined estimation method, similar to refined minimum average variance estimation, to improve efficiency. An algorithm is proposed to estimate the model directly. Asymptotic properties of the proposed estimation procedure are established. The bandwidth selection problem is also considered. Simulation studies are carried out to assess the finite sample performance of the proposed estimators, and efficiency comparisons between the estimation methods are made.

20.
In many medical studies, patients are followed longitudinally and interest is in assessing the relationship between longitudinal measurements and time to an event. Recently, various authors have proposed joint modeling approaches for longitudinal and time-to-event data with a single longitudinal variable. These joint modeling approaches become intractable with even a few longitudinal variables. In this paper we propose a regression calibration approach for jointly modeling multiple longitudinal measurements and discrete time-to-event data. Ideally, a two-stage approach could be applied in which the multiple longitudinal measurements are modeled in the first stage and the longitudinal model is related to the time-to-event data in the second stage; however, biased parameter estimation due to informative dropout makes this direct two-stage approach problematic. We propose a regression calibration approach that appropriately accounts for informative dropout. We approximate the conditional distribution of the multiple longitudinal measurements given the event time by modeling all pairwise combinations of the longitudinal measurements with a bivariate linear mixed model that conditions on the event time. Complete data are then simulated based on estimates from these pairwise conditional models, and regression calibration is used to estimate the relationship between the longitudinal data and the time-to-event data using the complete data. We show that this approach performs well in estimating the relationship between multivariate longitudinal measurements and time-to-event data and in estimating the parameters of the multiple longitudinal process subject to informative dropout. We illustrate the methodology with simulations and with an analysis of primary biliary cirrhosis (PBC) data.
