Similar Documents
20 similar documents found (search time: 31 ms)
1.
ABSTRACT

Autoregressive Moving Average (ARMA) time series model fitting is a procedure often based on aggregate data, where parameter estimation plays a key role. Therefore, we analyze the effect of temporal aggregation on the accuracy of parameter estimation of mixed ARMA and MA models. We derive the expressions required to compute the parameter values of the aggregate models as functions of the basic model parameters in order to compare their estimation accuracy. To this end, a simulation experiment shows that aggregation causes a severe loss of estimation accuracy that increases with the order of aggregation.
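The effect described above can be illustrated with a minimal simulation (not the authors' experiment; the AR(1) coefficient 0.8 and aggregation order 3 are illustrative choices). Summing non-overlapping blocks of an AR(1) series yields an aggregate series whose lag-1 dependence is weaker, and far fewer observations remain for estimation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a long AR(1) series x_t = phi * x_{t-1} + e_t (a simple ARMA special case).
phi, n = 0.8, 90_000
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

def lag1_autocorr(s):
    s = s - s.mean()
    return float(np.dot(s[:-1], s[1:]) / np.dot(s, s))

# Basic-frequency estimate of phi (for an AR(1), the lag-1 autocorrelation).
phi_basic = lag1_autocorr(x)

# Temporal aggregation: non-overlapping sums of k consecutive observations.
k = 3
agg = x[: n - n % k].reshape(-1, k).sum(axis=1)
rho_agg = lag1_autocorr(agg)

print(phi_basic, rho_agg)  # the aggregate series shows weaker lag-1 dependence
```

For phi = 0.8 and k = 3 the theoretical lag-1 autocorrelation of the aggregate is about 0.64, visibly below the basic-frequency value of 0.8, in line with the accuracy loss the abstract reports.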

2.
This article develops a new approach for testing aggregation restrictions in estimated production-function and cost-function models. Rather than using the well-known separability conditions for the existence of an aggregate, this approach focuses on testing whether a particular aggregate is valid and develops empirically testable necessary and sufficient conditions for the validity of some known aggregation scheme. An empirical section examines the power of this test in the context of a simple production-function model.

3.
For multiway contingency tables, Wall and Lienert (Biom. J. 18:259–264, 1976) considered the point-symmetry model. For square contingency tables, Tomizawa (Biom. J. 27:895–905, 1985) gave a theorem that the point-symmetry model holds if and only if both the quasi point-symmetry and the marginal point-symmetry models hold. This paper proposes some quasi point-symmetry models and marginal point-symmetry models for multiway tables, and extends Tomizawa’s (Biom. J. 27:895–905, 1985) theorem into multiway tables. We also show that for multiway tables the likelihood ratio statistic for testing goodness of fit of the point-symmetry model is asymptotically equivalent to the sum of those for testing the quasi point-symmetry model with some order and the marginal point-symmetry model with the corresponding order. An example is given.
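For reference, the two-way case that the paper generalizes can be written as follows (a minimal sketch of the standard definitions, not the paper's multiway notation):

```latex
% Point symmetry (PS) for an R x C table: cell probabilities are
% invariant under rotation of the table about its centre
p_{ij} = p_{R+1-i,\,C+1-j}, \qquad i=1,\dots,R,\; j=1,\dots,C.

% Marginal point symmetry (MPS): only the margins are point-symmetric
p_{i\cdot} = p_{R+1-i,\,\cdot}, \qquad p_{\cdot j} = p_{\cdot,\,C+1-j}.
```

The quasi point-symmetry models relax PS by allowing main-effect factors to break the symmetry of the cell probabilities while a symmetric interaction term is retained; the decomposition theorem then recovers PS as the conjunction of the quasi and marginal models.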

4.
This paper considers the analysis of multivariate survival data where the marginal distributions are specified by semiparametric transformation models, a general class including the Cox model and the proportional odds model as special cases. First, consideration is given to the situation where the joint distribution of all failure times within the same cluster is specified by the Clayton–Oakes model (Clayton, Biometrika 65:141–151, 1978; Oakes, J R Stat Soc B 44:412–422, 1982). A two-stage estimation procedure is adopted by first estimating the marginal parameters under the independence working assumption, and then the association parameter is estimated from the maximization of the full likelihood function with the estimators of the marginal parameters plugged in. The asymptotic properties of all estimators in the semiparametric model are derived. For the second situation, the third and higher order dependency structures are left unspecified, and interest focuses on the pairwise correlation between any two failure times. Thus, the pairwise association estimate can be obtained in the second stage by maximizing the pairwise likelihood function. Large sample properties for the pairwise association are also derived. Simulation studies show that the proposed approach is appropriate for practical use. To illustrate, a subset of the data from the Diabetic Retinopathy Study is used.
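The Clayton–Oakes dependence structure referenced above can be sketched by sampling from a bivariate Clayton copula and checking its known association property, Kendall's tau = theta/(theta + 2). This is only an illustration of the dependence model, not the paper's two-stage estimator; theta = 2 is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n = 2.0, 1500

# Marshall–Olkin construction for the Clayton copula:
# V ~ Gamma(1/theta), U_j = (1 + E_j / V)^(-1/theta) with E_j ~ Exp(1).
V = rng.gamma(1.0 / theta, size=n)
E = rng.exponential(size=(n, 2))
U = (1.0 + E / V[:, None]) ** (-1.0 / theta)

# Empirical Kendall's tau via pairwise concordance (O(n^2), fine for small n).
du = np.sign(U[:, None, 0] - U[None, :, 0])
dv = np.sign(U[:, None, 1] - U[None, :, 1])
tau_hat = float((du * dv).sum() / (n * (n - 1)))

print(tau_hat, theta / (theta + 2))  # empirical vs. theoretical tau
```

In a two-stage scheme of the kind the abstract describes, the margins would be fitted first and theta would then be estimated from a (pairwise) likelihood built on this copula.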

5.
The goal of this paper is to introduce a partially adaptive estimator for the censored regression model based on an error structure described by a mixture of two normal distributions. The model we introduce is easily estimated by maximum likelihood using an EM algorithm adapted from the work of Bartolucci and Scaccia (Comput Stat Data Anal 48:821–834, 2005). A Monte Carlo study is conducted to compare the small sample properties of this estimator to the performance of some common alternative estimators of censored regression models including the usual tobit model, the CLAD estimator of Powell (J Econom 25:303–325, 1984), and the STLS estimator of Powell (Econometrica 54:1435–1460, 1986). In terms of RMSE, our partially adaptive estimator performed well. The partially adaptive estimator is applied to data on wife’s hours worked from Mroz (1987). In this application we find support for the partially adaptive estimator over the usual tobit model.
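The EM building block behind such an estimator can be sketched for the uncensored case: fitting a two-component normal mixture by alternating responsibilities (E-step) and weighted moment updates (M-step). This deliberately omits the censoring step of the paper's algorithm, and the mixture parameters (weight 0.7, means 0 and 4, standard deviations 1 and 2) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Errors drawn from a two-component normal mixture.
n = 4000
z = rng.random(n) < 0.7
y = np.where(z, rng.normal(0.0, 1.0, n), rng.normal(4.0, 2.0, n))

# Plain EM for the mixture (normal constants cancel in the responsibility ratio).
pi_, mu, sd = 0.5, np.array([y.min(), y.max()]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: responsibility of component 0 for each observation
    d0 = pi_ * np.exp(-0.5 * ((y - mu[0]) / sd[0]) ** 2) / sd[0]
    d1 = (1 - pi_) * np.exp(-0.5 * ((y - mu[1]) / sd[1]) ** 2) / sd[1]
    r = d0 / (d0 + d1)
    # M-step: weighted updates of mixing weight, means and standard deviations
    pi_ = r.mean()
    mu = np.array([np.average(y, weights=r), np.average(y, weights=1 - r)])
    sd = np.sqrt(np.array([np.average((y - mu[0]) ** 2, weights=r),
                           np.average((y - mu[1]) ** 2, weights=1 - r)]))

print(pi_, mu, sd)
```

For censored regression the E-step would additionally integrate over the censored region, which is the adaptation the paper takes from Bartolucci and Scaccia.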

6.
In time series analysis, Autoregressive Moving Average (ARMA) models play a central role. Because of the importance of parameter estimation in ARMA modeling and since it is based on aggregate time series so often, we analyze the effect of temporal aggregation on estimation accuracy. We derive the relationships between the aggregate and the basic parameters and compute the actual values of the former from those of the latter in order to measure and compare their estimation accuracy. We run a simulation experiment that shows that aggregation seriously worsens estimation accuracy and that the impact increases with the order of aggregation.

7.
This paper investigates the legitimacy of using area-wide models in predicting aggregate variables in the Euro-area. We aim to compare the performance of area-wide versus national specific models for modeling money demand when using different aggregation schemes. A generalized Grunfeld and Griliches criterion and the Vuong test are used to discriminate between competitive models. Results show that the use of different aggregation methods is not irrelevant. In fact, due to the volatility of the exchange rates, the aggregate models fit better than the disaggregate ones whenever we employ ECU exchange rates. However, for fixed exchange rates expressed in Euro, the disaggregate models outperform the aggregate ones. This paper was written during my visiting research period at the Department of Economics, University of Southampton. I wish to thank John Aldrich, Jan Podivinsky, Grayham Mizon and Akos Valentinyi. Financial support of the Università degli Studi “Roma Tre” and the Marie Curie fellowship (HPMT-CT-2001-00353) are gratefully acknowledged.

8.
We develop a Bayesian analysis for the class of Birnbaum–Saunders nonlinear regression models introduced by Lemonte and Cordeiro (Comput Stat Data Anal 53:4441–4452, 2009). This regression model, which is based on the Birnbaum–Saunders distribution (Birnbaum and Saunders in J Appl Probab 6:319–327, 1969a), has been used successfully to model fatigue failure times. We have considered a Bayesian analysis under a normal-gamma prior. Due to the complexity of the model, Markov chain Monte Carlo methods are used to develop a Bayesian procedure for the considered model. We describe tools for model determination, which include the conditional predictive ordinate, the logarithm of the pseudo-marginal likelihood and the pseudo-Bayes factor. Additionally, case-deletion influence diagnostics are developed for the joint posterior distribution based on the Kullback–Leibler divergence. Two empirical applications are considered in order to illustrate the developed procedures.
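The Birnbaum–Saunders distribution at the core of this model has a convenient stochastic representation in terms of a standard normal variate, which also makes simulation trivial. A minimal sketch with illustrative shape and scale values (alpha = 0.5, beta = 2):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta = 0.5, 2.0  # shape and scale (illustrative values)

# Standard representation: T = beta * (a*Z/2 + sqrt((a*Z/2)^2 + 1))^2, Z ~ N(0,1).
Z = rng.standard_normal(200_000)
w = alpha * Z / 2.0
T = beta * (w + np.sqrt(w ** 2 + 1.0)) ** 2

print(np.median(T), T.mean())
```

Setting Z = 0 in the representation gives T = beta, so the median of the distribution is exactly the scale parameter, while the mean is beta * (1 + alpha^2 / 2); both are easy sanity checks for a sampler or an MCMC implementation.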

9.
10.
The choice of an analytical framework for systemic financial risk is one of the focal points of debate between researchers and practitioners. Building a sound analytical framework must rest on reasonable macro-level aggregation, and the aggregation schemes used to construct systemic financial risk frameworks mainly comprise simple summation, neoclassical macro aggregation, and a new aggregation scheme under macroprudential principles. A longitudinal review and cross-sectional comparison of the research conducted under these different aggregation schemes shows that current work on systemic financial risk should build on monetary-value aggregation to form an overall analytical framework with a sound theoretical foundation.

11.
This article analyzes the importance of exact aggregation restrictions and the modeling of demographic effects in Jorgenson, Lau, and Stoker's (1982) model of aggregate consumer behavior. These issues are examined at the household level, using Canadian cross-sectional microdata. Exact aggregation restrictions and some implicit restrictions on household demographic effects are strongly rejected by our data. These results do not preclude pooling aggregate time series data with cross-sectional microdata to estimate a model of aggregate consumer behavior. They do suggest, however, an alternative basis for the aggregate model.

12.
In randomized clinical trials, we are often concerned with comparing two-sample survival data. Although the log-rank test is usually suitable for this purpose, it may result in substantial power loss when the two groups have nonproportional hazards. In the more general class of survival models of Yang and Prentice (Biometrika 92:1–17, 2005), which includes the log-rank test as a special case, we improve model efficiency by incorporating auxiliary covariates that are correlated with the survival times. In a model-free form, we augment the estimating equation with auxiliary covariates, and establish the efficiency improvement using the semiparametric theories in Zhang et al. (Biometrics 64:707–715, 2008) and Lu and Tsiatis (Biometrika 95:679–694, 2008). Under minimal assumptions, our approach produces an unbiased, asymptotically normal estimator with additional efficiency gain. Simulation studies and an application to a leukemia study show the satisfactory performance of the proposed method.

13.
This paper uses a modified rank score test for non-nested linear regression models. The modified rank score test is robust with respect to models with non-normal distributions and can be viewed as a robust version of the J test of Davidson and MacKinnon (Econometrica 49:781–793, 1981). Therefore, this test does not require a specification of the error density function and is easy to implement. Also, a modified rank score test for multiple non-nested models is provided. Monte Carlo simulation results show that the test has good finite sample performance. Financial applications for two competing theories, the capital asset pricing model and the arbitrage pricing theory, are considered herein. Empirical evidence from the modified rank score test shows that the former is a better model for asset pricing.

14.
In this paper we present a unified discussion of different approaches to the identification of smoothing spline analysis of variance (ANOVA) models: (i) the “classical” approach (in the line of Wahba in Spline Models for Observational Data, 1990; Gu in Smoothing Spline ANOVA Models, 2002; Storlie et al. in Stat. Sin., 2011) and (ii) the State-Dependent Regression (SDR) approach of Young in Nonlinear Dynamics and Statistics (2001). The latter is a nonparametric approach which is very similar to smoothing splines and kernel regression methods, but based on recursive filtering and smoothing estimation (the Kalman filter combined with fixed interval smoothing). We will show that SDR can be effectively combined with the “classical” approach to obtain a more accurate and efficient estimation of smoothing spline ANOVA models to be applied for emulation purposes. We will also show that such an approach can compare favorably with kriging.
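The recursive machinery mentioned above (Kalman filter plus fixed-interval smoothing) can be sketched for the simplest state-space smoother, the local-level model; this is a generic illustration of the filter/smoother pair, not the paper's SDR estimator, and the noise variances q and r are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)

# Local-level model: state s_t = s_{t-1} + w_t, observation y_t = s_t + v_t.
n, q, r = 400, 0.01, 1.0
s = np.cumsum(rng.normal(0.0, np.sqrt(q), n))
y = s + rng.normal(0.0, np.sqrt(r), n)

# Forward pass: scalar Kalman filter.
m = np.zeros(n); P = np.zeros(n)
mp, Pp = 0.0, 10.0                    # vague prior on the initial state
for t in range(n):
    K = Pp / (Pp + r)                 # Kalman gain
    m[t] = mp + K * (y[t] - mp)
    P[t] = (1.0 - K) * Pp
    mp, Pp = m[t], P[t] + q           # one-step-ahead prediction

# Backward pass: Rauch–Tung–Striebel fixed-interval smoother.
ms = m.copy(); Ps = P.copy()
for t in range(n - 2, -1, -1):
    C = P[t] / (P[t] + q)             # smoother gain (predicted var is P[t] + q)
    ms[t] = m[t] + C * (ms[t + 1] - m[t])
    Ps[t] = P[t] + C ** 2 * (Ps[t + 1] - (P[t] + q))

mse_raw = float(np.mean((y - s) ** 2))
mse_smooth = float(np.mean((ms - s) ** 2))
print(mse_raw, mse_smooth)
```

The smoothed trajectory tracks the latent state far more closely than the raw observations do, which is the mechanism SDR exploits to estimate state-dependent regression surfaces.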

15.
In econometrics and finance, variables are collected at different frequencies. One straightforward regression model is to aggregate the higher frequency variable to match the lower frequency with a fixed weight function. However, aggregation with fixed weight functions may overlook useful information in the higher frequency variable. On the other hand, keeping all higher frequencies may result in overly complicated models. In the literature, mixed data sampling (MIDAS) regression models have been proposed to balance between the two. In this article, a new model specification test is proposed that can help decide between the simple aggregation and the MIDAS model.
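The trade-off described above is visible in the standard exponential Almon weighting scheme commonly used in MIDAS regressions: two parameters generate flexible lag weights, and setting both to zero recovers fixed equal-weight aggregation as a special case (the particular theta values below are illustrative):

```python
import numpy as np

def exp_almon(theta1, theta2, K):
    """Exponential Almon lag weights used in MIDAS regressions (normalized)."""
    j = np.arange(1, K + 1, dtype=float)
    w = np.exp(theta1 * j + theta2 * j ** 2)
    return w / w.sum()

K = 12  # e.g. 12 months of a monthly variable per quarterly observation
w_flat = exp_almon(0.0, 0.0, K)       # theta = 0 recovers fixed equal weights
w_decay = exp_almon(0.1, -0.05, K)    # declining weights from two parameters

print(w_flat, w_decay)
```

A specification test of the kind the abstract proposes effectively asks whether the data reject the equal-weight restriction theta1 = theta2 = 0 in favor of the richer MIDAS weights.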

16.
Family-based follow-up study designs are important in epidemiology as they enable investigations of disease aggregation within families. Such studies are subject to methodological complications since data may include multiple endpoints as well as intra-family correlation. The methods herein are developed for the analysis of age of onset with multiple disease types for family-based follow-up studies. The proposed model expresses the marginalized frailty model in terms of the subdistribution hazards (SDH). As with Pipper and Martinussen’s (Scand J Stat 30:509–521, 2003) model, the proposed multivariate SDH model yields marginal interpretations of the regression coefficients while allowing the correlation structure to be specified by a frailty term. Further, the proposed model allows for a direct investigation of the covariate effects on the cumulative incidence function since the SDH is modeled rather than the cause specific hazard. A simulation study suggests that the proposed model generally offers improved performance in terms of bias and efficiency when a sufficient number of events is observed. The proposed model also offers type I error rates close to nominal. The method is applied to a family-based study of breast cancer when death in absence of breast cancer is considered a competing risk.
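The cumulative incidence function targeted by subdistribution-hazard modeling can be illustrated in the simplest competing-risks setting, two exponential event times and no censoring (the rates 1 and 0.5 are illustrative, and this is only the empirical CIF, not the paper's frailty model):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000

# Two competing event times; the observed event is whichever occurs first.
t1 = rng.exponential(1.0, n)   # e.g. breast cancer onset (rate 1)
t2 = rng.exponential(2.0, n)   # e.g. death without breast cancer (rate 0.5)
T = np.minimum(t1, t2)
cause = np.where(t1 <= t2, 1, 2)

def cif(t, k):
    """Empirical cumulative incidence of cause k (no censoring here)."""
    return float(np.mean((T <= t) & (cause == k)))

# With rates 1 and 0.5, cause 1 accounts for 1/(1 + 0.5) = 2/3 of events.
print(cif(50.0, 1), cif(50.0, 2))
```

Unlike a cause-specific hazard, the subdistribution hazard is in one-to-one correspondence with this CIF, which is why modeling it gives direct covariate effects on cumulative incidence.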

17.
Quantile regression, including median regression, as a more complete statistical model than mean regression, is now well known for its widespread applications. Bayesian inference on quantile regression, or Bayesian quantile regression, has attracted much interest recently. Most of the existing research in Bayesian quantile regression focuses on parametric quantile regression, though there are discussions on different ways of modeling the model error, by a parametric distribution named the asymmetric Laplace distribution or by a nonparametric alternative named the scale mixture asymmetric Laplace distribution. This paper discusses Bayesian inference for nonparametric quantile regression. This general approach fits quantile regression curves using piecewise polynomial functions with an unknown number of knots at unknown locations, all treated as parameters to be inferred through reversible jump Markov chain Monte Carlo (RJMCMC) of Green (Biometrika 82:711–732, 1995). Instead of drawing samples from the posterior, we use regression quantiles to create Markov chains for the estimation of the quantile curves. We also use approximate Bayes factors in the inference. This method extends the work in automatic Bayesian mean curve fitting to quantile regression. Numerical results show that this Bayesian quantile smoothing technique is competitive with quantile regression/smoothing splines of He and Ng (Comput. Stat. 14:315–337, 1999) and P-splines (penalized splines) of Eilers and de Menezes (Bioinformatics 21(7):1146–1153, 2005).  
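The link between quantile estimation and the asymmetric Laplace likelihood mentioned above rests on a classical fact: the tau-th quantile minimizes the expected check (Koenker–Bassett) loss, which is also the negative log-likelihood kernel of the asymmetric Laplace distribution. A minimal numerical check (grid search on simulated data; the N(10, 2) sample and tau = 0.75 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
y = rng.normal(10.0, 2.0, 5000)
tau = 0.75

def check_loss(u, tau):
    """Koenker–Bassett check function: tau*u for u >= 0, (tau-1)*u for u < 0."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

# The tau-th sample quantile minimizes the average check loss.
grid = np.linspace(y.min(), y.max(), 2001)
losses = [check_loss(y - g, tau).mean() for g in grid]
q_check = float(grid[int(np.argmin(losses))])

print(q_check, np.quantile(y, tau))
```

In the Bayesian nonparametric setting of the paper, the same loss underlies the fit at each knot configuration, with RJMCMC moving knots in and out of the model.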

18.
In order to guarantee confidentiality and privacy of firm-level data, statistical offices apply various disclosure limitation techniques. However, each anonymization technique has its protection limits, such that the probability of disclosing the individual information for some observations is not minimized. To overcome this problem, we propose combining two separate disclosure limitation techniques, blanking and multiplication of independent noise, in order to protect the original dataset. The proposed approach yields a decrease in the probability of reidentifying/disclosing individual information and can be applied to linear and nonlinear regression models. We show how to combine the blanking method with the multiplicative measurement error method and how to estimate the model by combining the multiplicative Simulation-Extrapolation (M-SIMEX) approach from Nolte (2007) on the one side with the Inverse Probability Weighting (IPW) approach going back to Horvitz and Thompson (J. Am. Stat. Assoc. 47:663–685, 1952) and on the other side with matching methods, as an alternative to IPW, like the semiparametric M-Estimator proposed by Flossmann (2007). Based on Monte Carlo simulations, we show that multiplicative measurement error combined with blanking as a masking procedure does not necessarily lead to a severe reduction in the estimation quality, provided that its effects on the data generating process are known.
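The two masking steps can be sketched on simulated firm-level data: multiplicative noise with mean one leaves the mean unbiased, while blanking (dropping the most disclosive records) introduces a selection bias that the IPW or matching corrections in the paper are designed to undo. All distributions and the 1% blanking rule below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# "True" confidential firm-level variable (skewed, as firm data typically are).
x = rng.lognormal(mean=1.0, sigma=0.5, size=50_000)

# Step 1: multiply by independent noise with mean one, so E[x*u] = E[x].
u = rng.normal(1.0, 0.1, x.size)
x_masked = x * u

# Step 2: blank (drop) the most disclosive records, here the top 1%.
cut = np.quantile(x_masked, 0.99)
released = x_masked[x_masked <= cut]

print(x.mean(), x_masked.mean(), released.mean())
```

The naive mean of the released data is biased downward by the blanking step, which is precisely why the estimation has to account for the masking mechanism.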

19.
This article generalizes the Markov chain Monte Carlo (MCMC) algorithm, based on the Gibbs weighted Chinese restaurant (gWCR) process algorithm, for a class of kernel mixtures of time series models over the Dirichlet process. This class of models is an extension of Lo’s (Ann. Stat. 12:351–357, 1984) kernel mixture model for independent observations. The kernel represents a known distribution of time series conditional on past time series and both present and past latent variables. The latent variables are independent samples from a Dirichlet process, which is a random discrete (almost surely) distribution. This class of models includes an infinite mixture of autoregressive processes and an infinite mixture of generalized autoregressive conditional heteroskedasticity (GARCH) processes.
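The Chinese restaurant process underlying the gWCR algorithm can be sketched directly: each new "customer" joins an existing table with probability proportional to its size, or opens a new table with probability proportional to the concentration parameter alpha. This is the plain CRP partition sampler, not the paper's Gibbs-weighted variant; alpha = 2 and n = 500 are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)

def crp(n, alpha):
    """Sample a random partition of n customers from the Chinese restaurant process."""
    counts = []                           # customers per table (cluster sizes)
    for _ in range(n):
        p = np.array(counts + [alpha], dtype=float)
        k = int(rng.choice(len(p), p=p / p.sum()))
        if k == len(counts):
            counts.append(1)              # open a new table
        else:
            counts[k] += 1                # join an existing table
    return counts

sizes = crp(500, alpha=2.0)
print(len(sizes), sum(sizes))  # clusters grow roughly like alpha * log(n)
```

In the mixture-of-time-series setting, each table corresponds to one mixture component (e.g. one AR or GARCH regime) shared by the observations seated there.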

20.
This paper describes the performance of specific-to-general composition of forecasting models that accord with (approximate) linear autoregressions. Monte Carlo experiments are complemented with ex-ante forecasting results for 97 macroeconomic time series collected for the G7 economies in Stock and Watson (J. Forecast. 23:405–430, 2004). In small samples, the specific-to-general strategy is superior in terms of ex-ante forecasting performance in comparison with a commonly applied strategy of successive model reduction according to weakest parameter significance. Applied to real data, the specific-to-general approach turns out to be preferable. In comparison with successive model reduction, the successive model expansion is less likely to involve overly large losses in forecast accuracy and is particularly recommended if the diagnosed prediction schemes are characterized by a medium to large number of predictors.
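A specific-to-general search of the kind compared above can be sketched for a linear autoregression: start from a small model and add lags only while an information criterion improves. This is a simplified stand-in (OLS fits, Gaussian AIC, one simulated AR(2) series with illustrative coefficients), not the paper's forecasting experiment:

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulate an AR(2): y_t = 0.5 y_{t-1} + 0.3 y_{t-2} + e_t.
n = 2000
y = np.zeros(n)
e = rng.standard_normal(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + e[t]

def ar_aic(y, p, pmax):
    """OLS fit of an AR(p) on a common estimation sample; Gaussian AIC."""
    Y = y[pmax:]
    if p == 0:
        resid, k = Y - Y.mean(), 1
    else:
        X = np.column_stack([np.ones(len(Y))] +
                            [y[pmax - j:-j] for j in range(1, p + 1)])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid, k = Y - X @ beta, p + 1
    m = len(Y)
    return m * np.log(resid @ resid / m) + 2 * k

# Specific-to-general: expand the lag order while AIC keeps improving.
pmax, p = 6, 0
while p < pmax and ar_aic(y, p + 1, pmax) < ar_aic(y, p, pmax):
    p += 1

print(p)  # should stop at (or near) the true order of 2
```

The general-to-specific alternative would instead start at pmax and delete the weakest lag at each step; the paper's point is that the expanding search risks smaller accuracy losses in small samples.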


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号