Similar Documents (20 results)
1.
In econometrics and finance, variables are collected at different frequencies. One straightforward regression model aggregates the higher-frequency variable to match the lower frequency with a fixed weight function. However, aggregation with fixed weight functions may overlook useful information in the higher-frequency variable. On the other hand, keeping all higher frequencies may result in overly complicated models. In the literature, mixed data sampling (MIDAS) regression models have been proposed to balance between the two. In this article, a new model specification test is proposed that can help decide between simple aggregation and the MIDAS model.
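As a rough illustration of the tradeoff described above, the following sketch (Python with numpy/scipy; the exponential Almon weight function, the simulated data, and all parameter values are illustrative assumptions rather than details taken from the article) contrasts equal-weight aggregation of a monthly regressor with a MIDAS-style weighted aggregate whose weights are estimated by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def exp_almon(theta1, theta2, m):
    """Exponential Almon lag weights for m high-frequency lags (normalized to sum to 1)."""
    j = np.arange(1, m + 1)
    w = np.exp(theta1 * j + theta2 * j ** 2)
    return w / w.sum()

# Simulated example: 200 quarters, each with m = 3 monthly observations of x.
T, m = 200, 3
x_monthly = rng.normal(size=(T, m))                 # rows: quarters, cols: months within quarter
true_w = exp_almon(0.5, -0.3, m)                    # assumed "true" declining weights
y = 1.0 + 2.0 * x_monthly @ true_w + rng.normal(scale=0.5, size=T)

# (a) simple aggregation: fixed equal weights
x_flat = x_monthly.mean(axis=1)
beta_flat = np.polyfit(x_flat, y, 1)[0]

# (b) MIDAS-style regression: estimate (intercept, slope, theta1, theta2) by NLS
def resid(p):
    a, b, t1, t2 = p
    return y - (a + b * x_monthly @ exp_almon(t1, t2, m))

fit = least_squares(resid, x0=[0.0, 1.0, 0.0, 0.0])
print("equal-weight slope:", round(beta_flat, 3))
print("MIDAS slope and weights:", round(fit.x[1], 3), exp_almon(fit.x[2], fit.x[3], m).round(3))
```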

2.
Current status data arise in studies where the target measurement is the time of occurrence of some event, but observations are limited to indicators of whether or not the event has occurred at the time the sample is collected: only the current status of each individual with respect to event occurrence is observed. Examples of such data arise in several fields, including demography, epidemiology, econometrics and bioassay. Although estimation of the marginal distribution of times of event occurrence is well understood, techniques for incorporating covariate information are not well developed. This paper proposes a semiparametric approach to estimation for regression models of current status data, using techniques from generalized additive modeling and isotonic regression. This procedure provides simultaneous estimates of the baseline distribution of event times and covariate effects. No parametric assumptions about the form of the baseline distribution are required. The results are illustrated using data from a demographic survey of breastfeeding practices in developing countries, and from an epidemiological study of heterosexual Human Immunodeficiency Virus (HIV) transmission.
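The paper's semiparametric estimator is not reproduced here, but the isotonic-regression building block it relies on can be illustrated on simulated current status data with no covariates (Python with numpy/scikit-learn; the exponential event times and uniform inspection times are assumptions for the example): the nonparametric MLE of the event-time distribution is the isotonic regression of the status indicators on the inspection times.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)

# Simulate current status data: true event times are exponential(1), but we only
# observe, at a random inspection time c_i, whether the event has already occurred.
n = 500
event_time = rng.exponential(1.0, size=n)       # latent, never observed directly
c = rng.uniform(0.0, 3.0, size=n)               # inspection (monitoring) times
delta = (event_time <= c).astype(float)         # current status indicator

# NPMLE of the event-time distribution F(t): isotonic regression of the
# indicators on the inspection times (pool-adjacent-violators), constrained to [0, 1].
iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
iso.fit(c, delta)

grid = np.array([0.5, 1.0, 2.0])
print("estimated F at", grid, ":", iso.predict(grid).round(3))
print("true F at     ", grid, ":", (1 - np.exp(-grid)).round(3))
```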

3.
A residual-based test of the null of cointegration in panel data
This paper proposes a residual-based Lagrange Multiplier (LM) test for the null of cointegration in panel data. The test is analogous to the locally best unbiased invariant (LBUI) test for a moving average (MA) unit root. The asymptotic distribution of the test is derived under the null. Monte Carlo simulations are performed to study the size and power properties of the proposed test.

Overall, the empirical sizes of the LM-FM and LM-DOLS tests are close to the true size even in small samples. The power is quite good for panels with T ≥ 50, and decent for panels with fewer observations in T. In our fixed sample of N = 50 and T = 50, with a moving average component and correlation between the errors and regressors present, the LM-DOLS test seems to be better at correcting for these effects, although in some cases the LM-FM test is more powerful.

Although much of non-stationary time series econometrics has been criticized for having more to do with the specific properties of the data set than with underlying economic models, the recent development of the cointegration literature has provided a concrete bridge between long-run economic theory and time series methods. Our test now allows for testing the null of cointegration in a panel setting and should be of considerable interest to economists in a wide variety of fields.
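A minimal sketch of a residual-based LM-type statistic of this general form is given below (Python/numpy). It uses plain OLS residuals and a naive variance estimate purely for illustration; the tests described above are built on fully-modified (LM-FM) or dynamic OLS (LM-DOLS) residuals with a long-run variance estimator, and the simulated data are assumptions.

```python
import numpy as np

def panel_lm_stat(y, x):
    """Residual-based LM-type statistic for the null of cointegration in a panel.

    y, x: arrays of shape (N, T). For each unit i, regress y_i on x_i (with intercept),
    form partial sums of the residuals, and average the normalized cumulated sums.
    A proper implementation would use FM-OLS or DOLS residuals and a long-run
    variance estimate in place of the naive residual variance used here.
    """
    N, T = y.shape
    lm_i = np.empty(N)
    for i in range(N):
        X = np.column_stack([np.ones(T), x[i]])
        beta, *_ = np.linalg.lstsq(X, y[i], rcond=None)
        e = y[i] - X @ beta                # OLS residuals for unit i
        S = np.cumsum(e)                   # partial-sum process
        s2 = (e ** 2).mean()               # naive variance (stand-in for a long-run variance)
        lm_i[i] = (S ** 2).sum() / (T ** 2 * s2)
    return lm_i.mean()                     # average over cross-section units

# Toy example: cointegrated panel (stationary errors), N = 20, T = 100
rng = np.random.default_rng(2)
N, T = 20, 100
x = rng.normal(size=(N, T)).cumsum(axis=1)         # I(1) regressor
y = 1.0 + 2.0 * x + rng.normal(size=(N, T))        # cointegrated with stationary error
print("panel LM statistic:", round(panel_lm_stat(y, x), 4))
```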

4.
Quantile regression methods are emerging as a popular technique in econometrics and biometrics for exploring the distribution of duration data. This paper discusses quantile regression for duration analysis allowing for a flexible specification of the functional relationship and of the error distribution. Censored quantile regression addresses the issue of right censoring of the response variable which is common in duration analysis. We compare quantile regression to standard duration models. Quantile regression does not impose a proportional effect of the covariates on the hazard over the duration time. However, the method cannot take account of time-varying covariates and it has not been extended so far to allow for unobserved heterogeneity and competing risks. We also discuss how hazard rates can be estimated using quantile regression methods. This paper benefited from the helpful comments of an anonymous referee. Due to space constraints, we had to omit the details of the empirical application; these can be found in the long version of this paper, Fitzenberger and Wilke (2005). We gratefully acknowledge financial support from the German Research Foundation (DFG) through the research project ‘Microeconometric modelling of unemployment durations under consideration of the macroeconomic situation’. Thanks are due to Xuan Zhang for excellent research assistance. All errors are our sole responsibility.
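The censored estimator discussed in the paper is not shown here; the sketch below (Python/statsmodels; the simulated data and all parameter values are assumptions) only illustrates the basic tool, an uncensored quantile regression of log durations, and how the estimated covariate effect can differ across quantiles rather than acting proportionally on the hazard.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulated duration data: log-durations depend on a covariate with effects that
# differ across quantiles (so no single proportional effect on the hazard).
n = 2000
x = rng.uniform(0, 1, size=n)
log_dur = 1.0 + 0.5 * x + (0.3 + 0.7 * x) * rng.normal(size=n)

X = sm.add_constant(x)
for q in (0.25, 0.5, 0.75):
    res = sm.QuantReg(log_dur, X).fit(q=q)
    print(f"quantile {q}: covariate effect = {res.params[1]:.3f}")
```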

5.
High-frequency price movements are of great importance for the microstructure of financial markets. Because high-frequency prices are discrete and change at irregular intervals, such variables are difficult to model, and in theoretical and empirical work models of high-frequency price movements are usually built for a specific research purpose. Starting from the asymmetric-information theory of market microstructure, and combining the actual features of China's pure limit-order market with an analysis of the behavior of traders in the Chinese market, this paper builds a joint model of high-frequency price movements and asymmetric information, and simultaneously estimates and simulates the asymmetric-information state and the trajectory of high-frequency price movements. The study finds that the proportion of informed traders has a significant effect on both transaction prices and market liquidity.

6.
A comparative study of multilevel models and static panel data models
This paper compares two-level models with static panel data models. Multilevel models are mainly used to analyze data with a hierarchical structure, while panel data models are widely used econometric models developed for panel data. Panel data can be viewed as two-level data with a cross-sectional level and a time level, so a two-level model can also be used to analyze panel data, and under certain conditions the two approaches are quite similar. On this basis, a multilevel static panel data model is proposed, providing a tool for analyzing panel data with multiple hierarchical levels.
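As a small illustration of the correspondence described above, the sketch below (Python/statsmodels; the simulated balanced panel and variable names are assumptions) fits a two-level random-intercept model to panel data, which corresponds to the classical random-effects specification for a static panel.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Simulated balanced panel: N entities (level 2), T periods (level 1)
N, T = 100, 8
entity = np.repeat(np.arange(N), T)
alpha = rng.normal(scale=1.0, size=N)              # entity-level random intercepts
x = rng.normal(size=N * T)
y = 2.0 + 1.5 * x + alpha[entity] + rng.normal(scale=0.5, size=N * T)
df = pd.DataFrame({"y": y, "x": x, "entity": entity})

# Two-level (random-intercept) model; equivalent to a random-effects static panel model
m = smf.mixedlm("y ~ x", data=df, groups=df["entity"]).fit()
print(m.fe_params)     # fixed effects (intercept and slope)
print(m.cov_re)        # estimated variance of the entity-level random intercept
```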

7.
Because of the recent regulatory emphasis on issues related to drug-induced cardiac repolarization that can potentially lead to sudden death, QT interval analysis has received much attention in the clinical trial literature. The analysis of QT data is complicated by the fact that the QT interval is correlated with heart rate and other prognostic factors. Several attempts have been made in the literature to derive an optimal method for correcting the QT interval for heart rate; however the QT correction formulae obtained are not universal because of substantial variability observed across different patient populations. It is demonstrated in this paper that the widely used fixed QT correction formulae do not provide an adequate fit to QT and RR data and bias estimates of treatment effect. It is also shown that QT correction formulae derived from baseline data in clinical trials are likely to lead to Type I error rate inflation. This paper develops a QT interval analysis framework based on repeated-measures models accommodating the correlation between QT interval and heart rate and the correlation among QT measurements collected over time. The proposed method of QT analysis controls the Type I error rate and is at least as powerful as traditional QT correction methods with respect to detecting drug-related QT interval prolongation. Copyright © 2003 John Wiley & Sons, Ltd.

8.
Parametric nonlinear mixed effects models (NLMEs) are now widely used in biometrical studies, especially in pharmacokinetics research and HIV dynamics models, due to, among other aspects, the computational advances achieved in recent years. However, this kind of model may not be flexible enough for complex longitudinal data analysis. Semiparametric NLMEs (SNMMs) have been proposed as an extension of NLMEs. These models are a good compromise and retain nice features of both parametric and nonparametric models, resulting in more flexible models than standard parametric NLMEs. However, SNMMs are complex models for which estimation still remains a challenge. Previous estimation procedures are based on a combination of log-likelihood approximation methods for parametric estimation and smoothing splines techniques for nonparametric estimation. In this work, we propose new estimation strategies in SNMMs. On the one hand, we use the Stochastic Approximation version of the EM algorithm (SAEM) to obtain exact ML and REML estimates of the fixed effects and variance components. On the other hand, we propose a LASSO-type method to estimate the unknown nonlinear function. We derive oracle inequalities for this nonparametric estimator. We combine the two approaches in a general estimation procedure that we illustrate with simulations and through the analysis of a real data set of price evolution in on-line auctions.

9.
The fluctuation of the gold price has a significant impact on the economic and social aspects of a society. In the literature, most authors have employed a fundamental analysis approach in forecast model building. The basic principle underlying this approach is that supply and demand simultaneously determine the gold price. However, due to the lack of data on quantity supplied and quantity demanded, the simultaneous econometric approach seems unsuccessful. In this paper, combined and composite time series forecasting techniques are proposed. The effects of various economic factors on the spot price of gold are also examined. Among the combined forecasting models, the odds-matrix method of assigning weights appears to provide the most accurate forecasts of the spot price of gold. Among the economic factors considered, the futures price of gold and the exchange rate seem to be the most informative in forecasting the spot price of gold.
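The odds-matrix weighting scheme itself is not detailed in this abstract, so the sketch below (Python/numpy; the simulated series and the inverse-MSE weighting rule are assumptions) only illustrates the general idea of a combined forecast: estimate weights for the individual forecasts on a training window and apply them out of sample.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy setting: a target series and two individual forecasts with different error variances.
n = 200
target = np.cumsum(rng.normal(size=n)) + 100.0
f1 = target + rng.normal(scale=1.0, size=n)     # forecast 1 (more accurate)
f2 = target + rng.normal(scale=2.0, size=n)     # forecast 2 (less accurate)

# Inverse-MSE weights estimated on a training window, applied out of sample.
train = slice(0, 150)
mse = np.array([np.mean((f - target[train]) ** 2) for f in (f1[train], f2[train])])
w = (1 / mse) / (1 / mse).sum()

combined = w[0] * f1[150:] + w[1] * f2[150:]
print("weights:", w.round(3))
print("out-of-sample MSE f1, f2, combined:",
      [round(np.mean((f - target[150:]) ** 2), 3) for f in (f1[150:], f2[150:], combined)])
```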

10.
Given time series data for a fixed interval t = 1, 2, …, M with non-autocorrelated innovations, the regression formulae for the best linear unbiased parameter estimates at each time t are given by the Kalman filter fixed interval smoothing equations. Formulae for the variance of such parameter estimates are well documented. However, formulae for the covariance between these fixed interval best linear unbiased estimates have previously been derived only for lag one. In this paper more general formulae for the covariance between fixed interval best linear unbiased estimates at times t and t − l are derived for t = 1, 2, …, M and l = 0, 1, …, t − 1. Under Gaussian assumptions, these formulae are also those for the corresponding conditional covariances between the fixed interval best linear unbiased parameter estimates given the data to time M. They have application, for example, in the determination via the expectation-maximisation (EM) algorithm of exact maximum likelihood parameter estimates for ARMA processes expressed in state-space form when multiple observations are available at each time point.

11.
The subject of the present study is to analyze how accurately an elaborated price jump detection methodology by Barndorff-Nielsen and Shephard (J. Financ. Econom. 2:1–37, 2004a; 4:1–30, 2006) applies to financial time series characterized by less frequent trading. In this context, it is of primary interest to understand the impact of infrequent trading on two test statistics, applicable to disentangle contributions from price jumps to realized variance. In a simulation study, evidence is found that infrequent trading induces a sizable distortion of the test statistics towards overrejection. A new empirical investigation using high-frequency information on the most heavily traded electricity forward contract of the Nord Pool Energy Exchange corroborates the evidence of the simulation. In line with the theory, a “zero-return-adjusted estimation” is introduced to reduce the bias in the test statistics, as illustrated in both the simulation study and the empirical case.
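A textbook-style sketch of a realized-variance versus bipower-variation jump test of this kind is given below (Python with numpy/scipy). It implements one common ratio-statistic variant with tripower quarticity in the denominator; the exact estimators, finite-sample corrections, and the zero-return adjustment used in the study are not reproduced, and the simulated returns are assumptions.

```python
import numpy as np
from scipy.special import gamma

def bns_jump_ratio_test(r):
    """Ratio-form jump test statistic based on realized variance vs. bipower variation.

    r: array of intraday returns for one day. Returns an (approximately) standard
    normal statistic under the null of no jumps; large positive values indicate jumps.
    This is a textbook-style implementation of one common variant, not the exact
    estimator used in the article.
    """
    n = len(r)
    a = np.abs(r)
    rv = np.sum(r ** 2)                                        # realized variance
    bv = (np.pi / 2) * (n / (n - 1)) * np.sum(a[1:] * a[:-1])  # bipower variation
    mu43 = 2 ** (2 / 3) * gamma(7 / 6) / gamma(1 / 2)
    tq = n * mu43 ** -3 * (n / (n - 2)) * np.sum(
        a[2:] ** (4 / 3) * a[1:-1] ** (4 / 3) * a[:-2] ** (4 / 3)
    )                                                          # tripower quarticity
    theta = (np.pi ** 2 / 4) + np.pi - 5
    rj = (rv - bv) / rv                                        # relative jump measure
    return rj / np.sqrt(theta * (1 / n) * max(1.0, tq / bv ** 2))

rng = np.random.default_rng(6)
r_nojump = rng.normal(scale=0.001, size=288)                   # 5-minute returns, no jumps
r_jump = r_nojump.copy()
r_jump[100] += 0.01                                            # add one price jump
print("no-jump day:", round(bns_jump_ratio_test(r_nojump), 2))
print("jump day:   ", round(bns_jump_ratio_test(r_jump), 2))
```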

12.
Data is rapidly increasing in volume and velocity, and the Internet of Things (IoT) is one important source of this data. The IoT is a collection of connected devices (things) which constantly record data from their surroundings using on-board sensors. These devices can record and stream data to the cloud at a very high rate, leading to high storage and analysis costs. In order to reduce these costs, the data is modelled as a stream and analysed online to learn about the underlying process, perform interpolation and smoothing, and make forecasts and predictions. Conventional state space modelling tools assume the observations occur on a fixed regular time grid. However, many sensors change their sampling frequency, sometimes adaptively, or get interrupted and re-started out of sync with the previous sampling grid, or just generate event data at irregular times. It is therefore desirable to model the system as a partially and irregularly observed Markov process which evolves in continuous time. Both the process and the observation model are potentially non-linear. Particle filters therefore represent the simplest approach to online analysis. A functional Scala library of composable continuous time Markov process models has been developed in order to model the wide variety of data captured in the IoT.
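The article describes a Scala library; as a language-neutral illustration of the underlying idea, the sketch below (Python/numpy; the Ornstein-Uhlenbeck state model, Gaussian observation model, and all parameter values are assumptions) runs a bootstrap particle filter over irregularly spaced observation times by propagating particles across each gap with the exact transition for the elapsed interval.

```python
import numpy as np

rng = np.random.default_rng(7)

# Latent Ornstein-Uhlenbeck process observed with Gaussian noise at irregular times.
theta, mu, sigma, obs_sd = 1.0, 0.0, 0.5, 0.3

def ou_transition(x, dt):
    """Exact OU transition over a step of length dt (so irregular gaps are handled directly)."""
    mean = mu + (x - mu) * np.exp(-theta * dt)
    var = sigma ** 2 / (2 * theta) * (1 - np.exp(-2 * theta * dt))
    return rng.normal(mean, np.sqrt(var))

# Simulate irregular observation times and data
times = np.cumsum(rng.exponential(0.2, size=100))
x_true, xs, ys = 0.0, [], []
prev_t = 0.0
for t in times:
    x_true = ou_transition(x_true, t - prev_t)
    xs.append(x_true)
    ys.append(x_true + rng.normal(0, obs_sd))
    prev_t = t

# Bootstrap particle filter
P = 1000
particles = rng.normal(0, 1, size=P)
means = []
prev_t = 0.0
for t, y in zip(times, ys):
    particles = ou_transition(particles, t - prev_t)        # propagate over the irregular gap
    logw = -0.5 * ((y - particles) / obs_sd) ** 2           # Gaussian observation log-weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particles = particles[rng.choice(P, size=P, p=w)]       # multinomial resampling
    means.append(particles.mean())
    prev_t = t

print("filtering RMSE:", round(float(np.sqrt(np.mean((np.array(means) - np.array(xs)) ** 2))), 3))
```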

13.
Single index models are frequently used in econometrics and biometrics. Logit and Probit models are special cases with fixed link functions. In this paper we consider a bootstrap specification test that detects nonparametric deviations of the link function. The bootstrap is used with the aim of finding a more accurate distribution under the null than the normal approximation. We prove that the statistic and its bootstrapped version have the same asymptotic distribution. In a simulation study we show that the bootstrap is able to capture the negative bias and the skewness of the test statistic. It yields better approximations to the true critical values and consequently it has a more accurate level than the normal approximation.

14.
Peter Schmidt has been one of the best-known and most respected econometricians in the profession for four decades. He has brought his talents to many scholarly outlets and societies, and has played a foundational and constructive role in the development of the field of econometrics. Peter Schmidt has also served Econometric Reviews and helped lead its development since its inception in 1982. His judgment has always been fair, informed, clear, decisive, and constructive. Respect for the ideas and scholarship of others, young and old, is second nature to him. This is the best of traits, and Peter serves as an uncommon example to us all. The seventeen articles that make up this Econometric Reviews Special Issue in Honor of Peter Schmidt represent the work of fifty of the very best econometricians in our profession. They honor Professor Schmidt's lifelong accomplishments by providing fundamental research that reflects many of the broad research themes that have distinguished his long and productive career. These include time series econometrics, panel data econometrics, and stochastic frontier production analysis.

15.
Consider panel data modelled by a linear random intercept model that includes a time-varying covariate. Suppose that our aim is to construct a confidence interval for the slope parameter. Commonly, a Hausman pretest is used to decide whether this confidence interval is constructed using the random effects model or the fixed effects model. This post-model-selection confidence interval has the attractive features that it (a) is relatively short when the random effects model is correct and (b) reduces to the confidence interval based on the fixed effects model when the data and the random effects model are highly discordant. However, this confidence interval has the drawbacks that (i) its endpoints are discontinuous functions of the data and (ii) its minimum coverage can be far below its nominal coverage probability. We construct a new confidence interval that possesses these attractive features, but does not suffer from these drawbacks. This new confidence interval provides an intermediate between the post-model-selection confidence interval and the confidence interval obtained by always using the fixed effects model. The endpoints of the new confidence interval are smooth functions of the Hausman test statistic, whereas the endpoints of the post-model-selection confidence interval are discontinuous functions of this statistic.

16.
Spectral analysis at frequencies other than zero plays an increasingly important role in econometrics. A number of alternative automated data-driven procedures for nonparametric spectral density estimation have been suggested in the literature, but little is known about their finite-sample accuracy. We compare five such procedures in terms of their mean-squared percentage error across frequencies. Our data generating processes (DGPs) include autoregressive-moving average (ARMA) models, fractionally integrated ARMA models and nonparametric models based on 16 commonly used macroeconomic time series. We find that for both quarterly and monthly data the autoregressive sieve estimator is the most reliable method overall.
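A minimal version of an autoregressive sieve spectral density estimator, one of the procedures compared above, might look like the following (Python/statsmodels; the simulated ARMA(1,1) series, the AIC-based order search over lags 1-20, and the evaluation frequencies are assumptions).

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(8)

# Simulate an ARMA-like series, then estimate its spectral density with an AR sieve:
# fit AR(p) with p chosen by AIC and plug the coefficients into the AR spectral formula.
n = 1000
e = rng.normal(size=n + 1)
y = np.empty(n)
prev = 0.0
for t in range(n):
    prev = 0.6 * prev + e[t + 1] + 0.4 * e[t]      # ARMA(1,1) with phi = 0.6, theta = 0.4
    y[t] = prev

# Choose the AR order by AIC over a modest range
fits = {p: AutoReg(y, lags=p, trend="c").fit() for p in range(1, 21)}
p_star = min(fits, key=lambda p: fits[p].aic)
res = fits[p_star]
phi = res.params[1:]                               # AR coefficients (after the constant)
sigma2 = np.mean(res.resid ** 2)                   # innovation variance estimate

def ar_spectrum(omega):
    """AR(p) spectral density f(omega) = sigma^2 / (2*pi) * |1 - sum phi_j e^{-i j omega}|^{-2}."""
    j = np.arange(1, len(phi) + 1)
    transfer = 1 - np.sum(phi * np.exp(-1j * j * omega))
    return sigma2 / (2 * np.pi) / np.abs(transfer) ** 2

print("selected AR order:", p_star)
for w in (0.0, np.pi / 2, np.pi):
    print(f"f({w:.2f}) = {ar_spectrum(w):.4f}")
```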

17.
In this article salient aspects of the past, present and future of econometrics are considered. These include a résumé of past key developments in econometric modeling, inference and uses of econometrics. Further, some comments are made relating to various statistical inference procedures, techniques of model formulation and the relations between theory and application. It is concluded that a stronger interaction between theory and application will do much to promote further progress in econometrics in the future.

18.
Recursive and en-bloc approaches to signal extraction
In the literature on unobservable component models, three main statistical instruments have been used for signal extraction: fixed interval smoothing (FIS), which derives from Kalman's seminal work on optimal state-space filter theory in the time domain; Wiener-Kolmogorov-Whittle optimal signal extraction (OSE) theory, which is normally set in the frequency domain and dominates the field of classical statistics; and regularization, which was developed mainly by numerical analysts but is referred to as 'smoothing' in the statistical literature (such as smoothing splines, kernel smoothers and local regression). Although some minor recognition of the interrelationship between these methods can be discerned from the literature, no clear discussion of their equivalence has appeared. This paper exposes clearly the interrelationships between the three methods; highlights important properties of the smoothing filters used in signal extraction; and stresses the advantages of the FIS algorithms as a practical solution to signal extraction and smoothing problems. It also emphasizes the importance of the classical OSE theory as an analytical tool for obtaining a better understanding of the problem of signal extraction.
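The regularization view of signal extraction can be illustrated very compactly (Python/numpy; the second-difference penalty, the smoothing parameter, and the simulated series are assumptions). The same smoothed estimate can equivalently be produced by fixed interval smoothing applied to a suitable local-trend state-space model, which is the equivalence the paper discusses.

```python
import numpy as np

rng = np.random.default_rng(9)

# Regularization ("smoothing") view of signal extraction: extract a smooth trend x from
# noisy data y by minimizing ||y - x||^2 + lam * ||D2 x||^2, where D2 takes second
# differences; lam plays the role of a signal-to-noise ratio.
n = 300
t = np.linspace(0, 4 * np.pi, n)
signal = np.sin(t) + 0.3 * t
y = signal + rng.normal(scale=0.5, size=n)

lam = 1600.0                                          # smoothing parameter (assumed value)
D2 = np.diff(np.eye(n), n=2, axis=0)                  # (n-2) x n second-difference operator
x_hat = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

print("RMSE of raw data vs. signal:         ", round(float(np.sqrt(np.mean((y - signal) ** 2))), 3))
print("RMSE of smoothed estimate vs. signal:", round(float(np.sqrt(np.mean((x_hat - signal) ** 2))), 3))
```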

19.
A challenge for implementing performance-based Bayesian sample size determination is selecting which of several methods to use. We compare three Bayesian sample size criteria: the average coverage criterion (ACC) which controls the coverage rate of fixed length credible intervals over the predictive distribution of the data, the average length criterion (ALC) which controls the length of credible intervals with a fixed coverage rate, and the worst outcome criterion (WOC) which ensures the desired coverage rate and interval length over all (or a subset of) possible datasets. For most models, the WOC produces the largest sample size among the three criteria, and sample sizes obtained by the ACC and the ALC are not the same. For Bayesian sample size determination for normal means and differences between normal means, we investigate, for the first time, the direction and magnitude of differences between the ACC and ALC sample sizes. For fixed hyperparameter values, we show that the difference between the ACC and ALC sample sizes depends on the nominal coverage, and not on the nominal interval length. There exists a threshold value of the nominal coverage level such that below the threshold the ALC sample size is larger than the ACC sample size, and above the threshold the ACC sample size is larger. Furthermore, the ACC sample size is more sensitive to changes in the nominal coverage. We also show that for fixed hyperparameter values, there exists an asymptotic constant ratio between the WOC sample size and the ALC (ACC) sample size. Simulation studies are conducted to show that similar relationships among the ACC, ALC, and WOC may hold for estimating binomial proportions. We provide a heuristic argument that the results can be generalized to a larger class of models.
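A Monte Carlo sketch of the ACC and ALC for a binomial proportion with a Beta prior, the setting of the paper's simulation studies, is given below (Python with numpy/scipy). The equal-tailed credible intervals, the posterior-mean-centred fixed-length interval, the Beta(1, 1) prior, and the search grid are simplifying assumptions; the formal criteria are usually stated in terms of HPD intervals.

```python
import numpy as np
from scipy.stats import beta, betabinom

# Monte Carlo sketch of the ALC and ACC for a binomial proportion with a Beta(a, b) prior.
a, b = 1.0, 1.0
cov_target, len_target = 0.95, 0.2
rng = np.random.default_rng(10)

def avg_length(n, sims=2000):
    """Average length of 95% equal-tailed credible intervals over the predictive distribution."""
    x = betabinom.rvs(n, a, b, size=sims, random_state=rng)
    lo = beta.ppf(0.025, a + x, b + n - x)
    hi = beta.ppf(0.975, a + x, b + n - x)
    return np.mean(hi - lo)

def avg_coverage(n, sims=2000):
    """Average posterior coverage of a fixed-length interval centred at the posterior mean."""
    x = betabinom.rvs(n, a, b, size=sims, random_state=rng)
    mean = (a + x) / (a + b + n)
    lo = np.clip(mean - len_target / 2, 0, 1)
    hi = np.clip(mean + len_target / 2, 0, 1)
    return np.mean(beta.cdf(hi, a + x, b + n - x) - beta.cdf(lo, a + x, b + n - x))

def smallest_n(criterion, ok):
    n = 5
    while not ok(criterion(n)):
        n += 5
    return n

print("ALC sample size:", smallest_n(avg_length, lambda L: L <= len_target))
print("ACC sample size:", smallest_n(avg_coverage, lambda c: c >= cov_target))
```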

20.
A previously known result in the econometrics literature is that when the covariates of an underlying data generating process are jointly normally distributed, estimates from a nonlinear model that is misspecified as linear can be interpreted as average marginal effects. This has been shown for models with exogenous covariates and separability between covariates and errors. In this paper, we extend this identification result to a variety of more general cases, in particular for combinations of separable and nonseparable models under both exogeneity and endogeneity. So long as the underlying model belongs to one of these large classes of data generating processes, our results show that nothing else must be known about the true DGP beyond normality of the observable data (a testable assumption) in order for linear estimators to be interpretable as average marginal effects. We use simulation to explore the performance of these estimators under a misspecified linear model and show that they perform well when the data are normal but can perform poorly when this is not the case.
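A small simulation in the spirit of the result described above (Python with numpy/scipy; the probit-style binary-outcome DGP and all parameter values are illustrative assumptions) shows the misspecified linear slope matching the average marginal effect when the covariate is normal, and drifting away from it when it is not.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)

# Compare the slope from a misspecified linear regression with the true average marginal
# effect under a probit-style DGP, for a normal and a (variance-matched) uniform covariate.
n = 200_000
a0, b0 = 0.2, 0.8

for dist, x in (("normal", rng.normal(size=n)),
                ("uniform", rng.uniform(-np.sqrt(3), np.sqrt(3), size=n))):
    y = (rng.uniform(size=n) < norm.cdf(a0 + b0 * x)).astype(float)   # binary outcome, probit DGP
    ols_slope = np.polyfit(x, y, 1)[0]                                # misspecified linear estimator
    ame = np.mean(b0 * norm.pdf(a0 + b0 * x))                         # true average marginal effect
    print(f"{dist:>7}: OLS slope = {ols_slope:.4f}, average marginal effect = {ame:.4f}")
```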
