Similar Documents
A total of 20 similar documents were found.
1.
Summary In spite of widespread criticism, macroeconometric models are still most popular for forecasting and policy analysis. When the most recent data available on both the exogenous and the endogenous variables are preliminary estimates subject to a revision process, the estimators of the coefficients are affected by the presence of the preliminary data, the projections for the exogenous variables are affected by data uncertainty, and the values of lagged dependent variables used as initial values for forecasts are still subject to revision. Since several provisional estimates of the value of a certain variable are available before the data are finalized, in this paper they are treated as repeated predictions of the same quantity (referring to different information sets that do not necessarily overlap with one another) to be exploited in a forecast combination framework. The components of the asymptotic bias and of the asymptotic mean square prediction error related to data uncertainty can be reduced or eliminated by using a forecast combination technique which makes the deterministic and the Monte Carlo predictors no worse than either predictor used with or without provisional data. The precision of the forecast with the nonlinear model can be improved if the provisional data are not rational predictions of the final data and contain systematic effects. Economics Department, European University Institute. Thanks are due to my Ph.D. thesis advisor Bobby Mariano for his guidance and encouragement at various stages of this research. The comments of the participants in the European Meeting of the Econometric Society in Maastricht, Aug. 1994, helped in improving the presentation. A grant from the NSF (SES 8604219) is gratefully acknowledged.
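The forecast-combination idea at the core of this abstract, weighting two imperfect predictions of the same quantity so as to minimize mean square error, can be illustrated with a minimal sketch. The optimal-weight formula for two unbiased forecasts with correlated errors is a standard textbook result, not taken from the paper, and all numbers here are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two unbiased predictors of the same quantity (e.g. two provisional data
# vintages), with correlated errors
n = 100_000
truth = rng.normal(size=n)
e1 = rng.normal(0.0, 1.0, n)
e2 = 0.3 * e1 + rng.normal(0.0, 0.8, n)
f1, f2 = truth + e1, truth + e2

def combine(f1, f2, var1, var2, cov12):
    """MSE-optimal linear combination w*f1 + (1 - w)*f2 of two unbiased forecasts."""
    w = (var2 - cov12) / (var1 + var2 - 2.0 * cov12)
    return w * f1 + (1.0 - w) * f2

# True error moments: var(e1) = 1, var(e2) = 0.09 + 0.64 = 0.73, cov = 0.3
fc = combine(f1, f2, 1.0, 0.73, 0.3)
mse = {name: ((f - truth) ** 2).mean()
       for name, f in [("f1", f1), ("f2", f2), ("combined", fc)]}
```

The combined forecast has smaller mean square error than either input, which is the sense in which combining provisional estimates can make a predictor "not worse" than either source alone.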

2.
Many of the existing methods of finding calibration intervals in simple linear regression rely on the inversion of prediction limits. In this article, we propose an alternative procedure which involves two stages. In the first stage, we find a confidence interval for the value of the explanatory variable which corresponds to the given future value of the response. In the second stage, we enlarge the confidence interval found in the first stage to form a confidence interval, called the calibration interval, for the value of the explanatory variable which corresponds to the theoretical mean value of the future observation. In finding the confidence interval in the first stage, we use methods based on hypothesis testing and on the percentile bootstrap. When the errors are normally distributed, the coverage probability of the resulting calibration interval based on hypothesis testing is comparable to that of the classical calibration interval. In the case of non-normal errors, the coverage probability of the calibration interval based on hypothesis testing is much closer to the target value than that of the calibration interval based on the percentile bootstrap.
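The calibration (inverse prediction) point estimate, and a percentile-bootstrap interval around it, can be sketched as follows. This is a simplified illustration of the general setup, not the authors' exact first-stage procedure; the data and the pairs-resampling scheme are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated calibration data: y = a + b*x + noise
n = 50
x = np.linspace(0.0, 10.0, n)
a_true, b_true, sigma = 2.0, 1.5, 0.3
y = a_true + b_true * x + rng.normal(0.0, sigma, n)

def point_calibration(x, y, y0):
    """Point estimate of the x value corresponding to a new response y0."""
    b, a = np.polyfit(x, y, 1)   # slope first, then intercept
    return (y0 - a) / b

# New response generated at an unknown x0 = 4.0
y0 = a_true + b_true * 4.0 + rng.normal(0.0, sigma)
x_hat = point_calibration(x, y, y0)

# Percentile-bootstrap interval for the calibrated x (resampling (x, y) pairs)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(point_calibration(x[idx], y[idx], y0))
lo, hi = np.percentile(boot, [2.5, 97.5])
```

The abstract's second stage would then enlarge such an interval so that it covers the x value corresponding to the mean of the future observation rather than the observation itself.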

3.
Heavily right-censored time to event, or survival, data arise frequently in research areas such as medicine and industrial reliability. Recently, there have been suggestions that auxiliary outcomes which are more fully observed may be used to “enhance” or increase the efficiency of inferences for a primary survival time variable. However, efficiency gains from this approach have mostly been very small. Most of the situations considered have involved semiparametric models, so in this note we consider two very simple fully parametric models. In the one case involving a correlated auxiliary variable that is always observed, we find that efficiency gains are small unless the response and auxiliary variable are very highly correlated and the response is heavily censored. In the second case, which involves an intermediate stage in a three-stage model of failure, the efficiency gains can be more substantial. We suggest that careful study of specific situations is needed to identify opportunities for “enhanced” inferences, but that substantial gains seem more likely when auxiliary information involves structural information about the failure process.

4.
In some fields, we are forced to work with missing data in multivariate time series. Unfortunately, the data analysis in this context cannot be carried out in the same way as in the case of complete data. To deal with this problem, a Bayesian analysis of multivariate threshold autoregressive models with exogenous inputs and missing data is carried out. In this paper, Markov chain Monte Carlo methods are used to obtain samples from the involved posterior distributions, including threshold values and missing data. In order to identify autoregressive orders, we adapt the Bayesian variable selection method to this class of multivariate processes. The number of regimes is estimated using marginal likelihood or product parameter-space strategies.

5.
A Latent Process Model for Temporal Extremes
This paper presents a hierarchical approach to modelling extremes of a stationary time series. The procedure comprises two stages. In the first stage, exceedances over a high threshold are modelled through a generalized Pareto distribution, which is represented as a mixture of an exponential variable with a Gamma distributed rate parameter. In the second stage, a latent Gamma process is embedded inside the exponential distribution in order to induce temporal dependence among exceedances. Unlike other hierarchical extreme‐value models, this version has marginal distributions that belong to the generalized Pareto family, so that the classical extreme‐value paradigm is respected. In addition, analytical developments show that different choices of the underlying Gamma process can lead to different degrees of temporal dependence of extremes, including asymptotic independence. The model is tested through a simulation study in a Markov chain setting and used for the analysis of two datasets, one environmental and one financial. In both cases, a good flexibility in capturing different types of tail behaviour is obtained.
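The mixture representation used in the first stage rests on a standard distributional identity: an exponential variable whose rate is Gamma(α, rate β) distributed has a generalized Pareto marginal with survival function (1 + t/β)^(−α), i.e. shape ξ = 1/α and scale σ = β/α. This can be checked numerically (the parameter values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

alpha, beta = 3.0, 2.0   # Gamma shape and rate for the latent rate parameter
n = 200_000

# Hierarchical draw: lambda ~ Gamma(alpha, rate=beta), X | lambda ~ Exp(lambda)
lam = rng.gamma(alpha, 1.0 / beta, n)   # numpy parameterizes by scale = 1/rate
x = rng.exponential(1.0 / lam)

# Analytic marginal survival: P(X > t) = (1 + t/beta)^(-alpha),
# a generalized Pareto tail with shape xi = 1/alpha and scale sigma = beta/alpha
def gpd_survival(t, alpha, beta):
    return (1.0 + t / beta) ** (-alpha)

t_grid = np.array([0.5, 1.0, 2.0, 4.0])
empirical = np.array([(x > t).mean() for t in t_grid])
analytic = gpd_survival(t_grid, alpha, beta)
max_err = np.abs(empirical - analytic).max()
```

The empirical and analytic survival probabilities agree to Monte Carlo accuracy, which is the marginal property that lets the hierarchical model stay within the classical extreme-value paradigm.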

6.
In longitudinal surveys where a number of observations have to be made on the same sampling unit at specified time intervals, it is not uncommon that observations for some of the time stages for some of the sampled units are found missing. In the present investigation, an estimation procedure for estimating the population total based on such incomplete data from multiple observations is suggested which makes use of all the available information and is seen to be more efficient than the one based on only completely observed units. Estimators are also proposed for two other situations: firstly, when data are collected only for a sample of time stages and, secondly, when data are observed for only one time stage per sampled unit.

7.
Based on the concept of repeated significance tests, an empirical study may be planned in subsequent stages. Group sequential test procedures offer the possibility of performing the study with a fixed number of observations per stage. At the very least, the number of observations must be chosen independently of the observed data. In adaptive group sequential test procedures, the number of observations can be changed during the course of the study using all results observed so far. In this article, the basic concepts of these two designs are reviewed. Recent developments in adaptive designs are outlined and potential fields of application are given.

8.
ABSTRACT

Environmental data is typically indexed in space and time. This work deals with modelling spatio-temporal air quality data, when multiple measurements are available for each space-time point. Typically this situation arises when different measurements referring to several response variables are observed in each space-time point, for example, different pollutants or size-resolved data on particulate matter. Nonetheless, such a kind of data also arises when using a mobile monitoring station moving along a path for a certain period of time. In this case, each spatio-temporal point has a number of measurements referring to the response variable observed several times over different locations in a close neighbourhood of the space-time point. We deal with this type of data within a hierarchical Bayesian framework, in which observed measurements are modelled in the first stage of the hierarchy, while the unobserved spatio-temporal process is considered in the following stages. The final model is very flexible and includes autoregressive terms in time, different structures for the variance-covariance matrix of the errors, and can manage covariates available at different space-time resolutions. This approach is motivated by the availability of data on urban pollution dynamics: fast measures of gases and size-resolved particulate matter have been collected using an Optical Particle Counter located on a cabin of a public conveyance that moves on a monorail on a line transect of a town. Urban microclimate information is also available and included in the model. Simulation studies are conducted to evaluate the performance of the proposed model over existing alternatives that do not model data over the first stage of the hierarchy.

9.
Seamless phase II/III clinical trials are conducted in two stages with treatment selection at the first stage. In the first stage, patients are randomized to a control or one of k > 1 experimental treatments. At the end of this stage, interim data are analysed, and a decision is made concerning which experimental treatment should continue to the second stage. If the primary endpoint is observable only after some period of follow‐up, at the interim analysis data may be available on some early outcome on a larger number of patients than those for whom the primary endpoint is available. These early endpoint data can thus be used for treatment selection. For two previously proposed approaches, the power has been shown to be greater for one or other method depending on the true treatment effects and correlations. We propose a new approach that builds on the previously proposed approaches and uses data available at the interim analysis to estimate these parameters and then, on the basis of these estimates, chooses the treatment selection method with the highest probability of correctly selecting the most effective treatment. This method is shown to perform well compared with the two previously described methods for a wide range of true parameter values. In most cases, the performance of the new method is either similar to or, in some cases, better than either of the two previously proposed methods. © 2014 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.

10.
This article provides a strategy to identify the existence and direction of a causal effect in a generalized nonparametric and nonseparable model identified by instrumental variables. The causal effect concerns how the outcome depends on the endogenous treatment variable. The outcome variable, treatment variable, other explanatory variables, and the instrumental variable can be essentially any combination of continuous, discrete, or “other” variables. In particular, it is not necessary to have any continuous variables, none of the variables need to have large support, and the instrument can be binary even if the corresponding endogenous treatment variable and/or outcome is continuous. The outcome can be mismeasured or interval-measured, and the endogenous treatment variable need not even be observed. The identification results are constructive, and can be empirically implemented using standard estimation results.

11.
In some applications it is cost efficient to sample data in two or more stages. In the first stage a simple random sample is drawn and then stratified according to some easily measured attribute. In each subsequent stage a random subset of previously selected units is sampled for more detailed and costly observation, with a unit's sampling probability determined by its attributes as observed in the previous stages. This paper describes multistage sampling designs and estimating equations based on the resulting data. Maximum likelihood estimates (MLEs) and their asymptotic variances are given for designs using parametric models. Horvitz–Thompson estimates are introduced as alternatives to MLEs, their asymptotic distributions are derived and their strengths and weaknesses are evaluated. The designs and the estimates are illustrated with data on corn production.
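The Horvitz–Thompson estimator mentioned here weights each sampled value by the inverse of its inclusion probability, which makes it design-unbiased for the population total under any design with known first-order inclusion probabilities. A minimal sketch under Poisson sampling (simulated data, not the paper's corn-production example):

```python
import numpy as np

rng = np.random.default_rng(2)

# Finite population with a known total
N = 1000
y = rng.gamma(2.0, 50.0, N)          # e.g. production per unit
true_total = y.sum()

# Inclusion probabilities proportional to a size measure, bounded away from 0 and 1
size = y * rng.uniform(0.8, 1.2, N)
pi = (0.2 * N * size / size.sum()).clip(0.01, 0.9)

def horvitz_thompson(y_sample, pi_sample):
    """HT estimator of a population total: sum over the sample of y_i / pi_i."""
    return (y_sample / pi_sample).sum()

# Poisson sampling: each unit enters the sample independently with probability pi_i
estimates = []
for _ in range(500):
    included = rng.random(N) < pi
    estimates.append(horvitz_thompson(y[included], pi[included]))
rel_bias = abs(np.mean(estimates) - true_total) / true_total
```

Averaged over repeated samples, the estimator's relative bias is negligible, illustrating the design-unbiasedness that makes HT estimates a natural alternative to MLEs in complex designs.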

12.
This article extends the spatial panel data regression with fixed-effects to the case where the regression function is partially linear and some regressors may be endogenous or predetermined. Under the assumption that the spatial weighting matrix is strictly exogenous, we propose a sieve two stage least squares (S2SLS) regression. Under some sufficient conditions, we show that the proposed estimator for the finite dimensional parameter is root-N consistent and asymptotically normally distributed and that the proposed estimator for the unknown function is consistent and also asymptotically normally distributed but at a rate slower than root-N. Consistent estimators for the asymptotic variances of the proposed estimators are provided. A small scale simulation study is conducted, and the simulation results show that the proposed procedure has good finite sample performance.
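The two stage least squares step underlying the proposed S2SLS estimator can be illustrated in its simplest linear form: project the endogenous regressor onto the instrument space, then regress the outcome on the fitted values. This is a textbook 2SLS sketch with simulated data, not the sieve or spatial-panel version developed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Endogenous regressor: x is correlated with the error u; z is a valid instrument
n = 5000
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # endogeneity via the shared u
beta_true = 2.0
y = beta_true * x + u

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# OLS slope is biased because x and u are correlated
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0][1]

# 2SLS: first stage projects X on the instrument space, second stage
# regresses y on the fitted values
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0][1]
```

OLS overshoots the true slope while 2SLS recovers it; the sieve version replaces the linear projections with series approximations of the unknown functions.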

13.
Inequality-restricted hypotheses testing methods containing multivariate one-sided testing methods are useful in practice, especially in multiple comparison problems. In practice, multivariate and longitudinal data often contain missing values since it may be difficult to observe all values for each variable. However, although missing values are common for multivariate data, statistical methods for multivariate one-sided tests with missing values are quite limited. In this article, motivated by a dataset in a recent collaborative project, we develop two likelihood-based methods for multivariate one-sided tests with missing values, where the missing data patterns can be arbitrary and the missing data mechanisms may be non-ignorable. Although non-ignorable missing data are not testable based on observed data, statistical methods addressing this issue can be used for sensitivity analysis and might lead to more reliable results, since ignoring informative missingness may lead to biased analysis. We analyse the real dataset in detail under various possible missing data mechanisms and report interesting findings which were previously unavailable. We also derive some asymptotic results and evaluate our new tests using simulations.

14.
15.
In this article, we propose a new class of semiparametric instrumental variable models with partially varying coefficients, in which the structural function has a partially linear form and the impact of endogenous structural variables can vary over different levels of some exogenous variables. We propose a three-step estimation procedure to estimate both functional and constant coefficients. The consistency and asymptotic normality of these proposed estimators are established. Moreover, a generalized F-test is developed to test whether the functional coefficients are of particular parametric forms with some underlying economic intuitions, and furthermore, the limiting distribution of the proposed generalized F-test statistic under the null hypothesis is established. Finally, we illustrate the finite sample performance of our approach with simulations and two real data examples in economics.

16.
ABSTRACT

We evaluate the bias from endogenous job mobility in fixed-effects estimates of worker- and firm-specific earnings heterogeneity using longitudinally linked employer–employee data from the LEHD infrastructure file system of the U.S. Census Bureau. First, we propose two new residual diagnostic tests of the assumption that mobility is exogenous to unmodeled determinants of earnings. Both tests reject exogenous mobility. We relax exogenous mobility by modeling the matched data as an evolving bipartite graph using a Bayesian latent-type framework. Our results suggest that allowing endogenous mobility increases the variation in earnings explained by individual heterogeneity and reduces the proportion due to employer and match effects. To assess external validity, we match our estimates of the wage components to out-of-sample estimates of revenue per worker. The mobility-bias-corrected estimates attribute much more of the variation in revenue per worker to variation in match quality and worker quality than the uncorrected estimates. Supplementary materials for this article are available online.

17.
This paper develops the forecasting stage in the analysis of a univariate threshold autoregressive model with an exogenous threshold variable via the computation of the so-called predictive distributions. The procedure permits one to forecast the response and exogenous variables simultaneously. An important issue in this work is the treatment of any missing observations present in the two time series before obtaining forecasts.

18.
Often the variables in a regression model are difficult or expensive to obtain, so auxiliary variables are collected in a preliminary step of a study and the model variables are measured at later stages on only a subsample of the study participants, called the validation sample. We consider a study in which at the first stage some variables, called auxiliaries throughout, are collected; at the second stage the true outcome is measured on a subsample of the first-stage sample; and at the third stage the true covariates are collected on a subset of the second-stage sample. In order to increase efficiency, the probabilities of selection into the second and third-stage samples are allowed to depend on the data observed at the previous stages. In this paper we describe a class of inverse-probability-of-selection-weighted semiparametric estimators for the parameters of the model for the conditional mean of the outcomes given the covariates. We assume that a subject's probability of being sampled at subsequent stages is bounded away from zero and depends only on the subject's data collected at the previous sampling stages. We show that the asymptotic variance of the optimal estimator in our class is equal to the semiparametric variance bound for the model. Since the optimal estimator depends on unknown population parameters it is not available for data analysis. We therefore propose an adaptive estimation procedure for locally efficient inferences. A simulation study is carried out to study the finite sample properties of the proposed estimators.
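The inverse-probability-of-selection weighting idea can be sketched in a two-stage setting where the outcome is observed only with a probability that depends on a fully observed auxiliary. This is a simplified illustration with known design probabilities and a simple weighted mean, not the authors' semiparametric estimating-equation class:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stage 1: auxiliary variable a observed on everyone
n = 50_000
a = rng.normal(size=n)
y = 1.0 + 2.0 * a + rng.normal(size=n)   # true outcome, population mean 1.0

# Stage 2: outcome measured only on a subsample whose selection probability
# depends on the stage-1 auxiliary (and is bounded away from zero, as assumed)
p = 0.1 + 0.8 / (1.0 + np.exp(-a))       # known design probabilities in (0.1, 0.9)
selected = rng.random(n) < p

# Naive complete-case mean is biased because selection depends on a (hence on y)
naive = y[selected].mean()

# Inverse-probability-of-selection weighting restores consistency
ipw = (y[selected] / p[selected]).sum() / (1.0 / p[selected]).sum()
```

The weighted mean recovers the population mean while the complete-case mean does not, which is the basic mechanism the paper's more general weighted estimating equations exploit.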

19.
Single cohort stage‐frequency data are considered when assessing the stage reached by individuals through destructive sampling. For this type of data, when all hazard rates are assumed constant and equal, Laplace transform methods have been applied in the past to estimate the parameters in each stage‐duration distribution and the overall hazard rates. If hazard rates are not all equal, estimating stage‐duration parameters using Laplace transform methods becomes complex. In this paper, two new models are proposed to estimate stage‐dependent maturation parameters using Laplace transform methods where non‐trivial hazard rates apply. The first model encompasses hazard rates that are constant within each stage but vary between stages. The second model encompasses time‐dependent hazard rates within stages. Moreover, this paper introduces a method for estimating the hazard rate in each stage for the stage‐wise constant hazard rates model. This work presents methods that could be used in specific types of laboratory studies, but the main motivation is to explore the relationships between stage maturation parameters that, in future work, could be exploited in applying Bayesian approaches. The application of the methodology in each model is evaluated using simulated data in order to illustrate the structure of these models.

20.
This article examines the exchange-rate determination in a target-zone regime when the bounds can be fixed for an extended period but are subject to occasional jumps. In this case, the behavior of the endogenous variable is affected by the agents' expectations about both the occurrence and the size of the jump. Empirical results using data for the franc/mark exchange rate provide support for the nonlinear model with time-varying realignment probability and indicate that the agents correctly anticipated most of the observed changes in the central parity.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)