Similar Articles
20 similar articles found.
1.
We consider non-response models for a single categorical response with categorical covariates whose values are always observed. We present Bayesian methods for ignorable models and a particular non-ignorable model, and we argue that standard methods of model comparison are inappropriate for comparing ignorable and non-ignorable models. Uncertainty about ignorability of non-response is incorporated by introducing parameters describing the extent of non-ignorability into a pattern mixture specification and integrating over the prior uncertainty associated with these parameters. Our approach is illustrated using polling data from the 1992 British general election panel survey. We suggest sample size adjustments for surveys when non-ignorable non-response is expected.

2.
Population-level proportions of individuals that fall at different points in the spectrum of disease severity, from asymptomatic infection to severe disease, are often difficult to observe, but estimating these quantities can provide information about the nature and severity of the disease in a particular population. Logistic and multinomial regression techniques are often applied to infectious disease modeling of large populations and are suited to identifying variables associated with a particular disease or disease state. However, they are less appropriate for estimating infection state prevalence over time because they do not naturally accommodate known disease dynamics such as the duration of time an individual is infectious, heterogeneity in the risk of acquiring infection, and patterns of seasonality. We propose a Bayesian compartmental model to estimate latent infection state prevalence over time that easily incorporates known disease dynamics. We demonstrate how and why a stochastic compartmental model is a better approach for determining infection state proportions than multinomial regression, using a novel method for estimating Bayes factors for models with high-dimensional parameter spaces. We provide an example using visceral leishmaniasis in Brazil and present an empirically adjusted reproductive number for the infection.
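To make the compartmental alternative concrete, here is a minimal Python sketch (not the authors' model) of a discrete-time stochastic SIR-type process whose latent state proportions are tracked over time; the compartments, transmission and recovery rates, and the binomial transition scheme are illustrative assumptions.

```python
# Minimal sketch: a discrete-time stochastic SIR-type compartmental simulation.
# Compartments, rates, and the binomial transition scheme are assumptions; the
# paper's model for visceral leishmaniasis is richer.
import numpy as np

rng = np.random.default_rng(0)

def simulate_sir(n=10_000, beta=0.3, gamma=0.1, t_max=150, i0=10):
    """Simulate counts in the S, I, R compartments over time."""
    S, I, R = n - i0, i0, 0
    history = []
    for _ in range(t_max):
        # Each susceptible is infected with probability 1 - exp(-beta * I / n).
        new_inf = rng.binomial(S, 1.0 - np.exp(-beta * I / n))
        # Each infectious individual recovers with probability 1 - exp(-gamma).
        new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        history.append((S / n, I / n, R / n))   # latent state prevalences
    return np.array(history)

prevalence = simulate_sir()
print(prevalence[:5])  # proportions in each state for the first 5 time steps
```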

3.
The objective of this article is to propose a method of exploring the mechanism of expectation formation based on qualitative survey data. The survey data are regarded as a sample from a multinomial distribution whose parameters are time-variant functions of inflation expectations. The parameters are estimated using a Bayesian recursive approach, which is a generalization of the Kalman filtering technique. For illustrative purposes, the method is applied to Japanese data. One notable finding from the empirical analysis is that the expectation formation process of Japanese enterprises has varied greatly over time.
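The abstract does not spell out the recursion, so the following sketch only conveys the flavour of recursively updating time-varying multinomial proportions: a conjugate Dirichlet update with a discount factor stands in for the Kalman-filter-style recursion, and the response categories, discount value, and counts are made-up assumptions.

```python
# Sketch of a recursive Bayesian update for time-varying multinomial survey
# proportions. The discount factor (0.95) and the three response categories
# are assumptions standing in for the paper's Kalman-style recursion.
import numpy as np

categories = ["prices will rise", "stay the same", "fall"]
alpha = np.ones(3)            # Dirichlet prior pseudo-counts
discount = 0.95               # down-weights old data so the parameters can drift

def update(alpha, counts, discount):
    """Discount past information, then add the new survey wave's counts."""
    return discount * alpha + np.asarray(counts, dtype=float)

survey_waves = [[55, 35, 10], [60, 32, 8], [48, 40, 12]]  # toy qualitative data
for counts in survey_waves:
    alpha = update(alpha, counts, discount)
    estimate = alpha / alpha.sum()          # posterior mean of the proportions
    print(dict(zip(categories, estimate.round(3))))
```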

4.
Analysing the use of marijuana is challenging in part because there is no widely accepted single measure of individual use. Similarly, there is no single response variable that effectively captures attitudes toward its social and moral acceptability. One approach is to view the joint distribution of multiple use and attitude indicators as a mixture of latent classes. Pooling items from the annual 'Monitoring the future' surveys of American high school seniors from 1977 to 2001, we find that marijuana use and attitudes are well summarized by a four-class model. Secular trends in class prevalences over this period reveal major shifts in use and attitudes. Applying a multinomial logistic model to the latent response, we investigate how class membership relates to demographic and lifestyle factors, political beliefs and religiosity over time. Inferences about the parameters of the latent class logistic model are obtained by a combination of maximum likelihood and Bayesian techniques.

5.
We consider the problem of model selection based on quantile analysis, with unknown parameters estimated using quantile least squares. We propose a model selection test for the null hypothesis that the competing models are equivalent against the alternative hypothesis that one model is closer to the true model. We follow with two applications of the proposed model selection test. The first application is in model selection for time series with non-normal innovations. The second application is in model selection for forecasting with the NoVaS (normalizing and variance stabilizing transformation) method. A set of simulation results also lends strong support to the results presented in the paper.

6.
A Bayesian cluster analysis for the results of an election based on multinomial mixture models is proposed. The number of clusters is chosen based on a careful comparison of the results with predictive simulations from the models, and by checking whether the models capture most of the spatial dependence in the results. Implementing the analysis on five recent elections in Barcelona, we walk the reader through the choice of statistics and graphical displays that help choose a model and present the results. Even though the models do not use any information about the location of the areas into which the results are broken down, in the example they uncover a four-cluster structure with strong spatial dependence that is very stable over time and relates to the demographic composition.
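As a simplified stand-in for the multinomial mixture machinery (the paper fits a fully Bayesian model and chooses the number of clusters via predictive checks; here EM is used and the area-level vote counts are simulated), a minimal sketch:

```python
# EM for a multinomial mixture over electoral areas. K, the data, and the use
# of EM instead of a fully Bayesian fit are illustrative assumptions.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(1)
# Toy vote counts from two latent groups (the Barcelona analysis uncovers four
# clusters; two groups are used here for brevity).
X = np.vstack([
    rng.multinomial(500, [0.45, 0.30, 0.15, 0.10], size=120),
    rng.multinomial(500, [0.20, 0.25, 0.35, 0.20], size=80),
])

def fit_multinomial_mixture(X, K, n_iter=200):
    n, p = X.shape
    pi = np.full(K, 1.0 / K)
    theta = rng.dirichlet(np.ones(p), size=K)        # K x p category probabilities
    for _ in range(n_iter):
        # E-step: log responsibilities (the multinomial coefficient cancels in k).
        log_r = np.log(pi) + X @ np.log(theta).T     # n x K
        log_r -= logsumexp(log_r, axis=1, keepdims=True)
        r = np.exp(log_r)
        # M-step: update mixture weights and category probabilities.
        pi = r.mean(axis=0)
        theta = r.T @ X + 1e-9
        theta /= theta.sum(axis=1, keepdims=True)
    return pi, theta, r

weights, probs, resp = fit_multinomial_mixture(X, K=2)
print("cluster weights:", weights.round(3))
print("cluster vote shares:", probs.round(3))
```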

7.
In many surveys, the domains of study are small and the samples that carry information on a domain can be very small indeed. If the survey is conducted repeatedly there is often a high degree of overlap in samples over time. We show how to use the richness of information over time to compensate for the paucity of cross-sectional information. We propose a model-based estimator of the population total which makes use of stabilised parameter estimates that combine information from different survey periods that are adjacent in time. The motivating example for this research was the ProdCom survey as implemented in the UK.

8.
Bayesian dynamic linear models (DLMs) are useful in time series modelling because of the flexibility that they offer for obtaining a good forecast. They are based on a decomposition of the relevant factors which explain the behaviour of the series through a series of state parameters. Nevertheless, the DLM as developed by West and Harrison depends on additional quantities, such as the variance of the system disturbances, which, in practice, are unknown. These are referred to here as 'hyperparameters' of the model. In this paper, DLMs with autoregressive components are used to describe time series that show cyclic behaviour. The marginal posterior distribution for state parameters can be obtained by weighting the conditional distribution of state parameters by the marginal distribution of hyperparameters. In most cases, the joint distribution of the hyperparameters can be obtained analytically but the marginal distributions of the components cannot, thus requiring numerical integration. We propose to obtain samples of the hyperparameters by a variant of the sampling importance resampling method. A few applications are shown with simulated and real data sets.
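A generic sampling importance resampling (SIR) sketch, using a toy posterior for an observation variance rather than the paper's DLM hyperparameters, illustrates the weighting and resampling steps:

```python
# Sampling importance resampling (SIR) for a parameter whose posterior is known
# only up to proportionality. The target (posterior of a variance given normal
# data under a flat prior on its log) and the gamma proposal are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
y = rng.normal(0.0, 1.5, size=100)            # toy data, true variance 2.25

def log_target(v):
    """Unnormalised log posterior of the variance v under a flat prior on log v."""
    return np.sum(stats.norm.logpdf(y, loc=0.0, scale=np.sqrt(v))) - np.log(v)

# 1. Draw candidate values from an importance (proposal) distribution.
proposal = stats.gamma(a=2.0, scale=2.0)
v_draws = proposal.rvs(size=5000, random_state=2)

# 2. Importance weights = target / proposal, computed on the log scale.
log_w = np.array([log_target(v) for v in v_draws]) - proposal.logpdf(v_draws)
w = np.exp(log_w - log_w.max())
w /= w.sum()

# 3. Resample with probabilities proportional to the weights.
posterior_sample = rng.choice(v_draws, size=1000, replace=True, p=w)
print("posterior mean of the variance ≈", round(float(posterior_sample.mean()), 3))
```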

9.
Previously, a Bayesian anomaly was reported when estimating reliability with subsystem failure data and system failure data obtained from the same time period, and a practical method for mitigating the anomaly was developed. In the first part of this paper, however, we show that the Bayesian anomaly can be avoided as long as the same failure information is incorporated in the model. In the second part of this paper, we consider the problem of estimating Bayesian reliability when failure count data on subsystems and systems are obtained from the same time period. We show that the Bayesian anomaly does not exist when using a multinomial distribution with a Dirichlet prior distribution. A numerical example is given to compare the proposed method with previous methods.
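The conjugate Dirichlet-multinomial update that underlies this kind of analysis is simple to write down; the failure categories, prior pseudo-counts, and observed counts below are illustrative assumptions, not the paper's numerical example.

```python
# Conjugate Dirichlet-multinomial update: posterior pseudo-counts are the prior
# pseudo-counts plus the observed multinomial counts. Categories and numbers
# here are illustrative assumptions.
import numpy as np

categories = ["subsystem A fails", "subsystem B fails", "no failure"]
prior_alpha = np.array([1.0, 1.0, 1.0])      # Dirichlet(1, 1, 1) prior
counts = np.array([3, 5, 92])                # multinomial failure-count data

posterior_alpha = prior_alpha + counts       # conjugacy: posterior is Dirichlet
posterior_mean = posterior_alpha / posterior_alpha.sum()
print(dict(zip(categories, posterior_mean.round(3))))

# Posterior of "system reliability" = P(no failure); each Dirichlet marginal is Beta.
draws = np.random.default_rng(3).dirichlet(posterior_alpha, size=10_000)
reliability = draws[:, 2]
print("posterior mean reliability ≈", round(float(reliability.mean()), 3))
print("95% credible interval ≈", np.quantile(reliability, [0.025, 0.975]).round(3))
```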

10.
In this article, we propose a new empirical information criterion (EIC) for model selection which penalizes the likelihood of the data by a non-linear function of the number of parameters in the model. It is designed to be used where there are a large number of time series to be forecast. However, a bootstrap version of the EIC can be used where there is a single time series to be forecast. The EIC provides a data-driven model selection tool that can be tuned to the particular forecasting task.

We compare the EIC with other model selection criteria including Akaike’s information criterion (AIC) and Schwarz’s Bayesian information criterion (BIC). The comparisons show that for the M3 forecasting competition data, the EIC outperforms both the AIC and BIC, particularly for longer forecast horizons. We also compare the criteria on simulated data and find that the EIC does better than existing criteria in that case also.
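For readers unfamiliar with the baseline being improved upon, the sketch below shows the standard AIC/BIC selection step for an ARIMA family using statsmodels; the candidate orders and the simulated series are assumptions, and the EIC penalty itself, being the paper's contribution, is not reproduced here.

```python
# Baseline information-criterion selection: fit a family of ARIMA models,
# pick the order by AIC (or BIC), and forecast with the winner.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
y = np.cumsum(rng.normal(size=200))          # toy series

candidates = [(p, 1, q) for p in range(3) for q in range(3)]
fits = {order: ARIMA(y, order=order).fit() for order in candidates}

best_aic = min(fits, key=lambda o: fits[o].aic)
best_bic = min(fits, key=lambda o: fits[o].bic)
print("AIC choice:", best_aic, " BIC choice:", best_bic)
print("8-step forecast:", fits[best_aic].forecast(steps=8).round(2))
```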

11.
We study the most basic Bayesian forecasting model for exponential family time series, the power steady model (PSM) of Smith, in terms of observable properties of one-step forecast distributions and sample paths. The PSM implies a constraint between location and spread of the forecast distribution. Including a scale parameter in the models does not always give an exact solution free of this problem, but it does suggest how to define related models free of the constraint. We define such a class of models which contains the PSM. We concentrate on the case where observations are non-negative. Probability theory and simulation show that under very mild conditions almost all sample paths of these models converge to some constant, making them unsuitable for modelling in many situations. The results apply more generally to non-negative models defined in terms of exponentially weighted moving averages. We use these and related results to motivate, define and apply very simple models based on directly specifying the forecast distributions.

12.
Most existing reduced-form macroeconomic multivariate time series models employ elliptical disturbances, so that the forecast densities produced are symmetric. In this article, we use a copula model with asymmetric margins to produce forecast densities with the scope for severe departures from symmetry. Empirical and skew t distributions are employed for the margins, and a high-dimensional Gaussian copula is used to jointly capture cross-sectional and (multivariate) serial dependence. The copula parameter matrix is given by the correlation matrix of a latent stationary and Markov vector autoregression (VAR). We show that the likelihood can be evaluated efficiently using the unique partial correlations, and estimate the copula using Bayesian methods. We examine the forecasting performance of the model for four U.S. macroeconomic variables between 1975:Q1 and 2011:Q2 using quarterly real-time data. We find that the point and density forecasts from the copula model are competitive with those from a Bayesian VAR. During the recent recession the forecast densities exhibit substantial asymmetry, avoiding some of the pitfalls of the symmetric forecast densities from the Bayesian VAR. We show that the asymmetries in the predictive distributions of GDP growth and inflation are similar to those found in the probabilistic forecasts from the Survey of Professional Forecasters. Last, we find that unlike the linear VAR model, our fitted Gaussian copula models exhibit nonlinear dependencies between some macroeconomic variables. This article has online supplementary material.
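A minimal sketch of the basic Gaussian-copula construction (correlated normals mapped to uniforms, then through non-Gaussian marginal quantile functions); the 2x2 correlation matrix and Student-t margins are assumptions, whereas the paper uses empirical and skew-t margins with a correlation matrix implied by a latent VAR.

```python
# Gaussian copula with non-Gaussian margins: draw correlated normals, map to
# uniforms with the normal CDF, then apply marginal quantile functions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
R = np.array([[1.0, 0.6],
              [0.6, 1.0]])                   # copula correlation matrix (assumed)

z = rng.multivariate_normal(mean=[0.0, 0.0], cov=R, size=5000)
u = stats.norm.cdf(z)                        # Gaussian copula: uniform margins

# Heavy-tailed margins via inverse CDFs (Student-t here; the paper uses
# empirical and skew-t margins).
gdp_growth = stats.t.ppf(u[:, 0], df=4, loc=0.6, scale=0.5)
inflation = stats.t.ppf(u[:, 1], df=4, loc=0.5, scale=0.4)

print("sample correlation:", round(float(np.corrcoef(gdp_growth, inflation)[0, 1]), 2))
```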

13.
We develop and exemplify application of new classes of dynamic models for time series of nonnegative counts. Our novel univariate models combine dynamic generalized linear models for binary and conditionally Poisson time series, with dynamic random effects for over-dispersion. These models estimate dynamic regression coefficients in both binary and nonzero count components. Sequential Bayesian analysis allows fast, parallel analysis of sets of decoupled time series. New multivariate models then enable information sharing in contexts where data at a more highly aggregated level provide more incisive inferences on shared patterns such as trends and seasonality. A novel multiscale approach, one new example of the concept of decouple/recouple in time series, enables information sharing across series. This incorporates cross-series linkages while insulating parallel estimation of univariate models, and hence enables scalability in the number of series. The major motivating context is supermarket sales forecasting. Detailed examples drawn from a case study in multistep forecasting of sales of a number of related items showcase forecasting of multiple series, with discussion of forecast accuracy metrics, comparisons with existing methods, and broader questions of probabilistic forecast assessment.

14.
This article proposes a procedure for detecting patches of additive outliers in autoregressive time series models. The procedure improves on existing detection methods via Gibbs sampling. We combine the Bayesian method and the Kalman smoother to generate candidate models of outlier patches, and the best model, with the minimum Bayesian information criterion (BIC), is selected among them. We propose that this combined Bayesian and Kalman method (CBK) can reduce the masking and swamping effects in detecting patches of additive outliers. The correctness of the method is illustrated with simulated data and then by analyzing a real set of observations.
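A greatly simplified stand-in for the idea of scoring candidate outlier patches by BIC, using dummy regressors in an AR(1) regression rather than the article's Gibbs-sampling/Kalman-smoother (CBK) procedure; the simulated series, patch length, and candidate set are assumptions.

```python
# Score candidate additive-outlier patches by BIC: fit an AR(1) regression with
# dummy regressors for each candidate patch and keep the smallest-BIC patch.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 200
y = np.zeros(n)
for t in range(1, n):                         # AR(1) with phi = 0.7
    y[t] = 0.7 * y[t - 1] + rng.normal()
y[120:124] += 6.0                             # planted patch of additive outliers

def bic_for_patch(y, start, length):
    """BIC of an AR(1) regression with dummies marking one candidate patch."""
    y_t, y_lag = y[1:], y[:-1]
    dummies = np.zeros((len(y_t), length))
    for j in range(length):
        dummies[start - 1 + j, j] = 1.0        # row t-1 corresponds to y[t]
    X = sm.add_constant(np.column_stack([y_lag, dummies]))
    return sm.OLS(y_t, X).fit().bic

candidates = [(s, 4) for s in range(5, n - 5)]
best = min(candidates, key=lambda c: bic_for_patch(y, *c))
print("best patch starts at t =", best[0])    # should be near t = 120
```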

15.
We discuss the development of dynamic factor models for multivariate financial time series, and the incorporation of stochastic volatility components for latent factor processes. Bayesian inference and computation are developed and explored in a study of the dynamic factor structure of daily spot exchange rates for a selection of international currencies. The models are direct generalizations of univariate stochastic volatility models and represent specific varieties of models recently discussed in the growing multivariate stochastic volatility literature. We discuss model fitting based on retrospective data and sequential analysis for forward filtering and short-term forecasting. Analyses are compared with results from the much simpler method of dynamic variance-matrix discounting that, for over a decade, has been a standard approach in applied financial econometrics. We study these models in analysis, forecasting, and sequential portfolio allocation for a selected set of international exchange-rate-return time series. Our goals are to understand a range of modeling questions arising in using these factor models and to explore empirical performance in portfolio construction relative to discount approaches. We report on our experiences and conclude with comments about the practical utility of structured factor models and on future potential model extensions.

16.
The Box–Jenkins methodology for modeling and forecasting from univariate time series models has long been considered a standard to which other forecasting techniques have been compared. To a Bayesian statistician, however, the method lacks an important facet: a provision for modeling uncertainty about parameter estimates. We present a technique called sampling the future for including this feature in both the estimation and forecasting stages. Although it is relatively easy to use Bayesian methods to estimate the parameters in an autoregressive integrated moving average (ARIMA) model, there are severe difficulties in producing forecasts from such a model. The multiperiod predictive density does not have a convenient closed form, so approximations are needed. In this article, exact Bayesian forecasting is approximated by simulating the joint predictive distribution. First, parameter sets are randomly generated from the joint posterior distribution. These are then used to simulate future paths of the time series. This bundle of many possible realizations is used to project the future in several ways. Highest probability forecast regions are formed and portrayed with computer graphics. The predictive density's shape is explored. Finally, we discuss a method that allows the analyst to subjectively modify the posterior distribution on the parameters and produce alternate forecasts.
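The "sampling the future" idea translates almost directly into code; the sketch below does it for an AR(1), with stand-in posterior draws for (phi, sigma^2) rather than the article's exact ARIMA posterior.

```python
# Sampling the future: draw parameter sets from the joint posterior, simulate a
# future path for each draw, and summarise the bundle of realizations. The
# normal/inverse-gamma posterior draws below are a stand-in assumption.
import numpy as np

rng = np.random.default_rng(7)
n_draws, horizon, y_last = 2000, 12, 1.3

# Stand-in joint posterior draws for (phi, sigma^2) of an AR(1).
phi_draws = rng.normal(0.7, 0.05, size=n_draws)
sigma2_draws = 1.0 / rng.gamma(shape=50.0, scale=1.0 / 45.0, size=n_draws)

paths = np.empty((n_draws, horizon))
for i, (phi, s2) in enumerate(zip(phi_draws, sigma2_draws)):
    y = y_last
    for h in range(horizon):
        y = phi * y + rng.normal(0.0, np.sqrt(s2))   # one simulated step ahead
        paths[i, h] = y

# Summarise the bundle with pointwise quantiles (the article forms highest
# probability forecast regions and displays the paths graphically).
lower, median, upper = np.quantile(paths, [0.05, 0.5, 0.95], axis=0)
print("h=1  90% interval:", (lower[0].round(2), upper[0].round(2)))
print("h=12 90% interval:", (lower[-1].round(2), upper[-1].round(2)))
```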

17.
This article is concerned with how the bootstrap can be applied to study conditional forecast error distributions and construct prediction regions for future observations in periodic time-varying state-space models. We first derive an algorithm for assessing the precision of quasi-maximum likelihood estimates of the parameters. This algorithm is then exploited to numerically evaluate the conditional forecast accuracy of a periodic time series model expressed in state-space form. We propose a method which requires the backward, or reverse-time, representation of the model for assessing conditional forecast errors. Finally, the small-sample properties of the proposed procedures are investigated in simulation studies, and we illustrate the results by applying the proposed method to a real time series.
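A plain residual bootstrap for multi-step AR(1) prediction intervals gives the general flavour of bootstrapping conditional forecast error distributions; it is much simpler than the article's periodic state-space setting and backward-representation scheme, and the simulated series and horizon are assumptions.

```python
# Residual bootstrap for multi-step AR(1) prediction intervals: resample
# centred residuals and propagate them through the fitted dynamics.
import numpy as np

rng = np.random.default_rng(8)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.6 * y[t - 1] + rng.normal()

# Fit AR(1) by least squares and collect centred residuals.
phi_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
resid = y[1:] - phi_hat * y[:-1]
resid -= resid.mean()

horizon, n_boot = 8, 2000
paths = np.empty((n_boot, horizon))
for b in range(n_boot):
    y_future = y[-1]
    for h in range(horizon):
        # Propagate with the estimated coefficient and a resampled residual.
        y_future = phi_hat * y_future + rng.choice(resid)
        paths[b, h] = y_future

lower, upper = np.quantile(paths, [0.025, 0.975], axis=0)
print("95% prediction interval at h=4:", (lower[3].round(2), upper[3].round(2)))
```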

18.
This paper proposes a linear mixed model (LMM) with spatial effects, trend, seasonality and outliers for spatio-temporal time series data. A linear trend, dummy variables for seasonality, a binary method for outliers and a multivariate conditional autoregressive (MCAR) model for spatial effects are adopted. A Bayesian method using Gibbs sampling within Markov chain Monte Carlo is used for parameter estimation. The proposed model is applied to forecast rice and cassava yields, a spatio-temporal data type, in Thailand. The data have been extracted from the Office of Agricultural Economics, Ministry of Agriculture and Cooperatives of Thailand. The proposed model is compared with our previous model, an LMM with MCAR, and a log-transformed LMM with MCAR. Using the mean absolute error criterion, we find that the proposed model is the most appropriate: it fits the data very well in both the fitting part and the validation part for both rice and cassava. It is therefore recommended as a primary model for forecasting these types of spatio-temporal time series data.

19.
Categorical data frequently arise in applications in the Social Sciences. In such applications, the class of log-linear models, based on either a Poisson or (product) multinomial response distribution, is a flexible model class for inference and prediction. In this paper we consider the Bayesian analysis of both Poisson and multinomial log-linear models. It is often convenient to model multinomial or product multinomial data as observations of independent Poisson variables. For multinomial data, Lindley (1964) [20] showed that this approach leads to valid Bayesian posterior inferences when the prior density for the Poisson cell means factorises in a particular way. We develop this result to provide a general framework for the analysis of multinomial or product multinomial data using a Poisson log-linear model. Valid finite population inferences are also available, which can be particularly important in modelling social data. We then focus particular attention on multivariate normal prior distributions for the log-linear model parameters. Here, an improper prior distribution for certain Poisson model parameters is required for valid multinomial analysis, and we derive conditions under which the resulting posterior distribution is proper. We also consider the construction of prior distributions across models, and for model parameters, when uncertainty exists about the appropriate form of the model. We present classes of Poisson and multinomial models, invariant under certain natural groups of permutations of the cells. We demonstrate that, if prior belief concerning the model parameters is also invariant, as is the case in a 'reference' analysis, then the choice of prior distribution is considerably restricted. The analysis of multivariate categorical data in the form of a contingency table is considered in detail. We illustrate the methods with two examples.
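For orientation, the non-Bayesian counterpart of these models is easy to fit; the sketch below fits a main-effects (independence) Poisson log-linear model to a small contingency table with statsmodels, where the table values and variable names are made up.

```python
# Poisson log-linear model for a flattened 2x2 contingency table.
# The counts, factor names, and independence specification are assumptions.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "count":    [43, 57, 62, 38],
    "attitude": ["agree", "disagree", "agree", "disagree"],
    "gender":   ["female", "female", "male", "male"],
})

# Independence (main-effects only) model: log mu = attitude + gender.
result = smf.glm("count ~ attitude + gender", data=data,
                 family=sm.families.Poisson()).fit()
print(result.params.round(3))
print("residual deviance:", round(result.deviance, 2))
```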

20.
We consider a Bayesian deterministically trending dynamic time series model with heteroscedastic error variance, in which there exist multiple structural changes in level, trend and error variance, but the number of change-points and the timings are unknown. For a Bayesian analysis, a truncated Poisson prior and conjugate priors are used for the number of change-points and the distributional parameters, respectively. To identify the best model and estimate the model parameters simultaneously, we propose a new method by sequentially making use of the Gibbs sampler in conjunction with stochastic approximation Monte Carlo simulations, as an adaptive Monte Carlo algorithm. The numerical results are in favor of our method in terms of the quality of estimates.
