Similar Articles
20 similar articles found.
1.
We propose a Bayesian stochastic search approach to selecting restrictions on multivariate regression models where the errors exhibit deterministic or stochastic conditional volatilities. We develop a Markov chain Monte Carlo (MCMC) algorithm that generates posterior restrictions on the regression coefficients and Cholesky decompositions of the covariance matrix of the errors. Numerical simulations with artificially generated data show that the proposed method is effective in selecting the data-generating model restrictions and improving the forecasting performance of the model. Applying the method to daily foreign exchange rate data, we conduct stochastic search on a VAR model with stochastic conditional volatilities.
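The abstract does not spell out the prior; a common way to implement this kind of Bayesian stochastic search is a spike-and-slab Gibbs sampler over inclusion indicators. The sketch below is a minimal single-equation illustration of that idea (all prior settings, variable names, and the simulated data are assumptions, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data: only the first 2 of 5 predictors matter.
n, p = 200, 5
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.standard_normal(n)

# Spike-and-slab prior: beta_j ~ N(0, tau1) if included, N(0, tau0) otherwise.
sigma2, tau0, tau1, q = 1.0, 0.01, 10.0, 0.5   # error var, spike/slab vars, prior incl. prob.
gamma = np.ones(p, dtype=int)
incl_count = np.zeros(p)
n_iter, burn = 2000, 500

for it in range(n_iter):
    # Draw beta | gamma, y from its conjugate normal full conditional.
    D_inv = np.diag(1.0 / np.where(gamma == 1, tau1, tau0))
    V = np.linalg.inv(X.T @ X / sigma2 + D_inv)
    m = V @ X.T @ y / sigma2
    beta = rng.multivariate_normal(m, V)
    # Draw each gamma_j | beta_j from a Bernoulli via the spike/slab density ratio.
    for j in range(p):
        l1 = q * np.exp(-beta[j] ** 2 / (2 * tau1)) / np.sqrt(tau1)
        l0 = (1 - q) * np.exp(-beta[j] ** 2 / (2 * tau0)) / np.sqrt(tau0)
        gamma[j] = rng.random() < l1 / (l0 + l1)
    if it >= burn:
        incl_count += gamma

incl_prob = incl_count / (n_iter - burn)   # posterior inclusion probabilities
```

The posterior inclusion probabilities concentrate near 1 for the active predictors and stay low for the inactive ones, which is the "selecting restrictions" step the abstract describes.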

2.
A general four-parameter growth curve is presented as a model for the growth curve of a group of mice for which averaged weights of the group are available. Several data sets of mice weights obtained from experiments performed at the National Center for Toxicological Research are analyzed. The results are compared with traditional models for growth curves. Both additive and multiplicative error models are analyzed. It is shown that for these data the four-parameter model gives a much better fit than traditional growth curve models and should be given serious consideration in model fitting.
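The paper's specific four-parameter curve is not reproduced in the abstract; as an illustration, a four-parameter logistic can be fitted to simulated averaged weights by exploiting the fact that, given the shape parameters (b, c), the asymptote parameters (a, d) enter linearly (all values below are made up):

```python
import random

random.seed(42)

# Four-parameter logistic (illustrative stand-in for the paper's model):
# w(t) = d + (a - d) / (1 + (t / c) ** b); a = early weight, d = adult asymptote.
a_true, b_true, c_true, d_true = 5.0, 4.0, 21.0, 35.0
ts = list(range(1, 61))
weights = [d_true + (a_true - d_true) / (1 + (t / c_true) ** b_true)
           + random.gauss(0, 0.3) for t in ts]

def fit_given_shape(b, c):
    # With (b, c) fixed, y ~ a*w + d*(1-w) is linear least squares in (a, d).
    w = [1.0 / (1.0 + (t / c) ** b) for t in ts]
    s11 = sum(wi * wi for wi in w)
    s12 = sum(wi * (1 - wi) for wi in w)
    s22 = sum((1 - wi) ** 2 for wi in w)
    r1 = sum(wi * yi for wi, yi in zip(w, weights))
    r2 = sum((1 - wi) * yi for wi, yi in zip(w, weights))
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    d = (s11 * r2 - s12 * r1) / det
    sse = sum((yi - a * wi - d * (1 - wi)) ** 2 for wi, yi in zip(w, weights))
    return a, d, sse

# Grid-search the nonlinear shape parameters, solve the linear ones exactly.
best = None
for b in [x / 2 for x in range(2, 17)]:   # b grid: 1.0 .. 8.0
    for c in range(10, 41):               # c grid: 10 .. 40 days
        a, d, sse = fit_given_shape(b, c)
        if best is None or sse < best[-1]:
            best = (a, b, c, d, sse)
a_hat, b_hat, c_hat, d_hat, _ = best
```

The separable structure (grid over two parameters, closed-form solve for the other two) is a standard trick for small growth-curve fits and keeps the sketch dependency-free.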

3.
Attempts to improve the performance of classical growth curve functions have met with limited success. Construction industry projects highlighted the need to improve deterministic models rather than the stochastic methodologies, which are nearly always based on the former. New concepts (changed for the first time since 1825) are formulated and used to generate multi-component deterministic models. Six highly diverse case studies, of which three are presented, were used to test one model and its autocorrelation form. Trial forecast standard errors showed a drop of 50% when compared to classical and stochastic models. Among the by-products of this work are uses of normalisation, scaling and a simple statistical procedure to estimate linear constants. A further consequence of the new concepts threw light on the problem of predicting a consumption process in marketing. The major implications of this research show the importance of the new concepts and the diversification of the fields of study on deterministic modelling, and also the need to reappraise the functional interface with many of the underlying processes of growth.

4.
Classification error can lead to substantial biases in the estimation of gross flows from longitudinal data. We propose a method to adjust flow estimates for bias, based on fitting separate multinomial logistic models to the classification error probabilities and the true state transition probabilities using values of auxiliary variables. Our approach has the advantages that it does not require external information on misclassification rates, it permits the identification of factors that are related to misclassification and true transitions and it does not assume independence between classification errors at successive points in time. Constraining the prediction of the stocks to agree with the observed stocks protects against model misspecification. We apply the approach to data on women from the Panel Study of Income Dynamics with three categories of labour force status. The model fitted is shown to have interpretable coefficient estimates and to provide a good fit. Simulation results indicate good performance of the model in predicting the true flows and robustness against departures from the model postulated.

5.
韩本三, 曹征, 黎实. 《统计研究》 (Statistical Research), 2012, 29(7): 81-85
This paper extends the RESET test to the specification of binary-choice panel data models, examining specification tests for the fixed-effects Probit and Logit models, including tests for heteroscedasticity, omitted variables and distributional misspecification. Simulation results show that the RESET specification test for the Logit model displays good size and power, whereas the RESET test for the Probit model may perform poorly in power in some respects, possibly owing to the choice of estimation method. Overall, however, the RESET test remains a good choice for the specification testing of binary-choice panel data models.

6.
Marginalised models, also known as marginally specified models, have recently become a popular tool for the analysis of discrete longitudinal data. Although powerful, these models involve complex constraint equations and model-fitting algorithms, and publicly available software to fit them has been lacking. In this paper, we propose a three-level marginalised model for the analysis of multivariate longitudinal binary outcomes. The implicit function theorem is introduced to solve the marginal constraint equations approximately but explicitly, and the probit link enables direct solutions to the convolution equations. Parameters are estimated by maximum likelihood via a Fisher scoring algorithm. A simulation study is conducted to examine the finite-sample properties of the estimator. We illustrate the model with an application to data from the Iowa Youth and Families Project. The R package pnmtrem is prepared to fit the model.

7.
This paper focuses on the analysis of errors between a flight trajectory prediction model and flight data. A novel stochastic flight prediction model is compared with the popular fly-by and fly-over turn models. The propagated error is measured using either spatial coordinates or angles. Depending on the case, the distribution of error is estimated and confidence bounds for the linear and directional mean are provided for all three stochastic flight models.
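The "directional mean" for angular errors cannot be a plain arithmetic average, because headings wrap around at 0/360 degrees. A minimal sketch of the standard resultant-vector definition (the heading values are made up):

```python
import math

def directional_mean(angles_deg):
    """Mean direction via the resultant vector; plain averaging fails
    at the 0/360 wrap-around (e.g. mean of 350 and 10 is 0, not 180)."""
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(s, c)) % 360.0

headings = [350.0, 355.0, 5.0, 10.0]      # illustrative flight headings
mean_heading = directional_mean(headings)  # near 0 deg, not the naive 180
```

Confidence bounds for a directional mean are built the same way, from the length of the resultant vector rather than from an ordinary standard deviation.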

8.
This paper considers two types of chaotic map time series models, including the well-known tent, logistic and binary-shift maps as special cases; these are called curved tent and curved binary families. Deterministic behaviour is investigated by invariant distributions, Lyapunov exponents, and by serial dependency. Stochastic time reversal of the families is shown to produce models which have a broader range of stochastic and chaotic properties than their deterministic counterparts. The marginal distributions may have concentrations and restricted supports and are shown to be a non-standard class of invariant distribution. Dependency is generally weaker with the reversed stochastic models. The work gives a broad statistical account of deterministic and stochastically reversed map models, such as are emerging in random number generation, communication systems and cryptography.

9.
Forecasting future snow depths is useful for many applications, such as road safety, winter sport activities, avalanche risk assessment and hydrology. Motivated by the lack of statistical forecast models for snow depth, in this paper we present a set of models to fill this gap. First, we present a model for short-term forecasts that assumes reliable weather forecasts of air temperature and precipitation are available. The covariates enter the model nonlinearly, following basic physical principles of snowfall, snow aging and melting. Because of the large number of observations with snow depth equal to zero, we use a zero-inflated gamma regression model, as is common in similar applications such as precipitation modelling. We also produce long-term forecasts of snow depth, extending much further into the future than traditional weather forecasts of temperature and precipitation. The long-term forecasts are based on fitting models to historic time series of precipitation, temperature and snow depth. We fit the models to data from six locations in Norway with different climatic and vegetation properties. Forecasting five days into the future, the results show that, given reliable weather forecasts of temperature and precipitation, the absolute forecast errors were between 3 and 7 cm across the Norwegian locations. Forecasting three weeks into the future, the forecast errors were between 7 and 16 cm.
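The zero-inflated gamma idea can be sketched in its simplest two-part ("hurdle") form: a Bernoulli part for whether any snow is present, and a gamma part for the positive depths. The simulation parameters and the method-of-moments fit below are illustrative simplifications of the paper's regression model, not its actual covariate structure:

```python
import random

random.seed(7)

# Simulate daily snow depths: a point mass at zero plus a gamma-distributed
# positive part (all parameter values are illustrative).
p_snow, shape_true, scale_true = 0.6, 2.0, 5.0
depths = [random.gammavariate(shape_true, scale_true) if random.random() < p_snow
          else 0.0 for _ in range(5000)]

# Two-part fit: Bernoulli part for P(depth > 0), gamma part for positives.
positives = [d for d in depths if d > 0]
p_hat = len(positives) / len(depths)
mean_pos = sum(positives) / len(positives)
var_pos = sum((d - mean_pos) ** 2 for d in positives) / (len(positives) - 1)
shape_hat = mean_pos ** 2 / var_pos     # gamma method-of-moments estimates
scale_hat = var_pos / mean_pos
expected_depth = p_hat * shape_hat * scale_hat   # overall mean snow depth
```

In the paper both parts additionally depend on temperature and precipitation covariates; here they are intercept-only to keep the mixture structure visible.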

10.
We present a method for fitting parametric probability density models using an integrated square error criterion on a continuum of weighted Lebesgue spaces formed by ultraspherical polynomials. This approach is inherently suitable for creating mixture model representations of complex distributions and allows fully autonomous cluster analysis of high-dimensional datasets. The method is also suitable for extremely large sets, allowing post facto model selection and analysis even in the absence of the original data. Furthermore, the fitting procedure only requires the parametric model to be pointwise evaluable, making it trivial to fit user-defined models through a generic algorithm.

11.
Very often in regression analysis, a particular functional form connecting known covariates and unknown parameters is either suggested by previous work or demanded by theoretical considerations, so that the deterministic part of the responses has a known form. However, the underlying error structure is often less well understood. In this case, the transform-both-sides (TBS) models are appropriate. In this paper we generalize the usual TBS models and develop tests to assess goodness of fit when fitting TBS or generalized TBS (GTBS) models. Parameter estimation is discussed, and tests based on the Cramér-von Mises statistic and the Anderson-Darling statistic are presented with a table suitable for finite-sample applications.
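The TBS idea is to apply the same transformation, typically Box-Cox, to both the response and the known deterministic model, choosing the transformation that makes the errors most nearly normal. A minimal profile-likelihood sketch, assuming a known linear model f(x; beta) = beta*x and multiplicative lognormal error, so the "right" answer is the log transform (lambda = 0); the grids and parameter values are illustrative:

```python
import math, random

random.seed(3)

beta_true = 2.0
xs = [0.5 + 0.2 * i for i in range(100)]
ys = [beta_true * x * math.exp(random.gauss(0, 0.3)) for x in xs]

def boxcox(y, lam):
    return math.log(y) if abs(lam) < 1e-12 else (y ** lam - 1.0) / lam

def profile_loglik(lam):
    # For each lambda, fit beta by grid search on the transformed scale,
    # then evaluate the normal-theory log-likelihood with the Jacobian term.
    best_sse = None
    for k in range(150):
        beta = 1.0 + 0.02 * k                       # beta grid: 1.0 .. 3.98
        sse = sum((boxcox(y, lam) - boxcox(beta * x, lam)) ** 2
                  for x, y in zip(xs, ys))
        if best_sse is None or sse < best_sse:
            best_sse = sse
    n = len(ys)
    return (-0.5 * n * math.log(best_sse / n)
            + (lam - 1.0) * sum(math.log(y) for y in ys))

lams = [k / 10 for k in range(-10, 11)]             # lambda grid: -1.0 .. 1.0
lam_hat = max(lams, key=profile_loglik)             # should land near 0
```

The (lambda - 1) * sum(log y) term is the Jacobian of the Box-Cox transformation; without it, different lambdas would not be comparable on the likelihood scale.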

12.
When there are frequent capture occasions, both semiparametric and nonparametric estimators for the size of an open population have been proposed using kernel smoothing methods. While kernel smoothing methods are mathematically tractable, fitting them to data is computationally intensive. Here, we use smoothing splines in the form of P-splines to provide an alternate less computationally intensive method of fitting these models to capture–recapture data from open populations with frequent capture occasions. We fit the model to capture data collected over 64 occasions and model the population size as a function of time, seasonal effects and an environmental covariate. A small simulation study is also conducted to examine the performance of the estimators and their standard errors.  

13.
A stochastic volatility in mean model with correlated errors using the symmetrical class of scale mixtures of normal distributions is introduced in this article. The scale mixture of normal distributions is an attractive class of symmetric distributions that includes the normal, Student-t, slash and contaminated normal distributions as special cases, providing a robust alternative to estimation in stochastic volatility in mean models in the absence of normality. Using a Bayesian paradigm, an efficient method based on Markov chain Monte Carlo (MCMC) is developed for parameter estimation. The methods developed are applied to analyze daily stock return data from the São Paulo Stock, Mercantile & Futures Exchange index (IBOVESPA). The Bayesian predictive information criteria (BPIC) and the logarithm of the marginal likelihood are used as model selection criteria. The results reveal that the stochastic volatility in mean model with correlated errors and slash distribution provides a significant improvement in model fit for the IBOVESPA data over the usual normal model.
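What makes the scale-mixture class convenient for MCMC is that each member is a normal with a random scale. A minimal sketch of the slash distribution in that representation (nu = 5 is an illustrative choice; this shows the mixture construction only, not the SV-in-mean sampler):

```python
import random

random.seed(11)

def slash_sample(mu=0.0, sigma=1.0, nu=5.0):
    """Draw from a slash distribution via its scale-mixture-of-normals form:
    X = mu + sigma * Z / U**(1/nu), with Z ~ N(0,1) and U ~ Uniform(0,1)."""
    z = random.gauss(0.0, 1.0)
    u = random.random()
    return mu + sigma * z / u ** (1.0 / nu)

xs = [slash_sample() for _ in range(100_000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)
# Theoretical variance: E[U**(-2/nu)] = nu/(nu - 2) = 5/3 for nu = 5,
# heavier-tailed than the standard normal's variance of 1.
```

Conditioning on the latent U turns the slash back into a normal, which is exactly the data augmentation step a Gibbs sampler exploits.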

14.
A general stochastic model for the spread of an epidemic in a closed population is introduced. Each model, a discrete-time Markov chain, has a deterministic counterpart represented by an ordinary differential equation. Our framework covers various epidemic models, including a stochastic version of the Kermack–McKendrick model and the SIS epidemic model. We prove the asymptotic consistency of the stochastic model with respect to its deterministic counterpart, meaning that for a large population the two modelling approaches behave similarly. Moreover, a central limit theorem for the fluctuations of the stochastic model around the deterministic model is also proved.
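The consistency result can be seen numerically: for a large population, a discrete-time stochastic SIS chain tracks its deterministic ODE counterpart. A minimal sketch (rates, population size, and horizon are illustrative):

```python
import random

random.seed(5)

N, beta, gamma = 2000, 0.3, 0.1   # population size, infection and recovery rates
T = 300                            # number of time steps

# Discrete-time stochastic SIS chain: at each step, susceptibles are infected
# independently with probability beta*I/N and infecteds recover with prob gamma.
I = 20
for _ in range(T):
    p_inf = beta * I / N
    new_inf = sum(random.random() < p_inf for _ in range(N - I))
    new_rec = sum(random.random() < gamma for _ in range(I))
    I += new_inf - new_rec
stoch_prev = I / N

# Deterministic counterpart: Euler steps of di/dt = beta*i*(1-i) - gamma*i,
# whose endemic equilibrium is 1 - gamma/beta = 2/3.
i = 20 / N
for _ in range(T):
    i += beta * i * (1 - i) - gamma * i
det_prev = i
```

For large N the stochastic prevalence fluctuates tightly around the deterministic path, and the CLT in the abstract quantifies the size of those fluctuations (order 1/sqrt(N)).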

15.
After initiation of treatment, HIV viral load has multiphasic changes, which indicates that the viral decay rate is a time-varying process. Mixed-effects models with different time-varying decay rate functions have been proposed in the literature. However, there are two unresolved critical issues: (i) it is not clear which model is more appropriate for practical use, and (ii) the model random errors are commonly assumed to follow a normal distribution, which may be unrealistic and can obscure important features of within- and among-subject variations. Because asymmetry of HIV viral load data is still noticeable even after transformation, it is important to use a more general distribution family that enables the unrealistic normal assumption to be relaxed. We developed skew-elliptical (SE) Bayesian mixed-effects models by considering the model random errors to have an SE distribution. We compared the performance among five SE models that have different time-varying decay rate functions. For each model, we also contrasted the performance under different model random error assumptions such as normal, Student-t, skew-normal, or skew-t distribution. Two AIDS clinical trial datasets were used to illustrate the proposed models and methods. The results indicate that the model with a time-varying viral decay rate that has two exponential components is preferred. Among the four distribution assumptions, the skew-t and skew-normal models provided a better fit to the data than the normal or Student-t models, suggesting that it is important to assume a model with a skewed distribution in order to achieve reasonable results when the data exhibit skewness.
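The preferred two-exponential-component model implies a decay rate that starts near the fast first-phase rate and settles at the slow second-phase rate. A minimal sketch of that time-varying rate, -V'(t)/V(t), for a biexponential viral-load curve (all parameter values are illustrative, not estimates from the trials):

```python
import math

# Biexponential viral-load model: V(t) = p1*exp(-l1*t) + p2*exp(-l2*t),
# with a fast first-phase rate l1 and a slow second-phase rate l2.
p1, l1, p2, l2 = 1.0e5, 0.5, 5.0e2, 0.03

def decay_rate(t):
    """Instantaneous decay rate -V'(t)/V(t); time-varying, not constant."""
    num = p1 * l1 * math.exp(-l1 * t) + p2 * l2 * math.exp(-l2 * t)
    den = p1 * math.exp(-l1 * t) + p2 * math.exp(-l2 * t)
    return num / den

early, late = decay_rate(0.0), decay_rate(200.0)   # ~l1 early, ~l2 late
```

The rate is a weighted average of l1 and l2 whose weights shift toward the slow component over time, which is exactly the multiphasic decline the abstract describes.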

16.
Classical inferential procedures induce conclusions from a set of data to a population of interest, accounting for the imprecision resulting from the stochastic component of the model. Less attention is devoted to the uncertainty arising from (unplanned) incompleteness in the data. Through the choice of an identifiable model for non-ignorable non-response, one narrows the possible data-generating mechanisms to the point where inference only suffers from imprecision. Some proposals have been made for assessing the sensitivity to these modelling assumptions; many are based on fitting several plausible but competing models. For example, we could assume that the missing data are missing at random in one model, and then fit an additional model where non-random missingness is assumed. On the basis of data from a Slovenian plebiscite, conducted in 1991, to prepare for independence, it is shown that such an ad hoc procedure may be misleading. We propose an approach which identifies and incorporates both sources of uncertainty in inference: imprecision due to finite sampling and ignorance due to incompleteness. A simple sensitivity analysis considers a finite set of plausible models. We take this idea one step further by considering more degrees of freedom than the data support. This produces sets of estimates (regions of ignorance) and sets of confidence regions (combined into regions of uncertainty).

17.
In this article, the normal inverse Gaussian stochastic volatility model of Barndorff-Nielsen is extended. The resulting model has a more flexible lag structure than the original one. In addition, the second- and fourth-order moments, important properties of a volatility model, are derived. The model can be considered either as a generalized autoregressive conditional heteroscedasticity model with nonnormal errors or as a stochastic volatility model with an inverse Gaussian distributed conditional variance. A simulation study is made to investigate the performance of the maximum likelihood estimator of the model. Finally, the model is applied to stock returns and exchange-rate movements. Its fit to two stylized facts and its forecasting performance is compared with two other volatility models.

18.
State-space models provide an important body of techniques for analyzing time series, but their use requires estimating unobserved states. The optimal estimate of the state is its conditional expectation given the observation histories, and computing this expectation is hard when there are nonlinearities. Existing filtering methods, including sequential Monte Carlo, tend to be either inaccurate or slow. In this paper, we study a nonlinear filter for nonlinear/non-Gaussian state-space models, which uses Laplace's method, an asymptotic series expansion, to approximate the state's conditional mean and variance, together with a Gaussian conditional distribution. This Laplace-Gaussian filter (LGF) gives fast, recursive, deterministic state estimates, with an error which is set by the stochastic characteristics of the model and is, we show, stable over time. We illustrate the estimation ability of the LGF by applying it to the problem of neural decoding and compare it to sequential Monte Carlo both in simulations and with real data. We find that the LGF can deliver superior results in a small fraction of the computing time.
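The core of a Laplace-approximation filter is one cheap, deterministic computation per time step: find the posterior mode by Newton's method and take the curvature there as the posterior variance. A minimal single-step sketch with a Gaussian prior and a Poisson observation with log-rate equal to the state (a common neural-decoding likelihood); the numbers are illustrative, and this is not the paper's full recursive LGF:

```python
import math

# One update step: prior x ~ N(m, v), observation y ~ Poisson(exp(x)).
m, v, y = 0.0, 1.0, 6

def grad(x):
    """First derivative of the log posterior log p(x | y) + const."""
    return -(x - m) / v + y - math.exp(x)

def hess(x):
    """Second derivative of the log posterior (always negative here)."""
    return -1.0 / v - math.exp(x)

# Newton's method for the posterior mode.
x = m
for _ in range(20):
    x -= grad(x) / hess(x)

# Laplace approximation: Gaussian centred at the mode, variance from the Hessian.
post_mean, post_var = x, -1.0 / hess(x)
```

Repeating this update with the approximated posterior as the next step's prior (after propagating through the state dynamics) gives the fast recursive filter the abstract contrasts with sequential Monte Carlo.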

19.
We present iterative estimation procedures, using conditional expectations, to fit linear models when the distributions of the errors are general and the dependent data stem from a finite number of sources, either grouped or non-grouped, with different classification criteria. We propose an initial procedure inspired by the expectation-maximization (EM) algorithm, although it does not coincide with it. The proposed procedure avoids the nested iteration that appears implicitly in the initial procedure and also in the EM algorithm. The stochastic asymptotic properties of the corresponding estimators are analysed.

20.
In this paper we consider the impact of both missing data and measurement errors on a longitudinal analysis of participation in higher education in Australia. We develop a general method for handling both discrete and continuous measurement errors that also allows for the incorporation of missing values and random effects in both binary and continuous response multilevel models. Measurement errors are allowed to be mutually dependent and their distribution may depend on further covariates. We show that our methodology works via two simple simulation studies. We then consider the impact of our measurement error assumptions on the analysis of the real data set.
