Similar Literature
1.
A stochastic epidemic model is defined in which each individual belongs to a household, a secondary grouping (typically school or workplace) and also the community as a whole. Moreover, infectious contacts take place in these three settings according to potentially different rates. For this model, we consider how different kinds of data can be used to estimate the infection rate parameters with a view to understanding what can and cannot be inferred. Among other things we find that temporal data can be of considerable inferential benefit compared with final size data, that the degree of heterogeneity in the data can have a considerable effect on inference for non-household transmission, and that inferences can be materially different from those obtained from a model with only two levels of mixing. We illustrate our findings by analysing a highly detailed dataset concerning a measles outbreak in Hagelloch, Germany.
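As a concrete reading of the three levels of mixing, here is a minimal sketch of the per-susceptible infection hazard; the rate names (beta_h, beta_w, beta_g) and the discrete-time conversion are illustrative assumptions, not the paper's estimation machinery.

```python
import numpy as np

def infection_prob(i_house, n_house, i_work, n_work, i_comm, n_comm,
                   beta_h, beta_w, beta_g, dt=1.0):
    """Per-step infection probability for a susceptible belonging to a
    household, a secondary group (school/workplace) and the community,
    with setting-specific contact rates; i_* is the number infectious
    and n_* the size of each mixing group (illustrative sketch)."""
    lam = (beta_h * i_house / n_house    # household exposure
           + beta_w * i_work / n_work    # school/workplace exposure
           + beta_g * i_comm / n_comm)   # community exposure
    return 1.0 - np.exp(-lam * dt)       # hazard -> probability over dt

# e.g. 2 of 4 infectious at home, 3 of 30 at school, 50 of 10000 overall
p = infection_prob(2, 4, 3, 30, 50, 10000, 0.8, 0.3, 0.1)
```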

2.
To examine childhood cancer diagnoses in the province of Alberta, Canada during 1983–2004, we construct a generalized additive mixed model for the analysis of geographic and temporal variability of cancer ratios. In this model, spatially correlated random effects and temporal components are adopted, and the interaction between space and time is also accommodated. Spatio-temporal models that use conditional autoregressive smoothing across the spatial dimension and a B-spline over the temporal dimension are considered. We apply the method of penalized quasi-likelihood to estimate the model parameters, study the patterns of incidence ratios over time, and identify areas with consistently high ratio estimates as areas for potential further investigation.
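A minimal sketch of the spatial ingredient only, assuming the common intrinsic CAR precision tau*(D - W); the function name and the toy adjacency matrix are illustrative, and the temporal B-spline part is omitted.

```python
import numpy as np

def car_precision(W, tau=1.0):
    """Intrinsic conditional autoregressive (CAR) prior precision
    tau * (D - W) for spatially correlated regional random effects,
    where W is a 0/1 adjacency matrix and D the diagonal matrix of
    neighbour counts."""
    W = np.asarray(W, dtype=float)
    return tau * (np.diag(W.sum(axis=1)) - W)

# three regions in a line (regions 1-2 and 2-3 adjacent)
W = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
Q = car_precision(W, tau=2.0)
```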

3.
Hypothermia, induced by reducing core body temperature, is a therapeutic tool used to prevent brain damage resulting from physical trauma. However, hypothermia slows all physiological systems, which can increase the risk of mortality, so quantifying the transition of core body temperature into early hypothermia is of great clinical interest. Conceptually, core body temperature may exhibit either a gradual or an abrupt transition. Bent-cable regression is an appealing statistical tool for such data because of its flexibility and readily interpretable regression coefficients: it handles, within a single framework, data traditionally modelled by low-order polynomials (for gradual transition) or piecewise linear changepoint models (for abrupt change). We consider a rat model to quantify the temporal trend of core body temperature, primarily to address the question: what is the critical time point associated with a breakdown in the compensatory mechanisms following the start of hypothermia therapy? To this end, we develop a Bayesian modelling framework for bent-cable regression of longitudinal data that simultaneously accounts for gradual and abrupt transitions. Our analysis reveals that (i) about 39% of rats exhibit a gradual transition in core body temperature; (ii) the critical time point is approximately the same regardless of transition type; and (iii) both transition types show a significant increase in core body temperature followed by a significant decrease.
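The bent-cable mean function itself has a standard closed form: a linear incoming phase, a quadratic bend of half-width gamma centred at the changepoint tau, and a linear outgoing phase. The sketch below shows only that function, with illustrative parameter values; the paper embeds it in a Bayesian longitudinal model.

```python
import numpy as np

def bent_cable(t, b0, b1, b2, tau, gamma):
    """Bent-cable mean: b0 + b1*t + b2*q(t), where q is zero before the
    bend, quadratic inside [tau - gamma, tau + gamma], and linear after.
    gamma > 0 gives a gradual transition; gamma -> 0 recovers the abrupt
    piecewise linear changepoint model."""
    t = np.asarray(t, dtype=float)
    q = np.where(t < tau - gamma, 0.0,
                 np.where(t > tau + gamma, t - tau,
                          (t - tau + gamma) ** 2 / (4.0 * gamma)))
    return b0 + b1 * t + b2 * q

# temperature drifting up, then bending downward around tau = 60 minutes
y = bent_cable(np.arange(120), b0=37.0, b1=0.01, b2=-0.03, tau=60.0, gamma=15.0)
```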

4.
Recent changes in European family dynamics are often linked to common latent trends of economic and ideational change. Using Bayesian factor analysis, we extract three latent variables from eight socio-demographic indicators related to family formation, dissolution, and the gender system, collected for 19 European countries at four time points (1970, 1980, 1990, 1998). The flexibility of the Bayesian approach allows us to introduce an innovative temporal factor model, adding a temporal dimension to traditional factor analysis. The underlying structure of the proposed Bayesian factor model reflects our idea of an autoregressive pattern in the latent variables across adjacent time periods. The results we obtain are consistent with current interpretations of European demographic trends.
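A minimal simulation sketch of the autoregressive latent-factor idea, assuming a simple AR(1) law f_t = phi * f_(t-1) + eta_t and Gaussian measurement noise; the dimensions match the abstract (eight indicators, three factors, four periods), but the loading values and noise scale are illustrative.

```python
import numpy as np

def simulate_ar_factors(loadings, phi, n_periods, rng=None):
    """Latent factors evolving as AR(1) across adjacent periods, mapped
    to observed indicators through a loading matrix:
    f_t = phi * f_{t-1} + eta_t,   x_t = loadings @ f_t + noise."""
    rng = np.random.default_rng(rng)
    p, k = loadings.shape
    f = np.zeros((n_periods, k))
    x = np.zeros((n_periods, p))
    for t in range(n_periods):
        prev = f[t - 1] if t > 0 else np.zeros(k)
        f[t] = phi * prev + rng.standard_normal(k)
        x[t] = loadings @ f[t] + 0.3 * rng.standard_normal(p)
    return f, x

f, x = simulate_ar_factors(np.full((8, 3), 0.5), phi=0.7, n_periods=4, rng=0)
```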

5.
A dynamic treatment regime is a list of decision rules, one per time interval, for how the level of treatment will be tailored through time to an individual's changing status. The goal of this paper is to use experimental or observational data to estimate decision regimes that result in a maximal mean response. To explicate our objective and to state the assumptions, we use the potential outcomes model. The proposed method makes smooth parametric assumptions only on quantities that are directly relevant to the goal of estimating the optimal rules. We illustrate the methodology via a small simulation.

6.
A spatiotemporal model for Mexico City ozone levels
We consider hourly readings of concentrations of ozone over Mexico City and propose a model for spatial as well as temporal interpolation and prediction. The model is based on a time-varying regression of the observed readings on air temperature. Such a regression requires interpolated values of temperature at locations and times where readings are not available; these are obtained from a time-varying spatiotemporal model that is coupled to the model for the ozone readings. Two location-dependent harmonic components are added to account for the main periodicities that ozone presents during a given day and that are not explained by the covariate. The model incorporates spatial covariance structure for the observations and for the parameters that define the harmonic components. Using the dynamic linear model framework, we show how to compute smoothed means and predictive values for ozone. We illustrate the methodology on data from September 1997.
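A minimal sketch of the harmonic regressors for within-day periodicity, assuming sin/cos pairs at the daily frequency and its first overtone; in the paper the harmonic coefficients are location dependent and evolve within a dynamic linear model, which this sketch does not attempt.

```python
import numpy as np

def daily_harmonics(hour, n_harmonics=2):
    """Sin/cos regressors capturing the main within-day periodicities of
    hourly readings: frequency j cycles per 24 h for j = 1..n_harmonics."""
    hour = np.asarray(hour, dtype=float)
    cols = []
    for j in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * j * hour / 24.0
        cols += [np.sin(w), np.cos(w)]
    return np.column_stack(cols)

X = daily_harmonics(np.arange(24))   # 24 hourly design rows, 4 columns
```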

7.
We consider the estimation of a large number of GARCH models, of the order of several hundred. Our interest lies in identifying common structures in the volatility dynamics of the univariate time series. To do so, we classify the series into an unknown number of clusters; within a cluster, the series share the same model and the same parameters, so each cluster contains similar series. We do not know a priori which series belongs to which cluster. The model is a finite mixture of distributions in which the component weights are unknown parameters and each component distribution has its own conditional mean and variance. Inference is Bayesian, using data augmentation techniques. Simulations and an illustration using data on U.S. stocks are provided.
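A minimal sketch of the resulting marginal likelihood for one series, assuming Gaussian GARCH(1,1) components; the paper instead samples latent cluster labels by data augmentation rather than evaluating this mixture directly, so this illustrates the model, not the inference.

```python
import numpy as np

def garch11_loglik(y, omega, alpha, beta):
    """Gaussian GARCH(1,1) log-likelihood:
    sigma2_t = omega + alpha * y_{t-1}**2 + beta * sigma2_{t-1}."""
    y = np.asarray(y, dtype=float)
    sigma2 = np.empty_like(y)
    sigma2[0] = np.var(y)                       # common initialisation
    for t in range(1, len(y)):
        sigma2[t] = omega + alpha * y[t - 1] ** 2 + beta * sigma2[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * sigma2) + y ** 2 / sigma2)

def mixture_loglik(y, eta, params):
    """log sum_k eta_k * L_k(y): cluster k has weight eta_k and its own
    shared GARCH parameters (omega_k, alpha_k, beta_k)."""
    comps = [np.log(e) + garch11_loglik(y, *p) for e, p in zip(eta, params)]
    return np.logaddexp.reduce(comps)
```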

8.
Time-varying parameter models with stochastic volatility are widely used to study macroeconomic and financial data. These models are almost exclusively estimated using Bayesian methods. A common practice is to focus on prior distributions that themselves depend on relatively few hyperparameters such as the scaling factor for the prior covariance matrix of the residuals governing time variation in the parameters. The choice of these hyperparameters is crucial because their influence is sizeable for standard sample sizes. In this article, we treat the hyperparameters as part of a hierarchical model and propose a fast, tractable, easy-to-implement, and fully Bayesian approach to estimate those hyperparameters jointly with all other parameters in the model. We show via Monte Carlo simulations that, in this class of models, our approach can drastically improve on using fixed hyperparameters previously proposed in the literature. Supplementary materials for this article are available online.

9.
GARCH models capture most of the stylized facts of financial time series and have been widely used to analyse discrete-time financial series. In recent years, continuous-time models built on discrete GARCH models have also been proposed to deal with non-equally spaced observations, such as the COGARCH model based on Lévy processes. In this paper, we propose using the data cloning methodology to obtain estimators of GARCH and COGARCH model parameters. Data cloning uses a Bayesian approach to obtain approximate maximum likelihood estimators, avoiding numerical maximization of the pseudo-likelihood function. After a simulation study for both GARCH and COGARCH models using data cloning, we apply the technique to model the behaviour of some NASDAQ time series.
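The core of data cloning is generic: pretend the data were observed K times, so the MCMC target is prior times likelihood**K, which concentrates at the maximum likelihood estimator as K grows. Below is a minimal sketch under that generic setup (toy model, illustrative names); the paper applies the idea to GARCH and COGARCH pseudo-likelihoods.

```python
import numpy as np

def cloned_logpost(theta, y, loglik, logprior, K):
    """Data-cloning target: with K clones the 'posterior' is proportional
    to prior * likelihood**K.  The posterior mean then approximates the
    MLE, and K times the posterior variance approximates the inverse
    Fisher information."""
    return logprior(theta) + K * loglik(theta, y)

# toy example: normal mean with unit variance and a flat prior
y = np.array([0.8, 1.3, 0.4, 1.1])
loglik = lambda mu, y: -0.5 * np.sum((y - mu) ** 2)
logprior = lambda mu: 0.0
lp = cloned_logpost(1.0, y, loglik, logprior, K=20)
```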

10.
Dependence among defaults, both across assets and over time, is an important characteristic of financial risk. A Bayesian approach to default rate estimation is proposed and illustrated using prior distributions assessed from an experienced industry expert. Two extensions of the binomial model are proposed. The first allows correlated defaults yet remains consistent with Basel II's asymptotic single-factor model. The second adds temporal correlation in default rates through autocorrelation in the systemic factor. Implications for the predictability of default rates are considered: the single factor generates more forecast uncertainty than parameter uncertainty does. A robustness exercise illustrates that the correlation indicated by the data is much smaller than that specified in the Basel II regulations.
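For concreteness, here is a simulation sketch of both extensions under the standard Vasicek/ASRF conditional default probability; the asset correlation rho, the AR(1) persistence phi of the systemic factor, and all other values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def simulate_default_counts(n_obligors, p, rho, n_years, phi=0.0, rng=None):
    """Yearly default counts under a single-factor model.  Z_t is the
    systemic factor; phi > 0 adds AR(1) autocorrelation in Z_t, the
    temporal extension.  Defaults are conditionally independent given
    Z_t with probability Phi((Phi^{-1}(p) - sqrt(rho)*Z_t)/sqrt(1-rho))."""
    rng = np.random.default_rng(rng)
    z = np.empty(n_years)
    z[0] = rng.standard_normal()
    for t in range(1, n_years):
        z[t] = phi * z[t - 1] + np.sqrt(1 - phi ** 2) * rng.standard_normal()
    p_cond = norm.cdf((norm.ppf(p) - np.sqrt(rho) * z) / np.sqrt(1 - rho))
    return rng.binomial(n_obligors, p_cond)

counts = simulate_default_counts(1000, p=0.02, rho=0.12, n_years=20, phi=0.5)
```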

11.
Econometric Reviews, 2012, 31(1): 71–91
This paper proposes a Bayesian semiparametric dynamic Nelson-Siegel model for estimating the density of bond yields. Specifically, we model the distribution of the yield curve factors according to an infinite Markov mixture (iMM). The model allows for time variation in the mean and covariance matrix of the factors in a discrete manner, as opposed to the continuous parameter changes of Time Varying Parameter (TVP) models. Estimating the number of regimes endogenously through the iMM structure leads to an adaptive process that can generate newly emerging regimes over time in response to changing economic conditions, in addition to existing regimes. The potential of the proposed framework is examined using US bond yield data. The semiparametric structure of the factors can handle various forms of non-normality, including fat tails and nonlinear dependence between factors, in a unified way by generating new clusters that capture these characteristics. We document that modeling parameter changes in a discrete manner improves model fit as well as forecasting performance at both short and long horizons relative to fixed-parameter models and to the TVP model with continuous parameter changes, mainly because discrete parameter changes better suit the characteristics of typical low-frequency monthly bond yield data.
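For reference, here is the static Nelson-Siegel curve whose factors (level, slope, curvature) the dynamic model lets vary over time; the loading formulas are standard, while the parameter values below are illustrative.

```python
import numpy as np

def nelson_siegel(maturity, beta1, beta2, beta3, lam):
    """Nelson-Siegel yield: beta1 (level) + beta2 * slope loading
    + beta3 * curvature loading, with decay parameter lam."""
    x = lam * np.asarray(maturity, dtype=float)
    slope = (1.0 - np.exp(-x)) / x
    curvature = slope - np.exp(-x)
    return beta1 + beta2 * slope + beta3 * curvature

maturities = np.array([0.25, 1.0, 2.0, 5.0, 10.0, 30.0])   # years
y = nelson_siegel(maturities, beta1=4.0, beta2=-2.0, beta3=1.5, lam=0.6)
```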

12.
There is by now a substantial literature on spatio-temporal modeling. To date, however, essentially no literature addresses the issue of process change from a certain time. Indeed, for purely time series data the customary change-point formulation involves a mean or level shift, and little work attempts to capture a change in association structure. Part of the difficulty is specifying flexible ways to bridge the association across the change point while still ensuring that a proper joint distribution is defined for all of the data; introducing a spatial component adds further complication. We want to allow for a change point reflecting change in both temporal and spatial association. In this paper we propose a constructive, flexible model formulation through additive specifications. We also demonstrate how computation benefits from the availability of temporal order. Finally, we illustrate with several simulated datasets to examine the capability of the model to detect different types of structural change.

13.
Although Cox proportional hazards regression is the default analysis for time-to-event data, there is typically uncertainty about whether the effects of a predictor are more appropriately characterized by a multiplicative or an additive model. To accommodate this uncertainty, we place a model selection prior on the coefficients in an additive-multiplicative hazards model. This prior assigns positive probability not only to the model that has both additive and multiplicative effects for each predictor, but also to sub-models corresponding to no association, to only additive effects, and to only proportional effects. The additive component of the model is constrained to ensure non-negative hazards, a condition often violated by current methods. After augmenting the data with Poisson latent variables, the prior is conditionally conjugate, and posterior computation can proceed via an efficient Gibbs sampling algorithm. Simulation study results are presented, and the methodology is illustrated using data from the Framingham heart study.
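A minimal sketch of the additive-multiplicative hazard being selected over, in the usual Aalen-plus-Cox form; the function and baseline are illustrative, and the comment records the non-negativity condition the paper constrains.

```python
import numpy as np

def hazard(t, x, beta, gamma, baseline):
    """Additive-multiplicative hazard:
    h(t|x) = x @ beta + baseline(t) * exp(x @ gamma).
    beta = 0 recovers a pure Cox model and gamma = 0 a pure additive
    model -- the sub-models the selection prior assigns positive
    probability.  The additive part x @ beta must be kept non-negative
    so that h(t|x) >= 0."""
    return x @ beta + baseline(t) * np.exp(x @ gamma)

h = hazard(2.0, np.array([1.0, 0.5]),
           beta=np.array([0.10, 0.20]),
           gamma=np.array([0.30, -0.40]),
           baseline=lambda t: 0.05 * t)   # illustrative baseline hazard
```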

14.
Internet traffic data are characterized by some unusual statistical properties, in particular the presence of heavy-tailed variables. A typical model for heavy-tailed distributions is the Pareto distribution, although it is not adequate in many cases. In this article, we consider a mixture of two-parameter Pareto distributions as a model for heavy-tailed data and use a Bayesian approach based on the birth-death Markov chain Monte Carlo algorithm to fit it. We estimate some measures of interest related to the queueing system k-Par/M/1, where k-Par denotes a mixture of k Pareto distributions. Heavy-tailed variables are difficult to model in such queueing systems because of the lack of a simple expression for the Laplace transform (LT); we use a procedure based on recent LT approximation results for the Pareto/M/1 system. We illustrate our approach with both simulated and real data.
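A minimal sketch of the mixture density and an inverse-CDF sampler, assuming the classical two-parameter Pareto(alpha, sigma) with support x >= sigma; the birth-death MCMC fit and the Laplace transform approximation for the queue are not sketched.

```python
import numpy as np

def pareto_mixture_pdf(x, weights, alphas, sigmas):
    """Density of a k-component mixture of Pareto(alpha, sigma) laws,
    each with f(x) = alpha * sigma**alpha / x**(alpha + 1) on x >= sigma."""
    x = np.atleast_1d(x).astype(float)
    dens = np.zeros_like(x)
    for w, a, s in zip(weights, alphas, sigmas):
        dens += w * np.where(x >= s, a * s ** a / x ** (a + 1), 0.0)
    return dens

def sample_pareto_mixture(n, weights, alphas, sigmas, rng=None):
    """Draw component labels, then invert the Pareto CDF:
    x = sigma * u**(-1/alpha) with u uniform on (0, 1)."""
    rng = np.random.default_rng(rng)
    k = rng.choice(len(weights), size=n, p=weights)
    u = rng.uniform(size=n)
    return np.asarray(sigmas)[k] * u ** (-1.0 / np.asarray(alphas)[k])

xs = sample_pareto_mixture(1000, [0.7, 0.3], [2.5, 1.2], [1.0, 5.0], rng=0)
```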

15.
Survival data involving silent events are often subject to interval censoring (the event is known only to occur within a time interval) and to classification errors if a test without perfect sensitivity and specificity is applied. Accounting for the nature of such data plays an important role in estimating the distribution of the time until the event occurs. In this context, we incorporate validation subsets into the parametric proportional hazards model and show that this additional data, combined with Bayesian inference, compensates for the lack of knowledge about test sensitivity and specificity and improves the parameter estimates. The proposed model is evaluated through simulation studies, and the Bayesian analysis is conducted within a Gibbs sampling procedure. The posterior estimates obtained under the validation subset models present lower bias and standard deviation than the scenario with no validation subset or the model that assumes perfect sensitivity and specificity. Finally, we illustrate the usefulness of the new methodology with an analysis of real data on HIV acquisition in female sex workers that has been discussed in the literature.
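The key likelihood ingredient can be written compactly: at each visit, the probability of a positive result mixes true and false positives through the test's sensitivity and specificity. A one-line sketch of that standard relation (names illustrative):

```python
def positive_test_prob(F_t, sens, spec):
    """P(test positive at a visit) = sens * P(event by visit time)
    + (1 - spec) * P(no event yet), i.e. true plus false positives.
    With validation subsets, sens and spec gain their own likelihood
    contributions instead of being fixed, assumed constants."""
    return sens * F_t + (1.0 - spec) * (1.0 - F_t)
```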

16.
This research provides a generalized framework to disaggregate lower-frequency time series and evaluate the disaggregation performance. The proposed framework combines two models in separate stages: a linear regression model to exploit related independent variables in the first stage and a state-space model to disaggregate the residual from the regression in the second stage. For the purpose of providing a set of practical criteria for assessing the disaggregation performance, we measure the information loss that occurs during temporal aggregation while examining what effects take place when aggregating data. To validate the proposed framework, we implement Monte Carlo simulations and provide two empirical studies. Supplementary materials for this article are available online.

17.
Matching and stratification based on confounding factors or propensity scores (PS) are powerful approaches for reducing confounding bias in indirect treatment comparisons. However, implementing these approaches requires pooled individual patient data (IPD). The research presented here was motivated by an indirect comparison between a single-arm trial in acute myeloid leukemia (AML) and two external AML registries of patients receiving current treatments, which serve as the control. For confidentiality reasons, IPD cannot be pooled, so common approaches to adjusting for confounding bias, such as PS matching or stratification, cannot be applied: (1) a model for the PS, for example a logistic model, cannot be fitted without pooling covariate data; and (2) pooling response data may be necessary for some statistical inference after PS matching (e.g., estimating the standard error of the mean difference of matched pairs). We propose a set of approaches that do not require pooling IPD, combining a linear discriminant for matching and stratification with secure multiparty computation for estimating the within-pair sample variance and for calculations involving multiple control sources. The approaches only need to share aggregated data offline, rather than the real-time secure data transfer required by typical secure multiparty computation for model fitting. For survival analysis, we propose an approach based on restricted mean survival time. A simulation study evaluates this approach in several scenarios, in particular with a mixture of continuous and binary covariates, and confirms its robustness and efficiency. A real data example is also provided for illustration.

18.
Autoregressive Moving Average (ARMA) time series model fitting is often based on aggregate data, where parameter estimation plays a key role. We therefore analyze the effect of temporal aggregation on the accuracy of parameter estimation for mixed ARMA and MA models. We derive the expressions required to compute the parameter values of the aggregate models as functions of the basic model parameters, in order to compare their estimation accuracy. A simulation experiment shows that aggregation causes a severe loss of accuracy that increases with the order of aggregation.
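A minimal sketch of the aggregation operator, together with the textbook fact that non-overlapping sums of an AR(1) follow an ARMA(1,1) at the low frequency with AR coefficient phi**m; the simulation values are illustrative.

```python
import numpy as np

def aggregate(x, m):
    """Non-overlapping temporal aggregation: sum m consecutive
    high-frequency observations into one low-frequency value."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // m) * m
    return x[:n].reshape(-1, m).sum(axis=1)

# simulate an AR(1) at the basic frequency, then aggregate by m = 3
rng = np.random.default_rng(0)
phi, n = 0.8, 3000
x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()
x_agg = aggregate(x, 3)   # low-frequency ARMA(1,1) with AR root phi**3
```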

19.
We introduce a semiparametric approach for modelling the effect of concurrent events on an outcome of interest. Concurrency manifests itself as temporal and spatial dependences. By temporal dependence we mean the effect of an event in the past. Modelling this effect is challenging since events arrive at irregularly spaced time intervals. For the spatial part we use an abstract notion of 'feature space' to conceptualize distances among a set of item features. We motivate our model in the context of on-line auctions by modelling the effect of concurrent auctions on an auction's price. Our concurrency model consists of three components: a transaction-related component that accounts for auction design and bidding competition, a spatial component that takes into account similarity between item features and a temporal component that accounts for recently closed auctions. To construct each of these we borrow ideas from spatial and mixed model methodology. The power of this model is illustrated on a large and diverse set of laptop auctions on eBay.com. We show that our model results in superior predictive performance compared with a set of competitor models. The model also allows for new insight into the factors that drive price in on-line auctions and their relationship to bidding competition, auction design, product variety and temporal learning effects.
