Similar Articles
 A total of 20 similar articles were found.
1.
Statistical space–time modelling has traditionally been concerned with separable covariance functions, meaning that the covariance function is a product of a purely temporal function and a purely spatial function. We draw attention to a physical dispersion model which could model phenomena such as the spread of an air pollutant. We show that this model has a non-separable covariance function. The model is well suited to a wide range of realistic problems which will be poorly fitted by separable models. The model operates successively in time: the spatial field at time t + 1 is obtained by 'blurring' the field at time t and adding a spatial random field. The model is first introduced at discrete time steps, and the limit is taken as the length of the time steps goes to 0. This gives a consistent continuous model with parameters that are interpretable in continuous space and independent of sampling intervals. Under certain conditions the blurring must be a Gaussian smoothing kernel. We also show that the model is generated by a stochastic differential equation which has been studied previously by several researchers.
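
A minimal sketch of the discrete-time construction described above: the field at time t + 1 is a Gaussian blur of the field at time t plus an independent spatial innovation field. The function name, grid size, and all parameter values are illustrative, not from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def simulate_blur_process(n=128, steps=50, blur_sigma=1.5,
                          noise_sigma=0.5, noise_scale=3.0):
    """Discrete-time version of the dispersion model: the field at t+1
    is a Gaussian blur of the field at t plus an independent spatially
    correlated innovation field."""
    field = np.zeros((n, n))
    history = []
    for _ in range(steps):
        # Smooth white noise to get a spatially correlated innovation.
        innovation = gaussian_filter(rng.normal(size=(n, n)), noise_scale)
        field = gaussian_filter(field, blur_sigma) + noise_sigma * innovation
        history.append(field.copy())
    return np.stack(history)

fields = simulate_blur_process()
print(fields.shape)  # (50, 128, 128)
```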

2.
The purpose of this paper is to develop a new linear regression model for count data, namely the generalized Poisson–Lindley (GPL) linear model. The GPL linear model is obtained by applying the generalized linear model framework to the GPL distribution. The model parameters are estimated by maximum likelihood. We use the GPL linear model to fit two real data sets and compare it with the Poisson, negative binomial (NB) and Poisson-weighted exponential (P-WE) models for count data. We find that the GPL linear model can fit over-dispersed count data, and that it attains the highest log-likelihood and the smallest AIC and BIC values. Consequently, the GPL linear regression model is a valuable alternative to the Poisson, NB, and P-WE models.
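
A sketch of the benchmarking step only, on simulated over-dispersed counts: the GPL density would need a custom likelihood, so only the Poisson and NB reference fits via statsmodels are shown, compared on log-likelihood, AIC and BIC as in the paper's comparison.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
X = sm.add_constant(rng.normal(size=(n, 2)))
mu = np.exp(X @ np.array([0.5, 0.3, -0.2]))
# Over-dispersed counts: a Poisson mixed over a gamma rate (NB draws).
y = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
nb_fit = sm.NegativeBinomial(y, X).fit(disp=False)

for name, fit in [("Poisson", poisson_fit), ("NegBin", nb_fit)]:
    print(f"{name}: loglik={fit.llf:.1f}  AIC={fit.aic:.1f}  BIC={fit.bic:.1f}")
```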

3.
The generalized Poisson (GP) regression model has been used to model count data that exhibit over-dispersion or under-dispersion. The zero-inflated GP (ZIGP) regression model can additionally handle count data characterized by many zeros. However, the parameters of the ZIGP model cannot easily be used for inference on overall exposure effects. To address this problem, a marginalized ZIGP model is proposed that directly models the population marginal mean count. Its parameters are estimated by maximum likelihood. The regression model is illustrated on three real-life data sets.
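
As a hedged illustration of the ZIGP building block (not the marginalized parameterization the paper proposes), the sketch below writes down the zero-inflated generalized Poisson log-likelihood for an intercept-only model and maximizes it with scipy; the data, starting values and bounds are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def gp_logpmf(y, theta, lam):
    """Generalized Poisson log-pmf: f(y) = theta*(theta+lam*y)^(y-1)
    * exp(-theta-lam*y)/y!  (lam=0 recovers the Poisson)."""
    return (np.log(theta) + (y - 1) * np.log(theta + lam * y)
            - theta - lam * y - gammaln(y + 1))

def zigp_negloglik(params, y):
    theta, lam, logit_pi = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))       # zero-inflation probability
    ll = np.where(
        y == 0,
        np.log(pi + (1 - pi) * np.exp(gp_logpmf(0, theta, lam))),
        np.log(1 - pi) + gp_logpmf(y, theta, lam),
    )
    return -ll.sum()

rng = np.random.default_rng(2)
y = np.where(rng.random(400) < 0.3, 0, rng.poisson(3.0, size=400))
fit = minimize(zigp_negloglik, x0=[1.0, 0.1, 0.0], args=(y,),
               bounds=[(1e-6, None), (0.0, 0.95), (None, None)],
               method="L-BFGS-B")
print(fit.x)  # theta, lambda, logit(pi)
```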

4.
The number of variables in a regression model is often too large and a more parsimonious model may be preferred. Selection strategies (e.g. all-subset selection with various penalties for model complexity, or stepwise procedures) are widely used, but there are few analytical results about their properties. The problems of replication stability, model complexity, selection bias and over-optimistic estimates of the predictive value of a model are discussed, together with several proposals based on resampling methods. The methods are applied to data from a case–control study on atopic dermatitis and a clinical trial comparing two chemotherapy regimens, using a logistic regression and a Cox model. A recent proposal to use shrinkage factors to reduce the bias of parameter estimates caused by model building is extended to parameterwise shrinkage factors and is discussed as a further way to expose models that are too complex. The results from the resampling approaches favour greater simplicity of the final regression model.
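
One way to make the replication-stability idea concrete: refit a selection procedure on bootstrap resamples and record how often each covariate is retained. The sketch below uses an L1-penalized logistic regression as a stand-in for the stepwise procedures discussed in the paper; the data, penalty C and replicate counts are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, p = 300, 10
X = rng.normal(size=(n, p))
beta = np.array([1.0, -0.8, 0.6] + [0.0] * (p - 3))   # 3 true signals
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))

B = 200
inclusion = np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, n)                        # bootstrap resample
    sel = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    sel.fit(X[idx], y[idx])
    inclusion += (sel.coef_.ravel() != 0)

print(np.round(inclusion / B, 2))   # selection frequency per covariate
```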

5.
The theoretical price of a financial option is given by the expectation of its discounted expiry-time payoff. The computation of this expectation depends on the density of the value of the underlying instrument at expiry. This density depends both on the parametric model assumed for the behaviour of the underlying and on the values of parameters within the model, such as the volatility. However, neither the model nor the parameter values are known. Common practice when pricing options is to assume a specific model, such as geometric Brownian motion, and to use point estimates of the model parameters, thereby precisely defining a density function. We explicitly acknowledge the uncertainty of model and parameters by constructing the predictive density of the underlying as an average of model predictive densities, weighted by each model's posterior probability. A model's predictive density is constructed by integrating its transition density against the posterior distribution of its parameters. This is an extension of Bayesian model averaging. Sampling importance resampling and Monte Carlo algorithms implement the computation. The advantage of this method is that, rather than falsely assuming the model and parameter values are known, inherent ignorance is acknowledged and dealt with in a mathematically coherent manner which uses all information from past and current observations to generate and update option prices. Moreover, point estimates for the parameters are unnecessary. We use this method to price a European call option on a share index.
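
A hedged sketch of the parameter-uncertainty half of this idea, under a single geometric Brownian motion model (the paper additionally averages over models): posterior draws of the volatility, from a conjugate inverse-gamma posterior under an assumed zero-mean return model, are propagated into a Monte Carlo price for a European call. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Observed log-returns (simulated stand-in for historical data).
true_sigma, dt = 0.2, 1 / 252
returns = rng.normal(0.0, true_sigma * np.sqrt(dt), size=500)

# Conjugate posterior for sigma^2 under a zero-mean normal likelihood
# and a vague inverse-gamma prior: IG(a0 + n/2, b0 + sum(r^2)/(2*dt)).
a_post = 0.01 + len(returns) / 2
b_post = 0.01 + np.sum(returns**2) / (2 * dt)     # annualised variance
sigma_draws = np.sqrt(b_post / rng.gamma(a_post, 1.0, size=2000))

# Price a European call by Monte Carlo, averaging the discounted payoff
# over the posterior draws rather than plugging in one point estimate.
S0, K, r, T = 100.0, 105.0, 0.03, 0.5
z = rng.normal(size=sigma_draws.shape)
ST = S0 * np.exp((r - 0.5 * sigma_draws**2) * T
                 + sigma_draws * np.sqrt(T) * z)
price = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()
print(f"posterior-averaged call price: {price:.3f}")
```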

6.
The two-part model and Heckman's sample selection model are often used in economic studies that involve analyzing demand for limited dependent variables. This study proposes a simultaneous equation model (SEM) and uses the expectation-maximization algorithm to obtain the maximum likelihood estimates. We then construct a simulation to compare the performance of price-elasticity estimates from the SEM with those from the two-part model and the sample selection model. The simulation shows that the SEM estimates of price elasticity are more precise than those of the sample selection model and the two-part model when the model includes limited dependent variables. Finally, we analyze a real example of cigarette consumption as an application. We find that an increase in cigarette price is associated with a decrease both in the propensity to consume cigarettes and in the amount actually consumed.
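
For contrast with the proposed SEM, here is a minimal sketch of the classical two-part benchmark on simulated cigarette-style data: a logit for whether any amount is consumed, then OLS on log consumption among consumers. Variable names and coefficients are invented, and this is not the paper's EM estimator.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1000
price = rng.normal(5.0, 1.0, n)                 # hypothetical price variable
X = sm.add_constant(price)

# Part 1: whether any cigarettes are consumed (logit).
smoke = rng.binomial(1, 1 / (1 + np.exp(-(2.0 - 0.4 * price))))
part1 = sm.Logit(smoke, X).fit(disp=False)

# Part 2: log amount consumed, among consumers only (OLS).
log_amt = 3.0 - 0.3 * price + rng.normal(0, 0.5, n)
part2 = sm.OLS(log_amt[smoke == 1], X[smoke == 1]).fit()

print("participation effect of price:", part1.params[1])
print("conditional demand effect    :", part2.params[1])
```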

7.
The paper proposes a joint mixture model for non-ignorable drop-out in longitudinal cohort studies of mental health outcomes. The model combines a (non-)linear growth curve model for the time-dependent outcomes and a discrete-time survival model for the drop-out, with random effects shared by the two sub-models. The mixture part of the model accounts for population heterogeneity through latent subgroups of the shared effects, which may lead to different patterns for the growth and the drop-out tendency. A simulation study shows that the joint mixture model provides greater precision in estimating the average slope and the covariance matrix of the random effects. We illustrate its benefits with data from a longitudinal cohort study that characterizes depression symptoms over time but is hindered by non-trivial participant drop-out.
KEYWORDS: latent growth curve; MNAR drop-out; survival analysis; finite mixture model; mental health

8.
A statistical model assuming a preferential attachment network, generated by adding nodes sequentially according to a few simple rules, usually describes real-life networks better than, for example, a model assuming a Bernoulli random graph, in which any two nodes have the same probability of being connected. Therefore, to study the propagation of "infection" across a social network, we propose a network epidemic model that combines a stochastic epidemic model with a preferential attachment model. A simulation study based on the associated Markov chain Monte Carlo algorithm reveals an identifiability issue with the model parameters. Finally, the network epidemic model is applied to a set of online commissioning data.
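
A hedged sketch of the two ingredients being combined, using networkx: a Barabási–Albert preferential attachment graph and a simple discrete-time SIR spread over it. The infection and recovery probabilities are arbitrary, and this forward simulation says nothing about the paper's MCMC inference.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(6)

# Preferential attachment network: each new node attaches to m existing
# nodes with probability proportional to their current degree.
G = nx.barabasi_albert_graph(n=500, m=2, seed=6)

# Simple discrete-time SIR spread over the network.
beta, gamma = 0.1, 0.05            # per-contact infection / recovery prob.
status = {v: "S" for v in G}       # S, I or R
status[0] = "I"
for _ in range(100):
    new_status = dict(status)
    for v in G:
        if status[v] == "S":
            k = sum(status[u] == "I" for u in G[v])   # infected neighbours
            if k and rng.random() < 1 - (1 - beta) ** k:
                new_status[v] = "I"
        elif status[v] == "I" and rng.random() < gamma:
            new_status[v] = "R"
    status = new_status

print({s: sum(v == s for v in status.values()) for s in "SIR"})
```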

9.
In a joint analysis of longitudinal quality of life (QoL) scores and relapse-free survival (RFS) times from a clinical trial on early breast cancer conducted by the Canadian Cancer Trials Group, we observed a complicated trajectory of QoL scores and the existence of long-term survivors. Motivated by this observation, we propose a flexible joint model for the longitudinal measurements and survival times. A partly linear mixed-effects model, approximated by B-splines, captures the complicated but smooth trajectory of the longitudinal measurements, and a semiparametric mixture cure model with a B-spline baseline hazard models the survival times with a cure fraction. The two models are linked by shared random effects to capture the dependence between longitudinal measurements and survival times. A semiparametric inference procedure with an EM algorithm is proposed to estimate the parameters of the joint model. The performance of the proposed procedures is evaluated by simulation studies and through application to the data from the clinical trial that motivated this research.

10.
The Bayesian vector autoregression (BVAR) employment-forecasting approach is generalized using data for the state of Georgia. This study advances previous regional BVAR approaches by (a) incorporating regional input-output coefficients instead of national coefficients, (b) using the coefficients both to specify the prior means in one model and to weight the variances of a Minnesota-type prior in a second model, and (c) including final-demand effects and links to national and world economies. Out-of-sample forecasts produced by the generalized BVAR models are compared to forecasts produced from an autoregressive model, an unconstrained VAR model, and a Minnesota BVAR model.
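
To make the Minnesota-prior idea concrete, here is a minimal sketch assuming a unit error variance and a single common prior tightness: each equation's posterior mean shrinks the own first lag towards a random walk and cross lags towards zero. The paper's models instead centre and weight the prior using regional input-output coefficients, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate a small 3-variable VAR(1) for illustration.
k, T = 3, 120
A_true = np.array([[0.6, 0.1, 0.0], [0.0, 0.5, 0.2], [0.1, 0.0, 0.7]])
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A_true.T + rng.normal(0, 0.5, k)

X, y = Y[:-1], Y[1:]

# Minnesota-type prior: own first lag shrunk towards 1 (random walk),
# cross lags towards 0, with tightness lam controlling prior variances.
lam = 0.2
prior_mean = np.eye(k)
A_post = np.empty((k, k))
for i in range(k):
    V_inv = np.diag(np.full(k, 1.0 / lam**2))          # prior precision
    # Posterior mean under unit noise variance (a simplifying assumption):
    # (X'X + V^-1)^-1 (X'y + V^-1 m).
    A_post[i] = np.linalg.solve(X.T @ X + V_inv,
                                X.T @ y[:, i] + V_inv @ prior_mean[i])
print(np.round(A_post, 2))
```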

11.
We propose a random partition model that implements prediction with many candidate covariates and interactions. The model is based on a modified product partition model that includes a regression on covariates by favouring homogeneous clusters in terms of these covariates. Additionally, the model allows for a cluster-specific choice of the covariates that are included in this evaluation of homogeneity. The variable selection is implemented by introducing a set of cluster-specific latent indicators that include or exclude covariates. The proposed model is motivated by an application to predicting mortality in an intensive care unit in Lisboa, Portugal.

12.
Likelihood computation in spatial statistics requires accurate and efficient calculation of the normalizing constant (i.e. partition function) of the Gibbs distribution of the model. Two available methods to calculate the normalizing constant by Markov chain Monte Carlo methods are compared by simulation experiments for an Ising model, a Gaussian Markov field model and a pairwise interaction point field model.
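
As a toy illustration of the quantity at stake, the sketch below computes the log normalizing constant of a tiny Ising model exactly by enumeration and compares it with a naive uniform-sampling Monte Carlo estimate via the identity Z = 2^N E_uniform[exp(-beta*H)]. The MCMC-based estimators the paper compares, which scale to realistic lattices, are more involved and not reproduced here.

```python
import numpy as np
from itertools import product

# Tiny 3x3 Ising model so the partition function can be enumerated
# exactly and compared with a simple Monte Carlo estimate.
n, beta = 3, 0.4

def energy(s):
    s = s.reshape(n, n)
    # Nearest-neighbour interactions with free boundary, H = -sum s_i s_j.
    return -(np.sum(s[:, :-1] * s[:, 1:]) + np.sum(s[:-1, :] * s[1:, :]))

# Exact: sum exp(-beta*H) over all 2^9 configurations.
states = np.array(list(product([-1, 1], repeat=n * n)))
logZ_exact = np.log(np.sum(np.exp(
    -beta * np.array([energy(s) for s in states]))))

# Monte Carlo: Z = 2^N * E_uniform[exp(-beta*H)].
rng = np.random.default_rng(8)
draws = rng.choice([-1, 1], size=(20_000, n * n))
logZ_mc = n * n * np.log(2) + np.log(np.mean(np.exp(
    -beta * np.array([energy(s) for s in draws]))))

print(f"exact logZ = {logZ_exact:.4f}, MC estimate = {logZ_mc:.4f}")
```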

13.
A nested-error regression model having both fixed and random effects is introduced to estimate linear parameters of small areas. The model is applicable to data having a proportion of domains where the variable of interest cannot be described by a standard linear mixed model. Algorithms and formulas to fit the model, to calculate EBLUP and to estimate mean-squared errors are given. A Monte Carlo simulation experiment is presented to illustrate the gain of precision obtained by using the proposed model and to obtain some practical conclusions. A motivating application to Spanish Labour Force Survey data is also given.

14.
Abstract: The launch of stock index futures in China is imminent; while this gives traders a new investment instrument, it also brings new risks. Building accurate forecasting models for financial time series is one way to pursue returns and hedge risk, and it has long been a focus of academic research. Combining the wavelet transform with support vector regression (SVR), this study proposes a two-stage time-series forecasting model. First, a discrete wavelet frame decomposes the predictor variable into several sub-series at different scales, revealing information hidden within the predictor; these sub-series are then used as predictor variables to build an SVR model. In an empirical study with the opening price of the Nikkei 225 index as the forecasting target and the futures opening price as the predictor variable, the results show that the proposed model forecasts better than both a plain SVR model and a random-walk model. Future work could explore different basis functions.
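
A minimal sketch of the two-stage pipeline, assuming PyWavelets and scikit-learn and an invented stand-in series (the paper uses Nikkei 225 prices): a stationary wavelet transform approximates the discrete wavelet frame, and its same-length coefficient series feed an SVR one step ahead. Note that the non-causal filtering here would leak future information in a genuine forecasting study, where the transform should be computed causally.

```python
import numpy as np
import pywt
from sklearn.svm import SVR

rng = np.random.default_rng(10)

# Stand-in series for the futures opening prices used as predictors.
t = np.arange(600, dtype=float)
series = np.sin(t / 20) + 0.1 * rng.normal(size=t.size)

# Stage 1: decompose the predictor series into multiscale components
# with a stationary (undecimated) wavelet transform, so every level
# keeps the original length. Length must be divisible by 2**level.
coeffs = pywt.swt(series, "db4", level=3)
features = np.column_stack([c for pair in coeffs for c in pair])

# Stage 2: regress the next-day value on the multiscale features.
X, y = features[:-1], series[1:]
split = 500
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```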

15.
Mixed models are powerful tools for the analysis of clustered data, and many extensions of the classical linear mixed model with normally distributed response have been established. As with all parametric (P) models, correctness of the assumed model is critical for the validity of the ensuing inference. An incorrectly specified P mean model may be improved by using a local, or nonparametric (NP), model. Two local models are proposed by a pointwise weighting of the marginal and conditional variance–covariance matrices. However, NP models tend to fit irregularities in the data and may produce fits with high variance. Model robust regression techniques estimate the mean response as a convex combination of a P and an NP model fit to the data. This is a semiparametric method by which incomplete or incorrectly specified P models can be improved by adding an appropriate amount of the NP fit. We compare the approximate integrated mean squared error of the P, NP, and mixed model robust methods via a simulation study and apply these methods to two real data sets: the monthly wind speed data from counties in Ireland and the engine speed data.

16.
We propose a new cure rate survival model by assuming that the initial number of competing causes of the event of interest follows a Poisson distribution and that the time to event has the odd log-logistic generalized half-normal distribution. This survival model admits a realistic interpretation of the biological mechanism of the event of interest. We estimate the model parameters by maximum likelihood. Various simulation scenarios are examined for different sample sizes. We propose diagnostics and residual analysis to verify the model assumptions. The potential of the new cure rate model is illustrated by means of a real data set.
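
A sketch of the Poisson cure rate (promotion-time) structure described above: with a Poisson(theta) number of latent causes and cause-specific event-time cdf F, the population survival is exp(-theta*F(t)) and the cure fraction is exp(-theta). A Weibull F stands in here for the odd log-logistic generalized half-normal to keep the likelihood short; data and starting values are simulated.

```python
import numpy as np
from scipy.optimize import minimize

def negloglik(params, t, delta):
    """Promotion-time cure model with Weibull(k, lam) latent event times:
    S_pop(t) = exp(-theta*F(t)), f_pop(t) = theta*f(t)*exp(-theta*F(t))."""
    theta, k, lam = np.exp(params)                 # enforce positivity
    F = 1 - np.exp(-((t / lam) ** k))
    logf = np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam) ** k
    ll = (delta * (np.log(theta) + logf - theta * F)
          + (1 - delta) * (-theta * F))
    return -ll.sum()

rng = np.random.default_rng(11)
n, theta_true = 400, 1.2
m = rng.poisson(theta_true, n)                     # latent competing causes
times = np.array([rng.weibull(1.5, mi).min() * 2.0 if mi > 0 else np.inf
                  for mi in m])                    # cured if no causes
censor = rng.uniform(0, 6, n)
t_obs = np.minimum(times, censor)
delta = (times <= censor).astype(float)

fit = minimize(negloglik, x0=np.log([1.0, 1.0, 1.0]), args=(t_obs, delta))
theta_hat = np.exp(fit.x[0])
print("theta, k, lam =", np.round(np.exp(fit.x), 3),
      " cure fraction =", round(float(np.exp(-theta_hat)), 3))
```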

17.
Motivated by a specific problem concerning the relationship between radar reflectance and rainfall intensity, the paper develops a space–time model for use in environmental monitoring applications. The model is cast as a high-dimensional multivariate state space time series model in which the cross-covariance structure is derived from the spatial context of the component series, in such a way that its interpretation is essentially independent of the particular set of spatial locations at which the data are recorded. We develop algorithms for estimating the parameters of the model by maximum likelihood and for making spatial predictions of the radar calibration parameters by using real-time computations. We apply the model to data from a weather radar station in Lancashire, England, and demonstrate through empirical validation the predictive performance of the model.

18.
Lyell, a founder of the science of geology, used a statistical model to describe the changes that had occurred in the earth and its environment. From this model he attempted to establish a time frame for each epoch. This article shows that Lyell's model is equivalent to the classic coupon collector's problem included in many probability texts. Furthermore, it is shown that the time frame deduced by Lyell is inconsistent with the model he was using; the proper time frame consistent with the model is provided. A second model considered by Lyell is also investigated.
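
The coupon collector's expectation is exact and easy to verify: with n equally likely types, the expected number of draws to see them all is n(1 + 1/2 + ... + 1/n). A quick sketch, with an illustrative Monte Carlo check; the link to Lyell's geological time frames is the paper's contribution, not reproduced here.

```python
from fractions import Fraction
import numpy as np

def expected_draws(n):
    """Coupon-collector expectation: n * (1 + 1/2 + ... + 1/n)."""
    return n * sum(Fraction(1, k) for k in range(1, n + 1))

rng = np.random.default_rng(12)

def simulate(n, reps=2000):
    """Average number of uniform draws until all n types are seen."""
    total = 0
    for _ in range(reps):
        seen, draws = set(), 0
        while len(seen) < n:
            seen.add(int(rng.integers(n)))
            draws += 1
        total += draws
    return total / reps

print("exact:", float(expected_draws(50)), " simulated:", simulate(50))
```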

19.
In this paper the exponentiated-Weibull model is modified to allow for the possibility that long-term survivors are present in the data. The modification leads to an exponentiated-Weibull mixture model which encompasses as special cases the exponential and Weibull mixture models typically used to model such data. Inference for the model parameters is considered via maximum likelihood and via Bayesian inference using Markov chain Monte Carlo simulation. Model comparison is considered by using likelihood ratio statistics and also the pseudo-Bayes factor, which can be computed from the generated samples. A data set is analysed for which the exponentiated-Weibull mixture model provides a better fit than the Weibull mixture model. Results of simulation studies are also reported, which show that the likelihood ratio statistic seems to be somewhat deficient for small and moderate sample sizes.

20.
We present a Bayesian analysis of a piecewise linear model constructed by using basis functions which generalizes the univariate linear spline to higher dimensions. Prior distributions are adopted on both the number and the locations of the splines, which leads to a model averaging approach to prediction with predictive distributions that take into account model uncertainty. Conditioning on the data produces a Bayes local linear model with distributions on both predictions and local linear parameters. The method is spatially adaptive and covariate selection is achieved by using splines of lower dimension than the data.
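
To show the basis-function construction in its simplest univariate form, here is a least-squares fit with truncated-linear basis functions at fixed, hand-picked knots; the paper instead places priors on the number and locations of the knots and averages predictions over them. All data are simulated.

```python
import numpy as np

rng = np.random.default_rng(13)
x = np.sort(rng.uniform(0, 10, 200))
y = np.where(x < 4, 0.5 * x, 2.0 - 0.8 * (x - 4)) + rng.normal(0, 0.2, 200)

# Truncated-linear basis: [1, x, (x-k)_+ for each knot k]. Each (x-k)_+
# term lets the slope change at knot k while keeping the fit continuous.
knots = [2.0, 4.0, 6.0, 8.0]
B = np.column_stack([np.ones_like(x), x] +
                    [np.maximum(x - k, 0.0) for k in knots])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
print(np.round(coef, 2))   # intercept, slope, slope change at each knot
```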
