Similar Articles
20 similar articles found.
1.
The completely random character of radioactive disintegration provides a strong justification for a Poisson linear model for single-photon emission computed tomography data, which can be used to produce reconstructions of isotope densities, whether by maximum likelihood or Bayesian methods. However, such a model requires the construction of a matrix of weights, which represent the mean rates of arrival at each detector of photons originating from each point within the body space. Two methods of constructing these weights are discussed, and reconstructions from phantom and real data are presented.
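The Poisson model for emission data leads naturally to the multiplicative EM (MLEM) update for maximum-likelihood reconstruction. A minimal NumPy sketch, using a randomly generated weight matrix in place of a real detector geometry (all sizes and values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_det, n_vox = 40, 10                  # detectors, body-space points (toy sizes)
A = rng.random((n_det, n_vox))         # weights: mean arrival rate at each detector
true_lam = rng.random(n_vox) * 5.0     # true isotope density (synthetic)
y = rng.poisson(A @ true_lam)          # observed photon counts

lam = np.ones(n_vox)                   # flat initial image
sens = A.sum(axis=0)                   # per-voxel sensitivity
for _ in range(500):                   # MLEM: multiplicative EM update
    proj = np.maximum(A @ lam, 1e-12)  # predicted counts (clamped away from zero)
    lam = lam * ((A.T @ (y / proj)) / sens)
```

Each iteration multiplies the current image by a back-projected ratio of observed to predicted counts, so the image stays non-negative and the total predicted counts match the total observed counts after every update.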

2.
3.
ABSTRACT

Environmental data are typically indexed in space and time. This work deals with modelling spatio-temporal air quality data when multiple measurements are available for each space-time point. Typically this situation arises when measurements of several response variables are observed at each space-time point, for example, different pollutants or size-resolved data on particulate matter. Such data also arise when a mobile monitoring station moves along a path for a certain period of time: each spatio-temporal point then has a number of measurements of the response variable, observed several times over different locations in a close neighbourhood of the space-time point. We deal with this type of data within a hierarchical Bayesian framework, in which observed measurements are modelled in the first stage of the hierarchy, while the unobserved spatio-temporal process is considered in the following stages. The final model is very flexible: it includes autoregressive terms in time and different structures for the variance-covariance matrix of the errors, and it can handle covariates available at different space-time resolutions. This approach is motivated by the availability of data on urban pollution dynamics: fast measurements of gases and size-resolved particulate matter have been collected using an Optical Particle Counter located in a cabin of a public conveyance that moves on a monorail along a line transect of a town. Urban microclimate information is also available and included in the model. Simulation studies are conducted to evaluate the performance of the proposed model over existing alternatives that do not model the first stage of the hierarchy.
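The first-stage idea, several replicate measurements observed around each point of a latent process, can be sketched in a few lines. This is a time-only toy version with hypothetical variances; the paper's full model adds space, covariates and richer error structures:

```python
import numpy as np

rng = np.random.default_rng(5)
T, R = 200, 5                       # space-time points, replicate measurements each
phi, sig_z, sig_e = 0.8, 1.0, 2.0   # AR(1) latent process and noise scales (assumed)

z = np.zeros(T)                     # latent process (time component only)
for t in range(1, T):
    z[t] = phi * z[t - 1] + rng.normal(0.0, sig_z)

# first stage of the hierarchy: R noisy replicates per space-time point
y = z[:, None] + rng.normal(0.0, sig_e, (T, R))

z_hat = y.mean(axis=1)              # pooling replicates before modelling z
```

Pooling the replicates in the first stage already reduces the measurement noise by a factor of roughly sqrt(R) before the latent process itself is modelled, which is what the simulation comparison in the abstract probes.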

4.
Statistical approaches in quantitative positron emission tomography
Positron emission tomography is a medical imaging modality for producing 3D images of the spatial distribution of biochemical tracers within the human body. The images are reconstructed from data formed through detection of radiation resulting from the emission of positrons from radioisotopes tagged onto the tracer of interest. These measurements are approximate line integrals from which the image can be reconstructed using analytical inversion formulae. However these direct methods do not allow accurate modeling either of the detector system or of the inherent statistical fluctuations in the data. Here we review recent progress in developing statistical approaches to image estimation that can overcome these limitations. We describe the various components of the physical model and review different formulations of the inverse problem. The wide range of numerical procedures for solving these problems are then reviewed. Finally, we describe recent work aimed at quantifying the quality of the resulting images, both in terms of classical measures of estimator bias and variance, and also using measures that are of more direct clinical relevance.

5.
6.
A minimum distance procedure, analogous to maximum likelihood for multinomial data, is employed to fit mixture models to mass-size relative frequencies recorded for some clay soils of southeastern Australia. Log hyperbolic component distributions are considered initially, and it is shown how they can be fitted satisfactorily, at least to ungrouped data, using a generalized EM algorithm. A computationally more convenient model with log skew Laplace components is subsequently shown to suffice, and it is demonstrated how it can be fitted to the data in their original grouped form. Consideration is also given to the provision of standard errors using the idea of a quasi-sample size.
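The EM structure used for such mixture fitting can be illustrated with a plain two-component mixture. Normal components on the log scale stand in here for the log skew Laplace components of the paper, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
# toy ungrouped log-size data: two overlapping components on the log scale
x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(1.5, 0.7, 200)])

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# EM for a two-component mixture; the paper uses different component
# densities and a minimum distance criterion, but the E/M alternation is the same
w, mu, sd = np.array([0.5, 0.5]), np.array([-2.0, 2.0]), np.array([1.0, 1.0])
for _ in range(100):
    dens = w * normal_pdf(x[:, None], mu, sd)    # E-step: weighted densities
    r = dens / dens.sum(axis=1, keepdims=True)   # responsibilities
    n_k = r.sum(axis=0)
    w = n_k / len(x)                             # M-step: mixing weights
    mu = (r * x[:, None]).sum(axis=0) / n_k      # component means
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
```

The generalized EM algorithm of the paper replaces the closed-form M-step with a step that merely increases the criterion, which is useful when the component family has no closed-form updates.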

7.
8.
P. Economou, Statistics, 2013, 47(2), 453–464
Frailty models are often used to describe extra heterogeneity in survival data by introducing an individual random, unobserved effect. The frailty term is usually assumed to act multiplicatively on a baseline hazard function common to all individuals. In order to apply a frailty model, a specific frailty distribution has to be assumed; if at least one of the latent variables is continuous, the frailty must follow a continuous distribution. In this paper, a finite mixture of continuous frailty distributions is used to describe situations in which one or more of the latent variables separate the population under study into two or more subpopulations. Closure properties of the unobserved quantity are given, along with the maximum-likelihood estimates under the most common choices of frailty distribution. The model is illustrated on a set of lifetime data.

9.
Positron emission tomography (PET) imaging can be used to study the effects of pharmacologic intervention on brain function. Partial least squares (PLS) regression is a standard tool that can be applied to characterize such effects throughout the brain volume and across time. We have extended the PLS regression methodology to adjust for covariate effects that may influence spatial and temporal aspects of the functional image data over the brain volume. The extension involves multi-dimensional latent variables, experimental design variables based upon sequential PET scanning, and covariates. An illustration is provided using a sequential PET data set acquired to study the effect of d-amphetamine on cerebral blood flow in baboons. An iterative algorithm is developed and implemented and validation results are provided through computer simulation studies.
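As background for the extension described above, plain single-response PLS regression via the NIPALS algorithm can be sketched as follows. The paper's multi-dimensional latent variables and covariate adjustment are not reproduced; data and coefficients here are synthetic:

```python
import numpy as np

def pls1(X, y, n_comp):
    """Textbook NIPALS PLS regression for a single response variable."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc                     # weight vector from X-y covariance
        w /= np.linalg.norm(w)
        t = Xc @ w                        # latent score
        tt = t @ t
        p = Xc.T @ t / tt                 # X loading
        q = (yc @ t) / tt                 # y loading
        Xc = Xc - np.outer(t, p)          # deflate X and y
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)   # regression coefficients
    return B, X.mean(axis=0), y.mean()

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + 0.5   # exact linear relation, no noise
B, xm, ym = pls1(X, y, n_comp=3)
pred = ym + (X - xm) @ B
```

With as many components as the rank of X and a noiseless linear response, the PLS fit coincides with ordinary least squares, so the prediction recovers y exactly.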

10.
Modelling daily multivariate pollutant data at multiple sites
Summary. This paper considers the spatiotemporal modelling of four pollutants measured daily at eight monitoring sites in London over a 4-year period. Such multiple-pollutant data sets, measured over time at multiple sites within a region of interest, are typical. Here, the modelling was carried out to provide the exposure for a study investigating the health effects of air pollution. Alternative objectives include the design problem of positioning a new monitoring site, or, for regulatory purposes, determining whether environmental standards are being met. In general, analyses are hampered by missing data due, for example, to a particular pollutant not being measured at a site, a monitor being inactive by design (e.g. a 6-day monitoring schedule) or an unreliable or faulty monitor. Data of this type are modelled here within a dynamic linear modelling framework, in which the dependences across time, space and pollutants are exploited. Throughout, the approach is Bayesian, with implementation via Markov chain Monte Carlo sampling.

11.
Modelling count data with overdispersion and spatial effects
In this paper we consider regression models for count data allowing for overdispersion in a Bayesian framework. We account for unobserved heterogeneity in the data in two ways. On the one hand, we consider more flexible models than a common Poisson model, allowing for overdispersion in different ways. In particular, the negative binomial and the generalized Poisson (GP) distribution are addressed, where overdispersion is modelled by an additional model parameter. Further, zero-inflated models, in which overdispersion is assumed to be caused by an excessive number of zeros, are discussed. On the other hand, extra spatial variability in the data is taken into account by adding correlated spatial random effects to the models. This approach allows for an underlying spatial dependency structure, which is modelled using a conditional autoregressive prior based on Pettitt et al. (2002, Stat Comput 12(4):353–367). In an application, the presented models are used to analyse the number of invasive meningococcal disease cases in Germany in the year 2004. Models are compared according to the deviance information criterion (DIC) suggested by Spiegelhalter et al. (2002, J R Stat Soc B 64(4):583–640) and using proper scoring rules (see, for example, Gneiting and Raftery, 2004, Technical Report no. 463, University of Washington). We observe a rather high degree of overdispersion in the data, which is captured best by the GP model when spatial effects are neglected. While the addition of spatial effects to the models allowing for overdispersion gives no or only little improvement, spatial Poisson models with spatially correlated or uncorrelated random effects are to be preferred over all other models according to the considered criteria.
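The generalized Poisson distribution mentioned above handles overdispersion through one extra parameter. A small sketch of its pmf in Consul's parameterization, with illustrative parameter values rather than fitted ones:

```python
import numpy as np
from math import exp, factorial

def gp_pmf(k, theta, lam):
    # Consul's generalized Poisson pmf; lam = 0 recovers Poisson(theta),
    # 0 < lam < 1 gives overdispersion via the single extra parameter lam
    return theta * (theta + k * lam) ** (k - 1) * exp(-(theta + k * lam)) / factorial(k)

theta, lam = 2.0, 0.3                  # illustrative values only
ks = np.arange(81)                     # support truncated where mass is negligible
p = np.array([gp_pmf(int(k), theta, lam) for k in ks])
mean = float((ks * p).sum())
var = float(((ks - mean) ** 2 * p).sum())
# theory: mean = theta/(1-lam), variance = theta/(1-lam)**3 > mean
```

The variance-to-mean ratio is 1/(1-lam)^2, so a single parameter controls how far the model departs from the equidispersed Poisson case.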

12.
A parametric modelling for interval data is proposed, assuming a multivariate Normal or Skew-Normal distribution for the midpoints and log-ranges of the interval variables. The intrinsic nature of the interval variables leads to special structures of the variance–covariance matrix, which is represented by five different possible configurations. Maximum likelihood estimation for both models under all considered configurations is studied. The proposed modelling is then considered in the context of analysis of variance and multivariate analysis of variance testing. To assess the behaviour of the proposed methodology, a simulation study is performed. The results show that, for medium or large sample sizes, the tests have good power and their true significance levels approach the nominal levels when the constraints assumed for the model are respected; for small samples, however, test sizes close to the nominal levels cannot be guaranteed. Applications to Chinese meteorological data in three different regions and to credit card usage variables for different card designations illustrate the proposed methodology.

13.
We investigate the extremal clustering behaviour of stationary time series that possess two regimes, where the switch is governed by a hidden two-state Markov chain. We also suppose that the process is conditionally Markovian in each latent regime. We prove under general assumptions that above high thresholds these models behave approximately as a random walk in one (called dominant) regime and as a stationary autoregression in the other (dominated) regime. Based on this observation, we propose an estimation and simulation scheme to analyse the extremal dependence structure of such models, taking into account only observations above high thresholds. The properties of the estimation method are also investigated. Finally, as an application, we fit a model to high-level exceedances of water discharge data, simulate extremal events from the fitted model, and show that the (model-based) flood peak, flood duration and flood volume distributions match their observed counterparts.

14.
This paper uses a new bivariate negative binomial distribution to model scores in the 1996 Australian Rugby League competition. First, scores are modelled using the home ground advantage but ignoring the actual teams playing. Then a bivariate negative binomial regression model is introduced that takes into account the offensive and defensive capacities of each team. Finally, the 1996 season is simulated using the latter model to determine whether or not Manly did indeed deserve to win the competition.
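One standard construction of a bivariate negative binomial, a shared gamma shock mixing two conditionally independent Poisson scores, can be sketched as follows. The rates and shape below are hypothetical, not the fitted 1996 values:

```python
import numpy as np

rng = np.random.default_rng(3)
# conditionally on a shared gamma shock G with E[G] = 1, the two scores
# are independent Poissons; marginally each is negative binomial and the
# pair is positively correlated
a, mu_home, mu_away = 4.0, 22.0, 18.0   # hypothetical shape and attack rates
n_games = 20000
g = rng.gamma(a, 1.0 / a, n_games)      # shared shock per game
home = rng.poisson(mu_home * g)
away = rng.poisson(mu_away * g)
```

The shared shock makes both scores overdispersed (variance mu + mu^2/a) and positively correlated, which is the feature a bivariate model exploits over two independent fits.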

15.
In this paper we consider the analysis of recall-based competing risks data. The chance of an individual recalling the exact time to an event depends on the time of occurrence of the event and the time of observation of the individual; in particular, it is assumed that the probability of recall depends on the time elapsed since the occurrence of the event. We consider likelihood-based inference for such data, constructing the likelihood function by incorporating information about the probability of recall, and carry out maximum likelihood estimation of the parameters. Simulation studies examine the performance of the estimators, and the proposed estimation procedure is applied to a real-life data set.

16.
Modelling udder infection data using copula models for quadruples
We study copula models for correlated infection times in the four udder quarters of dairy cows. Both a semi-parametric and a nonparametric approach are considered to estimate the marginal survival functions, taking into account the effect of a binary udder quarter level covariate. We use a two-stage estimation approach and we briefly discuss the asymptotic behaviour of the estimators obtained in the first and the second stage of the estimation. A pseudo-likelihood ratio test is used to select an appropriate copula from the power variance copula family that describes the association between the outcomes in a cluster. We propose a new bootstrap algorithm to obtain the p-value for this test. This bootstrap algorithm also provides estimates for the standard errors of the estimated parameters in the copula. The proposed methods are applied to the udder infection data. A small simulation study for a setting similar to the setting of the udder infection data gives evidence that the proposed method provides a valid approach to select an appropriate copula within the power variance copula family.
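The power variance copula family contains the Clayton copula as a special case, and Archimedean copulas of this kind can be sampled by the Marshall-Olkin frailty construction. A sketch for quadruples (one draw per udder quarter) with an illustrative dependence parameter:

```python
import numpy as np

rng = np.random.default_rng(4)
theta, n = 2.0, 20000
# Clayton copula via its gamma frailty: generator psi(t) = (1 + t)^(-1/theta)
# is the Laplace transform of Gamma(1/theta, 1)
v = rng.gamma(1.0 / theta, 1.0, n)             # shared frailty per cow
e = rng.exponential(1.0, (n, 4))               # one exponential per udder quarter
u = (1.0 + e / v[:, None]) ** (-1.0 / theta)   # four dependent Uniform(0,1) margins
```

Each row of u has uniform margins with Clayton dependence; the shared frailty v plays the same clustering role as the cow-level association the paper models, and Kendall's tau for this copula is theta/(theta+2).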

17.
Finite mixture methods are applied to bird band-recovery studies to allow for heterogeneity of survival. Birds are assumed to belong to one of finitely many groups, each of which has its own survival rate (or set of survival rates varying by time and/or age). The group to which a specific animal belongs is not known, so its survival probability is a random variable from a finite mixture. Heterogeneity is thus modelled as a latent effect. This gives a wide selection of likelihood-based models, which may be compared using likelihood ratio tests. These models are discussed with reference to real and simulated data, and compared with previous models.

18.
Summary. The paper develops mixture models for spatially indexed data. We confine attention to the case of finite, typically irregular, patterns of points or regions with prescribed spatial relationships, and to problems where it is only the weights in the mixture that vary from one location to another. Our specific focus is on Poisson-distributed data, and applications in disease mapping. We work in a Bayesian framework, with the Poisson parameters drawn from gamma priors, and an unknown number of components. We propose two alternative models for spatially dependent weights, based on transformations of autoregressive Gaussian processes: in one (the logistic normal model), the mixture component labels are exchangeable; in the other (the grouped continuous model), they are ordered. Reversible jump Markov chain Monte Carlo algorithms for posterior inference are developed. Finally, the performances of both of these formulations are examined on synthetic data and real data on mortality from a rare disease.

19.
This article proposes a new model for right-censored survival data with multi-level clustering based on the hierarchical Kendall copula model of Brechmann (2014) with Archimedean clusters. This model accommodates clusters of unequal size and multiple clustering levels, without imposing any structural conditions on the parameters or on the copulas used at various levels of the hierarchy. A step-wise estimation procedure is proposed and shown to yield consistent and asymptotically Gaussian estimates under mild regularity conditions. The model fitting is based on multiple imputation, given that the censoring rate increases with the level of the hierarchy. To check the model assumption of Archimedean dependence, a goodness-of-fit test is developed. The finite-sample performance of the proposed estimators and of the goodness-of-fit test is investigated through simulations. The new model is applied to data from the study of chronic granulomatous disease. The Canadian Journal of Statistics 47: 182–203; 2019 © 2019 Statistical Society of Canada


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号