Related Articles

20 related articles found.
1.
2.
Statistics for spatial functional data is an emerging field which combines methods of spatial statistics and functional data analysis to model spatially correlated functional data. Checking for spatial autocorrelation is an important step in the statistical analysis of spatial data, and several statistics have been proposed for this purpose; the test based on the Mantel statistic is widely known and used in this context. This paper proposes an application of this test to spatial functional data. Although we focus particularly on geostatistical functional data, that is, functional data observed over a region with spatial continuity, the proposed test can also be applied to functional data measured on a discrete set of areas of a region (areal functional data) by suitably defining the distance between areas. Two simulation studies show that the proposed test performs well. We illustrate the methodology by applying it to an agronomic data set.
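As a sketch of the idea, a Mantel test for geostatistical functional data can be run by comparing a matrix of geographic distances with a matrix of L2 distances between discretized curves, with significance assessed by permutation. The sites and curves below are toy data, not the paper's agronomic set:

```python
import numpy as np

def mantel_test(geo_dist, curve_dist, n_perm=999, seed=0):
    """One-sided permutation Mantel test for positive association
    between two symmetric distance matrices."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(geo_dist, k=1)   # upper-triangle entries only

    def corr(d):
        return np.corrcoef(geo_dist[iu], d[iu])[0, 1]

    r_obs = corr(curve_dist)
    n = geo_dist.shape[0]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)                 # relabel sites jointly in rows/cols
        if corr(curve_dist[np.ix_(p, p)]) >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Toy data: curves at 5 sites, each observed on 20 time points.
sites = np.random.default_rng(1).uniform(0, 10, size=(5, 2))
curves = np.random.default_rng(2).normal(size=(5, 20))
geo_dist = np.linalg.norm(sites[:, None] - sites[None, :], axis=-1)
curve_dist = np.linalg.norm(curves[:, None] - curves[None, :], axis=-1)  # L2 curve distance
r, pval = mantel_test(geo_dist, curve_dist)
```

Under spatial autocorrelation of the curves, nearby sites have similar curves, so the two distance matrices correlate positively and the permutation p-value becomes small.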

3.
Abstract.  In this paper we propose fast approximate methods for computing posterior marginals in spatial generalized linear mixed models. We consider the common geostatistical case with a high-dimensional latent spatial variable and observations at known registration sites. The methods of inference are deterministic, requiring no simulation. The first proposed approximation is fast to compute and is 'practically sufficient', meaning that results do not show any bias or dispersion effects that might affect decision making. Our second approximation, an improvement of the first, is 'practically exact', meaning that one would have to run MCMC simulations for very much longer than is typically done to detect any indication of error in the approximate results. For small-count data the approximations are slightly worse, but still very accurate. Our methods are limited to likelihood functions that give unimodal full conditionals for the latent variable. The methods help to expand the future scope of non-Gaussian geostatistical models, as illustrated by applications to model choice, outlier detection and sampling design. The approximations take seconds or minutes of CPU time, in sharp contrast to overnight MCMC runs for solving such problems.
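The deterministic-approximation idea can be illustrated in miniature with a Laplace approximation: fit a Gaussian to the posterior at its mode using the curvature there. The single Poisson observation with a Gaussian prior on its log-rate below is a toy stand-in for the high-dimensional latent field, not the authors' actual scheme:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy model: y ~ Poisson(exp(x)), prior x ~ N(0, tau^2).
y, tau = 7, 1.0

def neg_log_post(x):
    # Negative log-posterior up to a constant: -(y*x - e^x) + x^2 / (2 tau^2)
    return -(y * x - np.exp(x)) + 0.5 * x**2 / tau**2

mode = minimize_scalar(neg_log_post).x      # posterior mode of the log-rate
# Second derivative at the mode gives the Gaussian approximation's precision:
h = np.exp(mode) + 1.0 / tau**2
approx_sd = 1.0 / np.sqrt(h)                # approximate posterior s.d.
```

The mode lies between the prior mean 0 and the maximum-likelihood value log(7), reflecting the shrinkage a latent-Gaussian model induces; the unimodality requirement in the abstract is what guarantees this fit is meaningful.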

4.
In this paper we examine the consequences, for statistical analysis and interpretation, of the particulate nature of radioactive contamination of a nuclear weapons test site. We propose a probabilistic model which incorporates the particulate nature of the contamination and which is simple enough to be statistically fitted to the data. Parameter estimation involves the reconciliation and combination of measurements of (a) 59.5 keV gamma rays from americium-241, a decay product of plutonium-241, using a portable medium-resolution NaI detector on a regular survey grid at a test site, and (b) 59.5 keV radiation from soil samples obtained at grid points. The implications of the model for measurement of levels of contamination are considered.

5.
We describe a class of random field models for geostatistical count data based on Gaussian copulas. Unlike hierarchical Poisson models often used to describe this type of data, Gaussian copula models allow a more direct modelling of the marginal distributions and association structure of the count data. We study in detail the correlation structure of these random fields when the family of marginal distributions is either negative binomial or zero‐inflated Poisson; these represent two types of overdispersion often encountered in geostatistical count data. We also contrast the correlation structure of one of these Gaussian copula models with that of a hierarchical Poisson model having the same family of marginal distributions, and show that the former is more flexible than the latter in terms of range of feasible correlation, sensitivity to the mean function and modelling of isotropy. An exploratory analysis of a dataset of Japanese beetle larvae counts illustrates some of the findings. All of these investigations show that Gaussian copula models are useful alternatives to hierarchical Poisson models, especially for geostatistical count data that display substantial correlation and small overdispersion.
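A minimal sketch of simulating such a field: draw a correlated Gaussian vector, push it through the normal CDF to obtain correlated uniforms, and apply the inverse CDF of the desired margin. The transect layout, exponential correlogram and NB(5, 0.4) margins below are illustrative choices, not the paper's:

```python
import numpy as np
from scipy.stats import norm, nbinom

rng = np.random.default_rng(0)

# Gaussian copula field with negative binomial margins on a 1-D transect.
coords = np.linspace(0, 10, 25)
d = np.abs(coords[:, None] - coords[None, :])
Sigma = np.exp(-d / 2.0)                         # exponential correlogram, range 2
z = rng.multivariate_normal(np.zeros(25), Sigma) # latent Gaussian field
u = norm.cdf(z)                                  # spatially correlated uniforms
counts = nbinom.ppf(u, n=5, p=0.4).astype(int)   # NB(size=5, p=0.4) margins
```

The counts inherit spatial dependence from the latent Gaussian field while their marginal distribution is exactly negative binomial, which is the "direct modelling of margins" advantage the abstract describes.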

6.
In survival analysis, time-dependent covariates are usually present as longitudinal data collected periodically and measured with error. The longitudinal data can be assumed to follow a linear mixed effect model, and Cox regression models may be used to model survival events. The hazard rate of survival times depends on the underlying time-dependent covariate measured with error, which may be described by random effects. Most existing methods proposed for such models assume a parametric distribution for the random effects and specify a normally distributed error term for the linear mixed effect model. These assumptions may not always be valid in practice. In this article, we propose a new likelihood method for Cox regression models with error-contaminated time-dependent covariates. The proposed method does not require any parametric distribution assumption on random effects and random errors. Asymptotic properties for parameter estimators are provided. Simulation results show that under certain situations the proposed methods are more efficient than the existing methods.

7.
Important progress has been made with model averaging methods over the past decades. For spatial data, however, model averaging has seen little application. This article studies model averaging methods for the spatial geostatistical linear model. A spatial Mallows criterion is developed to choose weights for the model averaging estimator. The resulting estimator can achieve asymptotic optimality in terms of L2 loss. Simulation experiments reveal that our proposed estimator is superior to the model averaging estimator by the Mallows criterion developed for ordinary linear models [Hansen, 2007] and the model selection estimator using the corrected Akaike's information criterion developed for geostatistical linear models [Hoeting et al., 2006]. The Canadian Journal of Statistics 47: 336–351; 2019 © 2019 Statistical Society of Canada
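A toy version of Mallows-type model averaging for ordinary linear models (the paper's spatial criterion adjusts this for correlated errors): average the fitted values of nested candidate models with weights chosen on the simplex to minimize a Mallows criterion. All data and model sizes below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
y = X @ np.array([1.0, 0.5, 0.2, 0.0]) + rng.normal(size=n)

# Nested candidate models using the first 2, 3, and 4 columns of X.
hats = []
for k in (2, 3, 4):
    Xk = X[:, :k]
    hats.append(Xk @ np.linalg.solve(Xk.T @ Xk, Xk.T))  # hat matrix of model k

sigma2 = np.sum((y - hats[-1] @ y) ** 2) / (n - 4)      # error variance from largest model

def mallows(w):
    # C(w) = ||y - P(w) y||^2 + 2 sigma^2 tr(P(w)),  P(w) = sum_k w_k P_k
    P = sum(wi * H for wi, H in zip(w, hats))
    resid = y - P @ y
    return resid @ resid + 2 * sigma2 * np.trace(P)

# Crude grid search over the weight simplex.
best = min(
    ((w1, w2, 1 - w1 - w2)
     for w1 in np.linspace(0, 1, 21)
     for w2 in np.linspace(0, 1 - w1, 21)),
    key=mallows,
)
```

The effective "degrees of freedom" term 2·σ²·tr(P(w)) penalizes weight on larger models, mirroring how Mallows' Cp trades fit against complexity for a single model.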

8.
The completely random character of radioactive disintegration provides the basis of a strong justification for a Poisson linear model for single-photon emission computed tomography data, which can be used to produce reconstructions of isotope densities, whether by maximum likelihood or Bayesian methods. However, such a model requires the construction of a matrix of weights, which represent the mean rates of arrival at each detector of photons originating from each point within the body space. Two methods of constructing these weights are discussed, and reconstructions resulting from phantom and real data are presented.
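The Poisson linear model can be illustrated with a toy maximum-likelihood EM (MLEM) reconstruction, the standard multiplicative update for y ~ Poisson(A·λ); the 8×5 weight matrix below is random rather than built from detector geometry as the paper discusses:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy weight matrix A: mean arrival rates at 8 detectors from 5 source points.
A = rng.uniform(0.1, 1.0, size=(8, 5))
lam_true = np.array([2.0, 0.5, 3.0, 1.0, 0.2])
y = rng.poisson(A @ lam_true)               # Poisson counts at the detectors

# MLEM iterations for the Poisson linear model y ~ Poisson(A @ lam):
lam = np.ones(5)                            # nonnegative starting image
sens = A.sum(axis=0)                        # sensitivity of each source point
for _ in range(200):
    lam *= (A.T @ (y / (A @ lam))) / sens   # multiplicative EM update
```

The multiplicative form keeps the reconstruction nonnegative automatically, and each update preserves the total expected count, two reasons this update is standard for emission tomography.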

9.
Daniel Hohmann. Statistics, 2013, 47(2): 348–362
We consider a two-component location mixture model with symmetric components, one of which is assumed to be known while the other is unknown. We show identifiability under assumptions on the tails of the characteristic function of the true underlying mixture, and also construct asymptotically normal estimates. The model is an extension of the contamination model in Bordes et al. [Semiparametric estimation of a two-component mixture model when a component is known, Scand. J. Statist. 33 (2006), pp. 733–752], and also related to a location mixture of one symmetric density as in Bordes et al. [Semiparametric estimation of a two component mixture model, Ann. Statist. 34 (2006), pp. 1204–1232]. We show by simulation that estimating the additional location parameter leads to a slight loss of efficiency as compared with the contamination model.

11.
Radon is a natural radioactive gas known to be the main contributor to natural background radiation exposure and the second leading cause of lung cancer after smoking. Indoor radon concentration levels of 200 and 400 Bq/m3 are reference values suggested by the 90/143/Euratom recommendation, above which mitigation measures should be taken in new and old buildings, respectively, to reduce exposure to radon. Despite this international recommendation, Italy still does not have mandatory regulations or guidelines to deal with radon in dwellings. Monitoring surveys have been undertaken in a number of western European countries in order to assess the exposure of people to this radioactive gas and to identify radon-prone areas. However, such campaigns provide concentration values for each single dwelling included in the sample, while it is often necessary to provide measures of the pollutant concentration which refer to sub-areas of the region under study. This requires a realignment of the spatial data from the level at which they are collected (points) to the level at which they are needed (areas), known as the change of support problem. In this paper, we propose a methodology based on geostatistical simulations in order to solve this problem and to identify radon-prone areas which may be suggested for national guidelines.
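The simulation-based change of support can be sketched as follows: simulate many point-support realizations of a log-normal radon field, average each realization over a sub-area, and estimate the probability that the area-level mean exceeds the 200 Bq/m3 reference value. The grid, covariance model and log-normal parameters below are invented, and the field is simulated unconditionally rather than conditioned on survey data as in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Point-support grid over a 10 x 10 region (100 points).
grid = np.linspace(0, 10, 10)
xx, yy = np.meshgrid(grid, grid)
pts = np.column_stack([xx.ravel(), yy.ravel()])

D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
Sigma = np.exp(-D / 3.0)                         # exponential covariance on log scale
L = np.linalg.cholesky(Sigma + 1e-8 * np.eye(100))

# 500 realizations of a log-normal radon field (values in Bq/m3).
sims = np.exp(4.5 + 0.5 * (L @ rng.normal(size=(100, 500))))

area = pts[:, 0] < 5                             # sub-area: left half of the region
area_means = sims[area].mean(axis=0)             # block average per realization
p_exceed = np.mean(area_means > 200.0)           # P(area mean > 200 Bq/m3)
```

Averaging within each realization before thresholding is the essential step: applying the threshold to point values first would answer a different (point-support) question.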

12.
Simulated maximum likelihood estimates an analytically intractable likelihood function with an empirical average based on data simulated from a suitable importance sampling distribution. To use simulated maximum likelihood efficiently, the choice of the importance sampling distribution and the mechanism used to generate the simulated data are both crucial. In this paper we develop a new heuristic for an automated, multistage implementation of simulated maximum likelihood which, by adaptively updating the importance sampler, approximates the (locally) optimal importance sampling distribution. The proposed approach also allows for a convenient incorporation of quasi-Monte Carlo methods, which produce simulated data that can significantly increase the accuracy of the likelihood estimate over regular Monte Carlo methods. Several examples provide evidence for the potential efficiency gain of this new method. We apply the method to a computationally challenging geostatistical model of online retailing.
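A bare-bones version of simulated maximum likelihood for a Poisson model with one Gaussian random effect: the intractable marginal likelihood is replaced by an average of the conditional likelihood over simulated random effects, reusing the same draws across parameter values (common random numbers) so the grid search is stable. Here the proposal is simply the prior, with none of the adaptive updating the paper develops:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
y = 4                                  # a single observed count

# Model: y ~ Poisson(exp(mu + u)), random effect u ~ N(0, 1).
# Importance sampler = prior of u, so the weights reduce to p(y | u).
u = rng.normal(size=5000)              # common random numbers across mu values

def sim_loglik(mu):
    """Simulated log-likelihood: log of the Monte Carlo average of p(y | mu, u)."""
    return np.log(np.mean(poisson.pmf(y, np.exp(mu + u))))

# Crude grid maximization of the simulated likelihood.
grid = np.linspace(-1, 3, 81)
mu_hat = grid[np.argmax([sim_loglik(m) for m in grid])]
```

Replacing the plain normal draws with a quasi-Monte Carlo sequence mapped through the normal quantile function is the kind of accuracy improvement the abstract refers to.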

13.
Semiparametric methods provide estimates of finite parameter vectors without requiring that the complete data generation process belong to a finite-dimensional family. By avoiding bias from incorrect specification, such estimators gain robustness, although usually at the cost of decreased precision. The most familiar semiparametric method in econometrics is ordinary least squares, which estimates the parameters of a linear regression model without requiring that the distribution of the disturbances be in a finite-parameter family. The recent literature in econometric theory has extended semiparametric methods to a variety of non-linear models, including models appropriate for the analysis of censored duration data. Horowitz and Newman make perhaps the first empirical application of these methods, to data on employment duration. Their analysis provides insights into the practical problems of implementing these methods, and limited information on performance. Their data set, containing 226 male controls from the Denver income maintenance experiment in 1971–74, does not show any significant covariates (except race), even when a fully parametric model is assumed. Consequently, the authors are unable to reject the fully parametric model in a test against the alternative semiparametric estimators. This provides some negative, but tenuous, evidence that in practical applications the reduction in bias from semiparametric estimators is insufficient to offset the loss in precision. Larger samples, and data sets with strongly significant covariates, will be needed to settle this question.

14.
A typical model for geostatistical data when the observations are counts is the spatial generalised linear mixed model. We present a criterion for optimal sampling design under this framework which aims to minimise the error in the prediction of the underlying spatial random effects. The proposed criterion is derived by an asymptotic expansion of the conditional prediction variance. We argue that the mean of the spatial process needs to be taken into account in the construction of the predictive design, which we demonstrate through a simulation study where we compare the proposed criterion against the widely used space-filling design. Furthermore, our results are applied to the Norway precipitation data and the rhizoctonia disease data.

15.
We study a particular marked three-dimensional point process sample that represents a Laguerre tessellation. It comes from a polycrystalline sample of aluminium alloy material. The 'points' are the cell generators while the 'marks' are radius marks that control the size and shape of the tessellation cells. Our statistical mark correlation analyses show that the marks of the sample exhibit clear and plausible spatial correlation: the marks of generators close together tend to be small and similar, and the form of the correlation functions does not justify geostatistical marking. We show that a simplified modelling of tessellations by Laguerre tessellations with independent radius marks may lead to wrong results. When we started from the aluminium alloy data and generated random marks by random permutation, we obtained tessellations with characteristics quite different from the original ones. We observed similar behaviour for simulated Laguerre tessellations. This fact, which seems to be natural for the given data type, makes fitting of models to empirical Laguerre tessellations quite difficult: the generator points and radius marks have to be modelled simultaneously. This may imply that reconstruction methods are more efficient than point-process modelling if only samples of similar Laguerre tessellations are needed. We also found that literature recipes for bandwidth choice for estimating correlation functions should be used with care.

16.
Abstract.  We consider a two-component mixture model where one component distribution is known while the mixing proportion and the other component distribution are unknown. These kinds of models were first introduced in biology to study the differences in expression between genes. The various estimation methods proposed so far have all assumed that the unknown distribution belongs to a parametric family. In this paper, we show how this assumption can be relaxed. First, we note that generally the above model is not identifiable, but we show that under moment and symmetry conditions some 'almost everywhere' identifiability results can be obtained. Where such identifiability conditions are fulfilled we propose an estimation method for the unknown parameters which is shown to be strongly consistent under mild conditions. We discuss applications of our method to microarray data analysis and to the training data problem. We compare our method to the parametric approach using simulated data and, finally, we apply our method to real data from microarray experiments.

17.
Most data have a space and time label associated with them; data that are close together are usually more correlated than those that are far apart. Prediction (or forecasting) of a process at a particular label where there is no datum, from observed nearby data, is the subject of this article. One approach, known as geostatistics, is featured, from which linear methods of spatial prediction (kriging) will be considered. Brief reference is made to other linear/nonlinear, stochastic/deterministic predictors. The (linear) geostatistical method is applied to piezometric-head data around a potential nuclear-waste repository site.
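An ordinary kriging predictor can be sketched directly from the abstract's description: solve the kriging system, augmented with the unbiasedness constraint that the weights sum to one, for a known covariance model. The sites, data and exponential covariance below are illustrative, not the piezometric-head data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Observations at 6 sites; predict at a new location s0 by ordinary kriging.
sites = rng.uniform(0, 10, size=(6, 2))
z = rng.normal(size=6)
s0 = np.array([5.0, 5.0])

def cov(h, sill=1.0, range_par=3.0):
    return sill * np.exp(-h / range_par)     # exponential covariance (assumed known)

D = np.linalg.norm(sites[:, None] - sites[None, :], axis=-1)
d0 = np.linalg.norm(sites - s0, axis=1)

# Ordinary kriging system with the unbiasedness constraint sum(weights) = 1.
n = len(z)
K = np.ones((n + 1, n + 1))
K[-1, -1] = 0.0
K[:n, :n] = cov(D)
k0 = np.append(cov(d0), 1.0)
sol = np.linalg.solve(K, k0)
weights, lagrange = sol[:n], sol[n]

z_hat = weights @ z                              # kriging prediction at s0
krig_var = cov(0.0) - weights @ cov(d0) - lagrange  # kriging variance
```

The kriging variance depends only on the spatial configuration, not on the observed values, which is why kriging doubles as a tool for sampling design.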

18.
Summary. This paper presents a case study for the geostatistical characterization of the hydraulic conductivity field of a fluvial aquifer. An experimental distribution of hydraulic conductivity values was obtained from core samples, which were analysed in the laboratory with regard to their grain size distribution. The geostatistical analysis was performed on (i) hydraulic conductivity data derived from the grain size distributions applying an empirical relationship, (ii) binary transforms of the hydraulic conductivity data using thresholds, and (iii) categorical variables obtained by a K-means clustering of the grain size distributions. Although the available data base is rather small, the results show that the investigated aquifer is distinctly heterogeneous and anisotropic with respect to hydraulic conductivity and soil texture.

19.
Abstract.  In this paper, we propose a random varying-coefficient model for longitudinal data. This model is different from the standard varying-coefficient model in the sense that the time-varying coefficients are assumed to be subject-specific, and can be considered as realizations of stochastic processes. This modelling strategy allows us to employ powerful mixed-effects modelling techniques to efficiently incorporate the within-subject and between-subject variations in the estimators of time-varying coefficients. Thus, the subject-specific feature of longitudinal data is effectively considered in the proposed model. A backfitting algorithm is proposed to estimate the coefficient functions. Simulation studies show that the proposed estimation methods are more efficient in finite-sample performance compared with the standard local least squares method. An application to an AIDS clinical study is presented to illustrate the proposed methodologies.

20.