Similar Articles
1.
The autologistic model, first introduced by Besag, is a popular tool for analyzing binary data on spatial lattices. However, no previous investigation appears to have considered modeling binary data clustered in uncorrelated lattices. Owing to the spatial dependency of the responses, exact likelihood estimation of the parameters is not possible. To circumvent this difficulty, many studies have been designed to approximate the likelihood and the related partition function of the model. As a result, traditional and Bayesian estimation methods based on the likelihood function are often time-consuming and require heavy computations and recursive techniques. Some investigators have introduced and implemented data augmentation and latent variable models to reduce the computational complications in parameter estimation. In this work, spatially correlated binary data distributed over uncorrelated lattices were modeled using autologistic regression, a Bayesian inference was developed with the aid of data augmentation, and the proposed models were applied to caries experience of deciduous teeth.
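For orientation, the conditional probabilities of an autologistic regression model are usually written as below. This is a sketch of the standard Besag-type parameterization, with covariates x_i, regression coefficients beta, spatial-association parameter gamma and neighbourhood N(i); the abstract does not state the paper's exact specification.

\[
\Pr\big(y_i = 1 \mid y_{N(i)}\big) \;=\;
\frac{\exp\!\big(x_i^{\top}\beta + \gamma \sum_{j \in N(i)} y_j\big)}
     {1 + \exp\!\big(x_i^{\top}\beta + \gamma \sum_{j \in N(i)} y_j\big)}
\]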

2.
Motivated by the autologistic model for the analysis of spatial binary data on the two-dimensional lattice, we develop efficient computational methods for calculating the normalizing constant for models for discrete data defined on the cylinder and lattice. Because the normalizing constant is generally unknown analytically, statisticians have developed various ad hoc methods to overcome this difficulty. Our aim is to provide computationally and statistically efficient methods for calculating the normalizing constant so that efficient likelihood-based statistical methods are then available for inference. We extend the so-called transition method to find a feasible computational method of obtaining the normalizing constant for the cylinder boundary condition. To extend the result to the free-boundary condition on the lattice we use an efficient path sampling Markov chain Monte Carlo scheme. The methods are generally applicable to association patterns other than spatial, such as clustered binary data, and to variables taking three or more values described by, for example, Potts models.
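As an illustration of the transition (transfer-matrix) idea for the cylinder boundary condition, the sketch below computes the normalizing constant of a simple Ising-type autologistic parameterization, exp(alpha * sum_i y_i + theta * sum_{i~j} y_i y_j), on an r x c binary lattice wrapped into a cylinder. The function name, parameter names and the exact parameterization are assumptions for illustration, not the paper's implementation, and the approach is only feasible for small r because the transfer matrix has 2^r states.

import itertools
import numpy as np

def cylinder_log_normalizing_constant(r, c, alpha, theta):
    """Transfer-matrix ('transition') computation of log Z for an Ising-type
    autologistic model on an r x c binary lattice wrapped into a cylinder
    along the column direction.  Feasible only for small r (2^r states)."""
    states = list(itertools.product([0, 1], repeat=r))      # all 2^r column configurations

    def within(col):
        # abundance term plus vertical (within-column) interactions
        return alpha * sum(col) + theta * sum(col[i] * col[i + 1] for i in range(r - 1))

    def between(a, b):
        # horizontal interactions between two adjacent columns
        return theta * sum(u * v for u, v in zip(a, b))

    S = len(states)
    T = np.empty((S, S))
    for i, s in enumerate(states):
        for j, t in enumerate(states):
            # each column's within term is split evenly over its two horizontal
            # neighbours, so around the cylinder every column is counted exactly once
            T[i, j] = np.exp(0.5 * within(s) + 0.5 * within(t) + between(s, t))

    Z = np.trace(np.linalg.matrix_power(T, c))               # the trace closes the cylinder
    return float(np.log(Z))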

3.
New recursive algorithms for fast computation of the normalizing constant for the autologistic model on the lattice make feasible a sample-based maximum likelihood estimation (MLE) of the autologistic parameters. We demonstrate by sampling from 12 simulated 420×420 binary lattices with square lattice plots of size 4×4, …, 7×7 and sample sizes between 20 and 600. Sample-based results are compared with ‘benchmark’ MCMC estimates derived from all binary observations on a lattice. Sample-based estimates are, on average, biased systematically by 3%–7%, a bias that can be reduced by more than half by a set of calibrating equations. MLE estimates of sampling variances are large and usually conservative. The variance of the parameter of spatial association is about 2–10 times higher than the variance of the parameter of abundance. Sample distributions of estimates were mostly non-normal. We conclude that sample-based MLE estimation of the autologistic parameters with an appropriate sample size and post-estimation calibration will furnish fully acceptable estimates. Equations for predicting the expected sampling variance are given.

4.
A spatial lattice model for binary data is constructed from two spatial scales linked through conditional probabilities. A coarse grid of lattice locations is specified, and all remaining locations (which we call the background) capture fine-scale spatial dependence. Binary data on the coarse grid are modelled with an autologistic distribution, conditional on the binary process on the background. The background behaviour is captured through a hidden Gaussian process after a logit transformation on its Bernoulli success probabilities. The likelihood is then the product of the (conditional) autologistic probability distribution and the hidden Gaussian–Bernoulli process. The parameters of the new model come from both spatial scales. A series of simulations illustrates the spatial-dependence properties of the model and likelihood-based methods are used to estimate its parameters. Presence–absence data of corn borers in the roots of corn plants are used to illustrate how the model is fitted.

5.
In many medical studies patients are nested or clustered within doctor. With many explanatory variables, variable selection with clustered data can be challenging. We propose a method for variable selection based on random forest that addresses clustered data through stratified binary splits. Our motivating example involves the detection of orthopedic device components from a large pool of candidates, where each patient belongs to a surgeon. Simulations compare the performance of survival forests grown using the stratified logrank statistic to conventional and robust logrank statistics, as well as a method to select variables using a threshold value based on a variable's empirical null distribution. The stratified logrank test performs better than conventional and robust methods when data are generated to have cluster-specific effects and, when cluster sizes are sufficiently large, performs comparably to the splitting alternatives in the absence of cluster-specific effects. Thresholding was effective at distinguishing between important and unimportant variables.

6.
Simulation studies employed to study properties of estimators for parameters in population-average models for clustered or longitudinal data require suitable algorithms for data generation. Methods for generating correlated binary data that allow general specifications of the marginal mean and correlation structures are particularly useful. We compare an algorithm based on dichotomizing multi-normal variates to one based on a conditional linear family (CLF) of distributions [Qaqish BF. A family of multivariate binary distributions for simulating correlated binary variables with specified marginal means and correlations. Biometrika. 2003;90:455–463] with respect to range restrictions induced on correlations. Examples include generating longitudinal binary data and generating correlated binary data compatible with specified marginal means and covariance structures for bivariate, overdispersed binomial outcomes. Results show the CLF method gives a wider range of correlations for longitudinal data having autocorrelated within-subject associations, while the multivariate probit method gives a wider range of correlations for clustered data having exchangeable-type correlations. In the case of a decaying-product correlation structure, it is shown that the CLF method achieves the nonparametric limits on the range of correlations, which cannot be surpassed by any method.
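A minimal sketch of the dichotomized multivariate-normal (multivariate probit) approach mentioned above: latent normal variates are thresholded at quantiles chosen to match the target marginal means. For brevity the latent correlation matrix is taken as given rather than back-solved from target binary correlations, which is the step where the range restrictions discussed in the abstract arise; the function name and arguments are illustrative.

import numpy as np
from scipy.stats import norm

def binary_by_dichotomizing(n, p, latent_corr, seed=None):
    """Generate n correlated binary vectors by dichotomizing multivariate
    normal variates: component k is 1 whenever the latent normal falls
    below the p[k]-quantile, so the marginal means equal p."""
    rng = np.random.default_rng(seed)
    p = np.asarray(p, dtype=float)
    z = rng.multivariate_normal(np.zeros(len(p)), np.asarray(latent_corr), size=n)
    return (z < norm.ppf(p)).astype(int)

# Example: three binary variables with means 0.2, 0.5, 0.7 and latent correlation 0.4
# y = binary_by_dichotomizing(1000, [0.2, 0.5, 0.7], 0.4 * np.ones((3, 3)) + 0.6 * np.eye(3))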

7.
This paper first introduces a parametric model for the generation of stationary random correlated binary sequences. The parameters of the model include the probability that a pixel is a binary one pixel and the length of the structuring element which dilates the initially spatially uncorrelated sequence. The spatial statistics of such eroded, dilated, opened and closed correlated binary sequences are derived in terms of the spatial statistics of the input binary sequence. Understanding of such one-dimensional processing is a precondition for understanding what happens in the more interesting two-dimensional case.
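As a sketch of the generation mechanism described above (parameter and function names are illustrative): an uncorrelated Bernoulli(p) sequence is dilated with a length-L structuring element, which induces spatial correlation; the result can then be eroded, opened or closed in the same way to study the corresponding spatial statistics.

import numpy as np
from scipy.ndimage import binary_dilation

def dilated_binary_sequence(n, p, L, seed=None):
    """Stationary correlated binary sequence: dilate an initially
    spatially uncorrelated Bernoulli(p) sequence of length n with a
    flat structuring element of length L."""
    rng = np.random.default_rng(seed)
    x = rng.random(n) < p                                   # uncorrelated input sequence
    return binary_dilation(x, structure=np.ones(L, dtype=bool)).astype(int)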

8.
A general framework is proposed for modelling clustered mixed outcomes. A mixture of generalized linear models is used to describe the joint distribution of a set of underlying variables, and an arbitrary function relates the underlying variables to the observed outcomes. The model accommodates multilevel data structures, general covariate effects and distinct link functions and error distributions for each underlying variable. Within the framework proposed, novel models are developed for clustered multiple binary, unordered categorical and joint discrete and continuous outcomes. A Markov chain Monte Carlo sampling algorithm is described for estimating the posterior distributions of the parameters and latent variables. Because of the flexibility of the modelling framework and estimation procedure, extensions to ordered categorical outcomes and more complex data structures are straightforward. The methods are illustrated by using data from a reproductive toxicity study.

9.
In this paper, we describe an analysis for data collected on a three-dimensional spatial lattice with treatments applied at the horizontal lattice points. Spatial correlation is accounted for using a conditional autoregressive model. Observations are defined as neighbours only if they are at the same depth. This allows the corresponding variance components to vary by depth. We use the Markov chain Monte Carlo method with block updating, together with Krylov subspace methods, for efficient estimation of the model. The method is applicable to both regular and irregular horizontal lattices and hence to data collected at any set of horizontal sites for a set of depths or heights, for example, water column or soil profile data. The model for the three-dimensional data is applied to agricultural trial data for five separate days taken roughly six months apart in order to determine possible relationships over time. The purpose of the trial is to determine a form of cropping that leads to less moist soils in the root zone and beyond. We estimate moisture for each date, depth and treatment accounting for spatial correlation and determine relationships of these and other parameters over time.
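A sketch of how a conditional autoregressive (CAR) precision matrix can be assembled when observations are neighbours only at the same depth, so each depth gets its own variance component. The proper-CAR parameterization Q_d = (D_d - rho * W_d) / tau2_d and all names below are assumptions for illustration, not the paper's exact formulation.

import numpy as np
from scipy.linalg import block_diag

def car_precision_by_depth(W_list, tau2_list, rho):
    """Block-diagonal CAR precision over depths: W_list[d] is the adjacency
    matrix of sites at depth d (neighbours only within a depth), tau2_list[d]
    is the depth-specific conditional variance, and rho is the spatial
    dependence parameter of a proper CAR model."""
    blocks = []
    for W, tau2 in zip(W_list, tau2_list):
        W = np.asarray(W, dtype=float)
        D = np.diag(W.sum(axis=1))          # diagonal matrix of neighbour counts
        blocks.append((D - rho * W) / tau2)
    return block_diag(*blocks)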

10.
Modeling clustered categorical data based on extensions of generalized linear model theory has received much attention in recent years. The rapidly increasing number of approaches suitable for categorical data in which clusters are uncorrelated, but correlations exist within a cluster, has caused uncertainty among applied scientists as to their respective merits and demerits. By centering estimation on solving an unbiased estimating function for the mean parameters and on estimating the covariance parameters that describe within-cluster or among-cluster heterogeneity, many approaches can easily be related. This contribution describes a series of algorithms and their implementation in detail, based on a classification of inferential procedures for clustered data.

11.
Most problems in environmental studies are innately multivariate; in fact, at each spatial location more than one variable is usually measured. In geostatistical multivariate data analysis, where the aim is to predict the value of a random vector at a new site with no data, the cokriging method is used as the best linear unbiased predictor. In lattice data analysis, where the concern is almost exclusively the probability modeling of the data, only the auto-Gaussian model has been used for continuous multivariate data, and little work has been carried out for discrete multivariate data. In this paper, an auto-multinomial model is suggested for analyzing multivariate discrete lattice data. The proposed method is illustrated by a real example of air pollution in Tehran, Iran.

12.
Mixture separation for mixed-mode data
One possible approach to cluster analysis is the mixture maximum likelihood method, in which the data to be clustered are assumed to come from a finite mixture of populations. The method has been well developed, and much used, for the case of multivariate normal populations. Practical applications, however, often involve mixtures of categorical and continuous variables. Everitt (1988) and Everitt and Merette (1990) recently extended the normal model to deal with such data by incorporating the use of thresholds for the categorical variables. The computations involved in this model are so extensive, however, that it is only feasible for data containing very few categorical variables. In the present paper we consider an alternative model, known as the homogeneous Conditional Gaussian model in graphical modelling and as the location model in discriminant analysis. We extend this model to the finite mixture situation, obtain maximum likelihood estimates for the population parameters, and show that computation is feasible for an arbitrary number of variables. Some data sets are clustered by this method, and a small simulation study demonstrates characteristics of its performance.

13.
A spatial hidden Markov model (SHMM) is introduced to analyse the distribution of a species on an atlas, taking into account that false observations and false non-detections of the species can occur during the survey, blurring the true map of presence and absence of the species. The reconstruction of the true map is tackled as the restoration of a degraded pixel image, where the true map is an autologistic model, hidden behind the observed map, whose normalizing constant is efficiently computed by simulating an auxiliary map. The distribution of the species is explained under the Bayesian paradigm and Markov chain Monte Carlo (MCMC) algorithms are developed. We are interested in the spatial distribution of the bird species Greywing Francolin in southern Africa. Many climatic and land-use explanatory variables are also available: they are included in the SHMM and a subset of them is selected by the mutation operators within the MCMC algorithm.
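The auxiliary-map device works because the unknown normalizing constant cancels in the Metropolis-Hastings ratio. One standard construction of this kind is the exchange-type move (the paper's exact scheme may differ): write the autologistic likelihood as p(y | theta) = q(y | theta) / Z(theta), draw an auxiliary map x from the model at the proposed value theta', and accept with probability

\[
\alpha = \min\left\{1,\;
\frac{\pi(\theta')\,q(y \mid \theta')\,q(x \mid \theta)\,h(\theta \mid \theta')}
     {\pi(\theta)\,q(y \mid \theta)\,q(x \mid \theta')\,h(\theta' \mid \theta)}\right\},
\]

in which Z(theta) and Z(theta') cancel, so only the unnormalized q needs to be evaluated.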

14.
Data collected on a rectangular lattice are common in many areas, and the models used often make simplifying assumptions. These assumptions include axial symmetry in the spatial process and separability. Several methods for testing axial symmetry and separability are considered. The sample periodogram is shown to provide simple, satisfactory tests of both hypotheses, but tests for separability given axial symmetry have low power for small lattices.

15.
Both kriging and non-parametric regression smoothing can model a non-stationary regression function with spatially correlated errors. However, comparisons have mainly been based on ordinary kriging and smoothing with uncorrelated errors. Ordinary kriging attributes smoothness of the response to spatial autocorrelation, whereas non-parametric regression attributes trends to a smooth regression function. For spatial processes it is reasonable to suppose that the response is due to both trend and autocorrelation. This paper reviews methodology for non-parametric regression with autocorrelated errors, which is a natural compromise between the two methods. Re-analysis of the one-dimensional stationary spatial data of Laslett (1994) and a clearly non-stationary time series demonstrates the rather surprising result that, for these data, ordinary kriging outperforms more computationally intensive models, including both universal kriging and correlated splines, for spatial prediction. For estimating the regression function, non-parametric regression provides adaptive estimation, but the autocorrelation must be accounted for in selecting the smoothing parameter.

16.
Misclassifications in binary responses have long been a common problem in medical and health surveys. One way to handle misclassifications in clustered or longitudinal data is to incorporate the misclassification model through the generalized estimating equation (GEE) approach. However, existing methods are developed under a non-survey setting and cannot be used directly for complex survey data. We propose a pseudo-GEE method for the analysis of binary survey responses with misclassifications. We focus on cluster sampling and develop analysis strategies for analyzing binary survey responses with different forms of additional information for the misclassification process. The proposed methodology has several attractive features, including simultaneous inferences for both the response model and the association parameters. Finite sample performance of the proposed estimators is evaluated through simulation studies and an application using a real dataset from the Canadian Longitudinal Study on Aging.

17.
Generalized linear models are used to describe the dependence of data on explanatory variables when a binary outcome is subject to misclassification. Both probit and t-link regressions for misclassified binary data under Bayesian methodology are proposed. The computational difficulties are avoided by using data augmentation. The idea of a data augmentation framework (with two types of latent variables) is exploited to derive efficient Gibbs sampling and expectation–maximization algorithms. Moreover, this formulation allows the probit model to be obtained as a particular case of the t-link model. Simulation examples are presented to illustrate the model's performance in comparison with standard methods that do not consider misclassification. To show the potential of the proposed approaches, a real data problem arising in the study of hearing loss caused by exposure to occupational noise is analysed.
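For concreteness, the core data-augmentation step for probit regression (the Albert-Chib construction) is sketched below without the misclassification layer, which the paper adds through a second set of latent variables; a flat prior on the coefficients is assumed for brevity, and all names are illustrative.

import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(y, X, n_iter=2000, seed=None):
    """Data-augmentation Gibbs sampler for probit regression with a flat
    prior: alternately draw latent normals z truncated by the observed y,
    then draw beta from its Gaussian full conditional given z."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    beta = np.zeros(k)
    V = np.linalg.inv(X.T @ X)                       # posterior covariance under a flat prior
    L = np.linalg.cholesky(V)
    draws = np.empty((n_iter, k))
    for it in range(n_iter):
        mu = X @ beta
        # z_i | y_i, beta is truncated normal: positive if y_i = 1, non-positive otherwise
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
        # beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1})
        beta = V @ X.T @ z + L @ rng.standard_normal(k)
        draws[it] = beta
    return draws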

18.
Clustered binary data are common in medical research and can be fitted with a logistic regression model with random effects, which belongs to the wider class of generalized linear mixed models. Likelihood-based estimation of the model parameters often involves intractable integrals, which has led to several estimation methods designed to overcome this difficulty. The penalized quasi-likelihood (PQL) method is popular and computationally efficient in most cases. The expectation–maximization (EM) algorithm yields maximum-likelihood estimates but requires computing a possibly intractable integral in the E-step. Variants of the EM algorithm for evaluating the E-step are introduced: the Monte Carlo EM (MCEM) method approximates the expectation using Monte Carlo samples, while the modified EM (MEM) method approximates the expectation using Laplace's method. All of these methods involve several approximation steps, so the corresponding parameter estimates inevitably contain approximation errors (large or small). Understanding and quantifying this discrepancy theoretically is difficult because of the complexity of the approximations in each method, even when the focus is restricted to clustered binary data. As an alternative competing computational method, we also consider a non-parametric maximum-likelihood (NPML) method. We review and compare the PQL, MCEM, MEM and NPML methods for clustered binary data via a simulation study, which will be useful for researchers when choosing an estimation method for their analysis.
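To make the "intractable integration" concrete: for a random-intercept logistic model, one cluster's contribution is the integral of the conditional Bernoulli likelihood over the random intercept u ~ N(0, sigma_u^2). The sketch below approximates that integral by plain Monte Carlo, the same expectation that MCEM samples and that MEM replaces with a Laplace approximation; the function and argument names are illustrative assumptions.

import numpy as np

def mc_cluster_loglik(y, X, beta, sigma_u, n_mc=5000, seed=None):
    """Monte Carlo approximation of one cluster's marginal log-likelihood
    under a random-intercept logistic model: log of the average over draws
    of u of the conditional Bernoulli likelihood."""
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, sigma_u, size=n_mc)                 # draws of the random intercept
    eta = X @ beta                                          # fixed-effect linear predictor
    # Bernoulli log-likelihood of the cluster for each draw of u
    logp = np.array([np.sum(y * (eta + ui) - np.logaddexp(0.0, eta + ui)) for ui in u])
    m = logp.max()
    return m + np.log(np.mean(np.exp(logp - m)))            # log-mean-exp for numerical stability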

19.
I consider the design of multistage sampling schemes for epidemiologic studies involving latent variable models, with surrogate measurements of the latent variables on a subset of subjects. Such models arise in various situations: when detailed exposure measurements are combined with variables that can be used to assign exposures to unmeasured subjects; when biomarkers are obtained to assess an unobserved pathophysiologic process; or when additional information is to be obtained on confounding or modifying variables. In such situations, it may be possible to stratify the subsample on data available for all subjects in the main study, such as outcomes, exposure predictors, or geographic locations. Three circumstances where analytic calculations of the optimal design are possible are considered: (i) when all variables are binary; (ii) when all are normally distributed; and (iii) when the latent variable and its measurement are normally distributed, but the outcome is binary. In each of these cases, it is often possible to considerably improve the cost efficiency of the design by appropriate selection of the sampling fractions. More complex situations arise when the data are spatially distributed: the spatial correlation can be exploited to improve exposure assignment for unmeasured locations using available measurements on neighboring locations; some approaches for informative selection of the measurement sample using location and/or exposure predictor data are considered.
