Similar literature
20 similar documents found (search time: 46 ms)
1.
The choice of multi-state models is natural in the analysis of survival data, e.g., when the subjects in a study pass through different states such as ‘healthy’, ‘in a state of remission’, ‘relapse’ or ‘dead’ in a health-related quality of life study. Competing risks is another common instance of the use of multi-state models. Statistical inference for such event history data can be carried out by assuming a stochastic process model. Under such a setting, comparison of the event history data generated by two different treatments calls for testing equality of the corresponding transition probability matrices. The present paper proposes a solution to this class of problems by assuming a non-homogeneous Markov process to describe the transitions among the health states. A class of test statistics is derived for the comparison of \(k\) treatments by using a ‘weight process’. This class, in particular, yields generalisations of the log-rank, Gehan, Peto–Peto and Harrington–Fleming tests. For an intrinsic comparison of the treatments, the ‘leave-one-out’ jackknife method is employed for identifying influential observations. The proposed methods are then used to develop Kolmogorov–Smirnov type supremum tests corresponding to the various extended tests. To demonstrate the usefulness of the test procedures developed, a simulation study was carried out, and an application to the Trial V data provided by the International Breast Cancer Study Group is discussed.
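To make the ‘weight process’ idea concrete, here is a minimal Python sketch of the classical two-sample weighted log-rank statistic, the special case that these transition-matrix tests generalise; the function name and the scalar weight interface are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def weighted_logrank(time, event, group, weight=lambda n_risk: 1.0):
    """Two-sample weighted log-rank statistic, approximately N(0, 1) under H0.

    weight(n_risk) = 1 gives the log-rank test; weight(n_risk) = n_risk gives
    the Gehan test (survival-curve-based weights would give Peto-Peto).
    """
    time, event, group = map(np.asarray, (time, event, group))
    U, V = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()                        # subjects at risk just before t
        n1 = (at_risk & (group == 1)).sum()      # ... of which in group 1
        d = ((time == t) & (event == 1)).sum()   # events at t
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = weight(n)
        U += w * (d1 - d * n1 / n)               # observed minus expected, group 1
        if n > 1:                                # hypergeometric variance term
            V += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return U / np.sqrt(V)
```

Letting the weight depend on the risk set (or on an estimated survival curve) is exactly the role the abstract assigns to the ‘weight process’.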

2.
We develop a Markov chain Monte Carlo algorithm, based on ‘stochastic search variable selection’ (George and McCulloch, 1993), for identifying promising log-linear models. The method may be used in the analysis of multi-way contingency tables where the set of plausible models is very large.

3.
A continuous two-dimensional region is partitioned into a fine rectangular array of sites, or ‘pixels’, each pixel having a particular ‘colour’ belonging to a prescribed finite set. The true colouring of the region is unknown but, associated with each pixel, there is a possibly multivariate record which conveys imperfect information about its colour according to a known statistical model. The aim is to reconstruct the true scene, with the additional knowledge that pixels close together tend to have the same or similar colours. In this paper, it is assumed that the local characteristics of the true scene can be represented by a non-degenerate Markov random field. Such information can be combined with the records by Bayes' theorem and the true scene can be estimated according to standard criteria. However, the computational burden is enormous and the reconstruction may reflect undesirable large-scale properties of the random field. Thus, a simple, iterative method of reconstruction is proposed, which does not depend on these large-scale characteristics. The method is illustrated by computer simulations in which the original scene is not directly related to the assumed random field. Some complications, including parameter estimation, are discussed. Potential applications are mentioned briefly.
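The iterative reconstruction described here is, in essence, a pixel-wise local optimisation. The sketch below is a simplified binary version with a Gaussian record model and an Ising-style neighbour prior; the function, its parameters and the +/-1 colour coding are assumptions made for illustration, not the paper's algorithm.

```python
import numpy as np

def icm_denoise(records, beta=1.5, sigma=1.0, n_sweeps=10):
    """Restore a binary (+/-1) scene from noisy records y = x + N(0, sigma^2).

    Each sweep sets every pixel to the colour maximising its local conditional
    score: Gaussian log-likelihood plus a prior rewarding agreement with the
    four nearest neighbours (iterated-conditional-modes style updating).
    """
    x = np.where(records > 0, 1, -1)          # initialise at the noisy data
    rows, cols = records.shape
    for _ in range(n_sweeps):
        for i in range(rows):
            for j in range(cols):
                best, best_score = x[i, j], -np.inf
                for colour in (-1, 1):
                    # log-likelihood of the record given this colour
                    score = -(records[i, j] - colour) ** 2 / (2 * sigma ** 2)
                    # prior: count agreeing neighbours
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols:
                            score += beta * (colour == x[ni, nj])
                    if score > best_score:
                        best, best_score = colour, score
                x[i, j] = best
    return x
```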

4.
In this paper, we improve upon the Carlin and Chib Markov chain Monte Carlo algorithm that searches in model and parameter spaces. Our proposed algorithm attempts non-uniformly chosen ‘local’ moves in the model space and avoids some pitfalls of other existing algorithms. In a series of examples with linear and logistic regression, we report evidence that our proposed algorithm performs better than the existing algorithms.

5.
Probabilistic graphical models offer a powerful framework to account for the dependence structure between variables, which is represented as a graph. However, the dependence between variables may render inference tasks intractable. In this paper, we review techniques exploiting the graph structure for exact inference, borrowed from optimisation and computer science. They are built on the principle of variable elimination, whose complexity is dictated in an intricate way by the order in which variables are eliminated. The so-called treewidth of the graph characterises this algorithmic complexity: low-treewidth graphs can be processed efficiently. The first point that we illustrate is therefore the idea that, for inference in graphical models, the number of variables is not the limiting factor, and it is worth checking the width of several tree decompositions of the graph before resorting to approximate methods. We show how algorithms providing an upper bound on the treewidth can be exploited to derive a ‘good’ elimination order enabling exact inference. The second point is that, when the treewidth is too large, algorithms for approximate inference linked to the principle of variable elimination, such as loopy belief propagation and variational approaches, can lead to accurate results while being much less time consuming than Monte Carlo approaches. We illustrate the techniques reviewed in this article on benchmarks of inference problems in genetic linkage analysis and computer vision, as well as on hidden variable restoration in coupled hidden Markov models.
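As a concrete illustration of deriving a ‘good’ elimination order with an accompanying treewidth upper bound, here is a sketch of the greedy min-fill heuristic, one standard method of this kind; the use of networkx and the function names are assumptions for illustration.

```python
import itertools
import networkx as nx

def min_fill_order(graph):
    """Return (order, width_bound) for an undirected graph.

    At each step, eliminate the vertex whose neighbourhood needs the fewest
    fill-in edges to become a clique; the largest neighbourhood encountered
    gives an upper bound on the treewidth.
    """
    g = graph.copy()
    order, width = [], 0
    while g.nodes:
        def fill_cost(v):
            nbrs = list(g.neighbors(v))
            return sum(1 for a, b in itertools.combinations(nbrs, 2)
                       if not g.has_edge(a, b))
        v = min(g.nodes, key=fill_cost)
        nbrs = list(g.neighbors(v))
        width = max(width, len(nbrs))
        for a, b in itertools.combinations(nbrs, 2):
            g.add_edge(a, b)              # add the fill-in edges
        g.remove_node(v)
        order.append(v)
    return order, width
```

Running this on several candidate decompositions, as the abstract suggests, is cheap relative to the inference itself.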

6.
Markov Random Fields with Higher-order Interactions
Discrete-state Markov random fields on regular arrays have played a significant role in spatial statistics and image analysis. For example, they are used to represent objects against background in computer vision and pixel-based classification of a region into different crop types in remote sensing. Convenience has generally favoured formulations that involve only pairwise interactions. Such models are in themselves unrealistic and, although they often perform surprisingly well in tasks such as the restoration of degraded images, they are unsatisfactory for many other purposes. In this paper, we consider particular forms of Markov random fields that involve higher-order interactions and therefore are better able to represent the large-scale properties of typical spatial scenes. Interpretations of the parameters are given and realizations from a variety of models are produced via Markov chain Monte Carlo. Potential applications are illustrated in two examples. The first concerns Bayesian image analysis and confirms that pairwise-interaction priors may perform very poorly for image functionals such as number of objects, even when restoration apparently works well. The second example describes a model for a geological dataset and obtains maximum-likelihood parameter estimates using Markov chain Monte Carlo. Despite the complexity of the formulation, realizations of the estimated model suggest that the representation is quite realistic.

7.
There is an extensive literature on image models; perhaps the greatest emphasis has been on models based on Markov random fields. This article briefly discusses some general aspects of image modeling: what should be modeled (the scene rather than the image; both the ‘objects’ and the ‘clutter’); the importance of non-standard types of models; the oversimplification of conventional models; possible limitations on the effectiveness of realistic models; models in higher-dimensional domains; and general guidelines for selecting problems to which models can be effectively applied.

8.
In this paper, we apply Bayesian hypothesis testing and model selection to determine the order of a Markov chain. The criteria used are based on Bayes factors with noninformative priors. Comparisons with the commonly used AIC and BIC criteria are made through an example and computer simulations. The results show that the proposed method is better than the AIC and BIC criteria, especially for Markov chains with higher orders and larger state spaces.
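For reference, here is a minimal sketch of the BIC criterion for Markov-chain order selection, one of the baselines the paper compares against; the penalised-likelihood bookkeeping is standard, but the function and its defaults are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def bic_order(chain, max_order=3, n_states=None):
    """Pick a Markov-chain order by BIC (smaller is better).

    For order r with s states there are s^r * (s - 1) free transition
    parameters; the log-likelihood conditions on the first r observations.
    """
    states = sorted(set(chain)) if n_states is None else list(range(n_states))
    s = len(states)
    scores = {}
    for r in range(1, max_order + 1):
        # counts of (r+1)-grams and of their r-gram contexts
        counts = Counter(tuple(chain[i:i + r + 1]) for i in range(len(chain) - r))
        context = Counter(tuple(chain[i:i + r]) for i in range(len(chain) - r))
        loglik = sum(c * np.log(c / context[key[:-1]])
                     for key, c in counts.items())
        k = s ** r * (s - 1)                      # free parameters
        n = len(chain) - r                        # number of transitions
        scores[r] = -2 * loglik + k * np.log(n)
    return min(scores, key=scores.get), scores
```

AIC is the same computation with the penalty k * log(n) replaced by 2 * k.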

9.
An inherent property of objects in the world is that they only exist as meaningful entities over certain ranges of scale. If one aims to describe the structure of unknown real-world signals, then a multi-scale representation of data is of crucial importance. This paper gives a tutorial review of a special type of multi-scale representation, linear scale-space representation, which has been developed by the computer vision community to handle image structures at different scales in a consistent manner. The basic idea is to embed the original signal into a one-parameter family of gradually smoothed signals in which the fine-scale details are successively suppressed. Under rather general conditions on the type of computations that are to be performed at the first stages of visual processing, in what can be termed ‘the visual front-end’, it can be shown that the Gaussian kernel and its derivatives are singled out as the only possible smoothing kernels. The conditions that specify the Gaussian kernel are, basically, linearity and shift invariance, combined with different ways of formalizing the notion that structures at coarse scales should correspond to simplifications of corresponding structures at fine scales: they should not be accidental phenomena created by the smoothing method. Notably, several different ways of choosing scale-space axioms give rise to the same conclusion. The output from the scale-space representation can be used for a variety of early visual tasks; operations such as feature detection, feature classification and shape computation can be expressed directly in terms of (possibly non-linear) combinations of Gaussian derivatives at multiple scales. In this sense the scale-space representation can serve as a basis for early vision. During the last few decades, a number of other approaches to multi-scale representations have been developed, which are more or less related to scale-space theory, notably the theories of pyramids, wavelets and multigrid methods. Despite their qualitative differences, the increasing popularity of each of these approaches indicates that the crucial notion of scale is increasingly appreciated by the computer vision community and by researchers in other related fields. An interesting similarity to biological vision is that the scale-space operators closely resemble receptive field profiles registered in neurophysiological studies of the mammalian retina and visual cortex.
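The core construction, embedding a signal in a one-parameter family of Gaussian-smoothed versions, is compact enough to sketch; the scale parametrisation t = sigma^2 follows the usual scale-space convention, and the function itself is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space(image, scales=(1.0, 2.0, 4.0, 8.0)):
    """Return {t: L(.; t)}: the image smoothed with Gaussian kernels of
    standard deviation sqrt(t); fine-scale detail is successively suppressed
    as t grows."""
    image = np.asarray(image, dtype=float)
    return {t: gaussian_filter(image, sigma=np.sqrt(t)) for t in scales}
```

Gaussian-derivative responses at each scale (gaussian_filter accepts an order argument for derivatives) then supply the multi-scale feature detectors referred to above.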

10.
With reference to a specific dataset, we consider how to perform a flexible non-parametric Bayesian analysis of an inhomogeneous point pattern modelled by a Markov point process, with a location-dependent first-order term and pairwise interaction only. A priori we assume that the first-order term is a shot noise process, and that the interaction function for a pair of points depends only on the distance between the two points and is a piecewise linear function modelled by a marked Poisson process. Simulation of the resulting posterior distribution using a Metropolis–Hastings algorithm in the ‘conventional’ way involves evaluating ratios of unknown normalizing constants. We avoid this problem by applying a recently introduced auxiliary variable technique. In the present setting, the auxiliary variable used is an example of a partially ordered Markov point process model.

11.
We develop exact Markov chain Monte Carlo methods for discretely sampled, directly and indirectly observed diffusions. The qualification ‘exact’ refers to the fact that the invariant and limiting distribution of the Markov chains is the posterior distribution of the parameters, free of any discretization error. The class of processes to which our methods directly apply are those which can be simulated using the most general exact simulation algorithm available to date. The article introduces various methods to boost the performance of the basic scheme, including reparametrizations and auxiliary Poisson sampling. We contrast, both theoretically and empirically, how this new approach compares to irreducible high frequency imputation, which is the state-of-the-art alternative for the class of processes we consider, and we uncover intriguing connections. All methods discussed in the article are tested on typical examples.

12.
Graphical Markov models use undirected graphs (UDGs), acyclic directed graphs (ADGs), or (mixed) chain graphs to represent possible dependencies among random variables in a multivariate distribution. Whereas a UDG is uniquely determined by its associated Markov model, this is not true for ADGs or for general chain graphs (which include both UDGs and ADGs as special cases). This paper addresses three questions regarding the equivalence of graphical Markov models: when is a given chain graph Markov equivalent (1) to some UDG? (2) to some (at least one) ADG? (3) to some decomposable UDG? The answers are obtained by means of an extension of Frydenberg’s (1990) elegant graph-theoretic characterization of the Markov equivalence of chain graphs.
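For the special case of two ADGs, Markov equivalence has the well-known Verma–Pearl characterisation, same skeleton and same v-structures, sketched below; the chain-graph results of the paper are more general, and the helper names and networkx usage here are assumptions for illustration.

```python
import networkx as nx

def v_structures(dag):
    """Collect v-structures a -> c <- b with a and b non-adjacent."""
    vs = set()
    for c in dag.nodes:
        parents = list(dag.predecessors(c))
        for i in range(len(parents)):
            for j in range(i + 1, len(parents)):
                a, b = parents[i], parents[j]
                if not (dag.has_edge(a, b) or dag.has_edge(b, a)):
                    vs.add((frozenset((a, b)), c))
    return vs

def markov_equivalent(dag1, dag2):
    """Verma-Pearl test: same skeleton and same v-structures."""
    skeleton = lambda d: {frozenset(e) for e in d.to_undirected().edges}
    return (set(dag1.nodes) == set(dag2.nodes)
            and skeleton(dag1) == skeleton(dag2)
            and v_structures(dag1) == v_structures(dag2))
```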

13.
The use of the concept of ‘direct’ versus ‘indirect’ causal effects is common, not only in statistics but also in many areas of social and economic sciences. The related terms of ‘biomarkers’ and ‘surrogates’ are common in pharmacological and biomedical sciences. Sometimes this concept is represented by graphical displays of various kinds. The view here is that there is a great deal of imprecise discussion surrounding this topic and, moreover, that the most straightforward way to clarify the situation is by using potential outcomes to define causal effects. In particular, I suggest that the use of principal stratification is key to understanding the meaning of direct and indirect causal effects. A current study of anthrax vaccine will be used to illustrate ideas.

14.
Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. An initial investigation into the theoretical and empirical properties of this class of methods is presented; it lends some support to the use of biased weight estimates, which in some cases we observe to be advantageous, but we advocate caution in their use.

15.
We show how to make inferences about a finite population proportion using data from a possibly biased sample. In the absence of any selection bias or survey weights, a simple ignorable selection model, which assumes that the binary responses are independent and identically distributed Bernoulli random variables, is not unreasonable. However, this ignorable selection model is inappropriate when there is a selection bias in the sample. We assume that the survey weights (or their reciprocals, which we call ‘selection’ probabilities) are available, but that there is no simple relation between the binary responses and the selection probabilities. To capture the selection bias, we assume that there is some correlation between the binary responses and the selection probabilities (e.g., there may be a somewhat higher/lower proportion of positive responses among the sampled units than among the nonsampled units). We use a Bayesian nonignorable selection model to accommodate the selection mechanism, and Markov chain Monte Carlo methods to fit it. We illustrate our method using numerical examples obtained from NHIS 1995 data.

16.
Econometricians have generally used the term ‘methodology’ as synonymous with ‘methods’ and, consequently, the field of econometric methodology has been dominated by the discussion of econometric techniques. The purpose of this paper is to present an alternative perspective on econometric methodology by relating it to the more general field of economic methodology, particularly through the use of concepts drawn from the philosophy of science. Definitional and conceptual issues surrounding the term ‘methodology’ are clarified. Three methodologies, representing abstractions from the actual approaches found within econometrics, are identified. First, an ‘a priorist’ methodology, which tends to accord axiomatic status to economic theory, is outlined, and the philosophical foundations of this approach are explored with reference to the interpretive strand within the philosophy of the social sciences. The second approach is an ‘instrumentalist’ one, emphasising prediction as the primary goal of econometrics, and the third methodology is ‘falsificationism’, which attempts to test economic theories. These are critically evaluated by introducing relevant issues from the philosophy of science, so that the taxonomy presented here can serve as a framework for future discussions of econometric methodology.

17.
While standard techniques are available for the analysis of time-series (longitudinal) data, and for ordinal (rating) data, not much is available for the combination of the two, at least in a readily usable form. However, this data type is commonplace in the natural and health sciences, where repeated ratings are recorded on the same subject. To analyse these data, this paper considers a transition (Markov) model in which the rating of a subject at one time depends explicitly on the observed rating at the previous point of time, by incorporating the previous rating as a predictor variable. Complications arise in adequately handling the data at the first observation (t=1), as there is no prior observation to use as a predictor. To overcome this, the existence of a rating at time t=0 is postulated; it is treated as ‘missing data’, and the expectation–maximisation algorithm is used to accommodate this. The particular benefits of this method are shown for shorter time series.
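A stripped-down sketch of the t=0 device: postulate a latent rating before the first observation, treat it as missing, and run EM. This simplification uses a plain transition matrix rather than the paper's predictor-based regression, and all names and defaults are assumptions.

```python
import numpy as np

def em_transition(sequences, n_states, n_iter=50):
    """EM for a first-order transition model with a latent rating at t=0.

    E-step: posterior over the unobserved t=0 rating given the first observed
    rating. M-step: re-estimate the t=0 distribution and transition matrix
    from expected counts.
    """
    pi0 = np.full(n_states, 1.0 / n_states)            # latent t=0 distribution
    P = np.full((n_states, n_states), 1.0 / n_states)  # transition matrix
    for _ in range(n_iter):
        counts = np.zeros((n_states, n_states))
        pi_acc = np.zeros(n_states)
        for seq in sequences:
            post = pi0 * P[:, seq[0]]      # E-step for the latent t=0 rating
            post /= post.sum()
            pi_acc += post
            counts[:, seq[0]] += post      # expected transition t=0 -> t=1
            for a, b in zip(seq[:-1], seq[1:]):
                counts[a, b] += 1.0        # observed transitions
        pi0 = pi_acc / pi_acc.sum()
        counts += 1e-6                     # keep rows valid if a state is unseen
        P = counts / counts.sum(axis=1, keepdims=True)
    return pi0, P
```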

18.
We introduce a flexible spatial point process model for spatial point patterns exhibiting linear structures, without incorporating a latent line process. The model is given by an underlying sequential point process model. Under this model, the points can be of one of three types: a ‘background point’, an ‘independent cluster point’ or a ‘dependent cluster point’. The background and independent cluster points are thought to exhibit ‘complete spatial randomness’, whereas the dependent cluster points are likely to occur close to previous cluster points. We demonstrate the flexibility of the model for producing point patterns with linear structures, and propose to use the model as the likelihood in a Bayesian setting when analysing a spatial point pattern exhibiting linear structures. We illustrate this methodology by analysing two spatial point pattern datasets (locations of Bronze Age graves in Denmark and locations of mountain tops in Spain).

19.
We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system. The assignment of an energy function in the physical system determines its Gibbs distribution. Because of the equivalence between Gibbs distributions and Markov random fields (MRFs), this assignment also determines an MRF image model. The energy function is a more convenient and natural mechanism for embodying picture attributes than are the local characteristics of the MRF. For a range of degradation mechanisms, including blurring, non-linear deformations, and multiplicative or additive noise, the posterior distribution is an MRF with a structure akin to the image model. By the analogy, the posterior distribution defines another (imaginary) physical system. Gradual temperature reduction in the physical system isolates low-energy states (‘annealing’), or, what is the same thing, the most probable states under the Gibbs distribution. The analogous operation under the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations. The result is a highly parallel ‘relaxation’ algorithm for MAP estimation. We establish convergence properties of the algorithm and we experiment with some simple pictures, for which good restorations are obtained at low signal-to-noise ratios.
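A minimal sketch of the annealed Gibbs ('relaxation') idea for a binary scene with additive Gaussian noise; the logarithmic cooling schedule matches the spirit of the convergence theory, but the energy form and parameter choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def annealed_gibbs(y, beta=1.0, sigma=1.0, sweeps=200, T0=4.0):
    """MAP-style restoration of a +/-1 scene from y = x + N(0, sigma^2),
    by Gibbs sampling from the posterior at a slowly decreasing temperature.

    Posterior energy: sum (y - x)^2 / (2 sigma^2) - beta * sum_nbrs x_i x_j.
    """
    x = np.where(y > 0, 1, -1)
    rows, cols = y.shape
    for s in range(sweeps):
        T = T0 / np.log(2 + s)                 # logarithmic cooling schedule
        for i in range(rows):
            for j in range(cols):
                nb = sum(x[ni, nj]
                         for ni, nj in ((i - 1, j), (i + 1, j),
                                        (i, j - 1), (i, j + 1))
                         if 0 <= ni < rows and 0 <= nj < cols)
                # energy difference between colours +1 and -1 at this pixel
                dE = ((y[i, j] - 1) ** 2 - (y[i, j] + 1) ** 2) \
                     / (2 * sigma ** 2) - 2 * beta * nb
                p_plus = 1.0 / (1.0 + np.exp(np.clip(dE / T, -50, 50)))
                x[i, j] = 1 if rng.random() < p_plus else -1
    return x
```

As T falls, the conditional draws concentrate on the locally most probable colour, so the sampler behaves increasingly like a deterministic descent toward the MAP estimate.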

20.
In proteomics, identification of proteins from complex mixtures of proteins extracted from biological samples is an important problem. Among the experimental technologies, mass spectrometry (MS) is the most popular one. Protein identification from MS data typically relies on a ‘two-step’ procedure: peptides are identified first, and a separate protein identification procedure follows. In this setup, the interdependence of peptides and proteins is neglected, resulting in relatively inaccurate protein identification. In this article, we propose a Markov chain Monte Carlo based Bayesian hierarchical model, a first of its kind in protein identification, which integrates the two steps and performs joint analysis of proteins and peptides using posterior probabilities. We remove the assumption of independence of proteins by assigning clustering group priors to the proteins, based on the assumption that proteins sharing the same biological pathway are likely to be present or absent together and are correlated. Because the complete conditionals of the proposed joint model are tractable, we propose and implement a Gibbs sampling scheme for full posterior inference that provides the estimates and statistical uncertainties of all relevant parameters. The model has better operational characteristics compared to two existing ‘one-step’ procedures on a range of simulation settings as well as on two well-studied datasets.
