Similar Literature
1.
The authors consider the problem of searching for activation in brain images obtained from functional magnetic resonance imaging and the corresponding functional signal detection problem. They develop a Bayesian procedure to detect signals existing within noisy images when the image is modeled as a scale space random field. Their procedure is based on the Radon-Nikodym derivative, which is used as the Bayes factor for assessing the point null hypothesis of no signal. They apply their method to data from the Montreal Neurological Institute.
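As a rough illustration of the general idea (not the authors' exact procedure), the following sketch smooths a noisy image at several Gaussian scales and, at each pixel, computes a Gaussian Bayes factor for the point null of no signal against a signal amplitude drawn from N(0, tau^2). The test image, noise level, scales and tau are all illustrative assumptions.

```python
# A minimal sketch, assuming a Gaussian signal-plus-noise model: scan a noisy
# image over a few smoothing scales and compute, per pixel, a Bayes factor for
# H0 "no signal" vs H1 "signal amplitude ~ N(0, tau^2)".
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import norm

rng = np.random.default_rng(0)
n, sigma, tau = 128, 1.0, 3.0
truth = np.zeros((n, n))
truth[40:48, 60:68] = 2.0                       # a small square "activation"
img = truth + sigma * rng.standard_normal((n, n))

best_log_bf = np.full((n, n), -np.inf)
for s in (1.0, 2.0, 4.0):                       # scale-space levels
    smoothed = gaussian_filter(img, s)
    impulse = np.zeros((n, n))
    impulse[n // 2, n // 2] = 1.0
    w2 = (gaussian_filter(impulse, s) ** 2).sum()  # sum of squared kernel weights
    sd0 = sigma * np.sqrt(w2)                   # sd of smoothed pure noise
    sd1 = np.sqrt(sd0 ** 2 + tau ** 2)          # marginal sd under H1
    log_bf = norm.logpdf(smoothed, 0, sd1) - norm.logpdf(smoothed, 0, sd0)
    best_log_bf = np.maximum(best_log_bf, log_bf)  # best scale per pixel

print("pixels with log Bayes factor > 3:", int((best_log_bf > 3).sum()))
```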

2.
We consider the problem of estimating a collection of integrals with respect to an unknown finite measure μ from noisy observations of some of the integrals. A new method to carry out Bayesian inference for the integrals is proposed. We use a Dirichlet or Gamma process as a prior for μ, and construct an approximation to the posterior distribution of the integrals using the sampling importance resampling algorithm and samples from a new multidimensional version of a Markov chain due to Feigin and Tweedie. We prove that the Markov chain is positive Harris recurrent, and that the approximating distribution converges weakly to the posterior as the sample size increases, under a mild integrability condition. Applications to polymer chemistry and mathematical finance are given.
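The paper's chain is multidimensional; as a minimal sketch, here is the classical one-dimensional version, which exploits the distributional identity M = W·Y + (1−W)·M for the mean M = ∫x dμ(x) of a Dirichlet process μ ~ DP(c, P0), with W ~ Beta(1, c) and Y ~ P0. The base measure P0 = N(0,1) and mass c = 2 are illustrative assumptions.

```python
# A minimal one-dimensional sketch of a Feigin-Tweedie-type chain whose
# stationary law is the distribution of the Dirichlet-process mean.
import numpy as np

rng = np.random.default_rng(1)
c, n_iter, burn = 2.0, 200_000, 1_000
x, draws = 0.0, []
for t in range(n_iter):
    w = rng.beta(1.0, c)           # stick-breaking weight
    y = rng.standard_normal()      # draw from the base measure P0
    x = w * y + (1.0 - w) * x      # one step of the chain
    if t >= burn:
        draws.append(x)

draws = np.asarray(draws)
# E[M] is the mean of P0 (= 0); Var(M) should be near 1/(c+1) ~= 0.33.
print(f"mean ~= {draws.mean():.3f}, variance ~= {draws.var():.3f}")
```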

3.
Stochastic kinetic models are often used to describe complex biological processes. Typically these models are analytically intractable and have unknown parameters which need to be estimated from observed data. Ideally we would have measurements on all interacting chemical species in the process, observed continuously in time. In practice, however, measurements are taken at only relatively few time points. In some situations only very limited observation of the process is available, for example settings in which experimenters can observe only noisy measurements of the proportion of cells that are alive. This makes the inference task even more problematic. We consider a range of data-poor scenarios and investigate the performance of various computationally intensive Bayesian algorithms in determining the posterior distribution using data on proportions from a simple birth-death process.
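A generic illustration of one such data-poor scenario (not the paper's specific algorithms): a linear birth-death process is simulated by the Gillespie algorithm, observed only through binomially noisy proportions of cells still alive, and the posterior is approximated by ABC rejection sampling. All rates, observation times and tolerances are assumptions.

```python
# A minimal ABC sketch for a birth-death process observed via noisy proportions.
import numpy as np

rng = np.random.default_rng(2)
obs_times = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

def simulate(lam, delta, x0=20):
    """Gillespie simulation; returns proportion alive at obs_times."""
    t, alive, dead, out = 0.0, x0, 0, []
    times = list(obs_times)
    while times:
        rate = (lam + delta) * alive
        t_next = t + (rng.exponential(1.0 / rate) if rate > 0 else np.inf)
        while times and times[0] <= t_next:       # record state at obs times
            times.pop(0)
            out.append(alive / (alive + dead))
        if not times:
            break
        t = t_next
        if rng.random() < lam / (lam + delta):
            alive += 1                             # birth
        else:
            alive -= 1                             # death
            dead += 1
    return np.array(out)

y = rng.binomial(30, simulate(0.3, 0.4)) / 30.0    # noisy observed proportions

accepted = []
for _ in range(10_000):
    lam, delta = rng.uniform(0, 1, 2)              # uniform priors
    pseudo = rng.binomial(30, simulate(lam, delta)) / 30.0
    if np.abs(pseudo - y).mean() < 0.1:            # ABC tolerance
        accepted.append((lam, delta))

post = np.array(accepted)
print(len(post), "accepted; posterior mean (lam, delta):", post.mean(axis=0).round(2))
```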

4.
We consider Bayesian analysis of a class of multiple changepoint models. While there are a variety of efficient ways to analyse these models if the parameters associated with each segment are independent, there are few general approaches for models where the parameters are dependent. Under the assumption that the dependence is Markov, we propose an efficient online algorithm for sampling from an approximation to the posterior distribution of the number and position of the changepoints. In a simulation study, we show that the error introduced by the approximation is negligible. We illustrate the power of our approach by fitting piecewise polynomial models to data, under a model which allows for either continuity or discontinuity of the underlying curve at each changepoint. This method is competitive with, or outperforms, other methods for inferring curves from noisy data, and, uniquely, it allows for inference on the locations of discontinuities in the underlying curve.
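The paper's algorithm handles Markov dependence between segment parameters; as a simpler, standard illustration of online changepoint recursion, here is Bayesian online changepoint detection in the Adams-MacKay style for a piecewise-constant Gaussian mean with independent segments. The hazard rate, prior scale and simulated data are assumptions for the demo.

```python
# A minimal run-length recursion sketch (independent segments, known noise var).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
sigma2, kappa2, hazard = 1.0, 10.0, 1.0 / 100.0

T = len(y)
R = np.zeros((T + 1, T + 1))
R[0, 0] = 1.0                                    # run-length posterior
sums, counts = np.zeros(T + 1), np.zeros(T + 1)
for t, yt in enumerate(y):
    n, s = counts[: t + 1], sums[: t + 1]
    post_var = kappa2 * sigma2 / (n * kappa2 + sigma2)
    post_mean = kappa2 * s / (n * kappa2 + sigma2)
    pred = norm.pdf(yt, post_mean, np.sqrt(post_var + sigma2))
    R[t + 1, 1 : t + 2] = R[t, : t + 1] * pred * (1 - hazard)  # run grows
    R[t + 1, 0] = (R[t, : t + 1] * pred * hazard).sum()        # changepoint
    R[t + 1] /= R[t + 1].sum()
    sums[1 : t + 2] = sums[: t + 1] + yt         # shift sufficient statistics
    counts[1 : t + 2] = counts[: t + 1] + 1
    sums[0] = counts[0] = 0.0

print("most probable run length at t=200:", R[T].argmax())  # ~100
```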

5.
We study a Bayesian approach to recovering the initial condition for the heat equation from noisy observations of the solution at a later time. We consider a class of prior distributions indexed by a parameter quantifying “smoothness” and show that the corresponding posterior distributions contract around the true parameter at a rate that depends on the smoothness of the true initial condition and the smoothness and scale of the prior. Correct combinations of these characteristics lead to the optimal minimax rate. One type of prior leads to a rate-adaptive Bayesian procedure. The frequentist coverage of credible sets is shown to depend on the combination of the prior and true parameter as well, with smoother priors leading to zero coverage and rougher priors to (extremely) conservative results. In the latter case, credible sets are much larger than frequentist confidence sets, in that the ratio of diameters diverges to infinity. The results are numerically illustrated by a simulated data example.
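A minimal sketch of the conjugate computation behind such an analysis: in the eigenbasis e_k(x) = √2 sin(kπx) on [0,1], the heat semigroup multiplies the k-th coefficient of the initial condition by κ_k = exp(−k²π²T), and with white-noise observations and a Gaussian prior μ_k ~ N(0, k^(−1−2α)) the posterior is Gaussian coordinatewise. The values of T, α, n and the "true" initial condition below are illustrative assumptions.

```python
# A minimal spectral sketch of the Gaussian conjugate update for this inverse problem.
import numpy as np

rng = np.random.default_rng(4)
K, T, alpha, n = 200, 0.05, 1.0, 10_000
k = np.arange(1, K + 1)
mu_true = k ** -1.5 * np.sin(k)                 # a smooth true initial condition
kappa = np.exp(-(k ** 2) * np.pi ** 2 * T)      # heat semigroup multipliers
Y = kappa * mu_true + rng.standard_normal(K) / np.sqrt(n)

lam = k ** (-1.0 - 2.0 * alpha)                 # prior variances
post_var = 1.0 / (1.0 / lam + n * kappa ** 2)   # conjugate Gaussian update
post_mean = post_var * n * kappa * Y

print(f"L2 error of posterior mean: {np.sqrt(((post_mean - mu_true) ** 2).sum()):.4f}")
print(f"total posterior spread:     {np.sqrt(post_var.sum()):.4f}")
```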

6.
Clustering algorithms are important methods widely used in mining data streams because of their ability to deal with infinite data flows. Although these algorithms perform well at mining latent relationships in data streams, most of them suffer from a loss of cluster purity and become unstable when the input data streams contain too many noisy variables. In this article, we propose a clustering algorithm for data streams with noisy variables. Simulation results show that our proposed method improves on previous work by adding a variable-selection step as a component of the clustering algorithm. The results of two experiments indicate that clustering data streams with a variable-selection step is more stable and yields better purity than clustering without one. A further experiment on the KDD-CUP99 dataset also shows that our algorithm generates more stable results.
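A generic illustration of the idea (not the paper's specific algorithm): process the stream in chunks, score each variable by how much clustering structure it carries, and re-cluster each chunk using only the selected variables. The simulated stream, chunk size and variance-ratio score are all assumptions.

```python
# A minimal sketch: chunked stream clustering with a variable-selection step.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(5)

def stream_chunks(n_chunks=20, chunk=500, informative=3, noisy=7):
    for _ in range(n_chunks):
        z = rng.integers(0, 2, chunk)                        # two latent clusters
        x_inf = z[:, None] * 4.0 + rng.standard_normal((chunk, informative))
        x_noise = rng.standard_normal((chunk, noisy))        # pure noise variables
        yield np.hstack([x_inf, x_noise])

def variable_scores(x, labels):
    """Per-variable between-cluster variance divided by total variance."""
    overall, between = x.mean(axis=0), np.zeros(x.shape[1])
    for c in np.unique(labels):
        xc = x[labels == c]
        between += len(xc) * (xc.mean(axis=0) - overall) ** 2
    return between / (len(x) * x.var(axis=0))

keep, selections = 3, []
for x in stream_chunks():
    pilot = MiniBatchKMeans(n_clusters=2, n_init=3).fit(x)   # pilot on all vars
    sel = set(np.argsort(variable_scores(x, pilot.labels_))[-keep:])
    selections.append(sel)
    MiniBatchKMeans(n_clusters=2, n_init=3).fit(x[:, sorted(sel)])  # final clustering

print("chunks selecting exactly the informative variables:",
      sum(s == {0, 1, 2} for s in selections), "/", len(selections))
```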

7.
周聪, 张宗新. 《统计研究》(Statistical Research), 2021, 38(6): 86-101
How idiosyncratic risk transmits to the bond market bears directly on the logic of bond pricing and on guarding against systemic financial risk. Using data on exchange-traded corporate bonds, this paper starts from investors' information-mining behaviour and irrational trading behaviour to study the transmission effect, and the transmission mechanism, of bond idiosyncratic risk on credit spreads; it explores, from a default perspective, why idiosyncratic risk produces this transmission effect, and it analyses the heterogeneity of investors' reactions across different types of bonds. The findings show that idiosyncratic risk affects credit spreads through both an information-mining mechanism and a noise-trading mechanism, with the noise-trading mechanism dominating; default events induce additional noise trading and are an important environmental factor behind the transmission effect; an issuer's listed-equity or state-owned background reduces the information asymmetry investors face and effectively dampens the noise-trading mechanism, whereas low ratings or short maturities trigger investor sell-offs and thereby amplify it.

8.
This paper concerns the problem of reconstructing images from noisy data by means of Bayesian classification methods. In Klein and Press (1992), the authors presented a method for reconstructing images called Adaptive Bayesian Classification (ABC). The ABC procedure was shown to perform very well in simulation experiments. The procedure is multistaged; moreover, it involves selecting a prior at Stage n that is the posterior at Stage n-1. In this paper the authors show that ABC can be improved upon for some problems by modifying the way the prior is taken at each stage. The new proposal is to take the prior for the pixel label at each stage as proportional to the number of pixels with that label in a small neighborhood of the pixel. The ABC procedure with a locally proportional prior (ABC/LPP) tends to improve upon ABC for some problems because the prior in the iterative portion of ABC/LPP is contextual, while that in ABC is non-contextual.
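A minimal sketch of the locally proportional prior idea on a two-class Gaussian image: iterate pixel classification, at each stage taking the prior for a pixel's label to be the proportion of that label in a small neighbourhood at the previous stage. The class means, noise level and neighbourhood size are assumptions, and this is a simplified stand-in, not the full ABC/LPP procedure.

```python
# A minimal iterated-classification sketch with a locally proportional prior.
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import norm

rng = np.random.default_rng(6)
n, mu0, mu1, sigma = 96, 0.0, 1.0, 0.8
truth = np.zeros((n, n), dtype=int)
truth[24:72, 24:72] = 1
y = np.where(truth == 1, mu1, mu0) + sigma * rng.standard_normal((n, n))

# Stage 0: non-contextual classification with a flat prior.
labels = (norm.pdf(y, mu1, sigma) > norm.pdf(y, mu0, sigma)).astype(int)

for stage in range(10):
    p1 = uniform_filter(labels.astype(float), size=5)  # local fraction of label 1
    p1 = np.clip(p1, 1e-3, 1 - 1e-3)                   # keep both labels possible
    post1 = p1 * norm.pdf(y, mu1, sigma)               # prior x likelihood
    post0 = (1 - p1) * norm.pdf(y, mu0, sigma)
    labels = (post1 > post0).astype(int)

print("pixel error rate:", (labels != truth).mean())
```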

9.
Empirical Bayes estimates have been advocated as an improvement for mapping rare diseases or health events aggregated in small areas. In particular, different parametric approaches have been proposed for dealing with non-normal data, assuming that disease occurrences follow a non-homogeneous Poisson law whose parameters are treated as random variables. This paper shows how to conduct a complete Empirical Bayes analysis under an exchangeable model in the context of geographical epidemiology. Three approaches for defining confidence limits via a parametric bootstrap are compared: method 1 relies only on the first and second moments of the bootstrapped posterior distributions; method 2 computes the centiles of the bootstrapped posteriors; method 3 sets equal to α the average of the probabilities derived from the estimated bootstrapped cumulative posterior distributions. The simple Poisson-Gamma formulation was used to model mortality data on larynx cancer in the Local Health Units of Tuscany (males, 1980-82). Two areas of significantly elevated risk are identified.
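A minimal sketch of the Poisson-Gamma empirical Bayes workflow with a rough emulation of the bootstrap centile idea (method 2): fit the Gamma prior by moments, refit on parametric-bootstrap replicates, and take percentiles of posterior draws across replicates. The data are simulated, and the moment estimator and all sizes are assumptions rather than the paper's exact procedure.

```python
# A minimal Poisson-Gamma EB sketch with parametric-bootstrap interval emulation.
import numpy as np

rng = np.random.default_rng(7)
m = 50
E = rng.uniform(5, 50, m)                       # expected counts per area
theta = rng.gamma(8.0, 1.0 / 8.0, m)            # true relative risks (mean 1)
y = rng.poisson(E * theta)

def mom_fit(y, E):
    """Moment fit of theta_i ~ Gamma(shape=a, rate=b) from SMRs y/E."""
    r = y / E
    mean = r.mean()
    var_theta = max(r.var(ddof=1) - mean * (1.0 / E).mean(), 1e-6)
    b = mean / var_theta
    return mean * b, b                           # shape a, rate b

a, b = mom_fit(y, E)
post_mean = (y + a) / (E + b)                    # EB posterior means per area

B, draws = 500, []
for _ in range(B):
    yb = rng.poisson(E * rng.gamma(a, 1.0 / b, m))      # parametric bootstrap
    ab, bb = mom_fit(yb, E)
    draws.append(rng.gamma(y + ab, 1.0 / (E + bb)))     # one posterior draw per area
lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
print("areas flagged as elevated (lower limit > 1):", int((lo > 1.0).sum()))
```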

10.
This work estimates the effect of covariates on survival data when the times of both the originating and failure events are interval-censored. A proportional hazards model [16], along with log-linear models, was applied to data on 130 vertically infected HIV-1 children visiting a paediatric clinic. The covariates considered were antiretroviral (ARV) therapy, age at diagnosis, and change in CD4+ T cell count. The change in CD4+ T cell count was the difference between the last and first counts in the non-ARV therapy group, while in the ARV therapy group it was measured after the start of treatment. Our findings suggest that children on ARV therapy had a significantly lower risk of death (p<0.001). We further investigated the effect of age and change in CD4+ T cell count on the risk of death; these covariates exhibited a possible association with the risk of death under both procedures (p<0.0001). The number of years under ARV therapy, with diagnosis year as a confounding factor, was directly related to longevity. The two procedures gave reasonable estimates. We conclude that when the lengths of the intervals are narrow, one can opt for parametric modeling, which is less computationally intensive.

11.
Partial specification of a prior distribution can be appealing to an analyst, but there is no conventional way to update a partial prior. In this paper, we show how a framework for Bayesian updating with data can be based on the Dirichlet(a) process. Within this framework, partial-information predictors generalize standard minimax predictors and have interesting multiple-point shrinkage properties. Approximations to partial-information estimators for squared error loss are defined straightforwardly, and the resulting estimate of the mean shrinks the sample mean. The proposed updating of the partial prior is a consequence of four natural requirements when the Dirichlet parameter a is continuous: the updated partial posterior should be calculable from knowledge of only the data and the partial prior; it should be faithful to the full posterior distribution; it should assign positive probability to every observed event {X_i}; and it should not assign probability to unobserved events not included in the partial prior specification.
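A minimal worked sketch of the kind of update such a framework yields: if the partial prior records only the total mass c = a(Ω) and prior expectations of a few functionals under the Dirichlet(a) process, then (as for a fully specified DP) the posterior expectation of ∫f dP is a convex combination of the prior expectation and the empirical average. The mass c, the functionals and the data below are illustrative assumptions.

```python
# A minimal sketch of Dirichlet-process-style updating of partial prior expectations.
import numpy as np

rng = np.random.default_rng(8)
c = 5.0                                                # prior mass a(Omega)
partial_prior = {"mean": 0.0, "second_moment": 2.0}    # E_prior[X], E_prior[X^2]
funcs = {"mean": lambda x: x, "second_moment": lambda x: x ** 2}

x = rng.normal(1.0, 1.0, 40)                           # observed data

partial_posterior = {
    name: (c * partial_prior[name] + funcs[name](x).sum()) / (c + len(x))
    for name in partial_prior
}
# Empirical averages are shrunk toward the partially specified prior values.
print(partial_posterior)
```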

12.
This paper presents a new Bayesian clustering approach based on an infinite mixture model, specifically designed for time-course microarray data. The problem is to group together genes which have “similar” expression profiles, given a set of noisy measurements of their expression levels over a specific time interval. In order to capture the temporal variation of each curve, a nonparametric regression approach is used: each expression profile is expanded over a set of basis functions, and the coefficient sets of the curves are then modeled through a Bayesian infinite mixture of Gaussian distributions. The task of finding clusters of genes with similar expression profiles thus reduces to the problem of grouping together genes whose coefficients are sampled from the same distribution in the mixture. A Dirichlet process prior is naturally employed in such models, since it deals automatically with the uncertainty about the number of clusters. Posterior inference is carried out by a split-and-merge MCMC sampling scheme which integrates out the parameters of the component distributions and updates only the latent vector of cluster memberships. The final configuration is obtained via the maximum a posteriori estimator. The performance of the method is studied using synthetic and real microarray data and is compared with that of competing techniques.
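A minimal sketch of the pipeline: expand each noisy expression profile over a basis, then cluster the coefficient vectors with a Dirichlet process mixture. Here a truncated variational DP mixture from sklearn stands in for the paper's split-and-merge MCMC, a polynomial basis stands in for its basis functions, and the curve shapes and noise level are assumptions.

```python
# A minimal basis-expansion-plus-DP-mixture clustering sketch for gene curves.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(9)
t = np.linspace(0, 1, 12)                                  # 12 time points
templates = [np.sin(2 * np.pi * t), 2 * t - 1, np.exp(-4 * t)]
Y = np.array([templates[g % 3] + 0.2 * rng.standard_normal(len(t))
              for g in range(150)])                        # 150 noisy profiles

X = np.vander(t, 4, increasing=True)                       # cubic polynomial basis
coef, *_ = np.linalg.lstsq(X, Y.T, rcond=None)             # one fit per gene
coef = coef.T                                              # (genes, basis coeffs)

dpmm = BayesianGaussianMixture(
    n_components=10, weight_concentration_prior_type="dirichlet_process",
    covariance_type="full", max_iter=500, random_state=0,
).fit(coef)
labels = dpmm.predict(coef)
print("clusters used:", len(np.unique(labels)))            # typically 3
```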

13.
The author develops the theme of biology as a rich source of interesting problems where the data available are often noisy and multivariate. He presents various situations in which the data raise model building and inference issues. He emphasizes that appropriate modeling requires careful thought and usually provides methodological challenges. The data sets considered include bird count data, tracking data from leatherback turtles, and amino acid sequence data from primates and cetaceans.

14.
In recent years, wavelet shrinkage has become a very appealing method for data de-noising and density function estimation. In particular, Bayesian modelling via hierarchical priors has introduced novel approaches to wavelet analysis that have become very popular and are competitive with standard hard or soft thresholding rules. In this spirit, this paper proposes a hierarchical prior elicited on the model parameters describing the wavelet coefficients after a discrete wavelet transform (DWT) has been applied. In contrast to other approaches, the prior is a multivariate normal distribution with a covariance matrix that allows for correlation among wavelet coefficients corresponding to the same level of detail. In addition, an extra scale parameter is incorporated that permits an additional level of shrinkage of the coefficients. The posterior distribution for this shrinkage procedure is not available in closed form, but it is easily sampled through Markov chain Monte Carlo (MCMC) methods. Applications to a set of test signals and two noisy signals are presented.
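A simplified, closed-form stand-in for the paper's MCMC scheme: apply a DWT, then shrink the coefficients at each detail level by the posterior-mean factor τ_j²/(τ_j² + σ²) under an independent normal prior with level-specific variance τ_j² (the paper's prior instead correlates coefficients within a level). The test signal, wavelet and noise level are assumptions.

```python
# A minimal level-wise Bayesian linear shrinkage sketch with pywt.
import numpy as np
import pywt

rng = np.random.default_rng(10)
n = 1024
t = np.linspace(0, 1, n)
signal = np.sin(6 * np.pi * t) + (t > 0.5)            # smooth part plus a jump
y = signal + 0.3 * rng.standard_normal(n)

coeffs = pywt.wavedec(y, "db4", level=5)              # [approx, d5, ..., d1]
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise sd from finest level
for j in range(1, len(coeffs)):
    d = coeffs[j]
    tau2 = max(d.var() - sigma ** 2, 0.0)             # level-specific prior variance
    coeffs[j] = d * tau2 / (tau2 + sigma ** 2)        # posterior-mean shrinkage

denoised = pywt.waverec(coeffs, "db4")[:n]
print(f"RMSE noisy:    {np.sqrt(np.mean((y - signal) ** 2)):.3f}")
print(f"RMSE denoised: {np.sqrt(np.mean((denoised - signal) ** 2)):.3f}")
```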

15.
A novel approach to solving the independent component analysis (ICA) model in the presence of noise is proposed. We use wavelets as natural denoising tools to solve the noisy ICA model, applying a multivariate wavelet denoising algorithm that allows for spatial and temporal dependency. We also propose a statistical approach, called nested design of experiments, to select parameters such as the wavelet family and the thresholding type; this technique helps us select a more suitable combination of parameters, and it could be extended to many other problems in which one must choose parameters from among many options. The performance of the proposed method is illustrated on simulated data, with promising results. The method is also applied to latent-variable regression in the presence of noise on real data; the good results confirm the ability of multivariate wavelet denoising to solve the noisy ICA problem.
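A minimal sketch of the overall pipeline: wavelet-denoise the observed mixtures, then run ICA on the denoised channels. Here each channel is denoised separately with a universal soft threshold, whereas the paper uses a multivariate denoiser allowing spatial and temporal dependence with parameters chosen by a nested design of experiments; the sources, mixing matrix and noise level are assumptions.

```python
# A minimal wavelet-denoise-then-ICA sketch for the noisy ICA model.
import numpy as np
import pywt
from sklearn.decomposition import FastICA

rng = np.random.default_rng(11)
n = 2048
t = np.linspace(0, 8, n)
S = np.c_[np.sin(3 * t), np.sign(np.sin(5 * t))]      # two independent sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])                # mixing matrix
X = S @ A.T + 0.3 * rng.standard_normal((n, 2))       # noisy mixtures

def denoise(x):
    coeffs = pywt.wavedec(x, "sym8")
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise sd estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))         # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, "sym8")[: len(x)]

X_dn = np.column_stack([denoise(X[:, j]) for j in range(2)])
S_hat = FastICA(n_components=2, random_state=0).fit_transform(X_dn)
# Each recovered component should correlate strongly with one true source.
corr = np.abs(np.corrcoef(S_hat.T, S.T)[:2, 2:])
print(np.round(corr, 2))
```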

16.
Let T be a two-dimensional region, and let X be a surface defined on T. The values of X on T constitute an image, or pattern. The true value of X at any point of T cannot be directly observed, but data can be recorded which provide information about X. The aim is to reconstruct X using the prior knowledge that X will vary smoothly over most of T but may exhibit jump discontinuities across line segments. This information can be incorporated via Bayes' theorem, using a polygonal Markov random field on T as the prior distribution. Under this continuum model, X may in principle be estimated according to standard criteria. In practice, the techniques rely on simulation of the posterior distribution. A natural family of conjugate priors is identified, and a class of spatial-temporal Markov processes is constructed on the uncountable state space; simulation then proceeds by a method analogous to the Gibbs sampler.
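The polygonal continuum prior is hard to reproduce briefly; as a discrete-lattice analogue of the same Bayes-plus-Gibbs strategy, here is a Gibbs sampler for image reconstruction under a Gaussian Markov random field smoothness prior π(x) ∝ exp(−(β/2)·Σ_{s~t}(x_s − x_t)²) with Gaussian noise. Note that this stand-in smooths everywhere rather than preserving line discontinuities, and β, σ and the test image are assumptions.

```python
# A minimal chromatic (red-black) Gibbs sampler for a Gaussian MRF posterior.
import numpy as np

rng = np.random.default_rng(12)
n, sigma, beta = 64, 0.5, 4.0
truth = np.zeros((n, n))
truth[16:48, 16:48] = 1.0
y = truth + sigma * rng.standard_normal((n, n))

def neighbour_sum(x):
    s = np.zeros_like(x)
    s[1:, :] += x[:-1, :]; s[:-1, :] += x[1:, :]
    s[:, 1:] += x[:, :-1]; s[:, :-1] += x[:, 1:]
    return s

n_nb = neighbour_sum(np.ones((n, n)))              # 2, 3 or 4 neighbours per site
ii, jj = np.indices((n, n))
red, black = (ii + jj) % 2 == 0, (ii + jj) % 2 == 1

x, samples = y.copy(), []
for it in range(300):
    for mask in (red, black):                      # sites in a colour are independent
        prec = 1.0 / sigma ** 2 + beta * n_nb
        mean = (y / sigma ** 2 + beta * neighbour_sum(x)) / prec
        noise = rng.standard_normal((n, n)) / np.sqrt(prec)
        x = np.where(mask, mean + noise, x)        # full-conditional Gaussian draw
    if it >= 100:
        samples.append(x.copy())

post_mean = np.mean(samples, axis=0)
print("RMSE noisy:", np.sqrt(((y - truth) ** 2).mean()).round(3),
      "| posterior mean:", np.sqrt(((post_mean - truth) ** 2).mean()).round(3))
```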

17.
In clinical trials, missing data commonly arise through nonadherence to the randomized treatment or to study procedures. For trials in which recurrent event endpoints are of interest, conventional analyses using the proportional intensity model or the count model assume that the data are missing at random, which cannot be tested using the observed data alone; thus, sensitivity analyses are recommended. We implement control-based multiple imputation as a sensitivity analysis for recurrent event data. We model the recurrent events using a piecewise exponential proportional intensity model with frailty and sample the parameters from the posterior distribution. We impute the number of events after dropout and correct the variance estimation using a bootstrap procedure. We apply the method to data from a sitagliptin study.
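A minimal sketch of control-based ("jump to reference") multiple imputation for recurrent events: a subject's post-dropout event count is imputed from the control-arm rate, the completed datasets are analysed, and the results are combined by Rubin's rules. The paper instead draws rates from the posterior of a piecewise exponential frailty model and bootstraps the variance; here a constant-rate Poisson model stands in, and all data are simulated assumptions.

```python
# A minimal control-based multiple-imputation sketch for recurrent event counts.
import numpy as np

rng = np.random.default_rng(13)
n_per_arm, followup = 200, 2.0
lam = {"control": 1.5, "treated": 0.9}           # true event rates per year

rows = []
for arm in ("control", "treated"):
    for _ in range(n_per_arm):
        t_obs = followup if rng.random() > 0.25 else rng.uniform(0, followup)
        rows.append((arm, t_obs, rng.poisson(lam[arm] * t_obs)))

events_c = sum(e for a, t, e in rows if a == "control")
pyears_c = sum(t for a, t, e in rows if a == "control")
lam_c = events_c / pyears_c                      # observed control-arm rate

M, est, var = 20, [], []
for _ in range(M):
    completed = [(a, e + rng.poisson(lam_c * (followup - t)))  # control-based draw
                 for a, t, e in rows]
    ev = {arm: sum(e for a, e in completed if a == arm) for arm in lam}
    est.append(np.log(ev["treated"] / ev["control"]))   # equal person-time per arm
    var.append(1.0 / ev["treated"] + 1.0 / ev["control"])  # delta-method variance

qbar, ubar, b = np.mean(est), np.mean(var), np.var(est, ddof=1)
total_var = ubar + (1 + 1 / M) * b               # Rubin's rules
print(f"log rate ratio {qbar:.3f}, se {np.sqrt(total_var):.3f}")
```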

18.
The support vector machine (SVM), first developed by Vapnik and his group at AT&T Bell Laboratories, is being used as a new technique for regression and classification problems. In this paper we present an approach to estimating prediction intervals for SVM regression based on posterior predictive densities. The method is illustrated with a data example.
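The paper derives intervals from posterior predictive densities; as a quick, distribution-free stand-in, here is a split-conformal interval around an SVM regression fit: hold out a calibration set and widen each prediction by a quantile of the absolute calibration residuals. The data and SVR settings are assumptions.

```python
# A minimal split-conformal prediction-interval sketch around SVR.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(14)
n = 600
x = rng.uniform(-3, 3, (n, 1))
y = np.sin(2 * x[:, 0]) + 0.3 * rng.standard_normal(n)

fit_idx, cal_idx = np.arange(0, 400), np.arange(400, 600)
model = SVR(C=10.0, gamma="scale").fit(x[fit_idx], y[fit_idx])

resid = np.abs(y[cal_idx] - model.predict(x[cal_idx]))
q = np.quantile(resid, 0.95 * (1 + 1 / len(cal_idx)))  # conformal quantile

x_new = np.array([[0.5]])
pred = model.predict(x_new)[0]
print(f"95% interval at x=0.5: [{pred - q:.2f}, {pred + q:.2f}]")
```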

19.
For models with random effects or missing data, the likelihood function is sometimes intractable analytically but amenable to Monte Carlo approximation. To get a good approximation, the parameter value that drives the simulations should be sufficiently close to the maximum likelihood estimate (MLE) which unfortunately is unknown. Introducing a working prior distribution, we express the likelihood function as a posterior expectation and approximate it using posterior simulations. If the sample size is large, the sample information is likely to outweigh the prior specification and the posterior simulations will be concentrated around the MLE automatically, leading to good approximation of the likelihood near the MLE. For smaller samples, we propose to use the current posterior as the next prior distribution to make the posterior simulations closer to the MLE and hence improve the likelihood approximation. By using the technique of data duplication, we can simulate from the sharpened posterior distribution without actually updating the prior distribution. The suggested method works well in several test cases. A more complex example involving censored spatial data is also discussed.
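A minimal sketch of the core identity on a deliberately tractable example: with a working prior π, the likelihood satisfies L(θ) ∝ p(θ|y)/π(θ), so a kernel density estimate built from posterior simulations approximates the likelihood up to a constant. Here y ~ N(θ, 1) so the result can be checked exactly; the prior and data are assumptions.

```python
# A minimal check that KDE(posterior draws) / prior recovers the likelihood shape.
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(15)
y = rng.normal(1.3, 1.0, 30)                     # data, true theta = 1.3
prior_mean, prior_sd = 0.0, 10.0                 # working prior

# Exact conjugate posterior (stands in for MCMC output).
post_var = 1.0 / (1.0 / prior_sd**2 + len(y))
post_mean = post_var * (y.sum() + prior_mean / prior_sd**2)
draws = rng.normal(post_mean, np.sqrt(post_var), 20_000)

kde = gaussian_kde(draws)
grid = np.linspace(0.5, 2.0, 7)
loglik_hat = np.log(kde(grid)) - norm.logpdf(grid, prior_mean, prior_sd)
loglik_true = norm.logpdf(y[:, None], grid, 1.0).sum(axis=0)

# The two should agree up to an additive constant near the MLE.
diff = loglik_hat - loglik_true
print(np.round(diff - diff[0], 3))
```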

20.
Data resulting from some deterministic dynamic systems may appear to be random. Distinguishing such data from truly random data is a new challenge for statisticians. This paper develops a nonparametric statistical test procedure for distinguishing noisy chaos from i.i.d. random processes. The procedure can be easily implemented on a computer and is very effective in identifying low-dimensional chaos in certain instances.
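One simple nonparametric procedure in this spirit (not the paper's exact test): embed the series in delay coordinates, measure one-step nearest-neighbour prediction skill, and compare it with the skill on shuffled surrogates; an i.i.d. series should show no forecastable structure. The embedding dimension, series length and logistic-map example are illustrative assumptions.

```python
# A minimal surrogate test for low-dimensional structure via nonlinear prediction.
import numpy as np

rng = np.random.default_rng(16)

def logistic_map(n, x0=0.4, r=4.0):
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1 - x[i - 1])
    return x

def prediction_error(x, dim=2):
    """One-step nearest-neighbour forecast error in delay coordinates."""
    emb = np.column_stack([x[i : len(x) - dim + i] for i in range(dim)])
    targets = x[dim:]
    half = len(targets) // 2
    errs = []
    for i in range(half, len(targets)):           # forecast the second half
        d = np.abs(emb[:half] - emb[i]).max(axis=1)   # library = first half
        errs.append(abs(targets[np.argmin(d)] - targets[i]))
    return np.mean(errs)

x = logistic_map(1000)
e_obs = prediction_error(x)
e_surr = [prediction_error(rng.permutation(x)) for _ in range(99)]
p_value = (1 + sum(e <= e_obs for e in e_surr)) / 100
print(f"observed error {e_obs:.4f}, surrogate median {np.median(e_surr):.4f}, "
      f"p = {p_value:.2f}")
```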
