Similar Literature
20 similar articles found.
1.
The objective of this paper is to study U-type designs for Bayesian nonparametric response surface prediction under correlated errors. The asymptotic Bayes criterion is developed following the asymptotic approach of Mitchell et al. (1994) for a more general covariance kernel proposed by Chatterjee and Qin (2011). A relationship between the asymptotic Bayes criterion and other criteria, such as orthogonality and aberration, is then developed. A lower bound for the criterion is also obtained, and numerical results show that this lower bound is tight. The established results generalize those of Yue et al. (2011) from symmetrical to asymmetrical U-type designs.

2.
3.
This paper deals with the problem of increasing the number of air pollution monitoring stations in the city of Tehran for efficient spatial prediction. As the data are multivariate and skewed, we introduce two multivariate skew models by extending the univariate skew Gaussian random field proposed by Zareifard and Jafari Khaledi (2013). These models provide extensions of the linear model of coregionalization for non-Gaussian data. In the Bayesian framework, the optimal network design is found based on the maximum entropy criterion. A Markov chain Monte Carlo algorithm is developed to implement posterior inference. Finally, the applicability of the two proposed models is demonstrated by analyzing an air pollution data set.

4.
We develop a novel computational methodology for Bayesian optimal sequential design for nonparametric regression. This methodology, which we call inhomogeneous evolutionary Markov chain Monte Carlo, combines ideas from simulated annealing, genetic and evolutionary algorithms, and Markov chain Monte Carlo. Our framework allows optimality criteria with general utility functions and general classes of priors for the underlying regression function. We illustrate the usefulness of the methodology with applications to experimental design for nonparametric function estimation using Gaussian process priors and free-knot cubic spline priors.
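The abstract describes the algorithm only at a high level. The sketch below is not the authors' inhomogeneous evolutionary MCMC; it is a minimal simulated-annealing search over one-dimensional designs, included only to make the "annealing plus stochastic search over an expected-utility surface" idea concrete. The objective `expected_utility` is a hypothetical stand-in (a Gaussian-kernel log-determinant that rewards well-spread points), not the utility used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_utility(design):
    """Hypothetical stand-in for a Monte Carlo estimate of Bayes expected
    utility: the log-determinant of a squared-exponential kernel matrix,
    which rewards well-spread design points on [0, 1]."""
    d = np.abs(design[:, None] - design[None, :])
    K = np.exp(-0.5 * (d / 0.1) ** 2) + 1e-6 * np.eye(len(design))
    _, logdet = np.linalg.slogdet(K)
    return logdet

def anneal_design(n_points=10, n_iter=2000, t0=1.0, t1=1e-3):
    """Simulated-annealing search for an n_points design maximizing expected_utility."""
    cur = rng.uniform(0.0, 1.0, n_points)
    cur_u = expected_utility(cur)
    best, best_u = cur.copy(), cur_u
    for i in range(n_iter):
        temp = t0 * (t1 / t0) ** (i / n_iter)                              # geometric cooling schedule
        prop = np.clip(cur + rng.normal(0.0, 0.05, n_points), 0.0, 1.0)    # local perturbation
        prop_u = expected_utility(prop)
        if prop_u >= cur_u or rng.random() < np.exp((prop_u - cur_u) / temp):
            cur, cur_u = prop, prop_u                                      # Metropolis-style acceptance
            if cur_u > best_u:
                best, best_u = cur.copy(), cur_u
    return np.sort(best), best_u

design, utility = anneal_design()
print(design, utility)
```

An evolutionary variant would maintain a population of candidate designs and add crossover moves; the acceptance step would stay essentially the same.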

5.
In this paper, a Bayesian two-stage D–D optimal design for mixture experimental models under model uncertainty is developed. A Bayesian D-optimality criterion is used in the first stage to minimize the determinant of the posterior variances of the parameters. The second-stage design is then generated according to an optimality procedure based on the model improved with the first-stage data. The results show that the Bayesian two-stage D–D optimal design for mixture experiments under model uncertainty is more efficient than both the Bayesian one-stage D-optimal design and the non-Bayesian one-stage D-optimal design in most situations. Furthermore, simulations are used to obtain a reasonable ratio of the sample sizes between the two stages.
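For orientation only (the abstract gives no formulas), in the standard linear-model setting with a normal prior the Bayesian D-criterion has the closed form below; the paper's mixture-experiment, model-uncertainty version builds on the same posterior-determinant idea but averages over candidate models.

```latex
% Assumed generic setting: y = X\beta + \varepsilon with \varepsilon \sim N(0, \sigma^2 I)
% and prior \beta \sim N(\beta_0, \sigma^2 R^{-1}).
\[
  \operatorname{Cov}(\beta \mid y) \;=\; \sigma^{2}\,(X^{\top}X + R)^{-1},
  \qquad
  \min_{X}\ \det\operatorname{Cov}(\beta \mid y)
  \;\Longleftrightarrow\;
  \max_{X}\ \log\det\!\left(X^{\top}X + R\right).
\]
```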

6.
This report is about the analysis of stochastic processes of the form R = S + N, where S is a “smooth” functional and N is noise. The proposed methods derive from the assumption that the observed R-values and the unobserved values of R (the inferential objectives of the analysis) are linearly related through Taylor series expansions of observed about unobserved values. The expansion errors and all other a priori unspecified quantities have a joint multivariate normal distribution that expresses the prior uncertainty about their values. The results include interpolators, predictors, and derivative estimates, with credibility-interval estimates automatically generated in each case. An analysis of an acid-rain wet-deposition time series is included to indicate the efficacy of the proposed method; it was this problem that led to the methodological developments reported in this paper.

7.
Typically, in the practice of causal inference from observational studies, a parametric model is assumed for the joint population density of potential outcomes and treatment assignments, possibly accompanied by the assumption of no hidden bias. However, both assumptions are questionable for real data: the accuracy of causal inference is compromised when either assumption is violated, and the parametric assumption precludes capturing a more general range of density shapes (e.g., heavier tail behavior and possible multimodality). We introduce a flexible Bayesian nonparametric causal model to provide more accurate causal inferences. The model makes use of a stick-breaking prior, which has the flexibility to capture multimodality, skewness, and heavier tail behavior in this joint population density, while accounting for hidden bias. We prove the asymptotic consistency of the posterior distribution of the model, and we illustrate the causal model through the analysis of small and large observational data sets.
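For readers unfamiliar with the term, a stick-breaking prior represents an unknown distribution G as a countable mixture of point masses whose weights come from successively breaking a unit-length stick; the canonical Dirichlet-process case is shown below (the specific stick-breaking process used in the paper is not stated in the abstract).

```latex
\[
  G \;=\; \sum_{j=1}^{\infty} w_j\,\delta_{\theta_j},
  \qquad
  w_j \;=\; v_j \prod_{\ell < j}(1 - v_\ell),
  \qquad
  v_j \overset{\text{iid}}{\sim} \mathrm{Beta}(1,\alpha),
  \qquad
  \theta_j \overset{\text{iid}}{\sim} G_0 .
\]
```

Mixtures built on such a G can represent multimodality, skewness, and heavy tails without committing to a fixed parametric family, which is the flexibility the authors exploit.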

8.
Frequentist and Bayesian methods differ in many respects but share some basic optimality properties. In real-life prediction problems, there are situations in which a model based on one of these paradigms is preferable, depending on subjective criteria. Nonparametric classification and regression techniques, such as decision trees and neural networks, have both frequentist approaches to learning from data (classification and regression trees (CART) and artificial neural networks) and Bayesian counterparts (Bayesian CART and Bayesian neural networks). In this paper, we present two hybrid models combining the Bayesian and frequentist versions of CART and neural networks, which we call Bayesian neural tree (BNT) models. BNT models can simultaneously perform feature selection and prediction, are highly flexible, and generalise well in settings with limited training observations. We study the statistical consistency of the proposed approaches and derive the optimal value of a vital model parameter. The excellent performance of the newly proposed BNT models is shown using simulation studies. We also provide illustrative examples using a wide variety of standard regression datasets from a publicly available machine learning repository to show the superiority of the proposed models over the popular Bayesian CART and Bayesian neural network models.

9.
A biosimilar drug is a biological product that is highly similar to, and has no clinically meaningful difference from, a licensed reference product in terms of safety, purity, and potency. Biosimilar study design is essential to demonstrate equivalence between the biosimilar drug and the reference product. However, existing designs and assessment methods are primarily based on binary and continuous endpoints. We propose a Bayesian adaptive design for biosimilarity trials with a time-to-event endpoint. The proposed design has two key features. First, we employ the calibrated power prior to precisely borrow relevant information from historical data on the reference drug. Second, we propose a two-stage procedure using the Bayesian biosimilarity index (BBI) to allow early stopping and improve efficiency. Extensive simulations are conducted to demonstrate the operating characteristics of the proposed method in contrast with a naive method. A sensitivity analysis and extensions with respect to the assumptions are presented.
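The abstract does not spell out the power prior; in its usual form, with historical data D_0 on the reference drug, initial prior π_0, and a discounting parameter a_0 ∈ [0, 1], it is

```latex
\[
  \pi(\theta \mid D_0, a_0) \;\propto\; L(\theta \mid D_0)^{\,a_0}\,\pi_0(\theta),
  \qquad a_0 \in [0,1],
\]
```

so that a_0 = 0 ignores the historical data and a_0 = 1 pools it fully; the "calibrated" version chooses a_0 from a measure of agreement between historical and current data, with the specific calibration left to the paper itself.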

10.
Many commonly used statistical methods for data analysis or clinical trial design rely on incorrect assumptions or assume an over‐simplified framework that ignores important information. Such statistical practices may lead to incorrect conclusions about treatment effects or clinical trial designs that are impractical or that do not accurately reflect the investigator's goals. Bayesian nonparametric (BNP) models and methods are a very flexible new class of statistical tools that can overcome such limitations. This is because BNP models can accurately approximate any distribution or function and can accommodate a broad range of statistical problems, including density estimation, regression, survival analysis, graphical modeling, neural networks, classification, clustering, population models, forecasting and prediction, spatiotemporal models, and causal inference. This paper describes 3 illustrative applications of BNP methods, including a randomized clinical trial to compare treatments for intraoperative air leaks after pulmonary resection, estimating survival time with different multi‐stage chemotherapy regimes for acute leukemia, and evaluating joint effects of targeted treatment and an intermediate biological outcome on progression‐free survival time in prostate cancer.

11.
We discuss a method for combining different but related longitudinal studies to improve predictive precision. The motivation is to borrow strength across clinical studies in which the same measurements are collected at different frequencies. Key features of the data are heterogeneous populations and an unbalanced design across three studies of interest. The first two studies are phase I studies with very detailed observations on a relatively small number of patients. The third study is a large phase III study with over 1500 enrolled patients, but with relatively few measurements on each patient. Patients receive different doses of several drugs in the studies, with the phase III study containing significantly less toxic treatments. Thus, the main challenges for the analysis are to accommodate heterogeneous population distributions and to formalize borrowing strength across the studies and across the various treatment levels. We describe a hierarchical extension over suitable semiparametric longitudinal data models to achieve the inferential goal. A nonparametric random-effects model accommodates the heterogeneity of the population of patients. A hierarchical extension allows borrowing strength across different studies and different levels of treatment by introducing dependence across these nonparametric random-effects distributions. Dependence is introduced by building an analysis of variance (ANOVA) like structure over the random-effects distributions for different studies and treatment combinations. Model structure and parameter interpretation are similar to standard ANOVA models. Instead of the unknown normal means as in standard ANOVA models, however, the basic objects of inference are random distributions, namely the unknown population distributions under each study. The analysis is based on a mixture of Dirichlet processes model as the underlying semiparametric model.
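One common way to realize the "ANOVA-like structure over random-effects distributions" described above (the paper's exact construction may differ) is an ANOVA-type dependent Dirichlet process, in which the study-by-treatment random distributions share their weights while the atoms decompose additively:

```latex
\[
  G_{s,t} \;=\; \sum_{j=1}^{\infty} w_j\,\delta_{\theta_{j,s,t}},
  \qquad
  \theta_{j,s,t} \;=\; m_j + a_{j,s} + b_{j,t},
\]
```

with overall effects m_j, study effects a_{j,s}, and treatment effects b_{j,t} drawn independently from their own base measures, so that the population distributions for different studies and treatment levels are dependent but not identical.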

12.
In this paper we consider the problems of estimation and prediction when observed data from a lognormal distribution are based on lower record values and lower record values with inter-record times. We compute maximum likelihood estimates and asymptotic confidence intervals for the model parameters. We also obtain Bayes estimates and highest posterior density (HPD) intervals using noninformative and informative priors under squared error and LINEX loss functions. Furthermore, for the problem of Bayesian prediction under one-sample and two-sample frameworks, we obtain predictive estimates and the associated predictive equal-tail and HPD intervals. Finally, for illustration purposes, a real data set is analyzed and a simulation study is conducted to compare the methods of estimation and prediction.

13.
A density estimation method in a Bayesian nonparametric framework is presented for the case in which the recorded data do not come directly from the distribution of interest, but from a length-biased version of it. From a Bayesian perspective, efforts to computationally evaluate posterior quantities conditional on length-biased data have been hindered by the inability to circumvent the problem of a normalizing constant. In this article, we present a novel Bayesian nonparametric approach to the length-biased sampling problem that circumvents the issue of the normalizing constant. Numerical illustrations as well as a real data example are presented, and the estimator is compared against its frequentist counterpart, the kernel density estimator for indirect data of Jones.
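The sampling mechanism referred to here is the standard length-biased one: if f is the density of interest on the positive half-line with mean μ, the data are drawn from the weighted density

```latex
\[
  g(y) \;=\; \frac{y\,f(y)}{\mu},
  \qquad
  \mu \;=\; \int_{0}^{\infty} y\,f(y)\,\mathrm{d}y ,
\]
```

so recovering f from g reintroduces the unknown normalizing constant μ, which is exactly the quantity that complicates posterior computation in this setting.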

14.
We propose a new class of time-dependent random probability measures and show how it can be used for Bayesian nonparametric inference in continuous time. By means of a nonparametric hierarchical model, we define a random process with a geometric stick-breaking representation and a dependence structure induced by a one-dimensional diffusion process of Wright-Fisher type. The sequence is shown to be a strongly stationary measure-valued process with continuous sample paths which, despite the simplicity of the weights structure, can be used for inference on the trajectory of a discretely observed continuous-time phenomenon. A simple estimation procedure is presented and illustrated with simulated and real financial data.
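A minimal sketch, assuming a standard-normal base measure and a naive Euler discretization of a Wright-Fisher-type diffusion for the breaking probability; it shows only the geometric stick-breaking weights w_j = p(1 − p)^(j−1) and a crude hint of the time dependence, not the authors' construction or estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def geometric_stick_breaking_measure(p, n_atoms=200):
    """Truncated random probability measure with geometric stick-breaking
    weights w_j = p * (1 - p)**(j - 1) and i.i.d. atoms from a standard
    normal base measure (a placeholder choice)."""
    j = np.arange(1, n_atoms + 1)
    w = p * (1.0 - p) ** (j - 1)
    w /= w.sum()                           # renormalize after truncation
    atoms = rng.normal(0.0, 1.0, n_atoms)
    return w, atoms

def wright_fisher_step(p, a=1.0, b=1.0, dt=0.01):
    """Naive Euler step of a Wright-Fisher-type diffusion with mutation rates
    (a, b); used only to hint at how p -- and hence the whole measure -- can
    evolve over time."""
    drift = 0.5 * (a * (1.0 - p) - b * p)
    diffusion = np.sqrt(max(p * (1.0 - p), 0.0))
    p_new = p + drift * dt + diffusion * np.sqrt(dt) * rng.normal()
    return float(np.clip(p_new, 1e-4, 1.0 - 1e-4))

p = 0.3
for t in range(5):
    w, atoms = geometric_stick_breaking_measure(p)
    print(f"t={t}  p={p:.3f}  mean under G_t ~= {np.dot(w, atoms):+.3f}")
    p = wright_fisher_step(p)
```

Because all weights are driven by the single parameter p, the whole measure-valued path is controlled by a one-dimensional diffusion, which is what keeps the weights structure simple.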

15.
A copula can fully characterize the dependence of multiple variables. The purpose of this paper is to provide a Bayesian nonparametric approach to the estimation of a copula, and we do this by mixing over a class of parametric copulas. In particular, we show that any bivariate copula density can be arbitrarily accurately approximated by an infinite mixture of Gaussian copula density functions. The model can be estimated by Markov chain Monte Carlo methods and is demonstrated on both simulated and real data sets.
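A small numerical illustration of the object being mixed: the bivariate Gaussian copula density and a truncated (finite) mixture of such densities. In the paper the number of components, weights, and correlations would be inferred by MCMC; the function names and the values below are placeholders for this sketch only.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_density(u, v, rho):
    """Bivariate Gaussian copula density c(u, v; rho) on (0, 1)^2."""
    x, y = norm.ppf(u), norm.ppf(v)
    r2 = 1.0 - rho ** 2
    return np.exp(-(rho ** 2 * (x ** 2 + y ** 2) - 2.0 * rho * x * y) / (2.0 * r2)) / np.sqrt(r2)

def copula_mixture_density(u, v, weights, rhos):
    """Finite mixture of Gaussian copula densities -- a truncated stand-in
    for the infinite mixture described in the abstract."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * gaussian_copula_density(u, v, r) for w, r in zip(weights, rhos))

# Example: a three-component mixture producing an asymmetric dependence pattern.
print(copula_mixture_density(0.2, 0.9, weights=[0.5, 0.3, 0.2], rhos=[0.9, 0.0, -0.6]))
```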

16.
We investigate Bayesian optimal designs for changepoint problems. We find robust optimal designs which allow for arbitrary distributions before and after the change, arbitrary prior densities on the parameters before and after the change, and any log‐concave prior density on the changepoint. We define a new design measure for Bayesian optimal design problems as a means of finding the optimal design. Our results apply to any design criterion function concave in the design measure. We illustrate our results by finding the optimal design in a problem motivated by a previous clinical trial.

17.
The aim of this study is to apply the Bayesian approach to identifying optimal experimental designs to a toxicokinetic-toxicodynamic model that describes the response of aquatic organisms to time-dependent concentrations of toxicants. As experimental designs, we restrict ourselves to pulses and constant concentrations. A design within this set is called optimal if it maximizes the expected gain in knowledge about the parameters. The focus is on parameters associated with the auxiliary damage variable of the model, which can only be inferred indirectly from survival time-series data. The gain in knowledge from an experiment is quantified both by the ratio of posterior to prior variances of individual parameters and by the entropy of the posterior distribution relative to the prior over the whole parameter space. The numerical methods developed to calculate the expected gain in knowledge are expected to be useful beyond this case study, in particular for multinomially distributed data such as survival time series.
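In generic notation (the abstract gives no formulas, so the exact definitions below are assumptions), the two measures of knowledge gain for a design d correspond to a posterior-to-prior variance ratio per parameter and the expected Kullback-Leibler divergence of the posterior from the prior, averaged over data y simulated under d:

```latex
\[
  r_i(d) \;=\; \mathbb{E}_{y \mid d}\!\left[
      \frac{\operatorname{Var}(\theta_i \mid y, d)}{\operatorname{Var}(\theta_i)}\right],
  \qquad
  \mathrm{EIG}(d) \;=\; \mathbb{E}_{y \mid d}\!\left[\int p(\theta \mid y, d)\,
      \log\frac{p(\theta \mid y, d)}{p(\theta)}\,\mathrm{d}\theta\right].
\]
```

Smaller r_i(d) and larger EIG(d) both indicate a more informative experiment; both must be estimated by simulating data under candidate designs.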

18.
A Bayesian nonparametric model for Taguchi's on-line quality monitoring procedure for attributes is introduced. The proposed model accommodates settings ranging from the original single-shift case to the more realistic situation of gradual quality deterioration, and it allows the incorporation of an expert's opinion on the production process. Based on the number of inspections carried out until a defective item is found, Bayesian updating is performed for the distribution function that represents the increasing sequence of defective fractions during a cycle, with a mixture of Dirichlet processes as the prior distribution. Bayes estimates of the relevant quantities are also obtained.

19.
Given a sample from a finite population, we provide a nonparametric Bayesian prediction interval for the finite population mean when a standard normality assumption may be tenuous. We do so using the Dirichlet process (DP), a nonparametric Bayesian procedure currently receiving much attention. An asymptotic Bayesian prediction interval is well known, but it does not incorporate all the features of the DP. We show how to compute the exact prediction interval under the full Bayesian DP model. However, under the DP, when the population size is much larger than the sample size, the computation becomes expensive. Therefore, for simplicity, one might still want useful and accurate approximations to the prediction interval. For this purpose, we provide a Bayesian procedure that approximates the distribution using the exchangeability property (correlation) of the DP together with normality. We compare the exact interval and our approximate interval with three standard intervals: the design-based interval under simple random sampling, an empirical Bayes interval, and a moment-based interval that uses the mean and variance under the DP. These latter three intervals do not fully utilize the posterior distribution of the finite population mean under the DP. Using several numerical examples and a simulation study, we show that our approximate Bayesian interval is a good competitor to the exact Bayesian interval for different combinations of sample sizes and population sizes.
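The exact interval in the paper depends on the DP concentration parameter and the full posterior; as a rough illustration of the mechanics only, the sketch below uses the noninformative limit of the DP (the Bayesian bootstrap) to draw the N − n non-sampled units and form posterior draws of the finite population mean. All names are hypothetical, and this is neither the authors' exact nor their approximate procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def dp_population_mean_draws(sample, N, n_draws=5000):
    """Posterior draws of a finite population mean under the noninformative
    (concentration -> 0) limit of the Dirichlet process, i.e. the Bayesian
    bootstrap: the N - n non-sampled units are resampled from the observed
    values with Dirichlet(1, ..., 1) weights."""
    sample = np.asarray(sample, dtype=float)
    n = sample.size
    draws = np.empty(n_draws)
    for b in range(n_draws):
        w = rng.dirichlet(np.ones(n))                      # random DP weights on observed values
        unsampled = rng.choice(sample, size=N - n, p=w)    # predict the unseen units
        draws[b] = (sample.sum() + unsampled.sum()) / N    # finite population mean
    return draws

sample = rng.lognormal(mean=0.0, sigma=1.0, size=50)       # skewed sample, where normality is tenuous
draws = dp_population_mean_draws(sample, N=1000)
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"95% prediction interval for the population mean: ({lo:.3f}, {hi:.3f})")
```

With a skewed sample the resulting interval is typically asymmetric, which is the kind of behavior a normality-based interval cannot capture.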

20.
When Shannon entropy is used as a criterion in the optimal design of experiments, advantage can be taken of the classical identity representing the joint entropy of parameters and observations as the sum of the marginal entropy of the observations and the preposterior conditional entropy of the parameters. Following previous work in which this idea was used in spatial sampling, the method is applied to standard parameterized Bayesian optimal experimental design. Under suitable conditions, which include non-linear as well as linear regression models, it is shown in a few steps that maximizing the marginal entropy of the sample is equivalent to minimizing the preposterior entropy, the usual Bayesian criterion, thus avoiding the use of conditional distributions. Using this marginal formulation, it is shown that under normality assumptions every standard model with a two-point prior distribution on the parameters admits an optimal design supported on a single point. Other results include a new asymptotic formula that applies when the error variance is large, and bounds on the size of the design support.
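The identity in question is the entropy chain rule, written here for data Y collected under a given design and parameters θ; the "suitable conditions" of the paper are summarized only loosely below as the requirement that the joint entropy not depend on the design.

```latex
\[
  H(\theta, Y) \;=\; H(Y) + H(\theta \mid Y) \;=\; H(\theta) + H(Y \mid \theta),
  \qquad
  H(\theta \mid Y) \;=\; \mathbb{E}_{Y}\bigl[H(\theta \mid Y = y)\bigr].
\]
% If the design changes neither H(\theta) nor H(Y | \theta) (for instance,
% additive noise with a fixed error distribution), the joint entropy is
% constant over designs, and therefore
\[
  \max_{\text{design}}\ H(Y)
  \quad\Longleftrightarrow\quad
  \min_{\text{design}}\ H(\theta \mid Y),
\]
% which is the equivalence the paper exploits to avoid conditional distributions.
```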
