Similar Documents (20 results)
1.
A common task in quality control is to determine a control limit for a product at the time of release that incorporates its risk of degradation over time. Such a limit for a given quality measurement will be based on empirical stability data, the intended shelf life of the product and the stability specification. The task is particularly important when the registered specifications for release and stability are equal. We discuss two relevant formulations and their implementations in both a frequentist and a Bayesian framework. The first ensures that the risk of a batch failing the specification is comparable at release and at the end of shelf life. The second screens out batches at release time that are at high risk of failing the stability specification at the end of their shelf life. Although the second formulation seems more natural from a quality assurance perspective, it usually yields a control limit that is too stringent. In this paper we provide theoretical insight into this phenomenon, and introduce a heat-map visualisation that may help practitioners to assess the feasibility of implementing a limit under the second formulation. We also suggest a solution when implementation is infeasible. In addition, the current industrial benchmark is reviewed and contrasted with the two formulations. Computational algorithms for both formulations are laid out in detail and illustrated on a dataset.
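As a rough illustration of the first formulation, the sketch below computes a plug-in release limit from a pooled linear degradation fit; the specification, shelf life and simulated stability data are illustrative assumptions, not values from the paper.

```r
## Minimal sketch of a plug-in release control limit under a linear
## degradation model; spec_low, the 36-month shelf life and the simulated
## stability data are hypothetical assumptions.
set.seed(1)
shelf_life <- 36                         # months (assumed)
spec_low   <- 95                         # lower stability specification (assumed)

## Simulated stability data: assay (%) measured over time for four batches
time  <- rep(c(0, 3, 6, 9, 12, 18, 24, 36), times = 4)
batch <- factor(rep(1:4, each = 8))
assay <- 101 - 0.08 * time + rnorm(length(time), sd = 0.5)

fit   <- lm(assay ~ time)                # pooled linear degradation model
beta  <- coef(fit)["time"]               # estimated slope (loss per month)
sigma <- summary(fit)$sigma              # residual SD

## A batch releasing at 'limit' should still meet spec_low at the end of
## shelf life with high probability (plug-in version of the first formulation)
p_fail <- 0.05
limit  <- spec_low - beta * shelf_life + qnorm(1 - p_fail) * sigma
cat(sprintf("Release control limit: %.2f%%\n", limit))
```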

2.
Pharmaceutical companies and manufacturers of food products are legally required to label the product's shelf-life on the packaging. For pharmaceutical products the requirements for how to determine the shelf-life are highly regulated, yet the regulatory documents do not explicitly define the shelf-life; instead, the definition is implied through the estimation procedure. In this paper, the focus is on the situation where multiple batches are used to determine a label shelf-life that is applicable to all future batches. The shortcomings of existing estimation approaches are discussed and then addressed by proposing new definitions of shelf-life and label shelf-life, in which greater emphasis is placed on within- and between-batch variability. Furthermore, an estimation approach is developed and its properties are illustrated using a simulation study. Finally, the approach is applied to real data.
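For context, here is a minimal sketch of the standard single-batch regression benchmark that such proposals build on: the batch shelf-life is taken as the earliest time at which the one-sided 95% confidence bound of the fitted degradation line crosses the specification. The toy data and specification are assumptions.

```r
## Minimal sketch of the per-batch regression shelf-life estimate; the
## two-sided 90% confidence band gives the one-sided 95% lower bound.
set.seed(4)
time  <- c(0, 3, 6, 9, 12, 18, 24)       # stability time points (months)
assay <- 100.5 - 0.12 * time + rnorm(length(time), sd = 0.3)
spec  <- 95                               # lower specification (assumed)

fit  <- lm(assay ~ time)
grid <- seq(0, 60, by = 0.1)
lwr  <- predict(fit, data.frame(time = grid),
                interval = "confidence", level = 0.90)[, "lwr"]
min(grid[lwr < spec])                     # estimated batch shelf-life (months)
```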

3.
Bayesian nonparametric methods have been applied to survival analysis problems since the emergence of the area of Bayesian nonparametrics. However, the use of the flexible class of Dirichlet process mixture models has been rather limited in this context. This is arguably due, to a large extent, to the standard way of fitting such models, which precludes full posterior inference for many functionals of interest in survival analysis applications. To overcome this difficulty, we provide a computational approach to obtain the posterior distribution of general functionals of a Dirichlet process mixture. We model the survival distribution employing a flexible Dirichlet process mixture with a Weibull kernel that yields rich inference for several important functionals; in the process, a method for hazard function estimation emerges. Methods for simulation-based model fitting in the presence of censoring, and for prior specification, are provided. We illustrate the modeling approach with simulated and real data.
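A minimal sketch of the kind of object involved: a truncated stick-breaking draw from a Dirichlet process mixture of Weibull kernels and the hazard it induces. The DP precision, base measure and truncation level below are illustrative assumptions, not the paper's prior.

```r
## Minimal sketch: one (truncated) stick-breaking draw from a DP mixture of
## Weibull kernels, and the induced hazard h = f / S.
set.seed(2)
alpha <- 1            # DP precision (assumed)
K     <- 50           # truncation level (assumed)

## Stick-breaking weights: w_k = v_k * prod_{j<k} (1 - v_j)
v <- rbeta(K, 1, alpha)
w <- v * cumprod(c(1, 1 - v[-K]))

## Kernel parameters (shape, scale) drawn from an assumed base measure G0
shape <- rgamma(K, 2, 1)
scale <- rgamma(K, 2, 0.5)

## Mixture density, survival and hazard on a grid
t_grid <- seq(0.01, 10, length.out = 200)
f <- sapply(t_grid, function(t) sum(w * dweibull(t, shape, scale)))
S <- sapply(t_grid, function(t) sum(w * pweibull(t, shape, scale, lower.tail = FALSE)))
h <- f / S            # hazard function estimation "emerges" from f and S

plot(t_grid, h, type = "l", xlab = "t", ylab = "hazard")
```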

4.
Given a sample from a finite population, we provide a nonparametric Bayesian prediction interval for a finite population mean when a standard normal assumption may be tenuous. We do so using a Dirichlet process (DP), a nonparametric Bayesian procedure that is currently receiving much attention. An asymptotic Bayesian prediction interval is well known, but it does not incorporate all the features of the DP. We show how to compute the exact prediction interval under the full Bayesian DP model. However, under the DP, when the population size is much larger than the sample size, the computational task becomes expensive. Therefore, for simplicity, one might still want useful and accurate approximations to the prediction interval. For this purpose, we provide a Bayesian procedure that approximates the distribution using the exchangeability property (correlation) of the DP together with normality. We compare the exact interval and our approximate interval with three standard intervals, namely the design-based interval under simple random sampling, an empirical Bayes interval and a moment-based interval that uses the mean and variance under the DP; these latter three intervals do not fully utilize the posterior distribution of the finite population mean under the DP. Using several numerical examples and a simulation study, we show that our approximate Bayesian interval is a good competitor to the exact Bayesian interval for different combinations of sample sizes and population sizes.
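A minimal sketch of the exact-interval idea: complete the unobserved part of the population with Blackwell-MacQueen (Polya urn) predictive draws from the posterior DP and read off quantiles of the simulated population mean. The DP precision and the normal base measure are illustrative assumptions; this also shows why the computation grows with the population size N.

```r
## Minimal sketch: posterior of a finite population mean by completing the
## population with DP (Polya urn) predictive draws.
set.seed(3)
y     <- rnorm(30, mean = 10, sd = 2)   # observed sample (n = 30)
N     <- 300                            # population size (assumed)
alpha <- 1                              # DP precision (assumed)

sim_pop_mean <- function(y, N, alpha) {
  pool <- y
  for (i in (length(y) + 1):N) {
    # new unit: fresh draw from G0 with prob alpha/(alpha + i - 1),
    # otherwise a uniform pick from the units generated so far
    if (runif(1) < alpha / (alpha + i - 1)) {
      pool <- c(pool, rnorm(1, mean = 10, sd = 2))  # base measure G0 (assumed)
    } else {
      pool <- c(pool, sample(pool, 1))
    }
  }
  mean(pool)
}

draws <- replicate(2000, sim_pop_mean(y, N, alpha))
quantile(draws, c(0.025, 0.975))        # approximate 95% prediction interval
```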

5.
Meta-analysis in the presence of unexplained heterogeneity is frequently undertaken using a random-effects model, in which the effects underlying different studies are assumed to be drawn from a normal distribution. Here we discuss the justification and interpretation of such models by addressing in turn the aims of estimation, prediction and hypothesis testing. A particular issue we consider is the distinction between inference on the mean of the random-effects distribution and inference on the whole distribution. We suggest that random-effects meta-analyses as currently conducted often fail to provide the key results, and we investigate the extent to which distribution-free, classical and Bayesian approaches can provide satisfactory methods. We conclude that the Bayesian approach has the advantage of naturally allowing for full uncertainty, especially for prediction; however, it is not without problems, including computational intensity and sensitivity to a priori judgements. We propose a simple prediction interval for classical meta-analysis and offer extensions to standard practice of Bayesian meta-analysis, making use of an example of studies of 'set shifting' ability in people with eating disorders.
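A minimal sketch of a simple prediction interval of the kind proposed for classical meta-analysis, mu_hat +/- t_{k-2} * sqrt(tau^2 + se(mu_hat)^2), with tau^2 estimated by DerSimonian-Laird; the toy effect sizes and variances are illustrative.

```r
## Minimal sketch of a classical random-effects prediction interval for the
## effect in a new study (toy data, not from the eating-disorders example).
yi <- c(0.30, 0.10, 0.45, 0.25, 0.60, 0.05)   # study effect estimates
vi <- c(0.04, 0.03, 0.05, 0.02, 0.06, 0.03)   # within-study variances
k  <- length(yi)

## DerSimonian-Laird estimate of between-study variance tau^2
wi   <- 1 / vi
Q    <- sum(wi * (yi - sum(wi * yi) / sum(wi))^2)
tau2 <- max(0, (Q - (k - 1)) / (sum(wi) - sum(wi^2) / sum(wi)))

## Random-effects pooled mean and its standard error
wr     <- 1 / (vi + tau2)
mu_hat <- sum(wr * yi) / sum(wr)
se_mu  <- sqrt(1 / sum(wr))

## Prediction interval for a new study's effect (t with k - 2 df)
half <- qt(0.975, df = k - 2) * sqrt(tau2 + se_mu^2)
c(lower = mu_hat - half, upper = mu_hat + half)
```

Note how the interval combines between-study variance tau^2 with the uncertainty in mu_hat, so it is wider than the usual confidence interval for the mean effect.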

6.
Simulation-based designs for accelerated life tests
In this paper we present a Bayesian decision-theoretic approach to the design of accelerated life tests (ALTs). We discuss computational issues regarding the evaluation of the expectation and optimization steps in the solution of the decision problem. We illustrate how Monte Carlo methods can be used in preposterior analysis to find optimal designs and how the required computational effort can be avoided by using curve-fitting techniques. In so doing, we adopt the Monte-Carlo-based approaches of Müller and Parmigiani (1995, J. Amer. Statist. Assoc. 90, 503–510) and Müller (2000, Bayesian Statistics 6, forthcoming) to develop optimal Bayesian designs. These approaches facilitate the preposterior analysis by replacing it with a sequence of scatter-plot smoothing/regression steps and optimization of the corresponding fitted surfaces. We present our development by considering single- and multiple-point fixed designs, as well as sequential design problems, when the underlying life model is exponential, and illustrate the implementation of our approach with some examples.
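A minimal sketch of the Monte-Carlo-plus-curve-fitting idea for a one-point design: simulate the utility at randomly sampled stress levels, smooth the scatter with loess, and optimize the fitted curve. The utility function and the stress-to-failure-rate link are illustrative assumptions, not the paper's model.

```r
## Minimal sketch of simulation-based design a la Muller-Parmigiani:
## noisy utility evaluations -> scatter-plot smoothing -> optimize the fit.
set.seed(5)
n_sim  <- 500
stress <- runif(n_sim, 1, 3)                    # candidate design points

sim_utility <- function(s) {
  lambda <- exp(-2 + 1.2 * s)                   # assumed log-linear link
  y      <- rexp(20, rate = lambda)             # simulated exponential lifetimes
  # toy utility: precision of the estimate minus a cost of testing at high stress
  1 / var(y) - 0.5 * s
}

u   <- vapply(stress, sim_utility, numeric(1))
fit <- loess(u ~ stress)                        # curve-fitting step

grid  <- seq(1, 3, length.out = 200)
u_hat <- predict(fit, newdata = data.frame(stress = grid))
grid[which.max(u_hat)]                          # approximate optimal stress level
```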

7.
Basket trials evaluate a single drug targeting a single genetic variant in multiple cancer cohorts. Empirical findings suggest that treatment efficacy across baskets may be heterogeneous. Most modern basket trial designs use Bayesian methods, which require the prior specification of at least one parameter that permits information sharing across baskets. In this study, we provide recommendations for selecting a prior for the scale parameters of adaptive basket trials that use Bayesian hierarchical modeling. Heterogeneity among baskets attracts much attention in basket trial research, and substantial heterogeneity challenges the basic assumption of exchangeability underlying the Bayesian hierarchical approach. We therefore also allow each stratum-specific parameter to be exchangeable or nonexchangeable with similar strata, using data observed at an interim analysis. Through a simulation study, we evaluate the overall performance of our design in terms of statistical power and type I error rates. Our research contributes to the understanding of the properties of Bayesian basket trial designs.
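A minimal sketch of the normal-normal hierarchical model underlying such designs, fitted by Gibbs sampling on approximate basket log-odds. The data, the inverse-gamma prior on the scale parameter and the flat prior on the mean are illustrative assumptions, and no interim exchangeability assessment is attempted here.

```r
## Minimal sketch of a Bayesian hierarchical model for basket response rates:
## basket log-odds shrunk toward a common mean via a normal-normal Gibbs sampler.
set.seed(6)
x <- c(4, 7, 1, 6)            # responders per basket (toy data)
n <- c(20, 22, 19, 21)        # patients per basket
theta_hat <- qlogis((x + 0.5) / (n + 1))          # empirical log-odds
v         <- 1 / (x + 0.5) + 1 / (n - x + 0.5)    # approximate sampling variances
J <- length(x)

n_iter <- 5000
mu <- 0; tau2 <- 1
theta <- theta_hat
keep <- matrix(NA_real_, n_iter, J)
for (it in 1:n_iter) {
  # basket-specific effects: precision-weighted shrinkage toward mu
  prec  <- 1 / v + 1 / tau2
  theta <- rnorm(J, (theta_hat / v + mu / tau2) / prec, sqrt(1 / prec))
  # common mean (flat prior)
  mu <- rnorm(1, mean(theta), sqrt(tau2 / J))
  # between-basket variance; IG(1, 1) prior on the scale parameter (assumed)
  tau2 <- 1 / rgamma(1, 1 + J / 2, 1 + sum((theta - mu)^2) / 2)
  keep[it, ] <- theta
}
plogis(colMeans(keep[-(1:1000), ]))   # inverse-logit of posterior mean log-odds
```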

8.
This work presents advanced computational aspects of a new method for changepoint detection on spatio-temporal point process data. We summarize the methodology, based on building a Bayesian hierarchical model for the data and declaring prior conjectures on the number and positions of the changepoints, and show how to take decisions regarding the acceptance of potential changepoints. The focus of this work is on choosing an approach that detects the correct changepoint and delivers smooth, reliable estimates in a feasible computational time; we propose Bayesian P-splines as a suitable tool for managing spatial variation, from both a computational and a model-fitting performance perspective. The main computational challenges are outlined, and a solution involving parallel computing in R is proposed and tested in a simulation study. An application is also presented on a dataset of seismic events in Italy over the last 20 years.

9.
We propose a Bayesian nonparametric instrumental variable approach under additive separability that allows us to correct for endogeneity bias in regression models where the covariate effects enter with unknown functional form. Bias correction relies on a simultaneous-equations specification with flexible modeling of the joint error distribution, implemented via a Dirichlet process mixture prior. Both the structural and the instrumental variable equation are specified in terms of additive predictors comprising penalized splines for nonlinear effects of continuous covariates. Inference is fully Bayesian, employing efficient Markov chain Monte Carlo simulation techniques. The resulting posterior samples not only provide point estimates but also allow us to construct simultaneous credible bands for the nonparametric effects, including data-driven smoothing parameter selection. In addition, improved robustness properties are achieved owing to the flexible error distribution specification. Both these features are challenging in the classical framework, making the Bayesian one advantageous. In simulations, we investigate small-sample properties, and an investigation of the effect of class size on student performance in Israel illustrates the proposed approach, which is implemented in the R package bayesIV. Supplementary materials for this article are available online.

10.
A Bayesian approach based on Markov chain Monte Carlo techniques is proposed for the non-homogeneous gamma process with power-law shape function. Vague and informative priors, formalized on quantities having a "physical" meaning, are provided. Point and interval estimation of the process parameters and some functions thereof is developed, and prediction of observable quantities useful in defining the maintenance strategy is proposed. Some useful approximations to the conditional and unconditional mean and median of the residual life are derived to reduce computational time. Finally, the proposed approach is applied to a real dataset.
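A minimal sketch of the process itself and a brute-force Monte Carlo residual-life summary (the quantity the paper approximates analytically): a gamma process with power-law shape a*t^b has independent gamma increments with shape a*(t^b - s^b). The parameter values and failure threshold are illustrative assumptions.

```r
## Minimal sketch: simulate a non-homogeneous gamma process with shape
## function a*t^b and approximate the residual life to a threshold.
set.seed(8)
a <- 0.8; b <- 1.2; rate <- 2     # process parameters (assumed)
threshold <- 5                    # failure level (assumed)
t_obs <- 3; w_obs <- 2.1          # current age and observed degradation

residual_life <- function(a, b, rate, w0, t0, thr, dt = 0.05, t_max = 50) {
  t <- t0; w <- w0
  while (w < thr && t < t_max) {
    t <- t + dt
    # increment over (t - dt, t] has shape a * (t^b - (t - dt)^b)
    w <- w + rgamma(1, shape = a * (t^b - (t - dt)^b), rate = rate)
  }
  t - t0
}

rl <- replicate(2000, residual_life(a, b, rate, w_obs, t_obs, threshold))
c(mean = mean(rl), median = median(rl))   # Monte Carlo residual-life summaries
```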

11.
We consider simulation-based methods for the design of multi-stress-factor accelerated life tests (ALTs) in a Bayesian decision-theoretic framework. Multi-stress-factor ALTs are challenging because of the increased number of simulation runs required by the stress factor-level combinations. We propose the use of Latin hypercube sampling to reduce the simulation cost without loss of statistical efficiency. Exploration and optimization of the expected utility function are carried out by an algorithm that utilizes Markov chain Monte Carlo methods and nonparametric smoothing techniques. A comparison of the proposed approach with a full-grid simulation illustrates the reduction in computational cost.
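A minimal sketch of plain Latin hypercube sampling over two stress factors in base R (no lhs package); the factor names and ranges are illustrative assumptions.

```r
## Minimal sketch of Latin hypercube sampling: each factor's range is split
## into n equal strata and each stratum is sampled exactly once.
set.seed(9)
lhs_design <- function(n, ranges) {
  # one column per factor; each column is a permuted stratified sample
  sapply(ranges, function(r) {
    u <- (sample(n) - runif(n)) / n          # one point per 1/n stratum
    r[1] + u * (r[2] - r[1])
  })
}

design <- lhs_design(20, list(temp = c(40, 80), volt = c(10, 30)))
head(design)   # 20 runs covering both factor ranges evenly
```

The random pairing of strata across columns is what distinguishes this from a full grid: 20 runs cover both ranges evenly instead of the 400 runs a 20-by-20 grid would need.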

12.
There is an increasing amount of literature focused on Bayesian computational methods that address problems with intractable likelihoods. One approach is the set of algorithms known as Approximate Bayesian Computation (ABC) methods. A problem with these algorithms is that their performance depends on the appropriate choice of summary statistics, a distance measure and a tolerance level. To circumvent this problem, an alternative method based on the empirical likelihood has been introduced. This method can be easily implemented when a set of constraints, related to the moments of the distribution, is specified. However, the choice of the constraints is sometimes challenging. To overcome this difficulty, we propose an alternative method based on a bootstrap likelihood approach. The method is easy to implement and in some cases is actually faster than the other approaches considered. We illustrate the performance of our algorithm with examples from population genetics, time series and stochastic differential equations. We also test the method on a real dataset.
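A minimal sketch of the baseline ABC rejection sampler that the paper seeks to improve on, making the three tuning choices (summary statistic, distance, tolerance) explicit; the model and values are illustrative.

```r
## Minimal sketch of ABC rejection for the mean of a normal model.
set.seed(10)
y_obs <- rnorm(50, mean = 2, sd = 1)        # "observed" data
s_obs <- mean(y_obs)                        # choice 1: summary statistic

n_prop <- 100000
theta  <- runif(n_prop, -5, 5)              # prior draws (assumed prior)
s_sim  <- vapply(theta,
                 function(th) mean(rnorm(50, mean = th, sd = 1)),
                 numeric(1))

eps    <- 0.05                              # choice 3: tolerance level
accept <- abs(s_sim - s_obs) < eps          # choice 2: distance measure
c(n_accepted = sum(accept), post_mean = mean(theta[accept]))
```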

13.
Existing Bayesian model selection procedures require the specification of prior distributions on the parameters appearing in every model in the selection set. In practice, this requirement limits the application of Bayesian model selection methodology. To overcome this limitation, we propose a new approach to Bayesian model selection that uses classical test statistics to compute Bayes factors between possible models. In several test cases, our approach produces results similar to previously proposed Bayesian model selection and model averaging techniques in which prior distributions were carefully chosen. In addition to eliminating the requirement to specify complicated prior distributions, this method offers important computational and algorithmic advantages over existing simulation-based methods. Because it is easy to evaluate the operating characteristics of this procedure for a given sample size and specified number of covariates, our method facilitates the selection of hyperparameter values through prior-predictive simulation.
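In the same spirit, though not the paper's exact construction, a likelihood-ratio statistic can be turned into an approximate Bayes factor via the BIC difference, BF10 = exp((BIC0 - BIC1)/2) = exp((LR - d*log(n))/2); the numbers below are illustrative.

```r
## Minimal sketch of a BIC-based approximate Bayes factor from a classical
## likelihood-ratio statistic (generic approximation, not the paper's method).
bf_from_lrt <- function(lr_stat, d_extra, n) exp((lr_stat - d_extra * log(n)) / 2)

## Example: LR statistic 12.3 with 2 extra parameters, n = 100 observations
bf_from_lrt(12.3, d_extra = 2, n = 100)   # evidence for the larger model
```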

14.
Interval-censored survival data arise often in medical applications and clinical trials [Wang L, Sun J, Tong X. Regression analysis of case II interval-censored failure time data with the additive hazards model. Statistica Sinica. 2010;20:1709–1723]. However, most existing interval-censored survival analysis techniques suffer from challenges such as heavy computational cost or non-proportionality of hazard rates due to complicated data structure [Wang L, Lin X. A Bayesian approach for analyzing case 2 interval-censored data under the semiparametric proportional odds model. Statistics & Probability Letters. 2011;81:876–883; Banerjee T, Chen M-H, Dey DK, et al. Bayesian analysis of generalized odds-rate hazards models for survival data. Lifetime Data Analysis. 2007;13:241–260]. To address these challenges, in this paper we introduce a flexible Bayesian non-parametric procedure for the estimation of the odds under interval censoring, case II. We use Bernstein polynomials to introduce a prior for modeling the odds and propose a novel, easy-to-implement sampling scheme based on Markov chain Monte Carlo algorithms to study the posterior distributions. We also give general results on asymptotic properties of the posterior distributions. The simulated examples show that the proposed approach is quite satisfactory in the cases considered. The use of the proposed method is further illustrated by analyzing the hemophilia study data [McMahan CS, Wang L. A package for semiparametric regression analysis of interval-censored data; 2015. http://CRAN.R-project.org/package=ICsurv].
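A minimal sketch of the building block involved: a Bernstein polynomial with increasing coefficients gives a monotone curve on [0, 1], which is how monotonicity of an odds-type function can be enforced. The degree and coefficients below are illustrative assumptions, not the paper's prior.

```r
## Minimal sketch: Bernstein polynomial basis via dbinom, with nonnegative
## coefficient increments guaranteeing a monotone curve.
bernstein <- function(t, coefs) {
  m <- length(coefs) - 1
  basis <- sapply(0:m, function(k) dbinom(k, m, t))  # C(m,k) t^k (1-t)^(m-k)
  drop(basis %*% coefs)
}

coefs <- cumsum(c(0.1, 0.2, 0.05, 0.4, 0.3))  # increasing coefficients (assumed)
t <- seq(0, 1, length.out = 101)
plot(t, bernstein(t, coefs), type = "l", ylab = "monotone odds-type curve (sketch)")
```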

15.
We develop strategies for Bayesian modelling as well as model comparison, averaging and selection for compartmental models with particular emphasis on those that occur in the analysis of positron emission tomography (PET) data. Both modelling and computational issues are considered. Biophysically inspired informative priors are developed for the problem at hand, and by comparison with default vague priors it is shown that the proposed modelling is not overly sensitive to prior specification. It is also shown that an additive normal error structure does not describe measured PET data well, despite being very widely used, and that within a simple Bayesian framework simultaneous parameter estimation and model comparison can be performed with a more general noise model. The proposed approach is compared with standard techniques using both simulated and real data. In addition to good, robust estimation performance, the proposed technique provides, automatically, a characterisation of the uncertainty in the resulting estimates which can be considerable in applications such as PET.

16.
Bayesian graphical modelling: a case-study in monitoring health outcomes
Bayesian graphical modelling represents the synthesis of several recent developments in applied complex modelling. After describing a moderately challenging real example, we show how graphical models and Markov chain Monte Carlo methods naturally provide a direct path between model specification and the computational means of making inferences on that model. These ideas are illustrated with a range of modelling issues related to our example. An appendix discusses the BUGS software.

17.
We show how mutually utility-independent hierarchies provide a flexible and intuitive methodology for experimental design that remains tractable even for complex multivariate problems; these hierarchies weigh the various costs of an experiment against benefits expressed through a mixed Bayes linear utility that represents the potential gains in knowledge from the experiment. A key feature of the approach is that we allow imprecision in the trade-offs between the various costs and benefits. We identify the Pareto optimal designs under the imprecise specification and suggest a criterion for selecting between such designs. The approach is illustrated with respect to an experiment related to the oral glucose tolerance test.

18.
We propose a Bayesian approach for inference in a dynamic disequilibrium model. To circumvent the difficulties raised by the Maddala and Nelson (1974) specification in the dynamic case, we analyze a dynamic extended version of the disequilibrium model of Ginsburgh et al. (1980). We develop a Gibbs sampler based on the simulation of the missing observations. The feasibility of the approach is illustrated by an empirical analysis of the Polish credit market, for which we conduct a specification search using the posterior deviance criterion of Spiegelhalter et al. (2002).

19.
Interval-censored data arise when a failure time, say T, cannot be observed directly but can only be determined to lie in an interval obtained from a series of inspection times. The frequentist approach for analysing interval-censored data has been developed for some time now. Owing to the unavailability of software, it is very common in biological, medical and reliability studies to simplify the interval-censoring structure of the data into that of the more standard right-censoring situation by imputing the midpoints of the censoring intervals. In this paper, we apply the Bayesian approach by employing Lindley's (1980) and Tierney and Kadane's (1986) numerical approximation procedures when the survival data under consideration are interval-censored. The Bayesian approach to interval-censored data has barely been discussed in the literature, and the essence of this study is to explore and promote Bayesian methods when the survival data being analysed are interval-censored. We consider only a parametric approach, assuming that the survival data follow a log-logistic distribution. We illustrate the proposed methods with two real data sets, and a simulation study is carried out to compare the performances of the methods.
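A minimal sketch of a Tierney-Kadane posterior-mean approximation for the scale parameter of a log-logistic model, S(t) = 1/(1 + (t/alpha)^beta), with interval-censored likelihood contributions S(L) - S(R); the toy intervals and the vague normal priors on the log-parameters are illustrative assumptions.

```r
## Minimal sketch of the Tierney-Kadane approximation
## E[g | data] ~ sqrt(det H0 / det H1) * exp(h0_min - h1_min),
## where h0 = -log posterior and h1 = -(log posterior + log g).
set.seed(12)
L <- c(1, 2, 0.5, 3, 1.5, 2.5)     # left inspection times (toy data)
R <- c(2, 4, 2.0, 6, 3.0, 5.0)     # right inspection times

S <- function(t, alpha, beta) 1 / (1 + (t / alpha)^beta)

## log posterior (up to a constant), parameterised as (log alpha, log beta)
log_post <- function(p) {
  alpha <- exp(p[1]); beta <- exp(p[2])
  sum(log(S(L, alpha, beta) - S(R, alpha, beta))) +
    dnorm(p[1], 0, 10, log = TRUE) + dnorm(p[2], 0, 10, log = TRUE)
}

tk_mean <- function(g) {
  h0 <- optim(c(0, 0), function(p) -log_post(p), hessian = TRUE)
  h1 <- optim(c(0, 0), function(p) -(log_post(p) + log(g(p))), hessian = TRUE)
  sqrt(det(h0$hessian) / det(h1$hessian)) * exp(h0$value - h1$value)
}

tk_mean(function(p) exp(p[1]))     # approximate posterior mean of alpha
```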

20.
Müller et al. (Stat Methods Appl, 2017) provide an excellent review of several classes of Bayesian nonparametric models that have found widespread application in a variety of contexts, successfully highlighting their flexibility in comparison with parametric families. Particular attention in the paper is dedicated to modelling spatial dependence. Here we contribute by concisely discussing general computational challenges that arise in posterior inference with Bayesian nonparametric models, and certain aspects of modelling temporal dependence.
