Similar articles
20 similar articles found.
1.
In this paper, we propose a sampling policy that accounts for Bayesian risks. Various definitions of producer's and consumer's risk have been proposed; Bayesian risks for both producer and consumer are shown to give decision-makers better information than the classical definitions. Subject to Bayesian risk constraints, we therefore seek the optimal acceptance sampling policy by minimizing total cost, comprising the cost of rejecting the batch, the cost of inspection, and the cost of defective items detected during operation. Appropriate distributions for constructing the model's objective function are specified. A numerical example illustrates the application of the proposed model. Furthermore, a sensitivity analysis shows that lot size, the cost of inspection, and the cost of a single defective item are the key factors in the sampling policy. The acceptable quality level, the lot tolerance proportion defective, and the Bayesian risks also affect the sampling policy, but once the acceptable quality level and the producer's Bayesian risk exceed a specified value, further variation in them leaves the sampling policy unchanged.
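To make the kind of optimization described here concrete, the sketch below searches over single-sampling plans (n, c) under a beta prior on the lot defect rate, minimizing an expected cost subject to Bayesian producer/consumer risk caps. It is only an illustration of the general idea, not the paper's model: the prior, lot size, quality limits, cost values and risk caps are all assumed.

```python
# Minimal sketch (not the authors' model): choose a single-sampling plan (n, c)
# that minimizes expected cost subject to Bayesian producer/consumer risk limits.
import numpy as np
from scipy import stats

N = 1000                        # lot size (assumed)
AQL, LTPD = 0.01, 0.05          # quality limits (assumed)
beta_prior = stats.beta(2, 98)  # prior on the lot defect rate (assumed)
c_insp, c_reject, c_defect = 1.0, 500.0, 50.0  # unit costs (assumed)
max_prod_risk = max_cons_risk = 0.10

# Discretize the prior on the defect rate p.
p = np.linspace(1e-4, 0.2, 400)
w = beta_prior.pdf(p); w /= w.sum()

best = None
for n in range(20, 301, 10):
    for c in range(0, 11):
        p_acc = stats.binom.cdf(c, n, p)          # P(accept | p) on the grid
        pr_acc = np.sum(w * p_acc)                # marginal P(accept)
        # Bayesian risks: P(good lot | reject) and P(bad lot | accept)
        prod_risk = np.sum(w * (1 - p_acc) * (p <= AQL)) / max(1 - pr_acc, 1e-12)
        cons_risk = np.sum(w * p_acc * (p > LTPD)) / max(pr_acc, 1e-12)
        if prod_risk > max_prod_risk or cons_risk > max_cons_risk:
            continue
        # Expected cost: inspection + rejected lots + defectives passed to operation
        cost = (c_insp * n
                + c_reject * (1 - pr_acc)
                + c_defect * (N - n) * np.sum(w * p_acc * p))
        if best is None or cost < best[0]:
            best = (cost, n, c)

print("optimal (n, c):", best[1:], "expected cost:", round(best[0], 2))
```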

2.
We investigate ordinary least-squares and Bayesian methods for constructing interval estimates of historical lake pH values inferred from diatom sediments. The Bayesian method explicitly models several sources of variability, including the sampling and classification variability of the diatom records, estimation variability, and measurement error in the observed pH values. The two methods produce similar interval estimates, but the Bayesian model also allows design recommendations to be made.

3.
In this work we study robustness in Bayesian models through a generalization of the normal distribution. We present new techniques for handling this distribution in Bayesian inference and then propose two approaches for deciding, in a given application, whether the usual normal model should be replaced by this generalization. First, we pose the dilemma as a model rejection problem using diagnostic measures; second, we evaluate the model's predictive efficiency. We illustrate both perspectives with a simulation study, a nonlinear model, and a longitudinal data model.
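The abstract does not say which generalization of the normal distribution is used. One family often used in Bayesian robustness studies, shown here only as a plausible example, is the exponential power (generalized normal) distribution, which nests both the normal and the Laplace models:

\[
f(y \mid \mu, \sigma, \beta) \;=\; \frac{\beta}{2\sigma\,\Gamma(1/\beta)}
  \exp\!\left\{-\left(\frac{|y-\mu|}{\sigma}\right)^{\beta}\right\},
\qquad \beta = 2 \text{ (normal)}, \quad \beta = 1 \text{ (Laplace)}.
\]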

4.
The Bayesian analysis based on the partial likelihood for Cox's proportional hazards model is frequently used because of its simplicity. The Bayesian partial likelihood approach is often justified by showing that it approximates the full Bayesian posterior of the regression coefficients under a diffuse prior on the baseline hazard function. This justification, however, may break down when ties exist among the uncensored observations: in that case the full Bayesian posterior and the Bayesian partial likelihood posterior can differ substantially. In this paper, we propose a new Bayesian partial likelihood approach for data with many tied observations and justify its use.
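For reference, the partial likelihood that is treated as a posterior kernel in this kind of analysis is the standard Cox form for distinct event times (generic notation, not taken from the paper):

\[
L(\beta) \;=\; \prod_{i:\,\delta_i = 1} \frac{\exp(x_i'\beta)}{\sum_{j \in R(t_i)} \exp(x_j'\beta)},
\]

where R(t_i) is the risk set at event time t_i and \delta_i indicates an uncensored observation. With tied event times this product has to be adjusted (for example by Breslow- or Efron-type approximations), and the abstract's point is that in that situation the partial-likelihood posterior may no longer approximate the full posterior well.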

5.
We develop a Bayesian statistical model for estimating bowhead whale population size from photo-identification data when most of the population is uncatchable. The proposed conditional likelihood is the product of Darroch's model, formulated as a function of the number of good photographs, and a binomial distribution for the number of captured whales given the total number of good photographs on each occasion. The full Bayesian model is implemented via adaptive rejection sampling for log-concave densities. We apply the model to data from the 1985 and 1986 bowhead whale photographic studies, and the results compare favorably with those reported in the literature. We also compare the approach with a maximum likelihood procedure using bootstrap simulation, under several vague priors for the capture probabilities.

6.
The Box–Jenkins methodology for modeling and forecasting univariate time series has long been considered a standard against which other forecasting techniques are compared. To a Bayesian statistician, however, the method lacks an important facet: a provision for modeling uncertainty about parameter estimates. We present a technique called sampling the future for including this feature in both the estimation and forecasting stages. Although it is relatively easy to use Bayesian methods to estimate the parameters of an autoregressive integrated moving average (ARIMA) model, producing forecasts from such a model is difficult because the multiperiod predictive density has no convenient closed form, so approximations are needed. In this article, exact Bayesian forecasting is approximated by simulating the joint predictive distribution. First, parameter sets are randomly generated from the joint posterior distribution; these are then used to simulate future paths of the time series. This bundle of many possible realizations is used to project the future in several ways: highest-probability forecast regions are formed and portrayed with computer graphics, and the shape of the predictive density is explored. Finally, we discuss a method that allows the analyst to subjectively modify the posterior distribution on the parameters and produce alternative forecasts.
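The two-step idea (draw parameter sets from the posterior, then simulate future paths from each draw) can be sketched for a simple AR(1) model. This is not the article's ARIMA implementation: the data, the crude posterior approximation, and the forecast horizon below are assumptions used only to show the mechanics.

```python
# Minimal sketch of "sampling the future" for an AR(1) model with Gaussian errors.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=200)          # placeholder observed series (assumed)
H, n_draws = 12, 2000             # forecast horizon and number of posterior draws

# Step 1: draw parameter sets (phi, sigma) from an approximate joint posterior.
# A crude normal/scaled-chi-square approximation around the least-squares fit is
# used here purely for illustration; any proper posterior sampler would do.
X, z = y[:-1], y[1:]
phi_hat = X @ z / (X @ X)
resid = z - phi_hat * X
sigma_hat = resid.std(ddof=1)
phi_draws = rng.normal(phi_hat, sigma_hat / np.sqrt(X @ X), n_draws)
sigma_draws = sigma_hat * np.sqrt(len(resid) / rng.chisquare(len(resid) - 1, n_draws))

# Step 2: for each draw, simulate one future path of length H.
paths = np.empty((n_draws, H))
for i in range(n_draws):
    phi, sigma = phi_draws[i], sigma_draws[i]
    last = y[-1]
    for h in range(H):
        last = phi * last + rng.normal(0.0, sigma)
        paths[i, h] = last

# Step 3: summarize the simulated predictive distribution, e.g. 90% forecast intervals.
lower, upper = np.percentile(paths, [5, 95], axis=0)
print(np.round(lower, 2), np.round(upper, 2), sep="\n")
```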

7.
We develop a hierarchical Bayesian approach to inference in random-coefficient dynamic panel data models. Our approach allows the initial values of each unit's process to be correlated with the unit-specific coefficients. We impose stationarity on each unit's process by assuming that the unit-specific autoregressive coefficient is drawn from a logit-normal distribution. In a Monte Carlo study, the method is shown to have favorable properties compared with the mean group estimator. We apply the approach to analyze energy and protein intakes among individuals from the Philippines.
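One way to read the logit-normal stationarity device, in generic notation that is assumed here rather than taken from the paper, is

\[
y_{it} = \alpha_i + \rho_i\, y_{i,t-1} + x_{it}'\beta_i + \varepsilon_{it},
\qquad
\rho_i = \frac{1}{1 + e^{-\eta_i}}, \quad \eta_i \sim N(\mu_\rho, \sigma_\rho^2),
\]

so that each unit-specific autoregressive coefficient satisfies \(\rho_i \in (0,1)\) with probability one and every unit's process is stationary by construction.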

8.
Repeated screening is a 100% sampling inspection of a batch of items, followed by removal of the defective items and further iterations of inspection and removal. The inspection is repeated because a defective item is detected only with probability p < 1; a missed defective item is a false negative. The case of no false positives is considered in this paper, which is motivated by a problem arising in the production of pharmaceutical pills. Bayesian posterior distributions for the quality of the lot are obtained both for p known and for p unknown, from which batch-rejection and batch-acceptance control limits for the number of defective items at subsequent iterations can be calculated. Theoretical connections are drawn to the problem of estimating the number-of-trials parameter of a binomial distribution.
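For the known-p case, the basic posterior computation can be sketched as follows: each screening round removes a binomial fraction of the defectives still in the lot, and Bayes' rule then gives the posterior for the initial number of defectives. The prior on D and the counts below are illustrative assumptions, not the paper's data.

```python
# Minimal sketch (known detection probability p): posterior for the initial number of
# defectives D in the lot, given the counts found at successive screening rounds.
import numpy as np
from scipy import stats

p = 0.8                      # probability a defective is detected in one pass (assumed)
found = [7, 2, 0]            # defectives removed at rounds 1, 2, 3 (assumed)
D_max = 60
prior = stats.poisson(10).pmf(np.arange(D_max + 1))   # assumed prior on D

def round_likelihood(D):
    """P(found | D, p): each round is binomial on the defectives still in the lot."""
    remaining, like = D, 1.0
    for d in found:
        if d > remaining:
            return 0.0
        like *= stats.binom.pmf(d, remaining, p)
        remaining -= d
    return like

like = np.array([round_likelihood(D) for D in range(D_max + 1)])
post = prior * like
post /= post.sum()

# Posterior probability that defectives remain after the observed rounds
removed = sum(found)
print("P(defectives remain) =", round(post[np.arange(D_max + 1) > removed].sum(), 4))
```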

9.
Simon's two-stage design is the most commonly applied multi-stage design in phase IIA clinical trials. It combines the sample sizes at the two stages so as to minimize either the expected or the maximum sample size. When there is high uncertainty about pre-trial beliefs on the expected or desired response rate, a Bayesian alternative should be considered, since it deals with the entire distribution of the parameter of interest in a more natural way. In this setting, a crucial issue is how to construct, from the available summaries, a distribution to use as the clinical prior in a Bayesian design. In this work, we explore Bayesian counterparts of Simon's two-stage design based on the predictive version of the single threshold design. This design requires two prior distributions: the analysis prior, used to compute the posterior probabilities, and the design prior, used to obtain the prior predictive distribution. Whereas the usual approach is to build beta priors for a conjugate analysis, we derive both the analysis and the design distributions through linear combinations of B-splines. The motivating example is the planning of a phase IIA two-stage trial of an anti-HER2 DNA vaccine in breast cancer, where the initial beliefs, formed from elicited experts' opinions and historical data, showed a high level of uncertainty. The impact of different priors on the resulting sample size determination is evaluated.
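To illustrate the analysis-prior / design-prior split in the simplest conjugate setting, the sketch below sizes a single-stage, single-threshold design with beta priors: the analysis prior drives the posterior decision rule, and the design prior drives the prior predictive probability of meeting it. The paper instead builds both priors from linear combinations of B-splines, which is not shown here; all numerical choices are assumptions.

```python
# Minimal sketch of a predictive single-threshold design with conjugate beta priors.
import numpy as np
from scipy import stats

p0 = 0.20                 # uninteresting response rate (assumed)
a_an, b_an = 1, 1         # analysis prior Beta(1, 1) (assumed)
a_de, b_de = 8, 12        # design prior Beta(8, 12) reflecting elicited beliefs (assumed)
lam, gamma = 0.90, 0.80   # posterior threshold and target predictive probability (assumed)

for n in range(10, 81):
    r = np.arange(n + 1)
    # posterior P(p > p0 | r responses) under the analysis prior
    post_tail = 1 - stats.beta.cdf(p0, a_an + r, b_an + n - r)
    success = post_tail >= lam                     # "declare promising" rule
    # prior predictive of r under the design prior: beta-binomial probabilities
    pred = stats.betabinom.pmf(r, n, a_de, b_de)
    if pred[success].sum() >= gamma:
        print("smallest n meeting the predictive criterion:", n)
        break
else:
    print("no n up to 80 meets the criterion with these assumptions")
```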

10.
This paper is the first to apply the Elastic Net, a penalization method designed for highly correlated variables, to Bayesian quantile regression for panel data. Based on the asymmetric Laplace prior distribution, the posterior distributions of all parameters are derived and a Gibbs sampler is constructed. To assess the validity of the model, the Bayesian Elastic Net quantile regression for panel data (BQR.EN) is compared comprehensively, under a variety of settings, with Bayesian quantile regression (BQR), Bayesian Lasso quantile regression (BLQR), and Bayesian adaptive Lasso quantile regression (BALQR) for panel data. The results show that the BQR.EN method is suitable for data that are highly correlated, high-dimensional, and heavy-tailed with sharp peaks. Further simulation comparisons of the BQR.EN method under different error-term assumptions and sample sizes verify the robustness and small-sample properties of the new method. Finally, the economic value added (EVA) of listed internet-finance companies is used as an empirical application to examine the parameter estimation and variable selection performance of the new method in a practical problem, and the empirical results are in line with expectations.
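In generic notation (assumed here, not quoted from the paper), the penalized problem that such a Bayesian elastic-net quantile regression mirrors combines the check loss at quantile level \(\tau\) with both an L1 and an L2 penalty:

\[
\rho_\tau(u) = u\{\tau - I(u < 0)\}, \qquad
\min_{\beta}\; \sum_{i,t} \rho_\tau\!\left(y_{it} - \alpha_i - x_{it}'\beta\right)
  + \lambda_1 \sum_j |\beta_j| + \lambda_2 \sum_j \beta_j^2 .
\]

The Bayesian version replaces the check loss by an asymmetric Laplace working likelihood and uses a coefficient prior proportional to \(\exp(-\lambda_1 \sum_j |\beta_j| - \lambda_2 \sum_j \beta_j^2)\), whose scale-mixture representation makes a Gibbs sampler available.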

11.
The purpose of this article is to present a new policy for designing an acceptance sampling plan based on the minimum proportion of the lot that should be inspected in the presence of inspection errors. Inspection is assumed to be imperfect, so a defective item cannot be detected with complete certainty. The Bayesian method is used to obtain the probability distribution of the number of defective items in the lot. In designing the model, producer-risk and consumer-risk constraints are imposed on the inspection process through two specified points on the operating characteristic curve. An example illustrates the application of the proposed model, and a sensitivity analysis examines the model's performance under different scenarios for the process parameters. Finally, the efficiency of the proposed model is compared, under the same conditions, with the sampling method of Spencer and Kevan de Lopez (2017).

12.
Conventional methods apply symmetric prior distributions, such as a normal or a Laplace distribution, to the regression coefficients; these may be suitable for median regression but offer no robustness to outliers. This work develops, from a Bayesian point of view, quantile regression for a linear panel data model without heterogeneity, based on a location-scale mixture representation of the asymmetric Laplace error distribution, and shows how the posterior distribution is summarized using Markov chain Monte Carlo methods. Applying this approach to the 1970 British Cohort Study (BCS) data, we find that different maternal health problems have different influences on the child's worrying status at different quantiles. In addition, applying stochastic search variable selection for the maternal health problems to the 1970 BCS data, we find that, among the 25 maternal health problems, maternal nervous breakdown contributes most to the child's worrying status.
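The location-scale mixture that typically underlies such samplers, stated here in generic notation with unit scale for simplicity, represents an asymmetric Laplace error at quantile level \(\tau\) as

\[
\varepsilon_i \;\overset{d}{=}\; \theta\, z_i + \psi \sqrt{z_i}\, u_i,
\qquad z_i \sim \operatorname{Exp}(1), \quad u_i \sim N(0,1),
\qquad \theta = \frac{1-2\tau}{\tau(1-\tau)}, \quad
\psi^2 = \frac{2}{\tau(1-\tau)},
\]

so that, conditional on the latent \(z_i\), the model is Gaussian and standard conjugate updates can be used within the Markov chain Monte Carlo scheme.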

13.
A general Bayesian random effects model is presented for analyzing longitudinal mixed correlated continuous and negative binomial responses, with and without missing data. Given the random effects, the model uses a normal distribution for the continuous response and a negative binomial distribution for the count response. A Markov chain Monte Carlo sampling algorithm is described for estimating the posterior distribution of the parameters, and the model is illustrated by a simulation study. For sensitivity analysis, to investigate how parameter estimates change under a perturbation from the missing-at-random to the not-missing-at-random assumption, the use of posterior curvature is proposed. The model is applied to medical data from an observational study on women, where the correlated responses are a negative binomial response (joint damage) and a continuous response (body mass index), and the simultaneous effects of several covariates on both responses are investigated.
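One plausible reading of a shared random-effects formulation of this kind, with notation and link choices that are assumptions rather than the paper's specification, is

\[
y_{1ij} \mid b_i \sim N\!\left(x_{ij}'\beta_1 + b_i,\; \sigma^2\right), \qquad
y_{2ij} \mid b_i \sim \operatorname{NegBin}\!\left(\mu_{ij}, \kappa\right), \quad
\log \mu_{ij} = x_{ij}'\beta_2 + \lambda\, b_i, \qquad
b_i \sim N(0, \sigma_b^2),
\]

where the loading \(\lambda\) induces the correlation between the continuous and the count response through the common random effect \(b_i\).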

14.
15.
In this paper, a multivariate Bayesian variable sampling interval (VSI) control chart is designed for economic design and optimization of its statistical parameters. Based on the VSI sampling strategy of a multivariate Bayesian control chart with dual control limits, an expected cost function is constructed, and the proposed model determines the scheme parameters that minimize the expected cost per unit time of the process. The effectiveness of the Bayesian VSI chart is assessed through economic comparisons with the Bayesian fixed-sampling-interval chart and Hotelling's T² chart. This is an in-depth study of a Bayesian multivariate control chart with variable parameters, and it shows that significant cost improvement may be realized through the new model.

16.
In this article, we develop a Bayesian analysis of an autoregressive model with explanatory variables. When σ² is known, we use a normal prior and give the Bayesian estimator of the model's regression coefficients. When σ² is unknown, another Bayesian estimator is given for all unknown parameters under a conjugate prior. Bayesian model selection is also considered under double-exponential priors. Using the convergence of a ρ-mixing sequence, the consistency and asymptotic normality of the Bayesian estimators of the regression coefficients are proved. Simulation results indicate that our Bayesian estimators do not depend strongly on the choice of priors and are robust.
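For the known-σ² case, the standard conjugate update that such an estimator typically rests on is, in generic notation assumed here rather than quoted from the article: with prior \(\beta \sim N(\mu_0, \Sigma_0)\) and likelihood \(y \mid \beta \sim N(X\beta, \sigma^2 I)\),

\[
\beta \mid y \;\sim\; N\!\left(\mu_n, \Sigma_n\right), \qquad
\Sigma_n = \left(\Sigma_0^{-1} + \sigma^{-2} X'X\right)^{-1}, \qquad
\mu_n = \Sigma_n\!\left(\Sigma_0^{-1}\mu_0 + \sigma^{-2} X'y\right),
\]

where, for an autoregressive model with explanatory variables, the design matrix \(X\) stacks both the lagged responses and the covariates.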

17.
Gene copy number (GCN) changes are common characteristics of many genetic diseases. Comparative genomic hybridization (CGH) is a technology now widely used to screen GCN changes in mutant cells at high resolution genome-wide, and statistical methods for analyzing such CGH data are still evolving. Existing methods are either frequentist or fully Bayesian: the former often has a computational advantage, while the latter can incorporate prior information into the model but can be misleading when sound prior information is unavailable. To take advantage of both approaches, we develop a Bayesian-frequentist hybrid approach in which a subset of the model parameters is inferred by the Bayesian method while the remaining parameters are inferred by frequentist methods. This hybrid approach offers advantages over either method used alone, especially when sound prior information is available on part of the parameters and the sample size is relatively small. Spatial dependence and the false discovery rate are also discussed, and parameter estimation is efficient. As an illustration, we use the proposed hybrid approach to analyze a real CGH data set.

18.
In recent years, many articles have been written on Bayesian model selection. In this article, a different and simpler method is proposed and analyzed. The key idea is the well-known property that, under the true model, the cumulative distribution function evaluated at the data is uniformly distributed over the interval (0, 1). The method is first introduced for the continuous case and then extended to the discrete case by smoothing the cumulative distribution function. Some asymptotic properties of the method are obtained by developing an alternative to Helly's theorems. Finally, the performance of the method is evaluated by simulation and shows good behavior.
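A minimal sketch of how this uniformity property can be exercised in practice is given below: compute the probability integral transform of the data under each candidate model and measure its distance from the uniform distribution. The candidate models and the Kolmogorov-Smirnov comparison here are assumptions for illustration, not the article's procedure.

```python
# Minimal sketch: probability-integral-transform (PIT) check of candidate models.
# Under the true model, F(x_i) should look uniform on (0, 1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.0, scale=3.0, size=300)   # data actually generated from a gamma model

candidates = {
    "exponential": stats.expon(scale=x.mean()),
    "gamma": stats.gamma(*stats.gamma.fit(x, floc=0)),
}
for name, model in candidates.items():
    u = model.cdf(x)                             # PIT values under the candidate model
    ks = stats.kstest(u, "uniform")              # distance from the uniform(0, 1) benchmark
    print(f"{name:12s} KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")
```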

19.
The theoretical foundation for a number of model selection criteria is established in the context of inhomogeneous point processes and under various asymptotic settings: infill, increasing domain, and combinations of these. For inhomogeneous Poisson processes we consider Akaike's information criterion and the Bayesian information criterion, and in particular we identify the point process analogue of 'sample size' needed for the Bayesian information criterion. For general inhomogeneous point processes we derive new composite likelihood and composite Bayesian information criteria for selecting a regression model for the intensity function. The proposed model selection criteria are evaluated using simulations of Poisson processes and cluster point processes.
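For orientation, the Poisson log-likelihood for an intensity model \(\rho_\theta\) observed on a window \(W\), and the generic form of the criterion in question, are

\[
\ell(\theta) \;=\; \sum_{x \in X \cap W} \log \rho_\theta(x) \;-\; \int_W \rho_\theta(u)\, \mathrm{d}u,
\qquad
\mathrm{BIC} \;=\; -2\,\ell(\hat\theta) + p \log m,
\]

where \(p\) is the number of regression parameters and \(m\) is whatever plays the role of sample size. One natural candidate is the observed number of points \(n(X \cap W)\), but the abstract does not state which choice the authors identify and justify.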

20.
A Gaussian process (GP) can be thought of as an infinite collection of random variables with the property that any subset, say of dimension n, of these variables has an n-dimensional multivariate normal distribution with mean vector β and covariance matrix Σ [O'Hagan, A., 1994, Kendall's Advanced Theory of Statistics, Vol. 2B, Bayesian Inference (John Wiley & Sons, Inc.)]. The elements of the covariance matrix are routinely specified as the product of a common variance and a correlation function, and it is important to use a correlation function that yields a valid (positive definite) covariance matrix. It is also well known that the smoothness of a GP is directly related to the choice of its correlation function, and, from a Bayesian point of view, a prior distribution must be assigned to the unknowns of the model. Therefore, when using a GP to model a phenomenon, the researcher faces two challenges: specifying a correlation function and specifying a prior distribution for its parameters. The literature offers many classes of correlation functions that provide a valid covariance structure, as well as many suggested prior distributions for the parameters involved in these functions. We aim to investigate how sensitive GPs are to the (sometimes arbitrary) choice of correlation function. To this end, we simulated 25 data sets, each of size 64, over the square [0, 5] × [0, 5] with a specific correlation function and fixed values of the GP's parameters. We then fit different correlation structures to these data, with different prior specifications, and check the performance of the adjusted models using different model comparison criteria.
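A minimal sketch of the simulation step described in the abstract is given below: one realization of size 64 on a grid over [0, 5] × [0, 5], generated from a covariance built as a common variance times a correlation function. The squared-exponential correlation, range parameter and variance are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: simulate a 64-point GP realization on an 8 x 8 grid over [0, 5] x [0, 5].
import numpy as np

rng = np.random.default_rng(42)
g = np.linspace(0.0, 5.0, 8)
sites = np.array([(x, y) for x in g for y in g])           # 64 locations
dist = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)

sigma2, phi, mu = 1.0, 1.5, 0.0                            # variance, range, mean (assumed)
corr = np.exp(-(dist / phi) ** 2)                          # squared-exponential correlation
cov = sigma2 * corr + 1e-10 * np.eye(len(sites))           # jitter keeps the Cholesky stable

z = mu + np.linalg.cholesky(cov) @ rng.standard_normal(len(sites))
print(z.shape)   # (64,) -> one simulated data set; alternative correlation families
                 # (e.g. exponential, Matern) can then be fit to such data and compared.
```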
