20 similar articles found; search took 18 milliseconds
1.
《Journal of Statistical Computation and Simulation》2012,82(2-4):263-279
We construct bootstrap confidence intervals for smoothing spline estimates based on Gaussian data, and penalized likelihood smoothing spline estimates based on data from exponential families. Several variations of bootstrap confidence intervals are considered and compared. We find that the commonly used bootstrap percentile intervals are inferior to the T intervals and to intervals based on bootstrap estimation of mean squared errors. The best variations of the bootstrap confidence intervals behave similarly to the well-known Bayesian confidence intervals. These bootstrap confidence intervals have an average coverage probability across the function being estimated, as opposed to a pointwise property.
2.
Younan Chen 《Journal of applied statistics》2011,38(9):1963-1975
In modern quality engineering, dual response surface methodology is a powerful tool to model an industrial process by using both the mean and the standard deviation of the measurements as the responses. The least squares method in regression is often used to estimate the coefficients in the mean and standard deviation models, and various decision criteria are proposed by researchers to find the optimal conditions. Based on the inherent hierarchical structure of the dual response problems, we propose a Bayesian hierarchical approach to model dual response surfaces. Such an approach is compared with two frequentist least squares methods by using two real data sets and simulated data.
3.
Standard response surface methodology employs a second order polynomial model to locate the stationary point ξ of the true response function. To make Bayesian analysis more direct and simpler, we refer to an alternative and equivalent parametrization, which contains ξ as parameter of interest. The marginal reference prior of ξ is derived in its general form and particular cases are also given in detail, showing the Bayesian role of rotatability.
4.
Davide Farchione 《Communications in Statistics - Theory and Methods》2013,42(15):2680-2694
It is well known that a Bayesian credible interval for a parameter of interest is derived from a prior distribution that appropriately describes the prior information. However, it is less well known that there exists a frequentist approach developed by Pratt (1961) that also utilizes prior information in the construction of frequentist confidence intervals. This frequentist approach produces confidence intervals that have minimum weighted average expected length, averaged according to some weight function that appropriately describes the prior information. We begin with a simple model as a starting point in comparing these two distinct procedures in interval estimation. Consider X1, …, Xn that are independent and identically N(μ, σ²) distributed random variables, where σ² is known, and the parameter of interest is μ. Suppose also that previous experience with similar data sets and/or specific background and expert opinion suggest that μ = 0. Our aim is to: (a) develop two types of Bayesian 1 − α credible intervals for μ, derived from an appropriate prior cumulative distribution function F(μ); and, more importantly, (b) compare these Bayesian 1 − α credible intervals for μ to the frequentist 1 − α confidence interval for μ derived from Pratt's frequentist approach, in which the weight function corresponds to the prior cumulative distribution function F(μ). We show that the endpoints of the Bayesian 1 − α credible intervals for μ are very different from the endpoints of the frequentist 1 − α confidence interval for μ when the prior information strongly suggests that μ = 0 and the data support the uncertain prior information about μ. In addition, we assess the performance of these intervals by analyzing their coverage probability properties and expected lengths.
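A minimal sketch may help fix ideas for an entry like this one. The code below computes an equal-tailed 1 − α credible interval for μ under a conjugate N(mu0, tau2) prior centered at zero with known sampling variance; the conjugate prior is an illustrative stand-in for the paper's general prior cdf F(μ), and the hyperparameters `mu0` and `tau2` are hypothetical.

```python
import math
from statistics import NormalDist

def normal_credible_interval(xbar, n, sigma2, mu0=0.0, tau2=1.0, alpha=0.05):
    """Equal-tailed 1 - alpha credible interval for mu given xbar from
    n iid N(mu, sigma2) observations, under a conjugate N(mu0, tau2) prior
    (an illustrative choice, not the paper's exact F(mu))."""
    # Standard conjugate-normal update: precision-weighted posterior.
    post_var = 1.0 / (n / sigma2 + 1.0 / tau2)
    post_mean = post_var * (n * xbar / sigma2 + mu0 / tau2)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = z * math.sqrt(post_var)
    return post_mean - half, post_mean + half
```

As `tau2` grows, the prior becomes vague and the interval approaches the usual frequentist interval x̄ ± z·σ/√n, which is one way to see the contrast the paper studies: a tight prior around μ = 0 pulls the credible interval toward zero, while Pratt's weighted frequentist interval responds to the same information differently.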
5.
The lognormal distribution is currently used extensively to describe the distribution of positive random variables. This is especially the case with data pertaining to occupational health and other biological data. One particular application of the data is statistical inference with regard to the mean of the data. Other authors, namely Zou et al. (2009), have proposed procedures involving the so-called "method of variance estimates recovery" (MOVER), while an alternative simulation-based approach is the generalized confidence interval, discussed by Krishnamoorthy and Mathew (2003). In this paper we compare the coverage performance of the MOVER-based confidence intervals and the generalized confidence interval procedure with that of credibility intervals obtained using Bayesian methodology under a variety of prior distributions, in order to assess the appropriateness of each. An extensive simulation study is conducted to evaluate the coverage accuracy and interval width of the proposed methods. For the Bayesian approach, both the equal-tail and highest posterior density (HPD) credibility intervals are presented. Various prior distributions (the independence Jeffreys prior; the Jeffreys-rule prior, namely the square root of the determinant of the Fisher information matrix; and reference and probability-matching priors) are evaluated and compared to determine which give the best coverage with the most efficient interval width. The simulation studies show that the constructed Bayesian confidence intervals have satisfactory coverage probabilities and in some cases outperform the MOVER and generalized confidence interval results. The Bayesian inference procedures (hypothesis tests and confidence intervals) are also extended to the difference between two lognormal means, as well as to the case of zero-valued observations and confidence intervals for the lognormal variance. 
In the last section of this paper the bivariate lognormal distribution is discussed and Bayesian confidence intervals are obtained for the difference between two correlated lognormal means as well as for the ratio of lognormal variances, using nine different priors.
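The generalized confidence interval mentioned in this entry can be sketched in a few lines. The simulation below uses a Krishnamoorthy-Mathew style generalized pivot for η = μ + σ²/2 on the log scale and exponentiates its quantiles; it is a stdlib-only illustration rather than the authors' exact implementation, and the Monte Carlo size `m` is an arbitrary choice.

```python
import math
import random
from statistics import mean, variance

def lognormal_mean_gci(data, alpha=0.05, m=20000, seed=0):
    """Generalized confidence interval for the lognormal mean
    exp(mu + sigma^2 / 2), via the generalized pivot on the log scale."""
    rng = random.Random(seed)
    logs = [math.log(x) for x in data]
    n, ybar, s2 = len(logs), mean(logs), variance(logs)
    pivots = []
    for _ in range(m):
        u = 2.0 * rng.gammavariate((n - 1) / 2.0, 1.0)  # chi-square_{n-1} draw
        z = rng.gauss(0.0, 1.0)
        g_sigma2 = (n - 1) * s2 / u                # generalized pivot for sigma^2
        g_mu = ybar - z * math.sqrt(g_sigma2 / n)  # generalized pivot for mu
        pivots.append(g_mu + g_sigma2 / 2.0)
    pivots.sort()
    lo = pivots[int((alpha / 2) * m)]
    hi = pivots[int((1 - alpha / 2) * m) - 1]
    return math.exp(lo), math.exp(hi)
```

By contrast, the MOVER approach combines separate closed-form intervals for μ and σ²/2 without simulation, which is part of what the paper's coverage comparison is about.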
6.
Let X1, …, Xn and Y1, …, Yn be consecutive samples from a distribution function F which itself is randomly chosen according to the Ferguson (1973) Dirichlet-process prior distribution on the space of distribution functions. Typically, prediction intervals employ the observations X1, …, Xn in the first sample in order to predict a specified function of the future sample Y1, …, Yn. Here one- and two-sided prediction intervals for at least q of N future observations are developed for the situation in which, in addition to the previous sample, there is prior information available. The information is specified via the parameter α of the Dirichlet process prior distribution.
7.
8.
A Bayesian Approach for Multiple Response Surface Optimization in the Presence of Noise Variables
Guillermo Miró-Quesada, Enrique Del Castillo, John J. Peterson 《Journal of applied statistics》2004,31(3):251-270
An approach for the multiple response robust parameter design problem based on a methodology by Peterson (2000) is presented. The approach is Bayesian, and consists of maximizing the posterior predictive probability that the process satisfies a set of constraints on the responses. In order to find a solution robust to variation in the noise variables, the predictive density is integrated not only with respect to the response variables but also with respect to the assumed distribution of the noise variables. The maximization problem involves repeated Monte Carlo integrations, and two different methods to solve it are evaluated. Matlab code was written that rapidly finds an optimal (robust) solution when one exists. Two examples taken from the literature are used to illustrate the proposed method.
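The core computation, estimating the posterior predictive probability of conformance by Monte Carlo, can be sketched generically. Here `sample_response` and `in_spec` are hypothetical user-supplied callables (a posterior-predictive sampler at a fixed setting of the controllable factors, and a spec-checking predicate); sampling the noise variables inside `sample_response` yields the robust version described in the abstract.

```python
import random

def prob_conform(sample_response, in_spec, m=5000, seed=0):
    """Monte Carlo estimate of P(response meets spec) at one design point.

    sample_response(rng) -> one simulated response (vector) from the
    posterior predictive distribution, with noise variables drawn inside
    the callable; in_spec(y) -> bool.
    """
    rng = random.Random(seed)
    hits = sum(in_spec(sample_response(rng)) for _ in range(m))
    return hits / m
```

The outer optimization over design points then repeats this noisy estimate many times, which is why the paper evaluates solvers suited to Monte Carlo objectives.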
9.
《Journal of Statistical Computation and Simulation》2012,82(11):1651-1660
This paper obtains Bayes estimators of the Rayleigh parameter and its associated risk based on a conjugate prior (the square-root inverted gamma prior) with respect to both a symmetric loss function (squared error loss) and an asymmetric loss function (the precautionary loss function). We also derive the highest posterior density (HPD) interval for the Rayleigh parameter, as well as the HPD prediction intervals for a future observation from this distribution. An illustrative example to test how well the Rayleigh distribution fits a real data set is presented. Finally, Monte Carlo simulations are performed to compare the performances of the Bayes estimates under different conditions.
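As a concrete illustration of the conjugate update, the sketch below works with an inverse-gamma prior on the Rayleigh scale θ = σ² (a reparametrized stand-in for the square-root inverted gamma prior on σ; the hyperparameters `a` and `b` are hypothetical). Under squared error loss the Bayes estimate is the posterior mean; under the precautionary loss L(d, θ) = (d − θ)²/d it is the square root of the posterior second moment.

```python
import math

def rayleigh_posterior(xs, a=2.0, b=1.0):
    """Posterior of theta = sigma^2 for Rayleigh data with density
    f(x) = (x/theta) exp(-x^2 / (2 theta)), x > 0, under an
    inverse-gamma(a, b) prior: posterior is IG(a + n, b + sum(x_i^2)/2)."""
    n = len(xs)
    return a + n, b + sum(x * x for x in xs) / 2.0

def bayes_sel(xs, a=2.0, b=1.0):
    """Squared-error-loss estimate: posterior mean beta / (alpha - 1)."""
    alpha, beta = rayleigh_posterior(xs, a, b)
    return beta / (alpha - 1.0)

def bayes_precautionary(xs, a=2.0, b=1.0):
    """Precautionary-loss estimate: sqrt(E[theta^2]) under the posterior,
    using E[theta^2] = beta^2 / ((alpha - 1)(alpha - 2)) for alpha > 2."""
    alpha, beta = rayleigh_posterior(xs, a, b)
    return math.sqrt(beta * beta / ((alpha - 1.0) * (alpha - 2.0)))
```

The precautionary estimate always exceeds the squared-error one, since sqrt(E[θ²]) ≥ E[θ]; that asymmetry (overestimating rather than underestimating) is the point of the precautionary loss.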
10.
N. Ch. Bhatra Charyulu 《Communications in Statistics - Theory and Methods》2017,46(7):3520-3525
To reduce the dimensionality of the second-order response surface design model, variance component indices are derived under both imposing and not imposing restrictions toward orthogonality on the moment matrix, and the results are illustrated with suitable examples in this article.
11.
We propose a fully Bayesian model with a non-informative prior for analyzing misclassified binary data with a validation substudy. In addition, we derive a closed-form algorithm for drawing all parameters from the posterior distribution and making statistical inference on odds ratios. Our algorithm draws each parameter from a beta distribution, avoids the specification of initial values, and does not have convergence issues. We apply the algorithm to a data set and compare the results with those obtained by other methods. Finally, the performance of our algorithm is assessed using simulation studies.
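The beta-sampling flavor of such algorithms can be illustrated on the simplest case: two correctly classified binomial samples under independent Beta(1, 1) priors. The paper's actual algorithm additionally corrects for misclassification using the validation substudy, which this toy sketch omits.

```python
import random

def odds_ratio_draws(y1, n1, y2, n2, m=10000, seed=0):
    """Posterior draws of the odds ratio for two binomial samples
    (y1 successes of n1, y2 of n2) under independent Beta(1, 1) priors.
    Each draw comes directly from a beta distribution -- no MCMC, no
    initial values, no convergence diagnostics."""
    rng = random.Random(seed)
    draws = []
    for _ in range(m):
        p1 = rng.betavariate(1 + y1, 1 + n1 - y1)
        p2 = rng.betavariate(1 + y2, 1 + n2 - y2)
        draws.append((p1 / (1 - p1)) / (p2 / (1 - p2)))
    return draws
```

A credible interval for the odds ratio is then just a pair of empirical quantiles of `draws`.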
12.
The probability-matching prior for linear functions of Poisson parameters is derived. A comparison is made between the confidence intervals obtained by Stamey and Hamilton (2006) and the intervals we derive using the Jeffreys and probability-matching priors. The intervals obtained from the Jeffreys prior are in some cases fiducial intervals (Krishnamoorthy and Lee, 2010). A weighted Monte Carlo method is used for the probability-matching prior. The power and size of the test, using Bayesian methods, are compared to those of the tests used by Krishnamoorthy and Thomson (2004). The Jeffreys, probability-matching, and two other priors are used.
13.
It is assumed that a small random sample of fixed size n is drawn from a logarithmic series distribution with parameter θ and that it is desired to estimate θ by means of a two-sided confidence interval. In this note Crow's system of confidence intervals is compared, in shortness of intervals, with Clopper and Pearson's, and the corresponding randomized counterparts.
14.
The objective of this paper is to study U-type designs for Bayesian nonparametric response surface prediction under correlated errors. The asymptotic Bayes criterion is developed in terms of the asymptotic approach of Mitchell et al. (1994) for a more general covariance kernel proposed by Chatterjee and Qin (2011). A relationship between the asymptotic Bayes criterion and other criteria, such as orthogonality and aberration, is then developed. A lower bound for the criterion is also obtained, and numerical results show that this lower bound is tight. The established results generalize those of Yue et al. (2011) from the symmetrical case to asymmetrical U-type designs.
15.
System characteristics of a redundant repairable system with two primary units and one standby are studied from a Bayesian viewpoint, with different types of priors assumed for the unknown parameters; the coverage factor is the same for an operating-unit failure as for a standby-unit failure. Times to failure and times to repair of the operating and standby units are assumed to follow exponential distributions. When the times to failure and times to repair have uncertain parameters, a Bayesian approach is adopted to evaluate the system characteristics. Monte Carlo simulation is used to derive the posterior distribution of the mean time to system failure and the steady-state availability. Some numerical experiments are performed to illustrate the results derived in this paper.
16.
Tonglin Zhang 《Communications in Statistics - Theory and Methods》2013,42(9):1703-1711
In Bayesian analysis, people usually report the highest posterior density (HPD) credible interval as an interval estimate of an unknown parameter. However, when the unknown parameter is the nonnegative normal mean, the Bayesian HPD credible interval under the uniform prior has quite a low minimum frequentist coverage probability. To enhance the minimum frequentist coverage probability of a credible interval, I propose a new method of reporting the Bayesian credible interval. Numerical results show that the new reported credible interval has a much higher minimum frequentist coverage probability than the HPD credible interval.
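A short simulation makes the frequentist-coverage question in this entry concrete. The sketch below uses a single observation x ~ N(μ, 1) with the uniform prior on [0, ∞), so the posterior is a normal truncated to [0, ∞); it reports an equal-tailed credible interval (the paper studies the HPD interval, but equal tails keep the truncated-normal quantile arithmetic to one line). Evaluating `coverage(mu)` over a grid of μ values shows where the frequentist coverage dips.

```python
import random
from statistics import NormalDist

_N = NormalDist()

def credible_interval_nonneg_mean(x, alpha=0.05):
    """Equal-tailed 1 - alpha credible interval for mu >= 0 given one
    observation x ~ N(mu, 1) under the uniform prior on [0, inf):
    the posterior is N(x, 1) truncated to [0, inf)."""
    p0 = _N.cdf(-x)  # untruncated posterior mass below zero
    def q(u):  # posterior quantile via the truncated-normal inverse cdf
        return x + _N.inv_cdf(p0 + u * (1 - p0))
    return q(alpha / 2), q(1 - alpha / 2)

def coverage(mu, m=20000, seed=1):
    """Monte Carlo estimate of the frequentist coverage at a fixed mu."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(m):
        x = rng.gauss(mu, 1.0)
        lo, hi = credible_interval_nonneg_mean(x)
        hits += lo <= mu <= hi
    return hits / m
```

For large μ the truncation is negligible and coverage sits near the nominal 95%; the interesting behavior, and the paper's motivation, is at small μ.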
17.
18.
Well-known nonparametric confidence intervals for quantiles are of the form (Xi:n, Xj:n) with suitably chosen order statistics Xi:n and Xj:n, but typically their coverage levels differ from those prescribed. It appears that the coverage level of a confidence interval of the form (XI:n, XJ:n) with random indices I and J can be rendered exactly equal to any predetermined level γ ∈ (0, 1). Best in the sense of minimum E(J − I), i.e., 'the shortest', two-sided confidence intervals are constructed. If no two-sided confidence interval exists for a given γ, the most accurate one-sided confidence intervals are constructed.
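For a continuous distribution, the coverage of the fixed-index interval (Xi:n, Xj:n) for the p-quantile reduces to a binomial sum, P(Xi:n ≤ ξp ≤ Xj:n) = Σ from k = i to j−1 of C(n, k) p^k (1−p)^(n−k), which takes only finitely many values as i and j vary; that discreteness is why exact prescribed levels generally require the randomized indices described above. A minimal check of the formula:

```python
from math import comb

def quantile_ci_coverage(n, p, i, j):
    """Exact coverage of (X_{i:n}, X_{j:n}) as a confidence interval for
    the p-quantile of a continuous distribution: P(i <= B <= j - 1)
    where B ~ Binomial(n, p) counts observations at or below the quantile."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(i, j))
```

For example, with n = 10 the median interval (X3:10, X8:10) covers with probability 912/1024 ≈ 0.891, and no choice of fixed indices hits 0.90 exactly.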
19.
We study a Bayesian approach to recovering the initial condition for the heat equation from noisy observations of the solution at a later time. We consider a class of prior distributions indexed by a parameter quantifying “smoothness” and show that the corresponding posterior distributions contract around the true parameter at a rate that depends on the smoothness of the true initial condition and the smoothness and scale of the prior. Correct combinations of these characteristics lead to the optimal minimax rate. One type of priors leads to a rate-adaptive Bayesian procedure. The frequentist coverage of credible sets is shown to depend on the combination of the prior and true parameter as well, with smoother priors leading to zero coverage and rougher priors to (extremely) conservative results. In the latter case, credible sets are much larger than frequentist confidence sets, in that the ratio of diameters diverges to infinity. The results are numerically illustrated by a simulated data example.
20.
Hafiz Muhammad Arshad 《Journal of statistical planning and inference》2012,142(1):232-247
Response surface designs are widely used in industries like chemicals, foods, pharmaceuticals, bioprocessing, agrochemicals, biology, biomedicine, agriculture and medicine. One of the major objectives of these designs is to study the functional relationship between one or more responses and a number of quantitative input factors. However, biological materials have more run-to-run variation than in many other experiments, leading to the conclusion that smaller response surface designs are inappropriate. Thus designs to be used in these research areas should have greater replication. Gilmour (2006) introduced a wide class of designs called “subset designs” which are useful in situations in which run-to-run variation is high. These designs allow the experimenter to fit the second order response surface model. However, there are situations in which the second order model representation proves to be inadequate and unrealistic due to the presence of lack of fit caused by third or higher order terms in the true response surface model. In such situations it becomes necessary for the experimenter to estimate these higher order terms. In this study, the properties of subset designs, in the context of the third order response surface model, are explored.