Similar Documents
20 similar documents found.
1.
This paper considers the multiple comparisons problem for normal variances. We propose a solution to this problem based on a Bayesian model selection procedure that requires no subjective input. We construct intrinsic and fractional priors for which the Bayes factors and model selection probabilities are well defined. The posterior probability of each model is used as a model selection tool. The behaviour of these Bayes factors is compared with the Bayesian information criterion of Schwarz and with some frequentist tests.
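
As a point of reference for the BIC comparison mentioned in the abstract, the sketch below contrasts an "equal variances" model with an "unequal variances" model for several normal samples using Schwarz's BIC and converts the BIC values into approximate posterior model probabilities. This is not the intrinsic- or fractional-prior procedure of the paper, and the simulated groups are purely illustrative.

```python
# Minimal BIC-based model comparison sketch, assuming k independent normal samples.
import numpy as np

def gaussian_max_loglik(x, sigma2):
    """Maximized log-likelihood of an i.i.d. normal sample given a variance estimate."""
    n = len(x)
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)

def bic_model_probs(samples):
    """samples: list of 1-D arrays, one per group."""
    n_total = sum(len(x) for x in samples)
    k = len(samples)
    # M0: common variance (k means + 1 variance parameter)
    pooled = np.concatenate([x - x.mean() for x in samples])
    s2_pooled = np.mean(pooled ** 2)
    loglik0 = sum(gaussian_max_loglik(x, s2_pooled) for x in samples)
    bic0 = -2 * loglik0 + (k + 1) * np.log(n_total)
    # M1: separate variances (k means + k variance parameters)
    loglik1 = sum(gaussian_max_loglik(x, np.mean((x - x.mean()) ** 2)) for x in samples)
    bic1 = -2 * loglik1 + 2 * k * np.log(n_total)
    w = np.exp(-0.5 * np.array([bic0, bic1]))
    return w / w.sum()          # approximate P(M0 | data), P(M1 | data)

rng = np.random.default_rng(0)
groups = [rng.normal(0, s, 50) for s in (1.0, 1.0, 2.0)]   # third group has a larger variance
print(bic_model_probs(groups))
```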

2.
Based on two-sample rank order statistics, a repeated significance testing procedure for a multi-sample location problem is considered. The asymptotic distribution theory of the proposed tests is given under the null hypothesis as well as under local alternatives. A Bahadur efficiency result for the repeated significance test relative to the terminal test based solely on the target sample size is presented. In adapting the proposed tests to multiple comparisons, an asymptotically equivalent test statistic in terms of the rank estimators of the location parameters is derived, from which the Scheffé method of multiple comparisons can be obtained in a convenient way.

3.
Mixed treatment comparison (MTC) models rely on estimates of relative effectiveness from randomized clinical trials so as to respect randomization across treatment arms. This approach could potentially be simplified by an alternative parameterization of the way effectiveness is modeled. We introduce a treatment-based parameterization of the MTC model that estimates outcomes on both the study and treatment levels. We compare the proposed model with commonly used MTC models using a simulation study as well as three randomized clinical trial datasets from published systematic reviews comparing (i) treatments for bleeding after cirrhosis, (ii) the impact of antihypertensive drugs in diabetes mellitus, and (iii) smoking cessation strategies. The simulation results suggest similar, or sometimes better, performance of the treatment-based MTC model. Moreover, in the real data analyses little difference was observed in the inferences obtained from the two models. Overall, our proposed MTC approach performed as well as, or better than, the commonly applied indirect and MTC models, and it is simpler, faster, and easier to implement in standard statistical software.

4.
Statistical image restoration techniques are oriented mainly toward modelling the image degradation process in order to recover the original image. This usually involves formulating a criterion function that will yield some optimal estimate of the desired image. These techniques often assume that the point spread function is known both when the image is restored and when the smoothing parameter is estimated. In practice, however, this assumption may not hold. This paper investigates empirically the effect of mis-specifying the point spread function on some data-based estimates of the regularization parameter and hence on the image reconstructions. Comparisons of image reconstruction quality are based on the mean absolute difference in pixel intensities between the true and reconstructed images.
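
The sketch below illustrates the phenomenon being studied, not the paper's data-based selection of the regularization parameter: a blurred, noisy image is restored with a Tikhonov-regularized (Wiener-type) filter using first the true Gaussian PSF width and then mis-specified widths, and the reconstructions are compared by mean absolute pixel difference. The test image, PSF widths and the fixed regularization parameter lambda are invented for illustration.

```python
import numpy as np

def gaussian_otf(shape, sigma):
    """Frequency response of a Gaussian PSF with standard deviation `sigma` pixels."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-2 * (np.pi ** 2) * (sigma ** 2) * (fx ** 2 + fy ** 2))

def restore(y, sigma_assumed, lam):
    """Tikhonov/Wiener-type restoration assuming a Gaussian PSF of width `sigma_assumed`."""
    H = gaussian_otf(y.shape, sigma_assumed)
    X = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(1)
truth = np.zeros((64, 64)); truth[24:40, 24:40] = 1.0       # a simple square "scene"
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * gaussian_otf(truth.shape, 2.0)))
y = blurred + rng.normal(0, 0.01, truth.shape)               # observed degraded image

for sigma_psf in (2.0, 1.0, 3.0):                            # true vs mis-specified PSF widths
    rec = restore(y, sigma_psf, lam=1e-3)
    print(sigma_psf, np.mean(np.abs(rec - truth)))            # mean absolute difference
```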

5.
Consider a J-component series system which is put on an Accelerated Life Test (ALT) involving K stress variables. First, a general formulation of ALT is provided for the log-location-scale family of distributions. A general stress translation function for the location parameter of the component log-lifetime distribution is proposed, which accommodates standard choices such as the Arrhenius, power-rule and log-linear models as special cases. The component lives are then assumed to be independent Weibull random variables with a common shape parameter. A full Bayesian methodology is developed by letting only the scale parameters of the Weibull component lives depend on the stress variables through the general stress translation function. Priors on all the parameters, namely the stress coefficients and the Weibull shape parameter, are assumed to be log-concave and independent of each other; this assumption facilitates Gibbs sampling from the joint posterior. The samples thus generated from the joint posterior are then used to obtain Bayesian point and interval estimates of the system reliability at the usage condition.
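
The sketch below shows only the deterministic modelling pieces described in the abstract, with made-up parameter values: the log of each Weibull component scale is a linear function of transformed stresses (an Arrhenius-type term uses reciprocal temperature, a power-rule-type term uses log stress), and the series-system reliability at a usage condition is the product of the component reliabilities. The Bayesian estimation of the coefficients via Gibbs sampling is not reproduced here.

```python
import numpy as np

def component_scale(beta0, betas, stresses, transforms):
    """General stress translation: log(scale) = beta0 + sum_k beta_k * phi_k(stress_k)."""
    return np.exp(beta0 + sum(b * phi(s) for b, s, phi in zip(betas, stresses, transforms)))

def series_system_reliability(t, scales, shape):
    """J independent Weibull components with a common shape parameter, in series."""
    return np.prod([np.exp(-(t / eta) ** shape) for eta in scales])

transforms = [lambda temp: 1.0 / temp,      # Arrhenius-type term (temperature in kelvin)
              lambda volt: np.log(volt)]    # power-rule-type term (e.g. voltage)
usage_stress = (300.0, 5.0)                 # hypothetical usage temperature and voltage
# hypothetical stress coefficients (beta0, (beta_1, beta_2)) for two components
scales = [component_scale(b0, b, usage_stress, transforms)
          for b0, b in [(2.0, (2000.0, -0.5)), (2.5, (1800.0, -0.8))]]
print(series_system_reliability(t=1000.0, scales=scales, shape=1.3))
```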

6.
Hazard rate estimation is an alternative to density estimation for positive variables that is of interest when the variables are times to an event. In particular, it is shown here that hazard rate estimation is useful for seismic hazard assessment. This paper suggests a simple but flexible Bayesian method for non-parametric hazard rate estimation, based on building the prior hazard rate as the convolution mixture of a Gaussian kernel with an exponential jump-size compound Poisson process. Conditions are given for a compound Poisson process prior to be well defined and to select smooth hazard rates, an elicitation procedure is devised to assign a constant prior expected hazard rate while controlling prior variability, and a Markov chain Monte Carlo approximation of the posterior distribution is obtained. Finally, the suggested method is validated in a simulation study, and some Italian seismic event data are analysed.
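
The sketch below draws a single hazard-rate function from a prior of the kind described: a compound Poisson process with exponentially distributed jump sizes is simulated, and the prior hazard rate is the convolution of a Gaussian kernel with those jumps. The jump rate, mean jump size, bandwidth and time horizon are illustrative choices, not the paper's elicited values, and the posterior MCMC step is not shown.

```python
import numpy as np
from scipy.stats import norm

def sample_prior_hazard(grid, horizon, jump_rate, jump_mean, bandwidth, rng):
    """One draw of the prior hazard rate evaluated on `grid`."""
    n_jumps = rng.poisson(jump_rate * horizon)
    jump_times = rng.uniform(0.0, horizon, n_jumps)        # Poisson-process jump locations
    jump_sizes = rng.exponential(jump_mean, n_jumps)       # exponential jump sizes
    if n_jumps == 0:
        return np.zeros_like(grid)
    # Convolution mixture: sum of Gaussian kernels centred at the jumps, weighted by sizes
    return sum(s * norm.pdf(grid, loc=t, scale=bandwidth)
               for s, t in zip(jump_sizes, jump_times))

rng = np.random.default_rng(2)
grid = np.linspace(0.0, 10.0, 200)
hazard_draw = sample_prior_hazard(grid, horizon=10.0, jump_rate=1.0,
                                  jump_mean=0.3, bandwidth=0.8, rng=rng)
print(hazard_draw[:5])
```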

7.
We consider maximum likelihood methods for estimating the end point of a distribution. The likelihood function is modified by a prior distribution that is imposed on the location parameter. The prior is explicit and meaningful, and has a general form that adapts itself to different settings. Results on convergence rates and limiting distributions are given. In particular, it is shown that the limiting distribution is non-normal in non-regular cases. Parametric bootstrap techniques are suggested for quantifying the accuracy of the estimator. We illustrate performance by applying the method to multiparameter Weibull and gamma distributions.

8.
Count data are collected in many scientific fields. One way to analyse such data is to compare the individual levels of a treatment factor using multiple comparisons. However, the measured individuals are often clustered, e.g. according to litter or rearing, and this must be accounted for when estimating the parameters via a repeated measurement model. In addition, ignoring the overdispersion to which count data are prone inflates the type I error rate. We carry out simulation studies under several data settings and compare different multiple contrast tests with parameter estimates from generalized estimating equations and generalized linear mixed models, examining coverage and rejection probabilities. We generate overdispersed, clustered count data in small samples, as can be observed in many biological settings. We find that generalized estimating equations outperform generalized linear mixed models when the sandwich variance estimator is correctly specified. Furthermore, generalized linear mixed models show convergence problems under certain data settings, although implementations that suffer less from this issue exist. Finally, we use a genetic data example to demonstrate the application of the multiple contrast test and the problems caused by ignoring strong overdispersion.
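
A minimal sketch of the GEE side of such a comparison, under assumptions of this illustration only: overdispersed, clustered counts are simulated with a cluster random effect and a gamma–Poisson mixture, a Poisson GEE with an exchangeable working correlation and robust (sandwich) standard errors is fitted with statsmodels, and one pairwise treatment contrast is tested by a Wald statistic computed by hand. Cluster counts and effect sizes are invented; the paper's multiple contrast tests adjust jointly over all contrasts, which is not done here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
clusters, per_cluster, treatments = 12, 5, 3
rows = []
for c in range(clusters):
    u = rng.normal(0, 0.4)                       # cluster (e.g. litter) random effect
    trt = c % treatments
    mu = np.exp(1.0 + 0.4 * trt + u)
    # gamma-Poisson mixture -> overdispersed counts within clusters
    rows += [(c, trt, rng.poisson(rng.gamma(2.0, mu / 2.0))) for _ in range(per_cluster)]
df = pd.DataFrame(rows, columns=["cluster", "trt", "count"])

exog = sm.add_constant(pd.get_dummies(df["trt"], prefix="trt", drop_first=True).astype(float))
model = sm.GEE(df["count"], exog, groups=df["cluster"],
               family=sm.families.Poisson(), cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()                                 # robust sandwich covariance by default

# Wald test of the single contrast "treatment 2 vs treatment 1"
L = np.array([0.0, -1.0, 1.0])
est = float(L @ res.params)
se = float(np.sqrt(L @ np.asarray(res.cov_params()) @ L))
print(res.params.values, est, est / se)
```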

9.
We study a Bayesian approach to recovering the initial condition for the heat equation from noisy observations of the solution at a later time. We consider a class of prior distributions indexed by a parameter quantifying “smoothness” and show that the corresponding posterior distributions contract around the true parameter at a rate that depends on the smoothness of the true initial condition and the smoothness and scale of the prior. Correct combinations of these characteristics lead to the optimal minimax rate. One type of prior leads to a rate-adaptive Bayesian procedure. The frequentist coverage of credible sets is shown to depend on the combination of the prior and true parameter as well, with smoother priors leading to zero coverage and rougher priors to (extremely) conservative results. In the latter case, credible sets are much larger than frequentist confidence sets, in that the ratio of diameters diverges to infinity. The results are numerically illustrated by a simulated data example.
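
A minimal sketch of the conjugate, coefficient-wise calculation that underlies such analyses: in the eigenbasis of the heat semigroup the observation of coefficient k is Y_k = exp(-k^2 T) * theta_k + n^(-1/2) * noise, and a Gaussian prior theta_k ~ N(0, k^(-1-2*alpha)) yields an explicit Gaussian posterior for each theta_k. The "true" coefficient sequence, the smoothness alpha and the noise level below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)
n, T, alpha, K = 10_000, 0.1, 1.0, 200
k = np.arange(1, K + 1, dtype=float)
theta_true = k ** (-1.5) * np.cos(k)                 # a "truth" of finite smoothness
decay = np.exp(-k ** 2 * T)                          # eigenvalues of the forward heat operator
y = decay * theta_true + rng.normal(0, 1 / np.sqrt(n), K)

prior_var = k ** (-1.0 - 2.0 * alpha)                # Gaussian prior variances
noise_var = 1.0 / n
post_var = 1.0 / (decay ** 2 / noise_var + 1.0 / prior_var)   # normal-normal update
post_mean = post_var * decay * y / noise_var

print(np.sum((post_mean - theta_true) ** 2))         # squared error of the posterior mean
```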

10.
In this paper, we discuss a simple, fully Bayesian analysis of the change-point problem for directional data in a parametric framework with the von Mises (circular normal) distribution as the underlying distribution. We first discuss the problem of detecting a change in the mean direction of the circular normal distribution using a latent variable approach when the concentration parameter is unknown. Then a simpler approach – the sampling importance resampling technique, beginning with proper priors for all the unknown parameters – is used to obtain the marginal posterior distribution of the change-point. The method is illustrated using the wind data of [E.P. Weijers, A. Van Delden, H.F. Vugts and A.G.C.A. Meesters, The composite horizontal wind field within convective structures of the atmospheric surface layer, J. Atmos. Sci. 52 (1995), pp. 3866–3878]. The method can be adapted to a variety of situations involving both angular and linear data, and can be used with profit in the context of statistical process control, in Phase I of control charting and also in Phase II in conjunction with control charts.
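
A minimal sketch of a sampling importance resampling (SIR) approximation to the posterior of a change-point in the mean direction of von Mises data. The priors below (uniform change-point, uniform mean directions, exponential concentration) and the simulated series are illustrative stand-ins for the proper priors and wind data used in the paper.

```python
import numpy as np
from scipy.stats import vonmises

rng = np.random.default_rng(5)
n, true_tau = 60, 35
data = np.concatenate([vonmises.rvs(4.0, loc=0.5, size=true_tau, random_state=rng),
                       vonmises.rvs(4.0, loc=2.0, size=n - true_tau, random_state=rng)])

def log_lik(tau, mu1, mu2, kappa):
    """Log-likelihood of a change in mean direction at position tau."""
    return (vonmises.logpdf(data[:tau], kappa, loc=mu1).sum()
            + vonmises.logpdf(data[tau:], kappa, loc=mu2).sum())

M = 5_000
tau = rng.integers(1, n, M)                       # uniform prior on the change-point
mu1, mu2 = rng.uniform(-np.pi, np.pi, (2, M))     # uniform priors on the mean directions
kappa = rng.exponential(5.0, M)                   # vague prior on the concentration

logw = np.array([log_lik(t, a, b, k) for t, a, b, k in zip(tau, mu1, mu2, kappa)])
w = np.exp(logw - logw.max()); w /= w.sum()       # importance weights
resampled_tau = rng.choice(tau, size=2_000, p=w)  # resampling step of SIR
print(np.bincount(resampled_tau, minlength=n).argmax())   # posterior mode of the change-point
```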

11.
In this paper a new method, called the EMS algorithm, is used to solve Wicksell's corpuscle problem, that is, the determination of the distribution of sphere radii in a medium given the radii of their profiles in a random slice. The EMS algorithm combines the EM algorithm, a procedure for obtaining maximum likelihood estimates of parameters from incomplete data, with simple smoothing. The method is tested on simulated data from three different sphere radii densities, namely a bimodal mixture of Normals, a Weibull and a Normal. The effect of varying the level of smoothing, the number of classes in which the data are binned and the number of classes for which the estimated density is evaluated is investigated. Comparisons are made between these results and those obtained by others in this field.
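
A minimal sketch of an EMS-type iteration for Wicksell's problem: an EM step of the usual kind for a binned linear Poisson inverse problem, followed by a simple moving-average smoothing step. The discretized kernel below (size-biased sampling of spheres combined with the within-sphere profile-radius distribution) is a standard textbook discretization assumed for this illustration, not code from the paper, and the simulated counts are invented.

```python
import numpy as np

def wicksell_kernel(edges):
    """c[i, j]: expected contribution of sphere-radius class j to profile-radius bin i."""
    mids = 0.5 * (edges[:-1] + edges[1:])
    c = np.zeros((len(mids), len(mids)))
    for j, R in enumerate(mids):
        upper = np.sqrt(np.clip(R ** 2 - edges[:-1] ** 2, 0.0, None))
        lower = np.sqrt(np.clip(R ** 2 - edges[1:] ** 2, 0.0, None))
        c[:, j] = upper - lower            # within-sphere bin probability times the size bias R
    return c

def ems(counts, kernel, n_iter=200, smooth=np.array([0.25, 0.5, 0.25])):
    f = np.full(kernel.shape[1], counts.sum() / kernel.shape[1])
    for _ in range(n_iter):
        mu = kernel @ f                                                           # fitted counts
        f = f * (kernel.T @ (counts / np.maximum(mu, 1e-12))) / kernel.sum(axis=0)  # EM step
        f = np.convolve(f, smooth, mode="same")                                   # S (smoothing) step
    return f

rng = np.random.default_rng(6)
edges = np.linspace(0.0, 1.0, 21)
mids = 0.5 * (edges[:-1] + edges[1:])
f_true = 400 * np.exp(-((mids - 0.5) ** 2) / (2 * 0.1 ** 2))   # stand-in sphere-radius counts
kernel = wicksell_kernel(edges)
observed = rng.poisson(kernel @ f_true).astype(float)          # simulated profile-radius counts
print(ems(observed, kernel).round(1))
```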

12.
This article deals with the problem of Bayesian inference concerning the common scale parameter of several Pareto distributions. Bayesian hypothesis testing of, and Bayesian interval estimation for, the common scale parameter are given. Numerical studies, including a comparison study, a simulation study, and a practical application, are provided in order to illustrate the procedures and to demonstrate the performance, advantages, and merits of the Bayesian procedures over the classical and generalized variable procedures.

13.
The performance of paired versus joint ranking procedures for pairwise multiple comparisons is investigated using approximate Bahadur efficiency. When the populations to be compared are widely separated, or when the data arise from a shift model with an underlying unimodal density, the paired ranking procedure is found to be better for comparing two adjacent populations, while the joint ranking procedure is more efficient for comparing the two most distant populations.

14.
The normalized maximum likelihood (NML) is a recent penalized likelihood that has properties that justify defining the amount of discrimination information (DI) in the data supporting an alternative hypothesis over a null hypothesis as the logarithm of an NML ratio, namely, the alternative hypothesis NML divided by the null hypothesis NML. The resulting DI, like the Bayes factor but unlike the P‐value, measures the strength of evidence for an alternative hypothesis over a null hypothesis such that the probability of misleading evidence vanishes asymptotically under weak regularity conditions and such that evidence can support a simple null hypothesis. Instead of requiring a prior distribution, the DI satisfies a worst‐case minimax prediction criterion. Replacing a (possibly pseudo‐) likelihood function with its weighted counterpart extends the scope of the DI to models for which the unweighted NML is undefined. The likelihood weights leverage side information, either in data associated with comparisons other than the comparison at hand or in the parameter value of a simple null hypothesis. Two case studies, one involving multiple populations and the other involving multiple biological features, indicate that the DI is robust to the type of side information used when that information is assigned the weight of a single observation. Such robustness suggests that very little adjustment for multiple comparisons is warranted if the sample size is at least moderate.

15.
An approach to the multiple-response robust parameter design problem, based on a methodology by Peterson (2000), is presented. The approach is Bayesian and consists of maximizing the posterior predictive probability that the process satisfies a set of constraints on the responses. In order to find a solution that is robust to variation in the noise variables, the predictive density is integrated not only with respect to the response variables but also with respect to the assumed distribution of the noise variables. The maximization problem involves repeated Monte Carlo integrations, and two different methods for solving it are evaluated. Matlab code was written that rapidly finds an optimal (robust) solution when one exists. Two examples taken from the literature are used to illustrate the proposed method.
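
A minimal sketch of the optimization structure described above: for each candidate setting of the control factor, the probability that all responses meet their specifications is estimated by Monte Carlo, integrating over both the predictive distribution of the responses and the assumed distribution of the noise variable, and the candidate with the highest probability is selected. The quadratic response models, noise distribution and specification limits are invented for illustration; Peterson's approach would use the posterior predictive distribution of a fitted multivariate regression model instead.

```python
import numpy as np

rng = np.random.default_rng(7)

def predictive_responses(x, z, n_draws):
    """Draws of two responses given control factor x and noise variable z (toy model)."""
    mean1 = 10.0 - (x - 1.0) ** 2 + 0.8 * z * x        # noise effect depends on x -> robustness matters
    mean2 = 3.0 + 0.5 * x + 0.3 * z
    return (mean1 + rng.normal(0, 0.5, n_draws),
            mean2 + rng.normal(0, 0.3, n_draws))

def prob_conformance(x, n_noise=200, n_draws=50):
    """Monte Carlo estimate of P(all responses within specification) at control setting x."""
    ok = 0.0
    for z in rng.normal(0.0, 1.0, n_noise):             # assumed noise-variable distribution
        y1, y2 = predictive_responses(x, z, n_draws)
        ok += np.mean((y1 >= 9.0) & (y2 <= 4.5))        # joint specification limits
    return ok / n_noise

candidates = np.linspace(-2.0, 2.0, 21)
probs = [prob_conformance(x) for x in candidates]
print(candidates[int(np.argmax(probs))], max(probs))
```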

16.
In survival or reliability data analysis, it is often useful to estimate the quantiles of the lifetime distribution, such as the median time to failure. Different nonparametric methods can construct confidence intervals for the quantiles of the lifetime distribution, some of which are implemented in commonly used statistical software packages. Here we investigate the performance of different interval estimation procedures under a variety of settings with different censoring schemes. Our main objectives are to (i) evaluate the performance of confidence intervals based on the transformation approach commonly used in statistical software, (ii) introduce a new density-estimation-based approach to obtain confidence intervals for survival quantiles, and (iii) compare it with the transformation approach. We provide a comprehensive comparative study and offer some practical recommendations based on our results. Numerical examples are presented to illustrate the methodologies developed.
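
The sketch below is offered only as a simple baseline for interval-estimating a survival quantile from right-censored data: a hand-coded Kaplan–Meier estimator combined with a nonparametric bootstrap of the median survival time. It is neither the transformation-based nor the density-estimation-based procedure studied in the paper, and the simulated exponential lifetimes and censoring times are illustrative.

```python
import numpy as np

def km_quantile(times, events, q=0.5):
    """Smallest time at which the Kaplan-Meier survival curve drops to <= 1 - q."""
    order = np.argsort(times)
    t, d = times[order], events[order]
    surv, at_risk = 1.0, len(t)
    for ti, di in zip(t, d):
        if di:                                     # event: update the survival estimate
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
        if surv <= 1.0 - q:
            return ti
    return np.inf                                  # quantile not reached within follow-up

rng = np.random.default_rng(8)
n = 150
true_times = rng.exponential(10.0, n)
censor = rng.exponential(25.0, n)
times = np.minimum(true_times, censor)
events = true_times <= censor                      # True if the failure was observed

boot = [km_quantile(times[idx], events[idx])
        for idx in (rng.integers(0, n, n) for _ in range(1000))]
print(km_quantile(times, events), np.percentile(boot, [2.5, 97.5]))
```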

17.
18.
Importance measures are used to estimate the relative importance of components to system reliability. Phased-mission systems (PMS) have many components working in several phases with different success criteria, and the structural importance of a component differs across phases. In addition, the reliability parameters of components in a PMS are usually uncertain in practice. Therefore, existing component importance measures based on either the partial derivative of the system structure function or component structural importance can be difficult to apply in PMS importance analysis. This paper presents a simulation method to evaluate component global importance for PMS based on the variance-based method and the Monte Carlo method. To facilitate practical use, we further discuss the relationship between the component global importance and its possible influencing factors, and present a fitted model for evaluating component global importance. Finally, two examples are given to show that the fitted model yields quite reasonable component importance values.
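
A minimal sketch of a variance-based (Sobol-type) global importance computed by Monte Carlo with the pick-freeze estimator. The system function below is a toy two-phase structure (phase 1 needs components 1 and 2 in series, phase 2 needs component 2 or 3), and the uniform distributions over the component reliabilities are illustrative; a full phased-mission model would additionally track component-state dependence across phases, which this toy function ignores.

```python
import numpy as np

def system_reliability(p):
    """Columns of p are component reliabilities; toy two-phase series/parallel structure."""
    p1, p2, p3 = p.T
    return (p1 * p2) * (1.0 - (1.0 - p2) * (1.0 - p3))

def first_order_sobol(func, lows, highs, n=100_000, seed=9):
    """First-order Sobol indices via the pick-freeze Monte Carlo estimator."""
    rng = np.random.default_rng(seed)
    d = len(lows)
    A = rng.uniform(lows, highs, (n, d))
    B = rng.uniform(lows, highs, (n, d))
    fA, fB = func(A), func(B)
    total_var = np.var(np.concatenate([fA, fB]))
    indices = []
    for i in range(d):
        ABi = A.copy(); ABi[:, i] = B[:, i]          # "freeze" all inputs except input i
        indices.append(np.mean(fB * (func(ABi) - fA)) / total_var)
    return np.array(indices)

lows = np.array([0.7, 0.7, 0.5])                      # uncertainty ranges of the reliabilities
highs = np.array([0.99, 0.99, 0.95])
print(first_order_sobol(system_reliability, lows, highs).round(3))
```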

19.
In this paper, a maximum value test is proposed for the two-sample problem with lifetime data. The test is distribution-free under non-censoring but is not distribution-free under censoring. A formula for the limit distribution of the proposed maximum value test is given in the general case, and the distribution of the test statistic is studied experimentally. We also propose an approximation for calculating the p-value of the maximum value test as an alternative to Monte Carlo simulation. The test is useful and applicable when choosing among the logrank test, the Cox–Mantel test, the Q test and generalized Wilcoxon tests, for instance Gehan's generalized Wilcoxon test and Peto and Peto's generalized Wilcoxon test.

20.
Image analysis frequently deals with shape estimation and image reconstruction. The objects of interest in these problems may be thought of as random sets, and one is interested in finding a representative, or expected, set. We consider a definition of set expectation using oriented distance functions and study the properties of the associated empirical set. Conditions are given such that the empirical average is consistent, and a method to calculate a confidence region for the expected set is introduced. The proposed method is applied to both real and simulated data examples.
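
A minimal sketch of the set-averaging idea described above: each random set (here a binary mask) is represented by its oriented (signed) distance function, the distance functions are averaged over the sample, and the empirical mean set is recovered by thresholding the average at zero. The random discs with jittered centres and radii are invented illustration data, and the confidence-region construction of the paper is not reproduced.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def oriented_distance(mask):
    """Signed distance: negative inside the set, positive outside, ~zero on the boundary."""
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

def empirical_mean_set(masks):
    """Average the oriented distance functions and threshold at zero."""
    avg = np.mean([oriented_distance(m) for m in masks], axis=0)
    return avg <= 0.0

rng = np.random.default_rng(10)
yy, xx = np.mgrid[0:64, 0:64]
masks = []
for _ in range(25):                                   # noisy discs with jittered centres/radii
    cy, cx = 32 + rng.normal(0, 2, 2)
    r = 12 + rng.normal(0, 1.5)
    masks.append((yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2)

mean_set = empirical_mean_set(masks)
print(mean_set.sum(), "pixels in the empirical mean set")
```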
