Similar Documents
20 similar documents retrieved.
1.
The authors develop default priors for the Gaussian random field model that includes a nugget parameter accounting for the effects of microscale variations and measurement errors. They present the independence Jeffreys prior, the Jeffreys-rule prior and a reference prior, and study posterior propriety of these and related priors. They show that the uniform prior for the correlation parameters yields an improper posterior. In the case of known regression and variance parameters, they derive the Jeffreys prior for the correlation parameters. They prove posterior propriety and show that the predictive distributions at ungauged locations have finite variance. Moreover, they show that the proposed priors have good frequentist properties, except for those based on the marginal Jeffreys-rule prior for the correlation parameters, and illustrate their approach by analyzing a dataset of zinc concentrations along the river Meuse. The Canadian Journal of Statistics 40: 304–327; 2012 © 2012 Statistical Society of Canada
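The covariance structure this abstract refers to, a spatial correlation plus a nugget term, can be sketched in a few lines. The exponential correlation family and all parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def exp_cov_with_nugget(coords, sigma2=1.0, phi=1.0, tau2=0.1):
    """Covariance matrix C(h) = sigma2 * exp(-h / phi) + tau2 * 1{h == 0}
    for a Gaussian random field with nugget tau2.  The exponential
    correlation family and the parameter values are illustrative."""
    coords = np.asarray(coords, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]   # pairwise differences
    h = np.sqrt((diff ** 2).sum(axis=-1))            # Euclidean distances
    return sigma2 * np.exp(-h / phi) + tau2 * np.eye(len(coords))

C = exp_cov_with_nugget([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
```

The nugget appears only on the diagonal, which is what separates measurement-error variance from the spatially structured variance.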

2.
The focus of this paper is objective priors for spatially correlated data with nugget effects. In addition to the Jeffreys priors and commonly used reference priors, two types of “exact” reference priors are derived based on improper marginal likelihoods. An “equivalence” theorem is developed in the sense that the expectation of any function of the score functions of the marginal likelihood function can be taken under marginal likelihoods. Interestingly, these two types of reference priors are identical.

3.
The author shows how geostatistical data that contain measurement errors can be analyzed objectively by a Bayesian approach using Gaussian random fields. He proposes a reference prior and two versions of Jeffreys' prior for the model parameters. He studies the propriety and the existence of moments for the resulting posteriors. He also establishes the existence of the mean and variance of the predictive distributions based on these default priors. His reference prior derives from a representation of the integrated likelihood that is particularly convenient for computation and analysis. He further shows that these default priors are not very sensitive to some aspects of the design and model, and that they have good frequentist properties. Finally, he uses a data set of carbon/nitrogen ratios from an agricultural field to illustrate his approach.

4.
5.
6.
This paper investigates statistical issues that arise in interlaboratory studies known as Key Comparisons when one has to link several comparisons to or through existing studies. An approach to the analysis of such data is proposed using Gaussian distributions with heterogeneous variances. We develop conditions for the set of sufficient statistics to be complete and for the uniqueness of uniformly minimum variance unbiased estimators (UMVUE) of the contrast parametric functions. New procedures are derived for estimating these functions with estimates of their uncertainty. These estimates lead to associated confidence intervals for the laboratory (or study) contrasts. Several examples demonstrate statistical inference for contrasts based on linkage through the pilot laboratories. Monte Carlo simulation results on the performance of approximate confidence intervals are also reported.
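A laboratory contrast of the kind this abstract discusses is a linear combination of laboratory means; under independence its variance follows directly from the heterogeneous per-laboratory variances. A generic sketch (the function name and the data are made up for illustration):

```python
def contrast_estimate(means, variances, weights):
    """Estimate a contrast sum(w_i * mu_i) across laboratories with
    heterogeneous variances, together with its variance under
    independence.  Generic sketch; not the paper's procedures."""
    est = sum(w * m for w, m in zip(weights, means))
    var = sum(w * w * v for w, v in zip(weights, variances))
    return est, var

# difference of two laboratory means, e.g. linked through a pilot laboratory
est, var = contrast_estimate([10.2, 12.4], [0.5, 0.3], [1.0, -1.0])
```

An approximate confidence interval then takes the usual form est ± z * sqrt(var).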

7.
This paper presents a new Laplacian approximation to the posterior density of η = g(θ). It has a simpler analytical form than that described by Leonard et al. (1989). The approximation derived by Leonard et al. requires a conditional information matrix Rη to be positive definite for every fixed η. However, in many cases, not all Rη are positive definite. In such cases, the computations of their approximations fail, since the approximation cannot be normalized. However, the new approximation may be modified so that the corresponding conditional information matrix can be made positive definite for every fixed η. In addition, a Bayesian procedure for contingency-table model checking is provided. An example of cross-classification between the educational level of a wife and fertility-planning status of couples is used for explanation. Various Laplacian approximations are computed and compared in this example and in an example of public school expenditures in the context of Bayesian analysis of the multiparameter Fisher-Behrens problem.

8.
Consider the problem of estimating the mean of a p (≥3)-variate multi-normal distribution with identity variance-covariance matrix and with unweighted sum of squared error loss. A class of minimax, noncomparable (i.e. no estimate in the class dominates any other estimate in the class) estimates is proposed; the class contains rules dominating the simple James-Stein estimates. The estimates are essentially smoothed versions of the scaled, truncated James-Stein estimates studied by Efron and Morris. Explicit and analytically tractable expressions for their risks are obtained and are used to give guidelines for selecting estimates within the class.
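The simple James-Stein estimate that the proposed class dominates has a short closed form. A minimal positive-part sketch under the abstract's setup (identity covariance, shrinkage toward the origin); this is the standard textbook rule, not the smoothed class of the paper:

```python
import numpy as np

def james_stein_plus(x):
    """Positive-part James-Stein estimate of a p-variate normal mean
    with identity covariance, shrinking toward the origin.  Standard
    textbook form, not the paper's smoothed estimates."""
    x = np.asarray(x, dtype=float)
    p = x.size
    if p < 3:
        raise ValueError("shrinkage dominates the MLE only for p >= 3")
    factor = max(0.0, 1.0 - (p - 2) / float(x @ x))
    return factor * x

est = james_stein_plus([3.0, 0.0, 0.0, 0.0])   # factor = 1 - 2/9 = 7/9
```

The truncation at zero ("positive part") is what the scaled, truncated variants in the abstract smooth out.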

9.
The class of joint mean-covariance models uses the modified Cholesky decomposition of the within-subject covariance matrix in order to arrive at an unconstrained, statistically meaningful reparameterisation. The new parameterisation of the covariance matrix has two sets of parameters that separately describe the variances and correlations. Thus, together with the mean or regression parameters, these models have three sets of distinct parameters. In order to alleviate the problem of inefficient estimation and downward bias in the variance estimates, inherent in the maximum likelihood estimation procedure, the usual REML estimation procedure adjusts for the degrees of freedom lost due to the estimation of the mean parameters. Because of the parameterisation of the joint mean-covariance models, it is possible to adapt the usual REML procedure in order to estimate the variance (correlation) parameters by taking into account the degrees of freedom lost by the estimation of both the mean and correlation (variance) parameters. To this end, here we propose adjustments to the estimation procedures based on the modified and adjusted profile likelihoods. The methods are illustrated by an application to a real data set and simulation studies. The Canadian Journal of Statistics 40: 225–242; 2012 © 2012 Statistical Society of Canada
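The modified Cholesky reparameterisation in question factors the covariance as T Σ Tᵀ = D, with T unit lower-triangular, so that the below-diagonal entries of T and the log-diagonal of D are unconstrained. A minimal numerical sketch of that decomposition (not the authors' code, and with an arbitrary example matrix):

```python
import numpy as np

def modified_cholesky(sigma):
    """Return (T, D) with T unit lower-triangular and D diagonal such
    that T @ sigma @ T.T == D.  The negatives of T's below-diagonal
    entries act as generalized autoregressive coefficients, and
    log(diag(D)) gives unconstrained innovation-variance parameters.
    Generic sketch of the reparameterisation, not the paper's code."""
    L = np.linalg.cholesky(np.asarray(sigma, dtype=float))  # sigma = L L'
    d = np.diag(L)
    C = L / d                # unit lower-triangular, so sigma = C D C'
    T = np.linalg.inv(C)
    return T, np.diag(d ** 2)

T, D = modified_cholesky([[4.0, 2.0], [2.0, 3.0]])
```

Because T and log(diag(D)) are unconstrained, they can be modelled by separate regressions, which is what makes the three parameter sets of the abstract possible.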

10.
A general methodology is presented for finding suitable Poisson log-linear models with applications to multiway contingency tables. Mixtures of multivariate normal distributions are used to model prior opinion when a subset of the regression vector is believed to be nonzero. This prior distribution is studied for two- and three-way contingency tables, in which the regression coefficients are interpretable in terms of odds ratios in the table. Efficient and accurate schemes are proposed for calculating the posterior model probabilities. The methods are illustrated for a large number of two-way simulated tables and for two three-way tables. These methods appear to be useful in selecting the best log-linear model and in estimating parameters of interest that reflect uncertainty in the true model.

11.
The authors develop a Markov model for the analysis of longitudinal categorical data which facilitates modelling both marginal and conditional structures. A likelihood formulation is employed for inference, so the resulting estimators enjoy optimal properties such as efficiency and consistency, and remain consistent when data are missing at random. Simulation studies demonstrate that the proposed method performs well under a variety of situations. Application to data from a smoking prevention study illustrates the utility of the model and the interpretation of covariate effects. The Canadian Journal of Statistics © 2009 Statistical Society of Canada

12.
Clinical trials usually involve efficient and ethical objectives such as maximizing the power and minimizing the total number of failures. Interim analysis is now a standard technique in practice to achieve these objectives. Randomized urn models have been extensively studied in the literature. In this paper, we propose to perform interim analysis on clinical trials based on urn models and study its properties. We show that the urn composition, the allocation of patients and the parameter estimators can be approximated by a joint Gaussian process. Consequently, sequential test statistics of the proposed procedure converge to a Brownian motion in distribution and asymptotically satisfy the canonical joint distribution defined in Jennison & Turnbull (Jennison & Turnbull 2000. Group Sequential Methods with Applications to Clinical Trials, Chapman and Hall/CRC). These results provide a solid foundation and open the door to performing interim analysis on randomized clinical trials with urn models in practice. Furthermore, we demonstrate our proposal through examples and simulations by applying sequential monitoring and stochastic curtailment techniques. The Canadian Journal of Statistics 40: 550–568; 2012 © 2012 Statistical Society of Canada
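As a toy illustration of urn-based allocation (not the specific randomized urn design or the sequential statistics of the paper), a two-colour Pólya urn can be simulated in a few lines; the initial composition and the number of added balls below are arbitrary choices:

```python
import random

def polya_urn_allocation(n_patients, init=(1, 1), add=1, seed=0):
    """Allocate n_patients between two arms by drawing from a Polya urn:
    pick an arm with probability proportional to the current urn
    composition, then add `add` extra balls of the drawn colour.
    Toy illustration only; not the paper's randomized urn design."""
    rng = random.Random(seed)
    urn = list(init)
    counts = [0, 0]
    for _ in range(n_patients):
        arm = 0 if rng.random() < urn[0] / (urn[0] + urn[1]) else 1
        counts[arm] += 1
        urn[arm] += add
    return counts

counts = polya_urn_allocation(100)
```

Monitoring such an allocation process at interim looks is exactly the setting in which the Gaussian-process approximation of the abstract becomes useful.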

13.
We develop clustering procedures for longitudinal trajectories based on a continuous-time hidden Markov model (CTHMM) and a generalized linear observation model. Specifically, in this article we carry out finite and infinite mixture model-based clustering for a CTHMM and achieve inference using Markov chain Monte Carlo (MCMC). For a finite mixture model with a prior on the number of components, we implement reversible-jump MCMC to facilitate the trans-dimensional move between models with different numbers of clusters. For a Dirichlet process mixture model, we utilize restricted Gibbs sampling split–merge proposals to improve the performance of the MCMC algorithm. We apply our proposed algorithms to simulated data as well as a real-data example, and the results demonstrate the desired performance of the new sampler.

14.
One feature of the usual polychotomous logistic regression model for categorical outcomes is that a covariate must be included in all the regression equations. If a covariate is not important in all of them, the procedure will estimate unnecessary parameters. More flexible approaches allow different subsets of covariates in different regressions. One alternative uses individualized regressions which express the polychotomous model as a series of dichotomous models. Another uses a model in which a reduced set of parameters is simultaneously estimated for all the regressions. Large-sample efficiencies of these procedures were compared in a variety of circumstances in which there was a common baseline category for the outcome and the covariates were normally distributed. For a correctly specified model, the reduced estimates were over 100% efficient for nonzero slope parameters and up to 500% efficient when the baseline frequency and the effect of interest were small. The individualized estimates could have efficiencies less than 50% when the effect of interest was large, but were also up to 130% efficient when the baseline frequency was large and the effect of interest was small. Efficiency was usually enhanced by correlation among the covariates. For an underspecified reduced model, asymptotic bias in the reduced estimates was approximately proportional to the magnitude of the omitted parameter and to the reciprocal of the baseline frequency.

15.
The number of components is an important feature in finite mixture models. Because of the irregularity of the parameter space, the log-likelihood-ratio statistic does not have a chi-square limit distribution. It is very difficult to find a test with a specified significance level, and this is especially true for testing k − 1 versus k components. Most of the existing work has concentrated on finding a comparable approximation to the limit distribution of the log-likelihood-ratio statistic. In this paper, we use a statistic similar to the usual log-likelihood ratio, but its null distribution is asymptotically normal. A simulation study indicates that the method has good power at detecting extra components. We also discuss how to improve the power of the test, and some simulations are performed.

16.
As cancer treatments progress, a certain number of cancers are curable if diagnosed early. In population-based cancer survival studies, cure is said to occur when the mortality rate of the cancer patients returns to the same level as that expected for the general cancer-free population. Estimates of the cure fraction are of interest to both cancer patients and health policy makers. Mixture cure models have been widely used because the model is easy to interpret by separating the patients into two distinct groups. Usually parametric models are assumed for the latent distribution of the uncured patients. The estimate of the cure fraction from the mixture cure model may be sensitive to misspecification of the latent distribution. We propose a Bayesian approach to the mixture cure model for population-based cancer survival data, which can be extended to county-level cancer survival data. Instead of modeling the latent distribution by a fixed parametric distribution, we use a finite mixture of the union of the lognormal, loglogistic, and Weibull distributions. The parameters are estimated using the Markov chain Monte Carlo method. A simulation study shows that the Bayesian method using a finite mixture latent distribution provides robust inference of parameter estimates. The proposed Bayesian method is applied to relative survival data for colon cancer patients from the Surveillance, Epidemiology, and End Results (SEER) Program to estimate the cure fractions. The Canadian Journal of Statistics 40: 40–54; 2012 © 2012 Statistical Society of Canada
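The mixture cure model's population survival function has the simple form S(t) = π + (1 − π) S_u(t), where π is the cure fraction and S_u is the latent survival of the uncured. A minimal sketch using a Weibull S_u (one of the component families the abstract lists); the parameter values are illustrative, not estimates from SEER data:

```python
import math

def mixture_cure_survival(t, cure_frac, scale=2.0, shape=1.5):
    """Population survival under a mixture cure model,
        S(t) = pi + (1 - pi) * S_u(t),
    with pi the cure fraction and S_u a Weibull survival for the
    uncured patients.  Parameter values are illustrative only."""
    s_uncured = math.exp(-((t / scale) ** shape))
    return cure_frac + (1.0 - cure_frac) * s_uncured
```

As t grows, S(t) plateaus at π, which is exactly the "mortality returns to the cancer-free level" interpretation of cure in the abstract.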

17.
18.
A two-stage procedure is described for assessing subject-specific and marginal agreement for data from a test-retest reliability study of a binary classification procedure. Subject-specific agreement is parametrized through the log odds ratio, while marginal agreement is reflected by the log ratio of the off-diagonal Poisson means. A family of agreement measures in the interval [-1, 1] is presented for both types of agreement. The conditioning argument described facilitates exact inference. The proposed methodology is demonstrated by way of an example involving hypothetical data chosen for illustrative purposes, and data from a National Health Survey Study (Rogot and Goldberg 1966).
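The two agreement parameters named in the abstract have simple sample analogues on a 2×2 test-retest table. A bare-bones sketch (no continuity correction and no exact conditional inference; the example counts are made up):

```python
import math

def agreement_measures(table):
    """For a 2x2 test-retest table [[a, b], [c, d]], return the sample
    log odds ratio log(a*d / (b*c)), a subject-specific agreement
    measure, and the log ratio of the off-diagonal counts log(b / c),
    a marginal agreement measure.  Plain sample analogues only; the
    paper's exact conditional inference is not attempted here."""
    (a, b), (c, d) = table
    return math.log(a * d / (b * c)), math.log(b / c)

log_or, log_marg = agreement_measures([[20, 5], [10, 40]])
```

A log marginal ratio near zero indicates the two administrations disagree symmetrically, i.e. the margins agree even if individual subjects do not.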

19.
Let X1, X2, …, Xn be independently, identically distributed N(i, 1) random variables, where i = 0, ±1, ±2, … Hammersley (1950) showed that d = [X̄n], the nearest integer to the sample mean, is the maximum likelihood estimator of i. Khan (1973) showed that d is minimax and admissible with respect to zero-one loss. This note now proves a conjecture of Stein to the effect that in the class of integer-valued estimators d is minimax and admissible under squared-error loss.
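The estimator in this abstract is one line of code: round the sample mean to the nearest integer. A sketch (ties are broken upward here, an arbitrary convention the abstract does not specify):

```python
import math

def integer_mean_mle(xs):
    """Nearest-integer estimator d = [x-bar] of an integer-valued
    normal mean, per the Hammersley (1950) setup in the abstract.
    Ties at .5 are broken upward, an arbitrary convention."""
    xbar = sum(xs) / len(xs)
    return math.floor(xbar + 0.5)
```

Restricting to integer-valued estimators is what makes the rounding step, rather than the raw mean, the MLE.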

20.
In this paper we consider a system which has three subsystems A, B and C. A is a one-unit system, B is a two-unit system, and C is exposed to a damage process. The units of B have exponential lifetimes. In model 1, A has a general lifetime and the damage process of C is Poisson; in model 2, A has an exponential lifetime and C is exposed to a renewal damage process. Introducing a repair facility which repairs all the failures one by one, this paper presents the joint Laplace-Stieltjes transforms of the up and down times. Marginal down-time distributions are calculated when there exists a repair facility for every damage.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号