Similar Literature
20 similar documents found.
1.
This paper addresses the problem of obtaining maximum likelihood estimates for the parameters of the Pearson Type I distribution (beta distribution with unknown end points and shape parameters). Since they do not seem to have appeared in the literature, the likelihood equations and the information matrix are derived. The regularity conditions which ensure asymptotic normality and efficiency are examined, and some apparent conflicts in the literature are noted. To ensure regularity, the shape parameters must be greater than two, giving an (asymmetrical) bell-shaped distribution with high contact in the tails. A numerical investigation was carried out to explore the bias and variance of the maximum likelihood estimates and their dependence on sample size. The numerical study indicated that only for large samples (n ≥ 1000) does the bias in the estimates become small and the Cramér–Rao bound give a good approximation for their variance. The likelihood function has a global maximum which corresponds to parameter estimates that are inadmissible. Useful parameter estimates can be obtained at a local maximum, which is sometimes difficult to locate when the sample size is small.
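
As a concrete illustration of the estimation problem, the sketch below numerically locates a local maximum of the four-parameter beta likelihood with scipy, constraining the end points to bracket the sample; the starting values, function names and simulated data are illustrative assumptions, not the paper's procedure.

```python
# A minimal sketch of locating a local maximum of the four-parameter
# beta (Pearson Type I) likelihood; not the paper's algorithm.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta as beta_dist

def neg_log_lik(theta, x):
    p, q, a, b = theta          # shapes p, q and end points a < b
    if p <= 0 or q <= 0 or a >= x.min() or b <= x.max():
        return np.inf           # keep the sample strictly inside (a, b)
    return -np.sum(beta_dist.logpdf(x, p, q, loc=a, scale=b - a))

rng = np.random.default_rng(0)
x = beta_dist.rvs(3.0, 4.0, loc=1.0, scale=5.0, size=500, random_state=rng)

# Start the search away from the boundary; the global maximum (at the
# sample extremes) is inadmissible, so a local maximum is what we want.
theta0 = np.array([2.5, 2.5, x.min() - 0.5, x.max() + 0.5])
res = minimize(neg_log_lik, theta0, args=(x,), method="Nelder-Mead")
print(res.x)                    # estimates of (p, q, a, b)
```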

2.
Data are increasingly being collected in the form of images, especially in fields using remote sensing and microscopy. Statisticians are becoming interested in developing techniques to handle the highly structured data of images. Statistical work in this area is surveyed, and two problems are discussed in more detail. The first is a form of image segmentation: classifying the pixels of a satellite picture by land use. The second is the summarization of electron micrographs.

3.
In this paper, we consider the Bayesian analysis of competing risks data, when the data are partially complete in both time and type of failures. It is assumed that the latent cause of failures have independent Weibull distributions with the common shape parameter, but different scale parameters. When the shape parameter is known, it is assumed that the scale parameters have Beta–Gamma priors. In this case, the Bayes estimates and the associated credible intervals can be obtained in explicit forms. When the shape parameter is also unknown, it is assumed that it has a very flexible log-concave prior density functions. When the common shape parameter is unknown, the Bayes estimates of the unknown parameters and the associated credible intervals cannot be obtained in explicit forms. We propose to use Markov Chain Monte Carlo sampling technique to compute Bayes estimates and also to compute associated credible intervals. We further consider the case when the covariates are also present. The analysis of two competing risks data sets, one with covariates and the other without covariates, have been performed for illustrative purposes. It is observed that the proposed model is very flexible, and the method is very easy to implement in practice.  相似文献   
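
A minimal sketch of what such a sampler could look like, under simplifying assumptions not taken from the paper: complete data (every failure time and cause observed), independent Gamma priors on the scale parameters in place of the Beta–Gamma prior, and a random-walk Metropolis step on the log of the common shape.

```python
# Hybrid Gibbs/Metropolis sketch for Weibull competing risks with a
# common shape alpha and cause-specific rates lam1, lam2; priors and
# hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

# Simulated complete data: time t and cause d in {1, 2}.
alpha_true, lam_true = 1.5, np.array([0.8, 0.4])
n = 300
t1 = rng.weibull(alpha_true, n) / lam_true[0] ** (1 / alpha_true)
t2 = rng.weibull(alpha_true, n) / lam_true[1] ** (1 / alpha_true)
t = np.minimum(t1, t2)
d = np.where(t1 <= t2, 1, 2)

a0, b0 = 1.0, 1.0                      # Gamma prior for each rate
n1, n2 = np.sum(d == 1), np.sum(d == 2)

def log_post_alpha(alpha, lam):
    # Competing-risks log likelihood in alpha, plus a flat-ish prior
    return (n * np.log(alpha) + (alpha - 1) * np.log(t).sum()
            - lam.sum() * np.sum(t ** alpha) - 0.001 * alpha)

alpha, lam = 1.0, np.array([1.0, 1.0])
draws = []
for _ in range(5000):
    # Gibbs step: the rates are conjugate given alpha
    s = np.sum(t ** alpha)
    lam = rng.gamma([a0 + n1, a0 + n2], 1.0 / (b0 + s))
    # Metropolis step: random walk on log(alpha), with Jacobian term
    prop = alpha * np.exp(0.1 * rng.normal())
    log_r = (log_post_alpha(prop, lam) - log_post_alpha(alpha, lam)
             + np.log(prop / alpha))
    if np.log(rng.uniform()) < log_r:
        alpha = prop
    draws.append((alpha, *lam))

burn = np.array(draws[1000:])
print(burn.mean(axis=0))               # posterior means of (alpha, lam1, lam2)
```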

4.
We consider hidden Markov models with an unknown number of regimes for the segmentation of the pixel intensities of digital images that consist of a small set of colours. New reversible jump Markov chain Monte Carlo algorithms to estimate both the dimension and the unknown parameters of the model are introduced. Parameters are updated by random walk Metropolis–Hastings moves, without updating the sequence of the hidden Markov chain. The segmentation (i.e. the estimation of the hidden regimes) is a further aim and is performed by means of a number of competing algorithms. We apply our Bayesian inference and segmentation tools to digital images, which are linearized through the Peano–Hilbert scan, and perform experiments and comparisons on both synthetic images and a real brain magnetic resonance image.
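
The Peano–Hilbert linearization mentioned above is a standard, self-contained step. The sketch below implements the classic iterative distance-to-coordinate conversion for a Hilbert curve and uses it to flatten a square image into the 1-D pixel sequence on which the hidden Markov chain would operate; the function names are illustrative.

```python
# Peano-Hilbert scan: classic distance-to-(x, y) conversion on a
# 2^k x 2^k grid, used to flatten an image into a 1-D pixel string.
import numpy as np

def d2xy(n, d):
    """Map a position d along the Hilbert curve to (x, y); n a power of 2."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                 # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(img):
    """Return the pixels of a square 2^k image in Hilbert-curve order."""
    n = img.shape[0]
    return np.array([img[y, x] for x, y in (d2xy(n, d) for d in range(n * n))])

img = np.arange(16).reshape(4, 4)
print(hilbert_scan(img))            # 1-D sequence preserving 2-D locality
```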

5.
We discuss the impact of misspecifying fully parametric proportional hazards and accelerated life models. For the uncensored case, misspecified accelerated life models give asymptotically unbiased estimates of the covariate effect, but the shape and scale parameters depend on the misspecification. In the censored case, the covariate, shape and scale parameters all differ. Parametric proportional hazards models do not have a sound justification for general use: estimates from misspecified models can be very biased, and misleading results for the shape of the hazard function can arise. Misspecified survival functions are more biased at the extremes than at the centre. Asymptotic and first-order results are compared. If a model is misspecified, the size of Wald tests will be underestimated. Use of the sandwich estimator of standard error gives tests of the correct size, but misspecification leads to a loss of power. Accelerated life models are more robust to misspecification because of their log-linear form. In preliminary data analysis, practitioners should investigate proportional hazards and accelerated life models; software is readily available for several such models.
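
The size distortion and its sandwich-based correction can be seen in a few lines. The sketch below deliberately fits an exponential model to Weibull data and compares the model-based standard error with the sandwich one; the parameterisation and data-generating choices are illustrative assumptions, not the paper's experiment.

```python
# Model-based vs. sandwich standard errors when an exponential model is
# (deliberately) misfit to Weibull data; theta = log(rate).
import numpy as np

rng = np.random.default_rng(2)
x = rng.weibull(0.7, 2000)          # true model is Weibull, not exponential

theta = np.log(1.0 / x.mean())      # exponential MLE of theta = log(rate)
score = 1.0 - np.exp(theta) * x     # per-observation score at the MLE
A = np.mean(np.exp(theta) * x)      # minus the mean Hessian (equals 1 at MLE)
B = np.mean(score ** 2)             # variability of the score

n = x.size
se_model = np.sqrt(1.0 / (n * A))           # correct only if the model holds
se_sandwich = np.sqrt(B / (n * A ** 2))     # robust to misspecification

print(se_model, se_sandwich)        # the sandwich SE is larger here, so Wald
                                    # tests based on it have the right size
```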

6.
The use of Bayesian models for the reconstruction of images degraded by both some blurring function H and the presence of noise has become popular in recent years. Making an analogy between classical degradation processes and resampling, we propose a Bayesian model for generating finer resolution images. The approach involves defining resampling, or aggregation, as a linear operator applied to an original picture to produce derived lower resolution data which represent our available experimental information. Within this framework, the operation of making inference on the original data can be viewed as an inverse linear transformation problem. This problem, formalized through Bayes' theorem, can be solved by the classical maximum a posteriori estimation procedure. Image local characteristics are assumed to follow a Gaussian Markov random field. Under some mild assumptions, simple, iterative and local operations are involved, making parallel 'relaxation' processing feasible. Experimental results are shown for some images, for which good subsampling estimates are obtained.
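
A minimal sketch of the MAP idea, with plain gradient descent standing in for the paper's local relaxation updates: the aggregation operator averages k × k blocks, the prior gradient is a discrete Laplacian (periodic boundary for brevity), and all constants are illustrative assumptions.

```python
# MAP estimation of a finer-resolution image x from block-averaged data
# y = Hx + noise, with a Gaussian MRF smoothness prior.
import numpy as np

k = 2                                # each data pixel averages a k x k block

def H(x):                            # aggregation: k x k block means
    m, n = x.shape
    return x.reshape(m // k, k, n // k, k).mean(axis=(1, 3))

def Ht(y):                           # adjoint: spread each residual back
    return np.kron(y, np.ones((k, k))) / k**2

def laplacian(x):                    # gradient of the pairwise GMRF penalty
    lap = 4 * x
    lap -= np.roll(x, 1, 0) + np.roll(x, -1, 0)
    lap -= np.roll(x, 1, 1) + np.roll(x, -1, 1)
    return lap

rng = np.random.default_rng(3)
truth = np.kron(rng.uniform(size=(8, 8)), np.ones((4, 4)))   # 32 x 32 scene
y = H(truth) + 0.01 * rng.normal(size=(16, 16))              # coarse data

sigma2, beta, step = 0.01**2, 0.5, 1e-4
x = Ht(y) * k**2                     # start from simple upsampling
for _ in range(500):
    grad = Ht(H(x) - y) / sigma2 + beta * laplacian(x)
    x -= step * grad

print(np.abs(x - truth).mean())      # finer-resolution MAP reconstruction
```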

7.
8.
This paper addresses the image modeling problem under the assumption that images can be represented by third-order, hidden Markov mesh random field models. The range of applications of the techniques described hereafter comprises the restoration of binary images, the modeling and compression of image data, as well as the segmentation of gray-level or multi-spectral images, and image sequences under the short-range motion hypothesis. We outline coherent approaches to both the problems of image modeling (pixel labeling) and estimation of model parameters (learning). We derive a real-time labeling algorithm, based on a maximum marginal a posteriori probability criterion, for a hidden third-order Markov mesh random field model. Our algorithm achieves minimum time and space complexities simultaneously, and we describe what we believe to be the most appropriate data structures to implement it. Critical aspects of the computer simulation of a real-time implementation are discussed, down to the computer code level. We develop an (unsupervised) learning technique by which the model parameters can be estimated without ground truth information. We lay bare the conditions under which our approach can be made time-adaptive in order to cope with short-range motion in dynamic image sequences. We present extensive experimental results for both static and dynamic images from a wide variety of sources. They comprise standard, infra-red and aerial images, as well as a sequence of ultrasound images of a fetus and a series of frames from a motion picture sequence. These experiments demonstrate that the method is subjectively relevant to the problems of image restoration, segmentation and modeling.

9.
This paper compares minimum distance estimation with best linear unbiased estimation to determine which technique provides the more accurate estimates of the location and scale parameters of the three-parameter Pareto distribution. Two minimum distance estimators are developed for each of the three distance measures used (Kolmogorov, Cramér–von Mises, and Anderson–Darling), resulting in six new estimators. For a given sample size of 6 or 18 and shape parameter 1(1)4, the location and scale parameters are estimated. A Monte Carlo technique is used to generate the sample sets. The best linear unbiased estimator and the six minimum distance estimators provide parameter estimates based on each sample set. These estimates are compared using mean square error as the evaluation tool. Results show that the best linear unbiased estimator provided more accurate estimates of location and scale than did the minimum distance estimators tested.
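
As an illustration of the minimum distance idea, the sketch below minimizes the Kolmogorov distance between the empirical CDF and a Pareto CDF over location and scale with the shape held fixed, using the loc/scale convention of scipy.stats.pareto; it is not the particular pair of estimators developed in the paper.

```python
# Minimum Kolmogorov-distance estimation of Pareto location and scale
# with the shape parameter known; purely illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import pareto

shape = 2.0                               # known shape, as in the study

def ks_distance(params, x):
    loc, scale = params
    if scale <= 0 or loc + scale >= x.min():
        return np.inf                     # keep the sample inside the support
    xs = np.sort(x)
    n = xs.size
    F = pareto.cdf(xs, shape, loc=loc, scale=scale)
    i = np.arange(1, n + 1)
    return np.max(np.maximum(i / n - F, F - (i - 1) / n))

rng = np.random.default_rng(4)
x = pareto.rvs(shape, loc=1.0, scale=2.0, size=18, random_state=rng)

res = minimize(ks_distance, x0=[0.5, 1.0], args=(x,), method="Nelder-Mead")
print(res.x)                              # minimum-distance (loc, scale)
```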

10.
A large number of models have been derived from the two-parameter Weibull distribution, including the inverse Weibull (IW) model, which is found suitable for modeling complex failure data sets. In this paper, we present Bayesian inference for a mixture of two IW models. For this purpose, the Bayes estimates of the parameters of the mixture model, along with their posterior risks, are obtained under informative as well as non-informative priors. These estimates are attained for two cases: (a) when the shape parameter is known and (b) when all parameters are unknown. For the former case, Bayes estimates are obtained under three loss functions, while for the latter case only the squared error loss function is used. A simulation study is carried out to explore the numerical properties of the proposed Bayes estimators. A real-life data set is also analysed for both cases, and the parameters obtained when the shape parameter is known are assessed through a hypothesis-testing procedure.
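
For intuition about the mixture model itself (not the paper's Bayesian machinery), the sketch below fits a two-component inverse Weibull mixture with a known common shape by EM, where the M-step is available in closed form; the weights, scales and shape are illustrative assumptions.

```python
# EM sketch for a two-component inverse Weibull mixture with a known
# common shape; a frequentist stand-in for intuition about the model.
import numpy as np
from scipy.stats import invweibull

beta = 2.0                               # known common shape
rng = np.random.default_rng(5)

# Simulated mixture: weight 0.6 on scale 1, weight 0.4 on scale 4.
n = 1000
z = rng.uniform(size=n) < 0.6
t = np.where(z, invweibull.rvs(beta, scale=1.0, size=n, random_state=rng),
                invweibull.rvs(beta, scale=4.0, size=n, random_state=rng))

p, s1, s2 = 0.5, 0.5, 2.0                # initial weight and scales
for _ in range(200):
    # E-step: posterior membership probabilities
    f1 = p * invweibull.pdf(t, beta, scale=s1)
    f2 = (1 - p) * invweibull.pdf(t, beta, scale=s2)
    w = f1 / (f1 + f2)
    # M-step: closed form since the shape is known
    p = w.mean()
    s1 = (w.sum() / np.sum(w * t ** -beta)) ** (1 / beta)
    s2 = ((1 - w).sum() / np.sum((1 - w) * t ** -beta)) ** (1 / beta)

print(p, s1, s2)                         # recovered weight and scales
```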

11.
12.
In testing product reliability, there is often a critical cutoff level that determines whether a specimen is classified as failed. One consequence is that the number of degradation data collected varies from specimen to specimen. Information about the random sample size should be included in the model, and our study shows that it can be influential in estimating model parameters. Two-stage least squares (LS) and maximum modified likelihood (MML) estimation, which both assume fixed sample sizes, are commonly used for estimating parameters in the repeated measurements models typically applied to degradation data. However, the LS estimate is not consistent in the case of random sample sizes. This article derives the likelihood for the random sample size model and suggests using maximum likelihood (ML) for parameter estimation. Our simulation studies show that ML estimates have smaller biases and variances compared to the LS and MML estimates. All estimation methods can be greatly improved if the number of specimens increases from 5 to 10. A data set from a semiconductor application is used to illustrate our methods.

13.
Mis-specification analyses of gamma and Wiener degradation processes
Degradation models are widely used these days to assess the lifetime information of highly reliable products when there exist quality characteristics (QC) whose degradation over time can be related to the reliability of the product. In this study, motivated by laser data, we investigate the mis-specification effect on the prediction of a product's MTTF (mean time to failure) when the degradation model is wrongly fitted. More specifically, we derive an expression for the asymptotic distribution of the quasi-MLE (QMLE) of the product's MTTF when the true model is a gamma degradation process but is wrongly assumed to be a Wiener degradation process. The penalty for the model mis-specification can then be addressed sequentially. The result demonstrates that the effect on the accuracy of the product's MTTF prediction strongly depends on the ratio of the critical value to the scale parameter of the gamma degradation process. The effects on the precision of the product's MTTF prediction are serious when the shape and scale parameters of the gamma degradation process are large. We then carry out a simulation study to evaluate the penalty of the model mis-specification, which shows that the simulation results are quite close to the theoretical ones even when the sample size and termination time are not large. For the reverse mis-specification problem, i.e., when the true degradation is a Wiener process but is wrongly assumed to be a gamma degradation process, we carry out a Monte Carlo simulation study to examine the effect of the corresponding model mis-specification. The obtained results reveal that the effect of this model mis-specification is negligible.
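
The flavour of the mis-specification experiment can be reproduced in a short Monte Carlo sketch: simulate gamma-process degradation paths, record the empirical mean first-passage time over a critical level, and compare it with the MTTF implied by a quasi-fitted Wiener drift. The threshold and process parameters below are illustrative, not the paper's.

```python
# Monte Carlo sketch: data come from a gamma degradation process, but
# MTTF is predicted as if the process were Wiener (MTTF = omega / nu).
import numpy as np

rng = np.random.default_rng(6)
a, b = 2.0, 0.5            # gamma process: increments Gamma(a*dt, b)
omega = 10.0               # critical degradation level
dt, n_steps, n_paths = 0.1, 2000, 500

inc = rng.gamma(a * dt, b, size=(n_paths, n_steps))   # gamma increments
paths = np.cumsum(inc, axis=1)

# Empirical MTTF under the true gamma process: mean first-passage time.
first = np.argmax(paths >= omega, axis=1)
mttf_true = np.mean((first + 1) * dt)

# The quasi-MLE of the Wiener drift is the average degradation rate.
nu_hat = inc.mean() / dt
mttf_wiener = omega / nu_hat

print(mttf_true, mttf_wiener)   # the gap is the mis-specification penalty
```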

14.
Statistical image restoration techniques are oriented mainly toward modelling the image degradation process in order to recover the original image. This usually involves formulating a criterion function that will yield some optimal estimate of the desired image. Often these techniques assume that the point spread function is known both when the image is restored and when the smoothing parameter is estimated. In practice, however, this assumption may not hold. This paper investigates empirically the effect of mis-specifying the point spread function on some data-based estimates of the regularization parameter and hence on the image reconstructions. Comparisons of image reconstruction quality are based on the mean absolute difference in pixel intensities between the true and reconstructed images.
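
A minimal sketch of the same kind of experiment: blur an image with a true point spread function, deconvolve with mis-specified PSF widths via a Wiener filter, and score the reconstructions by mean absolute difference. The PSF family, widths and regularization constant are illustrative assumptions.

```python
# Blur with a true PSF, deconvolve with a mis-specified one, and score
# by mean absolute difference in pixel intensities.
import numpy as np

def gaussian_psf(shape, sigma):
    m, n = shape
    yy, xx = np.mgrid[:m, :n]
    g = np.exp(-(np.minimum(yy, m - yy) ** 2 + np.minimum(xx, n - xx) ** 2)
               / (2 * sigma ** 2))          # wrap-around centred kernel
    return g / g.sum()

def wiener_deconv(y, psf, reg=1e-2):
    Y, P = np.fft.fft2(y), np.fft.fft2(psf)
    return np.real(np.fft.ifft2(np.conj(P) * Y / (np.abs(P) ** 2 + reg)))

rng = np.random.default_rng(7)
truth = (rng.uniform(size=(64, 64)) > 0.8).astype(float)

psf_true = gaussian_psf(truth.shape, sigma=2.0)
y = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(psf_true)))
y += 0.01 * rng.normal(size=y.shape)

for sigma in (1.0, 2.0, 3.0):                 # 2.0 is the correct PSF width
    rec = wiener_deconv(y, gaussian_psf(truth.shape, sigma))
    print(sigma, np.abs(rec - truth).mean())  # mean absolute difference
```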

15.
Most real-world shapes and images are characterized by high variability: they are not rigid like crystals, for example, but they are strongly structured. Therefore, a fundamental task in the understanding and analysis of such image ensembles is the construction of models that incorporate both variability and structure in a mathematically precise way. The global shape models introduced in Grenander's general pattern theory are intended to do this. In this paper, we describe the representation of two-dimensional mitochondria and membranes in electron microscope photographs, and three-dimensional amoebae in optical sectioning microscopy. There are three kinds of variability in all of these patterns, which these representations accommodate. The first is the variability in shape and viewing orientation. For this, the typical structure is represented via linear, circular and spherical templates, with the variability accommodated via transformations applied to the templates. The transformations form groups: scale, rotation and translation. They are applied locally throughout the continuum and are of high dimension. The second is the textural variability; the inside and outside of these basic shapes are subject to random variation, as well as sensor noise. For this, statistical sensor models and Markov random field texture models are used to connect the constituent structures of the shapes to the measured data. The third variability type is associated with the fact that each scene is made up of a variable number of shapes; this number is not assumed to be known a priori. Each scene has a variable number of parameters encoding the transformations of the templates appropriate for that scene. For this, a single posterior distribution is defined over the countable union of spaces representing models of varying numbers of shapes. Bayesian inference is performed via computation of the conditional expectation of the parametrically defined shapes under the posterior. These conditional mean estimates are generated using jump-diffusion processes. Results for membranes, mitochondria and amoebae are shown.

16.
Effective implementation of likelihood inference in models for high‐dimensional data often requires a simplified treatment of nuisance parameters, with these having to be replaced by handy estimates. In addition, the likelihood function may have been simplified by means of a partial specification of the model, as is the case when composite likelihood is used. In such circumstances tests and confidence regions for the parameter of interest may be constructed using Wald type and score type statistics, defined so as to account for nuisance parameter estimation or partial specification of the likelihood. In this paper a general analytical expression for the required asymptotic covariance matrices is derived, and suggestions for obtaining Monte Carlo approximations are presented. The same matrices are involved in a rescaling adjustment of the log likelihood ratio type statistic that we propose. This adjustment restores the usual chi‐squared asymptotic distribution, which is generally invalid after the simplifications considered. The practical implication is that, for a wide variety of likelihoods and nuisance parameter estimates, confidence regions for the parameters of interest are readily computable from the rescaled log likelihood ratio type statistic as well as from the Wald type and score type statistics. Two examples, a measurement error model with full likelihood and a spatial correlation model with pairwise likelihood, illustrate and compare the procedures. Wald type and score type statistics may give rise to confidence regions with unsatisfactory shape in small and moderate samples. In addition to having satisfactory shape, regions based on the rescaled log likelihood ratio type statistic show empirical coverage in reasonable agreement with nominal confidence levels.
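
One common first-moment version of such a rescaling is sketched below, under the assumption that the sensitivity matrix H and variability matrix J are already available (in practice they would be estimated, e.g. by Monte Carlo): the statistic is multiplied by p / tr(H⁻¹J) so that its mean matches the chi-squared reference. This is a generic moment-matching adjustment, not necessarily the paper's exact proposal.

```python
# First-moment rescaling of a (composite) log likelihood ratio statistic
# so its mean matches the chi-squared reference distribution.
import numpy as np

def rescaled_lrt(W, H, J):
    """Moment-matched adjustment: W * p / trace(H^{-1} J)."""
    p = H.shape[0]
    return W * p / np.trace(np.linalg.solve(H, J))

# Toy two-parameter example with H != J, as happens when the
# likelihood is only partially specified.
H = np.array([[2.0, 0.3], [0.3, 1.0]])    # sensitivity (mean curvature)
J = np.array([[3.5, 0.2], [0.2, 1.8]])    # variability of the score
W = 6.1                                   # unadjusted statistic
print(rescaled_lrt(W, H, J))              # compare to chi2(2) quantiles
```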

17.
18.
This paper compares methods of estimation for the parameters of a Pareto distribution of the first kind to determine which method provides the better estimates when the observations are censored. The unweighted least squares (LS) and the maximum likelihood estimates (MLE) are presented for both censored and uncensored data. The MLEs are obtained using two methods. In the first, called the ML method, it is shown that the log-likelihood is maximized when the scale parameter is the minimum sample value. In the second method, called the modified ML (MML) method, the estimates are found by utilizing the maximum likelihood value of the shape parameter in terms of the scale parameter and the equation for the mean of the first order statistic as a function of both parameters. Since censored data often occur in applications, we study two types of censoring for their effects on the methods of estimation: Type II censoring and multiple random censoring. In this study we consider different sample sizes and several values of the true shape and scale parameters.

Comparisons are made in terms of bias and the mean squared error of the estimates. We propose that the LS method be generally preferred over the ML and MML methods for estimating the Pareto parameter γ for all sample sizes, all values of the parameter and for both complete and censored samples. In many cases, however, the ML estimates are comparable in their efficiency, so that either estimator can effectively be used. For estimating the parameter α, the LS method is also generally preferred for smaller values of the parameter (α ≤ 4). For larger values of the parameter, and for censored samples, the MML method appears superior to the other methods, with a slight advantage over the LS method. For larger values of the parameter α, for censored samples and all methods, underestimation can be a problem.
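
The ML method described above is fully explicit, as the sketch below shows for both a complete sample and Type II censoring at the r-th order statistic; the parameter values are illustrative.

```python
# ML estimation for the Pareto of the first kind: the scale estimate is
# the sample minimum, and the shape then has a closed form, both for
# complete samples and under Type II censoring.
import numpy as np

rng = np.random.default_rng(8)
gamma_true, sigma_true = 2.0, 1.5
n = 50
x = sigma_true * (rng.pareto(gamma_true, n) + 1)   # Pareto, first kind

# Complete sample: sigma_hat = min(x), then gamma_hat in closed form.
sigma_hat = x.min()
gamma_hat = n / np.sum(np.log(x / sigma_hat))

# Type II censoring: only the r smallest values are observed.
r = 35
xs = np.sort(x)[:r]
gamma_cens = r / (np.sum(np.log(xs / sigma_hat))
                  + (n - r) * np.log(xs[-1] / sigma_hat))

print(sigma_hat, gamma_hat, gamma_cens)
```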

19.
We examine the use of Confocal Laser Tomographic images for detecting glaucoma. From the clinical aspect, the optic nerve head's (ONH) area contains all the relevant information on glaucoma. The shape of the ONH is approximately a skewed cup. We summarize its shape by three biological landmarks on the neural rim and a fourth landmark at the point of maximum depth, which is approximately the point where the optic nerve enters this eye cup. These four landmarks are extracted from images of some Rhesus monkeys before and after inducing glaucoma. Previous analysis of Bookstein shape coordinates of these four landmarks revealed only marginally significant findings. From clinical experience, it is believed that the ratio of depth to diameter of the eye cup provides a useful measure of the shape change. We consider the bootstrap distribution of this normalized 'depth' (G) and give evidence that it provides an appropriate measure of the shape change. This measure G is labelled the glaucoma index. Further experiments are in progress to validate its use for glaucoma in humans.
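
The bootstrap step is straightforward to sketch; the landmark values below are simulated stand-ins, not measurements from the study.

```python
# Bootstrap distribution of a ratio index like the paper's G
# (normalized cup depth), with a percentile confidence interval.
import numpy as np

rng = np.random.default_rng(9)
depth = rng.normal(0.9, 0.1, 12)        # hypothetical cup depths
diameter = rng.normal(1.8, 0.15, 12)    # hypothetical cup diameters
g = depth / diameter                    # per-eye glaucoma index G

B = 5000
idx = rng.integers(0, g.size, size=(B, g.size))
boot_means = g[idx].mean(axis=1)        # bootstrap distribution of mean G

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(g.mean(), (lo, hi))               # point estimate and 95% interval
```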

20.
In estimating the proportion ‘cured’ after adjuvant treatment, a population of cancer patients can be assumed to be a mixture of two Gompertz subpopulations, those who will die of other causes with no evidence of disease relapse and those who will die of their primary cancer. Estimates of the parameters of the component dying of other causes can be obtained from census data, whereas maximum likelihood estimates for the proportion cured and for the parameters of the component of patients dying of cancer can be obtained from follow-up data.

This paper examines, through simulation of follow-up data, the feasibility of maximum likelihood estimation of a mixture of two Gompertz distributions when censoring occurs. Means, variances and mean square errors of the maximum likelihood estimates, together with the estimated asymptotic variance-covariance matrix, are obtained from the simulated samples. The relationships of these variances with sample size, proportion censored, mixing proportion and population parameters are considered.

Moderate sample sizes typical of cooperative trials yield clinically acceptable estimates. Both increasing the sample size and decreasing the proportion of censored data decrease the variances and covariances of the unknown parameters. Useful results can be obtained with data which are as much as 50% censored. Moreover, if the sample size is sufficiently large, survival data which are as much as 70% censored can yield satisfactory results.
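
A minimal sketch of the censored-likelihood fit, with both Gompertz components estimated from the follow-up data by direct numerical optimization; in the paper's setting the "other causes" component would instead come from census data, and all parameter values below are illustrative.

```python
# Maximum likelihood for a mixture of two Gompertz distributions with
# right censoring, fitted by direct numerical optimization.
import numpy as np
from scipy.optimize import minimize

def gompertz_sf(t, b, c):
    return np.exp(-(b / c) * np.expm1(c * t))     # S(t) for hazard b*e^{ct}

def gompertz_pdf(t, b, c):
    return b * np.exp(c * t) * gompertz_sf(t, b, c)

def neg_log_lik(theta, t, delta):
    p = 1 / (1 + np.exp(-theta[0]))               # mixing proportion (logit)
    b1, c1, b2, c2 = np.exp(theta[1:])            # positive rate parameters
    f = p * gompertz_pdf(t, b1, c1) + (1 - p) * gompertz_pdf(t, b2, c2)
    s = p * gompertz_sf(t, b1, c1) + (1 - p) * gompertz_sf(t, b2, c2)
    return -np.sum(delta * np.log(f) + (1 - delta) * np.log(s))

# Simulated follow-up data: a two-component Gompertz mixture with
# uniform administrative censoring (roughly half the data censored).
rng = np.random.default_rng(10)
n, p_true = 500, 0.4
z = rng.uniform(size=n) < p_true
u = rng.uniform(size=n)
b, c = np.where(z, 0.02, 0.10), np.where(z, 0.30, 0.05)
t_event = np.log1p(-c * np.log(u) / b) / c        # inverse-CDF Gompertz draws
t_cens = rng.uniform(0, 15, n)
t = np.minimum(t_event, t_cens)
delta = (t_event <= t_cens).astype(float)

theta0 = np.array([0.0, np.log(0.03), np.log(0.2),
                   np.log(0.08), np.log(0.08)])
res = minimize(neg_log_lik, theta0, args=(t, delta), method="Nelder-Mead",
               options={"maxiter": 5000})
print(res.x)   # logit(p) and logs of (b1, c1, b2, c2)
```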
