Similar Documents
20 similar documents retrieved.
1.
Relative motion between the camera and the object results in the recording of a motion-blurred image. Under certain idealized conditions, such blurring can be mathematically corrected. We refer to this as ‘motion deblurring’. We start with some idealized assumptions under which the motion deblurring problem is a linear inverse problem with certain positivity constraints; LININPOS problems, for short. Such problems, even in the case of no statistical noise, can be solved using the maximum likelihood/EM approach in the following sense. If they have a solution, the ML/EM iterative method will converge to it; otherwise, it will converge to the nearest approximation of a solution, where ‘nearest’ is interpreted in a likelihood sense or, equivalently, in a Kullback-Leibler information divergence sense. We apply the ML/EM algorithm to such problems and discuss certain special cases, such as motion along linear or circular paths with or without acceleration. The idealized assumptions under which the method is developed are hardly ever satisfied in real applications, so we experiment with the method under conditions that violate these assumptions. Specifically, we experimented with an image created through computer-simulated digital motion blurring corrupted with noise, and with an image of a moving toy cart recorded with a 35 mm camera while in motion. The gross violations of the idealized assumptions, especially in the toy cart example, led to a host of very difficult problems which always occur under real-life conditions and need to be addressed. We discuss these problems in detail and propose some ‘engineering solutions’ that, when put together, appear to lead to a good methodology for certain motion deblurring problems. Some of the issues we discuss, in various degrees of detail, include estimating the speed of motion, referred to as ‘blur identification’; non-zero-background artefacts and pre- and post-processing of the images to remove such artefacts; the need to ‘stabilize’ the solution because of the inherent ill-posedness of the problem; and computer implementation.
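Not part of the abstract above: a minimal numerical sketch of the ML/EM (Richardson–Lucy type) iteration for a one-dimensional uniform linear motion blur, under the idealized assumptions the abstract mentions (known kernel, nonnegative intensities, Poisson-type noise). The kernel length, iteration count and toy signal are illustrative choices, not values from the paper.

```python
import numpy as np

def mlem_deblur(blurred, kernel, n_iter=200, eps=1e-12):
    """ML/EM (Richardson-Lucy) iteration for a 1-D nonnegative signal.

    blurred : observed motion-blurred signal (nonnegative)
    kernel  : assumed blur kernel, e.g. a normalized box for uniform linear motion
    """
    kernel = kernel / kernel.sum()                 # blur operator preserves total intensity
    x = np.full_like(blurred, blurred.mean())      # flat nonnegative starting image
    for _ in range(n_iter):
        forward = np.convolve(x, kernel, mode="same")           # H x_k
        ratio = blurred / np.maximum(forward, eps)              # y / (H x_k)
        x = x * np.convolve(ratio, kernel[::-1], mode="same")   # multiplicative EM update
    return x

# toy example: uniform linear motion over 9 pixels, Poisson noise
rng = np.random.default_rng(0)
truth = np.zeros(200); truth[80:120] = 5.0
kernel = np.ones(9)
blurred = rng.poisson(np.convolve(truth, kernel / kernel.sum(), mode="same")).astype(float)
estimate = mlem_deblur(blurred, kernel)
```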

2.
We consider the problem of detecting a ‘bump’ in the intensity of a Poisson process or in a density. We analyze two types of likelihood ratio-based statistics, which allow for exact finite sample inference and asymptotically optimal detection: the maximum of the penalized square root of log likelihood ratios (‘penalized scan’) evaluated over a certain sparse set of intervals, and a certain average of log likelihood ratios (‘condensed average likelihood ratio’). We show that penalizing the square root of the log likelihood ratio, rather than the log likelihood ratio itself, leads to a simple penalty term that yields optimal power. The penalty derived in this way may prove useful for other problems that involve a Brownian bridge in the limit. The second key tool is an approximating set of intervals that is rich enough to allow for optimal detection, but sparse enough that the validity of the penalization scheme can be justified simply via the union bound. This results in a considerable simplification in the theoretical treatment compared with the usual approach for this type of penalization technique, which requires establishing an exponential inequality for the variation of the test statistic. Another advantage of using the sparse approximating set is that it allows fast computation in nearly linear time. We present a simulation study that illustrates the superior performance of the penalized scan and of the condensed average likelihood ratio compared with the standard scan statistic.
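As a rough illustration only, not the paper's construction: a scan over a plain grid of intervals using the binomial log likelihood ratio for an elevated interval. The penalty of the generic form sqrt(2 log(e/length)) stands in for the paper's exact penalty, and the grid stands in for its sparse approximating interval set.

```python
import numpy as np

def penalized_scan(events, n_grid=64):
    """Penalized scan over a grid of intervals for a bump in point density on [0, 1].

    The interval family (a plain grid here) and the penalty sqrt(2*log(e/length))
    are generic stand-ins, not the exact choices made in the paper.
    """
    events = np.asarray(events, dtype=float)
    n = events.size
    edges = np.linspace(0.0, 1.0, n_grid + 1)
    best, best_interval = -np.inf, None
    for i in range(n_grid):
        for j in range(i + 1, n_grid + 1):
            a, b = edges[i], edges[j]
            p = b - a
            if p >= 1.0:
                continue
            k = int(np.sum((events >= a) & (events <= b)))
            if k == 0 or k / n <= p:       # only elevated intervals count as bump evidence
                continue
            loglr = k * np.log(k / (n * p))
            if k < n:
                loglr += (n - k) * np.log((n - k) / (n * (1 - p)))
            stat = np.sqrt(2.0 * loglr) - np.sqrt(2.0 * np.log(np.e / p))
            if stat > best:
                best, best_interval = stat, (a, b)
    return best, best_interval

# toy use: uniform background with a small bump on [0.40, 0.45]
rng = np.random.default_rng(1)
events = np.concatenate([rng.uniform(0, 1, 500), rng.uniform(0.40, 0.45, 40)])
print(penalized_scan(events))
```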

3.
Within the context of California's public reporting of coronary artery bypass graft (CABG) surgery outcomes, we first thoroughly review popular statistical methods for profiling healthcare providers. Extensive simulation studies are then conducted to compare profiling schemes based on hierarchical logistic regression (LR) modeling under various conditions. Both Bayesian and frequentist methods are evaluated in classifying hospitals into ‘better’, ‘normal’ or ‘worse’ service providers. The simulation results suggest that no single method dominates the others on all accounts. Traditional schemes based on LR tend to identify too many false outliers, while those based on hierarchical modeling are relatively conservative. The issue of over-shrinkage in hierarchical modeling is also investigated using the 2005–2006 California CABG data set. The article provides theoretical and empirical evidence for choosing the right methodology for provider profiling.

4.
In any other circumstance, it might make sense to define the extent of the terrain (Data Science) first, and then locate and describe the landmarks (Principles). But this data revolution we are experiencing defies a cadastral survey. Areas are continually being annexed into Data Science. For example, biometrics was traditionally statistics for agriculture in all its forms but now, in Data Science, it means the study of characteristics that can be used to identify an individual. Examples of non-intrusive measurements include height, weight, fingerprints, retina scan, voice, photograph/video (facial landmarks and facial expressions) and gait. A multivariate analysis of such data would be a complex project for a statistician, but a software engineer might appear to have no trouble with it at all. In any applied-statistics project, the statistician worries about uncertainty and quantifies it by modelling data as realisations generated from a probability space. Another approach to uncertainty quantification is to find similar data sets, and then use the variability of results between these data sets to capture the uncertainty. Both approaches allow ‘error bars’ to be put on estimates obtained from the original data set, although the interpretations are different. A third approach, that concentrates on giving a single answer and gives up on uncertainty quantification, could be considered as Data Engineering, although it has staked a claim in the Data Science terrain. This article presents a few (actually nine) statistical principles for data scientists that have helped me, and continue to help me, when I work on complex interdisciplinary projects.

5.
The concept of ‘residuation’ is extended so that all ‘generalized residual designs’ (in the sense of Shrikhande and Singhi) are in fact ‘residual’ with respect to the extended type of residuation. A measure of departure from the usual type of residuation is given in general, and stronger estimates of this measure are given for affine designs.

6.
The likelihood ratio (LR) measures the relative weight of forensic data regarding two hypotheses. Several levels of uncertainty arise if frequentist methods are chosen for its assessment: the assumed population model only approximates the true one, and its parameters are estimated from a limited database. Moreover, it may be wise to discard part of the data, especially data only indirectly related to the hypotheses. Different reductions define different LRs. Therefore, it is more sensible to talk about ‘a’ LR instead of ‘the’ LR, and the error involved in the estimation should be quantified. In the light of these points, two frequentist methods are proposed for the ‘rare type match problem’, that is, for evaluating a match between the perpetrator's and the suspect's DNA profiles when that profile has never been observed before in the reference database.

7.
In randomized clinical trials, a treatment effect on a time-to-event endpoint is often estimated by the Cox proportional hazards model. The maximum partial likelihood estimator does not make sense if the proportional hazards assumption is violated. Xu and O'Quigley (Biostatistics 1:423-439, 2000) proposed an estimating equation which provides an interpretable estimator for the treatment effect under model misspecification: it provides a consistent estimator for the log-hazard ratio between the treatment groups if the model is correctly specified, and it is interpreted as an average log-hazard ratio over time even if the model is misspecified. However, the method requires the assumption that censoring is independent of treatment group, which is more restrictive than the assumption needed for the maximum partial likelihood estimator and is often violated in practice. In this paper, we propose an alternative estimating equation. Our method provides an estimator with the same property as that of Xu and O'Quigley under the usual assumption for maximum partial likelihood estimation. We show that our estimator is consistent and asymptotically normal, and derive a consistent estimator of its asymptotic variance. If the proportional hazards assumption holds, the efficiency of the estimator can be improved by applying the covariate adjustment method based on the semiparametric theory proposed by Lu and Tsiatis (Biometrika 95:679-694, 2008).

8.
An algorithm for sampling from non-log-concave multivariate distributions is proposed, which improves the adaptive rejection Metropolis sampling (ARMS) algorithm by incorporating hit-and-run sampling. It is not rare for ARMS to become trapped away from some subspace carrying significant probability in the support of the multivariate distribution. While ARMS updates samples only in directions parallel to the coordinate axes, our proposed method, the hit-and-run ARMS (HARARMS), updates samples in arbitrary directions determined by the hit-and-run algorithm, which makes it almost impossible for the sampler to be trapped in any isolated subspace. HARARMS performs the same as ARMS in a single dimension while being more reliable in multidimensional spaces. Its performance is illustrated by a Bayesian free-knot spline regression example. We show that it overcomes the well-known ‘lethargy’ property and decisively finds the globally optimal number and locations of the knots of the spline function.
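A minimal sketch of the hit-and-run mechanism described above. The one-dimensional move along the randomly chosen direction is done here with a plain random-walk Metropolis step; HARARMS would instead run ARMS along that line, which is not reproduced here.

```python
import numpy as np

def hit_and_run_step(x, log_density, rng, step=1.0):
    """One hit-and-run update: move along a uniformly random direction.

    The 1-D move along the chosen line is a plain Metropolis step here;
    the HARARMS algorithm in the paper would instead run ARMS on that line.
    """
    direction = rng.normal(size=x.size)
    direction /= np.linalg.norm(direction)     # uniform direction on the unit sphere
    t = rng.normal(scale=step)                 # proposed signed distance along the line
    proposal = x + t * direction
    if np.log(rng.uniform()) < log_density(proposal) - log_density(x):
        return proposal
    return x

# toy target: a strongly correlated bivariate normal
cov_inv = np.linalg.inv(np.array([[1.0, 0.95], [0.95, 1.0]]))
log_density = lambda z: -0.5 * z @ cov_inv @ z
rng = np.random.default_rng(2)
x, samples = np.zeros(2), []
for _ in range(5000):
    x = hit_and_run_step(x, log_density, rng)
    samples.append(x.copy())
```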

9.
The choice of prior distributions for the variances can be important and quite difficult in Bayesian hierarchical and variance component models. For situations where little prior information is available, a ‘noninformative’ type of prior is usually chosen. ‘Noninformative’ priors have been discussed by many authors and used in many contexts. However, care must be taken using these prior distributions, as many are improper and thus can lead to improper posterior distributions. Additionally, in small samples, these priors can be ‘informative’. In this paper, we investigate a proper ‘vague’ prior, the uniform shrinkage prior (Strawderman 1971; Christiansen & Morris 1997). We discuss its properties and show that, for common hierarchical models, this prior leads to proper posterior distributions. We also illustrate the attractive frequentist properties of this prior for a normal hierarchical model, including testing and estimation. To conclude, we generalize this prior to the multivariate situation of a covariance matrix.
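For reference, and not taken from the paper: one common statement of the uniform shrinkage prior, assuming a normal hierarchical model with a common, known sampling variance V (the cited papers handle more general settings). A uniform prior on the shrinkage factor induces a proper prior on the between-group variance.

```latex
% Uniform shrinkage prior: put a Uniform(0,1) prior on the shrinkage factor
% B = V / (V + \tau^2), where V is the (known) common sampling variance.
B = \frac{V}{V + \tau^{2}} \sim \mathrm{Uniform}(0,1)
\quad\Longrightarrow\quad
p(\tau^{2}) = \frac{V}{\left(V + \tau^{2}\right)^{2}}, \qquad \tau^{2} > 0,
% which integrates to 1 and is therefore a proper prior.
```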

10.
Pretest–posttest studies are an important and popular method for assessing the effectiveness of a treatment or an intervention in many scientific fields. While the treatment effect, measured as the difference between the two mean responses, is of primary interest, testing the difference of the two distribution functions for the treatment and the control groups is also an important problem. The Mann–Whitney test has been a standard tool for testing the difference of distribution functions with two independent samples. We develop empirical likelihood-based (EL) methods for the Mann–Whitney test to incorporate the two unique features of pretest–posttest studies: (i) the availability of baseline information for both groups; and (ii) the structure of the data with missingness by design. Our proposed methods combine the standard Mann–Whitney test with the EL method of Huang, Qin and Follmann [(2008), ‘Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest–Posttest Study’, Journal of the American Statistical Association, 103(483), 1270–1280], the imputation-based empirical likelihood method of Chen, Wu and Thompson [(2015), ‘An Imputation-Based Empirical Likelihood Approach to Pretest–Posttest Studies’, The Canadian Journal of Statistics, accepted for publication], and the jackknife empirical likelihood method of Jing, Yuan and Zhou [(2009), ‘Jackknife Empirical Likelihood’, Journal of the American Statistical Association, 104, 1224–1232]. Theoretical results are presented, and the finite-sample performance of the proposed methods is evaluated through simulation studies.
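For orientation only, the building block the abstract starts from: the standard Mann–Whitney statistic for two independent samples. The empirical-likelihood machinery that incorporates baseline covariates and the missing-by-design structure is not reproduced here, and the data below are hypothetical.

```python
import numpy as np

def mann_whitney_u(x, y):
    """Normalized Mann-Whitney statistic: estimate of P(X > Y) + 0.5 P(X = Y).

    This is only the classical two-sample building block; the paper's
    empirical-likelihood weighting using baseline information is not shown.
    """
    x = np.asarray(x)[:, None]
    y = np.asarray(y)[None, :]
    return (x > y).mean() + 0.5 * (x == y).mean()

rng = np.random.default_rng(3)
treated = rng.normal(0.3, 1.0, 80)   # hypothetical post-treatment responses
control = rng.normal(0.0, 1.0, 75)
print(mann_whitney_u(treated, control))
```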

11.
It is well known that the normal mixture with unequal variances has unbounded likelihood, and thus the corresponding global maximum likelihood estimator (MLE) is undefined. One of the commonly used solutions is to put a constraint on the parameter space so that the likelihood is bounded; one can then run the EM algorithm on this constrained parameter space to find the constrained global MLE. However, choosing the constraint parameter is a difficult issue, and in many cases different choices give different constrained global MLEs. In this article, we propose a profile log-likelihood method and a graphical way to find the maximum interior mode. Based on our proposed method, we can also see how the constraint parameter used in the constrained EM algorithm affects the constrained global MLE. Using two simulation examples and a real data application, we demonstrate the success of our new method in dealing with the unboundedness of the mixture likelihood and locating the maximum interior mode.
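A minimal sketch of the constrained EM fix that the abstract refers to (a lower bound on the component variances keeps the likelihood bounded). The article's profile log-likelihood and graphical method are not reproduced here, and the floor value below is arbitrary.

```python
import numpy as np
from scipy.stats import norm

def constrained_em(x, var_floor=1e-2, n_iter=500):
    """EM for a two-component normal mixture with a lower bound on each variance.

    The floor keeps the likelihood bounded, as in the constrained-EM approach
    the article compares against; the specific floor value is arbitrary here.
    """
    x = np.asarray(x, dtype=float)
    pi, mu1, mu2 = 0.5, np.quantile(x, 0.25), np.quantile(x, 0.75)  # crude start
    s1 = s2 = max(x.var() / 2, var_floor)
    for _ in range(n_iter):
        d1 = pi * norm.pdf(x, mu1, np.sqrt(s1))
        d2 = (1 - pi) * norm.pdf(x, mu2, np.sqrt(s2))
        r = d1 / (d1 + d2)                                 # E-step: responsibilities
        pi = r.mean()                                      # M-step
        mu1 = np.sum(r * x) / np.sum(r)
        mu2 = np.sum((1 - r) * x) / np.sum(1 - r)
        s1 = max(np.sum(r * (x - mu1) ** 2) / np.sum(r), var_floor)
        s2 = max(np.sum((1 - r) * (x - mu2) ** 2) / np.sum(1 - r), var_floor)
    return pi, (mu1, s1), (mu2, s2)

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 2, 200)])
print(constrained_em(x))
```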

12.
13.
A general framework is presented for Bayesian inference of multivariate time series exhibiting long-range dependence. The series are modelled using a vector autoregressive fractionally integrated moving-average (VARFIMA) process, which can capture both short-term correlation structure and long-range dependence characteristics of the individual series, as well as interdependence and feedback relationships between the series. To facilitate a sampling-based Bayesian approach, the exact joint posterior density is derived for the parameters, in a form that is computationally simpler than direct evaluation of the likelihood, and a modified Gibbs sampling algorithm is used to generate samples from the complete conditional distribution associated with each parameter. The paper also shows how an approximate form of the joint posterior density may be used for long time series. The procedure is illustrated using sea surface temperatures measured at three locations along the central California coast. These series are believed to be interdependent due to similarities in local atmospheric conditions at the different locations, and previous studies have found that they exhibit ‘long memory’ when studied individually. The approach adopted here permits investigation of the effects on model estimation of the interdependence and feedback relationships between the series.

14.
15.
Consider the model of k populations whose densities are nonregular in the sense that they involve one or two unknown truncation parameters. In this paper a unified treatment of the problem of Bahadur efficiency of the likelihood ratio test for such a model is presented. The Bahadur efficiency of a certain test based on the union-intersection principle is also studied. Some of these results are then extended to a larger class of nonregular densities.

16.
Wald's approximations to the ARL (average run length) of CUSUM (cumulative sum) procedures are given for an exponential family of densities. From these approximations it is shown that Page's (1954) cusum procedure is (in a sense) identical to a cusum procedure defined in terms of likelihood ratios. Moreover, these approximations are improved by estimating the excess over the boundary, and their accuracy is examined by numerical comparison with some exact results. Some examples are also given.

17.
Investigators often gather longitudinal data to assess changes in responses over time within subjects and to relate these changes to within-subject changes in predictors. Missing data are common in such studies, and predictors can be correlated with subject-specific effects. Maximum likelihood methods for generalized linear mixed models provide consistent estimates when the data are ‘missing at random’ (MAR) but can produce inconsistent estimates in settings where the random effects are correlated with one of the predictors. On the other hand, conditional maximum likelihood methods (and closely related maximum likelihood methods that partition covariates into between- and within-cluster components) provide consistent estimation when random effects are correlated with predictors but can produce inconsistent covariate effect estimates when data are MAR. Using theory, simulation studies, and fits to example data, this paper shows that decomposition methods using complete covariate information produce consistent estimates. In some practical cases these methods, which ostensibly require complete covariate information, actually only involve the observed covariates. These results offer an easy-to-use approach to simultaneously protect against bias from both cluster-level confounding and MAR missingness in assessments of change.
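A small sketch of the covariate decomposition the abstract refers to: each time-varying predictor is split into a cluster (subject) mean and a within-cluster deviation, and the two parts enter the model as separate terms. Variable names and data below are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical longitudinal data: subject id, response y, time-varying covariate x.
rng = np.random.default_rng(5)
df = pd.DataFrame({
    "id": np.repeat(np.arange(50), 4),
    "x": rng.normal(size=200),
})
df["y"] = 1.0 + 0.5 * df["x"] + rng.normal(size=200)

# Between/within decomposition of the covariate.
df["x_between"] = df.groupby("id")["x"].transform("mean")   # cluster-level component
df["x_within"] = df["x"] - df["x_between"]                   # within-cluster change
# A mixed-model package would then fit, e.g., y ~ x_between + x_within + (1 | id),
# so the within-subject slope is separated from cluster-level confounding.
```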

18.
In the expert use problem, hierarchical models provide an ideal perspective for classifying, understanding and generalising the aggregation algorithms suitable for composing experts' opinions into a single synthesis distribution. After suggesting that Peter A. Morris's (1971, 1974, 1977) Bayesian model be viewed in this light, this paper addresses the problem of modelling the multidimensional ‘performance function’, which encodes the aggregator's beliefs about each expert's assessment ability and the degree of dependence among the experts. Whenever the aggregator does not have an empirically founded probability distribution for the experts' performances, the proposed fiducial procedure provides a rational and very flexible tool for specifying the performance function with a relatively small number of assessments; moreover, it warrants the aggregator's beliefs about the experts in terms of personal long-run frequencies.

19.
Numerous optimization problems arise in survey design. The problem of obtaining an optimal (or near-optimal) sampling design can be formulated and solved as a mathematical programming problem. In multivariate stratified sample surveys it is usually not possible, for one reason or another, to use the individual optimum allocations of sample sizes to the various strata. In such situations some criterion is needed to work out an allocation that is optimum for all characteristics in some sense. Such an allocation may be called an optimum compromise allocation. This paper examines the problem of determining an optimum compromise allocation in multivariate stratified random sampling when the population means of several characteristics are to be estimated. Formulating the allocation problem as an all-integer nonlinear programming problem, the paper develops a solution procedure using a dynamic programming technique. The compromise allocation discussed is optimal in the sense that it minimizes a weighted sum of the sampling variances of the estimates of the population means of the various characteristics under study. A numerical example illustrates the solution procedure and shows how it compares with Cochran's average allocation and with proportional allocation.
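A minimal sketch of a dynamic-programming recursion for an allocation of this general kind, assuming the weighted-variance objective collapses to a separable sum of per-stratum terms of the form cost_h / n_h (finite-population corrections ignored). This is an illustration, not the paper's exact algorithm, and the numbers are made up.

```python
import numpy as np

def compromise_allocation(costs, n_total, n_min=2):
    """DP allocation minimizing sum_h costs[h] / n_h subject to sum_h n_h = n_total.

    costs[h] collapses the weighted per-characteristic variance terms for stratum h
    (e.g. sum over characteristics j of w_j * W_h**2 * S_hj**2); building these
    terms is specific to the survey at hand.
    """
    H, INF = len(costs), float("inf")
    # best[h][m] = minimal objective using strata 0..h-1 with m units allocated
    best = [[INF] * (n_total + 1) for _ in range(H + 1)]
    choice = [[0] * (n_total + 1) for _ in range(H + 1)]
    best[0][0] = 0.0
    for h in range(1, H + 1):
        for m in range(h * n_min, n_total + 1):
            for nh in range(n_min, m - (h - 1) * n_min + 1):
                cand = costs[h - 1] / nh + best[h - 1][m - nh]
                if cand < best[h][m]:
                    best[h][m], choice[h][m] = cand, nh
    alloc, m = [], n_total                     # recover the optimal allocation
    for h in range(H, 0, -1):
        alloc.append(choice[h][m])
        m -= choice[h][m]
    return alloc[::-1], best[H][n_total]

print(compromise_allocation(costs=[40.0, 10.0, 25.0], n_total=30))
```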

20.
For the lifetime (or negative) exponential distribution, the trimmed likelihood estimator has been shown to be explicit, taking the form of a β-trimmed mean, which is representable as an estimating functional that is both weakly continuous and Fréchet differentiable and hence qualitatively robust at the parametric model. It also has high efficiency at the model. This robustness is in contrast to the maximum likelihood estimator (MLE), which involves the usual mean and is not robust to contamination in the upper tail of the distribution. When there is known right censoring, it may be perceived that the MLE, the most asymptotically efficient estimator, is protected from the effects of ‘outliers’ due to censoring. We demonstrate that this is not the case in general and, based on the functional form of the estimators, suggest a hybrid estimator that incorporates the best features of both the MLE and the β-trimmed mean. Additionally, we study the pure trimmed likelihood estimator for censored data and show that it can be easily calculated and that the censored observations are not always trimmed. The different trimmed estimators are compared in a modest simulation study.
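For orientation only: one way to compute a one-sided β-trimmed mean, averaging the smallest (1 − β) fraction of the lifetimes so that upper-tail contamination is trimmed. Whether this matches the paper's exact definition of the β-trimmed mean is an assumption of this sketch.

```python
import numpy as np

def beta_trimmed_mean(x, beta=0.1):
    """One-sided beta-trimmed mean: average of the smallest (1 - beta) fraction.

    Read here as trimming the upper tail, where exponential lifetimes produce
    influential observations; this reading of the estimator is an assumption.
    """
    x = np.sort(np.asarray(x, dtype=float))
    k = int(np.ceil((1.0 - beta) * x.size))
    return x[:k].mean()

rng = np.random.default_rng(7)
lifetimes = rng.exponential(scale=2.0, size=200)
contaminated = np.concatenate([lifetimes, [50.0, 80.0]])   # upper-tail contamination
print(np.mean(contaminated), beta_trimmed_mean(contaminated, beta=0.05))
```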
