Similar documents (20 results)
1.
We discuss the detection of a connected shape in a noisy image. Two types of image are considered: in the first a degraded outline of the shape is visible, while in the second the data are a corrupted version of the shape itself. In the first type the shape is defined by a thin outline of pixels whose records differ from those at pixels inside and outside the shape, while in the second type the shape is defined by its edge, and pixels inside and outside the shape have different records. Our motivation is the identification of cross-sectional head shapes in ultrasound images of human fetuses. We describe and discuss a new approach to detecting shapes in images of the first type that uses a specially designed filter function to iteratively identify the outline pixels of the head. We then suggest a way, based on the cascade algorithm introduced by Jubb and Jennison (1991), to improve and considerably speed up a method proposed by Storvik (1994) for detecting edges in images of the second type.

2.
3.
Many biological and medical studies have as a response of interest the time to occurrence of some event, X, such as cessation of smoking, conception, a particular symptom or disease, remission, relapse, death due to some specific disease, or simply death. Often it is impossible to measure X due to the occurrence of some other competing event, usually termed a competing risk. This competing event may be the withdrawal of the subject from the study (for whatever reason), death from some cause other than the one of interest, or any eventuality that precludes the main event of interest from occurring. Usually the assumption is made that all such censoring times and lifetimes are independent. In this case one uses either the Kaplan-Meier estimator or the Nelson-Aalen estimator to estimate the survival function. However, if the competing risk or censoring times are not independent of X, then there is no generally acceptable way to estimate the survival function. Considerable work on this problem of dependent competing risks has been scattered throughout the statistical literature in the past several years, and this paper presents a survey of such work.
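Under the independence assumption described above, the survival function is estimated by the Kaplan-Meier product-limit formula. A minimal sketch (function name and example data are illustrative, not from the surveyed papers):

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit estimate of S(t) under independent censoring.

    times  : observed times (event or censoring)
    events : 1 if the event of interest occurred, 0 if censored
    Returns a list of (event time, estimated survival) pairs.
    """
    t = np.asarray(times, float)
    d = np.asarray(events, int)
    surv, out = 1.0, []
    for u in np.unique(t[d == 1]):           # distinct event times
        at_risk = np.sum(t >= u)             # subjects still under observation
        deaths = np.sum((t == u) & (d == 1))
        surv *= 1.0 - deaths / at_risk       # product-limit update
        out.append((float(u), surv))
    return out
```

It is precisely this estimator that becomes biased when, as the survey emphasizes, the censoring times are not independent of X.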

4.
Summary. Meteorological and environmental data that are collected at regular time intervals on a fixed monitoring network can be usefully studied by combining ideas from multiple time series and spatial statistics, particularly when there are few or no missing data. This work investigates methods for modelling such data and ways of approximating the associated likelihood functions. Models for processes on the sphere crossed with time are emphasized, especially models that are not fully symmetric in space–time. Two approaches to obtaining such models are described. The first is to consider a rotated version of fully symmetric models for which we have explicit expressions for the covariance function. The second is based on a representation of space–time covariance functions that is spectral in just the time domain and is shown to lead to natural partially nonparametric asymmetric models on the sphere crossed with time. Various models are applied to a data set of daily winds at 11 sites in Ireland over 18 years. Spectral and space–time domain diagnostic procedures are used to assess the quality of the fits. The spectral-in-time modelling approach is shown to yield a good fit to many properties of the data and can be applied routinely, in contrast to the effort of finding elaborate parametric models that describe the space–time dependence of the data about as well.

5.
SiZer (SIgnificant ZERo crossing of the derivatives) is a graphical scale-space visualization tool that allows for statistical inferences. In this paper we develop a spatial SiZer for finding significant features and conducting goodness-of-fit tests for spatially dependent images. The spatial SiZer utilizes a family of kernel estimates of the image and provides not only exploratory data analysis but also statistical inference with spatial correlation taken into account. It is also capable of comparing the observed image with a specific null model being tested by adjusting the statistical inference using an assumed covariance structure. Pixel locations having statistically significant differences between the image and a given null model are highlighted by arrows. The spatial SiZer is compared with the existing independent SiZer via the analysis of simulated data with and without signal on both planar and spherical domains. We apply the spatial SiZer method to the decadal temperature change over some regions of the Earth.
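The scale-space idea underlying SiZer — the same data smoothed over a whole family of bandwidths — can be sketched in one dimension. This is only the smoothing step; the actual SiZer adds inference on the sign of the derivative (with spatial correlation, in the spatial version), which is omitted here. All names and the test signal are illustrative:

```python
import numpy as np

def kernel_smooth(x, y, grid, h):
    """Nadaraya-Watson estimate with a Gaussian kernel of bandwidth h."""
    est = np.empty(len(grid))
    for k, g in enumerate(grid):
        w = np.exp(-0.5 * ((x - g) / h) ** 2)
        est[k] = np.sum(w * y) / np.sum(w)
    return est

# A "scale space": the same data smoothed at several bandwidths.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x)              # noiseless signal, for illustration
grid = np.linspace(0.1, 0.9, 50)
family = {h: kernel_smooth(x, y, grid, h) for h in (0.02, 0.2)}
```

Small bandwidths track local features; large bandwidths smooth them away, and SiZer asks which features survive significantly across scales.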

6.
Within the context of California's public reporting of coronary artery bypass graft (CABG) surgery outcomes, we first thoroughly review popular statistical methods for profiling healthcare providers. Extensive simulation studies are then conducted to compare profiling schemes based on hierarchical logistic regression (LR) modeling under various conditions. Both Bayesian and frequentist methods are evaluated in classifying hospitals into 'better', 'normal' or 'worse' service providers. The simulation results suggest that no single method dominates the others on all accounts. Traditional schemes based on LR tend to identify too many false outliers, while those based on hierarchical modeling are relatively conservative. The issue of over-shrinkage in hierarchical modeling is also investigated using the 2005–2006 California CABG data set. The article provides theoretical and empirical evidence for choosing the right methodology for provider profiling.

7.
The International Council for Harmonization (ICH) E9(R1) addendum recommends choosing an appropriate estimand based on the study objectives in advance of trial design. One defining attribute of an estimand is the intercurrent event, specifically what is considered an intercurrent event and how it should be handled. The primary objective of a clinical study is usually to assess a product's effectiveness and safety based on the planned treatment regimen instead of the actual treatment received. The estimand using the treatment policy strategy, which collects and analyzes data regardless of the occurrence of intercurrent events, is usually utilized. In this article, we explain how missing data can be handled using the treatment policy strategy from the authors' viewpoint in connection with antihyperglycemic product development programs. The article discusses five statistical methods to impute missing data occurring after intercurrent events. All five methods are applied within the framework of the treatment policy strategy. The article compares the five methods via Markov Chain Monte Carlo simulations and showcases how three of these five methods have been applied to estimate the treatment effects published in the labels for three antihyperglycemic agents currently on the market.

8.
Statistical methods of risk assessment for continuous variables
Adverse health effects for continuous responses are not as easily defined as adverse health effects for binary responses. Kodell and West (1993) developed methods for defining adverse effects for continuous responses and the associated risk. Procedures were developed for finding point estimates and upper confidence limits for additional risk under the assumption of a normal distribution and quadratic mean response curve with equal variances at each dose level. In this paper, methods are developed for point estimates and upper confidence limits for additional risk at experimental doses when the equal variance assumption is relaxed. An interpolation procedure is discussed for obtaining information at doses other than the experimental doses. A small simulation study is presented to test the performance of the methods discussed.
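The notion of "additional risk" for a continuous response can be sketched under the normal model: declare a response adverse if it falls below a cutoff set relative to the control distribution, and take the risk at a dose minus the background risk. The cutoff convention (mean minus k standard deviations) and the choice k = 2 are illustrative assumptions, not necessarily those of the paper:

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def additional_risk(mu_control, mu_dose, sigma, k=2.0):
    """Additional risk of an adversely low response at a given dose.

    'Adverse' is taken as a response below mu_control - k*sigma,
    a common convention for continuous endpoints (k is illustrative).
    Assumes a normal response with common standard deviation sigma.
    """
    cutoff = mu_control - k * sigma
    background = norm_cdf((cutoff - mu_control) / sigma)  # risk at dose 0
    at_dose = norm_cdf((cutoff - mu_dose) / sigma)        # risk at this dose
    return at_dose - background
```

Relaxing the equal-variance assumption, as this paper does, amounts to letting sigma depend on dose in both CDF evaluations.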

9.
A technique is presented for enhancing and combining electron microscope images of small crystalline areas. Phases obtained by Fourier transforming electron micrographs are merged, in a Fourier synthesis, with more precise amplitudes that are available separately, to obtain a final estimated image. The procedure is illustrated with 42 individual images of the purple membrane from Halobacterium halobium. To show the power of combining images, results based on 1, 2, 4, 8, 16, 32 and 42 images are presented. An estimate based solely on the micrograph data, i.e. ignoring the precise amplitudes, is also presented and is seen to be notably poorer. The level of uncertainty of the final image is assessed by simulating 10 final images and superposing the results.
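The Fourier-synthesis step described above — phases from the micrograph, amplitudes from the more precise source — can be sketched directly with a 2-D FFT (a minimal illustration, not the paper's full pipeline of aligning and averaging 42 images):

```python
import numpy as np

def merge_phase_amplitude(micrograph, precise_amplitudes):
    """Fourier synthesis: keep the phases of the micrograph's
    transform, substitute separately measured (more precise)
    amplitudes, and invert to get the estimated image."""
    F = np.fft.fft2(micrograph)
    phases = np.angle(F)                         # phases from the image
    merged = precise_amplitudes * np.exp(1j * phases)
    return np.real(np.fft.ifft2(merged))         # final estimated image
```

As a sanity check, feeding back the image's own amplitudes recovers the image exactly; in the application the amplitudes would instead come from diffraction measurements.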

10.
11.
We address statistical issues involved in the partially clustered design, where clusters are employed in the intervention arm but not in the control arm. We develop a cluster-adjusted t-test to compare group treatment effects with individual treatment effects for continuous outcomes, in which the individual-level data are used as the unit of analysis in both arms; we develop an approach for determining sample sizes using this cluster-adjusted t-test; and we use simulation to demonstrate the consistent accuracy of the proposed cluster-adjusted t-test and power estimation procedures. Two real examples illustrate how to use the proposed methods.
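One way such an adjustment can work is to inflate the variance of the clustered (intervention) arm only, by a design effect built from the average cluster size and the intraclass correlation. This sketch assumes the ICC is known and may differ in detail from the test the authors develop:

```python
import numpy as np

def cluster_adjusted_t(y_trt, cluster_ids, y_ctl, icc):
    """t-type statistic for a partially clustered design.

    Only the treatment arm is clustered, so only its variance is
    inflated by the design effect 1 + (m_bar - 1) * icc, where
    m_bar is the average cluster size (icc assumed known here).
    """
    y1 = np.asarray(y_trt, float)
    y0 = np.asarray(y_ctl, float)
    m_bar = len(y1) / len(set(cluster_ids))   # average cluster size
    deff = 1.0 + (m_bar - 1.0) * icc
    var1 = np.var(y1, ddof=1) * deff / len(y1)
    var0 = np.var(y0, ddof=1) / len(y0)       # control arm: no clustering
    return (y1.mean() - y0.mean()) / np.sqrt(var1 + var0)
```

With icc = 0 this reduces to an ordinary Welch-type statistic; positive ICC widens the denominator, reflecting the information lost to clustering.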

12.
In this paper, we propose maximum entropy in the mean methods for propensity score matching classification problems. We provide a new methodological approach and estimation algorithms to handle explicitly cases where the data are available (i) in interval form; (ii) with bounded measurement or observational errors; or (iii) both as intervals and with bounded errors. We show that entropy in the mean methods for these three cases generally outperform benchmark error-free approaches.

13.
14.
Many optimal curve fitting and approximation problems have the same structure as certain estimation problems involving random processes. This structural correspondence has many useful consequences for curve fitting problems, including recursive algorithms and computable error bounds. The basic facts of this correspondence are reviewed and some new results on error bounds and optimal sampling are presented.

15.
16.
The use of surrogate end points has become increasingly common in medical and biological research, primarily because, in many studies, the primary end point of interest is too expensive or too difficult to obtain. There is now a large volume of statistical methods for analysing studies with surrogate end point data. However, to our knowledge, there has not been a comprehensive review of these methods to date. This paper reviews some existing methods and summarizes the strengths and weaknesses of each. It also discusses the assumptions made by each method and assesses how likely these assumptions are to be met in practice.

17.
18.
Variance estimation is an important topic in nonparametric regression. In this paper, we propose a pairwise regression method for estimating the residual variance. Specifically, we regress the squared difference between observations on the squared distance between design points, and then estimate the residual variance as the intercept. Unlike most existing difference-based estimators that require a smooth regression function, our method applies to regression models with jump discontinuities. Our method also applies to the situations where the design points are unequally spaced. Finally, we conduct extensive simulation studies to evaluate the finite-sample performance of the proposed method and compare it with some existing competitors.
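The pairwise regression described above can be sketched directly: for each pair of observations, regress the squared response difference on the squared design-point distance; since E[(y_i - y_j)^2] ≈ 2σ² when the design points coincide, the fitted intercept estimates 2σ². The `max_gap` restriction to nearby pairs is an illustrative choice, not necessarily the authors':

```python
import numpy as np

def pairwise_variance(x, y, max_gap=None):
    """Estimate residual variance sigma^2 by regressing squared
    response differences on squared design distances; the fitted
    intercept estimates 2*sigma^2."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    i, j = np.triu_indices(len(x), k=1)       # all distinct pairs
    dx2 = (x[i] - x[j]) ** 2
    dy2 = (y[i] - y[j]) ** 2
    if max_gap is not None:                   # optionally keep nearby pairs
        keep = dx2 <= max_gap ** 2
        dx2, dy2 = dx2[keep], dy2[keep]
    A = np.column_stack([np.ones_like(dx2), dx2])
    coef, *_ = np.linalg.lstsq(A, dy2, rcond=None)
    return coef[0] / 2.0                      # intercept / 2
```

Nothing here requires equally spaced design points, which is one of the advantages the paper highlights.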

19.
20.
This paper compares Models-3/Community Multiscale Air Quality (CMAQ) outputs at multiple resolutions by interpolating from coarse resolution to fine resolution and analyzing the interpolation difference. Spatial variograms provide a convenient way to investigate the spatial character of interpolation differences and, importantly, to distinguish between naive (nearest neighbor) interpolation and bilinear interpolation, which takes a weighted average of four neighboring cells. For example, when the higher resolution is three times the lower, the variogram of the difference between naive interpolation of the lower resolution output and the higher resolution output shows a depression at every third lag. This phenomenon is related to the blocky nature of naive interpolation and demonstrates the inferiority of naive interpolation to bilinear interpolation in a way that pixelwise comparisons cannot. Theoretical investigations show when one can expect to observe this periodic depression in the variogram of interpolation differences. Naive interpolation is in fact used widely in a number of settings; our results suggest that it should be routinely replaced by bilinear interpolation.
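The every-third-lag depression can be reproduced on a toy one-dimensional signal (linear interpolation standing in as the 1-D analogue of bilinear; signal and resolutions are illustrative):

```python
import numpy as np

def variogram(d, max_lag):
    """Empirical variogram: gamma(h) = mean((d[t+h] - d[t])^2) / 2."""
    return np.array([0.5 * np.mean((d[h:] - d[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

t = np.arange(300)
truth = np.sin(2.0 * np.pi * t / 60.0)         # high-resolution field
coarse_t, coarse = t[::3], truth[::3]          # version at 1/3 the resolution

naive = np.repeat(coarse, 3)                   # nearest-neighbor upsampling
linear = np.interp(t, coarse_t, coarse)        # 1-D analogue of bilinear

g_naive = variogram(naive - truth, 6)
g_linear = variogram(linear - truth, 6)
```

The naive difference is constant within each block of three cells, so its increments nearly cancel at lags 3, 6, ..., producing the periodic depression; the linear-interpolation difference is both smaller and free of this blocky signature.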
