Similar Articles
20 similar articles found (search time: 31 ms)
1.
The authors discuss a graph‐based approach for testing spatial point patterns. This approach falls under the category of data‐random graphs, which have been introduced and used for statistical pattern recognition in recent years. The authors address specifically the problem of testing complete spatial randomness against spatial patterns of segregation or association between two or more classes of points on the plane. To this end, they use a particular type of parameterized random digraph called a proximity catch digraph (PCD) which is based on relative positions of the data points from various classes. The statistic employed is the relative density of the PCD, which is a U‐statistic when scaled properly. The authors derive the limiting distribution of the relative density, using the standard asymptotic theory of U‐statistics. They evaluate the finite‐sample performance of their test statistic by Monte Carlo simulations and assess its asymptotic performance via Pitman's asymptotic efficiency, thereby yielding the optimal parameters for testing. They further stress that their methodology remains valid for data in higher dimensions.

2.
The authors describe a model‐based kappa statistic for binary classifications which is interpretable in the same manner as Scott's pi and Cohen's kappa, yet does not suffer from the same flaws. They compare this statistic with the data‐driven and population‐based forms of Scott's pi in a population‐based setting where many raters and subjects are involved, and inference regarding the underlying diagnostic procedure is of interest. The authors show that Cohen's kappa and Scott's pi seriously underestimate agreement between experts classifying subjects for a rare disease; in contrast, the new statistic is robust to changes in prevalence. The performance of the three statistics is illustrated with simulations and prostate cancer data.
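The two classical coefficients contrasted in this abstract differ only in how chance agreement is computed. A minimal sketch (my own illustration, not the authors' model‐based statistic; the function name and toy data are assumptions):

```python
def agreement_stats(rater_a, rater_b):
    """Observed agreement, Cohen's kappa, and Scott's pi for two raters
    giving binary (0/1) classifications to the same subjects."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Cohen: chance agreement uses each rater's own marginal rate.
    pa1, pb1 = sum(rater_a) / n, sum(rater_b) / n
    pe_cohen = pa1 * pb1 + (1 - pa1) * (1 - pb1)

    # Scott: chance agreement uses the pooled marginal rate.
    p1 = (sum(rater_a) + sum(rater_b)) / (2 * n)
    pe_scott = p1 ** 2 + (1 - p1) ** 2

    kappa = (p_obs - pe_cohen) / (1 - pe_cohen)
    pi = (p_obs - pe_scott) / (1 - pe_scott)
    return kappa, pi
```

When the condition is rare, both chance-agreement terms approach 1, so both coefficients collapse toward zero even for near-perfect raters — the prevalence sensitivity the model‐based statistic is designed to avoid.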

3.
In this paper a new method called the EMS algorithm is used to solve Wicksell's corpuscle problem, that is, the determination of the distribution of the sphere radii in a medium given the radii of their profiles in a random slice. The EMS algorithm combines the EM algorithm, a procedure for obtaining maximum likelihood estimates of parameters from incomplete data, with simple smoothing. The method is tested on simulated data from three different sphere radii densities, namely a bimodal mixture of Normals, a Weibull and a Normal. The effect of varying the level of smoothing, the number of classes in which the data is binned and the number of classes for which the estimated density is evaluated, is investigated. Comparisons are made between these results and those obtained by others in this field.
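A rough sketch of the EMS idea for binned data (my own simplification, not the paper's implementation): a sphere of radius R cut by a plane at uniform depth yields a profile radius r with P(r ≤ x) = 1 − √(R² − x²)/R, which gives a mixing kernel over bins; EM updates are interleaved with a simple smoothing pass. Note the weights below estimate the radius distribution of the spheres actually intersected (length‐biased); recovering the number‐weighted density would need a further 1/R correction.

```python
import math

def wicksell_kernel(edges):
    """A[i][j] = P(profile radius in bin i | sphere radius R_j), for a sphere
    of radius R_j cut by a plane at uniform depth; R_j is bin j's upper edge."""
    m = len(edges) - 1
    A = [[0.0] * m for _ in range(m)]
    for j in range(m):
        R = edges[j + 1]
        for i in range(j + 1):  # profiles can only fall in bins below R
            lo, hi = edges[i], min(edges[i + 1], R)
            A[i][j] = (math.sqrt(R * R - lo * lo) - math.sqrt(R * R - hi * hi)) / R
    return A

def ems(counts, A, iters=200, smooth=0.25):
    """EM updates for the binned mixture, each followed by a simple
    3-point smoothing pass (the 'S' in EMS)."""
    m = len(counts)
    n = sum(counts)
    p = [1.0 / m] * m  # mixing weights over sphere-radius bins
    for _ in range(iters):
        new = [0.0] * m
        for i in range(m):
            denom = sum(A[i][k] * p[k] for k in range(m))
            if denom > 0:
                for j in range(m):
                    new[j] += counts[i] * A[i][j] * p[j] / denom
        p = [v / n for v in new]
        # smoothing: convex combination with neighbouring bins, then renormalize
        p = [(1 - smooth) * p[j]
             + smooth * 0.5 * (p[max(j - 1, 0)] + p[min(j + 1, m - 1)])
             for j in range(m)]
        s = sum(p)
        p = [v / s for v in p]
    return p
```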

4.
Nonparametric density estimates are obtained by the method of asymptotic regression (AR) on empirical stochastic processes. Rates of convergence for the density estimator are obtained in various norms. The methodology is applied to density estimation in two inverse problems: deconvolution and Wicksell's corpuscle problem.

5.
The authors propose methods based on the stratified Cox proportional hazards model that account for the fact that the data have been collected according to a complex survey design. The methods they propose are based on the theory of estimating equations in conjunction with empirical process theory. The authors also discuss issues concerning ignorable sampling design, and the use of weighted and unweighted procedures. They illustrate their methodology by an analysis of jobless spells in Statistics Canada's Survey of Labour and Income Dynamics. They discuss briefly problems concerning weighting, model checking, and missing or mismeasured data. They also identify areas for further research.

6.
The authors propose a robust bounded‐influence estimator for binary regression with continuous outcomes, an alternative to logistic regression when the investigator's interest focuses on the proportion of subjects who fall below or above a cut‐off value. The authors show both theoretically and empirically that in this context, the maximum likelihood estimator is sensitive to model misspecifications. They show that their robust estimator is more stable and nearly as efficient as maximum likelihood when the hypotheses are satisfied. Moreover, it leads to safer inference. The authors compare the different estimators in a simulation study and present an analysis of hypertension using Harlem survey data.

7.
The authors consider children's behavioural and emotional problems and their relationships with possible predictors. They propose a multivariate transitional mixed‐effects model for a longitudinal study and simultaneously address non‐ignorable missing data in responses and covariates, measurement errors in covariates, and multivariate modelling of the responses and covariate processes. A real dataset is analysed in detail using the proposed method, with some interesting results. The Canadian Journal of Statistics 37: 435–452; 2009 © 2009 Statistical Society of Canada

8.
In monomorphic species, determination of sex from behavior is prone to errors. The authors develop capture‐recapture survival models that account for uncertainty in the assessment of sex. They examine parameter redundancy for four basic models with constant or time‐dependent survival and encounter probabilities. They further develop a more refined and more appropriate model for an Audouin's gull data set where four distinct behavioral clues have been used. They examine how useful it is to incorporate the least reliable of the clues and the genetic determination of sex available for only a handful of individuals. They finally discuss the implications of their findings for the design of field studies.

9.
Although quantile regression estimators are robust against low leverage observations with atypically large responses (Koenker & Bassett 1978), they can be seriously affected by a few points that deviate from the majority of the sample covariates. This problem can be alleviated by downweighting observations with high leverage. Unfortunately, when the covariates are not elliptically distributed, Mahalanobis distances may not be able to correctly identify atypical points. In this paper the authors discuss the use of weights based on a new leverage measure constructed using Rosenblatt's multivariate transformation which is able to reflect nonelliptical structures in the covariate space. The resulting weighted estimators are consistent, asymptotically normal, and have a bounded influence function. In addition, the authors also discuss a selection criterion for choosing the downweighting scheme. They illustrate their approach with child growth data from Finland. Finally, their simulation studies suggest that this methodology has good finite‐sample properties.

10.
The authors show how Kendall's tau can be adapted to test against serial dependence in a univariate time series context. They provide formulas for the mean and variance of circular and noncircular versions of this statistic, and they prove its asymptotic normality under the hypothesis of independence. They also present a Monte Carlo study comparing the power and size of a test based on Kendall's tau with the power and size of competing procedures based on alternative parametric and nonparametric measures of serial dependence. In particular, their simulations indicate that Kendall's tau outperforms Spearman's rho in detecting first‐order autoregressive dependence, despite the fact that these two statistics are asymptotically equivalent under the null hypothesis, as well as under local alternatives.
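The serial adaptation is simply Kendall's tau computed between the series and its lagged copy. A minimal sketch of the noncircular statistic (the function name is my own; the null mean/variance formulas of the paper are not reproduced here):

```python
def serial_kendall_tau(x, lag=1):
    """Kendall's tau over the lagged pairs (x_t, x_{t+lag}):
    a noncircular serial version of the statistic."""
    pairs = list(zip(x, x[lag:]))
    n = len(pairs)
    num = 0
    for i in range(n):
        for j in range(i + 1, n):
            da = pairs[i][0] - pairs[j][0]
            db = pairs[i][1] - pairs[j][1]
            # +1 for a concordant pair, -1 for discordant, 0 for ties
            num += ((da > 0) - (da < 0)) * ((db > 0) - (db < 0))
    return 2.0 * num / (n * (n - 1))
```

A strictly monotone series gives tau = 1 at lag 1, while for white noise the statistic fluctuates around 0.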

11.
The authors develop a functional linear model in which the values at time t of a sample of curves yi(t) are explained in a feed‐forward sense by the values of covariate curves xi(s) observed at times s ≤ t. They give special attention to the case s ∈ [t − δ, t], where the lag parameter δ is estimated from the data. They use the finite element method to estimate the bivariate parameter regression function β(s, t), which is defined on the triangular domain s ≤ t. They apply their model to the problem of predicting the acceleration of the lower lip during speech on the basis of electromyographical recordings from a muscle depressing the lip. They also provide simulation results to guide the calibration of the fitting process.

12.
The authors propose new rank statistics for testing the white noise hypothesis in a time series. These statistics are Cramér‐von Mises and Kolmogorov‐Smirnov functionals of an empirical distribution function whose mean is related to a serial version of Kendall's tau through a linear transform. The authors determine the asymptotic behaviour of the underlying serial process and the large‐sample distribution of the proposed statistics under the null hypothesis of white noise. They also present simulation results showing the power of their tests.

13.
The authors derive closed‐form expressions for the full, profile, conditional and modified profile likelihood functions for a class of random growth parameter models they develop as well as Garcia's additive model. These expressions facilitate the determination of parameter estimates for both types of models. The profile, conditional and modified profile likelihood functions are maximized over few parameters to yield a complete set of parameter estimates. In the development of their random growth parameter models the authors specify the drift and diffusion coefficients of the growth parameter process in a natural way which gives interpretive meaning to these coefficients while yielding highly tractable models. They fit several of their random growth parameter models and Garcia's additive model to stock market data, and discuss the results. The Canadian Journal of Statistics 38: 474–487; 2010 © 2010 Statistical Society of Canada

14.
The authors consider the empirical likelihood method for the regression model of mean quality‐adjusted lifetime with right censoring. They show that an empirical log‐likelihood ratio for the vector of the regression parameters is asymptotically a weighted sum of independent chi‐squared random variables. They adjust this empirical log‐likelihood ratio so that the limiting distribution is a standard chi‐square and construct corresponding confidence regions. Simulation studies lead them to conclude that empirical likelihood methods outperform the normal approximation methods in terms of coverage probability. They illustrate their methods with a data example from a breast cancer clinical trial study.

15.
Motivated by problems of modelling torsional angles in molecules, Singh, Hnizdo & Demchuk (2002) proposed a bivariate circular model which is a natural torus analogue of the bivariate normal distribution and a natural extension of the univariate von Mises distribution to the bivariate case. The authors present here a multivariate extension of the bivariate model of Singh, Hnizdo & Demchuk (2002). They study the conditional distributions and investigate the shapes of marginal distributions for a special case. The methods of moments and pseudo‐likelihood are considered for the estimation of parameters of the new distribution. The authors investigate the efficiency of the pseudo‐likelihood approach in three dimensions. They illustrate their methods with protein data of conformational angles.

16.
The authors derive the asymptotic mean and bias of Kendall's tau and Spearman's rho in the presence of left censoring in the bivariate Gaussian copula model. They show that tie corrections for left‐censoring bring the value of these coefficients closer to zero. They also present a bias reduction method and illustrate it through two applications.

17.
The authors propose a bootstrap procedure which estimates the distribution of an estimating function by resampling its terms using bootstrap techniques. Studentized versions of this so‐called estimating function (EF) bootstrap yield methods which are invariant under reparametrizations. This approach often has substantial advantage, both in computation and accuracy, over more traditional bootstrap methods and it applies to a wide class of practical problems where the data are independent but not necessarily identically distributed. The methods allow for simultaneous estimation of vector parameters and their components. The authors use simulations to compare the EF bootstrap with competing methods in several examples including the common means problem and nonlinear regression. They also prove asymptotic results showing that the studentized EF bootstrap yields higher order approximations for the whole vector parameter in a wide class of problems.

18.
Covariate measurement error problems have been extensively studied in the context of right‐censored data but less so for current status data. Motivated by the zebrafish basal cell carcinoma (BCC) study, where the occurrence time of BCC was only known to lie before or after a sacrifice time and where the covariate (Sonic hedgehog expression) was measured with error, the authors describe a semiparametric maximum likelihood method for analyzing current status data with mismeasured covariates under the proportional hazards model. They show that the estimator of the regression coefficient is asymptotically normal and efficient and that the profile likelihood ratio test is asymptotically Chi‐squared. They also provide an easily implemented algorithm for computing the estimators. They evaluate their method through simulation studies, and illustrate it with a real data example. The Canadian Journal of Statistics 39: 73–88; 2011 © 2011 Statistical Society of Canada

19.
The authors extend Fisher's method of combining two independent test statistics to test homogeneity of several two‐parameter populations. They explore two procedures combining asymptotically independent test statistics: the first pools two likelihood ratio statistics and the other, score test statistics. They then give specific results to test homogeneity of several normal, negative binomial or beta‐binomial populations. Their simulations provide evidence that in this context, Fisher's method performs generally well, even when the statistics to be combined are only asymptotically independent. They are led to recommend Fisher's test based on score statistics, since the latter have simple forms, are easy to calculate, and have uniformly good level properties.
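Fisher's method itself is compact: under the null, X = −2 Σ log p_i follows a chi‐square distribution with 2k degrees of freedom for k independent p‐values. A minimal sketch (function names are my own; for even degrees of freedom the chi‐square survival function has a closed Poisson‐sum form, avoiding external libraries):

```python
import math

def chi2_sf_even_df(x, df):
    """Survival function of the chi-square distribution for even df = 2k,
    using the closed form exp(-x/2) * sum_{j<k} (x/2)^j / j!."""
    k = df // 2
    term, total = 1.0, 0.0
    for j in range(k):
        if j > 0:
            term *= (x / 2) / j  # builds (x/2)^j / j! incrementally
        total += term
    return math.exp(-x / 2) * total

def fisher_combine(pvalues):
    """Fisher's method: combined statistic and its p-value under H0."""
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    return stat, chi2_sf_even_df(stat, 2 * len(pvalues))
```

For example, combining two p‐values of 0.5 gives X ≈ 2.77 and a combined p‐value of about 0.60, reflecting no evidence against homogeneity.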

20.
The authors address the problem of estimating an inter‐event distribution on the basis of count data. They derive a nonparametric maximum likelihood estimate of the inter‐event distribution utilizing the EM algorithm both in the case of an ordinary renewal process and in the case of an equilibrium renewal process. In the latter case, the iterative estimation procedure follows the basic scheme proposed by Vardi for estimating an inter‐event distribution on the basis of time‐interval data; it combines the outputs of the E‐step corresponding to the inter‐event distribution and to the length‐biased distribution. The authors also investigate a penalized likelihood approach to provide the proposed estimation procedure with regularization capabilities. They evaluate the practical estimation procedure using simulated count data and apply it to real count data representing the elongation of coffee‐tree leafy axes.
