Similar Literature (20 documents found)
1.
Geometric Anisotropic Spatial Point Pattern Analysis and Cox Processes
We consider spatial point processes whose pair correlation function depends only on the lag vector between a pair of points. Our interest is in statistical models with a special kind of ‘structured’ anisotropy: the pair correlation function is geometric anisotropic if it is elliptical but not spherical. In particular, we study Cox process models with an elliptical pair correlation function, including shot noise Cox processes and log Gaussian Cox processes, and we develop estimation procedures using summary statistics and Bayesian methods. Our methodology is illustrated on real and synthetic datasets of spatial point patterns.
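A minimal sketch of the geometric-anisotropy idea described above: an elliptical pair correlation function can be built from an isotropic template by rotating and rescaling the lag vector so that elliptical contours become circles. The function names, the exponential-covariance template, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def geometric_anisotropic_pcf(lag, template_pcf, semi_major=2.0, semi_minor=1.0, angle=0.0):
    """Evaluate an elliptical pair correlation function at a 2-D lag vector.

    The lag is rotated so the ellipse axes align with the coordinate axes,
    then scaled by the semi-axes; the isotropic template is evaluated at
    the resulting elliptical distance.
    """
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, s], [-s, c]])                     # rotate ellipse axes onto coordinate axes
    u = R @ np.asarray(lag, dtype=float)
    r = np.hypot(u[0] / semi_major, u[1] / semi_minor)  # elliptical distance
    return template_pcf(r)

# Hypothetical template: the pcf of a log Gaussian Cox process with
# exponential covariance, g(r) = exp(sigma^2 * exp(-r / scale)).
template = lambda r: np.exp(1.0 * np.exp(-r / 0.5))
```

With this construction, lags of length `semi_major` along the major axis and `semi_minor` along the minor axis give the same pair correlation value, which is exactly the elliptical (rather than spherical) contour behaviour the abstract refers to.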

2.
Log Gaussian Cox Processes
Planar Cox processes directed by a log Gaussian intensity process are investigated in the univariate and multivariate cases. The appealing properties of such models are demonstrated theoretically as well as through data examples and simulations. In particular, the first-, second- and third-order properties are studied and utilized in the statistical analysis of clustered point patterns. Empirical Bayesian inference for the underlying intensity surface is also considered.
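The construction behind a log Gaussian Cox process can be sketched directly: sample a Gaussian random field, exponentiate it to obtain a random intensity, and draw Poisson counts from that intensity. The grid-based discretization, the exponential covariance, and all parameter values below are illustrative assumptions for a simulation sketch, not the paper's inferential machinery.

```python
import numpy as np

def simulate_lgcp(n_grid=20, mu=4.0, sigma2=1.0, scale=0.2, rng=None):
    """Simulate a log Gaussian Cox process on the unit square.

    A Gaussian random field with exponential covariance is sampled on a
    regular grid via a Cholesky factor, exponentiated to give the random
    intensity surface, and Poisson counts are drawn cell by cell.
    """
    rng = np.random.default_rng(rng)
    xs = (np.arange(n_grid) + 0.5) / n_grid             # grid cell centres
    X, Y = np.meshgrid(xs, xs)
    pts = np.column_stack([X.ravel(), Y.ravel()])
    # exponential covariance matrix with a small jitter for stability
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    L = np.linalg.cholesky(sigma2 * np.exp(-d / scale) + 1e-10 * np.eye(len(pts)))
    gauss = mu + L @ rng.standard_normal(len(pts))
    intensity = np.exp(gauss)                           # log Gaussian intensity
    cell_area = 1.0 / n_grid**2
    counts = rng.poisson(intensity * cell_area)         # Poisson counts given intensity
    return intensity.reshape(n_grid, n_grid), counts.reshape(n_grid, n_grid)
```

The two stages of randomness (the Gaussian field, then the Poisson draws given the field) are what produce the clustering that the abstract's second- and third-order summaries are designed to detect.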

3.
In environmetrics, interest often centres around the development of models and methods for making inference on observed point patterns assumed to be generated by latent spatial or spatio‐temporal processes, which may have a hierarchical structure. In this research, motivated by the analysis of spatio‐temporal storm cell data, we generalize the Neyman–Scott parent–child process to account for hierarchical clustering. This is accomplished by allowing the parents to follow a log‐Gaussian Cox process, thereby incorporating correlation and facilitating inference at all levels of the hierarchy. This approach is applied to monthly storm cell data from the Bismarck, North Dakota radar station from April through August 2003, and we compare these results to simpler cluster processes to demonstrate the advantages of accounting for both levels of correlation present in these hierarchically clustered point patterns. The Canadian Journal of Statistics 47: 46–64; 2019 © 2019 Statistical Society of Canada

4.
Log Gaussian Cox processes as introduced in Moller et al. (1998) are extended to space-time models called log Gaussian Cox birth processes. These processes allow modelling of spatial and temporal heterogeneity in time series of increasing point processes consisting of different types of points. The models are shown to be easy to analyse yet flexible enough for a detailed statistical analysis of a particular agricultural experiment concerning the development of two weed species on an organic barley field. In particular, estimation, model validation and prediction of the intensity surface are discussed.

5.
Abstract. Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. Moreover, we consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

6.
7.
Statistical inference for highly multivariate point pattern data is challenging due to complex models with large numbers of parameters. In this paper, we develop numerically stable and efficient parameter estimation and model selection algorithms for a class of multivariate log Gaussian Cox processes. The methodology is applied to a highly multivariate point pattern data set from tropical rain forest ecology.

8.
The stratified Cox model is commonly used for stratified clinical trials with time‐to‐event endpoints. The estimated log hazard ratio is approximately a weighted average of corresponding stratum‐specific Cox model estimates using inverse‐variance weights; the latter are optimal only under the (often implausible) assumption of a constant hazard ratio across strata. Focusing on trials with limited sample sizes (50‐200 subjects per treatment), we propose an alternative approach in which stratum‐specific estimates are obtained using a refined generalized logrank (RGLR) approach and then combined using either sample size or minimum risk weights for overall inference. Our proposal extends the work of Mehrotra et al. to incorporate the RGLR statistic, which outperforms the Cox model in the setting of proportional hazards and small samples. This work also entails development of a remarkably accurate plug‐in formula for the variance of RGLR‐based estimated log hazard ratios. We demonstrate using simulations that our proposed two‐step RGLR analysis delivers notably better results through smaller estimation bias and mean squared error and larger power than the stratified Cox model analysis when there is a treatment‐by‐stratum interaction, with similar performance when there is no interaction. Additionally, our method controls the type I error rate while the stratified Cox model does not in small samples. We illustrate our method using data from a clinical trial comparing two treatments for colon cancer.
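The weighting step described above can be sketched in a few lines: given stratum-specific log hazard ratio estimates, combine them with either inverse-variance weights (optimal only under a common hazard ratio) or sample-size weights (more robust to treatment-by-stratum interaction). This is only the combination step under assumed inputs; the RGLR statistic itself and its variance formula are not implemented here.

```python
import numpy as np

def combine_stratum_estimates(log_hr, var, n):
    """Combine stratum-specific log hazard ratio estimates.

    Returns the inverse-variance weighted average and the sample-size
    weighted average of the supplied per-stratum estimates.
    """
    log_hr, var, n = map(np.asarray, (log_hr, var, n))
    w_iv = (1.0 / var) / np.sum(1.0 / var)   # inverse-variance weights
    w_ss = n / np.sum(n)                     # sample-size weights
    return {"inverse_variance": float(w_iv @ log_hr),
            "sample_size": float(w_ss @ log_hr)}
```

When the stratum-specific hazard ratios truly coincide the two averages estimate the same quantity; when they differ (a treatment-by-stratum interaction), the inverse-variance average is pulled toward the most precisely estimated strata, which is the bias the sample-size weighting is meant to avoid.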

9.
In this article, we consider the product-limit quantile estimator of an unknown quantile function under a censored dependent model. This is a parallel problem to the estimation of the unknown distribution function by the product-limit estimator under the same model. Simultaneous strong Gaussian approximations of the product-limit process and product-limit quantile process are constructed with rate O((log n)^{-λ}) for some λ > 0. The strong Gaussian approximation of the product-limit process is then applied to derive laws of the iterated logarithm for the product-limit process.
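The two estimators named in this abstract are easy to state concretely: the product-limit (Kaplan–Meier) estimator of the survival function, and the quantile estimator obtained by inverting it. The sketch below handles independent censoring only; the dependent-censoring setting and the Gaussian approximation theory are beyond a few lines of code.

```python
import numpy as np

def product_limit_estimator(times, events):
    """Kaplan-Meier product-limit estimator of the survival function.

    Returns the event times and the estimated survival probability just
    after each event (events=1 for an observed event, 0 for censoring).
    """
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    n = len(times)
    surv, t_out, s_out = 1.0, [], []
    for i, (t, e) in enumerate(zip(times, events)):
        at_risk = n - i                    # subjects still at risk at time t
        if e:                              # observed event, not censored
            surv *= 1.0 - 1.0 / at_risk
            t_out.append(t)
            s_out.append(surv)
    return np.array(t_out), np.array(s_out)

def product_limit_quantile(times, events, p):
    """Product-limit quantile estimator: smallest t with F_hat(t) >= p."""
    t, s = product_limit_estimator(times, events)
    idx = np.nonzero(1.0 - s >= p)[0]
    return t[idx[0]] if idx.size else np.inf
```

Inverting the estimated distribution function in this way is exactly the "parallel problem" the abstract refers to: the quantile process inherits its limiting behaviour from the product-limit process.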

10.
A generalized Cox regression model is studied for the covariance analysis of competing risks data subject to independent random censoring. The information of the maximum partial likelihood estimates is compared with that of maximum likelihood estimates assuming a log linear hazard function. The method of generalized variance is used to define the efficiency of estimation between the two models. This is then applied to two-sample problems with two exponential censoring rates. Numerical results are summarized and presented graphically. The detailed results indicate that the semi-parametric model works well for a higher rate of censoring. A method of generalizing the result to Type I censoring and the efficiency of estimating the coefficient of the covariate are discussed. A brief account of using the results to help design experiments is also given.

11.
When Gaussian errors are inappropriate in a multivariate linear regression setting, it is often assumed that the errors are iid from a distribution that is a scale mixture of multivariate normals. Combining this robust regression model with a default prior on the unknown parameters results in a highly intractable posterior density. Fortunately, there is a simple data augmentation (DA) algorithm and a corresponding Haar PX‐DA algorithm that can be used to explore this posterior. This paper provides conditions (on the mixing density) for geometric ergodicity of the Markov chains underlying these Markov chain Monte Carlo algorithms. Letting d denote the dimension of the response, the main result shows that the DA and Haar PX‐DA Markov chains are geometrically ergodic whenever the mixing density is generalized inverse Gaussian, log‐normal, inverted Gamma (with shape parameter larger than d/2) or Fréchet (with shape parameter larger than d/2). The results also apply to certain subsets of the Gamma, F and Weibull families.

12.
We consider the problem of parameter estimation for inhomogeneous space‐time shot‐noise Cox point processes. We explore the possibility of using a stepwise estimation method and dimensionality‐reducing techniques to estimate different parts of the model separately. We discuss the estimation method using projection processes and propose a refined method that avoids projection to the temporal domain. This remedies the main flaw of the method using projection processes – possible overlapping, in the projection process, of clusters that are clearly separated in the original space‐time process. This issue is more prominent in the temporal projection process, where the amount of information lost by projection is higher than in the spatial projection process. For the refined method, we derive consistency and asymptotic normality results under the increasing domain asymptotics and appropriate moment and mixing assumptions. We also present a simulation study that suggests that cluster overlapping is successfully overcome by the refined method.

13.
The authors provide a rigorous large sample theory for linear models whose response variable has been subjected to the Box‐Cox transformation. They provide a continuous asymptotic approximation to the distribution of estimators of natural parameters of the model. They show, in particular, that the maximum likelihood estimator of the ratio of slope to residual standard deviation is consistent and relatively stable. The authors further show the importance for inference of normality of the errors and give tests for normality based on the estimated residuals. For non‐normal errors, they give adjustments to the log‐likelihood and to asymptotic standard errors.
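For readers unfamiliar with the transformation itself, here is a minimal sketch of the Box-Cox transform and a grid-search maximum likelihood estimate of its parameter via the standard profile log-likelihood (normal model with the variance profiled out, plus the Jacobian term). This is textbook material, not the large-sample theory developed in the paper.

```python
import numpy as np

def boxcox_transform(y, lam):
    """Box-Cox transformation; the log transform is the lam -> 0 limit."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

def boxcox_profile_loglik(y, lam):
    """Profile log-likelihood of lam under a normal model for the
    transformed data, including the Jacobian term of the transformation."""
    z = boxcox_transform(y, lam)
    n = len(z)
    return -0.5 * n * np.log(np.var(z)) + (lam - 1.0) * np.sum(np.log(y))

def boxcox_mle(y, grid=np.linspace(-2, 2, 401)):
    """Grid-search maximum likelihood estimate of the Box-Cox parameter."""
    ll = [boxcox_profile_loglik(y, lam) for lam in grid]
    return grid[int(np.argmax(ll))]
```

For log-normally distributed data the estimated parameter should sit near zero (the log transform), which is the usual sanity check for an implementation like this.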

14.
Abstract. This study gives a generalization of Birch's log‐linear model numerical invariance result. The generalization is given in the form of a sufficient condition for numerical invariance that is simple to verify in practice and is applicable for a much broader class of models than log‐linear models. Unlike Birch's log‐linear result, the generalization herein does not rely on any relationship between sufficient statistics and maximum likelihood estimates. Indeed the generalization does not rely on the existence of a reduced set of sufficient statistics. Instead, the concept of homogeneity takes centre stage. Several examples illustrate the utility of non‐log‐linear models, the invariance (and non‐invariance) of fitted values, and the invariance (and non‐invariance) of certain approximating distributions.

15.
Some general remarks are made about likelihood factorizations, distinguishing parameter-based factorizations and concentration-graph factorizations. Two parametric families of distributions for mixed discrete and continuous variables are discussed. Conditions on graphs are given for the circumstances under which their joint analysis can be split into separate analyses, each involving a reduced set of component variables and parameters. The result shows marked differences between the two families, although both involve the same necessary condition on prime graphs. This condition is both necessary and sufficient for simplified estimation in both Gaussian models and discrete log linear models.

16.
This paper characterizes the asymptotic behaviour of the likelihood ratio test statistic (LRTS) for testing homogeneity (i.e. no mixture) against gamma mixture alternatives. Under the null hypothesis, the LRTS is shown to be asymptotically equivalent to the square of Davies's Gaussian process test statistic and diverges at a log n rate to infinity in probability. Based on the asymptotic analysis, we propose and demonstrate a computationally efficient method to simulate the null distributions of the LRTS for small to moderate sample sizes.

17.
Parametric methods for the calculation of reference intervals in clinical studies often rely on the identification of a suitable transformation so that the transformed data can be assumed to be drawn from a Gaussian distribution. In this paper, the two-stage transformation recommended by the International Federation for Clinical Chemistry is compared with a novel generalised Box–Cox family of transformations. Investigation is also made of sample sizes needed to achieve certain criteria of reliability in the calculated reference interval. Simulations are used to show that the generalised Box–Cox family achieves a lower bias than the two-stage transformation. It was found that there is a possibility that the two-stage transformation will result in percentile estimates that cannot be back-transformed to obtain the required reference intervals, a difficulty not observed when using the generalised Box–Cox family introduced in this paper.

18.
The generating function of a marginal distribution of the reduced Palm distribution of a spatial point process is considered. It serves as a bivariate summary function, providing more information than some other popular univariate summary functions, such as the reduced second-moment function and the nearest-neighbour distance distribution function. Simulation confirmed that the new summary function is more informative when applied to patterns that exhibit both clustering and regularity on the same scale of observation.

19.
Analytical properties of regression and the variance–covariance matrix of asymmetric generalized scale mixture of multivariate Gaussian variables are presented. The analysis includes an in-depth analytical investigation of the first two conditional moments of the mixing variable. Exact computable expressions for the prediction and the conditional variance are presented for the generalized hyperbolic distribution using the inversion theorem for Fourier transforms. An application to financial log returns is demonstrated via the classical Euler approximation. The methodology is illustrated by analyzing the regression of intraday log returns for CISCO against the corresponding data from S&P 500.

20.
One of the main aims of early phase clinical trials is to identify a safe dose with an indication of therapeutic benefit to administer to subjects in further studies. Ideally, therefore, dose‐limiting events (DLEs) and responses indicative of efficacy should be considered in the dose‐escalation procedure. Several methods have been suggested for incorporating both DLEs and efficacy responses in early phase dose‐escalation trials. In this paper, we describe and evaluate a Bayesian adaptive approach based on one binary response (occurrence of a DLE) and one continuous response (a measure of potential efficacy) per subject. A logistic regression and a linear log‐log relationship are used respectively to model the binary DLEs and the continuous efficacy responses. A gain function concerning both the DLEs and efficacy responses is used to determine the dose to administer to the next cohort of subjects. Stopping rules are proposed to enable efficient decision making. Simulation results show that our approach performs better than taking account of DLE responses alone. To assess the robustness of the approach, scenarios where the efficacy responses of subjects are generated from an Emax model, but modelled by the linear log–log model, are also considered. This evaluation shows that the simpler log–log model leads to robust recommendations even under this model, showing that it is a useful approximation that avoids the difficulty of estimating the Emax model. Additionally, we find comparable performance to alternative approaches using efficacy and safety for dose‐finding. Copyright © 2015 John Wiley & Sons, Ltd.
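The dose-selection step described above can be sketched with assumed fitted parameters: a logistic model in log dose for P(DLE), a linear log-log model for expected efficacy, and a simple gain that trades one off against the other over an admissible (sufficiently safe) dose set. The specific gain function, admissibility threshold, and parameter values are illustrative assumptions, not the paper's actual specification.

```python
import numpy as np

def select_next_dose(doses, beta_tox, theta_eff, tox_weight=1.0, max_p_dle=0.35):
    """Pick the dose maximising a simple gain function.

    P(DLE) follows a logistic model in log dose, and expected efficacy a
    linear log-log model; doses whose estimated DLE risk exceeds the
    threshold are excluded before maximising gain = efficacy - penalty.
    """
    doses = np.asarray(doses, dtype=float)
    log_d = np.log(doses)
    p_dle = 1.0 / (1.0 + np.exp(-(beta_tox[0] + beta_tox[1] * log_d)))
    log_eff = theta_eff[0] + theta_eff[1] * log_d        # linear log-log efficacy model
    gain = np.exp(log_eff) - tox_weight * p_dle          # illustrative gain function
    admissible = p_dle <= max_p_dle
    if not admissible.any():
        return None                                      # stopping rule: no safe dose
    idx = np.flatnonzero(admissible)
    return float(doses[idx[np.argmax(gain[idx])]])
```

Lowering the admissibility threshold shifts the recommendation to a lower dose even when a higher dose has larger raw gain, which mirrors the role the stopping and safety rules play in the escalation procedure.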
