Similar Articles
20 similar articles found.
1.
Foam models, especially random tessellations, are powerful tools to study the relations between the geometric structure of foams and their physical properties. In this paper, we propose the use of random Laguerre tessellations, weighted versions of the well-known Voronoi tessellations, as models for the microstructure of foams. Based on geometric characteristics estimated from a tomographic image of a closed-cell polymer foam, we fit a Laguerre tessellation model to the material. It is shown that this model allows for a better fit of the geometric structure of the foam than some classical Voronoi tessellation models.
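
A rough sense of how a Laguerre (power) tessellation differs from an ordinary Voronoi tessellation can be had from a short simulation. The sketch below discretizes the unit square and assigns each pixel to the generator with the smallest power distance ||x − p_i||² − w_i; the generator positions and weights are random placeholders rather than values fitted to a tomographic image as in the paper.

```python
# Minimal sketch (not the paper's fitting procedure): a discretized Laguerre
# (power) tessellation. Each pixel is assigned to the generator minimizing the
# power distance ||x - p_i||^2 - w_i; setting all weights to zero recovers the
# ordinary Voronoi tessellation. Generators and weights are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 50
points = rng.uniform(0.0, 1.0, size=(n_cells, 2))    # generator locations
weights = rng.uniform(0.0, 0.01, size=n_cells)        # cell weights (squared radii)

# Pixel grid over the unit square.
g = np.linspace(0.0, 1.0, 250)
grid = np.stack(np.meshgrid(g, g), axis=-1)            # shape (250, 250, 2)

# Power distance from every pixel to every generator, then label each pixel.
sq_dist = ((grid[..., None, :] - points) ** 2).sum(axis=-1)   # (250, 250, n_cells)
labels = np.argmin(sq_dist - weights, axis=-1)

# Simple geometric summaries of the kind used when fitting tessellation models.
areas = np.bincount(labels.ravel(), minlength=n_cells) / labels.size
print("mean relative cell area:", areas.mean())
print("coefficient of variation of cell areas:", areas.std() / areas.mean())
```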

2.
3.
We consider the distributions of Goodman and Kruskal's G, Kendall's tau-b, and the correlation coefficients ρ and ρ_s for sample sizes 10(10)40 from 2×3 tables. The results are compared with asymptotic theory. It is found that the convergence of G to its asymptotic normal distribution is much slower than the convergence of the other measures to theirs, and that G is more likely to be significantly biased. However, the variances and biases of all four measures come close to their asymptotic values for quite moderate sample sizes.
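
A small Monte Carlo experiment in the same spirit is easy to set up; the sketch below samples 2×3 tables at several sample sizes and records the empirical mean and spread of Kendall's tau-b and Spearman's ρ_s (Goodman and Kruskal's G is omitted for lack of a standard SciPy implementation, and the cell probabilities are an illustrative choice, not the paper's design).

```python
# Minimal Monte Carlo sketch: sample 2x3 tables of size n and examine the
# small-sample bias/spread of Kendall's tau-b and Spearman's rho_s under an
# assumed cell-probability matrix (a placeholder, not the paper's design).
import numpy as np
from scipy.stats import kendalltau, spearmanr

rng = np.random.default_rng(1)
probs = np.array([[0.15, 0.20, 0.15],
                  [0.10, 0.15, 0.25]])           # assumed 2x3 cell probabilities

def sample_pairs(n):
    """Draw n (row, column) pairs from the 2x3 table."""
    cells = rng.choice(6, size=n, p=probs.ravel())
    return cells // 3, cells % 3                 # row scores, column scores

for n in (10, 20, 30, 40):
    taus, rhos = [], []
    for _ in range(2000):
        x, y = sample_pairs(n)
        taus.append(kendalltau(x, y)[0])
        rhos.append(spearmanr(x, y)[0])
    taus, rhos = np.array(taus), np.array(rhos)
    print(f"n={n:2d}  tau-b: mean={np.nanmean(taus):+.3f} sd={np.nanstd(taus):.3f}"
          f"   rho_s: mean={np.nanmean(rhos):+.3f} sd={np.nanstd(rhos):.3f}")
```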

4.
A note on the correlation structure of transformed Gaussian random fields
Transformed Gaussian random fields can be used to model continuous time series and spatial data when the Gaussian assumption is not appropriate. The main features of these random fields are specified in a transformed scale, while for modelling and parameter interpretation it is useful to establish connections between these features and those of the random field in the original scale. This paper provides evidence that for many ‘normalizing’ transformations the correlation function of a transformed Gaussian random field is not very dependent on the transformation that is used. Hence many commonly used transformations of correlated data have little effect on the original correlation structure. The property is shown to hold for some kinds of transformed Gaussian random fields, and a statistical explanation based on the concept of parameter orthogonality is provided. The property is also illustrated using two spatial datasets and several ‘normalizing’ transformations. Some consequences of this property for modelling and inference are also discussed.
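
The claim can be probed numerically. The sketch below simulates a one-dimensional Gaussian process with an exponential correlation function, applies an exponential (‘un-normalizing’) transformation, and compares the empirical correlations at a few lags; the correlation model and its range are illustrative assumptions.

```python
# Minimal sketch: compare the correlation function of a 1-D Gaussian process with
# that of its exponentiated (log-Gaussian) transform. The exponential correlation
# model and its range are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 200)
dist = np.abs(t[:, None] - t[None, :])
cov = np.exp(-dist / 2.0)                              # exponential correlation, range 2

L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(t)))
z = L @ rng.standard_normal((len(t), 5000))            # 5000 Gaussian sample paths
x = np.exp(z)                                          # transformed (lognormal) field

def empirical_corr(fields, lag):
    """Pooled correlation between values separated by `lag` grid steps."""
    a, b = fields[:-lag], fields[lag:]
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

for lag in (5, 20, 50):
    h = dist[0, lag]
    print(f"h={h:4.2f}  Gaussian corr={empirical_corr(z, lag):.3f}"
          f"  transformed corr={empirical_corr(x, lag):.3f}"
          f"  model corr={np.exp(-h / 2.0):.3f}")
```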

5.
Tobias Niebuhr, Statistics, 2017, 51(5): 1118–1131
We consider time series observed at random time points. In addition to Parzen's classical modelling by amplitude-modulating sequences, we state another modelling that uses an integer-valued sequence as the observation times. Limiting results are presented for the sample mean and are generalized to the class of smooth functions of means. Motivated by the complicated limiting behaviour, (moving) block bootstrap possibilities are investigated. Depending on the modelling used for the irregular spacings, one is led to different interpretations of the block length and hence to different bootstrap approaches. The block length can be interpreted either as a time span (resulting in an observation string of fixed length containing a random number of observations) or as a number of observations (resulting in an observation string of variable length containing a fixed number of values). Both bootstrap approaches are shown to be asymptotically valid for the sample mean. Numerical examples and an application to real-world ozone data conclude the study.
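
For the fixed-number-of-observations interpretation of the block length, a standard moving block bootstrap of the sample mean looks roughly as follows; the AR(1) data and the block length are illustrative choices, not the irregular-spacing models analysed in the paper.

```python
# Minimal sketch of a moving block bootstrap (MBB) for the sample mean of a
# dependent series; the AR(1) data and block length are illustrative choices,
# not the irregular-spacing models studied in the paper.
import numpy as np

rng = np.random.default_rng(3)

# Simulate an AR(1) series as stand-in data.
n, phi = 300, 0.6
eps = rng.standard_normal(n)
y = np.empty(n)
y[0] = eps[0]
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t]

def mbb_means(y, block_len, n_boot, rng):
    """Bootstrap distribution of the sample mean using moving blocks."""
    n = len(y)
    blocks = np.lib.stride_tricks.sliding_window_view(y, block_len)  # all moving blocks
    n_blocks = int(np.ceil(n / block_len))
    idx = rng.integers(0, blocks.shape[0], size=(n_boot, n_blocks))
    resampled = blocks[idx].reshape(n_boot, -1)[:, :n]               # trim to length n
    return resampled.mean(axis=1)

boot = mbb_means(y, block_len=15, n_boot=2000, rng=rng)
print("sample mean:", y.mean())
print("MBB 95% CI for the mean:", np.quantile(boot, [0.025, 0.975]))
```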

6.
Ruifei Cui, Perry Groot & Tom Heskes, Statistics and Computing, 2019, 29(2): 311–333

We consider the problem of causal structure learning from data with missing values, assumed to be drawn from a Gaussian copula model. First, we extend the ‘Rank PC’ algorithm, designed for Gaussian copula models with purely continuous data (so-called nonparanormal models), to incomplete data by applying rank correlation to pairwise-complete observations and replacing the sample size with an effective sample size in the conditional independence tests to account for the information loss from missing values. When the data are missing completely at random (MCAR), we provide an error bound on the accuracy of ‘Rank PC’ and show its high-dimensional consistency. However, when the data are missing at random (MAR), ‘Rank PC’ fails dramatically. Therefore, we propose a Gibbs sampling procedure to draw correlation-matrix samples from mixed data that still works correctly under MAR. These samples are translated into an average correlation matrix and an effective sample size, resulting in the ‘Copula PC’ algorithm for incomplete data. A simulation study shows that (1) ‘Copula PC’ estimates a more accurate correlation matrix and causal structure than ‘Rank PC’ under MCAR and, even more so, under MAR, and (2) the usage of the effective sample size significantly improves the performance of both ‘Rank PC’ and ‘Copula PC’. We illustrate our methods on two real-world datasets: riboflavin production data and chronic fatigue syndrome data.
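
Two ingredients mentioned above, rank correlation from pairwise-complete observations and an effective sample size in the independence test, can be sketched as follows; the mapping r = 2·sin(πρ_s/6) to the Pearson scale is the standard Gaussian-copula relation, and the snippet is an illustration rather than the ‘Rank PC’/‘Copula PC’ implementation.

```python
# Minimal sketch of two ingredients described above: (i) Spearman correlation
# from pairwise-complete observations mapped to the Pearson scale via the
# Gaussian-copula relation r = 2*sin(pi/6 * rho_s), and (ii) a Fisher-z marginal
# independence test that uses the number of pairwise-complete cases as an
# effective sample size. Not the 'Rank PC'/'Copula PC' implementation itself.
import numpy as np
from scipy.stats import spearmanr, norm

rng = np.random.default_rng(4)

# Correlated Gaussian copula data with values missing completely at random.
n = 500
z = rng.multivariate_normal([0, 0], [[1, 0.4], [0.4, 1]], size=n)
x, y = np.exp(z[:, 0]), z[:, 1] ** 3             # monotone transforms of the latent scale
x[rng.random(n) < 0.2] = np.nan
y[rng.random(n) < 0.2] = np.nan

complete = ~np.isnan(x) & ~np.isnan(y)
n_eff = complete.sum()                           # effective sample size
rho_s = spearmanr(x[complete], y[complete])[0]
r_hat = 2.0 * np.sin(np.pi / 6.0 * rho_s)        # Pearson-scale estimate

z_stat = np.sqrt(n_eff - 3) * np.arctanh(r_hat)  # Fisher-z test of independence
p_value = 2 * norm.sf(abs(z_stat))
print(f"n_eff={n_eff}, rho_s={rho_s:.3f}, r_hat={r_hat:.3f}, p={p_value:.2g}")
```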


7.
In investigating the correlation between an alcohol biomarker and self-report, we developed a method to estimate the canonical correlation between two high-dimensional random vectors with a small sample size. In reviewing the relevant literature, we found that our method is somewhat similar to an existing method, but that the existing method has been criticized as lacking theoretical grounding in comparison with an alternative approach. We provide theoretical and empirical grounding for our method, and we customize it for our application to produce a novel method, which selects linear combinations that are step functions with a sparse number of steps.

8.
Longitudinal count data with excessive zeros frequently occur in social, biological, medical, and health research. To model such data, zero-inflated Poisson (ZIP) models are commonly used, after separating zero and positive responses. As longitudinal count responses are likely to be serially correlated, such separation may destroy the underlying serial correlation structure. To overcome this problem, observation-driven and parameter-driven modelling approaches have recently been proposed. In the observation-driven model, the response at a specific time point is modelled through the responses at previous time points after incorporating serial correlation. One limitation of the observation-driven model is that it fails to accommodate any possible over-dispersion, which frequently occurs in count responses. This limitation is overcome in a parameter-driven model, where the serial correlation is captured through a latent process using random effects. We compare the results obtained by the two models. A quasi-likelihood approach has been developed to estimate the model parameters. The methodology is illustrated with the analysis of two real-life datasets. To examine model performance, the models are also compared through a simulation study.
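
As a baseline for the kind of data described above, a ZIP regression (ignoring serial correlation) can be fitted with statsmodels; the simulated covariate and parameter values are placeholders.

```python
# Minimal sketch: fitting a zero-inflated Poisson (ZIP) regression with
# statsmodels. The covariate and parameter values are simulated placeholders, and
# serial correlation across repeated measurements (the focus above) is ignored.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(5)
n = 1000
x = rng.standard_normal(n)
X = sm.add_constant(x)

# Data-generating process: structural zeros with probability 0.3,
# otherwise Poisson counts with log-mean 0.5 + 0.8*x.
structural_zero = rng.random(n) < 0.3
mu = np.exp(0.5 + 0.8 * x)
y = np.where(structural_zero, 0, rng.poisson(mu))

model = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1)), inflation='logit')
result = model.fit(maxiter=200, disp=False)
print(result.summary())
```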

9.
We consider stationary Poisson–Voronoi tessellations (PVT) in the Euclidean plane and study the properties of Voronoi tessellations induced by linear Poisson processes on the edges of the PVT. We are especially interested in simulation algorithms for the typical cell. Two different simulation algorithms are introduced. The first algorithm directly simulates the typical cell, whereas the second algorithm simulates cells from which distributional properties of the typical cell can be obtained. This second algorithm can also be used for simulating the typical cell of other Cox–Voronoi tessellations. The implementations of both algorithms are tested for correctness using randomized software tests. Different cell characteristics are then studied by simulation and compared with those of the typical cell of PVT and of Cox–Voronoi tessellations based on linear Poisson processes on the lines of Poisson line processes. Our results can be applied, for example, in the analysis of telecommunication networks and vesicle paths on cytoskeletal networks.
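
A plain Poisson–Voronoi tessellation, the reference model against which the Cox–Voronoi cells are compared, can be simulated directly; the sketch below estimates cell characteristics by minus-sampling and is not an implementation of the paper's typical-cell algorithms.

```python
# Minimal sketch: simulate a planar Poisson-Voronoi tessellation and estimate
# cell characteristics by minus-sampling (only bounded cells whose generators
# lie well inside the window are used). Illustrates plain PVT cells, not the
# Cox-Voronoi constructions on PVT edges studied in the paper.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(6)
intensity, window = 100.0, 10.0                        # points per unit area, window side
n_pts = rng.poisson(intensity * window ** 2)
pts = rng.uniform(0.0, window, size=(n_pts, 2))
vor = Voronoi(pts)

def polygon_area(vertices):
    """Shoelace formula."""
    x, y = vertices[:, 0], vertices[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

areas, edges = [], []
margin = 1.0                                           # guard zone against edge effects
for p_idx, r_idx in enumerate(vor.point_region):
    region = vor.regions[r_idx]
    inside = np.all(np.abs(pts[p_idx] - window / 2) < window / 2 - margin)
    if inside and region and -1 not in region:         # bounded cell away from the boundary
        areas.append(polygon_area(vor.vertices[region]))
        edges.append(len(region))

print("cells used:", len(areas))
print("mean cell area:", np.mean(areas), "(theoretical 1/intensity =", 1 / intensity, ")")
print("mean number of edges:", np.mean(edges), "(theoretical value 6)")
```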

10.
Detecting dependence between marks and locations of marked point processes
Summary. We introduce two characteristics for stationary and isotropic marked point processes, E(h) and V(h), and describe their use in investigating mark–point interactions. These quantities are functions of the interpoint distance h and denote the conditional expectation and the conditional variance of a mark respectively, given that there is a further point of the process a distance h away. We present tests based on E and V for the hypothesis that the values of the marks can be modelled by a random field which is independent of the unmarked point process. We apply the methods to two data sets in forestry.
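
Simple ratio-type estimates of E(h) and V(h) can be obtained by binning the marks of point pairs by interpoint distance, as in the sketch below; the simulated pattern uses nearest-neighbour distances as marks, an assumption chosen only to make a mark–point dependence visible.

```python
# Minimal sketch: nonparametric estimates of E(h) and V(h) -- the conditional
# mean and variance of a mark given another point of the process at distance h --
# obtained by binning the marks of point pairs by their interpoint distance.
# Each mark is the point's nearest-neighbour distance (an illustrative choice),
# so points with a close neighbour carry small marks and E(h) should rise with h.
import numpy as np

rng = np.random.default_rng(7)
n = 800
xy = rng.uniform(0.0, 1.0, size=(n, 2))              # point locations in the unit square

diff = xy[:, None, :] - xy[None, :, :]
dists = np.sqrt((diff ** 2).sum(-1))
np.fill_diagonal(dists, np.inf)
marks = dists.min(axis=1)                            # mark = nearest-neighbour distance

# All ordered pairs (i, j), i != j: interpoint distance and the mark of point i.
i_idx, j_idx = np.where(np.isfinite(dists))
pair_d, pair_m = dists[i_idx, j_idx], marks[i_idx]

bins = np.linspace(0.0, 0.25, 6)
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (pair_d >= lo) & (pair_d < hi)
    print(f"h in [{lo:.2f}, {hi:.2f}):  E(h) ~ {pair_m[sel].mean():.4f}"
          f"   V(h) ~ {pair_m[sel].var():.6f}")
```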

11.
A three-parameter extension of the exponential distribution is introduced and studied in this paper. The new distribution is quite flexible and can be used effectively in modelling survival data, reliability problems, fatigue-life studies and hydrological data. It can have constant, decreasing, increasing, upside-down bathtub (unimodal), bathtub-shaped and decreasing–increasing–decreasing hazard rate functions. We provide a comprehensive account of the mathematical properties of the new distribution and derive various structural quantities. We discuss maximum likelihood estimation of the model parameters for both complete and censored samples. An empirical application of the new model to real data is presented for illustrative purposes. We hope that the new distribution will serve as an alternative to other models available in the literature for modelling real data in many areas.

12.
We present a bootstrap Monte Carlo algorithm for computing the power function of the generalized correlation coefficient. The proposed method makes no assumptions about the form of the underlying probability distribution and may be used with observed data to approximate the power function, as well as with pilot data for sample size determination. In particular, the bootstrap power functions of the Pearson product-moment correlation and the Spearman rank correlation are examined. Monte Carlo experiments indicate that the proposed algorithm is reliable and compares well with the asymptotic values. An example which demonstrates how this method can be used for sample size determination and power calculations is provided.
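
The basic resampling idea can be sketched in a few lines: draw bootstrap samples of the target size from the pilot data, run the correlation test on each, and report the rejection rate as the estimated power. The bivariate normal pilot data below are simulated placeholders.

```python
# Minimal sketch of a bootstrap power estimate for correlation tests: resample
# the pilot data with replacement at the target sample size, run the test on
# each bootstrap sample, and take the rejection rate as the power estimate.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(8)

# Pilot data with a modest true correlation (placeholder for observed pilot data).
pilot = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=40)

def bootstrap_power(pilot, n_target, test, alpha=0.05, n_boot=2000):
    """Fraction of bootstrap samples of size n_target in which H0: no correlation is rejected."""
    rejections = 0
    for _ in range(n_boot):
        idx = rng.integers(0, len(pilot), size=n_target)
        x, y = pilot[idx, 0], pilot[idx, 1]
        rejections += test(x, y)[1] < alpha
    return rejections / n_boot

for n in (20, 40, 80, 160):
    print(f"n={n:3d}  Pearson power ~ {bootstrap_power(pilot, n, pearsonr):.2f}"
          f"   Spearman power ~ {bootstrap_power(pilot, n, spearmanr):.2f}")
```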

13.
In longitudinal data analysis with random subject effects, there is often within-subject serial correlation and possibly unequally spaced observations. This serial correlation can be partially confounded with the random between-subject effects. In real data, it is often not clear whether there is serial correlation, random subject effects or both. Using inference based on the likelihood function, it is not always possible to identify the correct model, especially in small samples. However, it is important that some effort be made to find a good model rather than simply making assumptions. This often means trying models with random coefficients, with serial correlation, and with both. Model selection criteria such as likelihood ratio tests and Akaike's Information Criterion (AIC) can be used. The problem of modelling serial correlation with unequally spaced observations is addressed. A real data example is presented where there is an apparent heterogeneity of variances, possible serial correlation and between-subject random effects. In this example, it turns out that the random subject effects explain both the serial correlation and the variance heterogeneity.

14.
We consider testing the hypothesis that two random vectors of p and q components are independent in canonical correlation analysis. In this paper we investigate the power of the test based on the largest root criterion. As the exact distribution is expressed in terms of zonal polynomials, the computation is feasible only for p = 2, and it must be carried out in quadruple precision because significance is lost through subtraction. Table I gives the percentage points of the largest root criterion computed in quadruple precision. We then calculate the power for p = 2 and q = 3 to 11 in steps of 2. The results show that, for fixed n − q, the power decreases as q increases, and that, for a fixed first canonical correlation ρ1 under the alternative hypothesis (ρ1, ρ2), the power does not become substantially larger as ρ2 increases. We can also find the sample size required for the power against a given alternative hypothesis to be about 0.9. The numerical results may be useful for assessing the quality of the approximation obtained from the asymptotic distribution.

15.
In this article we propose a novel non-parametric sampling approach to estimate posterior distributions of parameters of interest. Starting from an initial sample over the parameter space, the method uses this information to form a geometrical structure, a Voronoi tessellation, over the whole parameter space. This rough approximation to the posterior distribution provides a way to generate new points from the posterior distribution without any additional costly model evaluations. By running a traditional Markov chain Monte Carlo (MCMC) sampler over the non-parametric tessellation, the initial approximate distribution is refined sequentially. We applied this method to a couple of climate models to show that this hybrid scheme successfully approximates the posterior distribution of the model parameters.
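
The general idea can be sketched with a cheap stand-in posterior: treat the initial evaluations as a nearest-neighbour (Voronoi-cell) surrogate of the log posterior and run a random-walk Metropolis sampler on that surrogate, so no further expensive model evaluations are needed. This is an illustration of the scheme, not the authors' implementation.

```python
# Minimal sketch: a piecewise-constant (Voronoi-cell) surrogate of the log
# posterior built from initial evaluations, explored with random-walk Metropolis.
# The banana-shaped 'true' log posterior is a cheap stand-in for an expensive model.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(9)

def true_log_post(theta):
    """Stand-in for an expensive model-based log posterior."""
    x, y = theta
    return -0.5 * (x ** 2 + (y - x ** 2) ** 2 / 0.5)

# Initial (expensive) evaluations over the parameter space.
design = rng.uniform(-3.0, 3.0, size=(400, 2))
log_post = np.array([true_log_post(t) for t in design])
tree = cKDTree(design)

def surrogate_log_post(theta):
    """Value of the nearest design point; -inf outside the explored region."""
    if np.any(np.abs(theta) > 3.0):
        return -np.inf
    _, idx = tree.query(theta)
    return log_post[idx]

# Random-walk Metropolis on the surrogate (no new model evaluations needed).
theta = design[np.argmax(log_post)]
current = surrogate_log_post(theta)
samples = []
for _ in range(20000):
    proposal = theta + 0.3 * rng.standard_normal(2)
    cand = surrogate_log_post(proposal)
    if np.log(rng.random()) < cand - current:
        theta, current = proposal, cand
    samples.append(theta)

samples = np.array(samples[5000:])                    # discard burn-in
print("posterior mean estimate:", samples.mean(axis=0))
```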

16.
Abstract. We introduce a class of Gibbs–Markov random fields built on regular tessellations that can be understood as discrete counterparts of Arak–Surgailis polygonal fields. We focus first on consistent polygonal fields, for which we show consistency, Markovianity and solvability by means of dynamic representations. Next, we develop disagreement loop as well as path creation and annihilation dynamics for their general Gibbsian modifications, which cover most lattice‐based Gibbs–Markov random fields subject to certain mild conditions. Applications to foreground–background image segmentation problems are discussed.

17.
Abstract. An objective of randomized placebo‐controlled preventive HIV vaccine efficacy trials is to assess the relationship between the vaccine effect to prevent infection and the genetic distance of the exposing HIV to the HIV strain represented in the vaccine construct. Motivated by this objective, a mark‐specific proportional hazards (PH) model with a continuum of competing risks has recently been studied, where the genetic distance of the transmitting strain is the continuous ‘mark’ defined and observable only in failures. A high percentage of the genetic marks of interest may be missing for a variety of reasons, predominantly because of the rapid evolution of HIV sequences after transmission but before a blood sample is drawn from which HIV sequences are measured. This research investigates the stratified mark‐specific PH model with missing marks, where the baseline functions may vary with strata. We develop two consistent estimation approaches, the first based on the inverse probability weighted complete‐case (IPW) technique, and the second based on augmenting the IPW estimator by incorporating auxiliary information predictive of the mark. We investigate the asymptotic properties and finite‐sample performance of the two estimators, and show that the augmented IPW estimator, which satisfies a double-robustness property, is more efficient.
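
The IPW idea itself is easy to illustrate on a simpler problem, estimating the mean of a ‘mark’ that is missing at random given an always-observed covariate; the sketch below is only that illustration and does not reproduce the mark-specific PH estimators.

```python
# Minimal sketch of inverse probability weighting (IPW): estimate the mean of a
# 'mark' that is missing at random given an always-observed covariate. The
# mark-specific proportional hazards machinery of the paper is not reproduced.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 5000
x = rng.standard_normal(n)                       # always-observed auxiliary covariate
mark = 1.0 + 0.8 * x + 0.5 * rng.standard_normal(n)

# Missing at random: the observation probability depends on x only.
p_obs = 1.0 / (1.0 + np.exp(-(0.2 + 1.5 * x)))
observed = rng.random(n) < p_obs

# Estimate the observation probabilities with a logistic regression on x.
logit = sm.Logit(observed.astype(float), sm.add_constant(x)).fit(disp=0)
p_hat = logit.predict(sm.add_constant(x))

naive = mark[observed].mean()                                      # complete-case mean (biased)
ipw = np.sum(observed * mark / p_hat) / np.sum(observed / p_hat)   # IPW complete-case mean
print(f"true mean ~ 1.00, complete-case = {naive:.3f}, IPW = {ipw:.3f}")
```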

18.
The 2 × 2 crossover trial uses subjects as their own controls to reduce the intersubject variability in the treatment comparison, and typically requires fewer subjects than a parallel design. The generalized estimating equations (GEE) methodology has been commonly used to analyze incomplete discrete outcomes from crossover trials. We propose a unified approach to power and sample size determination for the Wald Z-test and t-test from GEE analysis of paired binary, ordinal and count outcomes in crossover trials. The proposed method allows misspecification of the variance and correlation of the outcomes, missing outcomes, and adjustment for the period effect. We demonstrate that misspecification of the working variance and correlation functions leads to no or minimal efficiency loss in GEE analysis of paired outcomes. In general, GEE requires the assumption of missing completely at random. For bivariate binary outcomes, we show by simulation that the GEE estimate is asymptotically unbiased or only minimally biased, and that the proposed sample size method is suitable under missing at random (MAR) if the working correlation is correctly specified. The performance of the proposed method is illustrated with several numerical examples. Adaptation of the method to other paired outcomes is discussed.
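
A GEE analysis of paired binary crossover outcomes with an exchangeable working correlation and a period effect can be set up in statsmodels as below; the simulated effects are placeholders, and the paper's power and sample-size formulas are not implemented.

```python
# Minimal sketch: GEE analysis of paired binary outcomes from a 2x2 crossover,
# with an exchangeable working correlation and a period effect. Simulated
# treatment/period effects are placeholders; the power and sample-size formulas
# from the paper are not implemented here.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(11)
n_subj = 100
subject = np.repeat(np.arange(n_subj), 2)
period = np.tile([0, 1], n_subj)
# AB / BA sequences: half the subjects receive treatment in period 1, half in period 2.
sequence = np.repeat(rng.integers(0, 2, n_subj), 2)
treat = np.where(sequence == 0, period, 1 - period)

# Within-subject correlation induced by a shared random effect.
u = np.repeat(rng.standard_normal(n_subj), 2)
lin_pred = -0.3 + 0.8 * treat + 0.2 * period + 0.8 * u
y = (rng.random(2 * n_subj) < 1.0 / (1.0 + np.exp(-lin_pred))).astype(int)

data = pd.DataFrame({"y": y, "treat": treat, "period": period, "subject": subject})
model = sm.GEE.from_formula("y ~ treat + period", groups="subject", data=data,
                            family=sm.families.Binomial(),
                            cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
```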

19.
We introduce a class of random fields that can be understood as discrete versions of multicolour polygonal fields built on regular linear tessellations. We focus first on a subclass of consistent polygonal fields, for which we show Markovianity and solvability by means of a dynamic representation. This representation is used to design new sampling techniques for Gibbsian modifications of such fields, a class which covers lattice‐based random fields. A flux‐based modification is applied to the extraction of the field tracks network from a Synthetic Aperture Radar image of a rural area.

20.
Classical time-series theory assumes values of the response variable to be ‘crisp’ or ‘precise’, an assumption that is quite often violated in reality. However, forecasting of such data can be carried out through fuzzy time-series analysis. This article presents an improved method of forecasting based on LR fuzzy sets as membership functions. As an illustration, the methodology is employed for forecasting India's total foodgrain production. For the data under consideration, the superiority of the proposed method over competing methods is demonstrated, in respect of both modelling and forecasting, on the basis of mean-square-error and average-relative-error criteria. Finally, out-of-sample forecasts are also obtained.
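
For orientation, a very reduced first-order fuzzy time-series forecaster with triangular (LR-type) membership functions is sketched below, closer to the classical Chen-style procedure than to the improved method of the article; the series is a synthetic placeholder for annual production figures.

```python
# Very simplified first-order fuzzy time-series forecaster with triangular
# (LR-type) membership functions, in the spirit of the classical Chen approach
# rather than the improved method of the article. The series is synthetic.
import numpy as np

rng = np.random.default_rng(12)
series = 100 + np.cumsum(rng.normal(2.0, 3.0, size=30))   # synthetic 'production' series

# Partition the universe of discourse into intervals with triangular membership centres.
n_sets = 7
lo, hi = series.min() - 5, series.max() + 5
edges = np.linspace(lo, hi, n_sets + 1)
centres = 0.5 * (edges[:-1] + edges[1:])

def fuzzify(value):
    """Index of the fuzzy set with maximum (triangular) membership."""
    return int(np.clip(np.searchsorted(edges, value) - 1, 0, n_sets - 1))

labels = np.array([fuzzify(v) for v in series])

# First-order fuzzy logical relationships: state i -> set of observed next states.
flr = {i: set() for i in range(n_sets)}
for a, b in zip(labels[:-1], labels[1:]):
    flr[a].add(b)

def forecast(last_value):
    """Defuzzified forecast: average of the centres of the consequent fuzzy sets."""
    consequents = flr[fuzzify(last_value)]
    if not consequents:
        return float(centres[fuzzify(last_value)])
    return float(np.mean([centres[c] for c in consequents]))

fitted = [forecast(v) for v in series[:-1]]
mse = np.mean((np.array(fitted) - series[1:]) ** 2)
print(f"in-sample MSE = {mse:.2f}; next-period forecast = {forecast(series[-1]):.2f}")
```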

