Similar Articles
20 similar articles were retrieved.
1.
A strictly stationary time series is modelled directly, once the variables' realizations fit into a table: no knowledge of a distribution is required beyond the prior discretization. A multiplicative model with combined random ‘Auto-Regressive’ and ‘Moving-Average’ parts is considered for the serial dependence. Based on a multi-sequence of unobserved series that serve as differences, and differences of differences, from the main building block, a causal version is obtained; a condition that secures an exponential rate of convergence for its expected random coefficients is presented. For the remainder, writing the conditional probability as a function of past conditional probabilities is within reach: when the moving-average segment is present in the original equation, what could be a long process of elimination by mathematical arguments concludes with a new derivation that does not support a simplistic linear dependence on the lagged probability values.

2.
A result is presented concerning the null distribution of a statistic used to determine the number of multiplicative components in a fixed two-way model. This result suggests critical values which are compared with previously suggested critical values.

3.
Consider repeated event-count data from a sequence of exposures, during each of which a subject can experience some number of events, which is reported at ‘visits’ following each exposure. Within-subject heterogeneity not accounted for by visit-varying covariates is called ‘visit-level’ heterogeneity. Using generalized linear mixed models with log link for longitudinal Poisson regression, I model visit-level heterogeneity by cumulatively adding ‘disturbances’ to the random intercept of each subject over visits to create a ‘disturbed-random-intercept’ model. I also create a ‘disturbed-random-slope’ model, where the slope is over visits, and both intercept and slope are random but only the slope is disturbed. Simulation studies compare fixed-effect estimation for these models in data with 15 visits, large visit-level heterogeneity, and large multiplicative overdispersion. These studies show statistically significant superiority of the disturbed-random-intercept model. Examples with epidemiological data compare results of this model with those from other published models.
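As a rough illustration (my construction, not the paper's code), the sketch below simulates count data from a disturbed-random-intercept model of the kind described: each subject's random intercept accumulates a fresh Gaussian ‘disturbance’ at every visit before entering a log-linear Poisson mean. All parameter values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_visits = 200, 15
beta0, beta1 = 0.5, 0.3                 # hypothetical fixed effects
sd_intercept, sd_disturb = 0.8, 0.4     # hypothetical variance components

x = rng.normal(size=(n_subjects, n_visits))    # visit-varying covariate
b = rng.normal(0.0, sd_intercept, n_subjects)  # subject random intercept
# cumulative disturbances: the intercept drifts over successive visits
d = rng.normal(0.0, sd_disturb, (n_subjects, n_visits)).cumsum(axis=1)

log_mu = beta0 + beta1 * x + b[:, None] + d    # log link
y = rng.poisson(np.exp(log_mu))                # repeated event counts
print(y.shape, y.mean())
```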

4.
Standard statistical techniques do not provide methods for analyzing data from nonreplicated factorial experiments. Such experiments occur for several reasons. Many experimenters may prefer conducting experiments with a large number of factor levels and no replications to conducting experiments with a few factor levels and replications, particularly in pilot studies. Such experiments may allow one to identify factor combinations to be used in follow-up experiments. Another possibility is when the experimenter thinks that an experiment is replicated when in fact it is not; this occurs when a naive researcher believes that sub-samples are replicates when in reality they are not. Nonreplicated two-way experiments have been extensively studied. This paper discusses the analysis of nonreplicated three-way experiments. In particular, estimation of σ² is discussed and a test is derived for testing whether three-factor interaction is absent in sub-areas of three-way data, using a nonreplicated three-way multiplicative interaction model with a single multiplicative term. The approximate null distribution of the derived test statistic is studied using Monte Carlo simulation, and the results are illustrated through an example.
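The paper's exact statistic is not reproduced here, but tests of this family share a common ingredient that a sketch can show: form the three-factor interaction residuals, unfold them into a matrix, and ask how much of their sum of squares the first singular value captures (a Johnson–Graybill-type statistic), calibrated by Monte Carlo under the null. This is an illustrative analogue under my own assumptions, not the derived test.

```python
import numpy as np

def interaction_residuals(y):
    """Three-factor interaction residuals of an (I,J,K) array: subtract
    the additive fit with all main effects and two-way interactions."""
    g = y.mean()
    a = y.mean(axis=(1, 2), keepdims=True) - g
    b = y.mean(axis=(0, 2), keepdims=True) - g
    c = y.mean(axis=(0, 1), keepdims=True) - g
    ab = y.mean(axis=2, keepdims=True) - g - a - b
    ac = y.mean(axis=1, keepdims=True) - g - a - c
    bc = y.mean(axis=0, keepdims=True) - g - b - c
    return y - (g + a + b + c + ab + ac + bc)

def jg_type_statistic(y):
    """Share of the first singular value in the unfolded residuals --
    a Johnson-Graybill-type quantity, not the paper's exact statistic."""
    r = interaction_residuals(y)
    s = np.linalg.svd(r.reshape(r.shape[0], -1), compute_uv=False)
    return s[0] ** 2 / (s ** 2).sum()

rng = np.random.default_rng(1)
y = rng.normal(size=(4, 5, 6))                  # null data, no interaction
obs = jg_type_statistic(y)
null = [jg_type_statistic(rng.normal(size=y.shape)) for _ in range(2000)]
print(obs, np.mean(np.array(null) >= obs))      # Monte Carlo p-value
```

The statistic is scale-invariant, so under normal errors its null distribution can be simulated from standard normal arrays, mirroring the Monte Carlo calibration the abstract mentions.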

5.
A sample of n subjects is observed in each of two states, S1 and S2. In each state, a subject is in one of two conditions, X or Y. Thus, a subject may be recorded as showing a change if its condition in the two states is ‘Y,X’ or ‘X,Y’; otherwise, the condition is unchanged. We consider a Bayesian test of the null hypothesis that the probability of an ‘X,Y’ change exceeds that of a ‘Y,X’ change by an amount k0. That is, we develop the posterior distribution of k, the difference between the two probabilities, and reject the null hypothesis if k0 lies outside the appropriate posterior probability interval. The performance of the method is assessed by Monte Carlo and other numerical studies, and brief tables of exact critical values are presented.
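A minimal Bayesian sketch in the same spirit (my construction, not the authors'): place a uniform Dirichlet prior on the three outcome probabilities (‘X,Y’ change, ‘Y,X’ change, no change), sample the posterior from hypothetical observed counts, and check whether a hypothesised difference k0 lies inside the central posterior interval.

```python
import numpy as np

rng = np.random.default_rng(0)
n_xy, n_yx, n_same = 18, 7, 75    # hypothetical observed counts
k0 = 0.0                          # hypothesised difference to test

# Dirichlet(1,1,1) prior -> posterior is Dirichlet(counts + 1)
post = rng.dirichlet([n_xy + 1, n_yx + 1, n_same + 1], size=100_000)
k = post[:, 0] - post[:, 1]       # p('X,Y') - p('Y,X')

lo, hi = np.quantile(k, [0.025, 0.975])
print(f"95% posterior interval for k: ({lo:.3f}, {hi:.3f})")
print("reject H0" if not (lo <= k0 <= hi) else "do not reject H0")
```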

6.
Cui, Ruifei, Groot, Perry, and Heskes, Tom. Statistics and Computing (2019), 29(2), 311–333.

We consider the problem of causal structure learning from data with missing values, assumed to be drawn from a Gaussian copula model. First, we extend the ‘Rank PC’ algorithm, designed for Gaussian copula models with purely continuous data (so-called nonparanormal models), to incomplete data by applying rank correlation to pairwise complete observations and replacing the sample size with an effective sample size in the conditional independence tests to account for the information loss from missing values. When the data are missing completely at random (MCAR), we provide an error bound on the accuracy of ‘Rank PC’ and show its high-dimensional consistency. However, when the data are missing at random (MAR), ‘Rank PC’ fails dramatically. Therefore, we propose a Gibbs sampling procedure to draw correlation matrix samples from mixed data that still works correctly under MAR. These samples are translated into an average correlation matrix and an effective sample size, resulting in the ‘Copula PC’ algorithm for incomplete data. A simulation study shows that: (1) ‘Copula PC’ estimates a more accurate correlation matrix and causal structure than ‘Rank PC’ under MCAR and, even more so, under MAR; and (2) the usage of the effective sample size significantly improves the performance of ‘Rank PC’ and ‘Copula PC’. We illustrate our methods on two real-world datasets: riboflavin production data and chronic fatigue syndrome data.
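To make the ‘Rank PC’ ingredients concrete, here is a rough sketch (details assumed by me, not the authors' code): Spearman correlations from pairwise-complete observations, the standard nonparanormal transform 2·sin(π/6·ρS) to recover Pearson correlations under the copula, and a Fisher-z independence test in which an effective sample size stands in for the nominal n.

```python
import numpy as np
from scipy import stats

def pairwise_spearman(X):
    """Spearman correlations from pairwise-complete observations, with
    the pairwise-complete sample sizes; 2*sin(pi/6*rho) maps the result
    to Pearson correlations under the Gaussian copula."""
    p = X.shape[1]
    r, n = np.eye(p), np.full((p, p), float(len(X)))
    for i in range(p):
        for j in range(i + 1, p):
            ok = ~(np.isnan(X[:, i]) | np.isnan(X[:, j]))
            n[i, j] = n[j, i] = ok.sum()
            r[i, j] = r[j, i] = stats.spearmanr(X[ok, i], X[ok, j]).correlation
    return 2 * np.sin(np.pi / 6 * r), n

def fisher_z_pvalue(rho, n_eff):
    """Independence test via Fisher's z, with an effective sample size."""
    z = np.arctanh(rho) * np.sqrt(max(n_eff - 3, 1))
    return 2 * stats.norm.sf(abs(z))

rng = np.random.default_rng(0)
cov = [[1, .5, 0], [.5, 1, 0], [0, 0, 1]]
X = rng.multivariate_normal([0, 0, 0], cov, size=500)
X[rng.random(X.shape) < 0.2] = np.nan            # MCAR missingness
R, N = pairwise_spearman(X)
print(fisher_z_pvalue(R[0, 2], N[0, 2]))         # truly independent pair
```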


7.
This paper is concerned with the Bernstein estimator [Vitale, R.A. (1975), ‘A Bernstein Polynomial Approach to Density Function Estimation’, in Statistical Inference and Related Topics, ed. M.L. Puri, 2, New York: Academic Press, pp. 87–99] to estimate a density with support [0, 1]. One of the major contributions of this paper is an application of a multiplicative bias correction [Terrell, G.R., and Scott, D.W. (1980), ‘On Improving Convergence Rates for Nonnegative Kernel Density Estimators’, The Annals of Statistics, 8, 1160–1163], which was originally developed for the standard kernel estimator. Moreover, the renormalised multiplicative bias corrected Bernstein estimator is studied rigorously. The mean squared error (MSE) in the interior and mean integrated squared error of the resulting bias corrected Bernstein estimators as well as the additive bias corrected Bernstein estimator [Leblanc, A. (2010), ‘A Bias-reduced Approach to Density Estimation Using Bernstein Polynomials’, Journal of Nonparametric Statistics, 22, 459–475] are shown to be O(n^{-8/9}) when the underlying density has a fourth-order derivative, where n is the sample size. The condition under which the MSE near the boundary is O(n^{-8/9}) is also discussed. Finally, numerical studies based on both simulated and real data sets are presented.
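A sketch of the two ingredients, under assumptions of mine: Vitale's form of the Bernstein estimator, and a Terrell–Scott-style multiplicative correction pairing degrees m and m/2 (the kernel analogue pairs bandwidths h and 2h; the paper's exact corrected estimator may differ). The result is renormalised to integrate to one, as the abstract describes.

```python
import numpy as np
from scipy.stats import binom
from scipy.integrate import trapezoid

def bernstein_density(data, m, x):
    """Vitale's Bernstein estimator on [0,1]: weights are empirical-CDF
    increments over the grid k/m, smoothed by Bernstein polynomials."""
    srt = np.sort(data)
    Fn = lambda t: np.searchsorted(srt, t, side="right") / len(srt)
    k = np.arange(m)
    w = Fn((k + 1) / m) - Fn(k / m)
    return m * (w[:, None] * binom.pmf(k[:, None], m - 1, x[None, :])).sum(0)

def ts_corrected(data, m, x):
    """Multiplicative bias correction in the Terrell-Scott spirit:
    f1^(4/3) * f2^(-1/3) with degrees (m, m/2) -- pairing assumed --
    then renormalised to integrate to one."""
    f1 = bernstein_density(data, m, x)
    f2 = np.maximum(bernstein_density(data, m // 2, x), 1e-12)
    f = f1 * np.cbrt(f1 / f2)
    return f / trapezoid(f, x)

rng = np.random.default_rng(0)
data = rng.beta(2, 5, size=500)
x = np.linspace(0, 1, 201)
f = ts_corrected(data, 40, x)
print(trapezoid(f, x), f[50])   # total mass ~1; density at x = 0.25
```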

8.
When modelling two-way analysis of variance interactions by a multiplicative term ργiδj, asymptotic variances and covariances are derived for the parameters ρ, γi and δj using maximum likelihood theory. The asymptotic framework is defined by σ²/K, where K is the number of observations per combination of the two factors and σ² the common variance of the eijk values. The results can be applied when K = 1. Two Monte Carlo studies were carried out to check the validity of the formulae for small values of σ²/K and to assess their usefulness when the unknown parameters are replaced by their estimates. The formulae fit well, but the confidence regions produced are too narrow if the interaction term is small. The procedure is illustrated with two examples.
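For orientation (my sketch, taking the symbol reconstruction above at face value): under normal errors, the least-squares (and ML) estimates of a single multiplicative interaction term come from the leading singular triple of the double-centred matrix of cell means, which the snippet below computes.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K = 6, 5, 3
cell_means = rng.normal(size=(I, J))
y = cell_means[:, :, None] + rng.normal(0, 0.5, (I, J, K))

z = y.mean(axis=2)                                    # cell means
z = z - z.mean(0, keepdims=True) - z.mean(1, keepdims=True) + z.mean()
u, s, vt = np.linalg.svd(z)                           # double-centred residuals

rho = s[0]                     # multiplicative 'size' of the interaction
gamma, delta = u[:, 0], vt[0]  # row and column scores (unit length)
print(rho, gamma.round(2), delta.round(2))
```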

9.
With reference to a specific dataset, we consider how to perform a flexible non‐parametric Bayesian analysis of an inhomogeneous point pattern modelled by a Markov point process, with a location‐dependent first‐order term and pairwise interaction only. A priori we assume that the first‐order term is a shot noise process, and that the interaction function for a pair of points depends only on the distance between the two points and is a piecewise linear function modelled by a marked Poisson process. Simulation of the resulting posterior distribution using a Metropolis–Hastings algorithm in the ‘conventional’ way involves evaluating ratios of unknown normalizing constants. We avoid this problem by applying a recently introduced auxiliary variable technique. In the present setting, the auxiliary variable used is an example of a partially ordered Markov point process model.

10.
A complication that may arise in some bioequivalence studies is that of ‘incomplete subject profiles’, caused by missing values that occur at one or more sampling points in the concentration–time curve for some study subjects. We assess the impact of incomplete subject profiles on the assessment of bioequivalence in a standard two‐period crossover design. The specific aim of the investigation is to assess the impact of four different patterns of missing concentration values on the coverage level of a 90% nominal two‐sided confidence interval for the ratio of geometric means and then to consider the impact on the probability of concluding bioequivalence. An overall conclusion from the results is that random missingness – that is, missingness for reasons unrelated to the bioavailability of the formulation involved or, more generally, to any aspect of the study design and conduct – has a damaging effect on the study conclusions only when the number of missing values is fairly large. On the other hand, a missingness pattern that potentially has a very damaging effect on the study conclusions is that which arises when values are missing ‘late’ in the concentration–time curve.
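For context, the calculation being stressed is the 90% two-sided confidence interval for the ratio of geometric means from log-transformed exposure measures in a 2×2 crossover. A simplified paired-data sketch (my numbers; a real crossover analysis must also account for period and sequence effects) looks like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 24
log_test = rng.normal(np.log(95), 0.25, n)             # hypothetical log AUC, test
log_ref = log_test + rng.normal(0.0, 0.20, n) - 0.02   # reference formulation

d = log_test - log_ref                 # within-subject log differences
se = d.std(ddof=1) / np.sqrt(n)
t90 = stats.t.ppf(0.95, n - 1)         # 90% two-sided interval
lo, hi = np.exp(d.mean() - t90 * se), np.exp(d.mean() + t90 * se)

print(f"GMR = {np.exp(d.mean()):.3f}, 90% CI = ({lo:.3f}, {hi:.3f})")
print("bioequivalent" if 0.80 < lo and hi < 1.25 else "not shown")
```

Dropping ‘late’ concentration values shrinks the AUC of the affected profile, which is why that missingness pattern biases the ratio rather than merely widening the interval.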

11.
The theory of the multiplicative definition of second-order interaction is considered. Necessary and sufficient conditions in terms of the correlations in the marginal two-dimensional distributions are found for the existence of a “perfect” set of probabilities. Some progress is reported on the proof of the uniqueness of the multiplicative set of probabilities for a given set of two-dimensional distributions. The multiplicative property of such sets is preserved under pooling of marginal sets only in trivial cases, and so the multiplicative definition cannot be said to be a straightforward generalization from the definition of no interaction in two-dimensional distributions.
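As a concrete illustration of the multiplicative definition (my sketch, not from the paper): a 2×2×2 table has no second-order interaction in the multiplicative sense when it matches all three two-way margins without a three-way term, and iterative proportional fitting recovers such a multiplicative set of probabilities from given (consistent) two-way margins when one exists.

```python
import numpy as np

def ipf_no_second_order(pab, pac, pbc, iters=500):
    """Fit a 2x2x2 probability table with the three given two-way
    margins and no second-order (three-way) multiplicative term."""
    p = np.full((2, 2, 2), 1 / 8)
    for _ in range(iters):
        p *= (pab / p.sum(axis=2))[:, :, None]
        p *= (pac / p.sum(axis=1))[:, None, :]
        p *= (pbc / p.sum(axis=0))[None, :, :]
    return p

# hypothetical mutually consistent two-way margins
pab = np.array([[0.30, 0.20], [0.25, 0.25]])
pac = np.array([[0.28, 0.22], [0.27, 0.23]])
pbc = np.array([[0.33, 0.22], [0.22, 0.23]])
p = ipf_no_second_order(pab, pac, pbc)
# cross-product ratio is constant across the third index under the fit
odds = p[0, 0] * p[1, 1] / (p[0, 1] * p[1, 0])
print(p.sum(), odds)   # total probability 1; two (near-)equal odds ratios
```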

12.
The generalized regression (GREG) predictor for the finite population total of a real variable is often employed when values of an auxiliary variable are available. Several variance estimators for it do well in large samples, though they bear no optimality properties. We find a variance estimator which, under a restrictive model, has an optimality property under ‘exact’ as well as ‘asymptotic’ analysis. But this involves model parameters. Under a further restriction on the model, two model-parameter-free variance estimators are derived sharing the same ‘asymptotic’ optimality. Numerical illustrations through simulation are presented to demonstrate marginal improvements in using them rather than their predecessors. Two of the latter, though not optimal, are simpler, intuitively appealing, compete well in large samples, are generally applicable and should continue to be used in practice.
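To fix ideas, a minimal sketch of the GREG predictor of a population total in its standard textbook form (not this paper's variance estimators): fit y on the auxiliary x from the sample, predict for the whole population, and add the π-weighted sample residuals.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 5000, 200
x = rng.gamma(4.0, 2.0, N)                   # auxiliary variable, known for all N
y = 3.0 + 1.5 * x + rng.normal(0, 2.0, N)    # study variable, seen only in sample

s = rng.choice(N, n, replace=False)          # SRSWOR sample
pi = np.full(n, n / N)                       # inclusion probabilities

# regression fit from the sample (with intercept)
X = np.column_stack([np.ones(n), x[s]])
beta = np.linalg.lstsq(X, y[s], rcond=None)[0]
y_hat_pop = beta[0] + beta[1] * x            # predictions for the population

t_greg = y_hat_pop.sum() + ((y[s] - X @ beta) / pi).sum()
print(t_greg, y.sum())                       # GREG predictor vs true total
```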

13.
A correlation curve measures the strength of the association between two variables locally at different values of the covariate. This paper studies how to estimate the correlation curve under the multiplicative distortion measurement errors setting. The unobservable variables are both distorted in a multiplicative fashion by an observed confounding variable. We obtain asymptotic normality results for the estimated correlation curve. We conduct Monte Carlo simulation experiments to examine the performance of the proposed estimator. The estimated correlation curve is applied to analyze a real dataset for an illustration.
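A rough sketch of the calibration step such multiplicative-distortion models typically rely on (assumptions mine: a distortion function with unit mean, estimated by kernel regression on the confounder U; the paper's estimator then builds the correlation curve from the calibrated variables):

```python
import numpy as np

def nw_smooth(u, v, grid_u, h=0.3):
    """Nadaraya-Watson kernel regression of v on u, evaluated at grid_u."""
    w = np.exp(-0.5 * ((grid_u[:, None] - u[None, :]) / h) ** 2)
    return (w * v).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
n = 1000
u = rng.uniform(0, 1, n)                     # observed confounder
x = rng.normal(2.0, 0.5, n)                  # latent variable of interest
psi = 0.7 + 0.6 * u                          # distortion with E[psi(U)] = 1
x_obs = psi * x                              # multiplicative distortion

psi_hat = nw_smooth(u, x_obs, u) / x_obs.mean()   # unit-mean normalisation
x_cal = x_obs / psi_hat                      # calibrated (de-distorted) values
print(np.corrcoef(x, x_cal)[0, 1])           # close to 1 after calibration
```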

14.
Mehrotra (1997) presented an ‘improved’ Brown and Forsythe (1974) statistic which is designed to provide a valid test of mean equality in independent groups designs when variances are heterogeneous. In particular, the usual Brown and Forsythe procedure was modified by using a Satterthwaite approximation for numerator degrees of freedom instead of the usual value of number of groups minus one. Mehrotra then, through Monte Carlo methods, demonstrated that the ‘improved’ method resulted in a robust test of significance in cases where the usual Brown and Forsythe method did not. Accordingly, this ‘improved’ procedure was recommended. We show that under conditions likely to be encountered in applied settings, that is, conditions involving heterogeneous variances as well as nonnormal data, the ‘improved’ Brown and Forsythe procedure results in depressed or inflated rates of Type I error in unbalanced designs. Previous findings indicate, however, that one can obtain a robust test by adopting a heteroscedastic statistic with the robust estimators, rather than the usual least squares estimators, and further improvement can be expected when critical significance values are obtained through bootstrapping methods.
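For reference, a sketch of the Brown–Forsythe statistic with Satterthwaite-type degrees of freedom on both numerator and denominator (the numerator df below follows my reading of Mehrotra's modification; treat that formula as an assumption to verify against the paper):

```python
import numpy as np
from scipy import stats

def bf_mehrotra(groups):
    """Brown-Forsythe F* with Satterthwaite denominator df and a
    Mehrotra-style Satterthwaite numerator df (formula assumed)."""
    n = np.array([len(g) for g in groups], float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    N = n.sum()
    grand = (n * m).sum() / N

    c = (1 - n / N) * v
    F = (n * (m - grand) ** 2).sum() / c.sum()

    df2 = c.sum() ** 2 / ((c ** 2) / (n - 1)).sum()
    df1 = c.sum() ** 2 / ((v ** 2).sum() + ((n / N * v).sum()) ** 2
                          - 2 * (n / N * v ** 2).sum())
    return F, df1, df2, stats.f.sf(F, df1, df2)

rng = np.random.default_rng(0)
groups = [rng.normal(0, s, k) for s, k in [(1, 10), (2, 25), (4, 40)]]
print(bf_mehrotra(groups))   # statistic, dfs, p-value under H0
```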

15.
We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system. The assignment of an energy function in the physical system determines its Gibbs distribution. Because of the Gibbs distribution, Markov random field (MRF) equivalence, this assignment also determines an MRF image model. The energy function is a more convenient and natural mechanism for embodying picture attributes than are the local characteristics of the MRF. For a range of degradation mechanisms, including blurring, non-linear deformations, and multiplicative or additive noise, the posterior distribution is an MRF with a structure akin to the image model. By the analogy, the posterior distribution defines another (imaginary) physical system. Gradual temperature reduction in the physical system isolates low-energy states (‘annealing’), or what is the same thing, the most probable states under the Gibbs distribution. The analogous operation under the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations. The result is a highly parallel ‘relaxation’ algorithm for MAP estimation. We establish convergence properties of the algorithm and we experiment with some simple pictures, for which good restorations are obtained at low signal-to-noise ratios.
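A toy version of the annealed Gibbs sampler described (my construction, with invented parameters beta and sigma2): restore a binary image under an Ising-type prior with additive Gaussian noise, lowering the temperature each sweep so the sampler concentrates on the approximate MAP configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.zeros((32, 32), int)
truth[8:24, 8:24] = 1                            # simple binary scene
obs = truth + rng.normal(0, 0.8, truth.shape)    # additive Gaussian noise

beta, sigma2 = 1.5, 0.8 ** 2                     # prior strength, noise variance
x = (obs > 0.5).astype(int)                      # crude initial restoration

for sweep in range(80):
    T = max(3.0 * 0.93 ** sweep, 0.05)           # annealing schedule
    for i in range(32):
        for j in range(32):
            nbrs = [x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= a < 32 and 0 <= b < 32]
            # energy of each candidate value: Ising prior + Gaussian likelihood
            e = [beta * sum(nb != v for nb in nbrs)
                 + (obs[i, j] - v) ** 2 / (2 * sigma2) for v in (0, 1)]
            p1 = 1.0 / (1.0 + np.exp((e[1] - e[0]) / T))   # Gibbs update at T
            x[i, j] = int(rng.random() < p1)

print("pixel agreement with truth:", (x == truth).mean())
```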

16.
Let Y be distributed symmetrically about Xβ. Natural generalizations of odd location statistics, say T(Y), and even location-free statistics, say W(Y), that were used by Hogg (1960, 1967) are introduced. We show that T(Y) is distributed symmetrically about β, and thus E[T(Y)] = β, and that each element of T(Y) is uncorrelated with each element of W(Y). Applications of this result are made to R-estimators, and the result is extended to a multivariate linear model situation.

17.
New aligned-rank test procedures for the composite null hypothesis of no interaction effects (without placing restrictions on the two main effects) against appropriate composite general alternatives are developed for the standard two-way layout with a single observation per cell. Relative power performances of the two new aligned-rank procedures and existing tests due to Tukey (1949) and to de Kroon & van der Laan (1981) are examined via Monte Carlo simulation. Extensive power studies conducted on the 5 × 6 and 5 × 9 two-way layouts with one observation per cell show superior performance of the new procedures for a variety of interaction effects. Simulated critical values for the new procedures are provided in settings where the number of levels for each of the factors is between 3 and 9, inclusive.
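To show the shared mechanics of aligned-rank interaction tests (an illustrative member of the family, not the authors' new procedures): align the one-observation-per-cell layout by removing row and column effects, rank the aligned values, and apply a Tukey-style one-degree-of-freedom statistic to the ranks, calibrated here by permutation.

```python
import numpy as np
from scipy.stats import rankdata

def aligned_rank_tukey(y):
    """Align a two-way layout (one obs per cell) by removing row/column
    means, rank the aligned values, then compute a Tukey-type
    one-degree-of-freedom interaction statistic on the ranks."""
    aligned = y - y.mean(1, keepdims=True) - y.mean(0, keepdims=True) + y.mean()
    r = rankdata(aligned).reshape(y.shape)
    a = r.mean(1, keepdims=True) - r.mean()    # row effects of the ranks
    b = r.mean(0, keepdims=True) - r.mean()    # column effects of the ranks
    resid = r - r.mean() - a - b
    return (resid * a * b).sum() ** 2 / ((a * b) ** 2).sum()

rng = np.random.default_rng(0)
y = rng.normal(size=(5, 6))                    # 5 x 6 layout, no interaction
obs = aligned_rank_tukey(y)
null = [aligned_rank_tukey(rng.permutation(y.reshape(-1)).reshape(y.shape))
        for _ in range(2000)]                  # permutation calibration
print(obs, np.mean(np.array(null) >= obs))     # Monte Carlo p-value
```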

18.
This article considers the problem of testing slopes in k straight lines with heterogeneous variances. The statistic Fβ is proposed and the null and non-null distributions of Fβ are derived under the normality assumption. The power function values are then approximated by Laguerre polynomial expansion for normal and non-normal universes. For the example given in Graybill (1976, p. 295), it is shown that the Satterthwaite approximation provides a close approximation to the null and non-null distributions in all the cases; it is also shown that the Fβ test is quite robust with respect to departure from normality in the case of mixtures of two normals.
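A sketch of the generic ingredient (a Welch-type combination, offered as an analogue rather than the paper's exact Fβ): estimate each line's slope and its variance separately, allowing a different error variance per line, then weight inversely by the estimated variances.

```python
import numpy as np
from scipy import stats

def slope_and_var(x, y):
    """OLS slope of one line and its estimated variance (own sigma^2)."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    s2 = (resid ** 2).sum() / (len(x) - 2)
    return b, s2 / ((x - x.mean()) ** 2).sum()

rng = np.random.default_rng(0)
xs = [(rng.uniform(0, 10, n), s) for n, s in [(15, 1.0), (25, 3.0), (40, 0.5)]]
data = [(x, 2.0 + 1.0 * x + rng.normal(0, s, len(x))) for x, s in xs]

b, v = np.array([slope_and_var(x, y) for x, y in data]).T
w = 1.0 / v
b_bar = (w * b).sum() / w.sum()                  # weighted common slope
chi2 = (w * (b - b_bar) ** 2).sum()              # Welch-type statistic
print(chi2, stats.chi2.sf(chi2, len(b) - 1))     # approximate p-value
```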

19.
The likelihood ratio (LR) measures the relative weight of forensic data regarding two hypotheses. Several levels of uncertainty arise if frequentist methods are chosen for its assessment: the assumed population model only approximates the true one, and its parameters are estimated through a limited database. Moreover, it may be wise to discard part of the data, especially that only indirectly related to the hypotheses. Different reductions define different LRs. Therefore, it is more sensible to talk about ‘a’ LR instead of ‘the’ LR, and the error involved in the estimation should be quantified. Two frequentist methods are proposed in the light of these points for the ‘rare type match problem’, that is, when a match between the perpetrator's and the suspect's DNA profile, never observed before in the database of reference, is to be evaluated.

20.
This paper presents a method of fitting factorial models to recidivism data consisting of the (possibly censored) time to ‘fail’ of individuals, in order to test for differences between groups. Here ‘failure’ means rearrest, reconviction or reincarceration, etc. A proportion P of the sample is assumed to be ‘susceptible’ to failure, i.e. to fail eventually, while the remaining 1-P are ‘immune’, and never fail. Thus failure may be described in two ways: by the probability P that an individual ever fails again (‘probability of recidivism’), and by the rate of failure Λ for the susceptibles. Related analyses have been proposed previously: this paper argues that a factorial approach, as opposed to regression approaches advocated previously, offers simplified analysis and interpretation of these kinds of data. The methods proposed, which are also applicable in medical statistics and reliability analyses, are demonstrated on data sets in which the factors are Parole Type (released to freedom or on parole), Age group (≤ 20 years, 20–40 years, > 40 years), and Marital Status. The outcome (failure) is a return to prison following first or second release.
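A minimal sketch of the split-population (‘cure’) likelihood behind this (my construction, with an exponential rate for the susceptibles and invented parameter names): failures contribute P·f(t), censored cases contribute 1 − P + P·S(t), and in a factorial analysis both P and the rate (written Λ above, lam in the code) carry the group structure.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, t, failed):
    """Split-population model: probability p of ever failing,
    exponential failure rate lam for the susceptibles."""
    p = 1.0 / (1.0 + np.exp(-params[0]))   # logit-parameterised P
    lam = np.exp(params[1])                # log-parameterised rate
    ll = np.where(failed,
                  np.log(p) + np.log(lam) - lam * t,       # P * f(t)
                  np.log(1 - p + p * np.exp(-lam * t)))    # 1-P + P*S(t)
    return -ll.sum()

rng = np.random.default_rng(0)
n, p_true, lam_true = 500, 0.4, 0.2
susceptible = rng.random(n) < p_true
t_fail = rng.exponential(1 / lam_true, n)
t_cens = rng.uniform(5, 15, n)                     # follow-up ends
t = np.where(susceptible, np.minimum(t_fail, t_cens), t_cens)
failed = susceptible & (t_fail <= t_cens)

fit = minimize(neg_log_lik, [0.0, -1.0], args=(t, failed))
print(1 / (1 + np.exp(-fit.x[0])), np.exp(fit.x[1]))   # ~ (0.4, 0.2)
```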
