Similar Literature
Found 20 similar articles (search time: 31 ms)
1.
This paper presents a method for estimating likelihood ratios for stochastic compartment models when only times of removals from a population are observed. The technique operates by embedding the models in a composite model parameterised by an integer k which identifies a switching time when dynamics change from one model to the other. Likelihood ratios can then be estimated from the posterior density of k using Markov chain methods. The techniques are illustrated by a simulation study involving an immigration-death model and validated using analytic results derived for this case. They are also applied to compare the fit of stochastic epidemic models to historical data on a smallpox epidemic. In addition to estimating likelihood ratios, the method can be used for direct estimation of likelihoods by selecting one of the models in the comparison to have a known likelihood for the observations. Some general properties of the likelihoods typically arising in this scenario, and their implications for inference, are illustrated and discussed.
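To make the observation scheme concrete, here is a minimal Python sketch (function names and parameter values are our own, not the paper's) that simulates an immigration-death model with the Gillespie algorithm and records only the removal times:

```python
import numpy as np

def immigration_death_removals(alpha, mu, t_max, seed=0):
    """Gillespie simulation of an immigration-death process with
    immigration rate alpha and per-capita death rate mu; only the
    removal (death) times are recorded, matching data in which just
    times of removals from the population are observed."""
    rng = np.random.default_rng(seed)
    t, n, removals = 0.0, 0, []
    while True:
        rate = alpha + mu * n
        t += rng.exponential(1.0 / rate)
        if t >= t_max:
            return removals
        if rng.random() < alpha / rate:
            n += 1      # immigration event (unobserved)
        else:
            n -= 1      # death event: removal time is observed
            removals.append(t)

print(len(immigration_death_removals(alpha=2.0, mu=0.5, t_max=50.0)))
```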

2.
We obtain adjustments to the profile likelihood function in Weibull regression models with and without censoring. Specifically, we consider two different modified profile likelihoods: (i) the one proposed by Cox and Reid [Cox, D.R. and Reid, N., 1987, Parameter orthogonality and approximate conditional inference. Journal of the Royal Statistical Society B, 49, 1–39.], and (ii) an approximation to the one proposed by Barndorff-Nielsen [Barndorff-Nielsen, O.E., 1983, On a formula for the distribution of the maximum likelihood estimator. Biometrika, 70, 343–365.], the approximation having been obtained using the results by Fraser and Reid [Fraser, D.A.S. and Reid, N., 1995, Ancillaries and third-order significance. Utilitas Mathematica, 47, 33–53.] and by Fraser et al. [Fraser, D.A.S., Reid, N. and Wu, J., 1999, A simple formula for tail probabilities for frequentist and Bayesian inference. Biometrika, 86, 655–661.]. We focus on point estimation and likelihood ratio tests on the shape parameter in the class of Weibull regression models. We derive some distributional properties of the different maximum likelihood estimators and likelihood ratio tests. The numerical evidence presented in the paper favors the approximation to Barndorff-Nielsen's adjustment.

3.
The importance of the normal distribution for fitting continuous data is well known. However, in many practical situations the data distribution departs from normality. For example, the sample skewness and the sample kurtosis are far away from 0 and 3, respectively, which are nice properties of normal distributions. So, it is important to have formal tests of normality against any alternative. D'Agostino et al. [A suggestion for using powerful and informative tests of normality, Am. Statist. 44 (1990), pp. 316–321] review four procedures Z²(g₁), Z²(g₂), D and K² for testing departure from normality. The first two of these procedures are tests of normality against departure due to skewness and kurtosis, respectively. The other two tests are omnibus tests. An alternative to the normal distribution is the class of skew-normal distributions (see [A. Azzalini, A class of distributions which includes the normal ones, Scand. J. Statist. 12 (1985), pp. 171–178]). In this paper, we obtain a score test (W) and a likelihood ratio test (LR) of goodness of fit of the normal regression model against the skew-normal family of regression models. It turns out that the score test is based on the sample skewness and has a very simple form. The performance of these six procedures, in terms of size and power, is compared using simulations. The level properties of the three statistics LR, W and Z²(g₁) are similar and close to the nominal level for moderate to large sample sizes. Their power properties are also similar for small departures from normality due to skewness (γ₁ ≤ 0.4). Of these, the score test statistic has a very simple form and is computationally much simpler than the other two statistics. The LR statistic, in general, has the highest power, although it is computationally more complex, as it requires estimates of the parameters under the normal model as well as under the skew-normal model. So, the score test may be used to test for normality against small departures from normality due to skewness. Otherwise, the likelihood ratio statistic LR should be used, as it detects general departures from normality (due to both skewness and kurtosis) with, in general, the largest power.
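As a rough illustration of how simple such a skewness-based score test can be, the following Python sketch computes a score-type statistic from OLS residuals using the usual √(n/6) normal approximation (a simplified form of our own; the paper derives the exact statistic):

```python
import numpy as np

def skewness_score_test(y, X):
    """Score-type test of normality against skewness: sample skewness
    of the OLS residuals, standardized so the result is approximately
    N(0, 1) under the null for large n."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    n = len(r)
    g1 = np.mean(r**3) / np.mean(r**2) ** 1.5   # sample skewness
    return g1 * np.sqrt(n / 6.0)                # reject for large |z|
```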

4.
Consider a nonparametric nonseparable regression model Y = φ(Z, U), where φ(Z, U) is strictly increasing in U and U ~ U[0, 1]. We suppose that there exists an instrument W that is independent of U. The observable random variables are Y, Z, and W, all one-dimensional. We construct test statistics for the hypothesis that Z is exogenous, that is, that U is independent of Z. The test statistics are based on the observation that Z is exogenous if and only if V = F_{Y|Z}(Y|Z) is independent of W, and hence they do not require estimation of the function φ. The asymptotic properties of the proposed tests are proved, and a bootstrap approximation of the critical values of the tests is shown to be consistent and to work for finite samples via simulations. An empirical example using the U.K. Family Expenditure Survey is also given. As a byproduct of our results we obtain the asymptotic properties of a kernel estimator of the distribution of V, which equals U when Z is exogenous. We show that this estimator converges to the uniform distribution at a faster rate than the parametric n^{-1/2} rate.
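The construction of V lends itself to a short sketch. The Python snippet below (the Gaussian kernel and bandwidth are illustrative choices, not the paper's) computes V̂_i = F̂_{Y|Z}(Y_i | Z_i); a test of exogeneity then checks independence of V̂ and W:

```python
import numpy as np

def cond_cdf(y, z, y0, z0, h):
    """Nadaraya-Watson estimator of F_{Y|Z}(y0 | z0) with a Gaussian kernel."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)
    return np.sum(w * (y <= y0)) / np.sum(w)

def v_hat(y, z, h=0.2):
    """V_i = F_{Y|Z}(Y_i | Z_i); under exogeneity, V is U[0, 1] and
    independent of the instrument W."""
    return np.array([cond_cdf(y, z, yi, zi, h) for yi, zi in zip(y, z)])
```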

5.
Confidence intervals for a single parameter are spanned by quantiles of a confidence distribution, and one‐sided p‐values are cumulative confidences. Confidence distributions are thus a unifying format for representing frequentist inference for a single parameter. The confidence distribution, which depends on data, is exact (unbiased) when its cumulative distribution function evaluated at the true parameter is uniformly distributed over the unit interval. A new version of the Neyman–Pearson lemma is given, showing that the confidence distribution based on the natural statistic in exponential models with continuous data is less dispersed than all other confidence distributions, regardless of how dispersion is measured. Approximations are necessary for discrete data, and also in many models with nuisance parameters. Approximate pivots might then be useful. A pivot based on a scalar statistic determines a likelihood in the parameter of interest along with a confidence distribution. This proper likelihood is reduced of all nuisance parameters, and is appropriate for meta‐analysis and updating of information. The reduced likelihood is generally different from the confidence density. Confidence distributions and reduced likelihoods are rooted in Fisher–Neyman statistics. This frequentist methodology has many of the Bayesian attractions, and the two approaches are briefly compared. Concepts, methods and techniques of this brand of Fisher–Neyman statistics are presented. Asymptotics and bootstrapping are used to find pivots and their distributions, and hence reduced likelihoods and confidence distributions. A simple form of inverting bootstrap distributions to approximate pivots of the abc type is proposed. Our material is illustrated in a number of examples and in an application to multiple capture data for bowhead whales.

6.
Statistics, 2012, 46(6), 1187–1209
According to the general law of likelihood, the strength of statistical evidence for a hypothesis as opposed to its alternative is the ratio of their likelihoods, each maximized over the parameter of interest. Consider the problem of assessing the weight of evidence for each of several hypotheses. Under a realistic model with a free parameter for each alternative hypothesis, this leads to weighing evidence without any shrinkage toward a presumption of the truth of each null hypothesis. That lack of shrinkage can lead to many false positives in settings with large numbers of hypotheses. A related problem is that point hypotheses cannot have more support than their alternatives. Both problems may be solved by fusing the realistic model with a model of a more restricted parameter space for use with the general law of likelihood. Applying the proposed framework of model fusion to data sets from genomics and education yields intuitively reasonable weights of evidence.

7.
This paper deals with √n-consistent estimation of the parameter μ in the RCAR(1) model defined by the difference equation X_j = (μ + U_j)X_{j−1} + e_j (j ∈ ℤ), where {e_j : j ∈ ℤ} and {U_j : j ∈ ℤ} are two independent sets of i.i.d. random variables with zero means, positive finite variances and E[(μ + U_1)²] < 1. A class of asymptotically normal estimators of μ indexed by a family of bounded measurable functions is introduced. Then an estimator is constructed which is asymptotically equivalent to the best estimator in that class. This estimator, asymptotically equivalent to the quasi-maximum likelihood estimator derived in Nicholls & Quinn (1982), is much simpler to calculate and is asymptotically normal without the additional moment conditions those authors impose.
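For concreteness, a small Python sketch that simulates an RCAR(1) series and computes the simple least-squares estimator of μ, one member of the class of asymptotically normal estimators discussed above (parameter values are arbitrary, chosen so that E[(μ + U_1)²] < 1):

```python
import numpy as np

def simulate_rcar1(mu, n, sigma_u=0.3, sigma_e=1.0, seed=0):
    """Simulate X_j = (mu + U_j) X_{j-1} + e_j with i.i.d. zero-mean
    normal U_j and e_j (here mu^2 + sigma_u^2 = 0.34 < 1)."""
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, sigma_u, n)
    e = rng.normal(0.0, sigma_e, n)
    x = np.zeros(n)
    for j in range(1, n):
        x[j] = (mu + u[j]) * x[j - 1] + e[j]
    return x

def ls_estimate_mu(x):
    """Least-squares estimator: sum x_j x_{j-1} / sum x_{j-1}^2."""
    return np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)

print(ls_estimate_mu(simulate_rcar1(mu=0.5, n=5000)))  # close to 0.5
```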

8.
A two-stage procedure is studied for estimating changes in the parameters of the multi-parameter exponential family, given a sample X_1, …, X_n. The first step is a likelihood ratio test of the hypothesis H_0 of no change. Upon rejection of this hypothesis, the change point index and the pre- and post-change parameters are estimated by maximum likelihood. The asymptotic (n → ∞) distribution of the log-likelihood ratio statistic is obtained under both H_0 and local alternatives. The maximum likelihood estimators of the pre- and post-change parameters are shown to be asymptotically jointly normal. The distribution of the change point estimate is obtained under local alternatives. Performance of the procedure for moderate samples is studied by Monte Carlo methods.
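A toy Python version of the procedure's first stage, for a normal mean change with known unit variance (a one-parameter instance of the exponential-family setting; critical values and the general multi-parameter case are in the paper):

```python
import numpy as np

def changepoint_lr(x):
    """2 * log-likelihood ratio for a change in a normal mean with unit
    variance, profiled over candidate change points k; returns the
    statistic and the maximum likelihood change point estimate."""
    x = np.asarray(x)
    n, m = len(x), x.mean()
    best, k_hat = -np.inf, None
    for k in range(1, n):
        m1, m2 = x[:k].mean(), x[k:].mean()
        lr = k * (m1 - m) ** 2 + (n - k) * (m2 - m) ** 2
        if lr > best:
            best, k_hat = lr, k
    return best, k_hat   # reject H_0 of no change for large values
```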

9.
Marginalised models, also known as marginally specified models, have recently become a popular tool for the analysis of discrete longitudinal data. Although powerful, these models involve complex constraint equations and model-fitting algorithms, and there is a lack of publicly available software to fit them. In this paper, we propose a three-level marginalised model for the analysis of multivariate longitudinal binary outcomes. The implicit function theorem is used to solve the marginal constraint equations approximately in explicit form, and the probit link enables direct solutions of the convolution equations. Parameters are estimated by maximum likelihood via a Fisher scoring algorithm. A simulation study is conducted to examine the finite-sample properties of the estimator. We illustrate the model with an application to the data set from the Iowa Youth and Families Project. The R package pnmtrem is prepared to fit the model.

10.
We provide general conditions that ensure valid Laplace approximations to the marginal likelihoods under model misspecification, and derive Bayesian information criteria including all terms of order O_p(1). Under the conditions in Theorem 1 of Lv and Liu [J. R. Statist. Soc. B, 76, (2014), 141–167] and a continuity condition for the prior densities, asymptotic expansions with error terms of order o_p(1) are derived for the log-marginal likelihoods of possibly misspecified generalized linear models. We present some numerical examples to illustrate the finite-sample performance of the proposed information criteria in misspecified models.

11.
Let U, V and W be independent random variables, with U and V having gamma distributions with respective shape parameters a and b, and W having a non-central gamma distribution with shape and non-centrality parameters c and δ, respectively. Define X = U/(U + W) and Y = V/(V + W). Clearly, X and Y are correlated, each having a non-central beta type 1 distribution: X ~ NCB1(a, c; δ) and Y ~ NCB1(b, c; δ). In this article we derive the joint probability density function of X and Y and study its properties.
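The construction is easy to simulate. The sketch below draws the correlated pair (X, Y) in Python, generating the non-central gamma W as a Poisson mixture of gamma shapes (one common convention; some authors mix with Poisson(δ/2), so the parameterization here is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, delta, n = 2.0, 3.0, 1.5, 2.0, 100_000

u = rng.gamma(a, size=n)
v = rng.gamma(b, size=n)
k = rng.poisson(delta, size=n)     # Poisson mixing of the shape parameter
w = rng.gamma(c + k)               # non-central gamma draws

x = u / (u + w)                    # X ~ NCB1(a, c; delta)
y = v / (v + w)                    # Y ~ NCB1(b, c; delta)
print(np.corrcoef(x, y)[0, 1])     # shared W induces positive correlation
```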

12.
We study the r-content Δ of the r-simplex generated by r + 1 independent random points in ℝⁿ. Each random point Z_j is isotropic and distributed according to λ‖Z_j‖² ~ beta-type-2(n/2, ν), with λ, ν > 0. We provide an asymptotic normality result which is analogous to the conjecture made by Miles (1971). A method is introduced to work out the exact density of W = (rλ)^r (r!Δ)² / (r + 1)^{r+1} and hence that of Δ. The distribution of W is also related to some hypothesis-testing problems in multivariate analysis. Furthermore, by using this method, the distribution of W or Δ can easily be simulated.
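Since the distribution of Δ can easily be simulated, here is a Python sketch (identifying beta-type-2 with scipy's beta-prime distribution is our assumption; the content is computed from the Gram determinant of the edge vectors):

```python
import numpy as np
from math import factorial
from scipy import stats

def sample_content(r, n, lam, v, rng):
    """One draw of the r-content of the simplex spanned by r + 1
    isotropic points Z_j in R^n with lam * ||Z_j||^2 ~ beta-prime(n/2, v)."""
    d = rng.normal(size=(r + 1, n))
    d /= np.linalg.norm(d, axis=1, keepdims=True)       # uniform directions
    b = stats.betaprime.rvs(n / 2, v, size=r + 1, random_state=rng)
    z = d * np.sqrt(b / lam)[:, None]
    e = z[1:] - z[0]                                    # r edge vectors
    return np.sqrt(np.linalg.det(e @ e.T)) / factorial(r)

rng = np.random.default_rng(3)
deltas = [sample_content(r=2, n=5, lam=1.0, v=3.0, rng=rng) for _ in range(10_000)]
```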

13.
14.
In this paper likelihood is characterized as an index which measures how well a model fits a sample. Some properties required of an index of fit are introduced and discussed, stressing how they capture aspects intrinsic to the idea of fit. Finally we prove that, if an index of fit is maximal when the model reaches the distribution of the sample, then such an index is an increasing continuous transform of …, where the p_i's are the theoretical relative frequencies provided by the model and the q_i's are the actual relative frequencies of the sample.

15.
Assume that X_1, X_2, …, X_n is a sequence of i.i.d. random variables with an α-stable distribution (α ∈ (0, 2], the stable exponent, is the unknown parameter). We construct minimum distance estimators for α by minimizing the Kolmogorov distance or the Cramér–von Mises distance between the empirical distribution function G_n and a class of distributions defined via the sum-preserving property of stable random variables. The minimum distance estimators can also be obtained by minimizing a U-statistic estimate of an empirical distribution function involving the stable exponent. They share the same invariance property with the maximum likelihood estimates. In this article, we prove the strong consistency and asymptotic normality of the minimum distance estimators. A simulation study shows that the new estimators are competitive with existing ones and perform nearly as well as the maximum likelihood estimator.
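A simplified Python sketch of the sum-preserving idea, pairing disjoint observations instead of using the full U-statistic, and assuming a symmetric stable law so that (X_1 + X_2)/2^{1/α} has the same distribution as X_1:

```python
import numpy as np
from scipy import stats

def ks_alpha(x, alpha):
    """Kolmogorov distance between the empirical law of X and that of
    (X_{2i-1} + X_{2i}) / 2^(1/alpha); smallest near the true alpha."""
    pairs = (x[0::2] + x[1::2]) / 2 ** (1 / alpha)
    return stats.ks_2samp(x, pairs).statistic

rng = np.random.default_rng(2)
x = stats.levy_stable.rvs(alpha=1.5, beta=0.0, size=4000, random_state=rng)
grid = np.linspace(1.1, 1.9, 81)
alpha_hat = grid[np.argmin([ks_alpha(x, a) for a in grid])]
print(alpha_hat)   # close to 1.5
```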

16.
In the formula of the likelihood ratio test for fourfold tables with matched pairs of binary data, only the two cells b and c, which represent discordant pairs, are considered; the cells a and d, which represent concordant observations, are not included. To develop a test that uses all four cells and the mixture distribution of likelihood ratio chi-squares, a formula based on the entire sample is proposed. The revised formula is the same as the unrevised one when a + d is zero. The revised test is more valid than the revised McNemar's test in most cases.
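For reference, the classical statistic that uses only b and c can be written in a few lines of Python (this is the unrevised form; the paper's revision incorporates a and d and a mixture of chi-square reference distributions):

```python
import numpy as np

def lr_mcnemar(b, c):
    """Likelihood-ratio statistic for matched pairs based only on the
    discordant counts b and c; approximately chi-square(1) under H_0."""
    def term(k, n):
        return k * np.log(2 * k / n) if k > 0 else 0.0
    n = b + c
    return 2 * (term(b, n) + term(c, n))

print(lr_mcnemar(15, 5))
```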

17.
This paper investigates a regression model for orthogonal matrices introduced by Prentice (1989). It focuses on the special case of 3 × 3 rotation matrices. The model under study expresses the dependent rotation matrix V as A_1 U A_2^t perturbed by experimental errors, where A_1 and A_2 are unknown 3 × 3 rotation matrices and U is an explanatory 3 × 3 rotation matrix. Several specifications for the errors in this regression model are proposed. The asymptotic distributions, as the sample size n becomes large or as the experimental errors become small, of the least squares estimators of A_1 and A_2 are derived. A new algorithm for calculating the least squares estimates of A_1 and A_2 is presented. The independence model is not a submodel of Prentice's regression model, so independence between the U and V samples cannot be tested when fitting Prentice's model. To overcome this difficulty, permutation tests of independence are investigated. Examples dealing with postural variations of subjects performing a drilling task and with the calibration of a camera system for motion analysis using a magnetic tracking device illustrate the methodology of this paper.
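A generic alternating least-squares sketch in Python, built from orthogonal Procrustes steps (our own construction; the paper's new algorithm may differ in detail):

```python
import numpy as np

def nearest_rotation(m):
    """Rotation R maximizing tr(R^T m): SVD with a determinant correction."""
    p, _, qt = np.linalg.svd(m)
    d = np.sign(np.linalg.det(p @ qt))
    return p @ np.diag([1.0, 1.0, d]) @ qt

def fit_rotations(U, V, n_iter=50):
    """Alternating minimization of sum_i ||V_i - A1 U_i A2^T||^2 over
    rotations A1, A2, given lists of 3x3 rotation matrices U and V."""
    a2 = np.eye(3)
    for _ in range(n_iter):
        a1 = nearest_rotation(sum(v @ a2 @ u.T for u, v in zip(U, V)))
        a2 = nearest_rotation(sum(v.T @ a1 @ u for u, v in zip(U, V)))
    return a1, a2
```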

18.
This article presents a constrained maximization of the Shapiro–Wilk W statistic for estimating parameters of the Johnson S_B distribution. The gradient of the W statistic with respect to the minimum and range parameters is used within a quasi-Newton framework to achieve a fit for all four parameters. The method is evaluated with measures of bias and precision using pseudo-random samples from three different S_B populations. The population means were estimated with an average relative bias of less than 0.1% and the population standard deviations with less than 4.0% relative bias. The methodology appears promising as a tool for fitting this sometimes difficult distribution.
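A derivative-free Python variant of the same idea is sketched below: maximize the Shapiro–Wilk W of the logit-transformed data over the minimum (xi) and range (lam) parameters, then read off gamma and delta from the transformed sample (Nelder–Mead replaces the paper's gradient-based quasi-Newton step):

```python
import numpy as np
from scipy import optimize, stats

def fit_johnson_sb(x):
    """Fit the Johnson S_B minimum xi and range lam by maximizing the
    Shapiro-Wilk W of y = log((x - xi) / (xi + lam - x)); gamma and
    delta then follow from the mean and sd of the transformed sample."""
    def neg_w(p):
        xi, lam = p
        if xi >= x.min() or xi + lam <= x.max():
            return 1.0                 # infeasible: need xi < x < xi + lam
        y = np.log((x - xi) / (xi + lam - x))
        return -stats.shapiro(y).statistic
    span = x.max() - x.min()
    res = optimize.minimize(neg_w, [x.min() - 0.1 * span, 1.2 * span],
                            method="Nelder-Mead")
    xi, lam = res.x
    y = np.log((x - xi) / (xi + lam - x))
    delta = 1.0 / y.std(ddof=1)
    gamma = -y.mean() * delta
    return xi, lam, gamma, delta
```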

19.
20.
Consider an inhomogeneous Poisson process X on [0, T] whose unknown intensity function “switches” from a lower function g* to an upper function h* at some unknown point θ* that has to be identified. We consider two known continuous functions g and h such that g*(t) ≤ g(t) < h(t) ≤ h*(t) for 0 ≤ t ≤ T. We describe the large-sample behavior of the generalized likelihood ratio and Wald tests constructed on the basis of a misspecified model. The power functions are studied under local alternatives and compared numerically with the help of simulations. We also show the following robustness result: the Type I error rate is preserved even though a misspecified model is used to construct the tests.
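A sketch of the generalized likelihood ratio construction in Python, with g and h known as above (the trapezoidal integration and the grid over candidate switch points are implementation choices of ours):

```python
import numpy as np

def loglik(times, lam, t_max, grid=2048):
    """Inhomogeneous Poisson log-likelihood:
    sum_i log lam(t_i) - integral_0^T lam(t) dt (trapezoidal integral)."""
    s = np.linspace(0.0, t_max, grid)
    return np.sum(np.log(lam(np.asarray(times)))) - np.trapz(lam(s), s)

def glr_statistic(times, g, h, t_max, n_grid=200):
    """Generalized likelihood ratio for a switch from intensity g to h
    at an unknown time theta, profiled over a grid of switch points."""
    thetas = np.linspace(0.0, t_max, n_grid)
    def lam(theta):
        return lambda t: np.where(t < theta, g(t), h(t))
    l1 = max(loglik(times, lam(th), t_max) for th in thetas)
    return l1 - loglik(times, g, t_max)   # null model: intensity g throughout
```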
