Similar Articles (20 results)
1.
In observational studies of the interaction between exposures on a dichotomous outcome, one parameter of a regression model is usually used to describe the interaction, leading to a single measure of the interaction. In this article we use the conditional risk of the outcome given exposures and covariates to describe the interaction and obtain five different measures of it: the difference between the marginal risk differences, the ratio of the marginal risk ratios, the ratio of the marginal odds ratios, the ratio of the conditional risk ratios, and the ratio of the conditional odds ratios. These measures reflect different aspects of the interaction. Using only one regression model for the conditional risk, we obtain maximum-likelihood (ML) based point and interval estimates of these measures, which are asymptotically efficient by the nature of ML. The ML estimates of the model parameters yield the ML estimates of the measures, and the approximate normal distribution of the parameter estimates yields approximate (non-normal) distributions of the estimated measures and hence their confidence intervals. The method is easily implemented and is illustrated with a medical example.
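
A minimal sketch of how several such measures can be read off a single fitted model: a logistic regression for the conditional risk with an exposure-by-exposure interaction term, with marginal risks obtained by standardizing predictions over the observed covariates. The data, variable names, and the logistic link are illustrative assumptions, not the authors' exact model, and no interval estimates are computed here.

```python
# Sketch: interaction measures from one model for the conditional risk.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
a = rng.integers(0, 2, n)            # exposure A (hypothetical)
b = rng.integers(0, 2, n)            # exposure B (hypothetical)
z = rng.normal(size=n)               # covariate
eta = -1.0 + 0.6 * a + 0.4 * b + 0.5 * a * b + 0.3 * z
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-eta))).astype(float)

X = np.column_stack([np.ones(n), a, b, a * b, z])
fit = sm.Logit(y, X).fit(disp=0)

def marginal_risk(aa, bb):
    # Standardize the predicted risk over the observed covariate distribution.
    Xab = np.column_stack([np.ones(n), np.full(n, aa), np.full(n, bb),
                           np.full(n, aa * bb), z])
    return fit.predict(Xab).mean()

p00, p10 = marginal_risk(0, 0), marginal_risk(1, 0)
p01, p11 = marginal_risk(0, 1), marginal_risk(1, 1)

def odds(p):
    return p / (1 - p)

print("difference of the marginal risk differences:", (p11 - p01) - (p10 - p00))
print("ratio of the marginal risk ratios:          ", (p11 / p01) / (p10 / p00))
print("ratio of the marginal odds ratios:          ", (odds(p11) / odds(p01)) / (odds(p10) / odds(p00)))
print("ratio of the conditional odds ratios:       ", np.exp(fit.params[3]))
```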

2.
When estimating a treatment effect on a count outcome in a given population, different studies use different models, resulting in non-comparable measures of the treatment effect. Here we show that the marginal rate differences in these studies are comparable measures of the treatment effect. We estimate the marginal rate differences with log-linear models and show that their finite-sample maximum-likelihood estimates are unbiased and highly robust to the effects of dispersing covariates on the outcome. Approximate finite-sample distributions of these estimates are obtained from the asymptotic normal distribution of the estimated log-linear model parameters. The method is easily applied in practice.
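
A brief sketch, under assumed data and variable names, of one way a marginal rate difference can be computed from a log-linear (Poisson) model: standardize the predicted rates over the covariate distribution under "everyone treated" and "everyone untreated".

```python
# Sketch: a marginal rate difference from a log-linear model by standardization.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 4000
t = rng.integers(0, 2, n)                      # treatment indicator (hypothetical)
z = rng.normal(size=n)                         # covariate
mu = np.exp(0.2 + 0.5 * t + 0.3 * z)           # true rate used for simulation
y = rng.poisson(mu)

X = np.column_stack([np.ones(n), t, z])
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

X1 = np.column_stack([np.ones(n), np.ones(n), z])    # everyone treated
X0 = np.column_stack([np.ones(n), np.zeros(n), z])   # everyone untreated
marginal_rate_diff = fit.predict(X1).mean() - fit.predict(X0).mean()
print("marginal rate difference:", marginal_rate_diff)
```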

3.
We consider the use of Monte Carlo methods to obtain maximum likelihood estimates for random effects models and distinguish between the pointwise and functional approaches. We explore the relationship between the two approaches and compare them with the EM algorithm. The functional approach is more ambitious, but its approximation is local in nature, which we demonstrate graphically using two simple examples. A remedy is to obtain successively better approximations of the relative likelihood function near the true maximum likelihood estimate. To save computing time, we use only one Newton iteration to approximate the maximiser of each Monte Carlo likelihood and show that this is equivalent to the pointwise approach. The procedure is applied to fit a latent process model to a set of polio incidence data. The paper ends with a comparison between the marginal likelihood and the recently proposed hierarchical likelihood, which avoids integration altogether.
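
A toy sketch of the pointwise Monte Carlo likelihood idea for a random effects model: the intractable integral over the random effect is replaced by an average of the conditional likelihood over simulated draws. The Poisson random-intercept model, the data, and the grid of parameter values are assumptions for illustration; no Newton step is taken here.

```python
# Sketch: pointwise Monte Carlo approximation of a marginal log-likelihood.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)
y = rng.poisson(3.0, size=20)                 # one cluster of counts (hypothetical)
draws = rng.standard_normal(5000)             # common random numbers, reused for every
                                              # evaluation so the surface is smooth

def mc_loglik(beta, sigma):
    u = sigma * draws                         # draws of the normal random intercept
    lam = np.exp(beta + u)[:, None]
    cond = poisson.pmf(y[None, :], lam).prod(axis=1)   # conditional likelihood per draw
    return np.log(cond.mean())                # average over draws, then log

# Evaluate the approximate log-likelihood pointwise on a small grid.
for beta in (0.9, 1.0, 1.1, 1.2):
    print(beta, round(mc_loglik(beta, sigma=0.3), 3))
```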

4.
We study a factor analysis model with two normally distributed observed variables and one factor. When the errors have equal variance, the maximum likelihood estimate of the factor loading is given in closed form. Exact and approximate distributions of the maximum likelihood estimate are considered. The exact distribution function has a complex form that involves the incomplete Beta function. Approximations to the distribution function are given for large sample sizes and for small error variances, and the accuracy of the approximations is discussed.
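
A small numerical sketch of the model in question: two observed variables, one factor, equal error variances, so the implied covariance matrix is [[l^2 + v, l^2], [l^2, l^2 + v]]. The closed-form estimate mentioned in the abstract is not reproduced; instead the likelihood is maximized numerically on simulated data.

```python
# Sketch: numerical MLE for a one-factor model with equal error variances.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, loading, err_sd = 200, 0.8, 0.6
f = rng.standard_normal(n)                                   # latent factor
X = loading * f[:, None] + err_sd * rng.standard_normal((n, 2))
S = np.cov(X, rowvar=False, bias=True)                       # sample covariance (divisor n)

def neg_loglik(params):
    l, log_v = params
    v = np.exp(log_v)                                        # keep the error variance positive
    Sigma = np.array([[l**2 + v, l**2], [l**2, l**2 + v]])
    _, logdet = np.linalg.slogdet(Sigma)
    # Gaussian log-likelihood up to additive constants, written via S.
    return 0.5 * n * (logdet + np.trace(np.linalg.solve(Sigma, S)))

res = minimize(neg_loglik, x0=[0.5, np.log(0.5)], method="Nelder-Mead")
print("MLE of the factor loading:", res.x[0], " error variance:", np.exp(res.x[1]))
```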

5.
Elimination of a nuisance variable is often non-trivial and may involve the evaluation of an intractable integral. One approach to evaluating such integrals is the Laplace approximation. This paper concentrates on a new approximation, called the partial Laplace approximation, which is useful when the integrand can be partitioned into two multiplicative disjoint functions. The technique is applied to the linear mixed model and shows that the resulting approximate likelihood can be partitioned into a conditional likelihood for the location parameters and a marginal likelihood for the scale parameters that is equivalent to restricted maximum likelihood (REML). Similarly, the partial Laplace approximation is applied to the t-distribution to obtain an approximate REML for its scale parameter. A simulation study reveals that, compared with maximum likelihood, the scale parameter estimates of the t-distribution obtained from the approximate REML show reduced bias.
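
For orientation, here is the ordinary (full) Laplace approximation that the partial version builds on, checked against numerical integration. The integrand is a toy log-concave function, not the linear mixed model of the paper.

```python
# Sketch: Laplace approximation  ∫ exp(h(u)) du ≈ exp(h(u0)) * sqrt(2π / (-h''(u0))),
# where u0 maximizes h.  Toy integrand only.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

def h(u):
    # A log-concave toy log-integrand.
    return -0.5 * u**2 + 3.0 * u - np.exp(u)

u0 = minimize_scalar(lambda u: -h(u)).x                   # mode of h
eps = 1e-5
h2 = (h(u0 + eps) - 2 * h(u0) + h(u0 - eps)) / eps**2     # numerical h''(u0)
laplace = np.exp(h(u0)) * np.sqrt(2 * np.pi / (-h2))

exact, _ = quad(lambda u: np.exp(h(u)), -np.inf, np.inf)
print("Laplace approximation:", laplace, " numerical integral:", exact)
```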

6.
The problems of existence and uniqueness of maximum likelihood estimates for logistic regression were completely solved by Silvapulle in 1981 and by Albert and Anderson in 1984. In this paper, we extend these well-known results to weighted logistic regression. We analytically prove the equivalence between the overlap condition used by Albert and Anderson and that used by Silvapulle. We show that the maximum likelihood estimate of weighted logistic regression does not exist if there is a complete separation or a quasi-complete separation of the data points, and exists and is unique if the data points overlap. Our proofs and results for weighted logistic regression also apply to unweighted logistic regression.
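
A small numerical illustration of why separation destroys the MLE: under complete separation the (weighted) log-likelihood keeps increasing as the coefficient is scaled up, so no finite maximizer exists. The data, weights, and separating direction below are hypothetical.

```python
# Sketch: the weighted logistic log-likelihood under complete separation.
import numpy as np

x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])            # y = 1 exactly when x > 0: complete separation
w = np.ones_like(y, dtype=float)            # case weights (all equal here)

def loglik(beta):
    eta = beta * x
    return np.sum(w * (y * eta - np.log1p(np.exp(eta))))

for beta in (1.0, 5.0, 10.0, 50.0):
    print(f"beta = {beta:5.1f}  weighted log-likelihood = {loglik(beta):.6f}")
# The log-likelihood approaches 0 from below as beta grows but never attains it.
```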

7.
In this article, we estimate confidence regions for common measures of (baseline, treatment effect) in observational studies, where the baseline measure is the baseline risk or baseline odds, the treatment-effect measure is the odds ratio, risk difference, risk ratio or attributable fraction, and confounding is controlled in the estimation of both the baseline and the treatment effect. We use only one logistic model to generate approximate distributions of the maximum-likelihood estimates of these measures and thus obtain maximum-likelihood-based confidence regions for these measures. The method is presented via a real medical example.
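
A rough sketch of the ingredients such a construction needs from a single logistic fit: joint point estimates of a baseline measure and an effect measure, plus a delta-method covariance matrix from which an approximate joint confidence region (an ellipse) can be drawn. Here the baseline odds is taken at covariate value zero, a simplification; data and variable names are hypothetical.

```python
# Sketch: (baseline odds, odds ratio) and their delta-method covariance
# from one logistic model adjusting for a confounder.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 3000
z = rng.normal(size=n)                                           # confounder
t = (rng.uniform(size=n) < 1 / (1 + np.exp(-z))).astype(float)   # treatment depends on z
p = 1 / (1 + np.exp(-(-1.0 + 0.8 * t + 0.5 * z)))
y = (rng.uniform(size=n) < p).astype(float)

X = np.column_stack([np.ones(n), t, z])
fit = sm.Logit(y, X).fit(disp=0)
b0, b1 = fit.params[0], fit.params[1]
V = np.asarray(fit.cov_params())[:2, :2]             # covariance of (b0, b1)

baseline_odds, odds_ratio = np.exp(b0), np.exp(b1)
# Delta method: Jacobian of (exp(b0), exp(b1)) with respect to (b0, b1).
J = np.diag([baseline_odds, odds_ratio])
cov_measures = J @ V @ J.T                           # basis for a joint confidence ellipse
print("estimates (baseline odds, odds ratio):", baseline_odds, odds_ratio)
print("approximate covariance:\n", cov_measures)
```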

8.
This paper deals with the regression analysis of failure time data when there are censoring and multiple types of failure. We propose a semiparametric generalization of the parametric mixture model of Larson & Dinse (1985), in which the marginal probabilities of the various failure types are logistic functions of the covariates. Given the type of failure, the conditional distribution of the time to failure follows a proportional hazards model. A marginal likelihood approach to estimating the regression parameters is suggested, whereby the baseline hazard functions are eliminated as nuisance parameters. The Monte Carlo method is used to approximate the marginal likelihood; the resulting function is maximized easily using existing software. Some guidelines for choosing the number of Monte Carlo replications are given. Fixing the regression parameters at their estimated values, the full likelihood is maximized via an EM algorithm to estimate the baseline survivor functions. The methods are illustrated using the Stanford heart transplant data.
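
To show the mixture structure only, here is a fully parametric simplification of a Larson & Dinse-type model with two failure types: the type probability is logistic in a covariate and, given the type, the failure time is exponential. The semiparametric proportional hazards component and the Monte Carlo marginal likelihood of the paper are not reproduced; data, rates, and censoring are hypothetical.

```python
# Sketch: parametric two-type mixture model with censoring, fitted by direct ML.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n = 500
z = rng.normal(size=n)
p1 = 1 / (1 + np.exp(-(0.5 + 1.0 * z)))              # probability of failure type 1
cause = (rng.uniform(size=n) < p1).astype(int)       # 1 = type 1, 0 = type 2
rate = np.where(cause == 1, 1.0, 0.4)
time = rng.exponential(1 / rate)
cens = rng.exponential(2.0, size=n)
obs, delta = np.minimum(time, cens), (time <= cens)  # delta = True if failure observed

def neg_loglik(params):
    a, b, log_l1, log_l2 = params
    p = 1 / (1 + np.exp(-(a + b * z)))
    l1, l2 = np.exp(log_l1), np.exp(log_l2)
    S1, S2 = np.exp(-l1 * obs), np.exp(-l2 * obs)
    # Observed type-1 failure, observed type-2 failure, or censored (type unknown).
    lik = np.where(delta & (cause == 1), p * l1 * S1,
          np.where(delta & (cause == 0), (1 - p) * l2 * S2,
                   p * S1 + (1 - p) * S2))
    return -np.sum(np.log(lik))

res = minimize(neg_loglik, x0=[0.0, 0.0, 0.0, 0.0], method="Nelder-Mead")
print("logistic mixture parameters (a, b):", res.x[:2])
print("exponential rates (type 1, type 2):", np.exp(res.x[2:]))
```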

9.
For linear regression models with non-normally distributed errors, the least squares estimate (LSE) loses efficiency compared to the maximum likelihood estimate (MLE). In this article, we propose a kernel density-based regression estimate (KDRE) that is adaptive to the unknown error distribution. The key idea is to approximate the likelihood function by a nonparametric kernel density estimate of the error density based on some initial parameter estimate. The proposed estimate is shown to be asymptotically as efficient as the oracle MLE that assumes the error density is known. In addition, we propose an EM-type algorithm to maximize the estimated likelihood function and show that the KDRE can be viewed as an iterated weighted least squares estimate, which provides insight into its adaptiveness to the unknown error distribution. Our Monte Carlo simulation studies show that, while comparable to the traditional LSE for normal errors, the proposed estimation procedure can have substantial efficiency gains for non-normal errors, even for small sample sizes.
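
A bare-bones version of the key idea, under assumed data and a generic optimizer rather than the paper's EM-type algorithm: start from OLS, estimate the error density from the residuals with a kernel density estimate, then maximize the resulting estimated likelihood over the regression coefficients.

```python
# Sketch: kernel-density-based regression estimate (KDRE) via direct optimization.
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import minimize

rng = np.random.default_rng(9)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.laplace(scale=1.0, size=n)    # non-normal errors

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)      # initial estimate
kde = gaussian_kde(y - X @ beta_ols)                  # estimated error density

def neg_estimated_loglik(beta):
    return -np.sum(np.log(kde(y - X @ beta) + 1e-300))

kdre = minimize(neg_estimated_loglik, beta_ols, method="Nelder-Mead").x
print("OLS :", beta_ols)
print("KDRE:", kdre)
```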

10.
This paper considers the statistical analysis of a competing risks model under Type-I progressive hybrid censoring from a Weibull distribution. We derive the maximum likelihood estimates and the approximate maximum likelihood estimates of the unknown parameters, and use the bootstrap method to construct confidence intervals. Based on a noninformative prior, a sampling algorithm using the acceptance-rejection method is presented to obtain the Bayes estimates, and the Monte Carlo method is employed to construct the highest posterior density credible intervals. Simulation results are provided to show the effectiveness of all the methods discussed, and one data set is analysed.
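
As a much-simplified stand-in for the setting above, the sketch below computes the Weibull MLE under plain Type-I censoring (single cause, fixed censoring time), not the Type-I progressive hybrid scheme with competing risks of the paper. Data and the censoring time are hypothetical.

```python
# Sketch: Weibull MLE under simple Type-I censoring.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(10)
n, censor_time = 200, 2.5
t_true = 2.0 * rng.weibull(1.5, size=n)              # Weibull(shape=1.5, scale=2.0)
obs = np.minimum(t_true, censor_time)
delta = (t_true <= censor_time).astype(float)        # 1 = observed failure, 0 = censored

def neg_loglik(params):
    shape, scale = np.exp(params)                    # optimize on the log scale
    logf = weibull_min.logpdf(obs, c=shape, scale=scale)
    logS = weibull_min.logsf(obs, c=shape, scale=scale)
    return -np.sum(delta * logf + (1 - delta) * logS)

mle = np.exp(minimize(neg_loglik, np.log([1.0, 1.0]), method="Nelder-Mead").x)
print("MLE (shape, scale):", mle)
```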

11.
We consider the problem of making statistical inference on the unknown parameters of a lognormal distribution under progressive censoring. The maximum likelihood estimates (MLEs) are obtained using the expectation-maximization algorithm, and the observed and expected Fisher information matrices are provided. Approximate MLEs of the unknown parameters are also obtained. Bayes and generalized estimates are derived under a squared error loss function and computed using Lindley's method as well as importance sampling. Highest posterior density intervals and asymptotic interval estimates are constructed for the unknown parameters. A simulation study is conducted to compare the proposed estimates, and a data set is analysed for illustrative purposes. Finally, optimal progressive censoring plans are discussed under different optimality criteria and the results are presented.
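
For the likelihood ingredient only: under progressive Type-II censoring with R_i units withdrawn at the i-th observed failure, the likelihood is proportional to the product of f(x_i) S(x_i)^{R_i}. The sketch below maximizes this directly for a lognormal model on hypothetical data; the EM algorithm, Fisher information, and Bayesian computations of the paper are not reproduced.

```python
# Sketch: direct MLE for a lognormal sample under progressive Type-II censoring.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

# Hypothetical observed failure times x and withdrawal numbers R_i.
x = np.array([0.45, 0.61, 0.80, 1.05, 1.32, 1.70, 2.10, 2.60])
R = np.array([2,    0,    1,    0,    2,    0,    1,    3])

def neg_loglik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    logf = lognorm.logpdf(x, s=sigma, scale=np.exp(mu))
    logS = lognorm.logsf(x, s=sigma, scale=np.exp(mu))
    return -np.sum(logf + R * logS)

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
print("MLE of (mu, sigma):", res.x[0], np.exp(res.x[1]))
```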

12.
Modelling volatility in the form of a conditional variance function has been popular mainly because of its application in financial risk management. Among the available approaches, we distinguish the parametric GARCH models and the nonparametric local polynomial approximation using weighted least squares or a Gaussian likelihood function. We introduce an alternative likelihood estimate of the conditional variance and show that substituting the error density with its estimate yields similar asymptotic properties, that is, the proposed estimate is adaptive to the error distribution. A theoretical comparison with existing estimates reveals substantial gains in efficiency, especially if the error distribution has fatter tails than the Gaussian distribution. Simulated data confirm the theoretical findings, while an empirical example demonstrates the gains of the proposed estimate.
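
A sketch in the same spirit, under an assumed ARCH(1) specification: fit the conditional variance by Gaussian quasi-likelihood, then replace the Gaussian density with a kernel estimate of the standardized-residual density and re-maximize. This is an illustration of the "plug in the estimated error density" idea, not the estimator analysed in the paper.

```python
# Sketch: adaptive likelihood estimation of an ARCH(1) conditional variance.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gaussian_kde

rng = np.random.default_rng(12)
T, omega, alpha = 1000, 0.2, 0.5
eps = rng.standard_t(df=5, size=T) / np.sqrt(5 / 3)     # heavy-tailed, unit-variance errors
y = np.zeros(T)
for t in range(1, T):
    y[t] = np.sqrt(omega + alpha * y[t - 1] ** 2) * eps[t]

def sigma2_path(params):
    w, a = np.exp(params)                               # positivity via the log scale
    s2 = np.empty(T)
    s2[0] = np.var(y)
    s2[1:] = w + a * y[:-1] ** 2
    return s2

def gaussian_negloglik(params):
    s2 = sigma2_path(params)
    return 0.5 * np.sum(np.log(s2) + y ** 2 / s2)

qmle = minimize(gaussian_negloglik, np.log([0.1, 0.1]), method="Nelder-Mead").x
kde = gaussian_kde(y / np.sqrt(sigma2_path(qmle)))      # estimated error density

def adaptive_negloglik(params):
    s2 = sigma2_path(params)
    return -np.sum(np.log(kde(y / np.sqrt(s2)) + 1e-300) - 0.5 * np.log(s2))

adaptive = minimize(adaptive_negloglik, qmle, method="Nelder-Mead").x
print("Gaussian QML (omega, alpha):", np.exp(qmle))
print("adaptive     (omega, alpha):", np.exp(adaptive))
```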

13.
We present an approximate leaving-one-out technique for estimating the error rate in logistic discrimination. The new measure is based on a one-step approximation of a(i), the maximum likelihood estimate of the parameter vector based on the sample without the ith case. Some inequalities between the resubstitution error rate and the approximate and exact leaving-one-out error rates for the multiple-group logistic model are investigated. Monte Carlo simulations assess the adequacy of the approximate leaving-one-out method as an estimate of the actual error rate. The usefulness of this approach is demonstrated by means of two medical examples.
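
For context, the sketch below computes the exact leaving-one-out error rate (the quantity the one-step approximation is designed to mimic cheaply) alongside the resubstitution error rate, on simulated two-group data; the one-step update itself is not implemented.

```python
# Sketch: resubstitution vs. exact leaving-one-out error rate in logistic discrimination.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(13)
n = 80
X = sm.add_constant(rng.normal(size=(n, 2)))
eta = X @ np.array([0.2, 1.2, -0.9])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-eta))).astype(float)

loo_errors = 0
for i in range(n):
    keep = np.arange(n) != i
    fit = sm.Logit(y[keep], X[keep]).fit(disp=0)      # refit without the ith case
    p_i = fit.predict(X[i:i + 1])[0]
    loo_errors += int((p_i >= 0.5) != bool(y[i]))

resub_fit = sm.Logit(y, X).fit(disp=0)
resub_errors = np.sum((resub_fit.predict(X) >= 0.5) != y.astype(bool))
print("resubstitution error rate :", resub_errors / n)
print("leaving-one-out error rate:", loo_errors / n)
```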

14.
15.
The logistic distribution has been widely used to model growth curves in survival analysis and biological studies. In this article, we propose a goodness-of-fit test for the logistic distribution based on an estimate of the Gini index. The exact distribution of the proposed test statistic and its asymptotic distribution are presented. To compute the test statistic, the parameters of the logistic distribution are estimated by approximate maximum likelihood estimators (AMLEs), which are simple explicit estimators. Through Monte Carlo simulations, power comparisons of the proposed test with some known competing tests are carried out. Finally, an illustrative example is presented and analyzed.
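
The sketch below illustrates the general mechanics of a Gini-type goodness-of-fit test with a Monte Carlo null distribution. The statistic used (sample Gini mean difference divided by the fitted scale, with the plain MLE in place of the paper's AMLEs) is a stand-in, not the exact statistic studied in the paper; because it is location-scale invariant, its null distribution can be simulated from the standard logistic distribution.

```python
# Sketch: a Gini-type goodness-of-fit statistic for the logistic distribution,
# calibrated by Monte Carlo simulation under the null.
import numpy as np
from scipy.stats import logistic

rng = np.random.default_rng(15)

def gini_mean_difference(x):
    x = np.sort(x)
    n = x.size
    # E|X_i - X_j| computed from the order statistics.
    return 2.0 * np.sum((2 * np.arange(1, n + 1) - n - 1) * x) / (n * (n - 1))

def statistic(x):
    _, scale = logistic.fit(x)            # MLE of (loc, scale); stand-in for an AMLE
    return gini_mean_difference(x) / scale

data = rng.logistic(loc=1.0, scale=2.0, size=60)     # hypothetical sample
t_obs = statistic(data)

# Null distribution by simulation from the standard logistic (invariance).
t_null = np.array([statistic(rng.logistic(size=60)) for _ in range(1000)])
p_value = np.mean(np.abs(t_null - t_null.mean()) >= np.abs(t_obs - t_null.mean()))
print("observed statistic:", round(t_obs, 3), " Monte Carlo p-value:", round(p_value, 3))
```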

16.
This paper addresses the estimation of the unknown scale parameter of the half-logistic distribution under a Type-I progressive hybrid censoring scheme. We evaluate the maximum likelihood estimate (MLE) by a numerical method and the EM algorithm, and also obtain the approximate maximum likelihood estimate (AMLE). We use a modified acceptance-rejection method to obtain the Bayes estimate and the corresponding highest posterior density intervals. Monte Carlo simulations are performed to compare the performance of the different methods, and one dataset is analysed for illustrative purposes.
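
As a simplified counterpart of the setting above, the sketch computes the half-logistic scale MLE from a complete (uncensored) sample only; the censoring scheme, EM, AMLE, and Bayesian steps of the paper are not reproduced.

```python
# Sketch: MLE of the half-logistic scale parameter from a complete sample.
import numpy as np
from scipy.stats import halflogistic

rng = np.random.default_rng(16)
# |X| with X logistic(0, scale) follows the half-logistic distribution.
data = np.abs(rng.logistic(loc=0.0, scale=1.8, size=150))

# Fix the location at 0 so that only the scale parameter is estimated.
loc_hat, scale_hat = halflogistic.fit(data, floc=0)
print("MLE of the scale parameter:", scale_hat)
```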

17.
A truncated Cauchy distribution with four unknown parameters is considered, and the derivation and existence of the maximum likelihood estimates are investigated. We provide a sufficient condition for the maximum likelihood estimate of the scale parameter to be finite, and show that the condition is also necessary for sufficiently large samples. Note that all the moments of the truncated Cauchy distribution exist, which makes it much more attractive as a model than the regular Cauchy distribution. We also study, using simulations, the small-sample properties of the maximum likelihood estimates.
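
A small sketch of numerical maximum likelihood for a truncated Cauchy sample. For simplicity only the location and scale are estimated, with the truncation limits treated as known; the paper considers all four parameters.

```python
# Sketch: MLE for a truncated Cauchy distribution with known truncation limits.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import cauchy

rng = np.random.default_rng(17)
a, b = -5.0, 5.0                                       # truncation limits (assumed known)
raw = 0.5 + 1.2 * rng.standard_cauchy(5000)
data = raw[(raw > a) & (raw < b)]                      # truncated sample

def neg_loglik(params):
    loc, log_scale = params
    scale = np.exp(log_scale)
    norm_const = cauchy.cdf(b, loc, scale) - cauchy.cdf(a, loc, scale)
    return -np.sum(cauchy.logpdf(data, loc, scale) - np.log(norm_const))

loc_hat, log_scale_hat = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead").x
print("MLE location:", loc_hat, " MLE scale:", np.exp(log_scale_hat))
```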

18.
Based on the large-sample normal distribution of the sample log odds ratio and its asymptotic variance from maximum likelihood logistic regression, shortest 95% confidence intervals for the odds ratio are developed. Although the usual confidence interval for the odds ratio is unbiased, the shortest interval is not: while covering the true odds ratio with the stated probability, it covers some values below the true odds ratio with higher probability. The upper and lower limits of the shortest interval are shifted to the left of those of the usual interval, with greater shifts in the upper limits. For a log odds model that is linear in a binary covariate X, simulation studies showed that the approximate average percent difference in length is 7.4% for a sample size of n = 100 and 3.8% for n = 200. Precise estimates of the coverage probabilities of the two types of intervals were obtained from simulation studies and are compared graphically. For odds ratio estimates greater (less) than one, shortest intervals are more (less) likely to include one than are the usual intervals. The usual intervals are likelihood-based and the shortest intervals are not; the usual intervals have minimum expected length among the class of unbiased intervals. Shortest intervals do not provide important advantages over the usual intervals, which we recommend for practical use.
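
A numerical sketch of the two intervals being compared: the usual equal-tailed interval and the shortest interval for the odds ratio, both derived from the normal approximation to the log odds ratio. The point estimate and standard error below are hypothetical inputs of the kind produced by a logistic regression fit.

```python
# Sketch: usual vs. shortest 95% confidence interval for an odds ratio.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

log_or_hat, se = 0.7, 0.25                        # hypothetical estimate and standard error
z = norm.ppf(0.975)
usual = np.exp([log_or_hat - z * se, log_or_hat + z * se])

def interval_length(l):
    # Given the lower standardized cut-off l, the upper cut-off u is fixed by
    # requiring 95% coverage on the log scale; length is measured on the OR scale.
    u = norm.ppf(norm.cdf(l) + 0.95)
    return np.exp(log_or_hat + u * se) - np.exp(log_or_hat + l * se)

res = minimize_scalar(interval_length, bounds=(-8.0, norm.ppf(0.05) - 1e-9), method="bounded")
l_star = res.x
u_star = norm.ppf(norm.cdf(l_star) + 0.95)
shortest = np.exp([log_or_hat + l_star * se, log_or_hat + u_star * se])

print("usual interval   :", usual)
print("shortest interval:", shortest, " (shifted to the left, as described above)")
```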

19.
The generalized maximum likelihood estimate (GMLE) assumptions are studied for four product-limit estimates (PLEs): the censoring PLE (Kaplan-Meier estimate), the truncation PLE, the censoring-truncation PLE, and the degenerate PLE, i.e., the empirical distribution function. This paper shows that all of these PLEs are also GMLEs, even though they are derived from partial likelihoods by natural parameterization techniques. However, a counterexample is given to show that Kiefer and Wolfowitz's (1956) assumption for the consistency of the GMLE can hardly be satisfied in the undominated case.
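
For reference, the first of the four product-limit estimates, the Kaplan-Meier estimate under right censoring, computed directly from its defining product. The times and censoring indicators below are hypothetical.

```python
# Sketch: the censoring product-limit (Kaplan-Meier) estimate.
import numpy as np

time  = np.array([2.0, 3.0, 3.0, 5.0, 6.0, 8.0, 9.0, 12.0])
event = np.array([1,   1,   0,   1,   0,   1,   1,   0])    # 1 = failure, 0 = censored

order = np.argsort(time)
time, event = time[order], event[order]

surv = 1.0
print("t     S(t)")
for t in np.unique(time[event == 1]):
    at_risk = np.sum(time >= t)                    # n_i: subjects still at risk at t
    deaths = np.sum((time == t) & (event == 1))    # d_i: failures at t
    surv *= 1.0 - deaths / at_risk                 # product-limit update
    print(f"{t:5.1f} {surv:.3f}")
```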

20.
Three general algorithms that use different strategies are proposed for computing the maximum likelihood estimate of a semiparametric mixture model. They seek to maximize the likelihood function by, respectively, alternating the parameters, profiling the likelihood, and modifying the support set. All three algorithms make direct use of the recently proposed fast and stable constrained Newton method for computing the nonparametric maximum likelihood estimate of a mixing distribution, and additionally employ an optimization algorithm for unconstrained problems. The performance of the algorithms is numerically investigated and compared on the Neyman-Scott problem, on overcoming overdispersion in logistic regression models, and on fitting two-level mixed effects logistic regression models. Satisfactory results have been obtained.
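
To show what the inner nonparametric step is computing, the sketch below estimates a mixing distribution on a fixed grid of support points with EM updates of the weights, a simple stand-in for the constrained Newton method cited above. The data-generating normal mixture is hypothetical.

```python
# Sketch: nonparametric ML for a mixing distribution on a fixed support grid (EM).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(20)
theta = rng.choice([-2.0, 1.0, 3.0], p=[0.3, 0.5, 0.2], size=500)   # latent means
x = theta + rng.standard_normal(500)                                # observed data

grid = np.linspace(-5, 5, 101)                # fixed candidate support points
w = np.full(grid.size, 1.0 / grid.size)       # initial mixing weights
L = norm.pdf(x[:, None], loc=grid[None, :], scale=1.0)   # component likelihoods

for _ in range(500):                          # EM updates of the mixing weights
    post = L * w                              # unnormalized posterior over support points
    post /= post.sum(axis=1, keepdims=True)
    w = post.mean(axis=0)

keep = w > 1e-3
print("estimated support points:", grid[keep])
print("estimated weights       :", np.round(w[keep], 3))
```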

