Similar Documents
20 similar documents found.
1.
Linear random effects models for longitudinal data discussed by Laird and Ware (1982), Jennrich and Schluchter (1986), Lange and Laird (1989), and others are extended in a straightforward manner to nonlinear random effects models. This results in a simple computational approach which accommodates patterned covariance matrices and data insufficient for fitting each subject separately. The technique is demonstrated with an interesting medical data set, and a short, simple SAS PROC IML program based on the EM algorithm is presented.
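For concreteness, the simplest member of this model family, a random-intercept model y_ij = mu + b_i + e_ij with b_i ~ N(0, tau2) and e_ij ~ N(0, sigma2), can be fitted by EM in a few lines. The sketch below is in Python rather than the paper's SAS PROC IML, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def em_random_intercept(groups, n_iter=200):
    """EM fit of y_ij = mu + b_i + e_ij; groups is a list of 1-D
    response arrays, one array per subject."""
    y_all = np.concatenate(groups)
    N = y_all.size
    mu, tau2, sigma2 = y_all.mean(), y_all.var(), y_all.var()
    for _ in range(n_iter):
        # E-step: posterior mean m_i and variance v_i of each random intercept
        v = np.array([1.0 / (len(y) / sigma2 + 1.0 / tau2) for y in groups])
        m = np.array([vi * (y - mu).sum() / sigma2
                      for vi, y in zip(v, groups)])
        # M-step: closed-form updates of the fixed effect and variance components
        mu = sum((y - mi).sum() for y, mi in zip(groups, m)) / N
        tau2 = (m ** 2 + v).mean()
        sigma2 = sum(((y - mu - mi) ** 2).sum() + len(y) * vi
                     for y, mi, vi in zip(groups, m, v)) / N
    return mu, tau2, sigma2
```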

2.
In the estimation of cell probabilities from a two-way contingency table, suppose that a priori the classification variables are believed to be independent. New empirical Bayes and Bayes estimators are proposed which shrink the observed proportions towards the classical estimates under the model of independence. The estimators, based on a Dirichlet mixture class of priors, compare favorably to an estimator of Laird (1978) that is based on a normal prior on the terms of a log-linear model. The methods are generalized to three-way tables.
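The flavor of such shrinkage estimators can be illustrated with a short sketch. Below, the observed cell proportions are pulled toward the independence fit using a Fienberg-Holland-style pseudo-Bayes weight; this is a stand-in for, not a reproduction of, the paper's Dirichlet-mixture estimator.

```python
import numpy as np

def shrink_to_independence(counts):
    """Shrink two-way table cell proportions toward the independence fit."""
    counts = np.asarray(counts, float)
    n = counts.sum()
    p = counts / n                                   # observed proportions
    gamma = np.outer(p.sum(axis=1), p.sum(axis=0))   # independence fit
    # data-driven shrinkage weight (pseudo-Bayes plug-in)
    K = (1.0 - (p ** 2).sum()) / ((gamma - p) ** 2).sum()
    w = K / (n + K)
    return (1.0 - w) * p + w * gamma
```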

3.
The procedure suggested by DerSimonian and Laird is the simplest and most commonly used method for fitting the random effects model for meta-analysis. Here it is shown that, unless all studies are of similar size, this is inefficient when estimating the between-study variance, but is remarkably efficient when estimating the treatment effect. If formal inference is restricted to statements about the treatment effect, and the sample size is large, there is little point in implementing more sophisticated methodology. However, it is further demonstrated, for a simple special case, that use of the profile likelihood results in actual coverage probabilities for 95% confidence intervals that are closer to nominal levels for smaller sample sizes. Alternative methods for making inferences for the treatment effect may therefore be preferable if the sample size is small, but the DerSimonian and Laird procedure retains its usefulness for larger samples.
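Since the DerSimonian and Laird procedure itself amounts to a moment estimator of the between-study variance followed by a weighted average, a minimal Python sketch is given below; variable names are illustrative.

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird random-effects meta-analysis.
    y: study effect estimates; v: their within-study variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    y_fixed = (w * y).sum() / w.sum()          # fixed-effect pooled estimate
    Q = (w * (y - y_fixed) ** 2).sum()         # Cochran's Q statistic
    k = y.size
    # moment estimator of the between-study variance, truncated at zero
    tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_star = 1.0 / (v + tau2)                  # random-effects weights
    mu = (w_star * y).sum() / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    return mu, se, tau2
```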

4.
Most models for incomplete data are formulated within the selection model framework. Pattern-mixture models are increasingly seen as a viable alternative, from both an interpretational and a computational point of view (Little 1993, Hogan and Laird 1997, Ekholm and Skinner 1998). Whereas most applications are either for continuous normally distributed data or for simplified categorical settings such as contingency tables, we show how a multivariate odds ratio model (Molenberghs and Lesaffre 1994, 1998) can be used to fit pattern-mixture models to repeated binary outcomes with continuous covariates. Apart from point estimation, useful methods for interval estimation are presented, and data from a clinical study are analyzed to illustrate the methods.

5.
Celebrating the 20th anniversary of the presentation of the paper by Dempster, Laird and Rubin which popularized the EM algorithm, we investigate, after a brief historical account, strategies that aim to make the EM algorithm converge faster while maintaining its simplicity and stability (e.g. automatic monotone convergence in likelihood). First we introduce the idea of a 'working parameter' to facilitate the search for efficient data augmentation schemes and thus fast EM implementations. Second, summarizing various recent extensions of the EM algorithm, we formulate a general alternating expectation-conditional maximization algorithm (AECM) that couples flexible data augmentation schemes with model reduction schemes to achieve efficient computations. We illustrate these methods using multivariate t-models with known or unknown degrees of freedom and Poisson models for image reconstruction. We show, through both empirical and theoretical evidence, the potential for a dramatic reduction in computational time with little increase in human effort. We also discuss the intrinsic connection between EM-type algorithms and the Gibbs sampler, and the possibility of using the techniques presented here to speed up the latter. The main conclusion of the paper is that, with the help of statistical considerations, it is possible to construct algorithms that are simple, stable and fast.
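As an illustration of the kind of data augmentation the paper studies, below is a minimal Python sketch of the classical EM fit of a multivariate t-model with known degrees of freedom, where the augmented data are the latent precision weights. It shows the baseline scheme only, not the faster working-parameter variants developed in the paper.

```python
import numpy as np

def em_multivariate_t(X, nu, n_iter=100):
    """EM for the location and scatter of a multivariate t with known df nu."""
    n, p = X.shape
    mu, Sigma = X.mean(axis=0), np.cov(X, rowvar=False)
    for _ in range(n_iter):
        # E-step: expected latent precision weight for each observation
        diff = X - mu
        d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma), diff)
        u = (nu + p) / (nu + d2)
        # M-step: weighted mean; scatter update uses an unweighted denominator
        mu = (u[:, None] * X).sum(axis=0) / u.sum()
        diff = X - mu
        Sigma = (diff.T * u) @ diff / n
    return mu, Sigma
```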

6.
A question of fundamental importance for the meta-analysis of heterogeneous multidimensional data is how to form a best consensus estimator of common parameters, and what uncertainty to attach to the estimate. This issue is addressed for a class of unbalanced linear designs which includes classical growth curve models. The solution obtained is similar to the popular DerSimonian and Laird (1986) method for a simple meta-analysis model. By using almost unbiased variance estimators, an estimator of the covariance matrix of this procedure is derived. These methods are illustrated with two examples and compared via simulation.

7.
The seemingly unrelated regression model is viewed in the context of repeated measures analysis. Regression parameters and the variance-covariance matrix of the seemingly unrelated regression model can be estimated by using two-stage Aitken estimation. The first stage is to obtain a consistent estimator of the variance-covariance matrix. The second stage uses this matrix to obtain the generalized least squares estimators of the regression parameters. The maximum likelihood (ML) estimators of the regression parameters can be obtained by performing the two-stage estimation iteratively. The iterative two-stage estimation procedure is shown to be equivalent to the EM algorithm (Dempster, Laird, and Rubin, 1977) proposed by Jennrich and Schluchter (1986) and Laird, Lange, and Stram (1987) for repeated measures data. The equivalence of the iterative two-stage estimator and the ML estimator has been previously demonstrated empirically in a Monte Carlo study by Kmenta and Gilbert (1968). It does not appear to be widely known that the two estimators are equivalent theoretically. This paper demonstrates this equivalence.
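A minimal Python sketch of the iterative two-stage (feasible GLS) procedure for a SUR system is given below; it assumes equal numbers of observations across equations, and all names are illustrative.

```python
import numpy as np

def iterated_sur(ys, Xs, n_iter=50):
    """Iterated two-stage Aitken (feasible GLS) estimation of a SUR system.
    ys: list of (T,) responses; Xs: list of (T, k_m) design matrices."""
    T, M = len(ys[0]), len(ys)
    # start from equation-by-equation OLS
    betas = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in zip(Xs, ys)]
    ks = [X.shape[1] for X in Xs]
    offs = np.cumsum([0] + ks)
    for _ in range(n_iter):
        # Stage 1: consistent estimate of the cross-equation error covariance
        E = np.column_stack([y - X @ b for y, X, b in zip(ys, Xs, betas)])
        S = E.T @ E / T
        Sinv = np.linalg.inv(S)
        # Stage 2: GLS on the stacked system, Cov(errors) = S kron I_T
        A = np.zeros((offs[-1], offs[-1]))
        r = np.zeros(offs[-1])
        for i in range(M):
            for j in range(M):
                A[offs[i]:offs[i+1], offs[j]:offs[j+1]] = Sinv[i, j] * Xs[i].T @ Xs[j]
            r[offs[i]:offs[i+1]] = sum(Sinv[i, j] * Xs[i].T @ ys[j] for j in range(M))
        beta_all = np.linalg.solve(A, r)
        betas = [beta_all[offs[m]:offs[m + 1]] for m in range(M)]
    return betas, S
```

Iterating the two stages to convergence yields the ML estimator, which is the equivalence the paper establishes.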

8.
Empirical Bayes approaches have often been applied to the problem of estimating small-area parameters. As a compromise between synthetic and direct survey estimators, an estimator based on an empirical Bayes procedure is not subject to the large bias that is sometimes associated with a synthetic estimator, nor is it as variable as a direct survey estimator. Although the point estimates perform very well, naïve empirical Bayes confidence intervals tend to be too short to attain the desired coverage probability, since they fail to incorporate the uncertainty which results from having to estimate the prior distribution. Several alternative methodologies for interval estimation which correct for the deficiencies associated with the naïve approach have been suggested. Laird and Louis (1987) proposed three types of bootstrap for correcting naïve empirical Bayes confidence intervals. Calling the methodology of Laird and Louis (1987) an unconditional bias-corrected naïve approach, Carlin and Gelfand (1991) suggested a modification to the Type III parametric bootstrap which corrects for bias in the naïve intervals by conditioning on the data. Here we empirically evaluate the Type II and Type III bootstraps proposed by Laird and Louis, as well as the modification suggested by Carlin and Gelfand (1991), with the objective of examining the coverage properties of empirical Bayes confidence intervals for small-area proportions.
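A heavily simplified Python sketch of the parametric-bootstrap idea is shown below for a Gaussian-Gaussian model with known sampling variance. It conveys the mechanics (resample from the fitted two-stage model, re-estimate the prior, mix the resulting posteriors) rather than reproducing any of the Laird and Louis types exactly; all names and the moment-based prior fit are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def eb_fit(y, sigma2):
    """Moment estimates of a N(mu, tau2) prior from y_i ~ N(theta_i, sigma2)."""
    return y.mean(), max(0.0, y.var(ddof=1) - sigma2)

def bootstrap_eb_interval(y, sigma2, i, B=2000, level=0.95):
    """Parametric-bootstrap interval for theta_i that propagates the
    prior-estimation uncertainty ignored by the naive EB interval."""
    mu, tau2 = eb_fit(y, sigma2)
    draws = np.empty(B)
    for b in range(B):
        # resample a full data set from the fitted two-stage model
        theta_star = rng.normal(mu, np.sqrt(tau2), size=y.size)
        y_star = rng.normal(theta_star, np.sqrt(sigma2))
        mu_b, tau2_b = eb_fit(y_star, sigma2)   # re-estimated prior
        # draw theta_i from its posterior under the replicated prior
        shrink = tau2_b / (tau2_b + sigma2)
        post_mean = mu_b + shrink * (y[i] - mu_b)
        draws[b] = rng.normal(post_mean, np.sqrt(shrink * sigma2))
    a = (1.0 - level) / 2.0
    return np.quantile(draws, [a, 1.0 - a])
```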

9.
Incomplete data subject to non-ignorable non-response are often encountered in practice and suffer from a non-identifiability problem. A follow-up sample is randomly selected from the set of non-respondents to avoid the non-identifiability problem and obtain complete responses. Glynn, Laird, and Rubin analyzed non-ignorable missing data with a follow-up sample under a pattern-mixture model. In this article, maximum likelihood estimation of the parameters of categorical missing data is considered with a follow-up sample under a selection model. To estimate the parameters with non-ignorable missing data, the EM algorithm with weighting, proposed by Ibrahim, is used. That is, in the E-step, the weighted mean is calculated using fractional weights for the imputed data. Variances are estimated using an approximate jackknife method. Simulation results are presented to compare the proposed method with previously presented methods.

10.
A problem which occurs in the practice of meta-analysis is that one or more component studies may have sparse data, such as zero events in the treatment and control groups. Two possible approaches were explored using simulations. The corrected method, in which one half is added to each cell, was compared to the uncorrected method. These methods were compared over a range of sparse data situations in terms of coverage rates using three summary statistics: the Mantel-Haenszel odds ratio and the DerSimonian and Laird odds ratio and rate difference. The uncorrected method performed better only when using the Mantel-Haenszel odds ratio with very little heterogeneity present. For all other sparse data applications, the continuity correction performed better and is recommended for use in meta-analyses of similar scope.
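The corrected method in the comparison is simple to state in code: add one half to every cell of every table before pooling. A minimal sketch for the Mantel-Haenszel odds ratio follows; names are illustrative.

```python
import numpy as np

def mantel_haenszel_or(a, b, c, d, correct=True):
    """Mantel-Haenszel pooled odds ratio across studies.
    a, b, c, d: per-study cell counts (treatment events, treatment
    non-events, control events, control non-events). With correct=True,
    one half is added to every cell (the corrected method)."""
    a, b, c, d = (np.asarray(x, float) for x in (a, b, c, d))
    if correct:
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    n = a + b + c + d
    return (a * d / n).sum() / (b * c / n).sum()
```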

11.
The non-homogeneous Poisson process (NHPP) model is a very important class of software reliability models and is widely used in software reliability engineering. NHPPs are characterized by their intensity functions. In the literature it is usually assumed that the functional forms of the intensity functions are known and only some parameters in the intensity functions are unknown. Parametric statistical methods can then be applied to estimate or test the unknown reliability models. However, in realistic situations it is often the case that the functional form of the failure intensity is not well known, or is completely unknown. In this case we have to use functional (non-parametric) estimation methods. Non-parametric techniques do not require any preliminary assumptions on the software models and can therefore reduce parametric modeling bias. The existing non-parametric methods in the statistical literature are usually not applicable to software reliability data. In this paper we construct non-parametric methods to estimate the failure intensity function of the NHPP model, taking the particularities of software failure data into consideration.
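A generic example of such a functional estimator, not the specific construction of the paper, is a kernel smooth of the observed failure times, sketched below.

```python
import numpy as np

def kernel_intensity(failure_times, grid, h):
    """Gaussian-kernel estimate of an NHPP intensity from one realization,
    evaluated on `grid` with bandwidth h (boundary effects ignored)."""
    t = np.asarray(failure_times, float)[None, :]
    g = np.asarray(grid, float)[:, None]
    return np.exp(-0.5 * ((g - t) / h) ** 2).sum(axis=1) / (h * np.sqrt(2 * np.pi))
```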

12.
In this paper we propose test statistics for a general hypothesis concerning the adequacy of multivariate random-effects covariance structures in a multivariate growth curve model with differing numbers of random effects (Lange, N., N.M. Laird, J. Amer. Statist. Assoc. 84 (1989) 241–247). Since the exact likelihood ratio (LR) statistic for the hypothesis is complicated, we suggest using a modified LR statistic. An asymptotic expansion of the null distribution of the statistic is obtained. The exact LR statistic is also discussed.

13.
Methods of estimating the unknown parameters of the trend function of a trend-renewal process are investigated for the case in which the renewal distribution function is unknown. When the renewal distribution is unknown, the likelihood function of the trend-renewal process is also unknown, and consequently the maximum likelihood method cannot be used. In such a situation we propose three other methods of estimating the trend parameters. The proposed methods can also be used to predict future occurrence times. The performance of the estimators based on these methods is illustrated numerically for some trend-renewal processes for which the statistical inference is analytically intractable.

14.
Some problems of point and interval prediction in a trend-renewal process (TRP) are considered. TRPs, whose realizations depend on a renewal distribution as well as on a trend function, comprise the non-homogeneous Poisson and renewal processes and serve as useful reliability models for repairable systems. For these processes, some possible ideas and methods for constructing the predicted next failure time and the prediction interval for the next failure time are presented. A method of constructing the predictors is also presented for the case in which the renewal distribution of the TRP is unknown (and consequently the likelihood function of the process is unknown). Using the proposed prediction methods, simulations are conducted to compare the predicted times and prediction intervals for a TRP with a completely unknown renewal distribution with the corresponding results for a TRP with a Weibull renewal distribution and a power-law trend function. The prediction methods are also applied to some real data.

15.
Confidence intervals (CIs) are very useful for trend estimation in meta-analysis. A CI provides an interval estimate of the regression slope as well as an indicator of the reliability of the estimate. Thus precise calculation of a confidence interval at the desired level is important. It is always difficult to quantify CIs explicitly when there is publication bias in the meta-analysis. Various CIs have been proposed, including the most widely used DerSimonian–Laird CI and the recently proposed Henmi–Copas CI. The latter provides a robust solution when there are non-ignorable missing data due to publication bias. In this paper we extend the idea to meta-analysis for trend estimation. We apply the method in different scenarios and show that this type of CI is more robust than the others.

16.
In this article, a competing risks model based on exponential distributions is considered under the adaptive Type-II progressive censoring scheme introduced by Ng et al. [2009, Naval Research Logistics 56:687-698] for life testing and reliability experiments. Moreover, we assume that some causes of failure are unknown. The maximum likelihood estimators (MLEs) of the unknown parameters are established. The exact conditional and asymptotic distributions of the estimators are derived in order to construct confidence intervals, as well as two different bootstrap intervals, for the unknown parameters. Under suitable priors on the unknown parameters, Bayes estimates and the corresponding two-sided Bayesian probability intervals are obtained. Also, to evaluate the average bias and mean square error of the MLEs and to compare the confidence intervals based on all the methods mentioned, a simulation study was carried out. Finally, we present one real dataset to illustrate the proposed methods.

17.
We consider two problems concerning locating change points in a linear regression model. One involves jump discontinuities (change points) in a regression model and the other involves regression lines connected at unknown points. We compare four methods for estimating single or multiple change points in a regression model, when both the error variance and the regression coefficients change simultaneously at the unknown point(s): the Bayesian, Julious, grid search, and segmented methods. The proposed methods are evaluated via a simulation study and compared via some standard measures of estimation bias and precision. Finally, the methods are illustrated and compared using three real data sets. The simulation and empirical results overall favor both the segmented and Bayesian methods of estimation, which simultaneously estimate the change point and the other model parameters, though only the Bayesian method is able to handle both continuous and discontinuous change-point problems successfully. If the regression lines are known to be continuous, then the segmented method ranks first among the methods.
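Of the four methods compared, the grid search is the easiest to make concrete: fit separate least-squares lines on each side of every candidate split and keep the split minimizing the total residual sum of squares. A single-change-point Python sketch, with illustrative names, follows.

```python
import numpy as np

def grid_search_changepoint(x, y, min_seg=3):
    """Estimate a single change point by exhaustive search over splits."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_rss, best_cp = np.inf, None
    for k in range(min_seg, x.size - min_seg + 1):
        rss = 0.0
        # fit each side of the candidate split by ordinary least squares
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            X = np.column_stack([np.ones_like(xs), xs])
            beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
            rss += ((ys - X @ beta) ** 2).sum()
        if rss < best_rss:
            best_rss, best_cp = rss, x[k]   # first point of second segment
    return best_cp
```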

18.
The Shewhart, Bonferroni-adjustment, and analysis of means (ANOM) control charts are typically applied to monitor the mean of a quality characteristic. The Shewhart and Bonferroni procedures are used to detect special causes in a production process, where the control limits are constructed by assuming a normal distribution when the parameters (mean and standard deviation) are known, and an approximately normal distribution when the parameters are unknown. The ANOM method is an alternative to the analysis of variance method; it can be used to establish mean control charts by applying the equicorrelated multivariate noncentral t distribution. In this article, we establish new control charts for phase I and phase II monitoring, based on the normal and t distributions, for the cases of known and unknown standard deviation. Our proposed methods are at least as effective as the classical Shewhart methods and have some advantages.
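For reference, the classical Shewhart X-bar limits that serve as the baseline here take only a few lines. The sketch below uses the pooled within-subgroup standard deviation as the sigma estimate, a common textbook choice; unbiasing constants such as c4 are omitted, and the names are illustrative.

```python
import numpy as np

def shewhart_xbar_limits(subgroups, L=3.0):
    """Phase-I X-bar chart limits from an (m, n) array of rational subgroups."""
    X = np.asarray(subgroups, float)
    m, n = X.shape
    center = X.mean()
    sigma = np.sqrt(X.var(axis=1, ddof=1).mean())  # pooled within-subgroup SD
    half_width = L * sigma / np.sqrt(n)
    return center - half_width, center, center + half_width
```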

19.
Let X1, X2, … be i.i.d. observations from a mixture density. The support of the unknown prior distribution is the union of two unknown intervals. The paper deals with an empirical Bayes testing approach (θ ≤ c against θ > c, where c is an unknown parameter to be estimated) in order to classify the observed variables as coming from one population or the other, according to whether θ belongs to one or the other unknown interval. Two methods are proposed in which asymptotically optimal decision rules are constructed while avoiding estimation of the unknown prior. The first method deals with the case of exponential families and is a generalization of the method of Johns and Van Ryzin (1971, 1972), whereas the second deals with families that are closed under convolution and is a Fourier method. The application of the Fourier method to some densities (e.g. contaminated Gaussian distributions, the exponential distribution, the double-exponential distribution) which are interesting in view of applications and which cannot be studied by means of the direct method is also considered herein.

20.
The proven optimality properties of empirical Bayes estimators and their documented successful performance in practice have made them popular. Although many statisticians have used these estimators since the landmark paper of James and Stein (1961), relatively few have proposed techniques for protecting them from the effects of outlying observations or outlying parameters. One notable series of studies on protection against outlying parameters was conducted by Efron and Morris (1971, 1972, 1975). In the fully Bayesian case, a general discussion of robust procedures can be found in Berger (1984, 1985). Here we implement and evaluate a different approach to outlier protection in a random-effects model, based on appropriate specification of the prior distribution. When unusual parameters are present, we estimate the prior as a step function, as suggested by Laird and Louis (1987). This procedure is evaluated empirically, using a number of simulated data sets to compare the effects of the step-function prior with those of the normal and Laplace priors on the prediction of small-area proportions.
