Similar Documents
20 similar documents found.
1.
The composite likelihood is among the computational methods used to estimate the generalized linear mixed model (GLMM) in bivariate meta-analysis of diagnostic test accuracy studies. Its advantage is that the likelihood can be derived conveniently under the assumption of independence between the random effects, but the merit or necessity of this method has not been clearly analysed. For synthesis of diagnostic test accuracy studies, a copula mixed model has been proposed in the biostatistics literature. This general model includes the GLMM as a special case and also allows flexible dependence modelling, beyond simple linear correlation structures, normality, and tail independence. A maximum likelihood (ML) method, based on evaluating the two-dimensional integral of the likelihood with quadrature methods, has been proposed; it removes any computational difficulty that might be caused by the double integral in the likelihood function. Both methods are thoroughly examined with extensive simulations and illustrated with data from a published meta-analysis. It is shown that the ML method has no non-convergence issues or computational difficulties and at the same time allows estimation of the dependence between study-specific sensitivity and specificity, and thus prediction via summary receiver operating characteristic curves.
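The quadrature idea behind such an ML method can be illustrated in a few lines. The sketch below is not the authors' implementation; it only shows the building block, approximating a bivariate-normal expectation (the kind of double integral that appears in a mixed-model likelihood) with a tensor product of Gauss-Hermite nodes.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def bivariate_normal_expectation(g, rho, n_nodes=21):
    """Approximate E[g(U, V)] for (U, V) standard bivariate normal with
    correlation rho, using a tensor grid of Gauss-Hermite nodes.

    Gauss-Hermite quadrature integrates against exp(-x^2), so nodes are
    rescaled by sqrt(2) and the tensor weights by 1/pi; correlation is
    introduced through the Cholesky factor of the correlation matrix.
    """
    x, w = hermgauss(n_nodes)
    u = np.sqrt(2.0) * x                       # nodes for N(0, 1)
    U, V = np.meshgrid(u, u, indexing="ij")    # tensor-product grid
    W = np.outer(w, w) / np.pi                 # tensor-product weights
    V_corr = rho * U + np.sqrt(1.0 - rho**2) * V
    return float(np.sum(W * g(U, V_corr)))

# Sanity check: E[U * V] equals rho for standard bivariate normal margins
approx = bivariate_normal_expectation(lambda u, v: u * v, rho=0.5)
```

With 21 nodes per dimension the rule is exact for polynomial integrands of degree up to 41, which is why the sanity check recovers rho almost to machine precision.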

2.
Spatially correlated survival data are frequently observed in ecological and epidemiological studies. A standard assumption in clustered survival models is inter-cluster independence, which may not be adequate for modelling dependence in spatial settings. For survival data, the likelihood function based on a spatial frailty may be complicated. In this paper, we develop a weighted estimating equation for spatially right-censored data. Some large-sample properties of the estimate are developed. We also conduct simulations to compare estimation performance with other methods. A data set from a study of forest decline in Wisconsin is used to illustrate the proposed method.

3.
We consider the problem of estimating the proportion of true null hypotheses, π0, in a multiple-hypothesis set-up. The tests are based on observed p-values. We first review published estimators based on the estimator that was suggested by Schweder and Spjøtvoll. Then we derive new estimators based on nonparametric maximum likelihood estimation of the p-value density, restricting to decreasing and convex decreasing densities. The estimators of π0 are all derived under the assumption of independent test statistics. Their performance under dependence is investigated in a simulation study. We find that the estimators are relatively robust with respect to the assumption of independence and work well also for test statistics with moderate dependence.
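The Schweder-Spjøtvoll estimator that these methods start from is simple enough to state in code. A minimal sketch; the tuning constant λ = 0.5 and the simulated p-value mixture are illustrative assumptions, not from the paper.

```python
import numpy as np

def pi0_schweder_spjotvoll(pvals, lam=0.5):
    """Estimate the proportion of true nulls, pi0, from p-values.

    Under a true null, p-values are Uniform(0, 1), so roughly
    pi0 * m * (1 - lam) of the m p-values should exceed lam;
    inverting that count gives the estimator.
    """
    pvals = np.asarray(pvals)
    m = pvals.size
    return np.sum(pvals > lam) / (m * (1.0 - lam))

rng = np.random.default_rng(0)
# Hypothetical mixture: 800 true nulls (uniform p-values) plus
# 200 alternatives with small, Beta-distributed p-values
p = np.concatenate([rng.uniform(size=800), rng.beta(0.1, 10.0, size=200)])
pi0_hat = pi0_schweder_spjotvoll(p)   # true pi0 here is 0.8
```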

4.
Multiple comparison, which tests tens of thousands of hypotheses simultaneously, has been developed extensively under the assumption of independence of the test statistics. In practice the independence assumption, without which multiple-comparison inference becomes very challenging, may not hold. Fan, Han and Gu proposed an efficient way to estimate the false discovery proportion when the test statistics are normally distributed with arbitrary covariance dependence. The normality assumption constrains the use of their estimator. In this article, we generalize their method to cases where the test statistics are built on a normally distributed sample.

5.
The assumption of serial independence of disturbances is the starting point of most of the work on analyzing market disequilibrium models. We derive tests for serial dependence, given normality and homoscedasticity, using the Lagrange multiplier (LM) test principle. Although the likelihood function under serial dependence is very complicated and involves multiple integrals of dimension equal to the sample size, the test statistic obtained through the LM principle is very simple. We apply the test to the housing-start data of Fair and Jaffee (1972) and study its finite-sample properties through simulation. The test performs quite well in finite samples in terms of size and power. We also present an analysis of disequilibrium models that assumes logistic rather than normal disturbances; the relative performances of the two distributions are investigated by simulation.
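In its simplest textbook form the LM principle for serial correlation reduces to an auxiliary regression. The sketch below is the standard Breusch-Godfrey-type LM statistic for serial correlation in residuals, shown only to illustrate the principle; it is not the paper's disequilibrium-model statistic, and the simulated AR(1) series is an illustrative assumption.

```python
import numpy as np

def lm_serial_correlation(resid, order=1):
    """Breusch-Godfrey-style LM statistic: regress residuals on their
    own lags and form LM = n * R^2, asymptotically chi-square with
    `order` degrees of freedom under the null of no serial dependence."""
    e = np.asarray(resid, dtype=float)
    n = e.size - order
    y = e[order:]
    X = np.column_stack(
        [np.ones(n)] + [e[order - k: -k] for k in range(1, order + 1)]
    )
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_res = np.sum((y - X @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return n * (1.0 - ss_res / ss_tot)

rng = np.random.default_rng(1)
wn = rng.standard_normal(500)            # serially independent series
ar = np.empty(500)
ar[0] = wn[0]
for t in range(1, 500):
    ar[t] = 0.8 * ar[t - 1] + wn[t]      # strong serial dependence
lm_wn = lm_serial_correlation(wn)        # small, roughly chi-square(1)
lm_ar = lm_serial_correlation(ar)        # large, null clearly rejected
```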

6.
In this article, three innovative panel error-correction model (PECM) tests are proposed. These tests are based on the multivariate versions of the Wald (W), likelihood ratio (LR), and Lagrange multiplier (LM) tests. Using Monte Carlo simulations, the size and power of the tests are investigated under both cross-sectional dependence and independence of the error terms. We find that the LM test is the best option when the error terms follow independent white-noise processes. However, in the more empirically relevant case of cross-sectional dependence, the W test is the optimal choice. In contrast to previous studies, our method is general and does not rely on the strict assumption that a common factor causes the cross-sectional dependency. In an empirical application, our method is demonstrated on the Fisher effect, a hypothesis on whose existence there is still no clear consensus. Based on our sample of the five Nordic countries, we apply the test and find evidence which, in contrast to most previous research, confirms the Fisher effect.

7.
For some discrete-state series, such as DNA sequences, it can often be postulated that the probabilistic behaviour is governed by a Markov chain. For deciding whether an uncharacterized piece of DNA is part of the coding region of a gene under the Markovian assumption, two statistical tools are essential: hypothesis testing of the order of the Markov chain and estimation of the transition probabilities. To improve the traditional statistical procedures for both when the stationarity assumption holds, a new formulation of the homogeneity hypothesis is proposed, in which log-linear modelling is applied for conditional independence jointly with homogeneity restrictions on the expected means of the transition counts in the sequence. In addition, a variety of test statistics and estimators can be obtained using φ-divergence measures; the well-known likelihood ratio test statistic and maximum likelihood estimator arise as special cases.
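The classical maximum-likelihood estimator that the φ-divergence family generalizes is just the row-normalized matrix of one-step transition counts. A minimal sketch; the toy DNA string is illustrative.

```python
import numpy as np

def transition_mle(seq, states):
    """Maximum-likelihood estimate of a first-order Markov transition
    matrix: count one-step transitions and normalize each row."""
    idx = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[idx[a], idx[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Guard empty rows (states never observed as a source) against 0/0
    return counts / np.where(row_sums == 0, 1, row_sums)

# Toy sequence: transitions from A are A once and C three times,
# so the estimated row for A is (0.25, 0.75, 0, 0)
P = transition_mle("AACGTACGTACG", list("ACGT"))
```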

8.
To estimate the total number of distinct species in a given region, Bayesian methods along with quadrat sampling procedures have been used by several authors. A key underlying assumption is independence among the species. In this note, we analyse these estimates while allowing a generalized binomial dependence between species.

9.
We study the benefit of exploiting the gene–environment independence (GEI) assumption for inferring the joint effect of genotype and environmental exposure on disease risk in a case-control study. By transforming the problem into a constrained maximum likelihood estimation problem, we derive the asymptotic distribution of the maximum likelihood estimator (MLE) under the GEI assumption (MLE-GEI) in closed form. Our approach uncovers a transparent explanation of the efficiency gained by exploiting the GEI assumption in more general settings, thus bridging an important gap in the existing literature. Moreover, we propose an easy-to-implement numerical algorithm for estimating the model parameters in practice. Finally, we conduct simulation studies to compare the proposed method with the traditional prospective logistic regression method and the case-only estimator. The Canadian Journal of Statistics 47: 473–486; 2019 © 2019 Statistical Society of Canada
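The case-only estimator used in the comparison exploits the GEI assumption directly: under gene-environment independence and a rare disease, the odds ratio between G and E computed among cases alone estimates the multiplicative G x E interaction. A minimal sketch with hypothetical case counts.

```python
def case_only_interaction(cases_ge):
    """Case-only estimator of the multiplicative G x E interaction.

    Valid under gene-environment independence in the source population
    and a rare disease. cases_ge is a 2x2 table of CASE counts:
    rows index genotype (G = 0, 1), columns exposure (E = 0, 1).
    """
    (a, b), (c, d) = cases_ge   # a: G=0,E=0  b: G=0,E=1  c: G=1,E=0  d: G=1,E=1
    return (a * d) / (b * c)    # ordinary 2x2 odds ratio among cases

# Hypothetical counts for illustration: estimated interaction OR = 3
or_ge = case_only_interaction([[200, 50], [40, 30]])
```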

10.
Multivariate failure time data arise when data consist of clusters in which the failure times may be dependent. A popular approach to such data is the marginal proportional hazards model with estimation under the working independence assumption. In this paper, we consider the Clayton–Oakes model with marginal proportional hazards and use the full model structure to improve on efficiency compared with the independence analysis. We derive a likelihood based estimating equation for the regression parameters as well as for the correlation parameter of the model. We give the large sample properties of the estimators arising from this estimating equation. Finally, we investigate the small sample properties of the estimators through Monte Carlo simulations.
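The Clayton-Oakes structure that delivers the efficiency gain over the working-independence analysis couples the marginal survival functions through a single association parameter. A small sketch of the joint survival function; the marginal values are arbitrary.

```python
def clayton_joint_survival(s1, s2, theta):
    """Clayton-Oakes joint survival from marginal survival values:

        S(t1, t2) = (S1^-theta + S2^-theta - 1)^(-1/theta),  theta > 0.

    theta -> 0 recovers independence (S1 * S2), and dependence grows
    with theta; Kendall's tau for this family is theta / (theta + 2)."""
    return (s1 ** (-theta) + s2 ** (-theta) - 1.0) ** (-1.0 / theta)

# Nearly independent: close to the product 0.7 * 0.6 = 0.42
near_indep = clayton_joint_survival(0.7, 0.6, theta=1e-6)
# Strong positive dependence raises the joint survival probability
strong_dep = clayton_joint_survival(0.7, 0.6, theta=5.0)
```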

11.
Traditional hierarchical models assume that groups are independent of one another and do not account for between-group correlation. Data grouped by geographic unit, however, often exhibit spatial dependence: individuals are affected not only by their own region but possibly also by neighbouring regions. In that case, the traditional hierarchical model's distributional assumption on the level-2 residuals no longer holds. To handle spatially hierarchical data, this paper introduces ideas from spatial statistics and spatial econometric models into the hierarchical model, combining the hierarchical structure with spatial correlation, and proposes a spatial hierarchical linear model. Maximum likelihood estimators are derived for its fixed effects, variance-covariance components, and spatial regression parameters; in applying the EM algorithm, the Fisher scoring algorithm is used in combination.

12.
In clustered survival data, the dependence among individual survival times within a cluster has usually been described using copula models and frailty models. In this paper we propose a profile likelihood approach for semiparametric copula models with varying cluster sizes. We also propose a likelihood ratio method, based on the profile likelihood, for testing the absence of the association parameter (i.e. a test of independence) under the copula models, which leads to a boundary problem in the parameter space. We show via a simulation study that the proposed likelihood ratio method, using an asymptotic chi-square mixture distribution, performs well as the sample size increases. We compare the behaviour of the two models using the profile likelihood approach in a semiparametric setting. The proposed method is demonstrated on two well-known data sets.
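The chi-square mixture that handles the boundary problem has a closed-form tail: when a single association parameter is tested at the boundary of its space, the null distribution of the LRT statistic is the equal mixture of chi-square distributions with 0 and 1 degrees of freedom. A sketch of the resulting p-value.

```python
import math

def boundary_lrt_pvalue(lrt_stat):
    """p-value for a likelihood ratio test of a single parameter lying
    on the boundary of its space: the null distribution is the mixture
    0.5 * chi2(0 df) + 0.5 * chi2(1 df), so for T > 0

        p = 0.5 * P(chi2_1 > T) = 0.5 * erfc(sqrt(T / 2)).

    The chi2(0 df) component is a point mass at zero."""
    if lrt_stat <= 0.0:
        return 1.0
    return 0.5 * math.erfc(math.sqrt(lrt_stat / 2.0))

# Halving the naive chi2(1) p-value: T = 2.706 gives p close to 0.05
p_val = boundary_lrt_pvalue(2.706)
```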

13.
In a breakthrough paper, Benjamini and Hochberg (J Roy Stat Soc Ser B 57:289–300, 1995) proposed a new error measure for multiple testing, the false discovery rate (FDR), and developed a distribution-free procedure to control it under independence among the test statistics. In this paper we argue, by extensive simulation and theoretical considerations, that the assumption of independence is not needed. Along the lines of (Ann Stat 32:1035–1061, 2004b), we moreover provide a more powerful method that exploits an estimator of the number of false nulls among the tests. We propose a whole family of iterative estimators that prove robust under both dependence and independence of the test statistics. These estimators can also be used to improve classical multiple testing procedures and, more generally, to estimate the weight of a known component in a mixture distribution. The innovations are illustrated by simulations.
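The Benjamini-Hochberg step-up procedure itself is short. A minimal sketch; the p-values and the level q are made up for illustration.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: with sorted p-values
    p_(1) <= ... <= p_(m), find the largest k with p_(k) <= k*q/m and
    reject the k smallest p-values. Returns a boolean rejection mask
    in the original order of pvals."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index meeting the bound
        reject[order[: k + 1]] = True
    return reject

# p_(3) = 0.02 <= 3 * 0.05 / 6 = 0.025, so the three smallest are rejected
mask = benjamini_hochberg([0.001, 0.01, 0.02, 0.04, 0.3, 0.5], q=0.05)
```

Note the step-up character: 0.02 is rejected even though 0.02 > 2·q/m would fail a step-down check, because a later comparison succeeds.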

14.
Portmanteau tests are typically used to test serial independence even though, by construction, they are generally powerful only in the presence of pairwise dependence between lagged variables. In this article, we present a simple statistic defining a new serial independence test that can detect more general forms of dependence. In particular, unlike the Portmanteau tests, the resulting test is powerful even for a dependent process characterized by pairwise independence. A diagram based on p-values from the proposed test is introduced to investigate serial dependence. Finally, the effectiveness of the proposal is evaluated in a simulation study and in an application to financial data. Both show that the new test, used in synergy with existing tests, helps identify the true data-generating process. Supplementary materials for this article are available online.
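The limitation being targeted is visible in the classic Ljung-Box portmanteau statistic, which aggregates only squared lag-k autocorrelations. The sketch below also constructs a process that is serially dependent yet has all autocorrelations equal to zero (x_t = z_t * z_{t-1}), the kind of process a portmanteau test typically fails to flag; the AR(1) comparison series is illustrative.

```python
import numpy as np

def ljung_box(x, h=10):
    """Ljung-Box portmanteau statistic

        Q = n (n + 2) * sum_{k=1..h} rho_k^2 / (n - k),

    asymptotically chi-square with h df under serial independence.
    Q depends on the data only through pairwise lagged correlations."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    q = sum((np.sum(xc[k:] * xc[:-k]) / denom) ** 2 / (n - k)
            for k in range(1, h + 1))
    return n * (n + 2) * q

rng = np.random.default_rng(3)
z = rng.standard_normal(501)
pairwise = z[1:] * z[:-1]     # dependent, yet every autocorrelation is 0
ar = np.empty(500)
ar[0] = z[0]
for t in range(1, 500):
    ar[t] = 0.8 * ar[t - 1] + z[t]
q_pairwise = ljung_box(pairwise)   # stays near its chi-square(10) null
q_ar = ljung_box(ar)               # huge: pairwise dependence detected
```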

15.
It is well known that the traditional Pearson correlation in many cases fails to capture non-linear dependence structures in bivariate data. Other scalar measures capable of capturing non-linear dependence exist. A common disadvantage of such measures, however, is that they cannot distinguish between negative and positive dependence, and typically the alternative hypothesis of the accompanying test of independence is simply “dependence”. This paper discusses how a newly developed local dependence measure, the local Gaussian correlation, can be used to construct local and global tests of independence. A global measure of dependence is constructed by aggregating local Gaussian correlation on subsets of \(\mathbb{R}^{2}\), and an accompanying test of independence is proposed. Choice of bandwidth is based on likelihood cross-validation. Properties of this measure and asymptotics of the corresponding estimate are discussed. A bootstrap version of the test is implemented and tried out on both real and simulated data. The performance of the proposed test is compared to the Brownian distance covariance test. Finally, when the hypothesis of independence is rejected, local independence tests are used to investigate the cause of the rejection.
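The opening claim about Pearson correlation is easy to demonstrate. A minimal sketch in which Y is a deterministic function of X, so the two are fully dependent, yet the sample correlation is near zero.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(10_000)
y = x ** 2                    # perfectly dependent on x, but non-linearly

# Cov(X, X^2) = E[X^3] = 0 for a symmetric X, so the Pearson
# correlation is near zero despite complete dependence
r = np.corrcoef(x, y)[0, 1]
```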

16.
One of the major objections to the standard multiple-recapture approach to population estimation is the assumption of homogeneity of individual 'capture' probabilities. Modelling individual capture heterogeneity is complicated by the fact that it shows up as a restricted form of interaction among lists in the contingency table cross-classifying list memberships for all individuals. Traditional log-linear modelling approaches to capture–recapture problems are well suited to modelling interactions among lists but ignore the special dependence structure that individual heterogeneity induces. A random-effects approach, based on the Rasch model from educational testing and introduced in this context by Darroch and co-workers and Agresti, provides one way to introduce the dependence resulting from heterogeneity into the log-linear model; however, previous efforts to combine the Rasch-like heterogeneity terms additively with the usual log-linear interaction terms suggest that a more flexible approach is required. In this paper we consider both classical multilevel approaches and fully Bayesian hierarchical approaches to modelling individual heterogeneity and list interactions. Our framework encompasses both the traditional log-linear approach and various elements from the full Rasch model. We compare these approaches on two examples, the first arising from an epidemiological study of a population of diabetics in Italy, and the second a study intended to assess the 'size' of the World Wide Web. We also explore extensions allowing for interactions between the Rasch and log-linear portions of the models in both the classical and the Bayesian contexts.

17.
Among many classification methods, linear discriminant analysis (LDA) is a favoured tool because of its simplicity, robustness, and predictive accuracy, but when the number of genes is larger than the number of observations it cannot be applied directly because the within-class covariance matrix is singular. Diagonal LDA (DLDA) is a simpler model than LDA and performs better in some cases, but it relies on the strong assumption of mutual independence among the features. In this article, we propose a modified LDA (MLDA). MLDA is based on independence but exploits the information relevant to classification performance contained in the dependence structure. We suggest two approaches, one using gene rank and one not. We find that MLDA performs better than LDA, DLDA, or K-nearest neighbours, and is comparable with support vector machines, in a real data analysis and a simulation study.
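Of the methods compared, DLDA is the simplest to sketch: it replaces LDA's full within-class covariance with a pooled diagonal one, which keeps the classifier well-defined when genes outnumber samples. The simulated data and the small ridge constant below are illustrative assumptions.

```python
import numpy as np

def dlda_fit(X, y):
    """Diagonal LDA: per-class means plus one pooled *diagonal*
    within-class covariance, i.e. features are treated as mutually
    independent. Unlike full LDA, this needs no matrix inversion, so
    it works even with more features than observations."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    resid = np.concatenate([X[y == c] - means[i] for i, c in enumerate(classes)])
    var = resid.var(axis=0) + 1e-8        # pooled per-feature variance, ridged
    return classes, means, var

def dlda_predict(model, X):
    classes, means, var = model
    # Negative squared Mahalanobis distance under a diagonal covariance
    scores = -((X[:, None, :] - means[None, :, :]) ** 2 / var).sum(axis=-1)
    return classes[np.argmax(scores, axis=1)]

rng = np.random.default_rng(7)
# p = 50 features, n = 40 samples: more features than observations
X0 = rng.standard_normal((20, 50))
X1 = rng.standard_normal((20, 50)) + 1.0   # class means shifted by 1
X = np.vstack([X0, X1])
y = np.array([0] * 20 + [1] * 20)
acc = (dlda_predict(dlda_fit(X, y), X) == y).mean()
```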

18.
Control charts are commonly used to monitor the quality of a process or product characterized by a quality characteristic or a vector of quality characteristics. In many practical situations, however, quality is characterized by a function, or profile. Here we consider a linear profile and investigate violation of the independence assumption implicitly made in most control charting applications; specifically, we consider the case when profiles are not independent of each other over time. In this article, the effect of autocorrelation between profiles is investigated using the average run length (ARL) criterion. Simulation results indicate a significant impact on ARL values when autocorrelation is overlooked. In addition, three methods based on a time series approach are used to eliminate the effect of autocorrelation, and their performance is compared using the ARL criterion.
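The ARL criterion used in the comparison is straightforward to estimate by Monte Carlo: simulate the monitored statistic until it first exceeds the control limits and average the run lengths. The sketch below uses a plain AR(1) observation model with 3-sigma limits as an illustration, not the article's profile model.

```python
import numpy as np

def average_run_length(phi, n_reps=500, limit=3.0, seed=0):
    """Monte Carlo estimate of the in-control ARL of a Shewhart-type
    chart with +/- `limit` sigma control limits, when the plotted
    statistic follows an AR(1) process with coefficient phi.
    Innovations are scaled so the marginal variance is 1; phi = 0
    gives independent N(0, 1) observations."""
    rng = np.random.default_rng(seed)
    sigma_e = np.sqrt(1.0 - phi ** 2)
    lengths = np.empty(n_reps)
    for r in range(n_reps):
        x, t = 0.0, 0
        while True:
            t += 1
            x = phi * x + sigma_e * rng.standard_normal()
            if abs(x) > limit:
                lengths[r] = t
                break
    return float(lengths.mean())

arl_iid = average_run_length(phi=0.0)   # theory: about 370 for iid data
arl_ar = average_run_length(phi=0.8)    # autocorrelation shifts the ARL
```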

19.
We develop a novel nonparametric likelihood ratio test for independence between two random variables using a technique that is free of the common constraints of defining a given set of specific dependence structures. Our methodology revolves around an exact density-based empirical likelihood ratio test statistic that approximates in a distribution-free fashion the corresponding most powerful parametric likelihood ratio test. We demonstrate that the proposed test is very powerful in detecting general structures of dependence between two random variables, including nonlinear and/or random-effect dependence structures. An extensive Monte Carlo study confirms that the proposed test is superior to the classical nonparametric procedures across a variety of settings. The real-world applicability of the proposed test is illustrated using data from a study of biomarkers associated with myocardial infarction. Supplementary materials for this article are available online.

20.
The paper concerns the design and analysis of serial dilution assays to estimate the infectivity of a sample of tissue when it is assumed that the sample contains a finite number of indivisible infectious units such that a subsample will be infectious if it contains one or more of these units. The aim of the study is to estimate the number of infectious units in the original sample. The standard approach to the analysis of data from such a study is based on the assumption of independence of aliquots both at the same dilution level and at different dilution levels, so that the numbers of infectious units in the aliquots follow independent Poisson distributions. An alternative approach is based on calculation of the expected value of the total number of samples tested that are not infectious. We derive the likelihood for the data on the basis of the discrete number of infectious units, enabling calculation of the maximum likelihood estimate and likelihood-based confidence intervals. We use the exact probabilities that are obtained to compare the maximum likelihood estimate with those given by the other methods in terms of bias and standard error and to compare the coverage of the confidence intervals. We show that the methods have very similar properties and conclude that for practical use the method that is based on the Poisson assumption is to be recommended, since it can be implemented by using standard statistical software. Finally we consider the design of serial dilution assays, concluding that it is important that neither the dilution factor nor the number of samples that remain untested should be too large.
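Under the Poisson assumption the likelihood against which the paper's exact method is compared is one-dimensional in the concentration of infectious units: an aliquot of relative volume v is non-infectious with probability exp(-lambda v), and negatives at each dilution level are binomial. A grid-search sketch; the dilution volumes and counts are hypothetical, and a proper root-finder would replace the grid in practice.

```python
import numpy as np

def poisson_dilution_mle(volumes, n_tested, n_negative, grid=None):
    """MLE of the concentration lambda of infectious units under the
    Poisson assumption: an aliquot of relative volume v tests negative
    with probability exp(-lambda * v), so the negatives at each level
    are binomial. The one-dimensional log-likelihood is maximized on
    a grid."""
    v = np.asarray(volumes, dtype=float)
    n = np.asarray(n_tested, dtype=float)
    y = np.asarray(n_negative, dtype=float)
    if grid is None:
        grid = np.linspace(1e-4, 50.0, 200_000)
    p_neg = np.exp(-np.outer(grid, v))             # shape (grid, levels)
    ll = (y * np.log(p_neg) + (n - y) * np.log1p(-p_neg)).sum(axis=1)
    return grid[np.argmax(ll)]

# Hypothetical four-fold dilution series, 8 aliquots per level:
# all infectious at volume 1, half at 1/4, one of eight at 1/16
lam_hat = poisson_dilution_mle([1.0, 0.25, 0.0625], [8, 8, 8], [0, 4, 7])
```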
