Similar documents (20 results)
1.
This article discusses the spatiotemporal surveillance problem of detecting rate changes in Poisson data when the population sample size is non-homogeneous. Using Monte Carlo simulations, we investigate the performance of several likelihood-based approaches under scenarios defined by four factors: (1) population trend, (2) change magnitude, (3) change coverage, and (4) change time. We evaluate the spatiotemporal surveillance methods by their average run length at different change times. The simulation results show that no method is uniformly better than the others across all scenarios. The difference between the generalized likelihood ratio (GLR) approach and the weighted likelihood ratio (WLR) approach depends mainly on population size, not on change coverage, change magnitude, or change time. Changes associated with a small population in the affected time periods and/or spatial regions favor the WLR approach, whereas those associated with a large population favor the GLR approach, under any trend of population change.
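For readers who want to experiment, here is a minimal sketch, not the paper's exact statistics, of estimating the average run length (ARL) by Monte Carlo for a GLR-type detector of a step increase in a Poisson rate with a time-varying population size; the threshold h, the rates, and the increasing population trend are placeholder assumptions.

```python
import numpy as np

def glr_stat(x, n, lam0):
    """Poisson GLR statistic for a step increase in rate after an
    unknown change point tau; x[t] ~ Poisson(n[t] * rate)."""
    best = 0.0
    for tau in range(len(x)):
        s, m = x[tau:].sum(), n[tau:].sum()
        lam1 = max(s / m, lam0)           # restricted MLE (increase only)
        llr = s * np.log(lam1 / lam0) - m * (lam1 - lam0)
        best = max(best, llr)
    return best

def run_length(lam0, lam1, change_time, pop, h, rng, t_max=500):
    """Time until the GLR statistic first exceeds threshold h."""
    x, n = [], []
    for t in range(t_max):
        rate = lam0 if t < change_time else lam1
        n.append(pop(t))
        x.append(rng.poisson(n[-1] * rate))
        if glr_stat(np.array(x), np.array(n), lam0) > h:
            return t + 1
    return t_max

rng = np.random.default_rng(0)
pop = lambda t: 100 + 2 * t                    # assumed increasing population trend
rls = [run_length(1.0, 1.2, change_time=20, pop=pop, h=5.0, rng=rng)
       for _ in range(200)]
print("estimated ARL:", np.mean(rls))
```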

2.
A stratified study is often designed for adjusting several independent trials in modern medical research. We consider the problem of non-inferiority tests and sample size determination for a nonzero risk difference in stratified matched-pair studies, and develop likelihood ratio and Wald-type weighted statistics for testing a null hypothesis of non-zero risk difference in each stratum, on the basis of (1) the sample-based method and (2) the constrained maximum likelihood estimation (CMLE) method. Sample size formulae for the proposed statistics are derived, and several choices of weights for the Wald-type weighted statistics are considered. We evaluate the performance of the proposed tests in terms of type I error rates and empirical powers via simulation studies. Empirical results show that (1) the likelihood ratio test and the Wald-type CMLE test based on harmonic means of the stratum-specific sample sizes (the SSIZE weight; Cochran's test) behave satisfactorily, in the sense that their significance levels are much closer to the prespecified nominal level; (2) the likelihood ratio test is better than Nam's [2006. Non-inferiority of new procedure to standard procedure in stratified matched-pair design. Biometrical J. 48, 966–977] score test; (3) the sample sizes obtained using the SSIZE weight are generally smaller than with the other weighted statistics; (4) Cochran's test statistic is generally much better than the other weighted statistics with the CMLE method. A real example from a clinical laboratory study is used to illustrate the proposed methodologies.
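As a concrete illustration of the ingredients, not the paper's stratified statistics, the following sketch computes a sample-based Wald statistic for non-inferiority on the risk difference in a single matched-pair stratum; the margin delta0 and the counts are placeholder values.

```python
import numpy as np
from scipy.stats import norm

def wald_noninferiority(b, c, n, delta0):
    """Sample-based Wald test of H0: delta <= -delta0 vs H1: delta > -delta0,
    where delta = p10 - p01 is the risk difference in a matched-pair design,
    b = # pairs (new success, standard failure), c = # pairs (new failure,
    standard success), n = total number of pairs."""
    d_hat = (b - c) / n
    var_hat = (b / n + c / n - d_hat ** 2) / n      # sample-based variance
    z = (d_hat + delta0) / np.sqrt(var_hat)
    return z, 1 - norm.cdf(z)                       # one-sided p-value

z, p = wald_noninferiority(b=15, c=10, n=200, delta0=0.05)
print(f"Z = {z:.3f}, one-sided p = {p:.4f}")
```

A stratified version would combine such stratum-level quantities with one of the weight choices discussed in the abstract.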

3.
The Inverse Gaussian (IG) distribution is commonly used to model and examine right-skewed data with positive support. When applying the IG model, it is critical to have efficient goodness-of-fit tests. In this article, we propose a new test statistic for examining IG goodness-of-fit based on approximating parametric likelihood ratios. The parametric likelihood ratio methodology is well known to provide powerful likelihood ratio tests. In the nonparametric context, the classical empirical likelihood (EL) ratio method is often applied to efficiently approximate properties of parametric likelihoods, substituting empirical distribution functions for their population counterparts. The optimal parametric likelihood ratio approach, however, is based on density functions. We develop and analyze a density-based EL ratio approach to test the IG model fit. We show that the proposed test improves on the entropy-based goodness-of-fit test for the IG distribution presented by Mudholkar and Tian (2002). Theoretical support is obtained by proving consistency of the new test and an asymptotic proposition regarding the null distribution of the proposed test statistic. Monte Carlo simulations confirm the power properties of the proposed method. Real data examples demonstrate the applicability of the density-based EL ratio goodness-of-fit test for an IG assumption in practice.
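For reference, the IG(μ, λ) density and its maximum likelihood estimates, the standard starting point for any goodness-of-fit procedure under the IG model, are:

```latex
f(x;\mu,\lambda)=\sqrt{\frac{\lambda}{2\pi x^{3}}}
  \exp\!\left\{-\frac{\lambda(x-\mu)^{2}}{2\mu^{2}x}\right\},\quad x>0,
\qquad
\hat{\mu}=\bar{X},\qquad
\frac{1}{\hat{\lambda}}=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{X_i}-\frac{1}{\bar{X}}\right).
```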

4.
This paper considers five test statistics for comparing the recovery of a rapid growth-based enumeration test with that of the compendial microbiological method, using a specific nonserial dilution experiment. The finite-sample distributions of these test statistics are unknown, because they are functions of correlated count data. A simulation study is conducted to investigate the type I and type II error rates. For a balanced experimental design, the likelihood ratio test and the main-effects analysis of variance (ANOVA) test for microbiological methods attained nominal type I error rates and provided the highest power compared with a test on weighted averages and two other ANOVA tests. The likelihood ratio test is preferred because it can also be used for unbalanced designs. It is demonstrated that an increase in power can only be achieved by increasing the spiked number of organisms used in the experiment. Surprisingly, the power is not affected by the number of dilutions or the number of test samples. A real case study is provided to illustrate the theory.

5.
Likelihood ratios (LRs) are used to characterize the efficiency of diagnostic tests. In this paper, we use the classical weighted least squares (CWLS) test procedure, originally developed for testing the homogeneity of relative risks, to compare the LRs of two or more binary diagnostic tests. We compare the performance of this method with the relative diagnostic likelihood ratio (rDLR) method and the diagnostic likelihood ratio regression (DLRReg) approach in terms of size and power, and we observe that CWLS and rDLR perform identically when comparing two diagnostic tests, while the DLRReg method has higher type I error rates and higher power. We also examine the performance of the CWLS and DLRReg methods for comparing three diagnostic tests across various combinations of sample size and prevalence. On the basis of Monte Carlo simulations, we conclude that all of the tests are generally conservative and have low power, especially with small sample sizes and low prevalence.

6.
A class of predictive densities is derived by weighting the observed samples when maximizing the log-likelihood function. This approach is effective in cases such as sample surveys or designed experiments, where the observed covariate follows a different distribution than in the whole population. Under misspecification of the parametric model, the optimal choice of the weight function is shown asymptotically to be the ratio of the density of the covariate in the population to that in the observations; this coincides with pseudo-maximum likelihood estimation in sample surveys. Optimality is defined by the expected Kullback–Leibler loss, and the optimal weight is obtained via the importance sampling identity. Under correct specification of the model, however, the ordinary maximum likelihood estimate (i.e. the uniform weight) is shown to be asymptotically optimal. For moderate sample sizes the situation falls between these two extremes, and the weight function is selected by minimizing a variant of the information criterion derived as an estimate of the expected loss. The method is also applied to a weighted version of the Bayesian predictive density. Numerical examples as well as Monte Carlo simulations are shown for polynomial regression. A connection with robust parametric estimation is discussed.
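A minimal sketch of the idea under covariate shift, using weighted least squares for polynomial regression (with a Gaussian working model, weighted ML reduces to WLS); the two covariate densities, the sample size, and the true curve are placeholder assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Covariate density differs between the observed sample and the population.
x = rng.normal(0.0, 1.0, 300)            # observed covariates ~ f_obs = N(0,1)
y = 1 + 0.5 * x - 0.3 * x**2 + rng.normal(0, 0.5, x.size)

f_obs = norm.pdf(x, 0.0, 1.0)
f_pop = norm.pdf(x, 1.0, 1.0)            # assumed population density N(1,1)
w = f_pop / f_obs                        # density-ratio weights

# np.polyfit multiplies residuals by its weights, so pass sqrt(w)
# to minimize sum_i w_i * (y_i - p(x_i))^2.
coef_w  = np.polyfit(x, y, deg=2, w=np.sqrt(w))
coef_ml = np.polyfit(x, y, deg=2)        # ordinary (uniform-weight) fit
print("weighted :", coef_w)
print("ordinary :", coef_ml)
```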

7.
The problem of testing for treatment effect based on binary response data is considered, assuming that the sample size for each experimental unit and treatment combination is random. It is assumed that the sample size follows a distribution that belongs to a parametric family. The uniformly most powerful unbiased tests, which are equivalent to the likelihood ratio tests, are obtained when the probability of the sample size being zero is positive. For the situation where the sample sizes are always positive, the likelihood ratio tests are derived. These test procedures, which are unconditional on the random sample sizes, are useful even when the random sample sizes are not observed. Some examples are presented as illustration.

8.
We define the mixture likelihood approach to clustering by discussing the sampling distribution of the likelihood ratio test of the null hypothesis that we have observed a sample from a bivariate normal distribution versus the alternative that the variable follows a bivariate normal mixture with unequal means and a common within-component covariance matrix. The empirical distribution of the likelihood ratio test indicates that convergence to the chi-squared distribution with 2 df is at best very slow, that the sample size should be 5000 or more for the chi-squared result to hold, and that for correlations between 0.1 and 0.9 there is little, if any, dependence of the null distribution on the correlation. Our simulation study suggests a heuristic approximation based on the gamma distribution.
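A minimal sketch of simulating this null distribution with scikit-learn (tied covariances match the common within-component covariance matrix); the sample size and replication count are kept small here for speed, well below the 5000+ observations the study recommends.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
n, reps = 500, 100
cov = np.array([[1.0, 0.5], [0.5, 1.0]])   # bivariate normal, correlation 0.5

stats = []
for _ in range(reps):
    X = rng.multivariate_normal([0, 0], cov, size=n)
    g1 = GaussianMixture(n_components=1).fit(X)
    g2 = GaussianMixture(n_components=2, covariance_type="tied",
                         n_init=5, random_state=0).fit(X)
    # .score() is the mean log-likelihood per observation
    lrt = 2 * n * (g2.score(X) - g1.score(X))
    stats.append(max(lrt, 0.0))

print("95th percentile of simulated LRT:", np.quantile(stats, 0.95))
print("chi-square(2) 95th percentile   : 5.991")
```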

9.
We propose a new procedure for combining multiple tests in samples of right-censored observations. The new method is based on multiple constrained censored empirical likelihood, where the constraints are formulated as linear functionals of the cumulative hazard functions. We prove a version of Wilks' theorem for the multiple constrained censored empirical likelihood ratio, which provides a simple reference distribution for the test statistic of our proposed method. A useful application of the proposed method is, for example, examining the survival experience of different populations by combining different weighted log-rank tests. Real data examples are given using the log-rank and Gehan-Wilcoxon tests. In a simulation study of two-sample survival data, we compare the proposed method of combining tests to previously developed procedures. The results demonstrate that, in addition to its computational simplicity, the combined test performs comparably to, and in some situations more reliably than, previously developed procedures. Statistical software is available in the R package 'emplik'.

10.
The standard log-rank test has been extended by adopting various weight functions. Cancer vaccine and immunotherapy trials have shown a delayed onset of effect for the experimental therapy, manifested as a delayed separation of the survival curves. This work proposes new weighted log-rank tests to account for such delay. The weight function is motivated by the time-varying hazard ratio between the experimental and control therapies. We implement a numerical evaluation of the Schoenfeld approximation (NESA) for the mean of the test statistic. The NESA enables us to assess the power and to calculate the sample size for detecting such delayed treatment effects, and also for more general specifications of non-proportional hazards in a trial. We further show a connection between our proposed test and weighted Cox regression; the average hazard ratio using the same weight is then obtained as an estimand of the treatment effect. Extensive simulation studies are conducted to compare the performance of the proposed tests with the standard log-rank test and to assess their robustness to model misspecification. Our tests outperform the G^{ρ,γ} class in general and perform close to the optimal test. We demonstrate our methods on two cancer immunotherapy trials.
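For orientation, the generic weighted log-rank statistic over the ordered distinct event times t_i has the standard form below; a weight that is small early and grows later, such as the Fleming–Harrington G^{ρ,γ} weight with ρ = 0 and γ > 0, targets exactly the delayed-separation alternative discussed here.

```latex
Z_w=\frac{\sum_i w(t_i)\left(d_{1i}-d_i\,\frac{n_{1i}}{n_i}\right)}
       {\sqrt{\sum_i w(t_i)^2\,\frac{d_i\,(n_i-d_i)\,n_{1i}\,n_{0i}}{n_i^2\,(n_i-1)}}},
\qquad
w_{\rho,\gamma}(t)=\hat S(t^-)^{\rho}\,\bigl(1-\hat S(t^-)\bigr)^{\gamma},
```

where d_{1i} and n_{1i} are the events and numbers at risk in the experimental arm at t_i, d_i and n_i the totals across arms, and Ŝ the pooled Kaplan–Meier estimate.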

11.
The purpose of this study is to estimate the population size under a truncated count model that accounts for heterogeneity. The proposed estimator is based on the Conway–Maxwell–Poisson distribution. The benefit of using the Conway–Maxwell–Poisson distribution is that it includes the Bernoulli, Geometric, and Poisson distributions as special cases and, furthermore, allows for heterogeneity. Parameter estimates can be obtained by exploiting the ratios of successive frequency counts in a weighted linear regression framework. Comparisons with Turing's, the maximum likelihood Poisson, Zelterman's, and Chao's estimators reveal that our proposal can be used beneficially; furthermore, it outperforms its competitors under all heterogeneous settings. The empirical examples cover the homogeneous case and several heterogeneous cases, each with its own features, and provide interesting insights into the behavior of the estimators.
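A minimal sketch of the ratio-regression idea: under the Conway–Maxwell–Poisson model, p_{x+1}/p_x = λ/(x+1)^ν, so regressing log(f_{x+1}/f_x) on log(x+1) recovers log λ and ν. The toy frequency data and the crude precision weights (the smaller of the two counts in each ratio) are assumptions here, not the paper's exact scheme.

```python
import numpy as np

# f[x] = number of units observed exactly x times (zero-truncated data);
# toy counts for illustration
f = {1: 120, 2: 55, 3: 25, 4: 10, 5: 4}
n_obs = sum(f.values())

xs = np.array(sorted(x for x in f if x + 1 in f))
y = np.log([f[x + 1] / f[x] for x in xs])        # log ratios of successive counts
z = np.log(xs + 1.0)                             # regressor
w = np.array([min(f[x], f[x + 1]) for x in xs], dtype=float)  # crude precision weights

# Weighted least squares for y ≈ log(lam) - nu * z
W = np.diag(w)
A = np.column_stack([np.ones_like(z), -z])
beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
lam_hat, nu_hat = np.exp(beta[0]), beta[1]

# p1/p0 = lam / 1**nu = lam, so the unseen-zeros count is estimated by f1 / lam_hat
f0_hat = f[1] / lam_hat
print(f"lambda = {lam_hat:.3f}, nu = {nu_hat:.3f}, N_hat = {n_obs + f0_hat:.1f}")
```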

12.
Detecting local spatial clusters in count data is an important task in spatial epidemiology. Two broad approaches, moving-window methods and disease-mapping methods, have been suggested in the literature for finding clusters. However, the existing methods rely on somewhat arbitrarily chosen tuning parameters, and the local clustering results are sensitive to these choices. In this paper, we propose a penalized likelihood method to overcome the limitations of existing local spatial clustering approaches for count data. We start with a Poisson regression model that accommodates any type of covariates, and formulate the clustering problem as penalized likelihood estimation that finds change points of the intercepts in two-dimensional space. The cost of developing a new algorithm is minimized by modifying an existing least absolute shrinkage and selection operator (lasso) algorithm. The computational details of the modifications are shown, and the proposed method is illustrated with Seoul tuberculosis data.

13.
Building on the weighted likelihood-ratio control chart (WEWMA) proposed by Zhou et al., we present three corresponding control-chart schemes: one based on maximum likelihood estimation, one based on solving nonlinear estimating equations, and one based on estimating the correlation matrix, to address the online monitoring of processes in which the number of product defects cannot be observed exactly. Numerical simulations show that the scheme built on the estimated correlation matrix performs well, with a particularly clear advantage when correlations among inspectors do exist. A randomly generated example illustrates how to apply the correlation-matrix control chart.
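The WEWMA statistic itself is not reproduced here; as a generic reference point only, a plain one-sided EWMA chart for count data looks like the sketch below, where the smoothing constant lam and control-limit multiplier L are placeholder choices.

```python
import numpy as np

def ewma_chart(x, mu0, sigma0, lam=0.2, L=3.0):
    """Generic one-sided EWMA monitoring of counts x with in-control
    mean mu0 and standard deviation sigma0. Returns the first alarm
    time (1-based) or None, using the asymptotic control limit."""
    z = mu0
    limit = mu0 + L * sigma0 * np.sqrt(lam / (2 - lam))
    for t, xt in enumerate(x, start=1):
        z = lam * xt + (1 - lam) * z
        if z > limit:
            return t
    return None

rng = np.random.default_rng(3)
mu0 = 4.0
x = np.concatenate([rng.poisson(mu0, 50), rng.poisson(6.0, 50)])  # shift at t=51
print("alarm at t =", ewma_chart(x, mu0, np.sqrt(mu0)))
```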

14.
In event time data analysis, comparisons between distributions are made with the logrank test. When the data appear to exhibit crossing hazards, nonparametric weighted logrank statistics with various weight functions are usually suggested to increase power. However, the gain in power from imposing different weights has its limits, since differences before and after the crossing point may balance each other out. In contrast to the weighted logrank tests, we propose a score-type statistic based on the semiparametric heteroscedastic hazards regression model of Hsieh [2001. On heteroscedastic hazards regression models: theory and application. J. Roy. Statist. Soc. Ser. B 63, 63–79], in which the nonproportionality is explicitly modeled. Our score test is based on estimating functions derived from the partial likelihood under the heteroscedastic model considered herein. Simulation results show the benefit of modeling the heteroscedasticity and compare the power of the proposed test with two classes of weighted logrank tests (including Fleming–Harrington's test and Moreau's locally most powerful test), a Renyi-type test, and Breslow's test for acceleration. We also demonstrate the application of this test by analyzing actual clinical trial data.

15.
In randomized complete block designs, a monotonic relationship among treatment groups may already be established from prior information, e.g., a study with different dose levels of a drug. Page's test statistic and the Jonckheere–Terpstra statistic are two unweighted rank-based tests used to detect ordered alternatives when the assumptions of the traditional two-way analysis of variance are not satisfied. We consider a new weighted rank-based test that weights each subject, based on its sample variance, in computing the test statistic. The new weighted rank-based test is compared with the two commonly used unweighted tests with regard to power under various conditions. The weighted test is generally more powerful than the two unweighted tests when the number of treatment groups is small to moderate.
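A minimal sketch of a variance-weighted Page-type statistic, with the weight taken as the reciprocal of each subject's (block's) sample variance, which is one natural reading of the abstract but an assumption here; significance is assessed by permuting treatments within blocks.

```python
import numpy as np
from scipy.stats import rankdata

def weighted_page(data):
    """data: (n_blocks, k_treatments) array, monotone alternative over columns.
    L = sum_i w_i * sum_j j * rank_ij with w_i = 1 / s_i^2 (assumed weight)."""
    ranks = np.apply_along_axis(rankdata, 1, data)
    w = 1.0 / data.var(axis=1, ddof=1)
    j = np.arange(1, data.shape[1] + 1)
    return float(w @ (ranks @ j))

def perm_pvalue(data, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    obs = weighted_page(data)
    count = 0
    for _ in range(n_perm):
        shuffled = np.apply_along_axis(rng.permutation, 1, data)
        if weighted_page(shuffled) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(4)
dose_effect = np.array([0.0, 0.3, 0.6, 0.9])          # increasing with dose
data = rng.normal(dose_effect, 1.0, size=(15, 4))     # 15 blocks, 4 doses
print("permutation p-value:", perm_pvalue(data))
```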

16.
Popular diagnostic checking methods for linear time series models are portmanteau tests based on either the residual autocorrelation function (acf) or the partial autocorrelation function (pacf). In this paper, we devise new weighted mixed portmanteau tests by appropriately combining individual tests based on both the acf and the pacf. We derive the asymptotic distribution of the weighted mixed portmanteau statistics and study their size and power. The weighted mixed tests are found to outperform the individual tests when higher-order ARMA models are fitted and diagnostic checks are performed by testing for lack of residual autocorrelation. Simulation results suggest using the proposed tests as a complement to the classical tests found in the literature. An illustrative application demonstrates the usefulness of the mixed test.
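A minimal sketch of a mixed portmanteau check: Ljung–Box-type statistics are computed from both the residual acf and pacf, combined with a weight, and calibrated by simulation under white noise rather than by the paper's asymptotic distribution; the weight 0.5 and the lag choice are placeholders.

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

def q_stat(r, n):
    """Ljung-Box-type statistic from correlations r at lags 1..m."""
    k = np.arange(1, len(r) + 1)
    return n * (n + 2) * np.sum(r ** 2 / (n - k))

def mixed_q(x, m=10, w=0.5):
    n = len(x)
    r_acf = acf(x, nlags=m, fft=True)[1:]    # drop lag 0
    r_pacf = pacf(x, nlags=m)[1:]
    return w * q_stat(r_acf, n) + (1 - w) * q_stat(r_pacf, n)

# Null distribution by simulation under white noise (the mixture's
# asymptotic law is not a plain chi-square).
rng = np.random.default_rng(5)
n = 200
null = np.array([mixed_q(rng.standard_normal(n)) for _ in range(1000)])
crit = np.quantile(null, 0.95)

resid = rng.standard_normal(n)          # stand-in for fitted-model residuals
print("mixed Q =", mixed_q(resid), "critical value =", crit)
```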

17.
A robust procedure is developed for testing the equality of means in the two-sample normal model, based on the weighted likelihood estimators of Basu et al. (1993). When the normal model is true, the proposed tests have the same asymptotic power as the two-sample Student's t-test in the equal-variance case. However, when the normality assumptions hold only approximately, the proposed tests can be substantially more powerful than the classical tests. In a Monte Carlo study of the equal-variance case under various outlier models, the proposed test using the Hellinger-distance-based weighted likelihood estimator compared favorably with the classical test as well as with the robust test proposed by Tiku (1980).

18.
The score test statistic computed from the observed information is easy to evaluate numerically. Its large-sample distribution under the null hypothesis is well known and coincides with that of the score test based on the expected information, the likelihood-ratio test, and the Wald test. However, several authors have noted that this no longer holds under the alternative hypothesis, and in particular the score statistic from the observed information can take negative values. We extend the literature on the score test to a problem of interest in ecology when studying species occurrence: the comparison of two zero-inflated binomial random variables from two independent samples under imperfect detection. An analysis of the eigenvalues associated with the score test in this setting helps explain why using the observed information matrix in the score test can be problematic. We demonstrate, through a combination of simulations and theoretical analysis, that the power of the score test calculated under the observed information decreases as the populations being compared become more dissimilar; in particular, the score test based on the observed information is inconsistent. Finally, we propose a modified rule that rejects the null hypothesis when the score statistic computed using the observed information is negative or exceeds the usual chi-square cut-off. In our simulations this rule has power comparable to the Wald and likelihood ratio tests, and consistency is largely restored. The new test is easy to use, and inference is possible. Supplementary material for this article is available online.
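The modified rule itself is a one-liner; a sketch, with the degrees of freedom left as an argument:

```python
from scipy.stats import chi2

def modified_score_reject(s_obs, df, alpha=0.05):
    """Reject H0 if the observed-information score statistic is negative
    (impossible under the usual null asymptotics, so itself a signal of
    departure) or exceeds the usual chi-square cutoff."""
    return s_obs < 0 or s_obs > chi2.ppf(1 - alpha, df)

print(modified_score_reject(-1.3, df=1))   # True: negative statistic
print(modified_score_reject(2.5, df=1))    # False: below the 3.841 cutoff
```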

19.
In this paper, we investigate different procedures for testing the equality of two mean survival times in paired lifetime studies. We consider Owen's M-test and Q-test, a likelihood ratio test, the paired t-test, the Wilcoxon signed rank test and a permutation test based on log-transformed survival times in the comparative study. We also consider the paired t-test, the Wilcoxon signed rank test and a permutation test based on original survival times for the sake of comparison. The size and power characteristics of these tests are studied by means of Monte Carlo simulations under a frailty Weibull model. For less skewed marginal distributions, the Wilcoxon signed rank test based on original survival times is found to be desirable. Otherwise, the M-test and the likelihood ratio test are the best choices in terms of power. In general, one can choose a test procedure based on information about the correlation between the two survival times and the skewness of the marginal survival distributions.
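A minimal sketch of the permutation test on log-transformed survival times: under the null of equal mean log survival, the paired differences are symmetric about zero, so their signs can be flipped at random; the toy paired data are placeholders.

```python
import numpy as np

def signflip_pvalue(t1, t2, n_perm=5000, seed=0):
    """Paired permutation test on log survival times: randomly flip the
    sign of each paired difference under H0 of equal mean log survival."""
    rng = np.random.default_rng(seed)
    d = np.log(t1) - np.log(t2)
    obs = abs(d.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, d.size))
    perm = np.abs((signs * d).mean(axis=1))
    return (np.sum(perm >= obs) + 1) / (n_perm + 1)

rng = np.random.default_rng(6)
t2 = rng.weibull(1.5, 40) + 0.1
t1 = t2 * np.exp(rng.normal(0.2, 0.3, 40))   # correlated paired survival times
print("two-sided p-value:", signflip_pvalue(t1, t2))
```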

20.
In magazine advertisements for new drugs, it is common to see summary tables that compare the relative frequency of several side-effects for the drug and for a placebo, based on results from placebo-controlled clinical trials. This paper summarizes ways to conduct a global test of equality between the vector of population proportions for the drug and the vector of population proportions for the placebo. For multivariate normal responses, the Hotelling T²-test is a well-known method for testing equality of a vector of means for two independent samples; the tests in this paper are analogues of that test for vectors of binary responses. The likelihood ratio tests can be computationally intensive or have poor asymptotic performance. Simple quadratic forms comparing the two vectors provide alternative tests. Much better performance results from using a score-type version with a null-estimated covariance matrix than from the sample covariance matrix used in an ordinary Wald test. For either type of statistic, asymptotic inference is often inadequate, so we also present alternative, exact permutation tests. Follow-up inferences are also discussed, and our methods are applied to safety data from a phase II clinical trial.
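A minimal sketch of the score-type quadratic form with a null-estimated (pooled) covariance, calibrated by permuting group labels as the abstract recommends; the toy side-effect data and incidence rates are placeholders.

```python
import numpy as np

def score_quadratic(x, y):
    """Quadratic form (p1-p2)' V0^{-1} (p1-p2), with the covariance of the
    binary response vector estimated from the pooled (null) sample."""
    n1, n2 = len(x), len(y)
    diff = x.mean(axis=0) - y.mean(axis=0)
    pooled = np.vstack([x, y])
    v0 = np.cov(pooled, rowvar=False) * (1 / n1 + 1 / n2)
    return float(diff @ np.linalg.pinv(v0) @ diff)   # pinv guards singularity

def perm_pvalue(x, y, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    pooled = np.vstack([x, y])
    obs, n1 = score_quadratic(x, y), len(x)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if score_quadratic(pooled[idx[:n1]], pooled[idx[n1:]]) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(7)
x = (rng.random((60, 4)) < [0.10, 0.05, 0.20, 0.08]).astype(float)  # drug arm
y = (rng.random((55, 4)) < [0.06, 0.05, 0.12, 0.08]).astype(float)  # placebo arm
print("permutation p-value:", perm_pvalue(x, y))
```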
