Similar literature
 20 similar documents retrieved.
1.
This paper considers a family of penalized likelihood score tests for group variation. The tests can be indexed by a measure of degrees of freedom. At one extreme, with degrees of freedom one less than the number of groups, is the usual score test for a fixed effects alternative using indicator variables for the groups, while at the other extreme, in the limit as the degrees of freedom approach 0, is a test closely related to a score test based on a random effects alternative. Asymptotic power comparisons are made for the tests in the family. As would be expected, different members of the family are more efficient for different alternatives. Generally the tests with smaller degrees of freedom appear to have better power than the standard test for alternatives focusing on differences among the larger groups, and lower power for alternatives focusing on differences among the smaller groups. Simulations indicate that the asymptotic approximation to the distribution performs better for the tests with small degrees of freedom.

2.
In randomized complete block designs, a monotonic relationship among treatment groups may already be established from prior information, e.g., a study with different dose levels of a drug. The test statistic developed by Page and the one developed by Jonckheere and Terpstra are two unweighted rank-based tests used to detect ordered alternatives when the assumptions of the traditional two-way analysis of variance are not satisfied. We consider a new weighted rank-based test that utilizes a weight for each subject based on the sample variance in computing the new test statistic. The new weighted rank-based test is compared with the two commonly used unweighted tests with regard to power under various conditions. The weighted test is generally more powerful than the two unweighted tests when the number of treatment groups is small to moderate.
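As context for the unweighted competitors, the sketch below computes Page's L statistic and its large-sample normal approximation for a randomized complete block design, assuming the treatment columns are arranged in the hypothesized order; the weighted, variance-based statistic proposed in the abstract is not reproduced here, and the data are illustrative.

```python
# A minimal sketch of Page's (unweighted) trend test for a randomized complete
# block design, assuming columns are ordered by the hypothesized dose trend.
import numpy as np
from scipy.stats import rankdata, norm

def page_test(data):
    """data: (b blocks) x (k treatments) array, columns in hypothesized order."""
    data = np.asarray(data, dtype=float)
    b, k = data.shape
    ranks = np.apply_along_axis(rankdata, 1, data)    # rank within each block
    col_rank_sums = ranks.sum(axis=0)                 # R_1, ..., R_k
    L = np.sum(np.arange(1, k + 1) * col_rank_sums)   # Page's L statistic
    mean_L = b * k * (k + 1) ** 2 / 4
    var_L = b * k ** 2 * (k + 1) ** 2 * (k - 1) / 144
    z = (L - mean_L) / np.sqrt(var_L)                 # large-sample normal approximation
    return L, z, norm.sf(z)                           # one-sided p-value (increasing trend)

# Example: 6 blocks, 4 dose levels with a mild increasing trend (simulated)
rng = np.random.default_rng(0)
y = rng.normal(size=(6, 4)) + np.arange(4) * 0.5
print(page_test(y))
```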

3.
The paper presents a case-study of skin fibromas among male rats in the 2-year cancer bioassay of methyleugenol conducted by the US National Toxicology Program (NTP). In animal carcinogenicity experiments such as this one, tumour rates are often compared with the Cochran–Armitage (CA) trend test. The operating characteristics of the CA test, however, can be adversely affected by survival differences across groups and by the assumed dose metric. Survival-adjusted generalizations of the CA test have been proposed, but they are still sensitive to the choice of scores that are assigned to the dose groups. We present an alternative test that outperforms the survival-adjusted CA test currently used by the NTP to compare incidence rates. Simulated data from a wide range of realistic situations show that the operating characteristics of the proposed test are superior to those of the NTP's survival-adjusted CA test, especially for rare tumours and wide logarithmic spacings of the dose metric.
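For reference, the following sketch implements the ordinary (unadjusted) Cochran–Armitage trend test with user-chosen dose scores; the survival-adjusted variants and the alternative test discussed above require individual survival data and are not shown. The dose scores and counts are illustrative assumptions.

```python
# A minimal sketch of the unadjusted Cochran-Armitage (CA) trend test for tumour
# incidence, given chosen dose scores d_i for each dose group.
import numpy as np
from scipy.stats import norm

def cochran_armitage(tumours, animals, scores):
    r = np.asarray(tumours, float)   # tumour-bearing animals per dose group
    n = np.asarray(animals, float)   # animals at risk per dose group
    d = np.asarray(scores, float)    # dose scores (e.g., raw or log-spaced doses)
    N, p_bar = n.sum(), r.sum() / n.sum()
    num = np.sum(d * (r - n * p_bar))
    var = p_bar * (1 - p_bar) * (np.sum(n * d ** 2) - np.sum(n * d) ** 2 / N)
    z = num / np.sqrt(var)
    return z, norm.sf(z)             # one-sided p-value for an increasing trend

# Control, low, mid, high dose with log-spaced scores (illustrative numbers only)
print(cochran_armitage([1, 3, 6, 12], [50, 50, 50, 50], np.log10([1, 10, 30, 100])))
```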

4.
A formal semiparametric statistical inference framework is proposed for the evaluation of the age-dependent penetrance of a rare genetic mutation, using family data generated under a case-family design, where phenotype and genotype information are collected from first-degree relatives of case probands carrying the targeted mutation. The proposed approach allows for unobserved risk factors that are correlated among family members. Rigorous large sample properties are established, which show that the proposed estimators are asymptotically semiparametric efficient. A simulation study is conducted to evaluate the performance of the new approach; it shows the robustness of the proposed semiparametric approach and its advantage over the corresponding parametric approach. As an illustration, the proposed approach is applied to estimating the age-dependent cancer risk among carriers of the MSH2 or MLH1 mutation.

5.
This article considers nonparametric comparison of survival functions, one of the most commonly required tasks in survival studies. Several test procedures have been proposed for interval-censored failure time data in which the distributions of censoring intervals are identical among different treatment groups. Sometimes the distributions may depend on the treatments and thus not be the same. A class of test statistics is proposed for situations where the distributions may differ for subjects in different treatment groups. The asymptotic normality of the test statistics is established and the test procedure is evaluated by simulations, which suggest that it works well for practical situations. An illustrative example is provided.

6.
In tumorigenicity experiments, each animal begins in a tumor-free state and then either develops a tumor or dies before developing a tumor. Animals that develop a tumor either die from the tumor or from other competing causes. All surviving animals are sacrificed at the end of the experiment, normally two years. The two most commonly used statistical tests are the logrank test for comparing hazards of death from rapidly lethal tumors and the Hoel-Walburg test for comparing prevalences of nonlethal tumors. However, the data obtained from a carcinogenicity experiment generally contain a mixture of fatal and incidental tumors. Peto et al. (1980) suggested combining the fatal and incidental tests for a comparison of tumor onset distributions.

Extensive simulations show that the trend test for tumor onset using the Peto procedure has the proper size, under the simulation constraints, when each group has identical mortality patterns, and that the test with continuity correction tends to be conservative. When the animals in the dosed groups have reduced survival rates, the type I error rate is likely to exceed the nominal level. The continuity correction is recommended for a small reduction in survival time among the dosed groups to ensure the proper size. However, when there is a large reduction in survival times in the dosed groups, the onset test does not have the proper size.
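As background for the fatal-tumour component, the sketch below implements a plain two-group logrank test; the Hoel-Walburg prevalence component and the Peto combination additionally require cause-of-death (context) information and are not reproduced. The data values are invented for illustration.

```python
# A minimal sketch of the ordinary two-group logrank test (the "fatal tumour"
# component mentioned above), using the standard hypergeometric variance.
import numpy as np
from scipy.stats import chi2

def logrank(time, event, group):
    """time: death/sacrifice times; event: 1 = death, 0 = censored; group: 0/1."""
    time, event, group = map(np.asarray, (time, event, group))
    obs_minus_exp, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        obs_minus_exp += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    stat = obs_minus_exp ** 2 / var
    return stat, chi2.sf(stat, df=1)

# Illustrative data: dosed group (1) dies somewhat earlier than control (0)
t = [30, 45, 60, 70, 80, 90, 104, 104, 25, 40, 55, 66, 75, 104, 104, 104]
e = [1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0]
g = [0] * 8 + [1] * 8
print(logrank(t, e, g))
```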

7.
This paper investigates test procedures for testing the homogeneity of proportions in the analysis of clustered binary data in the context of unequal dispersions across the treatment groups. We introduce a simple test procedure based on adjusted proportions using a sandwich estimator of the variance of the proportion estimators obtained by the generalized estimating equations approach of Zeger and Liang (1986) [Biometrics 42, 121-130]. We also extend the existing test procedures for testing the hypothesis of homogeneity of proportions in this context. These test procedures are then compared, by simulations, in terms of size and power. Moreover, we derive the score test for testing the homogeneity of the dispersion parameters among several groups of clustered binary data. An illustrative application of the recommended test procedures is also presented.
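A hedged sketch of the GEE route mentioned above: fit a logistic GEE with an exchangeable working correlation and carry out a Wald test of the group contrasts using the robust (sandwich) covariance. This illustrates the general approach rather than the paper's exact adjusted-proportions statistic; the simulated data, cluster sizes, and variable names are assumptions.

```python
# Sketch: homogeneity of proportions for clustered binary data via a logistic GEE
# (exchangeable working correlation) and a sandwich-covariance Wald test.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Simulated clustered binary data for three groups (illustrative only)
rng = np.random.default_rng(1)
rows = []
for grp, p in zip("ABC", [0.3, 0.3, 0.4]):
    for c in range(20):                                  # 20 clusters per group
        u = rng.normal(scale=0.5)                        # cluster-level effect
        pr = 1 / (1 + np.exp(-(np.log(p / (1 - p)) + u)))
        for _ in range(rng.integers(2, 6)):              # 2-5 subjects per cluster
            rows.append(dict(group=grp, cluster=f"{grp}{c}", y=rng.binomial(1, pr)))
df = pd.DataFrame(rows)

model = smf.gee("y ~ C(group)", groups="cluster", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()

# Wald test that both group contrasts are zero, using the robust covariance
idx = [i for i, name in enumerate(res.model.exog_names) if "group" in name]
b = np.asarray(res.params)[idx]
V = np.asarray(res.cov_params())[np.ix_(idx, idx)]
w = float(b @ np.linalg.solve(V, b))
print(w, chi2.sf(w, df=len(idx)))
```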

8.
Generalized discriminant analysis based on distances
This paper describes a method of generalized discriminant analysis based on a dissimilarity matrix to test for differences in a priori groups of multivariate observations. Use of classical multidimensional scaling produces a low‐dimensional representation of the data for which Euclidean distances approximate the original dissimilarities. The resulting scores are then analysed using discriminant analysis, giving tests based on the canonical correlations. The asymptotic distributions of these statistics under permutations of the observations are shown to be invariant to changes in the distributions of the original variables, unlike the distributions of the multi‐response permutation test statistics which have been considered by other workers for testing differences among groups. This canonical method is applied to multivariate fish assemblage data, with Monte Carlo simulations to make power comparisons and to compare theoretical results and empirical distributions. The paper proposes classification based on distances. Error rates are estimated using cross‐validation.
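A minimal sketch of the distance-based discriminant idea: classical multidimensional scaling of the dissimilarity matrix, followed by a canonical-correlation (Pillai-type) statistic between the MDS scores and the group labels, with a permutation p-value. The number of retained axes, the choice of the Pillai trace, and the toy data are illustrative assumptions rather than the paper's exact procedure.

```python
# Sketch: classical MDS on a dissimilarity matrix, then a permutation test of a
# Pillai-type statistic built from the canonical correlations with group labels.
import numpy as np

def cmds(D, m):
    """Classical multidimensional scaling of an n x n dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                        # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:m]
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

def pillai(Z, labels):
    """Sum of squared canonical correlations between MDS scores and group dummies."""
    dummies = (labels[:, None] == np.unique(labels)[None, :]).astype(float)[:, :-1]
    Qz, _ = np.linalg.qr(Z - Z.mean(axis=0))
    Qg, _ = np.linalg.qr(dummies - dummies.mean(axis=0))
    s = np.linalg.svd(Qz.T @ Qg, compute_uv=False)     # canonical correlations
    return np.sum(s ** 2)

def permutation_test(D, labels, m=3, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    Z, labels = cmds(np.asarray(D, float), m), np.asarray(labels)
    obs = pillai(Z, labels)
    perm = [pillai(Z, rng.permutation(labels)) for _ in range(n_perm)]
    return obs, (1 + sum(p >= obs for p in perm)) / (n_perm + 1)

# Toy usage: Euclidean distances among 30 observations in three groups
X = np.random.default_rng(1).normal(size=(30, 5)); X[10:20] += 0.8
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(permutation_test(D, np.repeat([0, 1, 2], 10)))
```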

9.
The paper considers the problem of homogeneity among groups through comparison of genomic sequences. Some alternative procedures that place less emphasis on the likelihood approach, and more on alternative measures dealing with similar homogeneity problems, are considered here. Under this approach, a one-sided hypothesis test is considered and the classical ANOVA decomposition can be directly adapted to sample measures based on the Hamming distance, without necessarily going through their second moments. Some results of U-statistics theory are useful for the decomposition of the test statistic and for finding its asymptotic distribution. An application of this test to real data is shown and the p-value of the test statistic is found via bootstrap resampling.
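The sketch below illustrates the general idea: decompose the sum of squared pairwise Hamming distances into between- and within-group parts and form a pseudo-F statistic. A label-permutation p-value stands in for the paper's bootstrap, and the sequences, group sizes, and scaling of the statistic are illustrative assumptions.

```python
# Sketch: ANOVA-like decomposition of pairwise Hamming distances among sequences.
import numpy as np

def hamming_matrix(seqs):
    S = np.array([list(s) for s in seqs])
    return (S[:, None, :] != S[None, :, :]).mean(axis=2)   # proportion of mismatches

def pseudo_f(D2, labels):
    """D2: matrix of squared distances; labels: group membership."""
    n, groups = len(labels), np.unique(labels)
    ss_total = D2[np.triu_indices(n, 1)].sum() / n
    ss_within = 0.0
    for g in groups:
        sub = D2[np.ix_(labels == g, labels == g)]
        ss_within += sub[np.triu_indices(sub.shape[0], 1)].sum() / sub.shape[0]
    ss_between = ss_total - ss_within
    return (ss_between / (len(groups) - 1)) / (ss_within / (n - len(groups)))

seqs = ["ACGTACGT", "ACGTACGA", "ACGAACGT",        # group 0 (toy sequences)
        "TCGTTCGT", "TCGTTCGA", "TCGATCGT"]        # group 1
labels = np.array([0, 0, 0, 1, 1, 1])
D2 = hamming_matrix(seqs) ** 2
obs = pseudo_f(D2, labels)
rng = np.random.default_rng(0)
perm = [pseudo_f(D2, rng.permutation(labels)) for _ in range(999)]
print(obs, (1 + sum(p >= obs for p in perm)) / 1000)
```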

10.
The uniform scores test is a rank-based method that tests the homogeneity of k populations in circular data problems. The influence of ties on the uniform scores test has been emphasized by several authors in articles and books, and it has been suggested that the test should be used with caution if ties are present in the data. This paper investigates the influence of ties on the uniform scores test by computing the power of the test using average, randomization, permutation, minimum, and maximum methods to break ties. Monte Carlo simulation is performed to compute the power of the test under several scenarios, such as having 5% or 10% ties and various tie group structures in the data. The simulation study shows no significant difference among the methods under the existence of ties, but the test loses power when there are many ties or complicated tie group structures. Thus, the randomization and average methods are equally powerful for breaking ties when applying the uniform scores test. It can also be concluded that the k-sample uniform scores test can be used safely, without sacrificing power, if there are fewer than 5% ties or at most two groups of a few ties.
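For concreteness, the sketch below computes the k-sample uniform scores (Wheeler-Watson type) statistic using average ranks to handle ties, with the usual chi-square approximation on 2(k-1) degrees of freedom; the angles and group sizes are illustrative.

```python
# Sketch: k-sample uniform scores test for circular data with average-rank ties.
import numpy as np
from scipy.stats import rankdata, chi2

def uniform_scores_test(angles, labels):
    angles, labels = np.asarray(angles, float), np.asarray(labels)
    n = len(angles)
    beta = 2 * np.pi * rankdata(angles, method="average") / n   # uniform scores
    w = 0.0
    for g in np.unique(labels):
        b = beta[labels == g]
        w += (np.cos(b).sum() ** 2 + np.sin(b).sum() ** 2) / len(b)
    w *= 2
    df = 2 * (len(np.unique(labels)) - 1)
    return w, chi2.sf(w, df)

# Three samples of directions (degrees), including a few tied values
theta = [10, 20, 20, 35, 50, 95, 100, 100, 120, 140, 200, 210, 230, 230, 260]
grp = [0] * 5 + [1] * 5 + [2] * 5
print(uniform_scores_test(np.deg2rad(theta), grp))
```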

11.
We consider multiple comparison test procedures among treatment effects in a randomized block design. We propose closed testing procedures based on maximum values of some two-sample t test statistics and based on F test statistics. It is shown that the proposed procedures are more powerful than single-step procedures and the REGW (Ryan/Einot–Gabriel/Welsch)-type tests. Next, we consider the randomized block design under simple ordered restrictions of treatment effects. We propose closed testing procedures based on maximum values of two-sample one-sided t test statistics and based on Bartholomew's statistics for all pairwise comparisons of treatment effects. Although single-step multiple comparison procedures are widely used, their power is low for a large number of groups. The closed testing procedures presented in this article are more powerful than the single-step procedures. Simulation studies are performed under the null hypothesis and some alternative hypotheses; in these studies, the proposed procedures show good performance.

12.
Correlated binary data arise in many ophthalmological and otolaryngological clinical trials. Testing the homogeneity of prevalences among different groups is an important issue when conducting these trials. The equal correlation coefficients model proposed by Donner in 1989 is a popular model for handling correlated binary data. The asymptotic chi-square test works well when the sample size is large; however, it fails to maintain the type I error rate when the sample size is relatively small. In this paper, we propose several exact methods to deal with small sample scenarios. Their performances are compared with respect to type I error rate and power. The ‘M approach’ and the ‘E + M approach’ seem to outperform the others. A real-world example is given to further explain how these approaches work. Finally, the computational efficiency of the exact methods is discussed as a pressing issue for future work.

13.
A general testing procedure is proposed to multivariately test for equality of p variances among k groups. The procedure applies a multivariate analysis of variance to an appropriate measure of spread for the uncensored original observations. Three such measures of spread are compared in a simulation experiment that considered two and three variables with equal and unequal sample sizes, under the null and alternative hypotheses, for Gaussian, Student's t (8, 12, and 20 degrees of freedom), and gamma (α = 2, 4, 6, and 10) distributions. The likelihood ratio test (Box, 1949) was included in the above simulations. The results suggest that if one chooses a measure of spread appropriate for the distribution of the original observations, the proposed MANOVA-based testing procedure is robust and reasonably powerful. Using this procedure for the normal distribution, power similar to that of the likelihood ratio test was observed when the variables were uncorrelated or had little positive correlation.
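A minimal sketch of the MANOVA-on-spread idea, assuming absolute deviations from the group medians as the measure of spread and Bartlett's chi-square approximation to Wilks' lambda; the particular spread measure and the simulated data are illustrative choices, not necessarily those compared in the paper.

```python
# Sketch: multivariate Levene-type test - MANOVA (Wilks' lambda) on spread scores.
import numpy as np
from scipy.stats import chi2

def manova_spread(X, labels):
    X, labels = np.asarray(X, float), np.asarray(labels)
    groups = np.unique(labels)
    n, p, k = X.shape[0], X.shape[1], len(groups)
    # Spread scores: absolute deviations from the group medians
    Z = np.vstack([np.abs(X[labels == g] - np.median(X[labels == g], axis=0))
                   for g in groups])
    lab = np.concatenate([labels[labels == g] for g in groups])
    grand = Z.mean(axis=0)
    W = sum((Z[lab == g] - Z[lab == g].mean(0)).T @ (Z[lab == g] - Z[lab == g].mean(0))
            for g in groups)                            # within-group SSCP
    B = sum((lab == g).sum() * np.outer(Z[lab == g].mean(0) - grand,
                                        Z[lab == g].mean(0) - grand)
            for g in groups)                            # between-group SSCP
    wilks = np.linalg.det(W) / np.linalg.det(W + B)
    stat = -(n - 1 - (p + k) / 2) * np.log(wilks)       # Bartlett approximation
    return wilks, stat, chi2.sf(stat, df=p * (k - 1))

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(scale=1.0, size=(25, 3)),
               rng.normal(scale=1.6, size=(25, 3))])    # second group more dispersed
print(manova_spread(X, np.repeat([0, 1], 25)))
```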

14.
In many completely randomized design experiments, levels of subsampling may be performed on each experimental unit. In such cases the expected mean square error, E(MSE), for testing among treatment groups is composed of variance components analogous to those associated with the primary sampling unit in nested sampling. Marcuse (1949) gives a procedure to minimize the cost of obtaining the samples if a desired degree of precision in the E(MSE) is fixed. However, her method gives no consideration to the resulting power of the test for differences among the treatment groups. Our method stipulates that the power, rather than the precision, is fixed at a critical level and the total cost is minimized subject to this constraint.
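For contrast with the power-based criterion proposed here, the sketch below shows the Marcuse-style fixed-precision allocation: choose the number of subsamples per experimental unit to minimize total cost for a target variance of a treatment mean. The cost and variance component values, and the symbol names, are illustrative assumptions.

```python
# Sketch: classic two-stage fixed-precision allocation (Marcuse-style), not the
# power-constrained optimization proposed in the paper above.
import math

def fixed_precision_allocation(c_unit, c_sub, sigma2_unit, sigma2_sub, target_var):
    # Cost per unit = c_unit + m * c_sub;  Var(unit mean) = sigma2_unit + sigma2_sub / m
    m = math.sqrt(c_unit * sigma2_sub / (c_sub * sigma2_unit))   # optimal subsamples/unit
    m = max(1, round(m))
    n = math.ceil((sigma2_unit + sigma2_sub / m) / target_var)   # units per treatment
    cost = n * (c_unit + m * c_sub)
    return m, n, cost

# Illustrative costs and variance components
print(fixed_precision_allocation(c_unit=10.0, c_sub=1.0,
                                 sigma2_unit=4.0, sigma2_sub=9.0, target_var=0.5))
```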

15.
Many survey questions allow respondents to pick any number out of c possible categorical responses or “items”. These kinds of survey questions often use the terminology “choose all that apply” or “pick any”. Often of interest is determining whether the marginal response distributions of each item differ among r different groups of respondents. Agresti and Liu (1998, 1999) call this a test for multiple marginal independence (MMI). If respondents are allowed to pick only 1 out of c responses, the hypothesis test may be performed using the Pearson chi-square test of independence. However, since respondents may pick more or fewer than one response, the test's assumption that responses are made independently of each other is violated. Recently, a few MMI testing methods have been proposed. Loughin and Scherer (1998) propose a bootstrap method based on a modified version of the Pearson chi-square test statistic. Agresti and Liu (1998, 1999) propose marginal logit models, quasisymmetric loglinear models, and a few methods based on Pearson chi-square test statistics. Decady and Thomas (1999) propose a Rao-Scott adjusted chi-squared test statistic. There has not been a full investigation of these MMI testing methods. The purpose here is to evaluate the proposed methods and to propose a few new ones. Recommendations are given to guide the practitioner in choosing which MMI testing methods to use.
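As a naive baseline (not one of the proposed MMI methods), the sketch below tests each item's 2 x r marginal table with a Pearson chi-square and combines the item-wise p-values by a Bonferroni correction; this ignores the within-respondent dependence that motivates the methods above. Data and dimensions are illustrative.

```python
# Sketch: item-wise marginal chi-square tests for "pick any" data, Bonferroni-combined.
import numpy as np
from scipy.stats import chi2_contingency

def naive_mmi(picks, groups):
    """picks: n x c 0/1 matrix (respondent chose item or not); groups: length-n labels."""
    picks, groups = np.asarray(picks), np.asarray(groups)
    pvals = []
    for j in range(picks.shape[1]):
        table = np.array([[np.sum((groups == g) & (picks[:, j] == v))
                           for v in (0, 1)] for g in np.unique(groups)])
        pvals.append(chi2_contingency(table)[1])
    pvals = np.array(pvals)
    return pvals, min(1.0, pvals.min() * len(pvals))    # Bonferroni-adjusted overall p

rng = np.random.default_rng(3)
groups = np.repeat(["A", "B"], 100)
picks = (rng.random((200, 4)) < [[0.3, 0.5, 0.2, 0.6]]).astype(int)
picks[100:, 0] = (rng.random(100) < 0.45).astype(int)    # shift item 1 in group B
print(naive_mmi(picks, groups))
```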

16.
The chi-squared statistic is used to test the homogeneity of several groups in a contingency table. However, it may be inappropriate to apply the test when ordinal categories are involved. If it can be assumed that the ordinal categorical variables are realizations of underlying continuous random variables, then it is possible to study the properties of different groups in a relative sense. Assuming that the distributions of the continuous variables are in the same family and that the thresholds that define the categories are invariant across groups, we propose a procedure to test homogeneity and to address the sources of heterogeneity in different groups. An example based on a real data set is used to demonstrate the practical applicability of the suggested method.

17.
The essence of the generalised multivariate Behrens–Fisher problem (BFP) is how to test the null hypothesis of equality of mean vectors for two or more populations when their dispersion matrices differ. Solutions to the BFP usually assume variables are multivariate normal and do not handle high‐dimensional data. In ecology, species' count data are often high‐dimensional, non‐normal and heterogeneous. Also, interest lies in analysing compositional dissimilarities among whole communities in non‐Euclidean (semi‐metric or non‐metric) multivariate space. Hence, dissimilarity‐based tests by permutation (e.g., PERMANOVA, ANOSIM) are used to detect differences among groups of multivariate samples. Such tests are not robust, however, to heterogeneity of dispersions in the space of the chosen dissimilarity measure, most conspicuously for unbalanced designs. Here, we propose a modification to the PERMANOVA test statistic, coupled with either permutation or bootstrap resampling methods, as a solution to the BFP for dissimilarity‐based tests. Empirical simulations demonstrate that the type I error remains close to nominal significance levels under classical scenarios known to cause problems for the un‐modified test. Furthermore, the permutation approach is found to be more powerful than the (more conservative) bootstrap for detecting changes in community structure for real ecological datasets. The utility of the approach is shown through analysis of 809 species of benthic soft‐sediment invertebrates from 101 sites in five areas spanning 1960 km along the Norwegian continental shelf, based on the Jaccard dissimilarity measure.
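For orientation, the sketch below computes the classical, un-modified one-way PERMANOVA pseudo-F on a Jaccard dissimilarity matrix derived from presence/absence data, with a label-permutation p-value; the modified statistic for heterogeneous dispersions proposed above is not reproduced, and the community data are simulated for illustration.

```python
# Sketch: un-modified one-way PERMANOVA pseudo-F on a Jaccard dissimilarity matrix.
import numpy as np

def jaccard_matrix(Y):
    """Y: sites x species presence/absence (0/1) matrix."""
    Y = np.asarray(Y, bool)
    inter = (Y[:, None, :] & Y[None, :, :]).sum(-1)
    union = (Y[:, None, :] | Y[None, :, :]).sum(-1)
    with np.errstate(invalid="ignore"):
        return 1.0 - np.where(union > 0, inter / union, 1.0)

def permanova_f(D, labels):
    n, groups = len(labels), np.unique(labels)
    ss_total = (D ** 2)[np.triu_indices(n, 1)].sum() / n
    ss_within = 0.0
    for g in groups:
        sub = (D ** 2)[np.ix_(labels == g, labels == g)]
        ss_within += sub[np.triu_indices(sub.shape[0], 1)].sum() / sub.shape[0]
    return ((ss_total - ss_within) / (len(groups) - 1)) / (ss_within / (n - len(groups)))

rng = np.random.default_rng(4)
Y = (rng.random((30, 40)) < 0.3).astype(int)
Y[15:, :10] = (rng.random((15, 10)) < 0.6).astype(int)   # shift community structure
labels = np.repeat([0, 1], 15)
D = jaccard_matrix(Y)
obs = permanova_f(D, labels)
perm = [permanova_f(D, rng.permutation(labels)) for _ in range(999)]
print(obs, (1 + sum(p >= obs for p in perm)) / 1000)
```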

18.
Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine, machine learning, and credit scoring. The receiver operating characteristic (ROC) surface is a useful tool to assess the ability of a diagnostic test to discriminate among three ordered classes or groups. In this article, nonparametric predictive inference (NPI) for three-group ROC analysis for ordinal outcomes is presented. NPI is a frequentist statistical method that is explicitly aimed at using few modeling assumptions, enabled through the use of lower and upper probabilities to quantify uncertainty. This article also includes results on the volumes under the ROC surfaces and consideration of the choice of decision thresholds for the diagnosis. Two examples are provided to illustrate our method.
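As a point of comparison with the NPI lower and upper probabilities, the sketch below computes the ordinary empirical volume under the ROC surface (VUS) for three ordered groups, i.e. the proportion of correctly ordered triples, with a common tie correction for ordinal scores; the scores shown are illustrative.

```python
# Sketch: empirical VUS for three ordered diagnostic groups with a tie correction.
from itertools import product

def empirical_vus(x1, x2, x3):
    """x1, x2, x3: diagnostic scores for the low, middle, and high groups."""
    total = 0.0
    for a, b, c in product(x1, x2, x3):
        if a < b < c:
            total += 1.0                       # strictly correctly ordered triple
        elif a == b == c:
            total += 1.0 / 6.0                 # three-way tie
        elif (a == b and b < c) or (a < b and b == c):
            total += 0.5                       # single tie between adjacent groups
    return total / (len(x1) * len(x2) * len(x3))

# Ordinal test results (e.g., a 1-5 rating scale) for three disease stages
print(empirical_vus([1, 1, 2, 2, 3], [2, 3, 3, 4, 2], [3, 4, 5, 5, 4]))
```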

19.
Bivariate correlation coefficients (BCCs) are often calculated to gauge the relationship between two variables in medical research. In a family-type clustered design where multiple participants from the same units/families are enrolled, BCCs can be defined and estimated at various hierarchical levels (subject-level, family-level, and marginal BCC). Heterogeneity usually exists between subject groups and, as a result, subject-level BCCs may differ between subject groups. In the framework of bivariate linear mixed effects modeling, we define and estimate BCCs at various hierarchical levels in a family-type clustered design, accommodating subject group heterogeneity. Simplified and modified asymptotic confidence intervals are constructed for the BCC differences and Wald-type tests are conducted. A real-world family-type clustered study of Alzheimer's disease (AD) is analyzed to estimate and compare BCCs among well-established AD biomarkers between mutation carriers and non-carriers in asymptomatic individuals with autosomal dominant AD. Extensive simulation studies are conducted across a wide range of scenarios to evaluate the performance of the proposed estimators and the type-I error rate and power of the proposed statistical tests.
Abbreviations: BCC: bivariate correlation coefficient; BLM: bivariate linear mixed effects model; CI: confidence interval; AD: Alzheimer's disease; DIAN: the Dominantly Inherited Alzheimer Network; SA: simple asymptotic; MA: modified asymptotic.
Keywords: bivariate correlation coefficient; bivariate linear mixed effects model; parameter estimation; confidence interval; hypothesis testing; type-I error/size and power.

20.
For the linear hypothesis in a structural equation model, the properties of test statistics based on the two-stage least squares estimator (2SLSE) have been examined, since these test statistics are easily derived in the instrumental variable estimation framework. Savin (1976) has shown that inequalities exist among the test statistics for the linear hypothesis, but it is well known that there is no systematic inequality among these statistics based on 2SLSE for the linear hypothesis in a structural equation model. Morimune and Oya (1994) derived the constrained limited information maximum likelihood estimator (LIMLE) subject to general linear constraints on the coefficients of the structural equation, as well as Wald, LM, and LR test statistics for the adequacy of the linear constraints.

In this paper, we derive the inequalities among these three test statistics based on LIMLE, as well as the local power functions based on LIMLE and 2SLSE, to show that no test statistic is uniformly most powerful, and that the LR test statistic based on LIMLE is locally unbiased while the other test statistics are not. Monte Carlo simulations are used to examine the actual sizes of these test statistics, and some numerical examples of the power differences among them are given. It is found that the actual sizes of these test statistics are greater than the nominal sizes; the differences between the actual and nominal sizes are generally the greatest for the Wald test statistics and the smallest for the LM test statistics; and the power functions depend on the correlations between the endogenous explanatory variables and the error term of the structural equation, the asymptotic variance of the estimator of the coefficients of the structural equation, and the number of restrictions imposed on the coefficients.
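A minimal numerical sketch of the 2SLSE side of the comparison: two-stage least squares for a single structural equation and a Wald test of linear restrictions on its coefficients. The LIMLE-based statistics and the local power analysis are not reproduced; the matrix names and the simulated system are textbook-style assumptions.

```python
# Sketch: 2SLS estimation of a structural equation and a Wald test of R*beta = r.
import numpy as np
from scipy.stats import chi2

def two_sls_wald(y, X, Z, R, r):
    """y: n vector; X: n x k regressors (some endogenous); Z: n x l instruments, l >= k."""
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)          # projection onto the instrument space
    XPX = X.T @ Pz @ X
    beta = np.linalg.solve(XPX, X.T @ Pz @ y)       # 2SLS estimator
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)
    V = sigma2 * np.linalg.inv(XPX)                 # asymptotic covariance of beta
    diff = R @ beta - r
    wald = diff @ np.linalg.solve(R @ V @ R.T, diff)
    return beta, wald, chi2.sf(wald, df=len(r))

# Toy simultaneous system: x2 is endogenous, z1/z2 are excluded instruments
rng = np.random.default_rng(5)
n = 500
z = rng.normal(size=(n, 2))
u = rng.normal(size=n)
x2 = z @ np.array([1.0, -0.5]) + 0.7 * u + rng.normal(size=n)
y = 1.0 + 2.0 * x2 + u
X = np.column_stack([np.ones(n), x2])
Z = np.column_stack([np.ones(n), z])
R, r_vec = np.array([[0.0, 1.0]]), np.array([2.0])   # H0: coefficient on x2 equals 2
print(two_sls_wald(y, X, Z, R, r_vec))
```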

