Similar Articles (20 results)
1.
Trend tests for dose-response relationships are central problems in medicine. The likelihood ratio test is often used to test hypotheses involving a stochastic order. Stratified contingency tables are common in practice, but the distribution theory of the likelihood ratio test has not been fully developed for stratified tables and more than two stochastically ordered distributions. For c strata of m × r tables, this article introduces a model-free method for testing conditional independence against a simple stochastic order alternative and derives the asymptotic distribution of the test statistic, which is a chi-bar-squared distribution. A real data set forming an ordered stratified table is used to show the validity of the method.
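For readers unfamiliar with chi-bar-squared reference distributions, the following sketch shows how a p-value is evaluated once the test statistic and the mixing weights are available. The weights in the example are purely hypothetical; the article derives the actual weights for its stratified ordered-alternative problem.

```python
# Sketch: p-value of a chi-bar-squared test statistic.
# The mixing weights below are illustrative placeholders; the article derives
# the actual weights for the stratified m x r ordered-alternative problem.
from scipy.stats import chi2

def chi_bar_squared_pvalue(t, weights):
    """P(chi-bar^2 >= t) for the mixture sum_i w_i * chi^2_i, i = 0, 1, ..."""
    p = 0.0
    for df, w in enumerate(weights):
        if df == 0:
            p += w * (1.0 if t <= 0 else 0.0)   # chi^2_0 is a point mass at 0
        else:
            p += w * chi2.sf(t, df)
    return p

# Example with hypothetical weights (degrees of freedom 0..3) summing to 1
print(chi_bar_squared_pvalue(5.2, [0.125, 0.375, 0.375, 0.125]))
```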

2.
Tests of homogeneity of normal means against alternatives restricted by an ordering on the means are considered. The simply ordered case, μ1 ≤ μ2 ≤ ··· ≤ μk, and the simple tree ordering, μ1 ≤ μj for j = 2, 3, …, k, are emphasized. A modification of the likelihood-ratio test is proposed that is asymptotically equivalent to it but more robust to violations of the hypothesized orderings. At points satisfying the hypothesized ordering, the new test has power similar to that of the likelihood-ratio test, provided the degrees of freedom are not too small. The modified test is shown to be unbiased and consistent.
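As background for this order-restricted testing problem, here is a minimal sketch of the classical construction for the simple order: the restricted means are obtained by the pool-adjacent-violators algorithm (PAVA) and plugged into the usual E-bar-squared statistic. This is the standard likelihood-ratio-type machinery, not the robust modification proposed in the abstract, and its chi-bar-squared-type null distribution is not computed here.

```python
# Sketch: weighted PAVA for the restricted means under mu_1 <= ... <= mu_k and
# the classical E-bar-squared statistic for testing equality against the simple
# order.  Standard construction only; the article's modified test is not shown.
import numpy as np

def pava(y, w):
    """Pool-adjacent-violators: isotonic (non-decreasing) weighted fit."""
    blocks = [[float(yi), float(wi), 1] for yi, wi in zip(y, w)]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0] + 1e-12:    # adjacent violation
            m1, w1, n1 = blocks[i]
            m2, w2, n2 = blocks[i + 1]
            blocks[i] = [(w1 * m1 + w2 * m2) / (w1 + w2), w1 + w2, n1 + n2]
            del blocks[i + 1]
            i = max(i - 1, 0)                          # re-check backwards
        else:
            i += 1
    fit = []
    for m, _, n in blocks:
        fit.extend([m] * n)
    return np.array(fit)

def e_bar_squared(samples):
    """E-bar^2 statistic for H0: equal means vs mu_1 <= ... <= mu_k."""
    ybar = np.array([s.mean() for s in samples])
    n = np.array([len(s) for s in samples], dtype=float)
    grand = np.concatenate(samples).mean()
    mu_star = pava(ybar, n)                            # order-restricted means
    sst = sum(((s - grand) ** 2).sum() for s in samples)
    return float((n * (mu_star - grand) ** 2).sum() / sst)

rng = np.random.default_rng(0)
groups = [rng.normal(loc, 1.0, 20) for loc in (0.0, 0.2, 0.5)]
print(e_bar_squared(groups))
```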

3.
This paper offers a new method for testing one-sided hypotheses in discrete multivariate data models. One-sided alternatives mean that there are restrictions on the multidimensional parameter space. The focus is on models dealing with ordered categorical data; in particular, the applications concern R × C contingency tables. The method has advantages over other general approaches. All tests are exact in the sense that no large-sample theory or large-sample distribution theory is required. Testing is unconditional, although its execution is done conditionally, section by section, where a section is determined by marginal totals; this eliminates any potential nuisance-parameter issues. The power of the tests is more robust than the power of the typical linear tests often recommended. Furthermore, computer programs are available to carry out the tests efficiently regardless of the sample sizes or the order of the contingency tables. Both censored and uncensored data models are discussed.

4.
K correlated 2×2 tables with a structural zero are commonly encountered in infectious disease studies. This paper considers a hypothesis test for the risk difference in K independent 2×2 tables with a structural zero. Score, likelihood ratio, and Wald-type statistics are proposed to test the hypothesis on the basis of stratified data and of pooled data. Sample size formulae are derived for controlling a pre-specified power or a pre-determined confidence interval width. Our empirical results show that the score and likelihood ratio statistics behave better than the Wald-type statistic in terms of type I error rate and coverage probability, and that sample sizes based on the stratified test are smaller than those based on the pooled test under the same design. A real example is used to illustrate the proposed methodologies. Copyright © 2009 John Wiley & Sons, Ltd.

5.
Pearson's chi-square (Pe), likelihood ratio (LR), and Fisher–Freeman–Halton (Fi) test statistics are commonly used to test for association in an unordered r × c contingency table. Asymptotically, these test statistics follow a chi-square distribution. For small samples, the asymptotic chi-square approximations are unreliable, so the exact p-value is frequently computed conditional on the row and column sums. One drawback of the exact p-value is that it is conservative. Different adjustments have been suggested, such as Lancaster's mid-p version and randomized tests. In this paper, we consider 3×2, 2×3, and 3×3 tables and compare the exact power and significance level of the standard, mid-p, and randomized versions of these tests. The mid-p and randomized versions have approximately the same power as each other, and higher power than the standard versions. The mid-p type I error probability seldom exceeds the nominal level. For a given set of parameters, the power of Pe, LR, and Fi differs in approximately the same way across the standard, mid-p, and randomized versions. Although there is no general ranking of these tests, in some situations, especially when averaged over the parameter space, Pe and Fi have about the same power as each other and slightly higher power than LR. When the sample sizes (i.e., the row sums) are equal, the differences are small; otherwise, the observed differences can be 10% or more. In some cases, perhaps characterized by poorly balanced designs, LR has the highest power.
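The mid-p adjustment mentioned above is easiest to see in the simplest setting, a one-sided exact test for a 2 × 2 table, where the mid-p value counts only half the probability of the observed table. The sketch below uses the hypergeometric distribution; the paper itself studies 3 × 2, 2 × 3, and 3 × 3 tables and two-sided versions.

```python
# Sketch: one-sided exact and mid-p values for a 2x2 table via the
# hypergeometric distribution.  Illustrates the mid-p adjustment only.
from scipy.stats import hypergeom

def fisher_one_sided(table):
    """table = [[a, b], [c, d]]; tests for a positive association (large a)."""
    (a, b), (c, d) = table
    N = a + b + c + d          # grand total
    K = a + b                  # first row total
    n = a + c                  # first column total
    rv = hypergeom(N, K, n)    # distribution of the (1,1) cell under H0
    exact_p = rv.sf(a - 1)               # P(X >= a)
    mid_p = rv.sf(a) + 0.5 * rv.pmf(a)   # P(X > a) + 0.5 * P(X = a)
    return exact_p, mid_p

print(fisher_one_sided([[8, 2], [3, 7]]))
```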

6.
An ordered heterogeneity (OH) test is a test for trend that combines a non-directional heterogeneity test with the rank-order information specified under the alternative. We propose two modifications of the OH test procedure: (1) using the mean ranks of the groups, rather than the sample means, to determine the observed ordering of the groups; and (2) using the maximum correlation over the 2^(k−1) − 1 possible orderings under the alternative, rather than the single ordering (1, 2, …, k), where k is the number of independent groups. A simulation study indicates that these two changes increase the power of the OH test when, as is common in practice, the underlying distribution deviates from normality and the trend pattern is a priori unknown. In contrast to the original OH test, the modified OH test can detect all possible patterns under the alternative with relatively high power.

7.
The display of data by means of contingency tables is used in different approaches to statistical inference, for example, to address the test of homogeneity of independent multinomial distributions. We develop a Bayesian procedure to test simple null hypotheses against bilateral alternatives in contingency tables. Given independent samples from two binomial distributions and taking a mixed prior distribution, we calculate the posterior probability that the proportion of successes in the first population is the same as in the second. This posterior probability is compared with the p-value of the classical method, yielding a reconciliation between the classical and Bayesian results. The results are then generalized to r × s tables.
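A minimal sketch of the kind of calculation described: a point mass π0 on H0: p1 = p2 is mixed with Beta(a, b) priors, and the posterior probability of H0 follows from the Bayes factor. The prior settings (a = b = 1, π0 = 0.5) are illustrative assumptions, not necessarily those used in the article.

```python
# Sketch: posterior probability that two binomial proportions are equal under a
# mixed prior: point mass pi0 on H0 (p1 = p2 = p ~ Beta(a, b)) and independent
# Beta(a, b) priors under the alternative.  Prior settings are illustrative.
import numpy as np
from scipy.special import betaln

def posterior_prob_H0(x1, n1, x2, n2, a=1.0, b=1.0, pi0=0.5):
    # Marginal log-likelihoods (binomial coefficients cancel in the ratio).
    log_m0 = betaln(a + x1 + x2, b + n1 + n2 - x1 - x2) - betaln(a, b)
    log_m1 = (betaln(a + x1, b + n1 - x1) - betaln(a, b)
              + betaln(a + x2, b + n2 - x2) - betaln(a, b))
    log_bf01 = log_m0 - log_m1                      # Bayes factor for H0 vs H1
    odds = (pi0 / (1 - pi0)) * np.exp(log_bf01)     # posterior odds of H0
    return odds / (1 + odds)

# Example: 18/30 successes versus 12/30 successes
print(posterior_prob_H0(18, 30, 12, 30))
```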

8.
Responses of two groups, measured on the same ordinal scale, are compared through the column effect association model applied to the corresponding 2 × J contingency table. Monotonic or umbrella-shaped orderings of the model scores correspond to stochastic or umbrella orderings of the underlying response distributions, respectively. An algorithm for testing all possible hypotheses of stochastic ordering and deciding on an appropriate one is proposed.

9.
This article considers K pairs of incomplete correlated 2 × 2 tables in which the measurement of interest is the risk difference between marginal and conditional probabilities. A Wald-type statistic and a score-type statistic are presented for testing the hypothesis of homogeneity of the risk differences across strata. Power and sample size formulae based on these two statistics are derived. Figures of sample size plotted against risk difference (or marginal probability) are given. A real example is used to illustrate the proposed methods.

10.
This paper considers a connected Markov chain for sampling 3 × 3 × K contingency tables having fixed two-dimensional marginal totals. Such sampling arises in performing various tests of the hypothesis of no three-factor interactions. A Markov chain algorithm is a valuable tool for evaluating P-values, especially for sparse datasets where large-sample theory does not work well. To construct a connected Markov chain over high-dimensional contingency tables with fixed marginals, algebraic algorithms have been proposed. These algorithms involve computations in polynomial rings using Gröbner bases. However, algorithms based on Gröbner bases do not incorporate symmetry among variables and are very time-consuming when the contingency tables are large. We construct a minimal basis for a connected Markov chain over 3 × 3 × K contingency tables. The minimal basis is unique. Some numerical examples illustrate the practicality of our algorithms.
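For intuition about Markov chains over contingency tables with fixed margins, the sketch below runs the elementary ±1 swap move on a two-way table with fixed row and column sums. This is the textbook basic-move example, not the minimal basis for 3 × 3 × K tables with fixed two-dimensional marginals constructed in the paper.

```python
# Sketch: random walk over two-way contingency tables with fixed row/column
# sums using the elementary +1/-1 swap move.  Simplest Markov basis example;
# not the 3 x 3 x K minimal basis constructed in the paper.
import numpy as np

def swap_step(table, rng):
    """Pick two rows and two columns, add the pattern [[+1, -1], [-1, +1]];
    stay put if any cell would become negative.  Margins are preserved."""
    t = table.copy()
    r1, r2 = rng.choice(t.shape[0], size=2, replace=False)
    c1, c2 = rng.choice(t.shape[1], size=2, replace=False)
    t[r1, c1] += 1; t[r1, c2] -= 1
    t[r2, c1] -= 1; t[r2, c2] += 1
    return t if (t >= 0).all() else table

rng = np.random.default_rng(1)
table = np.array([[4, 2, 3],
                  [1, 5, 2],
                  [3, 0, 4]])
for _ in range(1000):
    table = swap_step(table, rng)
print(table, table.sum(axis=0), table.sum(axis=1))
```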

11.
During drug development, the calculation of the inhibitory concentration that results in a response of 50% (IC50) is performed thousands of times every day. The nonlinear model most often used to perform this calculation is a four-parameter logistic, suitably parameterized to estimate the IC50 directly. When performing these calculations in a high-throughput mode, not every curve can be studied in detail, and outliers in the responses are a common problem. A robust estimation procedure for this calculation is therefore desirable. In this paper, a rank-based estimate of the four-parameter logistic model, analogous to least squares, is proposed. The rank-based estimate is based on the Wilcoxon norm. The robust procedure is illustrated with several examples from the pharmaceutical industry. When no outliers are present in the data, the robust estimate of IC50 is comparable with the least squares estimate; when outliers are present, the robust estimate is more accurate. A robust goodness-of-fit test is also proposed. To investigate the impact of outliers on the traditional and robust estimates, a small simulation study was conducted. Copyright © 2012 John Wiley & Sons, Ltd.
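A rough sketch of a rank-based fit of the four-parameter logistic, parameterized directly in log10(IC50): Jaeckel's Wilcoxon-norm dispersion of the residuals is minimized with a general-purpose optimizer, and the overall level (to which the dispersion is insensitive) is fixed with the median residual. This is only an illustrative analogue of the procedure in the paper, not its actual algorithm or goodness-of-fit test, and the simulated data are invented.

```python
# Sketch: rank-based (Wilcoxon-norm) fit of a four-parameter logistic curve
# parameterized in log10(IC50).  Jaeckel's dispersion of the residuals is
# minimized with Nelder-Mead; the level is re-centred with the median residual
# because the dispersion is invariant to a common shift of (top, bottom).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata

def four_pl(logx, top, bottom, log_ic50, slope):
    return bottom + (top - bottom) / (1.0 + 10.0 ** (slope * (logx - log_ic50)))

def wilcoxon_dispersion(resid):
    n = len(resid)
    scores = np.sqrt(12.0) * (rankdata(resid) / (n + 1.0) - 0.5)
    return np.sum(scores * resid)

def fit_ic50(logx, y):
    def objective(theta):
        return wilcoxon_dispersion(y - four_pl(logx, *theta))
    theta0 = [y.max(), y.min(), np.median(logx), 1.0]        # crude start
    res = minimize(objective, theta0, method="Nelder-Mead",
                   options={"maxiter": 5000})
    top, bottom, log_ic50, slope = res.x
    shift = np.median(y - four_pl(logx, top, bottom, log_ic50, slope))
    return top + shift, bottom + shift, log_ic50, slope

# Simulated concentration-response data with one gross outlier
rng = np.random.default_rng(2)
logx = np.linspace(-9, -4, 12)                               # log10 molar
y = four_pl(logx, 100, 0, -6.5, 1.0) + rng.normal(0, 3, 12)
y[3] += 60                                                   # outlier
top, bottom, log_ic50, slope = fit_ic50(logx, y)
print("estimated IC50:", 10 ** log_ic50)
```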

12.
In linear models with nearly collinear columns of X, ridge and surrogate estimators are often used to mitigate collinearity. A new class of estimators is based on mixtures, either of X and a design minimal in an ordered class, or of the Fisher information and a scalar matrix. Comparisons are drawn among choices for the mixing parameter, and the estimators are found to be admissible relative to ordinary least squares. Case studies demonstrate that selected mixture designs are perturbed from the original design to a lesser extent than those of the surrogate method, while retaining reasonable efficiency characteristics.

13.
There is a wide variety of stochastic ordering problems in which K groups (typically ordered with respect to time) are observed along with a (continuous) response. The interest of the study may be in finding the change-point group, i.e. the group at which an inversion of the trend of the variable under study occurs. A change point is not merely a maximum (or a minimum) of the time-series function; a further requirement is that the trend of the time series is monotonically increasing before that point and monotonically decreasing afterwards. A suitable solution can be provided within a conditional approach, i.e. by considering a suitable nonparametric combination of dependent tests for simple stochastic ordering problems. The proposed procedure is very flexible and can be extended to trend and/or repeated-measures problems. Comparisons through simulations and examples with the well-known Mack and Wolfe test for umbrella alternatives and with Page's test for trend problems with correlated data are investigated.

14.
Two analysis-of-means-type randomization tests for testing the equality of I variances in unbalanced designs are presented. Randomization techniques for testing statistical hypotheses can be used when parametric tests are inappropriate. Suppose that I independent samples have been collected. Randomization tests are based on shuffles, or rearrangements, of the combined sample: putting each of the I samples 'in a bowl' forms the combined sample, and drawing samples 'from the bowl' forms a shuffle. Shuffles can be made with replacement (bootstrap shuffling) or without replacement (permutation shuffling). The tests presented offer two advantages: they are robust to non-normality, and they allow the user to present the results graphically via a decision chart similar to a Shewhart control chart. A Monte Carlo study verifies that the permutation versions of the tests exhibit excellent power compared with other robust tests. The Monte Carlo study also identifies circumstances under which the popular Levene's test fails.
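A minimal sketch of the permutation ("shuffle without replacement") variant for homogeneity of variances: observations are median-centred within group, the absolute deviations are pooled and shuffled, and an analysis-of-means-type statistic is recomputed on each shuffle. The decision-chart display and the bootstrap (with-replacement) variant from the paper are omitted, and the simulated groups are invented.

```python
# Sketch: a permutation version of an analysis-of-means-type test for
# homogeneity of I variances.  Observations are median-centred within group
# (Brown-Forsythe style); the statistic is the largest deviation of a group
# mean absolute deviation from the grand mean.
import numpy as np

def anom_stat(devs, sizes):
    grand = devs.mean()
    means = [chunk.mean() for chunk in np.split(devs, np.cumsum(sizes)[:-1])]
    return max(abs(m - grand) for m in means)

def permutation_variance_test(samples, n_perm=5000, seed=0):
    rng = np.random.default_rng(seed)
    devs = np.concatenate([np.abs(s - np.median(s)) for s in samples])
    sizes = [len(s) for s in samples]
    observed = anom_stat(devs, sizes)
    count = 0
    for _ in range(n_perm):
        count += anom_stat(rng.permutation(devs), sizes) >= observed
    return observed, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(3)
groups = [rng.normal(0, 1, 15), rng.normal(5, 1, 20), rng.normal(-2, 3, 12)]
print(permutation_variance_test(groups))
```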

15.
In longitudinal studies, robust sandwich variance estimators are often used and are especially useful when model assumptions are in doubt. However, the usual sandwich estimator does not allow for models with crossed random effects. The hierarchical likelihood extends the idea of the sandwich estimator to models not currently covered. Through simulation studies, we show that the new sandwich estimator is robust against heteroscedastic errors and against misspecification of the overdispersion in the y | v component.
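For reference, a sketch of the usual one-way cluster-robust sandwich variance estimator for a linear model, i.e. the estimator that the abstract says does not extend to crossed random effects; the hierarchical-likelihood extension itself is not reproduced here, and the simulated data are invented.

```python
# Sketch: the usual one-way cluster-robust sandwich variance estimator for OLS,
#   V = (X'X)^{-1} (sum_g X_g' e_g e_g' X_g) (X'X)^{-1}.
# Handles a single grouping factor only, not crossed random effects.
import numpy as np

def cluster_sandwich(X, y, cluster):
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(cluster):
        Xg, eg = X[cluster == g], resid[cluster == g]
        sg = Xg.T @ eg                      # score contribution of cluster g
        meat += np.outer(sg, sg)
    return beta, XtX_inv @ meat @ XtX_inv

rng = np.random.default_rng(4)
n_clusters, m = 30, 5
cluster = np.repeat(np.arange(n_clusters), m)
u = rng.normal(0, 1, n_clusters)[cluster]            # cluster random effect
x = rng.normal(size=n_clusters * m)
y = 1.0 + 2.0 * x + u + rng.normal(0, 1, n_clusters * m)
X = np.column_stack([np.ones_like(x), x])
beta, V = cluster_sandwich(X, y, cluster)
print(beta, np.sqrt(np.diag(V)))                     # coefficients and robust SEs
```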

16.
In assessing the area under the ROC curve for the accuracy of a diagnostic test, it is imperative to detect and locate multiple abnormalities per image. The approach described here takes this into account by adopting a statistical model that allows for correlation between the reader scores of several regions of interest (ROIs).

The ROI method of partitioning the image is adopted. The readers assign a score to each ROI in the image, and the statistical model accounts for the correlation between the scores of the ROIs of an image in estimating test accuracy. The test accuracy is given by Pr[Y > Z] + (1/2)Pr[Y = Z], where Y is an ordinal diagnostic measurement of an affected ROI and Z is the diagnostic measurement of an unaffected ROI; this way of measuring test accuracy is equivalent to the area under the ROC curve. The parameters are those of a multinomial distribution, and, based on the multinomial distribution, a Bayesian method of inference is adopted for estimating the test accuracy. (A numerical sketch of this accuracy measure is given after this abstract.)

Using a multinomial model for the test results, a Bayesian method based on the predictive distribution of future diagnostic scores is employed to find the test accuracy. By resampling from the posterior distribution of the model parameters, samples from the posterior distribution of test accuracy are also generated. Using these samples, the posterior mean, standard deviation, and credible intervals are calculated in order to estimate the area under the ROC curve. This approach is illustrated by estimating the area under the ROC curve for a study of the diagnostic accuracy of magnetic resonance angiography for diagnosis of arterial atherosclerotic stenosis. A generalization to multiple readers and/or modalities is proposed.

A Bayesian approach to estimating test accuracy is easy to perform with standard software packages and has the advantage of efficiently incorporating information from prior related imaging studies.
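A stripped-down sketch of the accuracy measure Pr[Y > Z] + (1/2)Pr[Y = Z] above, with independent Dirichlet posteriors for the ordinal score distributions of affected and unaffected ROIs. The correlation between ROIs of the same image, which the paper's multinomial model accounts for, is ignored here, and the counts are invented for illustration.

```python
# Sketch: posterior draws of Pr[Y > Z] + 0.5 * Pr[Y = Z] for ordinal scores on
# a 5-point scale, with independent Dirichlet(1,...,1) posteriors for affected
# (Y) and unaffected (Z) ROIs.  Within-image correlation is ignored; counts
# below are made up for illustration.
import numpy as np

def accuracy(p_y, p_z):
    """Pr[Y > Z] + 0.5 * Pr[Y = Z] for categorical distributions p_y, p_z."""
    acc = 0.0
    for i, py in enumerate(p_y):
        acc += py * (p_z[:i].sum() + 0.5 * p_z[i])
    return acc

counts_y = np.array([2, 5, 10, 20, 30])   # affected ROIs, scores 1..5
counts_z = np.array([25, 20, 12, 6, 2])   # unaffected ROIs, scores 1..5
rng = np.random.default_rng(5)
draws = np.array([accuracy(rng.dirichlet(1 + counts_y),
                           rng.dirichlet(1 + counts_z))
                  for _ in range(4000)])
print(draws.mean(), draws.std(), np.percentile(draws, [2.5, 97.5]))
```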

17.
Three modified tests for homogeneity of the odds ratio across a series of 2 × 2 tables are studied for clustered data. In the case of clustered data, the standard tests for homogeneity of odds ratios ignore the variance inflation caused by positive correlation among responses of subjects within the same cluster, and therefore have an inflated Type I error rate. The modified tests adjust for this variance inflation in the three existing standard tests: Breslow–Day, Tarone, and the conditional score test. The degree of clustering is measured by the intracluster correlation coefficient, ρ. A variance correction factor derived from ρ is then applied to the variance estimator in the standard tests of homogeneity of the odds ratio. The proposed tests are an application of the variance-adjustment method commonly used in correlated data analysis and are shown to maintain the nominal significance level in a simulation study. Copyright © 2004 John Wiley & Sons, Ltd.
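The variance-adjustment idea can be illustrated in its simplest form: divide an uncorrected homogeneity statistic by the design effect 1 + (m̄ − 1)ρ built from the intracluster correlation ρ and the mean cluster size m̄, and refer the result to the same chi-square reference distribution. The numbers below are placeholders, and the paper derives the correction factor more carefully for each of the three tests.

```python
# Sketch: design-effect adjustment of a homogeneity-of-odds-ratio statistic for
# clustered data.  The uncorrected statistic, rho and the mean cluster size are
# placeholders, not values from the paper.
from scipy.stats import chi2

def adjusted_homogeneity_test(stat, df, rho, mean_cluster_size):
    design_effect = 1.0 + (mean_cluster_size - 1.0) * rho   # variance inflation
    stat_adj = stat / design_effect
    return stat_adj, chi2.sf(stat_adj, df)

# e.g. an uncorrected statistic of 14.2 on 5 df, rho = 0.05, clusters of ~8
print(adjusted_homogeneity_test(14.2, 5, 0.05, 8))
```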

18.
The European Water Framework Directive states that macrophyte communities (seaweeds and seagrass) are key indicators of the ecological health of lagoons. Furthermore, the restoration of these communities, especially the Zostera meadows, is one of the main objectives of the Berre lagoon restoration plan. Consequently, a monitoring programme of the main macrophyte species still present in the lagoon was initiated in 1996. This monitoring has resulted in a sequence of 11 spatially structured annual tables of the observed densities of these species; these tables are analysed in the present study. First, we set out the principles of Beh's ordinal correspondence analysis (OCA), designed for ordered row/column categories, and compare this method with classical correspondence analysis (CA). Then, we show that OCA is straightforwardly adaptable to processing a sequence of ordered contingency tables such as ours. Both OCA and CA are then used to reveal and test the main patterns of spatio-temporal change in two macrophyte species of the Berre lagoon, Ulva and Zostera. The results obtained are compared and discussed.
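For comparison with Beh's ordinal variant, a minimal implementation of classical correspondence analysis of a single table via the SVD of the standardized residual matrix. The orthogonal-polynomial decomposition that OCA uses for ordered categories, and the handling of a sequence of tables, are not shown, and the example table is invented.

```python
# Sketch: classical correspondence analysis of one contingency table via the
# SVD of the standardized residuals.  Beh's ordinal CA replaces the singular
# vectors by orthogonal polynomials over the ordered categories (not shown).
import numpy as np

def correspondence_analysis(N):
    P = N / N.sum()                       # correspondence matrix
    r = P.sum(axis=1)                     # row masses
    c = P.sum(axis=0)                     # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = (U * sv) / np.sqrt(r)[:, None]           # principal coordinates
    col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]
    return row_coords, col_coords, sv ** 2                # coords and inertias

N = np.array([[30, 20, 10],
              [20, 25, 15],
              [10, 20, 30],
              [ 5, 15, 40]], dtype=float)
rows, cols, inertia = correspondence_analysis(N)
print(inertia / inertia.sum())            # share of total inertia per axis
```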

19.
A number of parametric and non-parametric linear trend tests for time series are evaluated in terms of test size and power, also using resampling techniques to form the empirical distribution of the test statistics under the null hypothesis of no linear trend. For resampling, both bootstrap and surrogate data are considered. Monte Carlo simulations were carried out for several types of residuals (uncorrelated and correlated, with normal and non-normal distributions) and a range of small magnitudes of the trend coefficient. In particular, for AR(1) and ARMA(1, 1) residual processes, we investigate the discrimination of strong autocorrelation from linear trend with respect to the sample size. As autocorrelation increases, the correct test size is obtained only for larger sample sizes, and only when a randomization test that accounts for autocorrelation is used. The overall results show that the type I and type II errors of the trend tests are reduced with the use of resampled data. Following the guidelines suggested by the simulation results, we were able to find a significant linear trend in the land air temperature and sea surface temperature data.
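A minimal sketch of one of the resampling schemes mentioned: the OLS slope is the trend statistic, and its null distribution is formed from phase-randomized surrogates of the detrended series, which preserve the sample spectrum (hence the autocorrelation) but carry no deterministic trend. The simulated AR(1)-plus-trend series is an assumption for illustration, not data from the paper.

```python
# Sketch: testing for a linear trend with a surrogate-data null distribution.
# Statistic: OLS slope.  Surrogates: phase-randomized versions of the detrended
# series, which keep the sample spectrum but contain no deterministic trend.
import numpy as np

def ols_slope(y):
    return np.polyfit(np.arange(len(y)), y, 1)[0]

def phase_randomize(x, rng):
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = 0.0                                  # keep the mean component real
    if len(x) % 2 == 0:
        phases[-1] = 0.0                             # keep the Nyquist component real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))

def surrogate_trend_test(y, n_surr=2000, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(len(y))
    b_obs = ols_slope(y)
    detrended = y - np.polyval(np.polyfit(t, y, 1), t)
    b_null = np.array([ols_slope(phase_randomize(detrended, rng))
                       for _ in range(n_surr)])
    p = (np.sum(np.abs(b_null) >= abs(b_obs)) + 1) / (n_surr + 1)
    return b_obs, p

# AR(1) noise plus a weak linear trend
rng = np.random.default_rng(6)
n, phi, trend = 200, 0.6, 0.01
e = np.zeros(n)
for i in range(1, n):
    e[i] = phi * e[i - 1] + rng.normal()
y = trend * np.arange(n) + e
print(surrogate_trend_test(y))
```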

20.
In genetic studies of complex diseases, multiple measures of related phenotypes are often collected. Jointly analyzing these phenotypes may improve the statistical power to detect sets of rare variants affecting multiple traits. In this work, we consider association testing between a set of rare variants and multiple phenotypes in family-based designs. We use a mixed linear model to express the correlations among the phenotypes and between related individuals. Given the many sources of correlation in this situation, deriving an appropriate test statistic is not straightforward. We derive a vector of score statistics whose joint distribution is approximated using a copula, which yields closed-form expressions for the p-values of several test statistics. A comprehensive simulation study and an application to Genetic Analysis Workshop 18 data highlight the gains of joint testing over univariate approaches, especially in the presence of pleiotropy or highly correlated phenotypes. The Canadian Journal of Statistics 47: 90–107; 2019 © 2018 Statistical Society of Canada
