Similar Articles
1.
We consider the problem of hypothesis testing of the equality of marginal survival distributions observed from paired lifetime data. Usual procedures include the paired t-test, which may perform poorly for certain types of data. We propose asymptotic tests based on gamma frailty models with Weibull conditional distributions, and investigate their theoretical properties using large sample theory. For finite samples, we conduct simulations to evaluate the powers of the associated tests. For moderate and less skewed data, the proposed tests are the most powerful among the commonly applied testing procedures. A data example is presented to illustrate the methods.

2.
A new model is proposed for the joint distribution of paired survival times generated from clinical trials and certain reliability settings. The new model can be considered an extension of the bivariate exponential models studied in the literature. Here, a more flexible bivariate Weibull model is derived, and two exact parametric tests for testing the equality of marginal survival distributions are developed.

3.
The analysis of two‐way contingency tables is common in clinical studies. In addition to summary counts and percentages, statistical tests or summary measures are often desired. If the data can be viewed as two categorical measurements on the same experimental unit (matched pair data), then a test of marginal homogeneity may be appropriate. The most common clinical example is the so-called 'shift table', whereby a quantity is tested for change between two time points. The two principal marginal homogeneity tests are the Stuart-Maxwell and Bhapkar tests. At present, SAS software does not compute either test directly (for tables with more than two categories), and a programmatic solution is required. Two examples of programmatic SAS code are found in the current literature. Although accurate in most instances, they fail to produce output for certain tables ('special cases'). After summarizing the mathematics behind the two tests, a SAS macro is presented which produces correct output for all tables. Finally, several examples are coded and presented with the resultant output. Copyright © 2013 John Wiley & Sons, Ltd.
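As a language-neutral companion to the SAS macro discussed above, the Stuart-Maxwell statistic can be sketched in a few lines of pure Python. For a k×k table the statistic is d'S⁻¹d, where d holds the row-minus-column margin differences (last category dropped) and S is their null covariance matrix; the example tables below are hypothetical.

```python
def _solve(a, b):
    """Solve the linear system a x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x


def stuart_maxwell(table):
    """Stuart-Maxwell chi-square test of marginal homogeneity.

    table: k x k list of counts. Returns (chi2, df); under the null
    hypothesis chi2 is asymptotically chi-square with k - 1 df.
    """
    k = len(table)
    row = [sum(table[i]) for i in range(k)]
    col = [sum(r[i] for r in table) for i in range(k)]
    d = [row[i] - col[i] for i in range(k - 1)]      # drop the last category
    s = [[0.0] * (k - 1) for _ in range(k - 1)]      # null covariance of d
    for i in range(k - 1):
        for j in range(k - 1):
            if i == j:
                s[i][j] = row[i] + col[i] - 2 * table[i][i]
            else:
                s[i][j] = -(table[i][j] + table[j][i])
    x = _solve(s, d)
    return sum(di * xi for di, xi in zip(d, x)), k - 1
```

For a 2×2 table the statistic reduces to McNemar's (b − c)²/(b + c), which provides a quick sanity check of the implementation.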

4.
The conditional mixture likelihood method, which uses the absolute difference of the trait values of a sib pair to estimate genetic parameters, underlies a commonly used method in linkage analysis. Here, the statistical properties of the model are examined. The marginal model with a pseudo-likelihood function based on a sample of the absolute differences of sib traits is also studied. Both approaches are compared numerically. When genotyping is much more expensive than screening a quantitative trait, it is known that extremely discordant sib pairs provide more powerful linkage tests than randomly sampled sib pairs. The Fisher information about genetic parameters contained in extremely discordant sib pairs is calculated using the marginal mixture model. Our results supplement current research showing that extremely discordant sib pairs are powerful for linkage detection by demonstrating that they also contain more information about other genetic parameters.

5.
For the analysis of square contingency tables with nominal categories, this paper proposes two kinds of models that indicate the structure of marginal inhomogeneity. One model states that the absolute values of the log odds of the row marginal probability relative to the corresponding column marginal probability are constant for every category i. The other model states that, conditional on an observation falling in one of the off-diagonal cells of the square table, the absolute values of the log odds of the conditional row marginal probability relative to the corresponding conditional column marginal probability are constant for every category i. These models are used when the marginal homogeneity model does not hold, and the values of the parameters in the models are useful for assessing the degree of departure from marginal homogeneity for data on a nominal scale. Examples are given.
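The quantity compared in the first model above, the log odds of each row marginal probability against its column counterpart, is simple to compute; a minimal Python sketch (function name and example tables are illustrative only, not from the paper):

```python
from math import log

def marginal_log_odds(table):
    """Per-category log odds of the row vs column marginal probability.

    Under the first model sketched above, |log(p_{i+}/p_{+i})| is the
    same for every category i. Returns the signed log odds per category.
    """
    n = sum(sum(row) for row in table)
    out = []
    for i in range(len(table)):
        p_row = sum(table[i]) / n                 # row marginal p_{i+}
        p_col = sum(row[i] for row in table) / n  # column marginal p_{+i}
        out.append(log(p_row / p_col))
    return out
```

A symmetric table yields all zeros (marginal homogeneity), while unequal absolute values across categories indicate that this constant-log-odds model does not hold.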

6.
In randomized clinical trials, it is often necessary to demonstrate that a new medical treatment does not substantially differ from a standard reference treatment. Formal testing of such 'equivalence hypotheses' is typically done by combining two one‐sided tests (TOST). A quite different strand of research has demonstrated that replacing nuisance parameters with a null estimate produces P‐values that are close to exact (Lloyd, 2008a) and that maximizing over the residual dependence on the nuisance parameter produces P‐values that are exact and optimal within a class (Röhmel & Mansmann, 1999; Lloyd, 2008a). The three procedures (TOST, estimation and maximization of a nuisance parameter) can each be expressed as a transformation of an approximate P‐value. In this paper, we point out that TOST‐based P‐values will generally be conservative, even if based on exact and optimal one‐sided tests. This conservatism is avoided by applying the three transforms in a certain order: estimation, followed by TOST, followed by maximization. We compare this procedure with existing alternatives through a numerical study of binary matched pairs where the two treatments are compared by the difference of response rates. The resulting tests are uniformly more powerful than the considered competitors, although the difference in power can range from very small to moderate.
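The TOST component of the procedure above is easy to sketch for the binary matched-pairs setting. The following is only an asymptotic (normal-approximation) version, not the exact or optimal procedures developed in the paper; the function name, inputs, and margin are illustrative assumptions.

```python
from statistics import NormalDist

def tost_paired_proportions(b, c, n, delta):
    """Asymptotic TOST for equivalence of two paired response rates.

    b, c: discordant-pair counts (yes/no and no/yes); n: number of pairs;
    delta: equivalence margin on the difference of marginal response rates.
    Returns the TOST P-value, the larger of the two one-sided P-values.
    """
    d = (b - c) / n                              # estimated rate difference
    var = ((b + c) - (b - c) ** 2 / n) / n ** 2  # paired-difference variance
    se = var ** 0.5
    z = NormalDist()
    p_lower = 1.0 - z.cdf((d + delta) / se)      # test of H0: diff <= -delta
    p_upper = z.cdf((d - delta) / se)            # test of H0: diff >= +delta
    return max(p_lower, p_upper)
```

Equivalence at level alpha is claimed when the returned P-value falls below alpha; as the paper notes, this max-of-two-P-values construction is generally conservative.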

7.
This paper offers a new method for testing one‐sided hypotheses in discrete multivariate data models. One‐sided alternatives mean that there are restrictions on the multidimensional parameter space. The focus is on models dealing with ordered categorical data. In particular, applications are concerned with R×C contingency tables. The method has advantages over other general approaches. All tests are exact in the sense that no large sample theory or large sample distribution theory is required. Testing is unconditional although its execution is done conditionally, section by section, where a section is determined by marginal totals. This eliminates any potential nuisance parameter issues. The power of the tests is more robust than the power of the typical linear tests often recommended. Furthermore, computer programs are available to carry out the tests efficiently regardless of the sample sizes or the order of the contingency tables. Both censored data and uncensored data models are discussed.

8.
We consider semiparametric multivariate data models based on copula representation of the common distribution function. A copula is characterized by a parameter of association and marginal distribution functions. This parameter and the marginal distributions are unknown. In this article, we study the estimator of the parameter of association in copulas with the marginal distribution functions assumed as nuisance parameters restricted by the assumption that the components are identically distributed. Results of this work could be used to construct special kinds of tests of homogeneity for random vectors having dependent components.

9.
Propensity score methods are increasingly used in medical literature to estimate treatment effect using data from observational studies. Despite many papers on propensity score analysis, few have focused on the analysis of survival data. Even within the framework of the popular proportional hazard model, the choice among marginal, stratified or adjusted models remains unclear. A Monte Carlo simulation study was used to compare the performance of several survival models to estimate both marginal and conditional treatment effects. The impact of accounting or not for pairing when analysing propensity‐score‐matched survival data was assessed. In addition, the influence of unmeasured confounders was investigated. After matching on the propensity score, both marginal and conditional treatment effects could be reliably estimated. Ignoring the paired structure of the data led to an increased test size due to an overestimated variance of the treatment effect. Among the various survival models considered, stratified models systematically showed poorer performance. Omitting a covariate in the propensity score model led to a biased estimation of treatment effect, but replacement of the unmeasured confounder by a correlated one allowed a marked decrease in this bias. Our study showed that propensity scores applied to survival data can lead to unbiased estimation of both marginal and conditional treatment effect, when marginal and adjusted Cox models are used. In all cases, it is necessary to account for pairing when analysing propensity‐score‐matched data, using a robust estimator of the variance. Copyright © 2012 John Wiley & Sons, Ltd.

10.
Multi‐country randomised clinical trials (MRCTs) are common in the medical literature, and their interpretation has been the subject of extensive recent discussion. In many MRCTs, an evaluation of treatment effect homogeneity across countries or regions is conducted. Subgroup analysis principles require a significant test of interaction in order to claim heterogeneity of treatment effect across subgroups, such as countries in an MRCT. As clinical trials are typically underpowered for tests of interaction, overly optimistic expectations of treatment effect homogeneity can lead researchers, regulators and other stakeholders to over‐interpret apparent differences between subgroups even when heterogeneity tests are non-significant. In this paper, we consider some exploratory analysis tools to address this issue. We present three measures derived using the theory of order statistics, which can be used to understand the magnitude and the nature of the variation in treatment effects that can arise merely as an artefact of chance. These measures are not intended to replace a formal test of interaction but instead provide non‐inferential visual aids, which allow comparison of the observed and expected differences between regions or other subgroups and are a useful supplement to a formal test of interaction. We discuss how our methodology differs from recently published methods addressing the same issue. A case study of our approach is presented using data from the Study of Platelet Inhibition and Patient Outcomes (PLATO), which was a large cardiovascular MRCT that has been the subject of controversy in the literature. An R package is available that implements the proposed methods. Copyright © 2014 John Wiley & Sons, Ltd.

11.
Occasionally, investigators collect auxiliary marks at the time of failure in a clinical study. Because the failure event may be censored at the end of the follow‐up period, these marked endpoints are subject to induced censoring. We propose two new families of two‐sample tests for the null hypothesis of no difference in mark‐scale distribution that allow for arbitrary associations between mark and time. One family of proposed tests is a nonparametric extension of an existing semi‐parametric linear test of the same null hypothesis, while a second family of tests is based on novel marked rank processes. Simulation studies indicate that the proposed tests have the desired size and possess adequate statistical power to reject the null hypothesis under a simple change of location in the marginal mark distribution. When the marginal mark distribution has heavy tails, the proposed rank‐based tests can be nearly twice as powerful as linear tests.

12.
Partially paired data, with incompleteness in one or both arms, are common in practice. For testing equality of means of two arms, practitioners often use only the portion of data with complete pairs and perform paired tests. Although such tests (referred to as 'naive paired tests') are legitimate, their power might be low because only partial data are utilized. The recently proposed 'P-value pooling methods', based on combining P-values from two tests, use all the data, have reasonable type-I error control and good power properties. While it is generally believed that P-value pooling methods are superior to naive paired tests in terms of power because the former use more data than the latter, no detailed power comparison has been done. This paper compares the powers of naive paired tests and P-value pooling methods analytically, and our findings are counterintuitive: the P-value pooling methods do not always outperform the naive paired tests in terms of power. Based on these results, we present guidance on how to select the best test for testing equality of means with partially paired data.
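One common P-value pooling rule is a weighted Stouffer combination of the paired-test P-value (from the complete pairs) and the two-sample-test P-value (from the unpaired remainders). The sketch below assumes one-sided P-values and square-root-of-sample-size weights; both are illustrative assumptions and not necessarily the specific rule studied in the paper.

```python
from statistics import NormalDist

def pool_pvalues(p1, n1, p2, n2):
    """Weighted Stouffer combination of two one-sided P-values.

    p1: P-value from the paired test on n1 complete pairs;
    p2: P-value from the two-sample test on n2 unpaired observations.
    Weights are proportional to the square root of each sample size.
    """
    z = NormalDist()
    z1, z2 = z.inv_cdf(1 - p1), z.inv_cdf(1 - p2)   # P-values -> Z-scores
    w1, w2 = n1 ** 0.5, n2 ** 0.5
    zc = (w1 * z1 + w2 * z2) / (w1 ** 2 + w2 ** 2) ** 0.5
    return 1 - z.cdf(zc)                            # combined P-value
```

Two moderately small one-sided P-values pool into a smaller combined value, which is the mechanism behind the power gains discussed above; the paper's point is that this gain is not guaranteed in every configuration.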

13.
Testing homogeneity is a fundamental problem in finite mixture models. It has been investigated by many researchers and most of the existing works have focused on the univariate case. In this article, the authors extend the use of the EM‐test for testing homogeneity to multivariate mixture models. They show that the EM‐test statistic asymptotically has the same distribution as a certain transformation of a single multivariate normal vector. On the basis of this result, they suggest a resampling procedure to approximate the P‐value of the EM‐test. Simulation studies show that the EM‐test has accurate type I errors and adequate power, and is more powerful and computationally efficient than the bootstrap likelihood ratio test. Two real data sets are analysed to illustrate the application of our theoretical results. The Canadian Journal of Statistics 39: 218–238; 2011 © 2011 Statistical Society of Canada

14.
In 1935, R.A. Fisher published his well-known “exact” test for 2×2 contingency tables. This test is based on the conditional distribution of a cell entry when the row and column marginal totals are held fixed. Tocher (1950) and Lehmann (1959) showed that Fisher's test, when supplemented by randomization, is uniformly most powerful among all unbiased tests (UMPU). However, since all practical tests for 2×2 tables are nonrandomized, and therefore biased, the UMPU test is not necessarily more powerful than other tests of the same or lower size. In this work, the two-sided Fisher exact test and the UMPU test are compared with six nonrandomized unconditional exact tests with respect to their power. In both the two-binomial and double-dichotomy models, the UMPU test is often less powerful than some of the unconditional tests of the same (or even lower) size. Thus, the assertion that the Tocher-Lehmann modification of Fisher's conditional test is the optimal test for 2×2 tables is unjustified.
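Fisher's conditional test itself is straightforward to compute by summing hypergeometric probabilities with both margins held fixed. A pure-Python sketch of the common two-sided version follows (the definition of "two-sided" used here, summing all tables no more probable than the observed one, is one standard convention, and the tie tolerance is an implementation choice):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

    Conditions on both margins; the two-sided P-value sums the
    probabilities of all tables as probable or less probable than
    the observed table.
    """
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2
    denom = comb(n, c1)

    def prob(x):  # hypergeometric probability that cell (1,1) equals x
        return comb(r1, x) * comb(r2, c1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)   # feasible range for cell (1,1)
    # Small relative tolerance guards against floating-point ties.
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))
```

For the table [[3, 1], [1, 3]] the feasible cell probabilities are 1/70, 16/70, 36/70, 16/70, 1/70, so the two-sided P-value is 34/70, which can be checked by hand.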

15.
For the analysis of square contingency tables with ordered categories, this paper proposes a model which indicates the structure of marginal asymmetry. The model states that, for every i, the absolute value of the log ratio of the cumulative probability that an observation falls in row category i or below and column category i+1 or above to the corresponding cumulative probability that the observation falls in column category i or below and row category i+1 or above is constant. We deal with the estimation problem for the model parameters and goodness-of-fit tests. We also discuss the relationships between the model and a measure which represents the degree of departure from marginal homogeneity. Examples are given.

16.
Testing for homogeneity in finite mixture models has been investigated by many researchers. The asymptotic null distribution of the likelihood ratio test (LRT) is very complex and difficult to use in practice. We propose a modified LRT for homogeneity in finite mixture models with a general parametric kernel distribution family. The modified LRT has a χ²-type null limiting distribution and is asymptotically most powerful under local alternatives. Simulations show that it performs better than competing tests. They also reveal that the limiting distribution, with some adjustment, can satisfactorily approximate the quantiles of the test statistic, even for moderate sample sizes.

17.
Uniformly most powerful Bayesian tests (UMPBTs) are a new class of Bayesian tests in which null hypotheses are rejected if their Bayes factor exceeds a specified threshold. The alternative hypotheses in UMPBTs are defined to maximize the probability that the null hypothesis is rejected. Here, we generalize the notion of UMPBTs by restricting the class of alternative hypotheses over which this maximization is performed, resulting in restricted most powerful Bayesian tests (RMPBTs). We then derive RMPBTs for linear models by restricting alternative hypotheses to g priors. For linear models, the rejection regions of RMPBTs coincide with those of usual frequentist F‐tests, provided that the evidence thresholds for the RMPBTs are appropriately matched to the size of the classical tests. This correspondence supplies default Bayes factors for many common tests of linear hypotheses. We illustrate the use of RMPBTs for ANOVA tests and t‐tests and compare their performance in numerical studies.

18.
This article considers K pairs of incomplete correlated 2 × 2 tables in which the measurement of interest is the risk difference between marginal and conditional probabilities. A Wald-type statistic and a score-type statistic are presented to test the homogeneity hypothesis about risk differences across strata. Power and sample size formulae based on the two statistics are derived. Plots of sample size against risk difference (or marginal probability) are given. A real example is used to illustrate the proposed methods.
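For orientation, the complete-data analogue of a Wald-type homogeneity test of risk differences across strata can be sketched as follows. This illustration assumes K independent strata of two independent binomial samples each, which is simpler than the paper's setting of incomplete correlated tables; function name and data are illustrative.

```python
def homogeneity_wald(strata):
    """Wald-type homogeneity test of risk differences across strata.

    strata: list of (x1, n1, x2, n2) tuples, one per stratum, where
    x1/n1 and x2/n2 are the two group event proportions. Returns
    (Q, df); under homogeneity Q is asymptotically chi-square, df = K - 1.
    """
    ds, ws = [], []
    for x1, n1, x2, n2 in strata:
        p1, p2 = x1 / n1, x2 / n2
        var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
        ds.append(p1 - p2)          # stratum risk difference
        ws.append(1.0 / var)        # inverse-variance weight
    d_bar = sum(w * d for w, d in zip(ws, ds)) / sum(ws)
    q = sum(w * (d - d_bar) ** 2 for w, d in zip(ws, ds))
    return q, len(strata) - 1
```

Identical strata give Q = 0, while strata with opposite-signed risk differences inflate Q; comparing Q to the chi-square quantile with K − 1 df gives the test.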

19.
In prospective or retrospective studies with matched pairs, one often wishes to control for covariates other than those used in the matching process. Large-sample procedures assuming a logistic model are available for this problem. The present paper presents some exact permutation tests which are uniformly most powerful unbiased within a large class of tests.

20.
Chronic disease processes often feature transient recurrent adverse clinical events. Treatment comparisons in clinical trials of such disorders must be based on valid and efficient methods of analysis. We discuss robust strategies for testing treatment effects with recurrent events using methods based on marginal rate functions, partially conditional rate functions, and marginal failure time models. While all three approaches lead to valid tests of the null hypothesis when robust variance estimates are used, they differ in power. Moreover, some approaches lead to estimators of treatment effect which are more easily interpreted than others. To investigate this, we derive the limiting value of estimators of treatment effect from marginal failure time models and illustrate their dependence on features of the underlying point process, as well as the censoring mechanism. Through simulation, we show that methods based on marginal failure time distributions are sensitive to treatment effects that delay the occurrence of the very first recurrences. Methods based on marginal or partially conditional rate functions perform well in situations where treatment effects persist, or in settings where the aim is to summarize long-term data on efficacy.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号