Similar articles
1.
While analyzing 2 × 2 contingency tables, the log odds ratio for measuring the strength of association is often approximated by a normal distribution with some variance. We show that the expression for that variance must be modified in the presence of correlation between the two binomial distributions of the contingency table. In the present paper, we derive a correlation-adjusted variance of the limiting normal distribution of the log odds ratio. We also propose a correlation-adjusted test based on the standard odds ratio for analyzing matched-pair studies and any other study settings that induce correlated binary outcomes. We demonstrate that our proposed test outperforms the classical McNemar's test. Simulation studies show that the gains in power are especially manifest when the sample size is small and strong correlation is present. Two real data sets are used to demonstrate that the proposed method may lead to conclusions significantly different from those reached using McNemar's test.
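The classical quantities this abstract modifies can be sketched as follows. The counts are hypothetical, and the correlation-adjustment term that is the paper's actual contribution is not reproduced here; the sketch shows only the McNemar statistic and the independence-based (Woolf) variance of the log odds ratio that the paper adjusts:

```python
import math

def mcnemar_statistic(b, c):
    """Classical McNemar chi-square statistic computed from the
    discordant-pair counts b and c of a matched-pair 2x2 table."""
    return (b - c) ** 2 / (b + c)

def log_or_var_independent(a, b, c, d):
    """Woolf's large-sample variance of the sample log odds ratio,
    1/a + 1/b + 1/c + 1/d, valid when the two binomial samples are
    independent.  The paper's contribution is an extra correlation
    term that this naive expression omits."""
    return 1 / a + 1 / b + 1 / c + 1 / d

# Hypothetical matched-pair counts (illustrative only)
a, b, c, d = 30, 25, 10, 35
stat = mcnemar_statistic(b, c)          # uses only the discordant pairs
log_or = math.log(a * d / (b * c))      # sample log odds ratio
var_naive = log_or_var_independent(a, b, c, d)
```

Note that the McNemar statistic ignores the concordant counts entirely, whereas the odds-ratio-based test uses all four cells, which is where the correlation adjustment matters.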

2.
In many case-control studies the risk factors are categorized in order to clarify the analysis and presentation of the data. However, inconsistent categorization of continuous risk factors may make interpretation difficult. This paper evaluates the effect of the categorization procedure on the odds ratio and several measures of association. Often the risk factor is dichotomized and the data linking the risk factor and the disease are presented in a 2 × 2 table. We show that the odds ratio obtained from the 2 × 2 table is usually considerably larger than the comparable statistic that would have been obtained had a large number of cutpoints been used. Also, if 2 × 2, 2 × 3, or 2 × 4 tables are obtained by using a few cutpoints on the risk factor, the measures of association for these tables are usually greater than the measure that would have been obtained had a large number of cutpoints been used. We propose an odds ratio measure that more closely approximates the odds ratio between the continuous risk factor and disease. A corresponding measure of association is also proposed for 2 × 2, 2 × 3, and 2 × 4 tables.

3.
Odds ratios are frequently used to describe the relationship between a binary treatment or exposure and a binary outcome. An odds ratio can be interpreted as a causal effect or a measure of association, depending on whether it involves potential outcomes or the actual outcome. An odds ratio can also be characterized as marginal versus conditional, depending on whether it involves conditioning on covariates. This article proposes a method for estimating a marginal causal odds ratio subject to confounding. The proposed method is based on a logistic regression model relating the outcome to the treatment indicator and potential confounders. Simulation results show that the proposed method performs reasonably well in moderate-sized samples and may even offer an efficiency gain over the direct method based on the sample odds ratio in the absence of confounding. The method is illustrated with a real example concerning coronary heart disease.

4.
The odds ratio is a measure commonly used for expressing the association between an exposure and a binary outcome. A feature of the odds ratio is that its value depends on the choice of the distribution over which the probabilities in the odds ratio are evaluated. In particular, this means that an odds ratio conditional on a covariate may have a different value from an odds ratio marginal on the covariate, even if the covariate is not associated with the exposure (i.e., is not a confounder). We define the individual odds ratio (IOR) and the population odds ratio (POR) as the ratio of the odds of the outcome for a unit increase in the exposure, respectively, for an individual in the population and for the whole population, in which case the odds are averaged across the population. The attenuation of the conditional odds ratio, the marginal odds ratio, and the POR relative to the IOR is demonstrated in a realistic simulation exercise. The degree of attenuation differs between the whole population and a case–control sample, and the property of invariance to outcome-dependent sampling holds only for the IOR. The relevance of the non-collapsibility of odds ratios in a range of methodological areas is discussed.
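Non-collapsibility is easy to verify numerically. The sketch below uses hypothetical probabilities chosen so that the conditional odds ratio equals 9 in both strata of a covariate Z that is independent of the exposure X; the marginal odds ratio is nonetheless smaller:

```python
def odds(p):
    return p / (1 - p)

# Hypothetical outcome probabilities P(Y=1 | X=x, Z=z), chosen so that
# the conditional odds ratio for X equals 9 in BOTH strata of Z:
p = {(0, 0): 0.1, (1, 0): 0.5,   # Z = 0: OR = (0.5/0.5)/(0.1/0.9) = 9
     (0, 1): 0.5, (1, 1): 0.9}   # Z = 1: OR = (0.9/0.1)/(0.5/0.5) = 9

pz = 0.5  # Z is independent of the exposure X, so Z is not a confounder

# Average over Z to get marginal probabilities, then form the marginal OR
p1 = (1 - pz) * p[(1, 0)] + pz * p[(1, 1)]   # P(Y=1 | X=1) = 0.7
p0 = (1 - pz) * p[(0, 0)] + pz * p[(0, 1)]   # P(Y=1 | X=0) = 0.3
marginal_or = odds(p1) / odds(p0)            # 49/9, attenuated from 9
```

Because averaging probabilities does not commute with taking odds, the marginal odds ratio is pulled toward 1 even without confounding, which is exactly the attenuation the abstract describes.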

5.
A challenge for implementing performance-based Bayesian sample size determination is selecting which of several methods to use. We compare three Bayesian sample size criteria: the average coverage criterion (ACC), which controls the coverage rate of fixed-length credible intervals over the predictive distribution of the data; the average length criterion (ALC), which controls the length of credible intervals with a fixed coverage rate; and the worst outcome criterion (WOC), which ensures the desired coverage rate and interval length over all (or a subset of) possible datasets. For most models, the WOC produces the largest sample size among the three criteria, and the sample sizes obtained by the ACC and the ALC are not the same. For Bayesian sample size determination for normal means and differences between normal means, we investigate, for the first time, the direction and magnitude of the differences between the ACC and ALC sample sizes. For fixed hyperparameter values, we show that the difference between the ACC and ALC sample sizes depends on the nominal coverage, and not on the nominal interval length. There exists a threshold value of the nominal coverage level such that below the threshold the ALC sample size is larger than the ACC sample size, and above the threshold the ACC sample size is larger. Furthermore, the ACC sample size is more sensitive to changes in the nominal coverage. We also show that for fixed hyperparameter values, there exists an asymptotically constant ratio between the WOC sample size and the ALC (ACC) sample size. Simulation studies show that similar relationships among the ACC, ALC, and WOC may hold for estimating binomial proportions. We provide a heuristic argument that the results can be generalized to a larger class of models.
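The mechanics of a length-based criterion can be sketched in the simplest conjugate normal setting. This is a deliberately degenerate case: with known sampling variance the posterior interval length is the same for every dataset, so ACC, ALC and WOC all coincide; the criteria separate only when the length is data-dependent, which is the setting the paper analyses. All numbers are illustrative:

```python
import math

Z975 = 1.959964  # standard normal 0.975 quantile (hard-coded, 95% coverage)

def alc_n_known_var(sigma2, tau2, z, max_len):
    """Smallest n whose fixed-coverage posterior credible interval has
    length at most max_len, for a normal mean with KNOWN sampling
    variance sigma2 and a conjugate normal prior with variance tau2.
    Posterior variance is 1/(n/sigma2 + 1/tau2), so the interval length
    2*z*sqrt(post_var) does not depend on the data, and ACC = ALC = WOC
    in this degenerate case."""
    required_precision = (2 * z / max_len) ** 2
    return max(0, math.ceil(sigma2 * (required_precision - 1 / tau2)))

n = alc_n_known_var(sigma2=4.0, tau2=100.0, z=Z975, max_len=0.5)  # 246
```

Halving the required length roughly quadruples n, while the prior precision 1/tau2 enters only as a small subtraction.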

6.
We propose three statistical frameworks for determining the cutoff points of metabolic syndrome (MetS) criteria, consisting of six components that are the same as in widely used MetS definitions, e.g., the 2004 updated NCEP-ATPIII criteria. Several international organizations have proposed MetS definitions; no literature indicates that any of these definitions is based on a statistical framework. In all three frameworks, the cutoff points are set to maximize the observed prevalence rate of stroke and DM. The three frameworks differ in their assumptions on the joint distribution of the six components. Using cohort data from a regional hospital in Taiwan, we illustrate applications of the three frameworks and compare them with the updated NCEP-ATPIII definition and the 2009 consensus definition of the IDF and AHA/NHLBI. The performance measure is the odds ratio: the odds of getting stroke or DM among subjects with MetS divided by the analogous odds for subjects without MetS. Our numerical results show that the odds ratios of the three frameworks are higher than those of the updated-NCEP and consensus definitions, suggesting that the proposed frameworks provide a better association of MetS with stroke and DM.

7.
Genomewide association studies have become the primary tool for discovering the genetic basis of complex human diseases. Such studies are susceptible to the confounding effects of population stratification, in that the combination of allele-frequency heterogeneity with disease-risk heterogeneity among different ancestral subpopulations can induce spurious associations between genetic variants and disease. This article provides a statistically rigorous and computationally feasible solution to this challenging problem of unmeasured confounders. We show that the odds ratio of disease with a genetic variant is identifiable if and only if the genotype is independent of the unknown population substructure conditional on a set of observed ancestry-informative markers in the disease-free population. Under this condition, the odds ratio of interest can be estimated by fitting a semiparametric logistic regression model with an arbitrary function of a propensity score relating the genotype probability to ancestry-informative markers. Approximating the unknown function of the propensity score by B-splines, we derive a consistent and asymptotically normal estimator for the odds ratio of interest with a consistent variance estimator. Simulation studies demonstrate that the proposed inference procedures perform well in realistic settings. An application to the well-known Wellcome Trust Case-Control Study is presented. Supplemental materials are available online.

8.
To increase the efficiency of comparisons between treatments in clinical trials, we may consider a multiple matching design, in which, for each patient receiving the experimental treatment, we match more than one patient receiving the standard treatment. To assess the efficacy of the experimental treatment, the risk ratio (RR) of patient responses between the two treatments is one of the most commonly used measures. Because the probability of patient response in clinical trials is often not small, the odds ratio (OR), whose practical interpretation is less readily understood, cannot approximate the RR well. Thus, the existing sample size formulae in terms of the OR for case-control studies with multiple matched controls per case are of limited use here. In this paper, we develop three sample size formulae based on the RR for randomized trials with multiple matching. We propose a test statistic for testing the equality of RRs under multiple matching. On the basis of Monte Carlo simulation, we evaluate the performance of the proposed test statistic with respect to Type I error. To evaluate the accuracy and usefulness of the three sample size formulae, we further calculate their simulated powers and compare them with those of a sample size formula ignoring matching and a sample size formula based on the OR for multiple matching published elsewhere. Finally, we include an example employing the multiple matching design, on the use of supplemental ascorbate in the supportive treatment of terminal cancer patients, to illustrate the use of these formulae.
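For orientation, the standard Wald-type sample size for a risk ratio in two independent groups can be sketched as below. This is background only, not one of the paper's three formulae: the paper's formulae additionally account for the correlation induced by multiple matching, which this independent-groups sketch ignores, and the response rates here are illustrative:

```python
import math

def n_per_group_log_rr(p1, p2, z_alpha=1.959964, z_power=0.841621):
    """Per-group sample size for detecting risk ratio RR = p1/p2 with a
    Wald test on log(RR), assuming two INDEPENDENT groups (defaults:
    two-sided alpha = 0.05, power = 0.80, normal quantiles hard-coded)."""
    var_log_rr = (1 - p1) / p1 + (1 - p2) / p2  # per-subject variance of log RR
    return math.ceil((z_alpha + z_power) ** 2 * var_log_rr
                     / math.log(p1 / p2) ** 2)

n = n_per_group_log_rr(0.6, 0.4)  # response rates 60% vs 40%
```

Because matching induces positive correlation between matched responses, a matched-design formula would typically need fewer subjects than this independent-groups benchmark for the same power.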

9.
Based on the large-sample normal distribution of the sample log odds ratio and its asymptotic variance from maximum likelihood logistic regression, shortest 95% confidence intervals for the odds ratio are developed. Although the usual confidence interval on the odds ratio is unbiased, the shortest interval is not. That is, while covering the true odds ratio with the stated probability, the shortest interval covers some values below the true odds ratio with higher probability. The upper and lower limits of the shortest interval are shifted to the left of those of the usual interval, with greater shifts in the upper limits. For a log odds model with a binary covariate X, simulation studies showed that the approximate average percent difference in length is 7.4% for a sample size of n = 100, and 3.8% for n = 200. Precise estimates of the coverage probabilities of the two types of intervals were obtained from simulation studies and are compared graphically. For odds ratio estimates greater (less) than one, shortest intervals are more (less) likely to include one than are the usual intervals. The usual intervals are likelihood-based and the shortest intervals are not. The usual intervals have minimum expected length among the class of unbiased intervals. Shortest intervals do not provide important advantages over the usual intervals, which we recommend for practical use.
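The "usual" interval that the abstract ends up recommending is the Wald interval, symmetric on the log scale; a minimal sketch with hypothetical counts:

```python
import math

def usual_or_ci(a, b, c, d, z=1.959964):
    """The 'usual' 95% confidence interval for the odds ratio of a 2x2
    table: symmetric on the log scale, using the large-sample variance
    1/a + 1/b + 1/c + 1/d of the sample log odds ratio.  The shortest
    interval studied in the abstract instead minimises length on the
    odds-ratio scale, shifting both limits to the left; it is not
    reproduced here."""
    log_or = math.log(a * d / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)

lo, hi = usual_or_ci(20, 10, 15, 30)  # sample OR = 4.0
```

Symmetry on the log scale means the point estimate is the geometric midpoint of the interval, so lo * hi equals the squared sample odds ratio.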

10.
We obtain the residual information criterion (RIC), a selection criterion based on the residual log-likelihood, for regression models including classical regression models, Box–Cox transformation models, weighted regression models and regression models with autoregressive moving average errors. We show that RIC is a consistent criterion, and simulation studies for each of the four models indicate that RIC provides better model order choices than the Akaike information criterion, the corrected Akaike information criterion, the final prediction error, Cp and the adjusted R², except when the sample size is small and the signal-to-noise ratio is weak; in this case, none of the criteria performs well. Monte Carlo results also show that RIC is superior to the consistent Bayesian information criterion (BIC) when the signal-to-noise ratio is not weak, and is comparable with BIC when the signal-to-noise ratio is weak and the sample size is large.

11.
In this paper we derive two likelihood-based procedures for the construction of confidence limits for the common odds ratio in K 2 × 2 contingency tables. We then conduct a simulation study to compare these procedures with a procedure recently proposed by Sato (Biometrics 46 (1990) 71–79), based on the asymptotic distribution of the Mantel-Haenszel estimate of the common odds ratio. We consider the situation in which the number of strata remains fixed (finite), but the sample sizes within each stratum are large. Bartlett's score procedure based on the conditional likelihood is found to be almost identical, in terms of coverage probabilities and average coverage lengths, to the procedure recommended by Sato, although the score procedure has a slight edge in some instances in terms of average coverage lengths. Thus, for the ‘fixed strata and large sample’ situation, Bartlett's score procedure can be considered an alternative to Sato's procedure based on the asymptotic distribution of the Mantel-Haenszel estimator of the common odds ratio.
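The Mantel-Haenszel estimator underlying Sato's procedure has a simple closed form; a sketch with hypothetical strata (the confidence-limit procedures themselves are not reproduced):

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel estimate of the common odds ratio across K 2x2
    strata; each stratum is a tuple (a, b, c, d).  This is the point
    estimator whose asymptotic distribution underlies Sato's
    confidence-limit procedure; Bartlett's score procedure is based on
    the conditional likelihood instead."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Two hypothetical strata, each with stratum-specific odds ratio 4
or_mh = mantel_haenszel_or([(10, 5, 4, 8), (20, 10, 8, 16)])
```

When every stratum shares the same odds ratio, the weighted estimator recovers it exactly, which is the homogeneity assumption behind a "common" odds ratio.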

12.
Although epidemiological studies support an association between smoking and cognitive impairment, existing data do not answer the question of whether this association is causal or arises from covariates. In this paper, we investigate smoking status, assessed from adolescence to adulthood, and subsequent cognitive problems in a large representative sample of youths. To analyze these data, we propose a method for estimating causal effects using full matching based on the subject-specific random intercept and slope of the propensity scores. The findings suggest that earlier smoking is not a causal factor for later cognitive problems (odds ratio = 1.64, 95% CI: 0.97–2.80, p = 0.06).

13.
This paper considers 2×2 tables arising from case–control studies in which the binary exposure may be misclassified. We found circumstances under which the inverse matrix method provides a more efficient odds ratio estimator than the naive estimator. We provide some intuition for the findings, and also provide a formula for obtaining the minimum size of a validation study such that the variance of the odds ratio estimator from the inverse matrix method is smaller than that of the naive estimator, thereby ensuring an advantage for the misclassification corrected result. As a corollary of this result, we show that correcting for misclassification does not necessarily lead to a widening of the confidence intervals, but, rather, in addition to producing a consistent estimate, can also produce one that is more efficient.
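The core of the inverse matrix method can be sketched as follows, assuming sensitivity and specificity are known (in practice they come from the validation study whose minimum size the paper derives); the counts are hypothetical:

```python
def correct_counts(obs_exposed, obs_unexposed, sens, spec):
    """Inverse matrix method for a single row of a 2x2 case-control
    table with misclassified binary exposure: solve M @ true = obs,
    where M = [[sens, 1-spec], [1-sens, spec]] maps true to observed
    exposure counts.  The paper's efficiency comparison against the
    naive (uncorrected) estimator is not reproduced here."""
    det = sens + spec - 1          # determinant of the 2x2 matrix M
    true_exposed = (spec * obs_exposed - (1 - spec) * obs_unexposed) / det
    true_unexposed = (sens * obs_unexposed - (1 - sens) * obs_exposed) / det
    return true_exposed, true_unexposed

# Round trip: true counts (60, 40) observed through sens = 0.9,
# spec = 0.8 become (62, 38); the correction recovers the true counts.
te, tu = correct_counts(62.0, 38.0, sens=0.9, spec=0.8)
```

Applying the correction separately to the case and control rows yields the misclassification-corrected odds ratio; the determinant sens + spec - 1 shows the method degenerates as classification approaches chance.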

14.
We propose a two-stage design for a single-arm clinical trial with an early stopping rule for futility. This design employs different endpoints to assess early stopping and efficacy. The early stopping rule is based on a criterion determined more quickly than that for efficacy. These separate criteria are also nested in the sense that efficacy is a special case of, but usually not identical to, the early stopping endpoint. The design readily allows for planning in terms of statistical significance, power, expected sample size, and expected duration. The method is illustrated with a phase II design comparing rates of disease progression in elderly patients treated for lung cancer with rates found using a historical control. In this example, the early stopping rule is based on the number of patients who exhibit progression-free survival (PFS) at 2 months post-treatment follow-up. Efficacy is judged by the number of patients who have PFS at 6 months. We demonstrate that our design has expected sample size and power comparable with those of the Simon two-stage design but exhibits shorter expected duration under a range of useful parameter values.
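The operating characteristics used for such comparisons can be sketched for a Simon-style single-endpoint rule; the design parameters and response rate below are hypothetical, and the paper's distinguishing feature, a quicker early endpoint nested within the efficacy endpoint, is not captured here:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def pet_and_expected_n(n1, r1, n, p):
    """Probability of early termination (PET) and expected sample size
    for a Simon-style futility rule: stop after n1 patients if at most
    r1 respond, otherwise continue to the full n."""
    pet = binom_cdf(r1, n1, p)
    return pet, pet * n1 + (1 - pet) * n

# Under a poor response rate, the trial usually stops early
pet, en = pet_and_expected_n(n1=10, r1=2, n=29, p=0.1)
```

Expected duration, the quantity the proposed design improves, depends further on how quickly the stage-one endpoint matures, which is why an earlier surrogate endpoint shortens the trial even at a comparable expected sample size.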

15.
The article details a sampling scheme which can lead to a reduction in sample size and cost in clinical and epidemiological studies of association between a count outcome and a risk factor. We show that inference in two common generalized linear models for count data, Poisson and negative binomial regression, is improved by using a ranked auxiliary covariate, which guides the sampling procedure. This type of sampling has typically been used to improve inference on a population mean. The novelty of the current work is its extension to log-linear models and derivations showing that the sampling technique results in an increase in information as compared to simple random sampling. Specifically, we show that under the proposed sampling strategy the maximum likelihood estimate of the risk factor's coefficient is improved through an increase in Fisher information. A simulation study is performed to compare the mean squared error, bias, variance, and power of the sampling routine with simple random sampling under various data-generating scenarios. We also illustrate the merits of the sampling scheme on a real data set from a clinical setting of males with chronic obstructive pulmonary disease. Empirical results from the simulation study and data analysis coincide with the theoretical derivations, suggesting that a significant reduction in sample size, and hence study cost, can be realized while achieving the same precision as a simple random sample.

16.
A panel study consists of individuals who have data collected at periodic follow-up visits or pre-specified time points following entry into the study. The objective of this paper is to consider design issues in a panel study when the response variable is the stage of disease, and with focus on the transition intensities. Important design issues include the choice of the time interval between follow-up visits and sample size considerations. We study the effects of time intervals between follow-up visits on the precision of the transition intensities estimators. We also consider the power of statistical tests on the ratio of transition intensities. Discussion is extended to incorporate heterogeneity in the population in which frailty is introduced to describe subject-specific transition intensities.
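Why the visit spacing matters can be seen in the simplest two-state case, where the interval transition probability has a closed form; the intensities below are hypothetical:

```python
import math

def p01(lam, mu, t):
    """P(state 1 at time t | state 0 at time 0) for a two-state Markov
    process with transition intensities lam (0 -> 1) and mu (1 -> 0):
    lam/(lam+mu) * (1 - exp(-(lam+mu)*t)).  In a panel study only the
    states at follow-up visits are observed, so the intensities must be
    inferred from these interval probabilities -- the spacing t between
    visits therefore drives the precision of the intensity estimators."""
    s = lam + mu
    return lam / s * (1.0 - math.exp(-s * t))

# Longer intervals push the transition probability toward its
# stationary limit lam/(lam+mu), washing out information about lam, mu.
probs = [p01(0.3, 0.7, t) for t in (0.5, 1.0, 5.0, 50.0)]
```

Once t is large enough that the probability has essentially reached its limit, further observations at that spacing identify only the ratio lam/(lam+mu), not the individual intensities, which is the design trade-off the paper studies.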

17.
In clustered survival data, the dependence among individual survival times within a cluster has usually been described using copula models and frailty models. In this paper we propose a profile likelihood approach for semiparametric copula models with different cluster sizes. We also propose a likelihood ratio method based on the profile likelihood for testing the absence of the association parameter (i.e., a test of independence) under the copula models, which leads to a boundary problem of the parameter space. For this purpose, we show via simulation study that the proposed likelihood ratio method using an asymptotic chi-square mixture distribution performs well as the sample size increases. We compare the behaviors of the two models using the profile likelihood approach under a semiparametric setting. The proposed method is demonstrated using two well-known data sets.

18.
In a human bioequivalence (BE) study, the conclusion of BE is usually based on the ratio of geometric means of pharmacokinetic parameters between a test and a reference product. The "Guideline for Bioequivalence Studies of Generic Products" (2012) issued by the Japanese health authority, like similar guidelines across the world, requires a 90% confidence interval (CI) of the ratio to fall entirely within the range of 0.8 to 1.25 for the conclusion of BE. If prerequisite conditions are satisfied, the Japanese guideline provides a secondary BE criterion that requires the point estimate of the ratio to fall within the range of 0.9 to 1.11. We investigated the statistical properties of the "switching decision rule," wherein the secondary criterion is applied only when the CI criterion fails. The behavior of the switching decision rule differed from that of either of its component criteria and displayed an apparent Type I error rate inflation when the prerequisite conditions were not considered. The degree of inflation became greater as the true variability increased relative to the assumed variability used in the sample size calculation. To our knowledge, this is the first report in which the overall behavior of the combination of the two component criteria has been investigated. The implications of the in vitro tests on human BE and the accuracy of the intra-subject variability have an impact on appropriate planning and interpretation of BE studies utilizing the switching decision rule.
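The primary CI criterion can be sketched for a simple paired design; the guideline's actual analysis uses a crossover ANOVA, so this is a simplification, and the data and t-quantile below are illustrative:

```python
import math

def ci_criterion_met(log_ratios, t_quantile):
    """Primary BE criterion, sketched for a paired design: the 90% CI of
    the geometric mean test/reference ratio must lie entirely within
    [0.8, 1.25].  log_ratios are within-subject log(T/R) differences and
    t_quantile is the t(0.95, n-1) quantile, passed in to keep the
    sketch dependency-free.  The secondary point-estimate criterion of
    the switching rule would instead check 0.9 <= exp(mean) <= 1.11."""
    n = len(log_ratios)
    mean = sum(log_ratios) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in log_ratios) / (n - 1))
    half = t_quantile * sd / math.sqrt(n)
    lo, hi = math.exp(mean - half), math.exp(mean + half)
    return 0.8 <= lo and hi <= 1.25

# Small, centred within-subject differences: criterion met
ok = ci_criterion_met([0.02, -0.01, 0.03, 0.0, 0.01, -0.02], t_quantile=2.015)
```

The switching rule the paper studies applies the point-estimate check only after this CI check fails, which is precisely the sequencing that creates the Type I error inflation it reports.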

19.
For binary endpoints, the required sample size depends not only on the specified significance level, power and clinically relevant difference but also on the overall event rate. However, the overall event rate may vary considerably between studies and, as a consequence, the assumptions made in the planning phase about this nuisance parameter are to a great extent uncertain. The internal pilot study design is an appealing strategy for dealing with this problem. Here, the overall event probability is estimated during the ongoing trial based on the pooled data of both treatment groups and, if necessary, the sample size is adjusted accordingly. From a regulatory viewpoint, besides preserving blindness, it is required that any consequences for the Type I error rate be explained. We present analytical computations of the actual Type I error rate for the internal pilot study design with binary endpoints and compare them with the actual level of the chi-square test for the fixed sample size design. A method is given that permits control of the specified significance level for the chi-square test under blinded sample size recalculation. Furthermore, the properties of the procedure with respect to power and expected sample size are assessed. Throughout the paper, both the situation of equal sample sizes per group and unequal allocation ratios are considered. The method is illustrated with application to a clinical trial in depression. Copyright © 2004 John Wiley & Sons Ltd.
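The blinded recalculation step can be sketched with a standard normal-approximation formula parameterised by the overall event rate; the rates and difference below are hypothetical, and the paper's Type I error computations for the chi-square test are not reproduced:

```python
import math

def n_per_group(p_overall, delta, z_alpha=1.959964, z_power=0.841621):
    """Approximate per-group sample size for comparing two proportions
    that differ by the clinically relevant delta, parameterised by the
    OVERALL event rate p_overall -- the blinded nuisance parameter that
    an internal pilot re-estimates from the pooled data (defaults:
    two-sided alpha = 0.05, power = 0.80, quantiles hard-coded)."""
    p1, p2 = p_overall + delta / 2, p_overall - delta / 2
    term = (z_alpha * math.sqrt(2 * p_overall * (1 - p_overall))
            + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    return math.ceil(term ** 2 / delta ** 2)

# Planning assumed an overall rate of 0.30; a blinded interim estimate
# of 0.45 leads to an increased recalculated sample size.
n_planned = n_per_group(0.30, 0.20)
n_recalc = n_per_group(0.45, 0.20)
```

Because the recalculation uses only the pooled event rate, blindness is preserved; the paper's contribution is quantifying what this data-dependent resizing does to the actual level of the final chi-square test.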

20.
In this paper we consider a Bayesian predictive approach to sample size determination in equivalence trials. Equivalence experiments are conducted to show that the unknown difference between two parameters is small. For instance, in clinical practice this kind of experiment aims to determine whether the effects of two medical interventions are therapeutically similar. We declare an experiment successful if an interval estimate of the effects-difference is included in a set of values of the parameter of interest indicating a negligible difference between treatment effects (equivalence interval). We derive two alternative criteria for the selection of the optimal sample size, one based on the predictive expectation of the interval limits and the other based on the predictive probability that these limits fall in the equivalence interval. Moreover, for both criteria we derive a robust version with respect to the choice of the prior distribution. Numerical results are provided and an application is illustrated when the normal model with conjugate prior distributions is assumed.
