Similar documents
20 similar records found (search time: 224 ms)
1.
Large-sample Wilson-type confidence intervals (CIs) are derived for a parameter of interest in many clinical-trial situations: the log-odds-ratio, in a two-sample experiment comparing binomial success proportions, say between cases and controls. The methods cover several scenarios: (i) results embedded in a single 2 × 2 contingency table; (ii) a series of K 2 × 2 tables with a common parameter; or (iii) K tables, where the parameter may change across tables under the influence of a covariate. The calculations of the Wilson CI require only simple numerical assistance and, for example, are easily carried out in Excel. The main competitor, the exact CI, has two disadvantages: it requires burdensome search algorithms in the multi-table case, and it exhibits strong over-coverage with correspondingly long confidence intervals. All the application cases are illustrated through a well-known example. A simulation study then investigates how the Wilson CI performs among several competing methods. The Wilson interval is shortest, except for very large odds ratios, while maintaining coverage similar to Wald-type intervals. An alternative to the Wald CI is the Agresti-Coull CI, calculated from the Wilson and Wald CIs, which has the same length as the Wald CI but improved coverage.
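The building blocks mentioned in this abstract have simple closed forms. Below is a minimal sketch (not the authors' exact multi-table procedure) of the single-sample Wilson interval and the standard Wald-type interval for the log odds ratio of one 2 × 2 table; the example counts are hypothetical.

```python
import math

def wilson_ci(x, n, z=1.96):
    """Wilson score interval for a binomial proportion x/n."""
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

def wald_log_or_ci(a, b, c, d, z=1.96):
    """Wald-type interval for the log odds ratio of a 2x2 table
    [[a, b], [c, d]] (cases: a successes, b failures; controls: c, d).
    Requires all four cells to be nonzero."""
    log_or = math.log(a * d / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or - z * se, log_or + z * se
```

The Agresti-Coull interval mentioned above is then obtained by centring a Wald-width interval at the Wilson midpoint.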

2.
Missing observations due to non-response are commonly encountered in data collected from sample surveys. The focus of this article is on item non-response, which is often handled by filling in (or imputing) missing values using the observed responses (donors). Random imputation (single or fractional) is used within homogeneous imputation classes that are formed on the basis of categorical auxiliary variables observed on all the sampled units. A uniform response rate within classes is assumed, but that rate is allowed to vary across classes. We construct confidence intervals (CIs) for a population parameter that is defined as the solution to a smooth estimating equation with data collected using stratified simple random sampling. The imputation classes are assumed to be formed across strata. Fractional imputation with a fixed number of random draws is used to obtain an imputed estimating function. An empirical likelihood inference method under the fractional imputation is proposed and its asymptotic properties are derived. Two asymptotically correct bootstrap methods are developed for constructing the desired CIs. In a simulation study, the proposed bootstrap methods are shown to outperform traditional bootstrap methods and some non-bootstrap competitors under various simulation settings. The Canadian Journal of Statistics 47: 281–301; 2019 © 2019 Statistical Society of Canada
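The imputation step described above can be caricatured in a few lines. The sketch below is a deliberate simplification for a single homogeneous class: each missing item receives m randomly drawn donors, each carrying weight 1/m, and the class mean is computed from the completed data. The function name and the toy data are hypothetical, and the sketch omits the stratified design and the estimating-equation machinery of the paper.

```python
import random

def fractional_impute_mean(observed, n_missing, m=3, rng=None):
    """Fractional random imputation within one homogeneous class:
    each missing item is filled with m randomly drawn donor values,
    each weighted 1/m; returns the imputed estimate of the class mean."""
    rng = rng or random.Random(0)
    total = sum(observed)
    for _ in range(n_missing):
        donors = [rng.choice(observed) for _ in range(m)]
        total += sum(donors) / m  # fractional contribution of the m donors
    return total / (len(observed) + n_missing)
```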

3.
A stratified analysis of the differences in proportions has been widely employed in epidemiological research, social sciences, and drug development. It provides a useful framework for combining data across strata to produce a common effect. However, for rare events with incidence rates close to zero, popular confidence intervals for risk differences in a stratified analysis may not have appropriate coverage probabilities that approach the nominal confidence levels and the algorithms may fail to produce a valid confidence interval because of zero events in both the arms of a stratum. The main objective of this study is to evaluate the performance of certain methods commonly employed to construct confidence intervals for stratified risk differences when the response probabilities are close to a boundary value of zero or one. Additionally, we propose an improved stratified Miettinen–Nurminen confidence interval that exhibits a superior performance over standard methods while avoiding computational difficulties involving rare events. The proposed method can also be employed when the response probabilities are close to one.
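For contrast with the score-type interval proposed above, one standard comparator can be sketched: a Mantel-Haenszel-weighted risk difference with a crude Wald-type interval. This is the kind of simple method such evaluations compare against, not the Miettinen-Nurminen construction itself, and it illustrates the failure mode noted in the abstract: with zero events in both arms of every stratum its variance collapses to zero.

```python
import math

def mh_risk_diff_ci(strata, z=1.96):
    """Mantel-Haenszel-weighted risk difference with a crude Wald-type
    interval; strata is a list of (x1, n1, x2, n2) tuples."""
    wsum = rd = var = 0.0
    for x1, n1, x2, n2 in strata:
        p1, p2 = x1 / n1, x2 / n2
        w = n1 * n2 / (n1 + n2)       # Mantel-Haenszel stratum weight
        wsum += w
        rd += w * (p1 - p2)
        var += w * w * (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    rd /= wsum
    se = math.sqrt(var) / wsum
    return rd, rd - z * se, rd + z * se
```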

4.
Several methods are available for generating confidence intervals for rate difference, rate ratio, or odds ratio, when comparing two independent binomial proportions or Poisson (exposure-adjusted) incidence rates. Most methods have some degree of systematic bias in one-sided coverage, so that a nominal 95% two-sided interval cannot be assumed to have tail probabilities of 2.5% at each end, and any associated hypothesis test is at risk of inflated type I error rate. Skewness-corrected asymptotic score methods have been shown to have superior equal-tailed coverage properties for the binomial case. This paper completes this class of methods by introducing novel skewness corrections for the Poisson case and for odds ratio, with and without stratification. Graphical methods are used to compare the performance of these intervals against selected alternatives. The skewness-corrected methods perform favourably in all situations, including those with small sample sizes or rare events, and the skewness correction should be considered essential for analysis of rate ratios. The stratified method is found to have excellent coverage properties for a fixed effects analysis. In addition, another new stratified score method is proposed, based on the t-distribution, which is suitable for use in either a fixed effects or random effects analysis. By using a novel weighting scheme, this approach improves on conventional and modern meta-analysis methods with weights that rely on crude estimation of stratum variances. In summary, this paper describes methods that are found to be robust for a wide range of applications in the analysis of rates.
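The uncorrected asymptotic baseline for the Poisson rate-ratio case can be sketched as a Wald interval on the log scale; the skewness-corrected score methods described above are designed to improve on intervals of this kind, particularly for small counts.

```python
import math

def log_rate_ratio_ci(x1, t1, x2, t2, z=1.96):
    """Crude Wald interval for the log rate ratio of two Poisson counts
    x1, x2 with exposures t1, t2. Requires x1 > 0 and x2 > 0."""
    log_rr = math.log((x1 / t1) / (x2 / t2))
    se = math.sqrt(1 / x1 + 1 / x2)
    return log_rr - z * se, log_rr + z * se
```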

5.
In fuzzy regression discontinuity (FRD) designs, the treatment effect is identified through a discontinuity in the conditional probability of treatment assignment. We show that when identification is weak (i.e., when the discontinuity is of a small magnitude), the usual t-test based on the FRD estimator and its standard error suffers from asymptotic size distortions, as in a standard instrumental variables setting. This problem can be especially severe in the FRD setting since only observations close to the discontinuity are useful for estimating the treatment effect. To eliminate those size distortions, we propose a modified t-statistic that uses a null-restricted version of the standard error of the FRD estimator. Simple and asymptotically valid confidence sets for the treatment effect can also be constructed using this null-restricted standard error. An extension to testing for constancy of the regression discontinuity effect across covariates is also discussed. Supplementary materials for this article are available online.

6.
We consider the problem of simultaneously estimating Poisson rate differences via applications of the Hsu and Berger stepwise confidence interval method (termed HBM), where comparisons to a common reference group are performed. We discuss continuity-corrected confidence intervals (CIs) and investigate the HBM performance with a moment-based CI and with Wald and pooled CIs, both uncorrected and corrected for continuity. Using simulations, we compare nine individual CIs in terms of coverage probability, and the HBM with the nine intervals in terms of family-wise error rate (FWER) and overall and local power. The simulations show that these statistical properties depend highly on the parameter settings.
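A minimal sketch of one ingredient above: the uncorrected Wald CI for a single Poisson rate difference. The HBM procedure would then apply such intervals stepwise across the comparisons with the reference group; the counts below are hypothetical.

```python
import math

def poisson_rd_ci(x1, t1, x2, t2, z=1.96):
    """Uncorrected Wald interval for the difference of two Poisson
    rates, x1/t1 - x2/t2, with exposures t1 and t2."""
    d = x1 / t1 - x2 / t2
    se = math.sqrt(x1 / t1**2 + x2 / t2**2)  # plug-in variance of each rate
    return d - z * se, d + z * se
```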

7.
The problem of estimating the difference between two binomial proportions is considered. Closed-form approximate confidence intervals (CIs) and a fiducial CI for the difference between proportions are proposed. The approximate CIs are simple to compute, and they perform better than the classical Wald CI in terms of coverage probabilities and precision. Numerical studies indicate that these approximate CIs can be used safely for practical applications under a simple condition. The fiducial CI is more accurate than the approximate CIs in terms of coverage probabilities. The fiducial CIs, the Newcombe CIs, and the Miettinen–Nurminen CIs are comparable in terms of coverage probabilities and precision. The interval estimation procedures are illustrated using two examples.
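Of the intervals compared above, the Newcombe hybrid-score interval has a simple closed form: it combines the two single-sample Wilson limits. A sketch with hypothetical counts:

```python
import math

def wilson(x, n, z=1.96):
    """Wilson score interval for a binomial proportion x/n."""
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

def newcombe_diff_ci(x1, n1, x2, n2, z=1.96):
    """Newcombe's hybrid-score interval for p1 - p2, built from the
    single-sample Wilson limits of each proportion."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson(x1, n1, z)
    l2, u2 = wilson(x2, n2, z)
    d = p1 - p2
    lo = d - math.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    hi = d + math.sqrt((u1 - p1) ** 2 + (l2 - p2) ** 2)
    return lo, hi
```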

8.
The problem posed by exact confidence intervals (CIs) which can be either all-inclusive or empty for a nonnegligible set of sample points is known to have no solution within CI theory. Confidence belts causing improper CIs can be modified by using margins of error from the renewed theory of errors initiated by J. W. Tukey—briefly described in the article—for which an extended Fraser's frequency interpretation is given. This approach is consistent with Kolmogorov's axiomatization of probability, in which a probability and an error measure obey the same axioms, although the connotation of the two words is different. An algorithm capable of producing a margin of error for any parameter derived from the five parameters of the bivariate normal distribution is provided. Margins of error correcting Fieller's CIs for a ratio of means are obtained, as are margins of error replacing Jolicoeur's CIs for the slope of the major axis. Margins of error using Dempster's conditioning that can correct optimal, but improper, CIs for the noncentrality parameter of a noncentral chi-square distribution are also given.
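Fieller's interval for a ratio of means, whose improper cases the margins of error above are meant to correct, reduces to solving a quadratic. The sketch below covers only the bounded case with independent numerator and denominator estimates (no covariance term); the values in the test are hypothetical.

```python
import math

def fieller_ci(xbar, v1, ybar, v2, t=1.96):
    """Fieller interval for the ratio xbar/ybar of two independent mean
    estimates with estimated variances v1, v2. Bounded case only: the
    set is an interval when the denominator mean is significantly
    nonzero, i.e. ybar**2 > t**2 * v2."""
    a = ybar**2 - t**2 * v2
    if a <= 0:
        raise ValueError("unbounded Fieller set: denominator not significant")
    b = -2 * xbar * ybar
    c = xbar**2 - t**2 * v1
    disc = math.sqrt(b * b - 4 * a * c)  # roots of a*theta^2 + b*theta + c
    return (-b - disc) / (2 * a), (-b + disc) / (2 * a)
```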

9.
Assessment of non-inferiority is often performed using a one-sided statistical test through an analogous one-sided confidence limit. When the focus of attention is the difference in success rates between test and active control proportions, the lower confidence limit is computed, and many methods exist in the literature to address this objective. This paper considers methods that are popular in the literature and that emerged in this research as having good performance with respect to controlling type I error at the specified level. Performance of these methods is assessed with respect to power and type I error through simulations. Sample size considerations are also included to aid in the planning stages of non-inferiority trials focusing on the difference in proportions. Results suggest that the appropriate method to use depends on the sample size allocation of subjects in the test and active control groups.
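The basic decision rule described above (declare non-inferiority when the one-sided lower confidence limit for the test-minus-control difference exceeds the negative margin) can be sketched with a simple Wald limit; the better-performing limits considered in the paper would replace it, and the counts and margin below are hypothetical.

```python
import math

def non_inferior(x_t, n_t, x_c, n_c, margin, z=1.645):
    """One-sided Wald check for non-inferiority on a difference of
    proportions: returns the lower confidence limit for
    p_test - p_control and whether it exceeds -margin."""
    pt, pc = x_t / n_t, x_c / n_c
    se = math.sqrt(pt * (1 - pt) / n_t + pc * (1 - pc) / n_c)
    lower = (pt - pc) - z * se
    return lower, lower > -margin
```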

10.
Five estimation approaches have been developed to compute the confidence interval (CI) for the ratio of two lognormal means: (1) T, the CI based on the t-test procedure; (2) ML, a traditional maximum likelihood-based approach; (3) BT, a bootstrap approach; (4) R, the signed log-likelihood ratio statistic; and (5) R*, the modified signed log-likelihood ratio statistic. The purpose of this study was to assess the performance of these five approaches when applied to distributions other than the lognormal, for which they were derived. Performance was assessed in terms of average length and coverage probability of the CIs for each estimation approach (i.e., T, ML, BT, R, and R*) when the data followed a Weibull or gamma distribution. Four models were discussed in this study. In Model 1, the sample sizes and variances were equal within the two groups. In Model 2, the sample sizes were equal but the variances were different within the two groups. In Model 3, the variances were different within the two groups and the larger variance was paired with the larger sample size. In Model 4, the variances were different within the two groups and the larger variance was paired with the smaller sample size. The results showed that when the variances of the two groups were equal, the t-test performed well, no matter what the underlying distribution was or how large the variances of the two groups were. The BT approach performed better than the others when the underlying distribution was not lognormal, although it was inaccurate when the variances were large. The R* test did not perform well when the data were Weibull or gamma distributed, but it performed best when the data followed a lognormal distribution.
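The BT approach is the most distribution-robust in this comparison. A minimal percentile-bootstrap sketch for the ratio of two means is given below; this is a plain percentile interval with hypothetical data, not the specific bootstrap variant evaluated in the study.

```python
import random

def boot_ratio_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap interval for mean(x) / mean(y):
    resample each group independently and take the empirical
    alpha/2 and 1 - alpha/2 quantiles of the ratio."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        xs = [rng.choice(x) for _ in x]
        ys = [rng.choice(y) for _ in y]
        stats.append((sum(xs) / len(xs)) / (sum(ys) / len(ys)))
    stats.sort()
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```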

11.
Bivariate correlation coefficients (BCCs) are often calculated to gauge the relationship between two variables in medical research. In a family-type clustered design where multiple participants from the same units/families are enrolled, BCCs can be defined and estimated at various hierarchical levels (subject-level, family-level and marginal BCC). Heterogeneity usually exists between subject groups and, as a result, subject-level BCCs may differ between subject groups. In the framework of bivariate linear mixed effects modeling, we define and estimate BCCs at various hierarchical levels in a family-type clustered design, accommodating subject group heterogeneity. Simplified and modified asymptotic confidence intervals are constructed for the BCC differences and Wald-type tests are conducted. A real-world family-type clustered study of Alzheimer's disease (AD) is analyzed to estimate and compare BCCs among well-established AD biomarkers between mutation carriers and non-carriers in autosomal dominant AD asymptomatic individuals. Extensive simulation studies are conducted across a wide range of scenarios to evaluate the performance of the proposed estimators and the type-I error rate and power of the proposed statistical tests.

Abbreviations: BCC: bivariate correlation coefficient; BLM: bivariate linear mixed effects model; CI: confidence interval; AD: Alzheimer's disease; DIAN: The Dominantly Inherited Alzheimer Network; SA: simple asymptotic; MA: modified asymptotic

Keywords: bivariate correlation coefficient; bivariate linear mixed effects model; parameter estimation; confidence interval; hypothesis testing; type-I error/size and power

12.
This article studies the hypothesis testing and interval estimation for the among-group variance component in unbalanced heteroscedastic one-fold nested design. Based on the concepts of generalized p-value and generalized confidence interval, tests and confidence intervals for the among-group variance component are developed. Furthermore, some simulation results are presented to compare the performance of the proposed approach with those of existing approaches. It is found that the proposed approach and one of the existing approaches can maintain the nominal confidence level across a wide array of scenarios, and therefore are recommended to use in practical problems. Finally, a real example is illustrated.

13.
The authors show how an adjusted pseudo-empirical likelihood ratio statistic that is asymptotically distributed as a chi-square random variable can be used to construct confidence intervals for a finite population mean or a finite population distribution function from complex survey samples. They consider both non-stratified and stratified sampling designs, with or without auxiliary information. They examine the behaviour of estimates of the mean and the distribution function at specific points using simulations calling on the Rao-Sampford method of unequal probability sampling without replacement. They conclude that the pseudo-empirical likelihood ratio confidence intervals are superior to those based on the normal approximation, whether in terms of coverage probability, tail error rates or average length of the intervals.

14.
In this article, we discuss constructing confidence intervals (CIs) of performance measures for an M/G/1 queueing system. A fiducial empirical distribution is applied to estimate the service time distribution. We construct fiducial empirical quantities (FEQs) for the performance measures. The relationship between the generalized pivotal quantity and the fiducial empirical quantity is illustrated. We also present numerical examples to show that the FEQs can yield new CIs that dominate the bootstrap CIs in relative coverage (defined as the ratio of coverage probability to average length of the CI) for performance measures of an M/G/1 queueing system in most cases.
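The M/G/1 performance measures themselves follow from the Pollaczek-Khinchine formula. The sketch below is a plug-in point estimate of the mean waiting time in queue, using moments estimated from observed service times; it is the kind of quantity around which the FEQ-based CIs above would be built, with hypothetical inputs in the test.

```python
def mg1_mean_wait(lam, service_samples):
    """Plug-in Pollaczek-Khinchine estimate of the mean waiting time in
    queue for an M/G/1 system with arrival rate lam, using first and
    second moments estimated from observed service times."""
    n = len(service_samples)
    m1 = sum(service_samples) / n            # E[S] estimate
    m2 = sum(s * s for s in service_samples) / n  # E[S^2] estimate
    rho = lam * m1                           # utilisation
    if rho >= 1:
        raise ValueError("unstable queue: utilisation >= 1")
    return lam * m2 / (2 * (1 - rho))
```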

15.
This paper examines the use of bootstrapping for bias correction and calculation of confidence intervals (CIs) for a weighted nonlinear quantile regression estimator adjusted to the case of longitudinal data. Different weights and types of CIs are used and compared by computer simulation using a logistic growth function and error terms following an AR(1) model. The results indicate that bias correction reduces the bias of a point estimator but fails for CI calculations. A bootstrap percentile method and a normal approximation method perform well for two weights when used without bias correction. Taking both coverage and lengths of CIs into consideration, a non-bias-corrected percentile method with an unweighted estimator performs best.

16.
In stratified otolaryngologic (or ophthalmologic) studies, misleading results may be obtained when ignoring the confounding effect and the correlation between responses from two ears. Score and Wald-type statistics are presented to test equality of proportions in a stratified bilateral-sample design, and their corresponding sample size formulae are given. A score statistic for testing homogeneity of the difference between two proportions and a score confidence interval for the common difference of two proportions in a stratified bilateral-sample design are derived. Empirical results show that (1) the score statistic and the Wald-type statistic based on the dependence model assumption outperform other statistics in terms of type I error rates; (2) the score confidence interval demonstrates reasonably good coverage properties; (3) the sample size formula via the Wald-type statistic under the dependence model assumption is rather accurate. A real example is used to illustrate the proposed methodologies.

17.
The likelihood ratio statistic for testing pointwise hypotheses about the survival time distribution in the current status model can be inverted to yield confidence intervals (CIs). One advantage of this procedure is that CIs can be formed without estimating the unknown parameters that figure in the asymptotic distribution of the maximum likelihood estimator (MLE) of the distribution function. We discuss the likelihood ratio-based CIs for the distribution function and the quantile function and compare these intervals to several different intervals based on the MLE. The quantiles of the limiting distribution of the MLE are estimated using various methods including parametric fitting, kernel smoothing and subsampling techniques. Comparisons are carried out both for simulated data and on a data set involving time to immunization against rubella. The comparisons indicate that the likelihood ratio-based intervals are preferable from several perspectives.

18.
In this article, we deal with a two-parameter exponentiated half-logistic distribution. We consider the estimation of the unknown parameters, the associated reliability function and the hazard rate function under progressive Type II censoring. Maximum likelihood estimates (MLEs) are proposed for the unknown quantities. Bayes estimates are derived with respect to squared error, LINEX and entropy loss functions. Approximate explicit expressions for all Bayes estimates are obtained using the Lindley method. We also use an importance sampling scheme to compute the Bayes estimates. Markov chain Monte Carlo samples are further used to produce credible intervals for the unknown parameters. Asymptotic confidence intervals are constructed using the normality property of the MLEs. For comparison purposes, bootstrap-p and bootstrap-t confidence intervals are also constructed. A comprehensive numerical study is performed to compare the proposed estimates. Finally, a real-life data set is analysed to illustrate the proposed methods of estimation.

19.
In this paper we consider and propose some confidence intervals for estimating the mean or the difference of means of skewed populations. We extend the median t interval to the two-sample problem. Further, we suggest using the bootstrap to find the critical points for use in the calculation of median t intervals. A simulation study has been made to compare the performance of the intervals, and a real-life example has been considered to illustrate the application of the methods.

20.
A major use of the bootstrap methodology is in the construction of nonparametric confidence intervals. Although no consensus has yet been reached on the best way to proceed, theoretical and empirical evidence indicate that bootstrap-t intervals provide a reasonable solution to this problem. However, when applied to small data sets, these intervals can be unusually wide and unstable. The author presents techniques for stabilizing bootstrap-t intervals for small samples. His methods are motivated theoretically and investigated through simulations.
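The basic (unstabilized) bootstrap-t construction that this article refines can be sketched as follows: studentise each resample's mean and use the bootstrap quantiles of that t-statistic in place of normal or t-table critical points. For small samples this is exactly the regime where the resulting intervals can be wide and unstable.

```python
import math
import random

def boot_t_ci(data, n_boot=1000, alpha=0.05, seed=1):
    """Bootstrap-t interval for the mean: the critical points are the
    empirical quantiles of the studentised resampled means."""
    rng = random.Random(seed)
    n = len(data)
    mean = sum(data) / n
    se = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1) / n)
    ts = []
    for _ in range(n_boot):
        s = [rng.choice(data) for _ in range(n)]
        m = sum(s) / n
        s_se = math.sqrt(sum((x - m) ** 2 for x in s) / (n - 1) / n)
        if s_se > 0:  # skip degenerate resamples
            ts.append((m - mean) / s_se)
    ts.sort()
    t_lo = ts[int(len(ts) * alpha / 2)]
    t_hi = ts[int(len(ts) * (1 - alpha / 2)) - 1]
    return mean - t_hi * se, mean - t_lo * se
```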


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)