Similar documents
20 similar documents found; search time: 31 ms
1.
This paper is concerned with developing procedures for constructing confidence intervals for the scale parameter θ of the two-parameter exponential lifetime model when the data are time censored, intervals that hold approximately equal tail probabilities and coverage probabilities close to the nominal level. We use a conditional approach to eliminate the nuisance parameter and develop several procedures based on the conditional likelihood. The methods are (a) a method based on the likelihood ratio, (b) a method based on the skewness-corrected score (Bartlett, Biometrika 40 (1953), 12–19), (c) a method based on an adjustment to the signed root likelihood ratio (DiCiccio, Field et al., Biometrika 77 (1990), 77–95), and (d) a method based on parameter transformation to the normal approximation. The performances of these procedures are then compared, through simulations, with the usual likelihood-based procedure. The skewness-corrected score procedure performs best in terms of holding both equal tail probabilities and nominal coverage probabilities, even for small samples.
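Setting the conditional-likelihood machinery aside, the basic likelihood-ratio interval for θ under time censoring can be sketched as follows. This is a minimal illustration, not the authors' conditional procedure; it assumes the standard censored-exponential log-likelihood with r observed failures and total time at risk T, for which the MLE is θ̂ = T/r:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def exp_lr_ci(r, T, level=0.95):
    """Likelihood-ratio CI for the exponential scale theta under
    Type-I (time) censoring: r observed failures, total time at risk T.
    Log-likelihood: l(theta) = -r*log(theta) - T/theta."""
    theta_hat = T / r

    def loglik(theta):
        return -r * np.log(theta) - T / theta

    crit = chi2.ppf(level, df=1) / 2.0

    # Roots of l(theta_hat) - l(theta) = crit on each side of the MLE.
    def g(theta):
        return loglik(theta_hat) - loglik(theta) - crit

    lo = brentq(g, 1e-8 * theta_hat, theta_hat)
    hi = brentq(g, theta_hat, 1e6 * theta_hat)
    return lo, hi
```

Inverting the likelihood-ratio statistic this way respects the skewness of the likelihood, unlike a symmetric Wald interval around θ̂.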

2.
In the situation of stratified 2 × 2 tables, consistency of two different jackknife variances of the Mantel-Haenszel estimator is discussed in the case of increasing sample sizes but a fixed number of strata. Different principles for constructing confidence limits for the common odds ratio are investigated from a theoretical point of view with regard to the position and length of the resulting intervals. Monte Carlo experiments compare the finite-sample performance of the consistent jackknife variance with that of other noniterative variance estimators. In addition, the properties of these variance estimators are investigated when used for confidence interval estimation.

3.
Several methods are available for generating confidence intervals for rate difference, rate ratio, or odds ratio, when comparing two independent binomial proportions or Poisson (exposure-adjusted) incidence rates. Most methods have some degree of systematic bias in one-sided coverage, so that a nominal 95% two-sided interval cannot be assumed to have tail probabilities of 2.5% at each end, and any associated hypothesis test is at risk of inflated type I error rate. Skewness-corrected asymptotic score methods have been shown to have superior equal-tailed coverage properties for the binomial case. This paper completes this class of methods by introducing novel skewness corrections for the Poisson case and for odds ratio, with and without stratification. Graphical methods are used to compare the performance of these intervals against selected alternatives. The skewness-corrected methods perform favourably in all situations, including those with small sample sizes or rare events, and the skewness correction should be considered essential for analysis of rate ratios. The stratified method is found to have excellent coverage properties for a fixed effects analysis. In addition, another new stratified score method is proposed, based on the t-distribution, which is suitable for use in either a fixed effects or random effects analysis. By using a novel weighting scheme, this approach improves on conventional and modern meta-analysis methods with weights that rely on crude estimation of stratum variances. In summary, this paper describes methods that are found to be robust for a wide range of applications in the analysis of rates.

4.
For testing the non-inferiority (or equivalence) of an experimental treatment to a standard treatment, the odds ratio (OR) of patient response rates has been recommended to measure the relative treatment efficacy. On the basis of an exact test procedure proposed elsewhere for a simple crossover design, we develop an exact sample-size calculation procedure with respect to the OR of patient response rates for a desired power of detecting non-inferiority at a given nominal type I error. We note that the sample size calculated for a desired power based on an asymptotic test procedure can be much smaller than that based on the exact test procedure in a given situation. We further discuss the advantages and disadvantages of sample-size calculation using the exact test and the asymptotic test procedures. We use an example studying two inhalation devices for asthmatics to illustrate the use of the sample-size calculation procedure developed here.

5.
Highly skewed, non-negative data can often be modeled by the delta-lognormal distribution in fisheries research. However, the coverage probabilities of existing interval estimation procedures are unsatisfactory for small sample sizes and highly skewed data. We propose a heuristic method for estimating confidence intervals for the mean of the delta-lognormal distribution, based on an asymptotic generalized pivotal quantity used to construct a generalized confidence interval. Simulation results show that the proposed interval estimation procedure yields satisfactory coverage probabilities, expected interval lengths, and reasonable relative biases. Finally, the proposed method is applied to red cod density data for demonstration.
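As a rough illustration of the generalized-pivotal-quantity idea, here is a sketch for the lognormal component alone; the delta part (the point mass at zero) is omitted, and the GPQ construction for μ and σ² shown here is the textbook one, not necessarily the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def lognormal_mean_gci(y, level=0.95, B=20000):
    """Generalized CI for the lognormal mean exp(mu + sigma^2/2),
    given log-scale data y, via the standard GPQ construction:
    G_sigma2 = (n-1)s^2 / chi2_{n-1},  G_mu = ybar - Z*sqrt(G_sigma2/n)."""
    n = len(y)
    ybar, s2 = y.mean(), y.var(ddof=1)
    Z = rng.standard_normal(B)
    U2 = rng.chisquare(n - 1, B)
    G_sig2 = (n - 1) * s2 / U2                # GPQ for sigma^2
    G_mu = ybar - Z * np.sqrt(G_sig2 / n)     # GPQ for mu
    G = np.exp(G_mu + G_sig2 / 2)             # GPQ for the lognormal mean
    a = (1 - level) / 2
    return np.quantile(G, [a, 1 - a])
```

The full delta-lognormal version would additionally carry a pivotal quantity for the zero-proportion and mix it into G.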

6.
A problem which occurs in the practice of meta-analysis is that one or more component studies may have sparse data, such as zero events in the treatment and control groups. Two possible approaches were explored using simulations: the corrected method, in which one-half was added to each cell, was compared to the uncorrected method. These methods were compared over a range of sparse-data situations in terms of coverage rates using three summary statistics: the Mantel-Haenszel odds ratio and the DerSimonian and Laird odds ratio and rate difference. The uncorrected method performed better only when using the Mantel-Haenszel odds ratio with very little heterogeneity present. For all other sparse-data applications, the continuity correction performed better and is recommended for use in meta-analyses of similar scope.
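The continuity correction compared in the study can be sketched as follows. This is a minimal illustration; practice varies on whether the one-half correction is added to all studies or only to those with zero cells, and here it is applied only when a zero cell is present:

```python
def or_with_correction(a, b, c, d):
    """Per-study odds ratio (ad/bc) for a 2x2 table, with the 0.5
    continuity correction added to every cell when any cell is zero."""
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    return (a * d) / (b * c)
```

Without the correction, a zero cell makes the odds ratio (or its log-scale variance 1/a + 1/b + 1/c + 1/d) undefined, which is why the uncorrected method breaks down for sparse studies.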

7.
One of the common problems encountered in applied statistics is that of comparing two proportions from stratified samples. One approach to this problem is via inference on the corresponding odds ratio. In this paper, the various point and interval estimators of, and hypothesis testing procedures for, a common odds ratio from multiple 2 × 2 tables are reviewed. Based on research to date, the conditional maximum likelihood and Mantel-Haenszel estimators are recommended as the point estimators of choice. Neither confidence intervals nor hypothesis testing methods have been studied as well as the point estimators, but there is a confidence interval method associated with the Mantel-Haenszel estimator that is a good choice.
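A sketch of the Mantel-Haenszel point estimator, paired here with the Robins-Breslow-Greenland variance for a log-scale Wald interval. The RBG variance is one of the interval methods associated with the MH estimator; whether it is the specific interval the review recommends is an assumption:

```python
import numpy as np
from scipy.stats import norm

def mantel_haenszel_or(tables, level=0.95):
    """Mantel-Haenszel common odds ratio across 2x2 strata, with the
    Robins-Breslow-Greenland variance of log(OR_MH).
    Each table is (a, b, c, d); n = a + b + c + d per stratum."""
    R = S = PR = PSQR = QS = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        r, s = a * d / n, b * c / n          # stratum numerator/denominator
        p, q = (a + d) / n, (b + c) / n
        R += r; S += s
        PR += p * r; PSQR += p * s + q * r; QS += q * s
    or_mh = R / S
    # RBG variance of log(OR_MH)
    var_log = PR / (2 * R**2) + PSQR / (2 * R * S) + QS / (2 * S**2)
    z = norm.ppf(1 - (1 - level) / 2)
    half = z * np.sqrt(var_log)
    return or_mh, (or_mh * np.exp(-half), or_mh * np.exp(half))
```

For a single stratum the RBG variance collapses to the familiar Woolf variance 1/a + 1/b + 1/c + 1/d, which is a useful sanity check.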

8.
Sequential analyses in clinical trials have ethical and economic advantages over fixed sample size methods. The sequential probability ratio test (SPRT) is a hypothesis testing procedure which evaluates data as it is collected. The original SPRT was developed by Wald for one-parameter families of distributions and later extended by Bartlett to handle the case of nuisance parameters. However, Bartlett's SPRT requires independent and identically distributed observations. In this paper we show that Bartlett's SPRT can be applied to generalized linear model (GLM) contexts. Then we propose an SPRT analysis methodology for a Poisson generalized linear mixed model (GLMM) that is suitable for our application to the design of a multicenter randomized clinical trial that compares two preventive treatments for surgical site infections. We validate the methodology with a simulation study that includes a comparison to Neyman–Pearson and Bayesian fixed sample size test designs and the Wald SPRT.
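Wald's original one-parameter SPRT, which Bartlett's extension builds on, can be sketched for Bernoulli data as follows (a minimal illustration of the classical boundaries, not the GLMM methodology of the paper):

```python
import numpy as np

def sprt_bernoulli(xs, p0, p1, alpha=0.05, beta=0.2):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on a Bernoulli stream xs.
    Classical boundaries: accept H1 when the cumulative log-likelihood
    ratio reaches log((1-beta)/alpha); accept H0 at log(beta/(1-alpha))."""
    upper = np.log((1 - beta) / alpha)
    lower = np.log(beta / (1 - alpha))
    llr = 0.0
    for i, x in enumerate(xs, 1):
        llr += x * np.log(p1 / p0) + (1 - x) * np.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "reject H0", i
        if llr <= lower:
            return "accept H0", i
    return "continue", len(xs)
```

The key property is that the sample size is data-dependent: strong evidence in either direction stops the trial early, which is the source of the ethical and economic advantage mentioned above.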

9.
For a postulated common odds ratio for several 2 × 2 contingency tables one may, by conditioning on the marginals of the separate tables, determine the exact expectation and variance of the entry in a particular cell of each table, and hence of the total of such cells across all tables. This makes it feasible to determine limiting values via single-degree-of-freedom, continuity-corrected chi-square tests on the common odds ratio: one determines lower and upper limits corresponding to just barely significant chi-square values. The Mantel-Haenszel approach can be viewed as a special application of this, directed specifically at the case of unity for the odds ratio, for which the expectation and variance formulas are particularly simple. Computation of exact expectations and variances may be feasible only for 2 × 2 tables of limited size, but asymptotic formulas can be applied in other instances. Illustration is given for a particular set of four 2 × 2 tables to which both the exact limits and the limits by the proposed method could be applied, the two methods giving reasonably good agreement. Both procedures are directed at the distribution of the total over the designated cells, the proposed method treating that distribution as asymptotically normal. Especially good agreement of the proposed limits with the exact limits could be anticipated in more asymptotic situations (overall, not for individual tables), but in practice this may not be demonstrable, as the computation of exact limits is then infeasible.

10.
We focus on the construction of confidence corridors for multivariate nonparametric generalized quantile regression functions. This construction is based on asymptotic results for the maximal deviation between a suitable nonparametric estimator and the true function of interest, which follow after a series of approximation steps including a Bahadur representation, a new strong approximation theorem, and exponential tail inequalities for Gaussian random fields. As a byproduct we also obtain multivariate confidence corridors for the regression function in the classical mean regression. To deal with the problem of slowly decreasing error in coverage probability of the asymptotic confidence corridors, which results in meager coverage for small sample sizes, a simple bootstrap procedure is designed based on the leading term of the Bahadur representation. The finite-sample properties of both procedures are investigated by means of a simulation study and it is demonstrated that the bootstrap procedure considerably outperforms the asymptotic bands in terms of coverage accuracy. Finally, the bootstrap confidence corridors are used to study the efficacy of the National Supported Work Demonstration, which is a randomized employment enhancement program launched in the 1970s. This article has supplementary materials online.

11.
Inference concerning the negative binomial dispersion parameter, denoted by c, is important in many biological and biomedical investigations. Properties of the maximum-likelihood estimator of c and its bias-corrected version have been studied extensively, mainly in terms of bias and efficiency [W.W. Piegorsch, Maximum likelihood estimation for the negative binomial dispersion parameter, Biometrics 46 (1990), pp. 863–867; S.J. Clark and J.N. Perry, Estimation of the negative binomial parameter κ by maximum quasi-likelihood, Biometrics 45 (1989), pp. 309–316; K.K. Saha and S.R. Paul, Bias corrected maximum likelihood estimator of the negative binomial dispersion parameter, Biometrics 61 (2005), pp. 179–185]. However, not much work has been done on the construction of confidence intervals (C.I.s) for c. The purpose of this paper is to study the behaviour of some C.I. procedures for c. We study, by simulations, three Wald-type C.I. procedures based on the asymptotic distributions of the method of moments estimate (mme), the maximum-likelihood estimate (mle) and the bias-corrected mle (bcmle) [Saha and Paul, 2005] of c. All three methods show serious under-coverage. We further study parametric bootstrap procedures based on these estimates of c, which significantly improve the coverage probabilities. The bootstrap C.I.s based on the mle (Boot-MLE method) and the bcmle (Boot-BCM method) have coverage that is significantly better (empirical coverage close to the nominal coverage) than that of the corresponding bootstrap C.I. based on the mme, especially for small sample sizes and highly over-dispersed data. However, simulation results on the lengths of the C.I.s show that all three bootstrap procedures have larger average interval lengths. Therefore, for practical data analysis, the bootstrap C.I. Boot-MLE or Boot-BCM should be used, although the Boot-MLE method seems preferable to the Boot-BCM method in terms of both coverage and length. Furthermore, Boot-MLE needs less computation than Boot-BCM.

12.
This study constructs a simultaneous confidence region for two combinations of coefficients of linear models and for their ratios, based on the concept of generalized pivotal quantities. Many biological studies, such as those in genetics, assessment of drug effectiveness, and health economics, are interested in comparing several dose groups with a placebo group, and in the group ratios. The Bonferroni correction and the plug-in method based on the multivariate-t distribution have been proposed for simultaneous region estimation. However, the two methods are asymptotic procedures, and their performance with finite sample sizes has not been thoroughly investigated. Based on the concept of the generalized pivotal quantity, we propose a Bonferroni correction procedure and a generalized variable (GV) procedure to construct the simultaneous confidence regions. To address a genetic concern with the dominance ratio, we conduct a simulation study to empirically investigate the coverage probability and expected length of the methods for various combinations of sample sizes and values of the dominance ratio. The simulation results demonstrate that the simultaneous confidence region based on the GV procedure provides sufficient coverage probability and reasonable expected length; thus, it can be recommended in practice. Numerical examples using published data sets illustrate the proposed methods.

13.
In this paper, we investigate the nonparametric estimation of the distribution function F of an absolutely continuous random variable. Two methods are analyzed: the first is based on the empirical distribution function, expressed in terms of i.i.d. lattice random variables, and the second is the kernel method, which involves nonlattice random vectors dependent on the sample size n; the latter procedure produces a smooth distribution estimator that is explicitly corrected to reduce the effect of bias or variance. For both methods, the non-Studentized and Studentized statistics are considered, as well as their bootstrap counterparts, and asymptotic expansions are constructed to approximate their distribution functions via Edgeworth expansion techniques. On this basis, we obtain confidence intervals for F(x) and state the coverage error order achieved in each case.

14.
Given a pair of sample estimators of two independent proportions, bootstrap methods are a common strategy for deriving the associated confidence interval for the relative risk. We develop a new smooth bootstrap procedure, which generates pseudo-samples from a continuous quantile function. Under a variety of settings, our simulation studies show that our method performs as well as or better than asymptotic-theory-based and existing bootstrap methods, particularly for heavily unbalanced data, in terms of coverage probability and power. We illustrate our procedure as applied to several published data sets.
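For context, the plain percentile bootstrap that the smooth procedure improves on can be sketched as follows (this is the ordinary parametric-resampling baseline, not the smooth bootstrap proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

def rr_boot_ci(x1, n1, x2, n2, level=0.95, B=10000):
    """Percentile-bootstrap CI for the relative risk p1/p2 from two
    independent binomials: resample counts from Binomial(n_i, x_i/n_i)
    and take quantiles of the resampled risk ratios."""
    b1 = rng.binomial(n1, x1 / n1, B) / n1
    b2 = rng.binomial(n2, x2 / n2, B) / n2
    ok = b2 > 0                  # drop resamples with a zero denominator
    rr = b1[ok] / b2[ok]
    a = (1 - level) / 2
    return np.quantile(rr, [a, 1 - a])
```

The discreteness of the resampled counts is exactly what a smooth bootstrap (drawing pseudo-samples from a continuous quantile function) is designed to avoid, which matters most when one group's count is near zero.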

15.
Despite the simplicity of the Bernoulli process, developing good confidence interval procedures for its parameter, the probability of success p, is deceptively difficult. The binary data yield a discrete number of successes from a discrete number of trials, n. This discreteness results in actual coverage probabilities that oscillate with n for fixed values of p (and with p for fixed n). Moreover, this oscillation necessitates a large sample size to guarantee a good coverage probability when p is close to 0 or 1.

It is well known that the Wilson procedure is superior to many existing procedures because it is less sensitive to p than the others, and is therefore less costly. The procedures proposed in this article work as well as the Wilson procedure when 0.1 ≤ p ≤ 0.9, and are even less sensitive (i.e., more robust) than the Wilson procedure when p is close to 0 or 1. Specifically, when the nominal coverage probability is 0.95, the Wilson procedure requires a sample size of 1,021 to guarantee that the coverage probabilities stay above 0.92 for any 0.001 ≤ min{p, 1 − p} < 0.01. By contrast, our procedures guarantee the same coverage probabilities but need a sample size of only 177, without increasing either the expected interval width or the standard deviation of the interval width.
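For reference, the Wilson score interval used as the benchmark above can be sketched as:

```python
from scipy.stats import norm

def wilson_ci(x, n, level=0.95):
    """Wilson score interval for a binomial proportion: invert the
    score test, which recenters at (x + z^2/2) / (n + z^2)."""
    z = norm.ppf(1 - (1 - level) / 2)
    p_hat = x / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = z * ((p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) ** 0.5) / denom
    return centre - half, centre + half
```

Unlike the Wald interval, this never collapses to zero width at x = 0 or x = n, which is why its coverage degrades more gracefully near the boundary.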

16.
A confidence interval for the generalized variance of a matrix normal distribution with unknown mean is constructed which improves on the usual minimum size (i.e., minimum length or minimum ratio of endpoints) interval based on the sample generalized variance alone in terms of both coverage probability and size. The method is similar to the univariate case treated by Goutis and Casella (Ann. Statist. 19 (1991) 2015–2031).

17.
Pairwise comparisons for proportions estimated by pooled testing
When estimating the prevalence of a rare trait, pooled testing can confer substantial benefits when compared to individual testing. In addition to screening experiments for infectious diseases in humans, pooled testing has also been exploited in other applications such as drug testing, epidemiological studies involving animal disease, plant disease assessment, and screening for rare genetic mutations. Within a pooled-testing context, we consider situations wherein different strata or treatments are to be compared with the goals of assessing significant and practical differences between strata and ranking strata in terms of prevalence. To achieve these goals, we first present two simultaneous pairwise interval estimation procedures for use with pooled data. Our procedures rely on asymptotic results, so we investigate small-sample behavior and compare the two procedures in terms of simultaneous coverage probability and mean interval length. We then present a unified approach to determine pool sizes which deliver desired coverage properties while taking testing costs and interval precision into account. We illustrate our methods using data from an observational HIV study involving heterosexual males who use intravenous drugs.
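The prevalence estimate underlying such pooled-testing inference can be sketched as follows, under the usual textbook assumptions (perfect assay, equal pool sizes); this is the standard MLE, not the paper's full simultaneous procedure:

```python
def pooled_prevalence(T, m, s):
    """MLE of prevalence p from m pools of size s, of which T test
    positive, assuming a perfect assay. A pool is negative only if all
    s members are negative, so P(pool negative) = (1 - p)^s, giving
    p_hat = 1 - (1 - T/m)^(1/s)."""
    return 1 - (1 - T / m) ** (1 / s)
```

The efficiency gain of pooling is clearest for rare traits: a few pooled assays can pin down a small p about as well as many individual tests.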

18.
The current estimator of the degree of insect control by an insecticide in a field experiment laid out in randomized blocks is equal to one minus the cross-product ratio of a two-way table of total insect counts over blocks. Since much work has been done on estimating the common odds ratio over a number of strata in medical studies, a series of Monte Carlo studies was performed to investigate the possible use of these estimators and their standard errors in estimating the common degree of insect control over a number of blocks. Maximum likelihood, Mantel-Haenszel, and empirical logit estimators were evaluated and compared with back-transformed means over blocks of cross-product ratios on the arithmetic, logarithmic, and arcsine scales. The maximum likelihood and Mantel-Haenszel estimators had the smallest mean squared errors, but their standard error estimates were appropriate only when sampling distributions were approximately Poisson and there was little heterogeneity among plots within blocks in the natural rates of population change.

19.
Location-scale invariant Bickel–Rosenblatt goodness-of-fit tests (IBR tests) are considered in this paper to test the hypothesis that f, the common density function of the observed independent d-dimensional random vectors, belongs to a null location-scale family of density functions. The asymptotic behaviour of the test procedures for fixed and non-fixed bandwidths is studied using a unifying approach. We establish the limiting null distribution of the test statistics and the consistency of the associated tests, and we derive their asymptotic power against sequences of local alternatives. These results show the asymptotic superiority, for fixed and local alternatives, of IBR tests with fixed bandwidth over IBR tests with non-fixed bandwidth.

20.
In this article, we propose a simple method of constructing confidence intervals for a function of binomial success probabilities and for a function of Poisson means. The method involves finding an approximate fiducial quantity (FQ) for the parameters of interest. An FQ for a function of several parameters can be obtained by substitution. For the binomial case, the fiducial approach is illustrated by constructing confidence intervals for the relative risk and the odds ratio. Fiducial inferential procedures are also provided for estimating functions of several Poisson parameters. In particular, the fiducial approach is illustrated for interval estimation of the ratio of two Poisson means and of a weighted sum of several Poisson means. Simple approximations to the distributions of the FQs are also given for some problems. The merits of the procedures are evaluated by comparing them with existing asymptotic methods with respect to coverage probabilities and, in some cases, expected widths. Comparison studies indicate that the fiducial confidence intervals are very satisfactory, and they are comparable to or better than some available asymptotic methods. The fiducial method is easy to use and is applicable for finding confidence intervals for many commonly used summary indices. Some examples are used to illustrate and compare the results of the fiducial approach with those of other available asymptotic methods.
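As an illustration of the substitution idea for the ratio of two Poisson means: a commonly used approximate FQ for a Poisson mean λ, given count x and exposure t, is Gamma(x + 1/2)/t; substituting the two FQs gives an FQ for the ratio. Whether this specific Gamma form matches the authors' exact FQ is an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

def poisson_ratio_fci(x1, t1, x2, t2, level=0.95, B=20000):
    """Approximate fiducial CI for lambda1/lambda2 from Poisson counts
    x1, x2 with exposures t1, t2, using Gamma(x + 1/2)/t fiducial
    quantities for each mean and taking quantiles of their ratio."""
    g1 = rng.gamma(x1 + 0.5, 1.0, B) / t1   # FQ draws for lambda1
    g2 = rng.gamma(x2 + 0.5, 1.0, B) / t2   # FQ draws for lambda2
    ratio = g1 / g2                          # FQ for the ratio, by substitution
    a = (1 - level) / 2
    return np.quantile(ratio, [a, 1 - a])
```

The same substitution pattern extends directly to weighted sums: replace `g1 / g2` with, e.g., `w1 * g1 + w2 * g2` and take quantiles.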


Copyright©北京勤云科技发展有限公司  京ICP备09084417号