Similar Documents
20 similar documents found.
1.
Group testing is the process of combining individual samples and testing them as a group for the presence of an attribute. The use of such testing to estimate proportions is an important statistical tool in many applications. When samples are collected and tested in groups of different size, complications arise in the construction of exact confidence intervals. In this case, the numbers of positive groups have a multivariate distribution, and the difficulty stems from the lack of a natural ordering of the sample points. Exact two-sided intervals such as the equal-tail method based on maximum likelihood estimation, and those based on joint probability or likelihood ratio statistics, have been considered previously. In this paper several new estimators are developed and assessed. We show that the combined tails (or Blaker) method, based on a suitable ordering statistic, is the best choice in this setting. The methods are illustrated using a study involving the infection prevalence of Myxobolus cerebralis among free-ranging fish.
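
To make the pooled-testing idea concrete (this is only a sketch under hypothetical counts, not the exact-interval construction studied in the paper), the prevalence can be estimated by maximizing a likelihood that combines strata of different group sizes:

```python
# A minimal sketch (not the paper's exact-interval construction): maximum likelihood
# estimation of prevalence from group-tested samples of unequal size. All group sizes
# and counts below are hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar

sizes = np.array([5, 10, 20])       # samples pooled per group, by stratum (hypothetical)
groups = np.array([40, 30, 20])     # number of groups tested in each stratum
positives = np.array([6, 9, 11])    # number of groups that tested positive

def neg_log_lik(p):
    # P(a group of size s tests positive) = 1 - (1 - p)^s
    pos_prob = 1.0 - (1.0 - p) ** sizes
    return -np.sum(positives * np.log(pos_prob)
                   + (groups - positives) * sizes * np.log(1.0 - p))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(f"MLE of prevalence: {res.x:.4f}")
```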

2.
The exact confidence region for log relative potency resulting from likelihood score methods (Williams (1988), An exact confidence interval for the relative potency estimated from a multivariate bioassay, Biometrics 44:861-868) will very likely consist of two disjoint confidence intervals. The two methods proposed by Williams, which aim to select just one (the same) confidence interval from the confidence region, are nearly – but not completely – consistent. The likelihood score interval and the likelihood ratio interval are asymptotically equivalent. Williams's very strong claim concerning the confidence coefficient of the second selection method remains theoretically unproved; simulations, however, show that it holds for a wide range of practical experimental situations.

3.
Empirical likelihood has attracted much attention in the literature as a nonparametric method. A recent paper by Lu & Peng (2002) [Likelihood based confidence intervals for the tail index. Extremes 5, 337–352] applied this method to construct a confidence interval for the tail index of a heavy-tailed distribution. It turns out that the empirical likelihood method, as well as other likelihood-based methods, performs better than the normal approximation method in terms of coverage probability. However, when the sample size is small, the confidence interval computed using the χ2 approximation has a serious undercoverage problem. Motivated by Tsao (2004) [A new method of calibration for the empirical loglikelihood ratio. Statist. Probab. Lett. 68, 305–314], this paper proposes a new method of calibration, which corrects the undercoverage problem.

4.
Applied statisticians and pharmaceutical researchers are frequently involved in the design and analysis of clinical trials where at least one of the outcomes is binary. Treatments are judged by the probability of a positive binary response. A typical example is the noninferiority trial, where it is tested whether a new experimental treatment is practically not inferior to an active comparator with a prespecified margin δ. Except for the special case of δ = 0, no exact conditional test is available, although approximate conditional methods (also called second-order methods) can be applied. However, in some situations, the approximation can be poor and the logical argument for approximate conditioning is not compelling. The alternative is to consider an unconditional approach. Standard methods like the pooled z-test are already unconditional, although approximate. In this article, we review and illustrate unconditional methods with a heavy emphasis on modern methods that can deliver exact, or near exact, results. For noninferiority trials based on either the rate difference or the rate ratio, our recommendation is to use the so-called E-procedure, based on either the score or likelihood ratio statistic. This test is effectively exact, computationally efficient, and respects monotonicity constraints in practice. We support our assertions with a numerical study, and we illustrate the concepts developed in theory with a clinical example in pulmonary oncology; R code to conduct all these analyses is available from the authors.
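
For orientation, the following is a minimal sketch of an approximate Wald-type noninferiority z-test on the rate difference, a close relative of the pooled z-test mentioned in the abstract; the counts and margin are hypothetical, and the exact E-procedure itself is not implemented here.

```python
# A minimal sketch of an approximate (Wald-type) noninferiority z-test on the rate
# difference -- a close relative of the pooled z-test mentioned above, not the exact
# E-procedure the article recommends. Counts and margin are hypothetical.
from math import sqrt
from scipy.stats import norm

x_new, n_new = 86, 100   # responders / patients, experimental arm (hypothetical)
x_ref, n_ref = 88, 100   # responders / patients, active comparator (hypothetical)
delta = 0.10             # prespecified noninferiority margin

p_new, p_ref = x_new / n_new, x_ref / n_ref
se = sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)

# H0: p_new - p_ref <= -delta   vs   H1: p_new - p_ref > -delta
z = (p_new - p_ref + delta) / se
print(f"z = {z:.3f}, one-sided p-value = {norm.sf(z):.4f}")
```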

5.
Trend tests in dose-response analysis are central problems in medicine. The likelihood ratio test is often used to test hypotheses involving a stochastic order. Stratified contingency tables are common in practice. The distribution theory of the likelihood ratio test has not been fully developed for stratified tables and more than two stochastically ordered distributions. For c strata of m × r tables, and for testing conditional independence against a simple stochastic order alternative, this article introduces a model-free test method and gives the asymptotic distribution of the test statistic, which is a chi-bar-squared distribution. A real data set concerning an ordered stratified table is used to show the validity of this test method.

6.
We develop an approach to evaluating frequentist model averaging procedures by considering them in a simple situation in which there are two nested linear regression models over which we average. We introduce a general class of model averaged confidence intervals, obtain exact expressions for the coverage and the scaled expected length of the intervals, and use these to compute these quantities for the model averaged profile likelihood (MPI) and model-averaged tail area confidence intervals proposed by D. Fletcher and D. Turek. We show that the MPI confidence intervals can perform more poorly than the standard confidence interval used after model selection but ignoring the model selection process. The model-averaged tail area confidence intervals perform better than the MPI and post-model-selection confidence intervals but, for the examples that we consider, offer little over simply using the standard confidence interval for θ under the full model, with the same nominal coverage.

7.
We consider the problem of detecting a 'bump' in the intensity of a Poisson process or in a density. We analyze two types of likelihood ratio-based statistics, which allow for exact finite sample inference and asymptotically optimal detection: the maximum of the penalized square root of log likelihood ratios ('penalized scan') evaluated over a certain sparse set of intervals, and a certain average of log likelihood ratios ('condensed average likelihood ratio'). We show that penalizing the square root of the log likelihood ratio — rather than the log likelihood ratio itself — leads to a simple penalty term that yields optimal power. The penalty derived in this way may prove useful for other problems that involve a Brownian bridge in the limit. The second key tool is an approximating set of intervals that is rich enough to allow for optimal detection, but also sparse enough that the validity of the penalization scheme can be justified simply via the union bound. This results in a considerable simplification in the theoretical treatment compared with the usual approach for this type of penalization technique, which requires establishing an exponential inequality for the variation of the test statistic. Another advantage of using the sparse approximating set is that it allows fast computation in nearly linear time. We present a simulation study that illustrates the superior performance of the penalized scan and of the condensed average likelihood ratio compared with the standard scan statistic.

8.
Let X and Y follow independent Burr type XII distributions, which share a common inner shape parameter. The maximum likelihood estimator of the parameter δ = P(X < Y) is studied based on record samples. The existence and uniqueness of the maximum likelihood estimator of δ based on record samples are established. When the inner shape parameter is known, an exact confidence interval of δ is derived; otherwise, the Fisher information matrix and two bootstrap methods are used to obtain three approximate confidence intervals of δ. The performances of the proposed methods are evaluated via Monte Carlo simulation. Two examples are provided for illustration.
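
As a quick numerical illustration of the quantity δ = P(X < Y), the following Monte Carlo sketch uses complete samples from two Burr type XII distributions with a common inner shape parameter; the parameter values are hypothetical, a particular parametrization is assumed, and the record-sample machinery of the paper is not reproduced.

```python
# A minimal Monte Carlo sketch of delta = P(X < Y) for Burr type XII variables sharing
# the inner shape parameter c, using complete (uncensored) samples only -- the
# record-sample estimators and intervals of the paper are not reproduced here.
# Parameter values are hypothetical; F(x) = 1 - (1 + x^c)^(-k) is assumed.
import numpy as np

rng = np.random.default_rng(1)
c = 2.0                  # common inner shape parameter
k_x, k_y = 1.5, 3.0      # outer shape parameters of X and Y (hypothetical)

def rburr12(k, size):
    # inverse-transform sampling from F(x) = 1 - (1 + x^c)^(-k)
    u = 1.0 - rng.random(size)              # uniform on (0, 1]
    return (u ** (-1.0 / k) - 1.0) ** (1.0 / c)

x, y = rburr12(k_x, 200_000), rburr12(k_y, 200_000)
delta_mc = np.mean(x < y)

# For this parametrization a short calculation gives P(X < Y) = k_x / (k_x + k_y).
print(f"Monte Carlo: {delta_mc:.4f}   closed form: {k_x / (k_x + k_y):.4f}")
```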

9.
Through random cut-points theory, the author extends inference for ordered categorical data to the unspecified continuum underlying the ordered categories. He shows that a random cut-point Mann-Whitney test yields slightly smaller p-values than the conventional test for most data. However, when at least P% of the data lie in one of the k categories (with P = 80 for k = 2, P = 67 for k = 3,…, P = 18 for k = 30), he also shows that the conventional test can yield much smaller p-values, and hence misleadingly liberal inference for the underlying continuum. The author derives formulas for exact tests; for k = 2, the Mann-Whitney test is but a binomial test.

10.
A method of calculating simultaneous one-sided confidence intervals for all ordered pairwise differences of the treatment effects, μj − μi, 1 ≤ i < j ≤ k, in a one-way model without any distributional assumptions is discussed. When it is known a priori that the treatment effects satisfy the simple ordering μ1 ≤ ⋯ ≤ μk, these simultaneous confidence intervals offer the experimenter a simple way of determining which treatment effects may be declared unequal, and they are more powerful than the usual two-sided Steel-Dwass procedure. Some exact critical points required by the confidence intervals are presented for k = 3 and small sample sizes, and other methods of critical point determination, such as asymptotic approximation and simulation, are discussed.

11.
In this article the author investigates empirical-likelihood-based inference for the parameters of the varying-coefficient single-index model (VCSIM). Unlike in the usual cases, without a bias correction the asymptotic distribution of the empirical likelihood ratio cannot achieve the standard chi-squared distribution. To this end, a bias-corrected empirical likelihood method is employed to construct confidence regions (intervals) for the regression parameters. Compared with those based on the normal approximation, these regions have two advantages: (1) they do not impose prior constraints on the shape of the regions; (2) they do not require the construction of a pivotal quantity, and the regions are range preserving and transformation respecting. A simulation study is undertaken to compare the empirical likelihood with the normal approximation in terms of coverage accuracy and average areas/lengths of confidence regions/intervals. A real data example is given to illustrate the proposed approach. The Canadian Journal of Statistics 38: 434–452; 2010 © 2010 Statistical Society of Canada

12.
It is shown that the exact null distribution of the likelihood ratio criterion for the sphericity test in the p-variate normal case and the marginal distribution of the first component of a (p − 1)-variate generalized Dirichlet model with a given set of parameters are identical. The exact distribution of the likelihood ratio criterion so obtained has a general format for every p. A novel idea is introduced here through which the complicated exact null distribution of the sphericity test criterion in multivariate statistical analysis is converted into an easily tractable marginal density in a generalized Dirichlet model. It provides a direct and easy method of computing p-values. The computation of p-values and a table of critical points corresponding to p = 3 and 4 are also presented.

13.
Numerous methods—based on exact and asymptotic distributions—can be used to obtain confidence intervals for the odds ratio in 2 × 2 tables. We examine ten methods for generating these intervals based on coverage probability, closeness of coverage probability to target, and length of confidence intervals. Based on these criteria, Cornfield's method, without the continuity correction, performed the best of the methods examined here. A drawback to use of this method is the significant possibility that the attained coverage probability will not meet the nominal confidence level. Use of a mid-P value greatly improves methods based on the "exact" distribution. When combined with the Wilson rule for selection of a rejection set, the resulting method is a procedure that performed very well. Crow's method, with use of a mid-P, performed well, although it was only a slight improvement over the Wilson mid-P method. Its cumbersome calculations preclude its general acceptance. Woolf's (logit) method—with the Haldane–Anscombe correction—performed well, especially with regard to length of confidence intervals, and is recommended based on ease of computation.
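
One of the methods named above, Woolf's logit interval with the Haldane–Anscombe correction, is simple enough to sketch directly; the 2 × 2 counts below are hypothetical.

```python
# A minimal sketch of one method named above: Woolf's logit interval for the odds
# ratio with the Haldane-Anscombe correction (add 0.5 to every cell of the 2 x 2
# table). The counts are hypothetical.
import numpy as np
from scipy.stats import norm

a, b, c, d = 12, 5, 7, 16                        # hypothetical 2 x 2 table counts
a, b, c, d = (x + 0.5 for x in (a, b, c, d))     # Haldane-Anscombe correction

log_or = np.log(a * d / (b * c))
se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
z = norm.ppf(0.975)

lower, upper = np.exp(log_or - z * se), np.exp(log_or + z * se)
print(f"OR = {np.exp(log_or):.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
```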

14.
Suppose p + 1 experimental groups correspond to increasing dose levels of a treatment and all groups are subject to right censoring. In such instances, permutation tests for trend can be performed based on statistics derived from the weighted log-rank class. This article uses saddlepoint methods to determine the mid-P-values for such permutation tests for any test statistic in the weighted log-rank class. Permutation simulations are replaced by analytical saddlepoint computations which provide extremely accurate mid-P-values that are exact for most practical purposes and almost always more accurate than normal approximations. The speed of mid-P-value computation allows for the inversion of such tests to determine confidence intervals for the percentage increase in mean (or median) survival time per unit increase in dosage. The Canadian Journal of Statistics 37: 5-16; 2009 © 2009 Statistical Society of Canada

15.
Various exact tests for statistical inference are available for powerful and accurate decision rules, provided that the corresponding critical values are tabulated or evaluated via Monte Carlo methods. This article introduces a novel hybrid method for computing p-values of exact tests by combining Monte Carlo simulations and statistical tables generated a priori. To use the data from Monte Carlo generations and tabulated critical values jointly, we employ kernel density estimation within Bayesian-type procedures. The p-values are linked to the posterior means of quantiles. In this framework, we present relevant information from the Monte Carlo experiments via likelihood-type functions, whereas tabulated critical values are used to reflect prior distributions. The local maximum likelihood technique is employed to compute functional forms of prior distributions from statistical tables. Empirical likelihood functions are proposed to replace parametric likelihood functions within the structure of the posterior mean calculations, providing a Bayesian-type procedure with a distribution-free set of assumptions. We derive the asymptotic properties of the proposed nonparametric posterior means of quantiles process. Using the theoretical propositions, we calculate the minimum number of Monte Carlo resamples needed for a desired level of accuracy on the basis of distances between the actual data characteristics (e.g. sample sizes) and the characteristics of the data used to present the corresponding critical values in a table. The proposed approach makes practical applications of exact tests simple and rapid. Implementations of the proposed technique are easily carried out via recently developed packages for Stata and R.

16.
For studies with dichotomous outcomes, inverse sampling (also known as negative binomial sampling) is often used when the subjects arrive sequentially, when the underlying response of interest is acute, and/or when the maximum likelihood estimators of some epidemiologic indices are undefined. Although exact unconditional inference has been shown to be appealing, its applicability and popularity are severely hindered by the notorious conservativeness due to the adoption of the maximization principle and by the tedious computing time due to the involvement of infinite summation. In this article, we demonstrate how these obstacles can be overcome by applying constrained maximum likelihood estimation and truncated approximation. The present work is motivated by confidence interval construction for the risk difference under inverse sampling. Wald-type and score-type confidence intervals based on inverting two one-sided tests and one two-sided test are considered. Monte Carlo simulations are conducted to evaluate the performance of these confidence intervals with respect to empirical coverage probability, empirical confidence width, and empirical left and right non-coverage probabilities. Two examples, from a maternal congenital heart disease study and a drug comparison study, are used to demonstrate the proposed methodologies.
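
For intuition only, here is a minimal sketch of a Wald-type interval for the risk difference under inverse sampling, built from the delta-method variance of the per-group MLE; it is a simplified relative of the intervals studied in the article, not the constrained-MLE score procedure, and the counts are hypothetical.

```python
# A minimal sketch of a Wald-type interval for the risk difference under inverse
# (negative binomial) sampling, using the delta-method variance p^2 (1 - p) / r of the
# MLE p_hat = r / n. This is a simplified relative of the intervals studied in the
# article, not the constrained-MLE score procedure; all counts are hypothetical.
from math import sqrt
from scipy.stats import norm

r1, n1 = 20, 250    # group 1: prespecified number of cases, total subjects observed
r2, n2 = 20, 420    # group 2 (hypothetical counts)

p1, p2 = r1 / n1, r2 / n2
var1 = p1 ** 2 * (1 - p1) / r1
var2 = p2 ** 2 * (1 - p2) / r2

diff = p1 - p2
half = norm.ppf(0.975) * sqrt(var1 + var2)
print(f"risk difference = {diff:.4f}, 95% Wald CI = ({diff - half:.4f}, {diff + half:.4f})")
```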

17.
For many diseases, logistical constraints render large incidence studies difficult to carry out. This becomes a drawback, particularly when a new study is needed each time the incidence rate is investigated in a new population. By carrying out a prevalent cohort study with follow-up it is possible to estimate the incidence rate if it is constant. The authors derive the maximum likelihood estimator (MLE) of the overall incidence rate, λ, as well as age-specific incidence rates, by exploiting the epidemiologic relationship (prevalence odds) = (incidence rate) × (mean duration), that is, P/[1 − P] = λ × µ. The authors establish the asymptotic distributions of the MLEs and provide approximate confidence intervals for the parameters. Moreover, the MLE of λ is asymptotically most efficient and is the natural estimator obtained by substituting the marginal maximum likelihood estimators for P and µ into P/[1 − P] = λ × µ. Following up the subjects allows the authors to develop these widely applicable procedures. The authors apply their methods to data collected as part of the Canadian Study of Health and Ageing to estimate the incidence rate of dementia amongst elderly Canadians. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
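
The substitution estimator described above amounts to a one-line calculation; the prevalence and mean-duration inputs in this sketch are hypothetical.

```python
# A minimal sketch of the substitution estimator implied by the relation above:
# lambda_hat = (P_hat / (1 - P_hat)) / mu_hat. The prevalence and mean-duration
# values are hypothetical.
p_hat = 0.08     # estimated prevalence
mu_hat = 5.2     # estimated mean duration of the condition, in years

incidence_rate = (p_hat / (1 - p_hat)) / mu_hat
print(f"estimated incidence rate: {incidence_rate:.4f} cases per person-year")
```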

18.
Methods of constructing exact tolerance intervals (β-expectation and β-content) for independent observations are well known. For the case of dependent observations, obtaining exact results is not possible. In this article we provide an approximate method of constructing β-expectation tolerance intervals via a Taylor series expansion. Examples with independent observations are considered in order to compare the intervals constructed with those obtained by the exact method. For non-stationary-type processes we propose a method of constructing approximate β-content tolerance intervals. Once again an example is given to illustrate the results.
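
For reference, the exact β-expectation tolerance interval in the benchmark case of independent normal observations can be sketched as follows (it coincides with a prediction interval for a single future observation); the approximate construction for dependent observations is not reproduced here, and the data are simulated.

```python
# A minimal sketch of the exact beta-expectation tolerance interval for independent
# normal observations (equivalently, a prediction interval for one future observation):
# xbar +/- t_{n-1,(1+beta)/2} * s * sqrt(1 + 1/n). The data are simulated; the
# Taylor-series construction for dependent observations is not reproduced here.
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=25)    # hypothetical sample
beta = 0.95

n, xbar, s = len(x), x.mean(), x.std(ddof=1)
half = t.ppf((1 + beta) / 2, df=n - 1) * s * np.sqrt(1 + 1 / n)
print(f"{beta:.0%}-expectation tolerance interval: ({xbar - half:.2f}, {xbar + half:.2f})")
```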

19.
Clinical trials are often designed to compare continuous non-normal outcomes. The conventional statistical method for such a comparison is the non-parametric Mann–Whitney test, which provides a P-value for testing the hypothesis that the distributions of both treatment groups are identical, but does not provide a simple and straightforward estimate of the treatment effect. For that, Hodges and Lehmann proposed estimating the shift parameter between the two populations and its confidence interval (CI). However, such a shift parameter does not have a straightforward interpretation, and its CI contains zero in some cases when the Mann–Whitney test produces a significant result. To overcome the aforementioned problems, we introduce the use of the win ratio for analysing such data. Patients in the new and control treatments are formed into all possible pairs. For each pair, the new-treatment patient is labelled a 'winner' or a 'loser' if it is known who had the more favourable outcome. The win ratio is the total number of winners divided by the total number of losers. A 95% CI for the win ratio can be obtained using the bootstrap method. Statistical properties of the win ratio statistic are investigated using two real trial data sets and six simulation studies. Results show that the win ratio method has about the same power as the Mann–Whitney method. We recommend the use of the win ratio method for estimating the treatment effect (and CI) and the Mann–Whitney method for calculating the P-value for comparing continuous non-normal outcomes when the number of tied pairs is small. Copyright © 2016 John Wiley & Sons, Ltd.
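
A minimal sketch of the win ratio and its bootstrap confidence interval, assuming a continuous outcome where larger values are more favourable; the data below are simulated and hypothetical.

```python
# A minimal sketch of the win ratio and a percentile-bootstrap CI for a continuous
# outcome, assuming larger values are more favourable. The data are simulated and
# hypothetical.
import numpy as np

rng = np.random.default_rng(42)
new = rng.exponential(scale=1.4, size=60)     # hypothetical outcomes, new treatment
ctrl = rng.exponential(scale=1.0, size=55)    # hypothetical outcomes, control

def win_ratio(a, b):
    # compare every (new, control) pair; ties count for neither side
    diff = a[:, None] - b[None, :]
    return np.sum(diff > 0) / np.sum(diff < 0)

wr = win_ratio(new, ctrl)
boot = [win_ratio(rng.choice(new, size=new.size, replace=True),
                  rng.choice(ctrl, size=ctrl.size, replace=True))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"win ratio = {wr:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```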

20.
In this paper, the stress-strength reliability, R, is estimated from type II censored samples from Pareto distributions. The classical inference includes obtaining the maximum likelihood estimator, an exact confidence interval, and confidence intervals based on the Wald and signed log-likelihood ratio statistics. Bayesian inference includes obtaining the Bayes estimator, an equi-tailed credible interval, and a highest posterior density (HPD) interval under both informative and non-informative prior distributions. The Bayes estimator of R is obtained using four methods: Lindley's approximation, the Tierney–Kadane method, Monte Carlo integration, and MCMC. We also compare the proposed methods by a simulation study and provide a real example to illustrate them.
