Similar articles
 20 similar articles found (search time: 31 ms)
1.
This article proposes a confidence interval procedure for an open question posed by Tamhane and Dunnett regarding inference on the minimum effective dose of a drug for binary data. We use a partitioning approach in conjunction with a confidence interval procedure to solve this problem. Binary data frequently arise in medical investigations in connection with dichotomous outcomes such as the development of a disease or the efficacy of a drug. The proposed procedure not only detects the minimum effective dose of the drug, but also provides estimation information on the treatment effect of the closest ineffective dose. Such information benefits follow-up investigations in clinical trials. We prove that, when the confidence interval for the pairwise comparison has (or asymptotically attains) confidence level 1 − α, the stepwise procedure strongly controls (or asymptotically controls) the familywise error rate at level α, which is a key criterion in dose finding. The new method is compared with other procedures in terms of power performance and coverage probability using simulations. The simulated results shed new light on the discernible features of the new procedure. An example on the investigation of acetaminophen is included.

2.
Low dose risk estimation via simultaneous statistical inferences
Summary.  The paper develops and studies simultaneous confidence bounds that are useful for making low dose inferences in quantitative risk analysis. Application is intended for risk assessment studies where human, animal or ecological data are used to set safe low dose levels of a toxic agent, but where study information is limited to high dose levels of the agent. Methods are derived for estimating simultaneous, one-sided, upper confidence limits on risk for end points measured on a continuous scale. From the simultaneous confidence bounds, lower confidence limits on the dose that is associated with a particular risk (often referred to as a benchmark dose) are calculated. An important feature of the simultaneous construction is that any inferences based on inverting the simultaneous confidence bounds apply automatically to the inverted bounds on the benchmark dose.

3.
Summary.  Controversy has intensified regarding the death-rate from cancer that is induced by a dose of radiation. In the models that are usually considered the hazard function is an increasing function of the dose of radiation. Such models can mask local variations. We consider the models of excess relative risk and of absolute risk and propose a nonparametric estimation of the dose effect by using a model selection procedure. This estimation deals with stratified data. We approximate the dose function by a collection of splines and select the best one according to the Akaike information criterion. Similarly, between the excess relative risk and excess absolute risk models, we choose the one that best fits the data. We propose a bootstrap method for calculating a pointwise confidence interval of the dose function. We apply our method to estimating the solid cancer and leukaemia death hazard functions from the Hiroshima data.
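The bootstrap step for a pointwise confidence interval can be sketched in a few lines. This is a generic illustration only: a straight-line fit stands in for the paper's spline collection, and all data, names and settings below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: 5 dose levels, 10 replicates each, with noise.
dose = np.repeat(np.arange(1.0, 6.0), 10)
resp = 0.3 * dose + rng.normal(0.0, 0.2, dose.size)

def fit_dose_effect(d, y, grid):
    """Fit a simple linear dose effect and evaluate it on a dose grid."""
    b1, b0 = np.polyfit(d, y, 1)
    return b0 + b1 * grid

grid = np.linspace(1.0, 5.0, 9)
point = fit_dose_effect(dose, resp, grid)

# Nonparametric bootstrap: resample (dose, response) pairs, refit,
# and take pointwise percentiles of the refitted curves.
B = 1000
boot = np.empty((B, grid.size))
for b in range(B):
    idx = rng.integers(0, dose.size, dose.size)
    boot[b] = fit_dose_effect(dose[idx], resp[idx], grid)

lower = np.percentile(boot, 2.5, axis=0)
upper = np.percentile(boot, 97.5, axis=0)
```

At each grid dose, the 2.5th and 97.5th bootstrap percentiles give an approximate pointwise 95% interval; note that pointwise bands do not provide simultaneous coverage over the whole grid.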

4.
The comparison of increasing doses of a compound to a zero dose control is of interest in medical and toxicological studies. Assume that the mean dose effects are non-decreasing among the non-zero doses of the compound. A simple procedure that modifies Dunnett's procedure is proposed to construct simultaneous confidence intervals for pairwise comparisons of each dose group with the zero dose control by utilizing the ordering of the means. The simultaneous lower bounds and upper bounds from the new procedure are monotone, which is not the case with Dunnett's procedure. This monotonicity is useful for categorizing dose levels. The expected gains of the new procedure over Dunnett's procedure are studied. A real-data example shows that the new procedure compares well with its predecessor.
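The monotonization idea can be illustrated directly: under non-decreasing mean effects, a valid lower bound for a higher dose is also at least the best lower bound seen at any lower dose, and symmetrically for upper bounds. The bound values below are hypothetical, and the running max/min step is a generic sketch of how ordered means yield monotone bounds, not the paper's exact construction.

```python
import numpy as np

# Hypothetical simultaneous bounds for (dose_i - control), i = 1..4,
# e.g. as produced by Dunnett-type intervals (values invented).
lower = np.array([0.10, 0.05, 0.30, 0.25])
upper = np.array([0.90, 0.80, 1.10, 1.00])

# Under non-decreasing dose means, tighten each lower bound by the best
# lower bound at any lower dose, and each upper bound by the best upper
# bound at any higher dose; both results are monotone non-decreasing.
lower_m = np.maximum.accumulate(lower)
upper_m = np.minimum.accumulate(upper[::-1])[::-1]
```

The monotonized bounds are never wider than the originals, so coverage is preserved while dose levels become easier to categorize.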

5.
Statistical methods of risk assessment for continuous variables
Adverse health effects for continuous responses are not as easily defined as adverse health effects for binary responses. Kodell and West (1993) developed methods for defining adverse effects for continuous responses and the associated risk. Procedures were developed for finding point estimates and upper confidence limits for additional risk under the assumption of a normal distribution and quadratic mean response curve with equal variances at each dose level. In this paper, methods are developed for point estimates and upper confidence limits for additional risk at experimental doses when the equal variance assumption is relaxed. An interpolation procedure is discussed for obtaining information at doses other than the experimental doses. A small simulation study is presented to test the performance of the methods discussed.

6.
The purpose of toxicological studies is a safety assessment of compounds (e.g. pesticides, pharmaceuticals, industrial chemicals and food additives) at various dose levels. Because a mistaken declaration that a truly non-equivalent dose is equivalent could have dangerous consequences, it is important to adopt reliable statistical methods that can properly control the family-wise error rate. We propose a new stepwise confidence interval procedure for toxicological evaluation based on an asymmetric loss function. The new procedure is shown to be reliable in the sense that the corresponding family-wise error rate is well controlled at or below the pre-specified nominal level. Our simulation results show that the new procedure is to be preferred over the classical confidence interval procedure and the stepwise procedure based on Welch's approximation in terms of practical equivalence/safety. The implementation and significance of the new procedure are illustrated with two real data sets: one from a reproductive toxicological study on Nitrofurazone in Swiss CD-1 mice, and the other from a toxicological study on Aconiazide.

7.
The likelihood ratio method is used to construct a confidence interval for a population mean when sampling from a population with certain characteristics found in many applications, such as auditing. Specifically, a sample taken from this type of population usually consists of a very large number of zero values, plus a small number of nonzero values that follow some continuous distribution. In this situation, the traditional confidence interval constructed for the population mean is known to be unreliable. This article derives confidence intervals based on the likelihood-ratio-test approach by assuming (1) a normal distribution (normal algorithm) and (2) an exponential distribution (exponential algorithm). Because the error population distribution is usually unknown, it is important to study the robustness of the proposed procedures. We perform an extensive simulation study to compare the percentage of confidence intervals containing the true population mean using the two proposed algorithms with the percentage obtained from the traditional method based on the central limit theorem. It is shown that the normal algorithm is the most robust procedure against many different distributional error assumptions.
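A minimal sketch of the setting: for a zero-heavy sample, the traditional interval is the large-sample CLT interval below. Everything here (the helper name, the 92/8 split of zeros to exponential errors, the scale 50) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def clt_interval(x, z=1.96):
    """Traditional large-sample (CLT) 95% confidence interval for the mean."""
    m = x.mean()
    se = x.std(ddof=1) / np.sqrt(x.size)
    return m - z * se, m + z * se

# Audit-style sample: mostly exact zeros plus a few exponential errors.
x = np.concatenate([np.zeros(92), rng.exponential(50.0, 8)])
lo, hi = clt_interval(x)
```

Repeating this draw many times and counting how often (lo, hi) covers the true mean is exactly the robustness comparison the article performs; with such skewed populations the empirical coverage of the traditional interval typically falls short of the nominal level.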

8.
This study constructs a simultaneous confidence region for two combinations of coefficients of linear models and their ratios based on the concept of generalized pivotal quantities. Many biological studies, such as those on genetics, assessment of drug effectiveness, and health economics, are interested in a comparison of several dose groups with a placebo group and the group ratios. The Bonferroni correction and the plug-in method based on the multivariate-t distribution have been proposed for the simultaneous region estimation. However, the two methods are asymptotic procedures, and their performance in finite sample sizes has not been thoroughly investigated. Based on the concept of generalized pivotal quantity, we propose a Bonferroni correction procedure and a generalized variable (GV) procedure to construct the simultaneous confidence regions. To address a genetic concern of the dominance ratio, we conduct a simulation study to empirically investigate the coverage probability and expected length of the methods for various combinations of sample sizes and values of the dominance ratio. The simulation results demonstrate that the simultaneous confidence region based on the GV procedure provides sufficient coverage probability and reasonable expected length. Thus, it can be recommended in practice. Numerical examples using published data sets illustrate the proposed methods.

9.
We study the use of a Scheffé-style simultaneous confidence band as applied to low-dose risk estimation with quantal response data. We consider two formulations for the dose-response risk function, an Abbott-adjusted Weibull model and an Abbott-adjusted log-logistic model. Using the simultaneous construction, we derive methods for estimating upper confidence limits on predicted extra risk and, by inverting the upper bands on risk, lower bounds on the benchmark dose, or BMD, at a specific level of ‘benchmark risk’. Monte Carlo evaluations explore the operating characteristics of the simultaneous limits.
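For concreteness, under an Abbott-adjusted Weibull model R(d) = γ + (1 − γ)(1 − exp(−β·d^k)), the extra risk reduces to 1 − exp(−β·d^k), free of the background rate γ, so the BMD at a benchmark risk solves in closed form. The sketch below shows this plug-in point estimate only; the article's contribution is the simultaneous confidence limit on the BMD, which this does not reproduce.

```python
import math

def weibull_extra_risk(d, beta, k):
    """Extra risk under the Abbott-adjusted Weibull dose-response model:
    (R(d) - R(0)) / (1 - R(0)) = 1 - exp(-beta * d**k)."""
    return 1.0 - math.exp(-beta * d ** k)

def weibull_bmd(bmr, beta, k):
    """Benchmark dose: the dose whose extra risk equals the benchmark risk bmr."""
    return (-math.log(1.0 - bmr) / beta) ** (1.0 / k)

# Example: benchmark risk 10% with beta = 1, k = 1 gives BMD = -ln(0.9).
bmd = weibull_bmd(0.10, beta=1.0, k=1.0)
```

Inverting an upper confidence band on extra risk replaces the plug-in (beta, k) curve above with the band; reading off where the band crosses the benchmark risk yields the lower confidence limit on the BMD.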

10.
One of the most important issues in toxicity studies is the identification of the equivalence of treatments with a placebo. Because it is unacceptable to declare non‐equivalent treatments to be equivalent, it is important to adopt a reliable statistical method to properly control the family‐wise error rate (FWER). In dealing with this issue, it is important to keep in mind that overestimating toxicity equivalence is a more serious error than underestimating toxicity equivalence. Consequently, asymmetric loss functions are more appropriate than symmetric loss functions. Recently, Tao, Tang & Shi (2010) developed a new procedure based on an asymmetric loss function. However, their procedure is somewhat unsatisfactory because it assumes that the variances of the various dose levels are known. This assumption is restrictive for some applications. In this study we propose an improved approach based on asymmetric confidence intervals without the restrictive assumption of known variances. The asymmetry guarantees reliability in the sense that the FWER is well controlled. Although our procedure is developed assuming that the variances of the various dose levels are unknown but equal, simulation studies show that our procedure still performs quite well when the variances are unequal.

11.
We consider the problem of finding an upper 1 − α confidence limit (α < ½) for a scalar parameter of interest θ in the presence of a nuisance parameter vector ψ when the data are discrete. Using a statistic T as a starting point, Kabaila & Lloyd (1997) define what they call the tight upper limit with respect to T. This tight upper limit possesses certain attractive properties. However, these properties provide very little guidance on the choice of T itself. The practical recommendation made by Kabaila & Lloyd (1997) is that T be an approximate upper 1 − α confidence limit for θ rather than, say, an approximately median unbiased estimator of θ. We derive a large sample approximation which provides strong theoretical support for this recommendation.

12.
This paper proposes a new test procedure, called the rel test, to resolve the problem of small-sample local biasedness and non-monotonic power behavior of the Wald test for two linear restrictions, a problem caused by inaccuracy of the estimated covariance matrix of the estimator. The new procedure, which does not need the covariance matrix of the estimator, finds the critical region from contour points of the bootstrap percentile confidence limit of a rel, yielding a test with the desired size and good power properties. Simulation results indicate that the rel test performs rather well, both in controlling size and in having monotonically increasing power.

13.
ABSTRACT

Holm's step-down testing procedure starts with the smallest p-value and sequentially screens larger p-values without any information on confidence intervals. This article changes the conventional step-down testing framework by presenting a nonparametric procedure that starts with the largest p-value and sequentially screens smaller p-values in a step-by-step manner to construct a set of simultaneous confidence sets. We use a partitioning approach to prove that the new procedure controls the simultaneous confidence level (thus strongly controlling the familywise error rate). Discernible features of the new stepwise procedure include consistency with individual inference, coherence, and confidence estimations for follow-up investigations. In a simple simulation study, the proposed procedure, treated as a testing procedure, is more powerful than Holm's procedure when the correlation coefficient is large, and vice versa when it is small. In the data analysis of a medical study, the new procedure is able to detect the efficacy of Aspirin as a cardiovascular prophylaxis in a nonparametric setting.
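For reference, the Holm step-down baseline mentioned above can be sketched as follows; the function name and the example p-values are ours, but the procedure itself is the standard one.

```python
def holm_stepdown(pvals, alpha=0.05):
    """Holm's step-down procedure: returns a reject flag per hypothesis.

    Starting from the smallest p-value, the k-th smallest (k = 1..m) is
    compared with alpha / (m - k + 1); the first non-rejection stops the
    procedure, and all remaining hypotheses are retained.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvals[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break
    return reject

# Example: with alpha = 0.05 only the two smallest p-values are rejected,
# since the third smallest (0.03) exceeds 0.05 / 2 = 0.025.
flags = holm_stepdown([0.001, 0.04, 0.01, 0.03])
```

The proposed procedure reverses this screening direction, starting from the largest p-value, and augments each step with a confidence set rather than a bare accept/reject decision.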

14.
An algorithm is presented for computing an exact nonparametric interval estimate of the slope parameter in a simple linear regression model. The confidence interval is obtained by inverting the hypothesis test for slope that uses Spearman's rho. This method is compared to an exact procedure based on Kendall's tau. The Spearman rho procedure will generally give exact levels of confidence closer to desired levels, especially in small samples. Monte Carlo results comparing these two methods with the parametric procedure are given.
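The inversion idea can be sketched with a naive grid search (the article computes the interval exactly; the data, grid and names here are invented): a candidate slope b is retained when the Spearman test fails to reject independence between x and the residuals y − b·x.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
x = np.arange(1.0, 31.0)
y = 2.0 * x + rng.normal(0.0, 1.0, x.size)   # true slope 2, mild noise

def spearman_slope_ci(x, y, grid, alpha=0.05):
    """Approximate the slope interval by inverting the Spearman-rho test:
    keep each grid slope b for which x and y - b*x look independent."""
    accepted = [b for b in grid if spearmanr(x, y - b * x).pvalue > alpha]
    return min(accepted), max(accepted)

grid = np.arange(0.0, 4.0, 0.01)
lo, hi = spearman_slope_ci(x, y, grid)
```

The exact algorithm instead works directly with the finite set of candidate slopes formed by pairwise ratios, which is what makes the interval exact rather than grid-limited.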

15.
Consider dichotomous observations taken from T strata or tables, where within each table the effects of J > 2 doses or treatments are evaluated. The dose or treatment effect may be measured by various functions of the probability of outcomes, but it is assumed that the effect is the same in each table. Previous work on finding confidence intervals is specific to a particular function of the probabilities, based on only two doses, and limited to ML estimation of the nuisance parameters. In this paper, confidence intervals are developed based on the C(α) test, allowing for a unification and generalization of previous work. A computational procedure is given that minimizes the number of iterations required. An extension of the procedure to the regression framework, suitable when there are large numbers of sparse tables, is outlined.

16.
The Buehler 1 − α upper confidence limit is as small as possible, subject to the constraints that its coverage probability is at least 1 − α and that it is a non‐decreasing function of a pre‐specified statistic T. This confidence limit has important biostatistical and reliability applications. Previous research has examined how the choice of T affects the efficiency of the Buehler 1 − α upper confidence limit for a given value of α. This paper considers how T should be chosen when the Buehler limit is to be computed for a range of values of α. If T is allowed to depend on α then the Buehler limit is not necessarily a non‐increasing function of α, i.e. the limit is ‘non‐nesting’. Furthermore, non‐nesting occurs in standard and practical examples. Therefore, if the limit is to be computed for a range [αL, αU] of values of α, this paper suggests that T should be a carefully chosen approximate 1 − αL upper limit for θ. This choice leads to Buehler limits that have high statistical efficiency and are nesting.

17.
Responses in a one-factor experiment with k ordered treatments follow an umbrella ordering if they consist of two piecewise monotone segments, i.e. increasing and then decreasing, or the converse. This paper proposes a nonparametric distribution-free confidence procedure for umbrella orderings, the aim being to identify the treatments that correspond to the optimal effects. It uses a method that joins the seemingly unrelated theories of U-statistics and isotonic regression. A random confidence subset of the ordered treatments is constructed such that it contains all the unknown peaks (optimal treatments) of an umbrella ordering with any prespecified confidence level. The paper demonstrates that the proposed confidence procedure is nonparametric distribution-free and, further, that it naturally implies a test for umbrella alternatives. Since the proposed confidence procedure is always more informative than tests for umbrella alternatives, it should be used in their place in practice. An example illustrates the proposed procedure.

18.
Based on type II censored data, an exact lower confidence limit is constructed for the reliability function of a two-parameter exponential distribution, using the concept of a generalized confidence interval due to Weerahandi (J. Amer. Statist. Assoc. 88 (1993) 899). It is shown that the interval is exact, i.e., it provides the intended coverage. The confidence limit has to be numerically obtained; however, the required computations are simple and straightforward. An approximation is also developed for the confidence limit and its performance is numerically investigated. The numerical results show that compared to what is currently available, our approximation is more satisfactory in terms of providing the intended coverage, especially for small samples.
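A sketch of the generalized-confidence-interval computation, using the standard pivotal results for the two-parameter exponential under type II censoring: 2S/σ is chi-square with 2r − 2 degrees of freedom and 2n(X(1) − μ)/σ is chi-square with 2, independently, where S is the total time on test about X(1). The simulated data and Monte Carlo size below are invented for illustration; this approximates by simulation the limit the paper obtains numerically.

```python
import numpy as np

rng = np.random.default_rng(3)

# Type II censored sample: observe the first r of n order statistics
# from a two-parameter exponential with location mu and scale sigma.
n, r = 20, 15
mu_true, sigma_true = 1.0, 2.0
full = np.sort(mu_true + rng.exponential(sigma_true, n))
x = full[:r]                                   # observed order statistics

x1 = x[0]
S = np.sum(x - x1) + (n - r) * (x[-1] - x1)    # total time on test about x1

# Generalized pivotal quantities from the exact pivots:
# 2*S/sigma ~ chi2(2r - 2),  2*n*(X(1) - mu)/sigma ~ chi2(2).
M = 20000
W = rng.chisquare(2 * r - 2, M)
V = rng.chisquare(2, M)
sigma_g = 2.0 * S / W
mu_g = x1 - sigma_g * V / (2.0 * n)

t = 2.0                                        # mission time
R_g = np.where(t > mu_g, np.exp(-(t - mu_g) / sigma_g), 1.0)

# Monte Carlo 95% generalized lower confidence limit for R(t)
R_lower = np.percentile(R_g, 5.0)
```

The 5th percentile of the generalized pivotal sample for R(t) plays the role of the exact lower limit; the paper establishes that the corresponding exact interval attains the intended coverage.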

19.
We consider the problem of setting up a confidence region for the mean of a multivariate time series on the basis of a part-realisation of that series. A procedure for setting up a confidence interval for the mean of a univariate time series is implicit in Jones (1976). We present an analogous procedure for setting up a confidence region for the mean of a multivariate time series. This procedure is based on a statistic which is an analogue of Hotelling's T². Our results are applied to a comparison of climate means obtained from experiments with a General Circulation Model of the earth's atmosphere.

20.
Two multiple decision process approaches are proposed for unifying the non-inferiority, equivalence and superiority tests in a comparative clinical trial of a new drug against an active control. One is a confidence set method with confidence coefficient 0.95 that improves on the conventional 0.95 confidence interval with respect to the producer's risk and, in some cases, the consumer's risk. It requires that 0 be included within the region as well as that the non-inferiority margin be cleared, so that a trial with a somewhat large number of subjects and an inappropriately large non-inferiority margin, aimed at proving non-inferiority of a drug that is actually inferior, should be unsuccessful. The other is a closed testing procedure which combines the one- and two-sided tests by applying the partitioning principle and justifies the switching procedure by unifying the non-inferiority, equivalence and superiority tests. Regarding non-inferiority in particular, the proposed method simultaneously justifies the old Japanese Statistical Guideline (one-sided 0.05 test) and the International Guideline ICH E9 (one-sided 0.025 test). The method is particularly attractive in that it grades the strength of the evidence of relative efficacy of the test drug against the control at five levels according to the achievement of the clinical trial. The meaning of the non-inferiority test and the rationale of switching from it to the superiority test are also discussed.
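The switching logic, reduced to its simplest confidence-interval form, can be sketched as follows. This four-outcome version is a simplification for illustration, not the paper's five-level grading, and the function name and margin are ours.

```python
def classify_trial(ci_lower, ci_upper, margin):
    """Classify a new-drug-vs-active-control comparison from one two-sided
    confidence interval for the treatment difference (positive values mean
    the new drug is better). margin is the non-inferiority margin (> 0)."""
    if ci_lower > 0.0:
        return "superior"        # whole interval above 0
    if ci_lower > -margin:
        return "non-inferior"    # interval clears the margin but covers 0
    if ci_upper < -margin:
        return "inferior"        # whole interval below the margin
    return "inconclusive"
```

Because every outcome is read from the same interval, switching from a non-inferiority claim to a superiority claim requires no extra multiplicity adjustment, which is the behaviour the closed testing procedure formally justifies.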


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号