Related Articles
20 related articles found.
1.
The purpose of toxicological studies is the safety assessment of compounds (e.g. pesticides, pharmaceuticals, industrial chemicals and food additives) at various dose levels. Because mistakenly declaring a truly non-equivalent dose to be equivalent could have dangerous consequences, it is important to adopt reliable statistical methods that properly control the family-wise error rate. We propose a new stepwise confidence interval procedure for toxicological evaluation based on an asymmetric loss function. The new procedure is shown to be reliable in the sense that the corresponding family-wise error rate is well controlled at or below the pre-specified nominal level. Our simulation results show that, in terms of establishing practical equivalence/safety, the new procedure is preferable to the classical confidence interval procedure and to the stepwise procedure based on Welch's approximation. The implementation and significance of the new procedure are illustrated with two real data sets: one from a reproductive toxicological study of Nitrofurazone in Swiss CD-1 mice, and the other from a toxicological study of Aconiazide.
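As context for the comparison above, here is a minimal sketch of a generic fixed-sequence stepwise safety screen built from one-sided Welch confidence bounds. The dose ordering, the equivalence margin `delta`, and the stopping rule are illustrative assumptions, not the authors' asymmetric-loss procedure.

```python
# A generic fixed-sequence stepwise safety screen: doses are compared with the
# control from lowest to highest via one-sided Welch confidence bounds against
# a margin `delta`, stopping at the first failure (an assumed sketch).
import numpy as np
from scipy import stats

def stepwise_safety(control, dose_groups, delta, alpha=0.05):
    """Return the doses declared equivalent/safe before the first failure."""
    safe = []
    x = np.asarray(control, float)
    for dose, y in dose_groups:          # assumed ordered from lowest to highest
        y = np.asarray(y, float)
        diff = y.mean() - x.mean()
        vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
        se = np.sqrt(vx + vy)
        # Welch-Satterthwaite degrees of freedom
        df = se**4 / (vx**2 / (len(x) - 1) + vy**2 / (len(y) - 1))
        upper = diff + stats.t.ppf(1 - alpha, df) * se   # one-sided upper bound
        if upper < delta:                # equivalence shown: go to the next dose
            safe.append(dose)
        else:                            # stop at the first non-equivalent dose
            break
    return safe
```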

2.
Holm's step-down testing procedure starts with the smallest p-value and sequentially screens larger p-values without providing any information on confidence intervals. This article changes the conventional step-down testing framework by presenting a nonparametric procedure that starts with the largest p-value and sequentially screens smaller p-values in a step-by-step manner to construct a set of simultaneous confidence sets. We use a partitioning approach to prove that the new procedure controls the simultaneous confidence level (thus strongly controlling the familywise error rate). Discernible features of the new stepwise procedure include consistency with individual inference, coherence, and confidence estimation for follow-up investigations. In a simple simulation study, the proposed procedure (treated as a testing procedure) is more powerful than Holm's procedure when the correlation coefficient is large, and vice versa when it is small. In a data analysis from a medical study, the new procedure is able to detect the efficacy of Aspirin as a cardiovascular prophylaxis in a nonparametric setting.
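For contrast with the step-up construction described above, a minimal sketch of the classical Holm step-down procedure, the comparison method in the simulation study:

```python
# Holm's step-down procedure: sort p-values in ascending order and test at
# thresholds alpha/m, alpha/(m-1), ..., stopping at the first non-rejection.
import numpy as np

def holm(pvalues, alpha=0.05):
    """Boolean rejection decision for each hypothesis (FWER <= alpha)."""
    p = np.asarray(pvalues, float)
    m = len(p)
    order = np.argsort(p)                 # smallest p-value first
    reject = np.zeros(m, dtype=bool)
    for step, idx in enumerate(order):
        if p[idx] <= alpha / (m - step):
            reject[idx] = True
        else:
            break                         # retain this and all larger p-values
    return reject

print(holm([0.001, 0.012, 0.04, 0.30]))   # -> [ True  True False False]
```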

3.
In this paper we discuss constructing confidence intervals based on asymptotic generalized pivotal quantities (AGPQs). An AGPQ associates a distribution with the corresponding parameter, and an asymptotically correct confidence interval can then be derived directly from this distribution, much as Bayesian or fiducial interval estimates are. We provide two general procedures for constructing AGPQs. We also present several examples to show that AGPQs can yield new confidence intervals with better finite-sample behavior than traditional methods.

4.
The problem of selecting a graphical model is considered as one of performing multiple tests simultaneously. The overall Type I error on the selected graph is controlled using the well-known Holm procedure. We prove that when a consistent edge-exclusion test is used, the selected graph is asymptotically equal to the true graph with probability at least a fixed level 1 − α. This method is then used for the selection of mixed concentration graph models by performing the χ²-edge-exclusion test. We also apply the method to two classical examples and to simulated data. We compare the overall error of the selected model with that obtained using the stepwise method, and establish that the control is better when Holm's procedure is used.

5.
Unless all of a drug is eliminated during each dosing interval, the plasma concentrations within a dosing interval will increase until the time course of the plasma concentrations becomes invariant from one dosing interval to the next, at which point steady state is reached. A simple method is presented for estimating the time to steady state of the drug concentration, based on the multiple-dose area under the plasma concentration–time curve and the effective rate of drug accumulation. Several point estimates and confidence intervals for the time to 90% of steady state are compared, and a recommendation is made on how to summarize and present the results. Copyright © 2009 John Wiley & Sons, Ltd.
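As background for the quantity being estimated, a minimal sketch of the textbook first-order approximation: if drug accumulation follows first-order kinetics with effective rate constant k, the fraction of steady state reached by time t is 1 − exp(−kt), giving t90 = ln(10)/k, about 3.32 half-lives. This is an illustrative stand-in, not the paper's AUC-based estimator or its confidence intervals.

```python
import math

def time_to_fraction_ss(k, fraction=0.90):
    """Time (in units of 1/k) to reach `fraction` of steady state."""
    return -math.log(1.0 - fraction) / k

half_life = 8.0                      # hypothetical elimination half-life, hours
k = math.log(2.0) / half_life        # first-order accumulation rate constant
print(time_to_fraction_ss(k))        # ~26.6 h, i.e. ~3.32 half-lives
```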

6.
In small area estimation, the empirical best linear unbiased predictor (EBLUP), or the empirical Bayes estimator (EB), in the linear mixed model is recognized to be useful because it gives a stable and reliable estimate of the mean of a small area. In practical situations where the EBLUP is applied to real data, it is important to evaluate how reliable it is. One method for this purpose is to construct a confidence interval based on the EBLUP. In this paper, we obtain an asymptotically corrected empirical Bayes confidence interval in a nested error regression model with unbalanced sample sizes and unknown variance components. The coverage probability is shown to attain the nominal confidence level in the second-order asymptotics. It is numerically demonstrated that the corrected confidence interval is superior to the conventional confidence interval based on the sample mean, in terms of both the coverage probability and the expected width of the interval. Finally, the method is applied to posted land price data from Tokyo and a neighboring prefecture.

7.
Summary. Controversy has intensified regarding the death rate from cancer induced by a dose of radiation. In the models that are usually considered, the hazard function is an increasing function of the radiation dose, and such models can mask local variations. We consider models of excess relative risk and of excess absolute risk and propose a nonparametric estimate of the dose effect via a model selection procedure; the estimation handles stratified data. We approximate the function of the dose by a collection of splines and select the best one according to the Akaike information criterion. In the same way, between the models of excess relative risk and excess absolute risk, we choose the model that best fits the data. We propose a bootstrap method for calculating a pointwise confidence interval for the dose function. We apply our method to the Hiroshima data to estimate the solid cancer and leukaemia death hazard functions.
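A minimal sketch of the selection step, assuming Gaussian least squares in place of the paper's excess-risk models: fit the dose function with truncated-power-basis splines for several knot counts and keep the fit with the smallest AIC. The basis, knot grid, and candidate set are illustrative assumptions.

```python
import numpy as np

def spline_basis(x, knots, degree=3):
    """Truncated power basis: 1, x, ..., x^degree, (x - knot)_+^degree."""
    cols = [x**j for j in range(degree + 1)]
    cols += [np.clip(x - t, 0, None)**degree for t in knots]
    return np.column_stack(cols)

def fit_by_aic(dose, response, candidate_num_knots=(1, 2, 3, 4, 5)):
    """Select the number of interior knots by the Akaike information criterion."""
    dose = np.asarray(dose, float)
    n = len(dose)
    best = None
    for m in candidate_num_knots:
        knots = np.quantile(dose, np.linspace(0, 1, m + 2)[1:-1])
        B = spline_basis(dose, knots)
        beta, *_ = np.linalg.lstsq(B, response, rcond=None)
        rss = float(np.sum((response - B @ beta) ** 2))
        aic = n * np.log(rss / n) + 2 * B.shape[1]   # Gaussian AIC up to constants
        if best is None or aic < best[0]:
            best = (aic, knots, beta)
    return best
```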

8.
To investigate the biological activities of a new compound or drug, experimenters usually compare a series of increasing doses with a control. Among other objectives, one may wish to investigate any possible dose-response trend and to determine the minimum effective dose among all the experimental doses. Williams (1971, 1972) proposed a procedure to test the dose-response trend and to identify the minimum effective dose based on normally distributed data. In this paper, we propose a similar test procedure that performs the same analysis based on a robust estimate of the average response. The proposed method is more resistant to outliers and more powerful than the Williams procedure when the data distribution deviates from normality. We illustrate the use of this procedure with data arising from a recent study.

9.
We develop in this paper a new procedure to construct simultaneous confidence bands for derivatives of mean curves in functional data analysis. The technique involves polynomial splines that provide an approximation to the derivatives of the mean functions, the covariance functions and the associated eigenfunctions. We show that the proposed procedure has desirable statistical properties. In particular, we first show that the proposed estimators of the derivatives of the mean curves are semiparametrically efficient. Second, we establish consistency results for the derivatives of the covariance functions and their eigenfunctions. Most importantly, we show that the proposed spline confidence bands are asymptotically efficient, as if all random trajectories were observed without error. Finally, the confidence band procedure is illustrated through numerical simulation studies and a real-data example.

10.
Consider a linear regression model with independent normally distributed errors. Suppose that the scalar parameter of interest is a specified linear combination of the components of the regression parameter vector. Also suppose that we have uncertain prior information that a parameter vector, consisting of specified distinct linear combinations of these components, takes a given value. Part of our evaluation of a frequentist confidence interval for the parameter of interest is the scaled expected length, defined to be the expected length of this confidence interval divided by the expected length of the standard confidence interval for this parameter, with the same confidence coefficient. We say that a confidence interval for the parameter of interest utilizes this uncertain prior information if (a) the scaled expected length of this interval is substantially less than one when the prior information is correct, (b) the maximum value of the scaled expected length is not too large and (c) this confidence interval reverts to the standard confidence interval, with the same confidence coefficient, when the data happen to strongly contradict the prior information. We present a new confidence interval for a scalar parameter of interest, with specified confidence coefficient, that utilizes this uncertain prior information. A factorial experiment with one replicate is used to illustrate the application of this new confidence interval.

11.
We suppose that a case is to be compared with controls on the basis of a test that gives a single discrete score. The score of the case may tie with the scores of one or more controls. However, scores relate to an underlying quantity of interest that is continuous, so an observed score can be treated as the rounded value of an underlying continuous score, which makes it reasonable to break ties. This paper addresses the problem of forming a confidence interval for the proportion of controls that have a lower underlying score than the case. In the absence of ties, this is the standard task of making inferences about a binomial proportion, for which many confidence interval methods have been proposed. We give a general procedure to extend these methods to handle ties, under the assumption that ties may be broken at random. Properties of the procedure are given, and an example examines its performance when it is used to extend several methods. A real example shows that an estimated confidence interval can be much too narrow if the uncertainty associated with ties is not taken into account. Software implementing the procedure is freely available.
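A minimal sketch of the random tie-breaking idea, using the Clopper–Pearson interval as the base method; counting each tied control as below the case with probability 1/2 and averaging the interval endpoints over many random breaks is an illustrative simplification of the paper's general procedure.

```python
import numpy as np
from scipy import stats

def tie_broken_interval(case, controls, alpha=0.05, n_breaks=2000, seed=0):
    """CI for the proportion of controls scoring below the case, with random tie-breaking."""
    rng = np.random.default_rng(seed)
    controls = np.asarray(controls)
    n = len(controls)
    below = int((controls < case).sum())
    ties = int((controls == case).sum())
    lowers, uppers = [], []
    for _ in range(n_breaks):
        x = below + rng.binomial(ties, 0.5)   # break each tie at random
        # Clopper-Pearson endpoints via the beta distribution
        lo = stats.beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
        hi = stats.beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
        lowers.append(lo)
        uppers.append(hi)
    return float(np.mean(lowers)), float(np.mean(uppers))

print(tie_broken_interval(case=7, controls=[3, 5, 7, 7, 7, 8, 9, 10, 6, 4]))
```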

12.
Existing equivalence tests for multinomial data are valid asymptotically, but the level is not properly controlled for small and moderate sample sizes. We resolve this difficulty by developing an exact multinomial test for equivalence and an associated confidence interval procedure. We also derive a conservative version of the test that is easy to implement even for very large sample sizes. Both tests use a notion of equivalence that is based on the cumulative distribution function, with two probability vectors being considered equivalent if their partial sums never differ by more than some specified constant. We illustrate the methods by applying them to Weldon's dice data, to data on the digits of π, and to data collected by Mendel. The Canadian Journal of Statistics 37: 47–59; © 2009 Statistical Society of Canada
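A minimal sketch of the equivalence notion described above: compute the maximum absolute difference between the partial sums (the CDFs over the ordered categories) of two probability vectors and compare it with a tolerance; the tolerance value here is an illustrative assumption.

```python
import numpy as np

def cdf_distance(p, q):
    """Maximum absolute difference between the partial sums of p and q."""
    return float(np.max(np.abs(np.cumsum(p) - np.cumsum(q))))

# Weldon-style dice example: observed face frequencies vs the fair-die vector.
observed = np.array([0.18, 0.17, 0.16, 0.17, 0.16, 0.16])
fair = np.full(6, 1 / 6)
tau = 0.03                                  # hypothetical equivalence tolerance
d = cdf_distance(observed, fair)
print(d, d <= tau)                          # ~0.0167 -> equivalent at tau = 0.03
```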

13.
In this article, we consider the problem of constructing simultaneous confidence intervals for odds ratios in 2 × k classification tables with a fixed reference level. We discuss six methods designed to control the familywise error rate and investigate these methods in terms of simultaneous coverage probability and mean interval length. We illustrate the importance and the implementation of these methods using two HIV public health studies.
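The six methods are not named in the abstract; as one standard representative, here is a minimal sketch of Bonferroni-adjusted Wald intervals for the k − 1 odds ratios of a 2 × k table against a fixed reference column.

```python
import numpy as np
from scipy import stats

def simultaneous_or_cis(table, ref=0, alpha=0.05):
    """table: 2 x k counts; returns (OR, lower, upper) for each non-reference column."""
    table = np.asarray(table, float)
    k = table.shape[1]
    z = stats.norm.ppf(1 - alpha / (2 * (k - 1)))    # Bonferroni adjustment
    a, b = table[0, ref], table[1, ref]              # reference column counts
    out = []
    for j in range(k):
        if j == ref:
            continue
        c, d = table[0, j], table[1, j]
        log_or = np.log((c * b) / (d * a))
        se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of the log odds ratio
        out.append((np.exp(log_or),
                    np.exp(log_or - z * se), np.exp(log_or + z * se)))
    return out

print(simultaneous_or_cis([[20, 35, 50], [80, 65, 50]]))
```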

14.
Bioequivalence (BE) is required for approving a generic drug. The two one‐sided tests procedure (TOST, or the 90% confidence interval approach) has been used as the mainstream methodology to test average BE (ABE) on pharmacokinetic parameters such as the area under the blood concentration‐time curve and the peak concentration. However, for highly variable drugs (%CV > 30%), it is difficult to demonstrate ABE in a standard cross‐over study with the typical number of subjects using the TOST because of lack of power. Recently, the US Food and Drug Administration and the European Medicines Agency recommended similar but not identical reference‐scaled average BE (RSABE) approaches to address this issue. Although the power is improved, the new approaches may not guarantee a high level of confidence for the true difference between two drugs at the ABE boundaries. It is also difficult for these approaches to address the issues of population BE (PBE) and individual BE (IBE). We advocate the use of a likelihood approach for representing and interpreting BE data as evidence. Using example data from a full replicate 2 × 4 cross‐over study, we demonstrate how to present evidence using the profile likelihoods for the mean difference and standard deviation ratios of the two drugs for the pharmacokinetic parameters. With this approach, we present evidence for PBE and IBE as well as ABE within a unified framework. Our simulations show that the operating characteristics of the proposed likelihood approach are comparable with the RSABE approaches when the same criteria are applied. Copyright © 2014 John Wiley & Sons, Ltd.
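A minimal sketch of the TOST / 90% confidence interval approach to ABE mentioned above, assuming paired log-scale PK measurements for simplicity (a full cross-over analysis would also model period and sequence effects): the two drugs are declared average bioequivalent when the 90% CI for the mean log difference lies within (log 0.80, log 1.25).

```python
import numpy as np
from scipy import stats

def tost_abe(log_test, log_ref, alpha=0.05):
    """Return the back-transformed 90% CI for the ratio and the ABE decision."""
    d = np.asarray(log_test) - np.asarray(log_ref)   # within-subject contrasts
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    t = stats.t.ppf(1 - alpha, n - 1)
    lo, hi = d.mean() - t * se, d.mean() + t * se    # 90% CI when alpha = 0.05
    decision = bool(np.log(0.80) < lo and hi < np.log(1.25))
    return (float(np.exp(lo)), float(np.exp(hi))), decision
```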

15.
It is well known that a Bayesian credible interval for a parameter of interest is derived from a prior distribution that appropriately describes the prior information. It is less well known that there exists a frequentist approach, developed by Pratt (1961, Length of confidence intervals, J. Amer. Statist. Assoc. 56), that also utilizes prior information in the construction of frequentist confidence intervals. This approach produces confidence intervals that have minimum weighted average expected length, averaged according to a weight function that appropriately describes the prior information. We begin with a simple model as a starting point for comparing these two distinct procedures for interval estimation. Consider X1, …, Xn that are independent and identically N(μ, σ²) distributed random variables, where σ² is known, and the parameter of interest is μ. Suppose also that previous experience with similar data sets and/or specific background and expert opinion suggest that μ = 0. Our aim is to (a) develop two types of Bayesian 1 − α credible intervals for μ, derived from an appropriate prior cumulative distribution function F(μ), and, more importantly, (b) compare these Bayesian 1 − α credible intervals for μ with the frequentist 1 − α confidence interval for μ derived from Pratt's approach, in which the weight function corresponds to the prior cumulative distribution function F(μ). We show that the endpoints of the Bayesian 1 − α credible intervals for μ are very different from the endpoints of the frequentist 1 − α confidence interval for μ when the prior information strongly suggests that μ = 0 and the data support the uncertain prior information about μ. In addition, we assess the performance of these intervals by analyzing their coverage probability properties and expected lengths.
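A minimal sketch of the comparison in the abstract's setting, with a conjugate N(0, τ²) prior standing in for the general prior cdf F(μ); the Bayesian credible interval shrinks toward 0 while the standard frequentist interval does not. Pratt's weighted-expected-length construction itself is not reproduced here.

```python
import numpy as np
from scipy import stats

def intervals(x, sigma, tau, alpha=0.05):
    """Standard 1 - alpha CI for mu, and the credible interval under a N(0, tau^2) prior."""
    n = len(x)
    z = stats.norm.ppf(1 - alpha / 2)
    xbar = float(np.mean(x))
    freq = (xbar - z * sigma / np.sqrt(n), xbar + z * sigma / np.sqrt(n))
    post_var = 1.0 / (n / sigma**2 + 1.0 / tau**2)   # posterior for mu is normal
    post_mean = post_var * n * xbar / sigma**2       # shrinkage toward 0
    bayes = (post_mean - z * np.sqrt(post_var), post_mean + z * np.sqrt(post_var))
    return freq, bayes

rng = np.random.default_rng(1)
x = rng.normal(0.1, 1.0, size=20)        # data mildly consistent with mu = 0
print(intervals(x, sigma=1.0, tau=0.5))  # credible interval is shorter, nearer 0
```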

16.
This article deals with the estimation of the stress-strength parameter R = P(Y < X) when X and Y are independent Lindley random variables with different shape parameters. The uniformly minimum variance unbiased estimator has an explicit expression; however, its exact or asymptotic distribution is very difficult to obtain. The maximum likelihood estimator of the unknown parameter can also be obtained in explicit form. We obtain the asymptotic distribution of the maximum likelihood estimator, which can be used to construct a confidence interval for R. Different parametric bootstrap confidence intervals are also proposed. The Bayes estimator and the associated credible interval based on independent gamma priors on the unknown parameters are obtained using Monte Carlo methods. The different methods are compared using simulations, and one data analysis is performed for illustrative purposes.
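A minimal Monte Carlo sketch of the stress-strength parameter itself, using the fact that a Lindley(θ) variable is a mixture of an Exp(θ) component (weight θ/(1+θ)) and a Gamma(2, rate θ) component; this is a simulation check of R, not the paper's MLE, bootstrap, or Bayes procedures.

```python
import numpy as np

def rlindley(theta, size, rng):
    """Sample Lindley(theta) via its exponential/gamma mixture representation."""
    shape = np.where(rng.random(size) < theta / (1 + theta), 1.0, 2.0)
    return rng.gamma(shape, scale=1.0 / theta)

rng = np.random.default_rng(42)
x = rlindley(theta=0.5, size=200_000, rng=rng)   # strength X
y = rlindley(theta=1.5, size=200_000, rng=rng)   # stress Y
print((y < x).mean())                            # Monte Carlo estimate of R = P(Y < X)
```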

17.
Despite the simplicity of the Bernoulli process, developing good confidence interval procedures for its parameter, the probability of success p, is deceptively difficult. The binary data yield a discrete number of successes from a discrete number of trials, n. This discreteness results in actual coverage probabilities that oscillate with n for fixed values of p (and with p for fixed n). Moreover, this oscillation necessitates a large sample size to guarantee a good coverage probability when p is close to 0 or 1.

It is well known that the Wilson procedure is superior to many existing procedures because its coverage is less sensitive to p than that of other procedures, and it is therefore less costly. The procedures proposed in this article work as well as the Wilson procedure when 0.1 ≤ p ≤ 0.9, and are even less sensitive (i.e., more robust) than the Wilson procedure when p is close to 0 or 1. Specifically, when the nominal coverage probability is 0.95, the Wilson procedure requires a sample size of 1,021 to guarantee that the coverage probabilities stay above 0.92 for any 0.001 ≤ min{p, 1 − p} < 0.01. By contrast, our procedures guarantee the same coverage probabilities but need a sample size of only 177, without increasing either the expected interval width or the standard deviation of the interval width.
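A minimal sketch of the Wilson benchmark: the score interval together with an exact coverage computation that exhibits the oscillation in n described above.

```python
import numpy as np
from scipy import stats

def wilson(x, n, alpha=0.05):
    """Wilson score interval for x successes in n trials."""
    z = stats.norm.ppf(1 - alpha / 2)
    center = (x + z**2 / 2) / (n + z**2)
    half = (z / (n + z**2)) * np.sqrt(x * (n - x) / n + z**2 / 4)
    return center - half, center + half

def coverage(p, n, alpha=0.05):
    """Exact coverage: total binomial probability of x whose interval contains p."""
    x = np.arange(n + 1)
    lo, hi = wilson(x, n, alpha)
    return stats.binom.pmf(x, n, p)[(lo <= p) & (p <= hi)].sum()

for n in (40, 41, 42, 43):
    print(n, round(float(coverage(0.1, n)), 4))   # coverage oscillates with n
```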

18.
Abstract. Comparison of two samples can sometimes be conducted on the basis of an analysis of receiver operating characteristic (ROC) curves. A variety of methods for point estimation and confidence intervals for ROC curves have been proposed and well studied. We develop smoothed empirical likelihood-based confidence intervals for ROC curves when the samples are censored and generated from semiparametric models. The resulting empirical log-likelihood function is shown to be asymptotically chi-squared. Simulation studies illustrate that the proposed empirical likelihood confidence interval is advantageous over the normal approximation-based confidence interval. A real data set is analysed using the proposed method.

19.
Analysis of high-dimensional data often seeks to identify a subset of important features and assess their effects on the outcome. Traditional statistical inference procedures based on standard regression methods often fail in the presence of high-dimensional features. In recent years, regularization methods have emerged as promising tools for analyzing high-dimensional data. These methods simultaneously select important features and provide stable estimation of their effects. Adaptive LASSO and SCAD, for instance, give consistent and asymptotically normal estimates with oracle properties. However, in finite samples, it remains difficult to obtain interval estimators for the regression parameters. In this paper, we propose perturbation resampling based procedures to approximate the distribution of a general class of penalized parameter estimates. Our proposal, justified by asymptotic theory, provides a simple way to estimate the covariance matrix and confidence regions. Through finite sample simulations, we verify the ability of this method to give accurate inference and compare it to other widely used standard deviation and confidence interval estimates. We also illustrate our proposals with a data set used to study the association of HIV drug resistance and a large number of genetic mutations.
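A minimal sketch of the perturbation resampling idea: re-fit a penalized regression many times with i.i.d. positive random weights on the observations (implemented by rescaling rows by the square root of the weights) and read off percentile intervals from the perturbed estimates. Plain lasso stands in for the adaptive LASSO/SCAD estimators; the weight distribution and tuning value are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def perturbation_ci(X, y, lam=0.1, n_resamples=500, alpha=0.05, seed=0):
    """Percentile confidence limits for lasso coefficients via perturbation."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    estimates = np.empty((n_resamples, p))
    for b in range(n_resamples):
        w = rng.exponential(1.0, size=n)       # mean-one perturbation weights
        sw = np.sqrt(w)
        model = Lasso(alpha=lam, fit_intercept=False)
        model.fit(sw[:, None] * X, sw * y)     # weighted loss via row rescaling
        estimates[b] = model.coef_
    lo = np.percentile(estimates, 100 * alpha / 2, axis=0)
    hi = np.percentile(estimates, 100 * (1 - alpha / 2), axis=0)
    return lo, hi
```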

20.
In the present article, we develop asymptotically power-one partially sequential nonparametric tests for monitoring structural changes. Our test procedures are based on the Wilcoxon score and use curved stopping boundaries. We derive some exact results and perform simulation studies to examine various properties of the tests. One of the proposed procedures controls the Type I error rate particularly well and may be very effective for fluctuation monitoring. We illustrate the procedures using real-life data from the stock market.
