Similar Documents
Retrieved 20 similar documents (search time: 687 ms)
1.
In this paper we consider the risk of an estimator of the error variance after a pre-test for homoscedasticity of the variances in the two-sample heteroscedastic linear regression model. This particular pre-test problem has been well investigated, but always under the restrictive assumption of a squared error loss function. We instead consider an asymmetric loss function, the LINEX loss function, and derive the exact risks of various estimators of the error variance.
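For reference, the LINEX loss has a standard closed form; a minimal sketch (the constants `a` and `b` below are illustrative defaults, not the article's choices):

```python
import math

def linex_loss(estimate, truth, a=1.0, b=1.0):
    """LINEX loss b*(exp(a*d) - a*d - 1) with d = estimate - truth.
    Asymmetric: for a > 0, overestimation is penalized roughly
    exponentially while underestimation is penalized roughly linearly."""
    d = estimate - truth
    return b * (math.exp(a * d) - a * d - 1.0)
```

The risk of an estimator under this loss is the expectation of `linex_loss` over the sampling distribution; the asymmetry (for `a > 0`, `linex_loss(1.2, 1.0) > linex_loss(0.8, 1.0)`) is what distinguishes it from squared error.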

2.
The purpose of toxicological studies is a safety assessment of compounds (e.g. pesticides, pharmaceuticals, industrial chemicals and food additives) at various dose levels. Because a mistaken declaration that a truly non-equivalent dose is equivalent could have dangerous consequences, it is important to adopt reliable statistical methods that can properly control the family-wise error rate. We propose a new stepwise confidence interval procedure for toxicological evaluation based on an asymmetric loss function. The new procedure is shown to be reliable in the sense that the corresponding family-wise error rate is well controlled at or below the pre-specified nominal level. Our simulation results show that the new procedure is to be preferred over the classical confidence interval procedure and the stepwise procedure based on Welch's approximation in terms of practical equivalence/safety. The implementation and significance of the new procedure are illustrated with two real data sets: one from a reproductive toxicological study on Nitrofurazone in Swiss CD-1 mice, and the other from a toxicological study on Aconiazide.

3.
A test for two-sided equivalence of means has been developed under the assumption of normally distributed populations with heterogeneous variances. Its rejection region is bounded by functions ±h that depend on the empirical variances. h is defined implicitly by a partial differential equation, an exact solution of which would provide a test that is exactly similar at the boundary of the null hypothesis of non-equivalence. h is approximated by a Taylor series up to third powers in the reciprocal number of degrees of freedom. This suffices to obtain error probabilities of the first kind that are very close to a nominal level of α = 0.05 at the boundary of the null hypothesis. For more than 10 data points in each group, they range between 0.04995 and 0.05005, and are thus much more precise than those obtained by other authors.

4.
We present a unifying approach to multiple testing procedures for sequential (or streaming) data by giving sufficient conditions for a sequential multiple testing procedure to control the familywise error rate (FWER). Together, we call these conditions a 'rejection principle for sequential tests', which we then apply to some existing sequential multiple testing procedures to give a simplified understanding of their FWER control. Next, the principle is applied to derive two new sequential multiple testing procedures with provable FWER control, one for testing hypotheses in order and another for closed testing. Examples of these new procedures are given by applying them to a chromosome aberration data set and by finding the maximum safe dose of a treatment.

5.
Patient heterogeneity may complicate dose-finding in phase 1 clinical trials if the dose-toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method based on a hierarchical Bayesian dose-toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup-specific dose selection and safety rules. A simulation study is presented that includes comparison of this method to three alternative approaches, based on nonhierarchical models, that make different types of assumptions about within-subgroup dose-toxicity curves. The simulations show that the hierarchical model-based method is recommended in settings where the dose-toxicity curves are exchangeable between subgroups. We present practical guidelines for application and provide computer programs for trial simulation and conduct.

6.
Simultaneously testing a family of n null hypotheses arises in many applications. A common problem in multiple hypothesis testing is to control the Type-I error. The probability of at least one false rejection, referred to as the familywise error rate (FWER), is one of the earliest error rate measures, and many FWER-controlling procedures have been proposed. The ability to control the FWER while achieving high power is often used to evaluate the performance of a controlling procedure. However, when testing multiple hypotheses, FWER and power alone are not sufficient for evaluating a controlling procedure's performance. Furthermore, the performance of a controlling procedure is also governed by experimental parameters such as the number of hypotheses, the sample size, the number of true null hypotheses and the data structure. This paper evaluates, under various experimental settings, the performance of some FWER-controlling procedures in terms of five indices: the FWER, the false discovery rate, the false non-discovery rate, the sensitivity and the specificity. The results can provide guidance on how to select an appropriate FWER-controlling procedure to meet a study's objective.
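As a concrete example of the kind of FWER-controlling procedure compared in such studies, Holm's step-down method can be sketched and its FWER estimated by simulation under the complete null (the specific procedures and settings the article studies may differ):

```python
import random

def holm_reject(pvals, alpha=0.05):
    """Holm step-down procedure: returns a list of booleans (True = reject).
    Tests p-values in increasing order against alpha/(m), alpha/(m-1), ...
    and stops at the first failure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvals[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break
    return reject

# Estimate the FWER under the complete null, where all p-values
# are independent Uniform(0, 1).
random.seed(0)
trials = 2000
fwer = sum(any(holm_reject([random.random() for _ in range(10)]))
           for _ in range(trials)) / trials
```

Under the complete null the simulated `fwer` lands near (slightly below) the nominal 0.05, illustrating the control property the article takes as a baseline before looking at the other four indices.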

7.
Equality of variances is one of the key assumptions of analysis of variance (ANOVA). There are several testing procedures available to validate this assumption, but it is rare to find a test procedure which controls the type I error rate while providing high statistical power. In this article, we introduce a bootstrap test based on the ratio of mean absolute deviances (RMD). We also propose a two-stage testing procedure where we first quantify the skewness of the distributions and then choose an appropriate test for homogeneity of variances. The performance of these test procedures is studied via a simulation study.
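A minimal sketch of an RMD-type bootstrap test follows; the centre-and-pool resampling scheme used here is a hypothetical stand-in for the article's scheme, not a reproduction of it:

```python
import random

def mad(x):
    """Mean absolute deviation from the sample mean."""
    m = sum(x) / len(x)
    return sum(abs(v - m) for v in x) / len(x)

def rmd_bootstrap_pvalue(x, y, n_boot=500, seed=1):
    """Bootstrap p-value for equal spread based on the RMD statistic
    mad(x) / mad(y). Under the null, both centred samples are pooled
    and resampled with replacement (an assumed scheme for illustration)."""
    rng = random.Random(seed)
    obs = mad(x) / mad(y)
    obs_ext = max(obs, 1.0 / obs)  # two-sided extremeness of the ratio
    mx, my = sum(x) / len(x), sum(y) / len(y)
    pooled = [v - mx for v in x] + [v - my for v in y]
    count = 0
    for _ in range(n_boot):
        bx = [rng.choice(pooled) for _ in x]
        by = [rng.choice(pooled) for _ in y]
        sx, sy = mad(bx), mad(by)
        if sx == 0 or sy == 0:       # degenerate resample: count as extreme
            count += 1
            continue
        stat = sx / sy
        if max(stat, 1.0 / stat) >= obs_ext:
            count += 1
    return count / n_boot
```

Identical samples give the maximal p-value of 1, while samples with very different spreads drive the p-value toward 0.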

8.
In this paper, we apply the empirical likelihood method to the heteroscedastic partially linear errors-in-variables model. For the cases of known and unknown error variances, two different empirical log-likelihood ratios for the parameter of interest are constructed. If the error variances are known, the empirical log-likelihood ratio is proved to have an asymptotic chi-squared distribution under the assumption that the errors form a sequence of stationary α-mixing random variables. Furthermore, if the error variances are unknown, we show that the proposed statistic is asymptotically chi-squared distributed when the errors are independent. Simulations are carried out to assess the performance of the proposed method.

9.
Estimation of each of, and of linear functions of, two order-restricted normal means is considered when the variances are unknown and possibly unequal. We replace the unknown variances with sample variances and construct isotonic regression estimators, which we call plug-in estimators, to estimate the ordered normal means. Under squared error loss, a necessary and sufficient condition is given for the plug-in estimators to improve upon the unrestricted maximum likelihood estimators uniformly. As for the estimation of linear functions of ordered normal means, we also show that when the variances are known, the restricted maximum likelihood estimator always improves upon the unrestricted maximum likelihood estimator uniformly, but when the variances are unknown, the plug-in estimator does not always do so.
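For the two-sample case, the plug-in idea can be sketched as isotonic regression with sample-variance weights (a simplified illustration of the construction, not the article's full treatment):

```python
def plugin_ordered_means(xbar1, xbar2, s1_sq, s2_sq, n1, n2):
    """Plug-in isotonic estimator for mu1 <= mu2.
    If the sample means already satisfy the order, keep them; otherwise
    pool them with weights n_i / s_i^2, the sample variances standing in
    for the unknown true variances."""
    if xbar1 <= xbar2:
        return xbar1, xbar2
    w1, w2 = n1 / s1_sq, n2 / s2_sq
    pooled = (w1 * xbar1 + w2 * xbar2) / (w1 + w2)
    return pooled, pooled
```

When the order is violated, both estimates collapse to a precision-weighted average; the article's question is when this plug-in step (with estimated rather than known weights) still dominates the unrestricted estimators.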

10.
We consider the construction and properties of influence functions in the context of functional measurement error models with replicated data. In these models, estimates of the parameters can be affected both by the individual observations and by the means of the replicated observations. We show that the influence function of the means of the replicates on the estimates of the regression coefficients can only be derived under the assumption that the error variances are known, while the influence function of the individual observations can only be derived simultaneously with their influence function on the estimators of the error variances.

11.
Toxicologists and pharmacologists often describe the toxicity of a chemical using the parameters of a nonlinear regression model. Thus estimation of the parameters of a nonlinear regression model is an important problem. The estimates of the parameters and their uncertainty estimates depend upon the underlying error variance structure in the model. Typically, a priori the researcher would not know whether the error variances are homoscedastic (i.e., constant across dose) or heteroscedastic (i.e., the variance is a function of dose). Motivated by this concern, in this paper we introduce an estimation procedure based on a preliminary test, which selects an appropriate estimation procedure according to the underlying error variance structure. Since outliers and influential observations are common in toxicological data, the proposed methodology uses M-estimators. The asymptotic properties of the preliminary test estimator are investigated; in particular, its asymptotic covariance matrix is derived. The performance of the proposed estimator is compared with several standard estimators using simulation studies. The proposed methodology is also illustrated using a data set obtained from the National Toxicology Program.

12.
This paper addresses the problems of frequentist and Bayesian estimation for the unknown parameters of the generalized Lindley distribution based on lower record values. We first derive exact explicit expressions for the single and product moments of lower record values, and then use these results to compute the means, variances and covariance between two lower record values. We next obtain the maximum likelihood estimators and associated asymptotic confidence intervals. Furthermore, we obtain Bayes estimators under the assumption of gamma priors on both the shape and the scale parameters of the generalized Lindley distribution, together with the associated highest posterior density interval estimates. The Bayesian estimation is studied with respect to both symmetric (squared error) and asymmetric (linear-exponential (LINEX)) loss functions. Finally, we compute Bayesian predictive estimates and predictive interval estimates for future record values. To illustrate the findings, one real data set is analyzed, and Monte Carlo simulations are performed to compare the performances of the proposed methods of estimation and prediction.
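The lower record values on which these estimators are based are straightforward to extract from an observed sequence; a small sketch:

```python
def lower_records(seq):
    """Extract the lower record values from a sequence: the first
    observation, plus every subsequent observation strictly smaller
    than all observations before it."""
    records = []
    for x in seq:
        if not records or x < records[-1]:
            records.append(x)
    return records
```

For example, `lower_records([5, 3, 4, 2, 6, 1])` yields `[5, 3, 2, 1]`: only successive new minima are retained.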

13.
The shape parameter of the Topp–Leone distribution is estimated in this article from the Bayesian viewpoint under the assumption of a known scale parameter. Bayes and empirical Bayes estimates of the unknown parameter are proposed under noninformative and suitable conjugate priors. These estimates are derived under squared error and linear-exponential (LINEX) loss functions. The risk functions of the proposed estimates are derived in analytical form. It is shown that the proposed estimates are minimax and admissible. The consistency of the proposed estimates under the squared error loss function is also proved. Numerical examples are provided.
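The Topp–Leone CDF with unit scale, F(x) = (x(2 − x))^ν on (0, 1), inverts in closed form, which gives a simple inverse-CDF sampler (a sketch that is handy for checking such estimates numerically; it is not part of the article):

```python
import math
import random

def topp_leone_sample(shape, n, seed=0):
    """Inverse-CDF sampling from the Topp-Leone distribution with unit
    scale, F(x) = (x * (2 - x)) ** shape on (0, 1).
    Solving u = (2x - x^2) ** shape for x gives
    x = 1 - sqrt(1 - u ** (1 / shape))."""
    rng = random.Random(seed)
    return [1.0 - math.sqrt(1.0 - rng.random() ** (1.0 / shape))
            for _ in range(n)]
```

Applying the CDF to each sample and re-inverting recovers the sample, which confirms the closed-form inverse.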

14.
There are several measures that are commonly used to assess the performance of a multiple testing procedure (MTP). These measures include power, overall error rate (family-wise error rate), and lack of power. In settings where the MTP is used to estimate a parameter, for example the minimum effective dose, bias is of interest. In some studies, the parameter has a set-like structure, and thus bias is not well defined. Nevertheless, the accuracy of estimation is one of the essential features of an MTP in such a context. In this paper, we propose several measures based on the expected values of loss functions that resemble bias. These measures are constructed to be useful in combination drug dose-response studies when the target is to identify all minimum efficacious drug combinations. One of the proposed measures allows for assigning different penalties for incorrectly overestimating and underestimating a true minimum efficacious combination. Several simple examples are considered to illustrate the proposed loss functions. Then, the expected values of these loss functions are used in a simulation study to identify the best procedure among several methods used to select the minimum efficacious combinations, where the measures take into account the investigator's preferences about possibly overestimating and/or underestimating a true minimum efficacious combination. The ideas presented in this paper can be generalized to construct measures that resemble bias in other settings. These measures can serve as an essential tool to assess the performance of several methods for identifying set-like parameters in terms of accuracy of estimation. Copyright © 2012 John Wiley & Sons, Ltd.

15.
Recurrent event data arise commonly in medical and public health studies. The analysis of such data has received extensive research attention, and various methods have been developed in the literature. Depending on the focus of scientific interest, the methods may be broadly classified as intensity-based counting process methods, mean function-based estimating equation methods, and the analysis of times to events or times between events. These methods and models cover a wide variety of practical applications. However, there is a critical assumption underlying those methods: variables need to be correctly measured. Unfortunately, this assumption is frequently violated in practice. It is quite common that some covariates are subject to measurement error. It is well known that covariate measurement error can substantially distort inference results if it is not properly taken into account. In the literature, there has been extensive research concerning measurement error problems in various settings. However, with recurrent events, there is little discussion on this topic. It is the objective of this paper to address this important issue. In this paper, we develop inferential methods which account for measurement error in covariates for models with multiplicative intensity functions or rate functions. Both likelihood-based inference and robust inference based on estimating equations are discussed. The Canadian Journal of Statistics 40: 530–549; 2012 © 2012 Statistical Society of Canada

16.
A test for assessing the equivalence of two variances of a bivariate normal vector is constructed. It is uniformly more powerful than the two one-sided tests procedure, and the power improvement is substantial. Numerical studies show that it has a type I error close to the test level at most boundary points of the null hypothesis space. One can apply this test to paired difference experiments or 2×2 crossover designs to compare the variances of two populations with two correlated samples. The application of this test to bioequivalence in variability is presented. We point out that bioequivalence in intra-variability implies bioequivalence in variability; the latter test, however, has larger power.
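A classical device behind tests of this kind is the Pitman–Morgan identity Cov(X + Y, X − Y) = Var(X) − Var(Y), so equal variances in a paired sample correspond to zero correlation between sums and differences. A sketch of the statistic (the article's uniformly more powerful test is built differently; this only illustrates the reduction, and assumes non-degenerate sums and differences):

```python
def pitman_morgan_corr(x, y):
    """Sample correlation between sums and differences of a paired
    sample. It is zero exactly when the two sample variances are equal,
    since Cov(X + Y, X - Y) = Var(X) - Var(Y)."""
    n = len(x)
    s = [a + b for a, b in zip(x, y)]
    d = [a - b for a, b in zip(x, y)]
    ms, md = sum(s) / n, sum(d) / n
    cov = sum((a - ms) * (b - md) for a, b in zip(s, d)) / (n - 1)
    vs = sum((a - ms) ** 2 for a in s) / (n - 1)
    vd = sum((b - md) ** 2 for b in d) / (n - 1)
    return cov / (vs * vd) ** 0.5
```

Testing equivalence of variances then becomes testing whether this correlation is close to zero, which is how the paired structure is reduced to a familiar one-sample problem.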

17.
This paper treats the problem of comparing different evaluations of procedures which rank the variances of k normal populations. Procedures are evaluated on the basis of loss functions appropriate to a particular goal. The goal considered involves ranking the variances of k independent normal populations when the corresponding population means are unknown. The variances are ranked by selecting samples of size n from each population and using the sample variances to obtain the ranking. Our results extend those of various authors who looked at the narrower problem of evaluating the standard procedure associated with selecting the smallest of the population variances (see, e.g., P. Somerville (1975)).

Different loss functions (both parametric and non-parametric) appropriate to the particular goal under consideration are proposed. Procedures are evaluated by the performance of their risk over a particular preference zone. The sample size n, the least favorable parametric configuration, and the maximum value of the risk are three quantities studied for each procedure. When k is small these quantities, calculated by numerical simulation, show which loss functions respond better and which respond worse to increases in sample size. Loss functions are compared with one another according to the extent of this response. Theoretical results are given for the case of asymptotically large k. It is shown that for certain cases the error incurred by using these asymptotic results is small when k is only moderately large.

This work is an outgrowth of, and extends, that of J. Reeves and M.J. Sobel (1987) by comparing procedures on the basis of the sample size (per population) required to obtain various bounds on the associated risk functions. New methodologies are developed to evaluate complete ranking procedures in different settings.

18.
When estimating in practical situations, asymmetric loss functions are often preferred over squared error loss, as the former is more appropriate in many estimation problems. We consider here the problem of fixed-precision point estimation of a linear parametric function of the regression coefficients in the multiple linear regression model using asymmetric loss functions. Due to the presence of nuisance parameters, the sample size for the estimation problem is not known beforehand, and hence we resort to adaptive multistage sampling methodologies. We discuss several multistage sampling techniques and compare their performance using simulation runs. The implementation of our proposed models is accomplished with a MATLAB 7.0.1 program run on a Pentium IV machine. Finally, we highlight the significance of such asymmetric loss functions with a few practical examples.
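A generic illustration of an adaptive multistage rule is the classical Stein-type two-stage procedure, in which a pilot sample determines the final sample size; this sketch uses a symmetric z-based precision target and is not the article's asymmetric-loss criterion:

```python
import math

def two_stage_sample_size(first_stage, d, z=1.96):
    """Stein-type two-stage rule (a generic sketch): after a pilot
    sample, the total sample size is chosen so that the half-width of a
    z-based confidence interval for the mean is about d, with the pilot
    variance standing in for the unknown variance."""
    n0 = len(first_stage)
    mean = sum(first_stage) / n0
    s2 = sum((x - mean) ** 2 for x in first_stage) / (n0 - 1)
    return max(n0, math.ceil(z * z * s2 / (d * d)))
```

The key multistage feature is visible even in this sketch: the final sample size is random because it depends on the pilot data through the estimated nuisance parameter `s2`.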

19.
The problem of comparing two independent groups of univariate data in the sense of testing for equivalence is considered for a fully nonparametric setting. The distribution of the data within each group may be a mixture of both a continuous and a discrete component, and no assumptions are made regarding the way in which the distributions of the two groups of data may differ from each other – in particular, the assumption of a shift model is avoided. The proposed equivalence testing procedure for this scenario refers to the median of the independent difference distribution, i.e. to the median of the differences between independent observations from the test group and the reference group, respectively. The procedure provides an asymptotic equivalence test, which is symmetric with respect to the roles of 'test' and 'reference'. It can be described either as a two-one-sided-tests (TOST) approach, or equivalently as a confidence interval inclusion rule. A one-sided variant of the approach can be applied analogously to non-inferiority testing problems. The procedure may be generalised to equivalence testing with respect to quantiles other than the median, and is closely related to tolerance interval type inference. Copyright © 2009 John Wiley & Sons, Ltd.
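The confidence-interval-inclusion form of such a procedure can be illustrated with a bootstrap stand-in for the article's asymptotic interval (the resampling scheme and quantile handling here are illustrative assumptions, not the article's derivation):

```python
import random
import statistics

def equivalence_median_diff(x, y, delta, alpha=0.05, n_boot=400, seed=0):
    """Interval-inclusion form of a TOST for the median of the
    independent differences X - Y (a bootstrap sketch).
    Declares equivalence when the (1 - 2*alpha) bootstrap CI for the
    median of all pairwise differences lies inside (-delta, delta)."""
    rng = random.Random(seed)
    meds = []
    for _ in range(n_boot):
        bx = [rng.choice(x) for _ in x]
        by = [rng.choice(y) for _ in y]
        meds.append(statistics.median(xi - yi for xi in bx for yi in by))
    meds.sort()
    lo = meds[int(alpha * n_boot)]
    hi = meds[int((1 - alpha) * n_boot) - 1]
    return lo > -delta and hi < delta
```

Equivalence holds when the whole interval sits inside the margin, which is exactly the CI-inclusion reading of a two-one-sided-tests procedure at level alpha.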

20.
Mixtures of linear regression models provide a popular approach for modeling nonlinear regression relationships. Traditional estimation of mixtures of regression models is based on a Gaussian error assumption, which is well known to be sensitive to outliers and extreme values. To overcome this issue, a new class of finite mixtures of quantile regressions (FMQR) is proposed in this article. Compared with existing Gaussian mixture regression models, the proposed FMQR model provides a complete specification of the conditional distribution of the response variable for each component. From the likelihood point of view, the FMQR model is equivalent to a finite mixture of regression models whose errors follow the asymmetric Laplace distribution (ALD), which can be regarded as an extension of the traditional mixture of regression models with normal error terms. An EM algorithm is proposed to obtain the parameter estimates of the FMQR model by exploiting a hierarchical representation of the ALD. Finally, iteratively weighted least squares estimation for each mixture component of the FMQR model is derived. Simulation studies are conducted to illustrate the finite-sample performance of the estimation procedure. Analysis of an aphid data set is used to illustrate our methodologies.
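The likelihood equivalence mentioned above rests on the fact that the ALD log-density is, up to a constant, the negative quantile check loss; a small sketch:

```python
import math

def check_loss(u, tau):
    """Quantile check loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def ald_logpdf(u, tau, sigma=1.0):
    """Log-density of the asymmetric Laplace distribution (location 0):
    log f(u) = log(tau * (1 - tau) / sigma) - rho_tau(u) / sigma,
    so maximizing the ALD likelihood minimizes the check loss."""
    return math.log(tau * (1 - tau) / sigma) - check_loss(u, tau) / sigma
```

This is the bridge the FMQR construction uses: fitting component-wise quantile regressions by minimizing check loss is the same computation as maximum likelihood under component-wise ALD errors.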
