Similar Documents (20 results)
1.
Khuri (Technometrics 27 (1985) 213) and Levy and Neill (Comm. Statist. A 19 (1990) 1987) presented regression lack of fit tests for multiresponse data with replicated observations available at points in the experimental region, thereby extending the classical univariate lack of fit test given by Fisher (J. Roy. Statist. Soc. 85 (1922) 597). In this paper, multivariate tests for lack of fit in a linear multiresponse model are derived for the common circumstance in which replicated observations are not obtained. The tests are based on the union–intersection principle, and provide multiresponse extensions of the univariate tests for between- and within-cluster lack of fit introduced by Christensen (Ann. Statist. 17 (1989) 673; J. Amer. Statist. Assoc. 86 (1991) 752). Since the properties of these tests depend on the choice of multivariate clusters of the observations, a multiresponse generalization of the maximin power clustering criterion given by Miller, Neill and Sherfey (Ann. Statist. 26 (1998) 1411; J. Amer. Statist. Assoc. 94 (1999) 610) is also developed.

2.
Wang & Wells [J. Amer. Statist. Assoc. 95 (2000) 62] describe a non-parametric approach for checking whether the dependence structure of a random sample of censored bivariate data is appropriately modelled by a given family of Archimedean copulas. Their procedure is based on a truncated version of the Kendall process introduced by Genest & Rivest [J. Amer. Statist. Assoc. 88 (1993) 1034] and later studied by Barbe et al. [J. Multivariate Anal. 58 (1996) 197]. Although Wang & Wells (2000) determine the asymptotic behaviour of their truncated process, their model selection method is based exclusively on the observed value of its L2-norm. This paper shows how to compute asymptotic p-values for various goodness-of-fit test statistics based on a non-truncated version of Kendall's process. Conditions for weak convergence are met in the most common copula models, whether Archimedean or not. The empirical behaviour of the proposed goodness-of-fit tests is studied by simulation, and power comparisons are made with a test proposed by Shih [Biometrika 85 (1998) 189] for the gamma frailty family.
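The abstract does not reproduce the test statistics. For orientation only, the sketch below computes the empirical Kendall distribution K_n and a Cramér-von Mises-type distance to the Kendall distribution implied by a Clayton copula, for uncensored data with theta estimated by inverting Kendall's tau. The Clayton family, the grid approximation and the absence of censoring are simplifying assumptions; p-values would require the limiting Kendall process or a bootstrap, which are not coded here.

```python
import numpy as np
from scipy.stats import kendalltau

def empirical_kendall_distribution(x, y):
    """Empirical Kendall distribution K_n: the cdf of the pseudo-observations
    V_i = #{j : x_j < x_i and y_j < y_i} / (n - 1) (uncensored data only)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    v = np.array([np.sum((x < x[i]) & (y < y[i])) for i in range(n)]) / (n - 1)
    return lambda t: np.mean(v[:, None] <= np.atleast_1d(t), axis=0)

def clayton_kendall_gof(x, y, grid=500):
    """Cramer-von Mises-type distance between K_n and the Kendall distribution
    K_theta(t) = t + t * (1 - t**theta) / theta of a Clayton copula, with theta
    obtained by inverting Kendall's tau (tau = theta / (theta + 2), tau > 0
    assumed).  Only the statistic is returned; no p-value is computed here."""
    tau, _ = kendalltau(x, y)
    theta = 2.0 * tau / (1.0 - tau)
    K_n = empirical_kendall_distribution(x, y)
    t = np.linspace(1e-3, 1.0 - 1e-3, grid)
    K_theta = t + t * (1.0 - t ** theta) / theta
    # crude grid approximation of n * integral over (0,1) of (K_n - K_theta)^2
    return len(np.asarray(x)) * np.mean((K_n(t) - K_theta) ** 2)
```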

3.
For the assessment of agreement using probability criteria, we obtain an exact test, and for sample sizes exceeding 30, we give a bootstrap-t test that is remarkably accurate. We show that for assessing agreement, the total deviation index approach of Lin [2000. Total deviation index for measuring individual agreement with applications in laboratory performance and bioequivalence. Statist. Med. 19, 255–270] is not consistent and may not preserve its asymptotic nominal level, and that the coverage probability approach of Lin et al. [2002. Statistical methods in assessing agreement: models, issues and tools. J. Amer. Statist. Assoc. 97, 257–270] is overly conservative for moderate sample sizes. We also show that the nearly unbiased test of Wang and Hwang [2001. A nearly unbiased test for individual bioequivalence problems using probability criteria. J. Statist. Plann. Inference 99, 41–58] may be liberal for large sample sizes, and suggest a minor modification that gives a numerically equivalent approximation to the exact test for sample sizes of 30 or less. We present a simple and accurate sample size formula for planning studies on assessing agreement, and illustrate our methodology with a real data set from the literature.
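The exact and bootstrap-t constructions are not spelled out in the abstract. Purely as an illustration of the general idea, the sketch below builds a one-sided bootstrap-t test for an agreement probability P(|D| < delta); the threshold delta, the agreement bound theta0 = 0.80 and the binomial-type standard error are placeholder assumptions rather than the authors' statistics.

```python
import numpy as np

def coverage_prob(d, delta):
    """Empirical agreement probability P(|D| < delta) for paired differences d."""
    return np.mean(np.abs(d) < delta)

def bootstrap_t_agreement(d, delta, theta0=0.80, n_boot=2000, seed=0):
    """One-sided bootstrap-t test of H0: P(|D| < delta) <= theta0 against the
    agreement alternative theta > theta0.  Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    d = np.asarray(d, dtype=float)
    n = len(d)
    theta_hat = coverage_prob(d, delta)
    se_hat = np.sqrt(theta_hat * (1.0 - theta_hat) / n) + 1e-12
    t_obs = (theta_hat - theta0) / se_hat
    t_star = np.empty(n_boot)
    for b in range(n_boot):
        db = rng.choice(d, size=n, replace=True)
        tb = coverage_prob(db, delta)
        seb = np.sqrt(tb * (1.0 - tb) / n) + 1e-12
        t_star[b] = (tb - theta_hat) / seb   # studentised, centred at theta_hat
    p_value = np.mean(t_star >= t_obs)       # upper-tail bootstrap-t p-value
    return theta_hat, p_value
```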

4.
A procedure is proposed for the assessment of bioequivalence of variabilities between two formulations in bioavailability/bioequivalence studies. The procedure is essentially a two one-sided Pitman–Morgan tests procedure based on the correlation between crossover differences and subject totals. A nonparametric version of the proposed test is also discussed. A data set of AUC values from a 2×2 crossover bioequivalence trial is presented to illustrate the proposed procedures.
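The Pitman–Morgan idea, namely that Cov(T - cR, T + cR) = Var(T) - c^2 Var(R), so a zero-correlation t-test becomes a test on the variance ratio, can be turned into a two one-sided equivalence procedure roughly as sketched below. The paired-data formulation and the bounds theta_l, theta_u are illustrative; the paper's version is stated in terms of crossover differences and subject totals.

```python
import numpy as np
from scipy import stats

def pitman_morgan_tost(t_obs, r_obs, theta_l=0.5, theta_u=2.0, alpha=0.05):
    """Two one-sided Pitman-Morgan-type tests for equivalence of the variances
    of paired test/reference measurements.  Illustrative sketch: theta_l and
    theta_u are assumed equivalence bounds for the variance ratio."""
    t_obs, r_obs = np.asarray(t_obs, float), np.asarray(r_obs, float)
    n = len(t_obs)

    def one_sided_p(c, alternative):
        # Corr(T - c*R, T + c*R) has the sign of Var(T) - c^2 * Var(R),
        # so a zero-correlation t-test is a test on the variance ratio.
        d, s = t_obs - c * r_obs, t_obs + c * r_obs
        r = np.corrcoef(d, s)[0, 1]
        t_stat = r * np.sqrt((n - 2) / (1.0 - r ** 2))
        if alternative == "greater":            # H1: ratio above the bound
            return stats.t.sf(t_stat, df=n - 2)
        return stats.t.cdf(t_stat, df=n - 2)    # H1: ratio below the bound

    p_lower = one_sided_p(np.sqrt(theta_l), "greater")  # H0: ratio <= theta_l
    p_upper = one_sided_p(np.sqrt(theta_u), "less")     # H0: ratio >= theta_u
    return {"p_lower": p_lower, "p_upper": p_upper,
            "equivalent": max(p_lower, p_upper) < alpha}
```

Equivalence of variabilities is concluded only when both one-sided p-values fall below alpha, as in any TOST-type procedure.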

5.
In this paper, we introduce a new estimator of the entropy of a continuous random variable. We compare the proposed estimator with the existing estimators, namely those of Vasicek [A test for normality based on sample entropy, J. Roy. Statist. Soc. Ser. B 38 (1976), pp. 54–59], van Es [Estimating functionals related to a density by a class of statistics based on spacings, Scand. J. Statist. 19 (1992), pp. 61–72], Correa [A new estimator of entropy, Commun. Statist. Theory and Methods 24 (1995), pp. 2439–2449] and Wieczorkowski and Grzegorzewski [Entropy estimators improvements and comparisons, Commun. Statist. Simulation and Computation 28 (1999), pp. 541–567]. We next introduce a new test for normality. By simulation, the powers of the proposed test under various alternatives are compared with the normality tests proposed by Vasicek (1976) and Esteban et al. [Monte Carlo comparison of four normality tests using different entropy estimates, Commun. Statist. Simulation and Computation 30(4) (2001), pp. 761–785].
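For reference, Vasicek's spacing estimator, the baseline that the estimators above refine, is simple to compute. In the sketch below the window choice m ≈ √n and the small guard against tied spacings are illustrative conventions, not prescriptions from the papers cited; critical values for the normality statistic are obtained by simulation, as in those papers.

```python
import numpy as np

def vasicek_entropy(x, m=None):
    """Vasicek (1976) spacing-based entropy estimate:
    H_mn = (1/n) * sum_i log( n/(2m) * (X_(i+m) - X_(i-m)) ),
    with order statistics clamped at the sample minimum/maximum."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    if m is None:
        m = max(1, int(np.sqrt(n) + 0.5))      # common heuristic window
    upper = x[np.minimum(np.arange(n) + m, n - 1)]
    lower = x[np.maximum(np.arange(n) - m, 0)]
    spacings = np.maximum(upper - lower, 1e-12)  # guard against ties
    return np.mean(np.log(n / (2.0 * m) * spacings))

def vasicek_normality_stat(x, m=None):
    """Vasicek-type normality statistic K = exp(H_mn) / s; values well below
    sqrt(2*pi*e) point towards non-normality.  Critical values must be
    simulated and are not hard-coded here."""
    x = np.asarray(x, dtype=float)
    s = np.std(x)                  # divisor n, as in Vasicek's statistic
    return np.exp(vasicek_entropy(x, m)) / s
```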

6.
Traditional bioavailability studies assess average bioequivalence (ABE) between the test (T) and reference (R) products under the crossover design with TR and RT sequences. With highly variable (HV) drugs, whose intrasubject coefficient of variation in pharmacokinetic measures is 30% or greater, assertion of ABE becomes difficult due to the large sample sizes needed to achieve adequate power. In 2011, the FDA adopted a more relaxed, yet complex, ABE criterion and supplied a procedure to assess this criterion exclusively under TRR-RTR-RRT and TRTR-RTRT designs. However, designs with more than 2 periods are not always feasible. The present work investigates how to evaluate HV drugs under TR-RT designs. A mixed model with heterogeneous residual variances is used to fit data from TR-RT designs. Under the assumption of zero subject-by-formulation interaction, this basic model is comparable to the FDA-recommended model for TRR-RTR-RRT and TRTR-RTRT designs, suggesting the conceptual plausibility of our approach. To overcome the distributional dependency among summary statistics of model parameters, we develop statistical tests via the generalized pivotal quantity (GPQ). A real-world data example is given to illustrate the utility of the resulting procedures. Our simulation study identifies a GPQ-based testing procedure that evaluates HV drugs under practical TR-RT designs with a desirable type I error rate and reasonable power. In comparison to the FDA's approach, this GPQ-based procedure gives similar performance when the product's intersubject standard deviation is low (≤0.4) and is most useful when practical considerations restrict the crossover design to 2 periods.

7.
An Erratum has been published for this article in Pharmaceutical Statistics 2004; 3(3): 232. Since the early 1990s, average bioequivalence (ABE) has served as the international standard for demonstrating that two formulations of a drug product will provide the same therapeutic benefit and safety profile. Population (PBE) and individual (IBE) bioequivalence have been the subject of intense international debate since methods for their assessment were proposed in the late 1980s. Guidance has been proposed by the Food and Drug Administration (FDA) for the implementation of these techniques in the pioneer and generic pharmaceutical industries. To date, no consensus on the use of the IBE and PBE metrics has been established among regulators, academia and industry. The need for more stringent bioequivalence criteria has not been demonstrated, and it is known that the PBE and IBE criteria proposed by the FDA are actually less stringent under certain conditions. The statistical properties of method of moments and restricted maximum likelihood modelling in replicate designs are summarized, and the application of these techniques in the assessment of ABE, IBE and PBE is considered based on a database of 51 replicate design studies and using simulation.

8.
In this paper, we consider the bootstrap procedure for the augmented Dickey–Fuller (ADF) unit root test, implementing the modified divergence information criterion (MDIC; Mantalos et al. [An improved divergence information criterion for the determination of the order of an AR process, Commun. Statist. Comput. Simul. 39(5) (2010a), pp. 865–879; Forecasting ARMA models: A comparative study of information criteria focusing on MDIC, J. Statist. Comput. Simul. 80(1) (2010b), pp. 61–73]) for the selection of the optimum number of lags in the estimated model. The asymptotic distribution of the resulting bootstrap ADF/MDIC test is established and its finite sample performance is investigated through Monte Carlo simulations. The proposed bootstrap tests are found to have finite sample sizes that are generally much closer to their nominal values than those of tests that rely on other information criteria, such as the Akaike information criterion [H. Akaike, Information theory and an extension of the maximum likelihood principle, in Proceedings of the 2nd International Symposium on Information Theory, B.N. Petrov and F. Csáki, eds., Akadémiai Kiadó, Budapest, 1973, pp. 267–281]. The simulations reveal that the proposed procedure is quite satisfactory even for models with large negative moving average coefficients.
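The MDIC formula is not reproduced in the abstract, so the sketch below uses statsmodels' AIC-based lag selection as a stand-in and wires it into a generic residual-based bootstrap of the ADF statistic under the unit-root null. It illustrates the overall bootstrap-ADF/information-criterion workflow rather than the authors' exact algorithm.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.stattools import adfuller

def bootstrap_adf_pvalue(y, p=None, n_boot=499, seed=0):
    """Residual-based bootstrap of the ADF test under the unit-root null.
    The lag order is chosen by AIC here as a stand-in for MDIC; everything
    else is a generic sieve-type bootstrap sketch."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)

    # Observed ADF statistic with an information-criterion-selected lag
    adf_obs, _, used_lag, *_ = adfuller(y, maxlag=p,
                                        autolag="AIC" if p is None else None)
    p = max(used_lag if p is None else p, 1)

    # Fit an AR(p) to the differences (the null imposes a unit root in y)
    dy = np.diff(y)
    ar_fit = AutoReg(dy, lags=p, trend="c").fit()
    const, phi = ar_fit.params[0], ar_fit.params[1:]
    resid = ar_fit.resid - ar_fit.resid.mean()

    stats_boot = np.empty(n_boot)
    for b in range(n_boot):
        eps = rng.choice(resid, size=len(dy), replace=True)
        dy_b = np.zeros(len(dy))
        dy_b[:p] = dy[:p]                       # start-up values from the data
        for t in range(p, len(dy)):
            dy_b[t] = const + phi @ dy_b[t - p:t][::-1] + eps[t]
        y_b = np.concatenate(([y[0]], y[0] + np.cumsum(dy_b)))
        stats_boot[b], *_ = adfuller(y_b, maxlag=p, autolag=None)

    return np.mean(stats_boot <= adf_obs)       # left-tail bootstrap p-value
```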

9.
Drug switchability requires evidence of individual bioequivalence, which refers to the comparison of the closeness between the two distributions of pharmacokinetic (PK) responses from the same subject obtained under repeated administrations of the test and reference formulations. Advantages and drawbacks of the current statistical procedures for the assessment of individual bioequivalence are discussed, with emphasis on the aggregate-based criteria. An intersection–union test based on disaggregate criteria is proposed for the evaluation of individual bioequivalence. In addition, a modified aggregate criterion is suggested to overcome the drawbacks suffered by aggregate criteria. The relationships among the different criteria are examined, and the performance of the procedures is compared. A numerical example is given to illustrate the proposed procedures.

10.
In vitro permeation tests (IVPT) offer accurate and cost-effective development pathways for locally acting drugs, such as topical dermatological products. For the assessment of bioequivalence, the FDA draft guidance on generic acyclovir 5% cream introduces a new experimental design, namely the single-dose, multiple-replicate per treatment group design, as the IVPT pivotal study design. We examine the statistical properties of its hypothesis testing method, namely mixed scaled average bioequivalence (MSABE). Meanwhile, some adaptive design features in clinical trials can help researchers make a decision earlier with fewer subjects or boost power, saving resources, while controlling the impact on the family-wise error rate. Therefore, we incorporate MSABE in an adaptive design combining the group sequential design and sample size re-estimation. Simulation studies are conducted to study the passing rates of the proposed methods, both within and outside the average bioequivalence limits. We further consider modifications to the adaptive designs applied to IVPT BE trials, such as Bonferroni's adjustment and a conditional power function. Finally, a case study with real data demonstrates the advantages of such adaptive methods.

11.
In this paper we suggest a completely nonparametric test for the assessment of similar marginals of a multivariate distribution function. The test is based on the asymptotic normality of the Mallows distance between marginals. It is also shown that the n out of n bootstrap is weakly consistent, thus providing a theoretical justification for the work of Czado and Munk [2001. Bootstrap methods for the nonparametric assessment of population bioequivalence and similarity of distributions. J. Statist. Comput. Simulation 68, 243–280]. The test is extended to cross-over trials and is applied to the problem of population bioequivalence, where two formulations of a drug are shown to be similar up to a tolerable limit. This approach was investigated in small samples using bootstrap techniques by Czado and Munk (2001), who showed that the bias-corrected and accelerated bootstrap yields a very accurate and powerful finite-sample correction. A data example is discussed.
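As a rough sketch of the ingredients, the empirical Mallows (L2-Wasserstein) distance and a plain n-out-of-n percentile bootstrap bound could be computed as below; the quantile grid and the percentile (rather than BCa) interval are simplifications of what the paper and Czado and Munk (2001) actually study, and the tolerance delta0 is an assumed input.

```python
import numpy as np

def mallows_l2(x, y, grid=200):
    """Empirical Mallows (L2-Wasserstein) distance between two samples,
    computed from their quantile functions on a common probability grid."""
    probs = (np.arange(grid) + 0.5) / grid
    qx = np.quantile(np.asarray(x, float), probs)
    qy = np.quantile(np.asarray(y, float), probs)
    return np.sqrt(np.mean((qx - qy) ** 2))

def similarity_bootstrap(x, y, delta0, n_boot=2000, alpha=0.05, seed=0):
    """Percentile n-out-of-n bootstrap upper bound for the Mallows distance;
    similarity (population bioequivalence) is asserted when the bound falls
    below the tolerance delta0.  Simplified stand-in for the BCa correction."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    d_hat = mallows_l2(x, y)
    d_boot = np.array([
        mallows_l2(rng.choice(x, len(x), replace=True),
                   rng.choice(y, len(y), replace=True))
        for _ in range(n_boot)
    ])
    upper = np.quantile(d_boot, 1.0 - alpha)
    return {"distance": d_hat, "upper_bound": upper, "similar": upper < delta0}
```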

12.
This paper proposes a novel estimation of the coefficients in single-index regression models. Unlike traditional average derivative estimation [Powell JL, Stock JH, Stoker TM. Semiparametric estimation of index coefficients. Econometrica. 1989;57(6):1403–1430; Hardle W, Stoker TM. Investigating smooth multiple regression by the method of average derivatives. J Amer Statist Assoc. 1989;84(408):986–995] and semiparametric least squares estimation [Ichimura H. Semiparametric least squares (SLS) and weighted SLS estimation of single-index models. J Econometrics. 1993;58(1):71–120; Hardle W, Hall P, Ichimura H. Optimal smoothing in single-index models. Ann Statist. 1993;21(1):157–178], the procedure developed in this paper estimates the coefficients directly by minimizing the mean variation function and does not involve estimating the link function nonparametrically. As a result, it avoids the selection of the bandwidth or the number of knots, and its implementation is simpler and more robust. The resulting estimator is shown to be consistent. Numerical results and a real data analysis also show that the proposed procedure performs well under weak, essentially model-free assumptions.

13.
In this paper, we propose a new test for coefficient stability of an AR(1) model against the random coefficient autoregressive model of order 1, assuming neither a stationary nor a non-stationary process under the null hypothesis of a constant coefficient. The proposed test is obtained as a modification of the locally best invariant (LBI) test of Lee [(1998). Coefficient constancy test in a random coefficient autoregressive model. J. Statist. Plann. Inference 74, 93–101]. We examine the finite sample properties of the proposed test by Monte Carlo experiments, comparing it with other existing tests, in particular the LBI test of McCabe and Tremayne [(1995). Testing a time series for difference stationarity. Ann. Statist. 23 (3), 1015–1028], which is for the null of a unit root process against the alternative of a stochastic unit root process.

14.
The 2 × 2 crossover is commonly used to establish average bioequivalence of two treatments. In practice, the sample size for this design is often calculated under a supposition that the true average bioavailabilities of the two treatments are almost identical. However, the average bioequivalence analysis that is subsequently carried out does not reflect this prior belief and this leads to a loss in efficiency. We propose an alternate average bioequivalence analysis that avoids this inefficiency. The validity and substantial power advantages of our proposed method are illustrated by simulations, and two numerical examples with real data are provided.
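The alternate analysis itself is not described in the abstract. For context, the standard average-bioequivalence analysis of a 2×2 crossover that it is compared against can be sketched as follows; the (n, 2) per-sequence arrays of log AUC values are an assumed input layout.

```python
import numpy as np
from scipy import stats

def abe_tost_2x2(logauc_tr, logauc_rt, lower=np.log(0.8), upper=np.log(1.25)):
    """Standard average-bioequivalence analysis of a 2x2 crossover on the log
    scale: 90% CI for the T-R difference from within-subject period
    differences, declared bioequivalent if the CI lies inside [ln 0.8, ln 1.25].
    logauc_tr / logauc_rt are (n, 2) arrays of log AUC by period for the TR
    and RT sequences (illustrative input layout)."""
    logauc_tr = np.asarray(logauc_tr, dtype=float)
    logauc_rt = np.asarray(logauc_rt, dtype=float)
    d_tr = logauc_tr[:, 0] - logauc_tr[:, 1]    # (T - R) + period effect
    d_rt = logauc_rt[:, 0] - logauc_rt[:, 1]    # (R - T) + period effect
    n1, n2 = len(d_tr), len(d_rt)
    est = 0.5 * (d_tr.mean() - d_rt.mean())     # period effect cancels
    s2 = (((n1 - 1) * d_tr.var(ddof=1) + (n2 - 1) * d_rt.var(ddof=1))
          / (n1 + n2 - 2))
    se = 0.5 * np.sqrt(s2 * (1.0 / n1 + 1.0 / n2))
    tcrit = stats.t.ppf(0.95, n1 + n2 - 2)      # 90% CI == TOST at the 5% level
    ci = (est - tcrit * se, est + tcrit * se)
    return {"estimate": est, "ci90": ci,
            "bioequivalent": ci[0] > lower and ci[1] < upper}
```

Declaring bioequivalence when the 90% confidence interval falls inside the 80–125% limits on the original scale is equivalent to performing the two one-sided Schuirmann tests at the 5% level.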

15.
An asymptotic comparison of two recent tests for constant regression via the intermediate efficiency approach is developed here. It is shown that the constructions proposed by Eubank and Hart (Ann. Statist. 20 (1992) 1412) and Fan and Huang (J. Amer. Statist. Assoc. 96 (2001) 640) are efficient for one type of deviation only, which is the same for both tests. It is also inferred that, for other directions, the second solution outperforms the first one. The approach elaborated in this paper also allows one to calculate the intermediate efficiency of the classic Kolmogorov–Smirnov test, thus supplementing several earlier developments on this statistic.

16.
When measuring units is expensive or time-consuming, while ranking them is relatively easy and inexpensive, ranked set sampling (RSS) is known to be preferable to simple random sampling (SRS). Many authors have suggested extensions of RSS. As a variation, Al-Saleh and Al-Kadiri [Double ranked set sampling, Statist. Probab. Lett. 48 (2000), pp. 205–212] introduced double ranked set sampling (DRSS), which was extended by Al-Saleh and Al-Omari [Multistage ranked set sampling, J. Statist. Plann. Inference 102 (2002), pp. 273–286] to multistage ranked set sampling (MSRSS). The entropy of a random variable (r.v.) is a measure of its uncertainty, namely the amount of information required on average to determine the value of a (discrete) r.v. In this work, we discuss entropy estimation under the RSS design and the aforementioned extensions, and compare the results with those under the SRS design in terms of bias and root mean square error (RMSE). Motivated by the observed efficiency, we go on to investigate an entropy-based goodness-of-fit test for the inverse Gaussian distribution using RSS. Critical values for some sample sizes, determined by means of Monte Carlo simulations, are presented for each design. A Monte Carlo power analysis is performed under various alternative hypotheses in order to compare the proposed testing procedure with the existing methods. The results indicate that tests based on RSS and its extensions are superior alternatives to the entropy test based on SRS.
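As a reminder of the sampling scheme the comparisons build on, a ranked set sample of set size k and m cycles can be generated as below; the inverse Gaussian (Wald) population in the usage example and the assumption of error-free ("perfect") ranking are illustrative.

```python
import numpy as np

def ranked_set_sample(draw, k, m):
    """Generate a (perfect-ranking) ranked set sample with set size k and m
    cycles: for each cycle and each rank r, draw k units, keep the r-th
    smallest, discard the rest.  `draw(size)` is any sampler; ranking by the
    actual values below is the idealised perfect-ranking assumption."""
    sample = []
    for _ in range(m):            # cycles
        for r in range(k):        # ranks 1..k
            units = np.sort(draw(k))
            sample.append(units[r])
    return np.array(sample)       # final sample size is k * m

# Usage: RSS of final size k*m = 20 from an inverse Gaussian (Wald) population
rng = np.random.default_rng(1)
rss = ranked_set_sample(lambda size: rng.wald(1.0, 2.0, size=size), k=5, m=4)
```

Note that each RSS observation of final size k*m costs k*k*m ranked (but unmeasured) units, which is why the design pays off only when ranking is cheap relative to measurement.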

17.
This paper deals with a study of different types of tests for the two-sided c-sample scale problem. We consider the classical parametric test of Bartlett [M.S. Bartlett, Properties of sufficiency and statistical tests, Proc. R. Stat. Soc. Ser. A 160 (1937), pp. 268–282] and several nonparametric tests, especially the test of Fligner and Killeen [M.A. Fligner and T.J. Killeen, Distribution-free two-sample tests for scale, J. Amer. Statist. Assoc. 71 (1976), pp. 210–213], the test of Levene [H. Levene, Robust tests for equality of variances, in Contribution to Probability and Statistics, I. Olkin, ed., Stanford University Press, Palo Alto, 1960, pp. 278–292] and a robust version of it introduced by Brown and Forsythe [M.B. Brown and A.B. Forsythe, Robust tests for the equality of variances, J. Amer. Statist. Assoc. 69 (1974), pp. 364–367], as well as two adaptive tests proposed by Büning [H. Büning, Adaptive tests for the c-sample location problem – the case of two-sided alternatives, Comm. Statist. Theory Methods 25 (1996), pp. 1569–1582] and Büning [H. Büning, An adaptive test for the two-sample scale problem, Nr. 2003/10, Diskussionsbeiträge des Fachbereich Wirtschaftswissenschaft der Freien Universität Berlin, Volkswirtschaftliche Reihe, 2003], which are based on the principle of Hogg [R.V. Hogg, Adaptive robust procedures. A partial review and some suggestions for future applications and theory, J. Amer. Statist. Assoc. 69 (1974), pp. 909–927]. For all the tests we also use bootstrap sampling strategies. We compare all the tests via Monte Carlo methods by investigating the level α and power β of the tests for distributions with different degrees of tailweight and skewness and for various sample sizes. It turns out that the test of Fligner and Killeen in combination with the bootstrap is the best one among all tests considered.
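As one concrete example of combining a scale test with the bootstrap, the sketch below attaches a resampling p-value to the Brown-Forsythe (median-centred Levene) statistic; the pooling-after-centring scheme used to impose the null is a common choice, not necessarily the strategy evaluated in the paper.

```python
import numpy as np
from scipy import stats

def bootstrap_bf_test(samples, n_boot=2000, seed=0):
    """Brown-Forsythe statistic with a bootstrap p-value.  The null of equal
    scales is imposed by pooling the median-centred observations and
    resampling groups of the original sizes from that pool."""
    rng = np.random.default_rng(seed)
    samples = [np.asarray(s, dtype=float) for s in samples]
    stat_obs, _ = stats.levene(*samples, center="median")

    pooled = np.concatenate([s - np.median(s) for s in samples])
    sizes = [len(s) for s in samples]
    count = 0
    for _ in range(n_boot):
        boot_groups = [rng.choice(pooled, size=n, replace=True) for n in sizes]
        stat_b, _ = stats.levene(*boot_groups, center="median")
        count += stat_b >= stat_obs
    return stat_obs, (count + 1) / (n_boot + 1)
```

The same wrapper applies unchanged to the classical Levene statistic by setting center="mean" in the two scipy calls.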

18.
In this paper, we describe an overall strategy for robust estimation of multivariate location and shape, and the consequent identification of outliers and leverage points. Parts of this strategy have been described in a series of previous papers (Rocke, Ann. Statist., in press; Rocke and Woodruff, Statist. Neerlandica 47 (1993), 27–42, J. Amer. Statist. Assoc., in press; Woodruff and Rocke, J. Comput. Graphical Statist. 2 (1993), 69–95; J. Amer. Statist. Assoc. 89 (1994), 888–896) but the overall structure is presented here for the first time. After describing the first-level architecture of a class of algorithms for this problem, we review available information about possible tactics for each major step in the process. The major steps that we have found to be necessary are as follows: (1) partition the data into groups of perhaps five times the dimension; (2) for each group, search for the best available solution to a combinatorial estimator such as the Minimum Covariance Determinant (MCD) — these are the preliminary estimates; (3) for each preliminary estimate, iterate to the solution of a smooth estimator chosen for robustness and outlier resistance; and (4) choose among the final iterates based on a robust criterion, such as minimum volume. Use of this algorithm architecture can enable reliable, fast, robust estimation of heavily contaminated multivariate data in high (> 20) dimension even with large quantities of data. A computer program implementing the algorithm is available from the authors.
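A minimal sketch of the four-step architecture, using scikit-learn's MCD for the cell-wise preliminary estimates and a simple chi-square reweighting step in place of the smooth robust estimator of the original papers, might look as follows; the cell size, cutoff and iteration count are illustrative choices.

```python
import numpy as np
from scipy import stats
from sklearn.covariance import MinCovDet

def partitioned_robust_estimate(X, n_refine=20, seed=0):
    """Sketch of the four-step architecture described above: partition, local
    MCD starts, smooth refinement, minimum-volume selection.  The refinement
    used here is a plain chi-square reweighting, standing in for the smooth
    robust estimator of the original papers."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    n, p = X.shape

    # Step 1: partition into random cells of roughly 5*p observations
    cell_size = max(5 * p, p + 1)
    idx = rng.permutation(n)
    cells = [idx[i:i + cell_size] for i in range(0, n, cell_size)
             if len(idx[i:i + cell_size]) > p]

    candidates = []
    cutoff = stats.chi2.ppf(0.975, df=p)
    for cell in cells:
        # Step 2: preliminary estimate from MCD within the cell
        mcd = MinCovDet(random_state=0).fit(X[cell])
        mu, sigma = mcd.location_, mcd.covariance_
        # Step 3: refine on the full data by iterated hard reweighting
        for _ in range(n_refine):
            d2 = np.einsum("ij,jk,ik->i", X - mu, np.linalg.pinv(sigma), X - mu)
            w = (d2 <= cutoff).astype(float)
            if w.sum() <= p:          # degenerate start; keep previous estimate
                break
            mu = np.average(X, axis=0, weights=w)
            sigma = np.cov((X - mu).T, aweights=w, bias=True)
        candidates.append((np.linalg.det(sigma), mu, sigma))

    # Step 4: keep the candidate with the smallest covariance volume
    _, mu_best, sigma_best = min(candidates, key=lambda c: c[0])
    return mu_best, sigma_best
```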

19.
A test for assessing the equivalence of the two variances of a bivariate normal vector is constructed. It is uniformly more powerful than the two one-sided tests procedure, and the power improvement is substantial. Numerical studies show that it has a type I error close to the test level at most boundary points of the null hypothesis space. One can apply this test to paired difference experiments or 2×2 crossover designs to compare the variances of two populations using two correlated samples. The application of this test to bioequivalence in variability is presented. We point out that bioequivalence in intra-subject variability implies bioequivalence in variability; however, the test for the latter has greater power.

20.
Use of the MVUE for the inverse-Gaussian distribution has recently been proposed by Nguyen and Dinh [Nguyen, T. T., Dinh, K. T. (2003). Exact EDF goodness-of-fit tests for inverse Gaussian distributions. Comm. Statist. (Simulation and Computation) 32(2):505–516], where a sequential application based on Rosenblatt's transformation [Rosenblatt, M. (1952). Remarks on a multivariate transformation. Ann. Math. Statist. 23:470–472] led the authors to solve the composite goodness-of-fit problem by solving the surrogate simple goodness-of-fit problem of testing uniformity of the independent transformed variables. In this note, we observe first that the proposal is not new, since it was made in a rather general setting in O'Reilly and Quesenberry [O'Reilly, F., Quesenberry, C. P. (1973). The conditional probability integral transformation and applications to obtain composite chi-square goodness-of-fit tests. Ann. Statist. 1:74–83]. It is shown on the other hand that the results in the paper of Nguyen and Dinh (2003) are incorrect in their Sec. 4, especially the Monte Carlo figures reported. Power simulations are provided here comparing these corrected results with two previously reported goodness-of-fit tests for the inverse Gaussian: the modified Kolmogorov–Smirnov test in Edgeman et al. [Edgeman, R. L., Scott, R. C., Pavur, R. J. (1988). A modified Kolmogorov–Smirnov test for inverse Gaussian distribution with unknown parameters. Comm. Statist. 17(B): 1203–1212] and the A²-based method in O'Reilly and Rueda [O'Reilly, F., Rueda, R. (1992). Goodness of fit for the inverse Gaussian distribution. Can. J. Statist. 20(4):387–397]. The results show clearly that there is a large loss of power in the method explored in Nguyen and Dinh (2003) due to an implicit exogenous randomization.
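For concreteness, the modified Kolmogorov-Smirnov route mentioned above amounts to computing the KS distance after fitting the inverse Gaussian by maximum likelihood, as in the sketch below; since the parameters are estimated, its null distribution must be simulated rather than read from standard KS tables.

```python
import numpy as np
from scipy import stats

def ig_edf_statistic(x):
    """Kolmogorov-Smirnov distance between the empirical cdf and the fitted
    inverse-Gaussian cdf with maximum likelihood estimates mu_hat = mean(x)
    and lam_hat = n / sum(1/x - 1/mu_hat).  Critical values have to be
    obtained by simulation because the parameters are estimated."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mu_hat = x.mean()
    lam_hat = n / np.sum(1.0 / x - 1.0 / mu_hat)
    # scipy parameterisation: IG(mean=mu, shape=lam) == invgauss(mu/lam, scale=lam)
    u = np.sort(stats.invgauss.cdf(x, mu_hat / lam_hat, scale=lam_hat))
    i = np.arange(1, n + 1)
    return np.max(np.maximum(i / n - u, u - (i - 1) / n))
```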
