Similar articles
20 similar articles found.
1.
Applications of nonparametric methods to the evaluation of bioequivalence for two treatments are presented for independent samples and for a crossover design. Included are procedures for testing for equivalence in location, in dispersion, and in general. Also presented are procedures for the calculation of confidence limits. A general strategy for the evaluation of bioequivalence is developed which involves both hypothesis testing and the calculation of confidence limits for parameters which characterize departures from equivalence.
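A distribution-free location comparison of the kind surveyed above can be sketched with the Hodges-Lehmann estimate and its order-statistic confidence interval; the data, the 90% confidence level (two one-sided 5% tests, as is conventional in bioequivalence), and the margin `delta` below are illustrative assumptions, not the article's procedure.

```python
import numpy as np
from scipy import stats

def hodges_lehmann_shift(x, y):
    """Hodges-Lehmann estimate of the location shift x - y:
    the median of all pairwise differences."""
    diffs = np.subtract.outer(x, y).ravel()
    return np.median(diffs)

def shift_ci(x, y, conf=0.90):
    """Distribution-free CI for the shift, built from the ordered
    pairwise differences; the cutoff rank uses the normal
    approximation to the Mann-Whitney null distribution."""
    m, n = len(x), len(y)
    diffs = np.sort(np.subtract.outer(x, y).ravel())
    z = stats.norm.ppf((1 + conf) / 2)
    k = int(np.floor(m * n / 2 - z * np.sqrt(m * n * (m + n + 1) / 12)))
    k = max(k, 0)
    return diffs[k], diffs[m * n - 1 - k]

rng = np.random.default_rng(1)
test = rng.lognormal(0.0, 0.3, 24)   # hypothetical test-formulation responses
ref  = rng.lognormal(0.0, 0.3, 24)   # hypothetical reference responses
lo, hi = shift_ci(test, ref)
delta = 0.5  # illustrative equivalence margin
# declare location equivalence if the whole CI lies inside (-delta, delta)
print(hodges_lehmann_shift(test, ref), lo, hi, lo > -delta and hi < delta)
```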

2.
For the competing risk setting where the lifetime data are due to one of several distinct and exclusive causes, comparison of cause-specific hazard rates is of interest to researchers. In this paper we survey existing methods for related tests and provide a new test for a common overall rate for all causes and groups. Tests given in the literature for checking for a common rate for causes and a common rate for the groups are shown to be in the same framework as the proposed test for common overall rate. Asymptotics are shown to follow a common theme for each test. An extensive numerical and graphical investigation and an example are presented to substantiate the proposed methods.

3.
This article describes a method for partitioning with respect to a control for the situation in which the treatment sample sizes are unequal and also for the situation where the treatment sample sizes are equal except for a few missing values. Calculation of the critical values required for finding confidence limits is discussed and tables are presented for the “almost equal” sample size case. An application of this method to length of stay data for congestive heart failure patients is also provided.

4.
Critical values for Dunnett's multiple comparison procedure for simultaneously comparing the means of several experimental treatments with the mean of a control treatment are available in many references. However, these values are primarily applicable to one-way analysis of variance with equal sample sizes for the experimental treatments. Even for this design, these critical values do not suffice for situations where the corresponding p-values are desired. Here the technique is presented for calculation of the p-value for multiple comparisons with a control when a general linear model is used for the analysis. A FORTRAN program using IMSL subroutines is described for the calculations.

5.
In this paper, classical optimum tests for symmetry of the two-piece normal distribution are derived. A uniformly most powerful one-sided test for the skewness parameter is obtained when the location and scale parameters are known, and is compared with the sequential probability ratio test. An ad hoc test for symmetry and a large-sample likelihood ratio test for symmetry can be found in the literature for this distribution. In this paper, however, we derive the exact likelihood ratio test for symmetry when the location parameter is known. The exact power of the test is evaluated for different sample sizes.

6.
Statistical methods of risk assessment for continuous variables
Adverse health effects for continuous responses are not as easily defined as adverse health effects for binary responses. Kodell and West (1993) developed methods for defining adverse effects for continuous responses and the associated risk. Procedures were developed for finding point estimates and upper confidence limits for additional risk under the assumption of a normal distribution and quadratic mean response curve with equal variances at each dose level. In this paper, methods are developed for point estimates and upper confidence limits for additional risk at experimental doses when the equal variance assumption is relaxed. An interpolation procedure is discussed for obtaining information at doses other than the experimental doses. A small simulation study is presented to test the performance of the methods discussed.
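A minimal sketch of the kind of additional-risk point estimate discussed above, assuming normal responses, an adverse-event cutoff k standard deviations below the control mean, and unequal variances across dose groups (the specific numbers are hypothetical, not from the paper):

```python
import numpy as np
from scipy.stats import norm

def additional_risk(mu0, sd0, mud, sdd, k=2.0):
    """Kodell/West-style additional risk for a continuous endpoint:
    an adverse response is one below the cutoff mu0 - k*sd0 (k sds
    below the control mean); additional risk at dose d is the risk
    there minus the background risk.  Variances may differ by dose."""
    cutoff = mu0 - k * sd0
    return norm.cdf(cutoff, mud, sdd) - norm.cdf(cutoff, mu0, sd0)

# hypothetical dose group: mean response drops and spread grows with dose
print(additional_risk(100.0, 10.0, 90.0, 12.0))   # positive added risk
print(additional_risk(100.0, 10.0, 100.0, 10.0))  # zero at background
```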

7.
In this article, we consider several statistical models for censored exponential data. We prove a large deviation result for the maximum likelihood estimators (MLEs) of each model, and a unique result for the posterior distributions which works well for all the cases. Finally, comparing the large deviation rate functions for MLEs and posterior distributions, we show that a typical feature fails for one model; moreover, we illustrate the relation between this fact and a well-known result for curved exponential models.

8.
In this paper, we consider a linear mixed model (LMM) for longitudinal data under a linear restriction and find estimators for the parameters of interest. The strong consistency and asymptotic normality of the estimators are obtained under some regularity conditions. In addition, we derive a strongly consistent estimator of the fourth moment of the error, which is useful for statistical inference on the random effects and the error variance. Simulations and an example are reported for illustration.

9.
This paper shows that Daniels's (1954) saddlepoint expansion for the density of a sample mean is, for all practical purposes, always uniformly valid on compact subsets in the interior of the domain of the mean. This uniform validity is the key for establishing the relation between the saddlepoint expansion for the density function and Lugannani and Rice's expansion for the tail probability, and for establishing the validity of a high-order asymptotic expansion for the density of a standardized mean.
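The Lugannani-Rice tail expansion mentioned above can be illustrated numerically; a sketch for the mean of n iid Exp(1) variables, where the exact tail is a gamma probability (the exponential example is mine, not the paper's):

```python
import numpy as np
from scipy import stats

def lugannani_rice_exp_mean(x, n):
    """Lugannani-Rice approximation to P(Xbar >= x) for the mean of
    n iid Exp(1) variables.  CGF K(t) = -log(1 - t); the saddlepoint
    that = 1 - 1/x solves K'(that) = x, and K''(that) = x**2."""
    that = 1.0 - 1.0 / x
    K = -np.log1p(-that)
    w = np.sign(that) * np.sqrt(2.0 * n * (that * x - K))
    u = that * x * np.sqrt(n)
    return stats.norm.sf(w) + stats.norm.pdf(w) * (1.0 / u - 1.0 / w)

n, x = 10, 1.5
approx = lugannani_rice_exp_mean(x, n)
exact = stats.gamma.sf(x, a=n, scale=1.0 / n)  # Xbar ~ Gamma(n, scale 1/n)
print(approx, exact)  # the two tail probabilities agree closely
```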

10.
Edgeworth-type expansions are given for the log density and also for the derivatives of the density of an asymptotically normal random variable having the standard expansions for its cumulants. Expansions for the log density are much simpler than for the density. In fact, the rth term of the expansion for the log density is a polynomial of degree only r + 2, while that for the density is a polynomial of degree 3r.

11.
The value of Elderton's K criterion can be bounded for a given pdf. This paper presents these bounds for the most widely used distribution functions for income and computes the value of K for household incomes as reported in the Consumer Population Survey. The variance of K is also estimated using the bootstrap, jackknife and delta methods. We find that K can be used to narrow the field of potential pdfs for income and that all three methods for estimating the variance coincide in flagging low precision in the estimated K.

12.
This paper presents a multivariate extension of Dunnett's test for comparing simultaneously k treatment group means with a single control group mean. A test based on Hotelling T2 statistics is presented and approximate critical values are evaluated for the case of equal numbers of observations in each group, for the .05 and .01 levels of significance, for 1 to 5 variates, for 1 to 10 treatment groups, and for varying degrees of freedom. The accuracy of the procedure for generating approximate critical values is assessed via simulation studies conducted for selected cases and an example is presented using real data.
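The per-arm statistic underlying such a procedure is the two-sample Hotelling T2; a minimal sketch on simulated data with two treatment arms (the simultaneous critical value for the maximum T2 would be obtained separately, e.g. by simulation as in the article):

```python
import numpy as np

def hotelling_t2(x, y):
    """Two-sample Hotelling T^2 statistic comparing the mean vectors
    of groups x and y (rows = observations, columns = variates)."""
    n1, n2 = len(x), len(y)
    d = x.mean(axis=0) - y.mean(axis=0)
    # pooled covariance matrix
    s = ((n1 - 1) * np.cov(x, rowvar=False) +
         (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    return (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(s, d)

rng = np.random.default_rng(2)
p, n = 3, 30
control = rng.normal(0.0, 1.0, (n, p))
arms = [rng.normal(mu, 1.0, (n, p)) for mu in (0.0, 0.8)]  # 2 treatment arms

# one T^2 per treatment arm vs the control
t2 = [hotelling_t2(a, control) for a in arms]
print(t2)  # the shifted arm produces the larger statistic
```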

13.
This study investigates the performance of two traditional F tests, one for main effects and the other for interaction in repeated measures designs under several conditions of covariance heterogeneity. Overall, the test for interaction is more vulnerable than the one for main effects. Distortion in the level of significance is less serious for the case of equal group size.

14.
In many biomedical applications, tests for the classical hypotheses based on the difference of treatment means in a one-way layout can be replaced by tests for ratios (or tests for relative changes). This approach is well noted for its simplicity in defining the margins, as for example in tests for non-inferiority. Here, we derive approximate and efficient sample size formulas in a multiple testing situation and then thoroughly investigate the relative performance of hypothesis testing based on the ratios of treatment means when compared with differences of means. The results will be illustrated with an example on simultaneous tests for non-inferiority.
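A common way to test a single ratio hypothesis of this kind is to rewrite H0: mu_T/mu_C <= theta0 as a linear contrast and apply a t test (a Sasabuchi-type construction, assuming positive means and equal variances); the margin and data below are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def ratio_noninferiority_t(x_t, x_c, theta0):
    """One-sided test of H0: mu_T / mu_C <= theta0 against
    H1: mu_T / mu_C > theta0, rewritten as the linear contrast
    mu_T - theta0 * mu_C > 0 and tested with a pooled-variance t."""
    nt, nc = len(x_t), len(x_c)
    sp2 = ((nt - 1) * x_t.var(ddof=1) +
           (nc - 1) * x_c.var(ddof=1)) / (nt + nc - 2)
    t = (x_t.mean() - theta0 * x_c.mean()) / np.sqrt(
        sp2 * (1 / nt + theta0**2 / nc))
    return t, stats.t.sf(t, nt + nc - 2)

rng = np.random.default_rng(3)
treat = rng.normal(9.7, 2.0, 50)    # hypothetical: nearly as good as control
ctrl  = rng.normal(10.0, 2.0, 50)
t, p = ratio_noninferiority_t(treat, ctrl, theta0=0.8)  # margin: 80% of control
print(t, p)  # large t, small p: non-inferiority supported
```

In a multiple testing situation as in the article, the contrast would be tested simultaneously for each arm with an appropriate multiplicity adjustment.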

15.
We propose a data-dependent method for choosing the tuning parameter appearing in many recently developed goodness-of-fit test statistics. The new method, based on the bootstrap, is applicable to a class of distributions for which the null distribution of the test statistic is independent of unknown parameters. No data-dependent choice for this parameter exists in the literature; typically, a fixed value for the parameter is chosen which can perform well for some alternatives, but poorly for others. The performance of the new method is investigated by means of a Monte Carlo study, employing three tests for exponentiality. It is found that the Monte Carlo power of these tests, using the data-dependent choice, compares favourably to the maximum achievable power for the tests calculated over a grid of values of the tuning parameter.

16.
The geographic mapping of age-standardized, cause-specific death rates is a powerful tool for identifying possible etiologic factors, because the spatial distribution of mortality risks can be examined for correlations with the spatial distribution of disease-specific risk factors. This article presents a two-stage empirical Bayes procedure for calculating age-standardized cancer death rates, for use in mapping, which are adjusted for the stochasticity of rates in small area populations. Using the adjusted rates helps isolate and identify spatial patterns in the rates. The model is applied to sex-specific data on U.S. county cancer mortality in the white population for 15 cancer sites for three decades: 1950-1959, 1960-1969, and 1970-1979. Selected results are presented as maps of county death rates for white males.
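A much-simplified, one-stage stand-in for this kind of empirical Bayes rate adjustment is Poisson-gamma shrinkage with a method-of-moments prior; the county populations and the true rate below are invented for illustration, and the article's two-stage, age-standardized procedure is considerably more elaborate:

```python
import numpy as np

def eb_smooth(deaths, pop):
    """Poisson-gamma empirical Bayes smoothing of crude rates.
    A gamma prior is fitted by the method of moments; the posterior
    mean shrinks small-population counties toward the overall rate."""
    crude = deaths / pop
    m = np.average(crude, weights=pop)               # overall rate
    s2 = np.average((crude - m) ** 2, weights=pop)   # total variation
    between = max(s2 - m / pop.mean(), 1e-12)        # prior variance estimate
    a, b = m**2 / between, m / between               # gamma (shape, rate)
    return (deaths + a) / (pop + b)                  # posterior-mean rates

rng = np.random.default_rng(4)
pop = rng.integers(500, 200_000, 40).astype(float)   # hypothetical counties
deaths = rng.poisson(2e-4 * pop).astype(float)       # common true rate 2e-4
smoothed = eb_smooth(deaths, pop)
# smoothed rates are less dispersed than crude rates
print(np.std(deaths / pop), np.std(smoothed))
```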

17.
Algorithms for computing the maximum likelihood estimators and the estimated covariance matrix of the estimators of the factor model are derived. The algorithms are particularly suitable for large matrices and for samples that give zero estimates of some error variances. A method of constructing estimators for reduced models is presented. The algorithms can also be used for the multivariate errors-in-variables model with known error covariance matrix.

18.
This paper tests the hypothesis of difference stationarity of macro-economic time series against the alternative of trend stationarity, with and without allowing for possible structural breaks. The methodologies used are that of Dickey and Fuller familiarized by Nelson and Plosser, and that of dummy variables familiarized by Perron, including the Zivot and Andrews extension of Perron's tests. We have chosen 12 macro-economic variables in the Indian economy during the period 1900-1988 for this study. A study of this nature has not previously been undertaken for the Indian economy. The conventional Dickey-Fuller methodology without allowing for structural breaks cannot reject the unit root hypothesis (URH) for any series. Allowing for exogenous breaks in level and rate of growth in the years 1914, 1939 and 1951, Perron's tests reject the URH for three series after 1951, i.e. the year of introduction of economic planning in India. The Zivot and Andrews tests for endogenous breaks confirm the Perron tests and lead to the rejection of the URH for three more series.
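The core Dickey-Fuller tau statistic used in such studies comes from a single least-squares regression of the differenced series on its lagged level (plus constant and trend); a from-scratch sketch on simulated series. Note that the critical values (roughly -3.41 at the 5% level with trend) come from the Dickey-Fuller tables, not from the t distribution:

```python
import numpy as np

def dickey_fuller_tau(y, trend=True):
    """tau statistic of the Dickey-Fuller regression
    dy_t = a + b*t + rho * y_{t-1} + e_t; values well below the
    tabulated critical value reject the unit root hypothesis."""
    dy = np.diff(y)
    ylag = y[:-1]
    cols = [np.ones_like(ylag), ylag]
    if trend:
        cols.insert(1, np.arange(len(ylag), dtype=float))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    i = X.shape[1] - 1                     # coefficient on y_{t-1}
    return beta[i] / np.sqrt(cov[i, i])

rng = np.random.default_rng(5)
rw = np.cumsum(rng.normal(size=300))       # unit-root (random walk) series
ar = [0.0]
for _ in range(299):                       # stationary AR(1) series
    ar.append(0.5 * ar[-1] + rng.normal())
ar = np.asarray(ar)
print(dickey_fuller_tau(rw), dickey_fuller_tau(ar))
```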

19.
Some multiple comparison procedures are described for multi-armed studies. The procedures are appropriate for testing all hypotheses for comparing two endpoints and multiple test arms to a single control group, for example three different fixed doses compared to a placebo. The procedure assumes that among the two endpoints, one is designated as a primary endpoint such that for a given treatment arm, no hypothesis for the secondary endpoint can be rejected unless the hypothesis for the primary endpoint was rejected. The procedures described control the family-wise error rate in the strong sense at a specified level α.
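One simple procedure with the stated structure is a Bonferroni split of α across the k arms combined with fixed-sequence (primary-then-secondary) testing within each arm; this controls the FWER strongly, though the article's procedures are presumably more powerful refinements. The p-values below are hypothetical:

```python
def gatekeeper_decisions(p_primary, p_secondary, alpha=0.05):
    """Per-arm fixed-sequence testing with a Bonferroni split across
    arms: each arm is tested at alpha/k, and an arm's secondary
    endpoint is tested (at the same local level) only if its primary
    endpoint was rejected.  Returns (reject_primary, reject_secondary)
    per arm."""
    k = len(p_primary)
    local = alpha / k
    out = []
    for pp, ps in zip(p_primary, p_secondary):
        rej_p = pp <= local
        rej_s = rej_p and ps <= local
        out.append((rej_p, rej_s))
    return out

# hypothetical p-values for 3 dose arms vs placebo (local level 0.05/3)
primary   = [0.004, 0.030, 0.001]
secondary = [0.010, 0.002, 0.200]
print(gatekeeper_decisions(primary, secondary))
# → [(True, True), (False, False), (True, False)]: arm 2's secondary is
# never tested because its primary failed; arm 3's secondary fails on
# its own p-value
```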

20.
We have developed a new approach to determine the threshold of a biomarker that maximizes the classification accuracy of a disease. We consider a Bayesian estimation procedure for this purpose and illustrate the method using a real data set. In particular, we determine the threshold for Apolipoprotein B (ApoB), Apolipoprotein A1 (ApoA1) and the ratio for the classification of myocardial infarction (MI). We first conduct a literature review and construct prior distributions. We then develop classification rules based on the posterior distribution of the location and scale parameters for these biomarkers. We identify the threshold for ApoB and ApoA1, and the ratio as 0.908 (gram/liter), 1.138 (gram/liter) and 0.808, respectively. We also observe that the threshold for disease classification varies substantially across different age and ethnic groups. Next, we identify the most informative predictor for MI among the three biomarkers. Based on this analysis, ApoA1 appeared to be a stronger predictor than ApoB for MI classification. Given that we have used this data set for illustration only, the results will require further investigation for use in clinical applications. However, the approach developed in this article can be used to determine the threshold of any continuous biomarker for a binary disease classification.
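Under a plug-in normal model for the two classes, the accuracy-maximizing threshold can be found by direct optimization; the article's Bayesian version would average this over the posterior of the location and scale parameters rather than plug in point estimates. The distribution parameters below are invented, not the ApoB estimates from the paper:

```python
from scipy import stats, optimize

def accuracy_threshold(mu0, sd0, mu1, sd1, prev=0.5):
    """Threshold c maximizing classification accuracy when the
    biomarker is normal in both classes (higher values = diseased):
    accuracy(c) = prev * P(X1 > c) + (1 - prev) * P(X0 <= c)."""
    def neg_acc(c):
        return -(prev * stats.norm.sf(c, mu1, sd1)
                 + (1 - prev) * stats.norm.cdf(c, mu0, sd0))
    res = optimize.minimize_scalar(
        neg_acc, bounds=(mu0 - 4 * sd0, mu1 + 4 * sd1), method='bounded')
    return res.x

# hypothetical ApoB-like biomarker (g/L) in non-diseased vs diseased groups
c = accuracy_threshold(0.85, 0.12, 1.05, 0.15)
print(round(c, 3))  # a cutoff between the two class means
```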
