Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Log-normal and Weibull distributions are the two most popular distributions for analysing lifetime data. In this paper, we consider the problem of discriminating between the two distribution functions. It is assumed that the data come from either a log-normal or a Weibull distribution and are Type-II censored. We use the difference of the maximized log-likelihood functions to discriminate between the two distribution functions. We obtain the asymptotic distribution of the discrimination statistic and use it to determine the probability of correct selection in this discrimination process. We perform simulation studies to observe how the asymptotic results work for different sample sizes and censoring proportions. It is observed that the asymptotic results work quite well even for small sample sizes, provided the censoring proportions are not very low. We further suggest a modified discrimination procedure. Two real data sets are analysed for illustrative purposes.
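The difference-of-maximized-log-likelihoods statistic described above can be sketched as follows for the complete-data case (the paper's Type-II censored setting would replace these log-likelihoods with their censored versions); the synthetic data and `scipy` fitting calls are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.weibull(2.0, size=100) * 3.0  # synthetic lifetimes (true model: Weibull)

# Maximized log-likelihoods under each candidate model (location fixed at 0)
ln_shape, ln_loc, ln_scale = stats.lognorm.fit(data, floc=0)
wb_shape, wb_loc, wb_scale = stats.weibull_min.fit(data, floc=0)

ll_lognorm = stats.lognorm.logpdf(data, ln_shape, ln_loc, ln_scale).sum()
ll_weibull = stats.weibull_min.logpdf(data, wb_shape, wb_loc, wb_scale).sum()

# Discrimination statistic: choose log-normal if T > 0, Weibull otherwise
T = ll_lognorm - ll_weibull
print("T =", T)
```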

2.
We give a critical synopsis of classical and recent tests for Poissonity, with emphasis on procedures that are consistent against general alternatives. Two classes of weighted Cramér–von Mises type test statistics, based on the empirical probability generating function process, are studied in more detail. Both generalize known test statistics by introducing a weighting parameter, thus providing more flexibility with regard to power against specific alternatives. In both cases, we prove convergence in distribution of the statistics under the null hypothesis in the setting of a triangular array of rowwise independent and identically distributed random variables, as well as consistency of the corresponding tests against general alternatives. This provides a sound theoretical basis for the parametric bootstrap procedure, which is applied to obtain critical values in a large-scale simulation study. Each of the tests considered, when implemented via the parametric bootstrap, maintains the nominal level of significance very closely, even for small sample sizes. The procedures are applied to four well-known data sets.
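A minimal unweighted version of such a probability-generating-function statistic, with parametric-bootstrap critical values, might look like the sketch below; the weighting parameter of the paper's statistics is omitted, and the grid size and bootstrap replication count are arbitrary illustrative choices:

```python
import numpy as np

def pgf_stat(x, grid):
    """Cramér–von Mises-type distance between the empirical PGF and the
    PGF of a Poisson with the estimated mean (unweighted sketch)."""
    n = len(x)
    g_emp = np.mean(grid[None, :] ** x[:, None], axis=0)  # empirical PGF
    g_fit = np.exp(x.mean() * (grid - 1.0))               # fitted Poisson PGF
    return n * np.mean((g_emp - g_fit) ** 2)              # Riemann approx on [0, 1]

rng = np.random.default_rng(1)
x = rng.poisson(2.0, size=50)
grid = np.linspace(0.0, 1.0, 101)
t_obs = pgf_stat(x, grid)

# Parametric bootstrap: resample from Poisson(x_bar) to calibrate the test
boot = np.array([pgf_stat(rng.poisson(x.mean(), size=len(x)), grid)
                 for _ in range(200)])
p_value = np.mean(boot >= t_obs)
```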

3.
We consider various robust estimators for the extended Burr Type III (EBIII) distribution for complete data with outliers. The robust estimators considered are M-estimators, least absolute deviations, Theil, Siegel's repeated median, least trimmed squares, and least median of squares. Before applying these estimators to the EBIII, we adapt the quantiles method to the estimation of the shape parameter k of the EBIII. The simulation results show that the robust estimators generally outperform the existing estimation approaches for data with upper outliers, with some of them retaining a relatively high degree of efficiency for small sample sizes.

4.
We consider several procedures to detect changes in the mean or the covariance structure of a linear process. The tests are based on the weighted CUSUM process. The limit distributions of the test statistics are derived under the no change null hypothesis. We develop new strong and weak approximations for the sample mean as well as the sample correlations of linear processes. A small Monte Carlo simulation illustrates the applicability of our results.
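The CUSUM idea for a change in the mean can be illustrated as below; the iid-style scale estimate is a simplification (a general linear process would require a long-run variance estimator, as the paper's approximations address), and the data are synthetic:

```python
import numpy as np

def cusum_stat(x):
    """Max of |partial sums of centered data|, scaled by sqrt(n) and an
    iid-style standard deviation (a long-run variance estimator would be
    needed for a general linear process)."""
    n = len(x)
    s = np.cumsum(x - x.mean())
    return np.max(np.abs(s[:-1])) / (x.std(ddof=1) * np.sqrt(n))

rng = np.random.default_rng(2)
x_null = rng.normal(0.0, 1.0, 200)                # no change
x_alt = np.concatenate([rng.normal(0.0, 1.0, 100),
                        rng.normal(1.5, 1.0, 100)])  # mean shift at midpoint
s_null, s_alt = cusum_stat(x_null), cusum_stat(x_alt)
print(s_null, s_alt)
```

Under the null, the statistic converges to the supremum of a Brownian bridge, which is how critical values are obtained.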

5.
In this paper, we consider a nonparametric test procedure for multivariate data with grouped components in the two-sample setting. To construct the test statistic, we use linear rank statistics derived by applying the likelihood ratio principle to each component. For the null distribution of the test statistic, we apply the permutation principle for small or moderate sample sizes and derive the limiting distribution for the large-sample case. We illustrate our test procedure with an example and compare it with other procedures through a simulation study. Finally, we discuss some additional interesting features as concluding remarks.

6.
This article considers different methods for determining sample sizes for Wald, likelihood ratio, and score tests for logistic regression. We review some recent methods, report the results of a simulation study comparing each of the methods for each of the three types of test, and provide Mathematica code for calculating sample size. We consider a variety of covariate distributions and find that a calculation method based on a first-order expansion of the likelihood ratio test statistic performs consistently well in achieving a target level of power for each of the three types of test.
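Not the article's Mathematica code, but a commonly cited closed-form approximation (Hsieh-style, for a single standard-normal covariate) gives the flavor of such sample-size calculations; the formula and the example numbers here are illustrative assumptions:

```python
from scipy.stats import norm

def n_logistic_wald(beta_star, p1, alpha=0.05, power=0.80):
    """Approximate n for detecting a log odds ratio beta_star per SD of a
    single N(0, 1) covariate, with event rate p1 at the covariate mean."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (p1 * (1 - p1) * beta_star ** 2)

n = n_logistic_wald(beta_star=0.405, p1=0.5)  # log OR ~ ln(1.5) per SD
print(round(n))
```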

7.
In this article, we consider inference about the correlation coefficients of several bivariate normal distributions. We first propose computational approach tests for testing the equality of the correlation coefficients. These are in fact parametric bootstrap tests, and simulation studies show that they perform very satisfactorily, with actual sizes closer to the nominal level than those of existing approaches. We also present a computational approach test and a parametric bootstrap confidence interval for inference about the common correlation coefficient. Finally, all the approaches are illustrated using two real examples.
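As a classical benchmark for the equality hypothesis (not the paper's computational-approach test), the textbook Fisher-z chi-square test can be sketched as follows; the example correlations and sample sizes are made up:

```python
import numpy as np
from scipy import stats

def equal_corr_test(rs, ns):
    """Fisher-z chi-square test of H0: rho_1 = ... = rho_k for k independent
    bivariate normal samples with sample correlations rs and sizes ns."""
    z = np.arctanh(np.asarray(rs, float))   # Fisher z-transform
    w = np.asarray(ns, float) - 3.0         # approximate precision weights
    z_bar = np.sum(w * z) / np.sum(w)
    chi2 = np.sum(w * (z - z_bar) ** 2)     # ~ chi2(k - 1) under H0
    return chi2, stats.chi2.sf(chi2, len(rs) - 1)

chi2, p = equal_corr_test([0.45, 0.52, 0.48], [40, 55, 60])
```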

8.
In this article, we consider the problem of comparing several multivariate normal mean vectors when the covariance matrices are unknown and arbitrary positive definite matrices. We propose a parametric bootstrap (PB) approach and develop an approximation to the distribution of the PB pivotal quantity for comparing two mean vectors. This approximate test is shown to be the same as the invariant test given in [Krishnamoorthy and Yu, Modified Nel and Van der Merwe test for the multivariate Behrens–Fisher problem, Stat. Probab. Lett. 66 (2004), pp. 161–169] for the multivariate Behrens–Fisher problem. Furthermore, we compare the PB test with two existing invariant tests via Monte Carlo simulation. Our simulation studies show that the PB test controls Type I error rates very satisfactorily, whereas other tests are liberal especially when the number of means to be compared is moderate and/or sample sizes are small. The tests are illustrated using an example.
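A sketch of the parametric bootstrap idea for two mean vectors, using the natural pivot T² = d′(S₁/n₁ + S₂/n₂)⁻¹d; the data and replication count are illustrative, and this is a Monte Carlo sketch rather than the approximate-distribution test of Krishnamoorthy and Yu:

```python
import numpy as np

def t2(x1, x2):
    """Pivot T^2 = d' (S1/n1 + S2/n2)^{-1} d for two mean vectors."""
    d = x1.mean(axis=0) - x2.mean(axis=0)
    S = np.cov(x1.T) / len(x1) + np.cov(x2.T) / len(x2)
    return float(d @ np.linalg.solve(S, d))

rng = np.random.default_rng(3)
x1 = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=25)
x2 = rng.multivariate_normal([0.2, 0.2], [[2.0, 0.5], [0.5, 1.0]], size=40)
t_obs = t2(x1, x2)

# Parametric bootstrap under H0: common (zero) mean, estimated covariances
S1, S2 = np.cov(x1.T), np.cov(x2.T)
boot = []
for _ in range(500):
    b1 = rng.multivariate_normal(np.zeros(2), S1, size=25)
    b2 = rng.multivariate_normal(np.zeros(2), S2, size=40)
    boot.append(t2(b1, b2))
p_value = np.mean(np.array(boot) >= t_obs)
```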

9.

We present correction formulae to improve likelihood ratio and score tests for testing simple and composite hypotheses on the parameters of the beta distribution. As a special case of our results, we obtain improved tests for the hypothesis that a sample is drawn from a uniform distribution on (0, 1). We present some Monte Carlo investigations to show that both corrected tests perform better than the classical likelihood ratio and score tests, at least for small sample sizes.

10.
The problem of testing for equivalence in clinical trials is restated here in terms of the proper clinical hypotheses and a simple classical frequentist significance test based on the central t distribution is derived. This method is then shown to be more powerful than the methods based on usual (shortest) and symmetric confidence intervals.

We begin by considering a noncentral t statistic and then consider three approximations to it. A simulation is used to compare actual test sizes to the nominal values in crossover and completely randomized designs. A central t approximation was the best. The power calculation is then shown to be based on a central t distribution, and a method is developed for obtaining the sample size required to obtain a specified power. For the approximations, a simulation compares actual powers to those obtained for the t distribution and confirms that the theoretical results are close to the actual powers.
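The central-t equivalence test described above is closely related to the familiar two one-sided tests (TOST) procedure; a generic two-sample TOST sketch (a standard construction under a pooled-variance assumption, not the paper's derivation) is:

```python
import numpy as np
from scipy import stats

def tost(x, y, delta):
    """Two one-sided t-tests for H1: |mu_x - mu_y| < delta (pooled variance)."""
    nx, ny = len(x), len(y)
    diff = x.mean() - y.mean()
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1 / nx + 1 / ny))
    df = nx + ny - 2
    p_lower = stats.t.sf((diff + delta) / se, df)   # H0: diff <= -delta
    p_upper = stats.t.cdf((diff - delta) / se, df)  # H0: diff >= +delta
    return max(p_lower, p_upper)                    # reject if both are small

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 50)
y = x + rng.normal(0.0, 0.1, 50)  # nearly identical groups
p = tost(x, y, delta=1.0)
```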

11.
In drug development, bioequivalence studies are used to indirectly demonstrate clinical equivalence of a test formulation and a reference formulation of a specific drug by establishing their equivalence in bioavailability. These studies are typically run as crossover studies. In the planning phase of such trials, investigators and sponsors are often faced with a high variability in the coefficients of variation of the typical pharmacokinetic endpoints such as the area under the concentration curve or the maximum plasma concentration. Adaptive designs have recently been considered to deal with this uncertainty by adjusting the sample size based on the accumulating data. Because regulators generally favor sample size re‐estimation procedures that maintain the blinding of the treatment allocations throughout the trial, we propose in this paper a blinded sample size re‐estimation strategy and investigate its error rates. We show that the procedure, although blinded, can lead to some inflation of the type I error rate. In the context of an example, we demonstrate how this inflation of the significance level can be adjusted for to achieve control of the type I error rate at a pre‐specified level. Furthermore, some refinements of the re‐estimation procedure are proposed to improve the power properties, in particular in scenarios with small sample sizes. Copyright © 2014 John Wiley & Sons, Ltd.

12.
Existing equivalence tests for multinomial data are valid asymptotically, but the level is not properly controlled for small and moderate sample sizes. We resolve this difficulty by developing an exact multinomial test for equivalence and an associated confidence interval procedure. We also derive a conservative version of the test that is easy to implement even for very large sample sizes. Both tests use a notion of equivalence based on the cumulative distribution function, with two probability vectors considered equivalent if their partial sums never differ by more than some specified constant. We illustrate the methods by applying them to Weldon's dice data, to data on the digits of π, and to data collected by Mendel. The Canadian Journal of Statistics 37: 47–59; © 2009 Statistical Society of Canada
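The partial-sum notion of equivalence can be illustrated as follows; the dice counts below are hypothetical, Weldon-style numbers, not the actual data:

```python
import numpy as np

def cdf_distance(p, q):
    """Max absolute difference of partial sums: the equivalence metric.
    Two probability vectors are 'equivalent' if this distance never
    exceeds a chosen constant."""
    return np.max(np.abs(np.cumsum(p) - np.cumsum(q)))

# Hypothetical dice counts (illustrative, not Weldon's data)
counts = np.array([185, 190, 212, 204, 198, 211])
p_hat = counts / counts.sum()
p0 = np.full(6, 1.0 / 6.0)  # fair-die target
d = cdf_distance(p_hat, p0)
print(d)
```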

13.

We consider the problem of testing the equality of several inverse Gaussian means when the scale parameters and sample sizes are possibly unequal. We propose four parametric bootstrap (PB) tests based on the uniformly minimum variance unbiased estimators of parameters. We also compare our proposed tests with the existing ones via an extensive simulation study in terms of controlling the Type I error rate and power performance. Simulation results show the merits of the PB tests.

14.
In this article, we propose two testing procedures for serial correlation in single-index models, using B-spline approximation of the unknown single-index function. Under some regularity conditions, we show that our proposed statistics asymptotically follow normal and χ2 distributions. Numerical studies illustrate that the proposed procedures perform very well for moderate sample sizes.

15.
Summary.  Multilevel modelling is sometimes used for data from complex surveys involving multistage sampling, unequal sampling probabilities and stratification. We consider generalized linear mixed models and particularly the case of dichotomous responses. A pseudolikelihood approach for accommodating inverse probability weights in multilevel models with an arbitrary number of levels is implemented by using adaptive quadrature. A sandwich estimator is used to obtain standard errors that account for stratification and clustering. When level 1 weights are used that vary between elementary units in clusters, the scaling of the weights becomes important. We point out that not only variance components but also regression coefficients can be severely biased when the response is dichotomous. The pseudolikelihood methodology is applied to complex survey data on reading proficiency from the American sample of the 'Program for international student assessment' 2000 study, using the Stata program gllamm which can estimate a wide range of multilevel and latent variable models. Performance of pseudo-maximum-likelihood with different methods for handling level 1 weights is investigated in a Monte Carlo experiment. Pseudo-maximum-likelihood estimators of (conditional) regression coefficients perform well for large cluster sizes but are biased for small cluster sizes. In contrast, estimators of marginal effects perform well in both situations. We conclude that caution must be exercised in pseudo-maximum-likelihood estimation for small cluster sizes when level 1 weights are used.
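One common convention for the level 1 weight scaling discussed above is the "size" method, where weights are rescaled within each cluster to sum to the cluster sample size (one of the scaling options available in software such as gllamm); a minimal sketch:

```python
import numpy as np

def scale_weights_size(w, cluster):
    """'Size' scaling: within each cluster, rescale level-1 weights so they
    sum to that cluster's sample size."""
    w = np.asarray(w, float)
    cluster = np.asarray(cluster)
    out = np.empty_like(w)
    for c in np.unique(cluster):
        m = cluster == c
        out[m] = w[m] * m.sum() / w[m].sum()
    return out

scaled = scale_weights_size([1.0, 2.0, 3.0, 4.0], [0, 0, 1, 1])
print(scaled)  # each cluster's scaled weights sum to its size (2)
```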

16.
In this article, we consider nonparametric test procedures based on a group of quantile test statistics. We consider the quadratic form for the two-sided test, and maximal and summing types of statistics for one-sided alternatives. We derive the null limiting distributions of the proposed test statistics using large-sample approximation theory, and also consider applying the permutation principle to obtain the null distribution; in particular, the supremum type requires the permutation principle for its null distribution. We illustrate our procedure with an example and compare the proposed tests with other existing tests, including the individual quantile tests, by obtaining empirical powers through a simulation study. We also comment on related issues as concluding remarks. Proofs of the lemmas and theorems are given in the appendices.

17.
In this article, a technique based on the sample correlation coefficient is proposed for constructing goodness-of-fit tests for max-stable distributions with unknown location and scale parameters and finite second moment. Specific details for testing the Gumbel distribution are given, including critical values for small sample sizes as well as approximate critical values for larger sample sizes based on normal quantiles. A comparison by Monte Carlo simulation shows that the proposed test for the Gumbel hypothesis is substantially more powerful than some other known tests against alternative distributions with positive skewness.
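One correlation-based construction in this spirit is the probability-plot correlation coefficient, shown here for the Gumbel case; the plotting positions and standard-Gumbel quantile form are conventional choices, not necessarily the article's exact statistic:

```python
import numpy as np

def gumbel_ppcc(x):
    """Correlation between sample order statistics and standard Gumbel
    quantiles at plotting positions (i - 0.5)/n; a low value suggests
    lack of fit."""
    x = np.sort(np.asarray(x, float))
    p = (np.arange(1, len(x) + 1) - 0.5) / len(x)
    q = -np.log(-np.log(p))  # standard Gumbel (maximum) quantile function
    return np.corrcoef(x, q)[0, 1]

rng = np.random.default_rng(5)
x = rng.gumbel(size=100)
print(gumbel_ppcc(x))  # close to 1 for Gumbel data
```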

18.

We address the problem of testing proportional hazards in the two-sample survival setting with right censoring, i.e., we check whether the famous Cox model holds. Although there are many test proposals for this problem, only a few papers suggest how to improve performance for small sample sizes. In this paper, we do exactly this by carrying out our test as a permutation test as well as a wild bootstrap test. The asymptotic properties of our test, namely asymptotic exactness under the null and consistency, carry over to both resampling versions. Simulations for small sample sizes reveal an actual improvement in empirical size and reasonable power when using the resampling versions. Moreover, the resampling tests perform better than the existing tests of Gill and Schumacher and of Grambsch and Therneau. The tests' practical applicability is illustrated with real data examples.

19.
Alternative ways of using Monte Carlo methods to implement a Cox-type test for separate families of hypotheses are considered. Monte Carlo experiments are designed to compare the finite-sample performance of Pesaran and Pesaran's test, a RESET test, and two Monte Carlo hypothesis test procedures. One of the Monte Carlo tests is based on the distribution of the log-likelihood ratio and the other on an asymptotically pivotal statistic. The Monte Carlo results provide strong evidence that the size of the Pesaran and Pesaran test is generally incorrect, except for very large sample sizes. The RESET test has lower power than the other tests. The two Monte Carlo tests perform equally well for all sample sizes and are both clearly preferred to the Pesaran and Pesaran test, even in large samples. Since the Monte Carlo test based on the log-likelihood ratio is the simplest to calculate, we recommend using it.

20.

Motivated by an example in marine science, we use Fisher's method to combine independent likelihood ratio tests (LRTs) and asymptotically independent score tests to assess the equivalence of two zero-inflated beta populations (mixture distributions with three parameters). For each test, statistics for the three individual parameters are combined into a single statistic to address the overall difference between the two populations. We also develop nonparametric and semiparametric permutation-based tests for simultaneously comparing two or three features of unknown populations. Simulations show that the likelihood-based tests perform well for large sample sizes and that the statistics based on combining LRT statistics outperform those based on combining score test statistics. The permutation-based tests have better overall performance in terms of both power and Type I error rate. Our methods are easy to implement and computationally efficient, and can be extended to more than two populations and to other multi-parameter families. The permutation tests are entirely generic and can be useful in various applications dealing with zero (or other) inflation.
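Fisher's method itself, used above to combine the component tests, is simple to state; given independent p-values under the null (the example p-values below are illustrative):

```python
import numpy as np
from scipy import stats

def fisher_combine(pvals):
    """Fisher's method: under H0, -2 * sum(log p_i) follows a chi-square
    distribution with 2k degrees of freedom for k independent tests."""
    p = np.asarray(pvals, float)
    stat = -2.0 * np.log(p).sum()
    return stat, stats.chi2.sf(stat, 2 * len(p))

stat, p_comb = fisher_combine([0.03, 0.20, 0.11])  # illustrative p-values
print(stat, p_comb)
```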


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号