Similar Documents
20 similar documents found (search time: 40 ms)
1.
2.
Wilks's Λ was factorized by Bartlett (1951) for testing the hypothesis of goodness of fit of a hypothetical discriminant function in the case of several groups. This test has applications in various areas such as econometrics, contingency tables, growth curves, principal components analysis, and design of experiments. This paper gives a consolidated account of the research done in these areas on the application of the factors of Wilks's Λ.

3.
4.
Preliminary tests of significance on crucial assumptions are often done before drawing inferences of primary interest. In a factorial trial, the data may be pooled across the columns or rows for making inferences concerning the efficacy of the drugs (the simple effect) in the absence of interaction. Pooling the data has the advantage of higher power due to the larger sample size. On the other hand, in the presence of interaction, such pooling may seriously inflate the type I error rate in testing for the simple effect.

A preliminary test for interaction is therefore in order. If this preliminary test is not significant at some prespecified level of significance, then pool the data for testing the efficacy of the drugs at a specified α level. Otherwise, use of the corresponding cell means for testing the efficacy of the drugs at the specified α is recommended. This paper demonstrates that this adaptive procedure may seriously inflate the overall type I error rate. Such inflation happens even in the absence of interaction.

One interesting result is that the type I error rate of the adaptive procedure depends on the interaction and the square root of the sample size only through their product. One consequence of this result is as follows. No matter how small the non-zero interaction might be, the inflation of the type I error rate of the always-pool procedure will eventually become unacceptable as the sample size increases. Therefore, in a very large study, even though the interaction is suspected to be very small but non-zero, the always-pool procedure may seriously inflate the type I error rate in testing for the simple effects.

It is concluded that the 2 × 2 factorial design is not an efficient design for detecting simple effects, unless the interaction is negligible.
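The inflation described above is easy to reproduce by simulation. The sketch below is our own illustration, not the paper's code: the drug has no effect in stratum B1 (the simple effect under test) but a non-zero effect `delta` in B2, so the interaction equals `delta`. The two-stratum layout, parameter names, and unit-variance normal errors are all assumptions.

```python
import numpy as np
from scipy import stats

def adaptive_type1(n=50, delta=0.4, alpha=0.05, n_sim=2000, seed=0):
    """Monte Carlo type I error rate of the adaptive procedure for the
    simple effect of the drug in stratum B1 (true effect 0 there),
    when the drug effect in stratum B2 is `delta` (interaction = delta)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        ctrl_b1 = rng.normal(0.0, 1.0, n)
        drug_b1 = rng.normal(0.0, 1.0, n)    # null holds in B1
        ctrl_b2 = rng.normal(0.0, 1.0, n)
        drug_b2 = rng.normal(delta, 1.0, n)  # non-zero interaction
        # preliminary interaction test (variances known to be 1 here)
        eff_b1 = drug_b1.mean() - ctrl_b1.mean()
        eff_b2 = drug_b2.mean() - ctrl_b2.mean()
        z_int = (eff_b2 - eff_b1) / np.sqrt(4.0 / n)
        p_int = 2 * stats.norm.sf(abs(z_int))
        if p_int > alpha:
            # interaction not significant: pool the strata
            _, p = stats.ttest_ind(np.r_[drug_b1, drug_b2],
                                   np.r_[ctrl_b1, ctrl_b2])
        else:
            # interaction significant: use the B1 cells only
            _, p = stats.ttest_ind(drug_b1, ctrl_b1)
        rejections += p < alpha
    return rejections / n_sim
```

With these settings the interaction test has modest power, so the pooled test (whose target is biased by `delta/2`) is applied most of the time, and the rejection rate for the B1 simple effect typically lands well above the nominal 5%.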

5.
We study the asymptotic behaviour of the maximum likelihood estimator corresponding to the observation of a trajectory of a skew Brownian motion through a uniform time discretization. We characterize the speed of convergence and the limiting distribution as the step size goes to zero, which in this case are non-classical, under the null hypothesis that the skew Brownian motion is a standard Brownian motion. This allows one to design a test on the skewness parameter. We show that numerical simulations can be easily performed to estimate the skewness parameter, and we provide an application in biology.
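For intuition, a skew Brownian motion with parameter α can be approximated by the Harrison–Shepp skew random walk, which steps symmetrically away from the origin but moves up from zero with probability α. This is a minimal sketch (our illustration, with assumed parameter names), not the paper's estimation procedure:

```python
import numpy as np

def skew_random_walk(alpha, n_steps, seed=0):
    """Skew random walk: symmetric +/-1 steps away from zero, but from
    zero it moves up with probability `alpha`. Scaled by 1/sqrt(n_steps),
    the walk converges to a skew Brownian motion with parameter alpha."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = 0.0
    for i in range(n_steps):
        if x[i] == 0:
            step = 1 if rng.random() < alpha else -1   # skewed at the origin
        else:
            step = 1 if rng.random() < 0.5 else -1     # symmetric elsewhere
        x[i + 1] = x[i] + step
    return x / np.sqrt(n_steps)
```

Setting `alpha = 0.5` recovers an ordinary random walk (the null hypothesis), while `alpha = 1` yields a path that never goes below zero, mimicking reflected Brownian motion.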

6.
7.
Stein's two-sample procedure for a general linear model is studied and derived in terms of matrices, in which the error terms follow a multivariate Student t distribution. Tests and confidence regions are constructed in a way similar to classical linear models, involving percentage points of the Student t and F distributions. The advantages of taking two samples are that the variance of the error terms is known, and that the power of tests and the size of confidence regions are controllable. A new distribution, called the noncentral F-type distribution, different from the noncentral F, is found when considering the power of the test of the general linear hypothesis.

8.
Yu et al. [An improved score interval with a modified midpoint for a binomial proportion. J Stat Comput Simul. 2014;84:1022–1038] propose a novel confidence interval (CI) for a binomial proportion by modifying the midpoint of the score interval. This CI is competitive with the various commonly used methods. At the same time, Martín and Álvarez [Two-tailed asymptotic inferences for a proportion. J Appl Stat. 2014;41:1516–1529] analyse the performance of 29 asymptotic two-tailed CIs for a proportion. The CI they selected is based on the arcsine transformation (applied to the data increased by 0.5), although they also note the good behaviour of the classical score and Agresti–Coull methods (which may be preferred in certain circumstances). The aim of this commentary is to compare the four methods referred to previously. The conclusion (for the classical error α of 5%) is that with a small sample size (n ≤ 80) the method of Yu et al. should be used; for a large sample size (n ≥ 100), the four methods perform similarly, with a slight advantage for the Agresti–Coull method. In any case the Agresti–Coull method does not perform badly and tends to be conservative. The program which determines these four intervals is available at http://www.ugr.es/local/bioest/Z_LINEAR_K.EXE.
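For reference, the score (Wilson), Agresti–Coull, and arcsine (with the +0.5 correction) intervals compared above can be sketched as follows. These are the standard textbook forms; the modified-midpoint interval of Yu et al. is not reproduced here.

```python
import math
from scipy.stats import norm

def score_ci(x, n, alpha=0.05):
    """Wilson score interval for a binomial proportion."""
    z = norm.ppf(1 - alpha / 2)
    p = x / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def agresti_coull_ci(x, n, alpha=0.05):
    """Agresti-Coull: add z^2/2 successes and failures, then Wald."""
    z = norm.ppf(1 - alpha / 2)
    n_t = n + z**2
    p_t = (x + z**2 / 2) / n_t
    half = z * math.sqrt(p_t * (1 - p_t) / n_t)
    return p_t - half, p_t + half

def arcsine_ci(x, n, alpha=0.05):
    """Arcsine-transformation interval on the data increased by 0.5
    (for x near 0 or n the angles may need clamping to [0, pi/2])."""
    z = norm.ppf(1 - alpha / 2)
    t = math.asin(math.sqrt((x + 0.5) / (n + 1)))
    half = z / (2 * math.sqrt(n))
    return math.sin(t - half) ** 2, math.sin(t + half) ** 2
```

For example, `score_ci(7, 50)` yields an interval of roughly (0.07, 0.26) around the point estimate 0.14, and the Agresti–Coull interval is slightly wider, consistent with its conservative tendency.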

9.
We consider an individual or household endowed with an initial capital and an income, modeled as a linear function of time. Assuming that the discount rate evolves as an Ornstein–Uhlenbeck process, we aim to find an unrestricted consumption strategy such that the value of the expected discounted consumption is maximized. Unlike the case with restricted consumption rates, we can determine the optimal strategy and the value function.
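The assumed discount-rate dynamics can be simulated with the exact Gaussian transition of the Ornstein–Uhlenbeck process. The sketch below is illustrative only; the parameter names (`theta`, `mu`, `sigma`) and values are our own, not the paper's:

```python
import numpy as np

def simulate_ou(theta, mu, sigma, r0, t_max, n_steps, seed=0):
    """Simulate dr_t = theta*(mu - r_t) dt + sigma dW_t using the
    exact one-step transition (no Euler discretization error)."""
    rng = np.random.default_rng(seed)
    dt = t_max / n_steps
    decay = np.exp(-theta * dt)
    sd = sigma * np.sqrt((1 - np.exp(-2 * theta * dt)) / (2 * theta))
    r = np.empty(n_steps + 1)
    r[0] = r0
    for i in range(n_steps):
        # mean-reverting Gaussian transition toward mu
        r[i + 1] = mu + (r[i] - mu) * decay + sd * rng.standard_normal()
    return r
```

A path started away from `mu` decays toward it at rate `theta` and then fluctuates around it with stationary standard deviation `sigma / sqrt(2 * theta)`.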

10.
In this article, we use the bivariate Poisson distribution obtained by the trivariate reduction method and compound it with a geometric distribution to derive a bivariate Pólya-Aeppli distribution. We then discuss a number of properties of this distribution including the probability generating function, correlation structure, probability mass function, recursive relations, and conditional distributions. The generating function of the tail probabilities is also obtained. Moment estimation of the parameters is then discussed and illustrated with a numerical example.
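A minimal simulation sketch of the construction: trivariate reduction gives the bivariate Poisson, and each count is then replaced by a geometric-stopped sum. The compounding step shown here is one plausible reading of "compound with a geometric distribution" (the paper's exact scheme may differ), and all parameter names are ours:

```python
import numpy as np

rng = np.random.default_rng(42)

def bivariate_poisson(lam1, lam2, lam0, size):
    """Trivariate reduction: X = U + W, Y = V + W with independent
    U ~ Poi(lam1), V ~ Poi(lam2), W ~ Poi(lam0), so Cov(X, Y) = lam0."""
    u = rng.poisson(lam1, size)
    v = rng.poisson(lam2, size)
    w = rng.poisson(lam0, size)
    return u + w, v + w

def geometric_compound(counts, p):
    """Replace each count n by the sum of n iid geometric(p) variables
    (support 1, 2, ...) -- the Polya-Aeppli-style compounding step."""
    return np.array([rng.geometric(p, n).sum() if n else 0 for n in counts])

x, y = bivariate_poisson(1.0, 1.5, 0.5, 100_000)
px, py = geometric_compound(x, 0.6), geometric_compound(y, 0.6)
```

The empirical covariance of `(x, y)` should be close to `lam0 = 0.5`, and since a geometric(`p`) variable has mean `1/p`, the compounded marginal mean is inflated by the factor `1/p`.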

11.

Decision-tree approaches are commonly used in the analysis of toxicological assays. This paper considers the widely used and recommended approach of first applying the ANOVA F-test and, if it is significant, using the many-to-one comparison procedure of Dunnett (1955). Data configurations are provided for which the F-test is not significant but the Dunnett test is. Because a conditional F-test before the Dunnett test may increase the false negative rate, it is recommended that the ANOVA F-test not be used at all as a global pre-test. In addition, related non-parametric tests are investigated.

12.
In this article, we consider a simple transient queuing system, namely a linear birth process with immigration in the presence of twin births. We find the differential-difference equation and the probability-generating function (p.g.f.) for this process. We then generalize it to a linear birth process with immigration in the presence of both single and twin births, and again to the case of multiple births. From the p.g.f. of the linear birth process with immigration in the presence of twin births, we obtain some particular transient queuing processes, such as the linear birth process with twin births and the simple immigration process. Direct derivations of the mean and variance of these processes are also discussed without using the generating functions.
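Such a process can be simulated event by event with a Gillespie-style algorithm. The sketch below is our own illustration, not the paper's derivation; the rate names (`nu` for immigration, `lam` per individual) and the twin probability `p_twin` are assumed:

```python
import numpy as np

def simulate_birth_immigration_twins(lam, nu, p_twin, t_max, n0=0, seed=1):
    """Event-by-event simulation of a linear birth process with
    immigration where each birth produces twins with probability
    p_twin. Total event rate at population size n is nu + lam * n
    (assumes nu > 0 or n0 > 0 so the rate is never zero)."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    while True:
        rate = nu + lam * n
        t += rng.exponential(1.0 / rate)   # time to next event
        if t > t_max:
            return n
        if rng.random() < nu / rate:
            n += 1                                   # immigration: one arrival
        else:
            n += 2 if rng.random() < p_twin else 1   # birth: twins or single
```

Setting `p_twin = 1` gives the pure twin-birth case treated first in the article, and `lam = 0` reduces the sketch to the simple immigration (Poisson) process.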

13.
14.
It is shown that two visibly distinct distributions can have almost identical moment-generating functions.

15.
Case–control design to assess the accuracy of a binary diagnostic test (BDT) is very frequent in clinical practice. This design consists of applying the diagnostic test to all of the individuals in a sample of those who have the disease and in another sample of those who do not have the disease. The sensitivity of the diagnostic test is estimated from the case sample and the specificity is estimated from the control sample. Another parameter which is used to assess the performance of a BDT is the weighted kappa coefficient. The weighted kappa coefficient depends on the sensitivity and specificity of the diagnostic test, on the disease prevalence and on the weighting index. In this article, confidence intervals are studied for the weighted kappa coefficient subject to a case–control design and a method is proposed to calculate the sample sizes to estimate this parameter. The results obtained were applied to a real example.
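A sketch of the weighted kappa coefficient as a function of sensitivity, specificity, prevalence, and the weighting index c. Conventions for the direction of c vary across the literature, so the form below (c = 0 giving the chance-corrected predictive value of a positive test, c = 1 that of a negative test) is an assumption rather than necessarily the article's parameterization:

```python
def weighted_kappa(se, sp, prev, c):
    """Weighted kappa of a binary diagnostic test.

    se, sp : sensitivity and specificity
    prev   : disease prevalence
    c      : weighting index in [0, 1]
    """
    q = 1.0 - prev
    youden = se + sp - 1.0                 # Youden's index
    Q = prev * se + q * (1.0 - sp)         # P(test positive)
    denom = (1.0 - c) * q * Q + c * prev * (1.0 - Q)
    return prev * q * youden / denom
```

Two sanity checks follow from the formula: a perfect test (se = sp = 1) gives kappa 1 for any c, and an uninformative test (Youden's index 0) gives kappa 0.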

16.
The exact distribution of a modified Behrens–Fisher statistic is derived. The distribution function is mostly elementary and is simpler than the exact distribution derived by Nel et al. Its practical use (including computational efficiency and convenience) is discussed.
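The exact distribution itself is not reproduced here. As a practical point of comparison, the Behrens–Fisher setting (two normal samples with unequal variances) is routinely handled in software by Welch's approximation, which this sketch illustrates with assumed sample sizes and variances:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 30)   # variance 1
y = rng.normal(0.0, 3.0, 20)   # variance 9: unequal, the Behrens-Fisher setting

# Welch's approximate solution: equal_var=False avoids pooling the variances
t_stat, p_val = stats.ttest_ind(x, y, equal_var=False)
```

Welch's test uses an approximate t reference distribution with estimated degrees of freedom, whereas the paper's contribution is an exact (and largely elementary) distribution for a modified statistic.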

17.
Space–time correlation modelling is one of the crucial steps of traditional structural analysis, since space–time models are used for prediction purposes. A comparative study among some classes of space–time covariance functions is proposed. The relevance of choosing a suitable model by taking into account the characteristic behaviour of the models is demonstrated using a space–time data set of ozone daily averages, and the flexibility of the product-sum model is also highlighted through simulated data sets.
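The product-sum model mentioned above combines valid spatial and temporal marginal covariances Cs and Ct. A minimal sketch, with exponential marginals and range parameters chosen purely for illustration:

```python
import numpy as np

def product_sum_cov(h, u, cs, ct, k1, k2, k3):
    """Product-sum space-time covariance:
    C(h, u) = k1*Cs(h)*Ct(u) + k2*Cs(h) + k3*Ct(u),
    valid for k1 > 0 and k2, k3 >= 0 when Cs and Ct are valid
    spatial and temporal covariance functions."""
    return k1 * cs(h) * ct(u) + k2 * cs(h) + k3 * ct(u)

# illustrative exponential marginals (assumed ranges 2 and 1)
cs = lambda h: np.exp(-np.abs(h) / 2.0)   # spatial marginal covariance
ct = lambda u: np.exp(-np.abs(u) / 1.0)   # temporal marginal covariance
```

The three coefficients give the model its flexibility: the product term couples space and time, while the two sum terms let the marginal structures enter separately, so the model is not forced into a purely separable form.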

18.
A Bayesian approach to modelling binary data on a regular lattice is introduced. The method uses a hierarchical model where the observed data are the sign of a hidden conditional autoregressive Gaussian process. This approach essentially extends the familiar probit model to dependent data. Markov chain Monte Carlo simulations are used on real and simulated data to estimate the posterior distribution of the spatial dependency parameters, and the method is shown to work well. The method can be straightforwardly extended to regression models.

19.
20.
In survival analysis and reliability studies, problems with random sample size arise quite frequently. More specifically, in cancer studies, the number of clonogens is unknown and the time to relapse of the cancer is defined by the minimum of the incubation times of the various clonogenic cells. In this article, we propose a new model where the distribution of the incubation time is taken as Weibull and the distribution of the random sample size as Bessel, giving rise to a Weibull–Bessel distribution. The maximum likelihood estimation of the model parameters is studied, and a score test is developed to compare it with its special submodel, namely the exponential–Bessel distribution. To illustrate the model, two real datasets are examined, and it is shown that the proposed model fits better than several other existing models in the literature. Extensive simulation studies are also carried out to examine the performance of the estimates.
