Similar Articles
20 similar articles found (search time: 31 ms)
1.
In this paper, we consider the multivariate normality test based on the measure of multivariate sample skewness defined by Srivastava (1984). Srivastava derived the asymptotic expectation up to order N⁻¹ for the multivariate sample skewness and an approximate χ² test statistic, where N is the sample size. Under normality, we derive another expectation and variance for Srivastava's multivariate sample skewness in order to obtain a better test statistic. From this result, an improved approximate χ² test statistic using the multivariate sample skewness is also given for assessing multivariate normality. Finally, numerical results obtained by Monte Carlo simulation are shown in order to evaluate the accuracy of the derived expectation, variance and improved approximate χ² test statistic. Furthermore, upper and lower percentiles of the χ² test statistic derived in this paper are compared with those of the χ² test statistic derived by Mardia (1974), which uses the multivariate sample skewness defined by Mardia (1970).
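(For concreteness, a minimal sketch — assuming a data matrix X with rows as observations — of Mardia's (1970) sample skewness b₁,ₚ and its approximate χ² test, the benchmark the abstract compares against; Srivastava's (1984) principal-component-based variant is not shown.)

```python
import numpy as np
from scipy.stats import chi2

def mardia_skewness_test(X):
    """Mardia's (1970) multivariate sample skewness b_{1,p} and its
    approximate chi-square test for multivariate normality."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)              # centre the data
    S = Xc.T @ Xc / n                    # ML covariance estimate
    D = Xc @ np.linalg.inv(S) @ Xc.T     # n x n scaled inner products
    b1p = (D ** 3).sum() / n**2          # sample skewness b_{1,p}
    stat = n * b1p / 6                   # approximate chi-square statistic
    df = p * (p + 1) * (p + 2) // 6
    return b1p, stat, chi2.sf(stat, df)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # simulated 3-variate normal sample
print(mardia_skewness_test(X))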

2.
The purpose of the present work is to extend the work of Gupta et al. (2010) to s-level column-balanced supersaturated designs. Adding runs to an existing E(χ²)-optimal supersaturated design and studying the optimality of the resulting design is an important issue, and this paper addresses it. A lower bound on E(χ²) is obtained for the extended supersaturated designs. Some examples and a small catalogue of E(χ²)-optimal supersaturated designs are also presented.
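(As a rough illustration of the criterion, not the paper's construction: for an N-run design whose columns each take s levels, the pairwise χ² statistic measures how far the s×s table of level-combination counts departs from the balanced count N/s², and E(χ²) averages it over all column pairs. The toy design below is illustrative.)

```python
import numpy as np
from itertools import combinations

def e_chi2(design, s):
    """Average pairwise chi-square statistic E(chi^2) for an
    N x m supersaturated design with s-level columns (coded 0..s-1)."""
    N, m = design.shape
    expected = N / s**2                       # balanced count per level pair
    chi2_vals = []
    for i, j in combinations(range(m), 2):
        counts = np.zeros((s, s))
        for a, b in zip(design[:, i], design[:, j]):
            counts[a, b] += 1
        chi2_vals.append(((counts - expected) ** 2).sum() / expected)
    return np.mean(chi2_vals)

# toy 6-run, 3-level design with 4 balanced columns; illustrative only
D = np.array([[0, 0, 1, 2], [1, 1, 2, 0], [2, 2, 0, 1],
              [0, 1, 0, 1], [1, 2, 1, 2], [2, 0, 2, 0]])
print(e_chi2(D, s=3))
```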

3.
We consider the batch queueing systems M/M^H/1 and M^H/M/1 with catastrophes. The transient probability functions of these queueing systems are obtained by a lattice path combinatorics approach that utilizes randomization and dual processes. Steady-state distributions are also determined. Generalizations to systems having batches of different sizes are discussed.
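(The randomization step can be sketched generically, under assumed rates and a truncated state space — this is plain uniformization, not the paper's lattice-path/dual-process derivation: uniformize the generator Q into the DTMC kernel P = I + Q/Λ and sum Poisson-weighted powers of P.)

```python
import numpy as np
from scipy.stats import poisson

def transient_probs(Q, p0, t, tol=1e-12):
    """Transient distribution of a CTMC by uniformization (randomization):
    p(t) = sum_k e^{-Lt} (Lt)^k / k! * p0 @ P^k,  with P = I + Q/L."""
    L = -Q.diagonal().min() * 1.05             # rate >= max exit rate
    P = np.eye(len(Q)) + Q / L
    kmax = int(poisson.isf(tol, L * t)) + 1    # truncate the Poisson sum
    term, out = p0.astype(float), np.zeros(len(Q))
    for k in range(kmax + 1):
        out += poisson.pmf(k, L * t) * term
        term = term @ P
    return out

# illustrative M/M/1 queue with catastrophes, truncated at capacity K
K, lam, mu, gam = 20, 0.8, 1.0, 0.05           # arrival, service, catastrophe rates
Q = np.zeros((K + 1, K + 1))
for n in range(K + 1):
    if n < K:
        Q[n, n + 1] = lam                      # arrival
    if n > 0:
        Q[n, n - 1] = mu                       # service completion
        Q[n, 0] += gam                         # catastrophe empties the system
    Q[n, n] -= Q[n].sum()                      # diagonal balances the row
p0 = np.zeros(K + 1); p0[0] = 1.0              # start from an empty system
print(transient_probs(Q, p0, t=5.0)[:4])
```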

4.
In accelerated life testing (ALT), products are exposed to stress levels higher than those at normal use in order to obtain failure information in a timely manner. Past work on planning ALT is predominantly concerned with a single cause of failure. This article presents methods for planning ALT in the presence of k competing risks. Expressions for computing the Fisher information matrix are presented when the risks are independently lognormally distributed. Optimal test plans are obtained under criteria based on determinants and maximum likelihood estimation. The proposed method is demonstrated on ALT of motor insulation.

5.
Before carrying out a full-scale bioequivalence trial, it is desirable to conduct a pilot trial to decide if a generic drug product shows promise of bioequivalence. The purpose of a pilot trial is to screen test formulations, and hence small sample sizes can be used. Based on the outcome of the pilot trial, one can decide whether or not a full-scale pivotal trial should be carried out to assess bioequivalence. This article deals with the design of a pivotal trial, based on the evidence from the pilot trial. A two-stage adaptive procedure is developed to determine the sample size and the decision rule for the pivotal trial, for testing average bioequivalence using the two one-sided tests (TOST) procedure. Numerical implementation of the procedure is discussed in detail, and the required tables are provided. Numerical results indicate that the required sample sizes can be smaller than those recommended by the FDA for a single trial, especially when the pilot study provides strong evidence in favor of bioequivalence.
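(A minimal sketch of the standard TOST decision for average bioequivalence on the log scale, with the conventional limits ±log(1.25); the summary statistics are illustrative, and the two-stage adaptive sizing itself follows the article's tables.)

```python
import numpy as np
from scipy import stats

def tost_abe(diff, se, df, theta=np.log(1.25), alpha=0.05):
    """Two one-sided tests (TOST) for average bioequivalence:
    conclude BE only if both one-sided tests reject at level alpha."""
    t_lower = (diff + theta) / se          # H0: diff <= -theta
    t_upper = (diff - theta) / se          # H0: diff >= +theta
    p_lower = stats.t.sf(t_lower, df)
    p_upper = stats.t.cdf(t_upper, df)
    return max(p_lower, p_upper) < alpha, (p_lower, p_upper)

# illustrative summary data from a 2x2 crossover on log(AUC)
diff, se, df = 0.05, 0.08, 22              # estimated T-R difference, its SE, df
print(tost_abe(diff, se, df))
```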

6.
When random variables do not take discrete values, observed data are often the rounded values of continuous random variables. Errors caused by rounding of data are often neglected by classical statistical theories. While some pioneers have identified the problem and made suggestions to rectify it, few suitable approaches have been proposed. In this paper, we propose an approximate MLE (AMLE) procedure to estimate the parameters and discuss the consistency and asymptotic normality of the estimates. As an illustration, we consider estimation of the parameters in AR(p) and MA(q) models for rounded data.
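(The rounding-aware likelihood idea can be sketched in the simplest i.i.d. normal case — the paper's AR(p)/MA(q) treatment is more involved: an observation y recorded on a grid of width h contributes P(y − h/2 < X ≤ y + h/2) to the likelihood.)

```python
import numpy as np
from scipy import stats, optimize

def rounded_mle(y, h):
    """Approximate MLE for (mu, sigma) of a normal sample observed
    only after rounding to a grid of width h."""
    def nll(par):
        mu, log_sigma = par
        sigma = np.exp(log_sigma)
        p = (stats.norm.cdf(y + h / 2, mu, sigma)
             - stats.norm.cdf(y - h / 2, mu, sigma))
        return -np.sum(np.log(np.clip(p, 1e-300, None)))
    res = optimize.minimize(nll, x0=[y.mean(), np.log(y.std())])
    return res.x[0], np.exp(res.x[1])

rng = np.random.default_rng(1)
h = 1.0                                    # rounding width comparable to sigma
x = rng.normal(10, 0.8, size=500)
y = np.round(x / h) * h                    # only rounded values are observed
print(rounded_mle(y, h))                   # compare with naive y.mean(), y.std()
```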

7.
The paper introduces DT-optimum designs that provide a specified balance between model discrimination and parameter estimation. An equivalence theorem is presented for the case of two models and extended to an arbitrary number of models and of combinations of parameters. A numerical example shows the properties of the procedure. The relationship with other design procedures for parameter estimation and model discrimination is discussed.

8.
The KL-optimality criterion has been recently proposed to discriminate between any two statistical models. However, designs which are optimal for model discrimination may be inadequate for parameter estimation. In this paper, the DKL-optimality criterion is proposed which is useful for the dual problem of model discrimination and parameter estimation. An equivalence theorem and a stopping rule for the corresponding iterative algorithms are provided. A pharmacokinetics application and a bioassay example are given to show the good properties of a DKL-optimum design.

9.
Box and Behnken [1958. Some new three level second-order designs for surface fitting. Statistical Technical Research Group Technical Report No. 26. Princeton University, Princeton, NJ; 1960. Some new three level designs for the study of quantitative variables. Technometrics 2, 455–475] introduced a class of 3-level second-order designs for fitting the second-order response surface model. These 17 Box–Behnken designs (BB designs) are available for 3–12 and 16 factors. Although BB designs were developed nearly 50 years ago, they and the central-composite designs of Box and Wilson [1951. On the experimental attainment of optimum conditions. J. Royal Statist. Soc., Ser. B 13, 1–45] are still the most often recommended response surface designs. Of the 17 aforementioned BB designs, 10 were constructed from balanced incomplete block designs (BIBDs) and seven were constructed from partially balanced incomplete block designs (PBIBDs). In this paper we show that these seven BB designs constructed from PBIBDs can be improved in terms of rotatability as well as average prediction variance and D- and G-efficiency. In addition, we also report new orthogonally blocked solutions for 5, 8, 9, 11 and 13 factors. Note that an 11-factor BB design is available but cannot be orthogonally blocked. All new designs can be found at http://www.math.montana.edu/jobo/bbd/.
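(A sketch of the classical construction behind BB designs, shown for 3 factors: each block of the (P)BIBD picks the factors that run through a 2-level factorial at ±1 while the remaining factors sit at 0, and centre runs are appended; the block list and centre-run count below are illustrative.)

```python
import numpy as np
from itertools import product

def box_behnken(k, blocks, n_center=3):
    """Assemble a Box-Behnken design: for each (P)BIBD block,
    cross the listed factors over {-1, +1}, others held at 0."""
    rows = []
    for block in blocks:
        for signs in product((-1, 1), repeat=len(block)):
            run = np.zeros(k)
            run[list(block)] = signs
            rows.append(run)
    rows += [np.zeros(k)] * n_center           # centre runs
    return np.array(rows)

# 3 factors: blocks are all pairs -> the classic 15-run BB design
design = box_behnken(3, blocks=[(0, 1), (0, 2), (1, 2)])
print(design.shape)                            # (15, 3)
print(design)
```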

10.
11.
Isotones are a deterministic graphical device introduced by Mudholkar et al. [1991. A graphical procedure for comparing goodness-of-fit tests. J. Roy. Statist. Soc. B 53, 221–232] in the context of comparing some tests of normality. An isotone of a test is a contour of p values of the test applied to "ideal samples", called profiles, from a two-shape-parameter family representing the null and the alternative distributions of the parameter space. The isotone is an adaptation of Tukey's sensitivity curves, a generalization of Prescott's stylized sensitivity contours, and an alternative to the isodynes of Stephens. The purpose of this paper is twofold. One is to show that the isotones can provide useful qualitative information regarding the behavior of tests of distributional assumptions other than normality. The other is to show that the qualitative conclusions remain the same from one two-parameter family of alternatives to another. Towards this end we construct and interpret the isotones of some tests of the composite hypothesis of exponentiality, using the profiles of two Weibull extensions, the generalized Weibull and the exponentiated Weibull families, which allow IFR, DFR, as well as unimodal and bathtub failure rate alternatives. Thus, as a by-product of the study, it is seen that a test due to Csörgő et al. [1975. Application of characterizations in the area of goodness-of-fit. In: Patil, G.P., Kotz, S., Ord, J.K. (Eds.), Statistical Distributions in Scientific Work, vol. 2. Reidel, Boston, pp. 79–90], and Gnedenko's Q(r) test [1969. Mathematical Methods of Reliability Theory. Academic Press, New York], are appropriate for detecting monotone failure rate alternatives, whereas a bivariate F test due to Lin and Mudholkar [1980. A test of exponentiality based on the bivariate F distribution. Technometrics 22, 79–82] and their entropy test [1984. On two applications of characterization theorems to goodness-of-fit. Colloq. Math. Soc. Janos Bolyai 45, 395–414] can detect all alternatives, but are especially suitable for nonmonotone failure rate alternatives.

12.
In this article, we introduce three new distribution-free Shewhart-type control charts that exploit run and Wilcoxon-type rank-sum statistics to detect possible shifts of a monitored process. Exact formulae for the alarm rate, the run length distribution, and the average run length (ARL) are all derived. A key advantage of these charts is that, due to their nonparametric nature, the false alarm rate (FAR) and the in-control run length distribution are the same for all continuous process distributions. Tables are provided for the implementation of the charts for some typical FAR values. Furthermore, a numerical study reveals that the new charts are quite flexible and efficient in detecting shifts to Lehmann-type out-of-control situations.
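(A sketch of the rank-sum ingredient only — the authors' charts also use run statistics, and their limits come from exact null distributions rather than the illustrative values assumed here: each test sample is compared with an in-control reference sample via its Wilcoxon rank-sum, and since only ranks enter, the in-control behaviour is the same for every continuous distribution.)

```python
import numpy as np
from scipy.stats import rankdata

def rank_sum_chart(reference, samples, lcl, ucl):
    """Distribution-free Shewhart-type chart: plot the Wilcoxon
    rank-sum of each test sample within (reference + sample)."""
    signals = []
    for y in samples:
        pooled = np.concatenate([reference, y])
        ranks = rankdata(pooled)               # mid-ranks handle ties
        w = ranks[len(reference):].sum()       # rank-sum of the test sample
        signals.append((w, w < lcl or w > ucl))
    return signals

rng = np.random.default_rng(2)
ref = rng.normal(0, 1, size=100)               # Phase I reference sample
good = [rng.normal(0, 1, size=5) for _ in range(5)]
shifted = [rng.normal(1.5, 1, size=5) for _ in range(5)]
# illustrative limits; in practice taken from the exact null distribution
print(rank_sum_chart(ref, good + shifted, lcl=130, ucl=400))
```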

13.
Confirmatory bioassay experiments take place in late stages of the drug discovery process when a small number of compounds have to be compared with respect to their properties. As the cost of the observations may differ considerably, the design problem is well specified by the cost of compound used rather than by the number of observations. We show that cost-efficient designs can be constructed using useful properties of the minimum support designs. These designs are particularly suited for studies where the parameters of the model to be estimated are known with high accuracy prior to the experiment, although they prove to be robust against typical inaccuracies of these values. When the parameters of the model can only be specified with ranges of values or by a probability distribution, we use a Bayesian criterion of optimality to construct the required designs. Typically, the number of their support points depends on the prior knowledge for the model parameters. In all cases we recommend identifying a set of designs with good statistical properties but different potential costs to choose from.

14.
It is often necessary to conduct a pilot study to determine the sample size required for a clinical trial. Due to differences in sampling environments, the pilot data are usually discarded after sample size calculation. This paper tries to use the pilot information to modify the subsequent testing procedure when a two-sided t-test or a regression model is used to compare two treatments. The new test maintains the required significance level regardless of the dissimilarity between the pilot and the target populations, but increases the power when the two are similar. The test is constructed based on the posterior distribution of the parameters given the pilot study information, but its properties are investigated from a frequentist's viewpoint. Due to the small likelihood of an irrelevant pilot population, the new approach is a viable alternative to the current practice.

15.
The Poisson–Lindley distribution is a compound discrete distribution that can be used as an alternative to other discrete distributions, such as the negative binomial. This paper develops approximate one-sided and equal-tailed two-sided tolerance intervals for the Poisson–Lindley distribution. Practical applications of the Poisson–Lindley distribution frequently involve large samples; thus, we utilize large-sample Wald confidence intervals in the construction of our tolerance intervals. A coverage study is presented to demonstrate the efficacy of the proposed tolerance intervals. The tolerance intervals are also demonstrated using two real data sets. The R code developed for our discussion is briefly highlighted and included in the tolerance package.
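(For reference, a sketch of the Poisson–Lindley mass function P(X = x) = θ²(x + θ + 2)/(θ + 1)^(x+3) and a moment-style estimate of θ; the tolerance-interval construction itself follows the paper and its tolerance package implementation.)

```python
import numpy as np

def pl_pmf(x, theta):
    """Poisson-Lindley pmf: theta^2 (x + theta + 2) / (theta + 1)^(x + 3)."""
    x = np.asarray(x)
    return theta**2 * (x + theta + 2) / (theta + 1) ** (x + 3)

def pl_theta_mom(sample):
    """Moment estimator: solve mean = (theta + 2) / (theta (theta + 1)),
    i.e. the positive root of m*theta^2 + (m - 1)*theta - 2 = 0."""
    m = np.mean(sample)
    return (-(m - 1) + np.sqrt((m - 1) ** 2 + 8 * m)) / (2 * m)

x = np.arange(6)
print(pl_pmf(x, theta=1.5), pl_pmf(np.arange(200), 1.5).sum())  # sums to ~1
data = np.array([0] * 120 + [1] * 60 + [2] * 25 + [3] * 10 + [4] * 5)
print(pl_theta_mom(data))
```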

16.
The distribution of the test statistics of homogeneity tests is often unknown, requiring the estimation of the critical values through Monte Carlo (MC) simulations. The computation of the critical values at low α, especially when the distribution of the statistics changes with the series length (sample cardinality), requires a considerable number of simulations to achieve a reasonable precision of the estimates (i.e. 10⁶ simulations or more for each series length). If, in addition, the test requires a noteworthy computational effort, the estimation of the critical values may need unacceptably long runtimes.

To overcome the problem, the paper proposes a regression-based refinement of an initial MC estimate of the critical values, which also allows the achieved improvement to be approximated. Moreover, the paper presents an application of the method to two tests: SNHT (standard normal homogeneity test, widely used in climatology) and SNH2T (a version of SNHT with quadratic numerical complexity). For both, the paper reports the critical values for α ranging between 0.1 and 0.0001 (useful for p-value estimation) and series lengths ranging from 10 (a widely adopted size in the climatological change-point detection literature) to 70,000 elements (nearly the length of a daily time series 200 years long), estimated with coefficients of variation within 0.22%. For SNHT, a comparison of our results with approximate, theoretically derived critical values is also performed; we suggest adopting those values for series exceeding 70,000 elements.
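(A sketch of the two raw ingredients under illustrative settings — the SNHT statistic and its brute-force MC critical values — with a simple polynomial-in-log(n) fit standing in for the paper's regression-based refinement.)

```python
import numpy as np

def snht(x):
    """Standard normal homogeneity test statistic:
    T = max_k [k * zbar1^2 + (n - k) * zbar2^2] on the standardized series."""
    n = len(x)
    z = (x - x.mean()) / x.std(ddof=1)
    cs = np.cumsum(z)
    k = np.arange(1, n)
    z1, z2 = cs[:-1] / k, (cs[-1] - cs[:-1]) / (n - k)
    return np.max(k * z1**2 + (n - k) * z2**2)

def mc_critical_value(n, alpha, nsim=20000, seed=0):
    """Raw Monte Carlo estimate of the (1 - alpha) critical value."""
    rng = np.random.default_rng(seed)
    stats = np.array([snht(rng.normal(size=n)) for _ in range(nsim)])
    return np.quantile(stats, 1 - alpha)

# regression-style smoothing across series lengths (illustrative refinement)
lengths = np.array([20, 50, 100, 200])
raw = np.array([mc_critical_value(n, 0.05, nsim=4000) for n in lengths])
coef = np.polyfit(np.log(lengths), raw, deg=2)  # fit cv ~ poly(log n)
print(dict(zip(lengths, raw)), np.polyval(coef, np.log(lengths)))
```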


17.
This article develops test statistics for the homogeneity of the means of several treatment groups of count data in the presence of over-dispersion or under-dispersion when no likelihood is available. The C(α) or score-type tests based on models specified by only the first two moments of the counts are obtained using quasi-likelihood, extended quasi-likelihood, and double extended quasi-likelihood. Monte Carlo simulations are then used to study the behavior of these C(α) statistics, compared to the C(α) statistic based on a parametric model, namely the negative binomial model, in terms of size, power, and robustness to departures from the data distribution as well as from dispersion homogeneity. These simulations demonstrate that the C(α) statistic based on the double extended quasi-likelihood holds the nominal size at the 5% level well in all data situations and shows some edge in power over the other statistics; in particular, it performs much better than the commonly used statistic based on the quasi-likelihood. This C(α) statistic also shows robustness to moderate heterogeneity due to dispersion. Finally, applications to ecological, toxicological and biological data are given.
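(A rough sketch of a score-type homogeneity statistic under the simplest quasi-likelihood variance assumption Var(Y) = φμ, referred to χ² with g − 1 degrees of freedom; the paper's C(α) statistics, in particular the double extended quasi-likelihood version, are more refined than this.)

```python
import numpy as np
from scipy.stats import chi2

def score_homogeneity(groups):
    """Score-type test of equal means for counts with Var(Y) = phi * mu:
    S = sum_g n_g (ybar_g - mu)^2 / (phi * mu), ~ chi^2_{g-1} under H0."""
    ybars = np.array([np.mean(g) for g in groups])
    ns = np.array([len(g) for g in groups])
    mu = np.sum(ns * ybars) / ns.sum()          # pooled mean under H0
    # moment estimate of phi from within-group Pearson residuals
    rss = sum(((np.asarray(g) - yb) ** 2 / yb).sum()
              for g, yb in zip(groups, ybars))
    phi = rss / (ns.sum() - len(groups))
    S = np.sum(ns * (ybars - mu) ** 2) / (phi * mu)
    return S, chi2.sf(S, len(groups) - 1)

rng = np.random.default_rng(3)
# over-dispersed counts: negative binomial groups with equal means (= 4)
groups = [rng.negative_binomial(5, 5 / 9, size=30) for _ in range(3)]
print(score_homogeneity(groups))
```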

18.
19.
Minimisation is a method often used in clinical trials to balance the treatment groups with respect to some prognostic factors. In the case of two treatments, the predictability of this method is calculated for different numbers of factors, different numbers of levels of each factor and for different proportions of the population at each level. It is shown that if we know nothing about the previous patients except the last treatment allocation, the next treatment can be correctly guessed more than 60% of the time if no biased coin is used. If the two previous assignments are known to have been the same, the next treatment can be guessed correctly around 80% of the time. Therefore, it is suggested that a biased coin should always be used with minimisation. Different choices of biased coin are investigated in terms of the reduction in predictability and the increase in imbalance that they produce. An alternative design to minimisation which makes use of optimum design theory is also investigated, by means of simulation, and does not appear to have any clear advantages over minimisation with a biased coin.
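(A sketch of two-arm minimisation with a biased coin, plus a crude predictability estimate: among allocations where the two arms are not tied on total imbalance, a guesser who always picks the balance-improving arm is right with roughly the coin probability. The factor structure and coin probability are illustrative.)

```python
import numpy as np

def minimise_trial(n_patients, levels, p_coin=0.7, seed=0):
    """Simulate two-arm minimisation: assign each patient to the arm
    that reduces total factor imbalance, with probability p_coin."""
    rng = np.random.default_rng(seed)
    counts = [np.zeros((nl, 2)) for nl in levels]  # per-factor level x arm
    correct = total = 0
    for _ in range(n_patients):
        profile = [rng.integers(nl) for nl in levels]
        imbalance = [sum(c[f, arm] - c[f, 1 - arm]
                         for c, f in zip(counts, profile)) for arm in (0, 1)]
        preferred = int(np.argmin(imbalance))      # balance-improving arm
        arm = preferred if rng.random() < p_coin else 1 - preferred
        if imbalance[0] != imbalance[1]:           # a guessable allocation
            total += 1
            correct += arm == preferred
        for c, f in zip(counts, profile):
            c[f, arm] += 1
    return correct / max(total, 1)                 # predictability when guessable

# two factors with 2 and 3 levels; deterministic coin vs biased coin
print(minimise_trial(2000, [2, 3], p_coin=1.0))    # ~1.0 among guessable cases
print(minimise_trial(2000, [2, 3], p_coin=0.7))    # ~0.7
```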

20.
In recent years, the notion of data depth has been used in nonparametric multivariate data analysis, since it gives a natural 'centre-outward' ordering of multivariate data points with respect to the given data cloud. In the literature, various nonparametric tests based on data depth have been developed for testing equality of location of two multivariate distributions. Here, we define two nonparametric tests based on two different test statistics for testing equality of locations of two multivariate distributions. In the present work, we compare the performance of these tests with the tests developed by Li and Liu [New nonparametric tests of multivariate locations and scales using data depth. Statist. Sci. 2004;19(4):686–696] for testing equality of locations of two multivariate distributions. Comparison in terms of power is done for multivariate symmetric and skewed distributions using simulation for three popular depth functions. An application of the tests to real-life data is provided, and conclusions and recommendations are given.
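(A sketch of one depth-based ingredient, not the paper's specific statistics: Mahalanobis depth of the second sample's points with respect to the first, with a permutation test on the average depth — an unusually low average depth of Y within X suggests a location difference.)

```python
import numpy as np

def mahalanobis_depth(points, cloud):
    """MD(x; F) = 1 / (1 + (x - mu)' S^{-1} (x - mu)) w.r.t. 'cloud'."""
    mu, S_inv = cloud.mean(0), np.linalg.inv(np.cov(cloud.T))
    d = points - mu
    return 1.0 / (1.0 + np.einsum('ij,jk,ik->i', d, S_inv, d))

def depth_location_test(X, Y, nperm=999, seed=0):
    """Permutation test: is the mean depth of Y within X unusually low?"""
    rng = np.random.default_rng(seed)
    obs = mahalanobis_depth(Y, X).mean()
    Z, n = np.vstack([X, Y]), len(X)
    null = []
    for _ in range(nperm):
        idx = rng.permutation(len(Z))
        null.append(mahalanobis_depth(Z[idx[n:]], Z[idx[:n]]).mean())
    return obs, (1 + np.sum(np.array(null) <= obs)) / (nperm + 1)

rng = np.random.default_rng(4)
X = rng.normal(0, 1, size=(60, 2))
Y = rng.normal(0.8, 1, size=(60, 2))            # shifted location
print(depth_location_test(X, Y))
```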

