Found 20 similar documents; search time: 0 ms
1.
Zhidong Bai, Shurong Zheng, Baoxue Zhang, Guorong Hu, Journal of Statistical Planning and Inference, 2009
When random variables do not take discrete values, the observed data are often rounded values of continuous random variables. Errors caused by rounding of data are often neglected in classical statistical theory. Although some pioneers identified the problem and suggested remedies, few suitable approaches have been proposed. In this paper, we propose an approximate MLE (AMLE) procedure to estimate the parameters and discuss the consistency and asymptotic normality of the estimates. As an illustration, we consider estimation of the parameters of AR(p) and MA(q) models from rounded data.
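The bias that rounding induces in naive estimates can be seen with a small simulation. The sketch below is not the authors' AMLE procedure; it uses the classical Sheppard correction (subtract h²/12 for rounding width h) purely to illustrate the size of the neglected rounding error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sigma = 0.1
x = rng.normal(loc=0.0, scale=sigma, size=n)  # true sd comparable to the rounding unit
x_rounded = np.round(x, 1)                    # data recorded to one decimal place (h = 0.1)

# The naive MLE of the variance ignores rounding and is inflated by roughly h^2/12;
# Sheppard's classical correction removes most of that bias.
h = 0.1
naive_var = x_rounded.var()
corrected_var = naive_var - h**2 / 12

print(f"true var      : {sigma**2:.6f}")
print(f"naive var     : {naive_var:.6f}")
print(f"corrected var : {corrected_var:.6f}")
```

With the rounding unit equal to the true standard deviation, the naive variance estimate is inflated by roughly 8%, which is far from negligible.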
2.
We consider batch queueing systems M/MH/1 and MH/M/1 with catastrophes. The transient probability functions of these queueing systems are obtained by a lattice path combinatorics approach that utilizes randomization and dual processes. Steady-state distributions are also determined. Generalizations to systems having batches of different sizes are discussed.
3.
A supersaturated design (SSD) is a factorial design in which the degrees of freedom for all its main effects exceed the total number of distinct factorial level-combinations (runs) of the design. Designs with quantitative factors, in which level permutation within one or more factors can result in different geometrical structures, are very different from designs with nominal factors, which have been treated as traditional designs. In this paper, a new criterion is proposed for SSDs with quantitative factors, and comparisons and analysis for the new criterion are provided. It is shown that the proposed criterion has high efficiency in discriminating between geometrically non-isomorphic designs and an advantage in computation.
4.
In this paper, we consider a multivariate normality test based on the measure of multivariate sample skewness defined by Srivastava (1984). Srivastava derived the asymptotic expectation up to order N⁻¹ for the multivariate sample skewness and an approximate χ² test statistic, where N is the sample size. Under normality, we derive another expectation and variance for Srivastava's multivariate sample skewness in order to obtain a better test statistic. From this result, an improved approximate χ² test statistic using the multivariate sample skewness is also given for assessing multivariate normality. Finally, numerical results from a Monte Carlo simulation are shown in order to evaluate the accuracy of the obtained expectation, variance, and improved approximate χ² test statistic. Furthermore, upper and lower percentiles of the χ² test statistic derived in this paper are compared with those of the χ² test statistic derived by Mardia (1974), which uses the multivariate sample skewness defined by Mardia (1970).
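For reference, the competing skewness measure mentioned at the end of the abstract, Mardia's (1970) b₁,ₚ with its approximate χ² statistic n·b₁,ₚ/6 on p(p+1)(p+2)/6 degrees of freedom, can be computed in a few lines (a sketch of Mardia's statistic, not Srivastava's):

```python
import numpy as np

def mardia_skewness(X):
    """Mardia (1970) multivariate sample skewness b_{1,p} and the
    approximate chi-square statistic n * b_{1,p} / 6."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                 # ML covariance estimate (divisor n)
    D = Xc @ np.linalg.inv(S) @ Xc.T  # Mahalanobis cross-products g_ij
    b1p = (D ** 3).sum() / n**2       # average of g_ij cubed
    stat = n * b1p / 6
    df = p * (p + 1) * (p + 2) // 6
    return b1p, stat, df

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))         # data generated under the null
b1p, stat, df = mardia_skewness(X)
```

Under normality the statistic is approximately χ² with df degrees of freedom (df = 10 for p = 3), so comparing `stat` against the corresponding χ² quantile gives the test.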
5.
Box and Behnken [1958. Some new three level second-order designs for surface fitting. Statistical Technical Research Group Technical Report No. 26. Princeton University, Princeton, NJ; 1960. Some new three level designs for the study of quantitative variables. Technometrics 2, 455–475.] introduced a class of 3-level second-order designs for fitting the second-order response surface model. These 17 Box–Behnken designs (BB designs) are available for 3–12 and 16 factors. Although BB designs were developed nearly 50 years ago, they and the central-composite designs of Box and Wilson [1951. On the experimental attainment of optimum conditions. J. Royal Statist. Soc., Ser. B 13, 1–45.] are still the most often recommended response surface designs. Of the 17 aforementioned BB designs, 10 were constructed from balanced incomplete block designs (BIBDs) and seven from partially balanced incomplete block designs (PBIBDs). In this paper we show that these seven BB designs constructed from PBIBDs can be improved in terms of rotatability as well as average prediction variance, D- and G-efficiency. In addition, we also report new orthogonally blocked solutions for 5, 8, 9, 11 and 13 factors. Note that an 11-factor BB design is available but cannot be orthogonally blocked. All new designs can be found at http://www.math.montana.edu/~jobo/bbd/.
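The classical BIBD-based construction is easy to reproduce: for each block (here, each pair of factors) run a 2² factorial at ±1 while the remaining factors sit at 0, then append centre runs. A minimal sketch for the all-pairs case (which matches the standard 3- and 4-factor BB designs; larger designs use sparser BIBD/PBIBD pairings):

```python
import itertools
import numpy as np

def box_behnken_all_pairs(k, n_center=1):
    """Box-Behnken-style design built from the all-pairs BIBD:
    a 2^2 factorial (+/-1) on every pair of the k factors, the
    other factors held at 0, plus n_center centre runs."""
    runs = []
    for i, j in itertools.combinations(range(k), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * k for _ in range(n_center)]
    return np.array(runs)

# 3 factors: 3 pairs x 4 runs + 3 centre points = 15 runs
D = box_behnken_all_pairs(3, n_center=3)
```

Every non-centre run has exactly two factors at ±1, which is what makes these 3-level designs economical for fitting second-order models.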
6.
E-optimal designs for comparing three treatments in blocks of size three are identified, where intrablock observations are correlated according to a first-order autoregressive error process with parameter ρ∈(0,1). When the number of blocks b is of the form b=3n+1, there are two distinct optimal designs depending on the value of ρ, the best design being unequally replicated for large ρ. For other values of b, binary, equireplicate designs with specified within-block assignment patterns are best. In many cases, the stronger majorization optimality is established.
7.
This paper introduces W-tests for assessing homogeneity in mixtures of discrete probability distributions. A W-test statistic depends on the data solely through parameter estimators and, if a penalized maximum likelihood estimation framework is used, has a tractable asymptotic distribution under the null hypothesis of homogeneity. The large-sample critical values are quantiles of a chi-square distribution multiplied by an estimable constant for which we provide an explicit formula. In particular, the estimation of large-sample critical values does not involve simulation experiments or random field theory. We demonstrate that W-tests are generally competitive with a benchmark test in terms of power to detect heterogeneity. Moreover, in many situations, the large-sample critical values can be used even with small to moderate sample sizes. The main implementation issue (selection of an underlying measure) is thoroughly addressed, and we explain why W-tests are well-suited to problems involving large and online data sets. Application of a W-test is illustrated with an epidemiological data set.
8.
This article develops test statistics for the homogeneity of the means of several treatment groups of count data in the presence of over-dispersion or under-dispersion when there is no likelihood available. The C(α) or score-type tests based on models specified by only the first two moments of the counts are obtained using quasi-likelihood, extended quasi-likelihood, and double extended quasi-likelihood. Monte Carlo simulations are then used to study the behavior of these C(α) statistics relative to the C(α) statistic based on a parametric model, namely the negative binomial model, in terms of size, power, and robustness to departures from the data distribution and from dispersion homogeneity. These simulations demonstrate that the C(α) statistic based on the double extended quasi-likelihood holds the nominal size at the 5% level well in all data situations and shows some edge in power over the other statistics; in particular, it performs much better than the commonly used statistic based on the quasi-likelihood. This C(α) statistic also shows robustness to moderate heterogeneity due to dispersion. Finally, applications to ecological, toxicological, and biological data are given.
9.
For a sequence of strictly stationary random fields that are uniformly ρ′-mixing and satisfy a Lindeberg condition, a central limit theorem is obtained for sequences of “rectangular” sums from the given random fields. The “Lindeberg CLT” is then used to prove a CLT for some kernel estimators of probability density for some strictly stationary random fields satisfying ρ′-mixing, and whose probability density and joint densities are absolutely continuous.
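The object of the second result, a kernel density estimator, is simple to write down. A minimal one-dimensional Gaussian-kernel sketch with iid data follows; the paper's contribution is precisely that the CLT still holds when the data form a dependent, ρ′-mixing random field rather than an iid sample.

```python
import numpy as np

def gaussian_kde_1d(x, grid, h):
    """Gaussian kernel density estimate f_hat(t) = (1/(n*h)) * sum_i K((t - x_i)/h),
    evaluated on every point of `grid` (iid illustration only)."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
x = rng.normal(size=2000)               # sample from N(0, 1)
grid = np.linspace(-5.0, 5.0, 1001)
f_hat = gaussian_kde_1d(x, grid, h=0.3)  # bandwidth h chosen by hand here
```

The estimate integrates to (approximately) one over the grid, and the CLT in question describes the asymptotic normality of f_hat(t) − E f_hat(t) at fixed points t.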
10.
Alexander N. Donev, Randy Tobias, Farinaz Monadjemi, Journal of Statistical Planning and Inference, 2008
Confirmatory bioassay experiments take place in late stages of the drug discovery process when a small number of compounds have to be compared with respect to their properties. As the cost of the observations may differ considerably, the design problem is well specified by the cost of compound used rather than by the number of observations. We show that cost-efficient designs can be constructed using useful properties of the minimum support designs. These designs are particularly suited for studies where the parameters of the model to be estimated are known with high accuracy prior to the experiment, although they prove to be robust against typical inaccuracies of these values. When the parameters of the model can only be specified with ranges of values or by a probability distribution, we use a Bayesian criterion of optimality to construct the required designs. Typically, the number of their support points depends on the prior knowledge for the model parameters. In all cases we recommend identifying a set of designs with good statistical properties but different potential costs to choose from.
11.
It is shown that the Simes inequality is reversed for a broad class of negatively dependent distributions.
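To see what is at stake: the Simes global test rejects when p₍ᵢ₎ ≤ iα/m for some i, and the Simes inequality bounds its size by α under independence or positive dependence (with equality for independent uniform p-values). The sketch below checks the equality case by simulation; the paper's point is that under negative dependence the inequality flips the other way.

```python
import numpy as np

def simes_reject(pvals, alpha=0.05):
    """Simes global test: reject H0 if p_(i) <= i * alpha / m for some i."""
    p = np.sort(pvals)
    m = len(p)
    return bool(np.any(p <= np.arange(1, m + 1) * alpha / m))

rng = np.random.default_rng(2)
m, reps, alpha = 10, 20_000, 0.05
# Independent uniform p-values: the rejection rate should be exactly alpha.
hits = sum(simes_reject(rng.uniform(size=m), alpha) for _ in range(reps))
rate = hits / reps
```

The simulated rejection rate lands near 0.05, confirming the equality case; a negatively dependent construction would push it above α.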
12.
13.
The problem of selecting the correct subset of predictors within a linear model has received much attention in recent literature. Within the Bayesian framework, a popular choice of prior has been Zellner's g-prior, which is based on the inverse of the empirical covariance matrix of the predictors. An extension of Zellner's prior is proposed in this article which allows for a power parameter on the empirical covariance of the predictors. The power parameter helps control the degree to which correlated predictors are smoothed towards or away from one another. In addition, the empirical covariance of the predictors is used to obtain suitable priors over model space. In this manner, the power parameter also helps to determine whether models containing highly collinear predictors are preferred or avoided. The proposed power parameter can be chosen via an empirical Bayes method, which leads to a data-adaptive choice of prior. Simulation studies and a real data example are presented to show that the power parameter is well determined by the degree of cross-correlation among the predictors. The proposed modification compares favorably to the standard use of Zellner's prior and an intrinsic prior in these examples.
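The abstract does not spell out the exact prior, so the following is only a plausible sketch: assume a power-extended g-prior of the hypothetical form β ~ N(0, gσ²(X′X)^(−λ)), so that λ = 1 recovers the usual Zellner g-prior, whose posterior mean is the shrunken least-squares estimate (g/(1+g))·β̂. The function name and the parameter λ are illustrative, not the paper's.

```python
import numpy as np

def gprior_power_posterior_mean(X, y, g=10.0, lam=1.0):
    """Posterior mean of beta under the hypothetical prior
    beta ~ N(0, g * sigma^2 * (X'X)^(-lam)).  lam = 1 gives the
    classical Zellner g-prior with mean (g / (1 + g)) * beta_hat."""
    XtX = X.T @ X
    # symmetric matrix power via eigendecomposition (XtX is PSD)
    w, V = np.linalg.eigh(XtX)
    XtX_lam = V @ np.diag(w ** lam) @ V.T
    # posterior precision is proportional to X'X + (1/g) * (X'X)^lam
    return np.linalg.solve(XtX + XtX_lam / g, X.T @ y)

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=50)
beta_g = gprior_power_posterior_mean(X, y, g=10.0, lam=1.0)
```

Varying λ away from 1 changes how strongly the prior shrinks along high- versus low-variance directions of the predictors, which is the qualitative effect the abstract describes.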
14.
We present the theoretical background and the numerical procedure for calculating optimum experimental designs for non-linear model discrimination in the presence of constraints. The design support points consist of two kinds of factors: a continuous function of time and discrete levels of other quantitative factors. That is, some of the experimental conditions are allowed to vary continuously during the experimental run. We implement the theory in a chemical kinetic model discrimination problem.
15.
16.
It is shown how to condense the information contained in a series of studies, each constituted by an objects-by-variables matrix and a pair of weight matrices, into a structure vector and a sum of sums of squares of residuals. Based on this condensation, we propose to carry out ANOVA-like inference for matched series of studies associated with the level combinations of some factors. It is shown how to validate the assumptions underlying the inference. An application to the results of local elections in Portugal is given.
17.
18.
Non-parametric procedures are sometimes used even in cases where the corresponding parametric procedure is preferable. This is mainly because, in practical applications of statistical methods, too much attention is paid to any violation of the normality assumption; the normal distribution is, however, primarily assumed in order to easily derive the exact distribution of the statistic used within parametric approaches.
19.
It is often necessary to conduct a pilot study to determine the sample size required for a clinical trial. Due to differences in sampling environments, the pilot data are usually discarded after sample size calculation. This paper tries to use the pilot information to modify the subsequent testing procedure when a two-sided t-test or a regression model is used to compare two treatments. The new test maintains the required significance level regardless of the dissimilarity between the pilot and the target populations, but increases the power when the two are similar. The test is constructed based on the posterior distribution of the parameters given the pilot study information, but its properties are investigated from a frequentist's viewpoint. Due to the small likelihood of an irrelevant pilot population, the new approach is a viable alternative to the current practice.
20.
Gregory E. Wilding, Govind S. Mudholkar, Georgia D. Kollia, Journal of Statistical Planning and Inference, 2007
Isotones are a deterministic graphical device introduced by Mudholkar et al. [1991. A graphical procedure for comparing goodness-of-fit tests. J. Roy. Statist. Soc. B 53, 221–232] in the context of comparing some tests of normality. An isotone of a test is a contour of p-values of the test applied to “ideal samples”, called profiles, from a two-shape-parameter family representing the null and the alternative distributions of the parameter space. The isotone is an adaptation of Tukey's sensitivity curves, a generalization of Prescott's stylized sensitivity contours, and an alternative to the isodynes of Stephens. The purpose of this paper is twofold. One is to show that isotones can provide useful qualitative information regarding the behavior of tests of distributional assumptions other than normality. The other is to show that the qualitative conclusions remain the same from one two-parameter family of alternatives to another. Towards this end we construct and interpret the isotones of some tests of the composite hypothesis of exponentiality, using the profiles of two Weibull extensions, the generalized Weibull and the exponentiated Weibull families, which allow IFR, DFR, as well as unimodal and bathtub failure rate alternatives. Thus, as a by-product of the study, it is seen that a test due to Csörgő et al. [1975. Application of characterizations in the area of goodness-of-fit. In: Patil, G.P., Kotz, S., Ord, J.K. (Eds.), Statistical Distributions in Scientific Work, vol. 2. Reidel, Boston, pp. 79–90] and Gnedenko's Q(r) test [1969. Mathematical Methods of Reliability Theory. Academic Press, New York] are appropriate for detecting monotone failure rate alternatives, whereas a bivariate F test due to Lin and Mudholkar [1980. A test of exponentiality based on the bivariate F distribution. Technometrics 22, 79–82] and their entropy test [1984. On two applications of characterization theorems to goodness-of-fit. Colloq. Math. Soc. János Bolyai 45, 395–414] can detect all alternatives, but are especially suitable for nonmonotone failure rate alternatives.