31.
E-optimal designs for comparing three treatments in blocks of size three are identified, where intrablock observations are correlated according to a first-order autoregressive error process with parameter ρ∈(0,1). When the number of blocks b is of the form b = 3n + 1, there are two distinct optimal designs depending on the value of ρ, the best design being unequally replicated for large ρ. For other values of b, binary, equireplicate designs with specified within-block assignment patterns are best. In many cases, the stronger majorization optimality is established.
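A minimal sketch of how the E-criterion can be evaluated for a candidate design under this model, assuming fixed block effects estimated by generalized least squares; the helper names, the value ρ = 0.8, and the two candidate designs are illustrative and not taken from the paper:

    import numpy as np

    def ar1_cov(k, rho):
        # AR(1) correlation matrix for a block of k consecutive plots
        idx = np.arange(k)
        return rho ** np.abs(idx[:, None] - idx[None, :])

    def information_matrix(design, rho, v=3):
        # design: list of blocks, each a tuple of treatment labels 0..v-1.
        # Returns the treatment information (C-)matrix under generalized least
        # squares with fixed block effects and AR(1) within-block correlation.
        C = np.zeros((v, v))
        for block in design:
            k = len(block)
            S_inv = np.linalg.inv(ar1_cov(k, rho))
            T = np.zeros((k, v))
            T[np.arange(k), list(block)] = 1.0            # plot-by-treatment incidence
            one = np.ones((k, 1))
            P = S_inv - S_inv @ one @ one.T @ S_inv / (one.T @ S_inv @ one)
            C += T.T @ P @ T                              # block effects projected out
        return C

    def e_value(C):
        # E-criterion: smallest eigenvalue of C on the treatment-contrast space
        # (row sums of C are zero, so the trivial zero eigenvalue is dropped)
        return np.linalg.eigvalsh(C)[1]

    # b = 4 blocks (the b = 3n + 1 case) at rho = 0.8
    d1 = [(0, 1, 2), (1, 2, 0), (2, 0, 1), (0, 1, 2)]     # binary design
    d2 = [(0, 1, 2), (1, 2, 0), (2, 0, 1), (0, 2, 0)]     # unequally replicated design
    for d in (d1, d2):
        print(e_value(information_matrix(d, rho=0.8)))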
32.
33.
Han-Ying Liang, Jacobo de Uña-Álvarez. Journal of Statistical Planning and Inference, 2011, 141(11): 3475-3488
In this paper, the empirical likelihood method is used to define a new estimator of the conditional quantile in the presence of auxiliary information for the left-truncation model. The asymptotic normality of the estimator is established when the data exhibit some kind of dependence; it is assumed that the lifetime observations with multivariate covariates form a stationary α-mixing sequence. The result shows that the asymptotic variance of the proposed estimator is no larger than that of the standard kernel estimator. The finite-sample behavior of the estimator is also investigated via simulations.
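For reference, a minimal sketch of the standard kernel estimator that serves as the comparison point in the abstract, written for a scalar covariate with no truncation adjustment or auxiliary information; the Gaussian kernel and bandwidth are illustrative choices:

    import numpy as np

    def kernel_conditional_quantile(x0, X, Y, p=0.5, h=0.5):
        # Nadaraya-Watson weights from a Gaussian kernel centered at x0
        w = np.exp(-0.5 * ((X - x0) / h) ** 2)
        w = w / w.sum()
        # Invert the weighted conditional distribution function at level p
        order = np.argsort(Y)
        cum = np.cumsum(w[order])
        idx = min(np.searchsorted(cum, p), len(Y) - 1)
        return Y[order][idx]

    # Toy data whose conditional median of Y given X = x is x
    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, 500)
    Y = X + rng.standard_normal(500)
    print(kernel_conditional_quantile(0.0, X, Y, p=0.5))   # should be near 0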
34.
Zhidong Bai, Shurong Zheng, Baoxue Zhang, Guorong Hu. Journal of Statistical Planning and Inference, 2009
When random variables do not take discrete values, observed data are often the rounded values of continuous random variables. Errors caused by rounding are often neglected in classical statistical theory. Although some pioneering work has identified the problem and suggested remedies, few suitable approaches have been proposed. In this paper, we propose an approximate MLE (AMLE) procedure to estimate the parameters and discuss the consistency and asymptotic normality of the estimates. As an illustration, we consider estimation of the parameters of AR(p) and MA(q) models from rounded data.
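The underlying idea of building the likelihood from rounding intervals rather than from the recorded values can be sketched in the simplest i.i.d. normal setting; this is not the paper's AMLE for AR(p)/MA(q) models, and the rounding width and starting values are illustrative:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def neg_log_likelihood(theta, x, delta=1.0):
        # Each recorded value x_i stands for the interval (x_i - delta/2, x_i + delta/2];
        # the likelihood is the product of interval probabilities under N(mu, sigma^2).
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        p = norm.cdf(x + delta / 2, mu, sigma) - norm.cdf(x - delta / 2, mu, sigma)
        return -np.sum(np.log(np.maximum(p, 1e-300)))

    rng = np.random.default_rng(1)
    x = np.round(rng.normal(10.0, 0.4, size=200))        # data rounded to integers
    naive_sd = x.std(ddof=1)                             # ignores the rounding
    fit = minimize(neg_log_likelihood, x0=[x.mean(), np.log(naive_sd)], args=(x,))
    print("naive sd:", naive_sd, " interval-likelihood sd:", np.exp(fit.x[1]))

With rounding this coarse, the naive standard deviation is inflated, whereas the interval-based likelihood accounts for the rounding directly.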
35.
The literature describing operations research in the community presents something of a puzzle. On the one hand, several authors have denigrated the use of traditional operations research approaches for community problems; on the other, several studies document successful applications. Arguing that the operations research mindset is itself a great strength, we review several examples in which operations research methods have been applied creatively to the benefit of the community and beyond.
36.
This paper introduces W-tests for assessing homogeneity in mixtures of discrete probability distributions. A W-test statistic depends on the data solely through parameter estimators and, if a penalized maximum likelihood estimation framework is used, has a tractable asymptotic distribution under the null hypothesis of homogeneity. The large-sample critical values are quantiles of a chi-square distribution multiplied by an estimable constant for which we provide an explicit formula. In particular, the estimation of large-sample critical values does not involve simulation experiments or random field theory. We demonstrate that W-tests are generally competitive with a benchmark test in terms of power to detect heterogeneity. Moreover, in many situations, the large-sample critical values can be used even with small to moderate sample sizes. The main implementation issue (selection of an underlying measure) is thoroughly addressed, and we explain why W-tests are well-suited to problems involving large and online data sets. Application of a W-test is illustrated with an epidemiological data set.
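A minimal sketch of the resulting decision rule, assuming the W statistic, its degrees of freedom, and the scaling constant have already been estimated; the names and numerical values below are placeholders rather than the paper's formula:

    from scipy.stats import chi2

    def w_test_decision(w_stat, c_hat, df, alpha=0.05):
        # Large-sample critical value: an estimated constant times a chi-square quantile
        critical = c_hat * chi2.ppf(1 - alpha, df)
        return w_stat > critical, critical

    print(w_test_decision(w_stat=7.3, c_hat=1.2, df=1))   # placeholder inputs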
37.
Logistic functions are used in a variety of applications, including biological growth studies and assay data analysis. Locally D-optimal designs for logistic models with three and four parameters are investigated, and these designs are shown to be minimally supported. Efficiencies are computed for equally spaced and uniform designs.
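A minimal sketch of the locally D-optimal criterion being optimized, assuming a four-parameter logistic mean function with homoscedastic normal errors; the parameterization, nominal parameter values, and the nine-point equally spaced design are illustrative:

    import numpy as np

    def gradient(x, a, b, c, d):
        # Gradient of the four-parameter logistic mean d + (a - d) / (1 + exp(-b (x - c)))
        # with respect to (a, b, c, d)
        s = 1.0 / (1.0 + np.exp(-b * (x - c)))
        return np.array([s,
                         (a - d) * (x - c) * s * (1 - s),
                         -(a - d) * b * s * (1 - s),
                         1 - s])

    def log_det_information(points, weights, theta):
        # Locally D-optimal criterion: log-determinant of the normalized information
        # matrix at the nominal parameters (normal errors, constant variance)
        M = sum(w * np.outer(gradient(x, *theta), gradient(x, *theta))
                for x, w in zip(points, weights))
        return np.linalg.slogdet(M)[1]

    theta = (10.0, 1.0, 5.0, 1.0)                 # nominal (a, b, c, d)
    grid = np.linspace(0.0, 10.0, 9)              # equally spaced nine-point design
    print(log_det_information(grid, np.full(9, 1 / 9), theta))

The D-efficiency of one design relative to another is obtained by exponentiating the difference of two such log-determinants and raising the result to the power 1/4 (one over the number of parameters).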
38.
39.
This work introduces tools based on phi-divergences for selecting and checking generalized linear models with binary data. A backward selection criterion that helps reduce the number of explanatory variables is considered. Diagnostic methods based on divergence measures are introduced, including a new measure for detecting leverage points and two indicators for detecting influential points. As an illustration, the diagnostics are applied to human psychology data.
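A minimal sketch of a conventional leverage diagnostic for a binary-data GLM, computed from the hat matrix of the weighted least-squares problem underlying the logistic fit; the paper's divergence-based measure is different, and the simulated data and the use of statsmodels are illustrative:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    X = sm.add_constant(rng.standard_normal((100, 2)))   # intercept plus two covariates
    y = rng.binomial(1, 1 / (1 + np.exp(-X @ np.array([0.5, 1.0, -1.0]))))

    fit = sm.Logit(y, X).fit(disp=0)
    p = fit.predict(X)                                   # fitted probabilities
    W = p * (1 - p)                                      # GLM working weights

    # Diagonal of the hat matrix of the weighted least-squares problem = leverages
    Xw = X * np.sqrt(W)[:, None]
    H = Xw @ np.linalg.inv(Xw.T @ Xw) @ Xw.T
    leverage = np.diag(H)
    print(np.argsort(leverage)[-5:])                     # five highest-leverage observations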
40.