Similar Documents (20 results)
1.
The probability generating function of a random variable having the Generalized Polya Eggenberger Distribution of the second kind (GPED 2) is obtained. The probability density function of the range R in random sampling from a uniform distribution on (k, l), and from an exponential distribution with parameter λ, is obtained when the sample size is itself a random variable following GPED 2. The results of Bazargan-Lari (2004) follow as special cases.

2.
The probability density function of the range R in random sampling from a uniform distribution on (k, l), and from an exponential distribution with parameter λ, is obtained when the sample size is a random variable having the Generalized Polya Eggenberger Distribution of the first kind (GPED 1). The results of Raghunandanan and Patil (1972) and Bazargan-Lari (1999) follow as special cases. As a further special case, the p.d.f. of the range R is obtained when the distribution of the sample size N belongs to the Katz family of distributions. An erratum to this article is available.
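A minimal Monte Carlo sketch of the random-sample-size range idea in entries 1 and 2: for a fixed size n, the range of an exponential(λ) sample has the closed-form cdf \((1 - e^{-\lambda r})^{n-1}\), so with random N the cdf is the corresponding mixture. Since the GPED itself is nonstandard, N is taken here to be Poisson, a member of the Katz family mentioned above; λ and the Poisson mean are illustrative choices.

```python
# Monte Carlo check of the mixture cdf of the range R when the sample size N
# is random (Poisson here, as a stand-in Katz-family member, not the GPED).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
lam, mu, reps = 2.0, 5.0, 20_000   # exponential rate, Poisson mean (illustrative)

ranges = []
for _ in range(reps):
    n = rng.poisson(mu)
    if n >= 2:                      # the range is degenerate at 0 for n < 2
        x = rng.exponential(1 / lam, size=n)
        ranges.append(x.max() - x.min())
ranges = np.array(ranges)

def range_cdf(r, lam, mu, nmax=200):
    """Mixture cdf of R given N >= 2, with N ~ Poisson(mu)."""
    n = np.arange(2, nmax)
    w = poisson.pmf(n, mu)
    w /= w.sum()                    # renormalise to the event N >= 2
    return (w * (1 - np.exp(-lam * r)) ** (n - 1)).sum()

r0 = np.median(ranges)
print("empirical:", (ranges <= r0).mean(), " mixture cdf:", range_cdf(r0, lam, mu))
```

The two printed values should agree to Monte Carlo accuracy, which is exactly the mixture structure the closed-form densities in these papers make explicit.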

3.
In this paper we consider a generalized additive model with second-order interaction terms. A local scoring algorithm (with backfitting) based on local linear kernel smoothers is used to estimate the model. Our main aim is to obtain procedures for testing the second-order interaction terms. Backfitting theory is difficult in this context, so a bootstrap procedure is provided for estimating the distribution of the test statistics. Given the high computational cost involved, binning techniques are used to speed up the estimation and testing process. A simulation study assesses the validity of the bootstrap-based tests. Lastly, the method is applied to real data drawn from a binary SO2 time series.

4.
For measuring the goodness of \(2^m4^1\) designs, Wu and Zhang (1993) proposed the minimum aberration (MA) criterion. MA \(2^m4^1\) designs have been constructed using the idea of complementary designs when the number of two-level factors m exceeds n/2, where n is the total number of runs. In this paper, the structures of MA \(2^m4^1\) designs are obtained when \(m > 5n/16\). Based on these structures, methods are developed for constructing MA \(2^m4^1\) designs for \(5n/16 < m < n/2\) as well as for \(n/2 \le m < n\). When \(m \le 5n/16\), there is no general method for constructing MA \(2^m4^1\) designs; in this case, we obtain lower bounds for \(A_{30}\) and \(A_{31}\), the numbers of type 0 and type 1 words of length three, respectively, and demonstrate a method for constructing weak minimum aberration (WMA) \(2^m4^1\) designs, i.e. designs whose \(A_{30}\) and \(A_{31}\) achieve the lower bounds. Some MA or WMA \(2^m4^1\) designs with 32 or 64 runs are tabulated for practical use, supplementing the tables in Wu and Zhang (1993), Zhang and Shao (2001) and Mukerjee and Wu (2001).

5.
Exact permutation testing of effects in unreplicated two-level multifactorial designs is developed, based on the notion of realigning observations and on paired permutations. This approach preserves the exchangeability of error components for testing up to k effects. Advantages and limitations of exact permutation procedures for unreplicated factorials are discussed, and a simulation study on paired permutation testing is presented.
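A minimal sketch of the generic idea behind such tests, not the paper's realignment scheme: in an unreplicated \(2^k\) design, an effect is estimated by an orthogonal contrast, and under the null (with symmetric errors) flipping the signs of the centred responses leaves the contrast's distribution unchanged. The design size and effect magnitude below are illustrative.

```python
# Sign-flip permutation test for one effect in an unreplicated 2^4 design.
# This is a generic sketch, not the exact realigned/paired procedure above.
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
k = 4
X = np.array(list(product([-1, 1], repeat=k)))   # full 2^4 design matrix
y = rng.normal(size=2 ** k) + 1.5 * X[:, 0]      # a real effect on factor 1

def effect(y, x):
    return x @ y / len(y)                        # standard contrast estimate

obs = effect(y, X[:, 0])
yc = y - y.mean()
flips = rng.choice([-1, 1], size=(9999, len(y)))
null = (flips * yc) @ X[:, 0] / len(y)           # sign-flip null distribution
pval = (np.sum(np.abs(null) >= abs(obs)) + 1) / (len(null) + 1)
print(f"effect = {obs:.3f}, permutation p-value = {pval:.4f}")
```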

6.
Originally, the exponentially weighted moving average (EWMA) control chart was developed for detecting changes in the process mean, and the average run length (ARL) became the most popular performance measure for schemes with this objective. When monitoring the mean of independent, normally distributed observations, the ARL can be determined with high precision. Nowadays, EWMA control charts are also used for monitoring the variance, and charts based on the sample variance \(S^2\) are an appropriate choice. Applying ARL evaluation techniques known from mean-monitoring charts, however, is difficult: the most accurate method, solving a Fredholm integral equation with the Nyström method, fails due to an improper kernel in the case of chi-squared distributions. Here, we exploit the collocation method and the product Nyström method and compare them to Markov chain based approaches. Collocation turns out to deliver higher accuracy than the currently established methods.
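A minimal sketch of the Markov chain approach the integral-equation methods are compared against: discretise the EWMA state space into cells, build the transition matrix from the cdf of \(S^2\) (a scaled chi-squared), and solve a linear system for the in-control ARL. The smoothing constant, subgroup size, and limits below are illustrative, not the paper's.

```python
# Markov chain approximation of the in-control ARL of a two-sided EWMA chart
# on the sample variance S^2.
import numpy as np
from scipy.stats import chi2

lmbda, n, sigma2 = 0.1, 5, 1.0   # smoothing constant, subgroup size, in-control variance
lcl, ucl, m = 0.5, 1.7, 201      # control limits and number of grid states

w = (ucl - lcl) / m              # cell width
mid = lcl + (np.arange(m) + 0.5) * w

def s2_cdf(s):                   # cdf of S^2: (n-1) S^2 / sigma^2 ~ chi2(n-1)
    return chi2.cdf((n - 1) * np.clip(s, 0, None) / sigma2, n - 1)

# transition probabilities between grid cells: z' = (1-lmbda) z + lmbda * S^2
upper = (mid[None, :] + w / 2 - (1 - lmbda) * mid[:, None]) / lmbda
lower = (mid[None, :] - w / 2 - (1 - lmbda) * mid[:, None]) / lmbda
Q = s2_cdf(upper) - s2_cdf(lower)

arl = np.linalg.solve(np.eye(m) - Q, np.ones(m))
z0 = np.argmin(np.abs(mid - sigma2))   # start the chart at the in-control level
print(f"approximate in-control ARL: {arl[z0]:.1f}")
```

Refining the grid (larger m) improves accuracy but at cubic cost in the linear solve, which is precisely why higher-order quadrature schemes such as collocation are attractive.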

7.
In high-dimensional data, one often seeks a few interesting low-dimensional projections that reveal important aspects of the data. Projection pursuit for classification finds projections that reveal differences between classes. Even though projection pursuit is used to bypass the curse of dimensionality, most indexes do not work well when the number of observations is small relative to the number of variables, the so-called large p (dimension), small n (sample size) problem. This paper discusses the relationship between sample size and dimensionality in classification and proposes a new projection pursuit index that overcomes the small-sample-size problem for exploratory classification.
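To make the failure mode concrete, here is a sketch of a generic ridge-penalized LDA-type index, not the index proposed in the paper: the classical index tr(W⁻¹B) breaks down when p > n because the within-class scatter W is singular, and adding a ridge term is one standard repair.

```python
# Penalized LDA-type projection pursuit index for the large-p, small-n setting.
# Generic illustration only; the paper proposes its own index.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(8)
p, n_per = 50, 10                        # p >> n: W alone would be singular
X = np.vstack([rng.normal(0.0, 1, (n_per, p)),
               rng.normal(0.8, 1, (n_per, p))])
y = np.repeat([0, 1], n_per)

mu = X.mean(axis=0)
B = np.zeros((p, p)); W = np.zeros((p, p))
for g in [0, 1]:                         # between- and within-class scatter
    Xg = X[y == g]
    d = (Xg.mean(axis=0) - mu)[:, None]
    B += len(Xg) * d @ d.T
    W += (Xg - Xg.mean(axis=0)).T @ (Xg - Xg.mean(axis=0))

lam = 1.0                                # ridge penalty keeps W + lam*I invertible
vals, vecs = eigh(B, W + lam * np.eye(p))
a = vecs[:, -1]                          # best 1-d projection under this index
print(f"maximised index value: {vals[-1]:.3f}")
```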

8.
In the present paper, the distribution theory of the maximum and minimum of the r th concomitants from k independent subgroups, each of the same size m, from the Morgenstern family is investigated. Some applications of the results to estimating the scale parameter of a marginal variable in the bivariate uniform distribution and to a selection problem are discussed.
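A minimal sketch of the building block such concomitant results rest on: sampling from the Morgenstern (FGM) bivariate uniform family by conditional inversion. Given U = u, V has conditional cdf v(1 + a(1 − 2u)(1 − v)) with dependence parameter a, which inverts via a quadratic; the parameter value below is illustrative.

```python
# Conditional-inversion sampler for the FGM (Morgenstern) bivariate uniform
# copula; the concomitant distribution theory above builds on such samples.
import numpy as np

rng = np.random.default_rng(7)

def fgm_sample(a, size, rng):
    u = rng.uniform(size=size)
    w = rng.uniform(size=size)
    b = a * (1 - 2 * u)
    # solve b*v^2 - (1 + b)*v + w = 0 for the root in [0, 1]
    b_safe = np.where(np.abs(b) < 1e-12, 1.0, b)
    v = np.where(np.abs(b) < 1e-12, w,
                 ((1 + b) - np.sqrt((1 + b) ** 2 - 4 * b * w)) / (2 * b_safe))
    return u, v

u, v = fgm_sample(a=0.8, size=100_000, rng=rng)
print(f"sample correlation {np.corrcoef(u, v)[0, 1]:.3f} (theory: a/3 = {0.8 / 3:.3f})")
```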

9.
The r largest order statistics approach is widely used in extreme value analysis because it may use more information from the data than just the block maxima. In practice, the choice of r is critical: if r is too large, bias can occur; if too small, the variance of the estimator can be high. The limiting distribution of the r largest order statistics, denoted by GEV\(_r\), extends that of the block maxima. Two specification tests are proposed to select r sequentially. The first is a score test for the GEV\(_r\) distribution; because of the special characteristics of the GEV\(_r\) distribution, the classical chi-square asymptotics cannot be used. The simplest approach is the parametric bootstrap, which is straightforward to implement but computationally expensive, so an alternative fast weighted bootstrap, or multiplier, procedure is developed for computational efficiency. The second test uses the difference in estimated entropy between the GEV\(_r\) and GEV\(_{r-1}\) models, applied to the r largest and the \(r-1\) largest order statistics, respectively; the asymptotic distribution of the difference statistic is derived. In a large-scale simulation study, both tests held their size and had substantial power to detect various misspecification schemes. A new approach to multiple, sequential hypothesis testing is adapted to this setting to control the false discovery rate or familywise error rate. The utility of the procedures is demonstrated with extreme sea level and precipitation data.
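A minimal sketch of the parametric-bootstrap mechanics for the r = 1 case (ordinary GEV block maxima, available in scipy as `genextreme`); the paper's score statistic for GEV\(_r\) is replaced here by a simple Kolmogorov-Smirnov distance, so this only illustrates the bootstrap p-value computation, not the actual test.

```python
# Parametric bootstrap p-value for a GEV goodness-of-fit statistic (r = 1).
import numpy as np
from scipy.stats import genextreme, kstest

rng = np.random.default_rng(2)
x = genextreme.rvs(c=-0.1, size=100, random_state=rng)   # stand-in "annual maxima"

shape, loc, scale = genextreme.fit(x)
t_obs = kstest(x, genextreme(shape, loc, scale).cdf).statistic

B, exceed = 500, 0
for _ in range(B):
    xb = genextreme.rvs(shape, loc=loc, scale=scale, size=len(x), random_state=rng)
    sb, lb, scb = genextreme.fit(xb)                      # refit to each resample
    tb = kstest(xb, genextreme(sb, lb, scb).cdf).statistic
    exceed += (tb >= t_obs)
print(f"bootstrap p-value: {(exceed + 1) / (B + 1):.3f}")
```

The refitting inside the loop is what makes this expensive, and it is exactly the cost the multiplier bootstrap in the paper is designed to avoid.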

10.
Model-based clustering typically involves the development of a family of mixture models and the fitting of these models to data. The best member of the family is chosen by some criterion, and the associated parameter estimates yield predicted group memberships, or clusterings. This paper describes an extension of the mixtures of multivariate t-factor analyzers model that includes constraints on the degrees of freedom, the factor loadings, and the error variance matrices. The result is a family of six mixture models, including parsimonious models. Parameter estimates for this family are derived using an alternating expectation-conditional maximization algorithm, with convergence assessed via Aitken's acceleration. Model selection is carried out using the Bayesian information criterion (BIC) and the integrated completed likelihood (ICL). This novel family of mixture models is then applied to simulated and real data, where its clustering performance meets or exceeds that of established model-based clustering methods. The simulation studies include a comparison of the BIC and the ICL as model selection techniques for this family, and application to simulated data of larger dimensionality is also explored.
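A minimal sketch of the BIC/ICL model-selection step, using sklearn Gaussian mixtures in place of the t-factor-analyzer family (which has no standard Python implementation). ICL is computed here as BIC plus twice the estimated entropy of the posterior memberships; with sklearn's sign convention, lower is better for both criteria.

```python
# Choosing the number of mixture components by BIC and ICL.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (150, 2)), rng.normal(4, 1, (150, 2))])

for g in range(1, 5):
    gm = GaussianMixture(n_components=g, n_init=5, random_state=0).fit(X)
    z = gm.predict_proba(X)
    ent = -np.sum(z * np.log(np.clip(z, 1e-12, None)))   # posterior entropy
    bic = gm.bic(X)
    print(f"G = {g}:  BIC = {bic:8.1f}   ICL = {bic + 2 * ent:8.1f}")
```

The entropy term penalizes solutions whose components overlap heavily, which is why ICL tends to favor fewer, better-separated clusters than BIC.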

11.
Let X be an N(μ, σ²)-distributed characteristic with unknown σ. We present the minimax version of the two-stage t test, which has minimal maximal average sample size among all two-stage t tests obeying the classical two-point condition on the operating characteristic. Several examples are given. Furthermore, the minimax version of the two-stage t test is compared with the corresponding two-stage Gauß test.
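For orientation, a minimal sketch of the classical Stein (1945) two-stage procedure that minimax two-stage t tests refine: the first stage estimates σ, and the second-stage size is chosen so the final t-based confidence interval has half-width at most d. All numbers are illustrative, and this is not the minimax rule itself.

```python
# Classical Stein two-stage sample-size rule (sketch, not the minimax version).
import numpy as np
from math import ceil
from scipy.stats import t as t_dist

rng = np.random.default_rng(5)
n1, alpha, d = 10, 0.05, 0.5       # first-stage size, level, target half-width

x1 = rng.normal(loc=1.0, scale=2.0, size=n1)
s2 = x1.var(ddof=1)                # first-stage variance estimate
tq = t_dist.ppf(1 - alpha / 2, df=n1 - 1)
n_total = max(n1, ceil(s2 * tq ** 2 / d ** 2))
print(f"first stage n1 = {n1}, required total sample size = {n_total}")
```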

12.
In the presence of multicollinearity, the rk class estimator is proposed as an alternative to the ordinary least squares (OLS) estimator; it is a general estimator that includes the ordinary ridge regression (ORR), principal components regression (PCR) and OLS estimators as special cases. Comparing competing estimators of a parameter under the mean square error (MSE) criterion is of central interest; an alternative is Pitman's (1937) closeness (PC) criterion. In this paper, we compare the rk class estimator to the OLS estimator in terms of the PC criterion, thereby recovering as special cases the ORR-versus-OLS comparison under PC by Mason et al. (1990) and the PCR-versus-OLS comparison by Lin and Wei (2002).
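A minimal Monte Carlo sketch of the PC criterion itself: one estimator is "PC-preferred" to another if its loss is smaller in more than half the repetitions. Ridge (a special case of the rk class) is compared to OLS under squared error loss; the design and ridge constant are illustrative choices, not results from the paper.

```python
# Monte Carlo estimate of Pitman closeness: P(ridge loss < OLS loss).
import numpy as np

rng = np.random.default_rng(4)
n, p, k_ridge = 50, 4, 1.0
beta = np.ones(p)
# nearly collinear design: shared latent column plus small noise
base = rng.normal(size=(n, 1))
X = base + 0.05 * rng.normal(size=(n, p))

wins, reps = 0, 5000
for _ in range(reps):
    y = X @ beta + rng.normal(size=n)
    ols = np.linalg.solve(X.T @ X, X.T @ y)
    ridge = np.linalg.solve(X.T @ X + k_ridge * np.eye(p), X.T @ y)
    wins += np.sum((ridge - beta) ** 2) < np.sum((ols - beta) ** 2)
print(f"P(ridge closer than OLS): {wins / reps:.3f}")   # > 0.5 favours ridge under PC
```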

13.
A finite mixture model using Student's t distribution has been recognized as a robust extension of normal mixtures. Recently, mixtures of skew normal distributions have been found effective in the treatment of heterogeneous data involving asymmetric behavior across subclasses. In this article, we propose a robust mixture framework based on the skew t distribution to deal efficiently with heavy-tailedness, extra skewness and multimodality in a wide range of settings. Statistical mixture modeling based on the normal, Student's t and skew normal distributions can be viewed as special cases of the skew t mixture model. We present analytically simple EM-type algorithms for iteratively computing maximum likelihood estimates, and illustrate the proposed methodology by analyzing a real data example.

14.
15.
This note shows that the asymptotic properties of quasi-maximum likelihood estimation for dynamic panel models can be derived easily by following the approach of Grassetti (Stat Methods Appl 20:221–240, 2011) of taking the long difference to remove the time-invariant individual-specific effects.

16.
The skew t-distribution includes both the skew normal and the normal distributions as special cases. Inference for the skew t-model becomes problematic in these cases because the expected information matrix is singular and the parameter corresponding to the degrees of freedom takes a value at the boundary of its parameter space. In particular, the distributions of the likelihood ratio statistics for testing the null hypotheses of skew normality and normality are not asymptotically \(\chi^2\). The asymptotic distributions of the likelihood ratio statistics are obtained by applying the results of Self and Liang (J Am Stat Assoc 82:605–610, 1987) for boundary-parameter inference, in terms of reparameterizations designed to remove the singularity of the information matrix. The Self–Liang asymptotic distributions are mixtures, and it is shown that their accuracy can be improved substantially by correcting the mixing probabilities. Furthermore, although the asymptotic distributions are non-standard, versions of the Bartlett correction are developed that afford additional accuracy. Bootstrap procedures for estimating the mixing probabilities and the Bartlett adjustment factors are shown to produce excellent approximations, even for small sample sizes.
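A minimal sketch of the simplest instance of such boundary asymptotics: when a single parameter sits on the boundary under the null, the Self-Liang limit of the LRT statistic is a 50:50 mixture of \(\chi^2_0\) (a point mass at 0) and \(\chi^2_1\), so the naive \(\chi^2_1\) p-value is halved. The 0.5/0.5 weights below are the textbook values, not the corrected mixing probabilities developed in the paper.

```python
# p-value under the 0.5*chi2_0 + 0.5*chi2_1 boundary mixture.
from scipy.stats import chi2

def boundary_pvalue(lrt_stat):
    if lrt_stat <= 0:
        return 1.0                          # the chi2_0 component: mass at zero
    return 0.5 * chi2.sf(lrt_stat, df=1)    # only chi2_1 contributes for t > 0

print(boundary_pvalue(2.71))   # ~0.05: the classical boundary-test cutoff
```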

17.
Based on an FQ-System for continuous unimodal distributions introduced by Scheffner (1998), we propose a purely data-driven method for density estimation that provides good results even for small samples. The procedure avoids problems and uncertainties such as bandwidth selection for kernel density estimates.

18.
Estimation of prediction accuracy is important when the aim is prediction. The training error is an easy estimate of the prediction error, but it is biased downward; K-fold cross-validation, by contrast, is biased upward. The upward bias may be negligible for leave-one-out cross-validation, but it sometimes cannot be neglected for 5-fold or 10-fold cross-validation, which are favored from a computational standpoint. Since the training error is biased downward and K-fold cross-validation is biased upward, an appropriate estimate should exist in a family that connects the two. In this paper, we investigate two families that connect the training error and K-fold cross-validation.
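A minimal sketch of the two endpoints of such a family: training error (downward biased, since the model is evaluated on the data it was fit to) versus 5-fold cross-validation (upward biased, since each fold's model is fit on only 4/5 of the data), on an illustrative ridge-regression task.

```python
# Training error vs. 5-fold CV estimate of prediction error.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(6)
n, p = 80, 20
X = rng.normal(size=(n, p))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=n)

model = Ridge(alpha=1.0).fit(X, y)
train_err = mean_squared_error(y, model.predict(X))       # optimistic
cv5 = -cross_val_score(Ridge(alpha=1.0), X, y, cv=5,
                       scoring="neg_mean_squared_error").mean()  # pessimistic
print(f"training error: {train_err:.3f}   5-fold CV estimate: {cv5:.3f}")
```

The true prediction error lies between these two numbers, which is what motivates interpolating families connecting them.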

19.
This paper (i) discusses the R-chart with asymmetric probability control limits under the assumption that the distribution of the quality characteristic under study is exponential, Laplace, or logistic; (ii) examines the effect of the estimated probability limits on the performance of the R-chart; and (iii) obtains the desired probability limits of the R-chart with a specified false alarm rate when the probability limits must be estimated from preliminary samples taken from exponential, Laplace, or logistic processes.
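A minimal sketch for the exponential case with known parameters: the sample range R of an exponential(λ) sample of size n has the closed-form cdf \(P(R \le r) = (1 - e^{-\lambda r})^{n-1}\), so asymmetric probability limits come from inverting this cdf at α/2 and 1 − α/2. The values of λ, n and α below are illustrative; the paper's main focus, limits estimated from preliminary samples, is not treated here.

```python
# Exact asymmetric probability limits for the R-chart under an exponential process.
import numpy as np

def r_chart_limits(lam, n, alpha=0.0027):
    def quantile(q):
        # invert (1 - exp(-lam*r))^(n-1) = q for r
        return -np.log(1 - q ** (1.0 / (n - 1))) / lam
    return quantile(alpha / 2), quantile(1 - alpha / 2)

lcl, ucl = r_chart_limits(lam=1.0, n=5)
print(f"LCL = {lcl:.4f}, UCL = {ucl:.4f}")   # asymmetric, reflecting the skewed range
```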

20.
A new algorithm is presented and studied in this paper for fast computation of the nonparametric maximum likelihood estimate of a U-shaped hazard function. It overcomes a key difficulty in this computation: a U-shaped hazard function is only properly defined once its anti-mode is known, yet the anti-mode itself must be found during the computation. Specifically, the new algorithm maintains the constant hazard segment regardless of whether its length is zero or positive; the length varies naturally, according to the mass values allocated to the associated knots after each update. As an appropriate extension of the constrained Newton method, the new algorithm also inherits its advantage of fast convergence, as demonstrated by real-world data examples. The algorithm works not only for exact observations, but also for purely interval-censored data, and for data mixing exact and interval-censored observations.
