Similar Articles (20 results)
1.
Local quasi-likelihood estimation is a useful extension of local least squares methods, but its computational cost and algorithmic convergence problems make the procedure less appealing, particularly when it is used iteratively inside methods such as the back-fitting algorithm, cross-validation and bootstrapping. A one-step local quasi-likelihood estimator is introduced to overcome these computational drawbacks. We demonstrate that, as long as the initial estimators are reasonably good, the one-step estimator has the same asymptotic behaviour as the fully iterated local quasi-likelihood estimator. Our simulations show that the one-step estimator performs at least as well as the local quasi-likelihood method over a wide range of bandwidths. A data-driven bandwidth selector is proposed for the one-step estimator, based on the pre-asymptotic substitution method of Fan and Gijbels. It is then demonstrated that the data-driven one-step local quasi-likelihood estimator performs as well as the maximum local quasi-likelihood estimator computed with the ideal optimal bandwidth.
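The core of the one-step idea is a single Newton-Raphson update of the (quasi-)likelihood from the initial estimate. The scalar sketch below illustrates only that update rule; the paper's multivariate local-polynomial setting is not reproduced.

```python
def one_step(beta0, score, hessian):
    """One Newton-Raphson step toward the (quasi-)likelihood maximum.

    beta0 is the initial estimator; score and hessian are the first and
    second derivatives of the objective in the parameter.  A single step
    from a good beta0 shares the asymptotics of the fully iterated
    maximizer at a fraction of the cost -- the point of the one-step
    estimator.
    """
    return beta0 - score(beta0) / hessian(beta0)

# For a quadratic objective with maximum at 2, one step from 0 lands
# exactly on the maximum:
beta1 = one_step(0.0, score=lambda b: -(b - 2.0), hessian=lambda b: -1.0)
```

With a root-n-consistent starting value, further Newton steps change nothing asymptotically, which is why iterating to convergence buys no first-order improvement.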

2.
In this paper, an attribute control chart under repetitive group sampling is designed for monitoring a production process in which the lifetime of the product is taken as its quality characteristic. We assume that the lifetime follows the Pareto distribution of the second kind with known shape parameter. The performance of the proposed chart is evaluated by the average run length (ARL). The control limit coefficients and the repetitive group sampling parameters, such as the sample size, are determined so that the in-control ARL is as close as possible to a specified value. The out-of-control ARL is also reported for various shift constants with the corresponding optimal parameters. In addition, the performance of the proposed chart is compared with that of an existing chart, and an economic design of the proposed chart is discussed.
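The building blocks of such a design can be sketched as follows. Repetitive group sampling adds inner/outer limits and a resampling rule, which this sketch omits: it shows only the Pareto-II (Lomax) failure probability at a truncation time and the ARL of a plain np-style chart, under an illustrative parameterization.

```python
from math import comb

def pareto2_failure_prob(t0, sigma, theta):
    """P(lifetime <= t0) for a Pareto distribution of the second kind
    (Lomax) with scale sigma and shape theta (assumed parameterization)."""
    return 1.0 - (1.0 + t0 / sigma) ** (-theta)

def average_run_length(n, lcl, ucl, p):
    """ARL of a basic np-style chart: expected number of samples until
    the failure count D ~ Binomial(n, p) falls outside [lcl, ucl]."""
    p_in_control = sum(comb(n, d) * p**d * (1 - p)**(n - d)
                       for d in range(lcl, ucl + 1))
    return 1.0 / (1.0 - p_in_control)
```

The design problem in the paper is then to pick the limits (and, for repetitive sampling, the resampling region) so the in-control ARL hits a target while the sample size stays small.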

3.
Some results on the estimation of a symmetric density function are given. For the case when the point of symmetry, θ, is known, it is shown that a symmetrized kernel estimator is, as measured by MISE, approximately as good as a non-symmetrized one based on twice as many observations. This result remains true if the estimated density is normal and θ is estimated by the sample mean. Some Monte Carlo results for several densities and sample sizes are given for the case when θ is estimated by the sample median.
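The symmetrization step is simple: average the ordinary kernel estimate with its reflection about θ, so each observation effectively contributes twice. A minimal sketch with a Gaussian kernel (illustrative, not the paper's code):

```python
import math

def kde(x, data, h):
    """Ordinary Gaussian kernel density estimate at x with bandwidth h."""
    return sum(math.exp(-0.5 * ((x - d) / h) ** 2)
               for d in data) / (len(data) * h * math.sqrt(2 * math.pi))

def kde_symmetrized(x, data, h, theta):
    """Symmetrized estimator for a density symmetric about theta:
    average the plain KDE with its reflection about theta, which is
    equivalent to pooling the sample with its mirror image."""
    return 0.5 * (kde(x, data, h) + kde(2 * theta - x, data, h))
```

By construction the estimate satisfies f(θ + t) = f(θ - t) exactly, which is the source of the "twice as many observations" efficiency gain.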

4.
A three-parameter F approximation to the distribution of a positive linear combination of central chi-squared variables is described. It is about as easy to implement as the Satterthwaite-Welch and Hall-Buckley-Eagleson approximations. Some reassuring properties of the F approximation are derived, and numerical results are presented. These indicate that the new approximation is superior to the Satterthwaite approximation and, for some purposes, better than the Hall-Buckley-Eagleson approximation. It is not quite as good as the Gamma-Weibull approximation of Solomon and Stephens, but is easier to implement because no iterative methods are required.
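For context, the two-moment Satterthwaite benchmark against which the F approximation is compared is easy to state: match the mean and variance of Q = Σ aᵢχ²(nᵢ) with a scaled chi-square g·χ²(h). This is the standard construction, not the paper's three-moment F fit.

```python
def satterthwaite(coeffs, dfs):
    """Two-moment chi-square match for Q = sum a_i * chi2(n_i).

    Q has mean sum(a_i * n_i) and variance 2 * sum(a_i^2 * n_i);
    g * chi2(h) has mean g*h and variance 2*g^2*h.  Equating both
    moments gives the scale g and degrees of freedom h below.
    """
    mean = sum(a * n for a, n in zip(coeffs, dfs))
    var = 2.0 * sum(a * a * n for a, n in zip(coeffs, dfs))
    g = var / (2.0 * mean)
    h = 2.0 * mean * mean / var
    return g, h
```

The three-parameter F approximation adds a third matched moment, which is what buys its extra accuracy over this two-moment fit.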

5.
Taking and interpreting something as big and as complicated as a national census is more than an exercise in statistical thinking; it involves other diverse fields such as ethics, epistemology, law, and politics. This article shows that a national census is more akin to a so-called ill-structured problem. Unlike well-structured problems, the formulation of an ill-structured problem varies from field to field and from person to person, and its various aspects (ethical, epistemological, etc.) cannot be clearly separated from one another. The 1980 census is discussed as an ill-structured problem, and a method for treating such problems is presented, within which statistical information is only one component.

6.
A new jackknife test is proposed for the equality of variances in several populations. The new test is based on jackknifing one group of observations at a time, instead of one observation in each group as recommended by Miller for the two-sample case and by Layard for several samples. The proposed test is examined, and compared with other tests, in terms of power and robustness with respect to a wide variety of non-normal distributions. It is found that the new test is robust and has reasonably high power for normal as well as non-normal observations, irrespective of the sample size. Furthermore, the proposed test is clearly superior to all the other tests considered here in small to moderate samples, and is as good as or better than them in large samples, whatever the distribution of the sampled observations.

7.
A simple confidence region is proposed for the multinomial parameter. It is designed for situations with zero cell counts. Simulation studies as well as a real data application show that it performs at least as well as two of the most common confidence regions.

8.
For two-way layouts in a between-subjects ANOVA design, the aligned rank transform (ART) is compared with the parametric F-test as well as six other nonparametric methods: the rank transform (RT), the inverse normal transform (INT), a combination of ART and INT, Puri and Sen's L statistic, van der Waerden's test, and Akritas and Brunner's ATS. Type I error rates are computed for the uniform and exponential distributions, both as continuous distributions and in several discrete variants, for balanced and unbalanced designs and for several effect models. The aim of this study is to analyze the impact of discreteness on the error rate, and it is shown that this impact is restricted to the ART method and the combined ART-INT method. There are two effects: first, with increasing cell counts their error rates rise beyond any acceptable limit, to 20 percent and more; second, their rates rise as the number of distinct values of the dependent variable decreases. This behavior is more severe for underlying exponential distributions than for uniform ones. We therefore recommend not applying the ART when the mean cell frequencies exceed 10.
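The ART procedure itself is mechanical: strip the estimated main effects from each response, then rank what remains (interaction plus residual) before running the usual F-test on the ranks. A compact sketch for the interaction term, with mid-ranks for ties (an illustration of the standard alignment, not the authors' simulation code):

```python
from statistics import mean

def rank_midties(values):
    """1-based ranks with mid-ranks assigned to ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1   # average 0-based position, shifted to 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = mid
        i = j + 1
    return ranks

def art_interaction_ranks(y, row, col):
    """ART alignment for the row x col interaction: subtract the estimated
    row and column main effects, then rank.  Algebraically
    (Y - cell) + (cell - rowmean - colmean + grand) = Y - rowmean - colmean + grand."""
    g = mean(y)
    rmean = {r: mean(v for v, rr in zip(y, row) if rr == r) for r in set(row)}
    cmean = {c: mean(v for v, cc in zip(y, col) if cc == c) for c in set(col)}
    aligned = [v - rmean[r] - cmean[c] + g for v, r, c in zip(y, row, col)]
    return rank_midties(aligned)
```

The study's point is visible in `rank_midties`: with few distinct values, ties pile up and the mid-ranks carry little information, which is where the ART's error rates break down.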

9.
A modification of the sequential probability ratio test is proposed in which Wald's parallel boundaries are broken at some preassigned point on the sample number axis and Anderson's converging boundaries are used prior to that point. Read's partial sequential probability ratio test can be considered a special case of the proposed procedure. As far as the property of reducing the maximum average sample number is concerned, the procedure is as good as Anderson's modified sequential probability ratio test.

10.
11.
In this article, we explore a new two-parameter family of distributions, derived by suitably replacing the exponential term in the Gompertz distribution with a hyperbolic sine term. The resulting family, referred to as the Gompertz-sinh distribution, possesses a thicker and longer lower tail than the Gompertz family, which is often used to model highly negatively skewed data. Moreover, we introduce a useful generalization of this model by adding a second shape parameter to accommodate a variety of density shapes as well as nondecreasing hazard shapes. The flexibility and better fit of the new family, and of its generalization, are demonstrated on well-known examples involving complete, grouped, and censored data.

12.
This paper deals with the problem of finding nearly D-optimal designs for multivariate quadratic regression on a cube which take as few observations as possible while still allowing estimation of all parameters. It is shown that among the class of all such designs taking as many observations as possible at the corners of the cube, there is one which is asymptotically efficient as the dimension of the cube increases. Methods for constructing designs in this class, using balanced arrays, are given. The designs so constructed for dimensions ≤ 6 compare well with existing computer-generated designs, and in dimensions 5 and 6 are better than those in the literature prior to 1978.

13.
A rank test based on the number of 'near-matches' among within-block rankings is proposed for stochastically ordered alternatives in a randomized block design with t treatments and b blocks. The asymptotic relative efficiency of this test with respect to the Page test is computed as the number of blocks increases to infinity. A sequential analog of the test is also considered: a repeated significance test procedure is developed, and the average sample number is computed asymptotically under the null hypothesis as well as under a sequence of contiguous alternatives.

14.
The scaled (two-parameter) Type I generalized logistic distribution (GLD) is considered with known shape parameter. The ML method does not yield an explicit estimator for the scale parameter, even in complete samples. In this article, we therefore construct a new linear estimator of the scale parameter, based on complete and doubly Type-II censored samples, by making linear approximations to the intractable terms of the likelihood equation using the least-squares (LS) method, a new approach to linearization. We call this the linear approximate maximum likelihood estimator (LAMLE). We also construct a LAMLE based on the Taylor-series method of linear approximation and find that this estimator is slightly more biased than the LS-based one. A Monte Carlo simulation is used to investigate the performance of the LAMLE; it is found to be almost as efficient as the MLE, though more biased. We also compare the unbiased LAMLE with the BLUE on the basis of the exact variances of the estimators, and, interestingly, the unbiased LAMLE is found to be just as efficient as the BLUE in both complete and Type-II censored samples. Since the MLE is known to be asymptotically unbiased, in large samples we compare the unbiased LAMLE with the MLE and find that it is almost as efficient. Interval estimation of the scale parameter from complete and Type-II censored samples is also discussed. Finally, we present some numerical examples to illustrate the construction of the new estimators.

15.
In this article, we consider experimental situations where a blocked regular two-level fractional factorial initial design is used. We investigate the use of the semi-fold technique as a follow-up strategy for de-aliasing effects that are confounded in the initial design, as well as an alternative method for constructing blocked fractional factorial designs. A construction method based on the full-foldover technique is suggested, and sufficient conditions are obtained under which the semi-fold yields as many estimable effects as the full foldover.

16.
The area under the ROC curve (AUC) can be interpreted as the probability that the classification score of a diseased subject is larger than that of a non-diseased subject for a randomly sampled pair of subjects. From the perspective of classification, we want to separate the two groups as distinctly as possible via the AUC. When the difference between the scores of a marker is small, its impact on classification is less important. Thus, a new diagnostic/classification measure based on a modified area under the ROC curve (mAUC) is proposed, defined as a weighted sum of two AUCs, where the AUC based on the smaller differences is assigned the lower weight, and vice versa. Using the mAUC is robust in the sense that the mAUC gets larger as the AUC gets larger, as long as the two are not equal. Moreover, in many diagnostic situations only a specific range of specificity is of interest. Under normal distributions, we show that if the AUCs of two markers are within similar ranges, the larger mAUC implies the larger partial AUC for a given specificity. This property helps to identify the marker with the higher partial AUC even when the AUCs are similar. Two nonparametric estimates of the mAUC and their variances are given. We also suggest using the mAUC as the objective function for classification, with the gradient Lasso algorithm for classifier construction and marker selection. Applications to simulated datasets and real microarray gene expression datasets show that our method finds a linear classifier with a higher ROC curve than several existing linear classifiers, especially in the range of low false positive rates.
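The idea can be sketched from the Mann-Whitney form of the AUC: each (diseased, healthy) pair contributes 1, 0.5 or 0, and pairs with small score differences are down-weighted. The threshold `delta` and weight `w_small` below are illustrative choices, and this split-and-reweight scheme is only one plausible reading of the paper's weighting, not its exact definition.

```python
def mauc(diseased, healthy, delta=0.5, w_small=0.25):
    """Weighted AUC built from Mann-Whitney pair scores.

    Each (diseased, healthy) pair scores 1 if the diseased score is
    larger, 0.5 on a tie, 0 otherwise.  Pairs whose score difference is
    at most delta (weakly separated) get weight w_small; widely
    separated pairs get weight 1 - w_small.  delta and w_small are
    illustrative assumptions.
    """
    small, big = [], []
    for d in diseased:
        for h in healthy:
            win = (d > h) + 0.5 * (d == h)
            (small if abs(d - h) <= delta else big).append(win)
    if small and big:
        return (w_small * sum(small) / len(small)
                + (1 - w_small) * sum(big) / len(big))
    bucket = small or big        # only one kind of pair: plain AUC
    return sum(bucket) / len(bucket)
```

With `w_small = 0.5` and either bucket empty this reduces to the ordinary AUC, which makes the "weighted sum of two AUCs" structure easy to see.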

17.
When Shannon entropy is used as a criterion in the optimal design of experiments, advantage can be taken of the classical identity representing the joint entropy of parameters and observations as the sum of the marginal entropy of the observations and the preposterior conditional entropy of the parameters. Following previous work in which this idea was used in spatial sampling, the method is applied to standard parameterized Bayesian optimal experimental design. Under suitable conditions, which cover non-linear as well as linear regression models, it is shown in a few steps that maximizing the marginal entropy of the sample is equivalent to minimizing the preposterior entropy, the usual Bayesian criterion, thus avoiding the use of conditional distributions. Using this marginal formulation it is shown that, under normality assumptions, every standard model with a two-point prior distribution on the parameters gives an optimal design supported on a single point. Other results include a new asymptotic formula that applies as the error variance becomes large, and bounds on the support size.
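The identity being exploited is the chain rule H(θ, Y) = H(Y) + H(θ | Y): since H(θ, Y) is fixed by the prior and the model, maximizing H(Y) minimizes H(θ | Y). It can be checked numerically on any discrete joint distribution (the toy joint here is arbitrary):

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * log2(p) for p in probs if p > 0)

def chain_rule_sides(joint):
    """For a discrete joint {(theta, y): prob}, return
    (H(theta, y),  H(y) + H(theta | y)) -- the two sides of the
    identity used by the entropy design criterion."""
    p_y = {}
    for (t, y), p in joint.items():
        p_y[y] = p_y.get(y, 0.0) + p
    h_cond = 0.0
    for y, py in p_y.items():
        h_cond += py * entropy([p / py for (t, yy), p in joint.items() if yy == y])
    return entropy(joint.values()), entropy(p_y.values()) + h_cond
```

Because the two sides agree for every joint, a design criterion can work entirely with the marginal H(Y), which is the computational point of the paper.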

18.
Recently, Hsieh considered three nonparametric tests for the scale-change problem and showed them to be asymptotically as efficient as their parametric competitors. Here the class of so-called sum-type statistics, which contains Hsieh's statistics, is studied. It is proved that max-type statistics can be constructed that are at least as efficient, in the sense of Bahadur, as the sum-type statistics.

19.
In this paper, a discrete counterpart of the general class of continuous beta-G distributions is introduced. A discrete analog of the beta generalized exponential distribution of Barreto-Souza et al. [2], an important special case of the proposed class, is studied. This new distribution contains some previously known discrete distributions as well as two new models. The hazard rate function of the new model can be increasing, decreasing, bathtub-shaped or upside-down bathtub-shaped. Some distributional and moment properties of the new distribution, as well as its order statistics, are discussed. Estimation of the parameters is illustrated using the maximum likelihood method and, finally, the model is fitted to a real data set.
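A standard way to build a discrete analog of a continuous family is via its survival function: pmf(k) = S(k) - S(k+1). The sketch below shows that generic construction; whether it matches the authors' exact definition for the discrete beta-G class is an assumption.

```python
import math

def discretize(survival):
    """Discrete analog of a continuous lifetime law via its survival
    function S: pmf(k) = S(k) - S(k+1) for k = 0, 1, 2, ...
    This is one common discretization, not necessarily the paper's."""
    def pmf(k):
        return survival(k) - survival(k + 1)
    return pmf

# Discretizing the unit exponential, S(t) = exp(-t), yields a
# geometric law with success probability 1 - exp(-1):
geom = discretize(lambda t: math.exp(-t))
```

The pmf telescopes, so the probabilities automatically sum to S(0) = 1, and shape properties of S (such as bathtub hazards) carry over to the discrete analog.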

20.
An approach to teaching linear regression with unbalanced data is outlined that emphasizes its role as a method of adjustment for associated regressors. The method is introduced via direct standardization, a simple form of regression for categorical regressors. Properties of regression in the presence of association and interaction are emphasized. Least squares is introduced as a more efficient way of calculating adjusted effects, for which exact decompositions of the variance are possible. Interval-scaled regressors are initially grouped and treated as categorical; polynomial regression and the analysis of covariance can be introduced later as alternative methods.
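Direct standardization, the entry point of this teaching approach, amounts to reweighting stratum means by a chosen standard population. A minimal sketch (dictionary-based, illustrative names):

```python
def direct_standardized_mean(groups, std_weights):
    """Directly standardized mean: reweight each stratum's mean outcome
    by that stratum's share of a chosen standard population.

    groups:      {stratum: list of outcomes observed in that stratum}
    std_weights: {stratum: weight of the stratum in the standard population}
    """
    total = sum(std_weights.values())
    return sum(w * (sum(groups[s]) / len(groups[s]))
               for s, w in std_weights.items()) / total
```

Changing `std_weights` while holding the data fixed shows students how an "adjusted" effect depends on the reference population, which is exactly the adjustment role that least squares later computes more efficiently.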
