Similar documents
20 similar documents found (search time: 250 ms)
1.
In response surface analysis, a second order polynomial model is often used for inference on the stationary point of the response function. The standard confidence regions for the stationary point are due to Box & Hunter (1954). The authors propose an alternative parametrization, in which the stationary point is the parameter of interest; likelihood techniques and Bayesian analysis are then easier to perform. The authors also suggest an approximate method to get highest posterior density regions for the maximum point (not simply for the stationary point). Furthermore, they study the coverage probabilities of these Bayesian regions through simulations.
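
The stationary point targeted by both the standard and the proposed regions is a simple function of the fitted second order coefficients. A minimal sketch (assuming a fitted model y = b0 + b'x + x'Bx with symmetric B; the function name and numbers are ours, not from the paper):

```python
import numpy as np

def stationary_point(b, B):
    """Stationary point of the fitted quadratic y = b0 + b'x + x'Bx,
    obtained from grad y = b + 2Bx = 0, i.e. x_s = -0.5 * B^{-1} b."""
    return -0.5 * np.linalg.solve(B, b)

# Fitted surface y = 1 + 2*x1 - 4*x2 - x1**2 - 2*x2**2 (no cross term).
b = np.array([2.0, -4.0])
B = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
xs = stationary_point(b, B)   # a maximum, since B is negative definite
```

Because B is negative definite here, the stationary point is the maximum of the fitted surface; with an indefinite B it would be a saddle point, which is why the abstract distinguishes the maximum point from the stationary point.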

2.
The minimum disparity estimators proposed by Lindsay (1994) for discrete models form an attractive subclass of minimum distance estimators which achieve their robustness without sacrificing first order efficiency at the model. Similarly, disparity test statistics are useful robust alternatives to the likelihood ratio test for testing hypotheses in parametric models; they are asymptotically equivalent to the likelihood ratio test statistics under the null hypothesis and contiguous alternatives. Despite their asymptotic optimality properties, the small sample performance of many of the minimum disparity estimators and disparity tests can be considerably worse than that of the maximum likelihood estimator and the likelihood ratio test, respectively. In this paper we focus on the class of blended weight Hellinger distances, a general subfamily of disparities, study the effects of combining two different distances within this class to generate the family of “combined” blended weight Hellinger distances, and identify the members of this family which generally perform well. More generally, we investigate the class of “combined and penalized” blended weight Hellinger distances; the penalty is based on reweighting the empty cells, following Harris and Basu (1994). It is shown that some members of the combined and penalized family have rather attractive properties.
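
The blended weight Hellinger distance family is usually written with a blending parameter alpha that interpolates between weighting by the data and by the model; the exact form below is our assumption of the standard one, not quoted from the paper:

```python
import math

def bwhd(alpha, d, f):
    """Blended weight Hellinger distance between observed proportions d
    and model probabilities f on a discrete support (assumed form):
        BWHD_a = sum_x (d - f)^2 / (2 * (a*sqrt(d) + (1-a)*sqrt(f))^2)."""
    total = 0.0
    for dx, fx in zip(d, f):
        w = alpha * math.sqrt(dx) + (1 - alpha) * math.sqrt(fx)
        if w > 0:
            total += (dx - fx) ** 2 / (2 * w * w)
    return total

d = [0.5, 0.3, 0.2]   # observed relative frequencies
f = [0.4, 0.4, 0.2]   # model probabilities
val = bwhd(0.5, d, f)
# At alpha = 1/2 the disparity reduces to twice the squared Hellinger
# distance, 2 * sum (sqrt(d) - sqrt(f))^2.
hel = 2 * sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(d, f))
```

Varying alpha within this family, or combining two alpha values, is the kind of construction the abstract studies.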

3.
Pre-estimation is a technique for adjusting a standard approximate P-value to be close to exact. While conceptually simple, it can become computationally intensive. Second order pivotals [N. Reid, Asymptotics and the theory of inference, Ann. Statist. 31 (2003), pp. 1695–1731] are constructed to be closer to exact than standard approximate pivotals. The theory behind these pivotals is complex, and their properties are unclear for discrete models. However, since they are typically given in closed form they are easy to compute. For the special case of non-inferiority trials, we investigate Wald, score, likelihood ratio and second order pivotals. Each of the basic pivotals is used to generate an exact test by maximising with respect to the nuisance parameter. We also study the effect of pre-estimating the nuisance parameter, as described in Lloyd [C.J. Lloyd, Exact P-values for discrete models obtained by estimation and maximisation, Aust. N. Z. J. Statist. 50 (2008), pp. 329–346]. It appears that second order methods are not as close to exact as might have been hoped. On the other hand, P-values based on pre-estimation are very close to exact, are more powerful than competitors and are hardly affected by the basic generating statistic chosen.
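
The "maximise over the nuisance parameter" construction of an exact P-value can be sketched for the simplest two-binomial setting (a zero non-inferiority margin, a pooled z statistic, and a grid approximation of the supremum; all of these simplifications are ours, not the paper's):

```python
from math import comb, sqrt

def z_stat(y1, n1, y2, n2):
    """Pooled z statistic for H1: p1 > p2; returns 0 where undefined."""
    pbar = (y1 + y2) / (n1 + n2)
    se = sqrt(pbar * (1 - pbar) * (1 / n1 + 1 / n2))
    return 0.0 if se == 0 else (y1 / n1 - y2 / n2) / se

def exact_pvalue(y1, n1, y2, n2, grid=101):
    """Exact P-value in the maximisation sense: the supremum over the
    nuisance parameter (the common success probability p) of
    P(Z >= z_obs), computed by full enumeration of both binomials.
    The supremum is approximated on a finite grid of p values."""
    t_obs = z_stat(y1, n1, y2, n2)
    best = 0.0
    for g in range(1, grid):
        p = g / grid
        prob = 0.0
        for a in range(n1 + 1):
            pa = comb(n1, a) * p ** a * (1 - p) ** (n1 - a)
            for b in range(n2 + 1):
                if z_stat(a, n1, b, n2) >= t_obs - 1e-12:
                    prob += pa * comb(n2, b) * p ** b * (1 - p) ** (n2 - b)
        best = max(best, prob)
    return best

p_exact = exact_pvalue(8, 10, 4, 10)
```

Pre-estimation, as studied in the abstract, replaces the supremum over the whole grid by evaluation at an estimate of the nuisance parameter, which is much cheaper.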

4.
A class of designs for sampling two units without replacement with inclusion probability proportional to size is proposed in this article. Many well known probability proportional to size sampling designs are special cases of this class. The first and second order inclusion probabilities of this class satisfy important properties and provide a non-negative variance estimator of the Horvitz and Thompson estimator for the population total. A suitable choice of the first and second order inclusion probabilities from this class can be used to reduce the variance of the Horvitz and Thompson estimator. Comparisons between different proportional to size sampling designs through real data and artificial examples are given. The examples show that the minimum variance of the Horvitz and Thompson estimator obtained from the proposed design is, in most cases, not attainable by any of the well known designs.
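
The Horvitz and Thompson estimator of the population total, whose variance the proposed class is designed to control, is straightforward to state. A hedged sketch (the toy design below is equal-probability sampling of n = 2 units, not the paper's proposed proportional-to-size class):

```python
from itertools import combinations

def horvitz_thompson(sample_y, sample_pi):
    """Horvitz and Thompson estimator of the population total:
    sum of y_i / pi_i over the sampled units."""
    return sum(y / pi for y, pi in zip(sample_y, sample_pi))

# Toy design: n = 2 units drawn by simple random sampling without
# replacement, so every unit has inclusion probability pi_i = n / N.
y = [3.0, 7.0, 5.0, 9.0]
N, n = len(y), 2
pi = n / N

# Unbiasedness check by full enumeration: all C(N, 2) samples are equally
# likely under this design, so the average estimate equals the true total.
estimates = [horvitz_thompson([y[i], y[j]], [pi, pi])
             for i, j in combinations(range(N), 2)]
expected = sum(estimates) / len(estimates)
```

A proportional-to-size design replaces the constant pi with probabilities proportional to an auxiliary size measure; the estimator itself is unchanged.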

5.
6.
This paper puts the case for the inclusion of point optimal tests in the econometrician's repertoire. They do not suit every testing situation but the current evidence, which is reviewed here, indicates that they can have extremely useful small-sample power properties. As well as being most powerful at a nominated point in the alternative hypothesis parameter space, they may also have optimum power at a number of other points and indeed be uniformly most powerful when such a test exists. Point optimal tests can also be used to trace out the maximum attainable power envelope for a given testing problem, thus providing a benchmark against which test procedures can be evaluated. In some cases, point optimal tests can be constructed from tests of a simple null hypothesis against a simple alternative. For a wide range of models of interest to econometricians, this paper shows how one can check whether a point optimal test can be constructed in this way. When it cannot, one may wish to consider approximately point optimal tests. As an illustration, the approach is applied to the non-nested problem of testing for AR(1) disturbances against MA(1) disturbances in the linear regression model.
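
For a simple null against a simple alternative, the point optimal test is the Neyman-Pearson likelihood ratio test, and its power at the nominated point is a point on the power envelope. A sketch for the textbook case of a normal mean with known variance (the 5% critical value is hard-coded; this is our illustration, not an example from the paper):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def point_optimal_power(theta1, n, z_alpha=1.6448536269514722):
    """Power at theta1 of the Neyman-Pearson (hence point optimal) test
    of H0: theta = 0 against H1: theta = theta1 > 0 for n iid N(theta, 1)
    observations.  The MP test rejects when sqrt(n)*xbar > z_alpha, so
    its power at theta1 is Phi(theta1*sqrt(n) - z_alpha); evaluated over
    a range of theta1 values this traces the power envelope."""
    return norm_cdf(theta1 * sqrt(n) - z_alpha)

envelope = [point_optimal_power(t / 10, n=25) for t in range(11)]
```

In this one-parameter exponential-family case the same test is most powerful at every positive theta1, i.e. uniformly most powerful, exactly the favourable situation the abstract mentions.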

8.
Longitudinal data with non-response occur in studies where the same subject is followed over time but data for each subject may not be available at every time point. When the response is categorical and the response at time t depends on the response at the previous time points, it may be appropriate to model the response using a Markov model. We generalize a second-order Markov model to include a non-ignorable non-response mechanism. Simulation is used to study the properties of the estimators. Large sample sizes are necessary to ensure that the algorithm converges and that the asymptotic properties of the estimators can be used.
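
A second-order Markov model for a binary response conditions the next outcome on the previous two. A minimal simulation sketch (ignoring the non-response mechanism; the function name and transition probabilities are ours):

```python
import random

def simulate_second_order(trans, start, steps, seed=0):
    """Simulate a binary second-order Markov chain: the distribution of
    the next response depends on the pair (previous, current) of states.
    trans maps (prev, cur) to the probability that the next state is 1."""
    rng = random.Random(seed)
    path = list(start)
    for _ in range(steps):
        path.append(1 if rng.random() < trans[(path[-2], path[-1])] else 0)
    return path

# Persistence: a pair of equal states tends to repeat itself.
trans = {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.9}
path = simulate_second_order(trans, start=(0, 0), steps=200)
```

A non-ignorable non-response mechanism would add a second layer in which the probability of observing each response depends on the (possibly missing) response itself.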

9.
In the model of progressive type II censoring, point and interval estimation as well as relations for single and product moments are considered. Based on two-parameter exponential distributions, maximum likelihood estimators (MLEs), uniformly minimum variance unbiased estimators (UMVUEs) and best linear unbiased estimators (BLUEs) are derived for both location and scale parameters. Some properties of these estimators are shown. Moreover, results for single and product moments of progressive type II censored order statistics are presented to obtain recurrence relations from exponential and truncated exponential distributions. These relations may then be used to compute all the means, variances and covariances of progressive type II censored order statistics based on exponential distributions for arbitrary censoring schemes. The presented recurrence relations simplify those given by Aggarwala and Balakrishnan (1996).

10.
We study minimum contrast estimation for parametric stationary determinantal point processes. These processes form a useful class of models for repulsive (or regular, or inhibitive) point patterns and are already applied in numerous statistical applications. Our main focus is on minimum contrast methods based on Ripley's K-function or on the pair correlation function. Strong consistency and asymptotic normality of these procedures are proved under general conditions that only concern the existence of the process and its regularity with respect to the parameters. A key ingredient of the proofs is the recently established Brillinger mixing property of stationary determinantal point processes. This work may be viewed as a complement to the study of Y. Guan and M. Sherman, who establish the same kind of asymptotic properties for a large class of Cox processes, which in turn are models for clustering (or aggregation).

11.
Several authors have suggested the method of minimum bias estimation for estimating response surfaces. The minimum bias estimation procedure achieves minimum average squared bias of the fitted model without depending on the values of the unknown parameters of the true surface. The only requirement is that the design satisfies a simple estimability condition. Subject to providing minimum average squared bias, the minimum bias estimator also provides minimum average variance of ŷ(x), where ŷ(x) is the estimate of the response at the point x.

To support the estimation of the parameters in the fitted model, very little has been suggested in the way of experimental designs except to say that a full rank matrix X of independent variables should be used. This paper presents a closer look at the estimability conditions that are required for minimum bias estimation, and from the form of the matrix X, a formula is derived which measures the amount of design flexibility available. The design flexibility is termed “the degrees of freedom” of the X matrix and it is shown how the degrees of freedom can be used to decide if other design optimality criteria might be considered along with minimum bias estimation. Several examples are provided.

12.
Two-step estimation for inhomogeneous spatial point processes
Summary.  The paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second-order properties (K-function). Regression parameters are estimated by using a Poisson likelihood score estimating function and in the second step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rainforests.
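
The K-function appearing in the minimum contrast step can be estimated from a point pattern by counting close pairs. A deliberately naive sketch (no edge correction, and a homogeneous pattern is assumed, unlike the inhomogeneous setting of the paper):

```python
from math import dist

def ripley_k(points, r, area):
    """Naive estimate of Ripley's K-function at distance r for a pattern
    observed in a window of the given area (no edge correction):
        K_hat(r) = (area / n^2) * #{ordered pairs i != j with d_ij <= r}
    For a homogeneous Poisson process, K(r) = pi * r**2."""
    n = len(points)
    close = sum(1 for i in range(n) for j in range(n)
                if i != j and dist(points[i], points[j]) <= r)
    return area * close / n ** 2

# Four points at the corners of a unit square inside a 10 x 10 window.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
k1 = ripley_k(pts, 1.0, area=100.0)   # counts the 8 ordered side pairs
```

Minimum contrast estimation then chooses the clustering parameters so that the model's theoretical K-function is as close as possible to this empirical one over a range of r.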

13.
Consider a large number of econometric investigations using different estimation techniques and/or different subsets of all available data to estimate a fixed set of parameters. The resulting empirical distribution of point estimates can be shown, under suitable conditions, to coincide with a Bayesian posterior measure on the parameter space induced by a minimum information procedure. This Bayesian interpretation makes it easier to combine the results of various empirical exercises for statistical decision making. The collection of estimators may be generated by one investigator to ensure the satisfaction of our conditions, or they may be collected from published works, where behavioral assumptions need to be made regarding the dependence structure of econometric studies.

14.
B. Klar, Statistics, 2013, 47(6): 505-515
Surles and Padgett recently introduced the two-parameter Burr Type X distribution, which can also be described as the generalized Rayleigh distribution. It is observed that the generalized Rayleigh and log-normal distributions have many common properties, and both distributions can be used quite effectively to analyze skewed data sets. The problem of selecting either the generalized Rayleigh or the log-normal distribution for a given data set is discussed in this paper. The ratio of maximized likelihoods (RML) is used in discriminating between the two distribution functions. Asymptotic distributions of the RML under the null hypotheses are obtained, and they are used to determine the minimum sample size required to discriminate between these two families of distributions for a user-specified probability of correct selection and tolerance limit.

15.
We present a local density estimator based on first-order statistics. To estimate the density at a point, x, the original sample is divided into subsets and the average minimum sample distance to x over all such subsets is used to define the density estimate at x. The tuning parameter is thus the number of subsets instead of the typical bandwidth of kernel or histogram-based density estimators. The proposed method is similar to nearest-neighbor density estimators but it provides smoother estimates. We derive the asymptotic distribution of this minimum sample distance statistic to study globally optimal values for the number and size of the subsets. Simulations are used to illustrate and compare the convergence properties of the estimator. The results show that the method provides good estimates of a wide variety of densities without changes of the tuning parameter, and that it offers competitive convergence performance.
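
The nearest-neighbor density estimators that the proposed method is compared with have a simple one-dimensional form. A sketch of this classical baseline (not the paper's subset-averaging estimator):

```python
def knn_density(x, sample, k):
    """Classical k-nearest-neighbour density estimate in one dimension:
        f_hat(x) = k / (2 * n * d_k(x)),
    where d_k(x) is the distance from x to its k-th nearest sample point."""
    d = sorted(abs(s - x) for s in sample)
    return k / (2 * len(sample) * d[k - 1])

# Eleven equally spaced points on [0, 1]; the true uniform density is 1.
sample = [i / 10 for i in range(11)]
est = knn_density(0.5, sample, k=3)   # d_3(0.5) = 0.1, so est = 3 / 2.2
```

Such estimates jump whenever the k-th nearest neighbour changes; averaging minimum distances over random subsets, as in the abstract, is one way to smooth that out.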

16.
In the existing statistical literature, the almost default choice for inference on inhomogeneous point processes is the best-known model class for such processes: reweighted second-order stationary processes. In particular, the K-function related to this type of inhomogeneity is presented as the inhomogeneous K-function. In the present paper, we put a number of inhomogeneous model classes (including the class of reweighted second-order stationary processes) into the common general framework of hidden second-order stationary processes, allowing for a transfer of statistical inference procedures for second-order stationary processes based on summary statistics to each of these model classes for inhomogeneous point processes. In particular, a general method to test the hypothesis that a given point pattern can be ascribed to a specific inhomogeneous model class is developed. Using the new theoretical framework, we reanalyse three inhomogeneous point patterns that have earlier been analysed in the statistical literature and show that the conclusions concerning an appropriate model class must be revised for some of the point patterns.

17.
Recently, wavelets have been used for copula density estimation. A known limitation of wavelet functions is that they cannot be symmetric, orthogonal, and compactly supported at the same time, whereas multiwavelets overcome this disadvantage. This article highlights the usefulness of multiwavelets for approximating copula density functions. Possessing these three properties simultaneously, together with high smoothness and a high approximation order, multiwavelets can be more precise in copula density approximation. We make this approximation method more accurate by using multiresolution analysis. Finally, we apply our proposed method to approximate the copula density in actuarial data.

18.
Under some very reasonable hypotheses, it becomes evident that randomizing the run order of a factorial experiment does not always neutralize the effect of undesirable factors; these factors can still influence the response, depending on the order in which the experiments are conducted. On the other hand, changing the factor levels is often costly; therefore it is not reasonable to leave the number of changes needed to chance. For this reason, run orders that offer the minimum number of factor level changes and at the same time minimize the possible influence of undesirable factors on the experimentation have been sought. Sequences which are known to produce the desired properties in designs with 8 and 16 experiments can be found in the literature. In this paper, we provide the best possible sequences for designs with 32 experiments, as well as sequences that offer excellent properties for designs with 64 and 128 experiments. The method used to find them is based on a mixture of algorithmic searches and an augmentation of smaller designs.
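
For a full 2^k factorial, the binary-reflected Gray code gives a run order with the minimum possible number of factor level changes, 2^k - 1. It ignores the second goal of shielding against undesirable trend factors, so it is only a baseline sketch, not one of the paper's sequences:

```python
def gray_order(k):
    """Run order of the full 2^k factorial given by the binary-reflected
    Gray code: consecutive runs differ in exactly one factor level."""
    return [g ^ (g >> 1) for g in range(2 ** k)]

def level_changes(order):
    """Total number of factor-level changes along a run order, where each
    run is coded as a k-bit integer (one bit per two-level factor)."""
    return sum(bin(a ^ b).count("1") for a, b in zip(order, order[1:]))

k = 5                                   # 32 experiments, as in the paper
gray = gray_order(k)
changes_gray = level_changes(gray)      # 2**k - 1, the minimum possible
changes_std = level_changes(list(range(2 ** k)))   # Yates/standard order
```

The sequences sought in the paper must additionally keep the run order nearly orthogonal to unobserved trends, which is why an algorithmic search is needed rather than the Gray code alone.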

19.
Response surface methodology aims at finding the combination of factor levels which optimizes a response variable. A second order polynomial model is typically employed to make inference on the stationary point of the true response function. A suitable reparametrization of the polynomial model, where the coordinates of the stationary point appear as the parameter of interest, is used to derive unconstrained confidence regions for the stationary point. These regions are based on the asymptotic normal approximation to the sampling distribution of the maximum likelihood estimator of the stationary point. A simulation study is performed to evaluate the coverage probabilities of the proposed confidence regions. Some comparisons with the standard confidence regions due to Box and Hunter are also shown.

20.
When events of a temporal point process are too close to each other, they can be erased by dead-time effects. Among the various possible mechanisms of dead-time, output dead-time is the most important. Dead-time effects modify the statistical properties of point processes, and some of these modifications are analyzed in this article. To do so, we note that a point process is defined by the distance between its successive points, called the life-time, which constitutes a discrete-time positive signal. The dead-time mechanism is a system which transforms such a signal into another discrete-time positive signal. Except in very specific cases this transformation cannot be expressed in closed form. We show, however, that it can be written in a recursive form analogous to the state representation of systems. By using this recursion, various statistical properties of point processes with dead-time are analyzed in computer experiments. In this study, we focus on the probability distribution of the intervals between points and on the coincidence function which describes the second-order properties of the point process. For the rare processes where theoretical calculations are possible there is excellent agreement between experiment and theory.
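
The output (non-paralysable) dead-time mechanism can be sketched directly as a recursion on event times: an event is registered only if it occurs at least tau after the last registered event (the function name and example times are ours):

```python
def apply_dead_time(times, tau):
    """Non-paralysable output dead-time: starting from the first event,
    an event is registered only if it occurs at least tau after the last
    *registered* event; all other events are erased."""
    kept = []
    for t in times:
        if not kept or t - kept[-1] >= tau:
            kept.append(t)
    return kept

events = [0.0, 0.5, 1.2, 1.3, 2.0]
kept = apply_dead_time(events, tau=0.6)
```

Note the recursive, state-like structure: whether an event survives depends on the *output* so far, not on the raw input alone, which is exactly why the transformation has no general closed form.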
