Similar Articles
20 similar articles found.
1.
A unified approach to parameter estimation and goodness-of-fit testing is proposed. The new procedures may be applied to arbitrary laws with continuous distribution function. Specifically, both the method of estimation and the goodness-of-fit test are based on the idea of optimally transforming the original data to the uniform distribution, the criterion of optimality being an L2-type distance between the empirical characteristic function of the transformed data and the characteristic function of the uniform (0,1) distribution. Theoretical properties of the new estimators and tests are studied and some connections with classical statistics, moment-based procedures and non-parametric methods are investigated. Comparison with standard procedures via Monte Carlo is also included, along with a real-data application.
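The criterion described above can be sketched numerically: transform the data through the hypothesized CDF, compare the empirical characteristic function of the transformed values with the characteristic function of Uniform(0,1), and minimize the weighted L2 discrepancy over the parameter. The Gaussian weight, the integration grid, and the normal-location example below are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def ecf_distance(x, cdf, ts=np.linspace(0.1, 20.0, 400)):
    """L2-type distance between the empirical characteristic function of
    u_j = cdf(x_j) and the characteristic function of Uniform(0,1).
    Weight and grid quadrature are illustrative choices."""
    u = cdf(np.asarray(x))                            # transform toward U(0,1)
    ecf = np.exp(1j * np.outer(ts, u)).mean(axis=1)   # empirical c.f. on the grid
    cf_unif = (np.exp(1j * ts) - 1.0) / (1j * ts)     # c.f. of Uniform(0,1)
    w = np.exp(-0.5 * ts**2)                          # weight for integrability
    return float(np.sum(w * np.abs(ecf - cf_unif) ** 2) * (ts[1] - ts[0]))

# Estimate a location parameter by making the transformed data "most uniform":
x = stats.norm.rvs(loc=2.0, size=500, random_state=0)
fit = minimize_scalar(lambda m: ecf_distance(x, stats.norm(loc=m).cdf),
                      bounds=(-5.0, 5.0), method="bounded")
```

The minimizer `fit.x` plays the role of the transformation-based estimator: the distance is smallest when the probability-integral transform actually uniformizes the data.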

2.
The non-central chi-squared distribution plays a vital role in statistical testing procedures. Estimation of the non-centrality parameter provides valuable information for the power calculation of the associated test. We are interested in the statistical inference properties of the non-centrality parameter estimate based on one observation (usually a summary statistic) from a truncated chi-squared distribution. This work is motivated by the application of the flexible two-stage design in case–control studies, where the sample size needed for the second stage of a two-stage study can be determined adaptively from the results of the first stage. We first study the moment estimate for the truncated distribution and prove its existence and uniqueness, and establish its inadmissibility and convergence properties. We then define a new class of estimates that includes the moment estimate as a special case. Among this class, we recommend one member that outperforms the moment estimate in a wide range of scenarios. We also present two methods for constructing confidence intervals. Simulation studies are conducted to evaluate the performance of the proposed point and interval estimates.
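A moment-type estimate of this kind can be sketched directly: equate the observed statistic to the mean of a non-central chi-squared distribution truncated to the rejection region X > c, and solve for the non-centrality numerically. This is a bare-bones numerical version under assumed truncation at a fixed c; the paper's refinements and its recommended class member are not reproduced.

```python
import numpy as np
from scipy import stats, integrate, optimize

def truncated_ncx2_mean(lam, df, c):
    """Mean of a noncentral chi-squared(df, lam) truncated to X > c."""
    num, _ = integrate.quad(lambda x: x * stats.ncx2.pdf(x, df, lam), c, np.inf)
    return num / stats.ncx2.sf(c, df, lam)

def moment_estimate(x_obs, df, c, lam_max=200.0):
    """Moment estimate of the noncentrality: solve E_lam[X | X > c] = x_obs.
    The truncated mean is increasing in lam, so a root search suffices."""
    g = lambda lam: truncated_ncx2_mean(lam, df, c) - x_obs
    return optimize.brentq(g, 1e-8, lam_max)

# Sanity check: recover a known noncentrality from its own truncated mean.
m = truncated_ncx2_mean(5.0, df=1, c=3.84)
lam_hat = moment_estimate(m, df=1, c=3.84)
```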

3.
In this article we discuss various strategies for constructing bivariate Kumaraswamy distributions. As alternatives to the Nadarajah et al. (2011) bivariate model, four different models are introduced utilizing a conditional specification approach, a conditional survival function approach, and an Arnold–Ng bivariate beta distribution construction approach. Distributional properties for such bivariate distributions are investigated. Parameter estimation strategies for the models are discussed, as are the consequences of fitting two of the models to a particular data set involving the proportion of foggy days at two different airports in Colombia.

4.
Progressively Type-II right censored order statistics from continuous distributions have been studied rather extensively in the literature; see Balakrishnan and Aggarwala [2000. Progressive Censoring: Theory, Methods and Applications. Birkhäuser, Boston]. In this paper, we derive the joint and marginal distributions of progressively Type-II right censored order statistics from discrete distributions. We then use these distributions to show the non-Markovian property as well as to discuss some properties in the special case of the geometric distribution.
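For the continuous case cited above, a progressively Type-II censored sample can be simulated with the standard uniform-representation algorithm (as in Balakrishnan and Aggarwala): draw independent uniforms, raise each to one over a partial sum of the censoring scheme, and form cumulative products. This sketch covers only the continuous case; the paper's discrete-distribution results are not reproduced here.

```python
import numpy as np

def progressive_type2_uniform(n, scheme, rng):
    """Simulate a progressively Type-II right censored sample of size
    m = len(scheme) from Uniform(0,1), where scheme[i-1] = R_i units are
    withdrawn at the i-th observed failure (requires m + sum(scheme) == n).
    Apply a quantile function F^{-1} to the output for other laws."""
    m = len(scheme)
    assert m + sum(scheme) == n
    w = rng.uniform(size=m)
    # gamma_i = i + R_m + R_{m-1} + ... + R_{m-i+1}
    gammas = np.array([i + sum(scheme[m - i:]) for i in range(1, m + 1)])
    v = w ** (1.0 / gammas)
    # U_{i:m:n} = 1 - V_m * V_{m-1} * ... * V_{m-i+1}  (increasing in i)
    return 1.0 - np.cumprod(v[::-1])

rng = np.random.default_rng(0)
u = progressive_type2_uniform(n=10, scheme=[2, 2, 1, 0, 0], rng=rng)
```

With an all-zero scheme and m = n the output reduces in distribution to the ordinary uniform order statistics, which gives a quick correctness check.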

5.
In this paper we propose a new robust estimator in the context of two-stage estimation methods directed towards the correction of endogeneity problems in linear models. Our estimator is a combination of Huber estimators for each of the two stages, with scale corrections implemented using preliminary median absolute deviation estimators. In this way we obtain a two-stage estimation procedure that is an interesting compromise between concerns of simplicity of calculation, robustness and efficiency. This method compares well with other possible estimators such as two-stage least-squares (2SLS) and two-stage least-absolute-deviations (2SLAD), both asymptotically and in finite samples. It is particularly attractive for contamination that affects the distribution tails more heavily than a few outliers would, while not losing as much efficiency as other popular estimators in such a case, e.g. under normality. An additional originality resides in the fact that we deal with random regressors and asymmetric errors, which is not often the case in the literature on robust estimators.

6.
We study the distribution of the adaptive LASSO estimator [Zou, H., 2006. The adaptive LASSO and its oracle properties. J. Amer. Statist. Assoc. 101, 1418–1429] in finite samples as well as in the large-sample limit. The large-sample distributions are derived both for the case where the adaptive LASSO estimator is tuned to perform conservative model selection as well as for the case where the tuning results in consistent model selection. We show that the finite-sample as well as the large-sample distributions are typically highly nonnormal, regardless of the choice of the tuning parameter. The uniform convergence rate is also obtained, and is shown to be slower than n^{-1/2} in case the estimator is tuned to perform consistent model selection. In particular, these results question the statistical relevance of the ‘oracle’ property of the adaptive LASSO estimator established in Zou [2006. The adaptive LASSO and its oracle properties. J. Amer. Statist. Assoc. 101, 1418–1429]. Moreover, we also provide an impossibility result regarding the estimation of the distribution function of the adaptive LASSO estimator. The theoretical results, which are obtained for a regression model with orthogonal design, are complemented by a Monte Carlo study using nonorthogonal regressors.
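The estimator under study is the usual two-step construction: an initial consistent estimate supplies coefficient-specific penalty weights 1/|beta_init|^gamma, and a weighted LASSO is then solved. A common way to implement it is to absorb the weights into the design matrix, as sketched below; the OLS initializer and the values of alpha and gamma are illustrative choices, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    """Two-step adaptive LASSO: penalize coefficient j proportionally to
    1/|initial estimate_j|^gamma, via rescaling of the design columns."""
    beta0 = LinearRegression(fit_intercept=False).fit(X, y).coef_
    w = 1.0 / (np.abs(beta0) ** gamma + 1e-8)   # large weight = heavy penalty
    X_scaled = X / w                            # absorb weights into the design
    lasso = Lasso(alpha=alpha, fit_intercept=False).fit(X_scaled, y)
    return lasso.coef_ / w                      # undo the rescaling

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
beta_true = np.array([2.0, 0.0, 0.0, 1.0, 0.0])
y = X @ beta_true + 0.5 * rng.standard_normal(200)
beta_hat = adaptive_lasso(X, y)
```

Columns with small initial estimates get tiny rescaled norms and are driven exactly to zero, which is the model-selection behavior whose distributional consequences the paper analyzes.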

7.
The number of components is an important feature in finite mixture models. Because of the irregularity of the parameter space, the log-likelihood-ratio statistic does not have a chi-square limit distribution. It is very difficult to find a test with a specified significance level, and this is especially true for testing k-1 versus k components. Most of the existing work has concentrated on finding a comparable approximation to the limit distribution of the log-likelihood-ratio statistic. In this paper, we use a statistic similar to the usual log likelihood ratio, but its null distribution is asymptotically normal. A simulation study indicates that the method has good power at detecting extra components. We also discuss how to improve the power of the test, and some simulations are performed.

8.
Let {X_n : n ≥ 1} be an i.i.d. sequence of random variables with a continuous distribution function F. Under the assumption that the upper tail of F is regularly varying with exponent 1/α, α > 0, we study the asymptotic properties of an estimator of α based on k-record values.
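The record-based estimator of the abstract is not reproduced here, but the classical Hill estimator for the same regular-variation problem provides a familiar benchmark: average the log-excesses of the k largest order statistics over the (k+1)-th largest. The Pareto example and the choice of k below are illustrative, and the convention used is 1 - F(x) ~ x^{-1/gamma}.

```python
import numpy as np
from scipy import stats

def hill_estimator(x, k):
    """Hill estimator of the tail index gamma, where 1 - F(x) ~ x^{-1/gamma},
    based on the k largest order statistics of the sample."""
    xs = np.sort(np.asarray(x))
    tail = xs[-k:]                                     # k largest observations
    return float(np.mean(np.log(tail) - np.log(xs[-k - 1])))

# A Pareto law with survival function x^{-2} has tail index gamma = 1/2:
x = stats.pareto.rvs(b=2.0, size=5000, random_state=0)
gamma_hat = hill_estimator(x, k=500)
```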

9.
We discuss the general form of a first-order correction to the maximum likelihood estimator which is expressed in terms of the gradient of a function, which could for example be the logarithm of a prior density function. In terms of Kullback–Leibler divergence, the correction gives an asymptotic improvement over maximum likelihood under rather general conditions. The theory is illustrated for Bayes estimators with conjugate priors. The optimal choice of hyper-parameter to improve the maximum likelihood estimator is discussed. The results based on Kullback–Leibler risk are extended to a wide class of risk functions.

10.
The phenotype of a quantitative trait locus (QTL) is often modeled by a finite mixture of normal distributions. If the QTL effect depends on the number of copies of a specific allele one carries, then the mixture model has three components. In this case, the mixing proportions have a binomial structure according to the Hardy–Weinberg equilibrium. In the search for QTL, a significance test of homogeneity against the Hardy–Weinberg normal mixture model alternative is an important first step. The LOD score method, a likelihood ratio test used in genetics, is a favored choice. However, there is not yet a general theory for the limiting distribution of the likelihood ratio statistic in the presence of unknown variance. This paper derives the limiting distribution of the likelihood ratio statistic, which can be described by the supremum of a quadratic form of a Gaussian process. Further, the result implies that the distribution of the modified likelihood ratio statistic is well approximated by a chi-squared distribution. Simulation results show that the approximation has satisfactory precision for the cases considered. We also give a real-data example.

11.
In this paper, we discuss the problem of estimating the mean and standard deviation of a logistic population based on multiply Type-II censored samples. First, we discuss the best linear unbiased estimation and the maximum likelihood estimation methods. Next, by appropriately approximating the likelihood equations we derive approximate maximum likelihood estimators for the two parameters and show that these estimators are quite useful as they do not need the construction of any special tables (as required for the best linear unbiased estimators) and are explicit estimators (unlike the maximum likelihood estimators which need to be determined by numerical methods). We show that these estimators are also quite efficient, and derive the asymptotic variances and covariance of the estimators. Finally, we present an example to illustrate the methods of estimation discussed in this paper.

12.
The Pareto distribution is found in a large number of real world situations and is also a well-known model for extreme events. In the spirit of Neyman [1937. Smooth tests for goodness of fit. Skand. Aktuarietidskr. 20, 149–199] and Thomas and Pierce [1979. Neyman's smooth goodness-of-fit test when the hypothesis is composite. J. Amer. Statist. Assoc. 74, 441–445], we propose a smooth goodness of fit test for the Pareto distribution family which is motivated by LeCam's theory of local asymptotic normality (LAN). We establish the behavior of the associated test statistic firstly under the null hypothesis that the sample follows a Pareto distribution and secondly under local alternatives using the LAN framework. Finally, simulations are provided in order to study the finite sample behavior of the test statistic.

13.
The weighted likelihood is a generalization of the likelihood designed to borrow strength from similar populations while making minimal assumptions. If the weights are properly chosen, the maximum weighted likelihood estimate may perform better than the maximum likelihood estimate (MLE). In a previous article, the minimum averaged mean squared error (MAMSE) weights are proposed, and simulations show that they can outperform the MLE in many cases. In this paper, we study the asymptotic properties of the MAMSE weights. In particular, we prove that the MAMSE-weighted mixture of empirical distribution functions converges uniformly to the target distribution and that the maximum weighted likelihood estimate is strongly consistent. A short simulation illustrates the use of the bootstrap in this context.

14.
Bias reduction estimation for the tail index has been studied in the literature. One method is to reduce bias with an external estimator of the second order regular variation parameter; see Gomes and Martins [2002. Asymptotically unbiased estimators of the tail index based on external estimation of the second order parameter. Extremes 5(1), 5–31]. It is known that a negative extreme value index implies that the underlying distribution has a finite right endpoint. As far as we know, there exists no bias reduction estimator for the endpoint of a distribution. In this paper, we study the bias reduction method with an external estimator of the second order parameter for both the negative extreme value index and the endpoint simultaneously. Surprisingly, we find that this bias reduction method for a negative extreme value index requires a larger order of sample fraction than that for a positive extreme value index. This finding implies that this bias reduction method for the endpoint is less attractive than that for a positive extreme value index. Nevertheless, our simulation study prefers the proposed bias reduction estimators to the biased estimators in Hall [1982. On estimating the endpoint of a distribution. Ann. Statist. 10, 556–568].

15.
Ranked set sampling (RSS) was first proposed by McIntyre [1952. A method for unbiased selective sampling, using ranked sets. Australian J. Agricultural Res. 3, 385–390] as an effective way to estimate the unknown population mean. Chuiv and Sinha [1998. On some aspects of ranked set sampling in parametric estimation. In: Balakrishnan, N., Rao, C.R. (Eds.), Handbook of Statistics, vol. 17. Elsevier, Amsterdam, pp. 337–377] and Chen et al. [2004. Ranked Set Sampling—Theory and Application. Lecture Notes in Statistics, vol. 176. Springer, New York] have provided excellent surveys of RSS and various inferential results based on RSS. In this paper, we use the idea of order statistics from independent and non-identically distributed (INID) random variables to propose ordered ranked set sampling (ORSS) and then develop optimal linear inference based on ORSS. We determine the best linear unbiased estimators based on ORSS (BLUE-ORSS) and show that they are more efficient than BLUE-RSS for the two-parameter exponential, normal and logistic distributions. Although this is not the case for the one-parameter exponential distribution, the relative efficiency of the BLUE-ORSS (to BLUE-RSS) is very close to 1. Furthermore, we compare both BLUE-ORSS and BLUE-RSS with the BLUE based on order statistics from a simple random sample (BLUE-OS). We show that BLUE-ORSS is uniformly better than BLUE-OS, while BLUE-RSS is not as efficient as BLUE-OS for small sample sizes (n < 5).
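Basic RSS, on which the ORSS construction above builds, is easy to sketch: draw m independent sets of m units, rank each set (by eye or by a cheap covariate), and measure only the i-th ranked unit of the i-th set. Under perfect ranking the resulting mean is unbiased and less variable than a simple-random-sample mean of the same measured size. The perfect-ranking assumption and the standard normal example are simplifications for illustration.

```python
import numpy as np

def rss_sample(rng, m, rvs):
    """One cycle of ranked set sampling with set size m under perfect ranking:
    of the m*m units drawn, only the i-th order statistic of the i-th set
    is actually measured."""
    sets = rvs(rng, size=(m, m))
    sets.sort(axis=1)                              # rank within each set
    return sets[np.arange(m), np.arange(m)]        # diagonal: i-th order stat of set i

# Compare the RSS mean with a simple-random-sample mean of the same size m:
rng = np.random.default_rng(0)
normal = lambda rng, size: rng.standard_normal(size)
m, reps = 3, 4000
rss_means = np.array([rss_sample(rng, m, normal).mean() for _ in range(reps)])
srs_means = np.array([normal(rng, (m,)).mean() for _ in range(reps)])
```

Both estimators are unbiased for the population mean, but the stratification by rank makes the RSS mean noticeably more precise, which is the effect the surveys cited above quantify.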

16.
We consider a continuous-time model for the evolution of social networks. A social network is here conceived as a (di-)graph on a set of vertices, representing actors, and the changes of interest are the creation and disappearance over time of edges (arcs) in the graph. Hence we model a collection of random edge indicators that are not, in general, independent. We explicitly model the interdependencies between edge indicators that arise from interaction between social entities. A Markov chain is defined in terms of an embedded chain with holding times and transition probabilities. Data are observed at fixed points in time and hence we are not able to observe the embedded chain directly. Introducing a prior distribution for the parameters, we may implement an MCMC algorithm for exploring the posterior distribution of the parameters by simulating the evolution of the embedded process between observations.

17.
In this paper, we use a smoothed empirical likelihood method to investigate the difference of quantiles under censorship. An empirical log-likelihood ratio is derived and its asymptotic distribution is shown to be chi-squared. Approximate confidence regions based on this method are constructed. Simulation studies are used to compare the empirical likelihood and the normal approximation method in terms of coverage accuracy. It is found that the empirical likelihood method provides a much better performance. The research is supported by NSFC (10231030) and RFDP.

18.
Lu Lin, Statistical Papers (2004) 45(4), 529–544
The quasi-score function, as defined by Wedderburn (1974), McCullagh (1983) and others, is a linear function of the observations. The generalized quasi-score function introduced in this paper is a linear function of some unbiased basis functions, which may or may not be linear in the observations and can easily be constructed from the meaning of the parameters, such as the mean or median. The generalized quasi-likelihood estimate obtained from such a generalized quasi-score function is consistent and asymptotically normally distributed. As a result, the optimum generalized quasi-score is obtained and a method to construct the optimum unbiased basis function is introduced. To construct the potential function, a conservative generalized estimating function is defined. Because of this conservativeness, a potential function for the projected score exists and has many of the properties of a log-likelihood function. Finally, some examples are given to illustrate the theoretical results. This paper is supported by NNSF project (10371059) of China and the Youth Teacher Foundation of Nankai University.

19.
Semiparametric Bayesian models are nowadays a popular tool in event history analysis. An important area of research concerns the investigation of frequentist properties of posterior inference. In this paper, we propose novel semiparametric Bayesian models for the analysis of competing risks data and investigate the Bernstein–von Mises theorem for differentiable functionals of model parameters. The model is specified by expressing the cause-specific hazard as the product of the conditional probability of a failure type and the overall hazard rate. We take the conditional probability as a smooth function of time and leave the cumulative overall hazard unspecified. A prior distribution is defined on the joint parameter space, which includes a beta process prior for the cumulative overall hazard. We first develop the large-sample properties of maximum likelihood estimators by giving simple sufficient conditions for them to hold. Then, we show that, under the chosen priors, the posterior distribution for any differentiable functional of interest is asymptotically equivalent to the sampling distribution derived from maximum likelihood estimation. A simulation study is provided to illustrate the coverage properties of credible intervals on cumulative incidence functions.

20.
Recently Jammalamadaka and Mangalam [2003. Non-parametric estimation for middle censored data. J. Nonparametric Statist. 15, 253–265] introduced a general censoring scheme called the “middle-censoring” scheme in a non-parametric setup. In this paper we consider this middle-censoring scheme when the lifetime distribution of the items is exponential and the censoring mechanism is independent and non-informative. In this setup, we derive the maximum likelihood estimator and study its consistency and asymptotic normality properties. We also derive the Bayes estimate of the exponential parameter under a gamma prior. Since a theoretical construction of the credible interval becomes quite difficult, we propose and implement a Gibbs sampling technique to construct the credible intervals. Monte Carlo simulations are performed to evaluate the small sample behavior of the techniques proposed. A real data set is analyzed to illustrate the practical application of the proposed methods.
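The likelihood for middle-censored exponential data combines a density term for each fully observed lifetime with a probability term F(r) - F(l) for each lifetime known only to fall in an interval (l, r). A direct numerical MLE can be sketched as below; the simulation design (uniform censoring intervals) is an illustrative assumption, and the paper's Bayes/Gibbs machinery is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def exp_mle_middle_censored(exact, intervals):
    """MLE of the exponential rate under middle censoring: 'exact' holds
    fully observed lifetimes, 'intervals' holds (l, r) pairs where only
    l < T < r is known.  The log-likelihood is maximized numerically."""
    exact = np.asarray(exact, dtype=float)
    lo = np.array([l for l, r in intervals])
    hi = np.array([r for l, r in intervals])

    def nll(lam):
        ll = np.sum(np.log(lam) - lam * exact)                        # density terms
        ll += np.sum(np.log(np.exp(-lam * lo) - np.exp(-lam * hi)))   # interval terms
        return -ll

    return minimize_scalar(nll, bounds=(1e-6, 20.0), method="bounded").x

# Simulate rate-1 lifetimes and middle-censor those landing in a random interval:
rng = np.random.default_rng(0)
t = rng.exponential(1.0, size=300)
left = rng.uniform(0.0, 2.0, size=300)
right = left + rng.uniform(0.2, 1.0, size=300)
cens = (left < t) & (t < right)
lam_hat = exp_mle_middle_censored(t[~cens], list(zip(left[cens], right[cens])))
```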

