Similar Articles
20 similar articles found (search time: 31 ms)
1.
In this paper the problem of estimating the scale matrix in a complex elliptically contoured distribution (complex ECD) is addressed. An extended Haff–Stein identity for this model is derived. It is shown that the minimax estimators of the covariance matrix obtained under the complex normal model remain robust under the complex ECD model when the Stein loss function is employed.

2.
General linear models with a common design matrix and with various structures of the variance–covariance matrix are considered. We say that a model is perfect for a linearly estimable parametric function, or the function is perfect in the model, if there exists the best linear unbiased estimator. All perfect models for a given function and all perfect functions in a given model are characterized.

3.
The Pareto distribution is found in a large number of real-world situations and is also a well-known model for extreme events. In the spirit of Neyman [1937. Smooth tests for goodness of fit. Skand. Aktuarietidskr. 20, 149–199] and Thomas and Pierce [1979. Neyman's smooth goodness-of-fit test when the hypothesis is composite. J. Amer. Statist. Assoc. 74, 441–445], we propose a smooth goodness-of-fit test for the Pareto distribution family, motivated by LeCam's theory of local asymptotic normality (LAN). We establish the behavior of the associated test statistic first under the null hypothesis that the sample follows a Pareto distribution and then under local alternatives, using the LAN framework. Finally, simulations are provided in order to study the finite-sample behavior of the test statistic.

4.
In a general parametric setup, a multivariate regression model is considered when responses may be missing at random while the explanatory variables and covariates are completely observed. Asymptotic optimality properties of maximum likelihood estimators for such models are linked to the Fisher information matrix for the parameters. It is shown that the information matrix is well defined for the missing-at-random model and that it plays the same role as in the complete-data linear models. Applications of these methodological developments to hypothesis-testing problems, without any imputation of missing data, are illustrated. Some simulation results comparing the proposed method with Rubin's multiple imputation method are presented.

5.
This article concerns the variance estimation in the central limit theorem for finite recurrent Markov chains. The associated variance is calculated in terms of the transition matrix of the Markov chain. We prove the equivalence of different matrix forms representing this variance. The maximum likelihood estimator for this variance is constructed and it is proved that it is strongly consistent and asymptotically normal. The main part of our analysis consists in presenting closed matrix forms for this new variance. Additionally, we prove the asymptotic equivalence between the empirical and the maximum likelihood estimation (MLE) for the stationary distribution.

6.
The phenotype of a quantitative trait locus (QTL) is often modeled by a finite mixture of normal distributions. If the QTL effect depends on the number of copies of a specific allele one carries, then the mixture model has three components. In this case, the mixing proportions have a binomial structure according to the Hardy–Weinberg equilibrium. In the search for QTL, a significance test of homogeneity against the Hardy–Weinberg normal mixture model alternative is an important first step. The LOD score method, a likelihood ratio test used in genetics, is a favored choice. However, there is not yet a general theory for the limiting distribution of the likelihood ratio statistic in the presence of unknown variance. This paper derives the limiting distribution of the likelihood ratio statistic, which can be described by the supremum of a quadratic form of a Gaussian process. Further, the result implies that the distribution of the modified likelihood ratio statistic is well approximated by a chi-squared distribution. Simulation results show that the approximation has satisfactory precision for the cases considered. We also give a real-data example.
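The Hardy–Weinberg normal mixture described in this abstract is easy to simulate: with allele frequency p, the three genotype classes occur with binomial proportions (1-p)^2, 2p(1-p), p^2, and each contributes a normal component. A minimal sketch, not taken from the paper; the component means and variance below are illustrative choices:

```python
import numpy as np

def simulate_hw_mixture(n, p, means, sigma, rng):
    """Draw n phenotypes from a Hardy-Weinberg normal mixture.

    Genotype = number of copies of the allele (0, 1 or 2), which is
    Binomial(2, p) under Hardy-Weinberg equilibrium; each genotype class
    has its own normal component mean and a common standard deviation.
    """
    genotype = rng.binomial(2, p, size=n)
    return rng.normal(loc=np.asarray(means)[genotype], scale=sigma)

rng = np.random.default_rng(0)
y = simulate_hw_mixture(10_000, p=0.3, means=(0.0, 1.0, 2.0), sigma=1.0, rng=rng)
# With means linear in the allele count, the mixture mean is 2 * p * 1.0.
```

A homogeneity test as in the paper would compare the fit of a single normal against this three-component alternative on such data.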

7.
Semiparametric Bayesian models are nowadays a popular tool in event history analysis. An important area of research concerns the investigation of frequentist properties of posterior inference. In this paper, we propose novel semiparametric Bayesian models for the analysis of competing risks data and investigate the Bernstein–von Mises theorem for differentiable functionals of model parameters. The model is specified by expressing the cause-specific hazard as the product of the conditional probability of a failure type and the overall hazard rate. We take the conditional probability as a smooth function of time and leave the cumulative overall hazard unspecified. A prior distribution is defined on the joint parameter space, which includes a beta process prior for the cumulative overall hazard. We first develop the large-sample properties of maximum likelihood estimators by giving simple sufficient conditions for them to hold. Then, we show that, under the chosen priors, the posterior distribution for any differentiable functional of interest is asymptotically equivalent to the sampling distribution derived from maximum likelihood estimation. A simulation study is provided to illustrate the coverage properties of credible intervals on cumulative incidence functions.

8.
This article investigates the large-sample interval mapping method for genetic trait loci (GTL) in a finite non-linear regression mixture model. The general model includes most commonly used kernel functions, such as exponential family mixture, logistic regression mixture and generalized linear mixture models, as special cases. Populations derived from either the backcross or the intercross design are considered. In particular, unlike all existing results in the literature on finite mixture models, the large-sample results presented in this paper do not require a boundedness condition on the parameter space. Therefore, the large-sample theory presented in this article possesses general applicability to the interval mapping method for GTL in genetic research. The limiting null distribution of the likelihood ratio test statistic can easily be utilized to determine the threshold values or p-values required in interval mapping. The limiting distribution is proved to be free of the parameter values of the null model and of the choice of kernel function. Extension to multiple-marker interval GTL detection is also discussed. Simulation results show favorable performance of the asymptotic procedure when sample sizes are moderate.

9.
We study the distribution of the adaptive LASSO estimator [Zou, H., 2006. The adaptive LASSO and its oracle properties. J. Amer. Statist. Assoc. 101, 1418–1429] in finite samples as well as in the large-sample limit. The large-sample distributions are derived both for the case where the adaptive LASSO estimator is tuned to perform conservative model selection as well as for the case where the tuning results in consistent model selection. We show that the finite-sample as well as the large-sample distributions are typically highly nonnormal, regardless of the choice of the tuning parameter. The uniform convergence rate is also obtained, and is shown to be slower than n^{-1/2} in case the estimator is tuned to perform consistent model selection. In particular, these results question the statistical relevance of the ‘oracle’ property of the adaptive LASSO estimator established in Zou [2006. The adaptive LASSO and its oracle properties. J. Amer. Statist. Assoc. 101, 1418–1429]. Moreover, we also provide an impossibility result regarding the estimation of the distribution function of the adaptive LASSO estimator. The theoretical results, which are obtained for a regression model with orthogonal design, are complemented by a Monte Carlo study using nonorthogonal regressors.
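Under the orthogonal design studied in this paper, the adaptive LASSO has a componentwise closed form: each least-squares coordinate is soft-thresholded with a data-dependent threshold proportional to its inverse magnitude. A minimal sketch of that closed form (the function name and the tuning values `lam` and `gamma` are illustrative, not from the paper):

```python
import numpy as np

def adaptive_lasso_orthogonal(ols, lam, gamma=1.0):
    """Componentwise adaptive-LASSO solution under an orthogonal design.

    Solves min_b 0.5 * (ols_j - b_j)^2 + lam * |b_j| / |ols_j|^gamma
    for each coordinate j, i.e. soft-thresholding of the OLS estimate
    with the adaptive threshold lam / |ols_j|^gamma.
    """
    ols = np.asarray(ols, dtype=float)
    threshold = lam / np.abs(ols) ** gamma
    return np.sign(ols) * np.maximum(np.abs(ols) - threshold, 0.0)

beta = adaptive_lasso_orthogonal([3.0, 0.5, -2.0], lam=1.0)
# Small OLS coefficients face a larger threshold and are set exactly to zero.
```

The data-dependent threshold is what drives the nonnormal finite-sample distributions the abstract describes: the estimator mixes a point mass at zero with a shifted continuous part.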

10.
We consider a continuous-time model for the evolution of social networks. A social network is here conceived as a (di)graph on a set of vertices representing actors, and the changes of interest are the creation and disappearance over time of edges (arcs) in the graph. Hence we model a collection of random edge indicators that are not, in general, independent. We explicitly model the interdependencies between edge indicators that arise from interaction between social entities. A Markov chain is defined in terms of an embedded chain with holding times and transition probabilities. Data are observed at fixed points in time and hence we are not able to observe the embedded chain directly. Introducing a prior distribution for the parameters, we may implement an MCMC algorithm for exploring the posterior distribution of the parameters by simulating the evolution of the embedded process between observations.

11.
When combining estimates of a common parameter (of dimension d ≥ 1) from independent data sets—as in stratified analyses and meta-analyses—a weighted average, with weights ‘proportional’ to inverse variance matrices, is shown to have a minimal variance matrix (a standard fact when d = 1)—minimal in the sense that all convex combinations of the coordinates of the combined estimate have minimal variances. Minimum variance for the estimation of a single coordinate of the parameter can therefore be achieved by joint estimation of all coordinates using matrix weights. Moreover, if each estimate is asymptotically efficient within its own data set, then this optimally weighted average, with consistently estimated weights, is shown to be asymptotically efficient in the combined data set and avoids the need to merge the data sets and estimate the parameter in question afresh. This is so whatever additional non-common nuisance parameters may be in the models for the various data sets. A special case of this appeared in Fisher [1925. Theory of statistical estimation. Proc. Cambridge Philos. Soc. 22, 700–725.]: Optimal weights are ‘proportional’ to information matrices, and he argued that sample information should be used as weights rather than expected information, to maintain second-order efficiency of maximum likelihood. A number of special cases have appeared in the literature; we review several of them and give additional special cases, including stratified regression analysis—proportional-hazards, logistic or linear—, combination of independent ROC curves, and meta-analysis. A test for homogeneity of the parameter across the data sets is also given.
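The matrix-weighted average described in this abstract has a simple explicit form: with independent estimates θ̂_i and covariance matrices V_i, the weights are W_i = V_i^{-1}, the combined estimate is (Σ W_i)^{-1} Σ W_i θ̂_i, and its covariance is (Σ W_i)^{-1}. A minimal numpy sketch of this standard construction (function name is ours, not from the paper):

```python
import numpy as np

def combine_estimates(thetas, covs):
    """Inverse-variance matrix-weighted average of independent estimates.

    thetas: list of d-vectors; covs: list of d x d covariance matrices.
    Returns the combined estimate and its covariance matrix.
    """
    weights = [np.linalg.inv(V) for V in covs]    # W_i = V_i^{-1}
    total_info = sum(weights)                     # sum of information matrices
    combined_cov = np.linalg.inv(total_info)      # (sum W_i)^{-1}
    combined = combined_cov @ sum(
        W @ np.asarray(t) for W, t in zip(weights, thetas)
    )
    return combined, combined_cov

est, cov = combine_estimates(
    [np.array([1.0, 0.0]), np.array([3.0, 2.0])],
    [np.eye(2), np.eye(2)],
)
# Equal covariance matrices reduce the formula to the simple average.
```

When d = 1 this collapses to the familiar scalar inverse-variance weighting used in meta-analysis.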

12.
A unified approach of parameter-estimation and goodness-of-fit testing is proposed. The new procedures may be applied to arbitrary laws with continuous distribution function. Specifically, both the method of estimation and the goodness-of-fit test are based on the idea of optimally transforming the original data to the uniform distribution, the criterion of optimality being an L2-type distance between the empirical characteristic function of the transformed data, and the characteristic function of the uniform (0,1) distribution. Theoretical properties of the new estimators and tests are studied and some connections with classical statistics, moment-based procedures and non-parametric methods are investigated. Comparison with standard procedures via Monte Carlo is also included, along with a real-data application.

13.
Lu Lin, Statistical Papers, 2004, 45(4): 529–544
The quasi-score function, as defined by Wedderburn (1974), McCullagh (1983) and others, is a linear function of the observations. The generalized quasi-score function introduced in this paper is a linear function of some unbiased basis functions, where the unbiased basis functions need not be linear in the observations and can be constructed easily from the interpretation of the parameters, such as the mean or the median. The generalized quasi-likelihood estimate obtained from such a generalized quasi-score function is consistent and asymptotically normal. As a result, the optimum generalized quasi-score is obtained and a method to construct the optimum unbiased basis function is introduced. In order to construct the potential function, a conservative generalized estimating function is defined. When the estimating function is conservative, the potential function for the projected score shares many properties of a log-likelihood function. Finally, some examples are given to illustrate the theoretical results. This work was supported by NNSF of China project (10371059) and the Youth Teacher Foundation of Nankai University.

14.
This paper studies estimation in the proportional odds model based on randomly truncated data. The proposed estimators for the regression coefficients include a class of minimum distance estimators defined through a weighted empirical odds function. We investigate asymptotic properties such as consistency and the limiting distribution of the proposed estimators under mild conditions. Finite-sample properties are investigated through a simulation study comparing several estimators in the class. We conclude with an illustration of the proposed method on a well-known AIDS data set.

15.
The non-central chi-squared distribution plays a vital role in statistical testing procedures. Estimation of the non-centrality parameter provides valuable information for the power calculation of the associated test. We are interested in the statistical inference properties of the non-centrality parameter estimate based on one observation (usually a summary statistic) from a truncated chi-squared distribution. This work is motivated by the application of the flexible two-stage design in case–control studies, where the sample size needed for the second stage of a two-stage study can be determined adaptively by the results of the first stage. We first study the moment estimate for the truncated distribution, prove its existence and uniqueness, and establish its inadmissibility and convergence properties. We then define a new class of estimates that includes the moment estimate as a special case. Among this class of estimates, we recommend one member that outperforms the moment estimate in a wide range of scenarios. We also present two methods for constructing confidence intervals. Simulation studies are conducted to evaluate the performance of the proposed point and interval estimates.
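As background for the moment estimate this abstract studies: for an untruncated non-central chi-squared variable with k degrees of freedom and non-centrality λ, the mean is k + λ, so the naive single-observation moment estimate is λ̂ = max(x − k, 0). A sketch of that untruncated baseline only; the truncation correction developed in the paper is not reproduced here:

```python
def moment_estimate_noncentrality(x, df):
    """Naive moment estimate of the non-centrality parameter from one
    observation x of an untruncated non-central chi-squared with `df`
    degrees of freedom. Since E[X] = df + lam, set lam_hat = x - df,
    floored at zero because the parameter is non-negative.
    """
    return max(x - df, 0.0)

lam_hat = moment_estimate_noncentrality(7.5, df=3)
# An observation below df gives the boundary estimate lam_hat = 0,
# which is one source of the inadmissibility issues the paper analyzes.
```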

16.
Bias reduction estimation for the tail index has been studied in the literature. One method is to reduce bias with an external estimator of the second order regular variation parameter; see Gomes and Martins [2002. Asymptotically unbiased estimators of the tail index based on external estimation of the second order parameter. Extremes 5(1), 5–31]. It is known that a negative extreme value index implies that the underlying distribution has a finite right endpoint. As far as we know, there exists no bias reduction estimator for the endpoint of a distribution. In this paper, we study the bias reduction method with an external estimator of the second order parameter for both the negative extreme value index and the endpoint simultaneously. Surprisingly, we find that this bias reduction method for the negative extreme value index requires a larger order of sample fraction than that for the positive extreme value index. This finding implies that this bias reduction method for the endpoint is less attractive than that for the positive extreme value index. Nevertheless, our simulation study prefers the proposed bias reduction estimators to the biased estimators in Hall [1982. On estimating the endpoint of a distribution. Ann. Statist. 10, 556–568].

17.
In this article we discuss various strategies for constructing bivariate Kumaraswamy distributions. As alternatives to the Nadarajah et al. (2011) bivariate model, four different models are introduced utilizing a conditional specification approach, a conditional survival function approach, and an Arnold–Ng bivariate beta distribution construction approach. Distributional properties for such bivariate distributions are investigated. Parameter estimation strategies for the models are discussed, as are the consequences of fitting two of the models to a particular data set involving the proportion of foggy days at two different airports in Colombia.

18.
The weighted likelihood is a generalization of the likelihood designed to borrow strength from similar populations while making minimal assumptions. If the weights are properly chosen, the maximum weighted likelihood estimate may perform better than the maximum likelihood estimate (MLE). In a previous article, the minimum averaged mean squared error (MAMSE) weights were proposed, and simulations show that they can outperform the MLE in many cases. In this paper, we study the asymptotic properties of the MAMSE weights. In particular, we prove that the MAMSE-weighted mixture of empirical distribution functions converges uniformly to the target distribution and that the maximum weighted likelihood estimate is strongly consistent. A short simulation illustrates the use of the bootstrap in this context.

19.
R. Van de Ven, N. C. Weber, Statistics, 2013, 47(3-4): 345–352
Upper and lower bounds are obtained for the mean of the negative binomial distribution. These bounds are simple functions of a percentile determined by the shape parameter. The result is then used to obtain a robust estimate of the mean when the shape parameter is known.

20.
We generalize the factor stochastic volatility (FSV) model of Pitt and Shephard [1999. Time varying covariances: a factor stochastic volatility approach (with discussion). In: Bernardo, J.M., Berger, J.O., Dawid, A.P., Smith, A.F.M. (Eds.), Bayesian Statistics, vol. 6, Oxford University Press, London, pp. 547–570.] and Aguilar and West [2000. Bayesian dynamic factor models and variance matrix discounting for portfolio allocation. J. Business Econom. Statist. 18, 338–357.] in two important directions. First, we make the FSV model more flexible and able to capture more general time-varying variance–covariance structures by letting the matrix of factor loadings be time-dependent. Second, we entertain FSV models with jumps in the common factors' volatilities through So, Lam and Li's [1998. A stochastic volatility model with Markov switching. J. Business Econom. Statist. 16, 244–253.] Markov switching stochastic volatility model. Novel Markov chain Monte Carlo algorithms are derived for both classes of models. We apply our methodology to two illustrative situations: daily exchange rate returns [Aguilar, O., West, M., 2000. Bayesian dynamic factor models and variance matrix discounting for portfolio allocation. J. Business Econom. Statist. 18, 338–357.] and Latin American stock returns [Lopes, H.F., Migon, H.S., 2002. Comovements and contagion in emergent markets: stock indexes volatilities. In: Gatsonis, C., Kass, R.E., Carriquiry, A.L., Gelman, A., Verdinelli, I., Pauler, D., Higdon, D. (Eds.), Case Studies in Bayesian Statistics, vol. 6, pp. 287–302].
