Similar Literature
18 similar documents retrieved (search time: 15 ms)
1.
Applying the minimum distance (MD) estimation method to the linear regression model to estimate the regression parameters is a difficult and time-consuming process because of the complexity of its distance function, and hence it is computationally expensive. To deal with this computational cost, this paper proposes a fast algorithm that makes the best use of a coordinate-wise minimization technique to obtain the MD estimator. An R package (KoulMde) based on the proposed algorithm and written in Rcpp is available online.
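The coordinate-wise idea can be illustrated with a short sketch. This is not the KoulMde implementation: a smooth pseudo-Huber criterion stands in for Koul's MD distance function, and the data are simulated.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + rng.normal(size=n)

def objective(theta):
    # smooth robust criterion standing in for the MD distance function
    r = y - X @ theta
    return np.sum(np.sqrt(1.0 + r ** 2) - 1.0)

def coordinate_descent(theta0, n_sweeps=30):
    # minimize over one coordinate at a time, holding the others fixed
    theta = theta0.copy()
    for _ in range(n_sweeps):
        for j in range(len(theta)):
            def f(t, j=j):
                th = theta.copy()
                th[j] = t
                return objective(th)
            theta[j] = minimize_scalar(
                f, bounds=(theta[j] - 5.0, theta[j] + 5.0), method="bounded"
            ).x
    return theta

theta_hat = coordinate_descent(np.zeros(p))
```

Each one-dimensional subproblem is cheap to solve, which is the source of the speed-up the coordinate-wise approach exploits.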

2.
3.
We consider a continuous-time random process with a functional autoregressive representation. We state statistical results on a mean functional estimator determining a minimum distance estimator of the period, giving consistency and a limit law stated in Mourid and Benyelles [Mourid, T. and Benyelles, W. (2002). Estimation de la période de la représentation autorégressive d'un processus à temps continu. Annales de l'I.S.U.P., XXXXVI(1-2), 89-101]. We then discuss their performance on numerical simulations and on real data analyzing the cycle of a climatic phenomenon.

4.
We study estimators of the parameter θ ∈ R^p of a linear regression model of the type Y = Xθ + ε, where X is the design matrix, Y the vector of the response variable and ε the random error vector that follows an AR(1) correlation structure. These estimators are analyzed asymptotically by proving their strong consistency, asymptotic normality and asymptotic efficiency. In a simulation study, the mean squared error of the proposed estimator behaves better than that of the generalized least squares estimators. Received: November 16, 1998; revised version: May 10, 2000

5.
A robust procedure is developed for testing the equality of means in the two-sample normal model. It is based on the weighted likelihood estimators of Basu et al. (1993). When the normal model is true, the proposed tests have the same asymptotic power as the two-sample Student's t-statistic in the equal-variance case. However, when the normality assumptions are only approximately true, the proposed tests can be substantially more powerful than the classical tests. In a Monte Carlo study for the equal-variance case under various outlier models, the proposed test using the Hellinger-distance-based weighted likelihood estimator compared favorably with the classical test as well as with the robust test proposed by Tiku (1980).

6.
We consider the problem of efficiently estimating multivariate densities and their modes for moderate dimensions and an abundance of data. We propose polynomial histograms to solve this estimation problem and present first- and second-order polynomial histogram estimators for a general d-dimensional setting. Our theoretical results include the pointwise bias and variance of these estimators, their asymptotic mean integrated squared error (AMISE), and the optimal binwidth. The asymptotic performance of the first-order estimator matches that of the kernel density estimator, while the second-order estimator has the faster rate of O(n^(-6/(d+6))). For a bivariate normal setting, we present explicit expressions for the AMISE constants, which show the much larger binwidths of the second-order estimator and hence also more efficient computation of multivariate densities. We apply polynomial histogram estimators to real data from biotechnology and find the number and location of modes in such data.
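A one-dimensional sketch of the first-order idea (the paper's setting is d-dimensional; the bin width and data here are illustrative): within each bin the density is approximated by a linear function whose integral matches the bin's relative frequency and whose first moment matches the bin's empirical mean.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=1000)
h = 0.5
edges = np.arange(-4.0, 4.0 + h, h)

def poly_hist1(x, data, edges, h):
    # first-order polynomial histogram evaluated at a scalar x
    n = data.size
    j = int(np.clip(np.searchsorted(edges, x, side="right") - 1,
                    0, len(edges) - 2))
    c = edges[j] + h / 2.0                     # bin centre
    in_bin = (data >= edges[j]) & (data < edges[j + 1])
    N = in_bin.sum()
    if N == 0:
        return 0.0
    a = N / (n * h)                            # matches the bin frequency
    # slope chosen so the bin's first moment matches the empirical mean
    b = 12.0 * N * (data[in_bin].mean() - c) / (n * h ** 3)
    return a + b * (x - c)
```

The slope term adds no mass to a bin, so the estimator still integrates to (approximately) one over the binned range, while the within-bin tilt reduces the bias relative to an ordinary histogram.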

7.

In this paper two probability distributions are analyzed, formed by compounding the inverse Weibull distribution with the zero-truncated Poisson and geometric distributions. The distributions can be used to model the lifetime of a series system where the component lifetimes follow the inverse Weibull distribution and the subgroup size, being random, follows either the geometric or the zero-truncated Poisson distribution. Some important statistical and reliability properties of each distribution are derived. The distributions are found to exhibit both monotone and non-monotone failure rates. The parameters of the distributions are estimated using the expectation-maximization algorithm and the method of minimum distance estimation. The potential of the distributions is explored through three real-life data sets and compared with similar compounded distributions, viz. the Weibull-geometric, Weibull-Poisson, exponential-geometric and exponential-Poisson distributions.
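The compounding step for the geometric case can be sketched as follows. The parameter values are illustrative, and the Monte Carlo check is only a sanity test of the closed-form survival function of the series system.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha_, beta_, p = 1.0, 2.0, 0.4   # illustrative parameter values

def iw_cdf(t):
    # inverse Weibull CDF: F(t) = exp(-alpha * t**(-beta)), t > 0
    return np.exp(-alpha_ * t ** (-beta_))

def series_survival(t):
    # series system lifetime Y = min(X_1, ..., X_N) with
    # N ~ geometric on {1, 2, ...}, P(N = n) = (1 - p) * p**(n - 1):
    # P(Y > t) = E[(1 - F(t))**N] = (1 - p) * s / (1 - p * s),  s = 1 - F(t)
    s = 1.0 - iw_cdf(t)
    return (1 - p) * s / (1 - p * s)

# Monte Carlo check of the identity at t = 1: draw N, then the minimum
# of N inverse Weibull variates via inverse-transform sampling
N = rng.geometric(1 - p, size=20000)
mins = np.array([
    ((alpha_ / -np.log(rng.random(n))) ** (1 / beta_)).min() for n in N
])
emp = (mins > 1.0).mean()
```

The zero-truncated Poisson case replaces the geometric probability generating function E[s^N] with its zero-truncated Poisson counterpart.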

8.
The uniformly minimum variance unbiased estimator (UMVUE) of reliability in the stress-strength model (known stress) is obtained for a multicomponent survival model based on exponential distributions for a parallel system. The variance of this estimator is compared with the Cramér-Rao lower bound (CRB) for the variance of an unbiased estimator of reliability, and with the mean square error (MSE) of the maximum likelihood estimator of reliability, in the case of a two-component system.

9.
A class of minimum-distance methods based on empirical transforms is considered. This class includes the minimum chi-squared method, the K-L method for empirical characteristic functions, and the analogous method for empirical moment generating functions. Asymptotic properties of the minimum-distance estimators and goodness-of-fit test statistics are derived, and a general analogue of the Rao-Robson statistic is formulated.
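A minimal sketch of the empirical-characteristic-function variant, fitting a normal model by minimizing a discretized squared distance over a grid of transform arguments (the grid, data and unweighted criterion are illustrative; the methods in the paper are more general):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.normal(loc=2.0, scale=1.5, size=500)
t_grid = np.linspace(0.1, 2.0, 20)     # transform arguments (illustrative)

def ecf(t, data):
    # empirical characteristic function at the points in t
    return np.mean(np.exp(1j * t[:, None] * data[None, :]), axis=1)

def distance(params):
    # squared distance between empirical and model characteristic functions
    mu, sigma = params
    model_cf = np.exp(1j * t_grid * mu - 0.5 * (sigma * t_grid) ** 2)
    return np.sum(np.abs(ecf(t_grid, x) - model_cf) ** 2)

res = minimize(distance, x0=[0.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x
```

Replacing `exp(1j * t * x)` with `exp(t * x)` gives the analogous moment generating function method, at the cost of requiring the MGF to exist.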

10.
The paper shows that many estimation methods, including ML, moments, even-points, empirical c.f. and minimum chi-square, can be regarded as scoring procedures using weighted sums of the discrepancies between observed and expected frequencies. The nature of the weights is investigated for many classes of distributions; the study of approximations to the weights clarifies the relationships between estimation methods, and also leads to useful formulae for initial values for ML iteration.

11.
A completely nonparametric approach to population bioequivalence in crossover trials was suggested by Munk and Czado (1999). It is based on the Mallows (1972) metric as a nonparametric distance measure, which allows comparison of the entire distribution functions of the test and reference formulations. It was shown that a separation between carry-over and period effects is not possible in the nonparametric setting; however, when carry-over effects can be excluded, treatment effects can be assessed whether period effects are present or not. Munk and Czado (1999) proved bootstrap limit laws for the corresponding test statistics because estimation of the limiting variance of the test statistic is very cumbersome. The purpose of this paper is to investigate the small-sample behavior of various bootstrap methods and to compare it with the asymptotic test obtained by estimating the limiting variance. The percentile (PC) and bias-corrected and accelerated (BCa) bootstraps were compared for multivariate normal and nonnormal populations. From the simulation results presented, the BCa bootstrap is found to be less conservative and to provide higher power than the PC bootstrap, especially when skewed multivariate populations are present.
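The percentile (PC) method can be sketched in a few lines; simulated two-group data and a difference of means stand in for the crossover-trial statistic, whose Mallows-metric form is more involved.

```python
import numpy as np

rng = np.random.default_rng(3)
test_grp = rng.normal(0.0, 1.0, size=30)   # illustrative data
ref_grp = rng.normal(0.1, 1.0, size=30)

def percentile_ci(a, b, stat, n_boot=2000, alpha=0.05):
    # PC bootstrap: resample each group with replacement and take
    # empirical quantiles of the bootstrap statistics
    stats = np.empty(n_boot)
    for i in range(n_boot):
        stats[i] = stat(rng.choice(a, size=a.size, replace=True),
                        rng.choice(b, size=b.size, replace=True))
    return np.quantile(stats, [alpha / 2.0, 1.0 - alpha / 2.0])

lo, hi = percentile_ci(test_grp, ref_grp, lambda u, v: u.mean() - v.mean())
```

The BCa variant adjusts which quantiles are read off, using a bias-correction constant and an acceleration constant estimated from the data, which is what drives the power differences reported above.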

12.
Econometric Reviews, 2013, 32(4): 293-323

This paper studies the efficient estimation of seemingly unrelated linear models with integrated regressors and stationary errors. We consider two cases. The first has no common regressor among the equations. In this case, we show that by adding leads and lags of the first differences of the regressors and estimating this augmented dynamic regression model by generalized least squares using the long-run covariance matrix, we obtain an efficient estimator of the cointegrating vector that has a limiting mixed normal distribution. In the second case, there is a regressor common to all equations, and we discuss efficient minimum distance estimation in this context. Simulation results suggest that our new estimator compares favorably with others already proposed in the literature. We apply these new estimators to testing the proportionality and symmetry conditions implied by purchasing power parity (PPP) among the G-7 countries. The tests based on the efficient estimates easily reject the joint hypothesis of proportionality and symmetry for all countries with either the United States or Germany as numeraire. Based on individual tests, our results suggest that Canada and Germany are the most likely countries for which the proportionality condition holds, and Italy and Japan for the symmetry condition, relative to the United States.
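The leads-and-lags augmentation for the first case can be sketched in a single-equation setting (plain OLS replaces the paper's GLS step with the long-run covariance matrix, and the data-generating process is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
T, K = 300, 2                               # sample size; K leads and lags
x = np.cumsum(rng.normal(size=T))           # integrated (I(1)) regressor
y = 2.0 * x + rng.normal(size=T)            # cointegrated, true beta = 2
dx = np.diff(x)                             # dx[i] = x[i+1] - x[i]

# augmented regressor matrix: [x_t, dx_{t-K}, ..., dx_{t+K}]
# for the usable range of t (note dx_{t+j} = dx[t + j - 1])
ts = np.arange(K + 1, T - K)
Z = np.column_stack([x[ts]] +
                    [dx[ts + j - 1] for j in range(-K, K + 1)])
beta_hat = np.linalg.lstsq(Z, y[ts], rcond=None)[0][0]
```

The leads and lags of the first differences soak up the correlation between the regressor's innovations and the errors, which is what delivers the mixed normal limit for the cointegrating-vector estimate.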

13.
Except in special cases, optimum smoothing parameters of kernel methods are difficult to obtain for small samples, and large-sample results are often used instead. Simulation is used to obtain finite-sample optimum smoothing parameters and mean integrated squared errors for the bivariate normal density. For this example, finite-sample and asymptotic results are compared, as are fixed and adaptive kernel methods. Further comparisons of fixed and adaptive methods are made by considering four other types of density. Finally, some examples are given.
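The simulation idea can be sketched in one dimension (the paper's example is bivariate normal; the sample size, grid and bandwidth range here are illustrative): estimate the MISE of a fixed-bandwidth kernel estimator by Monte Carlo and pick the bandwidth minimizing it.

```python
import numpy as np

rng = np.random.default_rng(5)
grid = np.linspace(-4.0, 4.0, 201)          # evaluation grid, step 0.04
true_pdf = np.exp(-grid ** 2 / 2.0) / np.sqrt(2.0 * np.pi)

def kde(data, h):
    # fixed-bandwidth Gaussian kernel density estimate on the grid
    z = (grid[:, None] - data[None, :]) / h
    return np.exp(-z ** 2 / 2.0).sum(axis=1) / (
        data.size * h * np.sqrt(2.0 * np.pi))

def mise(h, n=50, n_rep=100):
    # Monte Carlo estimate of the mean integrated squared error
    ises = [np.sum((kde(rng.normal(size=n), h) - true_pdf) ** 2) * 0.04
            for _ in range(n_rep)]
    return float(np.mean(ises))

hs = np.round(np.arange(0.2, 1.21, 0.1), 2)
h_opt = hs[int(np.argmin([mise(h) for h in hs]))]
```

For this standard normal example the finite-sample optimum should sit near the asymptotic normal-reference value of roughly 1.06 n^(-1/5).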

14.
The authors consider the problem of estimating a regression function g0 of several variables by the element of a prescribed class G that is closest to it in the L1 norm. They propose a new estimator ĝ based on independent observations and give explicit finite-sample bounds for the L1 distance between ĝ and g0. They apply their estimation procedure to the problem of selecting the smoothing parameter in nonparametric regression.

15.
16.
Liu Can, Wu Yin. Statistical Research (《统计研究》), 2008, 25(5): 61-64
"Quadrupling per-capita GDP by 2020 relative to 2000" is a new formulation in the report of the 17th National Congress, clearly setting out the basic development goal China should reach during its period of important strategic opportunity. Taking into account changes in the time series of the natural population growth rate, this paper uses the moving-average method and a mathematical decomposition of growth rates to produce interval estimates of the average annual growth rates of per-capita GDP and real GDP required to reach this goal. The estimates indicate that only by continuing to strictly enforce the family-planning policy, keeping economic growth in each period under control, ensuring that macro-control policies are implemented, and completing the supporting tasks set out by the 17th Congress can the goal of quadrupling per-capita GDP be fully realized within the framework of the Scientific Outlook on Development.
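The arithmetic behind the quadrupling target ("翻两番", i.e. doubling twice) is straightforward to sketch; the population growth figure below is purely illustrative, not the paper's estimate.

```python
# per-capita GDP must quadruple over the 20 years from 2000 to 2020,
# so the required average annual growth rate is 4**(1/20) - 1
g_per_capita = 4.0 ** (1.0 / 20.0) - 1.0     # about 7.18% per year

# with an assumed (hypothetical) average natural population growth
# rate of 0.6%, the implied real-GDP growth rate compounds the two:
pop_growth = 0.006
g_real_gdp = (1.0 + g_per_capita) * (1.0 + pop_growth) - 1.0
```

This is why the paper's interval estimates for real-GDP growth sit above the per-capita requirement: any positive population growth raises the total-GDP rate needed.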

17.
Applied statisticians and pharmaceutical researchers are frequently involved in the design and analysis of clinical trials where at least one of the outcomes is binary. Treatments are judged by the probability of a positive binary response. A typical example is the noninferiority trial, where it is tested whether a new experimental treatment is practically not inferior to an active comparator with a prespecified margin δ. Except for the special case of δ = 0, no exact conditional test is available, although approximate conditional methods (also called second-order methods) can be applied. However, in some situations the approximation can be poor, and the logical argument for approximate conditioning is not compelling. The alternative is to consider an unconditional approach. Standard methods like the pooled z-test are already unconditional, although approximate. In this article, we review and illustrate unconditional methods with a heavy emphasis on modern methods that can deliver exact, or near-exact, results. For noninferiority trials based on either the rate difference or the rate ratio, our recommendation is to use the so-called E-procedure, based on either the score or likelihood ratio statistic. This test is effectively exact, computationally efficient, and respects monotonicity constraints in practice. We support our assertions with a numerical study, and we illustrate the concepts developed in theory with a clinical example in pulmonary oncology; R code to conduct all these analyses is available from the authors.
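Of the standard approximate unconditional methods mentioned, the pooled z-test for a rate-difference noninferiority margin can be sketched as follows. This is the simple pooled-variance version, not the recommended E-procedure, and the counts are illustrative.

```python
from math import sqrt
from scipy.stats import norm

def pooled_z_noninferiority(x_new, n_new, x_ref, n_ref, delta):
    # H0: p_new - p_ref <= -delta  versus  H1: p_new - p_ref > -delta
    p_new, p_ref = x_new / n_new, x_ref / n_ref
    p_pool = (x_new + x_ref) / (n_new + n_ref)
    se = sqrt(p_pool * (1.0 - p_pool) * (1.0 / n_new + 1.0 / n_ref))
    z = (p_new - p_ref + delta) / se
    return z, 1.0 - norm.cdf(z)              # one-sided p-value

z, pval = pooled_z_noninferiority(80, 100, 85, 100, delta=0.10)
```

A stricter treatment estimates the standard error under the restricted null (p_new - p_ref = -delta), which is the direction the score-based E-procedure takes.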

18.

Ordinal data are often modeled using a continuous latent response distribution, which is partially observed through windows of adjacent intervals defined by cutpoints. In this paper we propose the beta distribution as a model for the latent response. The beta distribution has several advantages over the other commonly used distributions, e.g., the normal and logistic. In particular, it enables separate modeling of location and dispersion effects, which is essential in the Taguchi method of robust design. First, we study the problem of estimating the location and dispersion parameters of a single beta distribution (representing a single treatment) from ordinal data, assuming known equispaced cutpoints. Two methods of estimation are compared: the maximum likelihood method and the method of moments. Two methods of treating the data are considered: in raw discrete form and in smoothed "continuousized" form. A large-scale simulation study is carried out to compare the different methods. The mean square errors of the estimates are obtained under a variety of parameter configurations, and comparisons are made based on the ratios of the mean square errors (the relative efficiencies). No method is universally the best, but the maximum likelihood method using continuousized data is found to perform generally well, especially for estimating the dispersion parameter. This method is also computationally much faster than the other methods and does not experience convergence difficulties in the case of sparse or empty cells. Next, the problem of estimating unknown cutpoints is addressed. Here the multiple-treatments setup is considered since, in an actual application, cutpoints are common to all treatments and must be estimated from all the data. A two-step iterative algorithm is proposed for estimating the location and dispersion parameters of the treatments, and the cutpoints.
The proposed beta model and McCullagh's (1980) proportional odds model are compared by fitting them to two real data sets.
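The latent-beta construction can be sketched directly: the probability of each ordinal category is the beta mass between its cutpoints. The parameter values, category count and equispaced cutpoints below are illustrative.

```python
import numpy as np
from scipy.stats import beta

a, b = 2.0, 5.0                            # illustrative shape parameters
k = 5                                      # number of ordinal categories
cutpoints = np.linspace(0.0, 1.0, k + 1)   # known equispaced cutpoints

# cell probability for category j = P(cut_j < latent response <= cut_{j+1})
cell_probs = np.diff(beta.cdf(cutpoints, a, b))
```

Fitting then amounts to matching these cell probabilities to the observed ordinal frequencies, by maximum likelihood or the method of moments as compared in the paper.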
