Similar Literature
20 similar documents found.
1.
Because spatial regression models incorporate spatial geographic information, their parameter estimation becomes complicated. Since maximum likelihood is the dominant approach, it is commonly assumed that least squares plays no role in estimating spatial regression models. An analysis of the estimation techniques for these models shows, however, that least squares and maximum likelihood are used to estimate different parameters of the model, and only by combining the two can the full set of parameters be estimated quickly and effectively. Mathematical derivations show that the least squares estimator of the regression parameters in a spatial regression model is the best linear unbiased estimator. The regression parameters can be tested for significance under normality of their estimators, whereas the spatial effect parameter cannot be tested in this way.
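A minimal sketch of this "combine the two" idea, assuming a first-order spatial lag (SAR) model with a known, row-standardized weight matrix W: for each candidate spatial parameter, the regression coefficients come from ordinary least squares on the filtered response, while the spatial parameter itself is chosen by maximizing the concentrated log-likelihood. The function name and grid search are illustrative, not the paper's procedure.

```python
import numpy as np

def fit_sar_profile(y, X, W, rho_grid=np.linspace(-0.9, 0.9, 181)):
    """Profile-likelihood estimation of y = rho*W y + X beta + eps (sketch).

    For each candidate rho, beta and sigma^2 follow from ordinary least
    squares on the spatially filtered response (I - rho*W) y; rho itself
    is chosen by maximizing the concentrated log-likelihood.
    """
    n = len(y)
    I = np.eye(n)
    best = None
    for rho in rho_grid:
        z = (I - rho * W) @ y                          # filtered response
        beta, *_ = np.linalg.lstsq(X, z, rcond=None)   # least squares step
        resid = z - X @ beta
        sigma2 = resid @ resid / n
        # concentrated log-likelihood: log|I - rho W| - (n/2) log sigma^2
        sign, logdet = np.linalg.slogdet(I - rho * W)
        ll = logdet - 0.5 * n * np.log(sigma2)
        if best is None or ll > best[0]:
            best = (ll, rho, beta, sigma2)
    return {"rho": best[1], "beta": best[2], "sigma2": best[3]}
```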

2.
This work compares various hypothesis testing procedures in the case of familial clustered data. Specifically, we use likelihood ratio and Wald's tests for maximum likelihood estimators, and Wald-type tests for moment and quasi-least squares estimators. Using simulations, we estimate significance levels for various hypotheses concerning the one-parent auto-regressive and two-parent equi-correlated dependence structures. We show that the likelihood ratio test performs best for certain simple hypotheses in the one-parent case, whereas the Wald-type test for the quasi-least squares procedure is optimal in the more complex two-parent case.

3.
Integer-valued time series models and their applications have attracted a lot of attention in recent years. In this paper, we introduce a class of observation-driven random coefficient integer-valued autoregressive processes based on negative binomial thinning, in which the autoregressive parameter depends on the observed value at the previous time point. Basic probabilistic and statistical properties of the process are established. The unknown parameters are estimated by conditional least squares and empirical likelihood methods. Specifically, we consider three aspects of the empirical likelihood method: the maximum empirical likelihood estimate, confidence regions and the EL test. The performance of the two estimation methods is compared through simulation studies. Finally, an application to a real data example is provided.
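To illustrate the conditional least squares step in the simplest setting, the sketch below uses the classical binomial-thinning Poisson INAR(1) with a constant coefficient, since CLS only needs the conditional mean E[X_t | X_{t-1}] = α X_{t-1} + λ; the paper's negative-binomial-thinning, random-coefficient model would use the corresponding conditional mean instead. Names and the toy simulation are illustrative.

```python
import numpy as np

def cls_inar1(x):
    """Conditional least squares for a first-order integer-valued AR model.

    Assumes the conditional mean E[X_t | X_{t-1}] = alpha * X_{t-1} + lam,
    so CLS reduces to an ordinary regression of X_t on X_{t-1}.
    """
    x = np.asarray(x, dtype=float)
    y, xlag = x[1:], x[:-1]
    A = np.column_stack([xlag, np.ones_like(xlag)])
    (alpha, lam), *_ = np.linalg.lstsq(A, y, rcond=None)
    return alpha, lam

# toy usage: simulate a Poisson INAR(1) and recover the parameters
rng = np.random.default_rng(0)
alpha_true, lam_true, n = 0.5, 2.0, 2000
x = np.empty(n, dtype=int)
x[0] = 4
for t in range(1, n):
    x[t] = rng.binomial(x[t - 1], alpha_true) + rng.poisson(lam_true)
print(cls_inar1(x))   # roughly (0.5, 2.0)
```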

4.
In this paper, we consider the problem of estimation of semi-linear regression models. Using invariance arguments, Bhowmik and King [2007. Maximal invariant likelihood based testing of semi-linear models. Statist. Papers 48, 357–383] derived the probability density function of the maximal invariant statistic for the non-linear component of these models. Using this density function as a likelihood function allows us to estimate these models in a two-step process. First, the non-linear component parameters are estimated by maximising the maximal invariant likelihood function. Then the non-linear component, with the parameter values replaced by estimates, is treated as a regressor and ordinary least squares is used to estimate the remaining parameters. We report the results of a simulation study conducted to compare the accuracy of this approach with full maximum likelihood and maximum profile-marginal likelihood estimation. We find that maximising the maximal invariant likelihood function typically results in less biased and lower-variance estimates than those from full maximum likelihood.

5.
For linear regression models with non-normally distributed errors, the least squares estimate (LSE) loses some efficiency compared to the maximum likelihood estimate (MLE). In this article, we propose a kernel density-based regression estimate (KDRE) that is adaptive to the unknown error distribution. The key idea is to approximate the likelihood function by using a nonparametric kernel density estimate of the error density based on some initial parameter estimate. The proposed estimate is shown to be asymptotically as efficient as the oracle MLE, which assumes the error density is known. In addition, we propose an EM-type algorithm to maximize the estimated likelihood function and show that the KDRE can be viewed as an iterated weighted least squares estimate, which provides some insight into the adaptiveness of the KDRE to the unknown error distribution. Our Monte Carlo simulation studies show that, while comparable to the traditional LSE for normal errors, the proposed estimation procedure can yield substantial efficiency gains for non-normal errors. Moreover, the efficiency gain can be achieved even for a small sample size.
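A rough sketch of the kernel-density-based idea: estimate the error density from the current residuals with a Gaussian kernel, then push the coefficients uphill on the resulting estimated log-likelihood. The fixed-step gradient ascent and bandwidth rule below are crude stand-ins for the EM-type / iterated weighted least squares algorithm in the abstract, not the authors' exact procedure.

```python
import numpy as np

def kdre(y, X, n_iter=50, step=0.1, h=None):
    """Kernel-density-based regression estimate (illustrative sketch)."""
    y, X = np.asarray(y, float), np.asarray(X, float)
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)          # initial OLS fit
    for _ in range(n_iter):
        r = y - X @ beta
        bw = h or 1.06 * r.std() * n ** (-1 / 5)           # Silverman's rule of thumb
        diff = r[:, None] - r[None, :]
        K = np.exp(-0.5 * (diff / bw) ** 2)
        f = K.mean(axis=1) / (bw * np.sqrt(2 * np.pi))     # kernel estimate f_hat(r_i)
        fp = (K * (-diff / bw ** 2)).mean(axis=1) / (bw * np.sqrt(2 * np.pi))  # f_hat'(r_i)
        grad = -X.T @ (fp / f) / n     # gradient of the average estimated log-likelihood
        beta = beta + step * grad      # small ascent step (fixed step size, illustration only)
    return beta
```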

6.
Several approaches have been suggested for fitting linear regression models to censored data. These include Cox's proportional hazards models based on quasi-likelihoods. Methods of fitting based on least squares and maximum likelihood have also been proposed. The methods proposed so far all require special-purpose optimization routines. We describe an approach here which requires only a modified standard least squares routine.

We present methods for fitting a linear regression model to censored data by least squares and by the method of maximum likelihood. In the least squares method, the censored values are replaced by their expectations, and the residual sum of squares is minimized. Several variants are suggested in the way the expectation is calculated. A parametric approach (assuming a normal error model) and two non-parametric approaches are described. We also present a method for solving the maximum likelihood equations in the estimation of the regression parameters in the censored regression situation. It is shown that the solutions can be obtained by a recursive algorithm which needs only a least squares routine for optimization. The suggested procedures gain considerably in computational efficiency. The Stanford Heart Transplant data are used to illustrate the various methods.
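A minimal sketch of the parametric (normal-error) variant of the least squares method: right-censored responses are replaced by their conditional expectations given the current fit, and ordinary least squares is re-run until the coefficients stabilize. The function name, convergence rule, and iteration count are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def censored_ls(y, X, censored, n_iter=50):
    """Least squares for right-censored regression, parametric variant (sketch).

    `censored` is a boolean array; for censored cases, y holds the censoring time.
    """
    y_work = y.astype(float).copy()
    beta, *_ = np.linalg.lstsq(X, y_work, rcond=None)
    for _ in range(n_iter):
        mu = X @ beta
        resid = y_work[~censored] - mu[~censored]
        sigma = resid.std(ddof=X.shape[1])
        z = (y[censored] - mu[censored]) / sigma       # standardized censoring times
        hazard = norm.pdf(z) / norm.sf(z)              # inverse Mills ratio
        y_work[censored] = mu[censored] + sigma * hazard   # E[Y | Y > c] under normality
        beta_new, *_ = np.linalg.lstsq(X, y_work, rcond=None)
        if np.allclose(beta_new, beta, atol=1e-8):
            break
        beta = beta_new
    return beta
```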

7.
The spectral analysis of Gaussian linear time-series processes is usually based on uni-frequential tools because the spectral density functions of degree 2 and higher are identically zero and there is no polyspectrum in this case. In finite samples, such an approach does not allow the resolution of closely adjacent spectral lines, except by using autoregressive models of excessively high order in the method of maximum entropy. In this article, multi-frequential periodograms designed for the analysis of discrete and mixed spectra are defined and studied for their properties in finite samples. For a given vector of frequencies ω, the sum of squares of the corresponding trigonometric regression model fitted to a time series by unweighted least squares defines the multi-frequential periodogram statistic IM(ω). When ω is unknown, it follows from the properties of nonlinear models whose parameters separate (i.e., the frequencies and the cosine and sine coefficients here) that the least-squares estimator of frequencies is obtained by maximizing IM(ω). The first-order, second-order and distribution properties of IM(ω) are established theoretically in finite samples, and are compared with those of Schuster's uni-frequential periodogram statistic. In the multi-frequential periodogram analysis, the least-squares estimator of frequencies is proved to be theoretically unbiased in finite samples if the number of periodic components of the time series is correctly estimated. Here, this number is estimated at the end of a stepwise procedure based on pseudo-F likelihood ratio tests. Simulations are used to compare the stepwise procedure involving IM(ω) with a stepwise procedure using Schuster's periodogram, to study an approximation of the asymptotic theory for the frequency estimators in finite samples in relation to the proximity and signal-to-noise ratio of the periodic components, and to assess the robustness of IM(ω) against autocorrelation in the analysis of mixed spectra. Overall, the results show an improvement of the new method over the classical approach when spectral lines are adjacent. Finally, three examples with real data illustrate specific aspects of the method, and extensions (i.e., unequally spaced observations, trend modeling, replicated time series, periodogram matrices) are outlined.
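A sketch of the statistic as defined above: IM(ω) is the regression sum of squares of the trigonometric regression at the candidate frequencies, fitted by unweighted least squares, and the least-squares frequency estimator maximizes it (here over a simple grid for a single frequency). Function names and the grid are illustrative.

```python
import numpy as np

def multifreq_periodogram(x, freqs):
    """Multi-frequential periodogram statistic IM(omega) (illustrative sketch).

    Fit x_t = mu + sum_k [a_k cos(2*pi*f_k*t) + b_k sin(2*pi*f_k*t)] + e_t
    by unweighted least squares and return the regression sum of squares.
    """
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x))
    cols = [np.ones_like(t, dtype=float)]
    for f in freqs:
        cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
    Z = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(Z, x, rcond=None)
    fitted = Z @ coef
    return np.sum((fitted - x.mean()) ** 2)            # regression sum of squares

def ls_frequency(x, grid=np.linspace(0.01, 0.49, 2000)):
    """Least-squares frequency estimate: maximize IM over a fine grid (one frequency)."""
    return grid[np.argmax([multifreq_periodogram(x, [f]) for f in grid])]
```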

8.
Estimation in conditional first order autoregression with discrete support
We consider estimation in the class of first order conditional linear autoregressive models with discrete support that are routinely used to model time series of counts. Various groups of estimators proposed in the literature are discussed: moment-based estimators; regression-based estimators; and likelihood-based estimators. Some of these have been used previously and others not. In particular, we address the performance of new types of generalized method of moments estimators and propose an exact maximum likelihood procedure valid for a Poisson marginal model using backcasting. The small sample properties of all estimators are comprehensively analyzed using simulation. Three situations are considered using data generated with: a fixed autoregressive parameter and equidispersed Poisson innovations; negative binomial innovations; and, additionally, a random autoregressive coefficient. The first set of experiments indicates that bias correction methods, not hitherto used in this context to our knowledge, are sometimes needed and that likelihood-based estimators, as might be expected, perform well. The latter two scenarios are representative of overdispersion. Methods designed specifically for the Poisson context now perform uniformly badly, but simple, bias-corrected, Yule-Walker and least squares estimators perform well in all cases.
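For concreteness, a sketch of the Yule-Walker estimator for a Poisson INAR(1), with an optional first-order bias correction of the autocorrelation estimate; the specific bias-correction methods studied in the paper may differ from the familiar O(1/n) adjustment used here.

```python
import numpy as np

def yw_inar1(x, bias_correct=True):
    """Yule-Walker estimation for a Poisson INAR(1) model (sketch).

    alpha is the lag-1 sample autocorrelation; the innovation mean follows
    from the marginal mean, lambda = (1 - alpha) * mean(x). The optional
    correction is the usual O(1/n) adjustment for AR(1)-type autocorrelation
    estimators, used here only as an illustration.
    """
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), x.mean()
    c0 = np.mean((x - xbar) ** 2)
    c1 = np.mean((x[1:] - xbar) * (x[:-1] - xbar))
    alpha = c1 / c0
    if bias_correct:
        alpha = alpha + (1 + 3 * alpha) / n
    lam = (1 - alpha) * xbar
    return alpha, lam
```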

9.
In this paper we consider inference on parameters in time series regression models. In the traditional inference approach, heteroskedasticity and autocorrelation consistent (HAC) estimation is often used to consistently estimate the asymptotic covariance matrix of the regression parameter estimator. Since the bandwidth parameter in HAC estimation is difficult to choose in practice, there has been a recent surge of interest in developing bandwidth-free inference methods. However, existing simulation studies show that these new methods suffer from severe size distortion in the presence of strong temporal dependence at medium sample sizes. To remedy the problem, we propose to apply prewhitening to the inconsistent long-run variance estimator in these methods to reduce the size distortion. The asymptotic distribution of the prewhitened Wald statistic is obtained and the general effectiveness of prewhitening is shown through simulations.
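As background for the prewhitening idea, here is a sketch of AR(1) prewhitening applied to a Bartlett-kernel long-run variance estimator: fit an AR(1) filter, compute the HAC estimate from the whitened residuals, and "recolour" by dividing by (1 - ρ)². The bandwidth rule and recolouring step are the standard textbook versions, not necessarily the exact construction used in the paper.

```python
import numpy as np

def prewhitened_lrv(u, bandwidth=None):
    """AR(1)-prewhitened long-run variance estimator (illustrative sketch)."""
    u = np.asarray(u, dtype=float)
    u = u - u.mean()
    rho = (u[1:] @ u[:-1]) / (u[:-1] @ u[:-1])        # AR(1) prewhitening coefficient
    e = u[1:] - rho * u[:-1]                          # whitened residuals
    n = len(e)
    M = bandwidth or int(np.floor(4 * (n / 100) ** (2 / 9)))   # Newey-West rule of thumb
    lrv = e @ e / n
    for j in range(1, M + 1):
        w = 1 - j / (M + 1)                           # Bartlett weights
        lrv += 2 * w * (e[j:] @ e[:-j]) / n
    return lrv / (1 - rho) ** 2                       # recolour
```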

10.
We provide methods to robustly estimate the parameters of stationary ergodic short-memory time series models in the potential presence of additive low-frequency contamination. The types of contamination covered include level shifts (changes in mean) and monotone or smooth time trends, both of which have been shown to bias parameter estimates toward regions of persistence in a variety of contexts. The estimators presented here minimize trimmed frequency domain quasi-maximum likelihood (FDQML) objective functions without requiring specification of the low-frequency contaminating component. When proper sample-size-dependent trimmings are used, the FDQML estimators are consistent and asymptotically normal, asymptotically eliminating the presence of any spurious persistence. These asymptotic results also hold in the absence of additive low-frequency contamination, enabling the practitioner to robustly estimate model parameters without prior knowledge of whether contamination is present. Popular time series models that fit into the framework of this article include autoregressive moving average (ARMA), stochastic volatility, generalized autoregressive conditional heteroscedasticity (GARCH), and autoregressive conditional heteroscedasticity (ARCH) models. We explore the finite sample properties of the trimmed FDQML estimators of the parameters of some of these models, providing practical guidance on trimming choice. Empirical estimation results suggest that a large portion of the apparent persistence in certain volatility time series may indeed be spurious. Supplementary materials for this article are available online.
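A sketch of a trimmed frequency-domain QML objective for an AR(1): the lowest frequencies are dropped before evaluating the Whittle likelihood, so that level shifts or smooth trends concentrated near frequency zero do not contaminate the estimate. The trimming rule (n^(1/3)) and grid search are placeholders for the sample-size-dependent choices analysed in the paper.

```python
import numpy as np

def trimmed_fdqml_ar1(x, trim=None, grid=np.linspace(-0.95, 0.95, 191)):
    """Trimmed frequency-domain QML for an AR(1) (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    I = np.abs(np.fft.rfft(x - x.mean())) ** 2 / (2 * np.pi * n)   # periodogram
    lam = 2 * np.pi * np.arange(len(I)) / n                        # Fourier frequencies
    lo = trim if trim is not None else int(np.ceil(n ** (1 / 3)))  # low-frequency trimming
    keep = slice(lo, n // 2)
    best = (np.inf, None, None)
    for phi in grid:
        # AR(1) spectral shape; the innovation variance is profiled out
        g = 1.0 / np.abs(1 - phi * np.exp(-1j * lam[keep])) ** 2
        sigma2 = 2 * np.pi * np.mean(I[keep] / g)
        f = sigma2 * g / (2 * np.pi)
        obj = np.sum(np.log(f) + I[keep] / f)          # trimmed Whittle objective
        if obj < best[0]:
            best = (obj, phi, sigma2)
    return best[1], best[2]
```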

11.
This paper compares three estimators for periodic autoregressive (PAR) models. The first is the classical periodic Yule-Walker estimator (YWE). The second is a robust version of the YWE (RYWE), which uses the robust autocovariance function in the periodic Yule-Walker equations, and the third is the robust least squares estimator (RLSE) based on iterative least squares applied to robust versions of the original time series. Daily mean particulate matter concentration (PM10) data are used to illustrate the methodologies in a real air-quality application.

12.
Real count time series often exhibit underdispersion or overdispersion. In this paper, we develop two extensions of the first-order integer-valued autoregressive process with Poisson innovations, based on binomial thinning, for modeling integer-valued time series with equidispersion, underdispersion, and overdispersion. The main properties of the models are derived. The methods of conditional maximum likelihood, Yule-Walker, and conditional least squares are used for estimating the parameters, and their asymptotic properties are established. We also use a test based on our processes for checking whether the count time series under consideration is overdispersed or underdispersed. The proposed models are fitted to a time series of weekly numbers of syphilis cases and to monthly counts of family violence, illustrating their ability to handle overdispersed and underdispersed count data.

13.
This article examines methods to efficiently estimate the mean response in a linear model with an unknown error distribution under the assumption that the responses are missing at random. We show how the asymptotic variance is affected by the estimator of the regression parameter and by the imputation method. To estimate the regression parameter, ordinary least squares is efficient only if the error distribution happens to be normal. If the errors are not normal, we propose a one-step improvement estimator or a maximum empirical likelihood estimator to estimate the parameter efficiently. To investigate the impact of imputation on the estimation of the mean response, we compare the listwise deletion method and the propensity score method (which do not use imputation at all) with two imputation methods. We demonstrate that listwise deletion and the propensity score method are inefficient. Partial imputation, where only the missing responses are imputed, is compared to full imputation, where both missing and non-missing responses are imputed. Our results reveal that, in general, full imputation is better than partial imputation. However, when the regression parameter is estimated very poorly, partial imputation will outperform full imputation. The efficient estimator for the mean response is the full imputation estimator that utilizes an efficient estimator of the parameter.
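A small sketch contrasting the mean-response estimators discussed above, using ordinary least squares for the regression parameter (the efficient one-step or empirical-likelihood estimators in the paper would take its place); the function name and return structure are illustrative.

```python
import numpy as np

def mean_response_estimates(y, X, observed):
    """Estimators of the mean response with responses missing at random (sketch).

    `observed` is a boolean array; y may hold NaN where the response is missing.
    - listwise deletion: average the observed responses only
    - partial imputation: keep observed responses, impute the missing ones
    - full imputation: replace every response by its fitted value
    """
    beta, *_ = np.linalg.lstsq(X[observed], y[observed], rcond=None)
    fitted = X @ beta
    listwise = y[observed].mean()
    partial = np.where(observed, y, fitted).mean()
    full = fitted.mean()
    return {"listwise": listwise, "partial": partial, "full": full}
```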

14.
This article develops three recursive on-line algorithms, based on a two-stage least squares scheme, for estimating generalized autoregressive conditionally heteroskedastic (GARCH) models. The first, denoted 2S-RLS, is an adaptation of the recursive least squares method for estimating autoregressive conditionally heteroskedastic (ARCH) models. The second and third (denoted 2S-PLR and 2S-RML, respectively) are versions of the pseudolinear regression (PLR) and recursive maximum likelihood (RML) methods adapted to the GARCH case. We show that the proposed algorithms give consistent estimators and that the 2S-RLS and 2S-RML estimators are asymptotically Gaussian. These methods are well suited to the sequential nature of financial time series observed at high frequency. The performance of the algorithms is demonstrated in a simulation study.
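To give a flavour of the on-line idea, here is the generic recursive least squares update applied to the ARCH(1) regression of squared returns; this is not the paper's 2S-RLS/2S-PLR/2S-RML algorithms, just the standard recursion such schemes build on, with illustrative initializations.

```python
import numpy as np

def rls_arch1(y, lam=1.0):
    """Recursive least squares for an ARCH(1) model (illustrative sketch).

    Squared returns follow the regression y_t^2 = omega + alpha*y_{t-1}^2 + v_t,
    so (omega, alpha) can be updated observation by observation.
    """
    theta = np.zeros(2)                     # [omega, alpha]
    P = np.eye(2) * 1e3                     # large initial covariance
    for t in range(1, len(y)):
        z = np.array([1.0, y[t - 1] ** 2])  # regressor
        err = y[t] ** 2 - z @ theta         # prediction error for the squared return
        k = P @ z / (lam + z @ P @ z)       # gain vector
        theta = theta + k * err
        P = (P - np.outer(k, z) @ P) / lam  # covariance update (forgetting factor lam)
    return theta                            # estimated (omega, alpha)
```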

15.
We propose an easy-to-implement method for making small-sample parametric inference about the root of an estimating equation expressible as a quadratic form in normal random variables. It is based on saddlepoint approximations to the distribution of the estimating equation whose unique root is a parameter's maximum likelihood estimator (MLE), with conditional MLEs substituted for the remaining (nuisance) parameters. Monotonicity of the estimating equation in its parameter argument enables us to relate these approximations to those for the estimator of interest. The proposed method is equivalent to a parametric bootstrap percentile approach in which Monte Carlo simulation is replaced by saddlepoint approximation. It finds applications in many areas of statistics including nonlinear regression, time series analysis, inference on ratios of regression parameters in linear models, and calibration. We demonstrate the method in the context of some classical examples from nonlinear regression models and ratio-of-regression-parameters problems. Simulation results show that the proposed method, apart from being generally easier to implement, yields confidence intervals with lengths and coverage probabilities that compare favourably with those obtained from several competing methods proposed in the literature over the past half-century.

16.
We propose and study a class of regression models in which the mean function is specified parametrically, as in existing regression methods, but the residual distribution is modelled non-parametrically by a kernel estimator, without imposing any assumption on its form. This specification differs from existing semiparametric regression models. The asymptotic properties of the resulting likelihood and the maximum likelihood estimate (MLE) under this semiparametric model are studied. We show that under some regularity conditions the MLE under this model is consistent (in contrast to the possible pseudo-consistency of parameter estimates under existing parametric regression models), is asymptotically normal at the parametric rate, and is efficient. The non-parametric pseudo-likelihood ratio has the Wilks property, as the true likelihood ratio does. Simulated examples are presented to evaluate the accuracy of the proposed semiparametric MLE method.

17.
Existing research on mixtures of regression models is limited to directly observed predictors. Estimating mixtures of regressions with measurement error data poses challenges for statisticians. For linear regression models with measurement error, the naive ordinary least squares method, which directly substitutes the observed surrogates for the unobserved error-prone variables, yields an inconsistent estimate of the regression coefficients. The same inconsistency also affects the naive mixtures-of-regressions estimate, which is based on the traditional maximum likelihood estimator and simply ignores the measurement error. To resolve this inconsistency, we propose to use the deconvolution method to estimate the mixture likelihood of the observed surrogates. Our proposed estimate is then found by maximizing the estimated mixture likelihood. In addition, a generalized EM algorithm is developed to find the estimate. Simulation results demonstrate that the proposed estimation procedures work well and perform much better than the naive estimates.

18.
In testing product reliability, there is often a critical cutoff level that determines whether a specimen is classified as failed. One consequence is that the number of degradation observations collected varies from specimen to specimen. The information of random sample size should be included in the model, and our study shows that it can be influential in estimating model parameters. Two-stage least squares (LS) and maximum modified likelihood (MML) estimation, which both assume fixed sample sizes, are commonly used for estimating parameters in the repeated measurements models typically applied to degradation data. However, the LS estimate is not consistent in the case of random sample sizes. This article derives the likelihood for the random sample size model and suggests using maximum likelihood (ML) for parameter estimation. Our simulation studies show that ML estimates have smaller biases and variances compared to the LS and MML estimates. All estimation methods can be greatly improved if the number of specimens increases from 5 to 10. A data set from a semiconductor application is used to illustrate our methods.

19.
Monte Carlo methods are used to compare the methods of maximum likelihood and least squares for estimating a cumulative distribution function. When the probabilistic model used is correct or nearly correct, the two methods produce similar results, with the MLE usually slightly superior. When an incorrect model is used, or when the data are contaminated, the least squares technique often gives substantially superior results.
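A toy Monte Carlo in the same spirit, comparing the MLE and a least-squares fit of an exponential CDF to the empirical CDF under a correctly specified model; the distribution, grid, sample size, and replication count are arbitrary choices for illustration.

```python
import numpy as np

def mc_compare_cdf_estimates(n=50, reps=2000, rate=1.0, seed=0):
    """Monte Carlo comparison of ML and least-squares CDF fits (sketch).

    MLE of the exponential rate is 1/mean; the least-squares estimate minimizes
    the sum of squared distances between the model CDF and the empirical CDF
    at the order statistics, searched over a simple grid.
    """
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.2, 3.0, 561)
    mse_ml = mse_ls = 0.0
    for _ in range(reps):
        x = np.sort(rng.exponential(1 / rate, n))
        ecdf = (np.arange(1, n + 1) - 0.5) / n
        ml = 1 / x.mean()                                       # maximum likelihood
        sse = [np.sum((1 - np.exp(-g * x) - ecdf) ** 2) for g in grid]
        ls = grid[int(np.argmin(sse))]                          # least squares
        mse_ml += (ml - rate) ** 2 / reps
        mse_ls += (ls - rate) ** 2 / reps
    return {"mse_mle": mse_ml, "mse_ls": mse_ls}
```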

20.
The binomial thinning operator plays a major role in modeling one-dimensional integer-valued autoregressive time series. The purpose of this article is to extend the use of this operator to define a new stationary first-order spatial non-negative integer-valued autoregressive model, SINAR(1,1). We study some properties of this model, such as the mean, variance and autocorrelation function. A Yule-Walker estimator of the model parameters is also obtained. Some numerical results for the model are presented and, moreover, the model is applied to a real data set.
