Similar Documents (20 results)
1.
This paper presents an easy-to-compute semi-parametric (SP) method to estimate a simple disequilibrium model proposed by Fair and Jaffee (1972). The proposed approach is based on a non-parametric interpretation of the EM (Expectation-Maximization) principle (Dempster et al., 1977) and the least squares method. The simple disequilibrium model includes the demand equation, the supply equation, and the condition that only the minimum of quantity demanded and quantity supplied is observed. The method used here allows one to consistently estimate the disequilibrium model without fully specifying the distribution of the error terms in the demand and supply equations. Our Monte Carlo study suggests that the proposed estimator is better than the normal maximum likelihood estimator under asymmetric error distributions, and comparable to the maximum likelihood estimator under symmetric error distributions in finite samples. Aggregate U.S. labor market data from Quandt and Rosen (1988) are used to illustrate the procedure.
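For orientation, a minimal sketch of the simple disequilibrium model in commonly used notation (illustrative symbols, not necessarily the paper's exact ones):

  D_t = X_t'β_d + u_t   (demand)
  S_t = Z_t'β_s + v_t   (supply)
  Q_t = min(D_t, S_t)   (only the short side of the market is observed)

The estimation difficulty arises because it is not observed whether a given Q_t comes from the demand or the supply regime.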

2.
In this paper, a penalized weighted composite quantile regression estimation procedure is proposed to estimate the unknown regression parameters and autoregression coefficients in the linear regression model with heavy-tailed autoregressive errors. Under some conditions, we show that the proposed estimator possesses the oracle properties. In addition, we introduce an iterative algorithm to solve the proposed optimization problem, and use a data-driven method to choose the tuning parameters. Simulation studies demonstrate that the new estimation method is robust and works much better than the least-squares-based method when there are outliers in the dataset or the autoregressive error distribution is heavy tailed. Moreover, the proposed estimator works comparably to the least-squares-based estimator when there are no outliers and the errors are normal. Finally, we apply the proposed methodology to analyze the electricity demand dataset.
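As a rough illustration of the composite quantile idea (a generic unweighted, unpenalized form; the paper's weighted and penalized version with autoregressive errors differs in its details), composite quantile regression combines check-loss terms over several quantile levels τ_1, ..., τ_K:

  min over b_1, ..., b_K, β of Σ_k Σ_i ρ_{τ_k}(y_i − b_k − x_i'β),   where ρ_τ(u) = u(τ − I(u < 0)).

The penalized weighted variant attaches weights to the K terms and adds a penalty on the regression coefficients to perform variable selection.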

3.
As an alternative to the local partial likelihood methods of Tibshirani and Hastie and of Fan, Gijbels, and King, a global partial likelihood method is proposed to estimate the covariate effect in a nonparametric proportional hazards model, λ(t|x) = exp{ψ(x)}λ0(t). The estimator, ψ̂(x), reduces to the Cox partial likelihood estimator if the covariate is discrete. The estimator is shown to be consistent and semiparametrically efficient for linear functionals of ψ(x). Moreover, Breslow-type estimation of the cumulative baseline hazard function, using the proposed estimator ψ̂(x), is proved to be efficient. The asymptotic bias and variance are derived under regularity conditions. Computation of the estimator involves an iterative but simple algorithm. Extensive simulation studies provide evidence supporting the theory. The method is illustrated with the Stanford heart transplant data set. The proposed global approach is also extended to a partially linear proportional hazards model and found to provide efficient estimation of the slope parameter. Supplementary materials for this article are available online.
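For context, a Breslow-type estimator of the cumulative baseline hazard in this model typically takes the following form (notation illustrative): with distinct event times t_i, event counts d_i, risk sets R(t_i), and the fitted ψ̂ plugged in,

  Λ̂0(t) = Σ_{t_i ≤ t} d_i / Σ_{j ∈ R(t_i)} exp{ψ̂(x_j)}.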

4.
Nonparametric models with jump points have been considered by many researchers. However, most existing methods based on least squares or likelihood are sensitive to outliers or heavy-tailed error distributions. In this article, a local piecewise-modal method is proposed to estimate the regression function with jump points in nonparametric models, and a piecewise-modal EM algorithm is introduced to compute the proposed estimator. Under some regularity conditions, the large-sample theory is established for the proposed estimators. Several simulations are presented to evaluate the performance of the proposed method, showing that the proposed estimator is more efficient than the local piecewise-polynomial regression estimator in the presence of outliers or heavy-tailed errors. Moreover, the proposed procedure is asymptotically equivalent to the local piecewise-polynomial regression estimator when the error distribution is Gaussian. The proposed method is further illustrated with sea-level pressure data.
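As a sketch of the modal idea underlying such estimators (not the paper's exact piecewise formulation), a local modal fit maximizes a kernel-smoothed objective such as

  Σ_i φ_h(y_i − m(x_i)) K_b(x_i − x),

where φ_h is a kernel density (for example Gaussian with bandwidth h) applied to the residuals and K_b is a kernel in the covariate. With a Gaussian φ_h, a modal EM algorithm alternates between an E-step that assigns each observation a weight proportional to φ_h evaluated at its current residual and an M-step that solves a weighted least squares problem.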

5.
A semiparametric method is developed to estimate the dependence parameter and the joint distribution of the error term in the multivariate linear regression model. The nonparametric part of the method treats the marginal distributions of the error term as unknown, and estimates them using suitable empirical distribution functions. Then the dependence parameter is estimated by either maximizing a pseudolikelihood or solving an estimating equation. It is shown that this estimator is asymptotically normal, and a consistent estimator of its large sample variance is given. A simulation study shows that the proposed semiparametric method is better than the parametric ones available when the error distribution is unknown, which is almost always the case in practice. It turns out that there is no loss of asymptotic efficiency as a result of the estimation of regression parameters. An empirical example on portfolio management is used to illustrate the method.
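For orientation, the pseudolikelihood step usually has the following structure (illustrative notation): with the marginal distributions of the residuals ê_{i1}, ..., ê_{ip} estimated by rescaled empirical distribution functions F̂_1, ..., F̂_p, the dependence parameter θ maximizes

  L(θ) = Σ_i log c_θ(F̂_1(ê_{i1}), ..., F̂_p(ê_{ip})),

where c_θ is the copula density; the estimating-equation variant sets the derivative of this criterion to zero.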

6.
To study how well supply and demand match in China's credit market and to identify the main factors behind the aggregate mismatch in credit provision, this paper uses quarterly data on China's credit market from 1997 to the second quarter of 2009 and estimates the parameters of a credit supply-demand disequilibrium model by maximum likelihood. The empirical results show that: (1) credit supply fell short of credit demand in 32 quarters and exceeded it in 15 quarters; severe excess demand occurred in 1997-2001 and 2005-2007, while excess supply occurred in 2002-2004 and from the third quarter of 2008 onward, with excess credit supply in the first quarter of 2009 amounting to 18.37% of observed actual credit; (2) banks' lending capacity is an important variable affecting credit supply in China's credit market: the higher the lending capacity, the more loans are extended, and the large credit expansion since 2009 has already exceeded banks' actual lending capacity.

7.
This article addresses estimation and prediction problems for the two-parameter half-logistic distribution based on pivotal quantities when a sample is available under a progressive Type-II censoring scheme. An unbiased estimator of the location parameter based on a pivotal quantity is derived. To estimate the scale parameter, a new method based on a pivotal quantity is proposed. The proposed method provides a simpler estimation equation than the maximum likelihood equation. In addition, confidence intervals for the location and scale parameters are derived from these pivotal quantities. For the prediction of censored failure times, the shortest-length predictive intervals are derived using a pivotal quantity. Finally, the validity of the proposed method is assessed through Monte Carlo simulations, and a real data set is presented for illustration purposes.

8.
The problem of estimation of an unknown common scale parameter of several Pareto distributions with unknown and possibly unequal shape parameters in censored samples is considered. A new class of estimators which includes both the maximum likelihood estimator (MLE) and the uniformly minimum variance unbiased estimator (UMVUE) is proposed and examined under a squared error loss.

9.
A likelihood-based approach to obtaining non-parametric estimates of the failure time distribution is developed for the copula-based model of Wang et al. (Lifetime Data Anal 18:434–445, 2012) for current status data under dependent observation. Maximization of the likelihood involves a generalized pool-adjacent-violators algorithm. The estimator coincides with the standard non-parametric maximum likelihood estimate under an independence model. Confidence intervals for the estimator are constructed based on a smoothed bootstrap. It is also shown that the non-parametric failure distribution is only identifiable if the copula linking the observation and failure time distributions is fully specified. The method is illustrated on a previously analyzed tumorigenicity dataset.

10.
Two-stage designs are very useful in clinical trials for evaluating the validity of a specific treatment regimen. When the second stage is allowed to continue, the method used to estimate the response rate based on the results of both stages is critical for the subsequent design. The often-used sample proportion has an evident upward bias, whereas the maximum likelihood estimator and the moment estimator tend to underestimate the response rate. A mean-square-error-weighted estimator is considered here; its performance is thoroughly investigated via Simon's optimal and minimax designs and Shuster's design. Compared with the sample proportion, the proposed method has a smaller bias, and compared with the maximum likelihood estimator, the proposed method has a smaller mean-square error.

11.
The method of moments has been widely used as a simple alternative to the maximum likelihood method, mainly because of its efficiency and simplicity in obtaining parameter estimators of a mixture of two binomial distributions. In this paper, an alternative estimator is proposed that is competitive with the method of moments in terms of mean squared error and computational effort.

12.
An estimator, λ̂, is proposed for the parameter λ of the log-zero-Poisson distribution. While it is not a consistent estimator of λ in the usual statistical sense, it is shown to be quite close to the maximum likelihood estimates for many of the 35 sets of data on which it is tried. Since obtaining maximum likelihood estimates is extremely difficult for this and other contagious distributions, this estimator can at least serve as an initial value when solving the likelihood equations iteratively. A lesson learned from this experience is that in the area of contagious distributions, variability is so large that attention should be focused directly on the mean squared error rather than on consistency or unbiasedness, whether for small samples or for the asymptotic case. Sample sizes for some of the data considered in the paper are in the hundreds. The fact that this inconsistent estimator is closer to the maximum likelihood estimator than the consistent moment estimator shows that the variability is large enough to prevent consistency from taking effect even at the large sample sizes usually available in practice.

13.
In randomized clinical trials, a treatment effect on a time-to-event endpoint is often estimated by the Cox proportional hazards model. The maximum partial likelihood estimator, however, has no meaningful interpretation if the proportional hazards assumption is violated. Xu and O'Quigley (Biostatistics 1:423-439, 2000) proposed an estimating equation which provides an interpretable estimator for the treatment effect under model misspecification: it yields a consistent estimator of the log-hazard ratio between the treatment groups if the model is correctly specified, and it can be interpreted as an average log-hazard ratio over time even if the model is misspecified. However, the method requires the assumption that censoring is independent of treatment group, which is more restrictive than the assumption needed for the maximum partial likelihood estimator and is often violated in practice. In this paper, we propose an alternative estimating equation. Our method provides an estimator with the same property as that of Xu and O'Quigley under the usual assumption for maximum partial likelihood estimation. We show that our estimator is consistent and asymptotically normal, and derive a consistent estimator of its asymptotic variance. If the proportional hazards assumption holds, the efficiency of the estimator can be improved by applying the covariate adjustment method based on the semiparametric theory proposed by Lu and Tsiatis (Biometrika 95:679-694, 2008).

14.
This paper introduces a new shrinkage estimator for the negative binomial regression model that generalizes the estimator proposed for the linear regression model by Liu [A new class of biased estimate in linear regression, Comm. Stat. Theor. Meth. 22 (1993), pp. 393–402]. This shrinkage estimator is proposed in order to solve the problem of an inflated mean squared error of the classical maximum likelihood (ML) method in the presence of multicollinearity. Furthermore, the paper presents some methods of estimating the shrinkage parameter. By means of Monte Carlo simulations, it is shown that if the Liu estimator is applied with these shrinkage parameters, it always outperforms ML. The benefit of the new estimation method is also illustrated in an empirical application. Finally, based on the results from the simulation study and the empirical application, a recommendation is given regarding which estimator of the shrinkage parameter should be used.
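For reference, a Liu-type shrinkage estimator in a generalized linear model setting is commonly written as follows (a sketch, not necessarily the exact estimator studied in the paper):

  β̂(d) = (X'ŴX + I)⁻¹ (X'ŴX + dI) β̂_ML,   0 ≤ d ≤ 1,

where Ŵ is the weight matrix from the final iteration of the ML fit; d = 1 recovers the ML estimator and smaller values of d increase the shrinkage.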

15.
The maximum likelihood (ML) method is used to estimate the unknown Gamma regression (GR) coefficients. In the presence of multicollinearity, the variance of the ML estimator becomes inflated and inference based on the ML method may not be trustworthy. To combat multicollinearity, the Liu estimator has been used. In this estimator, estimation of the Liu parameter d is an important problem. A few estimation methods are available in the literature for this parameter. This study considers some of these methods and also proposes some new methods for estimating d. A Monte Carlo simulation study is conducted to assess the performance of the proposed methods, with the mean squared error (MSE) as the performance criterion. Based on the Monte Carlo simulation and application results, the Liu estimator is shown to be superior to ML, and a recommendation is given regarding which estimator of the Liu parameter should be used in the Liu estimator for the GR model.

16.
In this paper, using an estimating function approach, a new optimal volatility estimator is introduced, and based on the recursive form of the estimator a data-driven generalized EWMA model for value-at-risk (VaR) forecasting is proposed. An appropriate data-driven model for volatility is identified through the relationship between the absolute deviation and the standard deviation for symmetric distributions with finite variance. It is shown that the asymptotic variance of the proposed volatility estimator is smaller than that of conventional estimators and is more appropriate for financial data with large kurtosis. For IBM, Microsoft, and Apple stocks and the S&P 500 index, the proposed method is used to identify the model, estimate the volatility, and obtain minimum mean square error (MMSE) forecasts of VaR.
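For context, the conventional EWMA recursion that such data-driven generalizations extend is (standard form, not the paper's estimating-function version):

  σ̂_t² = λ σ̂_{t−1}² + (1 − λ) r_{t−1}²,   and, under one common sign convention, VaR_t(α) = −(μ̂_t + σ̂_t z_α),

where r_{t−1} is the previous return, λ is a smoothing constant (values near 0.94 are common for daily data), and z_α is the α-quantile of the assumed innovation distribution.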

17.
This article introduces a novel nonparametric penalized likelihood hazard estimation method for the case where the censoring time is dependent on the failure time for each subject under observation. More specifically, we model this dependence using a copula, and the method of maximum penalized likelihood (MPL) is adopted to estimate the hazard function. We do not consider covariates in this article. The non-negatively constrained MPL hazard estimate is obtained using a multiplicative iterative algorithm. Consistency results and the asymptotic properties of the proposed hazard estimator are derived. The simulation studies show that our MPL estimator under dependent censoring with an assumed copula model provides better accuracy than the MPL estimator under independent censoring if the sign of the dependence is correctly specified in the copula function. The proposed method is applied to a real dataset, with a sensitivity analysis performed over various values of the correlation between failure and censoring times.

18.
This article presents a semiparametric method for estimating the receiver operating characteristic surface under a density ratio model. The proposed method is constructed from the adjacent-category logit model and the empirical likelihood approach. A bootstrap approach for inference on the VUS estimator is presented. In a simulation study, the proposed estimator is compared with existing parametric and nonparametric estimators in terms of bias, standard error, and mean square error. Finally, a real data example and some discussion of the proposed method are provided.

19.
Mixed effects models and Berkson measurement error models are widely used. They share features which the author uses to develop a unified estimation framework. He deals with models in which the random effects (or measurement errors) have a general parametric distribution, whereas the random regression coefficients (or unobserved predictor variables) and error terms have nonparametric distributions. He proposes a second-order least squares estimator and a simulation-based estimator based on the first two moments of the conditional response variable given the observed covariates. He shows that both estimators are consistent and asymptotically normally distributed under fairly general conditions. The author also reports Monte Carlo simulation studies showing that the proposed estimators perform satisfactorily for relatively small sample sizes. Compared to the likelihood approach, the proposed methods are computationally feasible and do not rely on the normality assumption for random effects or other variables in the model.
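As a sketch of the second-order least squares idea (illustrative notation), the estimator minimizes a quadratic distance based on the first two conditional moments of the response,

  Q_n(γ) = Σ_i ρ_i(γ)' W_i ρ_i(γ),   ρ_i(γ) = (y_i − E_γ[y_i | x_i], y_i² − E_γ[y_i² | x_i])',

where W_i is a weighting matrix that may depend on x_i; the simulation-based version replaces the conditional moments by Monte Carlo approximations when they have no closed form.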

20.
Emrah Altun, Statistics, 2019, 53(2): 364-386
In this paper, we introduce a new distribution, called the generalized Gudermannian (GG) distribution, and its skew extension for GARCH models in modelling daily value-at-risk (VaR). Basic structural properties of the proposed distribution are obtained, including the probability density and cumulative distribution functions, moments, and a stochastic representation. The maximum likelihood method is used to estimate the unknown parameters of the proposed model, and the finite-sample performance of the maximum likelihood estimates is evaluated by means of a Monte Carlo simulation study. A real data application to the Nikkei 225 index is given to demonstrate the performance of the GARCH model specified with the skew extension of the GG innovation distribution against the normal, Student's t, skew normal, generalized error, and skew generalized error distributions in terms of the accuracy of VaR forecasts. The empirical results show that the GARCH model with the GG innovation distribution produces the most accurate VaR forecasts for all confidence levels.
