Similar Documents (20 results)
1.
This paper proposes an estimation procedure for a class of semi-varying coefficient regression models in which the covariates of the linear part are subject to measurement errors. Initial estimates of the regression and varying coefficients are first constructed by the profile least-squares procedure, ignoring heteroscedasticity; a bias-corrected kernel estimate of the variance function is then proposed, which in turn is used to define re-weighted bias-corrected estimates of the regression and varying coefficients. Large-sample properties of the proposed estimates are thoroughly investigated. The finite-sample performance of the proposed estimates is assessed by an extensive simulation study and an application to the Boston housing data set. The simulation results show that the re-weighted bias-corrected estimates outperform the initial estimates and the naive estimates.

2.
A method is proposed for estimating regression parameters from data containing covariate measurement errors by using Stein estimates of the unobserved true covariates. The method produces consistent estimates for the slope parameter in the classical linear errors-in-variables model and applies to a broad range of nonlinear regression problems, provided the measurement error is Gaussian with known variance. Simulations are used to examine the performance of the estimates in a nonlinear regression problem and to compare them with the usual naive ones obtained by ignoring error and with other estimates proposed recently in the literature.
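A minimal sketch of the idea, assuming scalar covariates and additive Gaussian measurement error with known variance. Positive-part James–Stein shrinkage of the observed covariates stands in for the paper's exact construction, and all names and numbers are illustrative:

```python
import numpy as np

def stein_covariates(w, sigma2):
    """Shrink observed covariates toward their mean (James-Stein style),
    assuming additive Gaussian measurement error with known variance sigma2."""
    n = len(w)
    s = np.sum((w - w.mean()) ** 2)
    shrink = max(0.0, 1.0 - (n - 3) * sigma2 / s)  # positive-part shrinkage factor
    return w.mean() + shrink * (w - w.mean())

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200)            # unobserved true covariate
w = x + rng.normal(0.0, 0.5, 200)        # observed covariate with error, sigma2 = 0.25
y = 2.0 * x + rng.normal(0.0, 0.3, 200)  # response, true slope 2

x_hat = stein_covariates(w, 0.25)
b_naive = np.polyfit(w, y, 1)[0]         # attenuated naive slope
b_stein = np.polyfit(x_hat, y, 1)[0]     # slope from Stein-adjusted covariates
```

Regressing on the shrunken covariates undoes the attenuation that the naive fit suffers, so `b_stein` lands closer to the true slope than `b_naive`.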

3.
We derive the minimum-risk estimates of the scalar means for the Normal, Exponential, and Gamma distributions under a convex combination of the SEL and LINEX loss functions. The functional forms of the proposed estimates for the three examples are general in nature, and at the boundary conditions they reduce to the corresponding estimates under SEL and LINEX loss, respectively. We compute our proposed estimates using different iterative as well as meta-heuristic techniques, and validate their efficacy through extensive simulation and application to real data sets.
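Under LINEX loss L(d, θ) = exp(a(d − θ)) − a(d − θ) − 1, the Bayes estimate is −(1/a) log E[exp(−aθ)], while SEL gives the posterior mean. A small sketch computing both from Monte Carlo posterior draws (the Gamma posterior here is a stand-in, not a distribution from the paper):

```python
import numpy as np

def bayes_sel(theta):
    """Bayes estimate under squared error loss: the posterior mean."""
    return theta.mean()

def bayes_linex(theta, a):
    """Bayes estimate under LINEX loss L(d, t) = exp(a(d-t)) - a(d-t) - 1."""
    return -np.log(np.mean(np.exp(-a * theta))) / a

rng = np.random.default_rng(1)
post = rng.gamma(5.0, 0.4, 50000)  # stand-in posterior draws, mean 2
sel = bayes_sel(post)
lin = bayes_linex(post, a=1.0)
```

For a > 0 the LINEX estimate sits below the posterior mean, reflecting the asymmetric penalty on overestimation; a convex combination of the two losses yields an estimate between these extremes.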

4.
In this article, we deal with a two-parameter exponentiated half-logistic distribution. We consider the estimation of the unknown parameters, the associated reliability function and the hazard rate function under progressive Type II censoring. Maximum likelihood estimates (MLEs) are proposed for the unknown quantities. Bayes estimates are derived with respect to squared error, LINEX and entropy loss functions. Approximate explicit expressions for all Bayes estimates are obtained using the Lindley method. We also use an importance sampling scheme to compute the Bayes estimates. Markov chain Monte Carlo samples are further used to produce credible intervals for the unknown parameters. Asymptotic confidence intervals are constructed using the normality property of the MLEs. For comparison purposes, bootstrap-p and bootstrap-t confidence intervals are also constructed. A comprehensive numerical study is performed to compare the proposed estimates. Finally, a real-life data set is analysed to illustrate the proposed methods of estimation.

5.
Based on hybrid censored data, the problem of making statistical inference on the parameters of a two-parameter Burr Type XII distribution is taken up. The maximum likelihood estimates are developed for the unknown parameters using the EM algorithm. The Fisher information matrix is obtained by applying the missing value principle and is further utilized for constructing approximate confidence intervals. Some Bayes estimates and the corresponding highest posterior density intervals of the unknown parameters are also obtained. Lindley's approximation method and a Markov chain Monte Carlo (MCMC) technique have been applied to evaluate these Bayes estimates. Further, MCMC samples are utilized to construct the highest posterior density intervals as well. A numerical comparison is made between the proposed estimates in terms of their mean square error values and comments are given. Finally, two data sets are analyzed using the proposed methods.

6.
The data cloning method is a new computational tool for computing maximum likelihood estimates in complex statistical models such as mixed models. This method is synthesized with integrated nested Laplace approximation to compute maximum likelihood estimates efficiently via a fast implementation in generalized linear mixed models. The asymptotic behavior of the hybrid data cloning method is discussed. The performance of the proposed method is illustrated through a simulation study and real examples. It is shown that the proposed method performs well, in agreement with the theory. Supplemental materials for this article are available online.
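The core trick behind data cloning can be illustrated in a conjugate toy model (normal mean with known variance) rather than a GLMM: replicate the data K times, compute the posterior, and watch the posterior mean converge to the MLE as K grows. The prior and all numbers below are illustrative:

```python
import numpy as np

def cloned_posterior_mean(y, k, prior_mean=0.0, prior_var=10.0, sigma2=1.0):
    """Posterior mean of a normal mean when the data are replicated k times
    (conjugate normal prior, known observation variance sigma2)."""
    n = len(y) * k                    # cloning multiplies the sample size
    ybar = y.mean()                   # the cloned data reuse the same mean
    post_prec = 1.0 / prior_var + n / sigma2
    return (prior_mean / prior_var + n * ybar / sigma2) / post_prec

rng = np.random.default_rng(2)
y = rng.normal(3.0, 1.0, 30)
mle = y.mean()                        # MLE of the mean
est_k1 = cloned_posterior_mean(y, 1)    # ordinary Bayes estimate
est_k100 = cloned_posterior_mean(y, 100)  # data-cloned estimate
```

As K increases the likelihood overwhelms the prior, so the cloned posterior mean approaches the MLE; in the full method, K times the posterior variance also approximates the inverse Fisher information.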

7.
We compare minimum Hellinger distance and minimum Hellinger disparity estimates for U-shaped beta distributions. Given suitable density estimates, both methods are known to be asymptotically efficient when the data come from the assumed model family, and robust to small perturbations from the model family. Most implementations use kernel density estimates, which may not be appropriate for U-shaped distributions. We compare fixed-binwidth histograms, percentile mesh histograms, and averaged shifted histograms. Minimum disparity estimates are less sensitive to the choice of density estimate than are minimum distance estimates, and the percentile mesh histogram gives the best results for both minimum distance and minimum disparity estimates. Minimum distance estimates are biased and a bias-corrected method is proposed. Minimum disparity estimates and bias-corrected minimum distance estimates are comparable to maximum likelihood estimates when the model holds, and give better results than either method of moments or maximum likelihood when the data are discretized or contaminated. Although our results are for the beta density, the implementations are easily modified for other U-shaped distributions such as the Dirichlet or normal generated distribution.

8.
Time series data observed at unequal time intervals (irregular data) occur quite often, and this usually poses problems in their analysis. A recursive form of the exponentially smoothed estimate is proposed here for a nonlinear model with irregularly observed data, and its asymptotic properties are discussed. An alternative smoother to that of Wright (1985) is also derived. A numerical comparison is made between the resulting estimates and other smoothed estimates.
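A sketch of recursive exponential smoothing for irregularly spaced observations, using the common gap-dependent weight alpha_t = 1 − (1 − alpha)**Δt. This illustrates the general recursion only; it is not Wright's (1985) smoother or the alternative derived in the paper:

```python
import numpy as np

def irregular_ewma(times, values, alpha=0.3):
    """Exponential smoothing for irregularly spaced data: the effective
    smoothing weight grows with the time gap, alpha_t = 1 - (1-alpha)**dt."""
    s = values[0]
    out = [s]
    for i in range(1, len(values)):
        dt = times[i] - times[i - 1]
        a = 1.0 - (1.0 - alpha) ** dt   # larger gap -> trust new value more
        s = a * values[i] + (1.0 - a) * s
        out.append(s)
    return np.array(out)

t = np.array([0.0, 1.0, 1.5, 4.0, 4.2])  # unequal observation times
v = np.array([1.0, 2.0, 2.0, 5.0, 5.0])
sm = irregular_ewma(t, v)
```

Each smoothed value is a convex combination of the previous smooth and the new observation, so the output stays within the range of the data while discounting old information faster after long gaps.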

9.
In linear quantile regression, the regression coefficients for different quantiles are typically estimated separately. Efforts to improve the efficiency of estimators are often based on assumptions of commonality among the slope coefficients. We propose instead a two-stage procedure whereby the regression coefficients are first estimated separately and then smoothed over quantile level. Due to the strong correlation between coefficient estimates at nearby quantile levels, existing bandwidth selectors will pick bandwidths that are too small. To remedy this, we use 10-fold cross-validation to determine a common bandwidth inflation factor for smoothing the intercept as well as slope estimates. Simulation results suggest that the proposed method is effective in pooling information across quantile levels, resulting in estimates that are typically more efficient than the separately obtained estimates and the interquantile shrinkage estimates derived using a fused penalty function. The usefulness of the proposed method is demonstrated in a real data example.
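A stripped-down sketch of the second stage: given slope estimates obtained separately at each quantile level, smooth them over τ with a Nadaraya–Watson kernel smoother. The bandwidth `h` stands in for the cross-validated, inflation-adjusted bandwidth, and the coefficient estimates are synthetic:

```python
import numpy as np

def smooth_over_tau(taus, betas, h):
    """Nadaraya-Watson smoothing of separately estimated quantile-regression
    coefficients over the quantile level (Gaussian kernel, bandwidth h)."""
    taus, betas = np.asarray(taus), np.asarray(betas, dtype=float)
    out = np.empty_like(betas)
    for j, t in enumerate(taus):
        w = np.exp(-0.5 * ((taus - t) / h) ** 2)
        out[j] = np.sum(w * betas) / np.sum(w)
    return out

taus = np.linspace(0.1, 0.9, 9)
eps = np.array([0.1, -0.1, 0.05, -0.05, 0.1, -0.1, 0.05, -0.05, 0.1])
noisy = 2.0 + eps                       # separate estimates of a constant slope
smoothed = smooth_over_tau(taus, noisy, h=0.2)
```

Because neighbouring quantile levels carry correlated information about a common slope, borrowing strength across τ reduces the noise in each individual estimate.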

10.
In this paper we present two robust estimates for GARCH models. The first is defined by the minimization of a conveniently modified likelihood and the second is similarly defined, but includes an additional mechanism for restricting the propagation of the effect of one outlier on the next estimated conditional variances. We study the asymptotic properties of our estimates proving consistency and asymptotic normality. A Monte Carlo study shows that the proposed estimates compare favorably with respect to other robust estimates. Moreover, we consider some real examples with financial data that illustrate the behavior of these estimates.

11.
We consider estimation of the unknown parameters of the Chen distribution [Chen Z. A new two-parameter lifetime distribution with bathtub shape or increasing failure rate function. Statist Probab Lett. 2000;49:155–161] with bathtub shape using progressively censored samples. We obtain maximum likelihood estimates by making use of an expectation–maximization algorithm. Different Bayes estimates are derived under squared error and balanced squared error loss functions. It is observed that the associated posterior distribution appears in an intractable form, so we have used an approximation method to compute these estimates. A Metropolis–Hastings algorithm is also proposed and some more approximate Bayes estimates are obtained. Asymptotic confidence intervals are constructed using the observed Fisher information matrix. Bootstrap intervals are proposed as well. Samples generated from the MH algorithm are further used in the construction of HPD intervals. We have also obtained prediction intervals and estimates for future observations in one- and two-sample situations. A numerical study is conducted to compare the performance of the proposed methods using simulations. Finally, we analyse real data sets for illustration purposes.
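A generic random-walk Metropolis–Hastings sketch of the kind of sampler the abstract refers to, targeting a stand-in unnormalized log posterior (here a N(2, 1) density, not the Chen-distribution posterior); the step size and burn-in are illustrative:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_iter, step, rng):
    """Random-walk Metropolis-Hastings sampler for a 1-D unnormalized
    log posterior log_post."""
    x, lp = x0, log_post(x0)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.standard_normal()   # Gaussian proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        draws[i] = x
    return draws

rng = np.random.default_rng(4)
draws = metropolis_hastings(lambda t: -0.5 * (t - 2.0) ** 2, 0.0, 20000, 1.0, rng)
post_mean = draws[5000:].mean()  # posterior mean after discarding burn-in
```

The retained draws approximate the posterior, so posterior means, HPD intervals, and predictive quantities can all be computed from them, as in the abstract.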

12.
Abstract

A method is proposed for the estimation of missing data in analysis of covariance models. This is based on obtaining an estimate of the missing observation that minimizes the error sum of squares. Specific derivation of this estimate is carried out for the one-factor analysis of covariance, and numerical examples are given to show the nature of the estimates produced. Parameter estimates of the imputed data are then compared with those of the incomplete data.
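The minimizing value has a simple closed form: setting the missing response equal to its fitted value from the complete-case fit contributes a zero residual and leaves the normal equations unchanged, so the error sum of squares is minimized. A sketch for a one-factor ANCOVA (the layout and numbers are illustrative, not from the paper):

```python
import numpy as np

def impute_missing(X, y, miss_idx):
    """Impute missing responses in a linear (ANCOVA-type) model with the
    values that minimize the error sum of squares: fit on the observed
    cases, then set each missing y to its fitted value."""
    obs = np.setdiff1d(np.arange(len(y)), miss_idx)
    beta, *_ = np.linalg.lstsq(X[obs], y[obs], rcond=None)
    y_full = y.copy()
    y_full[miss_idx] = X[miss_idx] @ beta
    return y_full, beta

# one-factor ANCOVA: two groups (dummy-coded) plus a covariate
x = np.array([1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0, 4.0])
g = np.array([0, 0, 0, 0, 1, 1, 1, 1])
X = np.column_stack([np.ones(8), g, x])
y = 1.0 + 2.0 * g + 0.5 * x   # noiseless responses for illustration
y[3] = np.nan                 # treat observation 3 as missing
y_full, beta = impute_missing(X, y, np.array([3]))
```

With noiseless data the complete-case fit recovers the coefficients exactly, so the imputed value is the exact model prediction 1 + 0.5 × 4 = 3.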

13.
This article introduces principal component analysis for multidimensional sparse functional data, utilizing Gaussian basis functions. Our multidimensional model is estimated by maximizing a penalized log-likelihood function, whereas previous mixed-type models were estimated by maximum likelihood methods for one-dimensional data. The penalized estimation performs well for our multidimensional model, while maximum likelihood methods yield unstable parameter estimates, some of which are infinite. Numerical experiments are conducted to investigate the effectiveness of our method for some types of missing data. The proposed method is applied to handwriting data, which consist of the XY coordinate values of handwriting samples.

14.
In this paper, we extend the censored linear regression model with normal errors to Student-t errors. A simple EM-type algorithm for iteratively computing maximum-likelihood estimates of the parameters is presented. To examine the performance of the proposed model, case-deletion and local influence techniques are developed to show its robust aspect against outlying and influential observations. This is done by the analysis of the sensitivity of the EM estimates under some usual perturbation schemes in the model or data and by inspecting some proposed diagnostic graphics. The efficacy of the method is verified through the analysis of simulated data sets and modelling a real data set first analysed under normal errors. The proposed algorithm and methods are implemented in the R package CensRegMod.

15.
Density estimates that are expressible as the product of a base density function and a linear combination of orthogonal polynomials are considered in this paper. More specifically, two criteria are proposed for determining the number of terms to be included in the polynomial adjustment component and guidelines are suggested for the selection of a suitable base density function. A simulation study reveals that these stopping rules produce density estimates that are generally more accurate than kernel density estimates or those resulting from the application of the Kronmal–Tarter criterion. Additionally, it is explained that the same approach can be utilized to obtain multivariate density estimates. The proposed orthogonal polynomial density estimation methodology is applied to several univariate and bivariate data sets, some of which have served as benchmarks in the statistical literature on density estimation.
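A minimal Gram–Charlier-type sketch of the construction: a standard-normal base density multiplied by a Hermite-polynomial adjustment whose coefficients are sample moments. The base density and the fixed truncation level `J` are illustrative assumptions; the paper's stopping rules for choosing the number of terms are not reproduced here:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial, sqrt, pi

def poly_density(data, J):
    """Density estimate: standard-normal base density times a Hermite
    polynomial adjustment with c_j = mean(He_j(data)) / j!."""
    coefs = np.array([np.mean(hermeval(data, np.eye(J + 1)[j])) / factorial(j)
                      for j in range(J + 1)])

    def fhat(x):
        phi = np.exp(-0.5 * np.asarray(x) ** 2) / sqrt(2 * pi)  # base density
        return phi * hermeval(x, coefs)                          # adjustment

    return fhat

rng = np.random.default_rng(3)
z = rng.normal(0.0, 1.0, 20000)   # data actually from the base density
fhat = poly_density(z, J=4)
```

When the data come from the base density itself, the higher-order coefficients estimate zero and the fit collapses to the base, e.g. fhat(0) ≈ 1/√(2π) ≈ 0.399.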

16.
A boxplot is a simple and effective exploratory data analysis tool for graphically summarizing a distribution of data. However, in cases where the quartiles in a boxplot are inaccurately estimated, these estimates can affect subsequent analyses. In this paper, we consider the problem of constructing boxplots in a bivariate setting with a categorical covariate with multiple subgroups, and assume that some of these boxplots can be clustered. We propose to use this grouping property to improve the estimation of the quartiles. We demonstrate that the proposed method more accurately estimates the quartiles compared to the usual boxplot. It is also shown that the proposed method identifies outliers effectively as a consequence of accurate quartiles, and possesses a clustering effect due to the group property. We then apply the proposed method to annual maximum precipitation data in South Korea and present its clustering results.

17.
In this paper, the estimation of the parameters, reliability and hazard functions of an inverted exponentiated half logistic distribution (IEHLD) from progressive Type II censored data is considered. Bayes estimates for the progressive Type II censored IEHLD under asymmetric and symmetric loss functions such as the squared error, general entropy and LINEX loss functions are provided. Bayes estimates for the IEHLD parameters, reliability and hazard functions are also obtained under balanced loss functions. Since the Bayes estimates cannot be obtained explicitly, the Lindley approximation method and an importance sampling procedure are used to compute them. Furthermore, the asymptotic normality of the maximum likelihood estimates is used to obtain approximate confidence intervals. The highest posterior density credible intervals of the parameters based on the importance sampling procedure are computed. Simulations are performed to assess the performance of the proposed estimates. For illustrative purposes, two data sets have been analyzed.

18.
It is proposed that the land equivalent ratios be estimated by the (sum of) ratios of means of intercrop yield to sole crop yield. The bias and standard error of the estimates are obtained for large samples. Comparisons of the cropping systems have been made on the basis of these estimates and illustrated with field data.

19.
The commonly used method of small area estimation (SAE) under a linear mixed model may not be efficient if the data contain a substantially larger proportion of zeros than would be expected under standard model assumptions (hereafter zero-inflated data). The authors discuss SAE for zero-inflated data under a two-part random effects model that accounts for the excess zeros in the data. Empirical results show that the proposed method for SAE works well and produces an efficient set of small area estimates. An application to real survey data from the National Sample Survey Office of India demonstrates the satisfactory performance of the method. The authors describe a parametric bootstrap method to estimate the mean squared error (MSE) of the proposed estimator of small areas. The bootstrap estimates of the MSE are compared to the true MSE in a simulation study.

20.
A new solution is proposed for a sparse data problem arising in nonparametric estimation of a bivariate survival function. Prior information, if available, can be used to obtain initial values for the EM algorithm. Initial values will completely determine estimates of portions of the distribution which are not identifiable from the data, while having a minimal effect on estimates of portions of the distribution for which the data provide sufficient information. Methods are applied to the distribution of women's age at first marriage and age at birth of first child, using data from the Current Population Surveys of 1975 and 1986.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号