Similar Articles
20 similar articles found (search time: 640 ms)
1.
We derive the minimum-risk estimates of the scalar means of the Normal, Exponential, and Gamma distributions under a convex combination of the SEL and LINEX loss functions. The functional forms of the proposed estimates for the three examples are general in nature, and at the boundary conditions they reduce to the corresponding estimates under SEL and LINEX loss, respectively. We verify the proposed models using both iterative and meta-heuristic techniques, and validate the efficacy of our results through extensive simulation and application to real data sets.
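The combined-loss idea above can be sketched numerically. The snippet below is a minimal illustration, not the paper's derivation: it assumes a normal posterior N(m, s2) for the mean, takes the convex combination w·SEL + (1 − w)·LINEX (LINEX shape parameter `a`), and minimizes the posterior-expected loss by ternary search; all function and argument names are illustrative.

```python
import math

def combined_risk(d, m, s2, a, w):
    # Posterior-expected loss of point estimate d when theta ~ N(m, s2):
    # SEL part E[(d - theta)^2] and LINEX part E[exp(a(d-theta)) - a(d-theta) - 1],
    # both available in closed form for a normal posterior.
    sel = (d - m) ** 2 + s2
    linex = math.exp(a * (d - m) + 0.5 * a * a * s2) - a * (d - m) - 1.0
    return w * sel + (1.0 - w) * linex

def minimum_risk_estimate(m, s2, a, w, tol=1e-8):
    # The risk is convex in d, so ternary search on a bracketing
    # interval locates the minimizer.
    lo, hi = m - abs(a) * s2 - 1.0, m + abs(a) * s2 + 1.0
    while hi - lo > tol:
        d1 = lo + (hi - lo) / 3.0
        d2 = hi - (hi - lo) / 3.0
        if combined_risk(d1, m, s2, a, w) < combined_risk(d2, m, s2, a, w):
            hi = d2
        else:
            lo = d1
    return 0.5 * (lo + hi)
```

At the boundaries the sketch recovers the two classical answers: w = 1 gives the posterior mean (SEL), and w = 0 gives the LINEX estimate m − a·s2/2.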

2.
The present study proposes a method to estimate the yield of a crop. The proposed Gaussian quadrature (GQ) method makes it possible to estimate the crop yield from a smaller subsample. Identification of plots, and of the corresponding weights to be assigned to the yields of plots comprising a subsample, is done with the help of full-sample information on certain auxiliary variables relating to biometrical characteristics of the plant. Computational experience reveals that the proposed method leads to about a 78% reduction in sample size with an absolute percentage error of 2.7%. Performance of the proposed method has been compared with that of random sampling on the basis of the average absolute percentage error and the standard deviation of yield estimates obtained from 40 samples of comparable size. Interestingly, both the average absolute percentage error and the standard deviation are considerably smaller for the GQ estimates than for the random sample estimates. The proposed method is quite general and can be applied to other crops as well, provided information on auxiliary variables relating to yield-contributing biometrical characteristics is available.

3.
We consider estimation of the unknown parameters of the Chen distribution [Chen Z. A new two-parameter lifetime distribution with bathtub shape or increasing failure rate function. Statist Probab Lett. 2000;49:155–161] with bathtub shape using progressively censored samples. We obtain maximum likelihood estimates by making use of an expectation–maximization algorithm. Different Bayes estimates are derived under squared error and balanced squared error loss functions. Since the associated posterior distribution appears in an intractable form, we have used an approximation method to compute these estimates. A Metropolis–Hastings algorithm is also proposed and some more approximate Bayes estimates are obtained. Asymptotic confidence intervals are constructed using the observed Fisher information matrix, and bootstrap intervals are proposed as well. Samples generated by the MH algorithm are further used in the construction of HPD intervals. We also obtain prediction intervals and estimates for future observations in one- and two-sample situations. A numerical study is conducted to compare the performance of the proposed methods using simulations. Finally, we analyse real data sets for illustration purposes.
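A random-walk Metropolis–Hastings sampler of the kind the abstract mentions can be sketched generically. This is a minimal stand-in, not the paper's sampler for the Chen posterior: the user supplies any unnormalized log-density, and the demo target below is simply a standard normal; `log_target` and the tuning names are illustrative.

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, proposal_sd=1.0, seed=0):
    # Random-walk Metropolis-Hastings: propose x' = x + N(0, sd^2) and
    # accept with probability min(1, target(x') / target(x)), which only
    # requires the log-target up to an additive constant.
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, proposal_sd)
        lp_prop = log_target(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples
```

For example, `metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)` draws correlated samples whose mean and variance approach 0 and 1; in a Bayesian application the lambda would be replaced by the log-posterior of interest.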

4.
In linear quantile regression, the regression coefficients for different quantiles are typically estimated separately. Efforts to improve the efficiency of estimators are often based on assumptions of commonality among the slope coefficients. We propose instead a two-stage procedure whereby the regression coefficients are first estimated separately and then smoothed over the quantile level. Because of the strong correlation between coefficient estimates at nearby quantile levels, existing bandwidth selectors pick bandwidths that are too small. To remedy this, we use 10-fold cross-validation to determine a common bandwidth inflation factor for smoothing the intercept as well as the slope estimates. Simulation results suggest that the proposed method is effective in pooling information across quantile levels, resulting in estimates that are typically more efficient than the separately obtained estimates and the interquantile shrinkage estimates derived using a fused penalty function. The usefulness of the proposed method is demonstrated in a real data example.
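The second stage, smoothing coefficient estimates across quantile levels, can be illustrated with a simple kernel smoother. This is a hedged sketch only: it applies a Gaussian-kernel Nadaraya–Watson average to already-estimated coefficients and does not implement the paper's cross-validated bandwidth inflation factor; all names are illustrative.

```python
import math

def smooth_over_quantiles(taus, betas, bandwidth):
    # Nadaraya-Watson (Gaussian kernel) smoothing of separately
    # estimated quantile-regression coefficients across quantile level:
    # each smoothed value is a kernel-weighted average of neighbours.
    out = []
    for t in taus:
        w = [math.exp(-0.5 * ((t - tj) / bandwidth) ** 2) for tj in taus]
        s = sum(w)
        out.append(sum(wi * bi for wi, bi in zip(w, betas)) / s)
    return out
```

Since each output is a convex combination of the inputs, the smoothed coefficients always stay within the range of the raw estimates, and a constant coefficient path is reproduced exactly.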

5.
A method is proposed for estimating regression parameters from data containing covariate measurement errors by using Stein estimates of the unobserved true covariates. The method produces consistent estimates for the slope parameter in the classical linear errors-in-variables model and applies to a broad range of nonlinear regression problems, provided the measurement error is Gaussian with known variance. Simulations are used to examine the performance of the estimates in a nonlinear regression problem and to compare them with the usual naive ones obtained by ignoring error and with other estimates proposed recently in the literature.
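One standard way to form Stein estimates of the unobserved covariates is positive-part James–Stein shrinkage of the error-prone observations toward their mean. The sketch below shows this common form under the stated assumption of Gaussian error with known variance; the paper's exact estimator may differ, and the names here are illustrative.

```python
def stein_covariates(w, sigma2):
    # Positive-part James-Stein shrinkage for W_i = X_i + U_i with
    # U_i ~ N(0, sigma2): shrink each observation toward the sample
    # mean, with the shrinkage factor estimated from the data.
    n = len(w)
    wbar = sum(w) / n
    s = sum((wi - wbar) ** 2 for wi in w)
    shrink = max(0.0, 1.0 - (n - 3) * sigma2 / s)
    return [wbar + shrink * (wi - wbar) for wi in w]
```

The shrunken covariates would then replace the noisy ones in the regression fit; with sigma2 = 0 the data are returned unchanged, and with very large sigma2 everything collapses to the mean.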

6.
This paper proposes an estimation procedure for a class of semi-varying coefficient regression models when the covariates of the linear part are subject to measurement errors. Initial estimates of the regression and varying coefficients are first constructed by the profile least-squares procedure, ignoring heteroscedasticity; a bias-corrected kernel estimate of the variance function is then proposed, which in turn is used to define re-weighted bias-corrected estimates of the regression and varying coefficients. Large-sample properties of the proposed estimates are thoroughly investigated. The finite-sample performance of the proposed estimates is assessed by an extensive simulation study and an application to the Boston housing data set. The simulation results show that the re-weighted bias-corrected estimates outperform both the initial estimates and the naive estimates.

7.
Density estimates that are expressible as the product of a base density function and a linear combination of orthogonal polynomials are considered in this paper. More specifically, two criteria are proposed for determining the number of terms to be included in the polynomial adjustment component and guidelines are suggested for the selection of a suitable base density function. A simulation study reveals that these stopping rules produce density estimates that are generally more accurate than kernel density estimates or those resulting from the application of the Kronmal–Tarter criterion. Additionally, it is explained that the same approach can be utilized to obtain multivariate density estimates. The proposed orthogonal polynomial density estimation methodology is applied to several univariate and bivariate data sets, some of which have served as benchmarks in the statistical literature on density estimation.
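A concrete instance of a base density with a polynomial adjustment is the classical Gram–Charlier construction: a standard normal base times a combination of Hermite polynomials with moment-based coefficients. The sketch below fixes the degree rather than applying the paper's stopping rules, and names are illustrative.

```python
import math

def he(k, x):
    # Probabilists' Hermite polynomials via the recurrence
    # He_{k+1}(x) = x * He_k(x) - k * He_{k-1}(x).
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for j in range(1, k):
        h0, h1 = h1, x * h1 - j * h0
    return h1

def poly_adjusted_density(data, degree=4):
    # Base density: standard normal on standardized data; adjustment:
    # linear combination of Hermite polynomials with coefficients
    # c_k = mean(He_k(z)) / k!.  Can dip negative in the tails, a
    # known drawback of truncated Gram-Charlier expansions.
    n = len(data)
    mu = sum(data) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in data) / n)
    z = [(x - mu) / sd for x in data]
    coeffs = [sum(he(k, zi) for zi in z) / n / math.factorial(k)
              for k in range(degree + 1)]
    def f(x):
        t = (x - mu) / sd
        base = math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)
        adj = sum(c * he(k, t) for k, c in enumerate(coeffs))
        return base * adj / sd
    return f
```

Because standardization forces c1 = c2 = 0 and c0 = 1, the estimate integrates to one exactly and reduces to the normal base when the data are close to Gaussian.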

8.
This paper studies cyclic long-memory processes with Gegenbauer-type spectral densities. For a semiparametric statistical model, new simultaneous estimates for the singularity location and long-memory parameters are proposed. This generalized filtered method-of-moments approach is based on general filter transforms that include wavelet transformations as a particular case. It is proved that the estimates are almost surely convergent to the true parameter values. Solutions of the estimation equations are studied, and adjusted statistics are proposed. Monte Carlo results are presented to confirm the theoretical findings.

9.
In this article, we deal with a two-parameter exponentiated half-logistic distribution. We consider the estimation of the unknown parameters, the associated reliability function, and the hazard rate function under progressive Type II censoring. Maximum likelihood estimates (MLEs) are proposed for the unknown quantities. Bayes estimates are derived with respect to squared error, LINEX, and entropy loss functions. Approximate explicit expressions for all Bayes estimates are obtained using the Lindley method. We also use an importance sampling scheme to compute the Bayes estimates. Markov chain Monte Carlo samples are further used to produce credible intervals for the unknown parameters. Asymptotic confidence intervals are constructed using the normality property of the MLEs. For comparison purposes, bootstrap-p and bootstrap-t confidence intervals are also constructed. A comprehensive numerical study is performed to compare the proposed estimates. Finally, a real-life data set is analysed to illustrate the proposed methods of estimation.
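The importance sampling scheme mentioned above can be sketched in its generic self-normalized form. This is a minimal illustration, not the paper's scheme: the demo target is an unnormalized normal "posterior" with mean 1, the proposal is a wider normal, and all names are illustrative; normalizing constants cancel in the weight ratio.

```python
import math
import random

def importance_sampling_mean(log_post, sample_proposal, log_proposal, n, seed=0):
    # Self-normalized importance sampling: draw from the proposal,
    # weight each draw by posterior/proposal (computed on the log
    # scale and max-shifted for numerical stability), and return the
    # weighted mean as the posterior-mean estimate.
    rng = random.Random(seed)
    xs = [sample_proposal(rng) for _ in range(n)]
    logw = [log_post(x) - log_proposal(x) for x in xs]
    m = max(logw)
    w = [math.exp(l - m) for l in logw]
    total = sum(w)
    return sum(wi * xi for wi, xi in zip(w, xs)) / total
```

A heavier-tailed proposal than the target is the usual safeguard here; with a proposal much narrower than the posterior the weights degenerate and the estimate becomes unstable.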

10.
The data cloning method is a new computational tool for computing maximum likelihood estimates in complex statistical models such as mixed models. This method is synthesized with integrated nested Laplace approximation to compute maximum likelihood estimates efficiently via a fast implementation in generalized linear mixed models. Asymptotic behavior of the hybrid data cloning method is discussed. The performance of the proposed method is illustrated through a simulation study and real examples. It is shown that the proposed method performs well and that the results agree with the theory. Supplemental materials for this article are available online.
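The core data cloning idea admits a closed-form illustration in a conjugate toy model, quite apart from the INLA machinery of the paper: replicating the likelihood K times makes the posterior concentrate at the MLE. The Beta-Binomial example below is an assumption of this sketch, not the authors' setting.

```python
def cloned_posterior_mean(y, n, K, a=1.0, b=1.0):
    # Data cloning in a conjugate toy model: with y successes in n
    # Bernoulli trials and a Beta(a, b) prior, cloning the data K
    # times gives a Beta(a + K*y, b + K*(n - y)) posterior, whose
    # mean converges to the MLE y/n as K grows.
    return (a + K * y) / (a + b + K * n)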

11.
We consider the problem of making statistical inference on the unknown parameters of a lognormal distribution under the assumption that samples are progressively censored. The maximum likelihood estimates (MLEs) are obtained by using the expectation–maximization algorithm. The observed and expected Fisher information matrices are provided as well. Approximate MLEs of the unknown parameters are also obtained. Bayes and generalized estimates are derived under the squared error loss function. We compute these estimates using Lindley's method as well as the importance sampling method. Highest posterior density and asymptotic interval estimates are constructed for the unknown parameters. A simulation study is conducted to compare the proposed estimates. Further, a data set is analysed for illustrative purposes. Finally, optimal progressive censoring plans are discussed under different optimality criteria and results are presented.

12.
As an applicable and flexible lifetime model, the two-parameter generalized half-normal (GHN) distribution has received wide attention in the fields of reliability analysis and lifetime study. In this paper, maximum likelihood estimates of the model parameters are discussed and corresponding bias-corrected estimates are proposed. Unweighted and weighted least-squares estimates for the parameters of the GHN distribution are also presented for comparison purposes. Moreover, the likelihood ratio test is provided as a complement. A simulation study and illustrative examples are provided to compare the performance of the proposed methods.

13.
We recently proposed a representation of the bivariate survivor function as a mapping of the hazard function for truncated failure time variates. The representation led to a class of estimators that includes van der Laan’s repaired nonparametric maximum likelihood estimator (NPMLE) as an important special case. We proposed a Greenwood-like variance estimator for the repaired NPMLE but found somewhat poor agreement between the empirical variance estimates and these analytic estimates for the sample sizes and bandwidths considered in our simulation study. The simulation results also confirmed those of others in showing slightly inferior performance for the repaired NPMLE compared to other competing estimators as well as a sensitivity to bandwidth choice in moderate sized samples. Despite its attractive asymptotic properties, the repaired NPMLE has drawbacks that hinder its practical application. This paper presents a modification of the repaired NPMLE that improves its performance in moderate sized samples and renders it less sensitive to the choice of bandwidth. Along with this modified estimator, more extensive simulation studies of the repaired NPMLE and Greenwood-like variance estimates are presented. The methods are then applied to a real data example. This revised version was published online in September 2005 with a correction to the second author's name.

14.
Two approximations recovering the functions from their transformed moments are proposed. The upper bounds for the uniform rate of convergence are derived. In addition, the comparisons of the estimates of the cumulative distribution function and its density function with the empirical distribution and the kernel density estimates are conducted via a simulation study. The plots of recovered functions are presented for several examples as well.

15.
This paper considers regression models for mixed binary and continuous outcomes, when the true predictor is measured with error and the binary responses are subject to classification errors. The focus of the paper is to study the effects of these errors on the estimates of the model parameters and also to propose a model that incorporates both these errors. The proposed model results in a substantial improvement in the estimates as shown by extensive simulation studies.

16.
Based on hybrid censored data, the problem of making statistical inference on the parameters of a two-parameter Burr Type XII distribution is taken up. The maximum likelihood estimates are developed for the unknown parameters using the EM algorithm. The Fisher information matrix is obtained by applying the missing value principle and is further utilized for constructing approximate confidence intervals. Some Bayes estimates and the corresponding highest posterior density intervals of the unknown parameters are also obtained. Lindley’s approximation method and a Markov chain Monte Carlo (MCMC) technique have been applied to evaluate these Bayes estimates. Further, MCMC samples are utilized to construct the highest posterior density intervals as well. A numerical comparison is made between the proposed estimates in terms of their mean square error values and comments are given. Finally, two data sets are analyzed using the proposed methods.

17.
A boxplot is a simple and effective exploratory data analysis tool for graphically summarizing a distribution of data. However, in cases where the quartiles in a boxplot are inaccurately estimated, these estimates can affect subsequent analyses. In this paper, we consider the problem of constructing boxplots in a bivariate setting with a categorical covariate with multiple subgroups, and assume that some of these boxplots can be clustered. We propose to use this grouping property to improve the estimation of the quartiles. We demonstrate that the proposed method more accurately estimates the quartiles compared to the usual boxplot. It is also shown that the proposed method identifies outliers effectively as a consequence of accurate quartiles, and possesses a clustering effect due to the group property. We then apply the proposed method to annual maximum precipitation data in South Korea and present its clustering results.

18.
In this work, we propose and investigate a family of nonparametric quantile regression estimates. The proposed estimates combine local linear fitting and double kernel approaches. More precisely, we use a Beta kernel when the covariate's support is compact and a Gamma kernel for left-bounded supports. Finite-sample properties together with the asymptotic behavior of the proposed estimators are presented. It is also shown that these estimates enjoy finite variance and resistance to sparse design.

19.
This paper is concerned with the analysis of data obtained from a designed experiment where the experimental design cannot be implemented exactly as planned, because errors in the levels of the variables cannot be avoided or measured. When the primary interest of the investigator lies in obtaining a satisfactory response surface model for the investigated relationship, the precision of the model estimates is essential for successful model building and accurate prediction of the response. An iterative procedure is proposed which estimates the effects of the errors in the variables and obtains efficient weighted least squares estimates of the parameters of interest.

20.
We compare minimum Hellinger distance and minimum Hellinger disparity estimates for U-shaped beta distributions. Given suitable density estimates, both methods are known to be asymptotically efficient when the data come from the assumed model family, and robust to small perturbations from the model family. Most implementations use kernel density estimates, which may not be appropriate for U-shaped distributions. We compare fixed binwidth histograms, percentile mesh histograms, and averaged shifted histograms. Minimum disparity estimates are less sensitive to the choice of density estimate than are minimum distance estimates, and the percentile mesh histogram gives the best results for both minimum distance and minimum disparity estimates. Minimum distance estimates are biased and a bias-corrected method is proposed. Minimum disparity estimates and bias-corrected minimum distance estimates are comparable to maximum likelihood estimates when the model holds, and give better results than either method of moments or maximum likelihood when the data are discretized or contaminated. Although our results are for the beta density, the implementations are easily modified for other U-shaped distributions such as the Dirichlet or normal-generated distribution.
